Jan 29 06:35:12 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 29 06:35:12 crc restorecon[4773]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 06:35:12 crc restorecon[4773]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 06:35:12 crc restorecon[4773]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 29 06:35:12 crc restorecon[4773]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 29 06:35:12 crc restorecon[4773]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:12 crc restorecon[4773]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:12 crc restorecon[4773]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:12 crc restorecon[4773]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:12 crc restorecon[4773]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:12 crc restorecon[4773]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 06:35:12 crc restorecon[4773]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to
system_u:object_r:container_file_t:s0:c336,c787 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 29 06:35:13 crc restorecon[4773]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:35:13 crc restorecon[4773]: 
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 
06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc 
restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 06:35:13 crc restorecon[4773]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 06:35:13 crc restorecon[4773]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 06:35:13 crc restorecon[4773]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 29 06:35:14 crc kubenswrapper[5017]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 06:35:14 crc kubenswrapper[5017]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 29 06:35:14 crc kubenswrapper[5017]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 06:35:14 crc kubenswrapper[5017]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 29 06:35:14 crc kubenswrapper[5017]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 29 06:35:14 crc kubenswrapper[5017]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.055446 5017 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.066652 5017 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.066699 5017 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.066709 5017 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.066720 5017 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.066730 5017 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.066741 5017 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.066750 5017 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.066759 5017 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.066768 5017 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.066777 5017 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.066786 5017 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.066795 5017 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.066803 5017 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.066811 5017 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.066818 5017 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.066827 5017 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.066836 5017 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.066847 5017 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.066858 5017 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.066867 5017 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.066876 5017 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.066990 5017 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.067001 5017 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.067010 5017 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.067019 5017 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.067028 5017 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.067037 5017 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.067046 5017 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.067054 5017 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.067062 5017 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.067070 5017 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.067078 5017 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.067086 5017 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.067109 5017 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.067133 5017 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.067141 5017 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.067149 5017 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.067157 5017 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.067167 5017 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.067175 5017 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.067186 5017 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.067195 5017 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.067204 5017 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.067212 5017 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.067221 5017 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.067228 5017 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.067236 5017 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.067244 5017 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.067252 5017 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.067259 5017 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.067267 5017 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.067276 5017 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.067286 5017 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.067294 5017 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.067302 5017 feature_gate.go:330] unrecognized feature gate: Example
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.067311 5017 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.067319 5017 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.067327 5017 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.067335 5017 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.067343 5017 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.067351 5017 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.067360 5017 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.067368 5017 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.067379 5017 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
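[Editor's note] Every feature_gate.go:330 warning above is benign to the kubelet itself: these are OpenShift-level gate names that the embedded Kubernetes feature-gate registry does not know, so they are skipped with a warning rather than an error. When triaging such a boot log, a quick tally of the distinct unknown gates is often all that's needed; an illustrative sketch (the input file name is hypothetical):

    #!/usr/bin/env python3
    # Tally distinct "unrecognized feature gate" warnings in a journal dump.
    # Usage: python3 count_gates.py kubelet-boot.log
    import re
    import sys
    from collections import Counter

    pattern = re.compile(r"unrecognized feature gate: (\S+)")
    counts = Counter()
    with open(sys.argv[1], encoding="utf-8", errors="replace") as fh:
        for line in fh:
            m = pattern.search(line)
            if m:
                counts[m.group(1)] += 1

    for gate, n in counts.most_common():
        print(f"{n:3d}  {gate}")
    print(f"{len(counts)} distinct unknown gates")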
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.067424 5017 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.067436 5017 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.067446 5017 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.067454 5017 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.067463 5017 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.067513 5017 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.067522 5017 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.067676 5017 flags.go:64] FLAG: --address="0.0.0.0"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.067696 5017 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.067712 5017 flags.go:64] FLAG: --anonymous-auth="true"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.067725 5017 flags.go:64] FLAG: --application-metrics-count-limit="100"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.067737 5017 flags.go:64] FLAG: --authentication-token-webhook="false"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.067747 5017 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.067760 5017 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.067773 5017 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.067782 5017 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.067791 5017 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.067801 5017 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.067814 5017 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.067824 5017 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.067833 5017 flags.go:64] FLAG: --cgroup-root=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.067842 5017 flags.go:64] FLAG: --cgroups-per-qos="true"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.067851 5017 flags.go:64] FLAG: --client-ca-file=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.067860 5017 flags.go:64] FLAG: --cloud-config=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.067870 5017 flags.go:64] FLAG: --cloud-provider=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.067879 5017 flags.go:64] FLAG: --cluster-dns="[]"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.067891 5017 flags.go:64] FLAG: --cluster-domain=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.067900 5017 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.067909 5017 flags.go:64] FLAG: --config-dir=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.067918 5017 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.067928 5017 flags.go:64] FLAG: --container-log-max-files="5"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.067941 5017 flags.go:64] FLAG: --container-log-max-size="10Mi"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.067950 5017 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.067991 5017 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.068004 5017 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.068016 5017 flags.go:64] FLAG: --contention-profiling="false"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.068027 5017 flags.go:64] FLAG: --cpu-cfs-quota="true"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.068038 5017 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.068049 5017 flags.go:64] FLAG: --cpu-manager-policy="none"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.068060 5017 flags.go:64] FLAG: --cpu-manager-policy-options=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.068076 5017 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.068087 5017 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.068098 5017 flags.go:64] FLAG: --enable-debugging-handlers="true"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.068106 5017 flags.go:64] FLAG: --enable-load-reader="false"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.068115 5017 flags.go:64] FLAG: --enable-server="true"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.068124 5017 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.068137 5017 flags.go:64] FLAG: --event-burst="100"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.068146 5017 flags.go:64] FLAG: --event-qps="50"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.068155 5017 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.068164 5017 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.068173 5017 flags.go:64] FLAG: --eviction-hard=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.068185 5017 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.068194 5017 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.068203 5017 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.068214 5017 flags.go:64] FLAG: --eviction-soft=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.068224 5017 flags.go:64] FLAG: --eviction-soft-grace-period=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.068236 5017 flags.go:64] FLAG: --exit-on-lock-contention="false"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.068245 5017 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.068255 5017 flags.go:64] FLAG: --experimental-mounter-path=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.068263 5017 flags.go:64] FLAG: --fail-cgroupv1="false"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.068273 5017 flags.go:64] FLAG: --fail-swap-on="true"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.068282 5017 flags.go:64] FLAG: --feature-gates=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.068294 5017 flags.go:64] FLAG: --file-check-frequency="20s"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.068303 5017 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.068314 5017 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.068324 5017 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.068333 5017 flags.go:64] FLAG: --healthz-port="10248"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.068343 5017 flags.go:64] FLAG: --help="false"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.068352 5017 flags.go:64] FLAG: --hostname-override=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.068361 5017 flags.go:64] FLAG: --housekeeping-interval="10s"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.068370 5017 flags.go:64] FLAG: --http-check-frequency="20s"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.068379 5017 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.068388 5017 flags.go:64] FLAG: --image-credential-provider-config=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.068426 5017 flags.go:64] FLAG: --image-gc-high-threshold="85"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.068437 5017 flags.go:64] FLAG: --image-gc-low-threshold="80"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.068447 5017 flags.go:64] FLAG: --image-service-endpoint=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.068456 5017 flags.go:64] FLAG: --kernel-memcg-notification="false"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.068465 5017 flags.go:64] FLAG: --kube-api-burst="100"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.068475 5017 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.068485 5017 flags.go:64] FLAG: --kube-api-qps="50"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.068494 5017 flags.go:64] FLAG: --kube-reserved=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.068503 5017 flags.go:64] FLAG: --kube-reserved-cgroup=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.068512 5017 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.068521 5017 flags.go:64] FLAG: --kubelet-cgroups=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.068530 5017 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.068539 5017 flags.go:64] FLAG: --lock-file=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.068548 5017 flags.go:64] FLAG: --log-cadvisor-usage="false"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.068557 5017 flags.go:64] FLAG: --log-flush-frequency="5s"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.068567 5017 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.068581 5017 flags.go:64] FLAG: --log-json-split-stream="false"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.068592 5017 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.068601 5017 flags.go:64] FLAG: --log-text-split-stream="false"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.068611 5017 flags.go:64] FLAG: --logging-format="text"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.068622 5017 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.068635 5017 flags.go:64] FLAG: --make-iptables-util-chains="true"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.068646 5017 flags.go:64] FLAG: --manifest-url=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.068657 5017 flags.go:64] FLAG: --manifest-url-header=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.068673 5017 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.068685 5017 flags.go:64] FLAG: --max-open-files="1000000"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.068700 5017 flags.go:64] FLAG: --max-pods="110"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.068712 5017 flags.go:64] FLAG: --maximum-dead-containers="-1"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.068727 5017 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.068739 5017 flags.go:64] FLAG: --memory-manager-policy="None"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.068750 5017 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.068759 5017 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.068768 5017 flags.go:64] FLAG: --node-ip="192.168.126.11"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.068778 5017 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.068799 5017 flags.go:64] FLAG: --node-status-max-images="50"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.068809 5017 flags.go:64] FLAG: --node-status-update-frequency="10s"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.068819 5017 flags.go:64] FLAG: --oom-score-adj="-999"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.068828 5017 flags.go:64] FLAG: --pod-cidr=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.068837 5017 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.068851 5017 flags.go:64] FLAG: --pod-manifest-path=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.068863 5017 flags.go:64] FLAG: --pod-max-pids="-1"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.068875 5017 flags.go:64] FLAG: --pods-per-core="0"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.068886 5017 flags.go:64] FLAG: --port="10250"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.068897 5017 flags.go:64] FLAG: --protect-kernel-defaults="false"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.068909 5017 flags.go:64] FLAG: --provider-id=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.068920 5017 flags.go:64] FLAG: --qos-reserved=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.068934 5017 flags.go:64] FLAG: --read-only-port="10255"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.068946 5017 flags.go:64] FLAG: --register-node="true"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.068988 5017 flags.go:64] FLAG: --register-schedulable="true"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.068998 5017 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.069018 5017 flags.go:64] FLAG: --registry-burst="10"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.069029 5017 flags.go:64] FLAG: --registry-qps="5"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.069041 5017 flags.go:64] FLAG: --reserved-cpus=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.069070 5017 flags.go:64] FLAG: --reserved-memory=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.069105 5017 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.069117 5017 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.069130 5017 flags.go:64] FLAG: --rotate-certificates="false"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.069142 5017 flags.go:64] FLAG: --rotate-server-certificates="false"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.069153 5017 flags.go:64] FLAG: --runonce="false"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.069165 5017 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.069178 5017 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.069191 5017 flags.go:64] FLAG: --seccomp-default="false"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.069202 5017 flags.go:64] FLAG: --serialize-image-pulls="true"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.069214 5017 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.069227 5017 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.069239 5017 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.069252 5017 flags.go:64] FLAG: --storage-driver-password="root"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.069264 5017 flags.go:64] FLAG: --storage-driver-secure="false"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.069275 5017 flags.go:64] FLAG: --storage-driver-table="stats"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.069285 5017 flags.go:64] FLAG: --storage-driver-user="root"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.069294 5017 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.069305 5017 flags.go:64] FLAG: --sync-frequency="1m0s"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.069314 5017 flags.go:64] FLAG: --system-cgroups=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.069323 5017 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.069338 5017 flags.go:64] FLAG: --system-reserved-cgroup=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.069348 5017 flags.go:64] FLAG: --tls-cert-file=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.069357 5017 flags.go:64] FLAG: --tls-cipher-suites="[]"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.069371 5017 flags.go:64] FLAG: --tls-min-version=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.069380 5017 flags.go:64] FLAG: --tls-private-key-file=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.069389 5017 flags.go:64] FLAG: --topology-manager-policy="none"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.069398 5017 flags.go:64] FLAG: --topology-manager-policy-options=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.069407 5017 flags.go:64] FLAG: --topology-manager-scope="container"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.069418 5017 flags.go:64] FLAG: --v="2"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.069430 5017 flags.go:64] FLAG: --version="false"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.069449 5017 flags.go:64] FLAG: --vmodule=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.069461 5017 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.069471 5017 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.069685 5017 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.069695 5017 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.069704 5017 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.069712 5017 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.069720 5017 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.069730 5017 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.069738 5017 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.069747 5017 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.069754 5017 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.069762 5017 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.069770 5017 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.069778 5017 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.069786 5017 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.069797 5017 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
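[Editor's note] The flags.go:64 dump above is the kubelet echoing every flag's effective value before the config file is overlaid. Reducing it to a mapping makes two boots easy to diff; a small sketch, assuming the exact FLAG: --name="value" shape shown above and one journal entry per line (the input file name is hypothetical):

    #!/usr/bin/env python3
    # Reduce the flags.go:64 dump to {flag: value} for easy diffing.
    import re
    import sys

    FLAG_RE = re.compile(r'flags\.go:64\] FLAG: (--[\w.-]+)="(.*)"')

    def parse_flags(path):
        flags = {}
        with open(path, encoding="utf-8", errors="replace") as fh:
            for line in fh:
                m = FLAG_RE.search(line)
                if m:
                    flags[m.group(1)] = m.group(2)
        return flags

    if __name__ == "__main__":
        flags = parse_flags(sys.argv[1])  # e.g. a saved journal dump
        print(f"{len(flags)} flags parsed")
        # Note: the flag default is "cgroupfs", but the CRI runtime later
        # overrides the effective driver to "systemd" (see server.go:1437 below).
        print("cgroup driver flag:", flags.get("--cgroup-driver"))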
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.069807 5017 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.069817 5017 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.069828 5017 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.069837 5017 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.069848 5017 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.069858 5017 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.069869 5017 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.069880 5017 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.069891 5017 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.069903 5017 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.069914 5017 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.069923 5017 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.069934 5017 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.069944 5017 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.069989 5017 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.069999 5017 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.070007 5017 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.070015 5017 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.070023 5017 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.070031 5017 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.070039 5017 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.070047 5017 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.070058 5017 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.070069 5017 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.070080 5017 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.070088 5017 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.070097 5017 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.070106 5017 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.070114 5017 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.070122 5017 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.070130 5017 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.070138 5017 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.070147 5017 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.070155 5017 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.070163 5017 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.070170 5017 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.070178 5017 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.070186 5017 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.070195 5017 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.070202 5017 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.070210 5017 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.070218 5017 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.070226 5017 feature_gate.go:330] unrecognized feature gate: Example
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.070234 5017 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.070242 5017 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.070250 5017 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.070257 5017 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.070265 5017 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.070275 5017 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.070285 5017 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.070295 5017 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.070304 5017 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.070318 5017 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.070331 5017 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.070342 5017 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.070362 5017 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.070372 5017 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.070388 5017 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.083198 5017 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.083270 5017 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.083460 5017 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.083497 5017 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.083505 5017 feature_gate.go:330] unrecognized feature gate: Example
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.083511 5017 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.083517 5017 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.083522 5017 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.083527 5017 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.083533 5017 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.083538 5017 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.083543 5017 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.083549 5017 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.083554 5017 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.083580 5017 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.083586 5017 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.083591 5017 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.083596 5017 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.083602 5017 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.083607 5017 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.083613 5017 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.083618 5017 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.083623 5017 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.083629 5017 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.083634 5017 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.083639 5017 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.083667 5017 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.083673 5017 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.083678 5017 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.083683 5017 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.083690 5017 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
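[Editor's note] The feature_gate.go:386 summary above prints the resolved gate set as a Go map literal. Pulling it into a real mapping makes it easy to assert on (for example, that ValidatingAdmissionPolicy stayed enabled on every pass); a sketch against the exact format shown, with the sample string copied from the log:

    #!/usr/bin/env python3
    # Parse klog's `feature gates: {map[Name:bool ...]}` summary into a dict.
    import re

    line = ("feature gates: {map[CloudDualStackNodeIPs:true "
            "DisableKubeletCloudCredentialProviders:true "
            "DynamicResourceAllocation:false EventedPLEG:false KMSv1:true "
            "MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false "
            "RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false "
            "TranslateStreamCloseWebsocketRequests:false "
            "UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false "
            "ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}")

    gates = {name: val == "true"
             for name, val in re.findall(r"(\w+):(true|false)", line)}

    assert gates["ValidatingAdmissionPolicy"] is True
    print("enabled:", sorted(k for k, v in gates.items() if v))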
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.083698 5017 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.083704 5017 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.083710 5017 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.083716 5017 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.083721 5017 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.083730 5017 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.083736 5017 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.083742 5017 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.083747 5017 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.083775 5017 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.083781 5017 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.083787 5017 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.083792 5017 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.083798 5017 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.083803 5017 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.083809 5017 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.083814 5017 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.083820 5017 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.083827 5017 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.083834 5017 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.083839 5017 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.083845 5017 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.083850 5017 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.083856 5017 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.083861 5017 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.083866 5017 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.083872 5017 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.083878 5017 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.083883 5017 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.083889 5017 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.083911 5017 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.083917 5017 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.083924 5017 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.083930 5017 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.083935 5017 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.083941 5017 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.083948 5017 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.083967 5017 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.083973 5017 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.083980 5017 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.083987 5017 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.083995 5017 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.084005 5017 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.084231 5017 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.084244 5017 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.084251 5017 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.084257 5017 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.084263 5017 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.084269 5017 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.084274 5017 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.084282 5017 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
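[Editor's note] The same warning set recurs several times, apparently because the gate list is applied more than once during startup; only timestamps and klog offsets change between passes. When eyeballing such a boot log, collapsing repeats by message body (ignoring the timestamp/klog prefix) makes the unique signal visible; an illustrative sketch:

    #!/usr/bin/env python3
    # Collapse repeated journal messages by body, ignoring the
    # "Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.070069 5017 "
    # prefix, so the near-identical gate blocks reduce to one set with counts.
    import re
    import sys

    PREFIX = re.compile(
        r"^\w{3} +\d+ [0-9:]+ \S+ \S+\[\d+\]: [IWE]\d{4} [0-9:.]+ \d+ ")

    counts: dict[str, int] = {}
    with open(sys.argv[1], encoding="utf-8", errors="replace") as fh:
        for line in fh:
            body = PREFIX.sub("", line.strip())
            if body:
                counts[body] = counts.get(body, 0) + 1

    for body, n in sorted(counts.items(), key=lambda kv: -kv[1]):
        print(f"[x{n}] {body}")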
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.084288 5017 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.084294 5017 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.084299 5017 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.084305 5017 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.084312 5017 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.084318 5017 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.084324 5017 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.084330 5017 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.084336 5017 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.084342 5017 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.084347 5017 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.084353 5017 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.084358 5017 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.084363 5017 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.084370 5017 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.084375 5017 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.084381 5017 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.084387 5017 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.084392 5017 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.084397 5017 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.084402 5017 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.084408 5017 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.084413 5017 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.084418 5017 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.084424 5017 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.084429 5017 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.084436 5017 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.084441 5017 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.084446 5017 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.084451 5017 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.084457 5017 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.084462 5017 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.084468 5017 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.084473 5017 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.084478 5017 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.084483 5017 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.084488 5017 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.084493 5017 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.084499 5017 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.084504 5017 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.084528 5017 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.084534 5017 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.084541 5017 feature_gate.go:330] unrecognized feature gate: Example
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.084548 5017 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.084556 5017 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.084563 5017 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.084570 5017 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.084575 5017 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.084581 5017 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.084586 5017 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.084591 5017 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.084596 5017 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.084601 5017 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.084606 5017 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.084611 5017 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.084616 5017 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.084623 5017 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.084629 5017 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.084635 5017 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.084641 5017 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.084646 5017 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.084653 5017 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.084661 5017 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.084672 5017 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.085873 5017 server.go:940] "Client rotation is on, will bootstrap in background"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.090910 5017 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.091073 5017 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
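[Editor's note] The certificate lines above and below show rotation kicking in immediately: the client cert expires 2026-02-24, but the rotation deadline (2025-11-13) has already passed at boot, so the manager goes straight to "Rotating certificates". client-go's certificate manager picks that deadline at a random point roughly 70-90% of the way through the certificate's validity window; a sketch of that computation (jitter range per upstream client-go; the notBefore value is assumed, since the log does not print it):

    #!/usr/bin/env python3
    # Sketch: how a rotation deadline like the one logged above is derived -
    # a uniformly random point in ~[70%, 90%] of the cert's validity window.
    import random
    from datetime import datetime, timedelta, timezone

    not_after = datetime(2026, 2, 24, 5, 52, 8, tzinfo=timezone.utc)  # from the log
    not_before = not_after - timedelta(days=365)                      # assumed 1-year cert

    validity = not_after - not_before
    deadline = not_before + validity * random.uniform(0.7, 0.9)
    print("rotation deadline:", deadline)

    # Consistency check: the logged deadline (2025-11-13 02:57:18 UTC) sits
    # about 72% into that assumed window, i.e. inside the jitter range.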
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.092770 5017 server.go:997] "Starting client certificate rotation"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.092807 5017 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.093939 5017 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-13 02:57:18.153067289 +0000 UTC
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.094086 5017 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.118334 5017 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.121463 5017 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 29 06:35:14 crc kubenswrapper[5017]: E0129 06:35:14.121547 5017 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.154:6443: connect: connection refused" logger="UnhandledError"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.140573 5017 log.go:25] "Validated CRI v1 runtime API"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.182000 5017 log.go:25] "Validated CRI v1 image API"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.185127 5017 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.190853 5017 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-29-06-29-44-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.190931 5017 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.223726 5017 manager.go:217] Machine: {Timestamp:2026-01-29 06:35:14.219228556 +0000 UTC m=+0.593676196 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:6f20d4ce-100b-4db6-aef5-b7c5d1dcba49 BootID:b652416b-993f-4447-94ce-fb2ce8447cbe Filesystems:[{Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:c5:33:5f Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:c5:33:5f Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:15:27:e8 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:f1:90:c7 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:a3:8e:a9 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:b8:65:59 Speed:-1 Mtu:1496} {Name:ens7.23 MacAddress:52:54:00:43:6c:eb Speed:-1 Mtu:1496} {Name:eth10 MacAddress:a2:5d:12:59:d3:f2 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:ba:b4:b4:90:3e:8a Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.224161 5017 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.224378 5017 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.226219 5017 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.226721 5017 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.226803 5017 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.227355 5017 topology_manager.go:138] "Creating topology manager with none policy"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.227377 5017 container_manager_linux.go:303] "Creating device plugin manager"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.228060 5017 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.228138 5017 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.229027 5017 state_mem.go:36] "Initialized new in-memory state store"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.229163 5017 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.232786 5017 kubelet.go:418] "Attempting to sync node with API server"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.232817 5017 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.232849 5017 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.232871 5017 kubelet.go:324] "Adding apiserver pod source"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.232888 5017 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.238295 5017 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.154:6443: connect: connection refused
Jan 29 06:35:14 crc kubenswrapper[5017]: E0129 06:35:14.238538 5017 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.154:6443: connect: connection refused" logger="UnhandledError"
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.238518 5017 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.154:6443: connect: connection refused
Jan 29 06:35:14 crc kubenswrapper[5017]: E0129 06:35:14.238656 5017 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.154:6443: connect: connection refused" logger="UnhandledError"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.239044 5017 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.240252 5017 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
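The certificate_manager.go:356 lines in this block log a rotation deadline well before the certificate's expiration. That pattern comes from jittering the deadline inside the tail of the validity window so a fleet of kubelets does not request new certificates simultaneously. An illustrative Go sketch, not the client-go source; the 70%-90% window is an assumption for the example:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline picks a random point between 70% and 90% of the
// notBefore..notAfter validity window (assumed range for illustration).
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	jittered := time.Duration((0.7 + 0.2*rand.Float64()) * float64(total))
	return notBefore.Add(jittered)
}

func main() {
	// Hypothetical one-year client certificate, matching the shape of the
	// expiration/deadline pair logged above.
	notBefore := time.Date(2025, 2, 24, 5, 52, 8, 0, time.UTC)
	notAfter := time.Date(2026, 2, 24, 5, 52, 8, 0, time.UTC)
	fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter))
}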
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.242647 5017 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.244213 5017 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.244249 5017 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.244259 5017 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.244269 5017 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.244286 5017 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.244295 5017 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.244304 5017 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.244319 5017 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.244330 5017 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.244340 5017 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.244353 5017 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.244363 5017 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.244389 5017 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.244884 5017 server.go:1280] "Started kubelet"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.245220 5017 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.154:6443: connect: connection refused
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.247173 5017 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.247092 5017 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.247922 5017 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 29 06:35:14 crc systemd[1]: Started Kubernetes Kubelet.
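The run of plugins.go:603 lines above records the kubelet populating its in-tree volume plugin registry, keyed by a unique plugin name such as "kubernetes.io/configmap", before serving pods. A minimal sketch of such a name-keyed registry, with hypothetical types (PluginRegistry and simplePlugin are illustrative, not the kubelet's volume plugin manager API):

package main

import "fmt"

// VolumePlugin is the minimal contract: every plugin exposes a unique name.
type VolumePlugin interface {
	Name() string
}

type simplePlugin struct{ name string }

func (p simplePlugin) Name() string { return p.name }

// PluginRegistry maps plugin names to implementations for later lookup
// when a pod's volume spec has to be mounted.
type PluginRegistry struct {
	plugins map[string]VolumePlugin
}

func NewPluginRegistry() *PluginRegistry {
	return &PluginRegistry{plugins: map[string]VolumePlugin{}}
}

// Register rejects duplicate names, mirroring the uniqueness the log implies.
func (r *PluginRegistry) Register(p VolumePlugin) error {
	if _, exists := r.plugins[p.Name()]; exists {
		return fmt.Errorf("volume plugin %q already registered", p.Name())
	}
	r.plugins[p.Name()] = p
	fmt.Printf("Loaded volume plugin %q\n", p.Name())
	return nil
}

func main() {
	reg := NewPluginRegistry()
	for _, name := range []string{"kubernetes.io/empty-dir", "kubernetes.io/configmap", "kubernetes.io/csi"} {
		_ = reg.Register(simplePlugin{name})
	}
}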
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.251780 5017 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.251829 5017 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 29 06:35:14 crc kubenswrapper[5017]: E0129 06:35:14.252117 5017 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.255528 5017 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 21:05:30.541306605 +0000 UTC
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.255792 5017 volume_manager.go:287] "The desired_state_of_world populator starts"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.255933 5017 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.257249 5017 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.257635 5017 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.154:6443: connect: connection refused
Jan 29 06:35:14 crc kubenswrapper[5017]: E0129 06:35:14.257730 5017 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.154:6443: connect: connection refused" logger="UnhandledError"
Jan 29 06:35:14 crc kubenswrapper[5017]: E0129 06:35:14.258728 5017 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.154:6443: connect: connection refused" interval="200ms"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.260372 5017 factory.go:153] Registering CRI-O factory
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.260432 5017 factory.go:221] Registration of the crio container factory successfully
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.260510 5017 server.go:460] "Adding debug handlers to kubelet server"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.260559 5017 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.260577 5017 factory.go:55] Registering systemd factory
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.260590 5017 factory.go:221] Registration of the systemd container factory successfully
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.260625 5017 factory.go:103] Registering Raw factory
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.260654 5017 manager.go:1196] Started watching for new ooms in manager
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.261762 5017 manager.go:319] Starting recovery of all containers
Jan 29 06:35:14 crc kubenswrapper[5017]: E0129 06:35:14.266586 5017 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.154:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188f202339c1ebe0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-29 06:35:14.244848608 +0000 UTC m=+0.619296228,LastTimestamp:2026-01-29 06:35:14.244848608 +0000 UTC m=+0.619296228,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.271637 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.271749 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.271786 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.271807 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.271826 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.271845 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.271867 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.271904 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.271948 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.272021 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.272048 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.272075 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.272094 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.272119 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.272141 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.274132 5017 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount"
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.274168 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.274187 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.274202 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.274218 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.274231 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.274280 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.274327 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.274342 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.274356 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.274370 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.274383 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.274423 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.274440 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.274453 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.274523 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.274537 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.274549 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.274561 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.274572 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.274583 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.274594 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.274605 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.274617 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.274628 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.274638 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.274650 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.274662 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.274673 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.274707 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.274719 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.274731 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.274744 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.274754 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.274763 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.274774 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.274784 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.274795 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.274810 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.274823 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.274834 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.274845 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.274854 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.274866 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.274876 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.274885 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.274895 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.274936 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.274948 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.274987 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.275004 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.275021 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.275033 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.275044 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.275055 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.275103 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.275116 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.275127 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.275137 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.275147 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.275158 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.275170 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.275180 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.275192 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.275205 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.275215 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.275802 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.275851 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.275890 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.275905 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.275932 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.275948 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.275981 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.276008 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.276023 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.276047 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.276062 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.276078 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.276098 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.276112 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.276150 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.276173 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.276190 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.276210 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.276225 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.280254 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.280319 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.280348 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.280370 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.280392 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.280432 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.280465 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.280500 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.280524 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.280557 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.280589 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.280616 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.280750 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.280846 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.280870 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.280885 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.280901 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.280921 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.280936 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.281061 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.281105 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.281143 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.281162 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.281181 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.281205 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.282253 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.282279 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.282303 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.282320 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.282338 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.282391 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.282411 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.282433 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.282450 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.282465 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.282547 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.282564 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.282583 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.282598 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.282614 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext=""
Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.282634 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a"
volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.282650 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.282673 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.282688 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.282720 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.282740 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.282759 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.282776 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.282789 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.282804 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.282826 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.282856 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" 
volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.282875 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.282917 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.282933 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.283285 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.283308 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.283370 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.283397 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.283415 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.283436 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.283454 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.283468 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" 
volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.283487 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.283516 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.283535 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.283551 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.283568 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.283587 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.283602 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.283618 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.283633 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.283705 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.283796 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" 
volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.283882 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.283926 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.283951 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.284008 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.284027 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.284042 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.284058 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.284075 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.284089 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.284109 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.284125 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" 
volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.284141 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.284190 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.284248 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.284272 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.284298 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.284316 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.284375 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.284392 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.284409 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.284427 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.284444 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" 
volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.284459 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.284480 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.284522 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.284543 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.284558 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.284572 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.284597 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.284612 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.284633 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.284698 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.284713 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" 
volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.284729 5017 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.284740 5017 reconstruct.go:97] "Volume reconstruction finished" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.284749 5017 reconciler.go:26] "Reconciler: start to sync state" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.288306 5017 manager.go:324] Recovery completed Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.303160 5017 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.305083 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.305172 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.305187 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.306139 5017 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.306171 5017 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.306197 5017 state_mem.go:36] "Initialized new in-memory state store" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.312872 5017 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.314776 5017 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.314859 5017 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.314904 5017 kubelet.go:2335] "Starting kubelet main sync loop" Jan 29 06:35:14 crc kubenswrapper[5017]: E0129 06:35:14.314998 5017 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.316296 5017 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.154:6443: connect: connection refused Jan 29 06:35:14 crc kubenswrapper[5017]: E0129 06:35:14.316347 5017 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.154:6443: connect: connection refused" logger="UnhandledError" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.325598 5017 policy_none.go:49] "None policy: Start" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.327666 5017 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.327693 5017 state_mem.go:35] "Initializing new in-memory state store" Jan 29 06:35:14 crc kubenswrapper[5017]: E0129 06:35:14.352996 5017 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.379092 5017 manager.go:334] "Starting Device Plugin manager" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.379165 5017 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.379182 5017 server.go:79] "Starting device plugin registration server" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.379917 5017 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.379946 5017 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.380123 5017 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.380247 5017 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.380258 5017 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 06:35:14 crc kubenswrapper[5017]: E0129 06:35:14.391895 5017 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.415614 5017 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 29 06:35:14 crc kubenswrapper[5017]: 
I0129 06:35:14.415748 5017 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.416894 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.416973 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.416991 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.417290 5017 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.417499 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.417546 5017 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.418348 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.418418 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.418434 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.418462 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.418490 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.418502 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.418715 5017 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.418815 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.418854 5017 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.419436 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.419470 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.419483 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.419626 5017 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.419637 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.419658 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.419668 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.419721 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.419759 5017 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.420444 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.420470 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.420480 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.420580 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.420600 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.420610 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.420621 5017 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.420741 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.420772 5017 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.421397 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.421429 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.421441 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.421609 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.421640 5017 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.421649 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.421695 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.421709 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.422639 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.422674 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.422686 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:14 crc kubenswrapper[5017]: E0129 06:35:14.460272 5017 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.154:6443: connect: connection refused" interval="400ms" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.481072 5017 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.482651 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.482694 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.482706 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.482739 5017 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 29 06:35:14 crc kubenswrapper[5017]: E0129 06:35:14.483199 5017 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.154:6443: connect: 
connection refused" node="crc" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.489117 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.489190 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.489229 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.489434 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.489461 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.489486 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.489513 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.489534 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.489552 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.489569 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.489854 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.489883 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.489906 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.489933 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.489950 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.592038 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.592175 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.592211 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.592263 5017 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.592296 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.592345 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.592371 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.592397 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.592407 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.592521 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.592445 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.592425 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.592581 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.592543 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.592651 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.592702 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.592708 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.592736 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.592704 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.592854 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.592899 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.592940 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.593018 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 
06:35:14.593035 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.593075 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.593117 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.593146 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.593186 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.593300 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.593520 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.683905 5017 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.686068 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.686139 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.686157 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.686196 5017 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 29 06:35:14 crc kubenswrapper[5017]: E0129 06:35:14.686953 5017 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 
38.102.83.154:6443: connect: connection refused" node="crc" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.749064 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.754509 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.774335 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.792090 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 06:35:14 crc kubenswrapper[5017]: I0129 06:35:14.795001 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.795084 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-b8e4e1fa46dae213c374e7d8af2a1f87726306681fd2e576379efd5a9206b58d WatchSource:0}: Error finding container b8e4e1fa46dae213c374e7d8af2a1f87726306681fd2e576379efd5a9206b58d: Status 404 returned error can't find the container with id b8e4e1fa46dae213c374e7d8af2a1f87726306681fd2e576379efd5a9206b58d Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.812074 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-28eb22bcdbc8bfa8ba7b113d46476f87358fd5e8063dfbad44fe7d28845eb121 WatchSource:0}: Error finding container 28eb22bcdbc8bfa8ba7b113d46476f87358fd5e8063dfbad44fe7d28845eb121: Status 404 returned error can't find the container with id 28eb22bcdbc8bfa8ba7b113d46476f87358fd5e8063dfbad44fe7d28845eb121 Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.815388 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-0d55f0e441094f5a8d67ca0820bf45be39eb4ab4e8049c2b7566048aed8d5abc WatchSource:0}: Error finding container 0d55f0e441094f5a8d67ca0820bf45be39eb4ab4e8049c2b7566048aed8d5abc: Status 404 returned error can't find the container with id 0d55f0e441094f5a8d67ca0820bf45be39eb4ab4e8049c2b7566048aed8d5abc Jan 29 06:35:14 crc kubenswrapper[5017]: W0129 06:35:14.824294 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-4a589feb3984d357e0b4de79ff054d10f89f65ee322708de316ce2957e09bea6 WatchSource:0}: Error finding container 4a589feb3984d357e0b4de79ff054d10f89f65ee322708de316ce2957e09bea6: Status 404 returned error can't find the container with id 4a589feb3984d357e0b4de79ff054d10f89f65ee322708de316ce2957e09bea6 Jan 29 06:35:14 crc kubenswrapper[5017]: E0129 06:35:14.861079 5017 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.154:6443: connect: connection refused" interval="800ms" Jan 29 06:35:15 crc kubenswrapper[5017]: I0129 
06:35:15.087397 5017 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 06:35:15 crc kubenswrapper[5017]: I0129 06:35:15.091050 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:15 crc kubenswrapper[5017]: I0129 06:35:15.091103 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:15 crc kubenswrapper[5017]: I0129 06:35:15.091118 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:15 crc kubenswrapper[5017]: I0129 06:35:15.091153 5017 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 29 06:35:15 crc kubenswrapper[5017]: E0129 06:35:15.091693 5017 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.154:6443: connect: connection refused" node="crc" Jan 29 06:35:15 crc kubenswrapper[5017]: W0129 06:35:15.200394 5017 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.154:6443: connect: connection refused Jan 29 06:35:15 crc kubenswrapper[5017]: E0129 06:35:15.200497 5017 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.154:6443: connect: connection refused" logger="UnhandledError" Jan 29 06:35:15 crc kubenswrapper[5017]: W0129 06:35:15.227140 5017 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.154:6443: connect: connection refused Jan 29 06:35:15 crc kubenswrapper[5017]: E0129 06:35:15.227187 5017 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.154:6443: connect: connection refused" logger="UnhandledError" Jan 29 06:35:15 crc kubenswrapper[5017]: I0129 06:35:15.249547 5017 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.154:6443: connect: connection refused Jan 29 06:35:15 crc kubenswrapper[5017]: I0129 06:35:15.256511 5017 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 16:08:10.90979457 +0000 UTC Jan 29 06:35:15 crc kubenswrapper[5017]: I0129 06:35:15.320232 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"88fe17ea6f731b9f508e19fbe316a45d787007ac7fd038248dd489ee1a448a82"} Jan 29 06:35:15 crc kubenswrapper[5017]: I0129 06:35:15.321478 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"4a589feb3984d357e0b4de79ff054d10f89f65ee322708de316ce2957e09bea6"} Jan 29 06:35:15 crc kubenswrapper[5017]: I0129 06:35:15.322937 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"0d55f0e441094f5a8d67ca0820bf45be39eb4ab4e8049c2b7566048aed8d5abc"} Jan 29 06:35:15 crc kubenswrapper[5017]: I0129 06:35:15.324563 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"28eb22bcdbc8bfa8ba7b113d46476f87358fd5e8063dfbad44fe7d28845eb121"} Jan 29 06:35:15 crc kubenswrapper[5017]: I0129 06:35:15.325986 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"b8e4e1fa46dae213c374e7d8af2a1f87726306681fd2e576379efd5a9206b58d"} Jan 29 06:35:15 crc kubenswrapper[5017]: W0129 06:35:15.444271 5017 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.154:6443: connect: connection refused Jan 29 06:35:15 crc kubenswrapper[5017]: E0129 06:35:15.444384 5017 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.154:6443: connect: connection refused" logger="UnhandledError" Jan 29 06:35:15 crc kubenswrapper[5017]: W0129 06:35:15.486190 5017 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.154:6443: connect: connection refused Jan 29 06:35:15 crc kubenswrapper[5017]: E0129 06:35:15.486320 5017 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.154:6443: connect: connection refused" logger="UnhandledError" Jan 29 06:35:15 crc kubenswrapper[5017]: E0129 06:35:15.662544 5017 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.154:6443: connect: connection refused" interval="1.6s" Jan 29 06:35:15 crc kubenswrapper[5017]: I0129 06:35:15.892872 5017 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 06:35:15 crc kubenswrapper[5017]: I0129 06:35:15.895602 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:15 crc kubenswrapper[5017]: I0129 06:35:15.895665 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:15 crc kubenswrapper[5017]: I0129 06:35:15.895683 5017 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:15 crc kubenswrapper[5017]: I0129 06:35:15.895767 5017 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 29 06:35:15 crc kubenswrapper[5017]: E0129 06:35:15.896536 5017 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.154:6443: connect: connection refused" node="crc" Jan 29 06:35:16 crc kubenswrapper[5017]: I0129 06:35:16.235778 5017 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 29 06:35:16 crc kubenswrapper[5017]: E0129 06:35:16.237802 5017 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.154:6443: connect: connection refused" logger="UnhandledError" Jan 29 06:35:16 crc kubenswrapper[5017]: I0129 06:35:16.249185 5017 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.154:6443: connect: connection refused Jan 29 06:35:16 crc kubenswrapper[5017]: I0129 06:35:16.257113 5017 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 13:54:46.859858902 +0000 UTC Jan 29 06:35:16 crc kubenswrapper[5017]: I0129 06:35:16.331364 5017 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="f4427a913cdd88eae362ca579529cd4379af137323e8867938f098f9c83ecfce" exitCode=0 Jan 29 06:35:16 crc kubenswrapper[5017]: I0129 06:35:16.331448 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"f4427a913cdd88eae362ca579529cd4379af137323e8867938f098f9c83ecfce"} Jan 29 06:35:16 crc kubenswrapper[5017]: I0129 06:35:16.331506 5017 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 06:35:16 crc kubenswrapper[5017]: I0129 06:35:16.332542 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:16 crc kubenswrapper[5017]: I0129 06:35:16.332574 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:16 crc kubenswrapper[5017]: I0129 06:35:16.332583 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:16 crc kubenswrapper[5017]: I0129 06:35:16.333351 5017 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="2bfc296503473169a1757c62cdb7f773e9dbb5a812855bb391664b4a91cc2655" exitCode=0 Jan 29 06:35:16 crc kubenswrapper[5017]: I0129 06:35:16.333404 5017 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 06:35:16 crc kubenswrapper[5017]: I0129 06:35:16.333429 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"2bfc296503473169a1757c62cdb7f773e9dbb5a812855bb391664b4a91cc2655"} Jan 29 06:35:16 crc kubenswrapper[5017]: I0129 06:35:16.334314 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:16 crc kubenswrapper[5017]: I0129 06:35:16.334360 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:16 crc kubenswrapper[5017]: I0129 06:35:16.334374 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:16 crc kubenswrapper[5017]: I0129 06:35:16.339783 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"4fc74339c29860f59a280af0f3e21a0f4fa2be4a3e28eb63d90bc63854805d16"} Jan 29 06:35:16 crc kubenswrapper[5017]: I0129 06:35:16.339896 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"27acd0133bf699b73534766330da62590b4231608bd96e51bf01b06b5775073d"} Jan 29 06:35:16 crc kubenswrapper[5017]: I0129 06:35:16.339919 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"8ff1f2d36640f2aebd2cd6919987f3f9000d95a8cefac7844a0671da192b1897"} Jan 29 06:35:16 crc kubenswrapper[5017]: I0129 06:35:16.339933 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"1045c0f8fc7c60a5aa6d2e2b449230e4603b08b0b4244c319e296f89b04ab98d"} Jan 29 06:35:16 crc kubenswrapper[5017]: I0129 06:35:16.339847 5017 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 06:35:16 crc kubenswrapper[5017]: I0129 06:35:16.341610 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:16 crc kubenswrapper[5017]: I0129 06:35:16.341718 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:16 crc kubenswrapper[5017]: I0129 06:35:16.341802 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:16 crc kubenswrapper[5017]: I0129 06:35:16.341922 5017 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="6f97561ea3b9baa3440e5148060c80d57e9cf17837e7799b057f618d9084e8a3" exitCode=0 Jan 29 06:35:16 crc kubenswrapper[5017]: I0129 06:35:16.341991 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"6f97561ea3b9baa3440e5148060c80d57e9cf17837e7799b057f618d9084e8a3"} Jan 29 06:35:16 crc kubenswrapper[5017]: I0129 06:35:16.342279 5017 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 06:35:16 crc kubenswrapper[5017]: I0129 06:35:16.343226 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 29 06:35:16 crc kubenswrapper[5017]: I0129 06:35:16.343348 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:16 crc kubenswrapper[5017]: I0129 06:35:16.343441 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:16 crc kubenswrapper[5017]: I0129 06:35:16.343744 5017 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="de8f2cd2a8ecded6e6a3640faf4b6baced8aa978dd502e0bc0b5b427976b9e44" exitCode=0 Jan 29 06:35:16 crc kubenswrapper[5017]: I0129 06:35:16.343798 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"de8f2cd2a8ecded6e6a3640faf4b6baced8aa978dd502e0bc0b5b427976b9e44"} Jan 29 06:35:16 crc kubenswrapper[5017]: I0129 06:35:16.343834 5017 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 06:35:16 crc kubenswrapper[5017]: I0129 06:35:16.344552 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:16 crc kubenswrapper[5017]: I0129 06:35:16.344588 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:16 crc kubenswrapper[5017]: I0129 06:35:16.344600 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:16 crc kubenswrapper[5017]: I0129 06:35:16.351468 5017 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 06:35:16 crc kubenswrapper[5017]: I0129 06:35:16.357511 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:16 crc kubenswrapper[5017]: I0129 06:35:16.357566 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:16 crc kubenswrapper[5017]: I0129 06:35:16.357579 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:16 crc kubenswrapper[5017]: W0129 06:35:16.876918 5017 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.154:6443: connect: connection refused Jan 29 06:35:16 crc kubenswrapper[5017]: E0129 06:35:16.877045 5017 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.154:6443: connect: connection refused" logger="UnhandledError" Jan 29 06:35:17 crc kubenswrapper[5017]: I0129 06:35:17.249911 5017 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.154:6443: connect: connection refused Jan 29 06:35:17 crc kubenswrapper[5017]: I0129 06:35:17.257321 5017 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 
16:11:41.156592417 +0000 UTC Jan 29 06:35:17 crc kubenswrapper[5017]: E0129 06:35:17.264407 5017 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.154:6443: connect: connection refused" interval="3.2s" Jan 29 06:35:17 crc kubenswrapper[5017]: I0129 06:35:17.359103 5017 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 06:35:17 crc kubenswrapper[5017]: I0129 06:35:17.359152 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"6d7e73818c30d09e31eca5e1aaea21b564796b5fb0445eb14884781ff264ea9e"} Jan 29 06:35:17 crc kubenswrapper[5017]: I0129 06:35:17.360071 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:17 crc kubenswrapper[5017]: I0129 06:35:17.360103 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:17 crc kubenswrapper[5017]: I0129 06:35:17.360112 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:17 crc kubenswrapper[5017]: I0129 06:35:17.363460 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"8347eb7e0b582340e93bb623b71854b1b475a52c31cf20a88108a517dc2e2e65"} Jan 29 06:35:17 crc kubenswrapper[5017]: I0129 06:35:17.363524 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"058b4b1e91f79b4a7fa8f73ecae30bea929f40f5f38db0faf800c45c93580ef3"} Jan 29 06:35:17 crc kubenswrapper[5017]: I0129 06:35:17.363543 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"f1305407f047a3e06c10dc50b0bd12943e1330ff2d277680046be5a59ddeeaf5"} Jan 29 06:35:17 crc kubenswrapper[5017]: I0129 06:35:17.363666 5017 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 06:35:17 crc kubenswrapper[5017]: I0129 06:35:17.365285 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:17 crc kubenswrapper[5017]: I0129 06:35:17.365324 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:17 crc kubenswrapper[5017]: I0129 06:35:17.365341 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:17 crc kubenswrapper[5017]: I0129 06:35:17.371233 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c256463618132e5305eb6fed3ef923e6a3c200bbc4b917e7f678e1c78af05084"} Jan 29 06:35:17 crc kubenswrapper[5017]: I0129 06:35:17.371276 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a82e8858b15c840a63737058d3af382c1a6469d7d1d3ba6e44759f33e55f2543"} Jan 29 06:35:17 crc kubenswrapper[5017]: I0129 06:35:17.371292 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"4a303c0532bb4a555700875ed27e56205d709658417a72f47868fb7bc8291a77"} Jan 29 06:35:17 crc kubenswrapper[5017]: I0129 06:35:17.371308 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"fe28ec8d64ea50ecca5d4531440159b5277aff7de80b18813bb91f5c32923326"} Jan 29 06:35:17 crc kubenswrapper[5017]: I0129 06:35:17.373609 5017 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="4b0e6f578a27bbd90f51d46d34aac2c7b9c07d0cd2690d0ee12ec352849b2fef" exitCode=0 Jan 29 06:35:17 crc kubenswrapper[5017]: I0129 06:35:17.373719 5017 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 06:35:17 crc kubenswrapper[5017]: I0129 06:35:17.373797 5017 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 06:35:17 crc kubenswrapper[5017]: I0129 06:35:17.374422 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"4b0e6f578a27bbd90f51d46d34aac2c7b9c07d0cd2690d0ee12ec352849b2fef"} Jan 29 06:35:17 crc kubenswrapper[5017]: I0129 06:35:17.375066 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:17 crc kubenswrapper[5017]: I0129 06:35:17.375104 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:17 crc kubenswrapper[5017]: I0129 06:35:17.375117 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:17 crc kubenswrapper[5017]: I0129 06:35:17.375791 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:17 crc kubenswrapper[5017]: I0129 06:35:17.375822 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:17 crc kubenswrapper[5017]: I0129 06:35:17.375834 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:17 crc kubenswrapper[5017]: W0129 06:35:17.438438 5017 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.154:6443: connect: connection refused Jan 29 06:35:17 crc kubenswrapper[5017]: E0129 06:35:17.438579 5017 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.154:6443: connect: connection refused" logger="UnhandledError" Jan 29 06:35:17 crc kubenswrapper[5017]: I0129 06:35:17.497182 5017 kubelet_node_status.go:401] "Setting node annotation to enable volume 
controller attach/detach" Jan 29 06:35:17 crc kubenswrapper[5017]: I0129 06:35:17.498722 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:17 crc kubenswrapper[5017]: I0129 06:35:17.498776 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:17 crc kubenswrapper[5017]: I0129 06:35:17.498790 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:17 crc kubenswrapper[5017]: I0129 06:35:17.498826 5017 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 29 06:35:17 crc kubenswrapper[5017]: E0129 06:35:17.499731 5017 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.154:6443: connect: connection refused" node="crc" Jan 29 06:35:17 crc kubenswrapper[5017]: W0129 06:35:17.783184 5017 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.154:6443: connect: connection refused Jan 29 06:35:17 crc kubenswrapper[5017]: E0129 06:35:17.783290 5017 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.154:6443: connect: connection refused" logger="UnhandledError" Jan 29 06:35:18 crc kubenswrapper[5017]: I0129 06:35:18.257892 5017 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 01:02:58.087837572 +0000 UTC Jan 29 06:35:18 crc kubenswrapper[5017]: I0129 06:35:18.383723 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"27dd6068095b58038216d1837a6b7966c514c7d8a9db5d144b043d918b757394"} Jan 29 06:35:18 crc kubenswrapper[5017]: I0129 06:35:18.383800 5017 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 06:35:18 crc kubenswrapper[5017]: I0129 06:35:18.384993 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:18 crc kubenswrapper[5017]: I0129 06:35:18.385030 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:18 crc kubenswrapper[5017]: I0129 06:35:18.385044 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:18 crc kubenswrapper[5017]: I0129 06:35:18.387289 5017 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="9efae57a493ca7716629bf6c7cd8cf0d1ddcf01091f18c87ff3e6812cfa4593b" exitCode=0 Jan 29 06:35:18 crc kubenswrapper[5017]: I0129 06:35:18.387389 5017 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 06:35:18 crc kubenswrapper[5017]: I0129 06:35:18.387509 5017 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 06:35:18 crc kubenswrapper[5017]: I0129 06:35:18.387770 5017 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"9efae57a493ca7716629bf6c7cd8cf0d1ddcf01091f18c87ff3e6812cfa4593b"} Jan 29 06:35:18 crc kubenswrapper[5017]: I0129 06:35:18.387865 5017 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 06:35:18 crc kubenswrapper[5017]: I0129 06:35:18.387893 5017 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 06:35:18 crc kubenswrapper[5017]: I0129 06:35:18.388678 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:18 crc kubenswrapper[5017]: I0129 06:35:18.388761 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:18 crc kubenswrapper[5017]: I0129 06:35:18.388784 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:18 crc kubenswrapper[5017]: I0129 06:35:18.389324 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:18 crc kubenswrapper[5017]: I0129 06:35:18.389379 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:18 crc kubenswrapper[5017]: I0129 06:35:18.389398 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:18 crc kubenswrapper[5017]: I0129 06:35:18.389465 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:18 crc kubenswrapper[5017]: I0129 06:35:18.389541 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:18 crc kubenswrapper[5017]: I0129 06:35:18.389574 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:19 crc kubenswrapper[5017]: I0129 06:35:19.025075 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 06:35:19 crc kubenswrapper[5017]: I0129 06:35:19.258577 5017 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 15:47:22.896698969 +0000 UTC Jan 29 06:35:19 crc kubenswrapper[5017]: I0129 06:35:19.393912 5017 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 06:35:19 crc kubenswrapper[5017]: I0129 06:35:19.394026 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"a6a52f555621deaf2046b5a32102dcc0e71cef4f0ad6a4787ed84c2654030987"} Jan 29 06:35:19 crc kubenswrapper[5017]: I0129 06:35:19.394067 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"2e9930d8b71d863552f949f726bde2a4a825033a9dc1b3b6a8115de14182d39a"} Jan 29 06:35:19 crc kubenswrapper[5017]: I0129 06:35:19.394085 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"59df12089b16cfee9426e1fcab07f3dcb969faac25d775ed011d46074bd3bfd9"} Jan 29 
06:35:19 crc kubenswrapper[5017]: I0129 06:35:19.394130 5017 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 06:35:19 crc kubenswrapper[5017]: I0129 06:35:19.395135 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:19 crc kubenswrapper[5017]: I0129 06:35:19.395221 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:19 crc kubenswrapper[5017]: I0129 06:35:19.395234 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:20 crc kubenswrapper[5017]: I0129 06:35:20.259556 5017 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 08:54:10.971982691 +0000 UTC Jan 29 06:35:20 crc kubenswrapper[5017]: I0129 06:35:20.312071 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 06:35:20 crc kubenswrapper[5017]: I0129 06:35:20.316567 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 29 06:35:20 crc kubenswrapper[5017]: I0129 06:35:20.316830 5017 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 06:35:20 crc kubenswrapper[5017]: I0129 06:35:20.319107 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:20 crc kubenswrapper[5017]: I0129 06:35:20.319178 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:20 crc kubenswrapper[5017]: I0129 06:35:20.319197 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:20 crc kubenswrapper[5017]: I0129 06:35:20.347819 5017 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 29 06:35:20 crc kubenswrapper[5017]: I0129 06:35:20.402753 5017 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 06:35:20 crc kubenswrapper[5017]: I0129 06:35:20.402826 5017 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 06:35:20 crc kubenswrapper[5017]: I0129 06:35:20.402856 5017 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 06:35:20 crc kubenswrapper[5017]: I0129 06:35:20.403372 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"f39295e76e2adb0913db8e82e26d400c1ac433dd104fc51f2c9b751f0704261a"} Jan 29 06:35:20 crc kubenswrapper[5017]: I0129 06:35:20.403459 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"e8c3c974d0c7f106920a7937ae639897c03f294695e05928a29799663cff22f8"} Jan 29 06:35:20 crc kubenswrapper[5017]: I0129 06:35:20.404160 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:20 crc kubenswrapper[5017]: I0129 06:35:20.404209 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 
06:35:20 crc kubenswrapper[5017]: I0129 06:35:20.404226 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:20 crc kubenswrapper[5017]: I0129 06:35:20.404391 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:20 crc kubenswrapper[5017]: I0129 06:35:20.404429 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:20 crc kubenswrapper[5017]: I0129 06:35:20.404441 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:20 crc kubenswrapper[5017]: I0129 06:35:20.700635 5017 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 06:35:20 crc kubenswrapper[5017]: I0129 06:35:20.702725 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:20 crc kubenswrapper[5017]: I0129 06:35:20.702778 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:20 crc kubenswrapper[5017]: I0129 06:35:20.702789 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:20 crc kubenswrapper[5017]: I0129 06:35:20.702825 5017 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 29 06:35:20 crc kubenswrapper[5017]: I0129 06:35:20.966727 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 06:35:20 crc kubenswrapper[5017]: I0129 06:35:20.967152 5017 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 06:35:20 crc kubenswrapper[5017]: I0129 06:35:20.969034 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:20 crc kubenswrapper[5017]: I0129 06:35:20.969142 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:20 crc kubenswrapper[5017]: I0129 06:35:20.969167 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:21 crc kubenswrapper[5017]: I0129 06:35:21.260629 5017 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 00:28:17.768207918 +0000 UTC Jan 29 06:35:21 crc kubenswrapper[5017]: I0129 06:35:21.405788 5017 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 06:35:21 crc kubenswrapper[5017]: I0129 06:35:21.405875 5017 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 06:35:21 crc kubenswrapper[5017]: I0129 06:35:21.405932 5017 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 06:35:21 crc kubenswrapper[5017]: I0129 06:35:21.407359 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:21 crc kubenswrapper[5017]: I0129 06:35:21.407420 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:21 crc kubenswrapper[5017]: I0129 06:35:21.407440 5017 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:21 crc kubenswrapper[5017]: I0129 06:35:21.408835 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:21 crc kubenswrapper[5017]: I0129 06:35:21.408877 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:21 crc kubenswrapper[5017]: I0129 06:35:21.408891 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:22 crc kubenswrapper[5017]: I0129 06:35:22.260852 5017 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 23:10:49.925919211 +0000 UTC Jan 29 06:35:23 crc kubenswrapper[5017]: I0129 06:35:23.261910 5017 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 15:07:32.873734963 +0000 UTC Jan 29 06:35:23 crc kubenswrapper[5017]: I0129 06:35:23.315319 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 06:35:23 crc kubenswrapper[5017]: I0129 06:35:23.315643 5017 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 06:35:23 crc kubenswrapper[5017]: I0129 06:35:23.318421 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:23 crc kubenswrapper[5017]: I0129 06:35:23.318498 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:23 crc kubenswrapper[5017]: I0129 06:35:23.318520 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:23 crc kubenswrapper[5017]: I0129 06:35:23.613664 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 06:35:23 crc kubenswrapper[5017]: I0129 06:35:23.613875 5017 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 06:35:23 crc kubenswrapper[5017]: I0129 06:35:23.615732 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:23 crc kubenswrapper[5017]: I0129 06:35:23.615780 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:23 crc kubenswrapper[5017]: I0129 06:35:23.615797 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:24 crc kubenswrapper[5017]: I0129 06:35:24.263206 5017 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 02:12:20.308650012 +0000 UTC Jan 29 06:35:24 crc kubenswrapper[5017]: E0129 06:35:24.392076 5017 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 29 06:35:25 crc kubenswrapper[5017]: I0129 06:35:25.264164 5017 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 13:29:15.238131988 +0000 UTC Jan 29 06:35:25 crc kubenswrapper[5017]: I0129 
06:35:25.299519 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 29 06:35:25 crc kubenswrapper[5017]: I0129 06:35:25.299728 5017 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 06:35:25 crc kubenswrapper[5017]: I0129 06:35:25.300766 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:25 crc kubenswrapper[5017]: I0129 06:35:25.300793 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:25 crc kubenswrapper[5017]: I0129 06:35:25.300801 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:25 crc kubenswrapper[5017]: I0129 06:35:25.400216 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 06:35:25 crc kubenswrapper[5017]: I0129 06:35:25.400519 5017 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 06:35:25 crc kubenswrapper[5017]: I0129 06:35:25.402521 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:25 crc kubenswrapper[5017]: I0129 06:35:25.402567 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:25 crc kubenswrapper[5017]: I0129 06:35:25.402578 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:25 crc kubenswrapper[5017]: I0129 06:35:25.508464 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 06:35:25 crc kubenswrapper[5017]: I0129 06:35:25.508669 5017 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 06:35:25 crc kubenswrapper[5017]: I0129 06:35:25.512730 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:25 crc kubenswrapper[5017]: I0129 06:35:25.512778 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:25 crc kubenswrapper[5017]: I0129 06:35:25.512801 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:25 crc kubenswrapper[5017]: I0129 06:35:25.515915 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 06:35:26 crc kubenswrapper[5017]: I0129 06:35:26.264575 5017 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 21:41:02.13697657 +0000 UTC Jan 29 06:35:26 crc kubenswrapper[5017]: I0129 06:35:26.419875 5017 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 06:35:26 crc kubenswrapper[5017]: I0129 06:35:26.421448 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:26 crc kubenswrapper[5017]: I0129 06:35:26.421587 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:26 crc 
kubenswrapper[5017]: I0129 06:35:26.421717 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:35:26 crc kubenswrapper[5017]: I0129 06:35:26.425249 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 29 06:35:27 crc kubenswrapper[5017]: I0129 06:35:27.265441 5017 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 07:49:26.954131238 +0000 UTC
Jan 29 06:35:27 crc kubenswrapper[5017]: I0129 06:35:27.423192 5017 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 06:35:27 crc kubenswrapper[5017]: I0129 06:35:27.424646 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:35:27 crc kubenswrapper[5017]: I0129 06:35:27.424694 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:35:27 crc kubenswrapper[5017]: I0129 06:35:27.424706 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:35:28 crc kubenswrapper[5017]: I0129 06:35:28.252234 5017 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout
Jan 29 06:35:28 crc kubenswrapper[5017]: I0129 06:35:28.265574 5017 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 03:06:52.047698279 +0000 UTC
Jan 29 06:35:28 crc kubenswrapper[5017]: I0129 06:35:28.400310 5017 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 29 06:35:28 crc kubenswrapper[5017]: I0129 06:35:28.400429 5017 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 29 06:35:28 crc kubenswrapper[5017]: W0129 06:35:28.497869 5017 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout
Jan 29 06:35:28 crc kubenswrapper[5017]: I0129 06:35:28.498397 5017 trace.go:236] Trace[2108902142]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (29-Jan-2026 06:35:18.493) (total time: 10004ms):
Jan 29 06:35:28 crc kubenswrapper[5017]: Trace[2108902142]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10003ms (06:35:28.497)
Jan 29 06:35:28 crc kubenswrapper[5017]: Trace[2108902142]: [10.004436768s] [10.004436768s] END
Jan 29 06:35:28 crc kubenswrapper[5017]: E0129 06:35:28.498440 5017 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
Jan 29 06:35:28 crc kubenswrapper[5017]: I0129 06:35:28.564625 5017 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Jan 29 06:35:28 crc kubenswrapper[5017]: I0129 06:35:28.564703 5017 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Jan 29 06:35:28 crc kubenswrapper[5017]: I0129 06:35:28.568829 5017 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Jan 29 06:35:28 crc kubenswrapper[5017]: I0129 06:35:28.568927 5017 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Jan 29 06:35:29 crc kubenswrapper[5017]: I0129 06:35:29.032936 5017 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Jan 29 06:35:29 crc kubenswrapper[5017]: [+]log ok
Jan 29 06:35:29 crc kubenswrapper[5017]: [+]etcd ok
Jan 29 06:35:29 crc kubenswrapper[5017]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Jan 29 06:35:29 crc kubenswrapper[5017]: [+]poststarthook/openshift.io-api-request-count-filter ok
Jan 29 06:35:29 crc kubenswrapper[5017]: [+]poststarthook/openshift.io-startkubeinformers ok
Jan 29 06:35:29 crc kubenswrapper[5017]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok
Jan 29 06:35:29 crc kubenswrapper[5017]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok
Jan 29 06:35:29 crc kubenswrapper[5017]: [+]poststarthook/start-apiserver-admission-initializer ok
Jan 29 06:35:29 crc kubenswrapper[5017]: [+]poststarthook/generic-apiserver-start-informers ok
Jan 29 06:35:29 crc kubenswrapper[5017]: [+]poststarthook/priority-and-fairness-config-consumer ok
Jan 29 06:35:29 crc kubenswrapper[5017]: [+]poststarthook/priority-and-fairness-filter ok
Jan 29 06:35:29 crc kubenswrapper[5017]: [+]poststarthook/storage-object-count-tracker-hook ok
Jan 29 06:35:29 crc kubenswrapper[5017]: [+]poststarthook/start-apiextensions-informers ok
Jan 29 06:35:29 crc kubenswrapper[5017]: [+]poststarthook/start-apiextensions-controllers ok
Jan 29 06:35:29 crc kubenswrapper[5017]: [+]poststarthook/crd-informer-synced ok
Jan 29 06:35:29 crc kubenswrapper[5017]: [+]poststarthook/start-system-namespaces-controller ok
Jan 29 06:35:29 crc kubenswrapper[5017]: [+]poststarthook/start-cluster-authentication-info-controller ok
Jan 29 06:35:29 crc kubenswrapper[5017]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok
Jan 29 06:35:29 crc kubenswrapper[5017]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
Jan 29 06:35:29 crc kubenswrapper[5017]: [+]poststarthook/start-legacy-token-tracking-controller ok
Jan 29 06:35:29 crc kubenswrapper[5017]: [+]poststarthook/start-service-ip-repair-controllers ok
Jan 29 06:35:29 crc kubenswrapper[5017]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld
Jan 29 06:35:29 crc kubenswrapper[5017]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
Jan 29 06:35:29 crc kubenswrapper[5017]: [+]poststarthook/priority-and-fairness-config-producer ok
Jan 29 06:35:29 crc kubenswrapper[5017]: [+]poststarthook/bootstrap-controller ok
Jan 29 06:35:29 crc kubenswrapper[5017]: [+]poststarthook/aggregator-reload-proxy-client-cert ok
Jan 29 06:35:29 crc kubenswrapper[5017]: [+]poststarthook/start-kube-aggregator-informers ok
Jan 29 06:35:29 crc kubenswrapper[5017]: [+]poststarthook/apiservice-status-local-available-controller ok
Jan 29 06:35:29 crc kubenswrapper[5017]: [+]poststarthook/apiservice-status-remote-available-controller ok
Jan 29 06:35:29 crc kubenswrapper[5017]: [+]poststarthook/apiservice-registration-controller ok
Jan 29 06:35:29 crc kubenswrapper[5017]: [+]poststarthook/apiservice-wait-for-first-sync ok
Jan 29 06:35:29 crc kubenswrapper[5017]: [+]poststarthook/apiservice-discovery-controller ok
Jan 29 06:35:29 crc kubenswrapper[5017]: [+]poststarthook/kube-apiserver-autoregistration ok
Jan 29 06:35:29 crc kubenswrapper[5017]: [+]autoregister-completion ok
Jan 29 06:35:29 crc kubenswrapper[5017]: [+]poststarthook/apiservice-openapi-controller ok
Jan 29 06:35:29 crc kubenswrapper[5017]: [+]poststarthook/apiservice-openapiv3-controller ok
Jan 29 06:35:29 crc kubenswrapper[5017]: livez check failed
Jan 29 06:35:29 crc kubenswrapper[5017]: I0129 06:35:29.033093 5017 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 29 06:35:29 crc kubenswrapper[5017]: I0129 06:35:29.266740 5017 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 05:02:36.65311693 +0000 UTC
Jan 29 06:35:29 crc kubenswrapper[5017]: I0129 06:35:29.865442 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc"
Jan 29 06:35:29 crc kubenswrapper[5017]: I0129 06:35:29.865878 5017 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 06:35:29 crc kubenswrapper[5017]: I0129 06:35:29.867536 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:35:29 crc kubenswrapper[5017]: I0129 06:35:29.867602 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:35:29 crc kubenswrapper[5017]: I0129 06:35:29.867620 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:35:29 crc kubenswrapper[5017]: I0129 06:35:29.907187 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc"
Jan 29 06:35:30 crc kubenswrapper[5017]: I0129 06:35:30.267663 5017 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 23:47:07.097320762 +0000 UTC
Jan 29 06:35:30 crc kubenswrapper[5017]: I0129 06:35:30.317893 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc"
Jan 29 06:35:30 crc kubenswrapper[5017]: I0129 06:35:30.432447 5017 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 06:35:30 crc kubenswrapper[5017]: I0129 06:35:30.433684 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:35:30 crc kubenswrapper[5017]: I0129 06:35:30.433747 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:35:30 crc kubenswrapper[5017]: I0129 06:35:30.433760 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:35:31 crc kubenswrapper[5017]: I0129 06:35:31.268741 5017 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 01:07:11.127175088 +0000 UTC
Jan 29 06:35:31 crc kubenswrapper[5017]: I0129 06:35:31.435916 5017 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 06:35:31 crc kubenswrapper[5017]: I0129 06:35:31.437247 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:35:31 crc kubenswrapper[5017]: I0129 06:35:31.437300 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:35:31 crc kubenswrapper[5017]: I0129 06:35:31.437313 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:35:32 crc kubenswrapper[5017]: I0129 06:35:32.269165 5017 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 12:29:42.947626449 +0000 UTC
Jan 29 06:35:33 crc kubenswrapper[5017]: I0129 06:35:33.269871 5017 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-06 22:37:17.020414106 +0000 UTC
Jan 29 06:35:33 crc kubenswrapper[5017]: E0129 06:35:33.555065 5017 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s"
Jan 29 06:35:33 crc kubenswrapper[5017]: I0129 06:35:33.558230 5017 trace.go:236] Trace[310789884]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (29-Jan-2026 06:35:22.818) (total time: 10739ms):
Jan 29 06:35:33 crc kubenswrapper[5017]: Trace[310789884]: ---"Objects listed" error: 10739ms (06:35:33.558)
Jan 29 06:35:33 crc kubenswrapper[5017]: Trace[310789884]: [10.739804464s] [10.739804464s] END
Jan 29 06:35:33 crc kubenswrapper[5017]: I0129 06:35:33.558262 5017 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Jan 29 06:35:33 crc kubenswrapper[5017]: E0129 06:35:33.558622 5017 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc"
Jan 29 06:35:33 crc kubenswrapper[5017]: I0129 06:35:33.559894 5017 trace.go:236] Trace[1702333145]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (29-Jan-2026 06:35:21.245) (total time: 12314ms):
Jan 29 06:35:33 crc kubenswrapper[5017]: Trace[1702333145]: ---"Objects listed" error: 12314ms (06:35:33.559)
Jan 29 06:35:33 crc kubenswrapper[5017]: Trace[1702333145]: [12.314217897s] [12.314217897s] END
Jan 29 06:35:33 crc kubenswrapper[5017]: I0129 06:35:33.559935 5017 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Jan 29 06:35:33 crc kubenswrapper[5017]: I0129 06:35:33.560570 5017 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Jan 29 06:35:33 crc kubenswrapper[5017]: I0129 06:35:33.561662 5017 trace.go:236] Trace[922049910]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (29-Jan-2026 06:35:20.655) (total time: 12906ms):
Jan 29 06:35:33 crc kubenswrapper[5017]: Trace[922049910]: ---"Objects listed" error: 12906ms (06:35:33.561)
Jan 29 06:35:33 crc kubenswrapper[5017]: Trace[922049910]: [12.906210728s] [12.906210728s] END
Jan 29 06:35:33 crc kubenswrapper[5017]: I0129 06:35:33.561687 5017 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Jan 29 06:35:33 crc kubenswrapper[5017]: I0129 06:35:33.562807 5017 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Jan 29 06:35:33 crc kubenswrapper[5017]: I0129 06:35:33.580352 5017 csr.go:261] certificate signing request csr-mm2cd is approved, waiting to be issued
Jan 29 06:35:33 crc kubenswrapper[5017]: I0129 06:35:33.591146 5017 csr.go:257] certificate signing request csr-mm2cd is issued
Jan 29 06:35:33 crc kubenswrapper[5017]: I0129 06:35:33.596453 5017 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:60646->192.168.126.11:17697: read: connection reset by peer" start-of-body=
Jan 29 06:35:33 crc kubenswrapper[5017]: I0129 06:35:33.596518 5017 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:60636->192.168.126.11:17697: read: connection reset by peer" start-of-body=
Jan 29 06:35:33 crc kubenswrapper[5017]: I0129 06:35:33.596533 5017 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:60646->192.168.126.11:17697: read: connection reset by peer"
Jan 29 06:35:33 crc kubenswrapper[5017]: I0129 06:35:33.596590 5017 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:60636->192.168.126.11:17697: read: connection reset by peer"
Jan 29 06:35:33 crc kubenswrapper[5017]: I0129 06:35:33.840807 5017 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.032599 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.033488 5017 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.033604 5017 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.036937 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.093020 5017 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Jan 29 06:35:34 crc kubenswrapper[5017]: W0129 06:35:34.093270 5017 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Jan 29 06:35:34 crc kubenswrapper[5017]: W0129 06:35:34.093279 5017 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Jan 29 06:35:34 crc kubenswrapper[5017]: W0129 06:35:34.093281 5017 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Jan 29 06:35:34 crc kubenswrapper[5017]: W0129 06:35:34.093382 5017 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-29 06:35:14.81099843 +0000 UTC m=+1.185446040,LastTimestamp:2026-01-29 06:35:14.81099843 +0000 UTC m=+1.185446040,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.244600 5017 apiserver.go:52] "Watching apiserver" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.250418 5017 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.250978 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-kube-apiserver/kube-apiserver-crc","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf"] Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.251427 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.251517 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.251575 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 06:35:34 crc kubenswrapper[5017]: E0129 06:35:34.251613 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 06:35:34 crc kubenswrapper[5017]: E0129 06:35:34.251725 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.251996 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.252009 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.252284 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 06:35:34 crc kubenswrapper[5017]: E0129 06:35:34.252382 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.253459 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.253461 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.253555 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.253711 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.253968 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.255293 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.255468 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.255593 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.255767 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.257888 5017 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.265367 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.265430 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.265456 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.265480 5017 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.265498 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.265519 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.265535 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.265554 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.265571 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.265588 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.265632 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.265656 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.265678 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 06:35:34 crc 
Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.265693 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.265708 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.265724 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.265761 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.265784 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.265770 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.265802 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.265824 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.265843 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.265859 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") "
Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.265901 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") "
Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.265918 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.265917 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.265943 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.266071 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.266107 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.266137 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.266160 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.266180 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.266208 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.266231 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.266253 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") "
\"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.266323 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.266345 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.266370 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.266392 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.266418 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.266439 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.266460 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.266480 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.266484 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.266504 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.266536 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.266555 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.266577 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.266602 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.266629 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.266649 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.266657 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.266616 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.266697 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.266730 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.266978 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.266994 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.266998 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.267314 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.267325 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.267381 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.267597 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.267737 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.267787 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.267822 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.267921 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.266669 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.267987 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.268013 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.268037 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.268064 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.268094 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.268117 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.268142 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.268164 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.268218 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.268244 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.268263 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.268282 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.268309 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.268332 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.268356 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.268379 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.268485 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.268527 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 
Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.268557 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.268586 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.268606 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") "
Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.268624 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.268644 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.268661 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.268680 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.268700 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.269231 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.269261 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.269288 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.269311 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.269335 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.269361 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.269462 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") "
Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.269485 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.269505 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.269526 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.269549 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") "
Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.269566 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.269587 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.269608 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.269634 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.269661 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.269680 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.269698 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.269719 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.269738 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.269779 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") "
Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.269797 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.269817 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") "
Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.269838 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.269855 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.269873 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.269890 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") "
Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.269909 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") "
Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.269927 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.269999 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.270020 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.270036 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.270053 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.270075 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.270092 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.270108 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.270128 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.270146 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.270167 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.270195 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.270226 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.270253 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: 
\"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.270273 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.270291 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.270308 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.270327 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.270347 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.270364 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.270383 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.270403 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.270422 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.270440 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: 
\"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.270460 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.270477 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.270496 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.270512 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.270528 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.270548 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.270566 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.270590 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.270612 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.270630 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" 
(UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.270653 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.270672 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.270696 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.270716 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.270735 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.270753 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.270771 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.270790 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.270810 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.270827 5017 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.270845 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.270864 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.270882 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.270899 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.270915 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.270933 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.270975 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.270996 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.271015 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 06:35:34 
crc kubenswrapper[5017]: I0129 06:35:34.271034 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.271051 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.271067 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.271082 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.271099 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.271119 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.271136 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.271153 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.271172 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.271193 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: 
\"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.271215 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.271234 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.271254 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.271275 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.271297 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.271318 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.271339 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.271359 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.271380 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.271398 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: 
\"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.271417 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.271451 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.271471 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.271487 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.271510 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.271530 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.271550 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.271570 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.271594 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.271623 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.271641 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.271657 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.271677 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.271697 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.271715 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.271733 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.271749 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.271768 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.271821 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 06:35:34 crc 
kubenswrapper[5017]: I0129 06:35:34.271852 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.271877 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.271902 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.271924 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.271943 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.271988 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.272010 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.272033 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.272051 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: 
\"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.272071 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.272094 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.272114 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.272138 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.272219 5017 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.272232 5017 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.272243 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.272253 5017 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.272264 5017 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.272274 5017 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc 
kubenswrapper[5017]: I0129 06:35:34.272283 5017 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.272294 5017 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.272304 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.272315 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.272326 5017 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.272335 5017 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.272347 5017 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.272359 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.272369 5017 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.272379 5017 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.272388 5017 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.272399 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.268061 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). 
InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.268070 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.268273 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.268669 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.268788 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.269193 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.269197 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.269743 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.270106 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.270439 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.270633 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.270649 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.270670 5017 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 11:23:55.898021785 +0000 UTC Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.270804 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.274850 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.271226 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.271310 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.271381 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.271502 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.271766 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.271774 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.271782 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.272007 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.272281 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.272304 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.272444 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.273021 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.273645 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.273656 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.273670 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.274122 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.274139 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.274138 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.274242 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.274255 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.274488 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.274612 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.274642 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.274975 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.275202 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.275072 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.275238 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.275412 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.275612 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.275638 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.275663 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.275731 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.275685 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: E0129 06:35:34.275846 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 06:35:34.775817456 +0000 UTC m=+21.150265086 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.275937 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.276048 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.275653 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.276481 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.276491 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.277105 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.277726 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.278273 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.278328 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.278481 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.278541 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.278596 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.278705 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.278865 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.278997 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.280009 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.280054 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.280183 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.280440 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.280872 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.281911 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.282107 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.282141 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.282179 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.282298 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.282425 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.282627 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.282671 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). 
InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.282773 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.282747 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.283504 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.283575 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.283145 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.284580 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.284612 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.283888 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.284136 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.284309 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.284325 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.284359 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.284746 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.284407 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.284778 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.284337 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.284531 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.284554 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.284530 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.284854 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.284859 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.284873 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.285105 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). 
InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.285145 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.285160 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.285456 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.285390 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.285487 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.285579 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.285720 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.285917 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.286049 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.286073 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.286148 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.286245 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.287308 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.287450 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.287648 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.288075 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.288207 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.288389 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.289149 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.289209 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.289340 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.289472 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.289505 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.289533 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.289558 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.289601 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.289787 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.290807 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.290906 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.291397 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.291453 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.291484 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.291543 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.292030 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.292789 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.293344 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.292395 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.293695 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). 
InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.293999 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.294240 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.294332 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.294551 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.294925 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.295013 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.295090 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.295477 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.295715 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.295926 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.296074 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.297391 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.298209 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.298542 5017 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.293875 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.298628 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.298745 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.299033 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.299069 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.299265 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.299530 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.299809 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.300107 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.300240 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.300299 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: E0129 06:35:34.300391 5017 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 06:35:34 crc kubenswrapper[5017]: E0129 06:35:34.300728 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 06:35:34.800706816 +0000 UTC m=+21.175154426 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.300785 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.301122 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.301387 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). 
InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.301403 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.301632 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.301633 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.301701 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.301789 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 06:35:34 crc kubenswrapper[5017]: E0129 06:35:34.301850 5017 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 06:35:34 crc kubenswrapper[5017]: E0129 06:35:34.301918 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 06:35:34.801898125 +0000 UTC m=+21.176345735 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.302018 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.302127 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.302262 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.302478 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.302550 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.302690 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.303302 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.303974 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.309654 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.314926 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.316518 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 06:35:34 crc kubenswrapper[5017]: E0129 06:35:34.324356 5017 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 06:35:34 crc kubenswrapper[5017]: E0129 06:35:34.324419 5017 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 06:35:34 crc kubenswrapper[5017]: E0129 06:35:34.324473 5017 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 06:35:34 crc kubenswrapper[5017]: E0129 06:35:34.324576 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 06:35:34.824547701 +0000 UTC m=+21.198995311 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.326265 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.326688 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 06:35:34 crc kubenswrapper[5017]: E0129 06:35:34.328613 5017 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 06:35:34 crc kubenswrapper[5017]: E0129 06:35:34.328643 5017 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 06:35:34 crc kubenswrapper[5017]: E0129 06:35:34.328660 5017 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 06:35:34 crc kubenswrapper[5017]: E0129 06:35:34.328721 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2026-01-29 06:35:34.828700731 +0000 UTC m=+21.203148341 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.331760 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.333473 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.335338 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.345904 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.346816 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.348501 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.348996 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.349645 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.351751 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.352567 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.353996 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.354937 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.356885 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.358351 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.359395 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.359574 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.360194 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.361530 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.362610 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.363626 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.364244 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.364889 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.365387 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.366011 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.367041 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.367501 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.368161 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.369715 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.370499 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.371424 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.371957 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.373264 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 
06:35:34.373361 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.373305 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.373484 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.373571 5017 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.373590 5017 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.373604 5017 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.373615 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.373625 5017 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.373635 5017 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.373644 5017 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.373654 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.373667 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: 
\"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.373679 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.373690 5017 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.373701 5017 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.373712 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.373721 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.373731 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.373742 5017 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.373752 5017 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.373762 5017 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.373772 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.373782 5017 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.373792 5017 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.373802 5017 reconciler_common.go:293] "Volume 
detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.373811 5017 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.373820 5017 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.373829 5017 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.373839 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.373851 5017 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.373864 5017 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.373877 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.373909 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.373921 5017 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.373931 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.373940 5017 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.373951 5017 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.373980 5017 
reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.373989 5017 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.374006 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.374017 5017 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.374027 5017 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.374036 5017 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.374045 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.374055 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.374065 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.374074 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.374084 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.374094 5017 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.374104 5017 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 29 
06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.374113 5017 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.374124 5017 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.374139 5017 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.374149 5017 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.374158 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.374173 5017 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.374182 5017 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.374191 5017 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.374202 5017 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.374212 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.374220 5017 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.374230 5017 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.374241 5017 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.374250 5017 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.374259 5017 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.374269 5017 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.374278 5017 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.374287 5017 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.374296 5017 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.374335 5017 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.374346 5017 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.374357 5017 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.374367 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.374377 5017 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.374386 5017 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.374395 5017 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.374404 5017 reconciler_common.go:293] "Volume 
detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.374416 5017 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.374425 5017 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.374435 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.374445 5017 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.374454 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.374464 5017 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.374475 5017 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.374487 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.374496 5017 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.374507 5017 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.374516 5017 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.374526 5017 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 
06:35:34.374536 5017 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.374547 5017 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.374558 5017 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.374577 5017 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.374586 5017 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.374595 5017 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.374606 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.374616 5017 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.374626 5017 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.374634 5017 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.374643 5017 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.374652 5017 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.374661 5017 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.374669 5017 
reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.374679 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.374688 5017 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.374697 5017 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.374705 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.374714 5017 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.374723 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.374734 5017 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.374744 5017 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.374758 5017 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.374768 5017 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.374778 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.374787 5017 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.374796 5017 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.374805 5017 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.374814 5017 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.374823 5017 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.374833 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.374842 5017 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.374852 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.374861 5017 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.374870 5017 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.374881 5017 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.374891 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.374902 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.374912 5017 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.374923 5017 
reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.374934 5017 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.374944 5017 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.374984 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.375015 5017 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.375027 5017 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.375036 5017 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.375046 5017 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.375098 5017 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.375109 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.375119 5017 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.375128 5017 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.375137 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.375146 5017 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.375154 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.375163 5017 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.375173 5017 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.375182 5017 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.375191 5017 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.375200 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.375209 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.375218 5017 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.375227 5017 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.375236 5017 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.375244 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.375253 5017 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.375263 5017 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.375272 5017 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.375283 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.375291 5017 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.375300 5017 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.375309 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.375318 5017 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.375326 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.375335 5017 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.375344 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.375353 5017 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.375362 5017 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.375373 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.375383 5017 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.375391 5017 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.375401 5017 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.375410 5017 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.375419 5017 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.375429 5017 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.375437 5017 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.375446 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.375456 5017 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.375465 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.375475 5017 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.375484 5017 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.376662 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 06:35:34 crc 
kubenswrapper[5017]: I0129 06:35:34.377325 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.378380 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c67cd79-8431-401b-8f03-9387813b30ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe28ec8d64ea50ecca5d4531440159b5277aff7de80b18813bb91f5c32923326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a82e8858b15c840a63737058d3af382c1a6469d7d1d3ba6e44759f33e55f2543\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a303c0532bb4a555700875ed27e56205d709658417a72f47868fb7bc8291a77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-ap
iserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27dd6068095b58038216d1837a6b7966c514c7d8a9db5d144b043d918b757394\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c256463618132e5305eb6fed3ef923e6a3c200bbc4b917e7f678e1c78af05084\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f97561ea3b9baa3440e5148060c80d57e9cf17837e7799b057f618d9084e8a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f97561ea3b9baa3440e5148060c80d57e9cf17837e7799b057f618d9084e8a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.379870 5017 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.381484 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.382118 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.382778 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.383734 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.384308 5017 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.384424 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.386622 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.387215 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.387703 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.390915 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.392830 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.393827 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.394574 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.395739 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.396886 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.397393 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.398097 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.399180 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.400364 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" 
path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.401008 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.402675 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.403732 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.404030 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.405012 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.405532 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.406439 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.407143 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.407710 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.408742 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.409282 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.416159 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.427398 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.440376 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.444883 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"a002ba4568db95e910f967fe9c047bcb392bd7fad013477882aabba2705a13dd"}
Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.447455 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.449280 5017 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="27dd6068095b58038216d1837a6b7966c514c7d8a9db5d144b043d918b757394" exitCode=255
Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.449316 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"27dd6068095b58038216d1837a6b7966c514c7d8a9db5d144b043d918b757394"}
Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.452925 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 29 06:35:34 crc kubenswrapper[5017]: E0129 06:35:34.455080 5017 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-apiserver-crc\" already exists" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.455295 5017 scope.go:117] "RemoveContainer" containerID="27dd6068095b58038216d1837a6b7966c514c7d8a9db5d144b043d918b757394"
Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.470338 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c67cd79-8431-401b-8f03-9387813b30ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe28ec8d64ea50ecca5d4531440159b5277aff7de80b18813bb91f5c32923326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a82e8858b15c840a63737058d3af382c1a6469d7d1d3ba6e44759f33e55f2543\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a303c0532bb4a555700875ed27e56205d709658417a72f47868fb7bc8291a77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27dd6068095b58038216d1837a6b7966c514c7d8a9db5d144b043d918b757394\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c256463618132e5305eb6fed3ef923e6a3c200bbc4b917e7f678e1c78af05084\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f97561ea3b9baa3440e5148060c80d57e9cf17837e7799b057f618d9084e8a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f97561ea3b9baa3440e5148060c80d57e9cf17837e7799b057f618d9084e8a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.482892 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.495017 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.515653 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.531649 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.552232 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.567886 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.569762 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.584869 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c67cd79-8431-401b-8f03-9387813b30ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe28ec8d64ea50ecca5d4531440159b5277aff7de80b18813bb91f5c32923326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a82e8858b15c840a63737058d3af382c1a6469d7d1d3ba6e44759f33e55f2543\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a303c0532bb4a555700875ed27e56205d709658417a72f47868fb7bc8291a77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27dd6068095b58038216d1837a6b7966c514c7d8a9db5d144b043d918b757394\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27dd6068095b58038216d1837a6b7966c514c7d8a9db5d144b043d918b757394\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T06:35:33Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 06:35:28.003940 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 06:35:28.004898 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1201671994/tls.crt::/tmp/serving-cert-1201671994/tls.key\\\\\\\"\\\\nI0129 06:35:33.575231 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 06:35:33.579536 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 06:35:33.579564 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 06:35:33.579587 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 06:35:33.579594 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 06:35:33.587038 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 06:35:33.587075 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 06:35:33.587080 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 06:35:33.587084 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 06:35:33.587087 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 06:35:33.587091 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 06:35:33.587094 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 06:35:33.587049 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 06:35:33.588897 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c256463618132e5305eb6fed3ef923e6a3c200bbc4b917e7f678e1c78af05084\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f97561ea3b9baa3440e5148060c80d57e9cf17837e7799b057f618d9084e8a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f97561ea3b9baa3440e5148060c80d57e9cf17837e7799b057f618d9084e8a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 29 06:35:34 crc kubenswrapper[5017]: W0129 06:35:34.588432 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-8bf4381031a4ba3e51287ce7a1b35aea9b03c5bb028d7e40184102e90fe7b677 WatchSource:0}: Error finding container 8bf4381031a4ba3e51287ce7a1b35aea9b03c5bb028d7e40184102e90fe7b677: Status 404 returned error can't find the container with id 8bf4381031a4ba3e51287ce7a1b35aea9b03c5bb028d7e40184102e90fe7b677
Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.592450 5017 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-29 06:30:33 +0000 UTC, rotation deadline is 2026-11-20 21:04:50.328521878 +0000 UTC
Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.592515 5017 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7094h29m15.736009688s for next certificate rotation
Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.597676 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.598091 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.612535 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 29 06:35:34 crc kubenswrapper[5017]: W0129 06:35:34.621079 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-d771d64a896bf5895de1c300a9c5c13f0b59de07f8df81598f82b89043b0d971 WatchSource:0}: Error finding container d771d64a896bf5895de1c300a9c5c13f0b59de07f8df81598f82b89043b0d971: Status 404 returned error can't find the container with id d771d64a896bf5895de1c300a9c5c13f0b59de07f8df81598f82b89043b0d971
Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.625398 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.637266 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.781030 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 06:35:34 crc kubenswrapper[5017]: E0129 06:35:34.781151 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 06:35:35.781127637 +0000 UTC m=+22.155575247 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.882545 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.882599 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.882630 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 06:35:34 crc kubenswrapper[5017]: I0129 06:35:34.882658 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 06:35:34 crc kubenswrapper[5017]: E0129 06:35:34.882746 5017 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 29 06:35:34 crc kubenswrapper[5017]: E0129 06:35:34.882764 5017 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 29 06:35:34 crc kubenswrapper[5017]: E0129 06:35:34.882783 5017 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 29 06:35:34 crc kubenswrapper[5017]: E0129 06:35:34.882795 5017 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 29 06:35:34 crc kubenswrapper[5017]: E0129 06:35:34.882796 5017 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 29 06:35:34 crc kubenswrapper[5017]: E0129 06:35:34.882841 5017 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 29 06:35:34 crc kubenswrapper[5017]: E0129 06:35:34.882856 5017 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 29 06:35:34 crc kubenswrapper[5017]: E0129 06:35:34.882797 5017 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 29 06:35:34 crc kubenswrapper[5017]: E0129 06:35:34.882809 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 06:35:35.882794128 +0000 UTC m=+22.257241738 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 29 06:35:34 crc kubenswrapper[5017]: E0129 06:35:34.883026 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 06:35:35.882997433 +0000 UTC m=+22.257445043 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 29 06:35:34 crc kubenswrapper[5017]: E0129 06:35:34.883048 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 06:35:35.883040244 +0000 UTC m=+22.257487854 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 29 06:35:34 crc kubenswrapper[5017]: E0129 06:35:34.883067 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 06:35:35.883060124 +0000 UTC m=+22.257507734 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.173881 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-vwppb"]
Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.174214 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-895pl"]
Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.174422 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-m2gbd"]
Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.174661 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-895pl"
Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.174700 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-vwppb"
Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.175263 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-qwq46"]
Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.175608 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-9jkcd"]
Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.176040 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-qwq46"
Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.176087 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-9jkcd"
Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.176224 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-m2gbd"
Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.178508 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.179791 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.179906 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.180066 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.180072 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.180092 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.179904 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6"
Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.179920 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.179934 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.180303 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.180403 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.180717 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.180893 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p"
Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.181268 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.183271 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.184605 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz"
Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.184797 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.184870 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7"
Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.185931 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/bb597cc2-fff7-4d90-a43e-958791d83324-serviceca\") pod \"node-ca-vwppb\" (UID: \"bb597cc2-fff7-4d90-a43e-958791d83324\") " pod="openshift-image-registry/node-ca-vwppb"
Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.186004 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/4036e581-21bf-4ea0-aaf5-84ab8a841888-cnibin\") pod \"multus-additional-cni-plugins-m2gbd\" (UID: \"4036e581-21bf-4ea0-aaf5-84ab8a841888\") " pod="openshift-multus/multus-additional-cni-plugins-m2gbd"
Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.186032 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/4036e581-21bf-4ea0-aaf5-84ab8a841888-os-release\") pod \"multus-additional-cni-plugins-m2gbd\" (UID: \"4036e581-21bf-4ea0-aaf5-84ab8a841888\") " pod="openshift-multus/multus-additional-cni-plugins-m2gbd"
Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.186060 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z89l9\" (UniqueName: \"kubernetes.io/projected/c4d3122b-b4b4-41ac-896a-566afdcda936-kube-api-access-z89l9\") pod \"node-resolver-qwq46\" (UID: \"c4d3122b-b4b4-41ac-896a-566afdcda936\") " pod="openshift-dns/node-resolver-qwq46"
Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.186089 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x84hz\" (UniqueName: \"kubernetes.io/projected/8ae056f0-e054-45da-9638-73074b7c8a3b-kube-api-access-x84hz\") pod \"multus-9jkcd\" (UID: \"8ae056f0-e054-45da-9638-73074b7c8a3b\") " pod="openshift-multus/multus-9jkcd"
Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.186115 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/8ae056f0-e054-45da-9638-73074b7c8a3b-cnibin\") pod \"multus-9jkcd\" (UID: \"8ae056f0-e054-45da-9638-73074b7c8a3b\") " pod="openshift-multus/multus-9jkcd"
Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.186141 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/8ae056f0-e054-45da-9638-73074b7c8a3b-multus-conf-dir\") pod \"multus-9jkcd\" (UID: \"8ae056f0-e054-45da-9638-73074b7c8a3b\") " pod="openshift-multus/multus-9jkcd"
Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.186166 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8ae056f0-e054-45da-9638-73074b7c8a3b-etc-kubernetes\") pod \"multus-9jkcd\" (UID: \"8ae056f0-e054-45da-9638-73074b7c8a3b\") " pod="openshift-multus/multus-9jkcd"
Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.186208 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8ae056f0-e054-45da-9638-73074b7c8a3b-system-cni-dir\") pod \"multus-9jkcd\" (UID: \"8ae056f0-e054-45da-9638-73074b7c8a3b\") " pod="openshift-multus/multus-9jkcd"
Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.186235 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/8ae056f0-e054-45da-9638-73074b7c8a3b-host-var-lib-cni-multus\") pod \"multus-9jkcd\" (UID: \"8ae056f0-e054-45da-9638-73074b7c8a3b\") " pod="openshift-multus/multus-9jkcd"
Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.186259 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/8ae056f0-e054-45da-9638-73074b7c8a3b-multus-daemon-config\") pod \"multus-9jkcd\" (UID: \"8ae056f0-e054-45da-9638-73074b7c8a3b\") " pod="openshift-multus/multus-9jkcd"
Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.186286 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvm87\" (UniqueName: \"kubernetes.io/projected/4036e581-21bf-4ea0-aaf5-84ab8a841888-kube-api-access-bvm87\") pod \"multus-additional-cni-plugins-m2gbd\" (UID: \"4036e581-21bf-4ea0-aaf5-84ab8a841888\") " pod="openshift-multus/multus-additional-cni-plugins-m2gbd"
Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.186313 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/c4d3122b-b4b4-41ac-896a-566afdcda936-hosts-file\") pod \"node-resolver-qwq46\" (UID: \"c4d3122b-b4b4-41ac-896a-566afdcda936\") " pod="openshift-dns/node-resolver-qwq46"
Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.186342 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/8ae056f0-e054-45da-9638-73074b7c8a3b-host-run-k8s-cni-cncf-io\") pod \"multus-9jkcd\" (UID: \"8ae056f0-e054-45da-9638-73074b7c8a3b\") " pod="openshift-multus/multus-9jkcd"
Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.186408 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8ae056f0-e054-45da-9638-73074b7c8a3b-host-var-lib-cni-bin\") pod \"multus-9jkcd\" (UID: \"8ae056f0-e054-45da-9638-73074b7c8a3b\") " pod="openshift-multus/multus-9jkcd"
Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.186479 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/8ae056f0-e054-45da-9638-73074b7c8a3b-host-run-multus-certs\") pod \"multus-9jkcd\" (UID: \"8ae056f0-e054-45da-9638-73074b7c8a3b\") " pod="openshift-multus/multus-9jkcd"
Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.186539 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qdt9\" (UniqueName: \"kubernetes.io/projected/2672ef63-7861-4c3d-a1b4-03cc9d18f8e2-kube-api-access-5qdt9\") pod \"machine-config-daemon-895pl\" (UID: \"2672ef63-7861-4c3d-a1b4-03cc9d18f8e2\") " pod="openshift-machine-config-operator/machine-config-daemon-895pl"
Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.186620 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/4036e581-21bf-4ea0-aaf5-84ab8a841888-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-m2gbd\" (UID: \"4036e581-21bf-4ea0-aaf5-84ab8a841888\") " pod="openshift-multus/multus-additional-cni-plugins-m2gbd"
Jan 29 06:35:35 crc kubenswrapper[5017]: 
I0129 06:35:35.186674 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/2672ef63-7861-4c3d-a1b4-03cc9d18f8e2-rootfs\") pod \"machine-config-daemon-895pl\" (UID: \"2672ef63-7861-4c3d-a1b4-03cc9d18f8e2\") " pod="openshift-machine-config-operator/machine-config-daemon-895pl" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.186702 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2672ef63-7861-4c3d-a1b4-03cc9d18f8e2-mcd-auth-proxy-config\") pod \"machine-config-daemon-895pl\" (UID: \"2672ef63-7861-4c3d-a1b4-03cc9d18f8e2\") " pod="openshift-machine-config-operator/machine-config-daemon-895pl" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.186738 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8ae056f0-e054-45da-9638-73074b7c8a3b-host-run-netns\") pod \"multus-9jkcd\" (UID: \"8ae056f0-e054-45da-9638-73074b7c8a3b\") " pod="openshift-multus/multus-9jkcd" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.186764 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bb597cc2-fff7-4d90-a43e-958791d83324-host\") pod \"node-ca-vwppb\" (UID: \"bb597cc2-fff7-4d90-a43e-958791d83324\") " pod="openshift-image-registry/node-ca-vwppb" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.186788 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/8ae056f0-e054-45da-9638-73074b7c8a3b-host-var-lib-kubelet\") pod \"multus-9jkcd\" (UID: \"8ae056f0-e054-45da-9638-73074b7c8a3b\") " pod="openshift-multus/multus-9jkcd" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.186895 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4036e581-21bf-4ea0-aaf5-84ab8a841888-system-cni-dir\") pod \"multus-additional-cni-plugins-m2gbd\" (UID: \"4036e581-21bf-4ea0-aaf5-84ab8a841888\") " pod="openshift-multus/multus-additional-cni-plugins-m2gbd" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.186931 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4036e581-21bf-4ea0-aaf5-84ab8a841888-cni-binary-copy\") pod \"multus-additional-cni-plugins-m2gbd\" (UID: \"4036e581-21bf-4ea0-aaf5-84ab8a841888\") " pod="openshift-multus/multus-additional-cni-plugins-m2gbd" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.186971 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/8ae056f0-e054-45da-9638-73074b7c8a3b-cni-binary-copy\") pod \"multus-9jkcd\" (UID: \"8ae056f0-e054-45da-9638-73074b7c8a3b\") " pod="openshift-multus/multus-9jkcd" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.187001 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g28w2\" (UniqueName: \"kubernetes.io/projected/bb597cc2-fff7-4d90-a43e-958791d83324-kube-api-access-g28w2\") pod \"node-ca-vwppb\" (UID: 
\"bb597cc2-fff7-4d90-a43e-958791d83324\") " pod="openshift-image-registry/node-ca-vwppb" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.187024 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/4036e581-21bf-4ea0-aaf5-84ab8a841888-tuning-conf-dir\") pod \"multus-additional-cni-plugins-m2gbd\" (UID: \"4036e581-21bf-4ea0-aaf5-84ab8a841888\") " pod="openshift-multus/multus-additional-cni-plugins-m2gbd" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.187049 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/8ae056f0-e054-45da-9638-73074b7c8a3b-os-release\") pod \"multus-9jkcd\" (UID: \"8ae056f0-e054-45da-9638-73074b7c8a3b\") " pod="openshift-multus/multus-9jkcd" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.187078 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/8ae056f0-e054-45da-9638-73074b7c8a3b-multus-socket-dir-parent\") pod \"multus-9jkcd\" (UID: \"8ae056f0-e054-45da-9638-73074b7c8a3b\") " pod="openshift-multus/multus-9jkcd" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.187102 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2672ef63-7861-4c3d-a1b4-03cc9d18f8e2-proxy-tls\") pod \"machine-config-daemon-895pl\" (UID: \"2672ef63-7861-4c3d-a1b4-03cc9d18f8e2\") " pod="openshift-machine-config-operator/machine-config-daemon-895pl" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.187126 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8ae056f0-e054-45da-9638-73074b7c8a3b-multus-cni-dir\") pod \"multus-9jkcd\" (UID: \"8ae056f0-e054-45da-9638-73074b7c8a3b\") " pod="openshift-multus/multus-9jkcd" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.187152 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/8ae056f0-e054-45da-9638-73074b7c8a3b-hostroot\") pod \"multus-9jkcd\" (UID: \"8ae056f0-e054-45da-9638-73074b7c8a3b\") " pod="openshift-multus/multus-9jkcd" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.189096 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.193498 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vwppb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb597cc2-fff7-4d90-a43e-958791d83324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g28w2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vwppb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.211602 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.223667 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.232956 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.242115 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-895pl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2672ef63-7861-4c3d-a1b4-03cc9d18f8e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5qdt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5qdt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-895pl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.253633 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c67cd79-8431-401b-8f03-9387813b30ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe28ec8d64ea50ecca5d4531440159b5277aff7de80b18813bb91f5c32923326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a82e8858b15c840a63737058d3af382c1a6469d7d1d3ba6e44759f33e55f2543\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a303c0532bb4a555700875ed27e56205d709658417a72f47868fb7bc8291a77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27dd6068095b58038216d1837a6b7966c514c7d8a9db5d144b043d918b757394\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27dd6068095b58038216d1837a6b7966c514c7d8a9db5d144b043d918b757394\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29
T06:35:33Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 06:35:28.003940 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 06:35:28.004898 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1201671994/tls.crt::/tmp/serving-cert-1201671994/tls.key\\\\\\\"\\\\nI0129 06:35:33.575231 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 06:35:33.579536 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 06:35:33.579564 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 06:35:33.579587 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 06:35:33.579594 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 06:35:33.587038 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 06:35:33.587075 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 06:35:33.587080 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 06:35:33.587084 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 06:35:33.587087 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 06:35:33.587091 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 06:35:33.587094 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 06:35:33.587049 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 06:35:33.588897 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c256463618132e5305eb6fed3ef923e6a3c200bbc4b917e7f678e1c78af05084\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f97561ea3b9baa3440e5148060c80d57e9cf17837e7799b057f618d9084e8a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f97561ea3b9baa3440e5148060c80d57e9cf17837e7799b057f618d9084e8a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.266261 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.275222 5017 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 11:39:24.902150391 +0000 UTC Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.282889 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.288284 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2672ef63-7861-4c3d-a1b4-03cc9d18f8e2-proxy-tls\") pod \"machine-config-daemon-895pl\" (UID: \"2672ef63-7861-4c3d-a1b4-03cc9d18f8e2\") " pod="openshift-machine-config-operator/machine-config-daemon-895pl" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.288439 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8ae056f0-e054-45da-9638-73074b7c8a3b-multus-cni-dir\") pod \"multus-9jkcd\" (UID: \"8ae056f0-e054-45da-9638-73074b7c8a3b\") " pod="openshift-multus/multus-9jkcd" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.288852 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8ae056f0-e054-45da-9638-73074b7c8a3b-multus-cni-dir\") pod \"multus-9jkcd\" (UID: \"8ae056f0-e054-45da-9638-73074b7c8a3b\") " pod="openshift-multus/multus-9jkcd" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.289010 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/8ae056f0-e054-45da-9638-73074b7c8a3b-hostroot\") pod \"multus-9jkcd\" (UID: \"8ae056f0-e054-45da-9638-73074b7c8a3b\") " pod="openshift-multus/multus-9jkcd" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.289106 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/8ae056f0-e054-45da-9638-73074b7c8a3b-hostroot\") pod \"multus-9jkcd\" (UID: \"8ae056f0-e054-45da-9638-73074b7c8a3b\") " pod="openshift-multus/multus-9jkcd" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.289195 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/bb597cc2-fff7-4d90-a43e-958791d83324-serviceca\") pod \"node-ca-vwppb\" (UID: \"bb597cc2-fff7-4d90-a43e-958791d83324\") " pod="openshift-image-registry/node-ca-vwppb" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.289288 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/4036e581-21bf-4ea0-aaf5-84ab8a841888-cnibin\") pod \"multus-additional-cni-plugins-m2gbd\" (UID: \"4036e581-21bf-4ea0-aaf5-84ab8a841888\") " pod="openshift-multus/multus-additional-cni-plugins-m2gbd" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.289333 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"os-release\" (UniqueName: \"kubernetes.io/host-path/4036e581-21bf-4ea0-aaf5-84ab8a841888-os-release\") pod \"multus-additional-cni-plugins-m2gbd\" (UID: \"4036e581-21bf-4ea0-aaf5-84ab8a841888\") " pod="openshift-multus/multus-additional-cni-plugins-m2gbd" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.289368 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z89l9\" (UniqueName: \"kubernetes.io/projected/c4d3122b-b4b4-41ac-896a-566afdcda936-kube-api-access-z89l9\") pod \"node-resolver-qwq46\" (UID: \"c4d3122b-b4b4-41ac-896a-566afdcda936\") " pod="openshift-dns/node-resolver-qwq46" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.289402 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x84hz\" (UniqueName: \"kubernetes.io/projected/8ae056f0-e054-45da-9638-73074b7c8a3b-kube-api-access-x84hz\") pod \"multus-9jkcd\" (UID: \"8ae056f0-e054-45da-9638-73074b7c8a3b\") " pod="openshift-multus/multus-9jkcd" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.289423 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/4036e581-21bf-4ea0-aaf5-84ab8a841888-cnibin\") pod \"multus-additional-cni-plugins-m2gbd\" (UID: \"4036e581-21bf-4ea0-aaf5-84ab8a841888\") " pod="openshift-multus/multus-additional-cni-plugins-m2gbd" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.289436 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/8ae056f0-e054-45da-9638-73074b7c8a3b-cnibin\") pod \"multus-9jkcd\" (UID: \"8ae056f0-e054-45da-9638-73074b7c8a3b\") " pod="openshift-multus/multus-9jkcd" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.289504 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/8ae056f0-e054-45da-9638-73074b7c8a3b-cnibin\") pod \"multus-9jkcd\" (UID: \"8ae056f0-e054-45da-9638-73074b7c8a3b\") " pod="openshift-multus/multus-9jkcd" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.289659 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/8ae056f0-e054-45da-9638-73074b7c8a3b-multus-conf-dir\") pod \"multus-9jkcd\" (UID: \"8ae056f0-e054-45da-9638-73074b7c8a3b\") " pod="openshift-multus/multus-9jkcd" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.289754 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8ae056f0-e054-45da-9638-73074b7c8a3b-system-cni-dir\") pod \"multus-9jkcd\" (UID: \"8ae056f0-e054-45da-9638-73074b7c8a3b\") " pod="openshift-multus/multus-9jkcd" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.289781 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/8ae056f0-e054-45da-9638-73074b7c8a3b-host-var-lib-cni-multus\") pod \"multus-9jkcd\" (UID: \"8ae056f0-e054-45da-9638-73074b7c8a3b\") " pod="openshift-multus/multus-9jkcd" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.289784 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/8ae056f0-e054-45da-9638-73074b7c8a3b-multus-conf-dir\") pod \"multus-9jkcd\" (UID: \"8ae056f0-e054-45da-9638-73074b7c8a3b\") " 
pod="openshift-multus/multus-9jkcd" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.289804 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/8ae056f0-e054-45da-9638-73074b7c8a3b-multus-daemon-config\") pod \"multus-9jkcd\" (UID: \"8ae056f0-e054-45da-9638-73074b7c8a3b\") " pod="openshift-multus/multus-9jkcd" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.289832 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8ae056f0-e054-45da-9638-73074b7c8a3b-etc-kubernetes\") pod \"multus-9jkcd\" (UID: \"8ae056f0-e054-45da-9638-73074b7c8a3b\") " pod="openshift-multus/multus-9jkcd" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.289848 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/8ae056f0-e054-45da-9638-73074b7c8a3b-host-var-lib-cni-multus\") pod \"multus-9jkcd\" (UID: \"8ae056f0-e054-45da-9638-73074b7c8a3b\") " pod="openshift-multus/multus-9jkcd" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.289863 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvm87\" (UniqueName: \"kubernetes.io/projected/4036e581-21bf-4ea0-aaf5-84ab8a841888-kube-api-access-bvm87\") pod \"multus-additional-cni-plugins-m2gbd\" (UID: \"4036e581-21bf-4ea0-aaf5-84ab8a841888\") " pod="openshift-multus/multus-additional-cni-plugins-m2gbd" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.289883 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8ae056f0-e054-45da-9638-73074b7c8a3b-etc-kubernetes\") pod \"multus-9jkcd\" (UID: \"8ae056f0-e054-45da-9638-73074b7c8a3b\") " pod="openshift-multus/multus-9jkcd" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.289778 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/4036e581-21bf-4ea0-aaf5-84ab8a841888-os-release\") pod \"multus-additional-cni-plugins-m2gbd\" (UID: \"4036e581-21bf-4ea0-aaf5-84ab8a841888\") " pod="openshift-multus/multus-additional-cni-plugins-m2gbd" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.289895 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/c4d3122b-b4b4-41ac-896a-566afdcda936-hosts-file\") pod \"node-resolver-qwq46\" (UID: \"c4d3122b-b4b4-41ac-896a-566afdcda936\") " pod="openshift-dns/node-resolver-qwq46" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.289846 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8ae056f0-e054-45da-9638-73074b7c8a3b-system-cni-dir\") pod \"multus-9jkcd\" (UID: \"8ae056f0-e054-45da-9638-73074b7c8a3b\") " pod="openshift-multus/multus-9jkcd" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.290003 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/c4d3122b-b4b4-41ac-896a-566afdcda936-hosts-file\") pod \"node-resolver-qwq46\" (UID: \"c4d3122b-b4b4-41ac-896a-566afdcda936\") " pod="openshift-dns/node-resolver-qwq46" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.290009 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/8ae056f0-e054-45da-9638-73074b7c8a3b-host-run-k8s-cni-cncf-io\") pod \"multus-9jkcd\" (UID: \"8ae056f0-e054-45da-9638-73074b7c8a3b\") " pod="openshift-multus/multus-9jkcd" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.290047 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8ae056f0-e054-45da-9638-73074b7c8a3b-host-var-lib-cni-bin\") pod \"multus-9jkcd\" (UID: \"8ae056f0-e054-45da-9638-73074b7c8a3b\") " pod="openshift-multus/multus-9jkcd" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.290065 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/8ae056f0-e054-45da-9638-73074b7c8a3b-host-run-k8s-cni-cncf-io\") pod \"multus-9jkcd\" (UID: \"8ae056f0-e054-45da-9638-73074b7c8a3b\") " pod="openshift-multus/multus-9jkcd" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.290094 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/8ae056f0-e054-45da-9638-73074b7c8a3b-host-run-multus-certs\") pod \"multus-9jkcd\" (UID: \"8ae056f0-e054-45da-9638-73074b7c8a3b\") " pod="openshift-multus/multus-9jkcd" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.290131 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qdt9\" (UniqueName: \"kubernetes.io/projected/2672ef63-7861-4c3d-a1b4-03cc9d18f8e2-kube-api-access-5qdt9\") pod \"machine-config-daemon-895pl\" (UID: \"2672ef63-7861-4c3d-a1b4-03cc9d18f8e2\") " pod="openshift-machine-config-operator/machine-config-daemon-895pl" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.290180 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/4036e581-21bf-4ea0-aaf5-84ab8a841888-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-m2gbd\" (UID: \"4036e581-21bf-4ea0-aaf5-84ab8a841888\") " pod="openshift-multus/multus-additional-cni-plugins-m2gbd" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.290192 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8ae056f0-e054-45da-9638-73074b7c8a3b-host-var-lib-cni-bin\") pod \"multus-9jkcd\" (UID: \"8ae056f0-e054-45da-9638-73074b7c8a3b\") " pod="openshift-multus/multus-9jkcd" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.290217 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/2672ef63-7861-4c3d-a1b4-03cc9d18f8e2-rootfs\") pod \"machine-config-daemon-895pl\" (UID: \"2672ef63-7861-4c3d-a1b4-03cc9d18f8e2\") " pod="openshift-machine-config-operator/machine-config-daemon-895pl" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.290244 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2672ef63-7861-4c3d-a1b4-03cc9d18f8e2-mcd-auth-proxy-config\") pod \"machine-config-daemon-895pl\" (UID: \"2672ef63-7861-4c3d-a1b4-03cc9d18f8e2\") " pod="openshift-machine-config-operator/machine-config-daemon-895pl" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.290240 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/8ae056f0-e054-45da-9638-73074b7c8a3b-host-run-multus-certs\") pod \"multus-9jkcd\" (UID: \"8ae056f0-e054-45da-9638-73074b7c8a3b\") " pod="openshift-multus/multus-9jkcd" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.290268 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8ae056f0-e054-45da-9638-73074b7c8a3b-host-run-netns\") pod \"multus-9jkcd\" (UID: \"8ae056f0-e054-45da-9638-73074b7c8a3b\") " pod="openshift-multus/multus-9jkcd" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.290304 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/2672ef63-7861-4c3d-a1b4-03cc9d18f8e2-rootfs\") pod \"machine-config-daemon-895pl\" (UID: \"2672ef63-7861-4c3d-a1b4-03cc9d18f8e2\") " pod="openshift-machine-config-operator/machine-config-daemon-895pl" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.290318 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bb597cc2-fff7-4d90-a43e-958791d83324-host\") pod \"node-ca-vwppb\" (UID: \"bb597cc2-fff7-4d90-a43e-958791d83324\") " pod="openshift-image-registry/node-ca-vwppb" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.290342 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/8ae056f0-e054-45da-9638-73074b7c8a3b-host-var-lib-kubelet\") pod \"multus-9jkcd\" (UID: \"8ae056f0-e054-45da-9638-73074b7c8a3b\") " pod="openshift-multus/multus-9jkcd" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.290362 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8ae056f0-e054-45da-9638-73074b7c8a3b-host-run-netns\") pod \"multus-9jkcd\" (UID: \"8ae056f0-e054-45da-9638-73074b7c8a3b\") " pod="openshift-multus/multus-9jkcd" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.290382 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4036e581-21bf-4ea0-aaf5-84ab8a841888-system-cni-dir\") pod \"multus-additional-cni-plugins-m2gbd\" (UID: \"4036e581-21bf-4ea0-aaf5-84ab8a841888\") " pod="openshift-multus/multus-additional-cni-plugins-m2gbd" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.290404 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bb597cc2-fff7-4d90-a43e-958791d83324-host\") pod \"node-ca-vwppb\" (UID: \"bb597cc2-fff7-4d90-a43e-958791d83324\") " pod="openshift-image-registry/node-ca-vwppb" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.290405 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4036e581-21bf-4ea0-aaf5-84ab8a841888-cni-binary-copy\") pod \"multus-additional-cni-plugins-m2gbd\" (UID: \"4036e581-21bf-4ea0-aaf5-84ab8a841888\") " pod="openshift-multus/multus-additional-cni-plugins-m2gbd" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.290443 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/8ae056f0-e054-45da-9638-73074b7c8a3b-cni-binary-copy\") pod \"multus-9jkcd\" (UID: 
\"8ae056f0-e054-45da-9638-73074b7c8a3b\") " pod="openshift-multus/multus-9jkcd" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.290470 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g28w2\" (UniqueName: \"kubernetes.io/projected/bb597cc2-fff7-4d90-a43e-958791d83324-kube-api-access-g28w2\") pod \"node-ca-vwppb\" (UID: \"bb597cc2-fff7-4d90-a43e-958791d83324\") " pod="openshift-image-registry/node-ca-vwppb" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.290497 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/4036e581-21bf-4ea0-aaf5-84ab8a841888-tuning-conf-dir\") pod \"multus-additional-cni-plugins-m2gbd\" (UID: \"4036e581-21bf-4ea0-aaf5-84ab8a841888\") " pod="openshift-multus/multus-additional-cni-plugins-m2gbd" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.290523 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/8ae056f0-e054-45da-9638-73074b7c8a3b-os-release\") pod \"multus-9jkcd\" (UID: \"8ae056f0-e054-45da-9638-73074b7c8a3b\") " pod="openshift-multus/multus-9jkcd" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.290552 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/8ae056f0-e054-45da-9638-73074b7c8a3b-multus-socket-dir-parent\") pod \"multus-9jkcd\" (UID: \"8ae056f0-e054-45da-9638-73074b7c8a3b\") " pod="openshift-multus/multus-9jkcd" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.290633 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/8ae056f0-e054-45da-9638-73074b7c8a3b-multus-socket-dir-parent\") pod \"multus-9jkcd\" (UID: \"8ae056f0-e054-45da-9638-73074b7c8a3b\") " pod="openshift-multus/multus-9jkcd" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.291040 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/8ae056f0-e054-45da-9638-73074b7c8a3b-host-var-lib-kubelet\") pod \"multus-9jkcd\" (UID: \"8ae056f0-e054-45da-9638-73074b7c8a3b\") " pod="openshift-multus/multus-9jkcd" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.291045 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/4036e581-21bf-4ea0-aaf5-84ab8a841888-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-m2gbd\" (UID: \"4036e581-21bf-4ea0-aaf5-84ab8a841888\") " pod="openshift-multus/multus-additional-cni-plugins-m2gbd" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.291069 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4036e581-21bf-4ea0-aaf5-84ab8a841888-cni-binary-copy\") pod \"multus-additional-cni-plugins-m2gbd\" (UID: \"4036e581-21bf-4ea0-aaf5-84ab8a841888\") " pod="openshift-multus/multus-additional-cni-plugins-m2gbd" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.291088 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4036e581-21bf-4ea0-aaf5-84ab8a841888-system-cni-dir\") pod \"multus-additional-cni-plugins-m2gbd\" (UID: \"4036e581-21bf-4ea0-aaf5-84ab8a841888\") " 
pod="openshift-multus/multus-additional-cni-plugins-m2gbd" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.291099 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2672ef63-7861-4c3d-a1b4-03cc9d18f8e2-mcd-auth-proxy-config\") pod \"machine-config-daemon-895pl\" (UID: \"2672ef63-7861-4c3d-a1b4-03cc9d18f8e2\") " pod="openshift-machine-config-operator/machine-config-daemon-895pl" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.291133 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/8ae056f0-e054-45da-9638-73074b7c8a3b-os-release\") pod \"multus-9jkcd\" (UID: \"8ae056f0-e054-45da-9638-73074b7c8a3b\") " pod="openshift-multus/multus-9jkcd" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.291531 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/bb597cc2-fff7-4d90-a43e-958791d83324-serviceca\") pod \"node-ca-vwppb\" (UID: \"bb597cc2-fff7-4d90-a43e-958791d83324\") " pod="openshift-image-registry/node-ca-vwppb" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.291638 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/4036e581-21bf-4ea0-aaf5-84ab8a841888-tuning-conf-dir\") pod \"multus-additional-cni-plugins-m2gbd\" (UID: \"4036e581-21bf-4ea0-aaf5-84ab8a841888\") " pod="openshift-multus/multus-additional-cni-plugins-m2gbd" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.292781 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/8ae056f0-e054-45da-9638-73074b7c8a3b-cni-binary-copy\") pod \"multus-9jkcd\" (UID: \"8ae056f0-e054-45da-9638-73074b7c8a3b\") " pod="openshift-multus/multus-9jkcd" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.292890 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/8ae056f0-e054-45da-9638-73074b7c8a3b-multus-daemon-config\") pod \"multus-9jkcd\" (UID: \"8ae056f0-e054-45da-9638-73074b7c8a3b\") " pod="openshift-multus/multus-9jkcd" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.295789 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.297226 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2672ef63-7861-4c3d-a1b4-03cc9d18f8e2-proxy-tls\") pod \"machine-config-daemon-895pl\" (UID: \"2672ef63-7861-4c3d-a1b4-03cc9d18f8e2\") " pod="openshift-machine-config-operator/machine-config-daemon-895pl" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.311254 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qdt9\" (UniqueName: \"kubernetes.io/projected/2672ef63-7861-4c3d-a1b4-03cc9d18f8e2-kube-api-access-5qdt9\") pod \"machine-config-daemon-895pl\" (UID: \"2672ef63-7861-4c3d-a1b4-03cc9d18f8e2\") " pod="openshift-machine-config-operator/machine-config-daemon-895pl" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.311267 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvm87\" (UniqueName: 
\"kubernetes.io/projected/4036e581-21bf-4ea0-aaf5-84ab8a841888-kube-api-access-bvm87\") pod \"multus-additional-cni-plugins-m2gbd\" (UID: \"4036e581-21bf-4ea0-aaf5-84ab8a841888\") " pod="openshift-multus/multus-additional-cni-plugins-m2gbd" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.311871 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z89l9\" (UniqueName: \"kubernetes.io/projected/c4d3122b-b4b4-41ac-896a-566afdcda936-kube-api-access-z89l9\") pod \"node-resolver-qwq46\" (UID: \"c4d3122b-b4b4-41ac-896a-566afdcda936\") " pod="openshift-dns/node-resolver-qwq46" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.312066 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x84hz\" (UniqueName: \"kubernetes.io/projected/8ae056f0-e054-45da-9638-73074b7c8a3b-kube-api-access-x84hz\") pod \"multus-9jkcd\" (UID: \"8ae056f0-e054-45da-9638-73074b7c8a3b\") " pod="openshift-multus/multus-9jkcd" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.312258 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-895pl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2672ef63-7861-4c3d-a1b4-03cc9d18f8e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5qdt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5qdt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-895pl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.320401 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qwq46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4d3122b-b4b4-41ac-896a-566afdcda936\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z89l9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qwq46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.321030 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g28w2\" (UniqueName: \"kubernetes.io/projected/bb597cc2-fff7-4d90-a43e-958791d83324-kube-api-access-g28w2\") pod \"node-ca-vwppb\" (UID: \"bb597cc2-fff7-4d90-a43e-958791d83324\") " pod="openshift-image-registry/node-ca-vwppb" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.337209 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-m2gbd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4036e581-21bf-4ea0-aaf5-84ab8a841888\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-m2gbd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:35Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.353683 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:35Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.368146 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9jkcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ae056f0-e054-45da-9638-73074b7c8a3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers 
with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x84hz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9jkcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:35Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.384203 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:35Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.397027 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:35Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.404568 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.408631 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.410026 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c67cd79-8431-401b-8f03-9387813b30ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe28ec8d64ea50ecca5d4531440159b5277aff7de80b18813bb91f5c32923326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a82e8858b15c840a63737058d3af382c1a6469d7d1d3ba6e44759f33e55f2543\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a303c0532bb4a555700875ed27e56205d709658417a72f47868fb7bc8291a77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27dd6068095b58038216d1837a6b7966c514c7d8a9db5d144b043d918b757394\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27dd6068095b58038216d1837a6b7966c514c7d8a9db5d144b043d918b757394\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29
T06:35:33Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 06:35:28.003940 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 06:35:28.004898 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1201671994/tls.crt::/tmp/serving-cert-1201671994/tls.key\\\\\\\"\\\\nI0129 06:35:33.575231 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 06:35:33.579536 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 06:35:33.579564 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 06:35:33.579587 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 06:35:33.579594 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 06:35:33.587038 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 06:35:33.587075 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 06:35:33.587080 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 06:35:33.587084 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 06:35:33.587087 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 06:35:33.587091 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 06:35:33.587094 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 06:35:33.587049 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 06:35:33.588897 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c256463618132e5305eb6fed3ef923e6a3c200bbc4b917e7f678e1c78af05084\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f97561ea3b9baa3440e5148060c80d57e9cf17837e7799b057f618d9084e8a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f97561ea3b9baa3440e5148060c80d57e9cf17837e7799b057f618d9084e8a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:35Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.414037 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.423782 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:35Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.436619 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:35Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.450307 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:35Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.452738 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.454177 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"2ef12beaf08b33d7694b3ea5a4705e37ec4a5b53f9cf1a7e0dd41a78d99bc8f2"} Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.454477 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.455746 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"2b8abb3dbc845f89735c58741f02550053402975d5edfe5e69f2323cc82bb8d7"} Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.455987 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"c7b335d7b6120621d4c94dfd8cdc86561a91d9c84a9481d352042d7728d7f6ab"} Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.456524 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"d771d64a896bf5895de1c300a9c5c13f0b59de07f8df81598f82b89043b0d971"} Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.457561 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"6bf710b24256eeb9149ca710dd9fcceae762df89486e0cf0f341c278eadda2fc"} Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.457610 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"8bf4381031a4ba3e51287ce7a1b35aea9b03c5bb028d7e40184102e90fe7b677"} Jan 29 06:35:35 crc kubenswrapper[5017]: 
I0129 06:35:35.466787 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vwppb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb597cc2-fff7-4d90-a43e-958791d83324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g28w2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vwppb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:35Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.479703 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-895pl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2672ef63-7861-4c3d-a1b4-03cc9d18f8e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5qdt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5qdt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-895pl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:35Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.489478 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-895pl" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.491191 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qwq46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4d3122b-b4b4-41ac-896a-566afdcda936\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z89l9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qwq46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:35Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.497198 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-vwppb" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.509625 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-m2gbd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4036e581-21bf-4ea0-aaf5-84ab8a841888\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"
ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\
\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-m2gbd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:35Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.510550 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-qwq46" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.517714 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-9jkcd" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.524320 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-m2gbd" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.527430 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bf710b24256eeb9149ca710dd9fcceae762df89486e0cf0f341c278eadda2fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:35Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:35 crc kubenswrapper[5017]: W0129 06:35:35.528534 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc4d3122b_b4b4_41ac_896a_566afdcda936.slice/crio-7098bb65704ee8c1d62c8d6ecce6a7a69d3caebb21dd591b9f9af0777b76c8da WatchSource:0}: Error finding container 7098bb65704ee8c1d62c8d6ecce6a7a69d3caebb21dd591b9f9af0777b76c8da: Status 404 returned error can't find the container with id 7098bb65704ee8c1d62c8d6ecce6a7a69d3caebb21dd591b9f9af0777b76c8da Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.536724 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-wqgmk"] Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.537728 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.540701 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.540738 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.541253 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.541411 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.541502 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.543097 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.545489 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.545756 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9jkcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ae056f0-e054-45da-9638-73074b7c8a3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x84hz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9jkcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:35Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:35 crc kubenswrapper[5017]: W0129 06:35:35.559809 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8ae056f0_e054_45da_9638_73074b7c8a3b.slice/crio-a0d43509ee62d408bfe53e1c4db0beaf2662240fd46c3986fbb7768244eca5f5 WatchSource:0}: Error finding container a0d43509ee62d408bfe53e1c4db0beaf2662240fd46c3986fbb7768244eca5f5: Status 404 returned error can't find the container with id a0d43509ee62d408bfe53e1c4db0beaf2662240fd46c3986fbb7768244eca5f5 Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.565877 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c67cd79-8431-401b-8f03-9387813b30ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe28ec8d64ea50ecca5d4531440159b5277aff7de80b18813bb91f5c32923326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a82e8858b15c840a63737058d3af382c1a6469d7d1d3ba6e44759f33e55f2543\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a303c0532bb4a555700875ed27e56205d709658417a72f47868fb7bc8291a77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ef12beaf08b33d7694b3ea5a4705e37ec4a5b53f9cf1a7e0dd41a78d99bc8f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27dd6068095b58038216d1837a6b7966c514c7d8a9db5d144b043d918b757394\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T06:35:33Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 06:35:28.003940 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 06:35:28.004898 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1201671994/tls.crt::/tmp/serving-cert-1201671994/tls.key\\\\\\\"\\\\nI0129 06:35:33.575231 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 06:35:33.579536 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 06:35:33.579564 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 06:35:33.579587 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 06:35:33.579594 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 06:35:33.587038 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 06:35:33.587075 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 06:35:33.587080 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 06:35:33.587084 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 06:35:33.587087 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 06:35:33.587091 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 06:35:33.587094 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 06:35:33.587049 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 06:35:33.588897 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c256463618132e5305eb6fed3ef923e6a3c200bbc4b917e7f678e1c78af05084\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f97561ea3b9baa3440e5148060c80d57e9cf17837e7799b057f618d9084e8a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f97561ea3b9baa3440e5148060c80d57e9cf17837e7799b057f618d9084e8a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:35Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.584218 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:35Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.594905 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tr2h2\" (UniqueName: \"kubernetes.io/projected/02dd5727-894c-4693-9bc7-83dd88ce118c-kube-api-access-tr2h2\") pod \"ovnkube-node-wqgmk\" (UID: \"02dd5727-894c-4693-9bc7-83dd88ce118c\") " pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.595822 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/02dd5727-894c-4693-9bc7-83dd88ce118c-node-log\") pod \"ovnkube-node-wqgmk\" (UID: \"02dd5727-894c-4693-9bc7-83dd88ce118c\") " pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.595885 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/02dd5727-894c-4693-9bc7-83dd88ce118c-etc-openvswitch\") pod \"ovnkube-node-wqgmk\" (UID: \"02dd5727-894c-4693-9bc7-83dd88ce118c\") " pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.595909 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/02dd5727-894c-4693-9bc7-83dd88ce118c-host-cni-bin\") pod \"ovnkube-node-wqgmk\" (UID: \"02dd5727-894c-4693-9bc7-83dd88ce118c\") " pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.595940 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/02dd5727-894c-4693-9bc7-83dd88ce118c-host-run-ovn-kubernetes\") pod \"ovnkube-node-wqgmk\" (UID: \"02dd5727-894c-4693-9bc7-83dd88ce118c\") " pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.595996 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/02dd5727-894c-4693-9bc7-83dd88ce118c-host-run-netns\") pod \"ovnkube-node-wqgmk\" (UID: \"02dd5727-894c-4693-9bc7-83dd88ce118c\") " pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.596018 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/02dd5727-894c-4693-9bc7-83dd88ce118c-ovnkube-config\") pod \"ovnkube-node-wqgmk\" (UID: \"02dd5727-894c-4693-9bc7-83dd88ce118c\") " pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.596050 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/02dd5727-894c-4693-9bc7-83dd88ce118c-host-kubelet\") pod \"ovnkube-node-wqgmk\" (UID: \"02dd5727-894c-4693-9bc7-83dd88ce118c\") " pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.596072 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/02dd5727-894c-4693-9bc7-83dd88ce118c-run-openvswitch\") pod \"ovnkube-node-wqgmk\" (UID: \"02dd5727-894c-4693-9bc7-83dd88ce118c\") " pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.596093 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/02dd5727-894c-4693-9bc7-83dd88ce118c-log-socket\") pod \"ovnkube-node-wqgmk\" (UID: \"02dd5727-894c-4693-9bc7-83dd88ce118c\") " pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.596114 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/02dd5727-894c-4693-9bc7-83dd88ce118c-env-overrides\") pod \"ovnkube-node-wqgmk\" (UID: \"02dd5727-894c-4693-9bc7-83dd88ce118c\") " pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.596135 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/02dd5727-894c-4693-9bc7-83dd88ce118c-systemd-units\") pod \"ovnkube-node-wqgmk\" (UID: \"02dd5727-894c-4693-9bc7-83dd88ce118c\") " pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.596193 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/02dd5727-894c-4693-9bc7-83dd88ce118c-ovnkube-script-lib\") pod \"ovnkube-node-wqgmk\" (UID: \"02dd5727-894c-4693-9bc7-83dd88ce118c\") " pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.596219 5017 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/02dd5727-894c-4693-9bc7-83dd88ce118c-host-cni-netd\") pod \"ovnkube-node-wqgmk\" (UID: \"02dd5727-894c-4693-9bc7-83dd88ce118c\") " pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.596247 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/02dd5727-894c-4693-9bc7-83dd88ce118c-host-slash\") pod \"ovnkube-node-wqgmk\" (UID: \"02dd5727-894c-4693-9bc7-83dd88ce118c\") " pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.596272 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/02dd5727-894c-4693-9bc7-83dd88ce118c-ovn-node-metrics-cert\") pod \"ovnkube-node-wqgmk\" (UID: \"02dd5727-894c-4693-9bc7-83dd88ce118c\") " pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.596329 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/02dd5727-894c-4693-9bc7-83dd88ce118c-run-ovn\") pod \"ovnkube-node-wqgmk\" (UID: \"02dd5727-894c-4693-9bc7-83dd88ce118c\") " pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.596367 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/02dd5727-894c-4693-9bc7-83dd88ce118c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-wqgmk\" (UID: \"02dd5727-894c-4693-9bc7-83dd88ce118c\") " pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.596395 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/02dd5727-894c-4693-9bc7-83dd88ce118c-run-systemd\") pod \"ovnkube-node-wqgmk\" (UID: \"02dd5727-894c-4693-9bc7-83dd88ce118c\") " pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.596427 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/02dd5727-894c-4693-9bc7-83dd88ce118c-var-lib-openvswitch\") pod \"ovnkube-node-wqgmk\" (UID: \"02dd5727-894c-4693-9bc7-83dd88ce118c\") " pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.596755 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:35Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.614992 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b8abb3dbc845f89735c58741f02550053402975d5edfe5e69f2323cc82bb8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7b335d7b6120621d4c94dfd8cdc86561a91d9c84a9481d352042d7728d7f6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:35Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.628533 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:35Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.644584 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a2999ed-a16c-49e3-a07a-3b084d6b8616\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ff1f2d36640f2aebd2cd6919987f3f9000d95a8cefac7844a0671da192b1897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1045c0f8fc7c60a5aa6d2e2b449230e4603b08b0b4244c319e296f89b04ab98d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27acd0133bf699b73534766330da62590b4231608bd96e51bf01b06b5775073d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fc74339c29860f59a280af0f3e21a0f4fa2be4a3e28eb63d90bc63854805d16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:35Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.664285 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:35Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.676791 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vwppb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb597cc2-fff7-4d90-a43e-958791d83324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g28w2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vwppb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:35Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.688902 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a2999ed-a16c-49e3-a07a-3b084d6b8616\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ff1f2d36640f2aebd2cd6919987f3f9000d95a8cefac7844a0671da192b1897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1045c0f8fc7c60a5aa6d2e2b449230e4603b08b0b4244c319e296f89b04ab98d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27acd0133bf699b73534766330da62590b4231608bd96e51bf01b06b5775073d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fc74339c29860f59a280af0f3e21a0f4fa2be4a3e28eb63d90bc63854805d16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:35Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.699866 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/02dd5727-894c-4693-9bc7-83dd88ce118c-host-cni-netd\") pod \"ovnkube-node-wqgmk\" (UID: \"02dd5727-894c-4693-9bc7-83dd88ce118c\") " pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.699897 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/02dd5727-894c-4693-9bc7-83dd88ce118c-host-slash\") pod \"ovnkube-node-wqgmk\" (UID: \"02dd5727-894c-4693-9bc7-83dd88ce118c\") " pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.699926 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/02dd5727-894c-4693-9bc7-83dd88ce118c-ovn-node-metrics-cert\") pod \"ovnkube-node-wqgmk\" (UID: \"02dd5727-894c-4693-9bc7-83dd88ce118c\") " pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.699943 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/02dd5727-894c-4693-9bc7-83dd88ce118c-run-ovn\") pod \"ovnkube-node-wqgmk\" (UID: \"02dd5727-894c-4693-9bc7-83dd88ce118c\") " pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.699977 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/02dd5727-894c-4693-9bc7-83dd88ce118c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-wqgmk\" (UID: \"02dd5727-894c-4693-9bc7-83dd88ce118c\") " pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.699996 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/02dd5727-894c-4693-9bc7-83dd88ce118c-run-systemd\") pod \"ovnkube-node-wqgmk\" (UID: \"02dd5727-894c-4693-9bc7-83dd88ce118c\") " pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.700011 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/02dd5727-894c-4693-9bc7-83dd88ce118c-var-lib-openvswitch\") pod \"ovnkube-node-wqgmk\" (UID: \"02dd5727-894c-4693-9bc7-83dd88ce118c\") " pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.700036 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tr2h2\" (UniqueName: \"kubernetes.io/projected/02dd5727-894c-4693-9bc7-83dd88ce118c-kube-api-access-tr2h2\") pod \"ovnkube-node-wqgmk\" (UID: \"02dd5727-894c-4693-9bc7-83dd88ce118c\") " pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.700052 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/02dd5727-894c-4693-9bc7-83dd88ce118c-node-log\") pod \"ovnkube-node-wqgmk\" (UID: \"02dd5727-894c-4693-9bc7-83dd88ce118c\") " pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.700075 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/02dd5727-894c-4693-9bc7-83dd88ce118c-host-cni-bin\") pod \"ovnkube-node-wqgmk\" (UID: \"02dd5727-894c-4693-9bc7-83dd88ce118c\") " pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.700092 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/02dd5727-894c-4693-9bc7-83dd88ce118c-etc-openvswitch\") pod \"ovnkube-node-wqgmk\" (UID: \"02dd5727-894c-4693-9bc7-83dd88ce118c\") " pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.700118 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/02dd5727-894c-4693-9bc7-83dd88ce118c-host-run-ovn-kubernetes\") pod \"ovnkube-node-wqgmk\" (UID: \"02dd5727-894c-4693-9bc7-83dd88ce118c\") " pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.700136 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/02dd5727-894c-4693-9bc7-83dd88ce118c-host-run-netns\") pod \"ovnkube-node-wqgmk\" (UID: \"02dd5727-894c-4693-9bc7-83dd88ce118c\") " pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.700155 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" 
(UniqueName: \"kubernetes.io/configmap/02dd5727-894c-4693-9bc7-83dd88ce118c-ovnkube-config\") pod \"ovnkube-node-wqgmk\" (UID: \"02dd5727-894c-4693-9bc7-83dd88ce118c\") " pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.700173 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/02dd5727-894c-4693-9bc7-83dd88ce118c-host-kubelet\") pod \"ovnkube-node-wqgmk\" (UID: \"02dd5727-894c-4693-9bc7-83dd88ce118c\") " pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.700201 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/02dd5727-894c-4693-9bc7-83dd88ce118c-run-openvswitch\") pod \"ovnkube-node-wqgmk\" (UID: \"02dd5727-894c-4693-9bc7-83dd88ce118c\") " pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.700218 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/02dd5727-894c-4693-9bc7-83dd88ce118c-log-socket\") pod \"ovnkube-node-wqgmk\" (UID: \"02dd5727-894c-4693-9bc7-83dd88ce118c\") " pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.700234 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/02dd5727-894c-4693-9bc7-83dd88ce118c-env-overrides\") pod \"ovnkube-node-wqgmk\" (UID: \"02dd5727-894c-4693-9bc7-83dd88ce118c\") " pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.700261 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/02dd5727-894c-4693-9bc7-83dd88ce118c-systemd-units\") pod \"ovnkube-node-wqgmk\" (UID: \"02dd5727-894c-4693-9bc7-83dd88ce118c\") " pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.700282 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/02dd5727-894c-4693-9bc7-83dd88ce118c-ovnkube-script-lib\") pod \"ovnkube-node-wqgmk\" (UID: \"02dd5727-894c-4693-9bc7-83dd88ce118c\") " pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.701087 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/02dd5727-894c-4693-9bc7-83dd88ce118c-ovnkube-script-lib\") pod \"ovnkube-node-wqgmk\" (UID: \"02dd5727-894c-4693-9bc7-83dd88ce118c\") " pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.701137 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/02dd5727-894c-4693-9bc7-83dd88ce118c-host-cni-netd\") pod \"ovnkube-node-wqgmk\" (UID: \"02dd5727-894c-4693-9bc7-83dd88ce118c\") " pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.701160 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/02dd5727-894c-4693-9bc7-83dd88ce118c-host-slash\") pod \"ovnkube-node-wqgmk\" (UID: 
\"02dd5727-894c-4693-9bc7-83dd88ce118c\") " pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.701518 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/02dd5727-894c-4693-9bc7-83dd88ce118c-etc-openvswitch\") pod \"ovnkube-node-wqgmk\" (UID: \"02dd5727-894c-4693-9bc7-83dd88ce118c\") " pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.701551 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/02dd5727-894c-4693-9bc7-83dd88ce118c-var-lib-openvswitch\") pod \"ovnkube-node-wqgmk\" (UID: \"02dd5727-894c-4693-9bc7-83dd88ce118c\") " pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.701579 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/02dd5727-894c-4693-9bc7-83dd88ce118c-run-ovn\") pod \"ovnkube-node-wqgmk\" (UID: \"02dd5727-894c-4693-9bc7-83dd88ce118c\") " pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.701604 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/02dd5727-894c-4693-9bc7-83dd88ce118c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-wqgmk\" (UID: \"02dd5727-894c-4693-9bc7-83dd88ce118c\") " pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.701627 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/02dd5727-894c-4693-9bc7-83dd88ce118c-run-systemd\") pod \"ovnkube-node-wqgmk\" (UID: \"02dd5727-894c-4693-9bc7-83dd88ce118c\") " pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.701763 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:35Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.701860 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/02dd5727-894c-4693-9bc7-83dd88ce118c-node-log\") pod \"ovnkube-node-wqgmk\" (UID: \"02dd5727-894c-4693-9bc7-83dd88ce118c\") " pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.701894 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/02dd5727-894c-4693-9bc7-83dd88ce118c-host-cni-bin\") pod \"ovnkube-node-wqgmk\" (UID: \"02dd5727-894c-4693-9bc7-83dd88ce118c\") " pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.701898 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/02dd5727-894c-4693-9bc7-83dd88ce118c-host-kubelet\") pod \"ovnkube-node-wqgmk\" (UID: \"02dd5727-894c-4693-9bc7-83dd88ce118c\") " pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.701921 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/02dd5727-894c-4693-9bc7-83dd88ce118c-host-run-netns\") pod \"ovnkube-node-wqgmk\" (UID: \"02dd5727-894c-4693-9bc7-83dd88ce118c\") " pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.701947 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/02dd5727-894c-4693-9bc7-83dd88ce118c-host-run-ovn-kubernetes\") pod \"ovnkube-node-wqgmk\" (UID: \"02dd5727-894c-4693-9bc7-83dd88ce118c\") " pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.701994 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/02dd5727-894c-4693-9bc7-83dd88ce118c-log-socket\") pod \"ovnkube-node-wqgmk\" (UID: \"02dd5727-894c-4693-9bc7-83dd88ce118c\") " pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.702019 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/02dd5727-894c-4693-9bc7-83dd88ce118c-run-openvswitch\") pod \"ovnkube-node-wqgmk\" (UID: \"02dd5727-894c-4693-9bc7-83dd88ce118c\") " pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.702048 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/02dd5727-894c-4693-9bc7-83dd88ce118c-systemd-units\") pod \"ovnkube-node-wqgmk\" (UID: \"02dd5727-894c-4693-9bc7-83dd88ce118c\") " pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.702486 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/02dd5727-894c-4693-9bc7-83dd88ce118c-env-overrides\") pod \"ovnkube-node-wqgmk\" (UID: \"02dd5727-894c-4693-9bc7-83dd88ce118c\") " pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.702669 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/02dd5727-894c-4693-9bc7-83dd88ce118c-ovnkube-config\") pod \"ovnkube-node-wqgmk\" (UID: \"02dd5727-894c-4693-9bc7-83dd88ce118c\") " pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.705868 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/02dd5727-894c-4693-9bc7-83dd88ce118c-ovn-node-metrics-cert\") pod \"ovnkube-node-wqgmk\" (UID: \"02dd5727-894c-4693-9bc7-83dd88ce118c\") " pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.719077 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vwppb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb597cc2-fff7-4d90-a43e-958791d83324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g28w2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vwppb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:35Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.729586 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tr2h2\" (UniqueName: \"kubernetes.io/projected/02dd5727-894c-4693-9bc7-83dd88ce118c-kube-api-access-tr2h2\") pod \"ovnkube-node-wqgmk\" (UID: \"02dd5727-894c-4693-9bc7-83dd88ce118c\") " pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.733304 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-895pl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2672ef63-7861-4c3d-a1b4-03cc9d18f8e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5qdt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5qdt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-895pl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:35Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.763804 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qwq46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4d3122b-b4b4-41ac-896a-566afdcda936\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z89l9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qwq46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:35Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.800778 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 06:35:35 crc kubenswrapper[5017]: E0129 06:35:35.801047 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 06:35:37.801013402 +0000 UTC m=+24.175461012 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.825251 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-m2gbd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4036e581-21bf-4ea0-aaf5-84ab8a841888\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-m2gbd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:35Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.858462 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.873876 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bf710b24256eeb9149ca710dd9fcceae762df89486e0cf0f341c278eadda2fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:35Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.903258 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9jkcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ae056f0-e054-45da-9638-73074b7c8a3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x84hz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9jkcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:35Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.903402 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.903429 5017 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.903449 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.903475 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 06:35:35 crc kubenswrapper[5017]: E0129 06:35:35.903536 5017 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 06:35:35 crc kubenswrapper[5017]: E0129 06:35:35.903565 5017 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 06:35:35 crc kubenswrapper[5017]: E0129 06:35:35.903577 5017 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 06:35:35 crc kubenswrapper[5017]: E0129 06:35:35.903588 5017 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 06:35:35 crc kubenswrapper[5017]: E0129 06:35:35.903625 5017 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 06:35:35 crc kubenswrapper[5017]: E0129 06:35:35.903635 5017 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 06:35:35 crc kubenswrapper[5017]: E0129 06:35:35.903643 5017 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 06:35:35 crc kubenswrapper[5017]: E0129 06:35:35.903644 5017 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 06:35:35 crc kubenswrapper[5017]: E0129 06:35:35.903578 5017 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 06:35:37.903564994 +0000 UTC m=+24.278012604 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 06:35:35 crc kubenswrapper[5017]: E0129 06:35:35.903673 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 06:35:37.903663657 +0000 UTC m=+24.278111267 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 06:35:35 crc kubenswrapper[5017]: E0129 06:35:35.903685 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 06:35:37.903680317 +0000 UTC m=+24.278127927 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 06:35:35 crc kubenswrapper[5017]: E0129 06:35:35.903695 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 06:35:37.903690407 +0000 UTC m=+24.278138017 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 06:35:35 crc kubenswrapper[5017]: W0129 06:35:35.907328 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod02dd5727_894c_4693_9bc7_83dd88ce118c.slice/crio-aad2e289f0666cf55714aad4f462a5accef4464aa7945ba6782dec4be64a04e9 WatchSource:0}: Error finding container aad2e289f0666cf55714aad4f462a5accef4464aa7945ba6782dec4be64a04e9: Status 404 returned error can't find the container with id aad2e289f0666cf55714aad4f462a5accef4464aa7945ba6782dec4be64a04e9 Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.936549 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02dd5727-894c-4693-9bc7-83dd88ce118c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wqgmk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:35Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:35 crc kubenswrapper[5017]: I0129 06:35:35.959651 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b8abb3dbc845f89735c58741f02550053402975d5edfe5e69f2323cc82bb8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7b335d7b6120621d4c94dfd8cdc86561a91d9c84a9481d352042d7728d7f6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:35Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:36 crc kubenswrapper[5017]: I0129 06:35:36.000535 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:35Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:36 crc kubenswrapper[5017]: I0129 06:35:36.040946 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c67cd79-8431-401b-8f03-9387813b30ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe28ec8d64ea50ecca5d4531440159b5277aff7de80b18813bb91f5c32923326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a82e8858b15c840a63737058d3af382c1a6469d7d1d3ba6e44759f33e55f2543\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a303c0532bb4a555700875ed27e56205d709658417a72f47868fb7bc8291a77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ef12beaf08b33d7694b3ea5a4705e37ec4a5b53f9cf1a7e0dd41a78d99bc8f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27dd6068095b58038216d1837a6b7966c514c7d8a9db5d144b043d918b757394\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T06:35:33Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 06:35:28.003940 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 06:35:28.004898 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1201671994/tls.crt::/tmp/serving-cert-1201671994/tls.key\\\\\\\"\\\\nI0129 06:35:33.575231 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 06:35:33.579536 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 06:35:33.579564 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 06:35:33.579587 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 06:35:33.579594 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 06:35:33.587038 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 06:35:33.587075 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 06:35:33.587080 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 06:35:33.587084 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 06:35:33.587087 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 06:35:33.587091 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 06:35:33.587094 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 06:35:33.587049 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 06:35:33.588897 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c256463618132e5305eb6fed3ef923e6a3c200bbc4b917e7f678e1c78af05084\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f97561ea3b9baa3440e5148060c80d57e9cf17837e7799b057f618d9084e8a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f97561ea3b9baa3440e5148060c80d57e9cf17837e7799b057f618d9084e8a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:36Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:36 crc kubenswrapper[5017]: I0129 06:35:36.076006 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:36Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:36 crc kubenswrapper[5017]: I0129 06:35:36.118867 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:36Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:36 crc kubenswrapper[5017]: I0129 06:35:36.276080 5017 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 18:38:22.219529907 +0000 UTC Jan 29 06:35:36 crc kubenswrapper[5017]: I0129 06:35:36.315746 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 06:35:36 crc kubenswrapper[5017]: E0129 06:35:36.316318 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 06:35:36 crc kubenswrapper[5017]: I0129 06:35:36.316415 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 06:35:36 crc kubenswrapper[5017]: E0129 06:35:36.316478 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 06:35:36 crc kubenswrapper[5017]: I0129 06:35:36.316621 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 06:35:36 crc kubenswrapper[5017]: E0129 06:35:36.316693 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 06:35:36 crc kubenswrapper[5017]: I0129 06:35:36.462022 5017 generic.go:334] "Generic (PLEG): container finished" podID="02dd5727-894c-4693-9bc7-83dd88ce118c" containerID="13649086edb9d7ffe20a9040787d55449e6541a51f03432458f5fcb965fe4723" exitCode=0 Jan 29 06:35:36 crc kubenswrapper[5017]: I0129 06:35:36.462093 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" event={"ID":"02dd5727-894c-4693-9bc7-83dd88ce118c","Type":"ContainerDied","Data":"13649086edb9d7ffe20a9040787d55449e6541a51f03432458f5fcb965fe4723"} Jan 29 06:35:36 crc kubenswrapper[5017]: I0129 06:35:36.462123 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" event={"ID":"02dd5727-894c-4693-9bc7-83dd88ce118c","Type":"ContainerStarted","Data":"aad2e289f0666cf55714aad4f462a5accef4464aa7945ba6782dec4be64a04e9"} Jan 29 06:35:36 crc kubenswrapper[5017]: I0129 06:35:36.464382 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9jkcd" event={"ID":"8ae056f0-e054-45da-9638-73074b7c8a3b","Type":"ContainerStarted","Data":"17d52fc64784408e447773d44de6e83d32029a77073a0c1ee206a51d81eda62f"} Jan 29 06:35:36 crc kubenswrapper[5017]: I0129 06:35:36.464435 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9jkcd" event={"ID":"8ae056f0-e054-45da-9638-73074b7c8a3b","Type":"ContainerStarted","Data":"a0d43509ee62d408bfe53e1c4db0beaf2662240fd46c3986fbb7768244eca5f5"} Jan 29 06:35:36 crc kubenswrapper[5017]: I0129 06:35:36.466203 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-qwq46" event={"ID":"c4d3122b-b4b4-41ac-896a-566afdcda936","Type":"ContainerStarted","Data":"45817069aa7e09fd97b77922f9b5a651f2a067991115021c0a85159a93d523af"} Jan 29 06:35:36 crc kubenswrapper[5017]: I0129 06:35:36.466232 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-qwq46" event={"ID":"c4d3122b-b4b4-41ac-896a-566afdcda936","Type":"ContainerStarted","Data":"7098bb65704ee8c1d62c8d6ecce6a7a69d3caebb21dd591b9f9af0777b76c8da"} Jan 29 06:35:36 crc kubenswrapper[5017]: I0129 06:35:36.468329 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-vwppb" event={"ID":"bb597cc2-fff7-4d90-a43e-958791d83324","Type":"ContainerStarted","Data":"f41b8b485793972343992b2ec30940b852ff5f355e045225e8813683e9f9a3c3"} Jan 29 06:35:36 crc kubenswrapper[5017]: I0129 06:35:36.468397 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-vwppb" event={"ID":"bb597cc2-fff7-4d90-a43e-958791d83324","Type":"ContainerStarted","Data":"af26b8bd189107f6a7b4802176acf82a35837a39aa531aa5b0d2257fb82d3f3a"} Jan 29 06:35:36 crc kubenswrapper[5017]: I0129 06:35:36.469933 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-895pl" event={"ID":"2672ef63-7861-4c3d-a1b4-03cc9d18f8e2","Type":"ContainerStarted","Data":"884bf08569779a1b94a927f1250657ab4d855e78f451a826856022a7c62cd6b4"} Jan 29 06:35:36 crc kubenswrapper[5017]: I0129 06:35:36.469981 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-895pl" event={"ID":"2672ef63-7861-4c3d-a1b4-03cc9d18f8e2","Type":"ContainerStarted","Data":"ddd299f15e4338652a37e1e3f09560a50e6c59d7bf29e827462d060a56294438"} 
Jan 29 06:35:36 crc kubenswrapper[5017]: I0129 06:35:36.469990 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-895pl" event={"ID":"2672ef63-7861-4c3d-a1b4-03cc9d18f8e2","Type":"ContainerStarted","Data":"d2a7a5852a519ac194b9033bfbcab6d69dea2c1acb56d2f4282034bbce07d67b"}
Jan 29 06:35:36 crc kubenswrapper[5017]: I0129 06:35:36.471588 5017 generic.go:334] "Generic (PLEG): container finished" podID="4036e581-21bf-4ea0-aaf5-84ab8a841888" containerID="7399e8e88798a89139226e395d22a2aa61370296a3789f63d766f42ea1c9f4df" exitCode=0
Jan 29 06:35:36 crc kubenswrapper[5017]: I0129 06:35:36.472136 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-m2gbd" event={"ID":"4036e581-21bf-4ea0-aaf5-84ab8a841888","Type":"ContainerDied","Data":"7399e8e88798a89139226e395d22a2aa61370296a3789f63d766f42ea1c9f4df"}
Jan 29 06:35:36 crc kubenswrapper[5017]: I0129 06:35:36.472162 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-m2gbd" event={"ID":"4036e581-21bf-4ea0-aaf5-84ab8a841888","Type":"ContainerStarted","Data":"1866ac10a47422c9d76d35ad1a05ace2ad73f789a8f53ffd49d29f548d778247"}
Jan 29 06:35:36 crc kubenswrapper[5017]: I0129 06:35:36.484662 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a2999ed-a16c-49e3-a07a-3b084d6b8616\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ff1f2d36640f2aebd2cd6919987f3f9000d95a8cefac7844a0671da192b1897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1045c0f8fc7c60a5aa6d2e2b449230e4603b08b0b4244c319e296f89b04ab98d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27acd0133bf699b73534766330da62590b4231608bd96e51bf01b06b5775073d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fc74339c29860f59a280af0f3e21a0f4fa2be4a3e28eb63d90bc63854805d16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:36Z is after 2025-08-24T17:21:41Z"
Jan 29 06:35:36 crc kubenswrapper[5017]: I0129 06:35:36.501374 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:36Z is after 2025-08-24T17:21:41Z"
Jan 29 06:35:36 crc kubenswrapper[5017]: I0129 06:35:36.513769 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vwppb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb597cc2-fff7-4d90-a43e-958791d83324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g28w2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vwppb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:36Z is after 2025-08-24T17:21:41Z"
Jan 29 06:35:36 crc kubenswrapper[5017]: I0129 06:35:36.528401 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-895pl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2672ef63-7861-4c3d-a1b4-03cc9d18f8e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5qdt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5qdt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-895pl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:36Z is after 2025-08-24T17:21:41Z"
Jan 29 06:35:36 crc kubenswrapper[5017]: I0129 06:35:36.539868 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qwq46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4d3122b-b4b4-41ac-896a-566afdcda936\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z89l9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qwq46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:36Z is after 2025-08-24T17:21:41Z"
Jan 29 06:35:36 crc kubenswrapper[5017]: I0129 06:35:36.563772 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-m2gbd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4036e581-21bf-4ea0-aaf5-84ab8a841888\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-m2gbd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:36Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:36 crc kubenswrapper[5017]: I0129 06:35:36.577118 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bf710b24256eeb9149ca710dd9fcceae762df89486e0cf0f341c278eadda2fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:36Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:36 crc kubenswrapper[5017]: I0129 06:35:36.592031 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9jkcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ae056f0-e054-45da-9638-73074b7c8a3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x84hz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9jkcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:36Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:36 crc kubenswrapper[5017]: I0129 06:35:36.622466 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"02dd5727-894c-4693-9bc7-83dd88ce118c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13649086edb9d7ffe20a9040787d55449e6541a51f03432458f5fcb965fe4723\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13649086edb9d7ffe20a9040787d55449e6541a51f03432458f5fcb965fe4723\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wqgmk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:36Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:36 crc kubenswrapper[5017]: I0129 06:35:36.643412 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c67cd79-8431-401b-8f03-9387813b30ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe28ec8d64ea50ecca5d4531440159b5277aff7de80b18813bb91f5c32923326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a82e8858b15c840a63737058d3af382c1a6469d7d1d3ba6e44759f33e55f2543\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a303c0532bb4a555700875ed27e56205d709658417a72f47868fb7bc8291a77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ef12beaf08b33d7694b3ea5a4705e37ec4a5b53f9cf1a7e0dd41a78d99bc8f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27dd6068095b58038216d1837a6b7966c514c7d8a9db5d144b043d918b757394\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T06:35:33Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 06:35:28.003940 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 06:35:28.004898 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1201671994/tls.crt::/tmp/serving-cert-1201671994/tls.key\\\\\\\"\\\\nI0129 06:35:33.575231 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 06:35:33.579536 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 06:35:33.579564 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 06:35:33.579587 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 06:35:33.579594 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 06:35:33.587038 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 06:35:33.587075 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 06:35:33.587080 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 06:35:33.587084 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 06:35:33.587087 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 06:35:33.587091 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 06:35:33.587094 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 06:35:33.587049 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 06:35:33.588897 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c256463618132e5305eb6fed3ef923e6a3c200bbc4b917e7f678e1c78af05084\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f97561ea3b9baa3440e5148060c80d57e9cf17837e7799b057f618d9084e8a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f97561ea3b9baa3440e5148060c80d57e9cf17837e7799b057f618d9084e8a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:36Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:36 crc kubenswrapper[5017]: I0129 06:35:36.663386 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:36Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:36 crc kubenswrapper[5017]: I0129 06:35:36.679049 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:36Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:36 crc kubenswrapper[5017]: I0129 06:35:36.695680 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b8abb3dbc845f89735c58741f02550053402975d5edfe5e69f2323cc82bb8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7b335d7b6120621d4c94dfd8cdc86561a91d9c84a9481d352042d7728d7f6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:36Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:36 crc kubenswrapper[5017]: I0129 06:35:36.713547 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:36Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:36 crc kubenswrapper[5017]: I0129 06:35:36.736376 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qwq46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4d3122b-b4b4-41ac-896a-566afdcda936\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45817069aa7e09fd97b77922f9b5a651f2a067991115021c0a85159a93d523af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z89l9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qwq46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:36Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:36 crc kubenswrapper[5017]: I0129 06:35:36.774510 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-m2gbd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4036e581-21bf-4ea0-aaf5-84ab8a841888\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7399e8e88798a89139226e395d22a2aa61370296a3789f63d766f42ea1c9f4df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7399e8e88798a89139226e395d22a2aa61370296a3789f63d766f42ea1c9f4df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/hos
t/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\
\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-m2gbd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:36Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:36 crc kubenswrapper[5017]: I0129 06:35:36.808409 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-895pl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2672ef63-7861-4c3d-a1b4-03cc9d18f8e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://884bf08569779a1b94a927f1250657ab4d855e78f451a826856022a7c62cd6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5qdt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd299f15e4338652a37e1e3f09560a50e6c59d7bf29e827462d060a56294438\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5qdt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-895pl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:36Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:36 crc kubenswrapper[5017]: I0129 06:35:36.850337 5017 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bf710b24256eeb9149ca710dd9fcceae762df89486e0cf0f341c278eadda2fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:36Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:36 crc kubenswrapper[5017]: I0129 06:35:36.879105 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9jkcd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ae056f0-e054-45da-9638-73074b7c8a3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17d52fc64784408e447773d44de6e83d32029a77073a0c1ee206a51d81eda62f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x84hz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9jkcd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:36Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:36 crc kubenswrapper[5017]: I0129 06:35:36.920822 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02dd5727-894c-4693-9bc7-83dd88ce118c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\"
,\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13649086edb9d7ffe20a9040787d55449e6541a51f03432458f5fcb965fe4723\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13649086edb9d7ffe20a9040787d55449e6541a51f03432458f5fcb965fe4723\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wqgmk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:36Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:36 crc kubenswrapper[5017]: I0129 06:35:36.956217 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:36Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:36 crc kubenswrapper[5017]: I0129 06:35:36.997158 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:36Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:37 crc kubenswrapper[5017]: I0129 06:35:37.036637 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b8abb3dbc845f89735c58741f02550053402975d5edfe5e69f2323cc82bb8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7b335d7b6120621d4c94dfd8cdc86561a91d9c84a9481d352042d7728d7f6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:37Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:37 crc kubenswrapper[5017]: I0129 06:35:37.075986 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:37Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:37 crc kubenswrapper[5017]: I0129 06:35:37.118558 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c67cd79-8431-401b-8f03-9387813b30ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe28ec8d64ea50ecca5d4531440159b5277aff7de80b18813bb91f5c32923326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a82e8858b15c840a63737058d3af382c1a6469d7d1d3ba6e44759f33e55f2543\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a303c0532bb4a555700875ed27e56205d709658417a72f47868fb7bc8291a77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ef12beaf08b33d7694b3ea5a4705e37ec4a5b53f9cf1a7e0dd41a78d99bc8f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27dd6068095b58038216d1837a6b7966c514c7d8a9db5d144b043d918b757394\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T06:35:33Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 06:35:28.003940 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 06:35:28.004898 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1201671994/tls.crt::/tmp/serving-cert-1201671994/tls.key\\\\\\\"\\\\nI0129 06:35:33.575231 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 06:35:33.579536 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 06:35:33.579564 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 06:35:33.579587 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 06:35:33.579594 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 06:35:33.587038 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 06:35:33.587075 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 06:35:33.587080 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 06:35:33.587084 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 06:35:33.587087 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 06:35:33.587091 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 06:35:33.587094 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 06:35:33.587049 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 06:35:33.588897 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c256463618132e5305eb6fed3ef923e6a3c200bbc4b917e7f678e1c78af05084\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f97561ea3b9baa3440e5148060c80d57e9cf17837e7799b057f618d9084e8a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f97561ea3b9baa3440e5148060c80d57e9cf17837e7799b057f618d9084e8a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:37Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:37 crc kubenswrapper[5017]: I0129 06:35:37.156556 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a2999ed-a16c-49e3-a07a-3b084d6b8616\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ff1f2d36640f2aebd2cd6919987f3f9000d95a8cefac7844a0671da192b1897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1045c0f8fc7c60a5aa6d2e2b449230e4603b08b0b4244c319e296f89b04ab98d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27acd0133bf699b73534766330da62590b4231608bd96e51bf01b06b5775073d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fc74339c29860f59a280af0f3e21a0f4fa2be4a3e28eb63d90bc63854805d16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:37Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:37 crc kubenswrapper[5017]: I0129 06:35:37.194637 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:37Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:37 crc kubenswrapper[5017]: I0129 06:35:37.236354 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vwppb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb597cc2-fff7-4d90-a43e-958791d83324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f41b8b485793972343992b2ec30940b852ff5f355e045225e8813683e9f9a3c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g28w2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vwppb\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:37Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:37 crc kubenswrapper[5017]: I0129 06:35:37.277287 5017 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 16:28:13.582737252 +0000 UTC Jan 29 06:35:37 crc kubenswrapper[5017]: I0129 06:35:37.479092 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"2ce95c49452e7355245f4ec2716c723873a116e2523b8192efe93cfb369260f0"} Jan 29 06:35:37 crc kubenswrapper[5017]: I0129 06:35:37.483234 5017 generic.go:334] "Generic (PLEG): container finished" podID="4036e581-21bf-4ea0-aaf5-84ab8a841888" containerID="9418ae41c02e41fb3aeb7ce97e7d7b91f9a193ac23cbfe692cc79b071aa3adca" exitCode=0 Jan 29 06:35:37 crc kubenswrapper[5017]: I0129 06:35:37.483325 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-m2gbd" event={"ID":"4036e581-21bf-4ea0-aaf5-84ab8a841888","Type":"ContainerDied","Data":"9418ae41c02e41fb3aeb7ce97e7d7b91f9a193ac23cbfe692cc79b071aa3adca"} Jan 29 06:35:37 crc kubenswrapper[5017]: I0129 06:35:37.487596 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" event={"ID":"02dd5727-894c-4693-9bc7-83dd88ce118c","Type":"ContainerStarted","Data":"7eec8c502e2516637a85d2b83f7f745f9cc5d6a420f15cea465d92f71e0b8e71"} Jan 29 06:35:37 crc kubenswrapper[5017]: I0129 06:35:37.487640 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" event={"ID":"02dd5727-894c-4693-9bc7-83dd88ce118c","Type":"ContainerStarted","Data":"177a3e8a7f63905262ffb4ea5f98c4fa97a8d48dd2ff075484a44ea87d66a072"} Jan 29 06:35:37 crc kubenswrapper[5017]: I0129 06:35:37.487651 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" event={"ID":"02dd5727-894c-4693-9bc7-83dd88ce118c","Type":"ContainerStarted","Data":"749973b1dc2bcf3fc4e35277f0893a240b3571cadc45529da1229cd7ebe1748e"} Jan 29 06:35:37 crc kubenswrapper[5017]: I0129 06:35:37.487660 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" event={"ID":"02dd5727-894c-4693-9bc7-83dd88ce118c","Type":"ContainerStarted","Data":"d6a16eea25767bb4fea94880a90ae883944934c09ff5ba0ced9641169691169d"} Jan 29 06:35:37 crc kubenswrapper[5017]: I0129 06:35:37.495174 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a2999ed-a16c-49e3-a07a-3b084d6b8616\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ff1f2d36640f2aebd2cd6919987f3f9000d95a8cefac7844a0671da192b1897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1045c0f8fc7c60a5aa6d2e2b449230e4603b08b0b4244c319e296f89b04ab98d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27acd0133bf699b73534766330da62590b4231608bd96e51bf01b06b5775073d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fc74339c29860f59a280af0f3e21a0f4fa2be4a3e28eb63d90bc63854805d16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:37Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:37 crc kubenswrapper[5017]: I0129 06:35:37.513329 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ce95c49452e7355245f4ec2716c723873a116e2523b8192efe93cfb369260f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-29T06:35:37Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:37 crc kubenswrapper[5017]: I0129 06:35:37.528153 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vwppb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb597cc2-fff7-4d90-a43e-958791d83324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f41b8b485793972343992b2ec30940b852ff5f355e045225e8813683e9f9a3c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g28w2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vwppb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:37Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:37 crc kubenswrapper[5017]: I0129 06:35:37.541144 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-895pl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2672ef63-7861-4c3d-a1b4-03cc9d18f8e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://884bf08569779a1b94a927f1250657ab4d855e78f451a826856022a7c62cd6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5qdt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd299f15e4338652a37e1e3f09560a50e6c59d7bf29e827462d060a56294438\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5qdt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-895pl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:37Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:37 crc kubenswrapper[5017]: I0129 06:35:37.561434 5017 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-qwq46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4d3122b-b4b4-41ac-896a-566afdcda936\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45817069aa7e09fd97b77922f9b5a651f2a067991115021c0a85159a93d523af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z89l9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qwq46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:37Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:37 crc kubenswrapper[5017]: I0129 06:35:37.581816 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-m2gbd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4036e581-21bf-4ea0-aaf5-84ab8a841888\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7399e8e88798a89139226e395d22a2aa61370296a3789f63d766f42ea1c9f4df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7399e8e88798a89139226e395d22a2aa61370296a3789f63d766f42ea1c9f4df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\
\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"po
dIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-m2gbd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:37Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:37 crc kubenswrapper[5017]: I0129 06:35:37.594372 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bf710b24256eeb9149ca710dd9fcceae762df89486e0cf0f341c278eadda2fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:37Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:37 crc kubenswrapper[5017]: I0129 06:35:37.609991 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9jkcd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ae056f0-e054-45da-9638-73074b7c8a3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17d52fc64784408e447773d44de6e83d32029a77073a0c1ee206a51d81eda62f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x84hz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9jkcd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:37Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:37 crc kubenswrapper[5017]: I0129 06:35:37.628417 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02dd5727-894c-4693-9bc7-83dd88ce118c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\"
,\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13649086edb9d7ffe20a9040787d55449e6541a51f03432458f5fcb965fe4723\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13649086edb9d7ffe20a9040787d55449e6541a51f03432458f5fcb965fe4723\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wqgmk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:37Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:37 crc kubenswrapper[5017]: I0129 06:35:37.644345 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b8abb3dbc845f89735c58741f02550053402975d5edfe5e69f2323cc82bb8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7b335d7b6120621d4c94dfd8cdc86561a91d9c84a9481d352042d7728d7f6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:37Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:37 crc kubenswrapper[5017]: I0129 06:35:37.704181 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:37Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:37 crc kubenswrapper[5017]: I0129 06:35:37.722907 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c67cd79-8431-401b-8f03-9387813b30ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe28ec8d64ea50ecca5d4531440159b5277aff7de80b18813bb91f5c32923326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a82e8858b15c840a63737058d3af382c1a6469d7d1d3ba6e44759f33e55f2543\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a303c0532bb4a555700875ed27e56205d709658417a72f47868fb7bc8291a77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ef12beaf08b33d7694b3ea5a4705e37ec4a5b53f9cf1a7e0dd41a78d99bc8f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27dd6068095b58038216d1837a6b7966c514c7d8a9db5d144b043d918b757394\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T06:35:33Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 06:35:28.003940 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 06:35:28.004898 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1201671994/tls.crt::/tmp/serving-cert-1201671994/tls.key\\\\\\\"\\\\nI0129 06:35:33.575231 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 06:35:33.579536 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 06:35:33.579564 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 06:35:33.579587 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 06:35:33.579594 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 06:35:33.587038 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 06:35:33.587075 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 06:35:33.587080 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 06:35:33.587084 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 06:35:33.587087 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 06:35:33.587091 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 06:35:33.587094 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 06:35:33.587049 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 06:35:33.588897 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c256463618132e5305eb6fed3ef923e6a3c200bbc4b917e7f678e1c78af05084\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f97561ea3b9baa3440e5148060c80d57e9cf17837e7799b057f618d9084e8a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f97561ea3b9baa3440e5148060c80d57e9cf17837e7799b057f618d9084e8a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:37Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:37 crc kubenswrapper[5017]: I0129 06:35:37.755594 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:37Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:37 crc kubenswrapper[5017]: I0129 06:35:37.796915 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:37Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:37 crc kubenswrapper[5017]: I0129 06:35:37.822971 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 06:35:37 crc kubenswrapper[5017]: E0129 06:35:37.823237 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 06:35:41.823190479 +0000 UTC m=+28.197638099 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:35:37 crc kubenswrapper[5017]: I0129 06:35:37.838102 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b8abb3dbc845f89735c58741f02550053402975d5edfe5e69f2323cc82bb8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7b335d7b6120621d4c94dfd8cdc86561a91d9c84a9481d352042d7728d7f6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:37Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:37 crc kubenswrapper[5017]: I0129 06:35:37.875835 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:37Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:37 crc kubenswrapper[5017]: I0129 06:35:37.923910 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 06:35:37 crc kubenswrapper[5017]: I0129 06:35:37.924127 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 06:35:37 crc kubenswrapper[5017]: E0129 06:35:37.924082 5017 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 06:35:37 crc kubenswrapper[5017]: E0129 06:35:37.924179 5017 projected.go:288] Couldn't get configMap 
openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 06:35:37 crc kubenswrapper[5017]: E0129 06:35:37.924191 5017 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 06:35:37 crc kubenswrapper[5017]: E0129 06:35:37.924236 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 06:35:41.924222794 +0000 UTC m=+28.298670404 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 06:35:37 crc kubenswrapper[5017]: I0129 06:35:37.924267 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 06:35:37 crc kubenswrapper[5017]: I0129 06:35:37.924298 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 06:35:37 crc kubenswrapper[5017]: E0129 06:35:37.924309 5017 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 06:35:37 crc kubenswrapper[5017]: E0129 06:35:37.924424 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 06:35:41.924417198 +0000 UTC m=+28.298864808 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 06:35:37 crc kubenswrapper[5017]: E0129 06:35:37.924363 5017 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 06:35:37 crc kubenswrapper[5017]: E0129 06:35:37.924442 5017 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 06:35:37 crc kubenswrapper[5017]: E0129 06:35:37.924450 5017 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 06:35:37 crc kubenswrapper[5017]: E0129 06:35:37.924480 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 06:35:41.92447405 +0000 UTC m=+28.298921660 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 06:35:37 crc kubenswrapper[5017]: E0129 06:35:37.924490 5017 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 06:35:37 crc kubenswrapper[5017]: E0129 06:35:37.924609 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 06:35:41.924579992 +0000 UTC m=+28.299027612 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 06:35:37 crc kubenswrapper[5017]: I0129 06:35:37.925206 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c67cd79-8431-401b-8f03-9387813b30ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe28ec8d64ea50ecca5d4531440159b5277aff7de80b18813bb91f5c32923326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a82e8858b15c840a63737058d3af382c1a6469d7d1d3ba6e44759f33e55f2543\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a303c0532bb4a555700875ed27e56205d709658417a72f47868fb7bc8291a77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f
36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ef12beaf08b33d7694b3ea5a4705e37ec4a5b53f9cf1a7e0dd41a78d99bc8f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27dd6068095b58038216d1837a6b7966c514c7d8a9db5d144b043d918b757394\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T06:35:33Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 06:35:28.003940 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 06:35:28.004898 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1201671994/tls.crt::/tmp/serving-cert-1201671994/tls.key\\\\\\\"\\\\nI0129 06:35:33.575231 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 06:35:33.579536 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 06:35:33.579564 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 06:35:33.579587 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 06:35:33.579594 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 06:35:33.587038 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 06:35:33.587075 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 06:35:33.587080 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 06:35:33.587084 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 06:35:33.587087 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 06:35:33.587091 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 06:35:33.587094 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 06:35:33.587049 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 06:35:33.588897 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c256463618132e5305eb6fed3ef923e6a3c200bbc4b917e7f678e1c78af05084\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f97561ea3b9baa3440e5148060c80d57e9cf17837e7799b057f618d9084e8a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f97561ea3b9baa3440e5148060c80d57e9cf17837e7799b057f618d9084e8a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:37Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:37 crc kubenswrapper[5017]: I0129 06:35:37.962441 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:37Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:37 crc kubenswrapper[5017]: I0129 06:35:37.997113 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:37Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:38 crc kubenswrapper[5017]: I0129 06:35:38.037257 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a2999ed-a16c-49e3-a07a-3b084d6b8616\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ff1f2d36640f2aebd2cd6919987f3f9000d95a8cefac7844a0671da192b1897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1045c0f8fc7c60a5aa6d2e2b449230e4603b08b0b4244c319e296f89b04ab98d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"r
esource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27acd0133bf699b73534766330da62590b4231608bd96e51bf01b06b5775073d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fc74339c29860f59a280af0f3e21a0f4fa2be4a3e28eb63d90bc63854805d16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:38Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:38 crc kubenswrapper[5017]: I0129 06:35:38.074090 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ce95c49452e7355245f4ec2716c723873a116e2523b8192efe93cfb369260f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:38Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:38 crc kubenswrapper[5017]: I0129 06:35:38.118331 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vwppb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb597cc2-fff7-4d90-a43e-958791d83324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f41b8b485793972343992b2ec30940b852ff5f355e045225e8813683e9f9a3c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g28w2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vwppb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:38Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:38 crc kubenswrapper[5017]: I0129 06:35:38.155753 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-895pl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2672ef63-7861-4c3d-a1b4-03cc9d18f8e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://884bf08569779a1b94a927f1250657ab4d855e78f451a826856022a7c62cd6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5qdt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd299f15e4338652a37e1e3f09560a50e6c59d7bf29e827462d060a56294438\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5qdt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-895pl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:38Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:38 crc kubenswrapper[5017]: I0129 06:35:38.194813 5017 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-qwq46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4d3122b-b4b4-41ac-896a-566afdcda936\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45817069aa7e09fd97b77922f9b5a651f2a067991115021c0a85159a93d523af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z89l9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qwq46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:38Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:38 crc kubenswrapper[5017]: I0129 06:35:38.248192 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-m2gbd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4036e581-21bf-4ea0-aaf5-84ab8a841888\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7399e8e88798a89139226e395d22a2aa61370296a3789f63d766f42ea1c9f4df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7399e8e88798a89139226e395d22a2aa61370296a3789f63d766f42ea1c9f4df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9418ae41c02e41fb3aeb7ce97e7d7b91f9a193ac23cbfe692cc79b071aa3adca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9418ae41c02e41fb3aeb7ce97e7d7b91f9a193ac23cbfe692cc79b071aa3adca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:36Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":
{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-m2gbd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:38Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:38 crc kubenswrapper[5017]: I0129 06:35:38.277690 5017 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 17:25:43.36636414 +0000 UTC Jan 29 06:35:38 crc kubenswrapper[5017]: I0129 06:35:38.281156 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bf710b24256eeb9149ca710dd9fcceae762df89486e0cf0f341c278eadda2fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:38Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:38 crc 
kubenswrapper[5017]: I0129 06:35:38.315733 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 06:35:38 crc kubenswrapper[5017]: I0129 06:35:38.315794 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 06:35:38 crc kubenswrapper[5017]: I0129 06:35:38.315821 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 06:35:38 crc kubenswrapper[5017]: E0129 06:35:38.315875 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 06:35:38 crc kubenswrapper[5017]: E0129 06:35:38.315947 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 06:35:38 crc kubenswrapper[5017]: E0129 06:35:38.316040 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 06:35:38 crc kubenswrapper[5017]: I0129 06:35:38.326161 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9jkcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ae056f0-e054-45da-9638-73074b7c8a3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17d52fc64784408e447773d44de6e83d32029a77073a0c1ee206a51d81eda62f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x84hz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":
\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9jkcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:38Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:38 crc kubenswrapper[5017]: I0129 06:35:38.361559 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02dd5727-894c-4693-9bc7-83dd88ce118c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13649086edb9d7ffe20a9040787d55449e6541a51f03432458f5fcb965fe4723\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13649086edb9d7ffe20a9040787d55449e6541a51f03432458f5fcb965fe4723\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wqgmk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:38Z 
is after 2025-08-24T17:21:41Z" Jan 29 06:35:38 crc kubenswrapper[5017]: I0129 06:35:38.498800 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" event={"ID":"02dd5727-894c-4693-9bc7-83dd88ce118c","Type":"ContainerStarted","Data":"a1fb588776e0029263904fa0ec3afa53e180b30843864f1dee6999e1cca64b18"} Jan 29 06:35:38 crc kubenswrapper[5017]: I0129 06:35:38.498851 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" event={"ID":"02dd5727-894c-4693-9bc7-83dd88ce118c","Type":"ContainerStarted","Data":"fc8fc0d9e440c8725e279e6649d3c1e2cdc55589f6b5786187602576e4acd835"} Jan 29 06:35:38 crc kubenswrapper[5017]: I0129 06:35:38.501203 5017 generic.go:334] "Generic (PLEG): container finished" podID="4036e581-21bf-4ea0-aaf5-84ab8a841888" containerID="d1065c2c8660d096dba79585cd7d954ae31491d258129e132b1e0cb1218bc849" exitCode=0 Jan 29 06:35:38 crc kubenswrapper[5017]: I0129 06:35:38.501253 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-m2gbd" event={"ID":"4036e581-21bf-4ea0-aaf5-84ab8a841888","Type":"ContainerDied","Data":"d1065c2c8660d096dba79585cd7d954ae31491d258129e132b1e0cb1218bc849"} Jan 29 06:35:38 crc kubenswrapper[5017]: I0129 06:35:38.522057 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bf710b24256eeb9149ca710dd9fcceae762df89486e0cf0f341c278eadda2fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:38Z 
is after 2025-08-24T17:21:41Z" Jan 29 06:35:38 crc kubenswrapper[5017]: I0129 06:35:38.539576 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9jkcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ae056f0-e054-45da-9638-73074b7c8a3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17d52fc64784408e447773d44de6e83d32029a77073a0c1ee206a51d81eda62f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x84hz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\
",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9jkcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:38Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:38 crc kubenswrapper[5017]: I0129 06:35:38.559783 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02dd5727-894c-4693-9bc7-83dd88ce118c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13649086edb9d7ffe20a9040787d55449e6541a51f03432458f5fcb965fe4723\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13649086edb9d7ffe20a9040787d55449e6541a51f03432458f5fcb965fe4723\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wqgmk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:38Z 
is after 2025-08-24T17:21:41Z" Jan 29 06:35:38 crc kubenswrapper[5017]: I0129 06:35:38.571605 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b8abb3dbc845f89735c58741f02550053402975d5edfe5e69f2323cc82bb8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7b335d7b6120621d4c94dfd8cdc86561a91d9c84a9481d352042d7728d7f6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:38Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:38 crc kubenswrapper[5017]: I0129 06:35:38.586603 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:38Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:38 crc kubenswrapper[5017]: I0129 06:35:38.600580 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c67cd79-8431-401b-8f03-9387813b30ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe28ec8d64ea50ecca5d4531440159b5277aff7de80b18813bb91f5c32923326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a82e8858b15c840a63737058d3af382c1a6469d7d1d3ba6e44759f33e55f2543\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a303c0532bb4a555700875ed27e56205d709658417a72f47868fb7bc8291a77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ef12beaf08b33d7694b3ea5a4705e37ec4a5b53f9cf1a7e0dd41a78d99bc8f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27dd6068095b58038216d1837a6b7966c514c7d8a9db5d144b043d918b757394\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T06:35:33Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 06:35:28.003940 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 06:35:28.004898 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1201671994/tls.crt::/tmp/serving-cert-1201671994/tls.key\\\\\\\"\\\\nI0129 06:35:33.575231 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 06:35:33.579536 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 06:35:33.579564 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 06:35:33.579587 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 06:35:33.579594 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 06:35:33.587038 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 06:35:33.587075 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 06:35:33.587080 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 06:35:33.587084 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 06:35:33.587087 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 06:35:33.587091 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 06:35:33.587094 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 06:35:33.587049 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 06:35:33.588897 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c256463618132e5305eb6fed3ef923e6a3c200bbc4b917e7f678e1c78af05084\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f97561ea3b9baa3440e5148060c80d57e9cf17837e7799b057f618d9084e8a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f97561ea3b9baa3440e5148060c80d57e9cf17837e7799b057f618d9084e8a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:38Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:38 crc kubenswrapper[5017]: I0129 06:35:38.635626 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:38Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:38 crc kubenswrapper[5017]: I0129 06:35:38.680127 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:38Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:38 crc kubenswrapper[5017]: I0129 06:35:38.717668 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a2999ed-a16c-49e3-a07a-3b084d6b8616\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ff1f2d36640f2aebd2cd6919987f3f9000d95a8cefac7844a0671da192b1897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1045c0f8fc7c60a5aa6d2e2b449230e4603b08b0b4244c319e296f89b04ab98d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"r
esource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27acd0133bf699b73534766330da62590b4231608bd96e51bf01b06b5775073d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fc74339c29860f59a280af0f3e21a0f4fa2be4a3e28eb63d90bc63854805d16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:38Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:38 crc kubenswrapper[5017]: I0129 06:35:38.754316 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ce95c49452e7355245f4ec2716c723873a116e2523b8192efe93cfb369260f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:38Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:38 crc kubenswrapper[5017]: I0129 06:35:38.792581 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vwppb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb597cc2-fff7-4d90-a43e-958791d83324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f41b8b485793972343992b2ec30940b852ff5f355e045225e8813683e9f9a3c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g28w2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vwppb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:38Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:38 crc kubenswrapper[5017]: I0129 06:35:38.834316 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-895pl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2672ef63-7861-4c3d-a1b4-03cc9d18f8e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://884bf08569779a1b94a927f1250657ab4d855e78f451a826856022a7c62cd6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5qdt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd299f15e4338652a37e1e3f09560a50e6c59d7bf29e827462d060a56294438\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5qdt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-895pl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:38Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:38 crc kubenswrapper[5017]: I0129 06:35:38.872455 5017 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-qwq46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4d3122b-b4b4-41ac-896a-566afdcda936\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45817069aa7e09fd97b77922f9b5a651f2a067991115021c0a85159a93d523af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z89l9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qwq46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:38Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:38 crc kubenswrapper[5017]: I0129 06:35:38.917169 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-m2gbd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4036e581-21bf-4ea0-aaf5-84ab8a841888\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7399e8e88798a89139226e395d22a2aa61370296a3789f63d766f42ea1c9f4df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7399e8e88798a89139226e395d22a2aa61370296a3789f63d766f42ea1c9f4df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9418ae41c02e41fb3aeb7ce97e7d7b91f9a193ac23cbfe692cc79b071aa3adca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9418ae41c02e41fb3aeb7ce97e7d7b91f9a193ac23cbfe692cc79b071aa3adca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:36Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1065c2c8660d096dba79585cd7d954ae31491d258129e132b1e0cb1218bc849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1065c2c8660d096dba79585cd7d954ae31491d258129e132b1e0cb1218bc849\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-m2gbd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:38Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:39 crc kubenswrapper[5017]: I0129 06:35:39.278775 5017 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 05:28:19.792211963 +0000 UTC Jan 29 06:35:39 crc kubenswrapper[5017]: I0129 06:35:39.507557 5017 generic.go:334] "Generic (PLEG): container finished" podID="4036e581-21bf-4ea0-aaf5-84ab8a841888" containerID="eb8f183765b578cd4362f3d3c13e20b8cb70aec46adbcdb750164e91e2bd53e8" exitCode=0 Jan 29 06:35:39 crc kubenswrapper[5017]: I0129 06:35:39.507637 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-m2gbd" event={"ID":"4036e581-21bf-4ea0-aaf5-84ab8a841888","Type":"ContainerDied","Data":"eb8f183765b578cd4362f3d3c13e20b8cb70aec46adbcdb750164e91e2bd53e8"} Jan 29 06:35:39 crc kubenswrapper[5017]: I0129 06:35:39.523054 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-895pl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2672ef63-7861-4c3d-a1b4-03cc9d18f8e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://884bf08569779a1b94a927f1250657ab4d855e78f451a826856022a7c62cd6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5qdt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd299f15e4338652a37e1e3f09560a50e6c59d7bf29e827462d060a56294438\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5qdt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-895pl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:39Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:39 crc kubenswrapper[5017]: I0129 06:35:39.535132 5017 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-qwq46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4d3122b-b4b4-41ac-896a-566afdcda936\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45817069aa7e09fd97b77922f9b5a651f2a067991115021c0a85159a93d523af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z89l9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qwq46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:39Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:39 crc kubenswrapper[5017]: I0129 06:35:39.550385 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-m2gbd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4036e581-21bf-4ea0-aaf5-84ab8a841888\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7399e8e88798a89139226e395d22a2aa61370296a3789f63d766f42ea1c9f4df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7399e8e88798a89139226e395d22a2aa61370296a3789f63d766f42ea1c9f4df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9418ae41c02e41fb3aeb7ce97e7d7b91f9a193ac23cbfe692cc79b071aa3adca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9418ae41c02e41fb3aeb7ce97e7d7b91f9a193ac23cbfe692cc79b071aa3adca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:36Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1065c2c8660d096dba79585cd7d954ae31491d258129e132b1e0cb1218bc849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1065c2c8660d096dba79585cd7d954ae31491d258129e132b1e0cb1218bc849\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb8f183765b578cd4362f3d3c13e20b8cb70aec46adbcdb750164e91e2bd53e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb8f183765b578cd4362f3d3c13e20b8cb70aec46adbcdb750164e91e2bd53e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"w
aiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-m2gbd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:39Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:39 crc kubenswrapper[5017]: I0129 06:35:39.566491 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bf710b24256eeb9149ca710dd9fcceae762df89486e0cf0f341c278eadda2fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:39Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:39 crc kubenswrapper[5017]: I0129 06:35:39.585716 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9jkcd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ae056f0-e054-45da-9638-73074b7c8a3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17d52fc64784408e447773d44de6e83d32029a77073a0c1ee206a51d81eda62f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x84hz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9jkcd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:39Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:39 crc kubenswrapper[5017]: I0129 06:35:39.603361 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02dd5727-894c-4693-9bc7-83dd88ce118c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\"
,\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13649086edb9d7ffe20a9040787d55449e6541a51f03432458f5fcb965fe4723\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13649086edb9d7ffe20a9040787d55449e6541a51f03432458f5fcb965fe4723\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wqgmk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:39Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:39 crc kubenswrapper[5017]: I0129 06:35:39.615600 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b8abb3dbc845f89735c58741f02550053402975d5edfe5e69f2323cc82bb8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7b335d7b6120621d4c94dfd8cdc86561a91d9c84a9481d352042d7728d7f6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:39Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:39 crc kubenswrapper[5017]: I0129 06:35:39.628709 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:39Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:39 crc kubenswrapper[5017]: I0129 06:35:39.647844 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c67cd79-8431-401b-8f03-9387813b30ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe28ec8d64ea50ecca5d4531440159b5277aff7de80b18813bb91f5c32923326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a82e8858b15c840a63737058d3af382c1a6469d7d1d3ba6e44759f33e55f2543\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a303c0532bb4a555700875ed27e56205d709658417a72f47868fb7bc8291a77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ef12beaf08b33d7694b3ea5a4705e37ec4a5b53f9cf1a7e0dd41a78d99bc8f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27dd6068095b58038216d1837a6b7966c514c7d8a9db5d144b043d918b757394\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T06:35:33Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 06:35:28.003940 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 06:35:28.004898 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1201671994/tls.crt::/tmp/serving-cert-1201671994/tls.key\\\\\\\"\\\\nI0129 06:35:33.575231 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 06:35:33.579536 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 06:35:33.579564 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 06:35:33.579587 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 06:35:33.579594 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 06:35:33.587038 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 06:35:33.587075 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 06:35:33.587080 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 06:35:33.587084 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 06:35:33.587087 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 06:35:33.587091 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 06:35:33.587094 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 06:35:33.587049 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 06:35:33.588897 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c256463618132e5305eb6fed3ef923e6a3c200bbc4b917e7f678e1c78af05084\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f97561ea3b9baa3440e5148060c80d57e9cf17837e7799b057f618d9084e8a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f97561ea3b9baa3440e5148060c80d57e9cf17837e7799b057f618d9084e8a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:39Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:39 crc kubenswrapper[5017]: I0129 06:35:39.666092 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:39Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:39 crc kubenswrapper[5017]: I0129 06:35:39.678717 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:39Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:39 crc kubenswrapper[5017]: I0129 06:35:39.694014 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a2999ed-a16c-49e3-a07a-3b084d6b8616\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ff1f2d36640f2aebd2cd6919987f3f9000d95a8cefac7844a0671da192b1897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1045c0f8fc7c60a5aa6d2e2b449230e4603b08b0b4244c319e296f89b04ab98d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"r
esource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27acd0133bf699b73534766330da62590b4231608bd96e51bf01b06b5775073d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fc74339c29860f59a280af0f3e21a0f4fa2be4a3e28eb63d90bc63854805d16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:39Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:39 crc kubenswrapper[5017]: I0129 06:35:39.706013 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ce95c49452e7355245f4ec2716c723873a116e2523b8192efe93cfb369260f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:39Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:39 crc kubenswrapper[5017]: I0129 06:35:39.718590 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vwppb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb597cc2-fff7-4d90-a43e-958791d83324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f41b8b485793972343992b2ec30940b852ff5f355e045225e8813683e9f9a3c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g28w2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vwppb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:39Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:39 crc kubenswrapper[5017]: I0129 06:35:39.959552 5017 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 06:35:39 crc kubenswrapper[5017]: I0129 06:35:39.961498 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:39 crc kubenswrapper[5017]: I0129 06:35:39.961550 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:39 crc kubenswrapper[5017]: I0129 06:35:39.961566 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:39 crc kubenswrapper[5017]: I0129 06:35:39.961650 5017 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 29 06:35:39 crc kubenswrapper[5017]: I0129 06:35:39.967382 5017 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 29 06:35:39 crc kubenswrapper[5017]: I0129 06:35:39.967749 
5017 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 29 06:35:39 crc kubenswrapper[5017]: I0129 06:35:39.969074 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:39 crc kubenswrapper[5017]: I0129 06:35:39.969110 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:39 crc kubenswrapper[5017]: I0129 06:35:39.969119 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:39 crc kubenswrapper[5017]: I0129 06:35:39.969140 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:39 crc kubenswrapper[5017]: I0129 06:35:39.969154 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:39Z","lastTransitionTime":"2026-01-29T06:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:39 crc kubenswrapper[5017]: E0129 06:35:39.987352 5017 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:35:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:35:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:35:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:35:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b652416b-993f-4447-94ce-fb2ce8447cbe\\\",\\\"systemUUID\\\":\\\"6f20d4ce-100b-4db6-aef5-b7c5d1dcba49\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:39Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:39 crc kubenswrapper[5017]: I0129 06:35:39.991640 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:39 crc kubenswrapper[5017]: I0129 06:35:39.991690 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 06:35:39 crc kubenswrapper[5017]: I0129 06:35:39.991705 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:39 crc kubenswrapper[5017]: I0129 06:35:39.991730 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:39 crc kubenswrapper[5017]: I0129 06:35:39.991746 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:39Z","lastTransitionTime":"2026-01-29T06:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:40 crc kubenswrapper[5017]: E0129 06:35:40.006838 5017 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:35:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:35:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:35:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:35:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b652416b-993f-4447-94ce-fb2ce8447cbe\\\",\\\"systemUUID\\\":\\\"6f20d4ce-100b-4db6-aef5-b7c5d1dcba49\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:40Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:40 crc kubenswrapper[5017]: I0129 06:35:40.011012 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:40 crc kubenswrapper[5017]: I0129 06:35:40.011082 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 06:35:40 crc kubenswrapper[5017]: I0129 06:35:40.011101 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:40 crc kubenswrapper[5017]: I0129 06:35:40.011124 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:40 crc kubenswrapper[5017]: I0129 06:35:40.011137 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:40Z","lastTransitionTime":"2026-01-29T06:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:40 crc kubenswrapper[5017]: E0129 06:35:40.026000 5017 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:35:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:35:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:35:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:35:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b652416b-993f-4447-94ce-fb2ce8447cbe\\\",\\\"systemUUID\\\":\\\"6f20d4ce-100b-4db6-aef5-b7c5d1dcba49\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:40Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:40 crc kubenswrapper[5017]: I0129 06:35:40.031005 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:40 crc kubenswrapper[5017]: I0129 06:35:40.031127 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 06:35:40 crc kubenswrapper[5017]: I0129 06:35:40.031222 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:40 crc kubenswrapper[5017]: I0129 06:35:40.031308 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:40 crc kubenswrapper[5017]: I0129 06:35:40.031374 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:40Z","lastTransitionTime":"2026-01-29T06:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:40 crc kubenswrapper[5017]: E0129 06:35:40.043315 5017 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:35:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:35:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:35:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:35:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b652416b-993f-4447-94ce-fb2ce8447cbe\\\",\\\"systemUUID\\\":\\\"6f20d4ce-100b-4db6-aef5-b7c5d1dcba49\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:40Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:40 crc kubenswrapper[5017]: I0129 06:35:40.047009 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:40 crc kubenswrapper[5017]: I0129 06:35:40.047055 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 06:35:40 crc kubenswrapper[5017]: I0129 06:35:40.047067 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:40 crc kubenswrapper[5017]: I0129 06:35:40.047084 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:40 crc kubenswrapper[5017]: I0129 06:35:40.047096 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:40Z","lastTransitionTime":"2026-01-29T06:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:40 crc kubenswrapper[5017]: E0129 06:35:40.057713 5017 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:35:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:35:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:35:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:35:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b652416b-993f-4447-94ce-fb2ce8447cbe\\\",\\\"systemUUID\\\":\\\"6f20d4ce-100b-4db6-aef5-b7c5d1dcba49\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:40Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:40 crc kubenswrapper[5017]: E0129 06:35:40.057899 5017 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 06:35:40 crc kubenswrapper[5017]: I0129 06:35:40.059648 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 29 06:35:40 crc kubenswrapper[5017]: I0129 06:35:40.059715 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:40 crc kubenswrapper[5017]: I0129 06:35:40.059728 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:40 crc kubenswrapper[5017]: I0129 06:35:40.059747 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:40 crc kubenswrapper[5017]: I0129 06:35:40.059757 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:40Z","lastTransitionTime":"2026-01-29T06:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:40 crc kubenswrapper[5017]: I0129 06:35:40.162210 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:40 crc kubenswrapper[5017]: I0129 06:35:40.162250 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:40 crc kubenswrapper[5017]: I0129 06:35:40.162263 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:40 crc kubenswrapper[5017]: I0129 06:35:40.162283 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:40 crc kubenswrapper[5017]: I0129 06:35:40.162300 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:40Z","lastTransitionTime":"2026-01-29T06:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:40 crc kubenswrapper[5017]: I0129 06:35:40.265308 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:40 crc kubenswrapper[5017]: I0129 06:35:40.265361 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:40 crc kubenswrapper[5017]: I0129 06:35:40.265372 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:40 crc kubenswrapper[5017]: I0129 06:35:40.265392 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:40 crc kubenswrapper[5017]: I0129 06:35:40.265408 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:40Z","lastTransitionTime":"2026-01-29T06:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:35:40 crc kubenswrapper[5017]: I0129 06:35:40.279676 5017 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 13:45:26.175757945 +0000 UTC Jan 29 06:35:40 crc kubenswrapper[5017]: I0129 06:35:40.315644 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 06:35:40 crc kubenswrapper[5017]: I0129 06:35:40.315672 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 06:35:40 crc kubenswrapper[5017]: I0129 06:35:40.315725 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 06:35:40 crc kubenswrapper[5017]: E0129 06:35:40.315829 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 06:35:40 crc kubenswrapper[5017]: E0129 06:35:40.315930 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 06:35:40 crc kubenswrapper[5017]: E0129 06:35:40.316158 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 06:35:40 crc kubenswrapper[5017]: I0129 06:35:40.368321 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:40 crc kubenswrapper[5017]: I0129 06:35:40.368370 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:40 crc kubenswrapper[5017]: I0129 06:35:40.368379 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:40 crc kubenswrapper[5017]: I0129 06:35:40.368395 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:40 crc kubenswrapper[5017]: I0129 06:35:40.368405 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:40Z","lastTransitionTime":"2026-01-29T06:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:35:40 crc kubenswrapper[5017]: I0129 06:35:40.471465 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:40 crc kubenswrapper[5017]: I0129 06:35:40.471520 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:40 crc kubenswrapper[5017]: I0129 06:35:40.471540 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:40 crc kubenswrapper[5017]: I0129 06:35:40.471563 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:40 crc kubenswrapper[5017]: I0129 06:35:40.471582 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:40Z","lastTransitionTime":"2026-01-29T06:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:40 crc kubenswrapper[5017]: I0129 06:35:40.515911 5017 generic.go:334] "Generic (PLEG): container finished" podID="4036e581-21bf-4ea0-aaf5-84ab8a841888" containerID="23ac8b597fcf07378d69cd788cf4f79e236218e22a58ffaa5a72688e650067c6" exitCode=0 Jan 29 06:35:40 crc kubenswrapper[5017]: I0129 06:35:40.515993 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-m2gbd" event={"ID":"4036e581-21bf-4ea0-aaf5-84ab8a841888","Type":"ContainerDied","Data":"23ac8b597fcf07378d69cd788cf4f79e236218e22a58ffaa5a72688e650067c6"} Jan 29 06:35:40 crc kubenswrapper[5017]: I0129 06:35:40.523721 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" event={"ID":"02dd5727-894c-4693-9bc7-83dd88ce118c","Type":"ContainerStarted","Data":"f1e341b7acf35edc3ee9e5a4fec11e2270d45ea0f05df30543d2c9163b024646"} Jan 29 06:35:40 crc kubenswrapper[5017]: I0129 06:35:40.531933 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-895pl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2672ef63-7861-4c3d-a1b4-03cc9d18f8e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://884bf08569779a1b94a927f1250657ab4d855e78f451a826856022a7c62cd6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5qdt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd299f15e4338652a37e1e3f09560a50e6c59d7bf29e827462d060a56294438\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5qdt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-895pl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:40Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:40 crc kubenswrapper[5017]: I0129 06:35:40.558655 5017 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-qwq46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4d3122b-b4b4-41ac-896a-566afdcda936\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45817069aa7e09fd97b77922f9b5a651f2a067991115021c0a85159a93d523af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z89l9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qwq46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:40Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:40 crc kubenswrapper[5017]: I0129 06:35:40.575183 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:40 crc kubenswrapper[5017]: I0129 06:35:40.575223 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:40 crc kubenswrapper[5017]: I0129 06:35:40.575234 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:40 crc kubenswrapper[5017]: I0129 06:35:40.575250 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:40 crc kubenswrapper[5017]: I0129 06:35:40.575262 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:40Z","lastTransitionTime":"2026-01-29T06:35:40Z","reason":"KubeletNotReady","message":"container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:40 crc kubenswrapper[5017]: I0129 06:35:40.583419 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-m2gbd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4036e581-21bf-4ea0-aaf5-84ab8a841888\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7399e8e88798a89139226e395d22a2aa61370296a3789f63d766f42ea1c9f4df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7399e8e88798a89139226e395d22a2aa61370296a3789f63d766f42ea1c9f4df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9418ae41c02e41fb3aeb7ce97e7d7b91f9a193ac23cbfe692cc79b071aa3adca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9418ae41c02e41fb3aeb7ce97e7d7b91f9a193ac23cbfe692cc79b071aa3adca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1065c2c8660d096dba79585cd7d954ae31491d258129e132b1e0cb1218bc849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1065c2c8660d096dba79585cd7d954ae31491d258129e132b1e0cb1218bc849\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb8f183765b578cd4362f3d3c13e20b8cb70aec46adbcdb750164e91e2bd53e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o:/
/eb8f183765b578cd4362f3d3c13e20b8cb70aec46adbcdb750164e91e2bd53e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23ac8b597fcf07378d69cd788cf4f79e236218e22a58ffaa5a72688e650067c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23ac8b597fcf07378d69cd788cf4f79e236218e22a58ffaa5a72688e650067c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-m2gbd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:40Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:40 crc kubenswrapper[5017]: I0129 06:35:40.596567 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bf710b24256eeb9149ca710dd9fcceae762df89486e0cf0f341c278eadda2fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:40Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:40 crc kubenswrapper[5017]: I0129 06:35:40.611147 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9jkcd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ae056f0-e054-45da-9638-73074b7c8a3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17d52fc64784408e447773d44de6e83d32029a77073a0c1ee206a51d81eda62f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x84hz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9jkcd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:40Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:40 crc kubenswrapper[5017]: I0129 06:35:40.632676 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02dd5727-894c-4693-9bc7-83dd88ce118c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\"
,\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13649086edb9d7ffe20a9040787d55449e6541a51f03432458f5fcb965fe4723\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13649086edb9d7ffe20a9040787d55449e6541a51f03432458f5fcb965fe4723\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wqgmk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:40Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:40 crc kubenswrapper[5017]: I0129 06:35:40.651325 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b8abb3dbc845f89735c58741f02550053402975d5edfe5e69f2323cc82bb8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7b335d7b6120621d4c94dfd8cdc86561a91d9c84a9481d352042d7728d7f6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:40Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:40 crc kubenswrapper[5017]: I0129 06:35:40.666718 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:40Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:40 crc kubenswrapper[5017]: I0129 06:35:40.684339 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:40 crc kubenswrapper[5017]: I0129 06:35:40.684389 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:40 crc kubenswrapper[5017]: I0129 06:35:40.684399 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:40 crc kubenswrapper[5017]: I0129 06:35:40.684416 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:40 crc kubenswrapper[5017]: I0129 06:35:40.684427 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:40Z","lastTransitionTime":"2026-01-29T06:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:35:40 crc kubenswrapper[5017]: I0129 06:35:40.684432 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c67cd79-8431-401b-8f03-9387813b30ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe28ec8d64ea50ecca5d4531440159b5277aff7de80b18813bb91f5c32923326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a82e8858b15c840a63737058d3af382c1a6469d7d1d3ba6e44759f33e55f2543\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a303c0532bb4a555700875ed27e56205d709658417a72f47868fb7bc8291a77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ef12beaf08b33d7694b3ea5a4705e37ec4a5b53f9cf1a7e0dd41a78d99bc8f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27dd6068095b58038216d1837a6b7966c514c7d8a9db5d144b043d918b757394\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T06:35:33Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 06:35:28.003940 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 06:35:28.004898 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1201671994/tls.crt::/tmp/serving-cert-1201671994/tls.key\\\\\\\"\\\\nI0129 06:35:33.575231 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 06:35:33.579536 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 06:35:33.579564 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 06:35:33.579587 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 06:35:33.579594 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 06:35:33.587038 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 06:35:33.587075 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 06:35:33.587080 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 06:35:33.587084 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 06:35:33.587087 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 06:35:33.587091 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 06:35:33.587094 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 06:35:33.587049 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 06:35:33.588897 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c256463618132e5305eb6fed3ef923e6a3c200bbc4b917e7f678e1c78af05084\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f97561ea3b9baa3440e5148060c80d57e9cf17837e7799b057f618d9084e8a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f97561ea3b9baa3440e5148060c80d57e9cf17837e7799b057f618d9084e8a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:40Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:40 crc kubenswrapper[5017]: I0129 06:35:40.703575 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:40Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:40 crc kubenswrapper[5017]: I0129 06:35:40.715989 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:40Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:40 crc kubenswrapper[5017]: I0129 06:35:40.727904 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a2999ed-a16c-49e3-a07a-3b084d6b8616\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ff1f2d36640f2aebd2cd6919987f3f9000d95a8cefac7844a0671da192b1897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1045c0f8fc7c60a5aa6d2e2b449230e4603b08b0b4244c319e296f89b04ab98d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"r
esource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27acd0133bf699b73534766330da62590b4231608bd96e51bf01b06b5775073d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fc74339c29860f59a280af0f3e21a0f4fa2be4a3e28eb63d90bc63854805d16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:40Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:40 crc kubenswrapper[5017]: I0129 06:35:40.738538 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ce95c49452e7355245f4ec2716c723873a116e2523b8192efe93cfb369260f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:40Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:40 crc kubenswrapper[5017]: I0129 06:35:40.748114 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vwppb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb597cc2-fff7-4d90-a43e-958791d83324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f41b8b485793972343992b2ec30940b852ff5f355e045225e8813683e9f9a3c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g28w2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vwppb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:40Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:40 crc kubenswrapper[5017]: I0129 06:35:40.786737 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:40 crc kubenswrapper[5017]: I0129 06:35:40.786818 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:40 crc kubenswrapper[5017]: I0129 06:35:40.786830 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:40 crc kubenswrapper[5017]: I0129 06:35:40.786851 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:40 crc kubenswrapper[5017]: I0129 06:35:40.786866 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:40Z","lastTransitionTime":"2026-01-29T06:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:40 crc kubenswrapper[5017]: I0129 06:35:40.888815 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:40 crc kubenswrapper[5017]: I0129 06:35:40.888862 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:40 crc kubenswrapper[5017]: I0129 06:35:40.888870 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:40 crc kubenswrapper[5017]: I0129 06:35:40.888883 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:40 crc kubenswrapper[5017]: I0129 06:35:40.888891 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:40Z","lastTransitionTime":"2026-01-29T06:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:40 crc kubenswrapper[5017]: I0129 06:35:40.991921 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:40 crc kubenswrapper[5017]: I0129 06:35:40.991987 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:40 crc kubenswrapper[5017]: I0129 06:35:40.991999 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:40 crc kubenswrapper[5017]: I0129 06:35:40.992016 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:40 crc kubenswrapper[5017]: I0129 06:35:40.992025 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:40Z","lastTransitionTime":"2026-01-29T06:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:35:41 crc kubenswrapper[5017]: I0129 06:35:41.094235 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:41 crc kubenswrapper[5017]: I0129 06:35:41.094278 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:41 crc kubenswrapper[5017]: I0129 06:35:41.094289 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:41 crc kubenswrapper[5017]: I0129 06:35:41.094306 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:41 crc kubenswrapper[5017]: I0129 06:35:41.094319 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:41Z","lastTransitionTime":"2026-01-29T06:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:41 crc kubenswrapper[5017]: I0129 06:35:41.197129 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:41 crc kubenswrapper[5017]: I0129 06:35:41.197182 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:41 crc kubenswrapper[5017]: I0129 06:35:41.197199 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:41 crc kubenswrapper[5017]: I0129 06:35:41.197222 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:41 crc kubenswrapper[5017]: I0129 06:35:41.197242 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:41Z","lastTransitionTime":"2026-01-29T06:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:35:41 crc kubenswrapper[5017]: I0129 06:35:41.280775 5017 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 22:33:14.773172151 +0000 UTC Jan 29 06:35:41 crc kubenswrapper[5017]: I0129 06:35:41.299775 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:41 crc kubenswrapper[5017]: I0129 06:35:41.299823 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:41 crc kubenswrapper[5017]: I0129 06:35:41.299839 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:41 crc kubenswrapper[5017]: I0129 06:35:41.299858 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:41 crc kubenswrapper[5017]: I0129 06:35:41.299871 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:41Z","lastTransitionTime":"2026-01-29T06:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:41 crc kubenswrapper[5017]: I0129 06:35:41.402769 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:41 crc kubenswrapper[5017]: I0129 06:35:41.402815 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:41 crc kubenswrapper[5017]: I0129 06:35:41.402823 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:41 crc kubenswrapper[5017]: I0129 06:35:41.402837 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:41 crc kubenswrapper[5017]: I0129 06:35:41.402846 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:41Z","lastTransitionTime":"2026-01-29T06:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:35:41 crc kubenswrapper[5017]: I0129 06:35:41.506176 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:41 crc kubenswrapper[5017]: I0129 06:35:41.506220 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:41 crc kubenswrapper[5017]: I0129 06:35:41.506232 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:41 crc kubenswrapper[5017]: I0129 06:35:41.506247 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:41 crc kubenswrapper[5017]: I0129 06:35:41.506256 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:41Z","lastTransitionTime":"2026-01-29T06:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:41 crc kubenswrapper[5017]: I0129 06:35:41.533475 5017 generic.go:334] "Generic (PLEG): container finished" podID="4036e581-21bf-4ea0-aaf5-84ab8a841888" containerID="8548b649f7fcef84467fc98f1b3aacc576c363a1c83087971aafaf8787734901" exitCode=0 Jan 29 06:35:41 crc kubenswrapper[5017]: I0129 06:35:41.533558 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-m2gbd" event={"ID":"4036e581-21bf-4ea0-aaf5-84ab8a841888","Type":"ContainerDied","Data":"8548b649f7fcef84467fc98f1b3aacc576c363a1c83087971aafaf8787734901"} Jan 29 06:35:41 crc kubenswrapper[5017]: I0129 06:35:41.552555 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c67cd79-8431-401b-8f03-9387813b30ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe28ec8d64ea50ecca5d4531440159b5277aff7de80b18813bb91f5c32923326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a82e8858b15c840a63737058d3af382c1a6469d7d1d3ba6e44759f33e55f2543\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a303c0532bb4a555700875ed27e56205d709658417a72f47868fb7bc8291a77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ef12beaf08b33d7694b3ea5a4705e37ec4a5b53f9cf1a7e0dd41a78d99bc8f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27dd6068095b58038216d1837a6b7966c514c7d8a9db5d144b043d918b757394\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T06:35:33Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 06:35:28.003940 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 06:35:28.004898 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1201671994/tls.crt::/tmp/serving-cert-1201671994/tls.key\\\\\\\"\\\\nI0129 06:35:33.575231 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 06:35:33.579536 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 06:35:33.579564 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 06:35:33.579587 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 06:35:33.579594 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 06:35:33.587038 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 06:35:33.587075 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 06:35:33.587080 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 06:35:33.587084 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 06:35:33.587087 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 06:35:33.587091 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 06:35:33.587094 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 06:35:33.587049 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 06:35:33.588897 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c256463618132e5305eb6fed3ef923e6a3c200bbc4b917e7f678e1c78af05084\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f97561ea3b9baa3440e5148060c80d57e9cf17837e7799b057f618d9084e8a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f97561ea3b9baa3440e5148060c80d57e9cf17837e7799b057f618d9084e8a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:41Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:41 crc kubenswrapper[5017]: I0129 06:35:41.570719 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:41Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:41 crc kubenswrapper[5017]: I0129 06:35:41.590392 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:41Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:41 crc kubenswrapper[5017]: I0129 06:35:41.605880 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b8abb3dbc845f89735c58741f02550053402975d5edfe5e69f2323cc82bb8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7b335d7b6120621d4c94dfd8cdc86561a91d9c84a9481d352042d7728d7f6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:41Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:41 crc kubenswrapper[5017]: I0129 06:35:41.609772 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:41 crc kubenswrapper[5017]: I0129 06:35:41.609826 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:41 crc kubenswrapper[5017]: I0129 06:35:41.609843 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:41 crc kubenswrapper[5017]: I0129 06:35:41.609866 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:41 crc kubenswrapper[5017]: I0129 06:35:41.609880 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:41Z","lastTransitionTime":"2026-01-29T06:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:41 crc kubenswrapper[5017]: I0129 06:35:41.625461 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:41Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:41 crc kubenswrapper[5017]: I0129 06:35:41.638680 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a2999ed-a16c-49e3-a07a-3b084d6b8616\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ff1f2d36640f2aebd2cd6919987f3f9000d95a8cefac7844a0671da192b1897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1045c0f8fc7c60a5aa6d2e2b449230e4603b08b0b4244c319e296f89b04ab98d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27acd0133bf699b73534766330da62590b4231608bd96e51bf01b06b5775073d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fc74339c29860f59a280af0f3e21a0f4fa2be4a3e28eb63d90bc63854805d16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:41Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:41 crc kubenswrapper[5017]: I0129 06:35:41.651769 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ce95c49452e7355245f4ec2716c723873a116e2523b8192efe93cfb369260f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:41Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:41 crc kubenswrapper[5017]: I0129 06:35:41.664945 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vwppb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb597cc2-fff7-4d90-a43e-958791d83324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f41b8b485793972343992b2ec30940b852ff5f355e045225e8813683e9f9a3c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g28w2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vwppb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:41Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:41 crc kubenswrapper[5017]: I0129 06:35:41.678535 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-895pl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2672ef63-7861-4c3d-a1b4-03cc9d18f8e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://884bf08569779a1b94a927f1250657ab4d855e78f451a826856022a7c62cd6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5qdt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd299f15e4338652a37e1e3f09560a50e6c59d7bf29e827462d060a56294438\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5qdt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-895pl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:41Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:41 crc kubenswrapper[5017]: I0129 06:35:41.689528 5017 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-qwq46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4d3122b-b4b4-41ac-896a-566afdcda936\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45817069aa7e09fd97b77922f9b5a651f2a067991115021c0a85159a93d523af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z89l9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qwq46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:41Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:41 crc kubenswrapper[5017]: I0129 06:35:41.705124 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-m2gbd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4036e581-21bf-4ea0-aaf5-84ab8a841888\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7399e8e88798a89139226e395d22a2aa61370296a3789f63d766f42ea1c9f4df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7399e8e88798a89139226e395d22a2aa61370296a3789f63d766f42ea1c9f4df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9418ae41c02e41fb3aeb7ce97e7d7b91f9a193ac23cbfe692cc79b071aa3adca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9418ae41c02e41fb3aeb7ce97e7d7b91f9a193ac23cbfe692cc79b071aa3adca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1065c2c8660d096dba79585cd7d954ae31491d258129e132b1e0cb1218bc849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1065c2c8660d096dba79585cd7d954ae31491d258129e132b1e0cb1218bc849\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb8f183765b578cd4362f3d3c13e20b8cb70aec46adbcdb750164e91e2bd53e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb8f183765b578cd4362f3d3c13e20b8cb70aec46adbcdb750164e91e2bd53e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23ac8b597fcf07378d69cd788cf4f79e236218e22a58ffaa5a72688e650067c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23a
c8b597fcf07378d69cd788cf4f79e236218e22a58ffaa5a72688e650067c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8548b649f7fcef84467fc98f1b3aacc576c363a1c83087971aafaf8787734901\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8548b649f7fcef84467fc98f1b3aacc576c363a1c83087971aafaf8787734901\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-m2gbd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:41Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:41 crc kubenswrapper[5017]: I0129 06:35:41.713036 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:41 crc kubenswrapper[5017]: I0129 06:35:41.713078 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:41 crc kubenswrapper[5017]: I0129 06:35:41.713092 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:41 crc kubenswrapper[5017]: I0129 06:35:41.713114 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:41 crc kubenswrapper[5017]: I0129 06:35:41.713132 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:41Z","lastTransitionTime":"2026-01-29T06:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:35:41 crc kubenswrapper[5017]: I0129 06:35:41.719997 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bf710b24256eeb9149ca710dd9fcceae762df89486e0cf0f341c278eadda2fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:41Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:41 crc kubenswrapper[5017]: I0129 06:35:41.733148 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9jkcd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ae056f0-e054-45da-9638-73074b7c8a3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17d52fc64784408e447773d44de6e83d32029a77073a0c1ee206a51d81eda62f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x84hz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9jkcd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:41Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:41 crc kubenswrapper[5017]: I0129 06:35:41.760193 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02dd5727-894c-4693-9bc7-83dd88ce118c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\"
,\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13649086edb9d7ffe20a9040787d55449e6541a51f03432458f5fcb965fe4723\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13649086edb9d7ffe20a9040787d55449e6541a51f03432458f5fcb965fe4723\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wqgmk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:41Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:41 crc kubenswrapper[5017]: I0129 06:35:41.816002 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:41 crc kubenswrapper[5017]: I0129 06:35:41.816052 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:41 crc kubenswrapper[5017]: I0129 06:35:41.816061 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:41 crc kubenswrapper[5017]: I0129 06:35:41.816078 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:41 crc kubenswrapper[5017]: I0129 06:35:41.816088 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:41Z","lastTransitionTime":"2026-01-29T06:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:35:41 crc kubenswrapper[5017]: I0129 06:35:41.863736 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 06:35:41 crc kubenswrapper[5017]: E0129 06:35:41.863936 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 06:35:49.863916624 +0000 UTC m=+36.238364234 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:35:41 crc kubenswrapper[5017]: I0129 06:35:41.867440 5017 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 29 06:35:41 crc kubenswrapper[5017]: I0129 06:35:41.918266 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:41 crc kubenswrapper[5017]: I0129 06:35:41.918309 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:41 crc kubenswrapper[5017]: I0129 06:35:41.918319 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:41 crc kubenswrapper[5017]: I0129 06:35:41.918333 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:41 crc kubenswrapper[5017]: I0129 06:35:41.918342 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:41Z","lastTransitionTime":"2026-01-29T06:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:35:41 crc kubenswrapper[5017]: I0129 06:35:41.965240 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 06:35:41 crc kubenswrapper[5017]: I0129 06:35:41.965298 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 06:35:41 crc kubenswrapper[5017]: I0129 06:35:41.965326 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 06:35:41 crc kubenswrapper[5017]: I0129 06:35:41.965347 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 06:35:41 crc kubenswrapper[5017]: E0129 06:35:41.965460 5017 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 06:35:41 crc kubenswrapper[5017]: E0129 06:35:41.965515 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 06:35:49.965498602 +0000 UTC m=+36.339946222 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 06:35:41 crc kubenswrapper[5017]: E0129 06:35:41.965694 5017 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 06:35:41 crc kubenswrapper[5017]: E0129 06:35:41.965826 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 06:35:49.965808141 +0000 UTC m=+36.340255741 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 06:35:41 crc kubenswrapper[5017]: E0129 06:35:41.965826 5017 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 06:35:41 crc kubenswrapper[5017]: E0129 06:35:41.966001 5017 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 06:35:41 crc kubenswrapper[5017]: E0129 06:35:41.966063 5017 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 06:35:41 crc kubenswrapper[5017]: E0129 06:35:41.966134 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 06:35:49.966126368 +0000 UTC m=+36.340573978 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 06:35:41 crc kubenswrapper[5017]: E0129 06:35:41.966632 5017 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 06:35:41 crc kubenswrapper[5017]: E0129 06:35:41.966732 5017 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 06:35:41 crc kubenswrapper[5017]: E0129 06:35:41.966795 5017 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 06:35:41 crc kubenswrapper[5017]: E0129 06:35:41.967007 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 06:35:49.966924287 +0000 UTC m=+36.341371937 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 06:35:42 crc kubenswrapper[5017]: I0129 06:35:42.021267 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:42 crc kubenswrapper[5017]: I0129 06:35:42.021320 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:42 crc kubenswrapper[5017]: I0129 06:35:42.021345 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:42 crc kubenswrapper[5017]: I0129 06:35:42.021365 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:42 crc kubenswrapper[5017]: I0129 06:35:42.021384 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:42Z","lastTransitionTime":"2026-01-29T06:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:42 crc kubenswrapper[5017]: I0129 06:35:42.124132 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:42 crc kubenswrapper[5017]: I0129 06:35:42.124171 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:42 crc kubenswrapper[5017]: I0129 06:35:42.124185 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:42 crc kubenswrapper[5017]: I0129 06:35:42.124200 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:42 crc kubenswrapper[5017]: I0129 06:35:42.124213 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:42Z","lastTransitionTime":"2026-01-29T06:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:35:42 crc kubenswrapper[5017]: I0129 06:35:42.226555 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:42 crc kubenswrapper[5017]: I0129 06:35:42.226595 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:42 crc kubenswrapper[5017]: I0129 06:35:42.226608 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:42 crc kubenswrapper[5017]: I0129 06:35:42.226623 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:42 crc kubenswrapper[5017]: I0129 06:35:42.226634 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:42Z","lastTransitionTime":"2026-01-29T06:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:42 crc kubenswrapper[5017]: I0129 06:35:42.281600 5017 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 16:09:48.234601571 +0000 UTC Jan 29 06:35:42 crc kubenswrapper[5017]: I0129 06:35:42.316247 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 06:35:42 crc kubenswrapper[5017]: I0129 06:35:42.316249 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 06:35:42 crc kubenswrapper[5017]: E0129 06:35:42.316405 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 06:35:42 crc kubenswrapper[5017]: I0129 06:35:42.316269 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 06:35:42 crc kubenswrapper[5017]: E0129 06:35:42.316498 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 06:35:42 crc kubenswrapper[5017]: E0129 06:35:42.316544 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 06:35:42 crc kubenswrapper[5017]: I0129 06:35:42.329238 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:42 crc kubenswrapper[5017]: I0129 06:35:42.329279 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:42 crc kubenswrapper[5017]: I0129 06:35:42.329287 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:42 crc kubenswrapper[5017]: I0129 06:35:42.329301 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:42 crc kubenswrapper[5017]: I0129 06:35:42.329312 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:42Z","lastTransitionTime":"2026-01-29T06:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:42 crc kubenswrapper[5017]: I0129 06:35:42.431749 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:42 crc kubenswrapper[5017]: I0129 06:35:42.431822 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:42 crc kubenswrapper[5017]: I0129 06:35:42.431840 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:42 crc kubenswrapper[5017]: I0129 06:35:42.431866 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:42 crc kubenswrapper[5017]: I0129 06:35:42.431883 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:42Z","lastTransitionTime":"2026-01-29T06:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:35:42 crc kubenswrapper[5017]: I0129 06:35:42.533793 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:42 crc kubenswrapper[5017]: I0129 06:35:42.534199 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:42 crc kubenswrapper[5017]: I0129 06:35:42.534209 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:42 crc kubenswrapper[5017]: I0129 06:35:42.534221 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:42 crc kubenswrapper[5017]: I0129 06:35:42.534231 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:42Z","lastTransitionTime":"2026-01-29T06:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:42 crc kubenswrapper[5017]: I0129 06:35:42.592008 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-m2gbd" event={"ID":"4036e581-21bf-4ea0-aaf5-84ab8a841888","Type":"ContainerStarted","Data":"6ad091986feb992d2ef247a76476d77d58cb487cf0baa60f81e5d3ddf0d30c3d"} Jan 29 06:35:42 crc kubenswrapper[5017]: I0129 06:35:42.598335 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" event={"ID":"02dd5727-894c-4693-9bc7-83dd88ce118c","Type":"ContainerStarted","Data":"f74734e0ae29164b6b6f388e56c4bed32978ea65cdf79267820ec237eb300479"} Jan 29 06:35:42 crc kubenswrapper[5017]: I0129 06:35:42.598853 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" Jan 29 06:35:42 crc kubenswrapper[5017]: I0129 06:35:42.598918 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" Jan 29 06:35:42 crc kubenswrapper[5017]: I0129 06:35:42.605939 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bf710b24256eeb9149ca710dd9fcceae762df89486e0cf0f341c278eadda2fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:42Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:42 crc kubenswrapper[5017]: I0129 06:35:42.623054 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9jkcd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ae056f0-e054-45da-9638-73074b7c8a3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17d52fc64784408e447773d44de6e83d32029a77073a0c1ee206a51d81eda62f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x84hz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9jkcd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:42Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:42 crc kubenswrapper[5017]: I0129 06:35:42.625387 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" Jan 29 06:35:42 crc kubenswrapper[5017]: I0129 06:35:42.628369 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" Jan 29 06:35:42 crc kubenswrapper[5017]: I0129 06:35:42.636308 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:42 crc kubenswrapper[5017]: I0129 06:35:42.636353 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:42 crc kubenswrapper[5017]: I0129 06:35:42.636370 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:42 crc kubenswrapper[5017]: I0129 06:35:42.636387 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:42 crc kubenswrapper[5017]: I0129 06:35:42.636402 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:42Z","lastTransitionTime":"2026-01-29T06:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:35:42 crc kubenswrapper[5017]: I0129 06:35:42.640412 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02dd5727-894c-4693-9bc7-83dd88ce118c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13649086edb9d7ffe20a9040787d55449e6541a51f03432458
f5fcb965fe4723\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13649086edb9d7ffe20a9040787d55449e6541a51f03432458f5fcb965fe4723\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wqgmk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:42Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:42 crc kubenswrapper[5017]: I0129 06:35:42.651900 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c67cd79-8431-401b-8f03-9387813b30ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe28ec8d64ea50ecca5d4531440159b5277aff7de80b18813bb91f5c32923326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a82e8858b15c840a63737058d3af382c1a6469d7d1d3ba6e44759f33e55f2543\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a303c0532bb4a555700875ed27e56205d709658417a72f47868fb7bc8291a77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ef12beaf08b33d7694b3ea5a4705e37ec4a5b53f9cf1a7e0dd41a78d99bc8f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27dd6068095b58038216d1837a6b7966c514c7d8a9db5d144b043d918b757394\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T06:35:33Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 06:35:28.003940 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 06:35:28.004898 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1201671994/tls.crt::/tmp/serving-cert-1201671994/tls.key\\\\\\\"\\\\nI0129 06:35:33.575231 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 06:35:33.579536 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 06:35:33.579564 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 06:35:33.579587 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 06:35:33.579594 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 06:35:33.587038 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 06:35:33.587075 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 06:35:33.587080 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 06:35:33.587084 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 06:35:33.587087 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 06:35:33.587091 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 06:35:33.587094 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 06:35:33.587049 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 06:35:33.588897 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c256463618132e5305eb6fed3ef923e6a3c200bbc4b917e7f678e1c78af05084\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f97561ea3b9baa3440e5148060c80d57e9cf17837e7799b057f618d9084e8a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f97561ea3b9baa3440e5148060c80d57e9cf17837e7799b057f618d9084e8a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:42Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:42 crc kubenswrapper[5017]: I0129 06:35:42.668489 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:42Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:42 crc kubenswrapper[5017]: I0129 06:35:42.679721 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:42Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:42 crc kubenswrapper[5017]: I0129 06:35:42.694486 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b8abb3dbc845f89735c58741f02550053402975d5edfe5e69f2323cc82bb8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7b335d7b6120621d4c94dfd8cdc86561a91d9c84a9481d352042d7728d7f6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:42Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:42 crc kubenswrapper[5017]: I0129 06:35:42.707422 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:42Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:42 crc kubenswrapper[5017]: I0129 06:35:42.722130 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a2999ed-a16c-49e3-a07a-3b084d6b8616\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ff1f2d36640f2aebd2cd6919987f3f9000d95a8cefac7844a0671da192b1897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1045c0f8fc7c60a5aa6d2e2b449230e4603b08b0b4244c319e296f89b04ab98d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27acd0133bf699b73534766330da62590b4231608bd96e51bf01b06b5775073d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fc74339c29860f59a280af0f3e21a0f4fa2be4a3e28eb63d90bc63854805d16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:42Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:42 crc kubenswrapper[5017]: I0129 06:35:42.734097 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ce95c49452e7355245f4ec2716c723873a116e2523b8192efe93cfb369260f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:42Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:42 crc kubenswrapper[5017]: I0129 06:35:42.739170 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:42 crc kubenswrapper[5017]: I0129 06:35:42.739234 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:42 crc kubenswrapper[5017]: I0129 06:35:42.739253 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:42 crc kubenswrapper[5017]: I0129 06:35:42.739293 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:42 crc kubenswrapper[5017]: I0129 06:35:42.739312 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:42Z","lastTransitionTime":"2026-01-29T06:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:35:42 crc kubenswrapper[5017]: I0129 06:35:42.745732 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vwppb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb597cc2-fff7-4d90-a43e-958791d83324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f41b8b485793972343992b2ec30940b852ff5f355e045225e8813683e9f9a3c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g28w2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vwppb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:42Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:42 crc kubenswrapper[5017]: I0129 06:35:42.755862 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-895pl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2672ef63-7861-4c3d-a1b4-03cc9d18f8e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://884bf08569779a1b94a927f1250657ab4d855e78f451a826856022a7c62cd6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5qdt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd299f15e4338652a37e1e3f09560a50e6c59d7bf29e827462d060a56294438\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5qdt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-895pl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:42Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:42 crc kubenswrapper[5017]: I0129 06:35:42.765733 5017 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-qwq46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4d3122b-b4b4-41ac-896a-566afdcda936\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45817069aa7e09fd97b77922f9b5a651f2a067991115021c0a85159a93d523af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z89l9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qwq46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:42Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:42 crc kubenswrapper[5017]: I0129 06:35:42.780613 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-m2gbd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4036e581-21bf-4ea0-aaf5-84ab8a841888\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ad091986feb992d2ef247a76476d77d58cb487cf0baa60f81e5d3ddf0d30c3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7399e8e88798a89139226e395d22a2aa61370296a3789f63d766f42ea1c9f4df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7399e8e88798a89139226e395d22a2aa61370296a3789f63d766f42ea1c9f4df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9418ae41c02e41fb3aeb7ce97e7d7b91f9a193ac23cbfe692cc79b071aa3adca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9418ae41c02e41fb3aeb7ce97e7d7b91f9a193ac23cbfe692cc79b071aa3adca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1065c2c8660d096dba79585cd7d954ae31491d258129e132b1e0cb1218bc849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1065c2c8660d096dba79585cd7d954ae31491d258129e132b1e0cb1218bc849\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb8f183765b578cd4362f3d3c13e20b8cb70aec46adbcdb750164e91e2bd53e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb8f183765b578cd4362f3d3c13e20b8cb70aec46adbcdb750164e91e2bd53e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23ac8b597fcf07378d69cd788cf4f79e236218e22a58ffaa5a72688e650067c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23ac8b597fcf07378d69cd788cf4f79e236218e22a58ffaa5a72688e650067c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8548b649f7fcef84467fc98f1b3aacc576c363a1c83087971aafaf8787734901\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8548b649f7fcef84467fc98f1b3aacc576c363a1c83087971aafaf8787734901\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-m2gbd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:42Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:42 crc kubenswrapper[5017]: I0129 06:35:42.797050 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bf710b24256eeb9149ca710dd9fcceae762df89486e0cf0f341c278eadda2fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:42Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:42 crc kubenswrapper[5017]: I0129 06:35:42.816016 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9jkcd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ae056f0-e054-45da-9638-73074b7c8a3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17d52fc64784408e447773d44de6e83d32029a77073a0c1ee206a51d81eda62f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x84hz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9jkcd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:42Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:42 crc kubenswrapper[5017]: I0129 06:35:42.845358 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:42 crc kubenswrapper[5017]: I0129 06:35:42.845404 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:42 crc kubenswrapper[5017]: I0129 06:35:42.845411 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:42 crc kubenswrapper[5017]: I0129 06:35:42.845426 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:42 crc kubenswrapper[5017]: I0129 06:35:42.845436 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:42Z","lastTransitionTime":"2026-01-29T06:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:42 crc kubenswrapper[5017]: I0129 06:35:42.849198 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02dd5727-894c-4693-9bc7-83dd88ce118c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://177a3e8a7f63905262ffb4ea5f98c4fa97a8d48dd2ff075484a44ea87d66a072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7eec8c502e2516637a85d2b83f7f745f9cc5d6a420f15cea465d92f71e0b8e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1fb588776e0029263904fa0ec3afa53e180b30843864f1dee6999e1cca64b18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc8fc0d9e440c8725e279e6649d3c1e2cdc55589f6b5786187602576e4acd835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://749973b1dc2bcf3fc4e35277f0893a240b3571cadc45529da1229cd7ebe1748e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6a16eea25767bb4fea94880a90ae883944934c09ff5ba0ced9641169691169d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f74734e0ae29164b6b6f388e56c4bed32978ea65
cdf79267820ec237eb300479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1e341b7acf35edc3ee9e5a4fec11e2270d45ea0f05df30543d2c9163b024646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13649086edb9d7ffe20a9040787d55449e6541a51f03432458f5fcb965fe4723\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13649086edb9d7ffe20a9040787d55449e6541a51f03432458f5fcb965fe4723\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wqgmk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:42Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:42 crc kubenswrapper[5017]: I0129 06:35:42.864380 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:42Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:42 crc kubenswrapper[5017]: I0129 06:35:42.877069 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:42Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:42 crc kubenswrapper[5017]: I0129 06:35:42.889163 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b8abb3dbc845f89735c58741f02550053402975d5edfe5e69f2323cc82bb8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7b335d7b6120621d4c94dfd8cdc86561a91d9c84a9481d352042d7728d7f6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:42Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:42 crc kubenswrapper[5017]: I0129 06:35:42.899621 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:42Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:42 crc kubenswrapper[5017]: I0129 06:35:42.923013 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c67cd79-8431-401b-8f03-9387813b30ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe28ec8d64ea50ecca5d4531440159b5277aff7de80b18813bb91f5c32923326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a82e8858b15c840a63737058d3af382c1a6469d7d1d3ba6e44759f33e55f2543\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a303c0532bb4a555700875ed27e56205d709658417a72f47868fb7bc8291a77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ef12beaf08b33d7694b3ea5a4705e37ec4a5b53f9cf1a7e0dd41a78d99bc8f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27dd6068095b58038216d1837a6b7966c514c7d8a9db5d144b043d918b757394\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T06:35:33Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 06:35:28.003940 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 06:35:28.004898 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1201671994/tls.crt::/tmp/serving-cert-1201671994/tls.key\\\\\\\"\\\\nI0129 06:35:33.575231 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 06:35:33.579536 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 06:35:33.579564 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 06:35:33.579587 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 06:35:33.579594 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 06:35:33.587038 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 06:35:33.587075 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 06:35:33.587080 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 06:35:33.587084 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 06:35:33.587087 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 06:35:33.587091 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 06:35:33.587094 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 06:35:33.587049 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 06:35:33.588897 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c256463618132e5305eb6fed3ef923e6a3c200bbc4b917e7f678e1c78af05084\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f97561ea3b9baa3440e5148060c80d57e9cf17837e7799b057f618d9084e8a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f97561ea3b9baa3440e5148060c80d57e9cf17837e7799b057f618d9084e8a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:42Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:42 crc kubenswrapper[5017]: I0129 06:35:42.935107 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a2999ed-a16c-49e3-a07a-3b084d6b8616\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ff1f2d36640f2aebd2cd6919987f3f9000d95a8cefac7844a0671da192b1897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1045c0f8fc7c60a5aa6d2e2b449230e4603b08b0b4244c319e296f89b04ab98d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27acd0133bf699b73534766330da62590b4231608bd96e51bf01b06b5775073d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fc74339c29860f59a280af0f3e21a0f4fa2be4a3e28eb63d90bc63854805d16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:42Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:42 crc kubenswrapper[5017]: I0129 06:35:42.947153 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ce95c49452e7355245f4ec2716c723873a116e2523b8192efe93cfb369260f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-29T06:35:42Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:42 crc kubenswrapper[5017]: I0129 06:35:42.948779 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:42 crc kubenswrapper[5017]: I0129 06:35:42.948821 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:42 crc kubenswrapper[5017]: I0129 06:35:42.948857 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:42 crc kubenswrapper[5017]: I0129 06:35:42.948877 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:42 crc kubenswrapper[5017]: I0129 06:35:42.948889 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:42Z","lastTransitionTime":"2026-01-29T06:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:42 crc kubenswrapper[5017]: I0129 06:35:42.958611 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vwppb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb597cc2-fff7-4d90-a43e-958791d83324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f41b8b485793972343992b2ec30940b852ff5f355e045225e8813683e9f9a3c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g28w2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[
{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vwppb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:42Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:42 crc kubenswrapper[5017]: I0129 06:35:42.968898 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qwq46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4d3122b-b4b4-41ac-896a-566afdcda936\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45817069aa7e09fd97b77922f9b5a651f2a067991115021c0a85159a93d523af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z89l9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qwq46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:42Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:42 crc kubenswrapper[5017]: I0129 06:35:42.984760 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-m2gbd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4036e581-21bf-4ea0-aaf5-84ab8a841888\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ad091986feb992d2ef247a76476d77d58cb487cf0baa60f81e5d3ddf0d30c3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7399e8e88798a89139226e395d22a2aa61370296a3789f63d766f42ea1c9f4df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7399e8e88798a89139226e395d22a2aa61370296a3789f63d766f42ea1c9f4df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9418ae41c02e41fb3aeb7ce97e7d7b91f9a193ac23cbfe692cc79b071aa3adca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9418ae41c02e41fb3aeb7ce97e7d7b91f9a193ac23cbfe692cc79b071aa3adca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1065c2c8660d096dba79585cd7d954ae31491d258129e132b1e0cb1218bc849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1065c2c8660d096dba79585cd7d954ae31491d258129e132b1e0cb1218bc849\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb8f183765b578cd4362f3d3c13e20b8cb70aec46adbcdb750164e91e2bd53e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb8f183765b578cd4362f3d3c13e20b8cb70aec46adbcdb750164e91e2bd53e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23ac8b597fcf07378d69cd788cf4f79e236218e22a58ffaa5a72688e650067c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23ac8b597fcf07378d69cd788cf4f79e236218e22a58ffaa5a72688e650067c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8548b649f7fcef84467fc98f1b3aacc576c363a1c83087971aafaf8787734901\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8548b649f7fcef84467fc98f1b3aacc576c363a1c83087971aafaf8787734901\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-m2gbd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:42Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:42 crc kubenswrapper[5017]: I0129 06:35:42.997677 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-895pl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2672ef63-7861-4c3d-a1b4-03cc9d18f8e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://884bf08569779a1b94a927f1250657ab4d855e78f451a826856022a7c62cd6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5qdt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd299f15e4338652a37e1e3f09560a50e6c59d7bf29e827462d060a56294438\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5qdt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-895pl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:42Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:43 crc kubenswrapper[5017]: I0129 06:35:43.052215 5017 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:43 crc kubenswrapper[5017]: I0129 06:35:43.052259 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:43 crc kubenswrapper[5017]: I0129 06:35:43.052302 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:43 crc kubenswrapper[5017]: I0129 06:35:43.052320 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:43 crc kubenswrapper[5017]: I0129 06:35:43.052330 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:43Z","lastTransitionTime":"2026-01-29T06:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:43 crc kubenswrapper[5017]: I0129 06:35:43.155653 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:43 crc kubenswrapper[5017]: I0129 06:35:43.155709 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:43 crc kubenswrapper[5017]: I0129 06:35:43.155721 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:43 crc kubenswrapper[5017]: I0129 06:35:43.155739 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:43 crc kubenswrapper[5017]: I0129 06:35:43.155751 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:43Z","lastTransitionTime":"2026-01-29T06:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:43 crc kubenswrapper[5017]: I0129 06:35:43.258430 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:43 crc kubenswrapper[5017]: I0129 06:35:43.258475 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:43 crc kubenswrapper[5017]: I0129 06:35:43.258484 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:43 crc kubenswrapper[5017]: I0129 06:35:43.258501 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:43 crc kubenswrapper[5017]: I0129 06:35:43.258512 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:43Z","lastTransitionTime":"2026-01-29T06:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:35:43 crc kubenswrapper[5017]: I0129 06:35:43.281988 5017 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 04:21:55.068767435 +0000 UTC Jan 29 06:35:43 crc kubenswrapper[5017]: I0129 06:35:43.361554 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:43 crc kubenswrapper[5017]: I0129 06:35:43.361596 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:43 crc kubenswrapper[5017]: I0129 06:35:43.361604 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:43 crc kubenswrapper[5017]: I0129 06:35:43.361619 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:43 crc kubenswrapper[5017]: I0129 06:35:43.361629 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:43Z","lastTransitionTime":"2026-01-29T06:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:43 crc kubenswrapper[5017]: I0129 06:35:43.464121 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:43 crc kubenswrapper[5017]: I0129 06:35:43.464170 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:43 crc kubenswrapper[5017]: I0129 06:35:43.464179 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:43 crc kubenswrapper[5017]: I0129 06:35:43.464194 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:43 crc kubenswrapper[5017]: I0129 06:35:43.464203 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:43Z","lastTransitionTime":"2026-01-29T06:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:35:43 crc kubenswrapper[5017]: I0129 06:35:43.566649 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:43 crc kubenswrapper[5017]: I0129 06:35:43.566711 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:43 crc kubenswrapper[5017]: I0129 06:35:43.566723 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:43 crc kubenswrapper[5017]: I0129 06:35:43.566737 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:43 crc kubenswrapper[5017]: I0129 06:35:43.566746 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:43Z","lastTransitionTime":"2026-01-29T06:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:43 crc kubenswrapper[5017]: I0129 06:35:43.601983 5017 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 06:35:43 crc kubenswrapper[5017]: I0129 06:35:43.669047 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:43 crc kubenswrapper[5017]: I0129 06:35:43.669095 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:43 crc kubenswrapper[5017]: I0129 06:35:43.669106 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:43 crc kubenswrapper[5017]: I0129 06:35:43.669128 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:43 crc kubenswrapper[5017]: I0129 06:35:43.669141 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:43Z","lastTransitionTime":"2026-01-29T06:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:35:43 crc kubenswrapper[5017]: I0129 06:35:43.734196 5017 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 29 06:35:43 crc kubenswrapper[5017]: I0129 06:35:43.771373 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:43 crc kubenswrapper[5017]: I0129 06:35:43.771415 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:43 crc kubenswrapper[5017]: I0129 06:35:43.771427 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:43 crc kubenswrapper[5017]: I0129 06:35:43.771442 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:43 crc kubenswrapper[5017]: I0129 06:35:43.771453 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:43Z","lastTransitionTime":"2026-01-29T06:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:43 crc kubenswrapper[5017]: I0129 06:35:43.874371 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:43 crc kubenswrapper[5017]: I0129 06:35:43.874408 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:43 crc kubenswrapper[5017]: I0129 06:35:43.874417 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:43 crc kubenswrapper[5017]: I0129 06:35:43.874430 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:43 crc kubenswrapper[5017]: I0129 06:35:43.874438 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:43Z","lastTransitionTime":"2026-01-29T06:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:35:43 crc kubenswrapper[5017]: I0129 06:35:43.976357 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:43 crc kubenswrapper[5017]: I0129 06:35:43.976387 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:43 crc kubenswrapper[5017]: I0129 06:35:43.976395 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:43 crc kubenswrapper[5017]: I0129 06:35:43.976408 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:43 crc kubenswrapper[5017]: I0129 06:35:43.976416 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:43Z","lastTransitionTime":"2026-01-29T06:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:44 crc kubenswrapper[5017]: I0129 06:35:44.078809 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:44 crc kubenswrapper[5017]: I0129 06:35:44.078877 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:44 crc kubenswrapper[5017]: I0129 06:35:44.078893 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:44 crc kubenswrapper[5017]: I0129 06:35:44.078926 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:44 crc kubenswrapper[5017]: I0129 06:35:44.078944 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:44Z","lastTransitionTime":"2026-01-29T06:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:44 crc kubenswrapper[5017]: I0129 06:35:44.181742 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:44 crc kubenswrapper[5017]: I0129 06:35:44.181780 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:44 crc kubenswrapper[5017]: I0129 06:35:44.181793 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:44 crc kubenswrapper[5017]: I0129 06:35:44.181811 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:44 crc kubenswrapper[5017]: I0129 06:35:44.181826 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:44Z","lastTransitionTime":"2026-01-29T06:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:35:44 crc kubenswrapper[5017]: I0129 06:35:44.282757 5017 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 00:05:21.326437198 +0000 UTC Jan 29 06:35:44 crc kubenswrapper[5017]: I0129 06:35:44.284886 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:44 crc kubenswrapper[5017]: I0129 06:35:44.284926 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:44 crc kubenswrapper[5017]: I0129 06:35:44.284937 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:44 crc kubenswrapper[5017]: I0129 06:35:44.284974 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:44 crc kubenswrapper[5017]: I0129 06:35:44.284992 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:44Z","lastTransitionTime":"2026-01-29T06:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:44 crc kubenswrapper[5017]: I0129 06:35:44.318775 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 06:35:44 crc kubenswrapper[5017]: E0129 06:35:44.318919 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 06:35:44 crc kubenswrapper[5017]: I0129 06:35:44.319280 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 06:35:44 crc kubenswrapper[5017]: E0129 06:35:44.319345 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 06:35:44 crc kubenswrapper[5017]: I0129 06:35:44.319393 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 06:35:44 crc kubenswrapper[5017]: E0129 06:35:44.319451 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 06:35:44 crc kubenswrapper[5017]: I0129 06:35:44.331365 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:44Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:44 crc kubenswrapper[5017]: I0129 06:35:44.343109 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c67cd79-8431-401b-8f03-9387813b30ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe28ec8d64ea50ecca5d4531440159b5277aff7de80b18813bb91f5c32923326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a82e8858b15c840a63737058d3af382c1a6469d7d1d3ba6e44759f33e55f2543\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a303c0532bb4a555700875ed27e56205d709658417a72f47868fb7bc8291a77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ef12beaf08b33d7694b3ea5a4705e37ec4a5b53f9cf1a7e0dd41a78d99bc8f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://27dd6068095b58038216d1837a6b7966c514c7d8a9db5d144b043d918b757394\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T06:35:33Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 06:35:28.003940 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 06:35:28.004898 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1201671994/tls.crt::/tmp/serving-cert-1201671994/tls.key\\\\\\\"\\\\nI0129 06:35:33.575231 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 06:35:33.579536 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 06:35:33.579564 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 06:35:33.579587 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 06:35:33.579594 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 06:35:33.587038 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 06:35:33.587075 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 06:35:33.587080 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 06:35:33.587084 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 06:35:33.587087 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 06:35:33.587091 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 06:35:33.587094 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 06:35:33.587049 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 06:35:33.588897 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c256463618132e5305eb6fed3ef923e6a3c200bbc4b917e7f678e1c78af05084\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f97561ea3b9baa3440e5148060c80d57e9cf17837e7799b057f618d9084e8a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f97561ea3b9baa3440e5148060c80d57e9cf17837e7799b057f618d9084e8a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:44Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:44 crc kubenswrapper[5017]: I0129 06:35:44.356395 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:44Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:44 crc kubenswrapper[5017]: I0129 06:35:44.368463 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:44Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:44 crc kubenswrapper[5017]: I0129 06:35:44.382425 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b8abb3dbc845f89735c58741f02550053402975d5edfe5e69f2323cc82bb8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7b335d7b6120621d4c94dfd8cdc86561a91d9c84a9481d352042d7728d7f6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:44Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:44 crc kubenswrapper[5017]: I0129 06:35:44.387371 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:44 crc kubenswrapper[5017]: I0129 06:35:44.387421 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:44 crc kubenswrapper[5017]: I0129 06:35:44.387432 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:44 crc kubenswrapper[5017]: I0129 06:35:44.387450 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:44 crc kubenswrapper[5017]: I0129 06:35:44.387468 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:44Z","lastTransitionTime":"2026-01-29T06:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:35:44 crc kubenswrapper[5017]: I0129 06:35:44.399419 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a2999ed-a16c-49e3-a07a-3b084d6b8616\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ff1f2d36640f2aebd2cd6919987f3f9000d95a8cefac7844a0671da192b1897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1045c0f8fc7c60a5aa6d2e2b449230e4603b08b0b4244c319e296f89b04ab98d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27acd0133bf699b73534766330da62590b4231608bd96e51bf01b06b5775073d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fc74339c29860f59a280af0f3e21a0f4fa2be4a3e28eb63d90bc63854805d16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:44Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:44 crc kubenswrapper[5017]: I0129 06:35:44.415407 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ce95c49452e7355245f4ec2716c723873a116e2523b8192efe93cfb369260f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:44Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:44 crc kubenswrapper[5017]: I0129 06:35:44.438433 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vwppb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb597cc2-fff7-4d90-a43e-958791d83324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f41b8b485793972343992b2ec30940b852ff5f355e045225e8813683e9f9a3c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g28w2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vwppb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:44Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:44 crc kubenswrapper[5017]: I0129 06:35:44.451034 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-895pl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2672ef63-7861-4c3d-a1b4-03cc9d18f8e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://884bf08569779a1b94a927f1250657ab4d855e78f451a826856022a7c62cd6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5qdt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd299f15e4338652a37e1e3f09560a50e6c59d7bf29e827462d060a56294438\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5qdt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-895pl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:44Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:44 crc kubenswrapper[5017]: I0129 06:35:44.460991 5017 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-qwq46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4d3122b-b4b4-41ac-896a-566afdcda936\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45817069aa7e09fd97b77922f9b5a651f2a067991115021c0a85159a93d523af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z89l9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qwq46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:44Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:44 crc kubenswrapper[5017]: I0129 06:35:44.478628 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-m2gbd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4036e581-21bf-4ea0-aaf5-84ab8a841888\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ad091986feb992d2ef247a76476d77d58cb487cf0baa60f81e5d3ddf0d30c3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7399e8e88798a89139226e395d22a2aa61370296a3789f63d766f42ea1c9f4df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7399e8e88798a89139226e395d22a2aa61370296a3789f63d766f42ea1c9f4df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9418ae41c02e41fb3aeb7ce97e7d7b91f9a193ac23cbfe692cc79b071aa3adca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9418ae41c02e41fb3aeb7ce97e7d7b91f9a193ac23cbfe692cc79b071aa3adca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1065c2c8660d096dba79585cd7d954ae31491d258129e132b1e0cb1218bc849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1065c2c8660d096dba79585cd7d954ae31491d258129e132b1e0cb1218bc849\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb8f183765b578cd4362f3d3c13e20b8cb70aec46adbcdb750164e91e2bd53e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb8f183765b578cd4362f3d3c13e20b8cb70aec46adbcdb750164e91e2bd53e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23ac8b597fcf07378d69cd788cf4f79e236218e22a58ffaa5a72688e650067c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23ac8b597fcf07378d69cd788cf4f79e236218e22a58ffaa5a72688e650067c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8548b649f7fcef84467fc98f1b3aacc576c363a1c83087971aafaf8787734901\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8548b649f7fcef84467fc98f1b3aacc576c363a1c83087971aafaf8787734901\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-m2gbd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:44Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:44 crc kubenswrapper[5017]: I0129 06:35:44.490742 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:44 crc kubenswrapper[5017]: I0129 06:35:44.490771 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:44 crc 
kubenswrapper[5017]: I0129 06:35:44.490782 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:44 crc kubenswrapper[5017]: I0129 06:35:44.490797 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:44 crc kubenswrapper[5017]: I0129 06:35:44.490807 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:44Z","lastTransitionTime":"2026-01-29T06:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:44 crc kubenswrapper[5017]: I0129 06:35:44.491932 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bf710b24256eeb9149ca710dd9fcceae762df89486e0cf0f341c278eadda2fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:44Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:44 crc kubenswrapper[5017]: I0129 06:35:44.504065 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9jkcd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ae056f0-e054-45da-9638-73074b7c8a3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17d52fc64784408e447773d44de6e83d32029a77073a0c1ee206a51d81eda62f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x84hz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9jkcd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:44Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:44 crc kubenswrapper[5017]: I0129 06:35:44.521367 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02dd5727-894c-4693-9bc7-83dd88ce118c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://177a3e8a7f63905262ffb4ea5f98c4fa97a8d48dd2ff075484a44ea87d66a072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7eec8c502e2516637a85d2b83f7f745f9cc5d6a420f15cea465d92f71e0b8e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1fb588776e0029263904fa0ec3afa53e180b30843864f1dee6999e1cca64b18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc8fc0d9e440c8725e279e6649d3c1e2cdc55589f6b5786187602576e4acd835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://749973b1dc2bcf3fc4e35277f0893a240b3571cadc45529da1229cd7ebe1748e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6a16
eea25767bb4fea94880a90ae883944934c09ff5ba0ced9641169691169d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f74734e0ae29164b6b6f388e56c4bed32978ea65cdf79267820ec237eb300479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\
\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1e341b7acf35edc3ee9e5a4fec11e2270d45ea0f05df30543d2c9163b024646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13649086edb9d7ffe20a9040787d55449e6541a51f03432458f5fcb965fe4723\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13649086edb9d7ffe20a9040787d55449e6541a51f03432458f5fcb965fe4723\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wqgmk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:44Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:44 crc kubenswrapper[5017]: I0129 06:35:44.593116 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 
29 06:35:44 crc kubenswrapper[5017]: I0129 06:35:44.593148 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:44 crc kubenswrapper[5017]: I0129 06:35:44.593158 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:44 crc kubenswrapper[5017]: I0129 06:35:44.593171 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:44 crc kubenswrapper[5017]: I0129 06:35:44.593180 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:44Z","lastTransitionTime":"2026-01-29T06:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:44 crc kubenswrapper[5017]: I0129 06:35:44.605304 5017 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 06:35:44 crc kubenswrapper[5017]: I0129 06:35:44.694993 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:44 crc kubenswrapper[5017]: I0129 06:35:44.695038 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:44 crc kubenswrapper[5017]: I0129 06:35:44.695051 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:44 crc kubenswrapper[5017]: I0129 06:35:44.695072 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:44 crc kubenswrapper[5017]: I0129 06:35:44.695084 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:44Z","lastTransitionTime":"2026-01-29T06:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:44 crc kubenswrapper[5017]: I0129 06:35:44.797673 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:44 crc kubenswrapper[5017]: I0129 06:35:44.797724 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:44 crc kubenswrapper[5017]: I0129 06:35:44.797734 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:44 crc kubenswrapper[5017]: I0129 06:35:44.797750 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:44 crc kubenswrapper[5017]: I0129 06:35:44.797760 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:44Z","lastTransitionTime":"2026-01-29T06:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:35:44 crc kubenswrapper[5017]: I0129 06:35:44.900846 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:44 crc kubenswrapper[5017]: I0129 06:35:44.901117 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:44 crc kubenswrapper[5017]: I0129 06:35:44.901350 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:44 crc kubenswrapper[5017]: I0129 06:35:44.901557 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:44 crc kubenswrapper[5017]: I0129 06:35:44.901740 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:44Z","lastTransitionTime":"2026-01-29T06:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:45 crc kubenswrapper[5017]: I0129 06:35:45.005700 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:45 crc kubenswrapper[5017]: I0129 06:35:45.005747 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:45 crc kubenswrapper[5017]: I0129 06:35:45.005761 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:45 crc kubenswrapper[5017]: I0129 06:35:45.005783 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:45 crc kubenswrapper[5017]: I0129 06:35:45.005798 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:45Z","lastTransitionTime":"2026-01-29T06:35:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:45 crc kubenswrapper[5017]: I0129 06:35:45.110057 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:45 crc kubenswrapper[5017]: I0129 06:35:45.110869 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:45 crc kubenswrapper[5017]: I0129 06:35:45.111156 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:45 crc kubenswrapper[5017]: I0129 06:35:45.111374 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:45 crc kubenswrapper[5017]: I0129 06:35:45.111598 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:45Z","lastTransitionTime":"2026-01-29T06:35:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:35:45 crc kubenswrapper[5017]: I0129 06:35:45.215726 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:45 crc kubenswrapper[5017]: I0129 06:35:45.215800 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:45 crc kubenswrapper[5017]: I0129 06:35:45.215812 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:45 crc kubenswrapper[5017]: I0129 06:35:45.215839 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:45 crc kubenswrapper[5017]: I0129 06:35:45.215853 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:45Z","lastTransitionTime":"2026-01-29T06:35:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:45 crc kubenswrapper[5017]: I0129 06:35:45.283145 5017 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 10:13:04.522602952 +0000 UTC Jan 29 06:35:45 crc kubenswrapper[5017]: I0129 06:35:45.320198 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:45 crc kubenswrapper[5017]: I0129 06:35:45.320298 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:45 crc kubenswrapper[5017]: I0129 06:35:45.320323 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:45 crc kubenswrapper[5017]: I0129 06:35:45.320355 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:45 crc kubenswrapper[5017]: I0129 06:35:45.320404 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:45Z","lastTransitionTime":"2026-01-29T06:35:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:35:45 crc kubenswrapper[5017]: I0129 06:35:45.423906 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:45 crc kubenswrapper[5017]: I0129 06:35:45.423967 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:45 crc kubenswrapper[5017]: I0129 06:35:45.423977 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:45 crc kubenswrapper[5017]: I0129 06:35:45.423993 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:45 crc kubenswrapper[5017]: I0129 06:35:45.424004 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:45Z","lastTransitionTime":"2026-01-29T06:35:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:45 crc kubenswrapper[5017]: I0129 06:35:45.527492 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:45 crc kubenswrapper[5017]: I0129 06:35:45.527546 5017 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 29 06:35:45 crc kubenswrapper[5017]: I0129 06:35:45.527655 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:45 crc kubenswrapper[5017]: I0129 06:35:45.528000 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:45 crc kubenswrapper[5017]: I0129 06:35:45.528036 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:45 crc kubenswrapper[5017]: I0129 06:35:45.528052 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:45Z","lastTransitionTime":"2026-01-29T06:35:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:35:45 crc kubenswrapper[5017]: I0129 06:35:45.612226 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wqgmk_02dd5727-894c-4693-9bc7-83dd88ce118c/ovnkube-controller/0.log" Jan 29 06:35:45 crc kubenswrapper[5017]: I0129 06:35:45.617614 5017 generic.go:334] "Generic (PLEG): container finished" podID="02dd5727-894c-4693-9bc7-83dd88ce118c" containerID="f74734e0ae29164b6b6f388e56c4bed32978ea65cdf79267820ec237eb300479" exitCode=1 Jan 29 06:35:45 crc kubenswrapper[5017]: I0129 06:35:45.617663 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" event={"ID":"02dd5727-894c-4693-9bc7-83dd88ce118c","Type":"ContainerDied","Data":"f74734e0ae29164b6b6f388e56c4bed32978ea65cdf79267820ec237eb300479"} Jan 29 06:35:45 crc kubenswrapper[5017]: I0129 06:35:45.618451 5017 scope.go:117] "RemoveContainer" containerID="f74734e0ae29164b6b6f388e56c4bed32978ea65cdf79267820ec237eb300479" Jan 29 06:35:45 crc kubenswrapper[5017]: I0129 06:35:45.633851 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:45 crc kubenswrapper[5017]: I0129 06:35:45.633912 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:45 crc kubenswrapper[5017]: I0129 06:35:45.633936 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:45 crc kubenswrapper[5017]: I0129 06:35:45.634020 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:45 crc kubenswrapper[5017]: I0129 06:35:45.634043 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:45Z","lastTransitionTime":"2026-01-29T06:35:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:35:45 crc kubenswrapper[5017]: I0129 06:35:45.650445 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a2999ed-a16c-49e3-a07a-3b084d6b8616\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ff1f2d36640f2aebd2cd6919987f3f9000d95a8cefac7844a0671da192b1897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1045c0f8fc7c60a5aa6d2e2b449230e4603b08b0b4244c319e296f89b04ab98d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27acd0133bf699b73534766330da62590b4231608bd96e51bf01b06b5775073d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fc74339c29860f59a280af0f3e21a0f4fa2be4a3e28eb63d90bc63854805d16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:45Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:45 crc kubenswrapper[5017]: I0129 06:35:45.667361 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ce95c49452e7355245f4ec2716c723873a116e2523b8192efe93cfb369260f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:45Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:45 crc kubenswrapper[5017]: I0129 06:35:45.685365 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vwppb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb597cc2-fff7-4d90-a43e-958791d83324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f41b8b485793972343992b2ec30940b852ff5f355e045225e8813683e9f9a3c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g28w2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vwppb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:45Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:45 crc kubenswrapper[5017]: I0129 06:35:45.703152 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-895pl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2672ef63-7861-4c3d-a1b4-03cc9d18f8e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://884bf08569779a1b94a927f1250657ab4d855e78f451a826856022a7c62cd6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5qdt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd299f15e4338652a37e1e3f09560a50e6c59d7bf29e827462d060a56294438\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5qdt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-895pl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:45Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:45 crc kubenswrapper[5017]: I0129 06:35:45.718603 5017 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-qwq46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4d3122b-b4b4-41ac-896a-566afdcda936\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45817069aa7e09fd97b77922f9b5a651f2a067991115021c0a85159a93d523af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z89l9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qwq46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:45Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:45 crc kubenswrapper[5017]: I0129 06:35:45.740593 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:45 crc kubenswrapper[5017]: I0129 06:35:45.740685 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:45 crc kubenswrapper[5017]: I0129 06:35:45.740716 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:45 crc kubenswrapper[5017]: I0129 06:35:45.740753 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:45 crc kubenswrapper[5017]: I0129 06:35:45.740777 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:45Z","lastTransitionTime":"2026-01-29T06:35:45Z","reason":"KubeletNotReady","message":"container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:45 crc kubenswrapper[5017]: I0129 06:35:45.745119 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-m2gbd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4036e581-21bf-4ea0-aaf5-84ab8a841888\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ad091986feb992d2ef247a76476d77d58cb487cf0baa60f81e5d3ddf0d30c3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7399e8e88798a89139226e395d22a2aa61370296a3789f63d766f42ea1c9f4df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7399e8e88798a89139226e395d22a2aa61370296a3789f63d766f42ea1c9f4df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\"
:true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9418ae41c02e41fb3aeb7ce97e7d7b91f9a193ac23cbfe692cc79b071aa3adca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9418ae41c02e41fb3aeb7ce97e7d7b91f9a193ac23cbfe692cc79b071aa3adca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1065c2c8660d096dba79585cd7d954ae31491d258129e132b1e0cb1218bc849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1065c2c8660d096dba79585cd7d954ae31491d258129e132b1e0cb1218bc849\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb8f183765b578cd4362f3d3c13e20b8cb70aec46adbcdb750164e91e2bd53e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb8f183765b578cd4362f3d3c13e20b8cb70aec46adbcdb750164e91e2bd53e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\
"startedAt\\\":\\\"2026-01-29T06:35:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23ac8b597fcf07378d69cd788cf4f79e236218e22a58ffaa5a72688e650067c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23ac8b597fcf07378d69cd788cf4f79e236218e22a58ffaa5a72688e650067c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8548b649f7fcef84467fc98f1b3aacc576c363a1c83087971aafaf8787734901\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8548b649f7fcef84467fc98f1b3aacc576c363a1c83087971aafaf8787734901\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-m2gbd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:45Z is after 
2025-08-24T17:21:41Z" Jan 29 06:35:45 crc kubenswrapper[5017]: I0129 06:35:45.765488 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bf710b24256eeb9149ca710dd9fcceae762df89486e0cf0f341c278eadda2fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:45Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:45 crc kubenswrapper[5017]: I0129 06:35:45.790118 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9jkcd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ae056f0-e054-45da-9638-73074b7c8a3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17d52fc64784408e447773d44de6e83d32029a77073a0c1ee206a51d81eda62f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x84hz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9jkcd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:45Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:45 crc kubenswrapper[5017]: I0129 06:35:45.822725 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02dd5727-894c-4693-9bc7-83dd88ce118c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://177a3e8a7f63905262ffb4ea5f98c4fa97a8d48dd2ff075484a44ea87d66a072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7eec8c502e2516637a85d2b83f7f745f9cc5d6a420f15cea465d92f71e0b8e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1fb588776e0029263904fa0ec3afa53e180b30843864f1dee6999e1cca64b18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc8fc0d9e440c8725e279e6649d3c1e2cdc55589f6b5786187602576e4acd835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://749973b1dc2bcf3fc4e35277f0893a240b3571cadc45529da1229cd7ebe1748e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6a16
eea25767bb4fea94880a90ae883944934c09ff5ba0ced9641169691169d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f74734e0ae29164b6b6f388e56c4bed32978ea65cdf79267820ec237eb300479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f74734e0ae29164b6b6f388e56c4bed32978ea65cdf79267820ec237eb300479\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T06:35:44Z\\\",\\\"message\\\":\\\"35:44.798918 6317 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 06:35:44.799007 6317 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 06:35:44.799042 6317 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0129 06:35:44.799161 6317 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 06:35:44.799236 6317 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 06:35:44.799350 6317 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0129 06:35:44.799418 6317 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 06:35:44.799491 6317 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 06:35:44.799883 6317 reflector.go:311] Stopping reflector *v1.Service (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1e341b7acf35edc3ee9e5a4fec11e2270d45ea0f05df30543d2c9163b024646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13649086edb9d7ffe20a9040787d55449e6541a51f03432458f5fcb965fe4723\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13649086edb9d7ffe20a9040787d55449e6541a51f03432458f5fcb965fe4723\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wqgmk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:45Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:45 crc kubenswrapper[5017]: I0129 06:35:45.844820 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c67cd79-8431-401b-8f03-9387813b30ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe28ec8d64ea50ecca5d4531440159b5277aff7de80b18813bb91f5c32923326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a82e8858b15c840a63737058d3af382c1a6469d7d1d3ba6e44759f33e55f2543\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a303c0532bb4a555700875ed27e56205d709658417a72f47868fb7bc8291a77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ef12beaf08b33d7694b3ea5a4705e37ec4a5b53f9cf1a7e0dd41a78d99bc8f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27dd6068095b58038216d1837a6b7966c514c7d8a9db5d144b043d918b757394\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T06:35:33Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 06:35:28.003940 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 06:35:28.004898 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1201671994/tls.crt::/tmp/serving-cert-1201671994/tls.key\\\\\\\"\\\\nI0129 06:35:33.575231 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 06:35:33.579536 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 06:35:33.579564 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 06:35:33.579587 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 06:35:33.579594 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 06:35:33.587038 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 06:35:33.587075 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 06:35:33.587080 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 06:35:33.587084 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 06:35:33.587087 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 06:35:33.587091 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 06:35:33.587094 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 06:35:33.587049 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 06:35:33.588897 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c256463618132e5305eb6fed3ef923e6a3c200bbc4b917e7f678e1c78af05084\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f97561ea3b9baa3440e5148060c80d57e9cf17837e7799b057f618d9084e8a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f97561ea3b9baa3440e5148060c80d57e9cf17837e7799b057f618d9084e8a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:45Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:45 crc kubenswrapper[5017]: I0129 06:35:45.845158 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:45 crc kubenswrapper[5017]: I0129 06:35:45.845198 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:45 crc kubenswrapper[5017]: I0129 06:35:45.845209 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:45 crc kubenswrapper[5017]: I0129 06:35:45.845226 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:45 crc kubenswrapper[5017]: I0129 06:35:45.845237 5017 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:45Z","lastTransitionTime":"2026-01-29T06:35:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:45 crc kubenswrapper[5017]: I0129 06:35:45.864258 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:45Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:45 crc kubenswrapper[5017]: I0129 06:35:45.883069 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:45Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:45 crc kubenswrapper[5017]: I0129 06:35:45.947158 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b8abb3dbc845f89735c58741f02550053402975d5edfe5e69f2323cc82bb8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7b335d7b6120621d4c94dfd8cdc86561a91d9c84a9481d352042d7728d7f6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:45Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:45 crc kubenswrapper[5017]: I0129 06:35:45.948880 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:45 crc kubenswrapper[5017]: I0129 06:35:45.948939 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:45 crc kubenswrapper[5017]: I0129 06:35:45.948993 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:45 crc kubenswrapper[5017]: I0129 06:35:45.949019 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:45 crc kubenswrapper[5017]: I0129 06:35:45.949063 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:45Z","lastTransitionTime":"2026-01-29T06:35:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:45 crc kubenswrapper[5017]: I0129 06:35:45.967700 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:45Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:46 crc kubenswrapper[5017]: I0129 06:35:46.052866 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:46 crc kubenswrapper[5017]: I0129 06:35:46.053022 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:46 crc kubenswrapper[5017]: I0129 06:35:46.053047 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:46 crc kubenswrapper[5017]: I0129 06:35:46.053127 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:46 crc kubenswrapper[5017]: I0129 06:35:46.053148 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:46Z","lastTransitionTime":"2026-01-29T06:35:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:46 crc kubenswrapper[5017]: I0129 06:35:46.155505 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:46 crc kubenswrapper[5017]: I0129 06:35:46.155560 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:46 crc kubenswrapper[5017]: I0129 06:35:46.155572 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:46 crc kubenswrapper[5017]: I0129 06:35:46.155592 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:46 crc kubenswrapper[5017]: I0129 06:35:46.155604 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:46Z","lastTransitionTime":"2026-01-29T06:35:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:35:46 crc kubenswrapper[5017]: I0129 06:35:46.259138 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:46 crc kubenswrapper[5017]: I0129 06:35:46.259204 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:46 crc kubenswrapper[5017]: I0129 06:35:46.259225 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:46 crc kubenswrapper[5017]: I0129 06:35:46.259257 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:46 crc kubenswrapper[5017]: I0129 06:35:46.259278 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:46Z","lastTransitionTime":"2026-01-29T06:35:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:46 crc kubenswrapper[5017]: I0129 06:35:46.283716 5017 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 20:26:57.444311966 +0000 UTC Jan 29 06:35:46 crc kubenswrapper[5017]: I0129 06:35:46.316310 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 06:35:46 crc kubenswrapper[5017]: I0129 06:35:46.316369 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 06:35:46 crc kubenswrapper[5017]: E0129 06:35:46.316561 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 06:35:46 crc kubenswrapper[5017]: I0129 06:35:46.316650 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 06:35:46 crc kubenswrapper[5017]: E0129 06:35:46.317060 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 06:35:46 crc kubenswrapper[5017]: E0129 06:35:46.317112 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 06:35:46 crc kubenswrapper[5017]: I0129 06:35:46.362574 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:46 crc kubenswrapper[5017]: I0129 06:35:46.362647 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:46 crc kubenswrapper[5017]: I0129 06:35:46.362667 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:46 crc kubenswrapper[5017]: I0129 06:35:46.362705 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:46 crc kubenswrapper[5017]: I0129 06:35:46.362726 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:46Z","lastTransitionTime":"2026-01-29T06:35:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:46 crc kubenswrapper[5017]: I0129 06:35:46.465490 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:46 crc kubenswrapper[5017]: I0129 06:35:46.465540 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:46 crc kubenswrapper[5017]: I0129 06:35:46.465553 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:46 crc kubenswrapper[5017]: I0129 06:35:46.465576 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:46 crc kubenswrapper[5017]: I0129 06:35:46.465589 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:46Z","lastTransitionTime":"2026-01-29T06:35:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:35:46 crc kubenswrapper[5017]: I0129 06:35:46.525198 5017 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 29 06:35:46 crc kubenswrapper[5017]: I0129 06:35:46.568058 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:46 crc kubenswrapper[5017]: I0129 06:35:46.568134 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:46 crc kubenswrapper[5017]: I0129 06:35:46.568154 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:46 crc kubenswrapper[5017]: I0129 06:35:46.568178 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:46 crc kubenswrapper[5017]: I0129 06:35:46.568195 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:46Z","lastTransitionTime":"2026-01-29T06:35:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:46 crc kubenswrapper[5017]: I0129 06:35:46.626275 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wqgmk_02dd5727-894c-4693-9bc7-83dd88ce118c/ovnkube-controller/0.log" Jan 29 06:35:46 crc kubenswrapper[5017]: I0129 06:35:46.631121 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" event={"ID":"02dd5727-894c-4693-9bc7-83dd88ce118c","Type":"ContainerStarted","Data":"7e3fa5697d642a564e7ae639a013ea62bdddd3e84a927e14822c640ecf6caa05"} Jan 29 06:35:46 crc kubenswrapper[5017]: I0129 06:35:46.631746 5017 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 06:35:46 crc kubenswrapper[5017]: I0129 06:35:46.655463 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bf710b24256eeb9149ca710dd9fcceae762df89486e0cf0f341c278eadda2fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:46Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:46 crc kubenswrapper[5017]: I0129 06:35:46.672213 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:46 crc kubenswrapper[5017]: I0129 06:35:46.672329 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:46 crc kubenswrapper[5017]: I0129 06:35:46.672350 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:46 crc kubenswrapper[5017]: I0129 06:35:46.672377 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:46 crc kubenswrapper[5017]: I0129 06:35:46.672418 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:46Z","lastTransitionTime":"2026-01-29T06:35:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:35:46 crc kubenswrapper[5017]: I0129 06:35:46.674116 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9jkcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ae056f0-e054-45da-9638-73074b7c8a3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17d52fc64784408e447773d44de6e83d32029a77073a0c1ee206a51d81eda62f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x84hz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9jkcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:46Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:46 crc kubenswrapper[5017]: I0129 06:35:46.699977 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02dd5727-894c-4693-9bc7-83dd88ce118c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://177a3e8a7f63905262ffb4ea5f98c4fa97a8d48dd2ff075484a44ea87d66a072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7eec8c502e2516637a85d2b83f7f745f9cc5d6a420f15cea465d92f71e0b8e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mou
ntPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1fb588776e0029263904fa0ec3afa53e180b30843864f1dee6999e1cca64b18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc8fc0d9e440c8725e279e6649d3c1e2cdc55589f6b5786187602576e4acd835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://749973b1dc2bcf3fc4e35277f0893a240b3571cadc45529da1229cd7ebe1748e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\
\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6a16eea25767bb4fea94880a90ae883944934c09ff5ba0ced9641169691169d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e3fa5697d642a564e7ae639a013ea62bdddd3e84a927e14822c640ecf6caa05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f74734e0ae29164b6b6f388e56c4bed32978ea65cdf79267820ec237eb300479\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T06:35:44Z\\\",\\\"message\\\":\\\"35:44.798918 6317 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 06:35:44.799007 6317 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 06:35:44.799042 6317 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0129 06:35:44.799161 6317 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 06:35:44.799236 6317 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 06:35:44.799350 6317 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0129 06:35:44.799418 6317 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 06:35:44.799491 6317 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 06:35:44.799883 6317 
reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1e341b7acf35edc3ee9e5a4fec11e2270d45ea0f05df30543d2c9163b024646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"host
IPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13649086edb9d7ffe20a9040787d55449e6541a51f03432458f5fcb965fe4723\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13649086edb9d7ffe20a9040787d55449e6541a51f03432458f5fcb965fe4723\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wqgmk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:46Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:46 crc kubenswrapper[5017]: I0129 06:35:46.727554 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c67cd79-8431-401b-8f03-9387813b30ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe28ec8d64ea50ecca5d4531440159b5277aff7de80b18813bb91f5c32923326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a82e8858b15c840a63737058d3af382c1a6469d7d1d3ba6e44759f33e55f2543\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a303c0532bb4a555700875ed27e56205d709658417a72f47868fb7bc8291a77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ef12beaf08b33d7694b3ea5a4705e37ec4a5b53f9cf1a7e0dd41a78d99bc8f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27dd6068095b58038216d1837a6b7966c514c7d8a9db5d144b043d918b757394\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T06:35:33Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 06:35:28.003940 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 06:35:28.004898 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1201671994/tls.crt::/tmp/serving-cert-1201671994/tls.key\\\\\\\"\\\\nI0129 06:35:33.575231 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 06:35:33.579536 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 06:35:33.579564 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 06:35:33.579587 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 06:35:33.579594 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 06:35:33.587038 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 06:35:33.587075 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 06:35:33.587080 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 06:35:33.587084 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 06:35:33.587087 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 06:35:33.587091 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 06:35:33.587094 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 06:35:33.587049 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 06:35:33.588897 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c256463618132e5305eb6fed3ef923e6a3c200bbc4b917e7f678e1c78af05084\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f97561ea3b9baa3440e5148060c80d57e9cf17837e7799b057f618d9084e8a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f97561ea3b9baa3440e5148060c80d57e9cf17837e7799b057f618d9084e8a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:46Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:46 crc kubenswrapper[5017]: I0129 06:35:46.743327 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:46Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:46 crc kubenswrapper[5017]: I0129 06:35:46.763990 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:46Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:46 crc kubenswrapper[5017]: I0129 06:35:46.779375 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:46 crc kubenswrapper[5017]: I0129 06:35:46.779455 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:46 crc kubenswrapper[5017]: I0129 06:35:46.779475 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:46 crc kubenswrapper[5017]: I0129 06:35:46.779508 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:46 crc kubenswrapper[5017]: I0129 06:35:46.779529 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:46Z","lastTransitionTime":"2026-01-29T06:35:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:35:46 crc kubenswrapper[5017]: I0129 06:35:46.803823 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b8abb3dbc845f89735c58741f02550053402975d5edfe5e69f2323cc82bb8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7b335d7b6120621d4c94dfd8cdc86561a91d9c84a9481d352042d7728d7f6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:46Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:46 crc kubenswrapper[5017]: I0129 06:35:46.831588 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:46Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:46 crc kubenswrapper[5017]: I0129 06:35:46.862646 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a2999ed-a16c-49e3-a07a-3b084d6b8616\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ff1f2d36640f2aebd2cd6919987f3f9000d95a8cefac7844a0671da192b1897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1045c0f8fc7c60a5aa6d2e2b449230e4603b08b0b4244c319e296f89b04ab98d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27acd0133bf699b73534766330da62590b4231608bd96e51bf01b06b5775073d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fc74339c29860f59a280af0f3e21a0f4fa2be4a3e28eb63d90bc63854805d16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:46Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:46 crc kubenswrapper[5017]: I0129 06:35:46.882054 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ce95c49452e7355245f4ec2716c723873a116e2523b8192efe93cfb369260f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-29T06:35:46Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:46 crc kubenswrapper[5017]: I0129 06:35:46.882141 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:46 crc kubenswrapper[5017]: I0129 06:35:46.882182 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:46 crc kubenswrapper[5017]: I0129 06:35:46.882189 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:46 crc kubenswrapper[5017]: I0129 06:35:46.882205 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:46 crc kubenswrapper[5017]: I0129 06:35:46.882216 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:46Z","lastTransitionTime":"2026-01-29T06:35:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:46 crc kubenswrapper[5017]: I0129 06:35:46.898991 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vwppb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb597cc2-fff7-4d90-a43e-958791d83324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f41b8b485793972343992b2ec30940b852ff5f355e045225e8813683e9f9a3c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g28w2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[
{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vwppb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:46Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:46 crc kubenswrapper[5017]: I0129 06:35:46.916314 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-895pl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2672ef63-7861-4c3d-a1b4-03cc9d18f8e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://884bf08569779a1b94a927f1250657ab4d855e78f451a826856022a7c62cd6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5qdt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd299f15e4338652a37e1e3f09560a50e6c59d7bf29e827462d060a56294438\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5qdt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126
.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-895pl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:46Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:46 crc kubenswrapper[5017]: I0129 06:35:46.930898 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qwq46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4d3122b-b4b4-41ac-896a-566afdcda936\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45817069aa7e09fd97b77922f9b5a651f2a067991115021c0a85159a93d523af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z89l9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qwq46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:46Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:46 crc kubenswrapper[5017]: I0129 06:35:46.946890 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-m2gbd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4036e581-21bf-4ea0-aaf5-84ab8a841888\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ad091986feb992d2ef247a76476d77d58cb487cf0baa60f81e5d3ddf0d30c3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7399e8e88798a89139226e395d22a2aa61370296a3789f63d766f42ea1c9f4df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7399e8e88798a89139226e395d22a2aa61370296a3789f63d766f42ea1c9f4df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9418ae41c02e41fb3aeb7ce97e7d7b91f9a193ac23cbfe692cc79b071aa3adca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9418ae41c02e41fb3aeb7ce97e7d7b91f9a193ac23cbfe692cc79b071aa3adca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1065c2c8660d096dba79585cd7d954ae31491d258129e132b1e0cb1218bc849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1065c2c8660d096dba79585cd7d954ae31491d258129e132b1e0cb1218bc849\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb8f183765b578cd4362f3d3c13e20b8cb70aec46adbcdb750164e91e2bd53e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb8f183765b578cd4362f3d3c13e20b8cb70aec46adbcdb750164e91e2bd53e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23ac8b597fcf07378d69cd788cf4f79e236218e22a58ffaa5a72688e650067c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23ac8b597fcf07378d69cd788cf4f79e236218e22a58ffaa5a72688e650067c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8548b649f7fcef84467fc98f1b3aacc576c363a1c83087971aafaf8787734901\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8548b649f7fcef84467fc98f1b3aacc576c363a1c83087971aafaf8787734901\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-m2gbd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:46Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:46 crc kubenswrapper[5017]: I0129 06:35:46.985259 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:46 crc kubenswrapper[5017]: I0129 06:35:46.985293 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:46 crc 
kubenswrapper[5017]: I0129 06:35:46.985302 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:46 crc kubenswrapper[5017]: I0129 06:35:46.985316 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:46 crc kubenswrapper[5017]: I0129 06:35:46.985325 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:46Z","lastTransitionTime":"2026-01-29T06:35:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:47 crc kubenswrapper[5017]: I0129 06:35:47.088577 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:47 crc kubenswrapper[5017]: I0129 06:35:47.088631 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:47 crc kubenswrapper[5017]: I0129 06:35:47.088641 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:47 crc kubenswrapper[5017]: I0129 06:35:47.088662 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:47 crc kubenswrapper[5017]: I0129 06:35:47.088671 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:47Z","lastTransitionTime":"2026-01-29T06:35:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:47 crc kubenswrapper[5017]: I0129 06:35:47.191729 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:47 crc kubenswrapper[5017]: I0129 06:35:47.191801 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:47 crc kubenswrapper[5017]: I0129 06:35:47.191831 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:47 crc kubenswrapper[5017]: I0129 06:35:47.191869 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:47 crc kubenswrapper[5017]: I0129 06:35:47.191895 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:47Z","lastTransitionTime":"2026-01-29T06:35:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:35:47 crc kubenswrapper[5017]: I0129 06:35:47.283950 5017 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 17:59:46.655989677 +0000 UTC Jan 29 06:35:47 crc kubenswrapper[5017]: I0129 06:35:47.294811 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:47 crc kubenswrapper[5017]: I0129 06:35:47.294881 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:47 crc kubenswrapper[5017]: I0129 06:35:47.294889 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:47 crc kubenswrapper[5017]: I0129 06:35:47.294905 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:47 crc kubenswrapper[5017]: I0129 06:35:47.294916 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:47Z","lastTransitionTime":"2026-01-29T06:35:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:47 crc kubenswrapper[5017]: I0129 06:35:47.398438 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:47 crc kubenswrapper[5017]: I0129 06:35:47.398513 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:47 crc kubenswrapper[5017]: I0129 06:35:47.398532 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:47 crc kubenswrapper[5017]: I0129 06:35:47.398562 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:47 crc kubenswrapper[5017]: I0129 06:35:47.398585 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:47Z","lastTransitionTime":"2026-01-29T06:35:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:35:47 crc kubenswrapper[5017]: I0129 06:35:47.501678 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:47 crc kubenswrapper[5017]: I0129 06:35:47.501754 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:47 crc kubenswrapper[5017]: I0129 06:35:47.501779 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:47 crc kubenswrapper[5017]: I0129 06:35:47.501812 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:47 crc kubenswrapper[5017]: I0129 06:35:47.501837 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:47Z","lastTransitionTime":"2026-01-29T06:35:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:47 crc kubenswrapper[5017]: I0129 06:35:47.606224 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:47 crc kubenswrapper[5017]: I0129 06:35:47.606335 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:47 crc kubenswrapper[5017]: I0129 06:35:47.606355 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:47 crc kubenswrapper[5017]: I0129 06:35:47.606383 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:47 crc kubenswrapper[5017]: I0129 06:35:47.606405 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:47Z","lastTransitionTime":"2026-01-29T06:35:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:35:47 crc kubenswrapper[5017]: I0129 06:35:47.639817 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wqgmk_02dd5727-894c-4693-9bc7-83dd88ce118c/ovnkube-controller/1.log" Jan 29 06:35:47 crc kubenswrapper[5017]: I0129 06:35:47.641095 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wqgmk_02dd5727-894c-4693-9bc7-83dd88ce118c/ovnkube-controller/0.log" Jan 29 06:35:47 crc kubenswrapper[5017]: I0129 06:35:47.646693 5017 generic.go:334] "Generic (PLEG): container finished" podID="02dd5727-894c-4693-9bc7-83dd88ce118c" containerID="7e3fa5697d642a564e7ae639a013ea62bdddd3e84a927e14822c640ecf6caa05" exitCode=1 Jan 29 06:35:47 crc kubenswrapper[5017]: I0129 06:35:47.646761 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" event={"ID":"02dd5727-894c-4693-9bc7-83dd88ce118c","Type":"ContainerDied","Data":"7e3fa5697d642a564e7ae639a013ea62bdddd3e84a927e14822c640ecf6caa05"} Jan 29 06:35:47 crc kubenswrapper[5017]: I0129 06:35:47.646821 5017 scope.go:117] "RemoveContainer" containerID="f74734e0ae29164b6b6f388e56c4bed32978ea65cdf79267820ec237eb300479" Jan 29 06:35:47 crc kubenswrapper[5017]: I0129 06:35:47.648716 5017 scope.go:117] "RemoveContainer" containerID="7e3fa5697d642a564e7ae639a013ea62bdddd3e84a927e14822c640ecf6caa05" Jan 29 06:35:47 crc kubenswrapper[5017]: E0129 06:35:47.649132 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-wqgmk_openshift-ovn-kubernetes(02dd5727-894c-4693-9bc7-83dd88ce118c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" podUID="02dd5727-894c-4693-9bc7-83dd88ce118c" Jan 29 06:35:47 crc kubenswrapper[5017]: I0129 06:35:47.670333 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:47Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:47 crc kubenswrapper[5017]: I0129 06:35:47.688890 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:47Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:47 crc kubenswrapper[5017]: I0129 06:35:47.706986 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b8abb3dbc845f89735c58741f02550053402975d5edfe5e69f2323cc82bb8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7b335d7b6120621d4c94dfd8cdc86561a91d9c84a9481d352042d7728d7f6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:47Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:47 crc kubenswrapper[5017]: I0129 06:35:47.709375 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:47 crc kubenswrapper[5017]: I0129 06:35:47.709406 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:47 crc kubenswrapper[5017]: I0129 06:35:47.709415 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:47 crc kubenswrapper[5017]: I0129 06:35:47.709432 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:47 crc kubenswrapper[5017]: I0129 06:35:47.709447 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:47Z","lastTransitionTime":"2026-01-29T06:35:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:47 crc kubenswrapper[5017]: I0129 06:35:47.720602 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:47Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:47 crc kubenswrapper[5017]: I0129 06:35:47.740015 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c67cd79-8431-401b-8f03-9387813b30ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe28ec8d64ea50ecca5d4531440159b5277aff7de80b18813bb91f5c32923326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a82e8858b15c840a63737058d3af382c1a6469d7d1d3ba6e44759f33e55f2543\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a303c0532bb4a555700875ed27e56205d709658417a72f47868fb7bc8291a77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ef12beaf08b33d7694b3ea5a4705e37ec4a5b53f9cf1a7e0dd41a78d99bc8f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27dd6068095b58038216d1837a6b7966c514c7d8a9db5d144b043d918b757394\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T06:35:33Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 06:35:28.003940 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 06:35:28.004898 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1201671994/tls.crt::/tmp/serving-cert-1201671994/tls.key\\\\\\\"\\\\nI0129 06:35:33.575231 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 06:35:33.579536 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 06:35:33.579564 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 06:35:33.579587 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 06:35:33.579594 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 06:35:33.587038 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 06:35:33.587075 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 06:35:33.587080 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 06:35:33.587084 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 06:35:33.587087 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 06:35:33.587091 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 06:35:33.587094 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 06:35:33.587049 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 06:35:33.588897 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c256463618132e5305eb6fed3ef923e6a3c200bbc4b917e7f678e1c78af05084\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f97561ea3b9baa3440e5148060c80d57e9cf17837e7799b057f618d9084e8a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f97561ea3b9baa3440e5148060c80d57e9cf17837e7799b057f618d9084e8a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:47Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:47 crc kubenswrapper[5017]: I0129 06:35:47.763283 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a2999ed-a16c-49e3-a07a-3b084d6b8616\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ff1f2d36640f2aebd2cd6919987f3f9000d95a8cefac7844a0671da192b1897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1045c0f8fc7c60a5aa6d2e2b449230e4603b08b0b4244c319e296f89b04ab98d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27acd0133bf699b73534766330da62590b4231608bd96e51bf01b06b5775073d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fc74339c29860f59a280af0f3e21a0f4fa2be4a3e28eb63d90bc63854805d16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:47Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:47 crc kubenswrapper[5017]: I0129 06:35:47.784483 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ce95c49452e7355245f4ec2716c723873a116e2523b8192efe93cfb369260f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-29T06:35:47Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:47 crc kubenswrapper[5017]: I0129 06:35:47.799506 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vwppb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb597cc2-fff7-4d90-a43e-958791d83324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f41b8b485793972343992b2ec30940b852ff5f355e045225e8813683e9f9a3c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g28w2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vwppb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:47Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:47 crc kubenswrapper[5017]: I0129 06:35:47.811603 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:47 crc kubenswrapper[5017]: I0129 06:35:47.811703 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:47 crc kubenswrapper[5017]: I0129 06:35:47.811746 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:47 crc kubenswrapper[5017]: I0129 06:35:47.811763 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:47 crc kubenswrapper[5017]: 
I0129 06:35:47.811775 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:47Z","lastTransitionTime":"2026-01-29T06:35:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:47 crc kubenswrapper[5017]: I0129 06:35:47.813516 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qwq46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4d3122b-b4b4-41ac-896a-566afdcda936\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45817069aa7e09fd97b77922f9b5a651f2a067991115021c0a85159a93d523af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z89l9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qwq46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:47Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:47 crc kubenswrapper[5017]: I0129 06:35:47.832479 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-m2gbd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4036e581-21bf-4ea0-aaf5-84ab8a841888\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ad091986feb992d2ef247a76476d77d58cb487cf0baa60f81e5d3ddf0d30c3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7399e8e88798a89139226e395d22a2aa61370296a3789f63d766f42ea1c9f4df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7399e8e88798a89139226e395d22a2aa61370296a3789f63d766f42ea1c9f4df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9418ae41c02e41fb3aeb7ce97e7d7b91f9a193ac23cbfe692cc79b071aa3adca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9418ae41c02e41fb3aeb7ce97e7d7b91f9a193ac23cbfe692cc79b071aa3adca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1065c2c8660d096dba79585cd7d954ae31491d258129e132b1e0cb1218bc849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1065c2c8660d096dba79585cd7d954ae31491d258129e132b1e0cb1218bc849\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb8f183765b578cd4362f3d3c13e20b8cb70aec46adbcdb750164e91e2bd53e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb8f183765b578cd4362f3d3c13e20b8cb70aec46adbcdb750164e91e2bd53e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23ac8b597fcf07378d69cd788cf4f79e236218e22a58ffaa5a72688e650067c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23ac8b597fcf07378d69cd788cf4f79e236218e22a58ffaa5a72688e650067c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8548b649f7fcef84467fc98f1b3aacc576c363a1c83087971aafaf8787734901\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8548b649f7fcef84467fc98f1b3aacc576c363a1c83087971aafaf8787734901\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-m2gbd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:47Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:47 crc kubenswrapper[5017]: I0129 06:35:47.847273 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-895pl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2672ef63-7861-4c3d-a1b4-03cc9d18f8e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://884bf08569779a1b94a927f1250657ab4d855e78f451a826856022a7c62cd6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5qdt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd299f15e4338652a37e1e3f09560a50e6c59d7bf29e827462d060a56294438\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5qdt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-895pl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:47Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:47 crc kubenswrapper[5017]: I0129 06:35:47.873281 5017 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bf710b24256eeb9149ca710dd9fcceae762df89486e0cf0f341c278eadda2fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:47Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:47 crc kubenswrapper[5017]: I0129 06:35:47.889880 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9jkcd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ae056f0-e054-45da-9638-73074b7c8a3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17d52fc64784408e447773d44de6e83d32029a77073a0c1ee206a51d81eda62f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x84hz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9jkcd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:47Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:47 crc kubenswrapper[5017]: I0129 06:35:47.914086 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:47 crc kubenswrapper[5017]: I0129 06:35:47.914119 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:47 crc kubenswrapper[5017]: I0129 06:35:47.914127 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:47 crc kubenswrapper[5017]: I0129 06:35:47.914140 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:47 crc kubenswrapper[5017]: I0129 06:35:47.914149 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:47Z","lastTransitionTime":"2026-01-29T06:35:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:47 crc kubenswrapper[5017]: I0129 06:35:47.920342 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02dd5727-894c-4693-9bc7-83dd88ce118c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://177a3e8a7f63905262ffb4ea5f98c4fa97a8d48dd2ff075484a44ea87d66a072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7eec8c502e2516637a85d2b83f7f745f9cc5d6a420f15cea465d92f71e0b8e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1fb588776e0029263904fa0ec3afa53e180b30843864f1dee6999e1cca64b18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc8fc0d9e440c8725e279e6649d3c1e2cdc55589f6b5786187602576e4acd835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://749973b1dc2bcf3fc4e35277f0893a240b3571cadc45529da1229cd7ebe1748e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6a16eea25767bb4fea94880a90ae883944934c09ff5ba0ced9641169691169d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e3fa5697d642a564e7ae639a013ea62bdddd3e8
4a927e14822c640ecf6caa05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f74734e0ae29164b6b6f388e56c4bed32978ea65cdf79267820ec237eb300479\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T06:35:44Z\\\",\\\"message\\\":\\\"35:44.798918 6317 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 06:35:44.799007 6317 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 06:35:44.799042 6317 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0129 06:35:44.799161 6317 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 06:35:44.799236 6317 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 06:35:44.799350 6317 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0129 06:35:44.799418 6317 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 06:35:44.799491 6317 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 06:35:44.799883 6317 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e3fa5697d642a564e7ae639a013ea62bdddd3e84a927e14822c640ecf6caa05\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T06:35:46Z\\\",\\\"message\\\":\\\"I0129 06:35:46.669016 6436 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0129 06:35:46.669192 6436 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0129 06:35:46.669219 6436 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0129 06:35:46.669254 6436 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0129 06:35:46.669275 6436 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0129 06:35:46.670555 6436 handler.go:208] Removed *v1.Node event handler 7\\\\nI0129 06:35:46.670651 6436 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 06:35:46.671045 6436 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0129 06:35:46.671114 6436 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0129 06:35:46.671437 6436 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0129 06:35:46.671457 6436 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0129 06:35:46.671479 6436 factory.go:656] Stopping watch factory\\\\nI0129 06:35:46.671478 6436 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0129 06:35:46.671467 6436 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0129 
06:35:46.671499 6436 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0129 06:35:46.671500 6436 ovnkube.go:599] Stopped ovnkube\\\\nI0129 06\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1e341b7acf35edc3ee9e5a4fec11e2270d45ea0f05df30543d2c9163b024646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13649086edb9
d7ffe20a9040787d55449e6541a51f03432458f5fcb965fe4723\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13649086edb9d7ffe20a9040787d55449e6541a51f03432458f5fcb965fe4723\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wqgmk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:47Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:48 crc kubenswrapper[5017]: I0129 06:35:48.018178 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:48 crc kubenswrapper[5017]: I0129 06:35:48.018270 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:48 crc kubenswrapper[5017]: I0129 06:35:48.018298 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:48 crc kubenswrapper[5017]: I0129 06:35:48.018334 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:48 crc kubenswrapper[5017]: I0129 06:35:48.018354 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:48Z","lastTransitionTime":"2026-01-29T06:35:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:48 crc kubenswrapper[5017]: I0129 06:35:48.050418 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-46ch5"] Jan 29 06:35:48 crc kubenswrapper[5017]: I0129 06:35:48.051267 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-46ch5" Jan 29 06:35:48 crc kubenswrapper[5017]: I0129 06:35:48.053751 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 29 06:35:48 crc kubenswrapper[5017]: I0129 06:35:48.054474 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 29 06:35:48 crc kubenswrapper[5017]: I0129 06:35:48.081092 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:48Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:48 crc kubenswrapper[5017]: I0129 06:35:48.096138 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b8abb3dbc845f89735c58741f02550053402975d5edfe5e69f2323cc82bb8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7b335d7b6120621d4c94dfd8cdc86561a91d9c84a9481d352042d7728d7f6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:48Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:48 crc kubenswrapper[5017]: I0129 06:35:48.114264 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:48Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:48 crc kubenswrapper[5017]: I0129 06:35:48.121534 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:48 crc kubenswrapper[5017]: I0129 06:35:48.121587 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:48 crc kubenswrapper[5017]: I0129 06:35:48.121606 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:48 crc kubenswrapper[5017]: I0129 06:35:48.121631 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:48 crc kubenswrapper[5017]: I0129 06:35:48.121647 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:48Z","lastTransitionTime":"2026-01-29T06:35:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:35:48 crc kubenswrapper[5017]: I0129 06:35:48.127130 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-46ch5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42db11f6-649f-486e-83a2-7506fdf51ba2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qm9zl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qm9zl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-46ch5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-01-29T06:35:48Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:48 crc kubenswrapper[5017]: I0129 06:35:48.145477 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c67cd79-8431-401b-8f03-9387813b30ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe28ec8d64ea50ecca5d4531440159b5277aff7de80b18813bb91f5c32923326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a82e8858b15c840a63737058d3af382c1a6469d7d1d3ba6e44759f33e55f2543\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a303c0532bb4a555700875ed27e56205d709658417a72f47868fb7bc8291a77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\
"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ef12beaf08b33d7694b3ea5a4705e37ec4a5b53f9cf1a7e0dd41a78d99bc8f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27dd6068095b58038216d1837a6b7966c514c7d8a9db5d144b043d918b757394\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T06:35:33Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 06:35:28.003940 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 06:35:28.004898 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1201671994/tls.crt::/tmp/serving-cert-1201671994/tls.key\\\\\\\"\\\\nI0129 06:35:33.575231 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 06:35:33.579536 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 06:35:33.579564 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 06:35:33.579587 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 06:35:33.579594 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 06:35:33.587038 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 06:35:33.587075 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 06:35:33.587080 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 06:35:33.587084 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 06:35:33.587087 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 06:35:33.587091 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 06:35:33.587094 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 06:35:33.587049 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 06:35:33.588897 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c256463618132e5305eb6fed3ef923e6a3c200bbc4b917e7f678e1c78af05084\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f97561ea3b9baa3440e5148060c80d57e9cf17837e7799b057f618d9084e8a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f97561ea3b9baa3440e5148060c80d57e9cf17837e7799b057f618d9084e8a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:48Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:48 crc kubenswrapper[5017]: I0129 06:35:48.157480 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:48Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:48 crc kubenswrapper[5017]: I0129 06:35:48.166306 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/42db11f6-649f-486e-83a2-7506fdf51ba2-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-46ch5\" (UID: \"42db11f6-649f-486e-83a2-7506fdf51ba2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-46ch5" Jan 29 06:35:48 crc kubenswrapper[5017]: I0129 06:35:48.166343 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/42db11f6-649f-486e-83a2-7506fdf51ba2-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-46ch5\" (UID: \"42db11f6-649f-486e-83a2-7506fdf51ba2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-46ch5" Jan 29 06:35:48 crc kubenswrapper[5017]: I0129 06:35:48.166359 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/42db11f6-649f-486e-83a2-7506fdf51ba2-env-overrides\") pod \"ovnkube-control-plane-749d76644c-46ch5\" (UID: \"42db11f6-649f-486e-83a2-7506fdf51ba2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-46ch5" Jan 29 06:35:48 crc kubenswrapper[5017]: I0129 06:35:48.166385 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qm9zl\" (UniqueName: \"kubernetes.io/projected/42db11f6-649f-486e-83a2-7506fdf51ba2-kube-api-access-qm9zl\") pod \"ovnkube-control-plane-749d76644c-46ch5\" (UID: \"42db11f6-649f-486e-83a2-7506fdf51ba2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-46ch5" Jan 29 06:35:48 crc kubenswrapper[5017]: I0129 06:35:48.169074 5017 
status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vwppb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb597cc2-fff7-4d90-a43e-958791d83324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f41b8b485793972343992b2ec30940b852ff5f355e045225e8813683e9f9a3c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g28w2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vwppb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:48Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:48 crc kubenswrapper[5017]: I0129 06:35:48.183294 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a2999ed-a16c-49e3-a07a-3b084d6b8616\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ff1f2d36640f2aebd2cd6919987f3f9000d95a8cefac7844a0671da192b1897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1045c0f8fc7c60a5aa6d2e2b449230e4603b08b0b4244c319e296f89b04ab98d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27acd0133bf699b73534766330da62590b4231608bd96e51bf01b06b5775073d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fc74339c29860f59a280af0f3e21a0f4fa2be4a3e28eb63d90bc63854805d16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:48Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:48 crc kubenswrapper[5017]: I0129 06:35:48.195256 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ce95c49452e7355245f4ec2716c723873a116e2523b8192efe93cfb369260f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-29T06:35:48Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:48 crc kubenswrapper[5017]: I0129 06:35:48.211562 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-895pl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2672ef63-7861-4c3d-a1b4-03cc9d18f8e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://884bf08569779a1b94a927f1250657ab4d855e78f451a826856022a7c62cd6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5qdt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd299f15e4338652a37e1e3f09560a50e6c59d7bf29e827462d060a56294438\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5qdt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-895pl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:48Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:48 crc kubenswrapper[5017]: I0129 06:35:48.224729 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:48 crc kubenswrapper[5017]: I0129 06:35:48.224803 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:48 crc kubenswrapper[5017]: I0129 06:35:48.224824 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:48 crc kubenswrapper[5017]: I0129 06:35:48.224855 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:48 crc kubenswrapper[5017]: I0129 06:35:48.224879 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:48Z","lastTransitionTime":"2026-01-29T06:35:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:48 crc kubenswrapper[5017]: I0129 06:35:48.231351 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qwq46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4d3122b-b4b4-41ac-896a-566afdcda936\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45817069aa7e09fd97b77922f9b5a651f2a067991115021c0a85159a93d523af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z89l9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP
\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qwq46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:48Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:48 crc kubenswrapper[5017]: I0129 06:35:48.260230 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-m2gbd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4036e581-21bf-4ea0-aaf5-84ab8a841888\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ad091986feb992d2ef247a76476d77d58cb487cf0baa60f81e5d3ddf0d30c3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7399e8e88798a89139226e395d22a2aa61370296a3789f63d766f42ea1c9f4df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7399e8e88798a89139226e395d22a2aa61370296a3789f63d766f42ea1c9f4df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin
\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9418ae41c02e41fb3aeb7ce97e7d7b91f9a193ac23cbfe692cc79b071aa3adca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9418ae41c02e41fb3aeb7ce97e7d7b91f9a193ac23cbfe692cc79b071aa3adca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1065c2c8660d096dba79585cd7d954ae31491d258129e132b1e0cb1218bc849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1065c2c8660d096dba79585cd7d954ae31491d258129e132b1e0cb1218bc849\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb8f183765b578cd4362f3d3c13e20b8cb70aec46adbcdb750164e91e2bd53e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride
-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb8f183765b578cd4362f3d3c13e20b8cb70aec46adbcdb750164e91e2bd53e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23ac8b597fcf07378d69cd788cf4f79e236218e22a58ffaa5a72688e650067c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23ac8b597fcf07378d69cd788cf4f79e236218e22a58ffaa5a72688e650067c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8548b649f7fcef84467fc98f1b3aacc576c363a1c83087971aafaf8787734901\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8548b649f7fcef84467fc98f1b3aacc576c363a1c83087971aafaf8787734901\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-m2gbd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:48Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:48 crc kubenswrapper[5017]: I0129 06:35:48.267344 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qm9zl\" (UniqueName: \"kubernetes.io/projected/42db11f6-649f-486e-83a2-7506fdf51ba2-kube-api-access-qm9zl\") pod \"ovnkube-control-plane-749d76644c-46ch5\" (UID: \"42db11f6-649f-486e-83a2-7506fdf51ba2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-46ch5" Jan 29 06:35:48 crc kubenswrapper[5017]: I0129 06:35:48.267464 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/42db11f6-649f-486e-83a2-7506fdf51ba2-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-46ch5\" (UID: \"42db11f6-649f-486e-83a2-7506fdf51ba2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-46ch5" Jan 29 06:35:48 crc kubenswrapper[5017]: I0129 06:35:48.267513 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/42db11f6-649f-486e-83a2-7506fdf51ba2-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-46ch5\" (UID: \"42db11f6-649f-486e-83a2-7506fdf51ba2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-46ch5" Jan 29 06:35:48 crc kubenswrapper[5017]: I0129 06:35:48.267549 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/42db11f6-649f-486e-83a2-7506fdf51ba2-env-overrides\") pod \"ovnkube-control-plane-749d76644c-46ch5\" (UID: \"42db11f6-649f-486e-83a2-7506fdf51ba2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-46ch5" Jan 29 06:35:48 crc kubenswrapper[5017]: I0129 06:35:48.268749 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/42db11f6-649f-486e-83a2-7506fdf51ba2-env-overrides\") pod \"ovnkube-control-plane-749d76644c-46ch5\" (UID: \"42db11f6-649f-486e-83a2-7506fdf51ba2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-46ch5" Jan 29 06:35:48 crc kubenswrapper[5017]: I0129 06:35:48.268915 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/42db11f6-649f-486e-83a2-7506fdf51ba2-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-46ch5\" (UID: \"42db11f6-649f-486e-83a2-7506fdf51ba2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-46ch5" Jan 29 06:35:48 crc kubenswrapper[5017]: I0129 06:35:48.275285 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/42db11f6-649f-486e-83a2-7506fdf51ba2-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-46ch5\" (UID: \"42db11f6-649f-486e-83a2-7506fdf51ba2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-46ch5" Jan 29 06:35:48 crc kubenswrapper[5017]: I0129 06:35:48.285107 5017 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 
05:53:03 +0000 UTC, rotation deadline is 2025-11-21 09:36:10.15054202 +0000 UTC Jan 29 06:35:48 crc kubenswrapper[5017]: I0129 06:35:48.288659 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02dd5727-894c-4693-9bc7-83dd88ce118c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://177a3e8a7f63905262ffb4ea5f98c4fa97a8d48dd2ff075484a44ea87d66a072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7eec8c502e2516637a85d2b83f7f745f9cc5d6a420f15cea465d92f71e0b8e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1fb588776e0029263904fa0ec3afa53e180b30843864f1dee6999e1cca64b18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc8fc0d9e440c8725e279e6649d3c1e2cdc55589f6b5786187602576e4acd835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://749973b1dc2bcf3fc4e35277f0893a240b3571cadc45529da1229cd7ebe1748e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6a16eea25767bb4fea94880a90ae883944934c09ff5ba0ced9641169691169d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154
edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e3fa5697d642a564e7ae639a013ea62bdddd3e84a927e14822c640ecf6caa05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f74734e0ae29164b6b6f388e56c4bed32978ea65cdf79267820ec237eb300479\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T06:35:44Z\\\",\\\"message\\\":\\\"35:44.798918 6317 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 06:35:44.799007 6317 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 06:35:44.799042 6317 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0129 06:35:44.799161 6317 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 06:35:44.799236 6317 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 06:35:44.799350 6317 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0129 06:35:44.799418 6317 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 06:35:44.799491 6317 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 06:35:44.799883 6317 reflector.go:311] Stopping reflector *v1.Service (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e3fa5697d642a564e7ae639a013ea62bdddd3e84a927e14822c640ecf6caa05\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T06:35:46Z\\\",\\\"message\\\":\\\"I0129 06:35:46.669016 6436 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0129 06:35:46.669192 6436 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0129 06:35:46.669219 6436 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0129 06:35:46.669254 6436 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0129 06:35:46.669275 6436 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0129 06:35:46.670555 6436 handler.go:208] Removed *v1.Node event handler 7\\\\nI0129 06:35:46.670651 6436 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 06:35:46.671045 6436 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0129 06:35:46.671114 6436 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0129 06:35:46.671437 6436 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0129 06:35:46.671457 6436 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0129 06:35:46.671479 6436 factory.go:656] Stopping watch factory\\\\nI0129 06:35:46.671478 6436 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0129 06:35:46.671467 6436 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0129 06:35:46.671499 6436 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0129 06:35:46.671500 6436 ovnkube.go:599] Stopped ovnkube\\\\nI0129 
06\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1e341b7acf35edc3ee9e5a4fec11e2270d45ea0f05df30543d2c9163b024646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13649086edb9d7ffe20a9040787d55449e6541a51f03432458f5fcb965fe4723\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13649086edb9d7ffe20a9040787d55449e6541a51f03432458f5fcb965fe4723\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wqgmk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:48Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:48 crc kubenswrapper[5017]: I0129 06:35:48.291759 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qm9zl\" (UniqueName: \"kubernetes.io/projected/42db11f6-649f-486e-83a2-7506fdf51ba2-kube-api-access-qm9zl\") pod \"ovnkube-control-plane-749d76644c-46ch5\" (UID: \"42db11f6-649f-486e-83a2-7506fdf51ba2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-46ch5" Jan 29 06:35:48 crc kubenswrapper[5017]: I0129 06:35:48.315807 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 06:35:48 crc kubenswrapper[5017]: I0129 06:35:48.315909 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 06:35:48 crc kubenswrapper[5017]: E0129 06:35:48.316023 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 06:35:48 crc kubenswrapper[5017]: I0129 06:35:48.316044 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 06:35:48 crc kubenswrapper[5017]: E0129 06:35:48.316207 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 06:35:48 crc kubenswrapper[5017]: E0129 06:35:48.316488 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 06:35:48 crc kubenswrapper[5017]: I0129 06:35:48.317012 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bf710b24256eeb9149ca710dd9fcceae762df89486e0cf0f341c278eadda2fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:48Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:48 crc kubenswrapper[5017]: I0129 06:35:48.330641 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:48 crc kubenswrapper[5017]: I0129 06:35:48.330703 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:48 crc kubenswrapper[5017]: I0129 06:35:48.330722 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:48 crc kubenswrapper[5017]: I0129 06:35:48.330743 5017 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeNotReady" Jan 29 06:35:48 crc kubenswrapper[5017]: I0129 06:35:48.330757 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:48Z","lastTransitionTime":"2026-01-29T06:35:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:48 crc kubenswrapper[5017]: I0129 06:35:48.339036 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9jkcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ae056f0-e054-45da-9638-73074b7c8a3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17d52fc64784408e447773d44de6e83d32029a77073a0c1ee206a51d81eda62f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x84hz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9jkcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:48Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:48 crc kubenswrapper[5017]: I0129 06:35:48.374346 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-46ch5" Jan 29 06:35:48 crc kubenswrapper[5017]: I0129 06:35:48.435438 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:48 crc kubenswrapper[5017]: I0129 06:35:48.435487 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:48 crc kubenswrapper[5017]: I0129 06:35:48.435498 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:48 crc kubenswrapper[5017]: I0129 06:35:48.435515 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:48 crc kubenswrapper[5017]: I0129 06:35:48.435551 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:48Z","lastTransitionTime":"2026-01-29T06:35:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:48 crc kubenswrapper[5017]: I0129 06:35:48.538002 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:48 crc kubenswrapper[5017]: I0129 06:35:48.538033 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:48 crc kubenswrapper[5017]: I0129 06:35:48.538042 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:48 crc kubenswrapper[5017]: I0129 06:35:48.538057 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:48 crc kubenswrapper[5017]: I0129 06:35:48.538066 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:48Z","lastTransitionTime":"2026-01-29T06:35:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:35:48 crc kubenswrapper[5017]: I0129 06:35:48.642605 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:48 crc kubenswrapper[5017]: I0129 06:35:48.642816 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:48 crc kubenswrapper[5017]: I0129 06:35:48.642836 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:48 crc kubenswrapper[5017]: I0129 06:35:48.642892 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:48 crc kubenswrapper[5017]: I0129 06:35:48.642906 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:48Z","lastTransitionTime":"2026-01-29T06:35:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:48 crc kubenswrapper[5017]: I0129 06:35:48.655329 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-46ch5" event={"ID":"42db11f6-649f-486e-83a2-7506fdf51ba2","Type":"ContainerStarted","Data":"9e7417d93094efd850372fea31fa798b5b9583cbf1daa36ebe18ade52c714304"} Jan 29 06:35:48 crc kubenswrapper[5017]: I0129 06:35:48.655433 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-46ch5" event={"ID":"42db11f6-649f-486e-83a2-7506fdf51ba2","Type":"ContainerStarted","Data":"e0e56eaa37ebbccdbf0f1cdf6979a8abf272fb0e6e769c5c008284b060544366"} Jan 29 06:35:48 crc kubenswrapper[5017]: I0129 06:35:48.659770 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wqgmk_02dd5727-894c-4693-9bc7-83dd88ce118c/ovnkube-controller/1.log" Jan 29 06:35:48 crc kubenswrapper[5017]: I0129 06:35:48.746560 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:48 crc kubenswrapper[5017]: I0129 06:35:48.746631 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:48 crc kubenswrapper[5017]: I0129 06:35:48.746651 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:48 crc kubenswrapper[5017]: I0129 06:35:48.746682 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:48 crc kubenswrapper[5017]: I0129 06:35:48.746708 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:48Z","lastTransitionTime":"2026-01-29T06:35:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:35:48 crc kubenswrapper[5017]: I0129 06:35:48.850269 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:48 crc kubenswrapper[5017]: I0129 06:35:48.850324 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:48 crc kubenswrapper[5017]: I0129 06:35:48.850339 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:48 crc kubenswrapper[5017]: I0129 06:35:48.850362 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:48 crc kubenswrapper[5017]: I0129 06:35:48.850378 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:48Z","lastTransitionTime":"2026-01-29T06:35:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:48 crc kubenswrapper[5017]: I0129 06:35:48.953886 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:48 crc kubenswrapper[5017]: I0129 06:35:48.953997 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:48 crc kubenswrapper[5017]: I0129 06:35:48.954022 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:48 crc kubenswrapper[5017]: I0129 06:35:48.954054 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:48 crc kubenswrapper[5017]: I0129 06:35:48.954073 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:48Z","lastTransitionTime":"2026-01-29T06:35:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:49 crc kubenswrapper[5017]: I0129 06:35:49.057319 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:49 crc kubenswrapper[5017]: I0129 06:35:49.057371 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:49 crc kubenswrapper[5017]: I0129 06:35:49.057381 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:49 crc kubenswrapper[5017]: I0129 06:35:49.057401 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:49 crc kubenswrapper[5017]: I0129 06:35:49.057412 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:49Z","lastTransitionTime":"2026-01-29T06:35:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:35:49 crc kubenswrapper[5017]: I0129 06:35:49.161621 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:49 crc kubenswrapper[5017]: I0129 06:35:49.161677 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:49 crc kubenswrapper[5017]: I0129 06:35:49.161688 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:49 crc kubenswrapper[5017]: I0129 06:35:49.161716 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:49 crc kubenswrapper[5017]: I0129 06:35:49.161726 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:49Z","lastTransitionTime":"2026-01-29T06:35:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:49 crc kubenswrapper[5017]: I0129 06:35:49.184397 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-xn4bq"] Jan 29 06:35:49 crc kubenswrapper[5017]: I0129 06:35:49.185308 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xn4bq" Jan 29 06:35:49 crc kubenswrapper[5017]: E0129 06:35:49.185440 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xn4bq" podUID="0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f" Jan 29 06:35:49 crc kubenswrapper[5017]: I0129 06:35:49.202775 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-895pl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2672ef63-7861-4c3d-a1b4-03cc9d18f8e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://884bf08569779a1b94a927f1250657ab4d855e78f451a826856022a7c62cd6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5qdt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd299f15e4338652a37e1e3f09560a50e6c59d7bf29e827462d060a56294438\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5qdt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-895pl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:49Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:49 crc kubenswrapper[5017]: I0129 06:35:49.215525 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qwq46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4d3122b-b4b4-41ac-896a-566afdcda936\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45817069aa7e09fd97b77922f9b5a651f2a067991115021c0a85159a93d523af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z89l9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qwq46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:49Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:49 crc kubenswrapper[5017]: I0129 06:35:49.237549 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-m2gbd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4036e581-21bf-4ea0-aaf5-84ab8a841888\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ad091986feb992d2ef247a76476d77d58cb487cf0baa60f81e5d3ddf0d30c3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7399e8e88798a89139226e395d22a2aa61370296a3789f63d766f42ea1c9f4df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7399e8e88798a89139226e395d22a2aa61370296a3789f63d766f42ea1c9f4df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9418ae41c02e41fb3aeb7ce97e7d7b91f9a193ac23cbfe692cc79b071aa3adca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9418ae41c02e41fb3aeb7ce97e7d7b91f9a193ac23cbfe692cc79b071aa3adca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1065c2c8660d096dba79585cd7d954ae31491d258129e132b1e0cb1218bc849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1065c2c8660d096dba79585cd7d954ae31491d258129e132b1e0cb1218bc849\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb8f183765b578cd4362f3d3c13e20b8cb70aec46adbcdb750164e91e2bd53e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb8f183765b578cd4362f3d3c13e20b8cb70aec46adbcdb750164e91e2bd53e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23ac8b597fcf07378d69cd788cf4f79e236218e22a58ffaa5a72688e650067c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23ac8b597fcf07378d69cd788cf4f79e236218e22a58ffaa5a72688e650067c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8548b649f7fcef84467fc98f1b3aacc576c363a1c83087971aafaf8787734901\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8548b649f7fcef84467fc98f1b3aacc576c363a1c83087971aafaf8787734901\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-m2gbd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:49Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:49 crc kubenswrapper[5017]: I0129 06:35:49.254147 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xn4bq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgkx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgkx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xn4bq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:49Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:49 crc kubenswrapper[5017]: I0129 06:35:49.265915 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:49 crc kubenswrapper[5017]: I0129 06:35:49.266060 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:49 crc kubenswrapper[5017]: I0129 06:35:49.266086 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 29 06:35:49 crc kubenswrapper[5017]: I0129 06:35:49.266121 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:49 crc kubenswrapper[5017]: I0129 06:35:49.266145 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:49Z","lastTransitionTime":"2026-01-29T06:35:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:49 crc kubenswrapper[5017]: I0129 06:35:49.275945 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bf710b24256eeb9149ca710dd9fcceae762df89486e0cf0f341c278eadda2fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:49Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:49 crc kubenswrapper[5017]: I0129 06:35:49.281852 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgkx6\" (UniqueName: \"kubernetes.io/projected/0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f-kube-api-access-tgkx6\") pod \"network-metrics-daemon-xn4bq\" (UID: \"0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f\") " pod="openshift-multus/network-metrics-daemon-xn4bq" Jan 29 06:35:49 crc kubenswrapper[5017]: I0129 06:35:49.281934 5017 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f-metrics-certs\") pod \"network-metrics-daemon-xn4bq\" (UID: \"0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f\") " pod="openshift-multus/network-metrics-daemon-xn4bq" Jan 29 06:35:49 crc kubenswrapper[5017]: I0129 06:35:49.285677 5017 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 14:36:50.805064903 +0000 UTC Jan 29 06:35:49 crc kubenswrapper[5017]: I0129 06:35:49.299310 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9jkcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ae056f0-e054-45da-9638-73074b7c8a3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17d52fc64784408e447773d44de6e83d32029a77073a0c1ee206a51d81eda62f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x84hz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9jkcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:49Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:49 crc kubenswrapper[5017]: I0129 06:35:49.323785 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02dd5727-894c-4693-9bc7-83dd88ce118c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://177a3e8a7f63905262ffb4ea5f98c4fa97a8d48dd2ff075484a44ea87d66a072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7eec8c502e2516637a85d2b83f7f745f9cc5d6a420f15cea465d92f71e0b8e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1fb588776e0029263904fa0ec3afa53e180b30843864f1dee6999e1cca64b18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc8fc0d9e440c8725e279e6649d3c1e2cdc55589f6b5786187602576e4acd835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://749973b1dc2bcf3fc4e35277f0893a240b3571cadc45529da1229cd7ebe1748e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6a16eea25767bb4fea94880a90ae883944934c09ff5ba0ced9641169691169d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e3fa5697d642a564e7ae639a013ea62bdddd3e8
4a927e14822c640ecf6caa05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f74734e0ae29164b6b6f388e56c4bed32978ea65cdf79267820ec237eb300479\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T06:35:44Z\\\",\\\"message\\\":\\\"35:44.798918 6317 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 06:35:44.799007 6317 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 06:35:44.799042 6317 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0129 06:35:44.799161 6317 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 06:35:44.799236 6317 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 06:35:44.799350 6317 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0129 06:35:44.799418 6317 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 06:35:44.799491 6317 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 06:35:44.799883 6317 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e3fa5697d642a564e7ae639a013ea62bdddd3e84a927e14822c640ecf6caa05\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T06:35:46Z\\\",\\\"message\\\":\\\"I0129 06:35:46.669016 6436 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0129 06:35:46.669192 6436 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0129 06:35:46.669219 6436 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0129 06:35:46.669254 6436 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0129 06:35:46.669275 6436 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0129 06:35:46.670555 6436 handler.go:208] Removed *v1.Node event handler 7\\\\nI0129 06:35:46.670651 6436 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 06:35:46.671045 6436 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0129 06:35:46.671114 6436 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0129 06:35:46.671437 6436 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0129 06:35:46.671457 6436 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0129 06:35:46.671479 6436 factory.go:656] Stopping watch factory\\\\nI0129 06:35:46.671478 6436 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0129 06:35:46.671467 6436 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0129 
06:35:46.671499 6436 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0129 06:35:46.671500 6436 ovnkube.go:599] Stopped ovnkube\\\\nI0129 06\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1e341b7acf35edc3ee9e5a4fec11e2270d45ea0f05df30543d2c9163b024646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13649086edb9
d7ffe20a9040787d55449e6541a51f03432458f5fcb965fe4723\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13649086edb9d7ffe20a9040787d55449e6541a51f03432458f5fcb965fe4723\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wqgmk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:49Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:49 crc kubenswrapper[5017]: I0129 06:35:49.346318 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:49Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:49 crc kubenswrapper[5017]: I0129 06:35:49.367370 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-46ch5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42db11f6-649f-486e-83a2-7506fdf51ba2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qm9zl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qm9zl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-46ch5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:49Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:49 crc kubenswrapper[5017]: I0129 06:35:49.370194 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:49 crc kubenswrapper[5017]: I0129 06:35:49.370254 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:49 crc kubenswrapper[5017]: I0129 06:35:49.370277 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:49 crc kubenswrapper[5017]: I0129 06:35:49.370307 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:49 crc kubenswrapper[5017]: I0129 06:35:49.370325 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:49Z","lastTransitionTime":"2026-01-29T06:35:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:35:49 crc kubenswrapper[5017]: I0129 06:35:49.382719 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tgkx6\" (UniqueName: \"kubernetes.io/projected/0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f-kube-api-access-tgkx6\") pod \"network-metrics-daemon-xn4bq\" (UID: \"0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f\") " pod="openshift-multus/network-metrics-daemon-xn4bq" Jan 29 06:35:49 crc kubenswrapper[5017]: I0129 06:35:49.382784 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f-metrics-certs\") pod \"network-metrics-daemon-xn4bq\" (UID: \"0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f\") " pod="openshift-multus/network-metrics-daemon-xn4bq" Jan 29 06:35:49 crc kubenswrapper[5017]: E0129 06:35:49.383063 5017 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 06:35:49 crc kubenswrapper[5017]: E0129 06:35:49.383154 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f-metrics-certs podName:0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f nodeName:}" failed. No retries permitted until 2026-01-29 06:35:49.883131521 +0000 UTC m=+36.257579161 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f-metrics-certs") pod "network-metrics-daemon-xn4bq" (UID: "0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 06:35:49 crc kubenswrapper[5017]: I0129 06:35:49.383879 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c67cd79-8431-401b-8f03-9387813b30ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe28ec8d64ea50ecca5d4531440159b5277aff7de80b18813bb91f5c32923326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a82e8858b15c840a63737058d3af382c1a6469d7d1d3ba6e44759f33e55f2543\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a303c0532bb4a555700875ed27e56205d709658417a72f47868fb7bc8291a77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ef12beaf08b33d7694b3ea5a4705e37ec4a5b53f9cf1a7e0dd41a78d99bc8f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27dd6068095b58038216d1837a6b7966c514c7d8a9db5d144b043d918b757394\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T06:35:33Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 06:35:28.003940 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 06:35:28.004898 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1201671994/tls.crt::/tmp/serving-cert-1201671994/tls.key\\\\\\\"\\\\nI0129 06:35:33.575231 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 06:35:33.579536 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 06:35:33.579564 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 06:35:33.579587 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 06:35:33.579594 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 06:35:33.587038 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 06:35:33.587075 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 06:35:33.587080 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 06:35:33.587084 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 06:35:33.587087 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 06:35:33.587091 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 06:35:33.587094 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 06:35:33.587049 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 06:35:33.588897 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c256463618132e5305eb6fed3ef923e6a3c200bbc4b917e7f678e1c78af05084\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f97561ea3b9baa3440e5148060c80d57e9cf17837e7799b057f618d9084e8a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f97561ea3b9baa3440e5148060c80d57e9cf17837e7799b057f618d9084e8a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:49Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:49 crc kubenswrapper[5017]: I0129 06:35:49.400701 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:49Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:49 crc kubenswrapper[5017]: I0129 06:35:49.408529 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgkx6\" (UniqueName: \"kubernetes.io/projected/0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f-kube-api-access-tgkx6\") pod \"network-metrics-daemon-xn4bq\" (UID: \"0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f\") " pod="openshift-multus/network-metrics-daemon-xn4bq" Jan 29 06:35:49 crc kubenswrapper[5017]: I0129 06:35:49.419550 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:49Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:49 crc kubenswrapper[5017]: I0129 06:35:49.435852 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b8abb3dbc845f89735c58741f02550053402975d5edfe5e69f2323cc82bb8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7b335d7b6120621d4c94dfd8cdc86561a91d9c84a9481d352042d7728d7f6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:49Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:49 crc kubenswrapper[5017]: I0129 06:35:49.451131 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a2999ed-a16c-49e3-a07a-3b084d6b8616\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ff1f2d36640f2aebd2cd6919987f3f9000d95a8cefac7844a0671da192b1897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1045c0f8fc7c60a5aa6d2e2b449230e4603b08b0b4244c319e296f89b04ab98d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27acd0133bf699b73534766330da62590b4231608bd96e51bf01b06b5775073d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-clu
ster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fc74339c29860f59a280af0f3e21a0f4fa2be4a3e28eb63d90bc63854805d16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:49Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:49 crc kubenswrapper[5017]: I0129 06:35:49.464503 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ce95c49452e7355245f4ec2716c723873a116e2523b8192efe93cfb369260f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:49Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:49 crc kubenswrapper[5017]: I0129 06:35:49.472811 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:49 crc kubenswrapper[5017]: I0129 06:35:49.472856 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:49 crc kubenswrapper[5017]: I0129 06:35:49.472869 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:49 crc kubenswrapper[5017]: I0129 06:35:49.472888 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:49 crc kubenswrapper[5017]: I0129 06:35:49.472902 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:49Z","lastTransitionTime":"2026-01-29T06:35:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:35:49 crc kubenswrapper[5017]: I0129 06:35:49.476483 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vwppb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb597cc2-fff7-4d90-a43e-958791d83324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f41b8b485793972343992b2ec30940b852ff5f355e045225e8813683e9f9a3c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g28w2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vwppb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:49Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:49 crc kubenswrapper[5017]: I0129 06:35:49.576009 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:49 crc kubenswrapper[5017]: I0129 06:35:49.576071 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:49 crc kubenswrapper[5017]: I0129 06:35:49.576091 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:49 crc kubenswrapper[5017]: I0129 06:35:49.576123 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:49 crc kubenswrapper[5017]: I0129 06:35:49.576146 5017 setters.go:603] "Node became not 
ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:49Z","lastTransitionTime":"2026-01-29T06:35:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:49 crc kubenswrapper[5017]: I0129 06:35:49.679258 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:49 crc kubenswrapper[5017]: I0129 06:35:49.679653 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:49 crc kubenswrapper[5017]: I0129 06:35:49.679667 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:49 crc kubenswrapper[5017]: I0129 06:35:49.679690 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:49 crc kubenswrapper[5017]: I0129 06:35:49.679705 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:49Z","lastTransitionTime":"2026-01-29T06:35:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:49 crc kubenswrapper[5017]: I0129 06:35:49.679256 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-46ch5" event={"ID":"42db11f6-649f-486e-83a2-7506fdf51ba2","Type":"ContainerStarted","Data":"9db927333ed9f10e47997ba0073d04675b880319e46f4c6d46805d171bc2e15f"} Jan 29 06:35:49 crc kubenswrapper[5017]: I0129 06:35:49.707094 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9jkcd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ae056f0-e054-45da-9638-73074b7c8a3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17d52fc64784408e447773d44de6e83d32029a77073a0c1ee206a51d81eda62f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x84hz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9jkcd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:49Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:49 crc kubenswrapper[5017]: I0129 06:35:49.742043 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02dd5727-894c-4693-9bc7-83dd88ce118c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://177a3e8a7f63905262ffb4ea5f98c4fa97a8d48dd2ff075484a44ea87d66a072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7eec8c502e2516637a85d2b83f7f745f9cc5d6a420f15cea465d92f71e0b8e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1fb588776e0029263904fa0ec3afa53e180b30843864f1dee6999e1cca64b18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc8fc0d9e440c8725e279e6649d3c1e2cdc55589f6b5786187602576e4acd835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://749973b1dc2bcf3fc4e35277f0893a240b3571cadc45529da1229cd7ebe1748e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6a16
eea25767bb4fea94880a90ae883944934c09ff5ba0ced9641169691169d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e3fa5697d642a564e7ae639a013ea62bdddd3e84a927e14822c640ecf6caa05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f74734e0ae29164b6b6f388e56c4bed32978ea65cdf79267820ec237eb300479\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T06:35:44Z\\\",\\\"message\\\":\\\"35:44.798918 6317 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 06:35:44.799007 6317 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 06:35:44.799042 6317 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0129 06:35:44.799161 6317 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 06:35:44.799236 6317 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 06:35:44.799350 6317 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0129 06:35:44.799418 6317 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 06:35:44.799491 6317 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 06:35:44.799883 6317 reflector.go:311] Stopping reflector *v1.Service (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e3fa5697d642a564e7ae639a013ea62bdddd3e84a927e14822c640ecf6caa05\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T06:35:46Z\\\",\\\"message\\\":\\\"I0129 06:35:46.669016 6436 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0129 06:35:46.669192 6436 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0129 06:35:46.669219 6436 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0129 06:35:46.669254 6436 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0129 06:35:46.669275 6436 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0129 06:35:46.670555 6436 handler.go:208] Removed *v1.Node event handler 7\\\\nI0129 06:35:46.670651 6436 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 06:35:46.671045 6436 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0129 06:35:46.671114 6436 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0129 06:35:46.671437 6436 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0129 06:35:46.671457 6436 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0129 06:35:46.671479 6436 factory.go:656] Stopping watch factory\\\\nI0129 06:35:46.671478 6436 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0129 06:35:46.671467 6436 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0129 06:35:46.671499 6436 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0129 06:35:46.671500 6436 ovnkube.go:599] Stopped ovnkube\\\\nI0129 
06\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1e341b7acf35edc3ee9e5a4fec11e2270d45ea0f05df30543d2c9163b024646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13649086edb9d7ffe20a9040787d55449e6541a51f03432458f5fcb965fe4723\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13649086edb9d7ffe20a9040787d55449e6541a51f03432458f5fcb965fe4723\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wqgmk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:49Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:49 crc kubenswrapper[5017]: I0129 06:35:49.764704 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bf710b24256eeb9149ca710dd9fcceae762df89486e0cf0f341c278eadda2fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:49Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:49 crc kubenswrapper[5017]: I0129 06:35:49.783167 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:49Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:49 crc kubenswrapper[5017]: I0129 06:35:49.784807 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:49 crc kubenswrapper[5017]: I0129 06:35:49.784886 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:49 crc kubenswrapper[5017]: I0129 06:35:49.784910 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:49 crc kubenswrapper[5017]: I0129 06:35:49.784941 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:49 crc kubenswrapper[5017]: I0129 06:35:49.784986 5017 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:49Z","lastTransitionTime":"2026-01-29T06:35:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:49 crc kubenswrapper[5017]: I0129 06:35:49.800872 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:49Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:49 crc kubenswrapper[5017]: I0129 06:35:49.823224 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b8abb3dbc845f89735c58741f02550053402975d5edfe5e69f2323cc82bb8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7b335d7b6120621d4c94dfd8cdc86561a91d9c84a9481d352042d7728d7f6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:49Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:49 crc kubenswrapper[5017]: I0129 06:35:49.838685 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:49Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:49 crc kubenswrapper[5017]: I0129 06:35:49.855889 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-46ch5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42db11f6-649f-486e-83a2-7506fdf51ba2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e7417d93094efd850372fea31fa798b5b9583cbf1daa36ebe18ade52c714304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qm9zl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db927333ed9f10e47997ba0073d04675b880319e46f4c6d46805d171bc2e15f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2
099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qm9zl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-46ch5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:49Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:49 crc kubenswrapper[5017]: I0129 06:35:49.878084 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c67cd79-8431-401b-8f03-9387813b30ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe28ec8d64ea50ecca5d4531440159b5277aff7de80b18813bb91f5c32923326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a82e8858b15c840a63737058d3af382c1a6469d7d1d3ba6e44759f33e55f2543\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a303c0532bb4a555700875ed27e56205d709658417a72f47868fb7bc8291a77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ef12beaf08b33d7694b3ea5a4705e37ec4a5b53f9cf1a7e0dd41a78d99bc8f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27dd6068095b58038216d1837a6b7966c514c7d8a9db5d144b043d918b757394\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T06:35:33Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 06:35:28.003940 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 06:35:28.004898 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1201671994/tls.crt::/tmp/serving-cert-1201671994/tls.key\\\\\\\"\\\\nI0129 06:35:33.575231 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 06:35:33.579536 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 06:35:33.579564 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 06:35:33.579587 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 06:35:33.579594 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 06:35:33.587038 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 06:35:33.587075 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 06:35:33.587080 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 06:35:33.587084 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 06:35:33.587087 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 06:35:33.587091 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 06:35:33.587094 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 06:35:33.587049 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 06:35:33.588897 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c256463618132e5305eb6fed3ef923e6a3c200bbc4b917e7f678e1c78af05084\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f97561ea3b9baa3440e5148060c80d57e9cf17837e7799b057f618d9084e8a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f97561ea3b9baa3440e5148060c80d57e9cf17837e7799b057f618d9084e8a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:49Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:49 crc kubenswrapper[5017]: I0129 06:35:49.888535 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:49 crc kubenswrapper[5017]: I0129 06:35:49.889078 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:49 crc kubenswrapper[5017]: I0129 06:35:49.889154 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:49 crc kubenswrapper[5017]: I0129 06:35:49.889222 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:49 crc kubenswrapper[5017]: I0129 06:35:49.889270 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 06:35:49 crc kubenswrapper[5017]: I0129 06:35:49.889252 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:49Z","lastTransitionTime":"2026-01-29T06:35:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:49 crc kubenswrapper[5017]: E0129 06:35:49.889744 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 06:36:05.889597521 +0000 UTC m=+52.264045171 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:35:49 crc kubenswrapper[5017]: I0129 06:35:49.890137 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f-metrics-certs\") pod \"network-metrics-daemon-xn4bq\" (UID: \"0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f\") " pod="openshift-multus/network-metrics-daemon-xn4bq" Jan 29 06:35:49 crc kubenswrapper[5017]: E0129 06:35:49.890843 5017 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 06:35:49 crc kubenswrapper[5017]: E0129 06:35:49.891477 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f-metrics-certs podName:0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f nodeName:}" failed. No retries permitted until 2026-01-29 06:35:50.89124191 +0000 UTC m=+37.265689550 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f-metrics-certs") pod "network-metrics-daemon-xn4bq" (UID: "0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 06:35:49 crc kubenswrapper[5017]: I0129 06:35:49.905612 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ce95c49452e7355245f4ec2716c723873a116e2523b8192efe93cfb369260f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:49Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:49 crc kubenswrapper[5017]: I0129 06:35:49.925357 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vwppb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb597cc2-fff7-4d90-a43e-958791d83324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f41b8b485793972343992b2ec30940b852ff5f355e045225e8813683e9f9a3c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g28w2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vwppb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:49Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:49 crc kubenswrapper[5017]: I0129 06:35:49.947572 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a2999ed-a16c-49e3-a07a-3b084d6b8616\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ff1f2d36640f2aebd2cd6919987f3f9000d95a8cefac7844a0671da192b1897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1045c0f8fc7c60a5aa6d2e2b449230e4603b08b0b4244c319e296f89b04ab98d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27acd0133bf699b73534766330da62590b4231608bd96e51bf01b06b5775073d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fc74339c29860f59a280af0f3e21a0f4fa2be4a3e28eb63d90bc63854805d16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:49Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:49 crc kubenswrapper[5017]: I0129 06:35:49.976782 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-m2gbd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4036e581-21bf-4ea0-aaf5-84ab8a841888\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ad091986feb992d2ef247a76476d77d58cb487cf0baa60f81e5d3ddf0d30c3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7399e8e88798a89139226e395d22a2aa61370296a3789f63d766
f42ea1c9f4df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7399e8e88798a89139226e395d22a2aa61370296a3789f63d766f42ea1c9f4df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9418ae41c02e41fb3aeb7ce97e7d7b91f9a193ac23cbfe692cc79b071aa3adca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9418ae41c02e41fb3aeb7ce97e7d7b91f9a193ac23cbfe692cc79b071aa3adca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1065c2c8660d096dba79585cd7d954ae31491d258129e132b1e0cb1218bc849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1065c2c8660d096dba79585cd7d954ae31491d258129e132b1e0cb1218bc849\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-b
inary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb8f183765b578cd4362f3d3c13e20b8cb70aec46adbcdb750164e91e2bd53e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb8f183765b578cd4362f3d3c13e20b8cb70aec46adbcdb750164e91e2bd53e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23ac8b597fcf07378d69cd788cf4f79e236218e22a58ffaa5a72688e650067c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23ac8b597fcf07378d69cd788cf4f79e236218e22a58ffaa5a72688e650067c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8548b649f7fcef84467fc98f1b3aacc576c363a1c83087971aafaf8787734901\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termin
ated\\\":{\\\"containerID\\\":\\\"cri-o://8548b649f7fcef84467fc98f1b3aacc576c363a1c83087971aafaf8787734901\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-m2gbd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:49Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:49 crc kubenswrapper[5017]: I0129 06:35:49.993616 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 06:35:49 crc kubenswrapper[5017]: I0129 06:35:49.993685 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 06:35:49 crc kubenswrapper[5017]: I0129 06:35:49.993719 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 06:35:49 crc kubenswrapper[5017]: I0129 06:35:49.993774 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 06:35:49 crc kubenswrapper[5017]: E0129 06:35:49.993904 5017 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 06:35:49 crc kubenswrapper[5017]: E0129 06:35:49.993938 5017 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 06:35:49 crc kubenswrapper[5017]: E0129 06:35:49.993999 5017 secret.go:188] Couldn't get secret 
openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 06:35:49 crc kubenswrapper[5017]: E0129 06:35:49.994005 5017 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 06:35:49 crc kubenswrapper[5017]: E0129 06:35:49.994115 5017 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 06:35:49 crc kubenswrapper[5017]: E0129 06:35:49.994013 5017 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 06:35:49 crc kubenswrapper[5017]: E0129 06:35:49.994237 5017 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 06:35:49 crc kubenswrapper[5017]: E0129 06:35:49.994275 5017 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 06:35:49 crc kubenswrapper[5017]: E0129 06:35:49.994085 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 06:36:05.994039738 +0000 UTC m=+52.368487518 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 06:35:49 crc kubenswrapper[5017]: E0129 06:35:49.994337 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 06:36:05.994305925 +0000 UTC m=+52.368753555 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 06:35:49 crc kubenswrapper[5017]: E0129 06:35:49.994366 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 06:36:05.994354816 +0000 UTC m=+52.368802436 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 06:35:49 crc kubenswrapper[5017]: E0129 06:35:49.994400 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 06:36:05.994377307 +0000 UTC m=+52.368824927 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 06:35:49 crc kubenswrapper[5017]: I0129 06:35:49.996607 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:49 crc kubenswrapper[5017]: I0129 06:35:49.996660 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:49 crc kubenswrapper[5017]: I0129 06:35:49.996679 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:49 crc kubenswrapper[5017]: I0129 06:35:49.996706 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:49 crc kubenswrapper[5017]: I0129 06:35:49.996727 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:49Z","lastTransitionTime":"2026-01-29T06:35:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:35:49 crc kubenswrapper[5017]: I0129 06:35:49.996803 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xn4bq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgkx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgkx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xn4bq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:49Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:50 crc kubenswrapper[5017]: I0129 06:35:50.013306 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-895pl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2672ef63-7861-4c3d-a1b4-03cc9d18f8e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://884bf08569779a1b94a927f1250657ab4d855e78f451a826856022a7c62cd6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5qdt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd299f15e4338652a37e1e3f09560a50e6c59d7bf29e827462d060a56294438\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5qdt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-895pl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:50Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:50 crc kubenswrapper[5017]: I0129 06:35:50.029284 5017 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-qwq46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4d3122b-b4b4-41ac-896a-566afdcda936\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45817069aa7e09fd97b77922f9b5a651f2a067991115021c0a85159a93d523af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z89l9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qwq46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:50Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:50 crc kubenswrapper[5017]: I0129 06:35:50.099481 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:50 crc kubenswrapper[5017]: I0129 06:35:50.099526 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:50 crc kubenswrapper[5017]: I0129 06:35:50.099539 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:50 crc kubenswrapper[5017]: I0129 06:35:50.099560 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:50 crc kubenswrapper[5017]: I0129 06:35:50.099574 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:50Z","lastTransitionTime":"2026-01-29T06:35:50Z","reason":"KubeletNotReady","message":"container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:50 crc kubenswrapper[5017]: I0129 06:35:50.190004 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:50 crc kubenswrapper[5017]: I0129 06:35:50.190054 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:50 crc kubenswrapper[5017]: I0129 06:35:50.190066 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:50 crc kubenswrapper[5017]: I0129 06:35:50.190086 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:50 crc kubenswrapper[5017]: I0129 06:35:50.190100 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:50Z","lastTransitionTime":"2026-01-29T06:35:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:50 crc kubenswrapper[5017]: E0129 06:35:50.213460 5017 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:35:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:35:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:35:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:35:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b652416b-993f-4447-94ce-fb2ce8447cbe\\\",\\\"systemUUID\\\":\\\"6f20d4ce-100b-4db6-aef5-b7c5d1dcba49\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:50Z is after 
2025-08-24T17:21:41Z" Jan 29 06:35:50 crc kubenswrapper[5017]: I0129 06:35:50.218277 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:50 crc kubenswrapper[5017]: I0129 06:35:50.218325 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:50 crc kubenswrapper[5017]: I0129 06:35:50.218341 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:50 crc kubenswrapper[5017]: I0129 06:35:50.218382 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:50 crc kubenswrapper[5017]: I0129 06:35:50.218398 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:50Z","lastTransitionTime":"2026-01-29T06:35:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:50 crc kubenswrapper[5017]: E0129 06:35:50.233225 5017 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:35:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:35:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:35:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:35:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b652416b-993f-4447-94ce-fb2ce8447cbe\\\",\\\"systemUUID\\\":\\\"6f20d4ce-100b-4db6-aef5-b7c5d1dcba49\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:50Z is after 
2025-08-24T17:21:41Z" Jan 29 06:35:50 crc kubenswrapper[5017]: I0129 06:35:50.239510 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:50 crc kubenswrapper[5017]: I0129 06:35:50.239597 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:50 crc kubenswrapper[5017]: I0129 06:35:50.239618 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:50 crc kubenswrapper[5017]: I0129 06:35:50.239674 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:50 crc kubenswrapper[5017]: I0129 06:35:50.239694 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:50Z","lastTransitionTime":"2026-01-29T06:35:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:50 crc kubenswrapper[5017]: E0129 06:35:50.259307 5017 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:35:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:35:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:35:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:35:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b652416b-993f-4447-94ce-fb2ce8447cbe\\\",\\\"systemUUID\\\":\\\"6f20d4ce-100b-4db6-aef5-b7c5d1dcba49\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:50Z is after 
2025-08-24T17:21:41Z" Jan 29 06:35:50 crc kubenswrapper[5017]: I0129 06:35:50.263838 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:50 crc kubenswrapper[5017]: I0129 06:35:50.263917 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:50 crc kubenswrapper[5017]: I0129 06:35:50.263936 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:50 crc kubenswrapper[5017]: I0129 06:35:50.264001 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:50 crc kubenswrapper[5017]: I0129 06:35:50.264025 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:50Z","lastTransitionTime":"2026-01-29T06:35:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:50 crc kubenswrapper[5017]: E0129 06:35:50.281068 5017 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:35:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:35:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:35:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:35:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b652416b-993f-4447-94ce-fb2ce8447cbe\\\",\\\"systemUUID\\\":\\\"6f20d4ce-100b-4db6-aef5-b7c5d1dcba49\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:50Z is after 
2025-08-24T17:21:41Z" Jan 29 06:35:50 crc kubenswrapper[5017]: I0129 06:35:50.285800 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:50 crc kubenswrapper[5017]: I0129 06:35:50.285834 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:50 crc kubenswrapper[5017]: I0129 06:35:50.285846 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:50 crc kubenswrapper[5017]: I0129 06:35:50.285865 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:50 crc kubenswrapper[5017]: I0129 06:35:50.285879 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:50Z","lastTransitionTime":"2026-01-29T06:35:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:50 crc kubenswrapper[5017]: I0129 06:35:50.285897 5017 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 01:09:28.44743848 +0000 UTC Jan 29 06:35:50 crc kubenswrapper[5017]: E0129 06:35:50.302494 5017 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:35:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:35:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:35:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:35:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b652416b-993f-4447-94ce-fb2ce8447cbe\\\",\\\"systemUUID\\\":\\\"6f20d4ce-100b-4db6-aef5-b7c5d1dcba49\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:50Z is after 
2025-08-24T17:21:41Z" Jan 29 06:35:50 crc kubenswrapper[5017]: E0129 06:35:50.302652 5017 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 06:35:50 crc kubenswrapper[5017]: I0129 06:35:50.306693 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:50 crc kubenswrapper[5017]: I0129 06:35:50.306764 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:50 crc kubenswrapper[5017]: I0129 06:35:50.306786 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:50 crc kubenswrapper[5017]: I0129 06:35:50.306818 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:50 crc kubenswrapper[5017]: I0129 06:35:50.306838 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:50Z","lastTransitionTime":"2026-01-29T06:35:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:50 crc kubenswrapper[5017]: I0129 06:35:50.315599 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 06:35:50 crc kubenswrapper[5017]: I0129 06:35:50.315618 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 06:35:50 crc kubenswrapper[5017]: I0129 06:35:50.315720 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xn4bq" Jan 29 06:35:50 crc kubenswrapper[5017]: I0129 06:35:50.315757 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 06:35:50 crc kubenswrapper[5017]: E0129 06:35:50.316074 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 06:35:50 crc kubenswrapper[5017]: E0129 06:35:50.316261 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xn4bq" podUID="0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f" Jan 29 06:35:50 crc kubenswrapper[5017]: E0129 06:35:50.316497 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 06:35:50 crc kubenswrapper[5017]: E0129 06:35:50.316622 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 06:35:50 crc kubenswrapper[5017]: I0129 06:35:50.410694 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:50 crc kubenswrapper[5017]: I0129 06:35:50.410770 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:50 crc kubenswrapper[5017]: I0129 06:35:50.410788 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:50 crc kubenswrapper[5017]: I0129 06:35:50.410817 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:50 crc kubenswrapper[5017]: I0129 06:35:50.410837 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:50Z","lastTransitionTime":"2026-01-29T06:35:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:50 crc kubenswrapper[5017]: I0129 06:35:50.514393 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:50 crc kubenswrapper[5017]: I0129 06:35:50.514471 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:50 crc kubenswrapper[5017]: I0129 06:35:50.514498 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:50 crc kubenswrapper[5017]: I0129 06:35:50.514532 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:50 crc kubenswrapper[5017]: I0129 06:35:50.514560 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:50Z","lastTransitionTime":"2026-01-29T06:35:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:35:50 crc kubenswrapper[5017]: I0129 06:35:50.620662 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:50 crc kubenswrapper[5017]: I0129 06:35:50.620740 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:50 crc kubenswrapper[5017]: I0129 06:35:50.620760 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:50 crc kubenswrapper[5017]: I0129 06:35:50.620790 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:50 crc kubenswrapper[5017]: I0129 06:35:50.620813 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:50Z","lastTransitionTime":"2026-01-29T06:35:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:50 crc kubenswrapper[5017]: I0129 06:35:50.724660 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:50 crc kubenswrapper[5017]: I0129 06:35:50.724748 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:50 crc kubenswrapper[5017]: I0129 06:35:50.724768 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:50 crc kubenswrapper[5017]: I0129 06:35:50.724799 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:50 crc kubenswrapper[5017]: I0129 06:35:50.724827 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:50Z","lastTransitionTime":"2026-01-29T06:35:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:50 crc kubenswrapper[5017]: I0129 06:35:50.829025 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:50 crc kubenswrapper[5017]: I0129 06:35:50.829103 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:50 crc kubenswrapper[5017]: I0129 06:35:50.829123 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:50 crc kubenswrapper[5017]: I0129 06:35:50.829158 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:50 crc kubenswrapper[5017]: I0129 06:35:50.829180 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:50Z","lastTransitionTime":"2026-01-29T06:35:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:35:50 crc kubenswrapper[5017]: I0129 06:35:50.907793 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f-metrics-certs\") pod \"network-metrics-daemon-xn4bq\" (UID: \"0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f\") " pod="openshift-multus/network-metrics-daemon-xn4bq" Jan 29 06:35:50 crc kubenswrapper[5017]: E0129 06:35:50.908107 5017 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 06:35:50 crc kubenswrapper[5017]: E0129 06:35:50.908291 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f-metrics-certs podName:0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f nodeName:}" failed. No retries permitted until 2026-01-29 06:35:52.908215285 +0000 UTC m=+39.282662925 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f-metrics-certs") pod "network-metrics-daemon-xn4bq" (UID: "0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 06:35:50 crc kubenswrapper[5017]: I0129 06:35:50.933787 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:50 crc kubenswrapper[5017]: I0129 06:35:50.933849 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:50 crc kubenswrapper[5017]: I0129 06:35:50.933860 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:50 crc kubenswrapper[5017]: I0129 06:35:50.933883 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:50 crc kubenswrapper[5017]: I0129 06:35:50.933895 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:50Z","lastTransitionTime":"2026-01-29T06:35:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:35:51 crc kubenswrapper[5017]: I0129 06:35:51.037120 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:51 crc kubenswrapper[5017]: I0129 06:35:51.037198 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:51 crc kubenswrapper[5017]: I0129 06:35:51.037218 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:51 crc kubenswrapper[5017]: I0129 06:35:51.037248 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:51 crc kubenswrapper[5017]: I0129 06:35:51.037267 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:51Z","lastTransitionTime":"2026-01-29T06:35:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:51 crc kubenswrapper[5017]: I0129 06:35:51.140742 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:51 crc kubenswrapper[5017]: I0129 06:35:51.140837 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:51 crc kubenswrapper[5017]: I0129 06:35:51.140858 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:51 crc kubenswrapper[5017]: I0129 06:35:51.140894 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:51 crc kubenswrapper[5017]: I0129 06:35:51.140922 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:51Z","lastTransitionTime":"2026-01-29T06:35:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:51 crc kubenswrapper[5017]: I0129 06:35:51.245533 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:51 crc kubenswrapper[5017]: I0129 06:35:51.245590 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:51 crc kubenswrapper[5017]: I0129 06:35:51.245607 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:51 crc kubenswrapper[5017]: I0129 06:35:51.245634 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:51 crc kubenswrapper[5017]: I0129 06:35:51.245652 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:51Z","lastTransitionTime":"2026-01-29T06:35:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:35:51 crc kubenswrapper[5017]: I0129 06:35:51.287129 5017 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 01:25:46.267047506 +0000 UTC Jan 29 06:35:51 crc kubenswrapper[5017]: I0129 06:35:51.350304 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:51 crc kubenswrapper[5017]: I0129 06:35:51.350376 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:51 crc kubenswrapper[5017]: I0129 06:35:51.350394 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:51 crc kubenswrapper[5017]: I0129 06:35:51.350422 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:51 crc kubenswrapper[5017]: I0129 06:35:51.350442 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:51Z","lastTransitionTime":"2026-01-29T06:35:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:51 crc kubenswrapper[5017]: I0129 06:35:51.454830 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:51 crc kubenswrapper[5017]: I0129 06:35:51.454915 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:51 crc kubenswrapper[5017]: I0129 06:35:51.454941 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:51 crc kubenswrapper[5017]: I0129 06:35:51.455027 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:51 crc kubenswrapper[5017]: I0129 06:35:51.455055 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:51Z","lastTransitionTime":"2026-01-29T06:35:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:35:51 crc kubenswrapper[5017]: I0129 06:35:51.558751 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:51 crc kubenswrapper[5017]: I0129 06:35:51.558839 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:51 crc kubenswrapper[5017]: I0129 06:35:51.558863 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:51 crc kubenswrapper[5017]: I0129 06:35:51.558895 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:51 crc kubenswrapper[5017]: I0129 06:35:51.558917 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:51Z","lastTransitionTime":"2026-01-29T06:35:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:51 crc kubenswrapper[5017]: I0129 06:35:51.663053 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:51 crc kubenswrapper[5017]: I0129 06:35:51.663109 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:51 crc kubenswrapper[5017]: I0129 06:35:51.663122 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:51 crc kubenswrapper[5017]: I0129 06:35:51.663147 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:51 crc kubenswrapper[5017]: I0129 06:35:51.663160 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:51Z","lastTransitionTime":"2026-01-29T06:35:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:51 crc kubenswrapper[5017]: I0129 06:35:51.766430 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:51 crc kubenswrapper[5017]: I0129 06:35:51.766524 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:51 crc kubenswrapper[5017]: I0129 06:35:51.766551 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:51 crc kubenswrapper[5017]: I0129 06:35:51.766587 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:51 crc kubenswrapper[5017]: I0129 06:35:51.766612 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:51Z","lastTransitionTime":"2026-01-29T06:35:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:35:51 crc kubenswrapper[5017]: I0129 06:35:51.870026 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:51 crc kubenswrapper[5017]: I0129 06:35:51.870176 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:51 crc kubenswrapper[5017]: I0129 06:35:51.870190 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:51 crc kubenswrapper[5017]: I0129 06:35:51.870208 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:51 crc kubenswrapper[5017]: I0129 06:35:51.870222 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:51Z","lastTransitionTime":"2026-01-29T06:35:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:51 crc kubenswrapper[5017]: I0129 06:35:51.973623 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:51 crc kubenswrapper[5017]: I0129 06:35:51.973694 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:51 crc kubenswrapper[5017]: I0129 06:35:51.973713 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:51 crc kubenswrapper[5017]: I0129 06:35:51.973741 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:51 crc kubenswrapper[5017]: I0129 06:35:51.973763 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:51Z","lastTransitionTime":"2026-01-29T06:35:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:52 crc kubenswrapper[5017]: I0129 06:35:52.076696 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:52 crc kubenswrapper[5017]: I0129 06:35:52.076771 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:52 crc kubenswrapper[5017]: I0129 06:35:52.076790 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:52 crc kubenswrapper[5017]: I0129 06:35:52.076823 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:52 crc kubenswrapper[5017]: I0129 06:35:52.076846 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:52Z","lastTransitionTime":"2026-01-29T06:35:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:35:52 crc kubenswrapper[5017]: I0129 06:35:52.181015 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:52 crc kubenswrapper[5017]: I0129 06:35:52.181097 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:52 crc kubenswrapper[5017]: I0129 06:35:52.181116 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:52 crc kubenswrapper[5017]: I0129 06:35:52.181145 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:52 crc kubenswrapper[5017]: I0129 06:35:52.181163 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:52Z","lastTransitionTime":"2026-01-29T06:35:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:52 crc kubenswrapper[5017]: I0129 06:35:52.284344 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:52 crc kubenswrapper[5017]: I0129 06:35:52.284427 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:52 crc kubenswrapper[5017]: I0129 06:35:52.284451 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:52 crc kubenswrapper[5017]: I0129 06:35:52.284477 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:52 crc kubenswrapper[5017]: I0129 06:35:52.284493 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:52Z","lastTransitionTime":"2026-01-29T06:35:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:52 crc kubenswrapper[5017]: I0129 06:35:52.287861 5017 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 10:00:40.824570547 +0000 UTC Jan 29 06:35:52 crc kubenswrapper[5017]: I0129 06:35:52.315401 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 06:35:52 crc kubenswrapper[5017]: I0129 06:35:52.315501 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 06:35:52 crc kubenswrapper[5017]: I0129 06:35:52.315833 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 06:35:52 crc kubenswrapper[5017]: E0129 06:35:52.315836 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 06:35:52 crc kubenswrapper[5017]: E0129 06:35:52.316015 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 06:35:52 crc kubenswrapper[5017]: E0129 06:35:52.316150 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 06:35:52 crc kubenswrapper[5017]: I0129 06:35:52.316241 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xn4bq" Jan 29 06:35:52 crc kubenswrapper[5017]: E0129 06:35:52.316548 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xn4bq" podUID="0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f" Jan 29 06:35:52 crc kubenswrapper[5017]: I0129 06:35:52.389105 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:52 crc kubenswrapper[5017]: I0129 06:35:52.389175 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:52 crc kubenswrapper[5017]: I0129 06:35:52.389193 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:52 crc kubenswrapper[5017]: I0129 06:35:52.389220 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:52 crc kubenswrapper[5017]: I0129 06:35:52.389243 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:52Z","lastTransitionTime":"2026-01-29T06:35:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:35:52 crc kubenswrapper[5017]: I0129 06:35:52.494576 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:52 crc kubenswrapper[5017]: I0129 06:35:52.494773 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:52 crc kubenswrapper[5017]: I0129 06:35:52.494803 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:52 crc kubenswrapper[5017]: I0129 06:35:52.494877 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:52 crc kubenswrapper[5017]: I0129 06:35:52.494945 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:52Z","lastTransitionTime":"2026-01-29T06:35:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:52 crc kubenswrapper[5017]: I0129 06:35:52.629631 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:52 crc kubenswrapper[5017]: I0129 06:35:52.629705 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:52 crc kubenswrapper[5017]: I0129 06:35:52.629724 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:52 crc kubenswrapper[5017]: I0129 06:35:52.629755 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:52 crc kubenswrapper[5017]: I0129 06:35:52.629775 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:52Z","lastTransitionTime":"2026-01-29T06:35:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:52 crc kubenswrapper[5017]: I0129 06:35:52.733337 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:52 crc kubenswrapper[5017]: I0129 06:35:52.733422 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:52 crc kubenswrapper[5017]: I0129 06:35:52.733450 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:52 crc kubenswrapper[5017]: I0129 06:35:52.733483 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:52 crc kubenswrapper[5017]: I0129 06:35:52.733505 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:52Z","lastTransitionTime":"2026-01-29T06:35:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:35:52 crc kubenswrapper[5017]: I0129 06:35:52.838402 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:52 crc kubenswrapper[5017]: I0129 06:35:52.838488 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:52 crc kubenswrapper[5017]: I0129 06:35:52.838514 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:52 crc kubenswrapper[5017]: I0129 06:35:52.838590 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:52 crc kubenswrapper[5017]: I0129 06:35:52.838620 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:52Z","lastTransitionTime":"2026-01-29T06:35:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:52 crc kubenswrapper[5017]: I0129 06:35:52.932376 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f-metrics-certs\") pod \"network-metrics-daemon-xn4bq\" (UID: \"0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f\") " pod="openshift-multus/network-metrics-daemon-xn4bq" Jan 29 06:35:52 crc kubenswrapper[5017]: E0129 06:35:52.932550 5017 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 06:35:52 crc kubenswrapper[5017]: E0129 06:35:52.932622 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f-metrics-certs podName:0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f nodeName:}" failed. No retries permitted until 2026-01-29 06:35:56.932604304 +0000 UTC m=+43.307051924 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f-metrics-certs") pod "network-metrics-daemon-xn4bq" (UID: "0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 06:35:52 crc kubenswrapper[5017]: I0129 06:35:52.942106 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:52 crc kubenswrapper[5017]: I0129 06:35:52.942195 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:52 crc kubenswrapper[5017]: I0129 06:35:52.942218 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:52 crc kubenswrapper[5017]: I0129 06:35:52.942248 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:52 crc kubenswrapper[5017]: I0129 06:35:52.942267 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:52Z","lastTransitionTime":"2026-01-29T06:35:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:53 crc kubenswrapper[5017]: I0129 06:35:53.044952 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:53 crc kubenswrapper[5017]: I0129 06:35:53.045089 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:53 crc kubenswrapper[5017]: I0129 06:35:53.045107 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:53 crc kubenswrapper[5017]: I0129 06:35:53.045138 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:53 crc kubenswrapper[5017]: I0129 06:35:53.045159 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:53Z","lastTransitionTime":"2026-01-29T06:35:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:35:53 crc kubenswrapper[5017]: I0129 06:35:53.148224 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:53 crc kubenswrapper[5017]: I0129 06:35:53.148300 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:53 crc kubenswrapper[5017]: I0129 06:35:53.148325 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:53 crc kubenswrapper[5017]: I0129 06:35:53.148350 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:53 crc kubenswrapper[5017]: I0129 06:35:53.148364 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:53Z","lastTransitionTime":"2026-01-29T06:35:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:53 crc kubenswrapper[5017]: I0129 06:35:53.251459 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:53 crc kubenswrapper[5017]: I0129 06:35:53.251533 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:53 crc kubenswrapper[5017]: I0129 06:35:53.251552 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:53 crc kubenswrapper[5017]: I0129 06:35:53.251581 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:53 crc kubenswrapper[5017]: I0129 06:35:53.251601 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:53Z","lastTransitionTime":"2026-01-29T06:35:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:35:53 crc kubenswrapper[5017]: I0129 06:35:53.289032 5017 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 06:08:18.527272849 +0000 UTC Jan 29 06:35:53 crc kubenswrapper[5017]: I0129 06:35:53.319859 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 06:35:53 crc kubenswrapper[5017]: I0129 06:35:53.340707 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c67cd79-8431-401b-8f03-9387813b30ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe28ec8d64ea50ecca5d4531440159b5277aff7de80b18813bb91f5c32923326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a82e8858b15c840a63737058d3af382c1a6469d7d1d3ba6e44759f33e55f2543\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a303c0532bb4a555700875ed27e56205d709658417a72f47868fb7bc8291a77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9
f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ef12beaf08b33d7694b3ea5a4705e37ec4a5b53f9cf1a7e0dd41a78d99bc8f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27dd6068095b58038216d1837a6b7966c514c7d8a9db5d144b043d918b757394\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T06:35:33Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 06:35:28.003940 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 06:35:28.004898 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1201671994/tls.crt::/tmp/serving-cert-1201671994/tls.key\\\\\\\"\\\\nI0129 06:35:33.575231 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 06:35:33.579536 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 06:35:33.579564 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 06:35:33.579587 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 06:35:33.579594 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 06:35:33.587038 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 06:35:33.587075 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 06:35:33.587080 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 06:35:33.587084 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 06:35:33.587087 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 06:35:33.587091 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 06:35:33.587094 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 06:35:33.587049 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 06:35:33.588897 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c256463618132e5305eb6fed3ef923e6a3c200bbc4b917e7f678e1c78af05084\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f97561ea3b9baa3440e5148060c80d57e9cf17837e7799b057f618d9084e8a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f97561ea3b9baa3440e5148060c80d57e9cf17837e7799b057f618d9084e8a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:53Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:53 crc kubenswrapper[5017]: I0129 06:35:53.354933 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:53 crc kubenswrapper[5017]: I0129 06:35:53.355162 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:53 crc kubenswrapper[5017]: I0129 06:35:53.355191 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:53 crc kubenswrapper[5017]: I0129 06:35:53.355264 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:53 crc kubenswrapper[5017]: I0129 06:35:53.355284 5017 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:53Z","lastTransitionTime":"2026-01-29T06:35:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:53 crc kubenswrapper[5017]: I0129 06:35:53.358552 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:53Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:53 crc kubenswrapper[5017]: I0129 06:35:53.374556 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:53Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:53 crc kubenswrapper[5017]: I0129 06:35:53.392030 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b8abb3dbc845f89735c58741f02550053402975d5edfe5e69f2323cc82bb8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7b335d7b6120621d4c94dfd8cdc86561a91d9c84a9481d352042d7728d7f6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:53Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:53 crc kubenswrapper[5017]: I0129 06:35:53.408599 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:53Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:53 crc kubenswrapper[5017]: I0129 06:35:53.433757 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-46ch5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42db11f6-649f-486e-83a2-7506fdf51ba2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e7417d93094efd850372fea31fa798b5b9583cbf1daa36ebe18ade52c714304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qm9zl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db927333ed9f10e47997ba0073d04675b880319e46f4c6d46805d171bc2e15f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2
099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qm9zl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-46ch5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:53Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:53 crc kubenswrapper[5017]: I0129 06:35:53.455232 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a2999ed-a16c-49e3-a07a-3b084d6b8616\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ff1f2d36640f2aebd2cd6919987f3f9000d95a8cefac7844a0671da192b1897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1045c0f8fc7c60a5aa6d2e2b449230e4603b08b0b4244c319e296f89b04ab98d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8
b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27acd0133bf699b73534766330da62590b4231608bd96e51bf01b06b5775073d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fc74339c29860f59a280af0f3e21a0f4fa2be4a3e28eb63d90bc63854805d16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:53Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:53 crc kubenswrapper[5017]: I0129 06:35:53.458578 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:53 crc kubenswrapper[5017]: I0129 06:35:53.458645 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:53 crc kubenswrapper[5017]: I0129 06:35:53.458666 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:53 crc kubenswrapper[5017]: I0129 06:35:53.458696 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:53 
crc kubenswrapper[5017]: I0129 06:35:53.458719 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:53Z","lastTransitionTime":"2026-01-29T06:35:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:53 crc kubenswrapper[5017]: I0129 06:35:53.472147 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ce95c49452e7355245f4ec2716c723873a116e2523b8192efe93cfb369260f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:53Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:53 crc kubenswrapper[5017]: I0129 06:35:53.482621 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vwppb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb597cc2-fff7-4d90-a43e-958791d83324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f41b8b485793972343992b2ec30940b852ff5f355e045225e8813683e9f9a3c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g28w2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vwppb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:53Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:53 crc kubenswrapper[5017]: I0129 06:35:53.497710 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-895pl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2672ef63-7861-4c3d-a1b4-03cc9d18f8e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://884bf08569779a1b94a927f1250657ab4d855e78f451a826856022a7c62cd6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5qdt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd299f15e4338652a37e1e3f09560a50e6c59d7bf29e827462d060a56294438\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5qdt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-895pl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:53Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:53 crc kubenswrapper[5017]: I0129 06:35:53.511697 5017 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-qwq46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4d3122b-b4b4-41ac-896a-566afdcda936\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45817069aa7e09fd97b77922f9b5a651f2a067991115021c0a85159a93d523af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z89l9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qwq46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:53Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:53 crc kubenswrapper[5017]: I0129 06:35:53.532209 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-m2gbd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4036e581-21bf-4ea0-aaf5-84ab8a841888\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ad091986feb992d2ef247a76476d77d58cb487cf0baa60f81e5d3ddf0d30c3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7399e8e88798a89139226e395d22a2aa61370296a3789f63d766f42ea1c9f4df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7399e8e88798a89139226e395d22a2aa61370296a3789f63d766f42ea1c9f4df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9418ae41c02e41fb3aeb7ce97e7d7b91f9a193ac23cbfe692cc79b071aa3adca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9418ae41c02e41fb3aeb7ce97e7d7b91f9a193ac23cbfe692cc79b071aa3adca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1065c2c8660d096dba79585cd7d954ae31491d258129e132b1e0cb1218bc849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1065c2c8660d096dba79585cd7d954ae31491d258129e132b1e0cb1218bc849\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb8f183765b578cd4362f3d3c13e20b8cb70aec46adbcdb750164e91e2bd53e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb8f183765b578cd4362f3d3c13e20b8cb70aec46adbcdb750164e91e2bd53e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23ac8b597fcf07378d69cd788cf4f79e236218e22a58ffaa5a72688e650067c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23ac8b597fcf07378d69cd788cf4f79e236218e22a58ffaa5a72688e650067c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8548b649f7fcef84467fc98f1b3aacc576c363a1c83087971aafaf8787734901\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8548b649f7fcef84467fc98f1b3aacc576c363a1c83087971aafaf8787734901\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-m2gbd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:53Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:53 crc kubenswrapper[5017]: I0129 06:35:53.547265 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xn4bq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgkx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgkx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xn4bq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:53Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:53 crc kubenswrapper[5017]: I0129 06:35:53.561410 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:53 crc kubenswrapper[5017]: I0129 06:35:53.561458 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:53 crc kubenswrapper[5017]: I0129 06:35:53.561470 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 29 06:35:53 crc kubenswrapper[5017]: I0129 06:35:53.561486 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:53 crc kubenswrapper[5017]: I0129 06:35:53.561499 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:53Z","lastTransitionTime":"2026-01-29T06:35:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:53 crc kubenswrapper[5017]: I0129 06:35:53.568600 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bf710b24256eeb9149ca710dd9fcceae762df89486e0cf0f341c278eadda2fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:53Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:53 crc kubenswrapper[5017]: I0129 06:35:53.591400 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9jkcd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ae056f0-e054-45da-9638-73074b7c8a3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17d52fc64784408e447773d44de6e83d32029a77073a0c1ee206a51d81eda62f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x84hz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9jkcd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:53Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:53 crc kubenswrapper[5017]: I0129 06:35:53.624364 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02dd5727-894c-4693-9bc7-83dd88ce118c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://177a3e8a7f63905262ffb4ea5f98c4fa97a8d48dd2ff075484a44ea87d66a072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7eec8c502e2516637a85d2b83f7f745f9cc5d6a420f15cea465d92f71e0b8e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1fb588776e0029263904fa0ec3afa53e180b30843864f1dee6999e1cca64b18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc8fc0d9e440c8725e279e6649d3c1e2cdc55589f6b5786187602576e4acd835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://749973b1dc2bcf3fc4e35277f0893a240b3571cadc45529da1229cd7ebe1748e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6a16
eea25767bb4fea94880a90ae883944934c09ff5ba0ced9641169691169d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e3fa5697d642a564e7ae639a013ea62bdddd3e84a927e14822c640ecf6caa05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f74734e0ae29164b6b6f388e56c4bed32978ea65cdf79267820ec237eb300479\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T06:35:44Z\\\",\\\"message\\\":\\\"35:44.798918 6317 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 06:35:44.799007 6317 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 06:35:44.799042 6317 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0129 06:35:44.799161 6317 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 06:35:44.799236 6317 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 06:35:44.799350 6317 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0129 06:35:44.799418 6317 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 06:35:44.799491 6317 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 06:35:44.799883 6317 reflector.go:311] Stopping reflector *v1.Service (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e3fa5697d642a564e7ae639a013ea62bdddd3e84a927e14822c640ecf6caa05\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T06:35:46Z\\\",\\\"message\\\":\\\"I0129 06:35:46.669016 6436 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0129 06:35:46.669192 6436 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0129 06:35:46.669219 6436 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0129 06:35:46.669254 6436 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0129 06:35:46.669275 6436 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0129 06:35:46.670555 6436 handler.go:208] Removed *v1.Node event handler 7\\\\nI0129 06:35:46.670651 6436 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 06:35:46.671045 6436 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0129 06:35:46.671114 6436 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0129 06:35:46.671437 6436 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0129 06:35:46.671457 6436 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0129 06:35:46.671479 6436 factory.go:656] Stopping watch factory\\\\nI0129 06:35:46.671478 6436 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0129 06:35:46.671467 6436 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0129 06:35:46.671499 6436 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0129 06:35:46.671500 6436 ovnkube.go:599] Stopped ovnkube\\\\nI0129 
06\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1e341b7acf35edc3ee9e5a4fec11e2270d45ea0f05df30543d2c9163b024646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13649086edb9d7ffe20a9040787d55449e6541a51f03432458f5fcb965fe4723\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13649086edb9d7ffe20a9040787d55449e6541a51f03432458f5fcb965fe4723\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wqgmk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:53Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:53 crc kubenswrapper[5017]: I0129 06:35:53.664067 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:53 crc kubenswrapper[5017]: I0129 06:35:53.664123 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:53 crc kubenswrapper[5017]: I0129 06:35:53.664137 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:53 crc kubenswrapper[5017]: I0129 06:35:53.664154 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:53 crc kubenswrapper[5017]: I0129 06:35:53.664166 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:53Z","lastTransitionTime":"2026-01-29T06:35:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:35:53 crc kubenswrapper[5017]: I0129 06:35:53.766597 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:53 crc kubenswrapper[5017]: I0129 06:35:53.766656 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:53 crc kubenswrapper[5017]: I0129 06:35:53.766671 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:53 crc kubenswrapper[5017]: I0129 06:35:53.766692 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:53 crc kubenswrapper[5017]: I0129 06:35:53.766707 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:53Z","lastTransitionTime":"2026-01-29T06:35:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:53 crc kubenswrapper[5017]: I0129 06:35:53.869759 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:53 crc kubenswrapper[5017]: I0129 06:35:53.869856 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:53 crc kubenswrapper[5017]: I0129 06:35:53.869876 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:53 crc kubenswrapper[5017]: I0129 06:35:53.869911 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:53 crc kubenswrapper[5017]: I0129 06:35:53.869933 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:53Z","lastTransitionTime":"2026-01-29T06:35:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:53 crc kubenswrapper[5017]: I0129 06:35:53.972732 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:53 crc kubenswrapper[5017]: I0129 06:35:53.972780 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:53 crc kubenswrapper[5017]: I0129 06:35:53.972793 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:53 crc kubenswrapper[5017]: I0129 06:35:53.972815 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:53 crc kubenswrapper[5017]: I0129 06:35:53.972828 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:53Z","lastTransitionTime":"2026-01-29T06:35:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:35:54 crc kubenswrapper[5017]: I0129 06:35:54.076035 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:54 crc kubenswrapper[5017]: I0129 06:35:54.076086 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:54 crc kubenswrapper[5017]: I0129 06:35:54.076100 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:54 crc kubenswrapper[5017]: I0129 06:35:54.076121 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:54 crc kubenswrapper[5017]: I0129 06:35:54.076135 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:54Z","lastTransitionTime":"2026-01-29T06:35:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:54 crc kubenswrapper[5017]: I0129 06:35:54.178948 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:54 crc kubenswrapper[5017]: I0129 06:35:54.179033 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:54 crc kubenswrapper[5017]: I0129 06:35:54.179051 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:54 crc kubenswrapper[5017]: I0129 06:35:54.179081 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:54 crc kubenswrapper[5017]: I0129 06:35:54.179104 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:54Z","lastTransitionTime":"2026-01-29T06:35:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:54 crc kubenswrapper[5017]: I0129 06:35:54.282734 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:54 crc kubenswrapper[5017]: I0129 06:35:54.282811 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:54 crc kubenswrapper[5017]: I0129 06:35:54.282833 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:54 crc kubenswrapper[5017]: I0129 06:35:54.282864 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:54 crc kubenswrapper[5017]: I0129 06:35:54.282886 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:54Z","lastTransitionTime":"2026-01-29T06:35:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:35:54 crc kubenswrapper[5017]: I0129 06:35:54.289945 5017 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 18:56:55.367314091 +0000 UTC Jan 29 06:35:54 crc kubenswrapper[5017]: I0129 06:35:54.315615 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 06:35:54 crc kubenswrapper[5017]: E0129 06:35:54.315806 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 06:35:54 crc kubenswrapper[5017]: I0129 06:35:54.316302 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xn4bq" Jan 29 06:35:54 crc kubenswrapper[5017]: I0129 06:35:54.316507 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 06:35:54 crc kubenswrapper[5017]: I0129 06:35:54.316695 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 06:35:54 crc kubenswrapper[5017]: E0129 06:35:54.316672 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xn4bq" podUID="0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f" Jan 29 06:35:54 crc kubenswrapper[5017]: E0129 06:35:54.316908 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 06:35:54 crc kubenswrapper[5017]: E0129 06:35:54.317042 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 06:35:54 crc kubenswrapper[5017]: I0129 06:35:54.349831 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bf710b24256eeb9149ca710dd9fcceae762df89486e0cf0f341c278eadda2fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:54Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:54 crc kubenswrapper[5017]: I0129 06:35:54.373364 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9jkcd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ae056f0-e054-45da-9638-73074b7c8a3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17d52fc64784408e447773d44de6e83d32029a77073a0c1ee206a51d81eda62f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x84hz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9jkcd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:54Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:54 crc kubenswrapper[5017]: I0129 06:35:54.386944 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:54 crc kubenswrapper[5017]: I0129 06:35:54.387003 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:54 crc kubenswrapper[5017]: I0129 06:35:54.387016 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:54 crc kubenswrapper[5017]: I0129 06:35:54.387036 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:54 crc kubenswrapper[5017]: I0129 06:35:54.387050 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:54Z","lastTransitionTime":"2026-01-29T06:35:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:54 crc kubenswrapper[5017]: I0129 06:35:54.417302 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02dd5727-894c-4693-9bc7-83dd88ce118c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://177a3e8a7f63905262ffb4ea5f98c4fa97a8d48dd2ff075484a44ea87d66a072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7eec8c502e2516637a85d2b83f7f745f9cc5d6a420f15cea465d92f71e0b8e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1fb588776e0029263904fa0ec3afa53e180b30843864f1dee6999e1cca64b18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc8fc0d9e440c8725e279e6649d3c1e2cdc55589f6b5786187602576e4acd835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://749973b1dc2bcf3fc4e35277f0893a240b3571cadc45529da1229cd7ebe1748e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6a16eea25767bb4fea94880a90ae883944934c09ff5ba0ced9641169691169d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e3fa5697d642a564e7ae639a013ea62bdddd3e8
4a927e14822c640ecf6caa05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f74734e0ae29164b6b6f388e56c4bed32978ea65cdf79267820ec237eb300479\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T06:35:44Z\\\",\\\"message\\\":\\\"35:44.798918 6317 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 06:35:44.799007 6317 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 06:35:44.799042 6317 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0129 06:35:44.799161 6317 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 06:35:44.799236 6317 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 06:35:44.799350 6317 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0129 06:35:44.799418 6317 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 06:35:44.799491 6317 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 06:35:44.799883 6317 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e3fa5697d642a564e7ae639a013ea62bdddd3e84a927e14822c640ecf6caa05\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T06:35:46Z\\\",\\\"message\\\":\\\"I0129 06:35:46.669016 6436 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0129 06:35:46.669192 6436 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0129 06:35:46.669219 6436 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0129 06:35:46.669254 6436 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0129 06:35:46.669275 6436 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0129 06:35:46.670555 6436 handler.go:208] Removed *v1.Node event handler 7\\\\nI0129 06:35:46.670651 6436 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 06:35:46.671045 6436 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0129 06:35:46.671114 6436 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0129 06:35:46.671437 6436 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0129 06:35:46.671457 6436 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0129 06:35:46.671479 6436 factory.go:656] Stopping watch factory\\\\nI0129 06:35:46.671478 6436 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0129 06:35:46.671467 6436 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0129 
06:35:46.671499 6436 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0129 06:35:46.671500 6436 ovnkube.go:599] Stopped ovnkube\\\\nI0129 06\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1e341b7acf35edc3ee9e5a4fec11e2270d45ea0f05df30543d2c9163b024646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13649086edb9
d7ffe20a9040787d55449e6541a51f03432458f5fcb965fe4723\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13649086edb9d7ffe20a9040787d55449e6541a51f03432458f5fcb965fe4723\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wqgmk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:54Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:54 crc kubenswrapper[5017]: I0129 06:35:54.440021 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b8abb3dbc845f89735c58741f02550053402975d5edfe5e69f2323cc82bb8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7b335d7b6120621d4c94dfd8cdc86561a91d9c84a9481d352042d7728d7f6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:54Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:54 crc kubenswrapper[5017]: I0129 06:35:54.461424 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:54Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:54 crc kubenswrapper[5017]: I0129 06:35:54.490369 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:54 crc kubenswrapper[5017]: I0129 06:35:54.490424 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:54 crc kubenswrapper[5017]: I0129 06:35:54.490439 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:54 crc kubenswrapper[5017]: I0129 06:35:54.490463 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:54 crc kubenswrapper[5017]: I0129 06:35:54.490483 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:54Z","lastTransitionTime":"2026-01-29T06:35:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:35:54 crc kubenswrapper[5017]: I0129 06:35:54.495644 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-46ch5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42db11f6-649f-486e-83a2-7506fdf51ba2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e7417d93094efd850372fea31fa798b5b9583cbf1daa36ebe18ade52c714304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qm9zl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db927333ed9f10e47997ba0073d04675b880319e46f4c6d46805d171bc2e15f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qm9zl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-46ch5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:54Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:54 crc kubenswrapper[5017]: I0129 06:35:54.523708 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c67cd79-8431-401b-8f03-9387813b30ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe28ec8d64ea50ecca5d4531440159b5277aff7de80b18813bb91f5c32923326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a82e8858b15c840a63737058d3af382c1a6469d7d1d3ba6e44759f33e55f2543\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a303c0532bb4a555700875ed27e56205d709658417a72f47868fb7bc8291a77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ef12beaf08b33d7694b3ea5a4705e37ec4a5b53f9cf1a7e0dd41a78d99bc8f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27dd6068095b58038216d1837a6b7966c514c7d8a9db5d144b043d918b757394\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T06:35:33Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 06:35:28.003940 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 06:35:28.004898 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1201671994/tls.crt::/tmp/serving-cert-1201671994/tls.key\\\\\\\"\\\\nI0129 06:35:33.575231 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 06:35:33.579536 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 06:35:33.579564 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 06:35:33.579587 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 06:35:33.579594 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 06:35:33.587038 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 06:35:33.587075 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 06:35:33.587080 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 06:35:33.587084 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 06:35:33.587087 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 06:35:33.587091 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 06:35:33.587094 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 06:35:33.587049 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 06:35:33.588897 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c256463618132e5305eb6fed3ef923e6a3c200bbc4b917e7f678e1c78af05084\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f97561ea3b9baa3440e5148060c80d57e9cf17837e7799b057f618d9084e8a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f97561ea3b9baa3440e5148060c80d57e9cf17837e7799b057f618d9084e8a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:54Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:54 crc kubenswrapper[5017]: I0129 06:35:54.543279 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:54Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:54 crc kubenswrapper[5017]: I0129 06:35:54.557662 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:54Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:54 crc kubenswrapper[5017]: I0129 06:35:54.569492 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a2999ed-a16c-49e3-a07a-3b084d6b8616\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ff1f2d36640f2aebd2cd6919987f3f9000d95a8cefac7844a0671da192b1897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1045c0f8fc7c60a5aa6d2e2b449230e4603b08b0b4244c319e296f89b04ab98d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"r
esource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27acd0133bf699b73534766330da62590b4231608bd96e51bf01b06b5775073d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fc74339c29860f59a280af0f3e21a0f4fa2be4a3e28eb63d90bc63854805d16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:54Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:54 crc kubenswrapper[5017]: I0129 06:35:54.584176 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ce95c49452e7355245f4ec2716c723873a116e2523b8192efe93cfb369260f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:54Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:54 crc kubenswrapper[5017]: I0129 06:35:54.592125 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:54 crc kubenswrapper[5017]: I0129 06:35:54.592169 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:54 crc kubenswrapper[5017]: I0129 06:35:54.592180 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:54 crc kubenswrapper[5017]: I0129 06:35:54.592195 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:54 crc kubenswrapper[5017]: I0129 06:35:54.592205 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:54Z","lastTransitionTime":"2026-01-29T06:35:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:35:54 crc kubenswrapper[5017]: I0129 06:35:54.598902 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vwppb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb597cc2-fff7-4d90-a43e-958791d83324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f41b8b485793972343992b2ec30940b852ff5f355e045225e8813683e9f9a3c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g28w2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vwppb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:54Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:54 crc kubenswrapper[5017]: I0129 06:35:54.612335 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-895pl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2672ef63-7861-4c3d-a1b4-03cc9d18f8e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://884bf08569779a1b94a927f1250657ab4d855e78f451a826856022a7c62cd6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5qdt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd299f15e4338652a37e1e3f09560a50e6c59d7bf29e827462d060a56294438\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5qdt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-895pl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:54Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:54 crc kubenswrapper[5017]: I0129 06:35:54.624775 5017 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-qwq46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4d3122b-b4b4-41ac-896a-566afdcda936\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45817069aa7e09fd97b77922f9b5a651f2a067991115021c0a85159a93d523af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z89l9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qwq46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:54Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:54 crc kubenswrapper[5017]: I0129 06:35:54.640076 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-m2gbd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4036e581-21bf-4ea0-aaf5-84ab8a841888\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ad091986feb992d2ef247a76476d77d58cb487cf0baa60f81e5d3ddf0d30c3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7399e8e88798a89139226e395d22a2aa61370296a3789f63d766f42ea1c9f4df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7399e8e88798a89139226e395d22a2aa61370296a3789f63d766f42ea1c9f4df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9418ae41c02e41fb3aeb7ce97e7d7b91f9a193ac23cbfe692cc79b071aa3adca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9418ae41c02e41fb3aeb7ce97e7d7b91f9a193ac23cbfe692cc79b071aa3adca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1065c2c8660d096dba79585cd7d954ae31491d258129e132b1e0cb1218bc849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1065c2c8660d096dba79585cd7d954ae31491d258129e132b1e0cb1218bc849\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb8f183765b578cd4362f3d3c13e20b8cb70aec46adbcdb750164e91e2bd53e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb8f183765b578cd4362f3d3c13e20b8cb70aec46adbcdb750164e91e2bd53e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23ac8b597fcf07378d69cd788cf4f79e236218e22a58ffaa5a72688e650067c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23ac8b597fcf07378d69cd788cf4f79e236218e22a58ffaa5a72688e650067c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8548b649f7fcef84467fc98f1b3aacc576c363a1c83087971aafaf8787734901\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8548b649f7fcef84467fc98f1b3aacc576c363a1c83087971aafaf8787734901\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-m2gbd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:54Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:54 crc kubenswrapper[5017]: I0129 06:35:54.654003 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xn4bq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgkx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgkx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xn4bq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:54Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:54 crc kubenswrapper[5017]: I0129 06:35:54.695224 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:54 crc kubenswrapper[5017]: I0129 06:35:54.695304 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:54 crc kubenswrapper[5017]: I0129 06:35:54.695323 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 29 06:35:54 crc kubenswrapper[5017]: I0129 06:35:54.695350 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:54 crc kubenswrapper[5017]: I0129 06:35:54.695371 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:54Z","lastTransitionTime":"2026-01-29T06:35:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:54 crc kubenswrapper[5017]: I0129 06:35:54.797822 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:54 crc kubenswrapper[5017]: I0129 06:35:54.797880 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:54 crc kubenswrapper[5017]: I0129 06:35:54.797891 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:54 crc kubenswrapper[5017]: I0129 06:35:54.797909 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:54 crc kubenswrapper[5017]: I0129 06:35:54.797922 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:54Z","lastTransitionTime":"2026-01-29T06:35:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:54 crc kubenswrapper[5017]: I0129 06:35:54.901838 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:54 crc kubenswrapper[5017]: I0129 06:35:54.901927 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:54 crc kubenswrapper[5017]: I0129 06:35:54.901952 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:54 crc kubenswrapper[5017]: I0129 06:35:54.902043 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:54 crc kubenswrapper[5017]: I0129 06:35:54.902063 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:54Z","lastTransitionTime":"2026-01-29T06:35:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:35:55 crc kubenswrapper[5017]: I0129 06:35:55.005056 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:55 crc kubenswrapper[5017]: I0129 06:35:55.005104 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:55 crc kubenswrapper[5017]: I0129 06:35:55.005116 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:55 crc kubenswrapper[5017]: I0129 06:35:55.005132 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:55 crc kubenswrapper[5017]: I0129 06:35:55.005145 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:55Z","lastTransitionTime":"2026-01-29T06:35:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:55 crc kubenswrapper[5017]: I0129 06:35:55.107510 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:55 crc kubenswrapper[5017]: I0129 06:35:55.107548 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:55 crc kubenswrapper[5017]: I0129 06:35:55.107556 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:55 crc kubenswrapper[5017]: I0129 06:35:55.107570 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:55 crc kubenswrapper[5017]: I0129 06:35:55.107580 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:55Z","lastTransitionTime":"2026-01-29T06:35:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:55 crc kubenswrapper[5017]: I0129 06:35:55.209776 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:55 crc kubenswrapper[5017]: I0129 06:35:55.209821 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:55 crc kubenswrapper[5017]: I0129 06:35:55.209836 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:55 crc kubenswrapper[5017]: I0129 06:35:55.209850 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:55 crc kubenswrapper[5017]: I0129 06:35:55.209859 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:55Z","lastTransitionTime":"2026-01-29T06:35:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:35:55 crc kubenswrapper[5017]: I0129 06:35:55.291213 5017 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 08:47:17.555425972 +0000 UTC Jan 29 06:35:55 crc kubenswrapper[5017]: I0129 06:35:55.312863 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:55 crc kubenswrapper[5017]: I0129 06:35:55.312923 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:55 crc kubenswrapper[5017]: I0129 06:35:55.312938 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:55 crc kubenswrapper[5017]: I0129 06:35:55.312973 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:55 crc kubenswrapper[5017]: I0129 06:35:55.312988 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:55Z","lastTransitionTime":"2026-01-29T06:35:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:55 crc kubenswrapper[5017]: I0129 06:35:55.416791 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:55 crc kubenswrapper[5017]: I0129 06:35:55.416847 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:55 crc kubenswrapper[5017]: I0129 06:35:55.416857 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:55 crc kubenswrapper[5017]: I0129 06:35:55.416873 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:55 crc kubenswrapper[5017]: I0129 06:35:55.416884 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:55Z","lastTransitionTime":"2026-01-29T06:35:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:35:55 crc kubenswrapper[5017]: I0129 06:35:55.519221 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:55 crc kubenswrapper[5017]: I0129 06:35:55.519271 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:55 crc kubenswrapper[5017]: I0129 06:35:55.519283 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:55 crc kubenswrapper[5017]: I0129 06:35:55.519298 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:55 crc kubenswrapper[5017]: I0129 06:35:55.519308 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:55Z","lastTransitionTime":"2026-01-29T06:35:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:55 crc kubenswrapper[5017]: I0129 06:35:55.621856 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:55 crc kubenswrapper[5017]: I0129 06:35:55.622152 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:55 crc kubenswrapper[5017]: I0129 06:35:55.622289 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:55 crc kubenswrapper[5017]: I0129 06:35:55.622400 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:55 crc kubenswrapper[5017]: I0129 06:35:55.622418 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:55Z","lastTransitionTime":"2026-01-29T06:35:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:55 crc kubenswrapper[5017]: I0129 06:35:55.725061 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:55 crc kubenswrapper[5017]: I0129 06:35:55.725107 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:55 crc kubenswrapper[5017]: I0129 06:35:55.725117 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:55 crc kubenswrapper[5017]: I0129 06:35:55.725133 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:55 crc kubenswrapper[5017]: I0129 06:35:55.725146 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:55Z","lastTransitionTime":"2026-01-29T06:35:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:35:55 crc kubenswrapper[5017]: I0129 06:35:55.828716 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:55 crc kubenswrapper[5017]: I0129 06:35:55.828779 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:55 crc kubenswrapper[5017]: I0129 06:35:55.828796 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:55 crc kubenswrapper[5017]: I0129 06:35:55.828822 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:55 crc kubenswrapper[5017]: I0129 06:35:55.828845 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:55Z","lastTransitionTime":"2026-01-29T06:35:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:55 crc kubenswrapper[5017]: I0129 06:35:55.931926 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:55 crc kubenswrapper[5017]: I0129 06:35:55.932017 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:55 crc kubenswrapper[5017]: I0129 06:35:55.932037 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:55 crc kubenswrapper[5017]: I0129 06:35:55.932062 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:55 crc kubenswrapper[5017]: I0129 06:35:55.932080 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:55Z","lastTransitionTime":"2026-01-29T06:35:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:56 crc kubenswrapper[5017]: I0129 06:35:56.035250 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:56 crc kubenswrapper[5017]: I0129 06:35:56.035344 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:56 crc kubenswrapper[5017]: I0129 06:35:56.035367 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:56 crc kubenswrapper[5017]: I0129 06:35:56.035394 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:56 crc kubenswrapper[5017]: I0129 06:35:56.035412 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:56Z","lastTransitionTime":"2026-01-29T06:35:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:35:56 crc kubenswrapper[5017]: I0129 06:35:56.139190 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:56 crc kubenswrapper[5017]: I0129 06:35:56.139648 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:56 crc kubenswrapper[5017]: I0129 06:35:56.139864 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:56 crc kubenswrapper[5017]: I0129 06:35:56.140114 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:56 crc kubenswrapper[5017]: I0129 06:35:56.140319 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:56Z","lastTransitionTime":"2026-01-29T06:35:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:56 crc kubenswrapper[5017]: I0129 06:35:56.243775 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:56 crc kubenswrapper[5017]: I0129 06:35:56.243871 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:56 crc kubenswrapper[5017]: I0129 06:35:56.243904 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:56 crc kubenswrapper[5017]: I0129 06:35:56.243943 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:56 crc kubenswrapper[5017]: I0129 06:35:56.244025 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:56Z","lastTransitionTime":"2026-01-29T06:35:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:56 crc kubenswrapper[5017]: I0129 06:35:56.291720 5017 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 15:54:19.262562751 +0000 UTC Jan 29 06:35:56 crc kubenswrapper[5017]: I0129 06:35:56.315620 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 06:35:56 crc kubenswrapper[5017]: I0129 06:35:56.315732 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 06:35:56 crc kubenswrapper[5017]: I0129 06:35:56.315747 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 06:35:56 crc kubenswrapper[5017]: I0129 06:35:56.315645 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-xn4bq" Jan 29 06:35:56 crc kubenswrapper[5017]: E0129 06:35:56.315856 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 06:35:56 crc kubenswrapper[5017]: E0129 06:35:56.316043 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 06:35:56 crc kubenswrapper[5017]: E0129 06:35:56.316295 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xn4bq" podUID="0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f" Jan 29 06:35:56 crc kubenswrapper[5017]: E0129 06:35:56.316402 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 06:35:56 crc kubenswrapper[5017]: I0129 06:35:56.347224 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:56 crc kubenswrapper[5017]: I0129 06:35:56.347315 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:56 crc kubenswrapper[5017]: I0129 06:35:56.347343 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:56 crc kubenswrapper[5017]: I0129 06:35:56.347381 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:56 crc kubenswrapper[5017]: I0129 06:35:56.347409 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:56Z","lastTransitionTime":"2026-01-29T06:35:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:35:56 crc kubenswrapper[5017]: I0129 06:35:56.451008 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:56 crc kubenswrapper[5017]: I0129 06:35:56.451094 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:56 crc kubenswrapper[5017]: I0129 06:35:56.451118 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:56 crc kubenswrapper[5017]: I0129 06:35:56.451150 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:56 crc kubenswrapper[5017]: I0129 06:35:56.451172 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:56Z","lastTransitionTime":"2026-01-29T06:35:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:56 crc kubenswrapper[5017]: I0129 06:35:56.554122 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:56 crc kubenswrapper[5017]: I0129 06:35:56.554193 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:56 crc kubenswrapper[5017]: I0129 06:35:56.554207 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:56 crc kubenswrapper[5017]: I0129 06:35:56.554228 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:56 crc kubenswrapper[5017]: I0129 06:35:56.554243 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:56Z","lastTransitionTime":"2026-01-29T06:35:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:56 crc kubenswrapper[5017]: I0129 06:35:56.658506 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:56 crc kubenswrapper[5017]: I0129 06:35:56.658555 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:56 crc kubenswrapper[5017]: I0129 06:35:56.658565 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:56 crc kubenswrapper[5017]: I0129 06:35:56.658581 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:56 crc kubenswrapper[5017]: I0129 06:35:56.658594 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:56Z","lastTransitionTime":"2026-01-29T06:35:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:35:56 crc kubenswrapper[5017]: I0129 06:35:56.761852 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:56 crc kubenswrapper[5017]: I0129 06:35:56.761933 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:56 crc kubenswrapper[5017]: I0129 06:35:56.761983 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:56 crc kubenswrapper[5017]: I0129 06:35:56.762015 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:56 crc kubenswrapper[5017]: I0129 06:35:56.762037 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:56Z","lastTransitionTime":"2026-01-29T06:35:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:56 crc kubenswrapper[5017]: I0129 06:35:56.865662 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:56 crc kubenswrapper[5017]: I0129 06:35:56.865712 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:56 crc kubenswrapper[5017]: I0129 06:35:56.865727 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:56 crc kubenswrapper[5017]: I0129 06:35:56.865753 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:56 crc kubenswrapper[5017]: I0129 06:35:56.865769 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:56Z","lastTransitionTime":"2026-01-29T06:35:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:56 crc kubenswrapper[5017]: I0129 06:35:56.969547 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:56 crc kubenswrapper[5017]: I0129 06:35:56.969616 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:56 crc kubenswrapper[5017]: I0129 06:35:56.969635 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:56 crc kubenswrapper[5017]: I0129 06:35:56.969665 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:56 crc kubenswrapper[5017]: I0129 06:35:56.969687 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:56Z","lastTransitionTime":"2026-01-29T06:35:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:35:56 crc kubenswrapper[5017]: I0129 06:35:56.979468 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f-metrics-certs\") pod \"network-metrics-daemon-xn4bq\" (UID: \"0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f\") " pod="openshift-multus/network-metrics-daemon-xn4bq" Jan 29 06:35:56 crc kubenswrapper[5017]: E0129 06:35:56.979656 5017 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 06:35:56 crc kubenswrapper[5017]: E0129 06:35:56.979770 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f-metrics-certs podName:0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f nodeName:}" failed. No retries permitted until 2026-01-29 06:36:04.979739325 +0000 UTC m=+51.354186965 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f-metrics-certs") pod "network-metrics-daemon-xn4bq" (UID: "0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 06:35:57 crc kubenswrapper[5017]: I0129 06:35:57.073294 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:57 crc kubenswrapper[5017]: I0129 06:35:57.073349 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:57 crc kubenswrapper[5017]: I0129 06:35:57.073360 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:57 crc kubenswrapper[5017]: I0129 06:35:57.073385 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:57 crc kubenswrapper[5017]: I0129 06:35:57.073397 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:57Z","lastTransitionTime":"2026-01-29T06:35:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:35:57 crc kubenswrapper[5017]: I0129 06:35:57.175720 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:57 crc kubenswrapper[5017]: I0129 06:35:57.175787 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:57 crc kubenswrapper[5017]: I0129 06:35:57.175803 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:57 crc kubenswrapper[5017]: I0129 06:35:57.175822 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:57 crc kubenswrapper[5017]: I0129 06:35:57.175834 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:57Z","lastTransitionTime":"2026-01-29T06:35:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:57 crc kubenswrapper[5017]: I0129 06:35:57.279281 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:57 crc kubenswrapper[5017]: I0129 06:35:57.279334 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:57 crc kubenswrapper[5017]: I0129 06:35:57.279351 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:57 crc kubenswrapper[5017]: I0129 06:35:57.279376 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:57 crc kubenswrapper[5017]: I0129 06:35:57.279397 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:57Z","lastTransitionTime":"2026-01-29T06:35:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:35:57 crc kubenswrapper[5017]: I0129 06:35:57.292024 5017 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 12:57:02.938061099 +0000 UTC Jan 29 06:35:57 crc kubenswrapper[5017]: I0129 06:35:57.381757 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:57 crc kubenswrapper[5017]: I0129 06:35:57.381825 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:57 crc kubenswrapper[5017]: I0129 06:35:57.381841 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:57 crc kubenswrapper[5017]: I0129 06:35:57.381867 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:57 crc kubenswrapper[5017]: I0129 06:35:57.381887 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:57Z","lastTransitionTime":"2026-01-29T06:35:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:57 crc kubenswrapper[5017]: I0129 06:35:57.485446 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:57 crc kubenswrapper[5017]: I0129 06:35:57.485580 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:57 crc kubenswrapper[5017]: I0129 06:35:57.485610 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:57 crc kubenswrapper[5017]: I0129 06:35:57.485645 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:57 crc kubenswrapper[5017]: I0129 06:35:57.485664 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:57Z","lastTransitionTime":"2026-01-29T06:35:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:35:57 crc kubenswrapper[5017]: I0129 06:35:57.589216 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:57 crc kubenswrapper[5017]: I0129 06:35:57.589316 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:57 crc kubenswrapper[5017]: I0129 06:35:57.589377 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:57 crc kubenswrapper[5017]: I0129 06:35:57.589414 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:57 crc kubenswrapper[5017]: I0129 06:35:57.589438 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:57Z","lastTransitionTime":"2026-01-29T06:35:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:57 crc kubenswrapper[5017]: I0129 06:35:57.692832 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:57 crc kubenswrapper[5017]: I0129 06:35:57.692908 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:57 crc kubenswrapper[5017]: I0129 06:35:57.692927 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:57 crc kubenswrapper[5017]: I0129 06:35:57.692983 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:57 crc kubenswrapper[5017]: I0129 06:35:57.693003 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:57Z","lastTransitionTime":"2026-01-29T06:35:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:57 crc kubenswrapper[5017]: I0129 06:35:57.796721 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:57 crc kubenswrapper[5017]: I0129 06:35:57.796804 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:57 crc kubenswrapper[5017]: I0129 06:35:57.796828 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:57 crc kubenswrapper[5017]: I0129 06:35:57.796860 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:57 crc kubenswrapper[5017]: I0129 06:35:57.796883 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:57Z","lastTransitionTime":"2026-01-29T06:35:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:35:57 crc kubenswrapper[5017]: I0129 06:35:57.900291 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:57 crc kubenswrapper[5017]: I0129 06:35:57.900372 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:57 crc kubenswrapper[5017]: I0129 06:35:57.900397 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:57 crc kubenswrapper[5017]: I0129 06:35:57.900430 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:57 crc kubenswrapper[5017]: I0129 06:35:57.900457 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:57Z","lastTransitionTime":"2026-01-29T06:35:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:58 crc kubenswrapper[5017]: I0129 06:35:58.003319 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:58 crc kubenswrapper[5017]: I0129 06:35:58.003373 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:58 crc kubenswrapper[5017]: I0129 06:35:58.003383 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:58 crc kubenswrapper[5017]: I0129 06:35:58.003400 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:58 crc kubenswrapper[5017]: I0129 06:35:58.003411 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:58Z","lastTransitionTime":"2026-01-29T06:35:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:58 crc kubenswrapper[5017]: I0129 06:35:58.106674 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:58 crc kubenswrapper[5017]: I0129 06:35:58.106756 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:58 crc kubenswrapper[5017]: I0129 06:35:58.106778 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:58 crc kubenswrapper[5017]: I0129 06:35:58.106810 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:58 crc kubenswrapper[5017]: I0129 06:35:58.106833 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:58Z","lastTransitionTime":"2026-01-29T06:35:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:35:58 crc kubenswrapper[5017]: I0129 06:35:58.210255 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:58 crc kubenswrapper[5017]: I0129 06:35:58.210316 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:58 crc kubenswrapper[5017]: I0129 06:35:58.210329 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:58 crc kubenswrapper[5017]: I0129 06:35:58.210354 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:58 crc kubenswrapper[5017]: I0129 06:35:58.210374 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:58Z","lastTransitionTime":"2026-01-29T06:35:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:58 crc kubenswrapper[5017]: I0129 06:35:58.293162 5017 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 11:57:41.993760073 +0000 UTC Jan 29 06:35:58 crc kubenswrapper[5017]: I0129 06:35:58.313447 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:58 crc kubenswrapper[5017]: I0129 06:35:58.313501 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:58 crc kubenswrapper[5017]: I0129 06:35:58.313519 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:58 crc kubenswrapper[5017]: I0129 06:35:58.313549 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:58 crc kubenswrapper[5017]: I0129 06:35:58.313570 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:58Z","lastTransitionTime":"2026-01-29T06:35:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:58 crc kubenswrapper[5017]: I0129 06:35:58.316132 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xn4bq" Jan 29 06:35:58 crc kubenswrapper[5017]: I0129 06:35:58.316157 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 06:35:58 crc kubenswrapper[5017]: I0129 06:35:58.316202 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 06:35:58 crc kubenswrapper[5017]: E0129 06:35:58.316260 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xn4bq" podUID="0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f" Jan 29 06:35:58 crc kubenswrapper[5017]: E0129 06:35:58.316411 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 06:35:58 crc kubenswrapper[5017]: I0129 06:35:58.316444 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 06:35:58 crc kubenswrapper[5017]: E0129 06:35:58.316837 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 06:35:58 crc kubenswrapper[5017]: E0129 06:35:58.316980 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 06:35:58 crc kubenswrapper[5017]: I0129 06:35:58.317149 5017 scope.go:117] "RemoveContainer" containerID="7e3fa5697d642a564e7ae639a013ea62bdddd3e84a927e14822c640ecf6caa05" Jan 29 06:35:58 crc kubenswrapper[5017]: I0129 06:35:58.332381 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-895pl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2672ef63-7861-4c3d-a1b4-03cc9d18f8e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://884bf08569779a1b94a927f1250657ab4d855e78f451a826856022a7c62cd6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5qdt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd299f15e4338652a37e1e3f09560a50e6c59d7bf29e827462d060a56294438\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5qdt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-
29T06:35:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-895pl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:58Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:58 crc kubenswrapper[5017]: I0129 06:35:58.344692 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qwq46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4d3122b-b4b4-41ac-896a-566afdcda936\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45817069aa7e09fd97b77922f9b5a651f2a067991115021c0a85159a93d523af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z89l9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qwq46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:58Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:58 crc kubenswrapper[5017]: I0129 06:35:58.367906 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-m2gbd" err="failed to patch status 
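The repeated patch failures above all end in the same TLS error from the pod.network-node-identity.openshift.io webhook. A sketch reproducing that failure mode with Go's standard x509 verifier: a certificate whose NotAfter (2025-08-24T17:21:41Z) precedes the evaluation time (2026-01-29) fails with "certificate has expired or is not yet valid". The timestamps are taken from the log; the certificate itself is a stand-in generated for illustration:

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"time"
)

func main() {
	key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "network-node-identity.openshift.io"},
		NotBefore:    time.Date(2025, 5, 24, 17, 21, 41, 0, time.UTC), // assumed issue time
		NotAfter:     time.Date(2025, 8, 24, 17, 21, 41, 0, time.UTC), // expiry from the log
	}
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	cert, _ := x509.ParseCertificate(der)

	roots := x509.NewCertPool()
	roots.AddCert(cert)
	_, err := cert.Verify(x509.VerifyOptions{
		Roots:       roots,
		CurrentTime: time.Date(2026, 1, 29, 6, 35, 58, 0, time.UTC), // "current time" from the log
	})
	fmt.Println(err) // x509: certificate has expired or is not yet valid: ...
}
```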
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4036e581-21bf-4ea0-aaf5-84ab8a841888\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ad091986feb992d2ef247a76476d77d58cb487cf0baa60f81e5d3ddf0d30c3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7399e8e88798a89139226e395d22a2aa61370296a3789f63d766f42ea1c9f4df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7399e8e88798a89139226e395d22a2aa61370296a3789f63d766f42ea1c9f4df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9418ae41c02e41fb3aeb7ce97e7d7b91f9a193ac23cbfe692cc79b071aa3adca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9418ae41c02e41fb3aeb7ce97e7d7b91f9a193ac23cbfe692cc79b071aa3adca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1065c2c8660d096dba79585cd7d954ae31491d258129e132b1e0cb1218bc849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1065c2c8660d096dba79585cd7d954ae31491d258129e132b1e0cb1218bc849\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb8f183765b578cd4362f3d3c13e20b8cb70aec46adbcdb750164e91e2bd53e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb8f183765b578cd4362f3d3c13e20b8cb70aec46adbcdb750164e91e2bd53e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23ac8b597fcf07378d69cd788cf4f79e236218e22a58ffaa5a72688e650067c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23ac8b597fcf07378d69cd788cf4f79e236218e22a58ffaa5a72688e650067c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8548b649f7fcef84467fc98f1b3aacc576c363a1c83087971aafaf8787734901\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8548b649f7fcef84467fc98f1b3aacc576c363a1c83087971aafaf8787734901\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-m2gbd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:58Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:58 crc kubenswrapper[5017]: I0129 06:35:58.377483 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xn4bq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgkx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgkx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xn4bq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:58Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:58 crc kubenswrapper[5017]: I0129 06:35:58.389575 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bf710b24256eeb9149ca710dd9fcceae762df89486e0cf0f341c278eadda2fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:58Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:58 crc kubenswrapper[5017]: I0129 06:35:58.401630 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9jkcd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ae056f0-e054-45da-9638-73074b7c8a3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17d52fc64784408e447773d44de6e83d32029a77073a0c1ee206a51d81eda62f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x84hz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9jkcd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:58Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:58 crc kubenswrapper[5017]: I0129 06:35:58.418378 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:58 crc kubenswrapper[5017]: I0129 06:35:58.418443 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:58 crc kubenswrapper[5017]: I0129 06:35:58.418466 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:58 crc kubenswrapper[5017]: I0129 06:35:58.418537 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:58 crc kubenswrapper[5017]: I0129 06:35:58.418551 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:58Z","lastTransitionTime":"2026-01-29T06:35:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:58 crc kubenswrapper[5017]: I0129 06:35:58.528820 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" Jan 29 06:35:58 crc kubenswrapper[5017]: I0129 06:35:58.528567 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02dd5727-894c-4693-9bc7-83dd88ce118c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://177a3e8a7f63905262ffb4ea5f98c4fa97a8d48dd2ff075484a44ea87d66a072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7eec8c502e2516637a85d2b83f7f745f9cc5d6a420f15cea465d92f71e0b8e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1fb588776e0029263904fa0ec3afa53e180b30843864f1dee6999e1cca64b18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc8fc0d9e440c8725e279e6649d3c1e2cdc55589f6b5786187602576e4acd835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://749973b1dc2bcf3fc4e35277f0893a240b3571cadc45529da1229cd7ebe1748e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6a16eea25767bb4fea94880a90ae883944934c09ff5ba0ced9641169691169d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e3fa5697d642a564e7ae639a013ea62bdddd3e8
4a927e14822c640ecf6caa05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e3fa5697d642a564e7ae639a013ea62bdddd3e84a927e14822c640ecf6caa05\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T06:35:46Z\\\",\\\"message\\\":\\\"I0129 06:35:46.669016 6436 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0129 06:35:46.669192 6436 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0129 06:35:46.669219 6436 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0129 06:35:46.669254 6436 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0129 06:35:46.669275 6436 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0129 06:35:46.670555 6436 handler.go:208] Removed *v1.Node event handler 7\\\\nI0129 06:35:46.670651 6436 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 06:35:46.671045 6436 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0129 06:35:46.671114 6436 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0129 06:35:46.671437 6436 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0129 06:35:46.671457 6436 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0129 06:35:46.671479 6436 factory.go:656] Stopping watch factory\\\\nI0129 06:35:46.671478 6436 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0129 06:35:46.671467 6436 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0129 06:35:46.671499 6436 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0129 06:35:46.671500 6436 ovnkube.go:599] Stopped ovnkube\\\\nI0129 06\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:45Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-wqgmk_openshift-ovn-kubernetes(02dd5727-894c-4693-9bc7-83dd88ce118c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1e341b7acf35edc3ee9e5a4fec11e2270d45ea0f05df30543d2c9163b024646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13649086edb9d7ffe20a9040787d55449e6541a51f03432458f5fcb965fe4723\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13649086edb9d7ffe20a9040787d55449e6541a51f03432458f5fcb965fe4723\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wqgmk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:58Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:58 crc kubenswrapper[5017]: I0129 06:35:58.531885 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:58 crc kubenswrapper[5017]: I0129 06:35:58.531919 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:58 crc kubenswrapper[5017]: I0129 06:35:58.531936 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:58 crc kubenswrapper[5017]: I0129 06:35:58.531997 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:58 crc kubenswrapper[5017]: I0129 06:35:58.532024 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:58Z","lastTransitionTime":"2026-01-29T06:35:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:35:58 crc kubenswrapper[5017]: I0129 06:35:58.554209 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:58Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:58 crc kubenswrapper[5017]: I0129 06:35:58.571228 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-46ch5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42db11f6-649f-486e-83a2-7506fdf51ba2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e7417d93094efd850372fea31fa798b5b9583cbf1daa36ebe18ade52c714304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qm9zl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db927333ed9f10e47997ba0073d04675b880319e46f4c6d46805d171bc2e15f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qm9zl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-46ch5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:58Z is after 2025-08-24T17:21:41Z" Jan 29 
06:35:58 crc kubenswrapper[5017]: I0129 06:35:58.586594 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c67cd79-8431-401b-8f03-9387813b30ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe28ec8d64ea50ecca5d4531440159b5277aff7de80b18813bb91f5c32923326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a82e8858b15c840a63737058d3af382c1a6469d7d1d3ba6e44759f33e55f2543\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a303c0532bb4a555700875ed27e56205d709658417a72f47868fb7bc8291a77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ef12beaf08b33d7694b3ea5a4705e37ec4a5b53f9cf1a7e0dd41a78d99bc8f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27dd6068095b58038216d1837a6b7966c514c7d8a9db5d144b043d918b757394\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T06:35:33Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 06:35:28.003940 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 06:35:28.004898 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1201671994/tls.crt::/tmp/serving-cert-1201671994/tls.key\\\\\\\"\\\\nI0129 06:35:33.575231 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 06:35:33.579536 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 06:35:33.579564 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 06:35:33.579587 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 06:35:33.579594 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 06:35:33.587038 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 06:35:33.587075 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 06:35:33.587080 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 06:35:33.587084 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 06:35:33.587087 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 06:35:33.587091 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 06:35:33.587094 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 06:35:33.587049 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 06:35:33.588897 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c256463618132e5305eb6fed3ef923e6a3c200bbc4b917e7f678e1c78af05084\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f97561ea3b9baa3440e5148060c80d57e9cf17837e7799b057f618d9084e8a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f97561ea3b9baa3440e5148060c80d57e9cf17837e7799b057f618d9084e8a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:58Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:58 crc kubenswrapper[5017]: I0129 06:35:58.600819 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:58Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:58 crc kubenswrapper[5017]: I0129 06:35:58.620539 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:58Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:58 crc kubenswrapper[5017]: I0129 06:35:58.635202 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b8abb3dbc845f89735c58741f02550053402975d5edfe5e69f2323cc82bb8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7b335d7b6120621d4c94dfd8cdc86561a91d9c84a9481d352042d7728d7f6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:58Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:58 crc kubenswrapper[5017]: I0129 06:35:58.637567 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:58 crc kubenswrapper[5017]: I0129 06:35:58.637596 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:58 crc kubenswrapper[5017]: I0129 06:35:58.637607 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:58 crc kubenswrapper[5017]: I0129 06:35:58.637624 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:58 crc kubenswrapper[5017]: I0129 06:35:58.637637 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:58Z","lastTransitionTime":"2026-01-29T06:35:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:35:58 crc kubenswrapper[5017]: I0129 06:35:58.653932 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a2999ed-a16c-49e3-a07a-3b084d6b8616\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ff1f2d36640f2aebd2cd6919987f3f9000d95a8cefac7844a0671da192b1897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1045c0f8fc7c60a5aa6d2e2b449230e4603b08b0b4244c319e296f89b04ab98d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27acd0133bf699b73534766330da62590b4231608bd96e51bf01b06b5775073d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fc74339c29860f59a280af0f3e21a0f4fa2be4a3e28eb63d90bc63854805d16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:58Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:58 crc kubenswrapper[5017]: I0129 06:35:58.676684 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ce95c49452e7355245f4ec2716c723873a116e2523b8192efe93cfb369260f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:58Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:58 crc kubenswrapper[5017]: I0129 06:35:58.693422 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vwppb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb597cc2-fff7-4d90-a43e-958791d83324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f41b8b485793972343992b2ec30940b852ff5f355e045225e8813683e9f9a3c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g28w2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vwppb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:58Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:58 crc kubenswrapper[5017]: I0129 06:35:58.720348 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wqgmk_02dd5727-894c-4693-9bc7-83dd88ce118c/ovnkube-controller/1.log" Jan 29 06:35:58 crc kubenswrapper[5017]: I0129 06:35:58.722752 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" 
event={"ID":"02dd5727-894c-4693-9bc7-83dd88ce118c","Type":"ContainerStarted","Data":"dd68f113fc859fc67ddb074145ba0c97e3d310e617acaf13f814e69bc9cc7c0f"} Jan 29 06:35:58 crc kubenswrapper[5017]: I0129 06:35:58.723481 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" Jan 29 06:35:58 crc kubenswrapper[5017]: I0129 06:35:58.741011 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-895pl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2672ef63-7861-4c3d-a1b4-03cc9d18f8e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://884bf08569779a1b94a927f1250657ab4d855e78f451a826856022a7c62cd6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5qdt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd299f15e4338652a37e1e3f09560a50e6c59d7bf29e827462d060a56294438\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5qdt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\
"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-895pl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:58Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:58 crc kubenswrapper[5017]: I0129 06:35:58.741216 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:58 crc kubenswrapper[5017]: I0129 06:35:58.741331 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:58 crc kubenswrapper[5017]: I0129 06:35:58.741348 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:58 crc kubenswrapper[5017]: I0129 06:35:58.741370 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:58 crc kubenswrapper[5017]: I0129 06:35:58.741380 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:58Z","lastTransitionTime":"2026-01-29T06:35:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:58 crc kubenswrapper[5017]: I0129 06:35:58.752309 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qwq46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4d3122b-b4b4-41ac-896a-566afdcda936\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45817069aa7e09fd97b77922f9b5a651f2a067991115021c0a85159a93d523af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z89l9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qwq46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:58Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:58 crc kubenswrapper[5017]: I0129 06:35:58.770995 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-m2gbd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4036e581-21bf-4ea0-aaf5-84ab8a841888\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ad091986feb992d2ef247a76476d77d58cb487cf0baa60f81e5d3ddf0d30c3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7399e8e88798a89139226e395d22a2aa61370296a3789f63d766f42ea1c9f4df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7399e8e88798a89139226e395d22a2aa61370296a3789f63d766f42ea1c9f4df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9418ae41c02e41fb3aeb7ce97e7d7b91f9a193ac23cbfe692cc79b071aa3adca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9418ae41c02e41fb3aeb7ce97e7d7b91f9a193ac23cbfe692cc79b071aa3adca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1065c2c8660d096dba79585cd7d954ae31491d258129e132b1e0cb1218bc849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1065c2c8660d096dba79585cd7d954ae31491d258129e132b1e0cb1218bc849\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb8f183765b578cd4362f3d3c13e20b8cb70aec46adbcdb750164e91e2bd53e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb8f183765b578cd4362f3d3c13e20b8cb70aec46adbcdb750164e91e2bd53e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23ac8b597fcf07378d69cd788cf4f79e236218e22a58ffaa5a72688e650067c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23ac8b597fcf07378d69cd788cf4f79e236218e22a58ffaa5a72688e650067c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8548b649f7fcef84467fc98f1b3aacc576c363a1c83087971aafaf8787734901\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8548b649f7fcef84467fc98f1b3aacc576c363a1c83087971aafaf8787734901\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-m2gbd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:58Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:58 crc kubenswrapper[5017]: I0129 06:35:58.786110 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xn4bq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgkx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgkx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xn4bq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:58Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:58 crc kubenswrapper[5017]: I0129 06:35:58.806751 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bf710b24256eeb9149ca710dd9fcceae762df89486e0cf0f341c278eadda2fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:58Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:58 crc kubenswrapper[5017]: I0129 06:35:58.829717 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9jkcd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ae056f0-e054-45da-9638-73074b7c8a3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17d52fc64784408e447773d44de6e83d32029a77073a0c1ee206a51d81eda62f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x84hz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9jkcd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:58Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:58 crc kubenswrapper[5017]: I0129 06:35:58.844148 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:58 crc kubenswrapper[5017]: I0129 06:35:58.844191 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:58 crc kubenswrapper[5017]: I0129 06:35:58.844203 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:58 crc kubenswrapper[5017]: I0129 06:35:58.844218 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:58 crc kubenswrapper[5017]: I0129 06:35:58.844227 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:58Z","lastTransitionTime":"2026-01-29T06:35:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:58 crc kubenswrapper[5017]: I0129 06:35:58.855044 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02dd5727-894c-4693-9bc7-83dd88ce118c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://177a3e8a7f63905262ffb4ea5f98c4fa97a8d48dd2ff075484a44ea87d66a072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7eec8c502e2516637a85d2b83f7f745f9cc5d6a420f15cea465d92f71e0b8e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1fb588776e0029263904fa0ec3afa53e180b30843864f1dee6999e1cca64b18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc8fc0d9e440c8725e279e6649d3c1e2cdc55589f6b5786187602576e4acd835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://749973b1dc2bcf3fc4e35277f0893a240b3571cadc45529da1229cd7ebe1748e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6a16eea25767bb4fea94880a90ae883944934c09ff5ba0ced9641169691169d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd68f113fc859fc67ddb074145ba0c97e3d310e6
17acaf13f814e69bc9cc7c0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e3fa5697d642a564e7ae639a013ea62bdddd3e84a927e14822c640ecf6caa05\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T06:35:46Z\\\",\\\"message\\\":\\\"I0129 06:35:46.669016 6436 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0129 06:35:46.669192 6436 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0129 06:35:46.669219 6436 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0129 06:35:46.669254 6436 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0129 06:35:46.669275 6436 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0129 06:35:46.670555 6436 handler.go:208] Removed *v1.Node event handler 7\\\\nI0129 06:35:46.670651 6436 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 06:35:46.671045 6436 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0129 06:35:46.671114 6436 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0129 06:35:46.671437 6436 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0129 06:35:46.671457 6436 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0129 06:35:46.671479 6436 factory.go:656] Stopping watch factory\\\\nI0129 06:35:46.671478 6436 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0129 06:35:46.671467 6436 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0129 06:35:46.671499 6436 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0129 06:35:46.671500 6436 ovnkube.go:599] Stopped ovnkube\\\\nI0129 
06\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:45Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1e341b7acf35edc3ee9e5a4fec11e2270d45ea0f05df30543d2c9163b024646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://13649086edb9d7ffe20a9040787d55449e6541a51f03432458f5fcb965fe4723\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13649086edb9d7ffe20a9040787d55449e6541a51f03432458f5fcb965fe4723\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wqgmk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:58Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:58 crc kubenswrapper[5017]: I0129 06:35:58.869244 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b8abb3dbc845f89735c58741f02550053402975d5edfe5e69f2323cc82bb8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7b335d7b6120621d4c94dfd8cdc86561a91d9c84a9481d352042d7728d7f6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:58Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:58 crc kubenswrapper[5017]: I0129 06:35:58.880787 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:58Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:58 crc kubenswrapper[5017]: I0129 06:35:58.900684 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-46ch5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42db11f6-649f-486e-83a2-7506fdf51ba2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e7417d93094efd850372fea31fa798b5b9583cbf1daa36ebe18ade52c714304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qm9zl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db927333ed9f10e47997ba0073d04675b880319e46f4c6d46805d171bc2e15f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qm9zl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-46ch5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:58Z is after 2025-08-24T17:21:41Z" Jan 29 
06:35:58 crc kubenswrapper[5017]: I0129 06:35:58.920020 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c67cd79-8431-401b-8f03-9387813b30ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe28ec8d64ea50ecca5d4531440159b5277aff7de80b18813bb91f5c32923326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a82e8858b15c840a63737058d3af382c1a6469d7d1d3ba6e44759f33e55f2543\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a303c0532bb4a555700875ed27e56205d709658417a72f47868fb7bc8291a77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ef12beaf08b33d7694b3ea5a4705e37ec4a5b53f9cf1a7e0dd41a78d99bc8f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27dd6068095b58038216d1837a6b7966c514c7d8a9db5d144b043d918b757394\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T06:35:33Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 06:35:28.003940 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 06:35:28.004898 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1201671994/tls.crt::/tmp/serving-cert-1201671994/tls.key\\\\\\\"\\\\nI0129 06:35:33.575231 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 06:35:33.579536 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 06:35:33.579564 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 06:35:33.579587 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 06:35:33.579594 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 06:35:33.587038 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 06:35:33.587075 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 06:35:33.587080 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 06:35:33.587084 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 06:35:33.587087 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 06:35:33.587091 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 06:35:33.587094 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 06:35:33.587049 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 06:35:33.588897 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c256463618132e5305eb6fed3ef923e6a3c200bbc4b917e7f678e1c78af05084\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f97561ea3b9baa3440e5148060c80d57e9cf17837e7799b057f618d9084e8a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f97561ea3b9baa3440e5148060c80d57e9cf17837e7799b057f618d9084e8a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:58Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:58 crc kubenswrapper[5017]: I0129 06:35:58.939137 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:58Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:58 crc kubenswrapper[5017]: I0129 06:35:58.946859 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:58 crc kubenswrapper[5017]: I0129 06:35:58.946935 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:58 crc kubenswrapper[5017]: I0129 06:35:58.946978 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:58 crc kubenswrapper[5017]: I0129 06:35:58.947010 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:58 crc kubenswrapper[5017]: I0129 06:35:58.947032 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:58Z","lastTransitionTime":"2026-01-29T06:35:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:35:58 crc kubenswrapper[5017]: I0129 06:35:58.962100 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:58Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:58 crc kubenswrapper[5017]: I0129 06:35:58.983929 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a2999ed-a16c-49e3-a07a-3b084d6b8616\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ff1f2d36640f2aebd2cd6919987f3f9000d95a8cefac7844a0671da192b1897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1045c0f8fc7c60a5aa6d2e2b449230e4603b08b0b4244c319e296f89b04ab98d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27acd0133bf699b73534766330da62590b4231608bd96e51bf01b06b5775073d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fc74339c29860f59a280af0f3e21a0f4fa2be4a3e28eb63d90bc63854805d16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:58Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:59 crc kubenswrapper[5017]: I0129 06:35:59.000085 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ce95c49452e7355245f4ec2716c723873a116e2523b8192efe93cfb369260f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-29T06:35:58Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:59 crc kubenswrapper[5017]: I0129 06:35:59.011460 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vwppb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb597cc2-fff7-4d90-a43e-958791d83324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f41b8b485793972343992b2ec30940b852ff5f355e045225e8813683e9f9a3c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g28w2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vwppb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:59Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:59 crc kubenswrapper[5017]: I0129 06:35:59.050341 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:59 crc kubenswrapper[5017]: I0129 06:35:59.050376 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:59 crc kubenswrapper[5017]: I0129 06:35:59.050387 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:59 crc kubenswrapper[5017]: I0129 06:35:59.050404 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:59 crc kubenswrapper[5017]: 
I0129 06:35:59.050413 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:59Z","lastTransitionTime":"2026-01-29T06:35:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:59 crc kubenswrapper[5017]: I0129 06:35:59.152148 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:59 crc kubenswrapper[5017]: I0129 06:35:59.152188 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:59 crc kubenswrapper[5017]: I0129 06:35:59.152199 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:59 crc kubenswrapper[5017]: I0129 06:35:59.152216 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:59 crc kubenswrapper[5017]: I0129 06:35:59.152229 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:59Z","lastTransitionTime":"2026-01-29T06:35:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:59 crc kubenswrapper[5017]: I0129 06:35:59.254663 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:59 crc kubenswrapper[5017]: I0129 06:35:59.254705 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:59 crc kubenswrapper[5017]: I0129 06:35:59.254730 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:59 crc kubenswrapper[5017]: I0129 06:35:59.254747 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:59 crc kubenswrapper[5017]: I0129 06:35:59.254756 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:59Z","lastTransitionTime":"2026-01-29T06:35:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:35:59 crc kubenswrapper[5017]: I0129 06:35:59.294084 5017 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 03:46:32.395345899 +0000 UTC Jan 29 06:35:59 crc kubenswrapper[5017]: I0129 06:35:59.357336 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:59 crc kubenswrapper[5017]: I0129 06:35:59.357399 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:59 crc kubenswrapper[5017]: I0129 06:35:59.357417 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:59 crc kubenswrapper[5017]: I0129 06:35:59.357487 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:59 crc kubenswrapper[5017]: I0129 06:35:59.357505 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:59Z","lastTransitionTime":"2026-01-29T06:35:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:59 crc kubenswrapper[5017]: I0129 06:35:59.460224 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:59 crc kubenswrapper[5017]: I0129 06:35:59.460303 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:59 crc kubenswrapper[5017]: I0129 06:35:59.460322 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:59 crc kubenswrapper[5017]: I0129 06:35:59.460353 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:59 crc kubenswrapper[5017]: I0129 06:35:59.460371 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:59Z","lastTransitionTime":"2026-01-29T06:35:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:35:59 crc kubenswrapper[5017]: I0129 06:35:59.564743 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:59 crc kubenswrapper[5017]: I0129 06:35:59.564800 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:59 crc kubenswrapper[5017]: I0129 06:35:59.564810 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:59 crc kubenswrapper[5017]: I0129 06:35:59.564828 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:59 crc kubenswrapper[5017]: I0129 06:35:59.564840 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:59Z","lastTransitionTime":"2026-01-29T06:35:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:59 crc kubenswrapper[5017]: I0129 06:35:59.668847 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:59 crc kubenswrapper[5017]: I0129 06:35:59.668937 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:59 crc kubenswrapper[5017]: I0129 06:35:59.669053 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:59 crc kubenswrapper[5017]: I0129 06:35:59.669118 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:59 crc kubenswrapper[5017]: I0129 06:35:59.669154 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:59Z","lastTransitionTime":"2026-01-29T06:35:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:35:59 crc kubenswrapper[5017]: I0129 06:35:59.730627 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wqgmk_02dd5727-894c-4693-9bc7-83dd88ce118c/ovnkube-controller/2.log" Jan 29 06:35:59 crc kubenswrapper[5017]: I0129 06:35:59.731420 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wqgmk_02dd5727-894c-4693-9bc7-83dd88ce118c/ovnkube-controller/1.log" Jan 29 06:35:59 crc kubenswrapper[5017]: I0129 06:35:59.735289 5017 generic.go:334] "Generic (PLEG): container finished" podID="02dd5727-894c-4693-9bc7-83dd88ce118c" containerID="dd68f113fc859fc67ddb074145ba0c97e3d310e617acaf13f814e69bc9cc7c0f" exitCode=1 Jan 29 06:35:59 crc kubenswrapper[5017]: I0129 06:35:59.735380 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" event={"ID":"02dd5727-894c-4693-9bc7-83dd88ce118c","Type":"ContainerDied","Data":"dd68f113fc859fc67ddb074145ba0c97e3d310e617acaf13f814e69bc9cc7c0f"} Jan 29 06:35:59 crc kubenswrapper[5017]: I0129 06:35:59.735454 5017 scope.go:117] "RemoveContainer" containerID="7e3fa5697d642a564e7ae639a013ea62bdddd3e84a927e14822c640ecf6caa05" Jan 29 06:35:59 crc kubenswrapper[5017]: I0129 06:35:59.736867 5017 scope.go:117] "RemoveContainer" containerID="dd68f113fc859fc67ddb074145ba0c97e3d310e617acaf13f814e69bc9cc7c0f" Jan 29 06:35:59 crc kubenswrapper[5017]: E0129 06:35:59.737262 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-wqgmk_openshift-ovn-kubernetes(02dd5727-894c-4693-9bc7-83dd88ce118c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" podUID="02dd5727-894c-4693-9bc7-83dd88ce118c" Jan 29 06:35:59 crc kubenswrapper[5017]: I0129 06:35:59.773257 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bf710b24256eeb9149ca710dd9fcceae762df89486e0cf0f341c278eadda2fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:59Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:59 crc kubenswrapper[5017]: I0129 06:35:59.777179 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:59 crc kubenswrapper[5017]: I0129 06:35:59.777257 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:59 crc kubenswrapper[5017]: I0129 06:35:59.777276 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:59 crc kubenswrapper[5017]: I0129 06:35:59.777306 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:59 crc kubenswrapper[5017]: I0129 06:35:59.777323 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:59Z","lastTransitionTime":"2026-01-29T06:35:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:35:59 crc kubenswrapper[5017]: I0129 06:35:59.793034 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9jkcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ae056f0-e054-45da-9638-73074b7c8a3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17d52fc64784408e447773d44de6e83d32029a77073a0c1ee206a51d81eda62f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x84hz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9jkcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:59Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:59 crc kubenswrapper[5017]: I0129 06:35:59.827316 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02dd5727-894c-4693-9bc7-83dd88ce118c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://177a3e8a7f63905262ffb4ea5f98c4fa97a8d48dd2ff075484a44ea87d66a072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7eec8c502e2516637a85d2b83f7f745f9cc5d6a420f15cea465d92f71e0b8e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mou
ntPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1fb588776e0029263904fa0ec3afa53e180b30843864f1dee6999e1cca64b18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc8fc0d9e440c8725e279e6649d3c1e2cdc55589f6b5786187602576e4acd835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://749973b1dc2bcf3fc4e35277f0893a240b3571cadc45529da1229cd7ebe1748e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\
\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6a16eea25767bb4fea94880a90ae883944934c09ff5ba0ced9641169691169d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd68f113fc859fc67ddb074145ba0c97e3d310e617acaf13f814e69bc9cc7c0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e3fa5697d642a564e7ae639a013ea62bdddd3e84a927e14822c640ecf6caa05\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T06:35:46Z\\\",\\\"message\\\":\\\"I0129 06:35:46.669016 6436 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0129 06:35:46.669192 6436 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0129 06:35:46.669219 6436 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0129 06:35:46.669254 6436 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0129 06:35:46.669275 6436 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0129 06:35:46.670555 6436 handler.go:208] Removed *v1.Node event handler 7\\\\nI0129 06:35:46.670651 6436 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 06:35:46.671045 6436 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0129 06:35:46.671114 6436 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0129 06:35:46.671437 6436 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0129 06:35:46.671457 6436 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0129 06:35:46.671479 6436 factory.go:656] Stopping watch factory\\\\nI0129 06:35:46.671478 6436 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0129 06:35:46.671467 6436 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0129 06:35:46.671499 6436 handler.go:208] Removed *v1.Namespace 
event handler 1\\\\nI0129 06:35:46.671500 6436 ovnkube.go:599] Stopped ovnkube\\\\nI0129 06\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:45Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd68f113fc859fc67ddb074145ba0c97e3d310e617acaf13f814e69bc9cc7c0f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T06:35:59Z\\\",\\\"message\\\":\\\"650 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0129 06:35:59.345755 6650 obj_retry.go:386] Retry successful for *v1.Pod openshift-image-registry/node-ca-vwppb after 0 failed attempt(s)\\\\nI0129 06:35:59.346013 6650 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-vwppb\\\\nI0129 06:35:59.345735 6650 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-m2gbd in node crc\\\\nI0129 06:35:59.346035 6650 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-m2gbd after 0 failed attempt(s)\\\\nI0129 06:35:59.346041 6650 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-additional-cni-plugins-m2gbd\\\\nI0129 06:35:59.345688 6650 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-9jkcd\\\\nI0129 06:35:59.346039 6650 base_network_controller_pods.go:916] Annotation values: ip=[10.217.0.3/23] ; mac=0a:58:0a:d9:00:03 ; gw=[10.217.0.1]\\\\nI0129 06:35:59.346059 6650 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0129 06:35:59.346122 6650 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1e341b7acf35edc3ee9e5a4fec11e2270d45ea0f05df30543d2c9163b024646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13649086edb9d7ffe20a9040787d55449e6541a51f03432458f5fcb965fe4723\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13649086edb9d7ffe20a9040787d55449e6541a51f03432458f5fcb965fe4723\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wqgmk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:59Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:59 crc kubenswrapper[5017]: I0129 06:35:59.847093 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b8abb3dbc845f89735c58741f02550053402975d5edfe5e69f2323cc82bb8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7b335d7b6120621d4c94dfd8cdc86561a91d9c84a9481d352042d7728d7f6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"qu
ay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:59Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:59 crc kubenswrapper[5017]: I0129 06:35:59.866451 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:59Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:59 crc kubenswrapper[5017]: I0129 06:35:59.880561 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:59 crc kubenswrapper[5017]: I0129 06:35:59.880706 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:59 crc kubenswrapper[5017]: I0129 06:35:59.880742 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:59 crc kubenswrapper[5017]: I0129 06:35:59.880778 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:59 crc kubenswrapper[5017]: I0129 06:35:59.880809 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:59Z","lastTransitionTime":"2026-01-29T06:35:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:35:59 crc kubenswrapper[5017]: I0129 06:35:59.885380 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-46ch5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42db11f6-649f-486e-83a2-7506fdf51ba2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e7417d93094efd850372fea31fa798b5b9583cbf1daa36ebe18ade52c714304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qm9zl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db927333ed9f10e47997ba0073d04675b880319e46f4c6d46805d171bc2e15f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qm9zl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-46ch5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:59Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:59 crc kubenswrapper[5017]: I0129 06:35:59.911556 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c67cd79-8431-401b-8f03-9387813b30ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe28ec8d64ea50ecca5d4531440159b5277aff7de80b18813bb91f5c32923326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a82e8858b15c840a63737058d3af382c1a6469d7d1d3ba6e44759f33e55f2543\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a303c0532bb4a555700875ed27e56205d709658417a72f47868fb7bc8291a77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ef12beaf08b33d7694b3ea5a4705e37ec4a5b53f9cf1a7e0dd41a78d99bc8f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27dd6068095b58038216d1837a6b7966c514c7d8a9db5d144b043d918b757394\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T06:35:33Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 06:35:28.003940 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 06:35:28.004898 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1201671994/tls.crt::/tmp/serving-cert-1201671994/tls.key\\\\\\\"\\\\nI0129 06:35:33.575231 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 06:35:33.579536 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 06:35:33.579564 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 06:35:33.579587 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 06:35:33.579594 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 06:35:33.587038 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 06:35:33.587075 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 06:35:33.587080 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 06:35:33.587084 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 06:35:33.587087 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 06:35:33.587091 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 06:35:33.587094 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 06:35:33.587049 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 06:35:33.588897 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c256463618132e5305eb6fed3ef923e6a3c200bbc4b917e7f678e1c78af05084\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f97561ea3b9baa3440e5148060c80d57e9cf17837e7799b057f618d9084e8a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f97561ea3b9baa3440e5148060c80d57e9cf17837e7799b057f618d9084e8a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:59Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:59 crc kubenswrapper[5017]: I0129 06:35:59.933641 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:59Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:59 crc kubenswrapper[5017]: I0129 06:35:59.952536 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:59Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:59 crc kubenswrapper[5017]: I0129 06:35:59.971823 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a2999ed-a16c-49e3-a07a-3b084d6b8616\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ff1f2d36640f2aebd2cd6919987f3f9000d95a8cefac7844a0671da192b1897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1045c0f8fc7c60a5aa6d2e2b449230e4603b08b0b4244c319e296f89b04ab98d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"r
esource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27acd0133bf699b73534766330da62590b4231608bd96e51bf01b06b5775073d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fc74339c29860f59a280af0f3e21a0f4fa2be4a3e28eb63d90bc63854805d16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:59Z is after 2025-08-24T17:21:41Z" Jan 29 06:35:59 crc kubenswrapper[5017]: I0129 06:35:59.985295 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:35:59 crc kubenswrapper[5017]: I0129 06:35:59.985394 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:35:59 crc kubenswrapper[5017]: I0129 06:35:59.985412 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:35:59 crc kubenswrapper[5017]: I0129 06:35:59.985443 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:35:59 crc kubenswrapper[5017]: I0129 06:35:59.985462 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:35:59Z","lastTransitionTime":"2026-01-29T06:35:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:35:59 crc kubenswrapper[5017]: I0129 06:35:59.992573 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ce95c49452e7355245f4ec2716c723873a116e2523b8192efe93cfb369260f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:35:59Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:00 crc kubenswrapper[5017]: I0129 06:36:00.010370 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vwppb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb597cc2-fff7-4d90-a43e-958791d83324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f41b8b485793972343992b2ec30940b852ff5f355e045225e8813683e9f9a3c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g28w2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vwppb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:00Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:00 crc kubenswrapper[5017]: I0129 06:36:00.030868 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-895pl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2672ef63-7861-4c3d-a1b4-03cc9d18f8e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://884bf08569779a1b94a927f1250657ab4d855e78f451a826856022a7c62cd6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5qdt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd299f15e4338652a37e1e3f09560a50e6c59d7bf29e827462d060a56294438\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5qdt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-895pl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:00Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:00 crc kubenswrapper[5017]: I0129 06:36:00.048280 5017 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-qwq46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4d3122b-b4b4-41ac-896a-566afdcda936\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45817069aa7e09fd97b77922f9b5a651f2a067991115021c0a85159a93d523af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z89l9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qwq46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:00Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:00 crc kubenswrapper[5017]: I0129 06:36:00.074642 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-m2gbd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4036e581-21bf-4ea0-aaf5-84ab8a841888\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ad091986feb992d2ef247a76476d77d58cb487cf0baa60f81e5d3ddf0d30c3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7399e8e88798a89139226e395d22a2aa61370296a3789f63d766f42ea1c9f4df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7399e8e88798a89139226e395d22a2aa61370296a3789f63d766f42ea1c9f4df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9418ae41c02e41fb3aeb7ce97e7d7b91f9a193ac23cbfe692cc79b071aa3adca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9418ae41c02e41fb3aeb7ce97e7d7b91f9a193ac23cbfe692cc79b071aa3adca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1065c2c8660d096dba79585cd7d954ae31491d258129e132b1e0cb1218bc849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1065c2c8660d096dba79585cd7d954ae31491d258129e132b1e0cb1218bc849\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb8f183765b578cd4362f3d3c13e20b8cb70aec46adbcdb750164e91e2bd53e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb8f183765b578cd4362f3d3c13e20b8cb70aec46adbcdb750164e91e2bd53e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23ac8b597fcf07378d69cd788cf4f79e236218e22a58ffaa5a72688e650067c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23ac8b597fcf07378d69cd788cf4f79e236218e22a58ffaa5a72688e650067c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8548b649f7fcef84467fc98f1b3aacc576c363a1c83087971aafaf8787734901\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8548b649f7fcef84467fc98f1b3aacc576c363a1c83087971aafaf8787734901\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-m2gbd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:00Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:00 crc kubenswrapper[5017]: I0129 06:36:00.089181 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:00 crc kubenswrapper[5017]: I0129 06:36:00.089249 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:00 crc 
kubenswrapper[5017]: I0129 06:36:00.089267 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:00 crc kubenswrapper[5017]: I0129 06:36:00.089297 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:00 crc kubenswrapper[5017]: I0129 06:36:00.089317 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:00Z","lastTransitionTime":"2026-01-29T06:36:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:00 crc kubenswrapper[5017]: I0129 06:36:00.092741 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xn4bq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgkx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgkx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xn4bq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:00Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:00 crc kubenswrapper[5017]: I0129 06:36:00.192670 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:00 crc kubenswrapper[5017]: I0129 06:36:00.192755 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:00 crc kubenswrapper[5017]: I0129 06:36:00.192774 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:00 crc kubenswrapper[5017]: I0129 06:36:00.192808 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:00 crc kubenswrapper[5017]: I0129 06:36:00.192827 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:00Z","lastTransitionTime":"2026-01-29T06:36:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:00 crc kubenswrapper[5017]: I0129 06:36:00.295458 5017 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 08:29:45.620658126 +0000 UTC Jan 29 06:36:00 crc kubenswrapper[5017]: I0129 06:36:00.296223 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:00 crc kubenswrapper[5017]: I0129 06:36:00.296306 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:00 crc kubenswrapper[5017]: I0129 06:36:00.296326 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:00 crc kubenswrapper[5017]: I0129 06:36:00.296352 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:00 crc kubenswrapper[5017]: I0129 06:36:00.296369 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:00Z","lastTransitionTime":"2026-01-29T06:36:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:00 crc kubenswrapper[5017]: I0129 06:36:00.316244 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xn4bq" Jan 29 06:36:00 crc kubenswrapper[5017]: I0129 06:36:00.316317 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 06:36:00 crc kubenswrapper[5017]: I0129 06:36:00.316364 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 06:36:00 crc kubenswrapper[5017]: I0129 06:36:00.316278 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 06:36:00 crc kubenswrapper[5017]: E0129 06:36:00.316476 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xn4bq" podUID="0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f" Jan 29 06:36:00 crc kubenswrapper[5017]: E0129 06:36:00.316642 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 06:36:00 crc kubenswrapper[5017]: E0129 06:36:00.316833 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 06:36:00 crc kubenswrapper[5017]: E0129 06:36:00.317014 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 06:36:00 crc kubenswrapper[5017]: I0129 06:36:00.400515 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:00 crc kubenswrapper[5017]: I0129 06:36:00.400602 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:00 crc kubenswrapper[5017]: I0129 06:36:00.400628 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:00 crc kubenswrapper[5017]: I0129 06:36:00.400664 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:00 crc kubenswrapper[5017]: I0129 06:36:00.400688 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:00Z","lastTransitionTime":"2026-01-29T06:36:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:00 crc kubenswrapper[5017]: I0129 06:36:00.503822 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:00 crc kubenswrapper[5017]: I0129 06:36:00.503910 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:00 crc kubenswrapper[5017]: I0129 06:36:00.503934 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:00 crc kubenswrapper[5017]: I0129 06:36:00.504006 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:00 crc kubenswrapper[5017]: I0129 06:36:00.504042 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:00Z","lastTransitionTime":"2026-01-29T06:36:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:00 crc kubenswrapper[5017]: I0129 06:36:00.505828 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:00 crc kubenswrapper[5017]: I0129 06:36:00.505917 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:00 crc kubenswrapper[5017]: I0129 06:36:00.505938 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:00 crc kubenswrapper[5017]: I0129 06:36:00.506004 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:00 crc kubenswrapper[5017]: I0129 06:36:00.506032 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:00Z","lastTransitionTime":"2026-01-29T06:36:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:00 crc kubenswrapper[5017]: E0129 06:36:00.534022 5017 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:36:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:36:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:36:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:36:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:36:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:36:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:36:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:36:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b652416b-993f-4447-94ce-fb2ce8447cbe\\\",\\\"systemUUID\\\":\\\"6f20d4ce-100b-4db6-aef5-b7c5d1dcba49\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:00Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:00 crc kubenswrapper[5017]: I0129 06:36:00.540549 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:00 crc kubenswrapper[5017]: I0129 06:36:00.540609 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 06:36:00 crc kubenswrapper[5017]: I0129 06:36:00.540619 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:00 crc kubenswrapper[5017]: I0129 06:36:00.540636 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:00 crc kubenswrapper[5017]: I0129 06:36:00.540649 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:00Z","lastTransitionTime":"2026-01-29T06:36:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:00 crc kubenswrapper[5017]: E0129 06:36:00.560135 5017 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:36:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:36:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:36:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:36:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:36:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:36:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:36:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:36:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b652416b-993f-4447-94ce-fb2ce8447cbe\\\",\\\"systemUUID\\\":\\\"6f20d4ce-100b-4db6-aef5-b7c5d1dcba49\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:00Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:00 crc kubenswrapper[5017]: I0129 06:36:00.564685 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:00 crc kubenswrapper[5017]: I0129 06:36:00.564782 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 06:36:00 crc kubenswrapper[5017]: I0129 06:36:00.564804 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:00 crc kubenswrapper[5017]: I0129 06:36:00.564848 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:00 crc kubenswrapper[5017]: I0129 06:36:00.564871 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:00Z","lastTransitionTime":"2026-01-29T06:36:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:00 crc kubenswrapper[5017]: E0129 06:36:00.585678 5017 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:36:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:36:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:36:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:36:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:36:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:36:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:36:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:36:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b652416b-993f-4447-94ce-fb2ce8447cbe\\\",\\\"systemUUID\\\":\\\"6f20d4ce-100b-4db6-aef5-b7c5d1dcba49\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:00Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:00 crc kubenswrapper[5017]: I0129 06:36:00.590527 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:00 crc kubenswrapper[5017]: I0129 06:36:00.590587 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 06:36:00 crc kubenswrapper[5017]: I0129 06:36:00.590599 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:00 crc kubenswrapper[5017]: I0129 06:36:00.590620 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:00 crc kubenswrapper[5017]: I0129 06:36:00.590636 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:00Z","lastTransitionTime":"2026-01-29T06:36:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:00 crc kubenswrapper[5017]: E0129 06:36:00.605400 5017 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:36:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:36:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:36:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:36:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:36:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:36:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:36:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:36:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b652416b-993f-4447-94ce-fb2ce8447cbe\\\",\\\"systemUUID\\\":\\\"6f20d4ce-100b-4db6-aef5-b7c5d1dcba49\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:00Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:00 crc kubenswrapper[5017]: I0129 06:36:00.610769 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:00 crc kubenswrapper[5017]: I0129 06:36:00.610828 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 06:36:00 crc kubenswrapper[5017]: I0129 06:36:00.610849 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:00 crc kubenswrapper[5017]: I0129 06:36:00.610878 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:00 crc kubenswrapper[5017]: I0129 06:36:00.610903 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:00Z","lastTransitionTime":"2026-01-29T06:36:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:00 crc kubenswrapper[5017]: E0129 06:36:00.630628 5017 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:36:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:36:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:36:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:36:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:36:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:36:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:36:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:36:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b652416b-993f-4447-94ce-fb2ce8447cbe\\\",\\\"systemUUID\\\":\\\"6f20d4ce-100b-4db6-aef5-b7c5d1dcba49\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:00Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:00 crc kubenswrapper[5017]: E0129 06:36:00.630803 5017 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 06:36:00 crc kubenswrapper[5017]: I0129 06:36:00.633036 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 29 06:36:00 crc kubenswrapper[5017]: I0129 06:36:00.633069 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:00 crc kubenswrapper[5017]: I0129 06:36:00.633080 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:00 crc kubenswrapper[5017]: I0129 06:36:00.633100 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:00 crc kubenswrapper[5017]: I0129 06:36:00.633114 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:00Z","lastTransitionTime":"2026-01-29T06:36:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:00 crc kubenswrapper[5017]: I0129 06:36:00.739662 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:00 crc kubenswrapper[5017]: I0129 06:36:00.739752 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:00 crc kubenswrapper[5017]: I0129 06:36:00.739773 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:00 crc kubenswrapper[5017]: I0129 06:36:00.739808 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:00 crc kubenswrapper[5017]: I0129 06:36:00.739830 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:00Z","lastTransitionTime":"2026-01-29T06:36:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 29 06:36:00 crc kubenswrapper[5017]: I0129 06:36:00.742171 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wqgmk_02dd5727-894c-4693-9bc7-83dd88ce118c/ovnkube-controller/2.log" Jan 29 06:36:00 crc kubenswrapper[5017]: I0129 06:36:00.746988 5017 scope.go:117] "RemoveContainer" containerID="dd68f113fc859fc67ddb074145ba0c97e3d310e617acaf13f814e69bc9cc7c0f" Jan 29 06:36:00 crc kubenswrapper[5017]: E0129 06:36:00.747264 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-wqgmk_openshift-ovn-kubernetes(02dd5727-894c-4693-9bc7-83dd88ce118c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" podUID="02dd5727-894c-4693-9bc7-83dd88ce118c"
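Each failed retry re-logs the entire node-status patch, image list included, so a few seconds of journal produce megabytes of near-identical records that differ only in their microsecond timestamps. When triaging a dump like this one, it helps to collapse records by source location and message. A rough sketch under stated assumptions: the dump has been saved to a file (the path kubelet.log here is hypothetical), and the records follow the klog header format visible above (level and date, time, pid, file:line, then a quoted message).

    import re
    from collections import Counter, defaultdict

    # klog header as it appears in the records above, e.g.
    # E0129 06:36:00.560135 5017 kubelet_node_status.go:585] "Error updating node status, will retry"
    KLOG = re.compile(
        r'([EIW])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+\d+\s+(\S+?)\]\s+"([^"]*)"'
    )

    def tally(path: str, top: int = 10) -> None:
        """Count identical (level, file:line, message) records and report the
        first and last timestamp at which each one was seen."""
        counts: Counter = Counter()
        seen: dict = defaultdict(list)
        with open(path, encoding="utf-8", errors="replace") as fh:
            for line in fh:
                # findall, not match: several records can share one physical line
                for level, _date, ts, loc, msg in KLOG.findall(line):
                    key = (level, loc, msg)
                    counts[key] += 1
                    if not seen[key]:
                        seen[key] = [ts, ts]
                    else:
                        seen[key][1] = ts
        for (level, loc, msg), n in counts.most_common(top):
            first, last = seen[(level, loc, msg)]
            print(f'{n:5d}x {level} {loc} "{msg}" [{first} .. {last}]')

    if __name__ == "__main__":
        tally("kubelet.log")  # hypothetical path to a saved journal dump

On this excerpt the top entries would be the kubelet_node_status.go:724 event recordings and the kubelet_node_status.go:585 retry failures, which is exactly the repetition summarized above.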
not yet valid: current time 2026-01-29T06:36:00Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:00 crc kubenswrapper[5017]: I0129 06:36:00.785842 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a2999ed-a16c-49e3-a07a-3b084d6b8616\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ff1f2d36640f2aebd2cd6919987f3f9000d95a8cefac7844a0671da192b1897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1045c0f8fc7c60a5aa6d2e2b449230e4603b08b0b4244c319e296f89b04ab98d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27acd0133bf699b73534766330da62590b4231608bd96e51bf01b06b5775073d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"na
me\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fc74339c29860f59a280af0f3e21a0f4fa2be4a3e28eb63d90bc63854805d16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:00Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:00 crc kubenswrapper[5017]: I0129 06:36:00.804594 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ce95c49452e7355245f4ec2716c723873a116e2523b8192efe93cfb369260f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:00Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:00 crc kubenswrapper[5017]: I0129 06:36:00.818523 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xn4bq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgkx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgkx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xn4bq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:00Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:00 crc kubenswrapper[5017]: I0129 06:36:00.830931 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-895pl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2672ef63-7861-4c3d-a1b4-03cc9d18f8e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://884bf08569779a1b94a927f1250657ab4d855e78f451a826856022a7c62cd6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5qdt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd299f15e4338652a37e1e3f09560a50e6c59d7bf29e827462d060a56294438\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5qdt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-895pl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:00Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:00 crc kubenswrapper[5017]: I0129 06:36:00.842465 5017 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:00 crc kubenswrapper[5017]: I0129 06:36:00.842524 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:00 crc kubenswrapper[5017]: I0129 06:36:00.842547 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:00 crc kubenswrapper[5017]: I0129 06:36:00.842579 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:00 crc kubenswrapper[5017]: I0129 06:36:00.842601 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:00Z","lastTransitionTime":"2026-01-29T06:36:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:00 crc kubenswrapper[5017]: I0129 06:36:00.846079 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qwq46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4d3122b-b4b4-41ac-896a-566afdcda936\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45817069aa7e09fd97b77922f9b5a651f2a067991115021c0a85159a93d523af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z89l9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qwq46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:00Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:00 crc kubenswrapper[5017]: I0129 06:36:00.872347 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-m2gbd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4036e581-21bf-4ea0-aaf5-84ab8a841888\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ad091986feb992d2ef247a76476d77d58cb487cf0baa60f81e5d3ddf0d30c3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7399e8e88798a89139226e395d22a2aa61370296a3789f63d766f42ea1c9f4df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7399e8e88798a89139226e395d22a2aa61370296a3789f63d766f42ea1c9f4df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9418ae41c02e41fb3aeb7ce97e7d7b91f9a193ac23cbfe692cc79b071aa3adca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9418ae41c02e41fb3aeb7ce97e7d7b91f9a193ac23cbfe692cc79b071aa3adca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1065c2c8660d096dba79585cd7d954ae31491d258129e132b1e0cb1218bc849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1065c2c8660d096dba79585cd7d954ae31491d258129e132b1e0cb1218bc849\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb8f183765b578cd4362f3d3c13e20b8cb70aec46adbcdb750164e91e2bd53e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb8f183765b578cd4362f3d3c13e20b8cb70aec46adbcdb750164e91e2bd53e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:38Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23ac8b597fcf07378d69cd788cf4f79e236218e22a58ffaa5a72688e650067c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23ac8b597fcf07378d69cd788cf4f79e236218e22a58ffaa5a72688e650067c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8548b649f7fcef84467fc98f1b3aacc576c363a1c83087971aafaf8787734901\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8548b649f7fcef84467fc98f1b3aacc576c363a1c83087971aafaf8787734901\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-m2gbd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:00Z is after 
2025-08-24T17:21:41Z" Jan 29 06:36:00 crc kubenswrapper[5017]: I0129 06:36:00.910851 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02dd5727-894c-4693-9bc7-83dd88ce118c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://177a3e8a7f63905262ffb4ea5f98c4fa97a8d48dd2ff075484a44ea87d66a072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7eec8c502e2516637a85d2b83f7f745f9cc5d6a420f15cea465d92f71e0b8e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a
1fb588776e0029263904fa0ec3afa53e180b30843864f1dee6999e1cca64b18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc8fc0d9e440c8725e279e6649d3c1e2cdc55589f6b5786187602576e4acd835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://749973b1dc2bcf3fc4e35277f0893a240b3571cadc45529da1229cd7ebe1748e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6a16eea25767bb4fea94880a90ae883944934c09ff5ba0ced9641169691169d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd68f113fc859fc67ddb074145ba0c97e3d310e617acaf13f814e69bc9cc7c0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd68f113fc859fc67ddb074145ba0c97e3d310e617acaf13f814e69bc9cc7c0f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T06:35:59Z\\\",\\\"message\\\":\\\"650 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0129 06:35:59.345755 6650 obj_retry.go:386] Retry successful for *v1.Pod openshift-image-registry/node-ca-vwppb after 0 failed attempt(s)\\\\nI0129 06:35:59.346013 6650 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-vwppb\\\\nI0129 06:35:59.345735 6650 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-m2gbd in node crc\\\\nI0129 06:35:59.346035 6650 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-m2gbd after 0 failed attempt(s)\\\\nI0129 06:35:59.346041 6650 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-additional-cni-plugins-m2gbd\\\\nI0129 06:35:59.345688 6650 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-9jkcd\\\\nI0129 06:35:59.346039 6650 base_network_controller_pods.go:916] Annotation values: ip=[10.217.0.3/23] ; mac=0a:58:0a:d9:00:03 ; gw=[10.217.0.1]\\\\nI0129 06:35:59.346059 6650 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0129 06:35:59.346122 6650 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-wqgmk_openshift-ovn-kubernetes(02dd5727-894c-4693-9bc7-83dd88ce118c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1e341b7acf35edc3ee9e5a4fec11e2270d45ea0f05df30543d2c9163b024646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13649086edb9d7ffe20a9040787d55449e6541a51f03432458f5fcb965fe4723\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13649086edb9d7ffe20a9040787d55449e6541a51f03432458f5fcb965fe4723\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wqgmk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:00Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:00 crc kubenswrapper[5017]: I0129 06:36:00.931082 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bf710b24256eeb9149ca710dd9fcceae762df89486e0cf0f341c278eadda2fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:00Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:00 crc kubenswrapper[5017]: I0129 06:36:00.945766 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:00 crc kubenswrapper[5017]: I0129 06:36:00.945876 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:00 crc kubenswrapper[5017]: I0129 06:36:00.945902 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:00 crc kubenswrapper[5017]: I0129 06:36:00.945943 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:00 crc kubenswrapper[5017]: I0129 06:36:00.946022 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:00Z","lastTransitionTime":"2026-01-29T06:36:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:00 crc kubenswrapper[5017]: I0129 06:36:00.949758 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9jkcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ae056f0-e054-45da-9638-73074b7c8a3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17d52fc64784408e447773d44de6e83d32029a77073a0c1ee206a51d81eda62f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mount
Path\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x84hz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9jkcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:00Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:00 crc kubenswrapper[5017]: I0129 06:36:00.963401 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:00Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:00 crc kubenswrapper[5017]: I0129 06:36:00.978912 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b8abb3dbc845f89735c58741f02550053402975d5edfe5e69f2323cc82bb8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7b335d7b6120621d4c94dfd8cdc86561a91d9c84a9481d352042d7728d7f6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:00Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:00 crc kubenswrapper[5017]: I0129 06:36:00.993147 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:00Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:01 crc kubenswrapper[5017]: I0129 06:36:01.011593 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-46ch5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42db11f6-649f-486e-83a2-7506fdf51ba2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e7417d93094efd850372fea31fa798b5b9583cbf1daa36ebe18ade52c714304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qm9zl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db927333ed9f10e47997ba0073d04675b880319e46f4c6d46805d171bc2e15f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2
099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qm9zl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-46ch5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:01Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:01 crc kubenswrapper[5017]: I0129 06:36:01.030290 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c67cd79-8431-401b-8f03-9387813b30ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe28ec8d64ea50ecca5d4531440159b5277aff7de80b18813bb91f5c32923326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a82e8858b15c840a63737058d3af382c1a6469d7d1d3ba6e44759f33e55f2543\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift
-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a303c0532bb4a555700875ed27e56205d709658417a72f47868fb7bc8291a77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ef12beaf08b33d7694b3ea5a4705e37ec4a5b53f9cf1a7e0dd41a78d99bc8f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27dd6068095b58038216d1837a6b7966c514c7d8a9db5d144b043d918b757394\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T06:35:33Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 06:35:28.003940 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 06:35:28.004898 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1201671994/tls.crt::/tmp/serving-cert-1201671994/tls.key\\\\\\\"\\\\nI0129 06:35:33.575231 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 06:35:33.579536 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 06:35:33.579564 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 06:35:33.579587 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 06:35:33.579594 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 06:35:33.587038 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 06:35:33.587075 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 06:35:33.587080 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 06:35:33.587084 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 06:35:33.587087 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 06:35:33.587091 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 06:35:33.587094 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 06:35:33.587049 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 06:35:33.588897 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c256463618132e5305eb6fed3ef923e6a3c200bbc4b917e7f678e1c78af05084\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f97561ea3b9baa3440e5148060c80d57e9cf17837e7799b057f618d9084e8a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f97561ea3b9baa3440e5148060c80d57e9cf17837e7799b057f618d9084e8a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:01Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:01 crc kubenswrapper[5017]: I0129 06:36:01.050118 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:01 crc kubenswrapper[5017]: I0129 06:36:01.050190 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 06:36:01 crc kubenswrapper[5017]: I0129 06:36:01.050204 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:01 crc kubenswrapper[5017]: I0129 06:36:01.050226 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:01 crc kubenswrapper[5017]: I0129 06:36:01.050241 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:01Z","lastTransitionTime":"2026-01-29T06:36:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:01 crc kubenswrapper[5017]: I0129 06:36:01.054075 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:01Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:01 crc kubenswrapper[5017]: I0129 06:36:01.154929 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:01 crc kubenswrapper[5017]: I0129 06:36:01.155047 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:01 crc kubenswrapper[5017]: I0129 06:36:01.155075 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:01 crc kubenswrapper[5017]: I0129 06:36:01.155113 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:01 crc kubenswrapper[5017]: I0129 06:36:01.155139 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:01Z","lastTransitionTime":"2026-01-29T06:36:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:01 crc kubenswrapper[5017]: I0129 06:36:01.257948 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:01 crc kubenswrapper[5017]: I0129 06:36:01.258023 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:01 crc kubenswrapper[5017]: I0129 06:36:01.258035 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:01 crc kubenswrapper[5017]: I0129 06:36:01.258051 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:01 crc kubenswrapper[5017]: I0129 06:36:01.258065 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:01Z","lastTransitionTime":"2026-01-29T06:36:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:01 crc kubenswrapper[5017]: I0129 06:36:01.296007 5017 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 15:22:43.566602596 +0000 UTC Jan 29 06:36:01 crc kubenswrapper[5017]: I0129 06:36:01.361867 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:01 crc kubenswrapper[5017]: I0129 06:36:01.361944 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:01 crc kubenswrapper[5017]: I0129 06:36:01.361999 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:01 crc kubenswrapper[5017]: I0129 06:36:01.362028 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:01 crc kubenswrapper[5017]: I0129 06:36:01.362048 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:01Z","lastTransitionTime":"2026-01-29T06:36:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:01 crc kubenswrapper[5017]: I0129 06:36:01.466328 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:01 crc kubenswrapper[5017]: I0129 06:36:01.466394 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:01 crc kubenswrapper[5017]: I0129 06:36:01.466413 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:01 crc kubenswrapper[5017]: I0129 06:36:01.466440 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:01 crc kubenswrapper[5017]: I0129 06:36:01.466461 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:01Z","lastTransitionTime":"2026-01-29T06:36:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:01 crc kubenswrapper[5017]: I0129 06:36:01.570144 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:01 crc kubenswrapper[5017]: I0129 06:36:01.570210 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:01 crc kubenswrapper[5017]: I0129 06:36:01.570228 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:01 crc kubenswrapper[5017]: I0129 06:36:01.570254 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:01 crc kubenswrapper[5017]: I0129 06:36:01.570273 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:01Z","lastTransitionTime":"2026-01-29T06:36:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:01 crc kubenswrapper[5017]: I0129 06:36:01.674242 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:01 crc kubenswrapper[5017]: I0129 06:36:01.674316 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:01 crc kubenswrapper[5017]: I0129 06:36:01.674337 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:01 crc kubenswrapper[5017]: I0129 06:36:01.674364 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:01 crc kubenswrapper[5017]: I0129 06:36:01.674384 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:01Z","lastTransitionTime":"2026-01-29T06:36:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:01 crc kubenswrapper[5017]: I0129 06:36:01.778270 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:01 crc kubenswrapper[5017]: I0129 06:36:01.778381 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:01 crc kubenswrapper[5017]: I0129 06:36:01.778451 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:01 crc kubenswrapper[5017]: I0129 06:36:01.778485 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:01 crc kubenswrapper[5017]: I0129 06:36:01.778510 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:01Z","lastTransitionTime":"2026-01-29T06:36:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:01 crc kubenswrapper[5017]: I0129 06:36:01.882027 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:01 crc kubenswrapper[5017]: I0129 06:36:01.882087 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:01 crc kubenswrapper[5017]: I0129 06:36:01.882102 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:01 crc kubenswrapper[5017]: I0129 06:36:01.882122 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:01 crc kubenswrapper[5017]: I0129 06:36:01.882134 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:01Z","lastTransitionTime":"2026-01-29T06:36:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:01 crc kubenswrapper[5017]: I0129 06:36:01.986162 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:01 crc kubenswrapper[5017]: I0129 06:36:01.986242 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:01 crc kubenswrapper[5017]: I0129 06:36:01.986268 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:01 crc kubenswrapper[5017]: I0129 06:36:01.986296 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:01 crc kubenswrapper[5017]: I0129 06:36:01.986315 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:01Z","lastTransitionTime":"2026-01-29T06:36:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:02 crc kubenswrapper[5017]: I0129 06:36:02.089252 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:02 crc kubenswrapper[5017]: I0129 06:36:02.089313 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:02 crc kubenswrapper[5017]: I0129 06:36:02.089331 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:02 crc kubenswrapper[5017]: I0129 06:36:02.089357 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:02 crc kubenswrapper[5017]: I0129 06:36:02.089375 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:02Z","lastTransitionTime":"2026-01-29T06:36:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:02 crc kubenswrapper[5017]: I0129 06:36:02.192591 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:02 crc kubenswrapper[5017]: I0129 06:36:02.192630 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:02 crc kubenswrapper[5017]: I0129 06:36:02.192639 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:02 crc kubenswrapper[5017]: I0129 06:36:02.192659 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:02 crc kubenswrapper[5017]: I0129 06:36:02.192673 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:02Z","lastTransitionTime":"2026-01-29T06:36:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:02 crc kubenswrapper[5017]: I0129 06:36:02.296388 5017 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 10:52:27.900048091 +0000 UTC Jan 29 06:36:02 crc kubenswrapper[5017]: I0129 06:36:02.297463 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:02 crc kubenswrapper[5017]: I0129 06:36:02.297517 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:02 crc kubenswrapper[5017]: I0129 06:36:02.297527 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:02 crc kubenswrapper[5017]: I0129 06:36:02.297545 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:02 crc kubenswrapper[5017]: I0129 06:36:02.297557 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:02Z","lastTransitionTime":"2026-01-29T06:36:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:02 crc kubenswrapper[5017]: I0129 06:36:02.315946 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 06:36:02 crc kubenswrapper[5017]: I0129 06:36:02.315992 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 06:36:02 crc kubenswrapper[5017]: E0129 06:36:02.316088 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 06:36:02 crc kubenswrapper[5017]: I0129 06:36:02.316104 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xn4bq" Jan 29 06:36:02 crc kubenswrapper[5017]: E0129 06:36:02.316196 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 06:36:02 crc kubenswrapper[5017]: I0129 06:36:02.316271 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 06:36:02 crc kubenswrapper[5017]: E0129 06:36:02.316388 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xn4bq" podUID="0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f" Jan 29 06:36:02 crc kubenswrapper[5017]: E0129 06:36:02.316650 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 06:36:02 crc kubenswrapper[5017]: I0129 06:36:02.400108 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:02 crc kubenswrapper[5017]: I0129 06:36:02.400167 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:02 crc kubenswrapper[5017]: I0129 06:36:02.400178 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:02 crc kubenswrapper[5017]: I0129 06:36:02.400194 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:02 crc kubenswrapper[5017]: I0129 06:36:02.400229 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:02Z","lastTransitionTime":"2026-01-29T06:36:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:02 crc kubenswrapper[5017]: I0129 06:36:02.503599 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:02 crc kubenswrapper[5017]: I0129 06:36:02.503672 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:02 crc kubenswrapper[5017]: I0129 06:36:02.503690 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:02 crc kubenswrapper[5017]: I0129 06:36:02.503719 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:02 crc kubenswrapper[5017]: I0129 06:36:02.503743 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:02Z","lastTransitionTime":"2026-01-29T06:36:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:02 crc kubenswrapper[5017]: I0129 06:36:02.606859 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:02 crc kubenswrapper[5017]: I0129 06:36:02.606934 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:02 crc kubenswrapper[5017]: I0129 06:36:02.607052 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:02 crc kubenswrapper[5017]: I0129 06:36:02.607095 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:02 crc kubenswrapper[5017]: I0129 06:36:02.607130 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:02Z","lastTransitionTime":"2026-01-29T06:36:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:02 crc kubenswrapper[5017]: I0129 06:36:02.711475 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:02 crc kubenswrapper[5017]: I0129 06:36:02.711571 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:02 crc kubenswrapper[5017]: I0129 06:36:02.711592 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:02 crc kubenswrapper[5017]: I0129 06:36:02.711625 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:02 crc kubenswrapper[5017]: I0129 06:36:02.711646 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:02Z","lastTransitionTime":"2026-01-29T06:36:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:02 crc kubenswrapper[5017]: I0129 06:36:02.815656 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:02 crc kubenswrapper[5017]: I0129 06:36:02.815736 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:02 crc kubenswrapper[5017]: I0129 06:36:02.815755 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:02 crc kubenswrapper[5017]: I0129 06:36:02.815780 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:02 crc kubenswrapper[5017]: I0129 06:36:02.815799 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:02Z","lastTransitionTime":"2026-01-29T06:36:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:02 crc kubenswrapper[5017]: I0129 06:36:02.920137 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:02 crc kubenswrapper[5017]: I0129 06:36:02.920202 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:02 crc kubenswrapper[5017]: I0129 06:36:02.920219 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:02 crc kubenswrapper[5017]: I0129 06:36:02.920246 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:02 crc kubenswrapper[5017]: I0129 06:36:02.920265 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:02Z","lastTransitionTime":"2026-01-29T06:36:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:03 crc kubenswrapper[5017]: I0129 06:36:03.047093 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:03 crc kubenswrapper[5017]: I0129 06:36:03.047192 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:03 crc kubenswrapper[5017]: I0129 06:36:03.047219 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:03 crc kubenswrapper[5017]: I0129 06:36:03.047253 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:03 crc kubenswrapper[5017]: I0129 06:36:03.047276 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:03Z","lastTransitionTime":"2026-01-29T06:36:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:03 crc kubenswrapper[5017]: I0129 06:36:03.150364 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:03 crc kubenswrapper[5017]: I0129 06:36:03.150446 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:03 crc kubenswrapper[5017]: I0129 06:36:03.150466 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:03 crc kubenswrapper[5017]: I0129 06:36:03.150495 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:03 crc kubenswrapper[5017]: I0129 06:36:03.150514 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:03Z","lastTransitionTime":"2026-01-29T06:36:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:03 crc kubenswrapper[5017]: I0129 06:36:03.253754 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:03 crc kubenswrapper[5017]: I0129 06:36:03.253798 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:03 crc kubenswrapper[5017]: I0129 06:36:03.253808 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:03 crc kubenswrapper[5017]: I0129 06:36:03.253825 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:03 crc kubenswrapper[5017]: I0129 06:36:03.253834 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:03Z","lastTransitionTime":"2026-01-29T06:36:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:03 crc kubenswrapper[5017]: I0129 06:36:03.296934 5017 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 17:46:29.869609726 +0000 UTC Jan 29 06:36:03 crc kubenswrapper[5017]: I0129 06:36:03.358264 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:03 crc kubenswrapper[5017]: I0129 06:36:03.358320 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:03 crc kubenswrapper[5017]: I0129 06:36:03.358333 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:03 crc kubenswrapper[5017]: I0129 06:36:03.358353 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:03 crc kubenswrapper[5017]: I0129 06:36:03.358365 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:03Z","lastTransitionTime":"2026-01-29T06:36:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:03 crc kubenswrapper[5017]: I0129 06:36:03.461113 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:03 crc kubenswrapper[5017]: I0129 06:36:03.461158 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:03 crc kubenswrapper[5017]: I0129 06:36:03.461172 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:03 crc kubenswrapper[5017]: I0129 06:36:03.461192 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:03 crc kubenswrapper[5017]: I0129 06:36:03.461210 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:03Z","lastTransitionTime":"2026-01-29T06:36:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 29 06:36:03 crc kubenswrapper[5017]: I0129 06:36:03.565382 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:36:03 crc kubenswrapper[5017]: I0129 06:36:03.565446 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:36:03 crc kubenswrapper[5017]: I0129 06:36:03.565469 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:36:03 crc kubenswrapper[5017]: I0129 06:36:03.565496 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 06:36:03 crc kubenswrapper[5017]: I0129 06:36:03.565517 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:03Z","lastTransitionTime":"2026-01-29T06:36:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 06:36:03 crc kubenswrapper[5017]: I0129 06:36:03.669005 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:36:03 crc kubenswrapper[5017]: I0129 06:36:03.669078 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:36:03 crc kubenswrapper[5017]: I0129 06:36:03.669097 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:36:03 crc kubenswrapper[5017]: I0129 06:36:03.669132 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 06:36:03 crc kubenswrapper[5017]: I0129 06:36:03.669150 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:03Z","lastTransitionTime":"2026-01-29T06:36:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 06:36:03 crc kubenswrapper[5017]: I0129 06:36:03.771446 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:36:03 crc kubenswrapper[5017]: I0129 06:36:03.771503 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:36:03 crc kubenswrapper[5017]: I0129 06:36:03.771519 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:36:03 crc kubenswrapper[5017]: I0129 06:36:03.771545 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 06:36:03 crc kubenswrapper[5017]: I0129 06:36:03.771561 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:03Z","lastTransitionTime":"2026-01-29T06:36:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 06:36:03 crc kubenswrapper[5017]: I0129 06:36:03.874630 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:36:03 crc kubenswrapper[5017]: I0129 06:36:03.874736 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:36:03 crc kubenswrapper[5017]: I0129 06:36:03.874763 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:36:03 crc kubenswrapper[5017]: I0129 06:36:03.874801 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 06:36:03 crc kubenswrapper[5017]: I0129 06:36:03.874824 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:03Z","lastTransitionTime":"2026-01-29T06:36:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 06:36:03 crc kubenswrapper[5017]: I0129 06:36:03.978137 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:36:03 crc kubenswrapper[5017]: I0129 06:36:03.978236 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:36:03 crc kubenswrapper[5017]: I0129 06:36:03.978263 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:36:03 crc kubenswrapper[5017]: I0129 06:36:03.978298 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 06:36:03 crc kubenswrapper[5017]: I0129 06:36:03.978321 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:03Z","lastTransitionTime":"2026-01-29T06:36:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 06:36:04 crc kubenswrapper[5017]: I0129 06:36:04.083062 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:36:04 crc kubenswrapper[5017]: I0129 06:36:04.083138 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:36:04 crc kubenswrapper[5017]: I0129 06:36:04.083165 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:36:04 crc kubenswrapper[5017]: I0129 06:36:04.083218 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 06:36:04 crc kubenswrapper[5017]: I0129 06:36:04.083248 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:04Z","lastTransitionTime":"2026-01-29T06:36:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 06:36:04 crc kubenswrapper[5017]: I0129 06:36:04.187362 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:36:04 crc kubenswrapper[5017]: I0129 06:36:04.187450 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:36:04 crc kubenswrapper[5017]: I0129 06:36:04.187476 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:36:04 crc kubenswrapper[5017]: I0129 06:36:04.187509 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 06:36:04 crc kubenswrapper[5017]: I0129 06:36:04.187543 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:04Z","lastTransitionTime":"2026-01-29T06:36:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 06:36:04 crc kubenswrapper[5017]: I0129 06:36:04.291298 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:36:04 crc kubenswrapper[5017]: I0129 06:36:04.291385 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:36:04 crc kubenswrapper[5017]: I0129 06:36:04.291405 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:36:04 crc kubenswrapper[5017]: I0129 06:36:04.291437 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 06:36:04 crc kubenswrapper[5017]: I0129 06:36:04.291460 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:04Z","lastTransitionTime":"2026-01-29T06:36:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 06:36:04 crc kubenswrapper[5017]: I0129 06:36:04.297582 5017 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 12:34:45.711255377 +0000 UTC
Jan 29 06:36:04 crc kubenswrapper[5017]: I0129 06:36:04.315555 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 06:36:04 crc kubenswrapper[5017]: I0129 06:36:04.315582 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 06:36:04 crc kubenswrapper[5017]: I0129 06:36:04.315675 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 06:36:04 crc kubenswrapper[5017]: E0129 06:36:04.315770 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 06:36:04 crc kubenswrapper[5017]: I0129 06:36:04.315825 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xn4bq"
Jan 29 06:36:04 crc kubenswrapper[5017]: E0129 06:36:04.316029 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 06:36:04 crc kubenswrapper[5017]: E0129 06:36:04.316191 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 06:36:04 crc kubenswrapper[5017]: E0129 06:36:04.316337 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-multus/network-metrics-daemon-xn4bq" podUID="0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f" Jan 29 06:36:04 crc kubenswrapper[5017]: I0129 06:36:04.336837 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bf710b24256eeb9149ca710dd9fcceae762df89486e0cf0f341c278eadda2fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:04Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:04 crc kubenswrapper[5017]: I0129 06:36:04.357665 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9jkcd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ae056f0-e054-45da-9638-73074b7c8a3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17d52fc64784408e447773d44de6e83d32029a77073a0c1ee206a51d81eda62f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x84hz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9jkcd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:04Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:04 crc kubenswrapper[5017]: I0129 06:36:04.389193 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02dd5727-894c-4693-9bc7-83dd88ce118c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://177a3e8a7f63905262ffb4ea5f98c4fa97a8d48dd2ff075484a44ea87d66a072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7eec8c502e2516637a85d2b83f7f745f9cc5d6a420f15cea465d92f71e0b8e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1fb588776e0029263904fa0ec3afa53e180b30843864f1dee6999e1cca64b18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc8fc0d9e440c8725e279e6649d3c1e2cdc55589f6b5786187602576e4acd835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://749973b1dc2bcf3fc4e35277f0893a240b3571cadc45529da1229cd7ebe1748e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6a16
eea25767bb4fea94880a90ae883944934c09ff5ba0ced9641169691169d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd68f113fc859fc67ddb074145ba0c97e3d310e617acaf13f814e69bc9cc7c0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd68f113fc859fc67ddb074145ba0c97e3d310e617acaf13f814e69bc9cc7c0f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T06:35:59Z\\\",\\\"message\\\":\\\"650 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0129 06:35:59.345755 6650 obj_retry.go:386] Retry successful for *v1.Pod openshift-image-registry/node-ca-vwppb after 0 failed attempt(s)\\\\nI0129 06:35:59.346013 6650 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-vwppb\\\\nI0129 06:35:59.345735 6650 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-m2gbd in node crc\\\\nI0129 06:35:59.346035 6650 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-m2gbd after 0 failed attempt(s)\\\\nI0129 06:35:59.346041 6650 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-additional-cni-plugins-m2gbd\\\\nI0129 06:35:59.345688 6650 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-9jkcd\\\\nI0129 06:35:59.346039 6650 base_network_controller_pods.go:916] Annotation values: ip=[10.217.0.3/23] ; mac=0a:58:0a:d9:00:03 ; gw=[10.217.0.1]\\\\nI0129 06:35:59.346059 6650 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0129 06:35:59.346122 6650 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-wqgmk_openshift-ovn-kubernetes(02dd5727-894c-4693-9bc7-83dd88ce118c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1e341b7acf35edc3ee9e5a4fec11e2270d45ea0f05df30543d2c9163b024646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recurs
iveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13649086edb9d7ffe20a9040787d55449e6541a51f03432458f5fcb965fe4723\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13649086edb9d7ffe20a9040787d55449e6541a51f03432458f5fcb965fe4723\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wqgmk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:04Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:04 crc kubenswrapper[5017]: I0129 06:36:04.396027 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:04 crc kubenswrapper[5017]: I0129 06:36:04.396098 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:04 crc kubenswrapper[5017]: I0129 06:36:04.396124 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:04 crc kubenswrapper[5017]: I0129 06:36:04.396160 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:04 crc kubenswrapper[5017]: I0129 06:36:04.396186 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:04Z","lastTransitionTime":"2026-01-29T06:36:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:04 crc kubenswrapper[5017]: I0129 06:36:04.409394 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-46ch5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42db11f6-649f-486e-83a2-7506fdf51ba2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e7417d93094efd850372fea31fa798b5b9583cbf1daa36ebe18ade52c714304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qm9zl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db927333ed9f10e47997ba0073d04675b880319e46f4c6d46805d171bc2e15f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qm9zl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-46ch5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:04Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:04 crc kubenswrapper[5017]: I0129 06:36:04.432365 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c67cd79-8431-401b-8f03-9387813b30ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe28ec8d64ea50ecca5d4531440159b5277aff7de80b18813bb91f5c32923326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a82e8858b15c840a63737058d3af382c1a6469d7d1d3ba6e44759f33e55f2543\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a303c0532bb4a555700875ed27e56205d709658417a72f47868fb7bc8291a77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ef12beaf08b33d7694b3ea5a4705e37ec4a5b53f9cf1a7e0dd41a78d99bc8f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27dd6068095b58038216d1837a6b7966c514c7d8a9db5d144b043d918b757394\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T06:35:33Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 06:35:28.003940 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 06:35:28.004898 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1201671994/tls.crt::/tmp/serving-cert-1201671994/tls.key\\\\\\\"\\\\nI0129 06:35:33.575231 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 06:35:33.579536 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 06:35:33.579564 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 06:35:33.579587 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 06:35:33.579594 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 06:35:33.587038 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 06:35:33.587075 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 06:35:33.587080 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 06:35:33.587084 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 06:35:33.587087 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 06:35:33.587091 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 06:35:33.587094 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 06:35:33.587049 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 06:35:33.588897 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c256463618132e5305eb6fed3ef923e6a3c200bbc4b917e7f678e1c78af05084\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f97561ea3b9baa3440e5148060c80d57e9cf17837e7799b057f618d9084e8a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f97561ea3b9baa3440e5148060c80d57e9cf17837e7799b057f618d9084e8a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:04Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:04 crc kubenswrapper[5017]: I0129 06:36:04.452562 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:04Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:04 crc kubenswrapper[5017]: I0129 06:36:04.473480 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:04Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:04 crc kubenswrapper[5017]: I0129 06:36:04.495684 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b8abb3dbc845f89735c58741f02550053402975d5edfe5e69f2323cc82bb8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7b335d7b6120621d4c94dfd8cdc86561a91d9c84a9481d352042d7728d7f6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:04Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:04 crc kubenswrapper[5017]: I0129 06:36:04.500669 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:04 crc kubenswrapper[5017]: I0129 06:36:04.500723 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:04 crc kubenswrapper[5017]: I0129 06:36:04.500741 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:04 crc kubenswrapper[5017]: I0129 06:36:04.500769 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:04 crc kubenswrapper[5017]: I0129 06:36:04.500791 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:04Z","lastTransitionTime":"2026-01-29T06:36:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:04 crc kubenswrapper[5017]: I0129 06:36:04.518059 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:04Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:04 crc kubenswrapper[5017]: I0129 06:36:04.540325 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a2999ed-a16c-49e3-a07a-3b084d6b8616\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ff1f2d36640f2aebd2cd6919987f3f9000d95a8cefac7844a0671da192b1897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1045c0f8fc7c60a5aa6d2e2b449230e4603b08b0b4244c319e296f89b04ab98d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27acd0133bf699b73534766330da62590b4231608bd96e51bf01b06b5775073d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fc74339c29860f59a280af0f3e21a0f4fa2be4a3e28eb63d90bc63854805d16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:04Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:04 crc kubenswrapper[5017]: I0129 06:36:04.560544 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ce95c49452e7355245f4ec2716c723873a116e2523b8192efe93cfb369260f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:04Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:04 crc kubenswrapper[5017]: I0129 06:36:04.580455 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vwppb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb597cc2-fff7-4d90-a43e-958791d83324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f41b8b485793972343992b2ec30940b852ff5f355e045225e8813683e9f9a3c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g28w2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vwppb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:04Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:04 crc kubenswrapper[5017]: I0129 06:36:04.596168 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-895pl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2672ef63-7861-4c3d-a1b4-03cc9d18f8e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://884bf08569779a1b94a927f1250657ab4d855e78f451a826856022a7c62cd6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5qdt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd299f15e4338652a37e1e3f09560a50e6c59d7bf29e827462d060a56294438\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5qdt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-895pl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:04Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:04 crc kubenswrapper[5017]: I0129 06:36:04.603783 5017 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:04 crc kubenswrapper[5017]: I0129 06:36:04.603838 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:04 crc kubenswrapper[5017]: I0129 06:36:04.603857 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:04 crc kubenswrapper[5017]: I0129 06:36:04.603883 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:04 crc kubenswrapper[5017]: I0129 06:36:04.603899 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:04Z","lastTransitionTime":"2026-01-29T06:36:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:04 crc kubenswrapper[5017]: I0129 06:36:04.615352 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qwq46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4d3122b-b4b4-41ac-896a-566afdcda936\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45817069aa7e09fd97b77922f9b5a651f2a067991115021c0a85159a93d523af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z89l9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qwq46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:04Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:04 crc kubenswrapper[5017]: I0129 06:36:04.633371 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-m2gbd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4036e581-21bf-4ea0-aaf5-84ab8a841888\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ad091986feb992d2ef247a76476d77d58cb487cf0baa60f81e5d3ddf0d30c3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7399e8e88798a89139226e395d22a2aa61370296a3789f63d766f42ea1c9f4df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7399e8e88798a89139226e395d22a2aa61370296a3789f63d766f42ea1c9f4df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9418ae41c02e41fb3aeb7ce97e7d7b91f9a193ac23cbfe692cc79b071aa3adca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9418ae41c02e41fb3aeb7ce97e7d7b91f9a193ac23cbfe692cc79b071aa3adca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1065c2c8660d096dba79585cd7d954ae31491d258129e132b1e0cb1218bc849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1065c2c8660d096dba79585cd7d954ae31491d258129e132b1e0cb1218bc849\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb8f183765b578cd4362f3d3c13e20b8cb70aec46adbcdb750164e91e2bd53e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb8f183765b578cd4362f3d3c13e20b8cb70aec46adbcdb750164e91e2bd53e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:38Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23ac8b597fcf07378d69cd788cf4f79e236218e22a58ffaa5a72688e650067c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23ac8b597fcf07378d69cd788cf4f79e236218e22a58ffaa5a72688e650067c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8548b649f7fcef84467fc98f1b3aacc576c363a1c83087971aafaf8787734901\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8548b649f7fcef84467fc98f1b3aacc576c363a1c83087971aafaf8787734901\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-m2gbd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:04Z is after 
2025-08-24T17:21:41Z" Jan 29 06:36:04 crc kubenswrapper[5017]: I0129 06:36:04.651576 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xn4bq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgkx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgkx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xn4bq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:04Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:04 crc kubenswrapper[5017]: I0129 06:36:04.708188 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:04 crc kubenswrapper[5017]: I0129 06:36:04.708249 5017 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:04 crc kubenswrapper[5017]: I0129 06:36:04.708267 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:04 crc kubenswrapper[5017]: I0129 06:36:04.708294 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:04 crc kubenswrapper[5017]: I0129 06:36:04.708311 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:04Z","lastTransitionTime":"2026-01-29T06:36:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:04 crc kubenswrapper[5017]: I0129 06:36:04.812465 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:04 crc kubenswrapper[5017]: I0129 06:36:04.812537 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:04 crc kubenswrapper[5017]: I0129 06:36:04.812556 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:04 crc kubenswrapper[5017]: I0129 06:36:04.812581 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:04 crc kubenswrapper[5017]: I0129 06:36:04.812601 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:04Z","lastTransitionTime":"2026-01-29T06:36:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:04 crc kubenswrapper[5017]: I0129 06:36:04.915673 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:04 crc kubenswrapper[5017]: I0129 06:36:04.915733 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:04 crc kubenswrapper[5017]: I0129 06:36:04.915745 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:04 crc kubenswrapper[5017]: I0129 06:36:04.915764 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:04 crc kubenswrapper[5017]: I0129 06:36:04.915778 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:04Z","lastTransitionTime":"2026-01-29T06:36:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:05 crc kubenswrapper[5017]: I0129 06:36:05.011871 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f-metrics-certs\") pod \"network-metrics-daemon-xn4bq\" (UID: \"0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f\") " pod="openshift-multus/network-metrics-daemon-xn4bq" Jan 29 06:36:05 crc kubenswrapper[5017]: E0129 06:36:05.012310 5017 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 06:36:05 crc kubenswrapper[5017]: E0129 06:36:05.012493 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f-metrics-certs podName:0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f nodeName:}" failed. No retries permitted until 2026-01-29 06:36:21.012451703 +0000 UTC m=+67.386899523 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f-metrics-certs") pod "network-metrics-daemon-xn4bq" (UID: "0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 06:36:05 crc kubenswrapper[5017]: I0129 06:36:05.019508 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:05 crc kubenswrapper[5017]: I0129 06:36:05.019616 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:05 crc kubenswrapper[5017]: I0129 06:36:05.019638 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:05 crc kubenswrapper[5017]: I0129 06:36:05.019708 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:05 crc kubenswrapper[5017]: I0129 06:36:05.019730 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:05Z","lastTransitionTime":"2026-01-29T06:36:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:05 crc kubenswrapper[5017]: I0129 06:36:05.122924 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:05 crc kubenswrapper[5017]: I0129 06:36:05.123084 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:05 crc kubenswrapper[5017]: I0129 06:36:05.123111 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:05 crc kubenswrapper[5017]: I0129 06:36:05.123147 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:05 crc kubenswrapper[5017]: I0129 06:36:05.123170 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:05Z","lastTransitionTime":"2026-01-29T06:36:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:05 crc kubenswrapper[5017]: I0129 06:36:05.226647 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:05 crc kubenswrapper[5017]: I0129 06:36:05.226702 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:05 crc kubenswrapper[5017]: I0129 06:36:05.226714 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:05 crc kubenswrapper[5017]: I0129 06:36:05.226734 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:05 crc kubenswrapper[5017]: I0129 06:36:05.226745 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:05Z","lastTransitionTime":"2026-01-29T06:36:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:05 crc kubenswrapper[5017]: I0129 06:36:05.298573 5017 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 13:05:55.130392864 +0000 UTC Jan 29 06:36:05 crc kubenswrapper[5017]: I0129 06:36:05.330275 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:05 crc kubenswrapper[5017]: I0129 06:36:05.330369 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:05 crc kubenswrapper[5017]: I0129 06:36:05.330399 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:05 crc kubenswrapper[5017]: I0129 06:36:05.330442 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:05 crc kubenswrapper[5017]: I0129 06:36:05.330472 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:05Z","lastTransitionTime":"2026-01-29T06:36:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:05 crc kubenswrapper[5017]: I0129 06:36:05.434496 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:05 crc kubenswrapper[5017]: I0129 06:36:05.434555 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:05 crc kubenswrapper[5017]: I0129 06:36:05.434575 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:05 crc kubenswrapper[5017]: I0129 06:36:05.434601 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:05 crc kubenswrapper[5017]: I0129 06:36:05.434619 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:05Z","lastTransitionTime":"2026-01-29T06:36:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:05 crc kubenswrapper[5017]: I0129 06:36:05.538083 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:05 crc kubenswrapper[5017]: I0129 06:36:05.538139 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:05 crc kubenswrapper[5017]: I0129 06:36:05.538157 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:05 crc kubenswrapper[5017]: I0129 06:36:05.538182 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:05 crc kubenswrapper[5017]: I0129 06:36:05.538200 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:05Z","lastTransitionTime":"2026-01-29T06:36:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:05 crc kubenswrapper[5017]: I0129 06:36:05.641585 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:05 crc kubenswrapper[5017]: I0129 06:36:05.641642 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:05 crc kubenswrapper[5017]: I0129 06:36:05.641653 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:05 crc kubenswrapper[5017]: I0129 06:36:05.641672 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:05 crc kubenswrapper[5017]: I0129 06:36:05.641686 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:05Z","lastTransitionTime":"2026-01-29T06:36:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:05 crc kubenswrapper[5017]: I0129 06:36:05.746279 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:05 crc kubenswrapper[5017]: I0129 06:36:05.746358 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:05 crc kubenswrapper[5017]: I0129 06:36:05.746378 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:05 crc kubenswrapper[5017]: I0129 06:36:05.746408 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:05 crc kubenswrapper[5017]: I0129 06:36:05.746432 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:05Z","lastTransitionTime":"2026-01-29T06:36:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:05 crc kubenswrapper[5017]: I0129 06:36:05.849785 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:05 crc kubenswrapper[5017]: I0129 06:36:05.849871 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:05 crc kubenswrapper[5017]: I0129 06:36:05.849908 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:05 crc kubenswrapper[5017]: I0129 06:36:05.849949 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:05 crc kubenswrapper[5017]: I0129 06:36:05.850024 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:05Z","lastTransitionTime":"2026-01-29T06:36:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:05 crc kubenswrapper[5017]: I0129 06:36:05.923255 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 06:36:05 crc kubenswrapper[5017]: E0129 06:36:05.923493 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 06:36:37.923443053 +0000 UTC m=+84.297890703 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:36:05 crc kubenswrapper[5017]: I0129 06:36:05.953504 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:05 crc kubenswrapper[5017]: I0129 06:36:05.953574 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:05 crc kubenswrapper[5017]: I0129 06:36:05.953595 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:05 crc kubenswrapper[5017]: I0129 06:36:05.953627 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:05 crc kubenswrapper[5017]: I0129 06:36:05.953649 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:05Z","lastTransitionTime":"2026-01-29T06:36:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:06 crc kubenswrapper[5017]: I0129 06:36:06.024671 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 06:36:06 crc kubenswrapper[5017]: I0129 06:36:06.024758 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 06:36:06 crc kubenswrapper[5017]: I0129 06:36:06.024841 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 06:36:06 crc kubenswrapper[5017]: E0129 06:36:06.024925 5017 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 06:36:06 crc kubenswrapper[5017]: E0129 06:36:06.025088 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 06:36:38.025053773 +0000 UTC m=+84.399501433 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 06:36:06 crc kubenswrapper[5017]: I0129 06:36:06.024933 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 06:36:06 crc kubenswrapper[5017]: E0129 06:36:06.025157 5017 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 06:36:06 crc kubenswrapper[5017]: E0129 06:36:06.025194 5017 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 06:36:06 crc kubenswrapper[5017]: E0129 06:36:06.025196 5017 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 06:36:06 crc kubenswrapper[5017]: E0129 06:36:06.025219 5017 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 06:36:06 crc kubenswrapper[5017]: E0129 06:36:06.025239 5017 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 06:36:06 crc kubenswrapper[5017]: E0129 06:36:06.025262 5017 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 06:36:06 crc kubenswrapper[5017]: E0129 06:36:06.025309 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 06:36:38.025291918 +0000 UTC m=+84.399739568 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 06:36:06 crc kubenswrapper[5017]: E0129 06:36:06.025343 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2026-01-29 06:36:38.025328479 +0000 UTC m=+84.399776119 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 06:36:06 crc kubenswrapper[5017]: E0129 06:36:06.026157 5017 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 06:36:06 crc kubenswrapper[5017]: E0129 06:36:06.026319 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 06:36:38.026285093 +0000 UTC m=+84.400732743 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 06:36:06 crc kubenswrapper[5017]: I0129 06:36:06.056925 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:06 crc kubenswrapper[5017]: I0129 06:36:06.056995 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:06 crc kubenswrapper[5017]: I0129 06:36:06.057004 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:06 crc kubenswrapper[5017]: I0129 06:36:06.057022 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:06 crc kubenswrapper[5017]: I0129 06:36:06.057041 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:06Z","lastTransitionTime":"2026-01-29T06:36:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:06 crc kubenswrapper[5017]: I0129 06:36:06.161232 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:06 crc kubenswrapper[5017]: I0129 06:36:06.161782 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:06 crc kubenswrapper[5017]: I0129 06:36:06.161796 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:06 crc kubenswrapper[5017]: I0129 06:36:06.161819 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:06 crc kubenswrapper[5017]: I0129 06:36:06.161832 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:06Z","lastTransitionTime":"2026-01-29T06:36:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:06 crc kubenswrapper[5017]: I0129 06:36:06.265703 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:06 crc kubenswrapper[5017]: I0129 06:36:06.265789 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:06 crc kubenswrapper[5017]: I0129 06:36:06.265885 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:06 crc kubenswrapper[5017]: I0129 06:36:06.265931 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:06 crc kubenswrapper[5017]: I0129 06:36:06.266005 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:06Z","lastTransitionTime":"2026-01-29T06:36:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:06 crc kubenswrapper[5017]: I0129 06:36:06.298791 5017 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 18:59:16.924652062 +0000 UTC Jan 29 06:36:06 crc kubenswrapper[5017]: I0129 06:36:06.316002 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 06:36:06 crc kubenswrapper[5017]: I0129 06:36:06.316105 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 06:36:06 crc kubenswrapper[5017]: I0129 06:36:06.316128 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xn4bq" Jan 29 06:36:06 crc kubenswrapper[5017]: I0129 06:36:06.316165 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 06:36:06 crc kubenswrapper[5017]: E0129 06:36:06.316255 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 06:36:06 crc kubenswrapper[5017]: E0129 06:36:06.316452 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 06:36:06 crc kubenswrapper[5017]: E0129 06:36:06.316650 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 06:36:06 crc kubenswrapper[5017]: E0129 06:36:06.316748 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xn4bq" podUID="0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f" Jan 29 06:36:06 crc kubenswrapper[5017]: I0129 06:36:06.370134 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:06 crc kubenswrapper[5017]: I0129 06:36:06.370199 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:06 crc kubenswrapper[5017]: I0129 06:36:06.370220 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:06 crc kubenswrapper[5017]: I0129 06:36:06.370251 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:06 crc kubenswrapper[5017]: I0129 06:36:06.370295 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:06Z","lastTransitionTime":"2026-01-29T06:36:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:06 crc kubenswrapper[5017]: I0129 06:36:06.473724 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:06 crc kubenswrapper[5017]: I0129 06:36:06.474419 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:06 crc kubenswrapper[5017]: I0129 06:36:06.474625 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:06 crc kubenswrapper[5017]: I0129 06:36:06.474798 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:06 crc kubenswrapper[5017]: I0129 06:36:06.474985 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:06Z","lastTransitionTime":"2026-01-29T06:36:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:06 crc kubenswrapper[5017]: I0129 06:36:06.579058 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:06 crc kubenswrapper[5017]: I0129 06:36:06.579140 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:06 crc kubenswrapper[5017]: I0129 06:36:06.579167 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:06 crc kubenswrapper[5017]: I0129 06:36:06.579202 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:06 crc kubenswrapper[5017]: I0129 06:36:06.579234 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:06Z","lastTransitionTime":"2026-01-29T06:36:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:06 crc kubenswrapper[5017]: I0129 06:36:06.682808 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:06 crc kubenswrapper[5017]: I0129 06:36:06.683368 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:06 crc kubenswrapper[5017]: I0129 06:36:06.683549 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:06 crc kubenswrapper[5017]: I0129 06:36:06.683792 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:06 crc kubenswrapper[5017]: I0129 06:36:06.684024 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:06Z","lastTransitionTime":"2026-01-29T06:36:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:06 crc kubenswrapper[5017]: I0129 06:36:06.789433 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:06 crc kubenswrapper[5017]: I0129 06:36:06.789532 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:06 crc kubenswrapper[5017]: I0129 06:36:06.789549 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:06 crc kubenswrapper[5017]: I0129 06:36:06.789576 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:06 crc kubenswrapper[5017]: I0129 06:36:06.789595 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:06Z","lastTransitionTime":"2026-01-29T06:36:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:06 crc kubenswrapper[5017]: I0129 06:36:06.894000 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:06 crc kubenswrapper[5017]: I0129 06:36:06.894079 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:06 crc kubenswrapper[5017]: I0129 06:36:06.894096 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:06 crc kubenswrapper[5017]: I0129 06:36:06.894123 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:06 crc kubenswrapper[5017]: I0129 06:36:06.894144 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:06Z","lastTransitionTime":"2026-01-29T06:36:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:06 crc kubenswrapper[5017]: I0129 06:36:06.997133 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:06 crc kubenswrapper[5017]: I0129 06:36:06.997210 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:06 crc kubenswrapper[5017]: I0129 06:36:06.997229 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:06 crc kubenswrapper[5017]: I0129 06:36:06.997249 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:06 crc kubenswrapper[5017]: I0129 06:36:06.997260 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:06Z","lastTransitionTime":"2026-01-29T06:36:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:07 crc kubenswrapper[5017]: I0129 06:36:07.099890 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:07 crc kubenswrapper[5017]: I0129 06:36:07.099946 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:07 crc kubenswrapper[5017]: I0129 06:36:07.099975 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:07 crc kubenswrapper[5017]: I0129 06:36:07.099991 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:07 crc kubenswrapper[5017]: I0129 06:36:07.100001 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:07Z","lastTransitionTime":"2026-01-29T06:36:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:07 crc kubenswrapper[5017]: I0129 06:36:07.202627 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:07 crc kubenswrapper[5017]: I0129 06:36:07.202679 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:07 crc kubenswrapper[5017]: I0129 06:36:07.202693 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:07 crc kubenswrapper[5017]: I0129 06:36:07.202710 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:07 crc kubenswrapper[5017]: I0129 06:36:07.202719 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:07Z","lastTransitionTime":"2026-01-29T06:36:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:07 crc kubenswrapper[5017]: I0129 06:36:07.298939 5017 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 11:47:17.397979367 +0000 UTC Jan 29 06:36:07 crc kubenswrapper[5017]: I0129 06:36:07.305517 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:07 crc kubenswrapper[5017]: I0129 06:36:07.305607 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:07 crc kubenswrapper[5017]: I0129 06:36:07.305632 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:07 crc kubenswrapper[5017]: I0129 06:36:07.305668 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:07 crc kubenswrapper[5017]: I0129 06:36:07.305691 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:07Z","lastTransitionTime":"2026-01-29T06:36:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:07 crc kubenswrapper[5017]: I0129 06:36:07.409464 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:07 crc kubenswrapper[5017]: I0129 06:36:07.409551 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:07 crc kubenswrapper[5017]: I0129 06:36:07.409572 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:07 crc kubenswrapper[5017]: I0129 06:36:07.409604 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:07 crc kubenswrapper[5017]: I0129 06:36:07.409626 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:07Z","lastTransitionTime":"2026-01-29T06:36:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:07 crc kubenswrapper[5017]: I0129 06:36:07.513353 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:07 crc kubenswrapper[5017]: I0129 06:36:07.513437 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:07 crc kubenswrapper[5017]: I0129 06:36:07.513463 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:07 crc kubenswrapper[5017]: I0129 06:36:07.513495 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:07 crc kubenswrapper[5017]: I0129 06:36:07.513516 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:07Z","lastTransitionTime":"2026-01-29T06:36:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:07 crc kubenswrapper[5017]: I0129 06:36:07.617241 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:07 crc kubenswrapper[5017]: I0129 06:36:07.617334 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:07 crc kubenswrapper[5017]: I0129 06:36:07.617356 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:07 crc kubenswrapper[5017]: I0129 06:36:07.617391 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:07 crc kubenswrapper[5017]: I0129 06:36:07.617414 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:07Z","lastTransitionTime":"2026-01-29T06:36:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:07 crc kubenswrapper[5017]: I0129 06:36:07.721059 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:07 crc kubenswrapper[5017]: I0129 06:36:07.721112 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:07 crc kubenswrapper[5017]: I0129 06:36:07.721124 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:07 crc kubenswrapper[5017]: I0129 06:36:07.721144 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:07 crc kubenswrapper[5017]: I0129 06:36:07.721157 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:07Z","lastTransitionTime":"2026-01-29T06:36:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:07 crc kubenswrapper[5017]: I0129 06:36:07.825148 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:07 crc kubenswrapper[5017]: I0129 06:36:07.825253 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:07 crc kubenswrapper[5017]: I0129 06:36:07.825293 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:07 crc kubenswrapper[5017]: I0129 06:36:07.825335 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:07 crc kubenswrapper[5017]: I0129 06:36:07.825360 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:07Z","lastTransitionTime":"2026-01-29T06:36:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:07 crc kubenswrapper[5017]: I0129 06:36:07.928648 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:07 crc kubenswrapper[5017]: I0129 06:36:07.928728 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:07 crc kubenswrapper[5017]: I0129 06:36:07.928746 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:07 crc kubenswrapper[5017]: I0129 06:36:07.928774 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:07 crc kubenswrapper[5017]: I0129 06:36:07.928793 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:07Z","lastTransitionTime":"2026-01-29T06:36:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:08 crc kubenswrapper[5017]: I0129 06:36:08.033075 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:08 crc kubenswrapper[5017]: I0129 06:36:08.033157 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:08 crc kubenswrapper[5017]: I0129 06:36:08.033178 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:08 crc kubenswrapper[5017]: I0129 06:36:08.033218 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:08 crc kubenswrapper[5017]: I0129 06:36:08.033247 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:08Z","lastTransitionTime":"2026-01-29T06:36:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:08 crc kubenswrapper[5017]: I0129 06:36:08.136200 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:08 crc kubenswrapper[5017]: I0129 06:36:08.136313 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:08 crc kubenswrapper[5017]: I0129 06:36:08.136332 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:08 crc kubenswrapper[5017]: I0129 06:36:08.136357 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:08 crc kubenswrapper[5017]: I0129 06:36:08.136374 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:08Z","lastTransitionTime":"2026-01-29T06:36:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:08 crc kubenswrapper[5017]: I0129 06:36:08.238863 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:08 crc kubenswrapper[5017]: I0129 06:36:08.239015 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:08 crc kubenswrapper[5017]: I0129 06:36:08.239039 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:08 crc kubenswrapper[5017]: I0129 06:36:08.239069 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:08 crc kubenswrapper[5017]: I0129 06:36:08.239088 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:08Z","lastTransitionTime":"2026-01-29T06:36:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:08 crc kubenswrapper[5017]: I0129 06:36:08.300930 5017 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 11:39:26.666312229 +0000 UTC Jan 29 06:36:08 crc kubenswrapper[5017]: I0129 06:36:08.315485 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 06:36:08 crc kubenswrapper[5017]: I0129 06:36:08.315613 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 06:36:08 crc kubenswrapper[5017]: E0129 06:36:08.315803 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 06:36:08 crc kubenswrapper[5017]: E0129 06:36:08.316010 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 06:36:08 crc kubenswrapper[5017]: I0129 06:36:08.316038 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 06:36:08 crc kubenswrapper[5017]: E0129 06:36:08.316266 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 06:36:08 crc kubenswrapper[5017]: I0129 06:36:08.316350 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xn4bq" Jan 29 06:36:08 crc kubenswrapper[5017]: E0129 06:36:08.316462 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xn4bq" podUID="0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f" Jan 29 06:36:08 crc kubenswrapper[5017]: I0129 06:36:08.342190 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:08 crc kubenswrapper[5017]: I0129 06:36:08.342273 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:08 crc kubenswrapper[5017]: I0129 06:36:08.342298 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:08 crc kubenswrapper[5017]: I0129 06:36:08.342343 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:08 crc kubenswrapper[5017]: I0129 06:36:08.342368 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:08Z","lastTransitionTime":"2026-01-29T06:36:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:08 crc kubenswrapper[5017]: I0129 06:36:08.445487 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:08 crc kubenswrapper[5017]: I0129 06:36:08.445555 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:08 crc kubenswrapper[5017]: I0129 06:36:08.445574 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:08 crc kubenswrapper[5017]: I0129 06:36:08.445605 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:08 crc kubenswrapper[5017]: I0129 06:36:08.445620 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:08Z","lastTransitionTime":"2026-01-29T06:36:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:08 crc kubenswrapper[5017]: I0129 06:36:08.549086 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:08 crc kubenswrapper[5017]: I0129 06:36:08.549162 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:08 crc kubenswrapper[5017]: I0129 06:36:08.549176 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:08 crc kubenswrapper[5017]: I0129 06:36:08.549206 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:08 crc kubenswrapper[5017]: I0129 06:36:08.549221 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:08Z","lastTransitionTime":"2026-01-29T06:36:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:08 crc kubenswrapper[5017]: I0129 06:36:08.652327 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:08 crc kubenswrapper[5017]: I0129 06:36:08.652381 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:08 crc kubenswrapper[5017]: I0129 06:36:08.652393 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:08 crc kubenswrapper[5017]: I0129 06:36:08.652422 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:08 crc kubenswrapper[5017]: I0129 06:36:08.652440 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:08Z","lastTransitionTime":"2026-01-29T06:36:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:08 crc kubenswrapper[5017]: I0129 06:36:08.755535 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:08 crc kubenswrapper[5017]: I0129 06:36:08.755578 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:08 crc kubenswrapper[5017]: I0129 06:36:08.755590 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:08 crc kubenswrapper[5017]: I0129 06:36:08.755608 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:08 crc kubenswrapper[5017]: I0129 06:36:08.755620 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:08Z","lastTransitionTime":"2026-01-29T06:36:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:08 crc kubenswrapper[5017]: I0129 06:36:08.858188 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:08 crc kubenswrapper[5017]: I0129 06:36:08.858262 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:08 crc kubenswrapper[5017]: I0129 06:36:08.858283 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:08 crc kubenswrapper[5017]: I0129 06:36:08.858315 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:08 crc kubenswrapper[5017]: I0129 06:36:08.858340 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:08Z","lastTransitionTime":"2026-01-29T06:36:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:08 crc kubenswrapper[5017]: I0129 06:36:08.961663 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:08 crc kubenswrapper[5017]: I0129 06:36:08.961747 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:08 crc kubenswrapper[5017]: I0129 06:36:08.961767 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:08 crc kubenswrapper[5017]: I0129 06:36:08.961807 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:08 crc kubenswrapper[5017]: I0129 06:36:08.961834 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:08Z","lastTransitionTime":"2026-01-29T06:36:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:09 crc kubenswrapper[5017]: I0129 06:36:09.065239 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:09 crc kubenswrapper[5017]: I0129 06:36:09.065348 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:09 crc kubenswrapper[5017]: I0129 06:36:09.065377 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:09 crc kubenswrapper[5017]: I0129 06:36:09.065421 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:09 crc kubenswrapper[5017]: I0129 06:36:09.065450 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:09Z","lastTransitionTime":"2026-01-29T06:36:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:09 crc kubenswrapper[5017]: I0129 06:36:09.169034 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:09 crc kubenswrapper[5017]: I0129 06:36:09.169101 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:09 crc kubenswrapper[5017]: I0129 06:36:09.169114 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:09 crc kubenswrapper[5017]: I0129 06:36:09.169131 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:09 crc kubenswrapper[5017]: I0129 06:36:09.169144 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:09Z","lastTransitionTime":"2026-01-29T06:36:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:09 crc kubenswrapper[5017]: I0129 06:36:09.272481 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:09 crc kubenswrapper[5017]: I0129 06:36:09.272519 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:09 crc kubenswrapper[5017]: I0129 06:36:09.272528 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:09 crc kubenswrapper[5017]: I0129 06:36:09.272543 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:09 crc kubenswrapper[5017]: I0129 06:36:09.272551 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:09Z","lastTransitionTime":"2026-01-29T06:36:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:09 crc kubenswrapper[5017]: I0129 06:36:09.301793 5017 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 04:44:34.294581424 +0000 UTC Jan 29 06:36:09 crc kubenswrapper[5017]: I0129 06:36:09.377297 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:09 crc kubenswrapper[5017]: I0129 06:36:09.377454 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:09 crc kubenswrapper[5017]: I0129 06:36:09.377483 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:09 crc kubenswrapper[5017]: I0129 06:36:09.377519 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:09 crc kubenswrapper[5017]: I0129 06:36:09.377540 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:09Z","lastTransitionTime":"2026-01-29T06:36:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:09 crc kubenswrapper[5017]: I0129 06:36:09.480821 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:09 crc kubenswrapper[5017]: I0129 06:36:09.480870 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:09 crc kubenswrapper[5017]: I0129 06:36:09.480880 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:09 crc kubenswrapper[5017]: I0129 06:36:09.480901 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:09 crc kubenswrapper[5017]: I0129 06:36:09.480913 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:09Z","lastTransitionTime":"2026-01-29T06:36:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:09 crc kubenswrapper[5017]: I0129 06:36:09.584089 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:09 crc kubenswrapper[5017]: I0129 06:36:09.584156 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:09 crc kubenswrapper[5017]: I0129 06:36:09.584169 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:09 crc kubenswrapper[5017]: I0129 06:36:09.584190 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:09 crc kubenswrapper[5017]: I0129 06:36:09.584201 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:09Z","lastTransitionTime":"2026-01-29T06:36:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:09 crc kubenswrapper[5017]: I0129 06:36:09.687259 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:09 crc kubenswrapper[5017]: I0129 06:36:09.687320 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:09 crc kubenswrapper[5017]: I0129 06:36:09.687337 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:09 crc kubenswrapper[5017]: I0129 06:36:09.687363 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:09 crc kubenswrapper[5017]: I0129 06:36:09.687381 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:09Z","lastTransitionTime":"2026-01-29T06:36:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:09 crc kubenswrapper[5017]: I0129 06:36:09.790458 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:09 crc kubenswrapper[5017]: I0129 06:36:09.790520 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:09 crc kubenswrapper[5017]: I0129 06:36:09.790542 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:09 crc kubenswrapper[5017]: I0129 06:36:09.790566 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:09 crc kubenswrapper[5017]: I0129 06:36:09.790583 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:09Z","lastTransitionTime":"2026-01-29T06:36:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:09 crc kubenswrapper[5017]: I0129 06:36:09.894272 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:09 crc kubenswrapper[5017]: I0129 06:36:09.894315 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:09 crc kubenswrapper[5017]: I0129 06:36:09.894324 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:09 crc kubenswrapper[5017]: I0129 06:36:09.894338 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:09 crc kubenswrapper[5017]: I0129 06:36:09.894349 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:09Z","lastTransitionTime":"2026-01-29T06:36:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:09 crc kubenswrapper[5017]: I0129 06:36:09.996704 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:09 crc kubenswrapper[5017]: I0129 06:36:09.996743 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:09 crc kubenswrapper[5017]: I0129 06:36:09.996752 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:09 crc kubenswrapper[5017]: I0129 06:36:09.996766 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:09 crc kubenswrapper[5017]: I0129 06:36:09.996775 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:09Z","lastTransitionTime":"2026-01-29T06:36:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:10 crc kubenswrapper[5017]: I0129 06:36:10.099876 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:10 crc kubenswrapper[5017]: I0129 06:36:10.099993 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:10 crc kubenswrapper[5017]: I0129 06:36:10.100015 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:10 crc kubenswrapper[5017]: I0129 06:36:10.100043 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:10 crc kubenswrapper[5017]: I0129 06:36:10.100061 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:10Z","lastTransitionTime":"2026-01-29T06:36:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:10 crc kubenswrapper[5017]: I0129 06:36:10.203845 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:10 crc kubenswrapper[5017]: I0129 06:36:10.203909 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:10 crc kubenswrapper[5017]: I0129 06:36:10.203921 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:10 crc kubenswrapper[5017]: I0129 06:36:10.203942 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:10 crc kubenswrapper[5017]: I0129 06:36:10.203953 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:10Z","lastTransitionTime":"2026-01-29T06:36:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:10 crc kubenswrapper[5017]: I0129 06:36:10.302930 5017 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 05:32:29.381792424 +0000 UTC Jan 29 06:36:10 crc kubenswrapper[5017]: I0129 06:36:10.307083 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:10 crc kubenswrapper[5017]: I0129 06:36:10.307133 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:10 crc kubenswrapper[5017]: I0129 06:36:10.307142 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:10 crc kubenswrapper[5017]: I0129 06:36:10.307158 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:10 crc kubenswrapper[5017]: I0129 06:36:10.307167 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:10Z","lastTransitionTime":"2026-01-29T06:36:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:10 crc kubenswrapper[5017]: I0129 06:36:10.315532 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xn4bq" Jan 29 06:36:10 crc kubenswrapper[5017]: I0129 06:36:10.315560 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 06:36:10 crc kubenswrapper[5017]: I0129 06:36:10.315681 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 06:36:10 crc kubenswrapper[5017]: E0129 06:36:10.315803 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xn4bq" podUID="0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f" Jan 29 06:36:10 crc kubenswrapper[5017]: I0129 06:36:10.315825 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 06:36:10 crc kubenswrapper[5017]: E0129 06:36:10.315888 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 06:36:10 crc kubenswrapper[5017]: E0129 06:36:10.315999 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 06:36:10 crc kubenswrapper[5017]: E0129 06:36:10.316078 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 06:36:10 crc kubenswrapper[5017]: I0129 06:36:10.327980 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 29 06:36:10 crc kubenswrapper[5017]: I0129 06:36:10.344846 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 29 06:36:10 crc kubenswrapper[5017]: I0129 06:36:10.351695 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:10Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:10 crc kubenswrapper[5017]: I0129 06:36:10.368414 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b8abb3dbc845f89735c58741f02550053402975d5edfe5e69f2323cc82bb8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7b335d7b6120621d4c94dfd8cdc86561a91d9c84a9481d352042d7728d7f6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:10Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:10 crc kubenswrapper[5017]: I0129 06:36:10.383733 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:10Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:10 crc kubenswrapper[5017]: I0129 06:36:10.402687 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-46ch5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42db11f6-649f-486e-83a2-7506fdf51ba2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e7417d93094efd850372fea31fa798b5b9583cbf1daa36ebe18ade52c714304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qm9zl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db927333ed9f10e47997ba0073d04675b880319e46f4c6d46805d171bc2e15f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2
099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qm9zl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-46ch5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:10Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:10 crc kubenswrapper[5017]: I0129 06:36:10.410414 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:10 crc kubenswrapper[5017]: I0129 06:36:10.410455 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:10 crc kubenswrapper[5017]: I0129 06:36:10.410489 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:10 crc kubenswrapper[5017]: I0129 06:36:10.410508 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:10 crc kubenswrapper[5017]: I0129 06:36:10.410517 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:10Z","lastTransitionTime":"2026-01-29T06:36:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:10 crc kubenswrapper[5017]: I0129 06:36:10.418131 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c67cd79-8431-401b-8f03-9387813b30ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe28ec8d64ea50ecca5d4531440159b5277aff7de80b18813bb91f5c32923326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a82e8858b15c840a63737058d3af382c1a6469d7d1d3ba6e44759f33e55f2543\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a303c0532bb4a555700875ed27e56205d709658417a72f47868fb7bc8291a77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ef12beaf08b33d7694b3ea5a4705e37ec4a5b53f9cf1a7e0dd41a78d99bc8f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27dd6068095b58038216d1837a6b7966c514c7d8a9db5d144b043d918b757394\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T06:35:33Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 06:35:28.003940 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 06:35:28.004898 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1201671994/tls.crt::/tmp/serving-cert-1201671994/tls.key\\\\\\\"\\\\nI0129 06:35:33.575231 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 06:35:33.579536 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 06:35:33.579564 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 06:35:33.579587 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 06:35:33.579594 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 06:35:33.587038 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 06:35:33.587075 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 06:35:33.587080 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 06:35:33.587084 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 06:35:33.587087 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 06:35:33.587091 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 06:35:33.587094 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 06:35:33.587049 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 06:35:33.588897 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c256463618132e5305eb6fed3ef923e6a3c200bbc4b917e7f678e1c78af05084\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f97561ea3b9baa3440e5148060c80d57e9cf17837e7799b057f618d9084e8a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f97561ea3b9baa3440e5148060c80d57e9cf17837e7799b057f618d9084e8a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:10Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:10 crc kubenswrapper[5017]: I0129 06:36:10.432326 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:10Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:10 crc kubenswrapper[5017]: I0129 06:36:10.443539 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vwppb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb597cc2-fff7-4d90-a43e-958791d83324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f41b8b485793972343992b2ec30940b852ff5f355e045225e8813683e9f9a3c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g28w2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vwppb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:10Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:10 crc kubenswrapper[5017]: I0129 06:36:10.457266 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a2999ed-a16c-49e3-a07a-3b084d6b8616\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ff1f2d36640f2aebd2cd6919987f3f9000d95a8cefac7844a0671da192b1897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1045c0f8fc7c60a5aa6d2e2b449230e4603b08b0b4244c319e296f89b04ab98d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27acd0133bf699b73534766330da62590b4231608bd96e51bf01b06b5775073d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fc74339c29860f59a280af0f3e21a0f4fa2be4a3e28eb63d90bc63854805d16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:10Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:10 crc kubenswrapper[5017]: I0129 06:36:10.475518 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ce95c49452e7355245f4ec2716c723873a116e2523b8192efe93cfb369260f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-29T06:36:10Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:10 crc kubenswrapper[5017]: I0129 06:36:10.492167 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xn4bq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgkx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgkx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xn4bq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:10Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:10 crc kubenswrapper[5017]: I0129 06:36:10.507675 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-895pl" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2672ef63-7861-4c3d-a1b4-03cc9d18f8e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://884bf08569779a1b94a927f1250657ab4d855e78f451a826856022a7c62cd6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5qdt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd299f15e4338652a37e1e3f09560a50e6c59d7bf29e827462d060a56294438\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5qdt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-895pl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:10Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:10 crc kubenswrapper[5017]: I0129 06:36:10.513053 5017 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:10 crc kubenswrapper[5017]: I0129 06:36:10.513102 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:10 crc kubenswrapper[5017]: I0129 06:36:10.513111 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:10 crc kubenswrapper[5017]: I0129 06:36:10.513128 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:10 crc kubenswrapper[5017]: I0129 06:36:10.513141 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:10Z","lastTransitionTime":"2026-01-29T06:36:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:10 crc kubenswrapper[5017]: I0129 06:36:10.518861 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qwq46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4d3122b-b4b4-41ac-896a-566afdcda936\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45817069aa7e09fd97b77922f9b5a651f2a067991115021c0a85159a93d523af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z89l9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qwq46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed 
to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:10Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:10 crc kubenswrapper[5017]: I0129 06:36:10.542470 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-m2gbd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4036e581-21bf-4ea0-aaf5-84ab8a841888\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ad091986feb992d2ef247a76476d77d58cb487cf0baa60f81e5d3ddf0d30c3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7399e8e88798a89139226e395d22a2aa61370296a3789f63d766f42ea1c9f4df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7399e8e88798a89139226e395d22a2aa61370296a3789f63d766f42ea1c9f4df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acc
ess-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9418ae41c02e41fb3aeb7ce97e7d7b91f9a193ac23cbfe692cc79b071aa3adca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9418ae41c02e41fb3aeb7ce97e7d7b91f9a193ac23cbfe692cc79b071aa3adca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1065c2c8660d096dba79585cd7d954ae31491d258129e132b1e0cb1218bc849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1065c2c8660d096dba79585cd7d954ae31491d258129e132b1e0cb1218bc849\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb8f183765b578cd4362f3d3c13e20b8cb70aec46adbcdb750164e91e2bd53e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb8f183765b578cd4362f3d3c13e20b8cb70aec46adbcdb750164e91e2bd53e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:38Z\\\",\\\"re
ason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23ac8b597fcf07378d69cd788cf4f79e236218e22a58ffaa5a72688e650067c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23ac8b597fcf07378d69cd788cf4f79e236218e22a58ffaa5a72688e650067c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8548b649f7fcef84467fc98f1b3aacc576c363a1c83087971aafaf8787734901\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8548b649f7fcef84467fc98f1b3aacc576c363a1c83087971aafaf8787734901\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-m2gbd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-29T06:36:10Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:10 crc kubenswrapper[5017]: I0129 06:36:10.573010 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02dd5727-894c-4693-9bc7-83dd88ce118c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://177a3e8a7f63905262ffb4ea5f98c4fa97a8d48dd2ff075484a44ea87d66a072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7eec8c502e2516637a85d2b83f7f745f9cc5d6a420f15cea465d92f71e0b8e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\
"containerID\\\":\\\"cri-o://a1fb588776e0029263904fa0ec3afa53e180b30843864f1dee6999e1cca64b18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc8fc0d9e440c8725e279e6649d3c1e2cdc55589f6b5786187602576e4acd835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://749973b1dc2bcf3fc4e35277f0893a240b3571cadc45529da1229cd7ebe1748e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6a16eea25767bb4fea94880a90ae883944934c09ff5ba0ced9641169691169d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd68f113fc859fc67ddb074145ba0c97e3d310e617acaf13f814e69bc9cc7c0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd68f113fc859fc67ddb074145ba0c97e3d310e617acaf13f814e69bc9cc7c0f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T06:35:59Z\\\",\\\"message\\\":\\\"650 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0129 06:35:59.345755 6650 obj_retry.go:386] Retry successful for *v1.Pod openshift-image-registry/node-ca-vwppb after 0 failed attempt(s)\\\\nI0129 06:35:59.346013 6650 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-vwppb\\\\nI0129 06:35:59.345735 6650 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-m2gbd in node crc\\\\nI0129 06:35:59.346035 6650 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-m2gbd after 0 failed attempt(s)\\\\nI0129 06:35:59.346041 6650 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-additional-cni-plugins-m2gbd\\\\nI0129 06:35:59.345688 6650 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-9jkcd\\\\nI0129 06:35:59.346039 6650 base_network_controller_pods.go:916] Annotation values: ip=[10.217.0.3/23] ; mac=0a:58:0a:d9:00:03 ; gw=[10.217.0.1]\\\\nI0129 06:35:59.346059 6650 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0129 06:35:59.346122 6650 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-wqgmk_openshift-ovn-kubernetes(02dd5727-894c-4693-9bc7-83dd88ce118c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1e341b7acf35edc3ee9e5a4fec11e2270d45ea0f05df30543d2c9163b024646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13649086edb9d7ffe20a9040787d55449e6541a51f03432458f5fcb965fe4723\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13649086edb9d7ffe20a9040787d55449e6541a51f03432458f5fcb965fe4723\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wqgmk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:10Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:10 crc kubenswrapper[5017]: I0129 06:36:10.588143 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bf710b24256eeb9149ca710dd9fcceae762df89486e0cf0f341c278eadda2fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:10Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:10 crc kubenswrapper[5017]: I0129 06:36:10.608935 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9jkcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ae056f0-e054-45da-9638-73074b7c8a3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17d52fc64784408e447773d44de6e83d32029a77073a0c1ee206a51d81eda62f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access
-x84hz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9jkcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:10Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:10 crc kubenswrapper[5017]: I0129 06:36:10.616838 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:10 crc kubenswrapper[5017]: I0129 06:36:10.616886 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:10 crc kubenswrapper[5017]: I0129 06:36:10.616902 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:10 crc kubenswrapper[5017]: I0129 06:36:10.616928 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:10 crc kubenswrapper[5017]: I0129 06:36:10.616946 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:10Z","lastTransitionTime":"2026-01-29T06:36:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:10 crc kubenswrapper[5017]: I0129 06:36:10.721081 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:10 crc kubenswrapper[5017]: I0129 06:36:10.721161 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:10 crc kubenswrapper[5017]: I0129 06:36:10.721183 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:10 crc kubenswrapper[5017]: I0129 06:36:10.721216 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:10 crc kubenswrapper[5017]: I0129 06:36:10.721243 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:10Z","lastTransitionTime":"2026-01-29T06:36:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:10 crc kubenswrapper[5017]: I0129 06:36:10.750362 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:10 crc kubenswrapper[5017]: I0129 06:36:10.750638 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:10 crc kubenswrapper[5017]: I0129 06:36:10.750821 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:10 crc kubenswrapper[5017]: I0129 06:36:10.751022 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:10 crc kubenswrapper[5017]: I0129 06:36:10.751183 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:10Z","lastTransitionTime":"2026-01-29T06:36:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:10 crc kubenswrapper[5017]: E0129 06:36:10.780250 5017 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:36:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:36:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:36:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:36:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:36:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:36:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:36:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:36:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b652416b-993f-4447-94ce-fb2ce8447cbe\\\",\\\"systemUUID\\\":\\\"6f20d4ce-100b-4db6-aef5-b7c5d1dcba49\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:10Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:10 crc kubenswrapper[5017]: I0129 06:36:10.787011 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:10 crc kubenswrapper[5017]: I0129 06:36:10.787150 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 06:36:10 crc kubenswrapper[5017]: I0129 06:36:10.787174 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:10 crc kubenswrapper[5017]: I0129 06:36:10.787203 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:10 crc kubenswrapper[5017]: I0129 06:36:10.788122 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:10Z","lastTransitionTime":"2026-01-29T06:36:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:10 crc kubenswrapper[5017]: E0129 06:36:10.809908 5017 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:36:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:36:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:36:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:36:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:36:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:36:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:36:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:36:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b652416b-993f-4447-94ce-fb2ce8447cbe\\\",\\\"systemUUID\\\":\\\"6f20d4ce-100b-4db6-aef5-b7c5d1dcba49\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:10Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:10 crc kubenswrapper[5017]: I0129 06:36:10.814239 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:10 crc kubenswrapper[5017]: I0129 06:36:10.814284 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 06:36:10 crc kubenswrapper[5017]: I0129 06:36:10.814295 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:10 crc kubenswrapper[5017]: I0129 06:36:10.814311 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:10 crc kubenswrapper[5017]: I0129 06:36:10.814320 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:10Z","lastTransitionTime":"2026-01-29T06:36:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:10 crc kubenswrapper[5017]: E0129 06:36:10.833672 5017 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:36:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:36:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:36:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:36:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:36:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:36:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:36:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:36:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b652416b-993f-4447-94ce-fb2ce8447cbe\\\",\\\"systemUUID\\\":\\\"6f20d4ce-100b-4db6-aef5-b7c5d1dcba49\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:10Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:10 crc kubenswrapper[5017]: I0129 06:36:10.838520 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:10 crc kubenswrapper[5017]: I0129 06:36:10.838552 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 06:36:10 crc kubenswrapper[5017]: I0129 06:36:10.838562 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:10 crc kubenswrapper[5017]: I0129 06:36:10.838578 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:10 crc kubenswrapper[5017]: I0129 06:36:10.838588 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:10Z","lastTransitionTime":"2026-01-29T06:36:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:10 crc kubenswrapper[5017]: E0129 06:36:10.858453 5017 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:36:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:36:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:36:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:36:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:36:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:36:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:36:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:36:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b652416b-993f-4447-94ce-fb2ce8447cbe\\\",\\\"systemUUID\\\":\\\"6f20d4ce-100b-4db6-aef5-b7c5d1dcba49\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:10Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:10 crc kubenswrapper[5017]: I0129 06:36:10.863160 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:10 crc kubenswrapper[5017]: I0129 06:36:10.863194 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 06:36:10 crc kubenswrapper[5017]: I0129 06:36:10.863202 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:10 crc kubenswrapper[5017]: I0129 06:36:10.863217 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:10 crc kubenswrapper[5017]: I0129 06:36:10.863227 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:10Z","lastTransitionTime":"2026-01-29T06:36:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:10 crc kubenswrapper[5017]: E0129 06:36:10.882696 5017 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:36:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:36:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:36:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:36:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:36:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:36:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:36:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:36:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b652416b-993f-4447-94ce-fb2ce8447cbe\\\",\\\"systemUUID\\\":\\\"6f20d4ce-100b-4db6-aef5-b7c5d1dcba49\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:10Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:10 crc kubenswrapper[5017]: E0129 06:36:10.882818 5017 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 06:36:10 crc kubenswrapper[5017]: I0129 06:36:10.885188 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 29 06:36:10 crc kubenswrapper[5017]: I0129 06:36:10.885272 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:10 crc kubenswrapper[5017]: I0129 06:36:10.885301 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:10 crc kubenswrapper[5017]: I0129 06:36:10.885528 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:10 crc kubenswrapper[5017]: I0129 06:36:10.885567 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:10Z","lastTransitionTime":"2026-01-29T06:36:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:10 crc kubenswrapper[5017]: I0129 06:36:10.989262 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:10 crc kubenswrapper[5017]: I0129 06:36:10.989342 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:10 crc kubenswrapper[5017]: I0129 06:36:10.989355 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:10 crc kubenswrapper[5017]: I0129 06:36:10.989376 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:10 crc kubenswrapper[5017]: I0129 06:36:10.989392 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:10Z","lastTransitionTime":"2026-01-29T06:36:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:11 crc kubenswrapper[5017]: I0129 06:36:11.095826 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:11 crc kubenswrapper[5017]: I0129 06:36:11.095919 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:11 crc kubenswrapper[5017]: I0129 06:36:11.095932 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:11 crc kubenswrapper[5017]: I0129 06:36:11.096021 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:11 crc kubenswrapper[5017]: I0129 06:36:11.096039 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:11Z","lastTransitionTime":"2026-01-29T06:36:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:11 crc kubenswrapper[5017]: I0129 06:36:11.199716 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:11 crc kubenswrapper[5017]: I0129 06:36:11.199791 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:11 crc kubenswrapper[5017]: I0129 06:36:11.199814 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:11 crc kubenswrapper[5017]: I0129 06:36:11.199842 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:11 crc kubenswrapper[5017]: I0129 06:36:11.199863 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:11Z","lastTransitionTime":"2026-01-29T06:36:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:11 crc kubenswrapper[5017]: I0129 06:36:11.303143 5017 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 17:42:35.628111778 +0000 UTC Jan 29 06:36:11 crc kubenswrapper[5017]: I0129 06:36:11.303577 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:11 crc kubenswrapper[5017]: I0129 06:36:11.303611 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:11 crc kubenswrapper[5017]: I0129 06:36:11.303625 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:11 crc kubenswrapper[5017]: I0129 06:36:11.303646 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:11 crc kubenswrapper[5017]: I0129 06:36:11.303659 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:11Z","lastTransitionTime":"2026-01-29T06:36:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:11 crc kubenswrapper[5017]: I0129 06:36:11.407696 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:11 crc kubenswrapper[5017]: I0129 06:36:11.407782 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:11 crc kubenswrapper[5017]: I0129 06:36:11.407802 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:11 crc kubenswrapper[5017]: I0129 06:36:11.407829 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:11 crc kubenswrapper[5017]: I0129 06:36:11.407847 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:11Z","lastTransitionTime":"2026-01-29T06:36:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:11 crc kubenswrapper[5017]: I0129 06:36:11.510696 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:11 crc kubenswrapper[5017]: I0129 06:36:11.510763 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:11 crc kubenswrapper[5017]: I0129 06:36:11.510778 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:11 crc kubenswrapper[5017]: I0129 06:36:11.510809 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:11 crc kubenswrapper[5017]: I0129 06:36:11.510836 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:11Z","lastTransitionTime":"2026-01-29T06:36:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:11 crc kubenswrapper[5017]: I0129 06:36:11.615290 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:11 crc kubenswrapper[5017]: I0129 06:36:11.615361 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:11 crc kubenswrapper[5017]: I0129 06:36:11.615380 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:11 crc kubenswrapper[5017]: I0129 06:36:11.615407 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:11 crc kubenswrapper[5017]: I0129 06:36:11.615426 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:11Z","lastTransitionTime":"2026-01-29T06:36:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:11 crc kubenswrapper[5017]: I0129 06:36:11.718944 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:11 crc kubenswrapper[5017]: I0129 06:36:11.719028 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:11 crc kubenswrapper[5017]: I0129 06:36:11.719062 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:11 crc kubenswrapper[5017]: I0129 06:36:11.719085 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:11 crc kubenswrapper[5017]: I0129 06:36:11.719096 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:11Z","lastTransitionTime":"2026-01-29T06:36:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:11 crc kubenswrapper[5017]: I0129 06:36:11.822412 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:11 crc kubenswrapper[5017]: I0129 06:36:11.822478 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:11 crc kubenswrapper[5017]: I0129 06:36:11.822490 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:11 crc kubenswrapper[5017]: I0129 06:36:11.822511 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:11 crc kubenswrapper[5017]: I0129 06:36:11.822525 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:11Z","lastTransitionTime":"2026-01-29T06:36:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:11 crc kubenswrapper[5017]: I0129 06:36:11.925898 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:11 crc kubenswrapper[5017]: I0129 06:36:11.925989 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:11 crc kubenswrapper[5017]: I0129 06:36:11.926007 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:11 crc kubenswrapper[5017]: I0129 06:36:11.926038 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:11 crc kubenswrapper[5017]: I0129 06:36:11.926058 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:11Z","lastTransitionTime":"2026-01-29T06:36:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:12 crc kubenswrapper[5017]: I0129 06:36:12.029102 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:12 crc kubenswrapper[5017]: I0129 06:36:12.029165 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:12 crc kubenswrapper[5017]: I0129 06:36:12.029192 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:12 crc kubenswrapper[5017]: I0129 06:36:12.029222 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:12 crc kubenswrapper[5017]: I0129 06:36:12.029246 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:12Z","lastTransitionTime":"2026-01-29T06:36:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:12 crc kubenswrapper[5017]: I0129 06:36:12.132312 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:12 crc kubenswrapper[5017]: I0129 06:36:12.132374 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:12 crc kubenswrapper[5017]: I0129 06:36:12.132382 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:12 crc kubenswrapper[5017]: I0129 06:36:12.132395 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:12 crc kubenswrapper[5017]: I0129 06:36:12.132404 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:12Z","lastTransitionTime":"2026-01-29T06:36:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:12 crc kubenswrapper[5017]: I0129 06:36:12.235460 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:12 crc kubenswrapper[5017]: I0129 06:36:12.235522 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:12 crc kubenswrapper[5017]: I0129 06:36:12.235539 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:12 crc kubenswrapper[5017]: I0129 06:36:12.235574 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:12 crc kubenswrapper[5017]: I0129 06:36:12.235593 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:12Z","lastTransitionTime":"2026-01-29T06:36:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:12 crc kubenswrapper[5017]: I0129 06:36:12.304249 5017 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 14:28:15.428638264 +0000 UTC Jan 29 06:36:12 crc kubenswrapper[5017]: I0129 06:36:12.316085 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 06:36:12 crc kubenswrapper[5017]: I0129 06:36:12.316158 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 06:36:12 crc kubenswrapper[5017]: I0129 06:36:12.316166 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xn4bq" Jan 29 06:36:12 crc kubenswrapper[5017]: I0129 06:36:12.316221 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 06:36:12 crc kubenswrapper[5017]: E0129 06:36:12.316628 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 06:36:12 crc kubenswrapper[5017]: E0129 06:36:12.316743 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 06:36:12 crc kubenswrapper[5017]: E0129 06:36:12.316858 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xn4bq" podUID="0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f" Jan 29 06:36:12 crc kubenswrapper[5017]: E0129 06:36:12.317003 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 06:36:12 crc kubenswrapper[5017]: I0129 06:36:12.338156 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:12 crc kubenswrapper[5017]: I0129 06:36:12.338208 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:12 crc kubenswrapper[5017]: I0129 06:36:12.338221 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:12 crc kubenswrapper[5017]: I0129 06:36:12.338244 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:12 crc kubenswrapper[5017]: I0129 06:36:12.338258 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:12Z","lastTransitionTime":"2026-01-29T06:36:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:12 crc kubenswrapper[5017]: I0129 06:36:12.441681 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:12 crc kubenswrapper[5017]: I0129 06:36:12.441752 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:12 crc kubenswrapper[5017]: I0129 06:36:12.441766 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:12 crc kubenswrapper[5017]: I0129 06:36:12.441790 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:12 crc kubenswrapper[5017]: I0129 06:36:12.441807 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:12Z","lastTransitionTime":"2026-01-29T06:36:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:12 crc kubenswrapper[5017]: I0129 06:36:12.545837 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:12 crc kubenswrapper[5017]: I0129 06:36:12.545906 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:12 crc kubenswrapper[5017]: I0129 06:36:12.545928 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:12 crc kubenswrapper[5017]: I0129 06:36:12.545986 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:12 crc kubenswrapper[5017]: I0129 06:36:12.546010 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:12Z","lastTransitionTime":"2026-01-29T06:36:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:12 crc kubenswrapper[5017]: I0129 06:36:12.649819 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:12 crc kubenswrapper[5017]: I0129 06:36:12.649906 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:12 crc kubenswrapper[5017]: I0129 06:36:12.649933 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:12 crc kubenswrapper[5017]: I0129 06:36:12.650020 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:12 crc kubenswrapper[5017]: I0129 06:36:12.650050 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:12Z","lastTransitionTime":"2026-01-29T06:36:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:12 crc kubenswrapper[5017]: I0129 06:36:12.753804 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:12 crc kubenswrapper[5017]: I0129 06:36:12.753856 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:12 crc kubenswrapper[5017]: I0129 06:36:12.753867 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:12 crc kubenswrapper[5017]: I0129 06:36:12.753885 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:12 crc kubenswrapper[5017]: I0129 06:36:12.753898 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:12Z","lastTransitionTime":"2026-01-29T06:36:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:12 crc kubenswrapper[5017]: I0129 06:36:12.857387 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:12 crc kubenswrapper[5017]: I0129 06:36:12.857453 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:12 crc kubenswrapper[5017]: I0129 06:36:12.857469 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:12 crc kubenswrapper[5017]: I0129 06:36:12.857496 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:12 crc kubenswrapper[5017]: I0129 06:36:12.857515 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:12Z","lastTransitionTime":"2026-01-29T06:36:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:12 crc kubenswrapper[5017]: I0129 06:36:12.962098 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:12 crc kubenswrapper[5017]: I0129 06:36:12.962644 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:12 crc kubenswrapper[5017]: I0129 06:36:12.962831 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:12 crc kubenswrapper[5017]: I0129 06:36:12.963037 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:12 crc kubenswrapper[5017]: I0129 06:36:12.963190 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:12Z","lastTransitionTime":"2026-01-29T06:36:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:13 crc kubenswrapper[5017]: I0129 06:36:13.066883 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:13 crc kubenswrapper[5017]: I0129 06:36:13.066969 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:13 crc kubenswrapper[5017]: I0129 06:36:13.066983 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:13 crc kubenswrapper[5017]: I0129 06:36:13.067007 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:13 crc kubenswrapper[5017]: I0129 06:36:13.067023 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:13Z","lastTransitionTime":"2026-01-29T06:36:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:13 crc kubenswrapper[5017]: I0129 06:36:13.169744 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:13 crc kubenswrapper[5017]: I0129 06:36:13.169815 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:13 crc kubenswrapper[5017]: I0129 06:36:13.169834 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:13 crc kubenswrapper[5017]: I0129 06:36:13.169863 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:13 crc kubenswrapper[5017]: I0129 06:36:13.169883 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:13Z","lastTransitionTime":"2026-01-29T06:36:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:13 crc kubenswrapper[5017]: I0129 06:36:13.273192 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:13 crc kubenswrapper[5017]: I0129 06:36:13.273243 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:13 crc kubenswrapper[5017]: I0129 06:36:13.273255 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:13 crc kubenswrapper[5017]: I0129 06:36:13.273275 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:13 crc kubenswrapper[5017]: I0129 06:36:13.273286 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:13Z","lastTransitionTime":"2026-01-29T06:36:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:13 crc kubenswrapper[5017]: I0129 06:36:13.304941 5017 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 12:43:00.250491342 +0000 UTC Jan 29 06:36:13 crc kubenswrapper[5017]: I0129 06:36:13.317068 5017 scope.go:117] "RemoveContainer" containerID="dd68f113fc859fc67ddb074145ba0c97e3d310e617acaf13f814e69bc9cc7c0f" Jan 29 06:36:13 crc kubenswrapper[5017]: E0129 06:36:13.317463 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-wqgmk_openshift-ovn-kubernetes(02dd5727-894c-4693-9bc7-83dd88ce118c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" podUID="02dd5727-894c-4693-9bc7-83dd88ce118c" Jan 29 06:36:13 crc kubenswrapper[5017]: I0129 06:36:13.376246 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:13 crc kubenswrapper[5017]: I0129 06:36:13.376530 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:13 crc kubenswrapper[5017]: I0129 06:36:13.376598 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:13 crc kubenswrapper[5017]: I0129 06:36:13.376676 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:13 crc kubenswrapper[5017]: I0129 06:36:13.376741 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:13Z","lastTransitionTime":"2026-01-29T06:36:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:13 crc kubenswrapper[5017]: I0129 06:36:13.481028 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:13 crc kubenswrapper[5017]: I0129 06:36:13.481596 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:13 crc kubenswrapper[5017]: I0129 06:36:13.481864 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:13 crc kubenswrapper[5017]: I0129 06:36:13.482116 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:13 crc kubenswrapper[5017]: I0129 06:36:13.482258 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:13Z","lastTransitionTime":"2026-01-29T06:36:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:13 crc kubenswrapper[5017]: I0129 06:36:13.586115 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:13 crc kubenswrapper[5017]: I0129 06:36:13.586224 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:13 crc kubenswrapper[5017]: I0129 06:36:13.586255 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:13 crc kubenswrapper[5017]: I0129 06:36:13.586316 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:13 crc kubenswrapper[5017]: I0129 06:36:13.586345 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:13Z","lastTransitionTime":"2026-01-29T06:36:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:13 crc kubenswrapper[5017]: I0129 06:36:13.688999 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:13 crc kubenswrapper[5017]: I0129 06:36:13.689069 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:13 crc kubenswrapper[5017]: I0129 06:36:13.689079 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:13 crc kubenswrapper[5017]: I0129 06:36:13.689095 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:13 crc kubenswrapper[5017]: I0129 06:36:13.689106 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:13Z","lastTransitionTime":"2026-01-29T06:36:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:13 crc kubenswrapper[5017]: I0129 06:36:13.791309 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:13 crc kubenswrapper[5017]: I0129 06:36:13.791427 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:13 crc kubenswrapper[5017]: I0129 06:36:13.791445 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:13 crc kubenswrapper[5017]: I0129 06:36:13.791471 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:13 crc kubenswrapper[5017]: I0129 06:36:13.791486 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:13Z","lastTransitionTime":"2026-01-29T06:36:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:13 crc kubenswrapper[5017]: I0129 06:36:13.894620 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:13 crc kubenswrapper[5017]: I0129 06:36:13.894673 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:13 crc kubenswrapper[5017]: I0129 06:36:13.894688 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:13 crc kubenswrapper[5017]: I0129 06:36:13.894710 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:13 crc kubenswrapper[5017]: I0129 06:36:13.894724 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:13Z","lastTransitionTime":"2026-01-29T06:36:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:13 crc kubenswrapper[5017]: I0129 06:36:13.998474 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:13 crc kubenswrapper[5017]: I0129 06:36:13.998523 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:13 crc kubenswrapper[5017]: I0129 06:36:13.998535 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:13 crc kubenswrapper[5017]: I0129 06:36:13.998553 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:13 crc kubenswrapper[5017]: I0129 06:36:13.998565 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:13Z","lastTransitionTime":"2026-01-29T06:36:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:14 crc kubenswrapper[5017]: I0129 06:36:14.101483 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:14 crc kubenswrapper[5017]: I0129 06:36:14.101535 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:14 crc kubenswrapper[5017]: I0129 06:36:14.101546 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:14 crc kubenswrapper[5017]: I0129 06:36:14.101564 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:14 crc kubenswrapper[5017]: I0129 06:36:14.101576 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:14Z","lastTransitionTime":"2026-01-29T06:36:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:14 crc kubenswrapper[5017]: I0129 06:36:14.205403 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:14 crc kubenswrapper[5017]: I0129 06:36:14.205487 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:14 crc kubenswrapper[5017]: I0129 06:36:14.205502 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:14 crc kubenswrapper[5017]: I0129 06:36:14.205530 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:14 crc kubenswrapper[5017]: I0129 06:36:14.205545 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:14Z","lastTransitionTime":"2026-01-29T06:36:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:14 crc kubenswrapper[5017]: I0129 06:36:14.307176 5017 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 23:52:02.263422122 +0000 UTC Jan 29 06:36:14 crc kubenswrapper[5017]: I0129 06:36:14.312169 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:14 crc kubenswrapper[5017]: I0129 06:36:14.312227 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:14 crc kubenswrapper[5017]: I0129 06:36:14.312248 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:14 crc kubenswrapper[5017]: I0129 06:36:14.312275 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:14 crc kubenswrapper[5017]: I0129 06:36:14.312292 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:14Z","lastTransitionTime":"2026-01-29T06:36:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:14 crc kubenswrapper[5017]: I0129 06:36:14.315299 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xn4bq" Jan 29 06:36:14 crc kubenswrapper[5017]: I0129 06:36:14.315432 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 06:36:14 crc kubenswrapper[5017]: I0129 06:36:14.315432 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 06:36:14 crc kubenswrapper[5017]: I0129 06:36:14.315485 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 06:36:14 crc kubenswrapper[5017]: E0129 06:36:14.315786 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xn4bq" podUID="0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f" Jan 29 06:36:14 crc kubenswrapper[5017]: E0129 06:36:14.316020 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 06:36:14 crc kubenswrapper[5017]: E0129 06:36:14.316183 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 06:36:14 crc kubenswrapper[5017]: E0129 06:36:14.316247 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 06:36:14 crc kubenswrapper[5017]: I0129 06:36:14.339001 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-m2gbd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4036e581-21bf-4ea0-aaf5-84ab8a841888\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ad091986feb992d2ef247a76476d77d58cb487cf0baa60f81e5d3ddf0d30c3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7399e8e88798a89139226e395d22a2aa61370296a3789f63d766f42ea1c9f4df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7399e8e88798a89139226e395d22a2aa61370296a3789f63d766f42ea1c9f4df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://9418ae41c02e41fb3aeb7ce97e7d7b91f9a193ac23cbfe692cc79b071aa3adca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9418ae41c02e41fb3aeb7ce97e7d7b91f9a193ac23cbfe692cc79b071aa3adca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1065c2c8660d096dba79585cd7d954ae31491d258129e132b1e0cb1218bc849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1065c2c8660d096dba79585cd7d954ae31491d258129e132b1e0cb1218bc849\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb8f183765b578cd4362f3d3c13e20b8cb70aec46adbcdb750164e91e2bd53e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb8f183765b578cd4362f3d3c13e20b8cb70aec46adbcdb750164e91e2bd53e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mount
Path\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23ac8b597fcf07378d69cd788cf4f79e236218e22a58ffaa5a72688e650067c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23ac8b597fcf07378d69cd788cf4f79e236218e22a58ffaa5a72688e650067c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8548b649f7fcef84467fc98f1b3aacc576c363a1c83087971aafaf8787734901\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8548b649f7fcef84467fc98f1b3aacc576c363a1c83087971aafaf8787734901\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-m2gbd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:14Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:14 crc kubenswrapper[5017]: I0129 06:36:14.353290 
5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xn4bq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgkx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgkx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xn4bq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:14Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:14 crc kubenswrapper[5017]: I0129 06:36:14.367365 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-895pl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2672ef63-7861-4c3d-a1b4-03cc9d18f8e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://884bf08569779a1b94a927f1250657ab4d855e78f451a826856022a7c62cd6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5qdt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd299f15e4338652a37e1e3f09560a50e6c59d7bf29e827462d060a56294438\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5qdt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-895pl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:14Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:14 crc kubenswrapper[5017]: I0129 06:36:14.381262 5017 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-qwq46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4d3122b-b4b4-41ac-896a-566afdcda936\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45817069aa7e09fd97b77922f9b5a651f2a067991115021c0a85159a93d523af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z89l9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qwq46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:14Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:14 crc kubenswrapper[5017]: I0129 06:36:14.397346 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9jkcd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ae056f0-e054-45da-9638-73074b7c8a3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17d52fc64784408e447773d44de6e83d32029a77073a0c1ee206a51d81eda62f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x84hz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9jkcd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:14Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:14 crc kubenswrapper[5017]: I0129 06:36:14.414602 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:14 crc kubenswrapper[5017]: I0129 06:36:14.414651 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:14 crc kubenswrapper[5017]: I0129 06:36:14.414666 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:14 crc kubenswrapper[5017]: I0129 06:36:14.414689 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:14 crc kubenswrapper[5017]: I0129 06:36:14.414704 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:14Z","lastTransitionTime":"2026-01-29T06:36:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:14 crc kubenswrapper[5017]: I0129 06:36:14.421667 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02dd5727-894c-4693-9bc7-83dd88ce118c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://177a3e8a7f63905262ffb4ea5f98c4fa97a8d48dd2ff075484a44ea87d66a072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7eec8c502e2516637a85d2b83f7f745f9cc5d6a420f15cea465d92f71e0b8e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1fb588776e0029263904fa0ec3afa53e180b30843864f1dee6999e1cca64b18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc8fc0d9e440c8725e279e6649d3c1e2cdc55589f6b5786187602576e4acd835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://749973b1dc2bcf3fc4e35277f0893a240b3571cadc45529da1229cd7ebe1748e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6a16eea25767bb4fea94880a90ae883944934c09ff5ba0ced9641169691169d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd68f113fc859fc67ddb074145ba0c97e3d310e6
17acaf13f814e69bc9cc7c0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd68f113fc859fc67ddb074145ba0c97e3d310e617acaf13f814e69bc9cc7c0f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T06:35:59Z\\\",\\\"message\\\":\\\"650 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0129 06:35:59.345755 6650 obj_retry.go:386] Retry successful for *v1.Pod openshift-image-registry/node-ca-vwppb after 0 failed attempt(s)\\\\nI0129 06:35:59.346013 6650 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-vwppb\\\\nI0129 06:35:59.345735 6650 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-m2gbd in node crc\\\\nI0129 06:35:59.346035 6650 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-m2gbd after 0 failed attempt(s)\\\\nI0129 06:35:59.346041 6650 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-additional-cni-plugins-m2gbd\\\\nI0129 06:35:59.345688 6650 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-9jkcd\\\\nI0129 06:35:59.346039 6650 base_network_controller_pods.go:916] Annotation values: ip=[10.217.0.3/23] ; mac=0a:58:0a:d9:00:03 ; gw=[10.217.0.1]\\\\nI0129 06:35:59.346059 6650 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0129 06:35:59.346122 6650 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-wqgmk_openshift-ovn-kubernetes(02dd5727-894c-4693-9bc7-83dd88ce118c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1e341b7acf35edc3ee9e5a4fec11e2270d45ea0f05df30543d2c9163b024646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13649086edb9d7ffe20a9040787d55449e6541a51f03432458f5fcb965fe4723\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13649086edb9d7ffe20a9040787d55449e6541a51f03432458f5fcb965fe4723\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wqgmk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:14Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:14 crc kubenswrapper[5017]: I0129 06:36:14.437474 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bf710b24256eeb9149ca710dd9fcceae762df89486e0cf0f341c278eadda2fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:14Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:14 crc kubenswrapper[5017]: I0129 06:36:14.456487 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:14Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:14 crc kubenswrapper[5017]: I0129 06:36:14.476545 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:14Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:14 crc kubenswrapper[5017]: I0129 06:36:14.518259 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:14 crc kubenswrapper[5017]: I0129 06:36:14.518719 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:14 crc kubenswrapper[5017]: I0129 06:36:14.518847 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:14 crc kubenswrapper[5017]: I0129 06:36:14.518990 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:14 crc kubenswrapper[5017]: I0129 06:36:14.519223 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:14Z","lastTransitionTime":"2026-01-29T06:36:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:14 crc kubenswrapper[5017]: I0129 06:36:14.530621 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b8abb3dbc845f89735c58741f02550053402975d5edfe5e69f2323cc82bb8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7b335d7b6120621d4c94dfd8cdc86561a91d9c84a9481d352042d7728d7f6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:14Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:14 crc kubenswrapper[5017]: I0129 06:36:14.555678 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:14Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:14 crc kubenswrapper[5017]: I0129 06:36:14.570406 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-46ch5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42db11f6-649f-486e-83a2-7506fdf51ba2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e7417d93094efd850372fea31fa798b5b9583cbf1daa36ebe18ade52c714304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qm9zl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db927333ed9f10e47997ba0073d04675b880319e46f4c6d46805d171bc2e15f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qm9zl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-46ch5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:14Z is after 2025-08-24T17:21:41Z" Jan 29 
06:36:14 crc kubenswrapper[5017]: I0129 06:36:14.586181 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c67cd79-8431-401b-8f03-9387813b30ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe28ec8d64ea50ecca5d4531440159b5277aff7de80b18813bb91f5c32923326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a82e8858b15c840a63737058d3af382c1a6469d7d1d3ba6e44759f33e55f2543\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a303c0532bb4a555700875ed27e56205d709658417a72f47868fb7bc8291a77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ef12beaf08b33d7694b3ea5a4705e37ec4a5b53f9cf1a7e0dd41a78d99bc8f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27dd6068095b58038216d1837a6b7966c514c7d8a9db5d144b043d918b757394\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T06:35:33Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 06:35:28.003940 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 06:35:28.004898 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1201671994/tls.crt::/tmp/serving-cert-1201671994/tls.key\\\\\\\"\\\\nI0129 06:35:33.575231 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 06:35:33.579536 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 06:35:33.579564 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 06:35:33.579587 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 06:35:33.579594 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 06:35:33.587038 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 06:35:33.587075 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 06:35:33.587080 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 06:35:33.587084 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 06:35:33.587087 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 06:35:33.587091 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 06:35:33.587094 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 06:35:33.587049 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 06:35:33.588897 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c256463618132e5305eb6fed3ef923e6a3c200bbc4b917e7f678e1c78af05084\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f97561ea3b9baa3440e5148060c80d57e9cf17837e7799b057f618d9084e8a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f97561ea3b9baa3440e5148060c80d57e9cf17837e7799b057f618d9084e8a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:14Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:14 crc kubenswrapper[5017]: I0129 06:36:14.600658 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"db3c7435-1911-4b57-871d-721088099b9a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1305407f047a3e06c10dc50b0bd12943e1330ff2d277680046be5a59ddeeaf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://058b4b1e91f79b4a7fa8f73ecae30bea929f40f5f38db0faf800c45c93580ef3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8347eb7e0b582340e93bb623b71854b1b475a52c31cf20a88108a517dc2e2e65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bfc296503473169a1757c62cdb7f773e9dbb5a812855bb391664b4a91cc2655\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bfc296503473169a1757c62cdb7f773e9dbb5a812855bb391664b4a91cc2655\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:14Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:14Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:14 crc kubenswrapper[5017]: I0129 06:36:14.616554 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ce95c49452e7355245f4ec2716c723873a116e2523b8192efe93cfb369260f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:14Z is after 2025-08-24T17:21:41Z" Jan 29 
06:36:14 crc kubenswrapper[5017]: I0129 06:36:14.622513 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:14 crc kubenswrapper[5017]: I0129 06:36:14.622706 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:14 crc kubenswrapper[5017]: I0129 06:36:14.622835 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:14 crc kubenswrapper[5017]: I0129 06:36:14.622999 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:14 crc kubenswrapper[5017]: I0129 06:36:14.623169 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:14Z","lastTransitionTime":"2026-01-29T06:36:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:14 crc kubenswrapper[5017]: I0129 06:36:14.627726 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vwppb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb597cc2-fff7-4d90-a43e-958791d83324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f41b8b485793972343992b2ec30940b852ff5f355e045225e8813683e9f9a3c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g28w2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod 
\"openshift-image-registry\"/\"node-ca-vwppb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:14Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:14 crc kubenswrapper[5017]: I0129 06:36:14.641887 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a2999ed-a16c-49e3-a07a-3b084d6b8616\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ff1f2d36640f2aebd2cd6919987f3f9000d95a8cefac7844a0671da192b1897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1045c0f8fc7c60a5aa6d2e2b449230e4603b08b0b4244c319e296f89b04ab98d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27acd0133bf699b73534766330da62590b4231608bd96e51bf01b06b5775073d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\
\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fc74339c29860f59a280af0f3e21a0f4fa2be4a3e28eb63d90bc63854805d16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:14Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:14 crc kubenswrapper[5017]: I0129 06:36:14.726085 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:14 crc kubenswrapper[5017]: I0129 06:36:14.726146 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:14 crc kubenswrapper[5017]: I0129 06:36:14.726156 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:14 crc kubenswrapper[5017]: I0129 06:36:14.726175 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:14 crc kubenswrapper[5017]: I0129 06:36:14.726185 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:14Z","lastTransitionTime":"2026-01-29T06:36:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:14 crc kubenswrapper[5017]: I0129 06:36:14.828881 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:14 crc kubenswrapper[5017]: I0129 06:36:14.828971 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:14 crc kubenswrapper[5017]: I0129 06:36:14.828984 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:14 crc kubenswrapper[5017]: I0129 06:36:14.829004 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:14 crc kubenswrapper[5017]: I0129 06:36:14.829018 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:14Z","lastTransitionTime":"2026-01-29T06:36:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:14 crc kubenswrapper[5017]: I0129 06:36:14.932571 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:14 crc kubenswrapper[5017]: I0129 06:36:14.932637 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:14 crc kubenswrapper[5017]: I0129 06:36:14.932662 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:14 crc kubenswrapper[5017]: I0129 06:36:14.932694 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:14 crc kubenswrapper[5017]: I0129 06:36:14.932715 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:14Z","lastTransitionTime":"2026-01-29T06:36:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:15 crc kubenswrapper[5017]: I0129 06:36:15.035322 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:15 crc kubenswrapper[5017]: I0129 06:36:15.035402 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:15 crc kubenswrapper[5017]: I0129 06:36:15.035423 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:15 crc kubenswrapper[5017]: I0129 06:36:15.035469 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:15 crc kubenswrapper[5017]: I0129 06:36:15.035494 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:15Z","lastTransitionTime":"2026-01-29T06:36:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:15 crc kubenswrapper[5017]: I0129 06:36:15.138661 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:15 crc kubenswrapper[5017]: I0129 06:36:15.138727 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:15 crc kubenswrapper[5017]: I0129 06:36:15.138745 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:15 crc kubenswrapper[5017]: I0129 06:36:15.138770 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:15 crc kubenswrapper[5017]: I0129 06:36:15.138788 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:15Z","lastTransitionTime":"2026-01-29T06:36:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:15 crc kubenswrapper[5017]: I0129 06:36:15.241248 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:15 crc kubenswrapper[5017]: I0129 06:36:15.241311 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:15 crc kubenswrapper[5017]: I0129 06:36:15.241327 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:15 crc kubenswrapper[5017]: I0129 06:36:15.241349 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:15 crc kubenswrapper[5017]: I0129 06:36:15.241368 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:15Z","lastTransitionTime":"2026-01-29T06:36:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:15 crc kubenswrapper[5017]: I0129 06:36:15.307912 5017 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 23:34:38.024468252 +0000 UTC Jan 29 06:36:15 crc kubenswrapper[5017]: I0129 06:36:15.344812 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:15 crc kubenswrapper[5017]: I0129 06:36:15.344848 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:15 crc kubenswrapper[5017]: I0129 06:36:15.344857 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:15 crc kubenswrapper[5017]: I0129 06:36:15.344875 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:15 crc kubenswrapper[5017]: I0129 06:36:15.344886 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:15Z","lastTransitionTime":"2026-01-29T06:36:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:15 crc kubenswrapper[5017]: I0129 06:36:15.447689 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:15 crc kubenswrapper[5017]: I0129 06:36:15.447739 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:15 crc kubenswrapper[5017]: I0129 06:36:15.447753 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:15 crc kubenswrapper[5017]: I0129 06:36:15.447779 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:15 crc kubenswrapper[5017]: I0129 06:36:15.447798 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:15Z","lastTransitionTime":"2026-01-29T06:36:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:15 crc kubenswrapper[5017]: I0129 06:36:15.551374 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:15 crc kubenswrapper[5017]: I0129 06:36:15.551419 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:15 crc kubenswrapper[5017]: I0129 06:36:15.551430 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:15 crc kubenswrapper[5017]: I0129 06:36:15.551451 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:15 crc kubenswrapper[5017]: I0129 06:36:15.551466 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:15Z","lastTransitionTime":"2026-01-29T06:36:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:15 crc kubenswrapper[5017]: I0129 06:36:15.654779 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:15 crc kubenswrapper[5017]: I0129 06:36:15.654850 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:15 crc kubenswrapper[5017]: I0129 06:36:15.654868 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:15 crc kubenswrapper[5017]: I0129 06:36:15.654894 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:15 crc kubenswrapper[5017]: I0129 06:36:15.654913 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:15Z","lastTransitionTime":"2026-01-29T06:36:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:15 crc kubenswrapper[5017]: I0129 06:36:15.758204 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:15 crc kubenswrapper[5017]: I0129 06:36:15.758274 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:15 crc kubenswrapper[5017]: I0129 06:36:15.758298 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:15 crc kubenswrapper[5017]: I0129 06:36:15.758324 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:15 crc kubenswrapper[5017]: I0129 06:36:15.758342 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:15Z","lastTransitionTime":"2026-01-29T06:36:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:15 crc kubenswrapper[5017]: I0129 06:36:15.862026 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:15 crc kubenswrapper[5017]: I0129 06:36:15.862079 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:15 crc kubenswrapper[5017]: I0129 06:36:15.862090 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:15 crc kubenswrapper[5017]: I0129 06:36:15.862109 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:15 crc kubenswrapper[5017]: I0129 06:36:15.862119 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:15Z","lastTransitionTime":"2026-01-29T06:36:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:15 crc kubenswrapper[5017]: I0129 06:36:15.965145 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:15 crc kubenswrapper[5017]: I0129 06:36:15.965193 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:15 crc kubenswrapper[5017]: I0129 06:36:15.965202 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:15 crc kubenswrapper[5017]: I0129 06:36:15.965220 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:15 crc kubenswrapper[5017]: I0129 06:36:15.965230 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:15Z","lastTransitionTime":"2026-01-29T06:36:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:16 crc kubenswrapper[5017]: I0129 06:36:16.069484 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:16 crc kubenswrapper[5017]: I0129 06:36:16.070098 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:16 crc kubenswrapper[5017]: I0129 06:36:16.070325 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:16 crc kubenswrapper[5017]: I0129 06:36:16.070545 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:16 crc kubenswrapper[5017]: I0129 06:36:16.070755 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:16Z","lastTransitionTime":"2026-01-29T06:36:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:16 crc kubenswrapper[5017]: I0129 06:36:16.185324 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:16 crc kubenswrapper[5017]: I0129 06:36:16.185746 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:16 crc kubenswrapper[5017]: I0129 06:36:16.185911 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:16 crc kubenswrapper[5017]: I0129 06:36:16.186037 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:16 crc kubenswrapper[5017]: I0129 06:36:16.186134 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:16Z","lastTransitionTime":"2026-01-29T06:36:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:16 crc kubenswrapper[5017]: I0129 06:36:16.289946 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:16 crc kubenswrapper[5017]: I0129 06:36:16.290551 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:16 crc kubenswrapper[5017]: I0129 06:36:16.290640 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:16 crc kubenswrapper[5017]: I0129 06:36:16.290758 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:16 crc kubenswrapper[5017]: I0129 06:36:16.290854 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:16Z","lastTransitionTime":"2026-01-29T06:36:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:16 crc kubenswrapper[5017]: I0129 06:36:16.308833 5017 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 20:35:15.775563285 +0000 UTC Jan 29 06:36:16 crc kubenswrapper[5017]: I0129 06:36:16.315327 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 06:36:16 crc kubenswrapper[5017]: I0129 06:36:16.315402 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xn4bq" Jan 29 06:36:16 crc kubenswrapper[5017]: I0129 06:36:16.315532 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 06:36:16 crc kubenswrapper[5017]: I0129 06:36:16.315715 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 06:36:16 crc kubenswrapper[5017]: E0129 06:36:16.315923 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 06:36:16 crc kubenswrapper[5017]: E0129 06:36:16.316185 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xn4bq" podUID="0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f" Jan 29 06:36:16 crc kubenswrapper[5017]: E0129 06:36:16.316281 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 06:36:16 crc kubenswrapper[5017]: E0129 06:36:16.316341 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 06:36:16 crc kubenswrapper[5017]: I0129 06:36:16.394797 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:16 crc kubenswrapper[5017]: I0129 06:36:16.395480 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:16 crc kubenswrapper[5017]: I0129 06:36:16.395505 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:16 crc kubenswrapper[5017]: I0129 06:36:16.395539 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:16 crc kubenswrapper[5017]: I0129 06:36:16.395567 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:16Z","lastTransitionTime":"2026-01-29T06:36:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:16 crc kubenswrapper[5017]: I0129 06:36:16.499205 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:16 crc kubenswrapper[5017]: I0129 06:36:16.499288 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:16 crc kubenswrapper[5017]: I0129 06:36:16.499311 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:16 crc kubenswrapper[5017]: I0129 06:36:16.499347 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:16 crc kubenswrapper[5017]: I0129 06:36:16.499373 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:16Z","lastTransitionTime":"2026-01-29T06:36:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:16 crc kubenswrapper[5017]: I0129 06:36:16.604107 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:16 crc kubenswrapper[5017]: I0129 06:36:16.604171 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:16 crc kubenswrapper[5017]: I0129 06:36:16.604189 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:16 crc kubenswrapper[5017]: I0129 06:36:16.604272 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:16 crc kubenswrapper[5017]: I0129 06:36:16.604289 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:16Z","lastTransitionTime":"2026-01-29T06:36:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:16 crc kubenswrapper[5017]: I0129 06:36:16.708514 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:16 crc kubenswrapper[5017]: I0129 06:36:16.708941 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:16 crc kubenswrapper[5017]: I0129 06:36:16.709249 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:16 crc kubenswrapper[5017]: I0129 06:36:16.709498 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:16 crc kubenswrapper[5017]: I0129 06:36:16.709667 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:16Z","lastTransitionTime":"2026-01-29T06:36:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:16 crc kubenswrapper[5017]: I0129 06:36:16.812524 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:16 crc kubenswrapper[5017]: I0129 06:36:16.812893 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:16 crc kubenswrapper[5017]: I0129 06:36:16.813038 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:16 crc kubenswrapper[5017]: I0129 06:36:16.813155 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:16 crc kubenswrapper[5017]: I0129 06:36:16.813269 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:16Z","lastTransitionTime":"2026-01-29T06:36:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:16 crc kubenswrapper[5017]: I0129 06:36:16.917801 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:16 crc kubenswrapper[5017]: I0129 06:36:16.918271 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:16 crc kubenswrapper[5017]: I0129 06:36:16.918357 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:16 crc kubenswrapper[5017]: I0129 06:36:16.918451 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:16 crc kubenswrapper[5017]: I0129 06:36:16.918529 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:16Z","lastTransitionTime":"2026-01-29T06:36:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:17 crc kubenswrapper[5017]: I0129 06:36:17.022028 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:17 crc kubenswrapper[5017]: I0129 06:36:17.022790 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:17 crc kubenswrapper[5017]: I0129 06:36:17.023019 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:17 crc kubenswrapper[5017]: I0129 06:36:17.023126 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:17 crc kubenswrapper[5017]: I0129 06:36:17.023219 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:17Z","lastTransitionTime":"2026-01-29T06:36:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:17 crc kubenswrapper[5017]: I0129 06:36:17.127633 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:17 crc kubenswrapper[5017]: I0129 06:36:17.127681 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:17 crc kubenswrapper[5017]: I0129 06:36:17.127692 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:17 crc kubenswrapper[5017]: I0129 06:36:17.127713 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:17 crc kubenswrapper[5017]: I0129 06:36:17.127724 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:17Z","lastTransitionTime":"2026-01-29T06:36:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:17 crc kubenswrapper[5017]: I0129 06:36:17.231327 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:17 crc kubenswrapper[5017]: I0129 06:36:17.231398 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:17 crc kubenswrapper[5017]: I0129 06:36:17.231420 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:17 crc kubenswrapper[5017]: I0129 06:36:17.231452 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:17 crc kubenswrapper[5017]: I0129 06:36:17.231485 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:17Z","lastTransitionTime":"2026-01-29T06:36:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:17 crc kubenswrapper[5017]: I0129 06:36:17.309379 5017 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 19:07:19.879543238 +0000 UTC Jan 29 06:36:17 crc kubenswrapper[5017]: I0129 06:36:17.341631 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:17 crc kubenswrapper[5017]: I0129 06:36:17.341740 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:17 crc kubenswrapper[5017]: I0129 06:36:17.341772 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:17 crc kubenswrapper[5017]: I0129 06:36:17.341814 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:17 crc kubenswrapper[5017]: I0129 06:36:17.341852 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:17Z","lastTransitionTime":"2026-01-29T06:36:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:17 crc kubenswrapper[5017]: I0129 06:36:17.446108 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:17 crc kubenswrapper[5017]: I0129 06:36:17.446234 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:17 crc kubenswrapper[5017]: I0129 06:36:17.446257 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:17 crc kubenswrapper[5017]: I0129 06:36:17.446293 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:17 crc kubenswrapper[5017]: I0129 06:36:17.446320 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:17Z","lastTransitionTime":"2026-01-29T06:36:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:17 crc kubenswrapper[5017]: I0129 06:36:17.549620 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:17 crc kubenswrapper[5017]: I0129 06:36:17.549697 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:17 crc kubenswrapper[5017]: I0129 06:36:17.549716 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:17 crc kubenswrapper[5017]: I0129 06:36:17.549752 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:17 crc kubenswrapper[5017]: I0129 06:36:17.549815 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:17Z","lastTransitionTime":"2026-01-29T06:36:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:17 crc kubenswrapper[5017]: I0129 06:36:17.653002 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:17 crc kubenswrapper[5017]: I0129 06:36:17.653069 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:17 crc kubenswrapper[5017]: I0129 06:36:17.653087 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:17 crc kubenswrapper[5017]: I0129 06:36:17.653113 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:17 crc kubenswrapper[5017]: I0129 06:36:17.653132 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:17Z","lastTransitionTime":"2026-01-29T06:36:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:17 crc kubenswrapper[5017]: I0129 06:36:17.769898 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:17 crc kubenswrapper[5017]: I0129 06:36:17.769992 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:17 crc kubenswrapper[5017]: I0129 06:36:17.770011 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:17 crc kubenswrapper[5017]: I0129 06:36:17.770049 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:17 crc kubenswrapper[5017]: I0129 06:36:17.770085 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:17Z","lastTransitionTime":"2026-01-29T06:36:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:17 crc kubenswrapper[5017]: I0129 06:36:17.872862 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:17 crc kubenswrapper[5017]: I0129 06:36:17.872902 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:17 crc kubenswrapper[5017]: I0129 06:36:17.872917 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:17 crc kubenswrapper[5017]: I0129 06:36:17.872936 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:17 crc kubenswrapper[5017]: I0129 06:36:17.872948 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:17Z","lastTransitionTime":"2026-01-29T06:36:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:17 crc kubenswrapper[5017]: I0129 06:36:17.975927 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:17 crc kubenswrapper[5017]: I0129 06:36:17.976016 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:17 crc kubenswrapper[5017]: I0129 06:36:17.976108 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:17 crc kubenswrapper[5017]: I0129 06:36:17.976129 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:17 crc kubenswrapper[5017]: I0129 06:36:17.976143 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:17Z","lastTransitionTime":"2026-01-29T06:36:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:18 crc kubenswrapper[5017]: I0129 06:36:18.079249 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:18 crc kubenswrapper[5017]: I0129 06:36:18.079310 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:18 crc kubenswrapper[5017]: I0129 06:36:18.079325 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:18 crc kubenswrapper[5017]: I0129 06:36:18.079347 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:18 crc kubenswrapper[5017]: I0129 06:36:18.079362 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:18Z","lastTransitionTime":"2026-01-29T06:36:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:18 crc kubenswrapper[5017]: I0129 06:36:18.182440 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:18 crc kubenswrapper[5017]: I0129 06:36:18.182494 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:18 crc kubenswrapper[5017]: I0129 06:36:18.182504 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:18 crc kubenswrapper[5017]: I0129 06:36:18.182528 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:18 crc kubenswrapper[5017]: I0129 06:36:18.182540 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:18Z","lastTransitionTime":"2026-01-29T06:36:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:18 crc kubenswrapper[5017]: I0129 06:36:18.285725 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:18 crc kubenswrapper[5017]: I0129 06:36:18.285779 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:18 crc kubenswrapper[5017]: I0129 06:36:18.285792 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:18 crc kubenswrapper[5017]: I0129 06:36:18.285814 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:18 crc kubenswrapper[5017]: I0129 06:36:18.285830 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:18Z","lastTransitionTime":"2026-01-29T06:36:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:18 crc kubenswrapper[5017]: I0129 06:36:18.311092 5017 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 01:36:14.349249775 +0000 UTC Jan 29 06:36:18 crc kubenswrapper[5017]: I0129 06:36:18.316226 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 06:36:18 crc kubenswrapper[5017]: E0129 06:36:18.316441 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 06:36:18 crc kubenswrapper[5017]: I0129 06:36:18.316564 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-xn4bq" Jan 29 06:36:18 crc kubenswrapper[5017]: I0129 06:36:18.316712 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 06:36:18 crc kubenswrapper[5017]: E0129 06:36:18.316820 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xn4bq" podUID="0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f" Jan 29 06:36:18 crc kubenswrapper[5017]: I0129 06:36:18.316863 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 06:36:18 crc kubenswrapper[5017]: E0129 06:36:18.316930 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 06:36:18 crc kubenswrapper[5017]: E0129 06:36:18.317087 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 06:36:18 crc kubenswrapper[5017]: I0129 06:36:18.389327 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:18 crc kubenswrapper[5017]: I0129 06:36:18.389378 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:18 crc kubenswrapper[5017]: I0129 06:36:18.389396 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:18 crc kubenswrapper[5017]: I0129 06:36:18.389420 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:18 crc kubenswrapper[5017]: I0129 06:36:18.389439 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:18Z","lastTransitionTime":"2026-01-29T06:36:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:18 crc kubenswrapper[5017]: I0129 06:36:18.492514 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:18 crc kubenswrapper[5017]: I0129 06:36:18.493032 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:18 crc kubenswrapper[5017]: I0129 06:36:18.493200 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:18 crc kubenswrapper[5017]: I0129 06:36:18.493343 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:18 crc kubenswrapper[5017]: I0129 06:36:18.493493 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:18Z","lastTransitionTime":"2026-01-29T06:36:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:18 crc kubenswrapper[5017]: I0129 06:36:18.597096 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:18 crc kubenswrapper[5017]: I0129 06:36:18.597157 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:18 crc kubenswrapper[5017]: I0129 06:36:18.597170 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:18 crc kubenswrapper[5017]: I0129 06:36:18.597195 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:18 crc kubenswrapper[5017]: I0129 06:36:18.597210 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:18Z","lastTransitionTime":"2026-01-29T06:36:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:18 crc kubenswrapper[5017]: I0129 06:36:18.699810 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:18 crc kubenswrapper[5017]: I0129 06:36:18.699854 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:18 crc kubenswrapper[5017]: I0129 06:36:18.699870 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:18 crc kubenswrapper[5017]: I0129 06:36:18.699885 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:18 crc kubenswrapper[5017]: I0129 06:36:18.699898 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:18Z","lastTransitionTime":"2026-01-29T06:36:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:18 crc kubenswrapper[5017]: I0129 06:36:18.803038 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:18 crc kubenswrapper[5017]: I0129 06:36:18.803122 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:18 crc kubenswrapper[5017]: I0129 06:36:18.803145 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:18 crc kubenswrapper[5017]: I0129 06:36:18.803182 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:18 crc kubenswrapper[5017]: I0129 06:36:18.803211 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:18Z","lastTransitionTime":"2026-01-29T06:36:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:18 crc kubenswrapper[5017]: I0129 06:36:18.905598 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:18 crc kubenswrapper[5017]: I0129 06:36:18.905636 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:18 crc kubenswrapper[5017]: I0129 06:36:18.905645 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:18 crc kubenswrapper[5017]: I0129 06:36:18.905662 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:18 crc kubenswrapper[5017]: I0129 06:36:18.905671 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:18Z","lastTransitionTime":"2026-01-29T06:36:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:19 crc kubenswrapper[5017]: I0129 06:36:19.008726 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:19 crc kubenswrapper[5017]: I0129 06:36:19.009225 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:19 crc kubenswrapper[5017]: I0129 06:36:19.009357 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:19 crc kubenswrapper[5017]: I0129 06:36:19.009463 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:19 crc kubenswrapper[5017]: I0129 06:36:19.009557 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:19Z","lastTransitionTime":"2026-01-29T06:36:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:19 crc kubenswrapper[5017]: I0129 06:36:19.112712 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:19 crc kubenswrapper[5017]: I0129 06:36:19.112762 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:19 crc kubenswrapper[5017]: I0129 06:36:19.112776 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:19 crc kubenswrapper[5017]: I0129 06:36:19.112796 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:19 crc kubenswrapper[5017]: I0129 06:36:19.112808 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:19Z","lastTransitionTime":"2026-01-29T06:36:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:19 crc kubenswrapper[5017]: I0129 06:36:19.215874 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:19 crc kubenswrapper[5017]: I0129 06:36:19.215936 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:19 crc kubenswrapper[5017]: I0129 06:36:19.215951 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:19 crc kubenswrapper[5017]: I0129 06:36:19.215999 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:19 crc kubenswrapper[5017]: I0129 06:36:19.216016 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:19Z","lastTransitionTime":"2026-01-29T06:36:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:19 crc kubenswrapper[5017]: I0129 06:36:19.312017 5017 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 12:37:15.681376575 +0000 UTC Jan 29 06:36:19 crc kubenswrapper[5017]: I0129 06:36:19.318486 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:19 crc kubenswrapper[5017]: I0129 06:36:19.318579 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:19 crc kubenswrapper[5017]: I0129 06:36:19.318598 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:19 crc kubenswrapper[5017]: I0129 06:36:19.318624 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:19 crc kubenswrapper[5017]: I0129 06:36:19.318642 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:19Z","lastTransitionTime":"2026-01-29T06:36:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:19 crc kubenswrapper[5017]: I0129 06:36:19.420719 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:19 crc kubenswrapper[5017]: I0129 06:36:19.420766 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:19 crc kubenswrapper[5017]: I0129 06:36:19.420777 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:19 crc kubenswrapper[5017]: I0129 06:36:19.420795 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:19 crc kubenswrapper[5017]: I0129 06:36:19.420809 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:19Z","lastTransitionTime":"2026-01-29T06:36:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:19 crc kubenswrapper[5017]: I0129 06:36:19.523796 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:19 crc kubenswrapper[5017]: I0129 06:36:19.523846 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:19 crc kubenswrapper[5017]: I0129 06:36:19.523862 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:19 crc kubenswrapper[5017]: I0129 06:36:19.523884 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:19 crc kubenswrapper[5017]: I0129 06:36:19.523901 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:19Z","lastTransitionTime":"2026-01-29T06:36:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:19 crc kubenswrapper[5017]: I0129 06:36:19.626646 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:19 crc kubenswrapper[5017]: I0129 06:36:19.626725 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:19 crc kubenswrapper[5017]: I0129 06:36:19.626743 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:19 crc kubenswrapper[5017]: I0129 06:36:19.626771 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:19 crc kubenswrapper[5017]: I0129 06:36:19.626791 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:19Z","lastTransitionTime":"2026-01-29T06:36:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:19 crc kubenswrapper[5017]: I0129 06:36:19.729513 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:19 crc kubenswrapper[5017]: I0129 06:36:19.729569 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:19 crc kubenswrapper[5017]: I0129 06:36:19.729581 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:19 crc kubenswrapper[5017]: I0129 06:36:19.729603 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:19 crc kubenswrapper[5017]: I0129 06:36:19.729616 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:19Z","lastTransitionTime":"2026-01-29T06:36:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:19 crc kubenswrapper[5017]: I0129 06:36:19.838291 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:19 crc kubenswrapper[5017]: I0129 06:36:19.838729 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:19 crc kubenswrapper[5017]: I0129 06:36:19.838806 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:19 crc kubenswrapper[5017]: I0129 06:36:19.838873 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:19 crc kubenswrapper[5017]: I0129 06:36:19.839052 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:19Z","lastTransitionTime":"2026-01-29T06:36:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:19 crc kubenswrapper[5017]: I0129 06:36:19.942546 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:19 crc kubenswrapper[5017]: I0129 06:36:19.942598 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:19 crc kubenswrapper[5017]: I0129 06:36:19.942610 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:19 crc kubenswrapper[5017]: I0129 06:36:19.942628 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:19 crc kubenswrapper[5017]: I0129 06:36:19.942640 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:19Z","lastTransitionTime":"2026-01-29T06:36:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:20 crc kubenswrapper[5017]: I0129 06:36:20.045567 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:20 crc kubenswrapper[5017]: I0129 06:36:20.045612 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:20 crc kubenswrapper[5017]: I0129 06:36:20.045623 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:20 crc kubenswrapper[5017]: I0129 06:36:20.045640 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:20 crc kubenswrapper[5017]: I0129 06:36:20.045651 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:20Z","lastTransitionTime":"2026-01-29T06:36:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:20 crc kubenswrapper[5017]: I0129 06:36:20.147562 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:20 crc kubenswrapper[5017]: I0129 06:36:20.147606 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:20 crc kubenswrapper[5017]: I0129 06:36:20.147618 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:20 crc kubenswrapper[5017]: I0129 06:36:20.147633 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:20 crc kubenswrapper[5017]: I0129 06:36:20.147644 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:20Z","lastTransitionTime":"2026-01-29T06:36:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:20 crc kubenswrapper[5017]: I0129 06:36:20.249829 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:20 crc kubenswrapper[5017]: I0129 06:36:20.249897 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:20 crc kubenswrapper[5017]: I0129 06:36:20.249915 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:20 crc kubenswrapper[5017]: I0129 06:36:20.249946 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:20 crc kubenswrapper[5017]: I0129 06:36:20.250013 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:20Z","lastTransitionTime":"2026-01-29T06:36:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:20 crc kubenswrapper[5017]: I0129 06:36:20.312406 5017 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 11:17:17.55077622 +0000 UTC Jan 29 06:36:20 crc kubenswrapper[5017]: I0129 06:36:20.315291 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xn4bq" Jan 29 06:36:20 crc kubenswrapper[5017]: E0129 06:36:20.315428 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xn4bq" podUID="0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f" Jan 29 06:36:20 crc kubenswrapper[5017]: I0129 06:36:20.315487 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 06:36:20 crc kubenswrapper[5017]: I0129 06:36:20.315547 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 06:36:20 crc kubenswrapper[5017]: E0129 06:36:20.315694 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 06:36:20 crc kubenswrapper[5017]: I0129 06:36:20.315711 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 06:36:20 crc kubenswrapper[5017]: E0129 06:36:20.315775 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 06:36:20 crc kubenswrapper[5017]: E0129 06:36:20.315940 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 06:36:20 crc kubenswrapper[5017]: I0129 06:36:20.352530 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:20 crc kubenswrapper[5017]: I0129 06:36:20.352588 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:20 crc kubenswrapper[5017]: I0129 06:36:20.352602 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:20 crc kubenswrapper[5017]: I0129 06:36:20.352623 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:20 crc kubenswrapper[5017]: I0129 06:36:20.352635 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:20Z","lastTransitionTime":"2026-01-29T06:36:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:20 crc kubenswrapper[5017]: I0129 06:36:20.454750 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:20 crc kubenswrapper[5017]: I0129 06:36:20.454777 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:20 crc kubenswrapper[5017]: I0129 06:36:20.454788 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:20 crc kubenswrapper[5017]: I0129 06:36:20.454802 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:20 crc kubenswrapper[5017]: I0129 06:36:20.454813 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:20Z","lastTransitionTime":"2026-01-29T06:36:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:20 crc kubenswrapper[5017]: I0129 06:36:20.557698 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:20 crc kubenswrapper[5017]: I0129 06:36:20.557750 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:20 crc kubenswrapper[5017]: I0129 06:36:20.557763 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:20 crc kubenswrapper[5017]: I0129 06:36:20.557781 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:20 crc kubenswrapper[5017]: I0129 06:36:20.557794 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:20Z","lastTransitionTime":"2026-01-29T06:36:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:20 crc kubenswrapper[5017]: I0129 06:36:20.660088 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:20 crc kubenswrapper[5017]: I0129 06:36:20.660167 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:20 crc kubenswrapper[5017]: I0129 06:36:20.660179 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:20 crc kubenswrapper[5017]: I0129 06:36:20.660197 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:20 crc kubenswrapper[5017]: I0129 06:36:20.660228 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:20Z","lastTransitionTime":"2026-01-29T06:36:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:20 crc kubenswrapper[5017]: I0129 06:36:20.763574 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:20 crc kubenswrapper[5017]: I0129 06:36:20.763631 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:20 crc kubenswrapper[5017]: I0129 06:36:20.763642 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:20 crc kubenswrapper[5017]: I0129 06:36:20.763660 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:20 crc kubenswrapper[5017]: I0129 06:36:20.763670 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:20Z","lastTransitionTime":"2026-01-29T06:36:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:20 crc kubenswrapper[5017]: I0129 06:36:20.865945 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:20 crc kubenswrapper[5017]: I0129 06:36:20.866027 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:20 crc kubenswrapper[5017]: I0129 06:36:20.866041 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:20 crc kubenswrapper[5017]: I0129 06:36:20.866067 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:20 crc kubenswrapper[5017]: I0129 06:36:20.866081 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:20Z","lastTransitionTime":"2026-01-29T06:36:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:20 crc kubenswrapper[5017]: I0129 06:36:20.968168 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:20 crc kubenswrapper[5017]: I0129 06:36:20.968216 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:20 crc kubenswrapper[5017]: I0129 06:36:20.968227 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:20 crc kubenswrapper[5017]: I0129 06:36:20.968244 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:20 crc kubenswrapper[5017]: I0129 06:36:20.968255 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:20Z","lastTransitionTime":"2026-01-29T06:36:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:21 crc kubenswrapper[5017]: I0129 06:36:21.015116 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f-metrics-certs\") pod \"network-metrics-daemon-xn4bq\" (UID: \"0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f\") " pod="openshift-multus/network-metrics-daemon-xn4bq" Jan 29 06:36:21 crc kubenswrapper[5017]: E0129 06:36:21.015273 5017 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 06:36:21 crc kubenswrapper[5017]: E0129 06:36:21.015351 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f-metrics-certs podName:0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f nodeName:}" failed. No retries permitted until 2026-01-29 06:36:53.015333537 +0000 UTC m=+99.389781147 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f-metrics-certs") pod "network-metrics-daemon-xn4bq" (UID: "0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 06:36:21 crc kubenswrapper[5017]: I0129 06:36:21.070741 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:21 crc kubenswrapper[5017]: I0129 06:36:21.070795 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:21 crc kubenswrapper[5017]: I0129 06:36:21.070807 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:21 crc kubenswrapper[5017]: I0129 06:36:21.070828 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:21 crc kubenswrapper[5017]: I0129 06:36:21.070843 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:21Z","lastTransitionTime":"2026-01-29T06:36:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:21 crc kubenswrapper[5017]: I0129 06:36:21.162695 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:21 crc kubenswrapper[5017]: I0129 06:36:21.162733 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:21 crc kubenswrapper[5017]: I0129 06:36:21.162744 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:21 crc kubenswrapper[5017]: I0129 06:36:21.162762 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:21 crc kubenswrapper[5017]: I0129 06:36:21.162772 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:21Z","lastTransitionTime":"2026-01-29T06:36:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:21 crc kubenswrapper[5017]: E0129 06:36:21.176810 5017 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:36:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:36:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:36:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:36:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:36:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:36:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:36:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:36:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b652416b-993f-4447-94ce-fb2ce8447cbe\\\",\\\"systemUUID\\\":\\\"6f20d4ce-100b-4db6-aef5-b7c5d1dcba49\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:21Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:21 crc kubenswrapper[5017]: I0129 06:36:21.180080 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:21 crc kubenswrapper[5017]: I0129 06:36:21.180119 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 06:36:21 crc kubenswrapper[5017]: I0129 06:36:21.180129 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:21 crc kubenswrapper[5017]: I0129 06:36:21.180148 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:21 crc kubenswrapper[5017]: I0129 06:36:21.180162 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:21Z","lastTransitionTime":"2026-01-29T06:36:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:21 crc kubenswrapper[5017]: E0129 06:36:21.197194 5017 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:36:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:36:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:36:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:36:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:36:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:36:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:36:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:36:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[...],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b652416b-993f-4447-94ce-fb2ce8447cbe\\\",\\\"systemUUID\\\":\\\"6f20d4ce-100b-4db6-aef5-b7c5d1dcba49\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:21Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:21 crc kubenswrapper[5017]: I0129 06:36:21.201528 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:21 crc kubenswrapper[5017]: I0129 06:36:21.201577 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc"
event="NodeHasNoDiskPressure" Jan 29 06:36:21 crc kubenswrapper[5017]: I0129 06:36:21.201587 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:21 crc kubenswrapper[5017]: I0129 06:36:21.201604 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:21 crc kubenswrapper[5017]: I0129 06:36:21.201615 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:21Z","lastTransitionTime":"2026-01-29T06:36:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:21 crc kubenswrapper[5017]: E0129 06:36:21.215097 5017 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:36:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:36:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:36:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:36:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:36:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:36:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:36:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:36:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[...],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b652416b-993f-4447-94ce-fb2ce8447cbe\\\",\\\"systemUUID\\\":\\\"6f20d4ce-100b-4db6-aef5-b7c5d1dcba49\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:21Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:21 crc kubenswrapper[5017]: I0129 06:36:21.218589 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:21 crc kubenswrapper[5017]: I0129 06:36:21.218645 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc"
event="NodeHasNoDiskPressure" Jan 29 06:36:21 crc kubenswrapper[5017]: I0129 06:36:21.218654 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:21 crc kubenswrapper[5017]: I0129 06:36:21.218669 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:21 crc kubenswrapper[5017]: I0129 06:36:21.218679 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:21Z","lastTransitionTime":"2026-01-29T06:36:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:21 crc kubenswrapper[5017]: E0129 06:36:21.230708 5017 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:36:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:36:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:36:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:36:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:36:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:36:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:36:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:36:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[...],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b652416b-993f-4447-94ce-fb2ce8447cbe\\\",\\\"systemUUID\\\":\\\"6f20d4ce-100b-4db6-aef5-b7c5d1dcba49\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:21Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:21 crc kubenswrapper[5017]: I0129 06:36:21.233860 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:21 crc kubenswrapper[5017]: I0129 06:36:21.233904 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc"
event="NodeHasNoDiskPressure" Jan 29 06:36:21 crc kubenswrapper[5017]: I0129 06:36:21.233920 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:21 crc kubenswrapper[5017]: I0129 06:36:21.233948 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:21 crc kubenswrapper[5017]: I0129 06:36:21.233986 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:21Z","lastTransitionTime":"2026-01-29T06:36:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:21 crc kubenswrapper[5017]: E0129 06:36:21.247974 5017 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:36:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:36:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:36:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:36:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:36:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:36:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:36:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:36:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b652416b-993f-4447-94ce-fb2ce8447cbe\\\",\\\"systemUUID\\\":\\\"6f20d4ce-100b-4db6-aef5-b7c5d1dcba49\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:21Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:21 crc kubenswrapper[5017]: E0129 06:36:21.248137 5017 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 06:36:21 crc kubenswrapper[5017]: I0129 06:36:21.249993 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 29 06:36:21 crc kubenswrapper[5017]: I0129 06:36:21.250043 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:21 crc kubenswrapper[5017]: I0129 06:36:21.250056 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:21 crc kubenswrapper[5017]: I0129 06:36:21.250077 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:21 crc kubenswrapper[5017]: I0129 06:36:21.250089 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:21Z","lastTransitionTime":"2026-01-29T06:36:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:21 crc kubenswrapper[5017]: I0129 06:36:21.313284 5017 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 13:44:39.152106268 +0000 UTC Jan 29 06:36:21 crc kubenswrapper[5017]: I0129 06:36:21.353999 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:21 crc kubenswrapper[5017]: I0129 06:36:21.354060 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:21 crc kubenswrapper[5017]: I0129 06:36:21.354078 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:21 crc kubenswrapper[5017]: I0129 06:36:21.354103 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:21 crc kubenswrapper[5017]: I0129 06:36:21.354120 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:21Z","lastTransitionTime":"2026-01-29T06:36:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:21 crc kubenswrapper[5017]: I0129 06:36:21.457302 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:21 crc kubenswrapper[5017]: I0129 06:36:21.457364 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:21 crc kubenswrapper[5017]: I0129 06:36:21.457377 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:21 crc kubenswrapper[5017]: I0129 06:36:21.457397 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:21 crc kubenswrapper[5017]: I0129 06:36:21.457410 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:21Z","lastTransitionTime":"2026-01-29T06:36:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:21 crc kubenswrapper[5017]: I0129 06:36:21.561000 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:21 crc kubenswrapper[5017]: I0129 06:36:21.561067 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:21 crc kubenswrapper[5017]: I0129 06:36:21.561088 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:21 crc kubenswrapper[5017]: I0129 06:36:21.561113 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:21 crc kubenswrapper[5017]: I0129 06:36:21.561125 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:21Z","lastTransitionTime":"2026-01-29T06:36:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:21 crc kubenswrapper[5017]: I0129 06:36:21.664117 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:21 crc kubenswrapper[5017]: I0129 06:36:21.664193 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:21 crc kubenswrapper[5017]: I0129 06:36:21.664208 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:21 crc kubenswrapper[5017]: I0129 06:36:21.664232 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:21 crc kubenswrapper[5017]: I0129 06:36:21.664244 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:21Z","lastTransitionTime":"2026-01-29T06:36:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:21 crc kubenswrapper[5017]: I0129 06:36:21.767232 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:21 crc kubenswrapper[5017]: I0129 06:36:21.767310 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:21 crc kubenswrapper[5017]: I0129 06:36:21.767335 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:21 crc kubenswrapper[5017]: I0129 06:36:21.767366 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:21 crc kubenswrapper[5017]: I0129 06:36:21.767387 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:21Z","lastTransitionTime":"2026-01-29T06:36:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:21 crc kubenswrapper[5017]: I0129 06:36:21.870633 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:21 crc kubenswrapper[5017]: I0129 06:36:21.870735 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:21 crc kubenswrapper[5017]: I0129 06:36:21.870754 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:21 crc kubenswrapper[5017]: I0129 06:36:21.870829 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:21 crc kubenswrapper[5017]: I0129 06:36:21.870848 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:21Z","lastTransitionTime":"2026-01-29T06:36:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:21 crc kubenswrapper[5017]: I0129 06:36:21.972701 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:21 crc kubenswrapper[5017]: I0129 06:36:21.972745 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:21 crc kubenswrapper[5017]: I0129 06:36:21.972756 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:21 crc kubenswrapper[5017]: I0129 06:36:21.972769 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:21 crc kubenswrapper[5017]: I0129 06:36:21.972777 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:21Z","lastTransitionTime":"2026-01-29T06:36:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:22 crc kubenswrapper[5017]: I0129 06:36:22.075103 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:22 crc kubenswrapper[5017]: I0129 06:36:22.075161 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:22 crc kubenswrapper[5017]: I0129 06:36:22.075178 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:22 crc kubenswrapper[5017]: I0129 06:36:22.075199 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:22 crc kubenswrapper[5017]: I0129 06:36:22.075217 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:22Z","lastTransitionTime":"2026-01-29T06:36:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:22 crc kubenswrapper[5017]: I0129 06:36:22.177141 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:22 crc kubenswrapper[5017]: I0129 06:36:22.177193 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:22 crc kubenswrapper[5017]: I0129 06:36:22.177202 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:22 crc kubenswrapper[5017]: I0129 06:36:22.177218 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:22 crc kubenswrapper[5017]: I0129 06:36:22.177227 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:22Z","lastTransitionTime":"2026-01-29T06:36:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:22 crc kubenswrapper[5017]: I0129 06:36:22.279913 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:22 crc kubenswrapper[5017]: I0129 06:36:22.279973 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:22 crc kubenswrapper[5017]: I0129 06:36:22.279986 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:22 crc kubenswrapper[5017]: I0129 06:36:22.280004 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:22 crc kubenswrapper[5017]: I0129 06:36:22.280015 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:22Z","lastTransitionTime":"2026-01-29T06:36:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:22 crc kubenswrapper[5017]: I0129 06:36:22.313728 5017 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 12:08:20.339472544 +0000 UTC Jan 29 06:36:22 crc kubenswrapper[5017]: I0129 06:36:22.316263 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 06:36:22 crc kubenswrapper[5017]: I0129 06:36:22.316322 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xn4bq" Jan 29 06:36:22 crc kubenswrapper[5017]: I0129 06:36:22.316474 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 06:36:22 crc kubenswrapper[5017]: E0129 06:36:22.316531 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 06:36:22 crc kubenswrapper[5017]: I0129 06:36:22.316584 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 06:36:22 crc kubenswrapper[5017]: E0129 06:36:22.317400 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 06:36:22 crc kubenswrapper[5017]: E0129 06:36:22.316716 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 06:36:22 crc kubenswrapper[5017]: E0129 06:36:22.316643 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xn4bq" podUID="0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f" Jan 29 06:36:22 crc kubenswrapper[5017]: I0129 06:36:22.382485 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:22 crc kubenswrapper[5017]: I0129 06:36:22.382545 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:22 crc kubenswrapper[5017]: I0129 06:36:22.382553 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:22 crc kubenswrapper[5017]: I0129 06:36:22.382567 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:22 crc kubenswrapper[5017]: I0129 06:36:22.382577 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:22Z","lastTransitionTime":"2026-01-29T06:36:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:22 crc kubenswrapper[5017]: I0129 06:36:22.487024 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:22 crc kubenswrapper[5017]: I0129 06:36:22.487090 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:22 crc kubenswrapper[5017]: I0129 06:36:22.487105 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:22 crc kubenswrapper[5017]: I0129 06:36:22.487126 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:22 crc kubenswrapper[5017]: I0129 06:36:22.487139 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:22Z","lastTransitionTime":"2026-01-29T06:36:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:22 crc kubenswrapper[5017]: I0129 06:36:22.589005 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:22 crc kubenswrapper[5017]: I0129 06:36:22.589040 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:22 crc kubenswrapper[5017]: I0129 06:36:22.589049 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:22 crc kubenswrapper[5017]: I0129 06:36:22.589063 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:22 crc kubenswrapper[5017]: I0129 06:36:22.589072 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:22Z","lastTransitionTime":"2026-01-29T06:36:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:22 crc kubenswrapper[5017]: I0129 06:36:22.691673 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:22 crc kubenswrapper[5017]: I0129 06:36:22.691751 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:22 crc kubenswrapper[5017]: I0129 06:36:22.691768 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:22 crc kubenswrapper[5017]: I0129 06:36:22.691793 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:22 crc kubenswrapper[5017]: I0129 06:36:22.691816 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:22Z","lastTransitionTime":"2026-01-29T06:36:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:22 crc kubenswrapper[5017]: I0129 06:36:22.795002 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:22 crc kubenswrapper[5017]: I0129 06:36:22.795056 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:22 crc kubenswrapper[5017]: I0129 06:36:22.795074 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:22 crc kubenswrapper[5017]: I0129 06:36:22.795098 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:22 crc kubenswrapper[5017]: I0129 06:36:22.795115 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:22Z","lastTransitionTime":"2026-01-29T06:36:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:22 crc kubenswrapper[5017]: I0129 06:36:22.841425 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9jkcd_8ae056f0-e054-45da-9638-73074b7c8a3b/kube-multus/0.log" Jan 29 06:36:22 crc kubenswrapper[5017]: I0129 06:36:22.841506 5017 generic.go:334] "Generic (PLEG): container finished" podID="8ae056f0-e054-45da-9638-73074b7c8a3b" containerID="17d52fc64784408e447773d44de6e83d32029a77073a0c1ee206a51d81eda62f" exitCode=1 Jan 29 06:36:22 crc kubenswrapper[5017]: I0129 06:36:22.841552 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9jkcd" event={"ID":"8ae056f0-e054-45da-9638-73074b7c8a3b","Type":"ContainerDied","Data":"17d52fc64784408e447773d44de6e83d32029a77073a0c1ee206a51d81eda62f"} Jan 29 06:36:22 crc kubenswrapper[5017]: I0129 06:36:22.842129 5017 scope.go:117] "RemoveContainer" containerID="17d52fc64784408e447773d44de6e83d32029a77073a0c1ee206a51d81eda62f" Jan 29 06:36:22 crc kubenswrapper[5017]: I0129 06:36:22.862872 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b8abb3dbc845f89735c58741f02550053402975d5edfe5e69f2323cc82bb8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7b335d7b6120621d4c94dfd8cdc86561a91d9c84a9481d352042d7728d7f6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/en
v\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:22Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:22 crc kubenswrapper[5017]: I0129 06:36:22.876852 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:22Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:22 crc kubenswrapper[5017]: I0129 06:36:22.891286 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-46ch5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42db11f6-649f-486e-83a2-7506fdf51ba2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e7417d93094efd850372fea31fa798b5b9583cbf1daa36ebe18ade52c714304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qm9zl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db927333ed9f10e47997ba0073d04675b880319e46f4c6d46805d171bc2e15f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2
099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qm9zl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-46ch5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:22Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:22 crc kubenswrapper[5017]: I0129 06:36:22.897503 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:22 crc kubenswrapper[5017]: I0129 06:36:22.897566 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:22 crc kubenswrapper[5017]: I0129 06:36:22.897577 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:22 crc kubenswrapper[5017]: I0129 06:36:22.897592 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:22 crc kubenswrapper[5017]: I0129 06:36:22.897601 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:22Z","lastTransitionTime":"2026-01-29T06:36:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:22 crc kubenswrapper[5017]: I0129 06:36:22.906392 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c67cd79-8431-401b-8f03-9387813b30ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe28ec8d64ea50ecca5d4531440159b5277aff7de80b18813bb91f5c32923326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a82e8858b15c840a63737058d3af382c1a6469d7d1d3ba6e44759f33e55f2543\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a303c0532bb4a555700875ed27e56205d709658417a72f47868fb7bc8291a77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ef12beaf08b33d7694b3ea5a4705e37ec4a5b53f9cf1a7e0dd41a78d99bc8f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27dd6068095b58038216d1837a6b7966c514c7d8a9db5d144b043d918b757394\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T06:35:33Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 06:35:28.003940 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 06:35:28.004898 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1201671994/tls.crt::/tmp/serving-cert-1201671994/tls.key\\\\\\\"\\\\nI0129 06:35:33.575231 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 06:35:33.579536 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 06:35:33.579564 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 06:35:33.579587 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 06:35:33.579594 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 06:35:33.587038 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 06:35:33.587075 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 06:35:33.587080 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 06:35:33.587084 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 06:35:33.587087 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 06:35:33.587091 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 06:35:33.587094 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 06:35:33.587049 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 06:35:33.588897 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c256463618132e5305eb6fed3ef923e6a3c200bbc4b917e7f678e1c78af05084\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f97561ea3b9baa3440e5148060c80d57e9cf17837e7799b057f618d9084e8a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f97561ea3b9baa3440e5148060c80d57e9cf17837e7799b057f618d9084e8a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:22Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:22 crc kubenswrapper[5017]: I0129 06:36:22.920407 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"db3c7435-1911-4b57-871d-721088099b9a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1305407f047a3e06c10dc50b0bd12943e1330ff2d277680046be5a59ddeeaf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://058b4b1e91f79b4a7fa8f73ecae30bea929f40f5f38db0faf800c45c93580ef3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8347eb7e0b582340e93bb623b71854b1b475a52c31cf20a88108a517dc2e2e65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bfc296503473169a1757c62cdb7f773e9dbb5a812855bb391664b4a91cc2655\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bfc296503473169a1757c62cdb7f773e9dbb5a812855bb391664b4a91cc2655\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:14Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:22Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:22 crc kubenswrapper[5017]: I0129 06:36:22.933726 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:22Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:22 crc kubenswrapper[5017]: I0129 06:36:22.947524 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:22Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:22 crc kubenswrapper[5017]: I0129 06:36:22.966369 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a2999ed-a16c-49e3-a07a-3b084d6b8616\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ff1f2d36640f2aebd2cd6919987f3f9000d95a8cefac7844a0671da192b1897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1045c0f8fc7c60a5aa6d2e2b449230e4603b08b0b4244c319e296f89b04ab98d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"r
esource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27acd0133bf699b73534766330da62590b4231608bd96e51bf01b06b5775073d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fc74339c29860f59a280af0f3e21a0f4fa2be4a3e28eb63d90bc63854805d16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:22Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:22 crc kubenswrapper[5017]: I0129 06:36:22.982644 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ce95c49452e7355245f4ec2716c723873a116e2523b8192efe93cfb369260f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:22Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:22 crc kubenswrapper[5017]: I0129 06:36:22.992420 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vwppb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb597cc2-fff7-4d90-a43e-958791d83324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f41b8b485793972343992b2ec30940b852ff5f355e045225e8813683e9f9a3c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g28w2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vwppb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:22Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:23 crc kubenswrapper[5017]: I0129 06:36:23.000657 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:23 crc kubenswrapper[5017]: I0129 06:36:23.000708 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:23 crc kubenswrapper[5017]: I0129 06:36:23.000717 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:23 crc kubenswrapper[5017]: I0129 06:36:23.000733 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:23 crc kubenswrapper[5017]: I0129 06:36:23.000742 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:23Z","lastTransitionTime":"2026-01-29T06:36:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:23 crc kubenswrapper[5017]: I0129 06:36:23.002485 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-895pl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2672ef63-7861-4c3d-a1b4-03cc9d18f8e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://884bf08569779a1b94a927f1250657ab4d855e78f451a826856022a7c62cd6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5qdt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd299f15e4338652a37e1e3f09560a50e6c59d7bf29e827462d060a56294438\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5qdt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-895pl\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:23Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:23 crc kubenswrapper[5017]: I0129 06:36:23.013746 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qwq46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4d3122b-b4b4-41ac-896a-566afdcda936\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45817069aa7e09fd97b77922f9b5a651f2a067991115021c0a85159a93d523af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z89l9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qwq46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:23Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:23 crc kubenswrapper[5017]: I0129 06:36:23.027573 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-m2gbd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4036e581-21bf-4ea0-aaf5-84ab8a841888\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ad091986feb992d2ef247a76476d77d58cb487cf0baa60f81e5d3ddf0d30c3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7399e8e88798a89139226e395d22a2aa61370296a3789f63d766f42ea1c9f4df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7399e8e88798a89139226e395d22a2aa61370296a3789f63d766f42ea1c9f4df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9418ae41c02e41fb3aeb7ce97e7d7b91f9a193ac23cbfe692cc79b071aa3adca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9418ae41c02e41fb3aeb7ce97e7d7b91f9a193ac23cbfe692cc79b071aa3adca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1065c2c8660d096dba79585cd7d954ae31491d258129e132b1e0cb1218bc849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1065c2c8660d096dba79585cd7d954ae31491d258129e132b1e0cb1218bc849\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb8f183765b578cd4362f3d3c13e20b8cb70aec46adbcdb750164e91e2bd53e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb8f183765b578cd4362f3d3c13e20b8cb70aec46adbcdb750164e91e2bd53e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23ac8b597fcf07378d69cd788cf4f79e236218e22a58ffaa5a72688e650067c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23ac8b597fcf07378d69cd788cf4f79e236218e22a58ffaa5a72688e650067c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8548b649f7fcef84467fc98f1b3aacc576c363a1c83087971aafaf8787734901\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8548b649f7fcef84467fc98f1b3aacc576c363a1c83087971aafaf8787734901\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-m2gbd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:23Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:23 crc kubenswrapper[5017]: I0129 06:36:23.038150 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xn4bq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgkx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgkx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xn4bq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:23Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:23 crc kubenswrapper[5017]: I0129 06:36:23.053180 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bf710b24256eeb9149ca710dd9fcceae762df89486e0cf0f341c278eadda2fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:23Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:23 crc kubenswrapper[5017]: I0129 06:36:23.066668 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9jkcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ae056f0-e054-45da-9638-73074b7c8a3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:36:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:36:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17d52fc64784408e447773d44de6e83d32029a77073a0c1ee206a51d81eda62f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17d52fc64784408e447773d44de6e83d32029a77073a0c1ee206a51d81eda62f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T06:36:22Z\\\",\\\"message\\\":\\\"2026-01-29T06:35:37+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_fcc27f29-703c-4a49-b4e2-49357dbaed87\\\\n2026-01-29T06:35:37+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_fcc27f29-703c-4a49-b4e2-49357dbaed87 to /host/opt/cni/bin/\\\\n2026-01-29T06:35:37Z [verbose] multus-daemon started\\\\n2026-01-29T06:35:37Z [verbose] Readiness Indicator file check\\\\n2026-01-29T06:36:22Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x84hz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-9jkcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:23Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:23 crc kubenswrapper[5017]: I0129 06:36:23.086181 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02dd5727-894c-4693-9bc7-83dd88ce118c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://177a3e8a7f63905262ffb4ea5f98c4fa97a8d48dd2ff075484a44ea87d66a072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7eec8c502e2516637a85d2b83f7f745f9cc5d6a420f15cea465d92f71e0b8e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cer
t\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1fb588776e0029263904fa0ec3afa53e180b30843864f1dee6999e1cca64b18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc8fc0d9e440c8725e279e6649d3c1e2cdc55589f6b5786187602576e4acd835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://749973b1dc2bcf3fc4e35277f0893a240b3571cadc45529da1229cd7ebe1748e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6a16eea25767bb4fea94880a90ae883944934c09ff5ba0ced9641169691169d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd68f113fc859fc67ddb074145ba0c97e3d310e617acaf13f814e69bc9cc7c0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd68f113fc859fc67ddb074145ba0c97e3d310e617acaf13f814e69bc9cc7c0f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T06:35:59Z\\\",\\\"message\\\":\\\"650 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0129 06:35:59.345755 6650 obj_retry.go:386] Retry successful for *v1.Pod openshift-image-registry/node-ca-vwppb after 0 failed attempt(s)\\\\nI0129 06:35:59.346013 6650 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-vwppb\\\\nI0129 06:35:59.345735 6650 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-m2gbd in node crc\\\\nI0129 06:35:59.346035 6650 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-m2gbd after 0 failed attempt(s)\\\\nI0129 06:35:59.346041 6650 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-additional-cni-plugins-m2gbd\\\\nI0129 06:35:59.345688 6650 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-9jkcd\\\\nI0129 06:35:59.346039 6650 base_network_controller_pods.go:916] Annotation values: ip=[10.217.0.3/23] ; mac=0a:58:0a:d9:00:03 ; gw=[10.217.0.1]\\\\nI0129 06:35:59.346059 6650 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0129 06:35:59.346122 6650 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-wqgmk_openshift-ovn-kubernetes(02dd5727-894c-4693-9bc7-83dd88ce118c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1e341b7acf35edc3ee9e5a4fec11e2270d45ea0f05df30543d2c9163b024646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recurs
iveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13649086edb9d7ffe20a9040787d55449e6541a51f03432458f5fcb965fe4723\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13649086edb9d7ffe20a9040787d55449e6541a51f03432458f5fcb965fe4723\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wqgmk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:23Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:23 crc kubenswrapper[5017]: I0129 06:36:23.103307 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:23 crc kubenswrapper[5017]: I0129 06:36:23.103344 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:23 crc kubenswrapper[5017]: I0129 06:36:23.103353 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:23 crc kubenswrapper[5017]: I0129 06:36:23.103367 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:23 crc kubenswrapper[5017]: I0129 06:36:23.103376 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:23Z","lastTransitionTime":"2026-01-29T06:36:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:23 crc kubenswrapper[5017]: I0129 06:36:23.206064 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:23 crc kubenswrapper[5017]: I0129 06:36:23.206113 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:23 crc kubenswrapper[5017]: I0129 06:36:23.206125 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:23 crc kubenswrapper[5017]: I0129 06:36:23.206157 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:23 crc kubenswrapper[5017]: I0129 06:36:23.206168 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:23Z","lastTransitionTime":"2026-01-29T06:36:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:23 crc kubenswrapper[5017]: I0129 06:36:23.308789 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:23 crc kubenswrapper[5017]: I0129 06:36:23.308829 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:23 crc kubenswrapper[5017]: I0129 06:36:23.308838 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:23 crc kubenswrapper[5017]: I0129 06:36:23.308853 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:23 crc kubenswrapper[5017]: I0129 06:36:23.308875 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:23Z","lastTransitionTime":"2026-01-29T06:36:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:23 crc kubenswrapper[5017]: I0129 06:36:23.314229 5017 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 12:56:01.48225494 +0000 UTC Jan 29 06:36:23 crc kubenswrapper[5017]: I0129 06:36:23.411608 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:23 crc kubenswrapper[5017]: I0129 06:36:23.411662 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:23 crc kubenswrapper[5017]: I0129 06:36:23.411671 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:23 crc kubenswrapper[5017]: I0129 06:36:23.411685 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:23 crc kubenswrapper[5017]: I0129 06:36:23.411697 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:23Z","lastTransitionTime":"2026-01-29T06:36:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:23 crc kubenswrapper[5017]: I0129 06:36:23.513838 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:23 crc kubenswrapper[5017]: I0129 06:36:23.513890 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:23 crc kubenswrapper[5017]: I0129 06:36:23.513903 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:23 crc kubenswrapper[5017]: I0129 06:36:23.513920 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:23 crc kubenswrapper[5017]: I0129 06:36:23.513932 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:23Z","lastTransitionTime":"2026-01-29T06:36:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:23 crc kubenswrapper[5017]: I0129 06:36:23.615941 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:23 crc kubenswrapper[5017]: I0129 06:36:23.615998 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:23 crc kubenswrapper[5017]: I0129 06:36:23.616008 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:23 crc kubenswrapper[5017]: I0129 06:36:23.616022 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:23 crc kubenswrapper[5017]: I0129 06:36:23.616032 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:23Z","lastTransitionTime":"2026-01-29T06:36:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:23 crc kubenswrapper[5017]: I0129 06:36:23.719217 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:23 crc kubenswrapper[5017]: I0129 06:36:23.719266 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:23 crc kubenswrapper[5017]: I0129 06:36:23.719273 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:23 crc kubenswrapper[5017]: I0129 06:36:23.719291 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:23 crc kubenswrapper[5017]: I0129 06:36:23.719300 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:23Z","lastTransitionTime":"2026-01-29T06:36:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:23 crc kubenswrapper[5017]: I0129 06:36:23.822035 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:23 crc kubenswrapper[5017]: I0129 06:36:23.822079 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:23 crc kubenswrapper[5017]: I0129 06:36:23.822088 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:23 crc kubenswrapper[5017]: I0129 06:36:23.822103 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:23 crc kubenswrapper[5017]: I0129 06:36:23.822112 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:23Z","lastTransitionTime":"2026-01-29T06:36:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:23 crc kubenswrapper[5017]: I0129 06:36:23.846327 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9jkcd_8ae056f0-e054-45da-9638-73074b7c8a3b/kube-multus/0.log" Jan 29 06:36:23 crc kubenswrapper[5017]: I0129 06:36:23.846390 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9jkcd" event={"ID":"8ae056f0-e054-45da-9638-73074b7c8a3b","Type":"ContainerStarted","Data":"a7429ec96a9c4de8ad2620003f7b591d38fef6a019e78098e9ba927e3fd79252"} Jan 29 06:36:23 crc kubenswrapper[5017]: I0129 06:36:23.869555 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02dd5727-894c-4693-9bc7-83dd88ce118c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://177a3e8a7f63905262ffb4ea5f98c4fa97a8d48dd2ff075484a44ea87d66a072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7eec8c502e2516637a85d2b83f7f745f9cc5d6a420f15cea465d92f71e0b8e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1fb588776e0029263904fa0ec3afa53e180b30843864f1dee6999e1cca64b18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc8fc0d9e440c8725e279e6649d3c1e2cdc55589f6b5786187602576e4acd835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://749973b1dc2bcf3fc4e35277f0893a240b3571cadc45529da1229cd7ebe1748e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6a16eea25767bb4fea94880a90ae883944934c09ff5ba0ced9641169691169d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd68f113fc859fc67ddb074145ba0c97e3d310e6
17acaf13f814e69bc9cc7c0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd68f113fc859fc67ddb074145ba0c97e3d310e617acaf13f814e69bc9cc7c0f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T06:35:59Z\\\",\\\"message\\\":\\\"650 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0129 06:35:59.345755 6650 obj_retry.go:386] Retry successful for *v1.Pod openshift-image-registry/node-ca-vwppb after 0 failed attempt(s)\\\\nI0129 06:35:59.346013 6650 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-vwppb\\\\nI0129 06:35:59.345735 6650 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-m2gbd in node crc\\\\nI0129 06:35:59.346035 6650 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-m2gbd after 0 failed attempt(s)\\\\nI0129 06:35:59.346041 6650 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-additional-cni-plugins-m2gbd\\\\nI0129 06:35:59.345688 6650 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-9jkcd\\\\nI0129 06:35:59.346039 6650 base_network_controller_pods.go:916] Annotation values: ip=[10.217.0.3/23] ; mac=0a:58:0a:d9:00:03 ; gw=[10.217.0.1]\\\\nI0129 06:35:59.346059 6650 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0129 06:35:59.346122 6650 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-wqgmk_openshift-ovn-kubernetes(02dd5727-894c-4693-9bc7-83dd88ce118c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1e341b7acf35edc3ee9e5a4fec11e2270d45ea0f05df30543d2c9163b024646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13649086edb9d7ffe20a9040787d55449e6541a51f03432458f5fcb965fe4723\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13649086edb9d7ffe20a9040787d55449e6541a51f03432458f5fcb965fe4723\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wqgmk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:23Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:23 crc kubenswrapper[5017]: I0129 06:36:23.882234 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bf710b24256eeb9149ca710dd9fcceae762df89486e0cf0f341c278eadda2fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:23Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:23 crc kubenswrapper[5017]: I0129 06:36:23.895410 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9jkcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ae056f0-e054-45da-9638-73074b7c8a3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7429ec96a9c4de8ad2620003f7b591d38fef6a019e78098e9ba927e3fd79252\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17d52fc64784408e447773d44de6e83d32029a77073a0c1ee206a51d81eda62f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T06:36:22Z\\\",\\\"message\\\":\\\"2026-01-29T06:35:37+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_fcc27f29-703c-4a49-b4e2-49357dbaed87\\\\n2026-01-29T06:35:37+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_fcc27f29-703c-4a49-b4e2-49357dbaed87 to /host/opt/cni/bin/\\\\n2026-01-29T06:35:37Z [verbose] multus-daemon started\\\\n2026-01-29T06:35:37Z [verbose] Readiness Indicator file check\\\\n2026-01-29T06:36:22Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x84hz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9jkcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:23Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:23 crc kubenswrapper[5017]: I0129 06:36:23.906985 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:23Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:23 crc kubenswrapper[5017]: I0129 06:36:23.921477 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b8abb3dbc845f89735c58741f02550053402975d5edfe5e69f2323cc82bb8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7b335d7b6120621d4c94dfd8cdc86561a91d9c84a9481d352042d7728d7f6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:23Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:23 crc kubenswrapper[5017]: I0129 06:36:23.924370 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:23 crc kubenswrapper[5017]: I0129 06:36:23.924441 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:23 crc kubenswrapper[5017]: I0129 06:36:23.924463 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:23 crc kubenswrapper[5017]: I0129 06:36:23.924492 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:23 crc kubenswrapper[5017]: I0129 06:36:23.924525 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:23Z","lastTransitionTime":"2026-01-29T06:36:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:23 crc kubenswrapper[5017]: I0129 06:36:23.934940 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:23Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:23 crc kubenswrapper[5017]: I0129 06:36:23.944887 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-46ch5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42db11f6-649f-486e-83a2-7506fdf51ba2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e7417d93094efd850372fea31fa798b5b9583cbf1daa36ebe18ade52c714304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qm9zl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db927333ed9f10e47997ba0073d04675b880319e46f4c6d46805d171bc2e15f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qm9zl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-46ch5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:23Z is after 2025-08-24T17:21:41Z" Jan 29 
06:36:23 crc kubenswrapper[5017]: I0129 06:36:23.957325 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c67cd79-8431-401b-8f03-9387813b30ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe28ec8d64ea50ecca5d4531440159b5277aff7de80b18813bb91f5c32923326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a82e8858b15c840a63737058d3af382c1a6469d7d1d3ba6e44759f33e55f2543\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a303c0532bb4a555700875ed27e56205d709658417a72f47868fb7bc8291a77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ef12beaf08b33d7694b3ea5a4705e37ec4a5b53f9cf1a7e0dd41a78d99bc8f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27dd6068095b58038216d1837a6b7966c514c7d8a9db5d144b043d918b757394\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T06:35:33Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 06:35:28.003940 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 06:35:28.004898 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1201671994/tls.crt::/tmp/serving-cert-1201671994/tls.key\\\\\\\"\\\\nI0129 06:35:33.575231 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 06:35:33.579536 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 06:35:33.579564 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 06:35:33.579587 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 06:35:33.579594 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 06:35:33.587038 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 06:35:33.587075 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 06:35:33.587080 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 06:35:33.587084 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 06:35:33.587087 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 06:35:33.587091 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 06:35:33.587094 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 06:35:33.587049 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 06:35:33.588897 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c256463618132e5305eb6fed3ef923e6a3c200bbc4b917e7f678e1c78af05084\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f97561ea3b9baa3440e5148060c80d57e9cf17837e7799b057f618d9084e8a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f97561ea3b9baa3440e5148060c80d57e9cf17837e7799b057f618d9084e8a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:23Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:23 crc kubenswrapper[5017]: I0129 06:36:23.969094 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"db3c7435-1911-4b57-871d-721088099b9a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1305407f047a3e06c10dc50b0bd12943e1330ff2d277680046be5a59ddeeaf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://058b4b1e91f79b4a7fa8f73ecae30bea929f40f5f38db0faf800c45c93580ef3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8347eb7e0b582340e93bb623b71854b1b475a52c31cf20a88108a517dc2e2e65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bfc296503473169a1757c62cdb7f773e9dbb5a812855bb391664b4a91cc2655\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bfc296503473169a1757c62cdb7f773e9dbb5a812855bb391664b4a91cc2655\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:14Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:23Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:23 crc kubenswrapper[5017]: I0129 06:36:23.981128 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:23Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:23 crc kubenswrapper[5017]: I0129 06:36:23.991390 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vwppb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb597cc2-fff7-4d90-a43e-958791d83324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f41b8b485793972343992b2ec30940b852ff5f355e045225e8813683e9f9a3c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g28w2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vwppb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:23Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:24 crc kubenswrapper[5017]: I0129 06:36:24.005428 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a2999ed-a16c-49e3-a07a-3b084d6b8616\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ff1f2d36640f2aebd2cd6919987f3f9000d95a8cefac7844a0671da192b1897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1045c0f8fc7c60a5aa6d2e2b449230e4603b08b0b4244c319e296f89b04ab98d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27acd0133bf699b73534766330da62590b4231608bd96e51bf01b06b5775073d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fc74339c29860f59a280af0f3e21a0f4fa2be4a3e28eb63d90bc63854805d16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:24Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:24 crc kubenswrapper[5017]: I0129 06:36:24.018525 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ce95c49452e7355245f4ec2716c723873a116e2523b8192efe93cfb369260f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:24Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:24 crc kubenswrapper[5017]: I0129 06:36:24.027187 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:24 crc kubenswrapper[5017]: I0129 06:36:24.027232 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:24 crc kubenswrapper[5017]: I0129 06:36:24.027243 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:24 crc kubenswrapper[5017]: I0129 06:36:24.027260 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:24 crc kubenswrapper[5017]: I0129 06:36:24.027271 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:24Z","lastTransitionTime":"2026-01-29T06:36:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:24 crc kubenswrapper[5017]: I0129 06:36:24.030827 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xn4bq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgkx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgkx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xn4bq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:24Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:24 crc kubenswrapper[5017]: I0129 06:36:24.044996 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-895pl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2672ef63-7861-4c3d-a1b4-03cc9d18f8e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://884bf08569779a1b94a927f1250657ab4d855e78f451a826856022a7c62cd6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5qdt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd299f15e4338652a37e1e3f09560a50e6c59d7bf29e827462d060a56294438\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5qdt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-895pl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:24Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:24 crc kubenswrapper[5017]: I0129 06:36:24.056145 5017 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-qwq46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4d3122b-b4b4-41ac-896a-566afdcda936\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45817069aa7e09fd97b77922f9b5a651f2a067991115021c0a85159a93d523af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z89l9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qwq46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:24Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:24 crc kubenswrapper[5017]: I0129 06:36:24.081267 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-m2gbd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4036e581-21bf-4ea0-aaf5-84ab8a841888\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ad091986feb992d2ef247a76476d77d58cb487cf0baa60f81e5d3ddf0d30c3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7399e8e88798a89139226e395d22a2aa61370296a3789f63d766f42ea1c9f4df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7399e8e88798a89139226e395d22a2aa61370296a3789f63d766f42ea1c9f4df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9418ae41c02e41fb3aeb7ce97e7d7b91f9a193ac23cbfe692cc79b071aa3adca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9418ae41c02e41fb3aeb7ce97e7d7b91f9a193ac23cbfe692cc79b071aa3adca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1065c2c8660d096dba79585cd7d954ae31491d258129e132b1e0cb1218bc849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1065c2c8660d096dba79585cd7d954ae31491d258129e132b1e0cb1218bc849\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb8f183765b578cd4362f3d3c13e20b8cb70aec46adbcdb750164e91e2bd53e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb8f183765b578cd4362f3d3c13e20b8cb70aec46adbcdb750164e91e2bd53e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23ac8b597fcf07378d69cd788cf4f79e236218e22a58ffaa5a72688e650067c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23ac8b597fcf07378d69cd788cf4f79e236218e22a58ffaa5a72688e650067c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8548b649f7fcef84467fc98f1b3aacc576c363a1c83087971aafaf8787734901\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8548b649f7fcef84467fc98f1b3aacc576c363a1c83087971aafaf8787734901\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-m2gbd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:24Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:24 crc kubenswrapper[5017]: I0129 06:36:24.129570 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:24 crc kubenswrapper[5017]: I0129 06:36:24.129630 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:24 crc 
kubenswrapper[5017]: I0129 06:36:24.129639 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:24 crc kubenswrapper[5017]: I0129 06:36:24.129673 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:24 crc kubenswrapper[5017]: I0129 06:36:24.129684 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:24Z","lastTransitionTime":"2026-01-29T06:36:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:24 crc kubenswrapper[5017]: I0129 06:36:24.232613 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:24 crc kubenswrapper[5017]: I0129 06:36:24.232679 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:24 crc kubenswrapper[5017]: I0129 06:36:24.232690 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:24 crc kubenswrapper[5017]: I0129 06:36:24.232710 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:24 crc kubenswrapper[5017]: I0129 06:36:24.232722 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:24Z","lastTransitionTime":"2026-01-29T06:36:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:24 crc kubenswrapper[5017]: I0129 06:36:24.315173 5017 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 07:51:02.957075506 +0000 UTC Jan 29 06:36:24 crc kubenswrapper[5017]: I0129 06:36:24.315325 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xn4bq" Jan 29 06:36:24 crc kubenswrapper[5017]: I0129 06:36:24.315403 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 06:36:24 crc kubenswrapper[5017]: I0129 06:36:24.315414 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 06:36:24 crc kubenswrapper[5017]: I0129 06:36:24.315466 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 06:36:24 crc kubenswrapper[5017]: E0129 06:36:24.315516 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xn4bq" podUID="0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f" Jan 29 06:36:24 crc kubenswrapper[5017]: E0129 06:36:24.315693 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 06:36:24 crc kubenswrapper[5017]: E0129 06:36:24.315725 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 06:36:24 crc kubenswrapper[5017]: E0129 06:36:24.315771 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 06:36:24 crc kubenswrapper[5017]: I0129 06:36:24.332572 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bf710b24256eeb9149ca710dd9fcceae762df89486e0cf0f341c278eadda2fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:24Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:24 crc kubenswrapper[5017]: I0129 06:36:24.336223 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:24 crc kubenswrapper[5017]: I0129 06:36:24.336262 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:24 crc kubenswrapper[5017]: I0129 06:36:24.336274 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:24 crc kubenswrapper[5017]: I0129 06:36:24.336292 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:24 crc kubenswrapper[5017]: I0129 06:36:24.336307 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:24Z","lastTransitionTime":"2026-01-29T06:36:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:24 crc kubenswrapper[5017]: I0129 06:36:24.344972 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9jkcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ae056f0-e054-45da-9638-73074b7c8a3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7429ec96a9c4de8ad2620003f7b591d38fef6a019e78098e9ba927e3fd79252\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17d52fc64784408e447773d44de6e83d32029a77073a0c1ee206a51d81eda62f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T06:36:22Z\\\",\\\"message\\\":\\\"2026-01-29T06:35:37+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_fcc27f29-703c-4a49-b4e2-49357dbaed87\\\\n2026-01-29T06:35:37+00:00 [cnibincopy] Successfully moved files in 
/host/opt/cni/bin/upgrade_fcc27f29-703c-4a49-b4e2-49357dbaed87 to /host/opt/cni/bin/\\\\n2026-01-29T06:35:37Z [verbose] multus-daemon started\\\\n2026-01-29T06:35:37Z [verbose] Readiness Indicator file check\\\\n2026-01-29T06:36:22Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x84hz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9jkcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:24Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:24 crc kubenswrapper[5017]: I0129 06:36:24.362197 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"02dd5727-894c-4693-9bc7-83dd88ce118c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://177a3e8a7f63905262ffb4ea5f98c4fa97a8d48dd2ff075484a44ea87d66a072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7eec8c502e2516637a85d2b83f7f745f9cc5d6a420f15cea465d92f71e0b8e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1fb588776e0029263904fa0ec3afa53e180b30843864f1dee6999e1cca64b18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc8fc0d9e440c8725e279e6649d3c1e2cdc55589f6b5786187602576e4acd835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://749973b1dc2bcf3fc4e35277f0893a240b3571cadc45529da1229cd7ebe1748e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6a16eea25767bb4fea94880a90ae883944934c09ff5ba0ced9641169691169d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd68f113fc859fc67ddb074145ba0c97e3d310e617acaf13f814e69bc9cc7c0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd68f113fc859fc67ddb074145ba0c97e3d310e617acaf13f814e69bc9cc7c0f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T06:35:59Z\\\",\\\"message\\\":\\\"650 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0129 06:35:59.345755 6650 obj_retry.go:386] Retry successful for *v1.Pod openshift-image-registry/node-ca-vwppb after 0 failed attempt(s)\\\\nI0129 06:35:59.346013 6650 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-vwppb\\\\nI0129 06:35:59.345735 6650 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-m2gbd in node crc\\\\nI0129 06:35:59.346035 6650 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-m2gbd after 0 failed attempt(s)\\\\nI0129 06:35:59.346041 6650 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-additional-cni-plugins-m2gbd\\\\nI0129 06:35:59.345688 6650 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-9jkcd\\\\nI0129 06:35:59.346039 6650 base_network_controller_pods.go:916] Annotation values: ip=[10.217.0.3/23] ; mac=0a:58:0a:d9:00:03 ; gw=[10.217.0.1]\\\\nI0129 06:35:59.346059 6650 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0129 06:35:59.346122 6650 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-wqgmk_openshift-ovn-kubernetes(02dd5727-894c-4693-9bc7-83dd88ce118c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1e341b7acf35edc3ee9e5a4fec11e2270d45ea0f05df30543d2c9163b024646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13649086edb9d7ffe20a9040787d55449e6541a51f03432458f5fcb965fe4723\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13649086edb9d7ffe20a9040787d55449e6541a51f03432458f5fcb965fe4723\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wqgmk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:24Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:24 crc kubenswrapper[5017]: I0129 06:36:24.376727 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c67cd79-8431-401b-8f03-9387813b30ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe28ec8d64ea50ecca5d4531440159b5277aff7de80b18813bb91f5c32923326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a82e8858b15c840a63737058d3af382c1a6469d7d1d3ba6e44759f33e55f2543\\\",\\\"i
mage\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a303c0532bb4a555700875ed27e56205d709658417a72f47868fb7bc8291a77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ef12beaf08b33d7694b3ea5a4705e37ec4a5b53f9cf1a7e0dd41a78d99bc8f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27dd6068095b58038216d1837a6b7966c514c7d8a9db5d144b043d918b757394\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T06:35:33Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 06:35:28.003940 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 06:35:28.004898 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1201671994/tls.crt::/tmp/serving-cert-1201671994/tls.key\\\\\\\"\\\\nI0129 06:35:33.575231 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 06:35:33.579536 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 06:35:33.579564 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 06:35:33.579587 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 06:35:33.579594 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 06:35:33.587038 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 06:35:33.587075 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 06:35:33.587080 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 06:35:33.587084 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 06:35:33.587087 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 06:35:33.587091 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 06:35:33.587094 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 06:35:33.587049 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 06:35:33.588897 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c256463618132e5305eb6fed3ef923e6a3c200bbc4b917e7f678e1c78af05084\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f97561ea3b9baa3440e5148060c80d57e9cf17837e7799b057f618d9084e8a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f97561ea3b9baa3440e5148060c80d57e9cf17837e7799b057f618d9084e8a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:24Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:24 crc kubenswrapper[5017]: I0129 06:36:24.387884 5017 status_manager.go:875] "Failed to update status 
for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db3c7435-1911-4b57-871d-721088099b9a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1305407f047a3e06c10dc50b0bd12943e1330ff2d277680046be5a59ddeeaf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://058b4b1e91f79b4a7fa8f73ecae30bea929f40f5f38db0faf800c45c93580ef3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8347eb7e0b582340e93bb623b71854b1b475a52c31cf20a88108a517dc2e2e65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containe
rID\\\":\\\"cri-o://2bfc296503473169a1757c62cdb7f773e9dbb5a812855bb391664b4a91cc2655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bfc296503473169a1757c62cdb7f773e9dbb5a812855bb391664b4a91cc2655\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:14Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:24Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:24 crc kubenswrapper[5017]: I0129 06:36:24.399789 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:24Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:24 crc kubenswrapper[5017]: I0129 06:36:24.413273 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:24Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:24 crc kubenswrapper[5017]: I0129 06:36:24.427160 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b8abb3dbc845f89735c58741f02550053402975d5edfe5e69f2323cc82bb8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7b335d7b6120621d4c94dfd8cdc86561a91d9c84a9481d352042d7728d7f6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:24Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:24 crc kubenswrapper[5017]: I0129 06:36:24.439117 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:24 crc kubenswrapper[5017]: I0129 06:36:24.439192 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:24 crc kubenswrapper[5017]: I0129 06:36:24.439216 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:24 crc kubenswrapper[5017]: I0129 06:36:24.439249 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:24 crc kubenswrapper[5017]: I0129 06:36:24.439272 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:24Z","lastTransitionTime":"2026-01-29T06:36:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:24 crc kubenswrapper[5017]: I0129 06:36:24.439735 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:24Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:24 crc kubenswrapper[5017]: I0129 06:36:24.450380 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-46ch5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42db11f6-649f-486e-83a2-7506fdf51ba2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e7417d93094efd850372fea31fa798b5b9583cbf1daa36ebe18ade52c714304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qm9zl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db927333ed9f10e47997ba0073d04675b880319e46f4c6d46805d171bc2e15f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2
099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qm9zl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-46ch5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:24Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:24 crc kubenswrapper[5017]: I0129 06:36:24.463452 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a2999ed-a16c-49e3-a07a-3b084d6b8616\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ff1f2d36640f2aebd2cd6919987f3f9000d95a8cefac7844a0671da192b1897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1045c0f8fc7c60a5aa6d2e2b449230e4603b08b0b4244c319e296f89b04ab98d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8
b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27acd0133bf699b73534766330da62590b4231608bd96e51bf01b06b5775073d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fc74339c29860f59a280af0f3e21a0f4fa2be4a3e28eb63d90bc63854805d16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:24Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:24 crc kubenswrapper[5017]: I0129 06:36:24.475032 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ce95c49452e7355245f4ec2716c723873a116e2523b8192efe93cfb369260f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:24Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:24 crc kubenswrapper[5017]: I0129 06:36:24.484997 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vwppb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb597cc2-fff7-4d90-a43e-958791d83324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f41b8b485793972343992b2ec30940b852ff5f355e045225e8813683e9f9a3c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g28w2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vwppb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:24Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:24 crc kubenswrapper[5017]: I0129 06:36:24.499227 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-895pl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2672ef63-7861-4c3d-a1b4-03cc9d18f8e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://884bf08569779a1b94a927f1250657ab4d855e78f451a826856022a7c62cd6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5qdt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd299f15e4338652a37e1e3f09560a50e6c59d7bf29e827462d060a56294438\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5qdt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-895pl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:24Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:24 crc kubenswrapper[5017]: I0129 06:36:24.509774 5017 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-qwq46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4d3122b-b4b4-41ac-896a-566afdcda936\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45817069aa7e09fd97b77922f9b5a651f2a067991115021c0a85159a93d523af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z89l9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qwq46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:24Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:24 crc kubenswrapper[5017]: I0129 06:36:24.531215 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-m2gbd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4036e581-21bf-4ea0-aaf5-84ab8a841888\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ad091986feb992d2ef247a76476d77d58cb487cf0baa60f81e5d3ddf0d30c3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7399e8e88798a89139226e395d22a2aa61370296a3789f63d766f42ea1c9f4df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7399e8e88798a89139226e395d22a2aa61370296a3789f63d766f42ea1c9f4df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9418ae41c02e41fb3aeb7ce97e7d7b91f9a193ac23cbfe692cc79b071aa3adca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9418ae41c02e41fb3aeb7ce97e7d7b91f9a193ac23cbfe692cc79b071aa3adca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1065c2c8660d096dba79585cd7d954ae31491d258129e132b1e0cb1218bc849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1065c2c8660d096dba79585cd7d954ae31491d258129e132b1e0cb1218bc849\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb8f183765b578cd4362f3d3c13e20b8cb70aec46adbcdb750164e91e2bd53e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb8f183765b578cd4362f3d3c13e20b8cb70aec46adbcdb750164e91e2bd53e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23ac8b597fcf07378d69cd788cf4f79e236218e22a58ffaa5a72688e650067c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23ac8b597fcf07378d69cd788cf4f79e236218e22a58ffaa5a72688e650067c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8548b649f7fcef84467fc98f1b3aacc576c363a1c83087971aafaf8787734901\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8548b649f7fcef84467fc98f1b3aacc576c363a1c83087971aafaf8787734901\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-m2gbd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:24Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:24 crc kubenswrapper[5017]: I0129 06:36:24.542088 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xn4bq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgkx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgkx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xn4bq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:24Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:24 crc kubenswrapper[5017]: I0129 06:36:24.542945 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:24 crc kubenswrapper[5017]: I0129 06:36:24.543035 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:24 crc kubenswrapper[5017]: I0129 06:36:24.543047 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 29 06:36:24 crc kubenswrapper[5017]: I0129 06:36:24.543070 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:24 crc kubenswrapper[5017]: I0129 06:36:24.543081 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:24Z","lastTransitionTime":"2026-01-29T06:36:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:24 crc kubenswrapper[5017]: I0129 06:36:24.646287 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:24 crc kubenswrapper[5017]: I0129 06:36:24.646341 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:24 crc kubenswrapper[5017]: I0129 06:36:24.646354 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:24 crc kubenswrapper[5017]: I0129 06:36:24.646375 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:24 crc kubenswrapper[5017]: I0129 06:36:24.646388 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:24Z","lastTransitionTime":"2026-01-29T06:36:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:24 crc kubenswrapper[5017]: I0129 06:36:24.748591 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:24 crc kubenswrapper[5017]: I0129 06:36:24.749026 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:24 crc kubenswrapper[5017]: I0129 06:36:24.749041 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:24 crc kubenswrapper[5017]: I0129 06:36:24.749058 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:24 crc kubenswrapper[5017]: I0129 06:36:24.749070 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:24Z","lastTransitionTime":"2026-01-29T06:36:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:24 crc kubenswrapper[5017]: I0129 06:36:24.851259 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:24 crc kubenswrapper[5017]: I0129 06:36:24.851338 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:24 crc kubenswrapper[5017]: I0129 06:36:24.851363 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:24 crc kubenswrapper[5017]: I0129 06:36:24.851392 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:24 crc kubenswrapper[5017]: I0129 06:36:24.851413 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:24Z","lastTransitionTime":"2026-01-29T06:36:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:24 crc kubenswrapper[5017]: I0129 06:36:24.955223 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:24 crc kubenswrapper[5017]: I0129 06:36:24.955291 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:24 crc kubenswrapper[5017]: I0129 06:36:24.955310 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:24 crc kubenswrapper[5017]: I0129 06:36:24.955335 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:24 crc kubenswrapper[5017]: I0129 06:36:24.955385 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:24Z","lastTransitionTime":"2026-01-29T06:36:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:25 crc kubenswrapper[5017]: I0129 06:36:25.057346 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:25 crc kubenswrapper[5017]: I0129 06:36:25.057392 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:25 crc kubenswrapper[5017]: I0129 06:36:25.057403 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:25 crc kubenswrapper[5017]: I0129 06:36:25.057416 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:25 crc kubenswrapper[5017]: I0129 06:36:25.057425 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:25Z","lastTransitionTime":"2026-01-29T06:36:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:25 crc kubenswrapper[5017]: I0129 06:36:25.159788 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:25 crc kubenswrapper[5017]: I0129 06:36:25.159838 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:25 crc kubenswrapper[5017]: I0129 06:36:25.159851 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:25 crc kubenswrapper[5017]: I0129 06:36:25.159870 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:25 crc kubenswrapper[5017]: I0129 06:36:25.159882 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:25Z","lastTransitionTime":"2026-01-29T06:36:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:25 crc kubenswrapper[5017]: I0129 06:36:25.263297 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:25 crc kubenswrapper[5017]: I0129 06:36:25.263360 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:25 crc kubenswrapper[5017]: I0129 06:36:25.263368 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:25 crc kubenswrapper[5017]: I0129 06:36:25.263385 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:25 crc kubenswrapper[5017]: I0129 06:36:25.263397 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:25Z","lastTransitionTime":"2026-01-29T06:36:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:25 crc kubenswrapper[5017]: I0129 06:36:25.316340 5017 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 22:58:47.894292169 +0000 UTC Jan 29 06:36:25 crc kubenswrapper[5017]: I0129 06:36:25.366633 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:25 crc kubenswrapper[5017]: I0129 06:36:25.366681 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:25 crc kubenswrapper[5017]: I0129 06:36:25.366694 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:25 crc kubenswrapper[5017]: I0129 06:36:25.366715 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:25 crc kubenswrapper[5017]: I0129 06:36:25.366728 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:25Z","lastTransitionTime":"2026-01-29T06:36:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:25 crc kubenswrapper[5017]: I0129 06:36:25.469374 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:25 crc kubenswrapper[5017]: I0129 06:36:25.469418 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:25 crc kubenswrapper[5017]: I0129 06:36:25.469431 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:25 crc kubenswrapper[5017]: I0129 06:36:25.469447 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:25 crc kubenswrapper[5017]: I0129 06:36:25.469457 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:25Z","lastTransitionTime":"2026-01-29T06:36:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:25 crc kubenswrapper[5017]: I0129 06:36:25.573487 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:25 crc kubenswrapper[5017]: I0129 06:36:25.573528 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:25 crc kubenswrapper[5017]: I0129 06:36:25.573539 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:25 crc kubenswrapper[5017]: I0129 06:36:25.573558 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:25 crc kubenswrapper[5017]: I0129 06:36:25.573568 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:25Z","lastTransitionTime":"2026-01-29T06:36:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:25 crc kubenswrapper[5017]: I0129 06:36:25.676261 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:25 crc kubenswrapper[5017]: I0129 06:36:25.676301 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:25 crc kubenswrapper[5017]: I0129 06:36:25.676310 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:25 crc kubenswrapper[5017]: I0129 06:36:25.676323 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:25 crc kubenswrapper[5017]: I0129 06:36:25.676332 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:25Z","lastTransitionTime":"2026-01-29T06:36:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:25 crc kubenswrapper[5017]: I0129 06:36:25.778228 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:25 crc kubenswrapper[5017]: I0129 06:36:25.778277 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:25 crc kubenswrapper[5017]: I0129 06:36:25.778287 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:25 crc kubenswrapper[5017]: I0129 06:36:25.778307 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:25 crc kubenswrapper[5017]: I0129 06:36:25.778320 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:25Z","lastTransitionTime":"2026-01-29T06:36:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:25 crc kubenswrapper[5017]: I0129 06:36:25.883905 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:25 crc kubenswrapper[5017]: I0129 06:36:25.883974 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:25 crc kubenswrapper[5017]: I0129 06:36:25.883987 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:25 crc kubenswrapper[5017]: I0129 06:36:25.884008 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:25 crc kubenswrapper[5017]: I0129 06:36:25.884025 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:25Z","lastTransitionTime":"2026-01-29T06:36:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:25 crc kubenswrapper[5017]: I0129 06:36:25.986624 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:25 crc kubenswrapper[5017]: I0129 06:36:25.986659 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:25 crc kubenswrapper[5017]: I0129 06:36:25.986668 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:25 crc kubenswrapper[5017]: I0129 06:36:25.986682 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:25 crc kubenswrapper[5017]: I0129 06:36:25.986692 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:25Z","lastTransitionTime":"2026-01-29T06:36:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:26 crc kubenswrapper[5017]: I0129 06:36:26.089977 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:26 crc kubenswrapper[5017]: I0129 06:36:26.090017 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:26 crc kubenswrapper[5017]: I0129 06:36:26.090028 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:26 crc kubenswrapper[5017]: I0129 06:36:26.090046 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:26 crc kubenswrapper[5017]: I0129 06:36:26.090059 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:26Z","lastTransitionTime":"2026-01-29T06:36:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:26 crc kubenswrapper[5017]: I0129 06:36:26.193385 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:26 crc kubenswrapper[5017]: I0129 06:36:26.193446 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:26 crc kubenswrapper[5017]: I0129 06:36:26.193464 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:26 crc kubenswrapper[5017]: I0129 06:36:26.193488 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:26 crc kubenswrapper[5017]: I0129 06:36:26.193508 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:26Z","lastTransitionTime":"2026-01-29T06:36:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:26 crc kubenswrapper[5017]: I0129 06:36:26.296135 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:26 crc kubenswrapper[5017]: I0129 06:36:26.296213 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:26 crc kubenswrapper[5017]: I0129 06:36:26.296222 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:26 crc kubenswrapper[5017]: I0129 06:36:26.296244 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:26 crc kubenswrapper[5017]: I0129 06:36:26.296258 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:26Z","lastTransitionTime":"2026-01-29T06:36:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:26 crc kubenswrapper[5017]: I0129 06:36:26.315611 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 06:36:26 crc kubenswrapper[5017]: I0129 06:36:26.315708 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 06:36:26 crc kubenswrapper[5017]: E0129 06:36:26.315776 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 06:36:26 crc kubenswrapper[5017]: I0129 06:36:26.315797 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-xn4bq" Jan 29 06:36:26 crc kubenswrapper[5017]: I0129 06:36:26.315794 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 06:36:26 crc kubenswrapper[5017]: E0129 06:36:26.315940 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 06:36:26 crc kubenswrapper[5017]: E0129 06:36:26.316067 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xn4bq" podUID="0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f" Jan 29 06:36:26 crc kubenswrapper[5017]: E0129 06:36:26.316153 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 06:36:26 crc kubenswrapper[5017]: I0129 06:36:26.316518 5017 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 10:52:13.910080664 +0000 UTC Jan 29 06:36:26 crc kubenswrapper[5017]: I0129 06:36:26.399551 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:26 crc kubenswrapper[5017]: I0129 06:36:26.399612 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:26 crc kubenswrapper[5017]: I0129 06:36:26.399644 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:26 crc kubenswrapper[5017]: I0129 06:36:26.399663 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:26 crc kubenswrapper[5017]: I0129 06:36:26.399676 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:26Z","lastTransitionTime":"2026-01-29T06:36:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:26 crc kubenswrapper[5017]: I0129 06:36:26.503064 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:26 crc kubenswrapper[5017]: I0129 06:36:26.503128 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:26 crc kubenswrapper[5017]: I0129 06:36:26.503148 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:26 crc kubenswrapper[5017]: I0129 06:36:26.503179 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:26 crc kubenswrapper[5017]: I0129 06:36:26.503198 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:26Z","lastTransitionTime":"2026-01-29T06:36:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:26 crc kubenswrapper[5017]: I0129 06:36:26.607519 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:26 crc kubenswrapper[5017]: I0129 06:36:26.607585 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:26 crc kubenswrapper[5017]: I0129 06:36:26.607604 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:26 crc kubenswrapper[5017]: I0129 06:36:26.607631 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:26 crc kubenswrapper[5017]: I0129 06:36:26.607650 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:26Z","lastTransitionTime":"2026-01-29T06:36:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:26 crc kubenswrapper[5017]: I0129 06:36:26.712160 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:26 crc kubenswrapper[5017]: I0129 06:36:26.712245 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:26 crc kubenswrapper[5017]: I0129 06:36:26.712263 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:26 crc kubenswrapper[5017]: I0129 06:36:26.712291 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:26 crc kubenswrapper[5017]: I0129 06:36:26.712308 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:26Z","lastTransitionTime":"2026-01-29T06:36:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:26 crc kubenswrapper[5017]: I0129 06:36:26.817098 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:26 crc kubenswrapper[5017]: I0129 06:36:26.817196 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:26 crc kubenswrapper[5017]: I0129 06:36:26.817223 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:26 crc kubenswrapper[5017]: I0129 06:36:26.817264 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:26 crc kubenswrapper[5017]: I0129 06:36:26.817286 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:26Z","lastTransitionTime":"2026-01-29T06:36:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:26 crc kubenswrapper[5017]: I0129 06:36:26.920851 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:26 crc kubenswrapper[5017]: I0129 06:36:26.920908 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:26 crc kubenswrapper[5017]: I0129 06:36:26.920923 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:26 crc kubenswrapper[5017]: I0129 06:36:26.920942 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:26 crc kubenswrapper[5017]: I0129 06:36:26.920972 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:26Z","lastTransitionTime":"2026-01-29T06:36:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:27 crc kubenswrapper[5017]: I0129 06:36:27.024303 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:27 crc kubenswrapper[5017]: I0129 06:36:27.024356 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:27 crc kubenswrapper[5017]: I0129 06:36:27.024368 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:27 crc kubenswrapper[5017]: I0129 06:36:27.024386 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:27 crc kubenswrapper[5017]: I0129 06:36:27.024398 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:27Z","lastTransitionTime":"2026-01-29T06:36:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:27 crc kubenswrapper[5017]: I0129 06:36:27.127852 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:27 crc kubenswrapper[5017]: I0129 06:36:27.127942 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:27 crc kubenswrapper[5017]: I0129 06:36:27.127972 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:27 crc kubenswrapper[5017]: I0129 06:36:27.127993 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:27 crc kubenswrapper[5017]: I0129 06:36:27.128013 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:27Z","lastTransitionTime":"2026-01-29T06:36:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:27 crc kubenswrapper[5017]: I0129 06:36:27.231402 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:27 crc kubenswrapper[5017]: I0129 06:36:27.231454 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:27 crc kubenswrapper[5017]: I0129 06:36:27.231465 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:27 crc kubenswrapper[5017]: I0129 06:36:27.231484 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:27 crc kubenswrapper[5017]: I0129 06:36:27.231498 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:27Z","lastTransitionTime":"2026-01-29T06:36:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:27 crc kubenswrapper[5017]: I0129 06:36:27.317029 5017 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 15:01:34.098087379 +0000 UTC Jan 29 06:36:27 crc kubenswrapper[5017]: I0129 06:36:27.334061 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:27 crc kubenswrapper[5017]: I0129 06:36:27.334160 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:27 crc kubenswrapper[5017]: I0129 06:36:27.334183 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:27 crc kubenswrapper[5017]: I0129 06:36:27.334221 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:27 crc kubenswrapper[5017]: I0129 06:36:27.334244 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:27Z","lastTransitionTime":"2026-01-29T06:36:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:27 crc kubenswrapper[5017]: I0129 06:36:27.437935 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:27 crc kubenswrapper[5017]: I0129 06:36:27.438079 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:27 crc kubenswrapper[5017]: I0129 06:36:27.438096 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:27 crc kubenswrapper[5017]: I0129 06:36:27.438125 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:27 crc kubenswrapper[5017]: I0129 06:36:27.438145 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:27Z","lastTransitionTime":"2026-01-29T06:36:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:27 crc kubenswrapper[5017]: I0129 06:36:27.540593 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:27 crc kubenswrapper[5017]: I0129 06:36:27.540641 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:27 crc kubenswrapper[5017]: I0129 06:36:27.540655 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:27 crc kubenswrapper[5017]: I0129 06:36:27.540671 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:27 crc kubenswrapper[5017]: I0129 06:36:27.540682 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:27Z","lastTransitionTime":"2026-01-29T06:36:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:27 crc kubenswrapper[5017]: I0129 06:36:27.643465 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:27 crc kubenswrapper[5017]: I0129 06:36:27.643530 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:27 crc kubenswrapper[5017]: I0129 06:36:27.643544 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:27 crc kubenswrapper[5017]: I0129 06:36:27.643563 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:27 crc kubenswrapper[5017]: I0129 06:36:27.643575 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:27Z","lastTransitionTime":"2026-01-29T06:36:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:27 crc kubenswrapper[5017]: I0129 06:36:27.746306 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:27 crc kubenswrapper[5017]: I0129 06:36:27.746359 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:27 crc kubenswrapper[5017]: I0129 06:36:27.746369 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:27 crc kubenswrapper[5017]: I0129 06:36:27.746389 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:27 crc kubenswrapper[5017]: I0129 06:36:27.746401 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:27Z","lastTransitionTime":"2026-01-29T06:36:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:27 crc kubenswrapper[5017]: I0129 06:36:27.849751 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:27 crc kubenswrapper[5017]: I0129 06:36:27.849849 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:27 crc kubenswrapper[5017]: I0129 06:36:27.849886 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:27 crc kubenswrapper[5017]: I0129 06:36:27.849922 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:27 crc kubenswrapper[5017]: I0129 06:36:27.849944 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:27Z","lastTransitionTime":"2026-01-29T06:36:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:27 crc kubenswrapper[5017]: I0129 06:36:27.953715 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:27 crc kubenswrapper[5017]: I0129 06:36:27.953806 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:27 crc kubenswrapper[5017]: I0129 06:36:27.953830 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:27 crc kubenswrapper[5017]: I0129 06:36:27.953863 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:27 crc kubenswrapper[5017]: I0129 06:36:27.953886 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:27Z","lastTransitionTime":"2026-01-29T06:36:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:28 crc kubenswrapper[5017]: I0129 06:36:28.057365 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:28 crc kubenswrapper[5017]: I0129 06:36:28.057420 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:28 crc kubenswrapper[5017]: I0129 06:36:28.057434 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:28 crc kubenswrapper[5017]: I0129 06:36:28.057460 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:28 crc kubenswrapper[5017]: I0129 06:36:28.057474 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:28Z","lastTransitionTime":"2026-01-29T06:36:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:28 crc kubenswrapper[5017]: I0129 06:36:28.159988 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:28 crc kubenswrapper[5017]: I0129 06:36:28.160065 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:28 crc kubenswrapper[5017]: I0129 06:36:28.160087 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:28 crc kubenswrapper[5017]: I0129 06:36:28.160123 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:28 crc kubenswrapper[5017]: I0129 06:36:28.160146 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:28Z","lastTransitionTime":"2026-01-29T06:36:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:28 crc kubenswrapper[5017]: I0129 06:36:28.263776 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:28 crc kubenswrapper[5017]: I0129 06:36:28.263867 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:28 crc kubenswrapper[5017]: I0129 06:36:28.263894 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:28 crc kubenswrapper[5017]: I0129 06:36:28.263927 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:28 crc kubenswrapper[5017]: I0129 06:36:28.263949 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:28Z","lastTransitionTime":"2026-01-29T06:36:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:28 crc kubenswrapper[5017]: I0129 06:36:28.316284 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xn4bq" Jan 29 06:36:28 crc kubenswrapper[5017]: I0129 06:36:28.316309 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 06:36:28 crc kubenswrapper[5017]: I0129 06:36:28.316420 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 06:36:28 crc kubenswrapper[5017]: I0129 06:36:28.316777 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 06:36:28 crc kubenswrapper[5017]: E0129 06:36:28.316920 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 06:36:28 crc kubenswrapper[5017]: E0129 06:36:28.317017 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 06:36:28 crc kubenswrapper[5017]: E0129 06:36:28.317100 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xn4bq" podUID="0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f" Jan 29 06:36:28 crc kubenswrapper[5017]: I0129 06:36:28.317157 5017 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 00:22:09.833661559 +0000 UTC Jan 29 06:36:28 crc kubenswrapper[5017]: E0129 06:36:28.317204 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 06:36:28 crc kubenswrapper[5017]: I0129 06:36:28.317258 5017 scope.go:117] "RemoveContainer" containerID="dd68f113fc859fc67ddb074145ba0c97e3d310e617acaf13f814e69bc9cc7c0f" Jan 29 06:36:28 crc kubenswrapper[5017]: I0129 06:36:28.366886 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:28 crc kubenswrapper[5017]: I0129 06:36:28.366933 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:28 crc kubenswrapper[5017]: I0129 06:36:28.366946 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:28 crc kubenswrapper[5017]: I0129 06:36:28.366997 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:28 crc kubenswrapper[5017]: I0129 06:36:28.367029 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:28Z","lastTransitionTime":"2026-01-29T06:36:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:28 crc kubenswrapper[5017]: I0129 06:36:28.469273 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:28 crc kubenswrapper[5017]: I0129 06:36:28.469333 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:28 crc kubenswrapper[5017]: I0129 06:36:28.469348 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:28 crc kubenswrapper[5017]: I0129 06:36:28.469365 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:28 crc kubenswrapper[5017]: I0129 06:36:28.469382 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:28Z","lastTransitionTime":"2026-01-29T06:36:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:28 crc kubenswrapper[5017]: I0129 06:36:28.573022 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:28 crc kubenswrapper[5017]: I0129 06:36:28.573068 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:28 crc kubenswrapper[5017]: I0129 06:36:28.573079 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:28 crc kubenswrapper[5017]: I0129 06:36:28.573103 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:28 crc kubenswrapper[5017]: I0129 06:36:28.573117 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:28Z","lastTransitionTime":"2026-01-29T06:36:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:28 crc kubenswrapper[5017]: I0129 06:36:28.675721 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:28 crc kubenswrapper[5017]: I0129 06:36:28.675810 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:28 crc kubenswrapper[5017]: I0129 06:36:28.675830 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:28 crc kubenswrapper[5017]: I0129 06:36:28.675862 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:28 crc kubenswrapper[5017]: I0129 06:36:28.675889 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:28Z","lastTransitionTime":"2026-01-29T06:36:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:28 crc kubenswrapper[5017]: I0129 06:36:28.778737 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:28 crc kubenswrapper[5017]: I0129 06:36:28.778827 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:28 crc kubenswrapper[5017]: I0129 06:36:28.778848 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:28 crc kubenswrapper[5017]: I0129 06:36:28.778882 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:28 crc kubenswrapper[5017]: I0129 06:36:28.778906 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:28Z","lastTransitionTime":"2026-01-29T06:36:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:28 crc kubenswrapper[5017]: I0129 06:36:28.868217 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wqgmk_02dd5727-894c-4693-9bc7-83dd88ce118c/ovnkube-controller/2.log" Jan 29 06:36:28 crc kubenswrapper[5017]: I0129 06:36:28.870834 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" event={"ID":"02dd5727-894c-4693-9bc7-83dd88ce118c","Type":"ContainerStarted","Data":"62449c59cb465c0cc04ebf89eb13daf672b92da42f90dad1f43ac106cf837b48"} Jan 29 06:36:28 crc kubenswrapper[5017]: I0129 06:36:28.871415 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" Jan 29 06:36:28 crc kubenswrapper[5017]: I0129 06:36:28.883147 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:28 crc kubenswrapper[5017]: I0129 06:36:28.883227 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:28 crc kubenswrapper[5017]: I0129 06:36:28.883240 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:28 crc kubenswrapper[5017]: I0129 06:36:28.883260 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:28 crc kubenswrapper[5017]: I0129 06:36:28.883272 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:28Z","lastTransitionTime":"2026-01-29T06:36:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:28 crc kubenswrapper[5017]: I0129 06:36:28.895938 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bf710b24256eeb9149ca710dd9fcceae762df89486e0cf0f341c278eadda2fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:28Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:28 crc kubenswrapper[5017]: I0129 06:36:28.915944 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9jkcd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ae056f0-e054-45da-9638-73074b7c8a3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7429ec96a9c4de8ad2620003f7b591d38fef6a019e78098e9ba927e3fd79252\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17d52fc64784408e447773d44de6e83d32029a77073a0c1ee206a51d81eda62f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T06:36:22Z\\\",\\\"message\\\":\\\"2026-01-29T06:35:37+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_fcc27f29-703c-4a49-b4e2-49357dbaed87\\\\n2026-01-29T06:35:37+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_fcc27f29-703c-4a49-b4e2-49357dbaed87 to /host/opt/cni/bin/\\\\n2026-01-29T06:35:37Z [verbose] multus-daemon started\\\\n2026-01-29T06:35:37Z [verbose] Readiness Indicator file check\\\\n2026-01-29T06:36:22Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x84hz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9jkcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:28Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:28 crc kubenswrapper[5017]: I0129 06:36:28.934312 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02dd5727-894c-4693-9bc7-83dd88ce118c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://177a3e8a7f63905262ffb4ea5f98c4fa97a8d48dd2ff075484a44ea87d66a072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7eec8c502e2516637a85d2b83f7f745f9cc5d6a420f15cea465d92f71e0b8e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1fb588776e0029263904fa0ec3afa53e180b30843864f1dee6999e1cca64b18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc8fc0d9e440c8725e279e6649d3c1e2cdc55589f6b5786187602576e4acd835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://749973b1dc2bcf3fc4e35277f0893a240b3571cadc45529da1229cd7ebe1748e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6a16eea25767bb4fea94880a90ae883944934c09ff5ba0ced9641169691169d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62449c59cb465c0cc04ebf89eb13daf672b92da42f90dad1f43ac106cf837b48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd68f113fc859fc67ddb074145ba0c97e3d310e617acaf13f814e69bc9cc7c0f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T06:35:59Z\\\",\\\"message\\\":\\\"650 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0129 06:35:59.345755 6650 obj_retry.go:386] Retry successful for *v1.Pod openshift-image-registry/node-ca-vwppb after 0 failed attempt(s)\\\\nI0129 06:35:59.346013 6650 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-vwppb\\\\nI0129 06:35:59.345735 6650 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-m2gbd in node crc\\\\nI0129 06:35:59.346035 6650 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-m2gbd after 0 failed attempt(s)\\\\nI0129 06:35:59.346041 6650 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-additional-cni-plugins-m2gbd\\\\nI0129 06:35:59.345688 6650 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-9jkcd\\\\nI0129 06:35:59.346039 6650 base_network_controller_pods.go:916] Annotation values: ip=[10.217.0.3/23] ; mac=0a:58:0a:d9:00:03 ; gw=[10.217.0.1]\\\\nI0129 06:35:59.346059 6650 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0129 06:35:59.346122 6650 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:36:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1e341b7acf35edc3ee9e5a4fec11e2270d45ea0f05df30543d2c9163b024646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://13649086edb9d7ffe20a9040787d55449e6541a51f03432458f5fcb965fe4723\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13649086edb9d7ffe20a9040787d55449e6541a51f03432458f5fcb965fe4723\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wqgmk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:28Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:28 crc kubenswrapper[5017]: I0129 06:36:28.945262 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-46ch5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42db11f6-649f-486e-83a2-7506fdf51ba2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e7417d93094efd850372fea31fa798b5b9583cbf1daa36ebe18ade52c714304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qm9zl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db927333ed9f10e47997ba0073d04675b880319e46f4c6d46805d171bc2e15f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qm9zl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-46ch5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:28Z is after 2025-08-24T17:21:41Z" Jan 29 
06:36:28 crc kubenswrapper[5017]: I0129 06:36:28.959846 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c67cd79-8431-401b-8f03-9387813b30ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe28ec8d64ea50ecca5d4531440159b5277aff7de80b18813bb91f5c32923326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a82e8858b15c840a63737058d3af382c1a6469d7d1d3ba6e44759f33e55f2543\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a303c0532bb4a555700875ed27e56205d709658417a72f47868fb7bc8291a77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ef12beaf08b33d7694b3ea5a4705e37ec4a5b53f9cf1a7e0dd41a78d99bc8f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27dd6068095b58038216d1837a6b7966c514c7d8a9db5d144b043d918b757394\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T06:35:33Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 06:35:28.003940 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 06:35:28.004898 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1201671994/tls.crt::/tmp/serving-cert-1201671994/tls.key\\\\\\\"\\\\nI0129 06:35:33.575231 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 06:35:33.579536 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 06:35:33.579564 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 06:35:33.579587 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 06:35:33.579594 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 06:35:33.587038 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 06:35:33.587075 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 06:35:33.587080 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 06:35:33.587084 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 06:35:33.587087 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 06:35:33.587091 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 06:35:33.587094 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 06:35:33.587049 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 06:35:33.588897 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c256463618132e5305eb6fed3ef923e6a3c200bbc4b917e7f678e1c78af05084\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f97561ea3b9baa3440e5148060c80d57e9cf17837e7799b057f618d9084e8a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f97561ea3b9baa3440e5148060c80d57e9cf17837e7799b057f618d9084e8a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:28Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:28 crc kubenswrapper[5017]: I0129 06:36:28.976225 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"db3c7435-1911-4b57-871d-721088099b9a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1305407f047a3e06c10dc50b0bd12943e1330ff2d277680046be5a59ddeeaf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://058b4b1e91f79b4a7fa8f73ecae30bea929f40f5f38db0faf800c45c93580ef3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8347eb7e0b582340e93bb623b71854b1b475a52c31cf20a88108a517dc2e2e65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bfc296503473169a1757c62cdb7f773e9dbb5a812855bb391664b4a91cc2655\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bfc296503473169a1757c62cdb7f773e9dbb5a812855bb391664b4a91cc2655\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:14Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:28Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:28 crc kubenswrapper[5017]: I0129 06:36:28.987042 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:28 crc kubenswrapper[5017]: I0129 06:36:28.987112 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:28 crc kubenswrapper[5017]: I0129 06:36:28.987127 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:28 crc kubenswrapper[5017]: I0129 06:36:28.987155 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:28 crc kubenswrapper[5017]: I0129 06:36:28.987174 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:28Z","lastTransitionTime":"2026-01-29T06:36:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:28 crc kubenswrapper[5017]: I0129 06:36:28.992078 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:28Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:29 crc kubenswrapper[5017]: I0129 06:36:29.009766 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:29Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:29 crc kubenswrapper[5017]: I0129 06:36:29.024106 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b8abb3dbc845f89735c58741f02550053402975d5edfe5e69f2323cc82bb8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7b335d7b6120621d4c94dfd8cdc86561a91d9c84a9481d352042d7728d7f6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:29Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:29 crc kubenswrapper[5017]: I0129 06:36:29.047460 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:29Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:29 crc kubenswrapper[5017]: I0129 06:36:29.060866 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a2999ed-a16c-49e3-a07a-3b084d6b8616\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ff1f2d36640f2aebd2cd6919987f3f9000d95a8cefac7844a0671da192b1897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1045c0f8fc7c60a5aa6d2e2b449230e4603b08b0b4244c319e296f89b04ab98d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27acd0133bf699b73534766330da62590b4231608bd96e51bf01b06b5775073d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fc74339c29860f59a280af0f3e21a0f4fa2be4a3e28eb63d90bc63854805d16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:29Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:29 crc kubenswrapper[5017]: I0129 06:36:29.075189 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ce95c49452e7355245f4ec2716c723873a116e2523b8192efe93cfb369260f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:29Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:29 crc kubenswrapper[5017]: I0129 06:36:29.086241 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vwppb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb597cc2-fff7-4d90-a43e-958791d83324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f41b8b485793972343992b2ec30940b852ff5f355e045225e8813683e9f9a3c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g28w2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vwppb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:29Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:29 crc kubenswrapper[5017]: I0129 06:36:29.090697 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:29 crc kubenswrapper[5017]: I0129 06:36:29.090748 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:29 crc kubenswrapper[5017]: I0129 06:36:29.090758 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:29 crc kubenswrapper[5017]: I0129 06:36:29.090777 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:29 crc kubenswrapper[5017]: I0129 06:36:29.090789 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:29Z","lastTransitionTime":"2026-01-29T06:36:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:29 crc kubenswrapper[5017]: I0129 06:36:29.100742 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-895pl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2672ef63-7861-4c3d-a1b4-03cc9d18f8e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://884bf08569779a1b94a927f1250657ab4d855e78f451a826856022a7c62cd6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5qdt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd299f15e4338652a37e1e3f09560a50e6c59d7bf29e827462d060a56294438\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5qdt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-895pl\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:29Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:29 crc kubenswrapper[5017]: I0129 06:36:29.115361 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qwq46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4d3122b-b4b4-41ac-896a-566afdcda936\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45817069aa7e09fd97b77922f9b5a651f2a067991115021c0a85159a93d523af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z89l9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qwq46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:29Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:29 crc kubenswrapper[5017]: I0129 06:36:29.140689 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-m2gbd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4036e581-21bf-4ea0-aaf5-84ab8a841888\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ad091986feb992d2ef247a76476d77d58cb487cf0baa60f81e5d3ddf0d30c3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7399e8e88798a89139226e395d22a2aa61370296a3789f63d766f42ea1c9f4df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7399e8e88798a89139226e395d22a2aa61370296a3789f63d766f42ea1c9f4df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9418ae41c02e41fb3aeb7ce97e7d7b91f9a193ac23cbfe692cc79b071aa3adca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9418ae41c02e41fb3aeb7ce97e7d7b91f9a193ac23cbfe692cc79b071aa3adca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1065c2c8660d096dba79585cd7d954ae31491d258129e132b1e0cb1218bc849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1065c2c8660d096dba79585cd7d954ae31491d258129e132b1e0cb1218bc849\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb8f183765b578cd4362f3d3c13e20b8cb70aec46adbcdb750164e91e2bd53e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb8f183765b578cd4362f3d3c13e20b8cb70aec46adbcdb750164e91e2bd53e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23ac8b597fcf07378d69cd788cf4f79e236218e22a58ffaa5a72688e650067c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23ac8b597fcf07378d69cd788cf4f79e236218e22a58ffaa5a72688e650067c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8548b649f7fcef84467fc98f1b3aacc576c363a1c83087971aafaf8787734901\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8548b649f7fcef84467fc98f1b3aacc576c363a1c83087971aafaf8787734901\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-m2gbd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:29Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:29 crc kubenswrapper[5017]: I0129 06:36:29.154138 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xn4bq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgkx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgkx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xn4bq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:29Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:29 crc kubenswrapper[5017]: I0129 06:36:29.194508 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:29 crc kubenswrapper[5017]: I0129 06:36:29.195206 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:29 crc kubenswrapper[5017]: I0129 06:36:29.195518 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 29 06:36:29 crc kubenswrapper[5017]: I0129 06:36:29.195815 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:29 crc kubenswrapper[5017]: I0129 06:36:29.196067 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:29Z","lastTransitionTime":"2026-01-29T06:36:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:29 crc kubenswrapper[5017]: I0129 06:36:29.299917 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:29 crc kubenswrapper[5017]: I0129 06:36:29.300303 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:29 crc kubenswrapper[5017]: I0129 06:36:29.300384 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:29 crc kubenswrapper[5017]: I0129 06:36:29.300435 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:29 crc kubenswrapper[5017]: I0129 06:36:29.300457 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:29Z","lastTransitionTime":"2026-01-29T06:36:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:29 crc kubenswrapper[5017]: I0129 06:36:29.317953 5017 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 07:54:08.848029027 +0000 UTC Jan 29 06:36:29 crc kubenswrapper[5017]: I0129 06:36:29.402763 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:29 crc kubenswrapper[5017]: I0129 06:36:29.403289 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:29 crc kubenswrapper[5017]: I0129 06:36:29.403417 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:29 crc kubenswrapper[5017]: I0129 06:36:29.403500 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:29 crc kubenswrapper[5017]: I0129 06:36:29.403564 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:29Z","lastTransitionTime":"2026-01-29T06:36:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:29 crc kubenswrapper[5017]: I0129 06:36:29.507265 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:29 crc kubenswrapper[5017]: I0129 06:36:29.507347 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:29 crc kubenswrapper[5017]: I0129 06:36:29.507362 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:29 crc kubenswrapper[5017]: I0129 06:36:29.507379 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:29 crc kubenswrapper[5017]: I0129 06:36:29.507390 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:29Z","lastTransitionTime":"2026-01-29T06:36:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:29 crc kubenswrapper[5017]: I0129 06:36:29.611032 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:29 crc kubenswrapper[5017]: I0129 06:36:29.611091 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:29 crc kubenswrapper[5017]: I0129 06:36:29.611104 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:29 crc kubenswrapper[5017]: I0129 06:36:29.611125 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:29 crc kubenswrapper[5017]: I0129 06:36:29.611143 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:29Z","lastTransitionTime":"2026-01-29T06:36:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:29 crc kubenswrapper[5017]: I0129 06:36:29.713991 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:29 crc kubenswrapper[5017]: I0129 06:36:29.714044 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:29 crc kubenswrapper[5017]: I0129 06:36:29.714057 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:29 crc kubenswrapper[5017]: I0129 06:36:29.714075 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:29 crc kubenswrapper[5017]: I0129 06:36:29.714085 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:29Z","lastTransitionTime":"2026-01-29T06:36:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:29 crc kubenswrapper[5017]: I0129 06:36:29.818053 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:29 crc kubenswrapper[5017]: I0129 06:36:29.818123 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:29 crc kubenswrapper[5017]: I0129 06:36:29.818142 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:29 crc kubenswrapper[5017]: I0129 06:36:29.818167 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:29 crc kubenswrapper[5017]: I0129 06:36:29.818186 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:29Z","lastTransitionTime":"2026-01-29T06:36:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:29 crc kubenswrapper[5017]: I0129 06:36:29.877285 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wqgmk_02dd5727-894c-4693-9bc7-83dd88ce118c/ovnkube-controller/3.log" Jan 29 06:36:29 crc kubenswrapper[5017]: I0129 06:36:29.878215 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wqgmk_02dd5727-894c-4693-9bc7-83dd88ce118c/ovnkube-controller/2.log" Jan 29 06:36:29 crc kubenswrapper[5017]: I0129 06:36:29.882568 5017 generic.go:334] "Generic (PLEG): container finished" podID="02dd5727-894c-4693-9bc7-83dd88ce118c" containerID="62449c59cb465c0cc04ebf89eb13daf672b92da42f90dad1f43ac106cf837b48" exitCode=1 Jan 29 06:36:29 crc kubenswrapper[5017]: I0129 06:36:29.882664 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" event={"ID":"02dd5727-894c-4693-9bc7-83dd88ce118c","Type":"ContainerDied","Data":"62449c59cb465c0cc04ebf89eb13daf672b92da42f90dad1f43ac106cf837b48"} Jan 29 06:36:29 crc kubenswrapper[5017]: I0129 06:36:29.882822 5017 scope.go:117] "RemoveContainer" containerID="dd68f113fc859fc67ddb074145ba0c97e3d310e617acaf13f814e69bc9cc7c0f" Jan 29 06:36:29 crc kubenswrapper[5017]: I0129 06:36:29.884117 5017 scope.go:117] "RemoveContainer" containerID="62449c59cb465c0cc04ebf89eb13daf672b92da42f90dad1f43ac106cf837b48" Jan 29 06:36:29 crc kubenswrapper[5017]: E0129 06:36:29.884484 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-wqgmk_openshift-ovn-kubernetes(02dd5727-894c-4693-9bc7-83dd88ce118c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" podUID="02dd5727-894c-4693-9bc7-83dd88ce118c" Jan 29 06:36:29 crc kubenswrapper[5017]: I0129 06:36:29.915615 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-m2gbd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4036e581-21bf-4ea0-aaf5-84ab8a841888\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ad091986feb992d2ef247a76476d77d58cb487cf0baa60f81e5d3ddf0d30c3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7399e8e88798a89139226e395d22a2aa61370296a3789f63d766f42ea1c9f4df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7399e8e88798a89139226e395d22a2aa61370296a3789f63d766f42ea1c9f4df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9418ae41c02e41fb3aeb7ce97e7d7b91f9a193ac23cbfe692cc79b071aa3adca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9418ae41c02e41fb3aeb7ce97e7d7b91f9a193ac23cbfe692cc79b071aa3adca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1065c2c8660d096dba79585cd7d954ae31491d258129e132b1e0cb1218bc849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1065c2c8660d096dba79585cd7d954ae31491d258129e132b1e0cb1218bc849\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb8f183765b578cd4362f3d3c13e20b8cb70aec46adbcdb750164e91e2bd53e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb8f183765b578cd4362f3d3c13e20b8cb70aec46adbcdb750164e91e2bd53e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23ac8b597fcf07378d69cd788cf4f79e236218e22a58ffaa5a72688e650067c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23ac8b597fcf07378d69cd788cf4f79e236218e22a58ffaa5a72688e650067c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8548b649f7fcef84467fc98f1b3aacc576c363a1c83087971aafaf8787734901\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8548b649f7fcef84467fc98f1b3aacc576c363a1c83087971aafaf8787734901\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-m2gbd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:29Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:29 crc kubenswrapper[5017]: I0129 06:36:29.923378 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:29 crc kubenswrapper[5017]: I0129 06:36:29.923433 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:29 crc 
kubenswrapper[5017]: I0129 06:36:29.923446 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:29 crc kubenswrapper[5017]: I0129 06:36:29.923464 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:29 crc kubenswrapper[5017]: I0129 06:36:29.923476 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:29Z","lastTransitionTime":"2026-01-29T06:36:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:29 crc kubenswrapper[5017]: I0129 06:36:29.933037 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xn4bq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgkx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgkx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xn4bq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:29Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:29 crc kubenswrapper[5017]: I0129 06:36:29.947728 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-895pl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2672ef63-7861-4c3d-a1b4-03cc9d18f8e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://884bf08569779a1b94a927f1250657ab4d855e78f451a826856022a7c62cd6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5qdt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd299f15e4338652a37e1e3f09560a50e6c59d7bf29e827462d060a56294438\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5qdt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-895pl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:29Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:29 crc kubenswrapper[5017]: I0129 06:36:29.960511 5017 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-qwq46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4d3122b-b4b4-41ac-896a-566afdcda936\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45817069aa7e09fd97b77922f9b5a651f2a067991115021c0a85159a93d523af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z89l9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qwq46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:29Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:29 crc kubenswrapper[5017]: I0129 06:36:29.983927 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9jkcd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ae056f0-e054-45da-9638-73074b7c8a3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7429ec96a9c4de8ad2620003f7b591d38fef6a019e78098e9ba927e3fd79252\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17d52fc64784408e447773d44de6e83d32029a77073a0c1ee206a51d81eda62f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T06:36:22Z\\\",\\\"message\\\":\\\"2026-01-29T06:35:37+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_fcc27f29-703c-4a49-b4e2-49357dbaed87\\\\n2026-01-29T06:35:37+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_fcc27f29-703c-4a49-b4e2-49357dbaed87 to /host/opt/cni/bin/\\\\n2026-01-29T06:35:37Z [verbose] multus-daemon started\\\\n2026-01-29T06:35:37Z [verbose] Readiness Indicator file check\\\\n2026-01-29T06:36:22Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x84hz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9jkcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:29Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:30 crc kubenswrapper[5017]: I0129 06:36:30.016989 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02dd5727-894c-4693-9bc7-83dd88ce118c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://177a3e8a7f63905262ffb4ea5f98c4fa97a8d48dd2ff075484a44ea87d66a072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7eec8c502e2516637a85d2b83f7f745f9cc5d6a420f15cea465d92f71e0b8e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1fb588776e0029263904fa0ec3afa53e180b30843864f1dee6999e1cca64b18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc8fc0d9e440c8725e279e6649d3c1e2cdc55589f6b5786187602576e4acd835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://749973b1dc2bcf3fc4e35277f0893a240b3571cadc45529da1229cd7ebe1748e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6a16eea25767bb4fea94880a90ae883944934c09ff5ba0ced9641169691169d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62449c59cb465c0cc04ebf89eb13daf672b92da42f90dad1f43ac106cf837b48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd68f113fc859fc67ddb074145ba0c97e3d310e617acaf13f814e69bc9cc7c0f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T06:35:59Z\\\",\\\"message\\\":\\\"650 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0129 06:35:59.345755 6650 obj_retry.go:386] Retry successful for *v1.Pod openshift-image-registry/node-ca-vwppb after 0 failed attempt(s)\\\\nI0129 06:35:59.346013 6650 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-vwppb\\\\nI0129 06:35:59.345735 6650 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-m2gbd in node crc\\\\nI0129 06:35:59.346035 6650 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-m2gbd after 0 failed attempt(s)\\\\nI0129 06:35:59.346041 6650 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-additional-cni-plugins-m2gbd\\\\nI0129 06:35:59.345688 6650 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-9jkcd\\\\nI0129 06:35:59.346039 6650 base_network_controller_pods.go:916] Annotation values: ip=[10.217.0.3/23] ; mac=0a:58:0a:d9:00:03 ; gw=[10.217.0.1]\\\\nI0129 06:35:59.346059 6650 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0129 06:35:59.346122 6650 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62449c59cb465c0cc04ebf89eb13daf672b92da42f90dad1f43ac106cf837b48\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T06:36:29Z\\\",\\\"message\\\":\\\" reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 06:36:29.236981 7037 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 06:36:29.237025 7037 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 06:36:29.237102 7037 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0129 06:36:29.237349 7037 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 
06:36:29.247474 7037 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0129 06:36:29.247573 7037 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0129 06:36:29.247699 7037 ovnkube.go:599] Stopped ovnkube\\\\nI0129 06:36:29.247787 7037 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0129 06:36:29.248003 7037 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T06:36:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1e341b7acf35edc3ee9e5a4fec11e2270d45ea0f05df30543d2c9163b024646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/
serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13649086edb9d7ffe20a9040787d55449e6541a51f03432458f5fcb965fe4723\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13649086edb9d7ffe20a9040787d55449e6541a51f03432458f5fcb965fe4723\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wqgmk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:30Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:30 crc kubenswrapper[5017]: I0129 06:36:30.026099 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:30 crc kubenswrapper[5017]: I0129 06:36:30.026171 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:30 crc kubenswrapper[5017]: I0129 06:36:30.026195 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:30 crc kubenswrapper[5017]: I0129 06:36:30.026224 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:30 crc kubenswrapper[5017]: I0129 06:36:30.026244 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:30Z","lastTransitionTime":"2026-01-29T06:36:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:30 crc kubenswrapper[5017]: I0129 06:36:30.034654 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bf710b24256eeb9149ca710dd9fcceae762df89486e0cf0f341c278eadda2fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:30Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:30 crc kubenswrapper[5017]: I0129 06:36:30.048934 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:30Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:30 crc kubenswrapper[5017]: I0129 06:36:30.066754 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:30Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:30 crc kubenswrapper[5017]: I0129 06:36:30.089107 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b8abb3dbc845f89735c58741f02550053402975d5edfe5e69f2323cc82bb8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7b335d7b6120621d4c94dfd8cdc86561a91d9c84a9481d352042d7728d7f6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:30Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:30 crc kubenswrapper[5017]: I0129 06:36:30.107788 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:30Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:30 crc kubenswrapper[5017]: I0129 06:36:30.125487 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-46ch5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42db11f6-649f-486e-83a2-7506fdf51ba2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e7417d93094efd850372fea31fa798b5b9583cbf1daa36ebe18ade52c714304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qm9zl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db927333ed9f10e47997ba0073d04675b880319e46f4c6d46805d171bc2e15f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2
099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qm9zl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-46ch5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:30Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:30 crc kubenswrapper[5017]: I0129 06:36:30.129656 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:30 crc kubenswrapper[5017]: I0129 06:36:30.129731 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:30 crc kubenswrapper[5017]: I0129 06:36:30.129760 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:30 crc kubenswrapper[5017]: I0129 06:36:30.129801 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:30 crc kubenswrapper[5017]: I0129 06:36:30.129832 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:30Z","lastTransitionTime":"2026-01-29T06:36:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:30 crc kubenswrapper[5017]: I0129 06:36:30.146115 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c67cd79-8431-401b-8f03-9387813b30ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe28ec8d64ea50ecca5d4531440159b5277aff7de80b18813bb91f5c32923326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a82e8858b15c840a63737058d3af382c1a6469d7d1d3ba6e44759f33e55f2543\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a303c0532bb4a555700875ed27e56205d709658417a72f47868fb7bc8291a77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ef12beaf08b33d7694b3ea5a4705e37ec4a5b53f9cf1a7e0dd41a78d99bc8f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27dd6068095b58038216d1837a6b7966c514c7d8a9db5d144b043d918b757394\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T06:35:33Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 06:35:28.003940 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 06:35:28.004898 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1201671994/tls.crt::/tmp/serving-cert-1201671994/tls.key\\\\\\\"\\\\nI0129 06:35:33.575231 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 06:35:33.579536 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 06:35:33.579564 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 06:35:33.579587 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 06:35:33.579594 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 06:35:33.587038 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 06:35:33.587075 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 06:35:33.587080 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 06:35:33.587084 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 06:35:33.587087 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 06:35:33.587091 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 06:35:33.587094 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 06:35:33.587049 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 06:35:33.588897 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c256463618132e5305eb6fed3ef923e6a3c200bbc4b917e7f678e1c78af05084\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f97561ea3b9baa3440e5148060c80d57e9cf17837e7799b057f618d9084e8a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f97561ea3b9baa3440e5148060c80d57e9cf17837e7799b057f618d9084e8a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:30Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:30 crc kubenswrapper[5017]: I0129 06:36:30.160097 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"db3c7435-1911-4b57-871d-721088099b9a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1305407f047a3e06c10dc50b0bd12943e1330ff2d277680046be5a59ddeeaf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://058b4b1e91f79b4a7fa8f73ecae30bea929f40f5f38db0faf800c45c93580ef3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8347eb7e0b582340e93bb623b71854b1b475a52c31cf20a88108a517dc2e2e65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bfc296503473169a1757c62cdb7f773e9dbb5a812855bb391664b4a91cc2655\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bfc296503473169a1757c62cdb7f773e9dbb5a812855bb391664b4a91cc2655\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:14Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:30Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:30 crc kubenswrapper[5017]: I0129 06:36:30.175642 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ce95c49452e7355245f4ec2716c723873a116e2523b8192efe93cfb369260f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:30Z is after 2025-08-24T17:21:41Z" Jan 29 
06:36:30 crc kubenswrapper[5017]: I0129 06:36:30.192669 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vwppb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb597cc2-fff7-4d90-a43e-958791d83324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f41b8b485793972343992b2ec30940b852ff5f355e045225e8813683e9f9a3c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g28w2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vwppb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:30Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:30 crc kubenswrapper[5017]: I0129 06:36:30.211252 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a2999ed-a16c-49e3-a07a-3b084d6b8616\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ff1f2d36640f2aebd2cd6919987f3f9000d95a8cefac7844a0671da192b1897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1045c0f8fc7c60a5aa6d2e2b449230e4603b08b0b4244c319e296f89b04ab98d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27acd0133bf699b73534766330da62590b4231608bd96e51bf01b06b5775073d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fc74339c29860f59a280af0f3e21a0f4fa2be4a3e28eb63d90bc63854805d16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:30Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:30 crc kubenswrapper[5017]: I0129 06:36:30.232934 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:30 crc kubenswrapper[5017]: I0129 06:36:30.233034 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:30 crc kubenswrapper[5017]: I0129 06:36:30.233053 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:30 crc kubenswrapper[5017]: I0129 06:36:30.233081 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:30 crc kubenswrapper[5017]: I0129 06:36:30.233101 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:30Z","lastTransitionTime":"2026-01-29T06:36:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:30 crc kubenswrapper[5017]: I0129 06:36:30.315980 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 06:36:30 crc kubenswrapper[5017]: E0129 06:36:30.316195 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 06:36:30 crc kubenswrapper[5017]: I0129 06:36:30.316529 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-xn4bq" Jan 29 06:36:30 crc kubenswrapper[5017]: E0129 06:36:30.316646 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xn4bq" podUID="0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f" Jan 29 06:36:30 crc kubenswrapper[5017]: I0129 06:36:30.316877 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 06:36:30 crc kubenswrapper[5017]: E0129 06:36:30.317079 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 06:36:30 crc kubenswrapper[5017]: I0129 06:36:30.317374 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 06:36:30 crc kubenswrapper[5017]: E0129 06:36:30.317450 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 06:36:30 crc kubenswrapper[5017]: I0129 06:36:30.318094 5017 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 03:25:04.007279676 +0000 UTC Jan 29 06:36:30 crc kubenswrapper[5017]: I0129 06:36:30.336680 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:30 crc kubenswrapper[5017]: I0129 06:36:30.336750 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:30 crc kubenswrapper[5017]: I0129 06:36:30.336763 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:30 crc kubenswrapper[5017]: I0129 06:36:30.336782 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:30 crc kubenswrapper[5017]: I0129 06:36:30.336795 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:30Z","lastTransitionTime":"2026-01-29T06:36:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:30 crc kubenswrapper[5017]: I0129 06:36:30.439245 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:30 crc kubenswrapper[5017]: I0129 06:36:30.439333 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:30 crc kubenswrapper[5017]: I0129 06:36:30.439358 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:30 crc kubenswrapper[5017]: I0129 06:36:30.439397 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:30 crc kubenswrapper[5017]: I0129 06:36:30.439420 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:30Z","lastTransitionTime":"2026-01-29T06:36:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:30 crc kubenswrapper[5017]: I0129 06:36:30.543363 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:30 crc kubenswrapper[5017]: I0129 06:36:30.543460 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:30 crc kubenswrapper[5017]: I0129 06:36:30.543487 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:30 crc kubenswrapper[5017]: I0129 06:36:30.543527 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:30 crc kubenswrapper[5017]: I0129 06:36:30.543560 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:30Z","lastTransitionTime":"2026-01-29T06:36:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:30 crc kubenswrapper[5017]: I0129 06:36:30.646883 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:30 crc kubenswrapper[5017]: I0129 06:36:30.646993 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:30 crc kubenswrapper[5017]: I0129 06:36:30.647022 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:30 crc kubenswrapper[5017]: I0129 06:36:30.647059 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:30 crc kubenswrapper[5017]: I0129 06:36:30.647087 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:30Z","lastTransitionTime":"2026-01-29T06:36:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:30 crc kubenswrapper[5017]: I0129 06:36:30.751184 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:30 crc kubenswrapper[5017]: I0129 06:36:30.751258 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:30 crc kubenswrapper[5017]: I0129 06:36:30.751282 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:30 crc kubenswrapper[5017]: I0129 06:36:30.751314 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:30 crc kubenswrapper[5017]: I0129 06:36:30.751337 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:30Z","lastTransitionTime":"2026-01-29T06:36:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:30 crc kubenswrapper[5017]: I0129 06:36:30.856266 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:30 crc kubenswrapper[5017]: I0129 06:36:30.856340 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:30 crc kubenswrapper[5017]: I0129 06:36:30.856366 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:30 crc kubenswrapper[5017]: I0129 06:36:30.856385 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:30 crc kubenswrapper[5017]: I0129 06:36:30.856396 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:30Z","lastTransitionTime":"2026-01-29T06:36:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:30 crc kubenswrapper[5017]: I0129 06:36:30.890711 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wqgmk_02dd5727-894c-4693-9bc7-83dd88ce118c/ovnkube-controller/3.log" Jan 29 06:36:30 crc kubenswrapper[5017]: I0129 06:36:30.896873 5017 scope.go:117] "RemoveContainer" containerID="62449c59cb465c0cc04ebf89eb13daf672b92da42f90dad1f43ac106cf837b48" Jan 29 06:36:30 crc kubenswrapper[5017]: E0129 06:36:30.897296 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-wqgmk_openshift-ovn-kubernetes(02dd5727-894c-4693-9bc7-83dd88ce118c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" podUID="02dd5727-894c-4693-9bc7-83dd88ce118c" Jan 29 06:36:30 crc kubenswrapper[5017]: I0129 06:36:30.916249 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xn4bq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgkx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgkx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xn4bq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:30Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:30 crc kubenswrapper[5017]: I0129 06:36:30.938746 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-895pl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2672ef63-7861-4c3d-a1b4-03cc9d18f8e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://884bf08569779a1b94a927f1250657ab4d855e78f451a826856022a7c62cd6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5qdt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd299f15e4338652a37e1e3f09560a50e6c59d7bf29e827462d060a56294438\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5qdt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-895pl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:30Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:30 crc kubenswrapper[5017]: I0129 06:36:30.959878 5017 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-qwq46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4d3122b-b4b4-41ac-896a-566afdcda936\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45817069aa7e09fd97b77922f9b5a651f2a067991115021c0a85159a93d523af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z89l9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qwq46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:30Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:30 crc kubenswrapper[5017]: I0129 06:36:30.961434 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:30 crc kubenswrapper[5017]: I0129 06:36:30.961504 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:30 crc kubenswrapper[5017]: I0129 06:36:30.961530 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:30 crc kubenswrapper[5017]: I0129 06:36:30.961569 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:30 crc kubenswrapper[5017]: I0129 06:36:30.961602 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:30Z","lastTransitionTime":"2026-01-29T06:36:30Z","reason":"KubeletNotReady","message":"container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:30 crc kubenswrapper[5017]: I0129 06:36:30.987476 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-m2gbd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4036e581-21bf-4ea0-aaf5-84ab8a841888\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ad091986feb992d2ef247a76476d77d58cb487cf0baa60f81e5d3ddf0d30c3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7399e8e88798a89139226e395d22a2aa61370296a3789f63d766f42ea1c9f4df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7399e8e88798a89139226e395d22a2aa61370296a3789f63d766f42ea1c9f4df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\"
:true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9418ae41c02e41fb3aeb7ce97e7d7b91f9a193ac23cbfe692cc79b071aa3adca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9418ae41c02e41fb3aeb7ce97e7d7b91f9a193ac23cbfe692cc79b071aa3adca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1065c2c8660d096dba79585cd7d954ae31491d258129e132b1e0cb1218bc849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1065c2c8660d096dba79585cd7d954ae31491d258129e132b1e0cb1218bc849\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb8f183765b578cd4362f3d3c13e20b8cb70aec46adbcdb750164e91e2bd53e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb8f183765b578cd4362f3d3c13e20b8cb70aec46adbcdb750164e91e2bd53e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\
"startedAt\\\":\\\"2026-01-29T06:35:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23ac8b597fcf07378d69cd788cf4f79e236218e22a58ffaa5a72688e650067c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23ac8b597fcf07378d69cd788cf4f79e236218e22a58ffaa5a72688e650067c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8548b649f7fcef84467fc98f1b3aacc576c363a1c83087971aafaf8787734901\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8548b649f7fcef84467fc98f1b3aacc576c363a1c83087971aafaf8787734901\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-m2gbd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:30Z is after 
2025-08-24T17:21:41Z" Jan 29 06:36:31 crc kubenswrapper[5017]: I0129 06:36:31.020266 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02dd5727-894c-4693-9bc7-83dd88ce118c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://177a3e8a7f63905262ffb4ea5f98c4fa97a8d48dd2ff075484a44ea87d66a072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7eec8c502e2516637a85d2b83f7f745f9cc5d6a420f15cea465d92f71e0b8e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a
1fb588776e0029263904fa0ec3afa53e180b30843864f1dee6999e1cca64b18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc8fc0d9e440c8725e279e6649d3c1e2cdc55589f6b5786187602576e4acd835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://749973b1dc2bcf3fc4e35277f0893a240b3571cadc45529da1229cd7ebe1748e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6a16eea25767bb4fea94880a90ae883944934c09ff5ba0ced9641169691169d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62449c59cb465c0cc04ebf89eb13daf672b92da42f90dad1f43ac106cf837b48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62449c59cb465c0cc04ebf89eb13daf672b92da42f90dad1f43ac106cf837b48\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T06:36:29Z\\\",\\\"message\\\":\\\" reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 06:36:29.236981 7037 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 06:36:29.237025 7037 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 06:36:29.237102 7037 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0129 06:36:29.237349 7037 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 06:36:29.247474 7037 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0129 06:36:29.247573 7037 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0129 06:36:29.247699 7037 ovnkube.go:599] Stopped ovnkube\\\\nI0129 06:36:29.247787 7037 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0129 06:36:29.248003 7037 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T06:36:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-wqgmk_openshift-ovn-kubernetes(02dd5727-894c-4693-9bc7-83dd88ce118c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1e341b7acf35edc3ee9e5a4fec11e2270d45ea0f05df30543d2c9163b024646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13649086edb9d7ffe20a9040787d55449e6541a51f03432458f5fcb965fe4723\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13649086edb9d7ffe20a9040787d55449e6541a51f03432458f5fcb965fe4723\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wqgmk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:31Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:31 crc kubenswrapper[5017]: I0129 06:36:31.044334 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bf710b24256eeb9149ca710dd9fcceae762df89486e0cf0f341c278eadda2fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:31Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:31 crc kubenswrapper[5017]: I0129 06:36:31.064635 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:31 crc kubenswrapper[5017]: I0129 06:36:31.064708 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:31 crc kubenswrapper[5017]: I0129 06:36:31.064722 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:31 crc kubenswrapper[5017]: I0129 06:36:31.064747 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:31 crc kubenswrapper[5017]: I0129 06:36:31.064764 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:31Z","lastTransitionTime":"2026-01-29T06:36:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:31 crc kubenswrapper[5017]: I0129 06:36:31.066805 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9jkcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ae056f0-e054-45da-9638-73074b7c8a3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7429ec96a9c4de8ad2620003f7b591d38fef6a019e78098e9ba927e3fd79252\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17d52fc64784408e447773d44de6e83d32029a77073a0c1ee206a51d81eda62f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T06:36:22Z\\\",\\\"message\\\":\\\"2026-01-29T06:35:37+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_fcc27f29-703c-4a49-b4e2-49357dbaed87\\\\n2026-01-29T06:35:37+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_fcc27f29-703c-4a49-b4e2-49357dbaed87 to 
/host/opt/cni/bin/\\\\n2026-01-29T06:35:37Z [verbose] multus-daemon started\\\\n2026-01-29T06:35:37Z [verbose] Readiness Indicator file check\\\\n2026-01-29T06:36:22Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x84hz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9jkcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:31Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:31 crc kubenswrapper[5017]: I0129 06:36:31.088360 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:31Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:31 crc kubenswrapper[5017]: I0129 06:36:31.112700 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b8abb3dbc845f89735c58741f02550053402975d5edfe5e69f2323cc82bb8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7b335d7b6120621d4c94dfd8cdc86561a91d9c84a9481d352042d7728d7f6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:31Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:31 crc kubenswrapper[5017]: I0129 06:36:31.136539 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:31Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:31 crc kubenswrapper[5017]: I0129 06:36:31.156589 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-46ch5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42db11f6-649f-486e-83a2-7506fdf51ba2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e7417d93094efd850372fea31fa798b5b9583cbf1daa36ebe18ade52c714304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qm9zl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db927333ed9f10e47997ba0073d04675b880319e46f4c6d46805d171bc2e15f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qm9zl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-46ch5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:31Z is after 2025-08-24T17:21:41Z" Jan 29 
06:36:31 crc kubenswrapper[5017]: I0129 06:36:31.167529 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:31 crc kubenswrapper[5017]: I0129 06:36:31.167588 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:31 crc kubenswrapper[5017]: I0129 06:36:31.167607 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:31 crc kubenswrapper[5017]: I0129 06:36:31.167636 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:31 crc kubenswrapper[5017]: I0129 06:36:31.167657 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:31Z","lastTransitionTime":"2026-01-29T06:36:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:31 crc kubenswrapper[5017]: I0129 06:36:31.183077 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c67cd79-8431-401b-8f03-9387813b30ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe28ec8d64ea50ecca5d4531440159b5277aff7de80b18813bb91f5c32923326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a82e8858b15c840a63737058d3af382c1a6469d7d1d3ba6e44759f33e55f2543\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d
7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a303c0532bb4a555700875ed27e56205d709658417a72f47868fb7bc8291a77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ef12beaf08b33d7694b3ea5a4705e37ec4a5b53f9cf1a7e0dd41a78d99bc8f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27dd6068095b58038216d1837a6b7966c514c7d8a9db5d144b043d918b757394\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T06:35:33Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 06:35:28.003940 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 06:35:28.004898 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1201671994/tls.crt::/tmp/serving-cert-1201671994/tls.key\\\\\\\"\\\\nI0129 06:35:33.575231 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 06:35:33.579536 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 06:35:33.579564 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 06:35:33.579587 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 06:35:33.579594 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 06:35:33.587038 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 06:35:33.587075 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 06:35:33.587080 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 06:35:33.587084 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 06:35:33.587087 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 06:35:33.587091 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 06:35:33.587094 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 06:35:33.587049 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 06:35:33.588897 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c256463618132e5305eb6fed3ef923e6a3c200bbc4b917e7f678e1c78af05084\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f97561ea3b9baa3440e5148060c80d57e9cf17837e7799b057f618d9084e8a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f97561ea3b9baa3440e5148060c80d57e9cf17837e7799b057f618d9084e8a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:31Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:31 crc kubenswrapper[5017]: I0129 06:36:31.205431 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"db3c7435-1911-4b57-871d-721088099b9a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1305407f047a3e06c10dc50b0bd12943e1330ff2d277680046be5a59ddeeaf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://058b4b1e91f79b4a7fa8f73ecae30bea929f40f5f38db0faf800c45c93580ef3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8347eb7e0b582340e93bb623b71854b1b475a52c31cf20a88108a517dc2e2e65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bfc296503473169a1757c62cdb7f773e9dbb5a812855bb391664b4a91cc2655\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bfc296503473169a1757c62cdb7f773e9dbb5a812855bb391664b4a91cc2655\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:14Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:31Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:31 crc kubenswrapper[5017]: I0129 06:36:31.228949 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:31Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:31 crc kubenswrapper[5017]: I0129 06:36:31.252116 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vwppb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb597cc2-fff7-4d90-a43e-958791d83324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f41b8b485793972343992b2ec30940b852ff5f355e045225e8813683e9f9a3c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g28w2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vwppb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:31Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:31 crc kubenswrapper[5017]: I0129 06:36:31.268725 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a2999ed-a16c-49e3-a07a-3b084d6b8616\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ff1f2d36640f2aebd2cd6919987f3f9000d95a8cefac7844a0671da192b1897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1045c0f8fc7c60a5aa6d2e2b449230e4603b08b0b4244c319e296f89b04ab98d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27acd0133bf699b73534766330da62590b4231608bd96e51bf01b06b5775073d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fc74339c29860f59a280af0f3e21a0f4fa2be4a3e28eb63d90bc63854805d16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:31Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:31 crc kubenswrapper[5017]: I0129 06:36:31.275523 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:31 crc kubenswrapper[5017]: I0129 06:36:31.275589 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:31 crc kubenswrapper[5017]: I0129 06:36:31.275609 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:31 crc kubenswrapper[5017]: I0129 06:36:31.275645 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:31 crc kubenswrapper[5017]: I0129 06:36:31.275670 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:31Z","lastTransitionTime":"2026-01-29T06:36:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:31 crc kubenswrapper[5017]: I0129 06:36:31.289927 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ce95c49452e7355245f4ec2716c723873a116e2523b8192efe93cfb369260f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:31Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:31 crc kubenswrapper[5017]: I0129 06:36:31.319266 5017 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 23:02:10.070502087 +0000 UTC Jan 29 06:36:31 crc kubenswrapper[5017]: I0129 06:36:31.380390 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:31 crc kubenswrapper[5017]: I0129 06:36:31.380467 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:31 crc kubenswrapper[5017]: I0129 06:36:31.380485 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:31 crc kubenswrapper[5017]: I0129 06:36:31.380514 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:31 crc kubenswrapper[5017]: I0129 06:36:31.380536 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:31Z","lastTransitionTime":"2026-01-29T06:36:31Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:31 crc kubenswrapper[5017]: I0129 06:36:31.455442 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:31 crc kubenswrapper[5017]: I0129 06:36:31.455531 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:31 crc kubenswrapper[5017]: I0129 06:36:31.455556 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:31 crc kubenswrapper[5017]: I0129 06:36:31.455591 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:31 crc kubenswrapper[5017]: I0129 06:36:31.455616 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:31Z","lastTransitionTime":"2026-01-29T06:36:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:31 crc kubenswrapper[5017]: E0129 06:36:31.480671 5017 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:36:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:36:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:36:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:36:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:36:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:36:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:36:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:36:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b652416b-993f-4447-94ce-fb2ce8447cbe\\\",\\\"systemUUID\\\":\\\"6f20d4ce-100b-4db6-aef5-b7c5d1dcba49\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:31Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:31 crc kubenswrapper[5017]: I0129 06:36:31.487055 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:31 crc kubenswrapper[5017]: I0129 06:36:31.487100 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 06:36:31 crc kubenswrapper[5017]: I0129 06:36:31.487112 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:31 crc kubenswrapper[5017]: I0129 06:36:31.487138 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:31 crc kubenswrapper[5017]: I0129 06:36:31.487152 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:31Z","lastTransitionTime":"2026-01-29T06:36:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:31 crc kubenswrapper[5017]: E0129 06:36:31.508249 5017 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:36:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:36:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:36:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:36:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:36:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:36:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:36:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:36:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b652416b-993f-4447-94ce-fb2ce8447cbe\\\",\\\"systemUUID\\\":\\\"6f20d4ce-100b-4db6-aef5-b7c5d1dcba49\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:31Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:31 crc kubenswrapper[5017]: I0129 06:36:31.515485 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:31 crc kubenswrapper[5017]: I0129 06:36:31.515588 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 06:36:31 crc kubenswrapper[5017]: I0129 06:36:31.515616 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:31 crc kubenswrapper[5017]: I0129 06:36:31.515651 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:31 crc kubenswrapper[5017]: I0129 06:36:31.515677 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:31Z","lastTransitionTime":"2026-01-29T06:36:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:31 crc kubenswrapper[5017]: E0129 06:36:31.541856 5017 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:36:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:36:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:36:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:36:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:36:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:36:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:36:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:36:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b652416b-993f-4447-94ce-fb2ce8447cbe\\\",\\\"systemUUID\\\":\\\"6f20d4ce-100b-4db6-aef5-b7c5d1dcba49\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:31Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:31 crc kubenswrapper[5017]: I0129 06:36:31.548707 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:31 crc kubenswrapper[5017]: I0129 06:36:31.548748 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 06:36:31 crc kubenswrapper[5017]: I0129 06:36:31.548763 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:31 crc kubenswrapper[5017]: I0129 06:36:31.548785 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:31 crc kubenswrapper[5017]: I0129 06:36:31.548801 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:31Z","lastTransitionTime":"2026-01-29T06:36:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:31 crc kubenswrapper[5017]: E0129 06:36:31.570255 5017 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:36:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:36:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:36:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:36:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:36:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:36:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:36:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:36:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b652416b-993f-4447-94ce-fb2ce8447cbe\\\",\\\"systemUUID\\\":\\\"6f20d4ce-100b-4db6-aef5-b7c5d1dcba49\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:31Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:31 crc kubenswrapper[5017]: I0129 06:36:31.575625 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:31 crc kubenswrapper[5017]: I0129 06:36:31.575684 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 06:36:31 crc kubenswrapper[5017]: I0129 06:36:31.575706 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:31 crc kubenswrapper[5017]: I0129 06:36:31.575732 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:31 crc kubenswrapper[5017]: I0129 06:36:31.575752 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:31Z","lastTransitionTime":"2026-01-29T06:36:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:31 crc kubenswrapper[5017]: E0129 06:36:31.597088 5017 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:36:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:36:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:36:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:36:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:36:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:36:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:36:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:36:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b652416b-993f-4447-94ce-fb2ce8447cbe\\\",\\\"systemUUID\\\":\\\"6f20d4ce-100b-4db6-aef5-b7c5d1dcba49\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:31Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:31 crc kubenswrapper[5017]: E0129 06:36:31.597331 5017 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 06:36:31 crc kubenswrapper[5017]: I0129 06:36:31.600019 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 29 06:36:31 crc kubenswrapper[5017]: I0129 06:36:31.600065 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:31 crc kubenswrapper[5017]: I0129 06:36:31.600083 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:31 crc kubenswrapper[5017]: I0129 06:36:31.600108 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:31 crc kubenswrapper[5017]: I0129 06:36:31.600123 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:31Z","lastTransitionTime":"2026-01-29T06:36:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:31 crc kubenswrapper[5017]: I0129 06:36:31.703644 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:31 crc kubenswrapper[5017]: I0129 06:36:31.703701 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:31 crc kubenswrapper[5017]: I0129 06:36:31.703721 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:31 crc kubenswrapper[5017]: I0129 06:36:31.703748 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:31 crc kubenswrapper[5017]: I0129 06:36:31.703767 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:31Z","lastTransitionTime":"2026-01-29T06:36:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:31 crc kubenswrapper[5017]: I0129 06:36:31.807838 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:31 crc kubenswrapper[5017]: I0129 06:36:31.807921 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:31 crc kubenswrapper[5017]: I0129 06:36:31.807949 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:31 crc kubenswrapper[5017]: I0129 06:36:31.808071 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:31 crc kubenswrapper[5017]: I0129 06:36:31.808125 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:31Z","lastTransitionTime":"2026-01-29T06:36:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:31 crc kubenswrapper[5017]: I0129 06:36:31.912609 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:31 crc kubenswrapper[5017]: I0129 06:36:31.912677 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:31 crc kubenswrapper[5017]: I0129 06:36:31.912690 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:31 crc kubenswrapper[5017]: I0129 06:36:31.912715 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:31 crc kubenswrapper[5017]: I0129 06:36:31.912731 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:31Z","lastTransitionTime":"2026-01-29T06:36:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:32 crc kubenswrapper[5017]: I0129 06:36:32.016413 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:32 crc kubenswrapper[5017]: I0129 06:36:32.016475 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:32 crc kubenswrapper[5017]: I0129 06:36:32.016495 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:32 crc kubenswrapper[5017]: I0129 06:36:32.016522 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:32 crc kubenswrapper[5017]: I0129 06:36:32.016541 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:32Z","lastTransitionTime":"2026-01-29T06:36:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:32 crc kubenswrapper[5017]: I0129 06:36:32.120455 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:32 crc kubenswrapper[5017]: I0129 06:36:32.120514 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:32 crc kubenswrapper[5017]: I0129 06:36:32.120543 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:32 crc kubenswrapper[5017]: I0129 06:36:32.120583 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:32 crc kubenswrapper[5017]: I0129 06:36:32.120611 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:32Z","lastTransitionTime":"2026-01-29T06:36:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:32 crc kubenswrapper[5017]: I0129 06:36:32.225338 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:32 crc kubenswrapper[5017]: I0129 06:36:32.225459 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:32 crc kubenswrapper[5017]: I0129 06:36:32.225525 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:32 crc kubenswrapper[5017]: I0129 06:36:32.225551 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:32 crc kubenswrapper[5017]: I0129 06:36:32.225609 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:32Z","lastTransitionTime":"2026-01-29T06:36:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:32 crc kubenswrapper[5017]: I0129 06:36:32.316172 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 06:36:32 crc kubenswrapper[5017]: I0129 06:36:32.316200 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xn4bq" Jan 29 06:36:32 crc kubenswrapper[5017]: I0129 06:36:32.316246 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 06:36:32 crc kubenswrapper[5017]: E0129 06:36:32.316511 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 06:36:32 crc kubenswrapper[5017]: E0129 06:36:32.316602 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xn4bq" podUID="0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f" Jan 29 06:36:32 crc kubenswrapper[5017]: I0129 06:36:32.316663 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 06:36:32 crc kubenswrapper[5017]: E0129 06:36:32.316758 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 06:36:32 crc kubenswrapper[5017]: E0129 06:36:32.316947 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 06:36:32 crc kubenswrapper[5017]: I0129 06:36:32.320257 5017 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 11:35:13.493446496 +0000 UTC Jan 29 06:36:32 crc kubenswrapper[5017]: I0129 06:36:32.329114 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:32 crc kubenswrapper[5017]: I0129 06:36:32.329171 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:32 crc kubenswrapper[5017]: I0129 06:36:32.329191 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:32 crc kubenswrapper[5017]: I0129 06:36:32.329215 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:32 crc kubenswrapper[5017]: I0129 06:36:32.329234 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:32Z","lastTransitionTime":"2026-01-29T06:36:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:32 crc kubenswrapper[5017]: I0129 06:36:32.432372 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:32 crc kubenswrapper[5017]: I0129 06:36:32.432478 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:32 crc kubenswrapper[5017]: I0129 06:36:32.432503 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:32 crc kubenswrapper[5017]: I0129 06:36:32.432567 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:32 crc kubenswrapper[5017]: I0129 06:36:32.432588 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:32Z","lastTransitionTime":"2026-01-29T06:36:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:32 crc kubenswrapper[5017]: I0129 06:36:32.535276 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:32 crc kubenswrapper[5017]: I0129 06:36:32.535342 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:32 crc kubenswrapper[5017]: I0129 06:36:32.535362 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:32 crc kubenswrapper[5017]: I0129 06:36:32.535389 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:32 crc kubenswrapper[5017]: I0129 06:36:32.535407 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:32Z","lastTransitionTime":"2026-01-29T06:36:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:32 crc kubenswrapper[5017]: I0129 06:36:32.638849 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:32 crc kubenswrapper[5017]: I0129 06:36:32.638915 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:32 crc kubenswrapper[5017]: I0129 06:36:32.638928 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:32 crc kubenswrapper[5017]: I0129 06:36:32.638998 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:32 crc kubenswrapper[5017]: I0129 06:36:32.639017 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:32Z","lastTransitionTime":"2026-01-29T06:36:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:32 crc kubenswrapper[5017]: I0129 06:36:32.742861 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:32 crc kubenswrapper[5017]: I0129 06:36:32.742952 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:32 crc kubenswrapper[5017]: I0129 06:36:32.743020 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:32 crc kubenswrapper[5017]: I0129 06:36:32.743056 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:32 crc kubenswrapper[5017]: I0129 06:36:32.743081 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:32Z","lastTransitionTime":"2026-01-29T06:36:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:32 crc kubenswrapper[5017]: I0129 06:36:32.847079 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:32 crc kubenswrapper[5017]: I0129 06:36:32.847159 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:32 crc kubenswrapper[5017]: I0129 06:36:32.847182 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:32 crc kubenswrapper[5017]: I0129 06:36:32.847212 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:32 crc kubenswrapper[5017]: I0129 06:36:32.847232 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:32Z","lastTransitionTime":"2026-01-29T06:36:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:32 crc kubenswrapper[5017]: I0129 06:36:32.950159 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:32 crc kubenswrapper[5017]: I0129 06:36:32.950229 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:32 crc kubenswrapper[5017]: I0129 06:36:32.950253 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:32 crc kubenswrapper[5017]: I0129 06:36:32.950279 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:32 crc kubenswrapper[5017]: I0129 06:36:32.950297 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:32Z","lastTransitionTime":"2026-01-29T06:36:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:33 crc kubenswrapper[5017]: I0129 06:36:33.052870 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:33 crc kubenswrapper[5017]: I0129 06:36:33.052940 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:33 crc kubenswrapper[5017]: I0129 06:36:33.052989 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:33 crc kubenswrapper[5017]: I0129 06:36:33.053025 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:33 crc kubenswrapper[5017]: I0129 06:36:33.053050 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:33Z","lastTransitionTime":"2026-01-29T06:36:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:33 crc kubenswrapper[5017]: I0129 06:36:33.156146 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:33 crc kubenswrapper[5017]: I0129 06:36:33.156195 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:33 crc kubenswrapper[5017]: I0129 06:36:33.156212 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:33 crc kubenswrapper[5017]: I0129 06:36:33.156236 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:33 crc kubenswrapper[5017]: I0129 06:36:33.156248 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:33Z","lastTransitionTime":"2026-01-29T06:36:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:33 crc kubenswrapper[5017]: I0129 06:36:33.259901 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:33 crc kubenswrapper[5017]: I0129 06:36:33.260026 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:33 crc kubenswrapper[5017]: I0129 06:36:33.260053 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:33 crc kubenswrapper[5017]: I0129 06:36:33.260087 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:33 crc kubenswrapper[5017]: I0129 06:36:33.260108 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:33Z","lastTransitionTime":"2026-01-29T06:36:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:33 crc kubenswrapper[5017]: I0129 06:36:33.320761 5017 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 13:14:13.522947866 +0000 UTC Jan 29 06:36:33 crc kubenswrapper[5017]: I0129 06:36:33.334414 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 29 06:36:33 crc kubenswrapper[5017]: I0129 06:36:33.363771 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:33 crc kubenswrapper[5017]: I0129 06:36:33.363817 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:33 crc kubenswrapper[5017]: I0129 06:36:33.363832 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:33 crc kubenswrapper[5017]: I0129 06:36:33.363851 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:33 crc kubenswrapper[5017]: I0129 06:36:33.363867 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:33Z","lastTransitionTime":"2026-01-29T06:36:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:33 crc kubenswrapper[5017]: I0129 06:36:33.467852 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:33 crc kubenswrapper[5017]: I0129 06:36:33.468025 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:33 crc kubenswrapper[5017]: I0129 06:36:33.468062 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:33 crc kubenswrapper[5017]: I0129 06:36:33.468103 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:33 crc kubenswrapper[5017]: I0129 06:36:33.468128 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:33Z","lastTransitionTime":"2026-01-29T06:36:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:33 crc kubenswrapper[5017]: I0129 06:36:33.570853 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:33 crc kubenswrapper[5017]: I0129 06:36:33.570908 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:33 crc kubenswrapper[5017]: I0129 06:36:33.570921 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:33 crc kubenswrapper[5017]: I0129 06:36:33.570943 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:33 crc kubenswrapper[5017]: I0129 06:36:33.570978 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:33Z","lastTransitionTime":"2026-01-29T06:36:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:33 crc kubenswrapper[5017]: I0129 06:36:33.673456 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:33 crc kubenswrapper[5017]: I0129 06:36:33.673541 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:33 crc kubenswrapper[5017]: I0129 06:36:33.673559 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:33 crc kubenswrapper[5017]: I0129 06:36:33.673585 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:33 crc kubenswrapper[5017]: I0129 06:36:33.673604 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:33Z","lastTransitionTime":"2026-01-29T06:36:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:33 crc kubenswrapper[5017]: I0129 06:36:33.777094 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:33 crc kubenswrapper[5017]: I0129 06:36:33.777351 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:33 crc kubenswrapper[5017]: I0129 06:36:33.777375 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:33 crc kubenswrapper[5017]: I0129 06:36:33.777408 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:33 crc kubenswrapper[5017]: I0129 06:36:33.777431 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:33Z","lastTransitionTime":"2026-01-29T06:36:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:33 crc kubenswrapper[5017]: I0129 06:36:33.880210 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:33 crc kubenswrapper[5017]: I0129 06:36:33.880252 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:33 crc kubenswrapper[5017]: I0129 06:36:33.880261 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:33 crc kubenswrapper[5017]: I0129 06:36:33.880277 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:33 crc kubenswrapper[5017]: I0129 06:36:33.880288 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:33Z","lastTransitionTime":"2026-01-29T06:36:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:33 crc kubenswrapper[5017]: I0129 06:36:33.983737 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:33 crc kubenswrapper[5017]: I0129 06:36:33.983834 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:33 crc kubenswrapper[5017]: I0129 06:36:33.983846 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:33 crc kubenswrapper[5017]: I0129 06:36:33.983866 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:33 crc kubenswrapper[5017]: I0129 06:36:33.983882 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:33Z","lastTransitionTime":"2026-01-29T06:36:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:34 crc kubenswrapper[5017]: I0129 06:36:34.087395 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:34 crc kubenswrapper[5017]: I0129 06:36:34.090397 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:34 crc kubenswrapper[5017]: I0129 06:36:34.091111 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:34 crc kubenswrapper[5017]: I0129 06:36:34.091444 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:34 crc kubenswrapper[5017]: I0129 06:36:34.091476 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:34Z","lastTransitionTime":"2026-01-29T06:36:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:34 crc kubenswrapper[5017]: I0129 06:36:34.196299 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:34 crc kubenswrapper[5017]: I0129 06:36:34.196414 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:34 crc kubenswrapper[5017]: I0129 06:36:34.196436 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:34 crc kubenswrapper[5017]: I0129 06:36:34.196506 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:34 crc kubenswrapper[5017]: I0129 06:36:34.196524 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:34Z","lastTransitionTime":"2026-01-29T06:36:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:34 crc kubenswrapper[5017]: I0129 06:36:34.300828 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:34 crc kubenswrapper[5017]: I0129 06:36:34.300895 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:34 crc kubenswrapper[5017]: I0129 06:36:34.300914 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:34 crc kubenswrapper[5017]: I0129 06:36:34.300943 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:34 crc kubenswrapper[5017]: I0129 06:36:34.300991 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:34Z","lastTransitionTime":"2026-01-29T06:36:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:34 crc kubenswrapper[5017]: I0129 06:36:34.316138 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xn4bq" Jan 29 06:36:34 crc kubenswrapper[5017]: I0129 06:36:34.316235 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 06:36:34 crc kubenswrapper[5017]: E0129 06:36:34.316325 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xn4bq" podUID="0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f" Jan 29 06:36:34 crc kubenswrapper[5017]: I0129 06:36:34.316228 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 06:36:34 crc kubenswrapper[5017]: E0129 06:36:34.316455 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 06:36:34 crc kubenswrapper[5017]: I0129 06:36:34.316537 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 06:36:34 crc kubenswrapper[5017]: E0129 06:36:34.316549 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 06:36:34 crc kubenswrapper[5017]: E0129 06:36:34.316701 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 06:36:34 crc kubenswrapper[5017]: I0129 06:36:34.321337 5017 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 06:17:15.630307346 +0000 UTC Jan 29 06:36:34 crc kubenswrapper[5017]: I0129 06:36:34.339308 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0da6266-27aa-4408-a2f1-7b5521c9c867\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d7e73818c30d09e31eca5e1aaea21b564796b5fb0445eb14884781ff264ea9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4427a913cdd88eae362ca579529cd4379af137323e8867938f098f9c83ecfce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4427a913cdd88eae362ca579529cd4379af137323e8867938f098f9c83ecfce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:34Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:34 crc kubenswrapper[5017]: I0129 06:36:34.362454 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bf710b24256eeb9149ca710dd9fcceae762df89486e0cf0f341c278eadda2fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:34Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:34 crc kubenswrapper[5017]: I0129 06:36:34.388943 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9jkcd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ae056f0-e054-45da-9638-73074b7c8a3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7429ec96a9c4de8ad2620003f7b591d38fef6a019e78098e9ba927e3fd79252\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17d52fc64784408e447773d44de6e83d32029a77073a0c1ee206a51d81eda62f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T06:36:22Z\\\",\\\"message\\\":\\\"2026-01-29T06:35:37+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_fcc27f29-703c-4a49-b4e2-49357dbaed87\\\\n2026-01-29T06:35:37+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_fcc27f29-703c-4a49-b4e2-49357dbaed87 to /host/opt/cni/bin/\\\\n2026-01-29T06:35:37Z [verbose] multus-daemon started\\\\n2026-01-29T06:35:37Z [verbose] Readiness Indicator file check\\\\n2026-01-29T06:36:22Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x84hz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9jkcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:34Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:34 crc kubenswrapper[5017]: I0129 06:36:34.404550 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:34 crc kubenswrapper[5017]: I0129 06:36:34.404609 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:34 crc kubenswrapper[5017]: I0129 06:36:34.404631 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:34 crc kubenswrapper[5017]: I0129 06:36:34.404670 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:34 crc kubenswrapper[5017]: I0129 06:36:34.404693 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:34Z","lastTransitionTime":"2026-01-29T06:36:34Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:34 crc kubenswrapper[5017]: I0129 06:36:34.430388 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02dd5727-894c-4693-9bc7-83dd88ce118c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://177a3e8a7f63905262ffb4ea5f98c4fa97a8d48dd2ff075484a44ea87d66a072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7eec8c502e2516637a85d2b83f7f745f9cc5d6a420f15cea465d92f71e0b8e71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/s
ecrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1fb588776e0029263904fa0ec3afa53e180b30843864f1dee6999e1cca64b18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc8fc0d9e440c8725e279e6649d3c1e2cdc55589f6b5786187602576e4acd835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://749973b1dc2bcf3fc4e35277f0893a240b3571cadc45529da1229cd7ebe1748e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6a16eea25767bb4fea94880a90ae883944934c09ff5ba0ced9641169691169d\\
\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62449c59cb465c0cc04ebf89eb13daf672b92da42f90dad1f43ac106cf837b48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62449c59cb465c0cc04ebf89eb13daf672b92da42f90dad1f43ac106cf837b48\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T06:36:29Z\\\",\\\"message\\\":\\\" reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 06:36:29.236981 7037 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 06:36:29.237025 7037 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 06:36:29.237102 7037 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0129 06:36:29.237349 7037 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 06:36:29.247474 7037 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0129 06:36:29.247573 7037 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0129 06:36:29.247699 7037 ovnkube.go:599] Stopped ovnkube\\\\nI0129 06:36:29.247787 7037 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0129 06:36:29.248003 7037 ovnkube.go:137] failed to run 
ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T06:36:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-wqgmk_openshift-ovn-kubernetes(02dd5727-894c-4693-9bc7-83dd88ce118c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1e341b7acf35edc3ee9e5a4fec11e2270d45ea0f05df30543d2c9163b024646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveR
eadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13649086edb9d7ffe20a9040787d55449e6541a51f03432458f5fcb965fe4723\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13649086edb9d7ffe20a9040787d55449e6541a51f03432458f5fcb965fe4723\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tr2h2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wqgmk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:34Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:34 crc kubenswrapper[5017]: I0129 06:36:34.451846 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:34Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:34 crc kubenswrapper[5017]: I0129 06:36:34.474388 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-46ch5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42db11f6-649f-486e-83a2-7506fdf51ba2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e7417d93094efd850372fea31fa798b5b9583cbf1daa36ebe18ade52c714304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qm9zl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db927333ed9f10e47997ba0073d04675b880319e46f4c6d46805d171bc2e15f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2
099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qm9zl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-46ch5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:34Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:34 crc kubenswrapper[5017]: I0129 06:36:34.495455 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c67cd79-8431-401b-8f03-9387813b30ed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe28ec8d64ea50ecca5d4531440159b5277aff7de80b18813bb91f5c32923326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a82e8858b15c840a63737058d3af382c1a6469d7d1d3ba6e44759f33e55f2543\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift
-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a303c0532bb4a555700875ed27e56205d709658417a72f47868fb7bc8291a77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ef12beaf08b33d7694b3ea5a4705e37ec4a5b53f9cf1a7e0dd41a78d99bc8f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27dd6068095b58038216d1837a6b7966c514c7d8a9db5d144b043d918b757394\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T06:35:33Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 06:35:28.003940 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 06:35:28.004898 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1201671994/tls.crt::/tmp/serving-cert-1201671994/tls.key\\\\\\\"\\\\nI0129 06:35:33.575231 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 06:35:33.579536 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 06:35:33.579564 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 06:35:33.579587 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 06:35:33.579594 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 06:35:33.587038 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 06:35:33.587075 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 06:35:33.587080 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 06:35:33.587084 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 06:35:33.587087 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 06:35:33.587091 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 06:35:33.587094 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 06:35:33.587049 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 06:35:33.588897 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c256463618132e5305eb6fed3ef923e6a3c200bbc4b917e7f678e1c78af05084\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f97561ea3b9baa3440e5148060c80d57e9cf17837e7799b057f618d9084e8a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f97561ea3b9baa3440e5148060c80d57e9cf17837e7799b057f618d9084e8a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:34Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:34 crc kubenswrapper[5017]: I0129 06:36:34.507731 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:34 crc kubenswrapper[5017]: I0129 06:36:34.507791 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 06:36:34 crc kubenswrapper[5017]: I0129 06:36:34.507811 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:34 crc kubenswrapper[5017]: I0129 06:36:34.507843 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:34 crc kubenswrapper[5017]: I0129 06:36:34.507864 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:34Z","lastTransitionTime":"2026-01-29T06:36:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:34 crc kubenswrapper[5017]: I0129 06:36:34.513078 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db3c7435-1911-4b57-871d-721088099b9a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:36:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1305407f047a3e06c10dc50b0bd12943e1330ff2d277680046be5a59ddeeaf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://058b4b1e91f79b4a7fa8f73ecae30bea929f40f5f38db0faf800c45c93580ef3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"nam
e\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8347eb7e0b582340e93bb623b71854b1b475a52c31cf20a88108a517dc2e2e65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bfc296503473169a1757c62cdb7f773e9dbb5a812855bb391664b4a91cc2655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bfc296503473169a1757c62cdb7f773e9dbb5a812855bb391664b4a91cc2655\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:14Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:34Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:34 crc kubenswrapper[5017]: I0129 06:36:34.532475 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:34Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:34 crc kubenswrapper[5017]: I0129 06:36:34.547461 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:34Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:34 crc kubenswrapper[5017]: I0129 06:36:34.565502 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b8abb3dbc845f89735c58741f02550053402975d5edfe5e69f2323cc82bb8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7b335d7b6120621d4c94dfd8cdc86561a91d9c84a9481d352042d7728d7f6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:34Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:34 crc kubenswrapper[5017]: I0129 06:36:34.597426 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a2999ed-a16c-49e3-a07a-3b084d6b8616\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ff1f2d36640f2aebd2cd6919987f3f9000d95a8cefac7844a0671da192b1897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1045c0f8fc7c60a5aa6d2e2b449230e4603b08b0b4244c319e296f89b04ab98d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27acd0133bf699b73534766330da62590b4231608bd96e51bf01b06b5775073d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-clu
ster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fc74339c29860f59a280af0f3e21a0f4fa2be4a3e28eb63d90bc63854805d16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:34Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:34 crc kubenswrapper[5017]: I0129 06:36:34.610524 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:34 crc kubenswrapper[5017]: I0129 06:36:34.610579 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:34 crc kubenswrapper[5017]: I0129 06:36:34.610594 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:34 crc kubenswrapper[5017]: I0129 06:36:34.610621 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:34 crc kubenswrapper[5017]: I0129 06:36:34.610644 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:34Z","lastTransitionTime":"2026-01-29T06:36:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:34 crc kubenswrapper[5017]: I0129 06:36:34.616082 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ce95c49452e7355245f4ec2716c723873a116e2523b8192efe93cfb369260f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:34Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:34 crc kubenswrapper[5017]: I0129 06:36:34.632875 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vwppb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb597cc2-fff7-4d90-a43e-958791d83324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f41b8b485793972343992b2ec30940b852ff5f355e045225e8813683e9f9a3c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g28w2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vwppb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:34Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:34 crc kubenswrapper[5017]: I0129 06:36:34.653019 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-895pl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2672ef63-7861-4c3d-a1b4-03cc9d18f8e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://884bf08569779a1b94a927f1250657ab4d855e78f451a826856022a7c62cd6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5qdt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd299f15e4338652a37e1e3f09560a50e6c59d7bf29e827462d060a56294438\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5qdt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-895pl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:34Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:34 crc kubenswrapper[5017]: I0129 06:36:34.670798 5017 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-qwq46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4d3122b-b4b4-41ac-896a-566afdcda936\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45817069aa7e09fd97b77922f9b5a651f2a067991115021c0a85159a93d523af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z89l9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qwq46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:34Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:34 crc kubenswrapper[5017]: I0129 06:36:34.696620 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-m2gbd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4036e581-21bf-4ea0-aaf5-84ab8a841888\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ad091986feb992d2ef247a76476d77d58cb487cf0baa60f81e5d3ddf0d30c3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:35:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7399e8e88798a89139226e395d22a2aa61370296a3789f63d766f42ea1c9f4df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7399e8e88798a89139226e395d22a2aa61370296a3789f63d766f42ea1c9f4df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9418ae41c02e41fb3aeb7ce97e7d7b91f9a193ac23cbfe692cc79b071aa3adca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9418ae41c02e41fb3aeb7ce97e7d7b91f9a193ac23cbfe692cc79b071aa3adca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1065c2c8660d096dba79585cd7d954ae31491d258129e132b1e0cb1218bc849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1065c2c8660d096dba79585cd7d954ae31491d258129e132b1e0cb1218bc849\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb8f183765b578cd4362f3d3c13e20b8cb70aec46adbcdb750164e91e2bd53e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb8f183765b578cd4362f3d3c13e20b8cb70aec46adbcdb750164e91e2bd53e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23ac8b597fcf07378d69cd788cf4f79e236218e22a58ffaa5a72688e650067c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23ac8b597fcf07378d69cd788cf4f79e236218e22a58ffaa5a72688e650067c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8548b649f7fcef84467fc98f1b3aacc576c363a1c83087971aafaf8787734901\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8548b649f7fcef84467fc98f1b3aacc576c363a1c83087971aafaf8787734901\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:35:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvm87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-m2gbd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:34Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:34 crc kubenswrapper[5017]: I0129 06:36:34.713087 5017 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xn4bq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:35:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgkx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgkx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:35:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xn4bq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:36:34Z is after 2025-08-24T17:21:41Z" Jan 29 06:36:34 crc kubenswrapper[5017]: I0129 06:36:34.714005 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:34 crc kubenswrapper[5017]: I0129 06:36:34.714047 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:34 crc kubenswrapper[5017]: I0129 06:36:34.714062 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 29 06:36:34 crc kubenswrapper[5017]: I0129 06:36:34.714088 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:34 crc kubenswrapper[5017]: I0129 06:36:34.714104 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:34Z","lastTransitionTime":"2026-01-29T06:36:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:34 crc kubenswrapper[5017]: I0129 06:36:34.816873 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:34 crc kubenswrapper[5017]: I0129 06:36:34.816949 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:34 crc kubenswrapper[5017]: I0129 06:36:34.816979 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:34 crc kubenswrapper[5017]: I0129 06:36:34.816997 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:34 crc kubenswrapper[5017]: I0129 06:36:34.817007 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:34Z","lastTransitionTime":"2026-01-29T06:36:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:34 crc kubenswrapper[5017]: I0129 06:36:34.929202 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:34 crc kubenswrapper[5017]: I0129 06:36:34.929258 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:34 crc kubenswrapper[5017]: I0129 06:36:34.929271 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:34 crc kubenswrapper[5017]: I0129 06:36:34.929293 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:34 crc kubenswrapper[5017]: I0129 06:36:34.929305 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:34Z","lastTransitionTime":"2026-01-29T06:36:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:35 crc kubenswrapper[5017]: I0129 06:36:35.032255 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:35 crc kubenswrapper[5017]: I0129 06:36:35.032305 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:35 crc kubenswrapper[5017]: I0129 06:36:35.032321 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:35 crc kubenswrapper[5017]: I0129 06:36:35.032343 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:35 crc kubenswrapper[5017]: I0129 06:36:35.032357 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:35Z","lastTransitionTime":"2026-01-29T06:36:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:35 crc kubenswrapper[5017]: I0129 06:36:35.135786 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:35 crc kubenswrapper[5017]: I0129 06:36:35.135845 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:35 crc kubenswrapper[5017]: I0129 06:36:35.135857 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:35 crc kubenswrapper[5017]: I0129 06:36:35.135877 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:35 crc kubenswrapper[5017]: I0129 06:36:35.135891 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:35Z","lastTransitionTime":"2026-01-29T06:36:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:35 crc kubenswrapper[5017]: I0129 06:36:35.239328 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:35 crc kubenswrapper[5017]: I0129 06:36:35.239396 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:35 crc kubenswrapper[5017]: I0129 06:36:35.239415 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:35 crc kubenswrapper[5017]: I0129 06:36:35.239441 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:35 crc kubenswrapper[5017]: I0129 06:36:35.239463 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:35Z","lastTransitionTime":"2026-01-29T06:36:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:35 crc kubenswrapper[5017]: I0129 06:36:35.321676 5017 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 14:13:55.395003968 +0000 UTC Jan 29 06:36:35 crc kubenswrapper[5017]: I0129 06:36:35.343932 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:35 crc kubenswrapper[5017]: I0129 06:36:35.344085 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:35 crc kubenswrapper[5017]: I0129 06:36:35.344116 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:35 crc kubenswrapper[5017]: I0129 06:36:35.344148 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:35 crc kubenswrapper[5017]: I0129 06:36:35.344175 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:35Z","lastTransitionTime":"2026-01-29T06:36:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:35 crc kubenswrapper[5017]: I0129 06:36:35.448491 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:35 crc kubenswrapper[5017]: I0129 06:36:35.448558 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:35 crc kubenswrapper[5017]: I0129 06:36:35.448573 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:35 crc kubenswrapper[5017]: I0129 06:36:35.448596 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:35 crc kubenswrapper[5017]: I0129 06:36:35.448611 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:35Z","lastTransitionTime":"2026-01-29T06:36:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:35 crc kubenswrapper[5017]: I0129 06:36:35.551756 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:35 crc kubenswrapper[5017]: I0129 06:36:35.551831 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:35 crc kubenswrapper[5017]: I0129 06:36:35.551853 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:35 crc kubenswrapper[5017]: I0129 06:36:35.551882 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:35 crc kubenswrapper[5017]: I0129 06:36:35.551904 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:35Z","lastTransitionTime":"2026-01-29T06:36:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:35 crc kubenswrapper[5017]: I0129 06:36:35.655690 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:35 crc kubenswrapper[5017]: I0129 06:36:35.655754 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:35 crc kubenswrapper[5017]: I0129 06:36:35.655766 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:35 crc kubenswrapper[5017]: I0129 06:36:35.655792 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:35 crc kubenswrapper[5017]: I0129 06:36:35.655806 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:35Z","lastTransitionTime":"2026-01-29T06:36:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:35 crc kubenswrapper[5017]: I0129 06:36:35.759387 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:35 crc kubenswrapper[5017]: I0129 06:36:35.759457 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:35 crc kubenswrapper[5017]: I0129 06:36:35.759471 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:35 crc kubenswrapper[5017]: I0129 06:36:35.759495 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:35 crc kubenswrapper[5017]: I0129 06:36:35.759511 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:35Z","lastTransitionTime":"2026-01-29T06:36:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:35 crc kubenswrapper[5017]: I0129 06:36:35.863372 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:35 crc kubenswrapper[5017]: I0129 06:36:35.863453 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:35 crc kubenswrapper[5017]: I0129 06:36:35.863476 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:35 crc kubenswrapper[5017]: I0129 06:36:35.863506 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:35 crc kubenswrapper[5017]: I0129 06:36:35.863524 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:35Z","lastTransitionTime":"2026-01-29T06:36:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:35 crc kubenswrapper[5017]: I0129 06:36:35.966928 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:35 crc kubenswrapper[5017]: I0129 06:36:35.967591 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:35 crc kubenswrapper[5017]: I0129 06:36:35.967619 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:35 crc kubenswrapper[5017]: I0129 06:36:35.967659 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:35 crc kubenswrapper[5017]: I0129 06:36:35.967686 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:35Z","lastTransitionTime":"2026-01-29T06:36:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:36 crc kubenswrapper[5017]: I0129 06:36:36.072130 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:36 crc kubenswrapper[5017]: I0129 06:36:36.072178 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:36 crc kubenswrapper[5017]: I0129 06:36:36.072193 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:36 crc kubenswrapper[5017]: I0129 06:36:36.072211 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:36 crc kubenswrapper[5017]: I0129 06:36:36.072224 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:36Z","lastTransitionTime":"2026-01-29T06:36:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:36 crc kubenswrapper[5017]: I0129 06:36:36.175653 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:36 crc kubenswrapper[5017]: I0129 06:36:36.175743 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:36 crc kubenswrapper[5017]: I0129 06:36:36.175769 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:36 crc kubenswrapper[5017]: I0129 06:36:36.175808 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:36 crc kubenswrapper[5017]: I0129 06:36:36.175833 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:36Z","lastTransitionTime":"2026-01-29T06:36:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:36 crc kubenswrapper[5017]: I0129 06:36:36.279447 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:36 crc kubenswrapper[5017]: I0129 06:36:36.279485 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:36 crc kubenswrapper[5017]: I0129 06:36:36.279494 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:36 crc kubenswrapper[5017]: I0129 06:36:36.279509 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:36 crc kubenswrapper[5017]: I0129 06:36:36.279519 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:36Z","lastTransitionTime":"2026-01-29T06:36:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:36 crc kubenswrapper[5017]: I0129 06:36:36.315580 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 06:36:36 crc kubenswrapper[5017]: I0129 06:36:36.315658 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 06:36:36 crc kubenswrapper[5017]: I0129 06:36:36.315622 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 06:36:36 crc kubenswrapper[5017]: I0129 06:36:36.315613 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xn4bq" Jan 29 06:36:36 crc kubenswrapper[5017]: E0129 06:36:36.315823 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 06:36:36 crc kubenswrapper[5017]: E0129 06:36:36.316078 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xn4bq" podUID="0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f" Jan 29 06:36:36 crc kubenswrapper[5017]: E0129 06:36:36.316222 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 06:36:36 crc kubenswrapper[5017]: E0129 06:36:36.316316 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 06:36:36 crc kubenswrapper[5017]: I0129 06:36:36.322550 5017 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 10:21:55.131697445 +0000 UTC Jan 29 06:36:36 crc kubenswrapper[5017]: I0129 06:36:36.382686 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:36 crc kubenswrapper[5017]: I0129 06:36:36.382734 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:36 crc kubenswrapper[5017]: I0129 06:36:36.382748 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:36 crc kubenswrapper[5017]: I0129 06:36:36.382768 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:36 crc kubenswrapper[5017]: I0129 06:36:36.382781 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:36Z","lastTransitionTime":"2026-01-29T06:36:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:36 crc kubenswrapper[5017]: I0129 06:36:36.486311 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:36 crc kubenswrapper[5017]: I0129 06:36:36.486385 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:36 crc kubenswrapper[5017]: I0129 06:36:36.486407 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:36 crc kubenswrapper[5017]: I0129 06:36:36.486437 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:36 crc kubenswrapper[5017]: I0129 06:36:36.486456 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:36Z","lastTransitionTime":"2026-01-29T06:36:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:36 crc kubenswrapper[5017]: I0129 06:36:36.594649 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:36 crc kubenswrapper[5017]: I0129 06:36:36.594712 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:36 crc kubenswrapper[5017]: I0129 06:36:36.594729 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:36 crc kubenswrapper[5017]: I0129 06:36:36.594757 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:36 crc kubenswrapper[5017]: I0129 06:36:36.594775 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:36Z","lastTransitionTime":"2026-01-29T06:36:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:36 crc kubenswrapper[5017]: I0129 06:36:36.698354 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:36 crc kubenswrapper[5017]: I0129 06:36:36.698426 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:36 crc kubenswrapper[5017]: I0129 06:36:36.698444 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:36 crc kubenswrapper[5017]: I0129 06:36:36.698473 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:36 crc kubenswrapper[5017]: I0129 06:36:36.698498 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:36Z","lastTransitionTime":"2026-01-29T06:36:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:36 crc kubenswrapper[5017]: I0129 06:36:36.802478 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:36 crc kubenswrapper[5017]: I0129 06:36:36.802542 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:36 crc kubenswrapper[5017]: I0129 06:36:36.802565 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:36 crc kubenswrapper[5017]: I0129 06:36:36.802593 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:36 crc kubenswrapper[5017]: I0129 06:36:36.802615 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:36Z","lastTransitionTime":"2026-01-29T06:36:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:36 crc kubenswrapper[5017]: I0129 06:36:36.905703 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:36 crc kubenswrapper[5017]: I0129 06:36:36.905769 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:36 crc kubenswrapper[5017]: I0129 06:36:36.905789 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:36 crc kubenswrapper[5017]: I0129 06:36:36.905815 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:36 crc kubenswrapper[5017]: I0129 06:36:36.905836 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:36Z","lastTransitionTime":"2026-01-29T06:36:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:37 crc kubenswrapper[5017]: I0129 06:36:37.009088 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:37 crc kubenswrapper[5017]: I0129 06:36:37.009178 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:37 crc kubenswrapper[5017]: I0129 06:36:37.009196 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:37 crc kubenswrapper[5017]: I0129 06:36:37.009230 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:37 crc kubenswrapper[5017]: I0129 06:36:37.009248 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:37Z","lastTransitionTime":"2026-01-29T06:36:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:37 crc kubenswrapper[5017]: I0129 06:36:37.113213 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:37 crc kubenswrapper[5017]: I0129 06:36:37.113325 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:37 crc kubenswrapper[5017]: I0129 06:36:37.113346 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:37 crc kubenswrapper[5017]: I0129 06:36:37.113416 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:37 crc kubenswrapper[5017]: I0129 06:36:37.113445 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:37Z","lastTransitionTime":"2026-01-29T06:36:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:37 crc kubenswrapper[5017]: I0129 06:36:37.217758 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:37 crc kubenswrapper[5017]: I0129 06:36:37.217826 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:37 crc kubenswrapper[5017]: I0129 06:36:37.217848 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:37 crc kubenswrapper[5017]: I0129 06:36:37.217876 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:37 crc kubenswrapper[5017]: I0129 06:36:37.217895 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:37Z","lastTransitionTime":"2026-01-29T06:36:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:37 crc kubenswrapper[5017]: I0129 06:36:37.322406 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:37 crc kubenswrapper[5017]: I0129 06:36:37.322506 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:37 crc kubenswrapper[5017]: I0129 06:36:37.322529 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:37 crc kubenswrapper[5017]: I0129 06:36:37.322570 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:37 crc kubenswrapper[5017]: I0129 06:36:37.322594 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:37Z","lastTransitionTime":"2026-01-29T06:36:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:37 crc kubenswrapper[5017]: I0129 06:36:37.322734 5017 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 04:34:34.281631095 +0000 UTC Jan 29 06:36:37 crc kubenswrapper[5017]: I0129 06:36:37.426732 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:37 crc kubenswrapper[5017]: I0129 06:36:37.426805 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:37 crc kubenswrapper[5017]: I0129 06:36:37.426825 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:37 crc kubenswrapper[5017]: I0129 06:36:37.426854 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:37 crc kubenswrapper[5017]: I0129 06:36:37.426877 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:37Z","lastTransitionTime":"2026-01-29T06:36:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:37 crc kubenswrapper[5017]: I0129 06:36:37.531467 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:37 crc kubenswrapper[5017]: I0129 06:36:37.531574 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:37 crc kubenswrapper[5017]: I0129 06:36:37.531600 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:37 crc kubenswrapper[5017]: I0129 06:36:37.531632 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:37 crc kubenswrapper[5017]: I0129 06:36:37.531657 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:37Z","lastTransitionTime":"2026-01-29T06:36:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:37 crc kubenswrapper[5017]: I0129 06:36:37.636239 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:37 crc kubenswrapper[5017]: I0129 06:36:37.636319 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:37 crc kubenswrapper[5017]: I0129 06:36:37.636343 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:37 crc kubenswrapper[5017]: I0129 06:36:37.636376 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:37 crc kubenswrapper[5017]: I0129 06:36:37.636399 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:37Z","lastTransitionTime":"2026-01-29T06:36:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:37 crc kubenswrapper[5017]: I0129 06:36:37.739799 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:37 crc kubenswrapper[5017]: I0129 06:36:37.739881 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:37 crc kubenswrapper[5017]: I0129 06:36:37.739908 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:37 crc kubenswrapper[5017]: I0129 06:36:37.739938 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:37 crc kubenswrapper[5017]: I0129 06:36:37.740005 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:37Z","lastTransitionTime":"2026-01-29T06:36:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:37 crc kubenswrapper[5017]: I0129 06:36:37.844068 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:37 crc kubenswrapper[5017]: I0129 06:36:37.844149 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:37 crc kubenswrapper[5017]: I0129 06:36:37.844176 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:37 crc kubenswrapper[5017]: I0129 06:36:37.844214 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:37 crc kubenswrapper[5017]: I0129 06:36:37.844238 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:37Z","lastTransitionTime":"2026-01-29T06:36:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:37 crc kubenswrapper[5017]: I0129 06:36:37.924135 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 06:36:37 crc kubenswrapper[5017]: E0129 06:36:37.924448 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 06:37:41.924399713 +0000 UTC m=+148.298847363 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:36:37 crc kubenswrapper[5017]: I0129 06:36:37.947550 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:37 crc kubenswrapper[5017]: I0129 06:36:37.947600 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:37 crc kubenswrapper[5017]: I0129 06:36:37.947615 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:37 crc kubenswrapper[5017]: I0129 06:36:37.947639 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:37 crc kubenswrapper[5017]: I0129 06:36:37.947657 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:37Z","lastTransitionTime":"2026-01-29T06:36:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:38 crc kubenswrapper[5017]: I0129 06:36:38.025819 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 06:36:38 crc kubenswrapper[5017]: I0129 06:36:38.025907 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 06:36:38 crc kubenswrapper[5017]: I0129 06:36:38.026055 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 06:36:38 crc kubenswrapper[5017]: E0129 06:36:38.026150 5017 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 06:36:38 crc kubenswrapper[5017]: E0129 06:36:38.026192 5017 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 06:36:38 crc kubenswrapper[5017]: E0129 06:36:38.026211 5017 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 06:36:38 crc kubenswrapper[5017]: E0129 06:36:38.026253 5017 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 06:36:38 crc kubenswrapper[5017]: E0129 06:36:38.026273 5017 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 06:36:38 crc kubenswrapper[5017]: E0129 06:36:38.026443 5017 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 06:36:38 crc kubenswrapper[5017]: E0129 06:36:38.026470 5017 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 06:36:38 crc kubenswrapper[5017]: E0129 06:36:38.026293 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2026-01-29 06:37:42.02626537 +0000 UTC m=+148.400712980 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 06:36:38 crc kubenswrapper[5017]: E0129 06:36:38.026596 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 06:37:42.026556647 +0000 UTC m=+148.401004297 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 06:36:38 crc kubenswrapper[5017]: E0129 06:36:38.026629 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 06:37:42.026612449 +0000 UTC m=+148.401060089 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 06:36:38 crc kubenswrapper[5017]: I0129 06:36:38.051521 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:38 crc kubenswrapper[5017]: I0129 06:36:38.051584 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:38 crc kubenswrapper[5017]: I0129 06:36:38.051603 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:38 crc kubenswrapper[5017]: I0129 06:36:38.051632 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:38 crc kubenswrapper[5017]: I0129 06:36:38.051655 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:38Z","lastTransitionTime":"2026-01-29T06:36:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:38 crc kubenswrapper[5017]: I0129 06:36:38.127357 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 06:36:38 crc kubenswrapper[5017]: E0129 06:36:38.127636 5017 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 06:36:38 crc kubenswrapper[5017]: E0129 06:36:38.127801 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 06:37:42.127765268 +0000 UTC m=+148.502212918 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 06:36:38 crc kubenswrapper[5017]: I0129 06:36:38.154741 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:38 crc kubenswrapper[5017]: I0129 06:36:38.154801 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:38 crc kubenswrapper[5017]: I0129 06:36:38.154823 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:38 crc kubenswrapper[5017]: I0129 06:36:38.154856 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:38 crc kubenswrapper[5017]: I0129 06:36:38.154875 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:38Z","lastTransitionTime":"2026-01-29T06:36:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:38 crc kubenswrapper[5017]: I0129 06:36:38.263821 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:38 crc kubenswrapper[5017]: I0129 06:36:38.263896 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:38 crc kubenswrapper[5017]: I0129 06:36:38.263917 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:38 crc kubenswrapper[5017]: I0129 06:36:38.263950 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:38 crc kubenswrapper[5017]: I0129 06:36:38.264015 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:38Z","lastTransitionTime":"2026-01-29T06:36:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:38 crc kubenswrapper[5017]: I0129 06:36:38.315606 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xn4bq" Jan 29 06:36:38 crc kubenswrapper[5017]: I0129 06:36:38.315753 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 06:36:38 crc kubenswrapper[5017]: E0129 06:36:38.315844 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xn4bq" podUID="0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f" Jan 29 06:36:38 crc kubenswrapper[5017]: I0129 06:36:38.315898 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 06:36:38 crc kubenswrapper[5017]: I0129 06:36:38.315753 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 06:36:38 crc kubenswrapper[5017]: E0129 06:36:38.316075 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 06:36:38 crc kubenswrapper[5017]: E0129 06:36:38.316201 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 06:36:38 crc kubenswrapper[5017]: E0129 06:36:38.316420 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 06:36:38 crc kubenswrapper[5017]: I0129 06:36:38.323168 5017 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 04:47:04.563504802 +0000 UTC Jan 29 06:36:38 crc kubenswrapper[5017]: I0129 06:36:38.368640 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:38 crc kubenswrapper[5017]: I0129 06:36:38.368765 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:38 crc kubenswrapper[5017]: I0129 06:36:38.368825 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:38 crc kubenswrapper[5017]: I0129 06:36:38.368864 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:38 crc kubenswrapper[5017]: I0129 06:36:38.368932 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:38Z","lastTransitionTime":"2026-01-29T06:36:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:38 crc kubenswrapper[5017]: I0129 06:36:38.472466 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:38 crc kubenswrapper[5017]: I0129 06:36:38.472540 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:38 crc kubenswrapper[5017]: I0129 06:36:38.472556 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:38 crc kubenswrapper[5017]: I0129 06:36:38.472583 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:38 crc kubenswrapper[5017]: I0129 06:36:38.472601 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:38Z","lastTransitionTime":"2026-01-29T06:36:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:38 crc kubenswrapper[5017]: I0129 06:36:38.576755 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:38 crc kubenswrapper[5017]: I0129 06:36:38.576833 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:38 crc kubenswrapper[5017]: I0129 06:36:38.576849 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:38 crc kubenswrapper[5017]: I0129 06:36:38.576879 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:38 crc kubenswrapper[5017]: I0129 06:36:38.576897 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:38Z","lastTransitionTime":"2026-01-29T06:36:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:38 crc kubenswrapper[5017]: I0129 06:36:38.681331 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:38 crc kubenswrapper[5017]: I0129 06:36:38.681390 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:38 crc kubenswrapper[5017]: I0129 06:36:38.681407 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:38 crc kubenswrapper[5017]: I0129 06:36:38.681433 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:38 crc kubenswrapper[5017]: I0129 06:36:38.681455 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:38Z","lastTransitionTime":"2026-01-29T06:36:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:38 crc kubenswrapper[5017]: I0129 06:36:38.785602 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:38 crc kubenswrapper[5017]: I0129 06:36:38.785664 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:38 crc kubenswrapper[5017]: I0129 06:36:38.785683 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:38 crc kubenswrapper[5017]: I0129 06:36:38.785711 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:38 crc kubenswrapper[5017]: I0129 06:36:38.785729 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:38Z","lastTransitionTime":"2026-01-29T06:36:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:38 crc kubenswrapper[5017]: I0129 06:36:38.892670 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:38 crc kubenswrapper[5017]: I0129 06:36:38.892742 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:38 crc kubenswrapper[5017]: I0129 06:36:38.892764 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:38 crc kubenswrapper[5017]: I0129 06:36:38.892799 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:38 crc kubenswrapper[5017]: I0129 06:36:38.892822 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:38Z","lastTransitionTime":"2026-01-29T06:36:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:38 crc kubenswrapper[5017]: I0129 06:36:38.996301 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:38 crc kubenswrapper[5017]: I0129 06:36:38.996358 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:38 crc kubenswrapper[5017]: I0129 06:36:38.996371 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:38 crc kubenswrapper[5017]: I0129 06:36:38.996394 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:38 crc kubenswrapper[5017]: I0129 06:36:38.996412 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:38Z","lastTransitionTime":"2026-01-29T06:36:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:39 crc kubenswrapper[5017]: I0129 06:36:39.099575 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:39 crc kubenswrapper[5017]: I0129 06:36:39.099636 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:39 crc kubenswrapper[5017]: I0129 06:36:39.099657 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:39 crc kubenswrapper[5017]: I0129 06:36:39.099683 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:39 crc kubenswrapper[5017]: I0129 06:36:39.099702 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:39Z","lastTransitionTime":"2026-01-29T06:36:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:39 crc kubenswrapper[5017]: I0129 06:36:39.203498 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:39 crc kubenswrapper[5017]: I0129 06:36:39.203562 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:39 crc kubenswrapper[5017]: I0129 06:36:39.203581 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:39 crc kubenswrapper[5017]: I0129 06:36:39.203612 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:39 crc kubenswrapper[5017]: I0129 06:36:39.203632 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:39Z","lastTransitionTime":"2026-01-29T06:36:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:39 crc kubenswrapper[5017]: I0129 06:36:39.307308 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:39 crc kubenswrapper[5017]: I0129 06:36:39.307365 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:39 crc kubenswrapper[5017]: I0129 06:36:39.307382 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:39 crc kubenswrapper[5017]: I0129 06:36:39.307412 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:39 crc kubenswrapper[5017]: I0129 06:36:39.307429 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:39Z","lastTransitionTime":"2026-01-29T06:36:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:39 crc kubenswrapper[5017]: I0129 06:36:39.324232 5017 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 07:46:48.933632639 +0000 UTC Jan 29 06:36:39 crc kubenswrapper[5017]: I0129 06:36:39.410881 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:39 crc kubenswrapper[5017]: I0129 06:36:39.411008 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:39 crc kubenswrapper[5017]: I0129 06:36:39.411032 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:39 crc kubenswrapper[5017]: I0129 06:36:39.411059 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:39 crc kubenswrapper[5017]: I0129 06:36:39.411076 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:39Z","lastTransitionTime":"2026-01-29T06:36:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:39 crc kubenswrapper[5017]: I0129 06:36:39.515117 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:39 crc kubenswrapper[5017]: I0129 06:36:39.515200 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:39 crc kubenswrapper[5017]: I0129 06:36:39.515228 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:39 crc kubenswrapper[5017]: I0129 06:36:39.515268 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:39 crc kubenswrapper[5017]: I0129 06:36:39.515290 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:39Z","lastTransitionTime":"2026-01-29T06:36:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:39 crc kubenswrapper[5017]: I0129 06:36:39.621406 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:39 crc kubenswrapper[5017]: I0129 06:36:39.621493 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:39 crc kubenswrapper[5017]: I0129 06:36:39.621522 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:39 crc kubenswrapper[5017]: I0129 06:36:39.621555 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:39 crc kubenswrapper[5017]: I0129 06:36:39.621574 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:39Z","lastTransitionTime":"2026-01-29T06:36:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:39 crc kubenswrapper[5017]: I0129 06:36:39.726063 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:39 crc kubenswrapper[5017]: I0129 06:36:39.726133 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:39 crc kubenswrapper[5017]: I0129 06:36:39.726151 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:39 crc kubenswrapper[5017]: I0129 06:36:39.726180 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:39 crc kubenswrapper[5017]: I0129 06:36:39.726200 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:39Z","lastTransitionTime":"2026-01-29T06:36:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:39 crc kubenswrapper[5017]: I0129 06:36:39.829749 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:39 crc kubenswrapper[5017]: I0129 06:36:39.829827 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:39 crc kubenswrapper[5017]: I0129 06:36:39.829866 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:39 crc kubenswrapper[5017]: I0129 06:36:39.829906 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:39 crc kubenswrapper[5017]: I0129 06:36:39.829929 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:39Z","lastTransitionTime":"2026-01-29T06:36:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:39 crc kubenswrapper[5017]: I0129 06:36:39.932853 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:39 crc kubenswrapper[5017]: I0129 06:36:39.932892 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:39 crc kubenswrapper[5017]: I0129 06:36:39.932904 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:39 crc kubenswrapper[5017]: I0129 06:36:39.932923 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:39 crc kubenswrapper[5017]: I0129 06:36:39.932936 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:39Z","lastTransitionTime":"2026-01-29T06:36:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:40 crc kubenswrapper[5017]: I0129 06:36:40.036249 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:40 crc kubenswrapper[5017]: I0129 06:36:40.036313 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:40 crc kubenswrapper[5017]: I0129 06:36:40.036331 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:40 crc kubenswrapper[5017]: I0129 06:36:40.036360 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:40 crc kubenswrapper[5017]: I0129 06:36:40.036384 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:40Z","lastTransitionTime":"2026-01-29T06:36:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:40 crc kubenswrapper[5017]: I0129 06:36:40.140007 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:40 crc kubenswrapper[5017]: I0129 06:36:40.140068 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:40 crc kubenswrapper[5017]: I0129 06:36:40.140090 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:40 crc kubenswrapper[5017]: I0129 06:36:40.140117 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:40 crc kubenswrapper[5017]: I0129 06:36:40.140137 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:40Z","lastTransitionTime":"2026-01-29T06:36:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:40 crc kubenswrapper[5017]: I0129 06:36:40.244034 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:40 crc kubenswrapper[5017]: I0129 06:36:40.244107 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:40 crc kubenswrapper[5017]: I0129 06:36:40.244133 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:40 crc kubenswrapper[5017]: I0129 06:36:40.244159 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:40 crc kubenswrapper[5017]: I0129 06:36:40.244178 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:40Z","lastTransitionTime":"2026-01-29T06:36:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:40 crc kubenswrapper[5017]: I0129 06:36:40.315579 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xn4bq" Jan 29 06:36:40 crc kubenswrapper[5017]: I0129 06:36:40.315784 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 06:36:40 crc kubenswrapper[5017]: E0129 06:36:40.316005 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xn4bq" podUID="0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f" Jan 29 06:36:40 crc kubenswrapper[5017]: I0129 06:36:40.316178 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 06:36:40 crc kubenswrapper[5017]: E0129 06:36:40.316310 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 06:36:40 crc kubenswrapper[5017]: E0129 06:36:40.316456 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 06:36:40 crc kubenswrapper[5017]: I0129 06:36:40.317360 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 06:36:40 crc kubenswrapper[5017]: E0129 06:36:40.317931 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 06:36:40 crc kubenswrapper[5017]: I0129 06:36:40.324661 5017 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 20:53:16.326693597 +0000 UTC Jan 29 06:36:40 crc kubenswrapper[5017]: I0129 06:36:40.347572 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:40 crc kubenswrapper[5017]: I0129 06:36:40.347626 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:40 crc kubenswrapper[5017]: I0129 06:36:40.347650 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:40 crc kubenswrapper[5017]: I0129 06:36:40.347680 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:40 crc kubenswrapper[5017]: I0129 06:36:40.347701 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:40Z","lastTransitionTime":"2026-01-29T06:36:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:40 crc kubenswrapper[5017]: I0129 06:36:40.451860 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:40 crc kubenswrapper[5017]: I0129 06:36:40.451919 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:40 crc kubenswrapper[5017]: I0129 06:36:40.451945 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:40 crc kubenswrapper[5017]: I0129 06:36:40.452029 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:40 crc kubenswrapper[5017]: I0129 06:36:40.452052 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:40Z","lastTransitionTime":"2026-01-29T06:36:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:40 crc kubenswrapper[5017]: I0129 06:36:40.555939 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:40 crc kubenswrapper[5017]: I0129 06:36:40.556035 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:40 crc kubenswrapper[5017]: I0129 06:36:40.556056 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:40 crc kubenswrapper[5017]: I0129 06:36:40.556081 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:40 crc kubenswrapper[5017]: I0129 06:36:40.556099 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:40Z","lastTransitionTime":"2026-01-29T06:36:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:40 crc kubenswrapper[5017]: I0129 06:36:40.659471 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:40 crc kubenswrapper[5017]: I0129 06:36:40.659538 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:40 crc kubenswrapper[5017]: I0129 06:36:40.659553 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:40 crc kubenswrapper[5017]: I0129 06:36:40.659582 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:40 crc kubenswrapper[5017]: I0129 06:36:40.659600 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:40Z","lastTransitionTime":"2026-01-29T06:36:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:40 crc kubenswrapper[5017]: I0129 06:36:40.764474 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:40 crc kubenswrapper[5017]: I0129 06:36:40.764544 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:40 crc kubenswrapper[5017]: I0129 06:36:40.764570 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:40 crc kubenswrapper[5017]: I0129 06:36:40.764608 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:40 crc kubenswrapper[5017]: I0129 06:36:40.764631 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:40Z","lastTransitionTime":"2026-01-29T06:36:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:40 crc kubenswrapper[5017]: I0129 06:36:40.868320 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:40 crc kubenswrapper[5017]: I0129 06:36:40.868400 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:40 crc kubenswrapper[5017]: I0129 06:36:40.868421 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:40 crc kubenswrapper[5017]: I0129 06:36:40.868455 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:40 crc kubenswrapper[5017]: I0129 06:36:40.868489 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:40Z","lastTransitionTime":"2026-01-29T06:36:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:40 crc kubenswrapper[5017]: I0129 06:36:40.972149 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:40 crc kubenswrapper[5017]: I0129 06:36:40.972226 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:40 crc kubenswrapper[5017]: I0129 06:36:40.972244 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:40 crc kubenswrapper[5017]: I0129 06:36:40.972290 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:40 crc kubenswrapper[5017]: I0129 06:36:40.972308 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:40Z","lastTransitionTime":"2026-01-29T06:36:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:41 crc kubenswrapper[5017]: I0129 06:36:41.076099 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:41 crc kubenswrapper[5017]: I0129 06:36:41.076168 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:41 crc kubenswrapper[5017]: I0129 06:36:41.076185 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:41 crc kubenswrapper[5017]: I0129 06:36:41.076212 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:41 crc kubenswrapper[5017]: I0129 06:36:41.076233 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:41Z","lastTransitionTime":"2026-01-29T06:36:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:41 crc kubenswrapper[5017]: I0129 06:36:41.180951 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:41 crc kubenswrapper[5017]: I0129 06:36:41.181081 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:41 crc kubenswrapper[5017]: I0129 06:36:41.181108 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:41 crc kubenswrapper[5017]: I0129 06:36:41.181144 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:41 crc kubenswrapper[5017]: I0129 06:36:41.181171 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:41Z","lastTransitionTime":"2026-01-29T06:36:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:41 crc kubenswrapper[5017]: I0129 06:36:41.290574 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:41 crc kubenswrapper[5017]: I0129 06:36:41.290664 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:41 crc kubenswrapper[5017]: I0129 06:36:41.290695 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:41 crc kubenswrapper[5017]: I0129 06:36:41.290724 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:41 crc kubenswrapper[5017]: I0129 06:36:41.290747 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:41Z","lastTransitionTime":"2026-01-29T06:36:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:41 crc kubenswrapper[5017]: I0129 06:36:41.326627 5017 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 03:38:44.605368956 +0000 UTC Jan 29 06:36:41 crc kubenswrapper[5017]: I0129 06:36:41.395051 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:41 crc kubenswrapper[5017]: I0129 06:36:41.395110 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:41 crc kubenswrapper[5017]: I0129 06:36:41.395121 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:41 crc kubenswrapper[5017]: I0129 06:36:41.395261 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:41 crc kubenswrapper[5017]: I0129 06:36:41.395278 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:41Z","lastTransitionTime":"2026-01-29T06:36:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:41 crc kubenswrapper[5017]: I0129 06:36:41.498874 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:41 crc kubenswrapper[5017]: I0129 06:36:41.498944 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:41 crc kubenswrapper[5017]: I0129 06:36:41.498996 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:41 crc kubenswrapper[5017]: I0129 06:36:41.499023 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:41 crc kubenswrapper[5017]: I0129 06:36:41.499042 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:41Z","lastTransitionTime":"2026-01-29T06:36:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:36:41 crc kubenswrapper[5017]: I0129 06:36:41.603090 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:41 crc kubenswrapper[5017]: I0129 06:36:41.603157 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:41 crc kubenswrapper[5017]: I0129 06:36:41.603192 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:41 crc kubenswrapper[5017]: I0129 06:36:41.603221 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:41 crc kubenswrapper[5017]: I0129 06:36:41.603239 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:41Z","lastTransitionTime":"2026-01-29T06:36:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:41 crc kubenswrapper[5017]: I0129 06:36:41.708137 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:41 crc kubenswrapper[5017]: I0129 06:36:41.708189 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:41 crc kubenswrapper[5017]: I0129 06:36:41.708207 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:41 crc kubenswrapper[5017]: I0129 06:36:41.708234 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:41 crc kubenswrapper[5017]: I0129 06:36:41.708332 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:41Z","lastTransitionTime":"2026-01-29T06:36:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:36:41 crc kubenswrapper[5017]: I0129 06:36:41.802407 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:36:41 crc kubenswrapper[5017]: I0129 06:36:41.802601 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:36:41 crc kubenswrapper[5017]: I0129 06:36:41.802683 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:36:41 crc kubenswrapper[5017]: I0129 06:36:41.802777 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:36:41 crc kubenswrapper[5017]: I0129 06:36:41.802810 5017 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:36:41Z","lastTransitionTime":"2026-01-29T06:36:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 29 06:36:41 crc kubenswrapper[5017]: I0129 06:36:41.886234 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-5gwnp"]
Jan 29 06:36:41 crc kubenswrapper[5017]: I0129 06:36:41.887154 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5gwnp"
Jan 29 06:36:41 crc kubenswrapper[5017]: I0129 06:36:41.892294 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Jan 29 06:36:41 crc kubenswrapper[5017]: I0129 06:36:41.892491 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Jan 29 06:36:41 crc kubenswrapper[5017]: I0129 06:36:41.892808 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Jan 29 06:36:41 crc kubenswrapper[5017]: I0129 06:36:41.893738 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Jan 29 06:36:41 crc kubenswrapper[5017]: I0129 06:36:41.944561 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-46ch5" podStartSLOduration=65.944525355 podStartE2EDuration="1m5.944525355s" podCreationTimestamp="2026-01-29 06:35:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:36:41.943800057 +0000 UTC m=+88.318247697" watchObservedRunningTime="2026-01-29 06:36:41.944525355 +0000 UTC m=+88.318973005"
Jan 29 06:36:41 crc kubenswrapper[5017]: I0129 06:36:41.965925 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=67.965883329 podStartE2EDuration="1m7.965883329s" podCreationTimestamp="2026-01-29 06:35:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:36:41.964481644 +0000 UTC m=+88.338929324" watchObservedRunningTime="2026-01-29 06:36:41.965883329 +0000 UTC m=+88.340330979"
Jan 29 06:36:41 crc kubenswrapper[5017]: I0129 06:36:41.974194 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/e6330fe4-abf9-4574-a229-a1bc708c5ce0-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-5gwnp\" (UID: \"e6330fe4-abf9-4574-a229-a1bc708c5ce0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5gwnp"
Jan 29 06:36:41 crc kubenswrapper[5017]: I0129 06:36:41.974295 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e6330fe4-abf9-4574-a229-a1bc708c5ce0-service-ca\") pod \"cluster-version-operator-5c965bbfc6-5gwnp\" (UID: \"e6330fe4-abf9-4574-a229-a1bc708c5ce0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5gwnp"
Jan 29 06:36:41 crc kubenswrapper[5017]: I0129 06:36:41.974334 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/e6330fe4-abf9-4574-a229-a1bc708c5ce0-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-5gwnp\" (UID: \"e6330fe4-abf9-4574-a229-a1bc708c5ce0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5gwnp"
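
The pod_startup_latency_tracker.go:104 entries above report podStartSLOduration values that line up exactly with watchObservedRunningTime minus podCreationTimestamp (the pull timestamps are the zero time because no image pull was needed). A small stdlib-only sketch of that arithmetic, using the ovnkube-control-plane entry's timestamps:

    // Sketch: reproduce podStartSLOduration from the logged timestamps.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Go's default time.Time format, as printed in the log entries above.
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        created, _ := time.Parse(layout, "2026-01-29 06:35:36 +0000 UTC")
        observed, _ := time.Parse(layout, "2026-01-29 06:36:41.944525355 +0000 UTC")
        fmt.Println(observed.Sub(created)) // 1m5.944525355s, i.e. podStartSLOduration=65.944525355
    }
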
\"cluster-version-operator-5c965bbfc6-5gwnp\" (UID: \"e6330fe4-abf9-4574-a229-a1bc708c5ce0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5gwnp" Jan 29 06:36:41 crc kubenswrapper[5017]: I0129 06:36:41.974389 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e6330fe4-abf9-4574-a229-a1bc708c5ce0-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-5gwnp\" (UID: \"e6330fe4-abf9-4574-a229-a1bc708c5ce0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5gwnp" Jan 29 06:36:41 crc kubenswrapper[5017]: I0129 06:36:41.974467 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e6330fe4-abf9-4574-a229-a1bc708c5ce0-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-5gwnp\" (UID: \"e6330fe4-abf9-4574-a229-a1bc708c5ce0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5gwnp" Jan 29 06:36:42 crc kubenswrapper[5017]: I0129 06:36:42.004618 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=32.004585927 podStartE2EDuration="32.004585927s" podCreationTimestamp="2026-01-29 06:36:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:36:41.983796847 +0000 UTC m=+88.358244497" watchObservedRunningTime="2026-01-29 06:36:42.004585927 +0000 UTC m=+88.379033547" Jan 29 06:36:42 crc kubenswrapper[5017]: I0129 06:36:42.075904 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/e6330fe4-abf9-4574-a229-a1bc708c5ce0-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-5gwnp\" (UID: \"e6330fe4-abf9-4574-a229-a1bc708c5ce0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5gwnp" Jan 29 06:36:42 crc kubenswrapper[5017]: I0129 06:36:42.076054 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e6330fe4-abf9-4574-a229-a1bc708c5ce0-service-ca\") pod \"cluster-version-operator-5c965bbfc6-5gwnp\" (UID: \"e6330fe4-abf9-4574-a229-a1bc708c5ce0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5gwnp" Jan 29 06:36:42 crc kubenswrapper[5017]: I0129 06:36:42.076093 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/e6330fe4-abf9-4574-a229-a1bc708c5ce0-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-5gwnp\" (UID: \"e6330fe4-abf9-4574-a229-a1bc708c5ce0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5gwnp" Jan 29 06:36:42 crc kubenswrapper[5017]: I0129 06:36:42.076145 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e6330fe4-abf9-4574-a229-a1bc708c5ce0-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-5gwnp\" (UID: \"e6330fe4-abf9-4574-a229-a1bc708c5ce0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5gwnp" Jan 29 06:36:42 crc kubenswrapper[5017]: I0129 06:36:42.076223 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/e6330fe4-abf9-4574-a229-a1bc708c5ce0-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-5gwnp\" (UID: \"e6330fe4-abf9-4574-a229-a1bc708c5ce0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5gwnp" Jan 29 06:36:42 crc kubenswrapper[5017]: I0129 06:36:42.076778 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/e6330fe4-abf9-4574-a229-a1bc708c5ce0-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-5gwnp\" (UID: \"e6330fe4-abf9-4574-a229-a1bc708c5ce0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5gwnp" Jan 29 06:36:42 crc kubenswrapper[5017]: I0129 06:36:42.076132 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/e6330fe4-abf9-4574-a229-a1bc708c5ce0-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-5gwnp\" (UID: \"e6330fe4-abf9-4574-a229-a1bc708c5ce0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5gwnp" Jan 29 06:36:42 crc kubenswrapper[5017]: I0129 06:36:42.077718 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e6330fe4-abf9-4574-a229-a1bc708c5ce0-service-ca\") pod \"cluster-version-operator-5c965bbfc6-5gwnp\" (UID: \"e6330fe4-abf9-4574-a229-a1bc708c5ce0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5gwnp" Jan 29 06:36:42 crc kubenswrapper[5017]: I0129 06:36:42.078730 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=67.078704541 podStartE2EDuration="1m7.078704541s" podCreationTimestamp="2026-01-29 06:35:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:36:42.078008893 +0000 UTC m=+88.452456543" watchObservedRunningTime="2026-01-29 06:36:42.078704541 +0000 UTC m=+88.453152191" Jan 29 06:36:42 crc kubenswrapper[5017]: I0129 06:36:42.092050 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e6330fe4-abf9-4574-a229-a1bc708c5ce0-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-5gwnp\" (UID: \"e6330fe4-abf9-4574-a229-a1bc708c5ce0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5gwnp" Jan 29 06:36:42 crc kubenswrapper[5017]: I0129 06:36:42.112412 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e6330fe4-abf9-4574-a229-a1bc708c5ce0-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-5gwnp\" (UID: \"e6330fe4-abf9-4574-a229-a1bc708c5ce0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5gwnp" Jan 29 06:36:42 crc kubenswrapper[5017]: I0129 06:36:42.113566 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-vwppb" podStartSLOduration=68.113545472 podStartE2EDuration="1m8.113545472s" podCreationTimestamp="2026-01-29 06:35:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:36:42.113313896 +0000 UTC m=+88.487761546" watchObservedRunningTime="2026-01-29 06:36:42.113545472 +0000 UTC m=+88.487993132" Jan 29 06:36:42 crc 
kubenswrapper[5017]: I0129 06:36:42.190153 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-qwq46" podStartSLOduration=68.190121957 podStartE2EDuration="1m8.190121957s" podCreationTimestamp="2026-01-29 06:35:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:36:42.189679286 +0000 UTC m=+88.564126896" watchObservedRunningTime="2026-01-29 06:36:42.190121957 +0000 UTC m=+88.564569567" Jan 29 06:36:42 crc kubenswrapper[5017]: I0129 06:36:42.190546 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podStartSLOduration=68.190541887 podStartE2EDuration="1m8.190541887s" podCreationTimestamp="2026-01-29 06:35:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:36:42.172466415 +0000 UTC m=+88.546914035" watchObservedRunningTime="2026-01-29 06:36:42.190541887 +0000 UTC m=+88.564989497" Jan 29 06:36:42 crc kubenswrapper[5017]: I0129 06:36:42.222612 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5gwnp" Jan 29 06:36:42 crc kubenswrapper[5017]: I0129 06:36:42.239522 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-m2gbd" podStartSLOduration=67.239501412 podStartE2EDuration="1m7.239501412s" podCreationTimestamp="2026-01-29 06:35:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:36:42.226923067 +0000 UTC m=+88.601370677" watchObservedRunningTime="2026-01-29 06:36:42.239501412 +0000 UTC m=+88.613949022" Jan 29 06:36:42 crc kubenswrapper[5017]: I0129 06:36:42.254225 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=9.254197059 podStartE2EDuration="9.254197059s" podCreationTimestamp="2026-01-29 06:36:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:36:42.253135733 +0000 UTC m=+88.627583343" watchObservedRunningTime="2026-01-29 06:36:42.254197059 +0000 UTC m=+88.628644669" Jan 29 06:36:42 crc kubenswrapper[5017]: I0129 06:36:42.289850 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-9jkcd" podStartSLOduration=67.2898287 podStartE2EDuration="1m7.2898287s" podCreationTimestamp="2026-01-29 06:35:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:36:42.288996129 +0000 UTC m=+88.663443749" watchObservedRunningTime="2026-01-29 06:36:42.2898287 +0000 UTC m=+88.664276310" Jan 29 06:36:42 crc kubenswrapper[5017]: I0129 06:36:42.315937 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xn4bq" Jan 29 06:36:42 crc kubenswrapper[5017]: I0129 06:36:42.315976 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 06:36:42 crc kubenswrapper[5017]: I0129 06:36:42.316038 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 06:36:42 crc kubenswrapper[5017]: I0129 06:36:42.316112 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 06:36:42 crc kubenswrapper[5017]: E0129 06:36:42.316221 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xn4bq" podUID="0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f" Jan 29 06:36:42 crc kubenswrapper[5017]: E0129 06:36:42.316324 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 06:36:42 crc kubenswrapper[5017]: E0129 06:36:42.316467 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 06:36:42 crc kubenswrapper[5017]: E0129 06:36:42.316567 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 06:36:42 crc kubenswrapper[5017]: I0129 06:36:42.327199 5017 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 11:09:49.420412401 +0000 UTC Jan 29 06:36:42 crc kubenswrapper[5017]: I0129 06:36:42.327260 5017 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 29 06:36:42 crc kubenswrapper[5017]: I0129 06:36:42.336289 5017 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 29 06:36:42 crc kubenswrapper[5017]: I0129 06:36:42.951843 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5gwnp" event={"ID":"e6330fe4-abf9-4574-a229-a1bc708c5ce0","Type":"ContainerStarted","Data":"fdc4ba1b850e19f7e744b66142ff58b9dc60dcc1a982111c7da84035ba33c133"} Jan 29 06:36:42 crc kubenswrapper[5017]: I0129 06:36:42.951948 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5gwnp" event={"ID":"e6330fe4-abf9-4574-a229-a1bc708c5ce0","Type":"ContainerStarted","Data":"fae0d663dbeee906304c033ae1042275e413effdb4bf820c52c262d3c920da7f"} Jan 29 06:36:42 crc kubenswrapper[5017]: I0129 06:36:42.980397 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5gwnp" podStartSLOduration=68.980360528 podStartE2EDuration="1m8.980360528s" podCreationTimestamp="2026-01-29 06:35:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:36:42.978216524 +0000 UTC m=+89.352664134" watchObservedRunningTime="2026-01-29 06:36:42.980360528 +0000 UTC m=+89.354808168" Jan 29 06:36:44 crc kubenswrapper[5017]: I0129 06:36:44.316011 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 06:36:44 crc kubenswrapper[5017]: I0129 06:36:44.318351 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 06:36:44 crc kubenswrapper[5017]: E0129 06:36:44.318342 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 06:36:44 crc kubenswrapper[5017]: E0129 06:36:44.318611 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 06:36:44 crc kubenswrapper[5017]: I0129 06:36:44.319044 5017 util.go:30] "No sandbox for pod can be found. 
Jan 29 06:36:44 crc kubenswrapper[5017]: I0129 06:36:44.319133 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xn4bq"
Jan 29 06:36:44 crc kubenswrapper[5017]: E0129 06:36:44.319352 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xn4bq" podUID="0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f"
Jan 29 06:36:44 crc kubenswrapper[5017]: E0129 06:36:44.319605 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 06:36:44 crc kubenswrapper[5017]: I0129 06:36:44.320128 5017 scope.go:117] "RemoveContainer" containerID="62449c59cb465c0cc04ebf89eb13daf672b92da42f90dad1f43ac106cf837b48"
Jan 29 06:36:44 crc kubenswrapper[5017]: E0129 06:36:44.320611 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-wqgmk_openshift-ovn-kubernetes(02dd5727-894c-4693-9bc7-83dd88ce118c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" podUID="02dd5727-894c-4693-9bc7-83dd88ce118c"
Jan 29 06:36:46 crc kubenswrapper[5017]: I0129 06:36:46.316087 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 06:36:46 crc kubenswrapper[5017]: I0129 06:36:46.316181 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 06:36:46 crc kubenswrapper[5017]: I0129 06:36:46.316093 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 06:36:46 crc kubenswrapper[5017]: I0129 06:36:46.316202 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xn4bq"
Jan 29 06:36:46 crc kubenswrapper[5017]: E0129 06:36:46.316746 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 06:36:46 crc kubenswrapper[5017]: E0129 06:36:46.316847 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
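
The CrashLoopBackOff delays in these entries (back-off 40s for ovnkube-controller here, back-off 10s for kube-multus further down) are consistent with the kubelet's doubling schedule: 10s, 20s, 40s, ..., capped at 5m. A sketch of that schedule; the base and cap are the kubelet's usual defaults, assumed here rather than read from this cluster's configuration:

    // Sketch: kubelet-style crash-loop back-off (assumed base 10s, cap 5m).
    package main

    import (
        "fmt"
        "time"
    )

    func crashLoopDelay(restarts int) time.Duration {
        d := 10 * time.Second
        for i := 0; i < restarts; i++ {
            d *= 2
            if d >= 5*time.Minute {
                return 5 * time.Minute
            }
        }
        return d
    }

    func main() {
        for r := 0; r <= 5; r++ {
            fmt.Println(r, crashLoopDelay(r)) // 10s 20s 40s 1m20s 2m40s 5m
        }
    }
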
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 06:36:46 crc kubenswrapper[5017]: E0129 06:36:46.316903 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 06:36:46 crc kubenswrapper[5017]: E0129 06:36:46.317110 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xn4bq" podUID="0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f" Jan 29 06:36:48 crc kubenswrapper[5017]: I0129 06:36:48.316368 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 06:36:48 crc kubenswrapper[5017]: I0129 06:36:48.316387 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 06:36:48 crc kubenswrapper[5017]: E0129 06:36:48.316644 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 06:36:48 crc kubenswrapper[5017]: I0129 06:36:48.316387 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xn4bq" Jan 29 06:36:48 crc kubenswrapper[5017]: E0129 06:36:48.317230 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 06:36:48 crc kubenswrapper[5017]: E0129 06:36:48.317292 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xn4bq" podUID="0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f" Jan 29 06:36:48 crc kubenswrapper[5017]: I0129 06:36:48.317890 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 06:36:48 crc kubenswrapper[5017]: E0129 06:36:48.318095 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 06:36:50 crc kubenswrapper[5017]: I0129 06:36:50.316154 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 06:36:50 crc kubenswrapper[5017]: I0129 06:36:50.316334 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 06:36:50 crc kubenswrapper[5017]: E0129 06:36:50.316356 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 06:36:50 crc kubenswrapper[5017]: I0129 06:36:50.316465 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 06:36:50 crc kubenswrapper[5017]: I0129 06:36:50.316332 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xn4bq" Jan 29 06:36:50 crc kubenswrapper[5017]: E0129 06:36:50.316557 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 06:36:50 crc kubenswrapper[5017]: E0129 06:36:50.316698 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 06:36:50 crc kubenswrapper[5017]: E0129 06:36:50.316872 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xn4bq" podUID="0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f" Jan 29 06:36:52 crc kubenswrapper[5017]: I0129 06:36:52.315873 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 06:36:52 crc kubenswrapper[5017]: I0129 06:36:52.316104 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 06:36:52 crc kubenswrapper[5017]: I0129 06:36:52.315893 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xn4bq" Jan 29 06:36:52 crc kubenswrapper[5017]: E0129 06:36:52.316162 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 06:36:52 crc kubenswrapper[5017]: I0129 06:36:52.316472 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 06:36:52 crc kubenswrapper[5017]: E0129 06:36:52.316617 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 06:36:52 crc kubenswrapper[5017]: E0129 06:36:52.316665 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 06:36:52 crc kubenswrapper[5017]: E0129 06:36:52.316817 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xn4bq" podUID="0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f" Jan 29 06:36:53 crc kubenswrapper[5017]: I0129 06:36:53.113543 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f-metrics-certs\") pod \"network-metrics-daemon-xn4bq\" (UID: \"0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f\") " pod="openshift-multus/network-metrics-daemon-xn4bq" Jan 29 06:36:53 crc kubenswrapper[5017]: E0129 06:36:53.113732 5017 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 06:36:53 crc kubenswrapper[5017]: E0129 06:36:53.114256 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f-metrics-certs podName:0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f nodeName:}" failed. 
No retries permitted until 2026-01-29 06:37:57.114226387 +0000 UTC m=+163.488674007 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f-metrics-certs") pod "network-metrics-daemon-xn4bq" (UID: "0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f") : object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 29 06:36:54 crc kubenswrapper[5017]: I0129 06:36:54.315694 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xn4bq"
Jan 29 06:36:54 crc kubenswrapper[5017]: I0129 06:36:54.317996 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 06:36:54 crc kubenswrapper[5017]: E0129 06:36:54.317951 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xn4bq" podUID="0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f"
Jan 29 06:36:54 crc kubenswrapper[5017]: I0129 06:36:54.318048 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 06:36:54 crc kubenswrapper[5017]: E0129 06:36:54.318174 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 06:36:54 crc kubenswrapper[5017]: I0129 06:36:54.318241 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 06:36:54 crc kubenswrapper[5017]: E0129 06:36:54.318542 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 06:36:54 crc kubenswrapper[5017]: E0129 06:36:54.318818 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 06:36:56 crc kubenswrapper[5017]: I0129 06:36:56.315831 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 06:36:56 crc kubenswrapper[5017]: I0129 06:36:56.315884 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
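
Every "network is not ready" entry in this stretch comes from the same check: there is still nothing in /etc/kubernetes/cni/net.d/, because ovnkube-controller, which writes that config, is itself crash-looping. A stdlib-only sketch of the kind of directory scan a CNI loader performs; the accepted extensions mirror libcni's defaults and should be treated as an assumption, not a quote of this kubelet's exact logic:

    // Sketch: scan a CNI conf directory the way a CNI loader might.
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func cniConfigs(dir string) ([]string, error) {
        entries, err := os.ReadDir(dir)
        if err != nil {
            return nil, err
        }
        var confs []string
        for _, e := range entries {
            switch filepath.Ext(e.Name()) {
            case ".conf", ".conflist", ".json": // extensions libcni accepts (assumed)
                confs = append(confs, filepath.Join(dir, e.Name()))
            }
        }
        return confs, nil
    }

    func main() {
        confs, err := cniConfigs("/etc/kubernetes/cni/net.d")
        if err != nil || len(confs) == 0 {
            fmt.Println("no CNI configuration file; network stays NotReady")
            return
        }
        fmt.Println("found:", confs)
    }
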
Jan 29 06:36:56 crc kubenswrapper[5017]: I0129 06:36:56.316005 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 06:36:56 crc kubenswrapper[5017]: I0129 06:36:56.316078 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xn4bq"
Jan 29 06:36:56 crc kubenswrapper[5017]: E0129 06:36:56.316209 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 06:36:56 crc kubenswrapper[5017]: E0129 06:36:56.316299 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xn4bq" podUID="0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f"
Jan 29 06:36:56 crc kubenswrapper[5017]: E0129 06:36:56.316474 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 06:36:56 crc kubenswrapper[5017]: E0129 06:36:56.316711 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 06:36:57 crc kubenswrapper[5017]: I0129 06:36:57.317078 5017 scope.go:117] "RemoveContainer" containerID="62449c59cb465c0cc04ebf89eb13daf672b92da42f90dad1f43ac106cf837b48"
Jan 29 06:36:57 crc kubenswrapper[5017]: E0129 06:36:57.317467 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-wqgmk_openshift-ovn-kubernetes(02dd5727-894c-4693-9bc7-83dd88ce118c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" podUID="02dd5727-894c-4693-9bc7-83dd88ce118c"
Jan 29 06:36:58 crc kubenswrapper[5017]: I0129 06:36:58.315552 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xn4bq"
Jan 29 06:36:58 crc kubenswrapper[5017]: I0129 06:36:58.315604 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 06:36:58 crc kubenswrapper[5017]: I0129 06:36:58.315567 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 06:36:58 crc kubenswrapper[5017]: I0129 06:36:58.315756 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 06:36:58 crc kubenswrapper[5017]: E0129 06:36:58.315747 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xn4bq" podUID="0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f" Jan 29 06:36:58 crc kubenswrapper[5017]: E0129 06:36:58.315892 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 06:36:58 crc kubenswrapper[5017]: E0129 06:36:58.317627 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 06:36:58 crc kubenswrapper[5017]: E0129 06:36:58.317786 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 06:36:59 crc kubenswrapper[5017]: I0129 06:36:59.340580 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 29 06:37:00 crc kubenswrapper[5017]: I0129 06:37:00.315612 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 06:37:00 crc kubenswrapper[5017]: I0129 06:37:00.317271 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 06:37:00 crc kubenswrapper[5017]: I0129 06:37:00.317371 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 06:37:00 crc kubenswrapper[5017]: I0129 06:37:00.317622 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xn4bq" Jan 29 06:37:00 crc kubenswrapper[5017]: E0129 06:37:00.317515 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 06:37:00 crc kubenswrapper[5017]: E0129 06:37:00.321485 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xn4bq" podUID="0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f" Jan 29 06:37:00 crc kubenswrapper[5017]: E0129 06:37:00.321604 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 06:37:00 crc kubenswrapper[5017]: E0129 06:37:00.321682 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 06:37:02 crc kubenswrapper[5017]: I0129 06:37:02.315405 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 06:37:02 crc kubenswrapper[5017]: I0129 06:37:02.315517 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xn4bq" Jan 29 06:37:02 crc kubenswrapper[5017]: I0129 06:37:02.315435 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 06:37:02 crc kubenswrapper[5017]: E0129 06:37:02.315619 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 06:37:02 crc kubenswrapper[5017]: I0129 06:37:02.315687 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 06:37:02 crc kubenswrapper[5017]: E0129 06:37:02.315805 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xn4bq" podUID="0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f" Jan 29 06:37:02 crc kubenswrapper[5017]: E0129 06:37:02.315937 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 06:37:02 crc kubenswrapper[5017]: E0129 06:37:02.316127 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 06:37:04 crc kubenswrapper[5017]: I0129 06:37:04.315595 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 06:37:04 crc kubenswrapper[5017]: I0129 06:37:04.315612 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xn4bq" Jan 29 06:37:04 crc kubenswrapper[5017]: I0129 06:37:04.315663 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 06:37:04 crc kubenswrapper[5017]: I0129 06:37:04.315700 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 06:37:04 crc kubenswrapper[5017]: E0129 06:37:04.318331 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 06:37:04 crc kubenswrapper[5017]: E0129 06:37:04.318514 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 06:37:04 crc kubenswrapper[5017]: E0129 06:37:04.318744 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 06:37:04 crc kubenswrapper[5017]: E0129 06:37:04.318864 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xn4bq" podUID="0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f" Jan 29 06:37:04 crc kubenswrapper[5017]: I0129 06:37:04.358546 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=5.358507955 podStartE2EDuration="5.358507955s" podCreationTimestamp="2026-01-29 06:36:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:37:04.355667373 +0000 UTC m=+110.730115023" watchObservedRunningTime="2026-01-29 06:37:04.358507955 +0000 UTC m=+110.732955605" Jan 29 06:37:06 crc kubenswrapper[5017]: I0129 06:37:06.316017 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 06:37:06 crc kubenswrapper[5017]: I0129 06:37:06.316063 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xn4bq" Jan 29 06:37:06 crc kubenswrapper[5017]: I0129 06:37:06.316020 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 06:37:06 crc kubenswrapper[5017]: I0129 06:37:06.316042 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 06:37:06 crc kubenswrapper[5017]: E0129 06:37:06.316290 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 06:37:06 crc kubenswrapper[5017]: E0129 06:37:06.316557 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 06:37:06 crc kubenswrapper[5017]: E0129 06:37:06.316606 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 06:37:06 crc kubenswrapper[5017]: E0129 06:37:06.316656 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xn4bq" podUID="0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f" Jan 29 06:37:08 crc kubenswrapper[5017]: I0129 06:37:08.315177 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xn4bq" Jan 29 06:37:08 crc kubenswrapper[5017]: I0129 06:37:08.315245 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 06:37:08 crc kubenswrapper[5017]: I0129 06:37:08.315338 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 06:37:08 crc kubenswrapper[5017]: E0129 06:37:08.315562 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xn4bq" podUID="0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f" Jan 29 06:37:08 crc kubenswrapper[5017]: I0129 06:37:08.315612 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 06:37:08 crc kubenswrapper[5017]: E0129 06:37:08.315856 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 06:37:08 crc kubenswrapper[5017]: E0129 06:37:08.315765 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 06:37:08 crc kubenswrapper[5017]: E0129 06:37:08.315917 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 06:37:09 crc kubenswrapper[5017]: I0129 06:37:09.060676 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9jkcd_8ae056f0-e054-45da-9638-73074b7c8a3b/kube-multus/1.log" Jan 29 06:37:09 crc kubenswrapper[5017]: I0129 06:37:09.061578 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9jkcd_8ae056f0-e054-45da-9638-73074b7c8a3b/kube-multus/0.log" Jan 29 06:37:09 crc kubenswrapper[5017]: I0129 06:37:09.061616 5017 generic.go:334] "Generic (PLEG): container finished" podID="8ae056f0-e054-45da-9638-73074b7c8a3b" containerID="a7429ec96a9c4de8ad2620003f7b591d38fef6a019e78098e9ba927e3fd79252" exitCode=1 Jan 29 06:37:09 crc kubenswrapper[5017]: I0129 06:37:09.061652 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9jkcd" event={"ID":"8ae056f0-e054-45da-9638-73074b7c8a3b","Type":"ContainerDied","Data":"a7429ec96a9c4de8ad2620003f7b591d38fef6a019e78098e9ba927e3fd79252"} Jan 29 06:37:09 crc kubenswrapper[5017]: I0129 06:37:09.061688 5017 scope.go:117] "RemoveContainer" containerID="17d52fc64784408e447773d44de6e83d32029a77073a0c1ee206a51d81eda62f" Jan 29 06:37:09 crc kubenswrapper[5017]: I0129 06:37:09.062111 5017 scope.go:117] "RemoveContainer" containerID="a7429ec96a9c4de8ad2620003f7b591d38fef6a019e78098e9ba927e3fd79252" Jan 29 06:37:09 crc kubenswrapper[5017]: E0129 06:37:09.062331 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-9jkcd_openshift-multus(8ae056f0-e054-45da-9638-73074b7c8a3b)\"" pod="openshift-multus/multus-9jkcd" podUID="8ae056f0-e054-45da-9638-73074b7c8a3b" Jan 29 06:37:10 crc kubenswrapper[5017]: I0129 06:37:10.067305 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9jkcd_8ae056f0-e054-45da-9638-73074b7c8a3b/kube-multus/1.log" Jan 29 06:37:10 crc kubenswrapper[5017]: I0129 06:37:10.315597 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xn4bq" Jan 29 06:37:10 crc kubenswrapper[5017]: I0129 06:37:10.315635 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 06:37:10 crc kubenswrapper[5017]: I0129 06:37:10.315683 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 06:37:10 crc kubenswrapper[5017]: E0129 06:37:10.316489 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xn4bq" podUID="0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f" Jan 29 06:37:10 crc kubenswrapper[5017]: I0129 06:37:10.315726 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 06:37:10 crc kubenswrapper[5017]: E0129 06:37:10.316683 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 06:37:10 crc kubenswrapper[5017]: E0129 06:37:10.318143 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 06:37:10 crc kubenswrapper[5017]: I0129 06:37:10.318553 5017 scope.go:117] "RemoveContainer" containerID="62449c59cb465c0cc04ebf89eb13daf672b92da42f90dad1f43ac106cf837b48" Jan 29 06:37:10 crc kubenswrapper[5017]: E0129 06:37:10.318613 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 06:37:11 crc kubenswrapper[5017]: I0129 06:37:11.077258 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wqgmk_02dd5727-894c-4693-9bc7-83dd88ce118c/ovnkube-controller/3.log" Jan 29 06:37:11 crc kubenswrapper[5017]: I0129 06:37:11.080985 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" event={"ID":"02dd5727-894c-4693-9bc7-83dd88ce118c","Type":"ContainerStarted","Data":"a1cfa089e161c584c6087c42a29c10704135ccb63e4d59248cf9966e6f95c2a1"} Jan 29 06:37:11 crc kubenswrapper[5017]: I0129 06:37:11.081529 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" Jan 29 06:37:11 crc kubenswrapper[5017]: I0129 06:37:11.229576 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" podStartSLOduration=96.22955702 podStartE2EDuration="1m36.22955702s" podCreationTimestamp="2026-01-29 06:35:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:37:11.116052771 +0000 UTC m=+117.490500381" watchObservedRunningTime="2026-01-29 06:37:11.22955702 +0000 UTC m=+117.604004630" Jan 29 06:37:11 crc kubenswrapper[5017]: I0129 06:37:11.230324 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-xn4bq"] Jan 29 06:37:11 crc kubenswrapper[5017]: I0129 06:37:11.230463 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-xn4bq" Jan 29 06:37:11 crc kubenswrapper[5017]: E0129 06:37:11.230605 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xn4bq" podUID="0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f" Jan 29 06:37:12 crc kubenswrapper[5017]: I0129 06:37:12.315391 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 06:37:12 crc kubenswrapper[5017]: I0129 06:37:12.315391 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 06:37:12 crc kubenswrapper[5017]: E0129 06:37:12.315642 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 06:37:12 crc kubenswrapper[5017]: E0129 06:37:12.315764 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 06:37:12 crc kubenswrapper[5017]: I0129 06:37:12.315521 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 06:37:12 crc kubenswrapper[5017]: E0129 06:37:12.316030 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 06:37:13 crc kubenswrapper[5017]: I0129 06:37:13.315169 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xn4bq" Jan 29 06:37:13 crc kubenswrapper[5017]: E0129 06:37:13.315362 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xn4bq" podUID="0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f" Jan 29 06:37:14 crc kubenswrapper[5017]: E0129 06:37:14.281271 5017 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 29 06:37:14 crc kubenswrapper[5017]: I0129 06:37:14.315441 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 06:37:14 crc kubenswrapper[5017]: I0129 06:37:14.315480 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 06:37:14 crc kubenswrapper[5017]: I0129 06:37:14.316988 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 06:37:14 crc kubenswrapper[5017]: E0129 06:37:14.316979 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 06:37:14 crc kubenswrapper[5017]: E0129 06:37:14.317094 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 06:37:14 crc kubenswrapper[5017]: E0129 06:37:14.317229 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 06:37:14 crc kubenswrapper[5017]: E0129 06:37:14.419921 5017 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 29 06:37:15 crc kubenswrapper[5017]: I0129 06:37:15.315903 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xn4bq" Jan 29 06:37:15 crc kubenswrapper[5017]: E0129 06:37:15.316109 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xn4bq" podUID="0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f" Jan 29 06:37:16 crc kubenswrapper[5017]: I0129 06:37:16.315665 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 06:37:16 crc kubenswrapper[5017]: E0129 06:37:16.316175 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 06:37:16 crc kubenswrapper[5017]: I0129 06:37:16.315789 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 06:37:16 crc kubenswrapper[5017]: E0129 06:37:16.316279 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 06:37:16 crc kubenswrapper[5017]: I0129 06:37:16.315721 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 06:37:16 crc kubenswrapper[5017]: E0129 06:37:16.316547 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 06:37:17 crc kubenswrapper[5017]: I0129 06:37:17.316006 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xn4bq" Jan 29 06:37:17 crc kubenswrapper[5017]: E0129 06:37:17.317200 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xn4bq" podUID="0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f" Jan 29 06:37:18 crc kubenswrapper[5017]: I0129 06:37:18.316238 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 06:37:18 crc kubenswrapper[5017]: I0129 06:37:18.316277 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 06:37:18 crc kubenswrapper[5017]: E0129 06:37:18.316405 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 06:37:18 crc kubenswrapper[5017]: I0129 06:37:18.316451 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 06:37:18 crc kubenswrapper[5017]: E0129 06:37:18.316599 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 06:37:18 crc kubenswrapper[5017]: E0129 06:37:18.316702 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 06:37:19 crc kubenswrapper[5017]: I0129 06:37:19.315585 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xn4bq" Jan 29 06:37:19 crc kubenswrapper[5017]: E0129 06:37:19.315795 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xn4bq" podUID="0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f" Jan 29 06:37:19 crc kubenswrapper[5017]: E0129 06:37:19.421403 5017 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 29 06:37:20 crc kubenswrapper[5017]: I0129 06:37:20.316040 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 06:37:20 crc kubenswrapper[5017]: I0129 06:37:20.316040 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 06:37:20 crc kubenswrapper[5017]: E0129 06:37:20.316226 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 06:37:20 crc kubenswrapper[5017]: I0129 06:37:20.316294 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 06:37:20 crc kubenswrapper[5017]: E0129 06:37:20.316356 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 06:37:20 crc kubenswrapper[5017]: E0129 06:37:20.316415 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 06:37:21 crc kubenswrapper[5017]: I0129 06:37:21.315112 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xn4bq" Jan 29 06:37:21 crc kubenswrapper[5017]: E0129 06:37:21.315254 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xn4bq" podUID="0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f" Jan 29 06:37:22 crc kubenswrapper[5017]: I0129 06:37:22.315903 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 06:37:22 crc kubenswrapper[5017]: I0129 06:37:22.316019 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 06:37:22 crc kubenswrapper[5017]: I0129 06:37:22.316168 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 06:37:22 crc kubenswrapper[5017]: E0129 06:37:22.316158 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 06:37:22 crc kubenswrapper[5017]: E0129 06:37:22.316554 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 06:37:22 crc kubenswrapper[5017]: E0129 06:37:22.316482 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 06:37:23 crc kubenswrapper[5017]: I0129 06:37:23.317868 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-xn4bq" Jan 29 06:37:23 crc kubenswrapper[5017]: E0129 06:37:23.318104 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xn4bq" podUID="0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f" Jan 29 06:37:23 crc kubenswrapper[5017]: I0129 06:37:23.321801 5017 scope.go:117] "RemoveContainer" containerID="a7429ec96a9c4de8ad2620003f7b591d38fef6a019e78098e9ba927e3fd79252" Jan 29 06:37:24 crc kubenswrapper[5017]: I0129 06:37:24.137380 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9jkcd_8ae056f0-e054-45da-9638-73074b7c8a3b/kube-multus/1.log" Jan 29 06:37:24 crc kubenswrapper[5017]: I0129 06:37:24.138065 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9jkcd" event={"ID":"8ae056f0-e054-45da-9638-73074b7c8a3b","Type":"ContainerStarted","Data":"81abd23058c4d929cf01618abcab59dae4c62bd31ead39da8d9483cff0713526"} Jan 29 06:37:24 crc kubenswrapper[5017]: I0129 06:37:24.315462 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 06:37:24 crc kubenswrapper[5017]: I0129 06:37:24.315544 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 06:37:24 crc kubenswrapper[5017]: I0129 06:37:24.315568 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 06:37:24 crc kubenswrapper[5017]: E0129 06:37:24.316509 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 06:37:24 crc kubenswrapper[5017]: E0129 06:37:24.316611 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 06:37:24 crc kubenswrapper[5017]: E0129 06:37:24.316714 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 06:37:24 crc kubenswrapper[5017]: E0129 06:37:24.422017 5017 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 29 06:37:25 crc kubenswrapper[5017]: I0129 06:37:25.315147 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xn4bq" Jan 29 06:37:25 crc kubenswrapper[5017]: E0129 06:37:25.315529 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xn4bq" podUID="0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f" Jan 29 06:37:26 crc kubenswrapper[5017]: I0129 06:37:26.316291 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 06:37:26 crc kubenswrapper[5017]: I0129 06:37:26.316356 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 06:37:26 crc kubenswrapper[5017]: I0129 06:37:26.316356 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 06:37:26 crc kubenswrapper[5017]: E0129 06:37:26.316490 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 06:37:26 crc kubenswrapper[5017]: E0129 06:37:26.316744 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 06:37:26 crc kubenswrapper[5017]: E0129 06:37:26.316833 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 06:37:27 crc kubenswrapper[5017]: I0129 06:37:27.316197 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-xn4bq" Jan 29 06:37:27 crc kubenswrapper[5017]: E0129 06:37:27.316436 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xn4bq" podUID="0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f" Jan 29 06:37:28 crc kubenswrapper[5017]: I0129 06:37:28.315325 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 06:37:28 crc kubenswrapper[5017]: I0129 06:37:28.315387 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 06:37:28 crc kubenswrapper[5017]: I0129 06:37:28.315396 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 06:37:28 crc kubenswrapper[5017]: E0129 06:37:28.315495 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 06:37:28 crc kubenswrapper[5017]: E0129 06:37:28.315591 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 06:37:28 crc kubenswrapper[5017]: E0129 06:37:28.315778 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 06:37:28 crc kubenswrapper[5017]: I0129 06:37:28.549307 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" Jan 29 06:37:29 crc kubenswrapper[5017]: I0129 06:37:29.316065 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xn4bq" Jan 29 06:37:29 crc kubenswrapper[5017]: E0129 06:37:29.316278 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xn4bq" podUID="0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f" Jan 29 06:37:30 crc kubenswrapper[5017]: I0129 06:37:30.317464 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 06:37:30 crc kubenswrapper[5017]: I0129 06:37:30.317761 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 06:37:30 crc kubenswrapper[5017]: I0129 06:37:30.317860 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 06:37:30 crc kubenswrapper[5017]: I0129 06:37:30.321921 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 29 06:37:30 crc kubenswrapper[5017]: I0129 06:37:30.322139 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 29 06:37:30 crc kubenswrapper[5017]: I0129 06:37:30.322142 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 29 06:37:30 crc kubenswrapper[5017]: I0129 06:37:30.322200 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 29 06:37:31 crc kubenswrapper[5017]: I0129 06:37:31.315547 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xn4bq" Jan 29 06:37:31 crc kubenswrapper[5017]: I0129 06:37:31.320567 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 29 06:37:31 crc kubenswrapper[5017]: I0129 06:37:31.322996 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 29 06:37:32 crc kubenswrapper[5017]: I0129 06:37:32.926095 5017 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 29 06:37:32 crc kubenswrapper[5017]: I0129 06:37:32.969754 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-wmnch"] Jan 29 06:37:32 crc kubenswrapper[5017]: I0129 06:37:32.970330 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wmnch" Jan 29 06:37:32 crc kubenswrapper[5017]: I0129 06:37:32.970758 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-cbp4z"] Jan 29 06:37:32 crc kubenswrapper[5017]: I0129 06:37:32.971426 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-4kcwn"] Jan 29 06:37:32 crc kubenswrapper[5017]: I0129 06:37:32.971641 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-cbp4z" Jan 29 06:37:32 crc kubenswrapper[5017]: I0129 06:37:32.971820 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4kcwn" Jan 29 06:37:32 crc kubenswrapper[5017]: W0129 06:37:32.972914 5017 reflector.go:561] object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2": failed to list *v1.Secret: secrets "route-controller-manager-sa-dockercfg-h2zr2" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-route-controller-manager": no relationship found between node 'crc' and this object Jan 29 06:37:32 crc kubenswrapper[5017]: E0129 06:37:32.972981 5017 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-h2zr2\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"route-controller-manager-sa-dockercfg-h2zr2\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-route-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 29 06:37:32 crc kubenswrapper[5017]: W0129 06:37:32.973201 5017 reflector.go:561] object-"openshift-route-controller-manager"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-route-controller-manager": no relationship found between node 'crc' and this object Jan 29 06:37:32 crc kubenswrapper[5017]: E0129 06:37:32.973222 5017 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-route-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 29 06:37:32 crc kubenswrapper[5017]: W0129 06:37:32.973267 5017 reflector.go:561] object-"openshift-route-controller-manager"/"config": failed to list *v1.ConfigMap: configmaps "config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-route-controller-manager": no relationship found between node 'crc' and this object Jan 29 06:37:32 crc kubenswrapper[5017]: E0129 06:37:32.973281 5017 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-route-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 29 06:37:32 crc kubenswrapper[5017]: W0129 06:37:32.973323 5017 reflector.go:561] object-"openshift-route-controller-manager"/"client-ca": failed to list *v1.ConfigMap: configmaps "client-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-route-controller-manager": no relationship found between node 'crc' and this object Jan 29 06:37:32 crc kubenswrapper[5017]: E0129 06:37:32.973339 5017 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"client-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps 
\"client-ca\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-route-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 29 06:37:32 crc kubenswrapper[5017]: W0129 06:37:32.973473 5017 reflector.go:561] object-"openshift-machine-api"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-api": no relationship found between node 'crc' and this object Jan 29 06:37:32 crc kubenswrapper[5017]: E0129 06:37:32.973491 5017 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-api\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 29 06:37:32 crc kubenswrapper[5017]: W0129 06:37:32.973539 5017 reflector.go:561] object-"openshift-route-controller-manager"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-route-controller-manager": no relationship found between node 'crc' and this object Jan 29 06:37:32 crc kubenswrapper[5017]: E0129 06:37:32.973553 5017 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-route-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 29 06:37:32 crc kubenswrapper[5017]: W0129 06:37:32.973592 5017 reflector.go:561] object-"openshift-route-controller-manager"/"serving-cert": failed to list *v1.Secret: secrets "serving-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-route-controller-manager": no relationship found between node 'crc' and this object Jan 29 06:37:32 crc kubenswrapper[5017]: E0129 06:37:32.973607 5017 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-route-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 29 06:37:32 crc kubenswrapper[5017]: W0129 06:37:32.973725 5017 reflector.go:561] object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7": failed to list *v1.Secret: secrets "machine-api-operator-dockercfg-mfbb7" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-machine-api": no relationship found between node 'crc' and this object Jan 29 06:37:32 crc kubenswrapper[5017]: E0129 06:37:32.973742 5017 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-mfbb7\": Failed to watch *v1.Secret: failed 
to list *v1.Secret: secrets \"machine-api-operator-dockercfg-mfbb7\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-machine-api\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 29 06:37:32 crc kubenswrapper[5017]: W0129 06:37:32.974294 5017 reflector.go:561] object-"openshift-cluster-machine-approver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-cluster-machine-approver": no relationship found between node 'crc' and this object Jan 29 06:37:32 crc kubenswrapper[5017]: E0129 06:37:32.974344 5017 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-cluster-machine-approver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 29 06:37:32 crc kubenswrapper[5017]: W0129 06:37:32.975863 5017 reflector.go:561] object-"openshift-machine-api"/"machine-api-operator-tls": failed to list *v1.Secret: secrets "machine-api-operator-tls" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-machine-api": no relationship found between node 'crc' and this object Jan 29 06:37:32 crc kubenswrapper[5017]: E0129 06:37:32.975918 5017 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"machine-api-operator-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"machine-api-operator-tls\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-machine-api\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 29 06:37:32 crc kubenswrapper[5017]: W0129 06:37:32.976061 5017 reflector.go:561] object-"openshift-machine-api"/"machine-api-operator-images": failed to list *v1.ConfigMap: configmaps "machine-api-operator-images" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-api": no relationship found between node 'crc' and this object Jan 29 06:37:32 crc kubenswrapper[5017]: E0129 06:37:32.976076 5017 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"machine-api-operator-images\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"machine-api-operator-images\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-api\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 29 06:37:32 crc kubenswrapper[5017]: W0129 06:37:32.976110 5017 reflector.go:561] object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4": failed to list *v1.Secret: secrets "machine-approver-sa-dockercfg-nl2j4" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-cluster-machine-approver": no relationship found between node 'crc' and this object Jan 29 06:37:32 crc kubenswrapper[5017]: E0129 06:37:32.976123 5017 reflector.go:158] "Unhandled Error" 
err="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-nl2j4\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"machine-approver-sa-dockercfg-nl2j4\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-cluster-machine-approver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 29 06:37:32 crc kubenswrapper[5017]: I0129 06:37:32.976165 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-5zqqn"] Jan 29 06:37:32 crc kubenswrapper[5017]: I0129 06:37:32.976840 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-44ss4"] Jan 29 06:37:32 crc kubenswrapper[5017]: I0129 06:37:32.977244 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-5zqqn" Jan 29 06:37:32 crc kubenswrapper[5017]: I0129 06:37:32.977736 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-44ss4" Jan 29 06:37:32 crc kubenswrapper[5017]: I0129 06:37:32.977825 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-zlflf"] Jan 29 06:37:32 crc kubenswrapper[5017]: I0129 06:37:32.978570 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zlflf" Jan 29 06:37:32 crc kubenswrapper[5017]: W0129 06:37:32.979009 5017 reflector.go:561] object-"openshift-machine-api"/"kube-rbac-proxy": failed to list *v1.ConfigMap: configmaps "kube-rbac-proxy" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-api": no relationship found between node 'crc' and this object Jan 29 06:37:32 crc kubenswrapper[5017]: W0129 06:37:32.979055 5017 reflector.go:561] object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-cluster-machine-approver": no relationship found between node 'crc' and this object Jan 29 06:37:32 crc kubenswrapper[5017]: W0129 06:37:32.979009 5017 reflector.go:561] object-"openshift-cluster-machine-approver"/"kube-rbac-proxy": failed to list *v1.ConfigMap: configmaps "kube-rbac-proxy" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-cluster-machine-approver": no relationship found between node 'crc' and this object Jan 29 06:37:32 crc kubenswrapper[5017]: E0129 06:37:32.979080 5017 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-cluster-machine-approver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 29 06:37:32 crc kubenswrapper[5017]: E0129 06:37:32.979055 5017 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"kube-rbac-proxy\": Failed to watch *v1.ConfigMap: failed to list 
*v1.ConfigMap: configmaps \"kube-rbac-proxy\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-api\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 29 06:37:32 crc kubenswrapper[5017]: E0129 06:37:32.979084 5017 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-rbac-proxy\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-cluster-machine-approver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 29 06:37:32 crc kubenswrapper[5017]: W0129 06:37:32.979095 5017 reflector.go:561] object-"openshift-cluster-machine-approver"/"machine-approver-config": failed to list *v1.ConfigMap: configmaps "machine-approver-config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-cluster-machine-approver": no relationship found between node 'crc' and this object Jan 29 06:37:32 crc kubenswrapper[5017]: E0129 06:37:32.979175 5017 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"machine-approver-config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-cluster-machine-approver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 29 06:37:32 crc kubenswrapper[5017]: W0129 06:37:32.979121 5017 reflector.go:561] object-"openshift-machine-api"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-api": no relationship found between node 'crc' and this object Jan 29 06:37:32 crc kubenswrapper[5017]: E0129 06:37:32.979239 5017 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-api\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 29 06:37:32 crc kubenswrapper[5017]: W0129 06:37:32.980383 5017 reflector.go:561] object-"openshift-controller-manager"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object Jan 29 06:37:32 crc kubenswrapper[5017]: E0129 06:37:32.980416 5017 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 29 06:37:32 crc kubenswrapper[5017]: I0129 06:37:32.980454 5017 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-n8qjs"] Jan 29 06:37:32 crc kubenswrapper[5017]: I0129 06:37:32.980870 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-n8qjs" Jan 29 06:37:32 crc kubenswrapper[5017]: I0129 06:37:32.981469 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-9nv7f"] Jan 29 06:37:32 crc kubenswrapper[5017]: I0129 06:37:32.982006 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-9nv7f" Jan 29 06:37:32 crc kubenswrapper[5017]: I0129 06:37:32.983724 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-fbqdg"] Jan 29 06:37:32 crc kubenswrapper[5017]: I0129 06:37:32.984200 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-fbqdg" Jan 29 06:37:32 crc kubenswrapper[5017]: W0129 06:37:32.987599 5017 reflector.go:561] object-"openshift-controller-manager"/"config": failed to list *v1.ConfigMap: configmaps "config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object Jan 29 06:37:32 crc kubenswrapper[5017]: E0129 06:37:32.987649 5017 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 29 06:37:32 crc kubenswrapper[5017]: W0129 06:37:32.987757 5017 reflector.go:561] object-"openshift-cluster-machine-approver"/"machine-approver-tls": failed to list *v1.Secret: secrets "machine-approver-tls" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-cluster-machine-approver": no relationship found between node 'crc' and this object Jan 29 06:37:32 crc kubenswrapper[5017]: E0129 06:37:32.987772 5017 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"machine-approver-tls\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-cluster-machine-approver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 29 06:37:32 crc kubenswrapper[5017]: W0129 06:37:32.989118 5017 reflector.go:561] object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-cluster-samples-operator": no relationship found between node 'crc' and this object Jan 29 06:37:32 crc kubenswrapper[5017]: E0129 06:37:32.989140 5017 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is 
forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-cluster-samples-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 29 06:37:32 crc kubenswrapper[5017]: W0129 06:37:32.989164 5017 reflector.go:561] object-"openshift-controller-manager"/"serving-cert": failed to list *v1.Secret: secrets "serving-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object Jan 29 06:37:32 crc kubenswrapper[5017]: E0129 06:37:32.989232 5017 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 29 06:37:32 crc kubenswrapper[5017]: W0129 06:37:32.989268 5017 reflector.go:561] object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq": failed to list *v1.Secret: secrets "oauth-apiserver-sa-dockercfg-6r2bq" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-oauth-apiserver": no relationship found between node 'crc' and this object Jan 29 06:37:32 crc kubenswrapper[5017]: W0129 06:37:32.989374 5017 reflector.go:561] object-"openshift-cluster-samples-operator"/"samples-operator-tls": failed to list *v1.Secret: secrets "samples-operator-tls" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-cluster-samples-operator": no relationship found between node 'crc' and this object Jan 29 06:37:32 crc kubenswrapper[5017]: E0129 06:37:32.989393 5017 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"samples-operator-tls\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-cluster-samples-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 29 06:37:32 crc kubenswrapper[5017]: E0129 06:37:32.989355 5017 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-6r2bq\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"oauth-apiserver-sa-dockercfg-6r2bq\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-oauth-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 29 06:37:32 crc kubenswrapper[5017]: W0129 06:37:32.989439 5017 reflector.go:561] object-"openshift-cluster-samples-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-cluster-samples-operator": no relationship found between node 'crc' and this object Jan 29 06:37:32 crc kubenswrapper[5017]: E0129 06:37:32.989458 5017 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list 
*v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-cluster-samples-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 29 06:37:32 crc kubenswrapper[5017]: W0129 06:37:32.989460 5017 reflector.go:561] object-"openshift-oauth-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-oauth-apiserver": no relationship found between node 'crc' and this object Jan 29 06:37:32 crc kubenswrapper[5017]: W0129 06:37:32.989484 5017 reflector.go:561] object-"openshift-controller-manager"/"openshift-global-ca": failed to list *v1.ConfigMap: configmaps "openshift-global-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object Jan 29 06:37:32 crc kubenswrapper[5017]: W0129 06:37:32.989511 5017 reflector.go:561] object-"openshift-controller-manager"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object Jan 29 06:37:32 crc kubenswrapper[5017]: E0129 06:37:32.989500 5017 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-oauth-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 29 06:37:32 crc kubenswrapper[5017]: E0129 06:37:32.989511 5017 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"openshift-global-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-global-ca\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 29 06:37:32 crc kubenswrapper[5017]: E0129 06:37:32.989548 5017 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 29 06:37:32 crc kubenswrapper[5017]: I0129 06:37:32.989828 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 29 06:37:32 crc kubenswrapper[5017]: W0129 06:37:32.989870 5017 reflector.go:561] object-"openshift-config-operator"/"config-operator-serving-cert": failed to list *v1.Secret: secrets "config-operator-serving-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-config-operator": no relationship found between node 'crc' and this object Jan 29 06:37:32 crc 
kubenswrapper[5017]: E0129 06:37:32.989891 5017 reflector.go:158] "Unhandled Error" err="object-\"openshift-config-operator\"/\"config-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"config-operator-serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-config-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 29 06:37:32 crc kubenswrapper[5017]: W0129 06:37:32.990102 5017 reflector.go:561] object-"openshift-config-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-config-operator": no relationship found between node 'crc' and this object Jan 29 06:37:32 crc kubenswrapper[5017]: E0129 06:37:32.990119 5017 reflector.go:158] "Unhandled Error" err="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-config-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 29 06:37:32 crc kubenswrapper[5017]: I0129 06:37:32.990186 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 29 06:37:32 crc kubenswrapper[5017]: W0129 06:37:32.990264 5017 reflector.go:561] object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w": failed to list *v1.Secret: secrets "cluster-samples-operator-dockercfg-xpp9w" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-cluster-samples-operator": no relationship found between node 'crc' and this object Jan 29 06:37:32 crc kubenswrapper[5017]: E0129 06:37:32.990279 5017 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-xpp9w\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cluster-samples-operator-dockercfg-xpp9w\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-cluster-samples-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 29 06:37:32 crc kubenswrapper[5017]: W0129 06:37:32.990564 5017 reflector.go:561] object-"openshift-oauth-apiserver"/"encryption-config-1": failed to list *v1.Secret: secrets "encryption-config-1" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-oauth-apiserver": no relationship found between node 'crc' and this object Jan 29 06:37:32 crc kubenswrapper[5017]: E0129 06:37:32.990584 5017 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"encryption-config-1\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-oauth-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 29 06:37:32 crc kubenswrapper[5017]: I0129 06:37:32.990614 5017 reflector.go:368] Caches populated for *v1.ConfigMap from 
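[Note] The burst of reflector.go:561/158 failures above is the node authorizer at work: the kubelet identity system:node:crc may only read a Secret or ConfigMap once the API server's authorizer graph links that object to a pod bound to the node. During a cold start the kubelet's watches race that graph population, so the first list attempts fail with "no relationship found between node 'crc' and this object" and succeed shortly after (the "Caches populated" lines). A minimal Go sketch that reproduces the same authorization check from outside the kubelet, assuming a kubeconfig permitted to create SubjectAccessReviews; the namespace and object names are taken from the log:

```go
// Ask the API server whether system:node:crc may list the ConfigMap that the
// reflector above failed on. While the pod-to-node edge is missing from the
// node authorizer graph, this returns allowed=false with a similar reason.
package main

import (
	"context"
	"fmt"

	authorizationv1 "k8s.io/api/authorization/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	sar := &authorizationv1.SubjectAccessReview{
		Spec: authorizationv1.SubjectAccessReviewSpec{
			User:   "system:node:crc",
			Groups: []string{"system:nodes"},
			ResourceAttributes: &authorizationv1.ResourceAttributes{
				Namespace: "openshift-controller-manager",
				Verb:      "list",
				Resource:  "configmaps",
				Name:      "config",
			},
		},
	}
	resp, err := cs.AuthorizationV1().SubjectAccessReviews().Create(
		context.TODO(), sar, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("allowed=%v reason=%q\n", resp.Status.Allowed, resp.Status.Reason)
}
```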
object-"openshift-console"/"openshift-service-ca.crt" Jan 29 06:37:32 crc kubenswrapper[5017]: W0129 06:37:32.990614 5017 reflector.go:561] object-"openshift-oauth-apiserver"/"audit-1": failed to list *v1.ConfigMap: configmaps "audit-1" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-oauth-apiserver": no relationship found between node 'crc' and this object Jan 29 06:37:32 crc kubenswrapper[5017]: E0129 06:37:32.990655 5017 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"audit-1\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"audit-1\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-oauth-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 29 06:37:32 crc kubenswrapper[5017]: I0129 06:37:32.990916 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 29 06:37:32 crc kubenswrapper[5017]: W0129 06:37:32.991081 5017 reflector.go:561] object-"openshift-oauth-apiserver"/"etcd-serving-ca": failed to list *v1.ConfigMap: configmaps "etcd-serving-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-oauth-apiserver": no relationship found between node 'crc' and this object Jan 29 06:37:32 crc kubenswrapper[5017]: E0129 06:37:32.991101 5017 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"etcd-serving-ca\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-oauth-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 29 06:37:32 crc kubenswrapper[5017]: I0129 06:37:32.995104 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 29 06:37:32 crc kubenswrapper[5017]: I0129 06:37:32.997220 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 29 06:37:32 crc kubenswrapper[5017]: I0129 06:37:32.997913 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.026848 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.026923 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.027108 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.027215 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.027302 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.027398 5017 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-console"/"kube-root-ca.crt" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.027462 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.027551 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.028370 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.030257 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-pd9qw"] Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.031015 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-pd9qw" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.031276 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-plx6n"] Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.032014 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-plx6n" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.036723 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.037388 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6tqcv"] Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.042111 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.042173 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.042274 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.042329 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.042849 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.043075 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.043547 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.043717 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.043945 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.044120 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.044290 
5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.046842 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.047083 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.050380 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-92jgp"] Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.050939 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-z5brc"] Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.051293 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rbhd6"] Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.051621 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-sckkt"] Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.052052 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-6tsc9"] Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.052262 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.052544 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-ddv7s"] Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.053172 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ddv7s" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.053416 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-sckkt" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.053613 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-6tsc9" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.053791 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6tqcv" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.054127 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-92jgp" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.054262 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rbhd6" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.054448 5017 util.go:30] "No sandbox for pod can be found. 
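[Note] Each reflector.go:368 "Caches populated" line marks one reflector finishing its initial list for a single named object: the kubelet runs a dedicated, field-selector-scoped list/watch per Secret or ConfigMap its pods reference, rather than watching whole namespaces (which is also why the earlier forbidden errors name individual objects). A minimal client-go sketch of that single-object pattern, with the namespace and ConfigMap name taken from the log and error handling trimmed:

```go
// Scope a reflector to exactly one ConfigMap via a metadata.name field
// selector, mirroring the per-object caches the kubelet populates above.
package main

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/fields"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// List/watch only configmaps/etcd-serving-ca in openshift-apiserver.
	lw := cache.NewListWatchFromClient(
		cs.CoreV1().RESTClient(),
		"configmaps",
		"openshift-apiserver",
		fields.OneTermEqualSelector("metadata.name", "etcd-serving-ca"),
	)
	store := cache.NewStore(cache.MetaNamespaceKeyFunc)
	r := cache.NewReflector(lw, &corev1.ConfigMap{}, store, 0)

	stop := make(chan struct{})
	go r.Run(stop) // the cache is "populated" once the initial list succeeds
	time.Sleep(2 * time.Second)
	fmt.Println("cached keys:", store.ListKeys())
	close(stop)
}
```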
Need to start a new one" pod="openshift-console/console-f9d7485db-z5brc" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.059853 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.060195 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.060200 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.060442 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.060469 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k92pt"] Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.060577 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.060734 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.060848 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.061013 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.061132 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k92pt" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.061138 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.061427 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-z6lr7"] Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.061507 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.062435 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.066528 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-z6lr7" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.066831 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-rlfl6"] Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.067223 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-rlfl6" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.067735 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.067792 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.067901 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tl7pb"] Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.068091 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.068152 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.068232 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.068275 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.068326 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.068414 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.068233 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.068535 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.068615 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.068627 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.068421 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.068756 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.070421 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.070558 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.072538 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tl7pb" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.073156 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.073183 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-j7bhs"] Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.073311 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.073933 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-j7bhs" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.075936 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.076586 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.086862 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-dhb9x"] Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.094844 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-dhb9x" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.102796 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.103054 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.126802 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.127304 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.128213 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.129103 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.129226 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.129241 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/c4f1aae3-1ca6-43d6-9a83-be14081c28df-audit\") pod \"apiserver-76f77b778f-pd9qw\" (UID: \"c4f1aae3-1ca6-43d6-9a83-be14081c28df\") " pod="openshift-apiserver/apiserver-76f77b778f-pd9qw" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 
06:37:33.129434 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c4f1aae3-1ca6-43d6-9a83-be14081c28df-trusted-ca-bundle\") pod \"apiserver-76f77b778f-pd9qw\" (UID: \"c4f1aae3-1ca6-43d6-9a83-be14081c28df\") " pod="openshift-apiserver/apiserver-76f77b778f-pd9qw" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.133492 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9889e9bc-b25e-4272-ad18-474c0d2fa2ce-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-6tqcv\" (UID: \"9889e9bc-b25e-4272-ad18-474c0d2fa2ce\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6tqcv" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.133527 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c4f1aae3-1ca6-43d6-9a83-be14081c28df-serving-cert\") pod \"apiserver-76f77b778f-pd9qw\" (UID: \"c4f1aae3-1ca6-43d6-9a83-be14081c28df\") " pod="openshift-apiserver/apiserver-76f77b778f-pd9qw" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.133554 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9889e9bc-b25e-4272-ad18-474c0d2fa2ce-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-6tqcv\" (UID: \"9889e9bc-b25e-4272-ad18-474c0d2fa2ce\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6tqcv" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.133594 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8d97f5ac-52f0-43fe-9a35-662481fe2c83-bound-sa-token\") pod \"ingress-operator-5b745b69d9-ddv7s\" (UID: \"8d97f5ac-52f0-43fe-9a35-662481fe2c83\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ddv7s" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.133611 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c4f1aae3-1ca6-43d6-9a83-be14081c28df-etcd-serving-ca\") pod \"apiserver-76f77b778f-pd9qw\" (UID: \"c4f1aae3-1ca6-43d6-9a83-be14081c28df\") " pod="openshift-apiserver/apiserver-76f77b778f-pd9qw" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.133628 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c4f1aae3-1ca6-43d6-9a83-be14081c28df-node-pullsecrets\") pod \"apiserver-76f77b778f-pd9qw\" (UID: \"c4f1aae3-1ca6-43d6-9a83-be14081c28df\") " pod="openshift-apiserver/apiserver-76f77b778f-pd9qw" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.133644 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/c4f1aae3-1ca6-43d6-9a83-be14081c28df-image-import-ca\") pod \"apiserver-76f77b778f-pd9qw\" (UID: \"c4f1aae3-1ca6-43d6-9a83-be14081c28df\") " pod="openshift-apiserver/apiserver-76f77b778f-pd9qw" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.133661 5017 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tct5v\" (UniqueName: \"kubernetes.io/projected/9889e9bc-b25e-4272-ad18-474c0d2fa2ce-kube-api-access-tct5v\") pod \"openshift-controller-manager-operator-756b6f6bc6-6tqcv\" (UID: \"9889e9bc-b25e-4272-ad18-474c0d2fa2ce\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6tqcv" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.133686 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/84ff2015-09a3-41e9-be73-b2952f0aebcf-trusted-ca\") pod \"console-operator-58897d9998-fbqdg\" (UID: \"84ff2015-09a3-41e9-be73-b2952f0aebcf\") " pod="openshift-console-operator/console-operator-58897d9998-fbqdg" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.133701 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8d97f5ac-52f0-43fe-9a35-662481fe2c83-metrics-tls\") pod \"ingress-operator-5b745b69d9-ddv7s\" (UID: \"8d97f5ac-52f0-43fe-9a35-662481fe2c83\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ddv7s" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.133714 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8d97f5ac-52f0-43fe-9a35-662481fe2c83-trusted-ca\") pod \"ingress-operator-5b745b69d9-ddv7s\" (UID: \"8d97f5ac-52f0-43fe-9a35-662481fe2c83\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ddv7s" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.133730 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c4f1aae3-1ca6-43d6-9a83-be14081c28df-etcd-client\") pod \"apiserver-76f77b778f-pd9qw\" (UID: \"c4f1aae3-1ca6-43d6-9a83-be14081c28df\") " pod="openshift-apiserver/apiserver-76f77b778f-pd9qw" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.133755 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c4f1aae3-1ca6-43d6-9a83-be14081c28df-encryption-config\") pod \"apiserver-76f77b778f-pd9qw\" (UID: \"c4f1aae3-1ca6-43d6-9a83-be14081c28df\") " pod="openshift-apiserver/apiserver-76f77b778f-pd9qw" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.133769 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6jbr\" (UniqueName: \"kubernetes.io/projected/c4f1aae3-1ca6-43d6-9a83-be14081c28df-kube-api-access-b6jbr\") pod \"apiserver-76f77b778f-pd9qw\" (UID: \"c4f1aae3-1ca6-43d6-9a83-be14081c28df\") " pod="openshift-apiserver/apiserver-76f77b778f-pd9qw" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.133785 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84ff2015-09a3-41e9-be73-b2952f0aebcf-config\") pod \"console-operator-58897d9998-fbqdg\" (UID: \"84ff2015-09a3-41e9-be73-b2952f0aebcf\") " pod="openshift-console-operator/console-operator-58897d9998-fbqdg" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.133808 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knrsf\" 
(UniqueName: \"kubernetes.io/projected/84ff2015-09a3-41e9-be73-b2952f0aebcf-kube-api-access-knrsf\") pod \"console-operator-58897d9998-fbqdg\" (UID: \"84ff2015-09a3-41e9-be73-b2952f0aebcf\") " pod="openshift-console-operator/console-operator-58897d9998-fbqdg" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.133839 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hp88\" (UniqueName: \"kubernetes.io/projected/8d97f5ac-52f0-43fe-9a35-662481fe2c83-kube-api-access-5hp88\") pod \"ingress-operator-5b745b69d9-ddv7s\" (UID: \"8d97f5ac-52f0-43fe-9a35-662481fe2c83\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ddv7s" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.133860 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2vsq\" (UniqueName: \"kubernetes.io/projected/0d65eec8-87c6-4fd9-9cca-d784e7e25232-kube-api-access-z2vsq\") pod \"migrator-59844c95c7-6tsc9\" (UID: \"0d65eec8-87c6-4fd9-9cca-d784e7e25232\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-6tsc9" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.133877 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84ff2015-09a3-41e9-be73-b2952f0aebcf-serving-cert\") pod \"console-operator-58897d9998-fbqdg\" (UID: \"84ff2015-09a3-41e9-be73-b2952f0aebcf\") " pod="openshift-console-operator/console-operator-58897d9998-fbqdg" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.130149 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-pws6m"] Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.133924 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4f1aae3-1ca6-43d6-9a83-be14081c28df-config\") pod \"apiserver-76f77b778f-pd9qw\" (UID: \"c4f1aae3-1ca6-43d6-9a83-be14081c28df\") " pod="openshift-apiserver/apiserver-76f77b778f-pd9qw" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.133948 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c4f1aae3-1ca6-43d6-9a83-be14081c28df-audit-dir\") pod \"apiserver-76f77b778f-pd9qw\" (UID: \"c4f1aae3-1ca6-43d6-9a83-be14081c28df\") " pod="openshift-apiserver/apiserver-76f77b778f-pd9qw" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.131284 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.133373 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.134690 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-777x9"] Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.135463 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-l5qpt"] Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.135773 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-cdswz"] Jan 29 06:37:33 crc 
Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.136157 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-pws6m"
Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.136302 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-wmnch"]
Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.136401 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-cdswz"
Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.137302 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-l5qpt"
Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.137324 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-777x9"
Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.138717 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j2rsb"]
Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.139596 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j2rsb"
Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.139597 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kvgh9"]
Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.141067 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-k9gnr"]
Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.141540 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-k9gnr"
Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.141763 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kvgh9"
Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.143782 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494470-2sw2z"]
Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.144434 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494470-2sw2z"
Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.146254 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8pr49"]
Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.146927 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8pr49"
Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.153258 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.156694 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-lthxv"]
Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.163248 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ksz9z"]
Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.163590 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-cbp4z"]
Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.163618 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-n8qjs"]
Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.163630 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-chjv7"]
Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.164409 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ksz9z"
Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.164405 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lthxv"
Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.164642 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-f8chq"]
Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.164873 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-chjv7"
Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.165014 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-fbqdg"]
Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.165029 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-5zqqn"]
Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.165129 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-f8chq"
Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.165949 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-9nv7f"]
Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.174010 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-92jgp"]
Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.174182 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.175461 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-5jhwr"]
Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.179371 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-44ss4"]
Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.179503 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-5jhwr"
Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.186294 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-zlflf"]
Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.187093 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-j7bhs"]
Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.189430 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-6tsc9"]
Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.189456 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-jnctv"]
Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.190373 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-jnctv"
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-jnctv" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.191173 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rbhd6"] Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.192407 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.192420 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k92pt"] Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.193462 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-cdswz"] Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.195411 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6tqcv"] Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.202228 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tl7pb"] Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.204268 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-777x9"] Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.212288 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-ddv7s"] Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.212365 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-plx6n"] Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.213117 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-pd9qw"] Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.221431 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494470-2sw2z"] Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.223051 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.226369 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-z5brc"] Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.229251 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8pr49"] Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.232700 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-pws6m"] Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.232755 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j2rsb"] Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.234002 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.234751 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/c4f1aae3-1ca6-43d6-9a83-be14081c28df-config\") pod \"apiserver-76f77b778f-pd9qw\" (UID: \"c4f1aae3-1ca6-43d6-9a83-be14081c28df\") " pod="openshift-apiserver/apiserver-76f77b778f-pd9qw" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.234814 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c4f1aae3-1ca6-43d6-9a83-be14081c28df-audit-dir\") pod \"apiserver-76f77b778f-pd9qw\" (UID: \"c4f1aae3-1ca6-43d6-9a83-be14081c28df\") " pod="openshift-apiserver/apiserver-76f77b778f-pd9qw" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.234844 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/c4f1aae3-1ca6-43d6-9a83-be14081c28df-audit\") pod \"apiserver-76f77b778f-pd9qw\" (UID: \"c4f1aae3-1ca6-43d6-9a83-be14081c28df\") " pod="openshift-apiserver/apiserver-76f77b778f-pd9qw" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.234880 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c4f1aae3-1ca6-43d6-9a83-be14081c28df-trusted-ca-bundle\") pod \"apiserver-76f77b778f-pd9qw\" (UID: \"c4f1aae3-1ca6-43d6-9a83-be14081c28df\") " pod="openshift-apiserver/apiserver-76f77b778f-pd9qw" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.234914 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9889e9bc-b25e-4272-ad18-474c0d2fa2ce-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-6tqcv\" (UID: \"9889e9bc-b25e-4272-ad18-474c0d2fa2ce\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6tqcv" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.234974 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c4f1aae3-1ca6-43d6-9a83-be14081c28df-serving-cert\") pod \"apiserver-76f77b778f-pd9qw\" (UID: \"c4f1aae3-1ca6-43d6-9a83-be14081c28df\") " pod="openshift-apiserver/apiserver-76f77b778f-pd9qw" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.235001 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9889e9bc-b25e-4272-ad18-474c0d2fa2ce-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-6tqcv\" (UID: \"9889e9bc-b25e-4272-ad18-474c0d2fa2ce\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6tqcv" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.235039 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8d97f5ac-52f0-43fe-9a35-662481fe2c83-bound-sa-token\") pod \"ingress-operator-5b745b69d9-ddv7s\" (UID: \"8d97f5ac-52f0-43fe-9a35-662481fe2c83\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ddv7s" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.235091 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c4f1aae3-1ca6-43d6-9a83-be14081c28df-etcd-serving-ca\") pod \"apiserver-76f77b778f-pd9qw\" (UID: \"c4f1aae3-1ca6-43d6-9a83-be14081c28df\") " pod="openshift-apiserver/apiserver-76f77b778f-pd9qw" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.235114 
5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c4f1aae3-1ca6-43d6-9a83-be14081c28df-node-pullsecrets\") pod \"apiserver-76f77b778f-pd9qw\" (UID: \"c4f1aae3-1ca6-43d6-9a83-be14081c28df\") " pod="openshift-apiserver/apiserver-76f77b778f-pd9qw" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.235138 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/c4f1aae3-1ca6-43d6-9a83-be14081c28df-image-import-ca\") pod \"apiserver-76f77b778f-pd9qw\" (UID: \"c4f1aae3-1ca6-43d6-9a83-be14081c28df\") " pod="openshift-apiserver/apiserver-76f77b778f-pd9qw" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.235163 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tct5v\" (UniqueName: \"kubernetes.io/projected/9889e9bc-b25e-4272-ad18-474c0d2fa2ce-kube-api-access-tct5v\") pod \"openshift-controller-manager-operator-756b6f6bc6-6tqcv\" (UID: \"9889e9bc-b25e-4272-ad18-474c0d2fa2ce\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6tqcv" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.235188 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/84ff2015-09a3-41e9-be73-b2952f0aebcf-trusted-ca\") pod \"console-operator-58897d9998-fbqdg\" (UID: \"84ff2015-09a3-41e9-be73-b2952f0aebcf\") " pod="openshift-console-operator/console-operator-58897d9998-fbqdg" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.235205 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-z6lr7"] Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.235211 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8d97f5ac-52f0-43fe-9a35-662481fe2c83-metrics-tls\") pod \"ingress-operator-5b745b69d9-ddv7s\" (UID: \"8d97f5ac-52f0-43fe-9a35-662481fe2c83\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ddv7s" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.235277 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8d97f5ac-52f0-43fe-9a35-662481fe2c83-trusted-ca\") pod \"ingress-operator-5b745b69d9-ddv7s\" (UID: \"8d97f5ac-52f0-43fe-9a35-662481fe2c83\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ddv7s" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.235303 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c4f1aae3-1ca6-43d6-9a83-be14081c28df-etcd-client\") pod \"apiserver-76f77b778f-pd9qw\" (UID: \"c4f1aae3-1ca6-43d6-9a83-be14081c28df\") " pod="openshift-apiserver/apiserver-76f77b778f-pd9qw" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.235341 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6jbr\" (UniqueName: \"kubernetes.io/projected/c4f1aae3-1ca6-43d6-9a83-be14081c28df-kube-api-access-b6jbr\") pod \"apiserver-76f77b778f-pd9qw\" (UID: \"c4f1aae3-1ca6-43d6-9a83-be14081c28df\") " pod="openshift-apiserver/apiserver-76f77b778f-pd9qw" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.235363 5017 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84ff2015-09a3-41e9-be73-b2952f0aebcf-config\") pod \"console-operator-58897d9998-fbqdg\" (UID: \"84ff2015-09a3-41e9-be73-b2952f0aebcf\") " pod="openshift-console-operator/console-operator-58897d9998-fbqdg" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.235384 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c4f1aae3-1ca6-43d6-9a83-be14081c28df-encryption-config\") pod \"apiserver-76f77b778f-pd9qw\" (UID: \"c4f1aae3-1ca6-43d6-9a83-be14081c28df\") " pod="openshift-apiserver/apiserver-76f77b778f-pd9qw" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.235406 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-knrsf\" (UniqueName: \"kubernetes.io/projected/84ff2015-09a3-41e9-be73-b2952f0aebcf-kube-api-access-knrsf\") pod \"console-operator-58897d9998-fbqdg\" (UID: \"84ff2015-09a3-41e9-be73-b2952f0aebcf\") " pod="openshift-console-operator/console-operator-58897d9998-fbqdg" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.235461 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5hp88\" (UniqueName: \"kubernetes.io/projected/8d97f5ac-52f0-43fe-9a35-662481fe2c83-kube-api-access-5hp88\") pod \"ingress-operator-5b745b69d9-ddv7s\" (UID: \"8d97f5ac-52f0-43fe-9a35-662481fe2c83\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ddv7s" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.235509 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2vsq\" (UniqueName: \"kubernetes.io/projected/0d65eec8-87c6-4fd9-9cca-d784e7e25232-kube-api-access-z2vsq\") pod \"migrator-59844c95c7-6tsc9\" (UID: \"0d65eec8-87c6-4fd9-9cca-d784e7e25232\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-6tsc9" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.235539 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84ff2015-09a3-41e9-be73-b2952f0aebcf-serving-cert\") pod \"console-operator-58897d9998-fbqdg\" (UID: \"84ff2015-09a3-41e9-be73-b2952f0aebcf\") " pod="openshift-console-operator/console-operator-58897d9998-fbqdg" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.236120 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9889e9bc-b25e-4272-ad18-474c0d2fa2ce-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-6tqcv\" (UID: \"9889e9bc-b25e-4272-ad18-474c0d2fa2ce\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6tqcv" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.236677 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-k9gnr"] Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.236710 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4f1aae3-1ca6-43d6-9a83-be14081c28df-config\") pod \"apiserver-76f77b778f-pd9qw\" (UID: \"c4f1aae3-1ca6-43d6-9a83-be14081c28df\") " pod="openshift-apiserver/apiserver-76f77b778f-pd9qw" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.237725 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" 
(UniqueName: \"kubernetes.io/configmap/c4f1aae3-1ca6-43d6-9a83-be14081c28df-image-import-ca\") pod \"apiserver-76f77b778f-pd9qw\" (UID: \"c4f1aae3-1ca6-43d6-9a83-be14081c28df\") " pod="openshift-apiserver/apiserver-76f77b778f-pd9qw" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.238477 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c4f1aae3-1ca6-43d6-9a83-be14081c28df-etcd-serving-ca\") pod \"apiserver-76f77b778f-pd9qw\" (UID: \"c4f1aae3-1ca6-43d6-9a83-be14081c28df\") " pod="openshift-apiserver/apiserver-76f77b778f-pd9qw" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.238551 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c4f1aae3-1ca6-43d6-9a83-be14081c28df-node-pullsecrets\") pod \"apiserver-76f77b778f-pd9qw\" (UID: \"c4f1aae3-1ca6-43d6-9a83-be14081c28df\") " pod="openshift-apiserver/apiserver-76f77b778f-pd9qw" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.238846 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-chjv7"] Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.238972 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c4f1aae3-1ca6-43d6-9a83-be14081c28df-audit-dir\") pod \"apiserver-76f77b778f-pd9qw\" (UID: \"c4f1aae3-1ca6-43d6-9a83-be14081c28df\") " pod="openshift-apiserver/apiserver-76f77b778f-pd9qw" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.239711 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/84ff2015-09a3-41e9-be73-b2952f0aebcf-trusted-ca\") pod \"console-operator-58897d9998-fbqdg\" (UID: \"84ff2015-09a3-41e9-be73-b2952f0aebcf\") " pod="openshift-console-operator/console-operator-58897d9998-fbqdg" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.240354 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/c4f1aae3-1ca6-43d6-9a83-be14081c28df-audit\") pod \"apiserver-76f77b778f-pd9qw\" (UID: \"c4f1aae3-1ca6-43d6-9a83-be14081c28df\") " pod="openshift-apiserver/apiserver-76f77b778f-pd9qw" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.240479 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8d97f5ac-52f0-43fe-9a35-662481fe2c83-trusted-ca\") pod \"ingress-operator-5b745b69d9-ddv7s\" (UID: \"8d97f5ac-52f0-43fe-9a35-662481fe2c83\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ddv7s" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.243012 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9889e9bc-b25e-4272-ad18-474c0d2fa2ce-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-6tqcv\" (UID: \"9889e9bc-b25e-4272-ad18-474c0d2fa2ce\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6tqcv" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.243221 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c4f1aae3-1ca6-43d6-9a83-be14081c28df-serving-cert\") pod \"apiserver-76f77b778f-pd9qw\" (UID: \"c4f1aae3-1ca6-43d6-9a83-be14081c28df\") " 
pod="openshift-apiserver/apiserver-76f77b778f-pd9qw" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.244318 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c4f1aae3-1ca6-43d6-9a83-be14081c28df-encryption-config\") pod \"apiserver-76f77b778f-pd9qw\" (UID: \"c4f1aae3-1ca6-43d6-9a83-be14081c28df\") " pod="openshift-apiserver/apiserver-76f77b778f-pd9qw" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.245476 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84ff2015-09a3-41e9-be73-b2952f0aebcf-config\") pod \"console-operator-58897d9998-fbqdg\" (UID: \"84ff2015-09a3-41e9-be73-b2952f0aebcf\") " pod="openshift-console-operator/console-operator-58897d9998-fbqdg" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.249017 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-f8chq"] Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.249999 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kvgh9"] Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.251797 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c4f1aae3-1ca6-43d6-9a83-be14081c28df-etcd-client\") pod \"apiserver-76f77b778f-pd9qw\" (UID: \"c4f1aae3-1ca6-43d6-9a83-be14081c28df\") " pod="openshift-apiserver/apiserver-76f77b778f-pd9qw" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.252465 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.254124 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c4f1aae3-1ca6-43d6-9a83-be14081c28df-trusted-ca-bundle\") pod \"apiserver-76f77b778f-pd9qw\" (UID: \"c4f1aae3-1ca6-43d6-9a83-be14081c28df\") " pod="openshift-apiserver/apiserver-76f77b778f-pd9qw" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.258212 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-dhb9x"] Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.260864 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8d97f5ac-52f0-43fe-9a35-662481fe2c83-metrics-tls\") pod \"ingress-operator-5b745b69d9-ddv7s\" (UID: \"8d97f5ac-52f0-43fe-9a35-662481fe2c83\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ddv7s" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.261008 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-sckkt"] Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.261545 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-l5qpt"] Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.266291 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-5jhwr"] Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.266328 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-fhx55"] Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.267324 5017 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-4pmt6"] Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.271626 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-fhx55" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.272822 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84ff2015-09a3-41e9-be73-b2952f0aebcf-serving-cert\") pod \"console-operator-58897d9998-fbqdg\" (UID: \"84ff2015-09a3-41e9-be73-b2952f0aebcf\") " pod="openshift-console-operator/console-operator-58897d9998-fbqdg" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.274098 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.276833 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ksz9z"] Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.276880 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-lthxv"] Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.276896 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-fhx55"] Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.276911 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-4pmt6"] Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.276928 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-nstbg"] Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.277015 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-4pmt6" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.277750 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-nstbg"] Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.277811 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-nstbg" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.293335 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.313471 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.332675 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.353392 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.374107 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.392852 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.412646 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.432359 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.452160 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.472607 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.492905 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.513068 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.532781 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.552886 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.573666 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.600376 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.613416 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 
06:37:33.633329 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.672939 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.694227 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.735705 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.740404 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.753797 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.772676 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.793842 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.813672 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.833579 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.868306 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.872878 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.893618 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.912670 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.950555 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.953924 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.973716 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 29 06:37:33 crc kubenswrapper[5017]: I0129 06:37:33.994303 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.013649 5017 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.033758 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.052355 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.072682 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.092843 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.112920 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.132752 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.151091 5017 request.go:700] Waited for 1.011599473s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0 Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.152379 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.193384 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.212753 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.233193 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.246478 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3641c614-3691-442a-95e4-13582cfd16d2-serving-cert\") pod \"route-controller-manager-6576b87f9c-wmnch\" (UID: \"3641c614-3691-442a-95e4-13582cfd16d2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wmnch" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.246518 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d21d2c22-5085-4712-a8d5-de95dc8a69b3-config\") pod \"controller-manager-879f6c89f-5zqqn\" (UID: \"d21d2c22-5085-4712-a8d5-de95dc8a69b3\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5zqqn" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.246539 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tcmb\" (UniqueName: \"kubernetes.io/projected/4d05265b-0d73-42c3-be6a-12198c0109de-kube-api-access-7tcmb\") pod \"image-registry-697d97f7c8-sckkt\" 
(UID: \"4d05265b-0d73-42c3-be6a-12198c0109de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sckkt" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.246557 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3c5c683b-3a25-4237-9b2c-e2ff822e0080-etcd-client\") pod \"apiserver-7bbb656c7d-zlflf\" (UID: \"3c5c683b-3a25-4237-9b2c-e2ff822e0080\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zlflf" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.246574 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/64c950af-fa9a-4e12-bc92-05a7ec5864f8-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-rbhd6\" (UID: \"64c950af-fa9a-4e12-bc92-05a7ec5864f8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rbhd6" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.246597 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/250918e3-cdc9-40cb-b390-6dbb5afe9d1f-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-cbp4z\" (UID: \"250918e3-cdc9-40cb-b390-6dbb5afe9d1f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-cbp4z" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.246614 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w957c\" (UniqueName: \"kubernetes.io/projected/3c5c683b-3a25-4237-9b2c-e2ff822e0080-kube-api-access-w957c\") pod \"apiserver-7bbb656c7d-zlflf\" (UID: \"3c5c683b-3a25-4237-9b2c-e2ff822e0080\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zlflf" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.246636 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/23943ec6-beb6-4bef-b4b1-e5c840ab997b-console-serving-cert\") pod \"console-f9d7485db-z5brc\" (UID: \"23943ec6-beb6-4bef-b4b1-e5c840ab997b\") " pod="openshift-console/console-f9d7485db-z5brc" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.246707 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/e55b317a-9140-4b76-8119-a7de3f95dd34-available-featuregates\") pod \"openshift-config-operator-7777fb866f-n8qjs\" (UID: \"e55b317a-9140-4b76-8119-a7de3f95dd34\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-n8qjs" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.246739 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sckkt\" (UID: \"4d05265b-0d73-42c3-be6a-12198c0109de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sckkt" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.246875 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3c5c683b-3a25-4237-9b2c-e2ff822e0080-audit-dir\") pod \"apiserver-7bbb656c7d-zlflf\" (UID: 
\"3c5c683b-3a25-4237-9b2c-e2ff822e0080\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zlflf" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.246972 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7da4e010-91cf-4920-ae5c-530bb20b2ba2-config\") pod \"machine-approver-56656f9798-4kcwn\" (UID: \"7da4e010-91cf-4920-ae5c-530bb20b2ba2\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4kcwn" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.247108 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4d05265b-0d73-42c3-be6a-12198c0109de-registry-tls\") pod \"image-registry-697d97f7c8-sckkt\" (UID: \"4d05265b-0d73-42c3-be6a-12198c0109de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sckkt" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.247314 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3641c614-3691-442a-95e4-13582cfd16d2-client-ca\") pod \"route-controller-manager-6576b87f9c-wmnch\" (UID: \"3641c614-3691-442a-95e4-13582cfd16d2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wmnch" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.247356 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/23943ec6-beb6-4bef-b4b1-e5c840ab997b-oauth-serving-cert\") pod \"console-f9d7485db-z5brc\" (UID: \"23943ec6-beb6-4bef-b4b1-e5c840ab997b\") " pod="openshift-console/console-f9d7485db-z5brc" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.247443 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9sbhv\" (UniqueName: \"kubernetes.io/projected/d21d2c22-5085-4712-a8d5-de95dc8a69b3-kube-api-access-9sbhv\") pod \"controller-manager-879f6c89f-5zqqn\" (UID: \"d21d2c22-5085-4712-a8d5-de95dc8a69b3\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5zqqn" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.247477 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5m6g\" (UniqueName: \"kubernetes.io/projected/7da4e010-91cf-4920-ae5c-530bb20b2ba2-kube-api-access-p5m6g\") pod \"machine-approver-56656f9798-4kcwn\" (UID: \"7da4e010-91cf-4920-ae5c-530bb20b2ba2\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4kcwn" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.247496 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57af192f-0ed1-4738-b13e-a534669cdcaf-config\") pod \"etcd-operator-b45778765-92jgp\" (UID: \"57af192f-0ed1-4738-b13e-a534669cdcaf\") " pod="openshift-etcd-operator/etcd-operator-b45778765-92jgp" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.247513 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/64c950af-fa9a-4e12-bc92-05a7ec5864f8-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-rbhd6\" (UID: \"64c950af-fa9a-4e12-bc92-05a7ec5864f8\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rbhd6" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.247532 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3641c614-3691-442a-95e4-13582cfd16d2-config\") pod \"route-controller-manager-6576b87f9c-wmnch\" (UID: \"3641c614-3691-442a-95e4-13582cfd16d2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wmnch" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.247551 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/64c950af-fa9a-4e12-bc92-05a7ec5864f8-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-rbhd6\" (UID: \"64c950af-fa9a-4e12-bc92-05a7ec5864f8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rbhd6" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.247581 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hck9l\" (UniqueName: \"kubernetes.io/projected/0e3a4b4e-acd1-426e-8d48-f2555ced71ec-kube-api-access-hck9l\") pod \"downloads-7954f5f757-9nv7f\" (UID: \"0e3a4b4e-acd1-426e-8d48-f2555ced71ec\") " pod="openshift-console/downloads-7954f5f757-9nv7f" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.247597 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e55b317a-9140-4b76-8119-a7de3f95dd34-serving-cert\") pod \"openshift-config-operator-7777fb866f-n8qjs\" (UID: \"e55b317a-9140-4b76-8119-a7de3f95dd34\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-n8qjs" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.247615 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/23943ec6-beb6-4bef-b4b1-e5c840ab997b-console-oauth-config\") pod \"console-f9d7485db-z5brc\" (UID: \"23943ec6-beb6-4bef-b4b1-e5c840ab997b\") " pod="openshift-console/console-f9d7485db-z5brc" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.247642 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d21d2c22-5085-4712-a8d5-de95dc8a69b3-client-ca\") pod \"controller-manager-879f6c89f-5zqqn\" (UID: \"d21d2c22-5085-4712-a8d5-de95dc8a69b3\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5zqqn" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.247661 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/23943ec6-beb6-4bef-b4b1-e5c840ab997b-trusted-ca-bundle\") pod \"console-f9d7485db-z5brc\" (UID: \"23943ec6-beb6-4bef-b4b1-e5c840ab997b\") " pod="openshift-console/console-f9d7485db-z5brc" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.247686 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/57af192f-0ed1-4738-b13e-a534669cdcaf-serving-cert\") pod \"etcd-operator-b45778765-92jgp\" (UID: \"57af192f-0ed1-4738-b13e-a534669cdcaf\") " pod="openshift-etcd-operator/etcd-operator-b45778765-92jgp" Jan 
29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.247701 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/57af192f-0ed1-4738-b13e-a534669cdcaf-etcd-ca\") pod \"etcd-operator-b45778765-92jgp\" (UID: \"57af192f-0ed1-4738-b13e-a534669cdcaf\") " pod="openshift-etcd-operator/etcd-operator-b45778765-92jgp" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.247740 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d21d2c22-5085-4712-a8d5-de95dc8a69b3-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-5zqqn\" (UID: \"d21d2c22-5085-4712-a8d5-de95dc8a69b3\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5zqqn" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.247763 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/2beee4dd-2230-42f4-a6a7-6d459f7564b5-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-44ss4\" (UID: \"2beee4dd-2230-42f4-a6a7-6d459f7564b5\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-44ss4" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.247781 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/23943ec6-beb6-4bef-b4b1-e5c840ab997b-service-ca\") pod \"console-f9d7485db-z5brc\" (UID: \"23943ec6-beb6-4bef-b4b1-e5c840ab997b\") " pod="openshift-console/console-f9d7485db-z5brc" Jan 29 06:37:34 crc kubenswrapper[5017]: E0129 06:37:34.247801 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 06:37:34.747781082 +0000 UTC m=+141.122228692 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sckkt" (UID: "4d05265b-0d73-42c3-be6a-12198c0109de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.247835 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4d05265b-0d73-42c3-be6a-12198c0109de-installation-pull-secrets\") pod \"image-registry-697d97f7c8-sckkt\" (UID: \"4d05265b-0d73-42c3-be6a-12198c0109de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sckkt" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.247859 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4d05265b-0d73-42c3-be6a-12198c0109de-bound-sa-token\") pod \"image-registry-697d97f7c8-sckkt\" (UID: \"4d05265b-0d73-42c3-be6a-12198c0109de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sckkt" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.247881 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/250918e3-cdc9-40cb-b390-6dbb5afe9d1f-config\") pod \"machine-api-operator-5694c8668f-cbp4z\" (UID: \"250918e3-cdc9-40cb-b390-6dbb5afe9d1f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-cbp4z" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.248025 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29cgg\" (UniqueName: \"kubernetes.io/projected/250918e3-cdc9-40cb-b390-6dbb5afe9d1f-kube-api-access-29cgg\") pod \"machine-api-operator-5694c8668f-cbp4z\" (UID: \"250918e3-cdc9-40cb-b390-6dbb5afe9d1f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-cbp4z" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.248085 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9lr8\" (UniqueName: \"kubernetes.io/projected/e55b317a-9140-4b76-8119-a7de3f95dd34-kube-api-access-d9lr8\") pod \"openshift-config-operator-7777fb866f-n8qjs\" (UID: \"e55b317a-9140-4b76-8119-a7de3f95dd34\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-n8qjs" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.248137 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/250918e3-cdc9-40cb-b390-6dbb5afe9d1f-images\") pod \"machine-api-operator-5694c8668f-cbp4z\" (UID: \"250918e3-cdc9-40cb-b390-6dbb5afe9d1f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-cbp4z" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.248180 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/57af192f-0ed1-4738-b13e-a534669cdcaf-etcd-service-ca\") pod \"etcd-operator-b45778765-92jgp\" (UID: \"57af192f-0ed1-4738-b13e-a534669cdcaf\") " pod="openshift-etcd-operator/etcd-operator-b45778765-92jgp" Jan 29 06:37:34 crc 
kubenswrapper[5017]: I0129 06:37:34.248234 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6rpd\" (UniqueName: \"kubernetes.io/projected/57af192f-0ed1-4738-b13e-a534669cdcaf-kube-api-access-r6rpd\") pod \"etcd-operator-b45778765-92jgp\" (UID: \"57af192f-0ed1-4738-b13e-a534669cdcaf\") " pod="openshift-etcd-operator/etcd-operator-b45778765-92jgp" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.248321 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/3c5c683b-3a25-4237-9b2c-e2ff822e0080-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-zlflf\" (UID: \"3c5c683b-3a25-4237-9b2c-e2ff822e0080\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zlflf" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.248375 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3c5c683b-3a25-4237-9b2c-e2ff822e0080-audit-policies\") pod \"apiserver-7bbb656c7d-zlflf\" (UID: \"3c5c683b-3a25-4237-9b2c-e2ff822e0080\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zlflf" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.248412 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4d05265b-0d73-42c3-be6a-12198c0109de-ca-trust-extracted\") pod \"image-registry-697d97f7c8-sckkt\" (UID: \"4d05265b-0d73-42c3-be6a-12198c0109de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sckkt" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.248439 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4d05265b-0d73-42c3-be6a-12198c0109de-registry-certificates\") pod \"image-registry-697d97f7c8-sckkt\" (UID: \"4d05265b-0d73-42c3-be6a-12198c0109de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sckkt" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.248468 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5dsg\" (UniqueName: \"kubernetes.io/projected/23943ec6-beb6-4bef-b4b1-e5c840ab997b-kube-api-access-j5dsg\") pod \"console-f9d7485db-z5brc\" (UID: \"23943ec6-beb6-4bef-b4b1-e5c840ab997b\") " pod="openshift-console/console-f9d7485db-z5brc" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.248493 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/3c5c683b-3a25-4237-9b2c-e2ff822e0080-encryption-config\") pod \"apiserver-7bbb656c7d-zlflf\" (UID: \"3c5c683b-3a25-4237-9b2c-e2ff822e0080\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zlflf" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.248542 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/57af192f-0ed1-4738-b13e-a534669cdcaf-etcd-client\") pod \"etcd-operator-b45778765-92jgp\" (UID: \"57af192f-0ed1-4738-b13e-a534669cdcaf\") " pod="openshift-etcd-operator/etcd-operator-b45778765-92jgp" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.248593 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4d05265b-0d73-42c3-be6a-12198c0109de-trusted-ca\") pod \"image-registry-697d97f7c8-sckkt\" (UID: \"4d05265b-0d73-42c3-be6a-12198c0109de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sckkt" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.248614 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxj4x\" (UniqueName: \"kubernetes.io/projected/112cd557-09e1-4599-ba7c-42a24407956f-kube-api-access-pxj4x\") pod \"dns-operator-744455d44c-plx6n\" (UID: \"112cd557-09e1-4599-ba7c-42a24407956f\") " pod="openshift-dns-operator/dns-operator-744455d44c-plx6n" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.248661 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7da4e010-91cf-4920-ae5c-530bb20b2ba2-auth-proxy-config\") pod \"machine-approver-56656f9798-4kcwn\" (UID: \"7da4e010-91cf-4920-ae5c-530bb20b2ba2\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4kcwn" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.248679 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/7da4e010-91cf-4920-ae5c-530bb20b2ba2-machine-approver-tls\") pod \"machine-approver-56656f9798-4kcwn\" (UID: \"7da4e010-91cf-4920-ae5c-530bb20b2ba2\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4kcwn" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.248708 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4crh5\" (UniqueName: \"kubernetes.io/projected/3641c614-3691-442a-95e4-13582cfd16d2-kube-api-access-4crh5\") pod \"route-controller-manager-6576b87f9c-wmnch\" (UID: \"3641c614-3691-442a-95e4-13582cfd16d2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wmnch" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.248728 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/23943ec6-beb6-4bef-b4b1-e5c840ab997b-console-config\") pod \"console-f9d7485db-z5brc\" (UID: \"23943ec6-beb6-4bef-b4b1-e5c840ab997b\") " pod="openshift-console/console-f9d7485db-z5brc" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.248745 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3c5c683b-3a25-4237-9b2c-e2ff822e0080-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-zlflf\" (UID: \"3c5c683b-3a25-4237-9b2c-e2ff822e0080\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zlflf" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.248834 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdwj9\" (UniqueName: \"kubernetes.io/projected/2beee4dd-2230-42f4-a6a7-6d459f7564b5-kube-api-access-fdwj9\") pod \"cluster-samples-operator-665b6dd947-44ss4\" (UID: \"2beee4dd-2230-42f4-a6a7-6d459f7564b5\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-44ss4" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.248893 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"metrics-tls\" (UniqueName: \"kubernetes.io/secret/112cd557-09e1-4599-ba7c-42a24407956f-metrics-tls\") pod \"dns-operator-744455d44c-plx6n\" (UID: \"112cd557-09e1-4599-ba7c-42a24407956f\") " pod="openshift-dns-operator/dns-operator-744455d44c-plx6n" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.249240 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2w5f\" (UniqueName: \"kubernetes.io/projected/64c950af-fa9a-4e12-bc92-05a7ec5864f8-kube-api-access-z2w5f\") pod \"cluster-image-registry-operator-dc59b4c8b-rbhd6\" (UID: \"64c950af-fa9a-4e12-bc92-05a7ec5864f8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rbhd6" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.249440 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3c5c683b-3a25-4237-9b2c-e2ff822e0080-serving-cert\") pod \"apiserver-7bbb656c7d-zlflf\" (UID: \"3c5c683b-3a25-4237-9b2c-e2ff822e0080\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zlflf" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.249560 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d21d2c22-5085-4712-a8d5-de95dc8a69b3-serving-cert\") pod \"controller-manager-879f6c89f-5zqqn\" (UID: \"d21d2c22-5085-4712-a8d5-de95dc8a69b3\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5zqqn" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.258999 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.273129 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.292148 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.313510 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.333693 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.350322 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 06:37:34 crc kubenswrapper[5017]: E0129 06:37:34.350482 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 06:37:34.850444874 +0000 UTC m=+141.224892504 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.350623 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d21d2c22-5085-4712-a8d5-de95dc8a69b3-serving-cert\") pod \"controller-manager-879f6c89f-5zqqn\" (UID: \"d21d2c22-5085-4712-a8d5-de95dc8a69b3\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5zqqn" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.350681 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/81866aa9-0a71-4fb1-8354-2cf5089e9e19-signing-cabundle\") pod \"service-ca-9c57cc56f-k9gnr\" (UID: \"81866aa9-0a71-4fb1-8354-2cf5089e9e19\") " pod="openshift-service-ca/service-ca-9c57cc56f-k9gnr" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.350718 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/af26b99f-54ed-4730-91b2-a13130823631-srv-cert\") pod \"olm-operator-6b444d44fb-l5qpt\" (UID: \"af26b99f-54ed-4730-91b2-a13130823631\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-l5qpt" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.350772 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3641c614-3691-442a-95e4-13582cfd16d2-serving-cert\") pod \"route-controller-manager-6576b87f9c-wmnch\" (UID: \"3641c614-3691-442a-95e4-13582cfd16d2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wmnch" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.350804 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d21d2c22-5085-4712-a8d5-de95dc8a69b3-config\") pod \"controller-manager-879f6c89f-5zqqn\" (UID: \"d21d2c22-5085-4712-a8d5-de95dc8a69b3\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5zqqn" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.350837 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16ace874-46a3-4668-a109-845a7d4a75e7-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-ksz9z\" (UID: \"16ace874-46a3-4668-a109-845a7d4a75e7\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ksz9z" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.350871 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7x5gc\" (UniqueName: \"kubernetes.io/projected/af26b99f-54ed-4730-91b2-a13130823631-kube-api-access-7x5gc\") pod \"olm-operator-6b444d44fb-l5qpt\" (UID: \"af26b99f-54ed-4730-91b2-a13130823631\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-l5qpt" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.350904 5017 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2495db50-8f94-4c1e-a23c-c16a3ee22bba-metrics-tls\") pod \"dns-default-fhx55\" (UID: \"2495db50-8f94-4c1e-a23c-c16a3ee22bba\") " pod="openshift-dns/dns-default-fhx55" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.350939 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sw7ss\" (UniqueName: \"kubernetes.io/projected/81866aa9-0a71-4fb1-8354-2cf5089e9e19-kube-api-access-sw7ss\") pod \"service-ca-9c57cc56f-k9gnr\" (UID: \"81866aa9-0a71-4fb1-8354-2cf5089e9e19\") " pod="openshift-service-ca/service-ca-9c57cc56f-k9gnr" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.351102 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/23943ec6-beb6-4bef-b4b1-e5c840ab997b-console-serving-cert\") pod \"console-f9d7485db-z5brc\" (UID: \"23943ec6-beb6-4bef-b4b1-e5c840ab997b\") " pod="openshift-console/console-f9d7485db-z5brc" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.351169 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/5ab0da1e-4133-488b-9472-83bde1f3bd25-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-kvgh9\" (UID: \"5ab0da1e-4133-488b-9472-83bde1f3bd25\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kvgh9" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.351199 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/e55b317a-9140-4b76-8119-a7de3f95dd34-available-featuregates\") pod \"openshift-config-operator-7777fb866f-n8qjs\" (UID: \"e55b317a-9140-4b76-8119-a7de3f95dd34\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-n8qjs" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.351572 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/e55b317a-9140-4b76-8119-a7de3f95dd34-available-featuregates\") pod \"openshift-config-operator-7777fb866f-n8qjs\" (UID: \"e55b317a-9140-4b76-8119-a7de3f95dd34\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-n8qjs" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.351626 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/9ce02582-da6e-4ba4-8c7b-a1e1132eff04-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-cdswz\" (UID: \"9ce02582-da6e-4ba4-8c7b-a1e1132eff04\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-cdswz" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.351664 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3c5c683b-3a25-4237-9b2c-e2ff822e0080-audit-dir\") pod \"apiserver-7bbb656c7d-zlflf\" (UID: \"3c5c683b-3a25-4237-9b2c-e2ff822e0080\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zlflf" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.351684 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/5df97138-ff6c-44b4-ac93-9136235d5888-auth-proxy-config\") pod \"machine-config-operator-74547568cd-z6lr7\" (UID: \"5df97138-ff6c-44b4-ac93-9136235d5888\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-z6lr7" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.351738 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3c5c683b-3a25-4237-9b2c-e2ff822e0080-audit-dir\") pod \"apiserver-7bbb656c7d-zlflf\" (UID: \"3c5c683b-3a25-4237-9b2c-e2ff822e0080\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zlflf" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.351701 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/547c672d-11fe-48d2-857e-27d7ac4e1fc9-tmpfs\") pod \"packageserver-d55dfcdfc-j2rsb\" (UID: \"547c672d-11fe-48d2-857e-27d7ac4e1fc9\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j2rsb" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.351810 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4d05265b-0d73-42c3-be6a-12198c0109de-registry-tls\") pod \"image-registry-697d97f7c8-sckkt\" (UID: \"4d05265b-0d73-42c3-be6a-12198c0109de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sckkt" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.351847 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2c053f98-2b15-48b7-9cf8-b8cdb0b29d81-audit-dir\") pod \"oauth-openshift-558db77b4-pws6m\" (UID: \"2c053f98-2b15-48b7-9cf8-b8cdb0b29d81\") " pod="openshift-authentication/oauth-openshift-558db77b4-pws6m" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.351870 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7fbf9f0a-9025-4eca-b0d2-00f87df0c16e-config-volume\") pod \"collect-profiles-29494470-2sw2z\" (UID: \"7fbf9f0a-9025-4eca-b0d2-00f87df0c16e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494470-2sw2z" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.351894 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dj8nt\" (UniqueName: \"kubernetes.io/projected/f3ba182d-28e7-4be4-8ae1-68d140e5e285-kube-api-access-dj8nt\") pod \"ingress-canary-nstbg\" (UID: \"f3ba182d-28e7-4be4-8ae1-68d140e5e285\") " pod="openshift-ingress-canary/ingress-canary-nstbg" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.351917 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f1a99fb5-b091-4517-8519-59e68cd2366e-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-k92pt\" (UID: \"f1a99fb5-b091-4517-8519-59e68cd2366e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k92pt" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.352016 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2c053f98-2b15-48b7-9cf8-b8cdb0b29d81-v4-0-config-system-ocp-branding-template\") pod 
\"oauth-openshift-558db77b4-pws6m\" (UID: \"2c053f98-2b15-48b7-9cf8-b8cdb0b29d81\") " pod="openshift-authentication/oauth-openshift-558db77b4-pws6m" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.352065 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dddkc\" (UniqueName: \"kubernetes.io/projected/283f0328-3eac-4df0-8933-4e3f7d823fa9-kube-api-access-dddkc\") pod \"service-ca-operator-777779d784-5jhwr\" (UID: \"283f0328-3eac-4df0-8933-4e3f7d823fa9\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-5jhwr" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.352118 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9sbhv\" (UniqueName: \"kubernetes.io/projected/d21d2c22-5085-4712-a8d5-de95dc8a69b3-kube-api-access-9sbhv\") pod \"controller-manager-879f6c89f-5zqqn\" (UID: \"d21d2c22-5085-4712-a8d5-de95dc8a69b3\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5zqqn" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.352137 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/0f115c11-0ba8-4204-8f68-291e35b90b09-node-bootstrap-token\") pod \"machine-config-server-jnctv\" (UID: \"0f115c11-0ba8-4204-8f68-291e35b90b09\") " pod="openshift-machine-config-operator/machine-config-server-jnctv" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.352155 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkshx\" (UniqueName: \"kubernetes.io/projected/7fbf9f0a-9025-4eca-b0d2-00f87df0c16e-kube-api-access-dkshx\") pod \"collect-profiles-29494470-2sw2z\" (UID: \"7fbf9f0a-9025-4eca-b0d2-00f87df0c16e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494470-2sw2z" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.352171 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rh9m4\" (UniqueName: \"kubernetes.io/projected/bb138e77-1a05-4fd5-9fd7-47843ef6aa0b-kube-api-access-rh9m4\") pod \"marketplace-operator-79b997595-f8chq\" (UID: \"bb138e77-1a05-4fd5-9fd7-47843ef6aa0b\") " pod="openshift-marketplace/marketplace-operator-79b997595-f8chq" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.352200 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/64c950af-fa9a-4e12-bc92-05a7ec5864f8-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-rbhd6\" (UID: \"64c950af-fa9a-4e12-bc92-05a7ec5864f8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rbhd6" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.352217 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hck9l\" (UniqueName: \"kubernetes.io/projected/0e3a4b4e-acd1-426e-8d48-f2555ced71ec-kube-api-access-hck9l\") pod \"downloads-7954f5f757-9nv7f\" (UID: \"0e3a4b4e-acd1-426e-8d48-f2555ced71ec\") " pod="openshift-console/downloads-7954f5f757-9nv7f" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.352236 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/927c767b-89e3-46c0-b5a1-02edc60a959c-trusted-ca-bundle\") pod 
\"authentication-operator-69f744f599-dhb9x\" (UID: \"927c767b-89e3-46c0-b5a1-02edc60a959c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dhb9x" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.352257 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ea36b9ef-bda8-411e-9d60-79e9ed3b514b-srv-cert\") pod \"catalog-operator-68c6474976-8pr49\" (UID: \"ea36b9ef-bda8-411e-9d60-79e9ed3b514b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8pr49" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.352284 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/927c767b-89e3-46c0-b5a1-02edc60a959c-service-ca-bundle\") pod \"authentication-operator-69f744f599-dhb9x\" (UID: \"927c767b-89e3-46c0-b5a1-02edc60a959c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dhb9x" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.352300 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2495db50-8f94-4c1e-a23c-c16a3ee22bba-config-volume\") pod \"dns-default-fhx55\" (UID: \"2495db50-8f94-4c1e-a23c-c16a3ee22bba\") " pod="openshift-dns/dns-default-fhx55" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.352582 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5db2271c-c63c-4066-b91d-3f132768eb09-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-777x9\" (UID: \"5db2271c-c63c-4066-b91d-3f132768eb09\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-777x9" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.352666 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/57af192f-0ed1-4738-b13e-a534669cdcaf-serving-cert\") pod \"etcd-operator-b45778765-92jgp\" (UID: \"57af192f-0ed1-4738-b13e-a534669cdcaf\") " pod="openshift-etcd-operator/etcd-operator-b45778765-92jgp" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.352713 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2c053f98-2b15-48b7-9cf8-b8cdb0b29d81-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-pws6m\" (UID: \"2c053f98-2b15-48b7-9cf8-b8cdb0b29d81\") " pod="openshift-authentication/oauth-openshift-558db77b4-pws6m" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.352739 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/bb138e77-1a05-4fd5-9fd7-47843ef6aa0b-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-f8chq\" (UID: \"bb138e77-1a05-4fd5-9fd7-47843ef6aa0b\") " pod="openshift-marketplace/marketplace-operator-79b997595-f8chq" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.352791 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d21d2c22-5085-4712-a8d5-de95dc8a69b3-proxy-ca-bundles\") pod 
\"controller-manager-879f6c89f-5zqqn\" (UID: \"d21d2c22-5085-4712-a8d5-de95dc8a69b3\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5zqqn" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.352815 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2c053f98-2b15-48b7-9cf8-b8cdb0b29d81-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-pws6m\" (UID: \"2c053f98-2b15-48b7-9cf8-b8cdb0b29d81\") " pod="openshift-authentication/oauth-openshift-558db77b4-pws6m" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.352837 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/23943ec6-beb6-4bef-b4b1-e5c840ab997b-service-ca\") pod \"console-f9d7485db-z5brc\" (UID: \"23943ec6-beb6-4bef-b4b1-e5c840ab997b\") " pod="openshift-console/console-f9d7485db-z5brc" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.352856 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/250918e3-cdc9-40cb-b390-6dbb5afe9d1f-config\") pod \"machine-api-operator-5694c8668f-cbp4z\" (UID: \"250918e3-cdc9-40cb-b390-6dbb5afe9d1f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-cbp4z" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.352875 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c666e0f9-29fc-4765-8d2c-0d7b5e1545fd-proxy-tls\") pod \"machine-config-controller-84d6567774-lthxv\" (UID: \"c666e0f9-29fc-4765-8d2c-0d7b5e1545fd\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lthxv" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.352894 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16ace874-46a3-4668-a109-845a7d4a75e7-config\") pod \"openshift-apiserver-operator-796bbdcf4f-ksz9z\" (UID: \"16ace874-46a3-4668-a109-845a7d4a75e7\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ksz9z" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.352918 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/0f115c11-0ba8-4204-8f68-291e35b90b09-certs\") pod \"machine-config-server-jnctv\" (UID: \"0f115c11-0ba8-4204-8f68-291e35b90b09\") " pod="openshift-machine-config-operator/machine-config-server-jnctv" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.352935 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrj9k\" (UniqueName: \"kubernetes.io/projected/927c767b-89e3-46c0-b5a1-02edc60a959c-kube-api-access-hrj9k\") pod \"authentication-operator-69f744f599-dhb9x\" (UID: \"927c767b-89e3-46c0-b5a1-02edc60a959c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dhb9x" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.352975 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/250918e3-cdc9-40cb-b390-6dbb5afe9d1f-images\") pod \"machine-api-operator-5694c8668f-cbp4z\" (UID: \"250918e3-cdc9-40cb-b390-6dbb5afe9d1f\") " 
pod="openshift-machine-api/machine-api-operator-5694c8668f-cbp4z" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.352998 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9lr8\" (UniqueName: \"kubernetes.io/projected/e55b317a-9140-4b76-8119-a7de3f95dd34-kube-api-access-d9lr8\") pod \"openshift-config-operator-7777fb866f-n8qjs\" (UID: \"e55b317a-9140-4b76-8119-a7de3f95dd34\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-n8qjs" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.353018 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/57af192f-0ed1-4738-b13e-a534669cdcaf-etcd-service-ca\") pod \"etcd-operator-b45778765-92jgp\" (UID: \"57af192f-0ed1-4738-b13e-a534669cdcaf\") " pod="openshift-etcd-operator/etcd-operator-b45778765-92jgp" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.353045 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f3ba182d-28e7-4be4-8ae1-68d140e5e285-cert\") pod \"ingress-canary-nstbg\" (UID: \"f3ba182d-28e7-4be4-8ae1-68d140e5e285\") " pod="openshift-ingress-canary/ingress-canary-nstbg" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.353072 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/927c767b-89e3-46c0-b5a1-02edc60a959c-serving-cert\") pod \"authentication-operator-69f744f599-dhb9x\" (UID: \"927c767b-89e3-46c0-b5a1-02edc60a959c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dhb9x" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.353093 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2c053f98-2b15-48b7-9cf8-b8cdb0b29d81-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-pws6m\" (UID: \"2c053f98-2b15-48b7-9cf8-b8cdb0b29d81\") " pod="openshift-authentication/oauth-openshift-558db77b4-pws6m" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.353112 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/6bd9b034-cfec-4194-9b45-318ed8625994-csi-data-dir\") pod \"csi-hostpathplugin-4pmt6\" (UID: \"6bd9b034-cfec-4194-9b45-318ed8625994\") " pod="hostpath-provisioner/csi-hostpathplugin-4pmt6" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.353137 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6rpd\" (UniqueName: \"kubernetes.io/projected/57af192f-0ed1-4738-b13e-a534669cdcaf-kube-api-access-r6rpd\") pod \"etcd-operator-b45778765-92jgp\" (UID: \"57af192f-0ed1-4738-b13e-a534669cdcaf\") " pod="openshift-etcd-operator/etcd-operator-b45778765-92jgp" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.353158 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/283f0328-3eac-4df0-8933-4e3f7d823fa9-config\") pod \"service-ca-operator-777779d784-5jhwr\" (UID: \"283f0328-3eac-4df0-8933-4e3f7d823fa9\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-5jhwr" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 
06:37:34.353179 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2c053f98-2b15-48b7-9cf8-b8cdb0b29d81-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-pws6m\" (UID: \"2c053f98-2b15-48b7-9cf8-b8cdb0b29d81\") " pod="openshift-authentication/oauth-openshift-558db77b4-pws6m" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.353198 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3c5c683b-3a25-4237-9b2c-e2ff822e0080-audit-policies\") pod \"apiserver-7bbb656c7d-zlflf\" (UID: \"3c5c683b-3a25-4237-9b2c-e2ff822e0080\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zlflf" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.353215 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/3c5c683b-3a25-4237-9b2c-e2ff822e0080-encryption-config\") pod \"apiserver-7bbb656c7d-zlflf\" (UID: \"3c5c683b-3a25-4237-9b2c-e2ff822e0080\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zlflf" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.353244 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5dsg\" (UniqueName: \"kubernetes.io/projected/23943ec6-beb6-4bef-b4b1-e5c840ab997b-kube-api-access-j5dsg\") pod \"console-f9d7485db-z5brc\" (UID: \"23943ec6-beb6-4bef-b4b1-e5c840ab997b\") " pod="openshift-console/console-f9d7485db-z5brc" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.353260 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/57af192f-0ed1-4738-b13e-a534669cdcaf-etcd-client\") pod \"etcd-operator-b45778765-92jgp\" (UID: \"57af192f-0ed1-4738-b13e-a534669cdcaf\") " pod="openshift-etcd-operator/etcd-operator-b45778765-92jgp" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.353277 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7cfw\" (UniqueName: \"kubernetes.io/projected/c666e0f9-29fc-4765-8d2c-0d7b5e1545fd-kube-api-access-f7cfw\") pod \"machine-config-controller-84d6567774-lthxv\" (UID: \"c666e0f9-29fc-4765-8d2c-0d7b5e1545fd\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lthxv" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.353296 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c666e0f9-29fc-4765-8d2c-0d7b5e1545fd-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-lthxv\" (UID: \"c666e0f9-29fc-4765-8d2c-0d7b5e1545fd\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lthxv" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.353328 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4crh5\" (UniqueName: \"kubernetes.io/projected/3641c614-3691-442a-95e4-13582cfd16d2-kube-api-access-4crh5\") pod \"route-controller-manager-6576b87f9c-wmnch\" (UID: \"3641c614-3691-442a-95e4-13582cfd16d2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wmnch" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.353344 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2c053f98-2b15-48b7-9cf8-b8cdb0b29d81-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-pws6m\" (UID: \"2c053f98-2b15-48b7-9cf8-b8cdb0b29d81\") " pod="openshift-authentication/oauth-openshift-558db77b4-pws6m" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.353361 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/112cd557-09e1-4599-ba7c-42a24407956f-metrics-tls\") pod \"dns-operator-744455d44c-plx6n\" (UID: \"112cd557-09e1-4599-ba7c-42a24407956f\") " pod="openshift-dns-operator/dns-operator-744455d44c-plx6n" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.353378 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpfkb\" (UniqueName: \"kubernetes.io/projected/6bd9b034-cfec-4194-9b45-318ed8625994-kube-api-access-wpfkb\") pod \"csi-hostpathplugin-4pmt6\" (UID: \"6bd9b034-cfec-4194-9b45-318ed8625994\") " pod="hostpath-provisioner/csi-hostpathplugin-4pmt6" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.353394 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3c5c683b-3a25-4237-9b2c-e2ff822e0080-serving-cert\") pod \"apiserver-7bbb656c7d-zlflf\" (UID: \"3c5c683b-3a25-4237-9b2c-e2ff822e0080\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zlflf" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.353411 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvhkp\" (UniqueName: \"kubernetes.io/projected/6e2073b6-4205-42ab-8282-8bb749d7ef3d-kube-api-access-wvhkp\") pod \"router-default-5444994796-rlfl6\" (UID: \"6e2073b6-4205-42ab-8282-8bb749d7ef3d\") " pod="openshift-ingress/router-default-5444994796-rlfl6" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.353437 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75cfd\" (UniqueName: \"kubernetes.io/projected/9ce02582-da6e-4ba4-8c7b-a1e1132eff04-kube-api-access-75cfd\") pod \"multus-admission-controller-857f4d67dd-cdswz\" (UID: \"9ce02582-da6e-4ba4-8c7b-a1e1132eff04\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-cdswz" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.353458 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/64c950af-fa9a-4e12-bc92-05a7ec5864f8-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-rbhd6\" (UID: \"64c950af-fa9a-4e12-bc92-05a7ec5864f8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rbhd6" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.353479 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zx9pk\" (UniqueName: \"kubernetes.io/projected/5ab0da1e-4133-488b-9472-83bde1f3bd25-kube-api-access-zx9pk\") pod \"control-plane-machine-set-operator-78cbb6b69f-kvgh9\" (UID: \"5ab0da1e-4133-488b-9472-83bde1f3bd25\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kvgh9" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.353499 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/6e2073b6-4205-42ab-8282-8bb749d7ef3d-metrics-certs\") pod \"router-default-5444994796-rlfl6\" (UID: \"6e2073b6-4205-42ab-8282-8bb749d7ef3d\") " pod="openshift-ingress/router-default-5444994796-rlfl6" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.353519 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7tcmb\" (UniqueName: \"kubernetes.io/projected/4d05265b-0d73-42c3-be6a-12198c0109de-kube-api-access-7tcmb\") pod \"image-registry-697d97f7c8-sckkt\" (UID: \"4d05265b-0d73-42c3-be6a-12198c0109de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sckkt" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.353536 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3c5c683b-3a25-4237-9b2c-e2ff822e0080-etcd-client\") pod \"apiserver-7bbb656c7d-zlflf\" (UID: \"3c5c683b-3a25-4237-9b2c-e2ff822e0080\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zlflf" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.353552 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2c053f98-2b15-48b7-9cf8-b8cdb0b29d81-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-pws6m\" (UID: \"2c053f98-2b15-48b7-9cf8-b8cdb0b29d81\") " pod="openshift-authentication/oauth-openshift-558db77b4-pws6m" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.353568 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/250918e3-cdc9-40cb-b390-6dbb5afe9d1f-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-cbp4z\" (UID: \"250918e3-cdc9-40cb-b390-6dbb5afe9d1f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-cbp4z" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.353585 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w957c\" (UniqueName: \"kubernetes.io/projected/3c5c683b-3a25-4237-9b2c-e2ff822e0080-kube-api-access-w957c\") pod \"apiserver-7bbb656c7d-zlflf\" (UID: \"3c5c683b-3a25-4237-9b2c-e2ff822e0080\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zlflf" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.353601 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/927c767b-89e3-46c0-b5a1-02edc60a959c-config\") pod \"authentication-operator-69f744f599-dhb9x\" (UID: \"927c767b-89e3-46c0-b5a1-02edc60a959c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dhb9x" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.353617 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6bd9b034-cfec-4194-9b45-318ed8625994-registration-dir\") pod \"csi-hostpathplugin-4pmt6\" (UID: \"6bd9b034-cfec-4194-9b45-318ed8625994\") " pod="hostpath-provisioner/csi-hostpathplugin-4pmt6" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.353646 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sckkt\" (UID: 
\"4d05265b-0d73-42c3-be6a-12198c0109de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sckkt" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.353664 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7da4e010-91cf-4920-ae5c-530bb20b2ba2-config\") pod \"machine-approver-56656f9798-4kcwn\" (UID: \"7da4e010-91cf-4920-ae5c-530bb20b2ba2\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4kcwn" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.353689 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pggs7\" (UniqueName: \"kubernetes.io/projected/0f115c11-0ba8-4204-8f68-291e35b90b09-kube-api-access-pggs7\") pod \"machine-config-server-jnctv\" (UID: \"0f115c11-0ba8-4204-8f68-291e35b90b09\") " pod="openshift-machine-config-operator/machine-config-server-jnctv" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.353707 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3641c614-3691-442a-95e4-13582cfd16d2-client-ca\") pod \"route-controller-manager-6576b87f9c-wmnch\" (UID: \"3641c614-3691-442a-95e4-13582cfd16d2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wmnch" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.353723 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/23943ec6-beb6-4bef-b4b1-e5c840ab997b-oauth-serving-cert\") pod \"console-f9d7485db-z5brc\" (UID: \"23943ec6-beb6-4bef-b4b1-e5c840ab997b\") " pod="openshift-console/console-f9d7485db-z5brc" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.353743 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ccwx\" (UniqueName: \"kubernetes.io/projected/2c053f98-2b15-48b7-9cf8-b8cdb0b29d81-kube-api-access-2ccwx\") pod \"oauth-openshift-558db77b4-pws6m\" (UID: \"2c053f98-2b15-48b7-9cf8-b8cdb0b29d81\") " pod="openshift-authentication/oauth-openshift-558db77b4-pws6m" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.353805 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m24fd\" (UniqueName: \"kubernetes.io/projected/c94822af-25f0-4380-9193-1554ff518daf-kube-api-access-m24fd\") pod \"package-server-manager-789f6589d5-chjv7\" (UID: \"c94822af-25f0-4380-9193-1554ff518daf\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-chjv7" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.353922 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.354254 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/23943ec6-beb6-4bef-b4b1-e5c840ab997b-service-ca\") pod \"console-f9d7485db-z5brc\" (UID: \"23943ec6-beb6-4bef-b4b1-e5c840ab997b\") " pod="openshift-console/console-f9d7485db-z5brc" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.354728 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/23943ec6-beb6-4bef-b4b1-e5c840ab997b-oauth-serving-cert\") pod 
\"console-f9d7485db-z5brc\" (UID: \"23943ec6-beb6-4bef-b4b1-e5c840ab997b\") " pod="openshift-console/console-f9d7485db-z5brc" Jan 29 06:37:34 crc kubenswrapper[5017]: E0129 06:37:34.354868 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 06:37:34.85482933 +0000 UTC m=+141.229276990 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sckkt" (UID: "4d05265b-0d73-42c3-be6a-12198c0109de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.355251 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/57af192f-0ed1-4738-b13e-a534669cdcaf-etcd-service-ca\") pod \"etcd-operator-b45778765-92jgp\" (UID: \"57af192f-0ed1-4738-b13e-a534669cdcaf\") " pod="openshift-etcd-operator/etcd-operator-b45778765-92jgp" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.356306 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4d05265b-0d73-42c3-be6a-12198c0109de-registry-tls\") pod \"image-registry-697d97f7c8-sckkt\" (UID: \"4d05265b-0d73-42c3-be6a-12198c0109de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sckkt" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.356878 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e2073b6-4205-42ab-8282-8bb749d7ef3d-service-ca-bundle\") pod \"router-default-5444994796-rlfl6\" (UID: \"6e2073b6-4205-42ab-8282-8bb749d7ef3d\") " pod="openshift-ingress/router-default-5444994796-rlfl6" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.356925 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5db2271c-c63c-4066-b91d-3f132768eb09-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-777x9\" (UID: \"5db2271c-c63c-4066-b91d-3f132768eb09\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-777x9" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.357019 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5m6g\" (UniqueName: \"kubernetes.io/projected/7da4e010-91cf-4920-ae5c-530bb20b2ba2-kube-api-access-p5m6g\") pod \"machine-approver-56656f9798-4kcwn\" (UID: \"7da4e010-91cf-4920-ae5c-530bb20b2ba2\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4kcwn" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.357048 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57af192f-0ed1-4738-b13e-a534669cdcaf-config\") pod \"etcd-operator-b45778765-92jgp\" (UID: \"57af192f-0ed1-4738-b13e-a534669cdcaf\") " pod="openshift-etcd-operator/etcd-operator-b45778765-92jgp" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.357075 5017 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/64c950af-fa9a-4e12-bc92-05a7ec5864f8-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-rbhd6\" (UID: \"64c950af-fa9a-4e12-bc92-05a7ec5864f8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rbhd6" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.357103 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/af26b99f-54ed-4730-91b2-a13130823631-profile-collector-cert\") pod \"olm-operator-6b444d44fb-l5qpt\" (UID: \"af26b99f-54ed-4730-91b2-a13130823631\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-l5qpt" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.357252 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/283f0328-3eac-4df0-8933-4e3f7d823fa9-serving-cert\") pod \"service-ca-operator-777779d784-5jhwr\" (UID: \"283f0328-3eac-4df0-8933-4e3f7d823fa9\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-5jhwr" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.357282 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56djz\" (UniqueName: \"kubernetes.io/projected/2495db50-8f94-4c1e-a23c-c16a3ee22bba-kube-api-access-56djz\") pod \"dns-default-fhx55\" (UID: \"2495db50-8f94-4c1e-a23c-c16a3ee22bba\") " pod="openshift-dns/dns-default-fhx55" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.357307 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v84zx\" (UniqueName: \"kubernetes.io/projected/ea36b9ef-bda8-411e-9d60-79e9ed3b514b-kube-api-access-v84zx\") pod \"catalog-operator-68c6474976-8pr49\" (UID: \"ea36b9ef-bda8-411e-9d60-79e9ed3b514b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8pr49" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.357570 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3641c614-3691-442a-95e4-13582cfd16d2-config\") pod \"route-controller-manager-6576b87f9c-wmnch\" (UID: \"3641c614-3691-442a-95e4-13582cfd16d2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wmnch" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.357637 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/688f2e96-86c1-45db-a699-28269469f6f0-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-j7bhs\" (UID: \"688f2e96-86c1-45db-a699-28269469f6f0\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-j7bhs" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.357758 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qg4kz\" (UniqueName: \"kubernetes.io/projected/16ace874-46a3-4668-a109-845a7d4a75e7-kube-api-access-qg4kz\") pod \"openshift-apiserver-operator-796bbdcf4f-ksz9z\" (UID: \"16ace874-46a3-4668-a109-845a7d4a75e7\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ksz9z" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.357817 5017 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjmq4\" (UniqueName: \"kubernetes.io/projected/547c672d-11fe-48d2-857e-27d7ac4e1fc9-kube-api-access-vjmq4\") pod \"packageserver-d55dfcdfc-j2rsb\" (UID: \"547c672d-11fe-48d2-857e-27d7ac4e1fc9\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j2rsb" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.357839 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bb138e77-1a05-4fd5-9fd7-47843ef6aa0b-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-f8chq\" (UID: \"bb138e77-1a05-4fd5-9fd7-47843ef6aa0b\") " pod="openshift-marketplace/marketplace-operator-79b997595-f8chq" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.357842 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/57af192f-0ed1-4738-b13e-a534669cdcaf-serving-cert\") pod \"etcd-operator-b45778765-92jgp\" (UID: \"57af192f-0ed1-4738-b13e-a534669cdcaf\") " pod="openshift-etcd-operator/etcd-operator-b45778765-92jgp" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.357882 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e55b317a-9140-4b76-8119-a7de3f95dd34-serving-cert\") pod \"openshift-config-operator-7777fb866f-n8qjs\" (UID: \"e55b317a-9140-4b76-8119-a7de3f95dd34\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-n8qjs" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.357910 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7fbf9f0a-9025-4eca-b0d2-00f87df0c16e-secret-volume\") pod \"collect-profiles-29494470-2sw2z\" (UID: \"7fbf9f0a-9025-4eca-b0d2-00f87df0c16e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494470-2sw2z" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.357903 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/64c950af-fa9a-4e12-bc92-05a7ec5864f8-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-rbhd6\" (UID: \"64c950af-fa9a-4e12-bc92-05a7ec5864f8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rbhd6" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.357981 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/6bd9b034-cfec-4194-9b45-318ed8625994-mountpoint-dir\") pod \"csi-hostpathplugin-4pmt6\" (UID: \"6bd9b034-cfec-4194-9b45-318ed8625994\") " pod="hostpath-provisioner/csi-hostpathplugin-4pmt6" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.358145 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/23943ec6-beb6-4bef-b4b1-e5c840ab997b-console-oauth-config\") pod \"console-f9d7485db-z5brc\" (UID: \"23943ec6-beb6-4bef-b4b1-e5c840ab997b\") " pod="openshift-console/console-f9d7485db-z5brc" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.358230 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/688f2e96-86c1-45db-a699-28269469f6f0-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-j7bhs\" (UID: \"688f2e96-86c1-45db-a699-28269469f6f0\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-j7bhs" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.358333 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1a99fb5-b091-4517-8519-59e68cd2366e-config\") pod \"kube-apiserver-operator-766d6c64bb-k92pt\" (UID: \"f1a99fb5-b091-4517-8519-59e68cd2366e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k92pt" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.358461 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/6e2073b6-4205-42ab-8282-8bb749d7ef3d-default-certificate\") pod \"router-default-5444994796-rlfl6\" (UID: \"6e2073b6-4205-42ab-8282-8bb749d7ef3d\") " pod="openshift-ingress/router-default-5444994796-rlfl6" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.358490 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/547c672d-11fe-48d2-857e-27d7ac4e1fc9-webhook-cert\") pod \"packageserver-d55dfcdfc-j2rsb\" (UID: \"547c672d-11fe-48d2-857e-27d7ac4e1fc9\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j2rsb" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.358535 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5df97138-ff6c-44b4-ac93-9136235d5888-images\") pod \"machine-config-operator-74547568cd-z6lr7\" (UID: \"5df97138-ff6c-44b4-ac93-9136235d5888\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-z6lr7" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.358665 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d21d2c22-5085-4712-a8d5-de95dc8a69b3-client-ca\") pod \"controller-manager-879f6c89f-5zqqn\" (UID: \"d21d2c22-5085-4712-a8d5-de95dc8a69b3\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5zqqn" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.358853 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/23943ec6-beb6-4bef-b4b1-e5c840ab997b-trusted-ca-bundle\") pod \"console-f9d7485db-z5brc\" (UID: \"23943ec6-beb6-4bef-b4b1-e5c840ab997b\") " pod="openshift-console/console-f9d7485db-z5brc" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.359082 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/baaa9247-4265-4db5-a2c8-eaa993fd0971-config\") pod \"kube-controller-manager-operator-78b949d7b-tl7pb\" (UID: \"baaa9247-4265-4db5-a2c8-eaa993fd0971\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tl7pb" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.359138 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/f1a99fb5-b091-4517-8519-59e68cd2366e-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-k92pt\" (UID: \"f1a99fb5-b091-4517-8519-59e68cd2366e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k92pt" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.359267 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/57af192f-0ed1-4738-b13e-a534669cdcaf-etcd-ca\") pod \"etcd-operator-b45778765-92jgp\" (UID: \"57af192f-0ed1-4738-b13e-a534669cdcaf\") " pod="openshift-etcd-operator/etcd-operator-b45778765-92jgp" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.359303 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/81866aa9-0a71-4fb1-8354-2cf5089e9e19-signing-key\") pod \"service-ca-9c57cc56f-k9gnr\" (UID: \"81866aa9-0a71-4fb1-8354-2cf5089e9e19\") " pod="openshift-service-ca/service-ca-9c57cc56f-k9gnr" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.359997 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/23943ec6-beb6-4bef-b4b1-e5c840ab997b-trusted-ca-bundle\") pod \"console-f9d7485db-z5brc\" (UID: \"23943ec6-beb6-4bef-b4b1-e5c840ab997b\") " pod="openshift-console/console-f9d7485db-z5brc" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.360117 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/6bd9b034-cfec-4194-9b45-318ed8625994-socket-dir\") pod \"csi-hostpathplugin-4pmt6\" (UID: \"6bd9b034-cfec-4194-9b45-318ed8625994\") " pod="hostpath-provisioner/csi-hostpathplugin-4pmt6" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.360165 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/baaa9247-4265-4db5-a2c8-eaa993fd0971-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-tl7pb\" (UID: \"baaa9247-4265-4db5-a2c8-eaa993fd0971\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tl7pb" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.360284 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2c053f98-2b15-48b7-9cf8-b8cdb0b29d81-audit-policies\") pod \"oauth-openshift-558db77b4-pws6m\" (UID: \"2c053f98-2b15-48b7-9cf8-b8cdb0b29d81\") " pod="openshift-authentication/oauth-openshift-558db77b4-pws6m" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.360310 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/64c950af-fa9a-4e12-bc92-05a7ec5864f8-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-rbhd6\" (UID: \"64c950af-fa9a-4e12-bc92-05a7ec5864f8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rbhd6" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.360344 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/2beee4dd-2230-42f4-a6a7-6d459f7564b5-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-44ss4\" (UID: \"2beee4dd-2230-42f4-a6a7-6d459f7564b5\") " 
pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-44ss4" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.360390 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/c94822af-25f0-4380-9193-1554ff518daf-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-chjv7\" (UID: \"c94822af-25f0-4380-9193-1554ff518daf\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-chjv7" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.360430 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2c053f98-2b15-48b7-9cf8-b8cdb0b29d81-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-pws6m\" (UID: \"2c053f98-2b15-48b7-9cf8-b8cdb0b29d81\") " pod="openshift-authentication/oauth-openshift-558db77b4-pws6m" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.360455 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2c053f98-2b15-48b7-9cf8-b8cdb0b29d81-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-pws6m\" (UID: \"2c053f98-2b15-48b7-9cf8-b8cdb0b29d81\") " pod="openshift-authentication/oauth-openshift-558db77b4-pws6m" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.360500 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ea36b9ef-bda8-411e-9d60-79e9ed3b514b-profile-collector-cert\") pod \"catalog-operator-68c6474976-8pr49\" (UID: \"ea36b9ef-bda8-411e-9d60-79e9ed3b514b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8pr49" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.360505 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/57af192f-0ed1-4738-b13e-a534669cdcaf-etcd-ca\") pod \"etcd-operator-b45778765-92jgp\" (UID: \"57af192f-0ed1-4738-b13e-a534669cdcaf\") " pod="openshift-etcd-operator/etcd-operator-b45778765-92jgp" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.360538 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4d05265b-0d73-42c3-be6a-12198c0109de-installation-pull-secrets\") pod \"image-registry-697d97f7c8-sckkt\" (UID: \"4d05265b-0d73-42c3-be6a-12198c0109de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sckkt" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.360570 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4d05265b-0d73-42c3-be6a-12198c0109de-bound-sa-token\") pod \"image-registry-697d97f7c8-sckkt\" (UID: \"4d05265b-0d73-42c3-be6a-12198c0109de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sckkt" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.360680 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29cgg\" (UniqueName: \"kubernetes.io/projected/250918e3-cdc9-40cb-b390-6dbb5afe9d1f-kube-api-access-29cgg\") pod \"machine-api-operator-5694c8668f-cbp4z\" (UID: \"250918e3-cdc9-40cb-b390-6dbb5afe9d1f\") " 
pod="openshift-machine-api/machine-api-operator-5694c8668f-cbp4z" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.360784 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5df97138-ff6c-44b4-ac93-9136235d5888-proxy-tls\") pod \"machine-config-operator-74547568cd-z6lr7\" (UID: \"5df97138-ff6c-44b4-ac93-9136235d5888\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-z6lr7" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.360821 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwj2z\" (UniqueName: \"kubernetes.io/projected/688f2e96-86c1-45db-a699-28269469f6f0-kube-api-access-rwj2z\") pod \"kube-storage-version-migrator-operator-b67b599dd-j7bhs\" (UID: \"688f2e96-86c1-45db-a699-28269469f6f0\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-j7bhs" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.360851 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/3c5c683b-3a25-4237-9b2c-e2ff822e0080-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-zlflf\" (UID: \"3c5c683b-3a25-4237-9b2c-e2ff822e0080\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zlflf" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.360878 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/6bd9b034-cfec-4194-9b45-318ed8625994-plugins-dir\") pod \"csi-hostpathplugin-4pmt6\" (UID: \"6bd9b034-cfec-4194-9b45-318ed8625994\") " pod="hostpath-provisioner/csi-hostpathplugin-4pmt6" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.360910 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/547c672d-11fe-48d2-857e-27d7ac4e1fc9-apiservice-cert\") pod \"packageserver-d55dfcdfc-j2rsb\" (UID: \"547c672d-11fe-48d2-857e-27d7ac4e1fc9\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j2rsb" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.360927 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d21d2c22-5085-4712-a8d5-de95dc8a69b3-client-ca\") pod \"controller-manager-879f6c89f-5zqqn\" (UID: \"d21d2c22-5085-4712-a8d5-de95dc8a69b3\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5zqqn" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.360943 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4d05265b-0d73-42c3-be6a-12198c0109de-ca-trust-extracted\") pod \"image-registry-697d97f7c8-sckkt\" (UID: \"4d05265b-0d73-42c3-be6a-12198c0109de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sckkt" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.361041 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4d05265b-0d73-42c3-be6a-12198c0109de-registry-certificates\") pod \"image-registry-697d97f7c8-sckkt\" (UID: \"4d05265b-0d73-42c3-be6a-12198c0109de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sckkt" Jan 29 06:37:34 crc kubenswrapper[5017]: 
I0129 06:37:34.361102 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxj4x\" (UniqueName: \"kubernetes.io/projected/112cd557-09e1-4599-ba7c-42a24407956f-kube-api-access-pxj4x\") pod \"dns-operator-744455d44c-plx6n\" (UID: \"112cd557-09e1-4599-ba7c-42a24407956f\") " pod="openshift-dns-operator/dns-operator-744455d44c-plx6n" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.361133 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7da4e010-91cf-4920-ae5c-530bb20b2ba2-auth-proxy-config\") pod \"machine-approver-56656f9798-4kcwn\" (UID: \"7da4e010-91cf-4920-ae5c-530bb20b2ba2\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4kcwn" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.361260 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/7da4e010-91cf-4920-ae5c-530bb20b2ba2-machine-approver-tls\") pod \"machine-approver-56656f9798-4kcwn\" (UID: \"7da4e010-91cf-4920-ae5c-530bb20b2ba2\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4kcwn" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.361462 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/6e2073b6-4205-42ab-8282-8bb749d7ef3d-stats-auth\") pod \"router-default-5444994796-rlfl6\" (UID: \"6e2073b6-4205-42ab-8282-8bb749d7ef3d\") " pod="openshift-ingress/router-default-5444994796-rlfl6" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.361531 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4d05265b-0d73-42c3-be6a-12198c0109de-trusted-ca\") pod \"image-registry-697d97f7c8-sckkt\" (UID: \"4d05265b-0d73-42c3-be6a-12198c0109de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sckkt" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.361572 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9fv4\" (UniqueName: \"kubernetes.io/projected/5df97138-ff6c-44b4-ac93-9136235d5888-kube-api-access-p9fv4\") pod \"machine-config-operator-74547568cd-z6lr7\" (UID: \"5df97138-ff6c-44b4-ac93-9136235d5888\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-z6lr7" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.361691 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/23943ec6-beb6-4bef-b4b1-e5c840ab997b-console-config\") pod \"console-f9d7485db-z5brc\" (UID: \"23943ec6-beb6-4bef-b4b1-e5c840ab997b\") " pod="openshift-console/console-f9d7485db-z5brc" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.362222 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3c5c683b-3a25-4237-9b2c-e2ff822e0080-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-zlflf\" (UID: \"3c5c683b-3a25-4237-9b2c-e2ff822e0080\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zlflf" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.362267 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/baaa9247-4265-4db5-a2c8-eaa993fd0971-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-tl7pb\" (UID: \"baaa9247-4265-4db5-a2c8-eaa993fd0971\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tl7pb" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.362350 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5db2271c-c63c-4066-b91d-3f132768eb09-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-777x9\" (UID: \"5db2271c-c63c-4066-b91d-3f132768eb09\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-777x9" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.362409 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdwj9\" (UniqueName: \"kubernetes.io/projected/2beee4dd-2230-42f4-a6a7-6d459f7564b5-kube-api-access-fdwj9\") pod \"cluster-samples-operator-665b6dd947-44ss4\" (UID: \"2beee4dd-2230-42f4-a6a7-6d459f7564b5\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-44ss4" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.362460 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2w5f\" (UniqueName: \"kubernetes.io/projected/64c950af-fa9a-4e12-bc92-05a7ec5864f8-kube-api-access-z2w5f\") pod \"cluster-image-registry-operator-dc59b4c8b-rbhd6\" (UID: \"64c950af-fa9a-4e12-bc92-05a7ec5864f8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rbhd6" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.362656 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2c053f98-2b15-48b7-9cf8-b8cdb0b29d81-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-pws6m\" (UID: \"2c053f98-2b15-48b7-9cf8-b8cdb0b29d81\") " pod="openshift-authentication/oauth-openshift-558db77b4-pws6m" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.362691 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/2c053f98-2b15-48b7-9cf8-b8cdb0b29d81-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-pws6m\" (UID: \"2c053f98-2b15-48b7-9cf8-b8cdb0b29d81\") " pod="openshift-authentication/oauth-openshift-558db77b4-pws6m" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.362845 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4d05265b-0d73-42c3-be6a-12198c0109de-registry-certificates\") pod \"image-registry-697d97f7c8-sckkt\" (UID: \"4d05265b-0d73-42c3-be6a-12198c0109de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sckkt" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.363174 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3c5c683b-3a25-4237-9b2c-e2ff822e0080-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-zlflf\" (UID: \"3c5c683b-3a25-4237-9b2c-e2ff822e0080\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zlflf" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.363207 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4d05265b-0d73-42c3-be6a-12198c0109de-ca-trust-extracted\") pod \"image-registry-697d97f7c8-sckkt\" (UID: \"4d05265b-0d73-42c3-be6a-12198c0109de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sckkt" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.363422 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/23943ec6-beb6-4bef-b4b1-e5c840ab997b-console-config\") pod \"console-f9d7485db-z5brc\" (UID: \"23943ec6-beb6-4bef-b4b1-e5c840ab997b\") " pod="openshift-console/console-f9d7485db-z5brc" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.364025 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57af192f-0ed1-4738-b13e-a534669cdcaf-config\") pod \"etcd-operator-b45778765-92jgp\" (UID: \"57af192f-0ed1-4738-b13e-a534669cdcaf\") " pod="openshift-etcd-operator/etcd-operator-b45778765-92jgp" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.364574 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/112cd557-09e1-4599-ba7c-42a24407956f-metrics-tls\") pod \"dns-operator-744455d44c-plx6n\" (UID: \"112cd557-09e1-4599-ba7c-42a24407956f\") " pod="openshift-dns-operator/dns-operator-744455d44c-plx6n" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.365188 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4d05265b-0d73-42c3-be6a-12198c0109de-trusted-ca\") pod \"image-registry-697d97f7c8-sckkt\" (UID: \"4d05265b-0d73-42c3-be6a-12198c0109de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sckkt" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.365649 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3c5c683b-3a25-4237-9b2c-e2ff822e0080-serving-cert\") pod \"apiserver-7bbb656c7d-zlflf\" (UID: \"3c5c683b-3a25-4237-9b2c-e2ff822e0080\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zlflf" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.366988 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/23943ec6-beb6-4bef-b4b1-e5c840ab997b-console-serving-cert\") pod \"console-f9d7485db-z5brc\" (UID: \"23943ec6-beb6-4bef-b4b1-e5c840ab997b\") " pod="openshift-console/console-f9d7485db-z5brc" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.367262 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3c5c683b-3a25-4237-9b2c-e2ff822e0080-etcd-client\") pod \"apiserver-7bbb656c7d-zlflf\" (UID: \"3c5c683b-3a25-4237-9b2c-e2ff822e0080\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zlflf" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.367916 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4d05265b-0d73-42c3-be6a-12198c0109de-installation-pull-secrets\") pod \"image-registry-697d97f7c8-sckkt\" (UID: \"4d05265b-0d73-42c3-be6a-12198c0109de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sckkt" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.368842 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" 
(UniqueName: \"kubernetes.io/secret/57af192f-0ed1-4738-b13e-a534669cdcaf-etcd-client\") pod \"etcd-operator-b45778765-92jgp\" (UID: \"57af192f-0ed1-4738-b13e-a534669cdcaf\") " pod="openshift-etcd-operator/etcd-operator-b45778765-92jgp" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.369043 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/23943ec6-beb6-4bef-b4b1-e5c840ab997b-console-oauth-config\") pod \"console-f9d7485db-z5brc\" (UID: \"23943ec6-beb6-4bef-b4b1-e5c840ab997b\") " pod="openshift-console/console-f9d7485db-z5brc" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.372628 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.392997 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.413028 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.432350 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.453303 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.464193 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 06:37:34 crc kubenswrapper[5017]: E0129 06:37:34.464389 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 06:37:34.9643566 +0000 UTC m=+141.338804210 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.464473 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/6bd9b034-cfec-4194-9b45-318ed8625994-socket-dir\") pod \"csi-hostpathplugin-4pmt6\" (UID: \"6bd9b034-cfec-4194-9b45-318ed8625994\") " pod="hostpath-provisioner/csi-hostpathplugin-4pmt6" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.464514 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2c053f98-2b15-48b7-9cf8-b8cdb0b29d81-audit-policies\") pod \"oauth-openshift-558db77b4-pws6m\" (UID: \"2c053f98-2b15-48b7-9cf8-b8cdb0b29d81\") " pod="openshift-authentication/oauth-openshift-558db77b4-pws6m" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.464534 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/baaa9247-4265-4db5-a2c8-eaa993fd0971-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-tl7pb\" (UID: \"baaa9247-4265-4db5-a2c8-eaa993fd0971\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tl7pb" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.464560 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/c94822af-25f0-4380-9193-1554ff518daf-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-chjv7\" (UID: \"c94822af-25f0-4380-9193-1554ff518daf\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-chjv7" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.464587 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2c053f98-2b15-48b7-9cf8-b8cdb0b29d81-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-pws6m\" (UID: \"2c053f98-2b15-48b7-9cf8-b8cdb0b29d81\") " pod="openshift-authentication/oauth-openshift-558db77b4-pws6m" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.464612 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2c053f98-2b15-48b7-9cf8-b8cdb0b29d81-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-pws6m\" (UID: \"2c053f98-2b15-48b7-9cf8-b8cdb0b29d81\") " pod="openshift-authentication/oauth-openshift-558db77b4-pws6m" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.464631 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ea36b9ef-bda8-411e-9d60-79e9ed3b514b-profile-collector-cert\") pod \"catalog-operator-68c6474976-8pr49\" (UID: \"ea36b9ef-bda8-411e-9d60-79e9ed3b514b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8pr49" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 
06:37:34.464655 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5df97138-ff6c-44b4-ac93-9136235d5888-proxy-tls\") pod \"machine-config-operator-74547568cd-z6lr7\" (UID: \"5df97138-ff6c-44b4-ac93-9136235d5888\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-z6lr7" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.464684 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwj2z\" (UniqueName: \"kubernetes.io/projected/688f2e96-86c1-45db-a699-28269469f6f0-kube-api-access-rwj2z\") pod \"kube-storage-version-migrator-operator-b67b599dd-j7bhs\" (UID: \"688f2e96-86c1-45db-a699-28269469f6f0\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-j7bhs" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.464709 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/6bd9b034-cfec-4194-9b45-318ed8625994-plugins-dir\") pod \"csi-hostpathplugin-4pmt6\" (UID: \"6bd9b034-cfec-4194-9b45-318ed8625994\") " pod="hostpath-provisioner/csi-hostpathplugin-4pmt6" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.464725 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/547c672d-11fe-48d2-857e-27d7ac4e1fc9-apiservice-cert\") pod \"packageserver-d55dfcdfc-j2rsb\" (UID: \"547c672d-11fe-48d2-857e-27d7ac4e1fc9\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j2rsb" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.464759 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/6e2073b6-4205-42ab-8282-8bb749d7ef3d-stats-auth\") pod \"router-default-5444994796-rlfl6\" (UID: \"6e2073b6-4205-42ab-8282-8bb749d7ef3d\") " pod="openshift-ingress/router-default-5444994796-rlfl6" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.464779 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9fv4\" (UniqueName: \"kubernetes.io/projected/5df97138-ff6c-44b4-ac93-9136235d5888-kube-api-access-p9fv4\") pod \"machine-config-operator-74547568cd-z6lr7\" (UID: \"5df97138-ff6c-44b4-ac93-9136235d5888\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-z6lr7" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.464796 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/baaa9247-4265-4db5-a2c8-eaa993fd0971-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-tl7pb\" (UID: \"baaa9247-4265-4db5-a2c8-eaa993fd0971\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tl7pb" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.464822 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5db2271c-c63c-4066-b91d-3f132768eb09-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-777x9\" (UID: \"5db2271c-c63c-4066-b91d-3f132768eb09\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-777x9" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.464849 5017 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/2c053f98-2b15-48b7-9cf8-b8cdb0b29d81-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-pws6m\" (UID: \"2c053f98-2b15-48b7-9cf8-b8cdb0b29d81\") " pod="openshift-authentication/oauth-openshift-558db77b4-pws6m" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.464871 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2c053f98-2b15-48b7-9cf8-b8cdb0b29d81-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-pws6m\" (UID: \"2c053f98-2b15-48b7-9cf8-b8cdb0b29d81\") " pod="openshift-authentication/oauth-openshift-558db77b4-pws6m" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.464903 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/81866aa9-0a71-4fb1-8354-2cf5089e9e19-signing-cabundle\") pod \"service-ca-9c57cc56f-k9gnr\" (UID: \"81866aa9-0a71-4fb1-8354-2cf5089e9e19\") " pod="openshift-service-ca/service-ca-9c57cc56f-k9gnr" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.464923 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/af26b99f-54ed-4730-91b2-a13130823631-srv-cert\") pod \"olm-operator-6b444d44fb-l5qpt\" (UID: \"af26b99f-54ed-4730-91b2-a13130823631\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-l5qpt" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.464970 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16ace874-46a3-4668-a109-845a7d4a75e7-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-ksz9z\" (UID: \"16ace874-46a3-4668-a109-845a7d4a75e7\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ksz9z" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.464991 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2495db50-8f94-4c1e-a23c-c16a3ee22bba-metrics-tls\") pod \"dns-default-fhx55\" (UID: \"2495db50-8f94-4c1e-a23c-c16a3ee22bba\") " pod="openshift-dns/dns-default-fhx55" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.465011 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7x5gc\" (UniqueName: \"kubernetes.io/projected/af26b99f-54ed-4730-91b2-a13130823631-kube-api-access-7x5gc\") pod \"olm-operator-6b444d44fb-l5qpt\" (UID: \"af26b99f-54ed-4730-91b2-a13130823631\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-l5qpt" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.465030 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sw7ss\" (UniqueName: \"kubernetes.io/projected/81866aa9-0a71-4fb1-8354-2cf5089e9e19-kube-api-access-sw7ss\") pod \"service-ca-9c57cc56f-k9gnr\" (UID: \"81866aa9-0a71-4fb1-8354-2cf5089e9e19\") " pod="openshift-service-ca/service-ca-9c57cc56f-k9gnr" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.465048 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/5ab0da1e-4133-488b-9472-83bde1f3bd25-control-plane-machine-set-operator-tls\") pod 
\"control-plane-machine-set-operator-78cbb6b69f-kvgh9\" (UID: \"5ab0da1e-4133-488b-9472-83bde1f3bd25\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kvgh9" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.465071 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/9ce02582-da6e-4ba4-8c7b-a1e1132eff04-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-cdswz\" (UID: \"9ce02582-da6e-4ba4-8c7b-a1e1132eff04\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-cdswz" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.465090 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5df97138-ff6c-44b4-ac93-9136235d5888-auth-proxy-config\") pod \"machine-config-operator-74547568cd-z6lr7\" (UID: \"5df97138-ff6c-44b4-ac93-9136235d5888\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-z6lr7" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.465106 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/547c672d-11fe-48d2-857e-27d7ac4e1fc9-tmpfs\") pod \"packageserver-d55dfcdfc-j2rsb\" (UID: \"547c672d-11fe-48d2-857e-27d7ac4e1fc9\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j2rsb" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.465127 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f1a99fb5-b091-4517-8519-59e68cd2366e-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-k92pt\" (UID: \"f1a99fb5-b091-4517-8519-59e68cd2366e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k92pt" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.465153 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2c053f98-2b15-48b7-9cf8-b8cdb0b29d81-audit-dir\") pod \"oauth-openshift-558db77b4-pws6m\" (UID: \"2c053f98-2b15-48b7-9cf8-b8cdb0b29d81\") " pod="openshift-authentication/oauth-openshift-558db77b4-pws6m" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.465171 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7fbf9f0a-9025-4eca-b0d2-00f87df0c16e-config-volume\") pod \"collect-profiles-29494470-2sw2z\" (UID: \"7fbf9f0a-9025-4eca-b0d2-00f87df0c16e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494470-2sw2z" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.465197 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dj8nt\" (UniqueName: \"kubernetes.io/projected/f3ba182d-28e7-4be4-8ae1-68d140e5e285-kube-api-access-dj8nt\") pod \"ingress-canary-nstbg\" (UID: \"f3ba182d-28e7-4be4-8ae1-68d140e5e285\") " pod="openshift-ingress-canary/ingress-canary-nstbg" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.465231 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2c053f98-2b15-48b7-9cf8-b8cdb0b29d81-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-pws6m\" (UID: \"2c053f98-2b15-48b7-9cf8-b8cdb0b29d81\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-pws6m" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.465257 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dddkc\" (UniqueName: \"kubernetes.io/projected/283f0328-3eac-4df0-8933-4e3f7d823fa9-kube-api-access-dddkc\") pod \"service-ca-operator-777779d784-5jhwr\" (UID: \"283f0328-3eac-4df0-8933-4e3f7d823fa9\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-5jhwr" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.465288 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/0f115c11-0ba8-4204-8f68-291e35b90b09-node-bootstrap-token\") pod \"machine-config-server-jnctv\" (UID: \"0f115c11-0ba8-4204-8f68-291e35b90b09\") " pod="openshift-machine-config-operator/machine-config-server-jnctv" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.465313 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkshx\" (UniqueName: \"kubernetes.io/projected/7fbf9f0a-9025-4eca-b0d2-00f87df0c16e-kube-api-access-dkshx\") pod \"collect-profiles-29494470-2sw2z\" (UID: \"7fbf9f0a-9025-4eca-b0d2-00f87df0c16e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494470-2sw2z" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.465333 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rh9m4\" (UniqueName: \"kubernetes.io/projected/bb138e77-1a05-4fd5-9fd7-47843ef6aa0b-kube-api-access-rh9m4\") pod \"marketplace-operator-79b997595-f8chq\" (UID: \"bb138e77-1a05-4fd5-9fd7-47843ef6aa0b\") " pod="openshift-marketplace/marketplace-operator-79b997595-f8chq" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.465362 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/927c767b-89e3-46c0-b5a1-02edc60a959c-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-dhb9x\" (UID: \"927c767b-89e3-46c0-b5a1-02edc60a959c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dhb9x" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.465379 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ea36b9ef-bda8-411e-9d60-79e9ed3b514b-srv-cert\") pod \"catalog-operator-68c6474976-8pr49\" (UID: \"ea36b9ef-bda8-411e-9d60-79e9ed3b514b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8pr49" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.465399 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/927c767b-89e3-46c0-b5a1-02edc60a959c-service-ca-bundle\") pod \"authentication-operator-69f744f599-dhb9x\" (UID: \"927c767b-89e3-46c0-b5a1-02edc60a959c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dhb9x" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.465416 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2495db50-8f94-4c1e-a23c-c16a3ee22bba-config-volume\") pod \"dns-default-fhx55\" (UID: \"2495db50-8f94-4c1e-a23c-c16a3ee22bba\") " pod="openshift-dns/dns-default-fhx55" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.465429 5017 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/6bd9b034-cfec-4194-9b45-318ed8625994-socket-dir\") pod \"csi-hostpathplugin-4pmt6\" (UID: \"6bd9b034-cfec-4194-9b45-318ed8625994\") " pod="hostpath-provisioner/csi-hostpathplugin-4pmt6" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.465442 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/bb138e77-1a05-4fd5-9fd7-47843ef6aa0b-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-f8chq\" (UID: \"bb138e77-1a05-4fd5-9fd7-47843ef6aa0b\") " pod="openshift-marketplace/marketplace-operator-79b997595-f8chq" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.465464 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5db2271c-c63c-4066-b91d-3f132768eb09-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-777x9\" (UID: \"5db2271c-c63c-4066-b91d-3f132768eb09\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-777x9" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.465469 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2c053f98-2b15-48b7-9cf8-b8cdb0b29d81-audit-policies\") pod \"oauth-openshift-558db77b4-pws6m\" (UID: \"2c053f98-2b15-48b7-9cf8-b8cdb0b29d81\") " pod="openshift-authentication/oauth-openshift-558db77b4-pws6m" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.465490 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2c053f98-2b15-48b7-9cf8-b8cdb0b29d81-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-pws6m\" (UID: \"2c053f98-2b15-48b7-9cf8-b8cdb0b29d81\") " pod="openshift-authentication/oauth-openshift-558db77b4-pws6m" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.465501 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2c053f98-2b15-48b7-9cf8-b8cdb0b29d81-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-pws6m\" (UID: \"2c053f98-2b15-48b7-9cf8-b8cdb0b29d81\") " pod="openshift-authentication/oauth-openshift-558db77b4-pws6m" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.466028 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2c053f98-2b15-48b7-9cf8-b8cdb0b29d81-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-pws6m\" (UID: \"2c053f98-2b15-48b7-9cf8-b8cdb0b29d81\") " pod="openshift-authentication/oauth-openshift-558db77b4-pws6m" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.466069 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c666e0f9-29fc-4765-8d2c-0d7b5e1545fd-proxy-tls\") pod \"machine-config-controller-84d6567774-lthxv\" (UID: \"c666e0f9-29fc-4765-8d2c-0d7b5e1545fd\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lthxv" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.466095 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrj9k\" (UniqueName: 
\"kubernetes.io/projected/927c767b-89e3-46c0-b5a1-02edc60a959c-kube-api-access-hrj9k\") pod \"authentication-operator-69f744f599-dhb9x\" (UID: \"927c767b-89e3-46c0-b5a1-02edc60a959c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dhb9x" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.466115 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16ace874-46a3-4668-a109-845a7d4a75e7-config\") pod \"openshift-apiserver-operator-796bbdcf4f-ksz9z\" (UID: \"16ace874-46a3-4668-a109-845a7d4a75e7\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ksz9z" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.466144 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/0f115c11-0ba8-4204-8f68-291e35b90b09-certs\") pod \"machine-config-server-jnctv\" (UID: \"0f115c11-0ba8-4204-8f68-291e35b90b09\") " pod="openshift-machine-config-operator/machine-config-server-jnctv" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.466182 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f3ba182d-28e7-4be4-8ae1-68d140e5e285-cert\") pod \"ingress-canary-nstbg\" (UID: \"f3ba182d-28e7-4be4-8ae1-68d140e5e285\") " pod="openshift-ingress-canary/ingress-canary-nstbg" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.466205 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/927c767b-89e3-46c0-b5a1-02edc60a959c-serving-cert\") pod \"authentication-operator-69f744f599-dhb9x\" (UID: \"927c767b-89e3-46c0-b5a1-02edc60a959c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dhb9x" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.466226 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2c053f98-2b15-48b7-9cf8-b8cdb0b29d81-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-pws6m\" (UID: \"2c053f98-2b15-48b7-9cf8-b8cdb0b29d81\") " pod="openshift-authentication/oauth-openshift-558db77b4-pws6m" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.466249 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/6bd9b034-cfec-4194-9b45-318ed8625994-csi-data-dir\") pod \"csi-hostpathplugin-4pmt6\" (UID: \"6bd9b034-cfec-4194-9b45-318ed8625994\") " pod="hostpath-provisioner/csi-hostpathplugin-4pmt6" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.466284 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2c053f98-2b15-48b7-9cf8-b8cdb0b29d81-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-pws6m\" (UID: \"2c053f98-2b15-48b7-9cf8-b8cdb0b29d81\") " pod="openshift-authentication/oauth-openshift-558db77b4-pws6m" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.466503 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2c053f98-2b15-48b7-9cf8-b8cdb0b29d81-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-pws6m\" (UID: \"2c053f98-2b15-48b7-9cf8-b8cdb0b29d81\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-pws6m" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.466529 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/283f0328-3eac-4df0-8933-4e3f7d823fa9-config\") pod \"service-ca-operator-777779d784-5jhwr\" (UID: \"283f0328-3eac-4df0-8933-4e3f7d823fa9\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-5jhwr" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.466560 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/6bd9b034-cfec-4194-9b45-318ed8625994-plugins-dir\") pod \"csi-hostpathplugin-4pmt6\" (UID: \"6bd9b034-cfec-4194-9b45-318ed8625994\") " pod="hostpath-provisioner/csi-hostpathplugin-4pmt6" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.466581 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7cfw\" (UniqueName: \"kubernetes.io/projected/c666e0f9-29fc-4765-8d2c-0d7b5e1545fd-kube-api-access-f7cfw\") pod \"machine-config-controller-84d6567774-lthxv\" (UID: \"c666e0f9-29fc-4765-8d2c-0d7b5e1545fd\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lthxv" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.466604 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2c053f98-2b15-48b7-9cf8-b8cdb0b29d81-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-pws6m\" (UID: \"2c053f98-2b15-48b7-9cf8-b8cdb0b29d81\") " pod="openshift-authentication/oauth-openshift-558db77b4-pws6m" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.466622 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c666e0f9-29fc-4765-8d2c-0d7b5e1545fd-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-lthxv\" (UID: \"c666e0f9-29fc-4765-8d2c-0d7b5e1545fd\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lthxv" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.466665 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wpfkb\" (UniqueName: \"kubernetes.io/projected/6bd9b034-cfec-4194-9b45-318ed8625994-kube-api-access-wpfkb\") pod \"csi-hostpathplugin-4pmt6\" (UID: \"6bd9b034-cfec-4194-9b45-318ed8625994\") " pod="hostpath-provisioner/csi-hostpathplugin-4pmt6" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.466770 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7fbf9f0a-9025-4eca-b0d2-00f87df0c16e-config-volume\") pod \"collect-profiles-29494470-2sw2z\" (UID: \"7fbf9f0a-9025-4eca-b0d2-00f87df0c16e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494470-2sw2z" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.468170 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/927c767b-89e3-46c0-b5a1-02edc60a959c-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-dhb9x\" (UID: \"927c767b-89e3-46c0-b5a1-02edc60a959c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dhb9x" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.468894 5017 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2c053f98-2b15-48b7-9cf8-b8cdb0b29d81-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-pws6m\" (UID: \"2c053f98-2b15-48b7-9cf8-b8cdb0b29d81\") " pod="openshift-authentication/oauth-openshift-558db77b4-pws6m" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.469241 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2c053f98-2b15-48b7-9cf8-b8cdb0b29d81-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-pws6m\" (UID: \"2c053f98-2b15-48b7-9cf8-b8cdb0b29d81\") " pod="openshift-authentication/oauth-openshift-558db77b4-pws6m" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.469712 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/81866aa9-0a71-4fb1-8354-2cf5089e9e19-signing-cabundle\") pod \"service-ca-9c57cc56f-k9gnr\" (UID: \"81866aa9-0a71-4fb1-8354-2cf5089e9e19\") " pod="openshift-service-ca/service-ca-9c57cc56f-k9gnr" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.469860 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/927c767b-89e3-46c0-b5a1-02edc60a959c-service-ca-bundle\") pod \"authentication-operator-69f744f599-dhb9x\" (UID: \"927c767b-89e3-46c0-b5a1-02edc60a959c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dhb9x" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.469923 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/5ab0da1e-4133-488b-9472-83bde1f3bd25-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-kvgh9\" (UID: \"5ab0da1e-4133-488b-9472-83bde1f3bd25\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kvgh9" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.470400 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/547c672d-11fe-48d2-857e-27d7ac4e1fc9-tmpfs\") pod \"packageserver-d55dfcdfc-j2rsb\" (UID: \"547c672d-11fe-48d2-857e-27d7ac4e1fc9\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j2rsb" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.470796 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ea36b9ef-bda8-411e-9d60-79e9ed3b514b-profile-collector-cert\") pod \"catalog-operator-68c6474976-8pr49\" (UID: \"ea36b9ef-bda8-411e-9d60-79e9ed3b514b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8pr49" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.471113 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2c053f98-2b15-48b7-9cf8-b8cdb0b29d81-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-pws6m\" (UID: \"2c053f98-2b15-48b7-9cf8-b8cdb0b29d81\") " pod="openshift-authentication/oauth-openshift-558db77b4-pws6m" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.471496 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/2c053f98-2b15-48b7-9cf8-b8cdb0b29d81-audit-dir\") pod \"oauth-openshift-558db77b4-pws6m\" (UID: \"2c053f98-2b15-48b7-9cf8-b8cdb0b29d81\") " pod="openshift-authentication/oauth-openshift-558db77b4-pws6m" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.471601 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2c053f98-2b15-48b7-9cf8-b8cdb0b29d81-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-pws6m\" (UID: \"2c053f98-2b15-48b7-9cf8-b8cdb0b29d81\") " pod="openshift-authentication/oauth-openshift-558db77b4-pws6m" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.471894 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5df97138-ff6c-44b4-ac93-9136235d5888-proxy-tls\") pod \"machine-config-operator-74547568cd-z6lr7\" (UID: \"5df97138-ff6c-44b4-ac93-9136235d5888\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-z6lr7" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.472021 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvhkp\" (UniqueName: \"kubernetes.io/projected/6e2073b6-4205-42ab-8282-8bb749d7ef3d-kube-api-access-wvhkp\") pod \"router-default-5444994796-rlfl6\" (UID: \"6e2073b6-4205-42ab-8282-8bb749d7ef3d\") " pod="openshift-ingress/router-default-5444994796-rlfl6" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.472048 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16ace874-46a3-4668-a109-845a7d4a75e7-config\") pod \"openshift-apiserver-operator-796bbdcf4f-ksz9z\" (UID: \"16ace874-46a3-4668-a109-845a7d4a75e7\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ksz9z" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.472064 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75cfd\" (UniqueName: \"kubernetes.io/projected/9ce02582-da6e-4ba4-8c7b-a1e1132eff04-kube-api-access-75cfd\") pod \"multus-admission-controller-857f4d67dd-cdswz\" (UID: \"9ce02582-da6e-4ba4-8c7b-a1e1132eff04\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-cdswz" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.472110 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zx9pk\" (UniqueName: \"kubernetes.io/projected/5ab0da1e-4133-488b-9472-83bde1f3bd25-kube-api-access-zx9pk\") pod \"control-plane-machine-set-operator-78cbb6b69f-kvgh9\" (UID: \"5ab0da1e-4133-488b-9472-83bde1f3bd25\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kvgh9" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.472140 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6e2073b6-4205-42ab-8282-8bb749d7ef3d-metrics-certs\") pod \"router-default-5444994796-rlfl6\" (UID: \"6e2073b6-4205-42ab-8282-8bb749d7ef3d\") " pod="openshift-ingress/router-default-5444994796-rlfl6" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.472290 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2c053f98-2b15-48b7-9cf8-b8cdb0b29d81-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-pws6m\" (UID: 
\"2c053f98-2b15-48b7-9cf8-b8cdb0b29d81\") " pod="openshift-authentication/oauth-openshift-558db77b4-pws6m" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.472346 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/927c767b-89e3-46c0-b5a1-02edc60a959c-config\") pod \"authentication-operator-69f744f599-dhb9x\" (UID: \"927c767b-89e3-46c0-b5a1-02edc60a959c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dhb9x" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.472375 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6bd9b034-cfec-4194-9b45-318ed8625994-registration-dir\") pod \"csi-hostpathplugin-4pmt6\" (UID: \"6bd9b034-cfec-4194-9b45-318ed8625994\") " pod="hostpath-provisioner/csi-hostpathplugin-4pmt6" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.472434 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ea36b9ef-bda8-411e-9d60-79e9ed3b514b-srv-cert\") pod \"catalog-operator-68c6474976-8pr49\" (UID: \"ea36b9ef-bda8-411e-9d60-79e9ed3b514b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8pr49" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.473405 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/baaa9247-4265-4db5-a2c8-eaa993fd0971-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-tl7pb\" (UID: \"baaa9247-4265-4db5-a2c8-eaa993fd0971\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tl7pb" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.473872 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5df97138-ff6c-44b4-ac93-9136235d5888-auth-proxy-config\") pod \"machine-config-operator-74547568cd-z6lr7\" (UID: \"5df97138-ff6c-44b4-ac93-9136235d5888\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-z6lr7" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.474089 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/9ce02582-da6e-4ba4-8c7b-a1e1132eff04-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-cdswz\" (UID: \"9ce02582-da6e-4ba4-8c7b-a1e1132eff04\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-cdswz" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.474230 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/547c672d-11fe-48d2-857e-27d7ac4e1fc9-apiservice-cert\") pod \"packageserver-d55dfcdfc-j2rsb\" (UID: \"547c672d-11fe-48d2-857e-27d7ac4e1fc9\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j2rsb" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.474294 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/6bd9b034-cfec-4194-9b45-318ed8625994-csi-data-dir\") pod \"csi-hostpathplugin-4pmt6\" (UID: \"6bd9b034-cfec-4194-9b45-318ed8625994\") " pod="hostpath-provisioner/csi-hostpathplugin-4pmt6" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.474791 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c666e0f9-29fc-4765-8d2c-0d7b5e1545fd-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-lthxv\" (UID: \"c666e0f9-29fc-4765-8d2c-0d7b5e1545fd\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lthxv" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.474853 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6bd9b034-cfec-4194-9b45-318ed8625994-registration-dir\") pod \"csi-hostpathplugin-4pmt6\" (UID: \"6bd9b034-cfec-4194-9b45-318ed8625994\") " pod="hostpath-provisioner/csi-hostpathplugin-4pmt6" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.474887 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/6e2073b6-4205-42ab-8282-8bb749d7ef3d-stats-auth\") pod \"router-default-5444994796-rlfl6\" (UID: \"6e2073b6-4205-42ab-8282-8bb749d7ef3d\") " pod="openshift-ingress/router-default-5444994796-rlfl6" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.474911 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/927c767b-89e3-46c0-b5a1-02edc60a959c-config\") pod \"authentication-operator-69f744f599-dhb9x\" (UID: \"927c767b-89e3-46c0-b5a1-02edc60a959c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dhb9x" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.474930 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sckkt\" (UID: \"4d05265b-0d73-42c3-be6a-12198c0109de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sckkt" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.475004 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pggs7\" (UniqueName: \"kubernetes.io/projected/0f115c11-0ba8-4204-8f68-291e35b90b09-kube-api-access-pggs7\") pod \"machine-config-server-jnctv\" (UID: \"0f115c11-0ba8-4204-8f68-291e35b90b09\") " pod="openshift-machine-config-operator/machine-config-server-jnctv" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.475056 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ccwx\" (UniqueName: \"kubernetes.io/projected/2c053f98-2b15-48b7-9cf8-b8cdb0b29d81-kube-api-access-2ccwx\") pod \"oauth-openshift-558db77b4-pws6m\" (UID: \"2c053f98-2b15-48b7-9cf8-b8cdb0b29d81\") " pod="openshift-authentication/oauth-openshift-558db77b4-pws6m" Jan 29 06:37:34 crc kubenswrapper[5017]: E0129 06:37:34.475243 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 06:37:34.975229543 +0000 UTC m=+141.349677153 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sckkt" (UID: "4d05265b-0d73-42c3-be6a-12198c0109de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.475316 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m24fd\" (UniqueName: \"kubernetes.io/projected/c94822af-25f0-4380-9193-1554ff518daf-kube-api-access-m24fd\") pod \"package-server-manager-789f6589d5-chjv7\" (UID: \"c94822af-25f0-4380-9193-1554ff518daf\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-chjv7" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.475387 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e2073b6-4205-42ab-8282-8bb749d7ef3d-service-ca-bundle\") pod \"router-default-5444994796-rlfl6\" (UID: \"6e2073b6-4205-42ab-8282-8bb749d7ef3d\") " pod="openshift-ingress/router-default-5444994796-rlfl6" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.475452 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5db2271c-c63c-4066-b91d-3f132768eb09-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-777x9\" (UID: \"5db2271c-c63c-4066-b91d-3f132768eb09\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-777x9" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.475494 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/688f2e96-86c1-45db-a699-28269469f6f0-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-j7bhs\" (UID: \"688f2e96-86c1-45db-a699-28269469f6f0\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-j7bhs" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.475522 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/af26b99f-54ed-4730-91b2-a13130823631-profile-collector-cert\") pod \"olm-operator-6b444d44fb-l5qpt\" (UID: \"af26b99f-54ed-4730-91b2-a13130823631\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-l5qpt" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.475547 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/283f0328-3eac-4df0-8933-4e3f7d823fa9-serving-cert\") pod \"service-ca-operator-777779d784-5jhwr\" (UID: \"283f0328-3eac-4df0-8933-4e3f7d823fa9\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-5jhwr" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.476004 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/2c053f98-2b15-48b7-9cf8-b8cdb0b29d81-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-pws6m\" (UID: \"2c053f98-2b15-48b7-9cf8-b8cdb0b29d81\") " pod="openshift-authentication/oauth-openshift-558db77b4-pws6m" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 
06:37:34.476126 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5db2271c-c63c-4066-b91d-3f132768eb09-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-777x9\" (UID: \"5db2271c-c63c-4066-b91d-3f132768eb09\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-777x9" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.476176 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2c053f98-2b15-48b7-9cf8-b8cdb0b29d81-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-pws6m\" (UID: \"2c053f98-2b15-48b7-9cf8-b8cdb0b29d81\") " pod="openshift-authentication/oauth-openshift-558db77b4-pws6m" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.476307 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e2073b6-4205-42ab-8282-8bb749d7ef3d-service-ca-bundle\") pod \"router-default-5444994796-rlfl6\" (UID: \"6e2073b6-4205-42ab-8282-8bb749d7ef3d\") " pod="openshift-ingress/router-default-5444994796-rlfl6" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.476726 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.476820 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/688f2e96-86c1-45db-a699-28269469f6f0-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-j7bhs\" (UID: \"688f2e96-86c1-45db-a699-28269469f6f0\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-j7bhs" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.477102 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56djz\" (UniqueName: \"kubernetes.io/projected/2495db50-8f94-4c1e-a23c-c16a3ee22bba-kube-api-access-56djz\") pod \"dns-default-fhx55\" (UID: \"2495db50-8f94-4c1e-a23c-c16a3ee22bba\") " pod="openshift-dns/dns-default-fhx55" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.477143 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v84zx\" (UniqueName: \"kubernetes.io/projected/ea36b9ef-bda8-411e-9d60-79e9ed3b514b-kube-api-access-v84zx\") pod \"catalog-operator-68c6474976-8pr49\" (UID: \"ea36b9ef-bda8-411e-9d60-79e9ed3b514b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8pr49" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.477196 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qg4kz\" (UniqueName: \"kubernetes.io/projected/16ace874-46a3-4668-a109-845a7d4a75e7-kube-api-access-qg4kz\") pod \"openshift-apiserver-operator-796bbdcf4f-ksz9z\" (UID: \"16ace874-46a3-4668-a109-845a7d4a75e7\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ksz9z" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.477244 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7fbf9f0a-9025-4eca-b0d2-00f87df0c16e-secret-volume\") pod \"collect-profiles-29494470-2sw2z\" (UID: \"7fbf9f0a-9025-4eca-b0d2-00f87df0c16e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494470-2sw2z" Jan 29 
06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.477277 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjmq4\" (UniqueName: \"kubernetes.io/projected/547c672d-11fe-48d2-857e-27d7ac4e1fc9-kube-api-access-vjmq4\") pod \"packageserver-d55dfcdfc-j2rsb\" (UID: \"547c672d-11fe-48d2-857e-27d7ac4e1fc9\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j2rsb" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.477305 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bb138e77-1a05-4fd5-9fd7-47843ef6aa0b-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-f8chq\" (UID: \"bb138e77-1a05-4fd5-9fd7-47843ef6aa0b\") " pod="openshift-marketplace/marketplace-operator-79b997595-f8chq" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.477359 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/6bd9b034-cfec-4194-9b45-318ed8625994-mountpoint-dir\") pod \"csi-hostpathplugin-4pmt6\" (UID: \"6bd9b034-cfec-4194-9b45-318ed8625994\") " pod="hostpath-provisioner/csi-hostpathplugin-4pmt6" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.477385 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1a99fb5-b091-4517-8519-59e68cd2366e-config\") pod \"kube-apiserver-operator-766d6c64bb-k92pt\" (UID: \"f1a99fb5-b091-4517-8519-59e68cd2366e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k92pt" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.477414 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/688f2e96-86c1-45db-a699-28269469f6f0-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-j7bhs\" (UID: \"688f2e96-86c1-45db-a699-28269469f6f0\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-j7bhs" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.477444 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/6e2073b6-4205-42ab-8282-8bb749d7ef3d-default-certificate\") pod \"router-default-5444994796-rlfl6\" (UID: \"6e2073b6-4205-42ab-8282-8bb749d7ef3d\") " pod="openshift-ingress/router-default-5444994796-rlfl6" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.477469 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/547c672d-11fe-48d2-857e-27d7ac4e1fc9-webhook-cert\") pod \"packageserver-d55dfcdfc-j2rsb\" (UID: \"547c672d-11fe-48d2-857e-27d7ac4e1fc9\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j2rsb" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.477496 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5df97138-ff6c-44b4-ac93-9136235d5888-images\") pod \"machine-config-operator-74547568cd-z6lr7\" (UID: \"5df97138-ff6c-44b4-ac93-9136235d5888\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-z6lr7" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.477517 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: 
\"kubernetes.io/host-path/6bd9b034-cfec-4194-9b45-318ed8625994-mountpoint-dir\") pod \"csi-hostpathplugin-4pmt6\" (UID: \"6bd9b034-cfec-4194-9b45-318ed8625994\") " pod="hostpath-provisioner/csi-hostpathplugin-4pmt6" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.477532 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/baaa9247-4265-4db5-a2c8-eaa993fd0971-config\") pod \"kube-controller-manager-operator-78b949d7b-tl7pb\" (UID: \"baaa9247-4265-4db5-a2c8-eaa993fd0971\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tl7pb" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.477558 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/81866aa9-0a71-4fb1-8354-2cf5089e9e19-signing-key\") pod \"service-ca-9c57cc56f-k9gnr\" (UID: \"81866aa9-0a71-4fb1-8354-2cf5089e9e19\") " pod="openshift-service-ca/service-ca-9c57cc56f-k9gnr" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.477587 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f1a99fb5-b091-4517-8519-59e68cd2366e-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-k92pt\" (UID: \"f1a99fb5-b091-4517-8519-59e68cd2366e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k92pt" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.477679 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2c053f98-2b15-48b7-9cf8-b8cdb0b29d81-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-pws6m\" (UID: \"2c053f98-2b15-48b7-9cf8-b8cdb0b29d81\") " pod="openshift-authentication/oauth-openshift-558db77b4-pws6m" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.478445 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2c053f98-2b15-48b7-9cf8-b8cdb0b29d81-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-pws6m\" (UID: \"2c053f98-2b15-48b7-9cf8-b8cdb0b29d81\") " pod="openshift-authentication/oauth-openshift-558db77b4-pws6m" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.478658 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6e2073b6-4205-42ab-8282-8bb749d7ef3d-metrics-certs\") pod \"router-default-5444994796-rlfl6\" (UID: \"6e2073b6-4205-42ab-8282-8bb749d7ef3d\") " pod="openshift-ingress/router-default-5444994796-rlfl6" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.478799 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/af26b99f-54ed-4730-91b2-a13130823631-srv-cert\") pod \"olm-operator-6b444d44fb-l5qpt\" (UID: \"af26b99f-54ed-4730-91b2-a13130823631\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-l5qpt" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.479024 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5df97138-ff6c-44b4-ac93-9136235d5888-images\") pod \"machine-config-operator-74547568cd-z6lr7\" (UID: \"5df97138-ff6c-44b4-ac93-9136235d5888\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-z6lr7" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.479623 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/baaa9247-4265-4db5-a2c8-eaa993fd0971-config\") pod \"kube-controller-manager-operator-78b949d7b-tl7pb\" (UID: \"baaa9247-4265-4db5-a2c8-eaa993fd0971\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tl7pb" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.479858 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1a99fb5-b091-4517-8519-59e68cd2366e-config\") pod \"kube-apiserver-operator-766d6c64bb-k92pt\" (UID: \"f1a99fb5-b091-4517-8519-59e68cd2366e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k92pt" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.480548 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f1a99fb5-b091-4517-8519-59e68cd2366e-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-k92pt\" (UID: \"f1a99fb5-b091-4517-8519-59e68cd2366e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k92pt" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.480696 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7fbf9f0a-9025-4eca-b0d2-00f87df0c16e-secret-volume\") pod \"collect-profiles-29494470-2sw2z\" (UID: \"7fbf9f0a-9025-4eca-b0d2-00f87df0c16e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494470-2sw2z" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.482697 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/688f2e96-86c1-45db-a699-28269469f6f0-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-j7bhs\" (UID: \"688f2e96-86c1-45db-a699-28269469f6f0\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-j7bhs" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.483076 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/547c672d-11fe-48d2-857e-27d7ac4e1fc9-webhook-cert\") pod \"packageserver-d55dfcdfc-j2rsb\" (UID: \"547c672d-11fe-48d2-857e-27d7ac4e1fc9\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j2rsb" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.483339 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2c053f98-2b15-48b7-9cf8-b8cdb0b29d81-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-pws6m\" (UID: \"2c053f98-2b15-48b7-9cf8-b8cdb0b29d81\") " pod="openshift-authentication/oauth-openshift-558db77b4-pws6m" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.483813 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/af26b99f-54ed-4730-91b2-a13130823631-profile-collector-cert\") pod \"olm-operator-6b444d44fb-l5qpt\" (UID: \"af26b99f-54ed-4730-91b2-a13130823631\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-l5qpt" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.484553 5017 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5db2271c-c63c-4066-b91d-3f132768eb09-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-777x9\" (UID: \"5db2271c-c63c-4066-b91d-3f132768eb09\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-777x9" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.485619 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/6e2073b6-4205-42ab-8282-8bb749d7ef3d-default-certificate\") pod \"router-default-5444994796-rlfl6\" (UID: \"6e2073b6-4205-42ab-8282-8bb749d7ef3d\") " pod="openshift-ingress/router-default-5444994796-rlfl6" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.487548 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/81866aa9-0a71-4fb1-8354-2cf5089e9e19-signing-key\") pod \"service-ca-9c57cc56f-k9gnr\" (UID: \"81866aa9-0a71-4fb1-8354-2cf5089e9e19\") " pod="openshift-service-ca/service-ca-9c57cc56f-k9gnr" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.492860 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.494254 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/927c767b-89e3-46c0-b5a1-02edc60a959c-serving-cert\") pod \"authentication-operator-69f744f599-dhb9x\" (UID: \"927c767b-89e3-46c0-b5a1-02edc60a959c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dhb9x" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.498978 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16ace874-46a3-4668-a109-845a7d4a75e7-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-ksz9z\" (UID: \"16ace874-46a3-4668-a109-845a7d4a75e7\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ksz9z" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.514814 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.525204 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c666e0f9-29fc-4765-8d2c-0d7b5e1545fd-proxy-tls\") pod \"machine-config-controller-84d6567774-lthxv\" (UID: \"c666e0f9-29fc-4765-8d2c-0d7b5e1545fd\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lthxv" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.533734 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.553503 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.573048 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.578876 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 06:37:34 crc kubenswrapper[5017]: E0129 06:37:34.579137 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 06:37:35.07908446 +0000 UTC m=+141.453532110 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.580187 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sckkt\" (UID: \"4d05265b-0d73-42c3-be6a-12198c0109de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sckkt" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.580382 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/c94822af-25f0-4380-9193-1554ff518daf-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-chjv7\" (UID: \"c94822af-25f0-4380-9193-1554ff518daf\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-chjv7" Jan 29 06:37:34 crc kubenswrapper[5017]: E0129 06:37:34.580702 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 06:37:35.080682876 +0000 UTC m=+141.455130486 (durationBeforeRetry 500ms). 
Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.600084 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.610450 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bb138e77-1a05-4fd5-9fd7-47843ef6aa0b-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-f8chq\" (UID: \"bb138e77-1a05-4fd5-9fd7-47843ef6aa0b\") " pod="openshift-marketplace/marketplace-operator-79b997595-f8chq"
Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.613770 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.634064 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.641267 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/bb138e77-1a05-4fd5-9fd7-47843ef6aa0b-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-f8chq\" (UID: \"bb138e77-1a05-4fd5-9fd7-47843ef6aa0b\") " pod="openshift-marketplace/marketplace-operator-79b997595-f8chq"
Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.652874 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.673402 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.681668 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " 
Jan 29 06:37:34 crc kubenswrapper[5017]: E0129 06:37:34.681909 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 06:37:35.181888067 +0000 UTC m=+141.556335677 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.682026 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sckkt\" (UID: \"4d05265b-0d73-42c3-be6a-12198c0109de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sckkt" Jan 29 06:37:34 crc kubenswrapper[5017]: E0129 06:37:34.682643 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 06:37:35.182631668 +0000 UTC m=+141.557079278 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sckkt" (UID: "4d05265b-0d73-42c3-be6a-12198c0109de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.692910 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.713325 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.733870 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.742636 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/283f0328-3eac-4df0-8933-4e3f7d823fa9-serving-cert\") pod \"service-ca-operator-777779d784-5jhwr\" (UID: \"283f0328-3eac-4df0-8933-4e3f7d823fa9\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-5jhwr" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.752672 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.763907 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/283f0328-3eac-4df0-8933-4e3f7d823fa9-config\") pod \"service-ca-operator-777779d784-5jhwr\" (UID: \"283f0328-3eac-4df0-8933-4e3f7d823fa9\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-5jhwr" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.773587 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.785346 5017 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 06:37:34 crc kubenswrapper[5017]: E0129 06:37:34.786360 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 06:37:35.286312889 +0000 UTC m=+141.660760509 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.787107 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sckkt\" (UID: \"4d05265b-0d73-42c3-be6a-12198c0109de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sckkt" Jan 29 06:37:34 crc kubenswrapper[5017]: E0129 06:37:34.789091 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 06:37:35.289078179 +0000 UTC m=+141.663525789 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sckkt" (UID: "4d05265b-0d73-42c3-be6a-12198c0109de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.796343 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.805736 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/0f115c11-0ba8-4204-8f68-291e35b90b09-node-bootstrap-token\") pod \"machine-config-server-jnctv\" (UID: \"0f115c11-0ba8-4204-8f68-291e35b90b09\") " pod="openshift-machine-config-operator/machine-config-server-jnctv" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.813624 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.826211 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/0f115c11-0ba8-4204-8f68-291e35b90b09-certs\") pod \"machine-config-server-jnctv\" (UID: \"0f115c11-0ba8-4204-8f68-291e35b90b09\") " pod="openshift-machine-config-operator/machine-config-server-jnctv" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.835304 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.869920 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5hp88\" (UniqueName: \"kubernetes.io/projected/8d97f5ac-52f0-43fe-9a35-662481fe2c83-kube-api-access-5hp88\") pod \"ingress-operator-5b745b69d9-ddv7s\" (UID: \"8d97f5ac-52f0-43fe-9a35-662481fe2c83\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ddv7s" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.888154 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 06:37:34 crc kubenswrapper[5017]: E0129 06:37:34.888372 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 06:37:35.388336134 +0000 UTC m=+141.762783784 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.888621 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sckkt\" (UID: \"4d05265b-0d73-42c3-be6a-12198c0109de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sckkt" Jan 29 06:37:34 crc kubenswrapper[5017]: E0129 06:37:34.889322 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 06:37:35.389291951 +0000 UTC m=+141.763739791 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sckkt" (UID: "4d05265b-0d73-42c3-be6a-12198c0109de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.893743 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-knrsf\" (UniqueName: \"kubernetes.io/projected/84ff2015-09a3-41e9-be73-b2952f0aebcf-kube-api-access-knrsf\") pod \"console-operator-58897d9998-fbqdg\" (UID: \"84ff2015-09a3-41e9-be73-b2952f0aebcf\") " pod="openshift-console-operator/console-operator-58897d9998-fbqdg" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.910522 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8d97f5ac-52f0-43fe-9a35-662481fe2c83-bound-sa-token\") pod \"ingress-operator-5b745b69d9-ddv7s\" (UID: \"8d97f5ac-52f0-43fe-9a35-662481fe2c83\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ddv7s" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.929252 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tct5v\" (UniqueName: \"kubernetes.io/projected/9889e9bc-b25e-4272-ad18-474c0d2fa2ce-kube-api-access-tct5v\") pod \"openshift-controller-manager-operator-756b6f6bc6-6tqcv\" (UID: \"9889e9bc-b25e-4272-ad18-474c0d2fa2ce\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6tqcv" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.954636 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6jbr\" (UniqueName: \"kubernetes.io/projected/c4f1aae3-1ca6-43d6-9a83-be14081c28df-kube-api-access-b6jbr\") pod \"apiserver-76f77b778f-pd9qw\" (UID: \"c4f1aae3-1ca6-43d6-9a83-be14081c28df\") " pod="openshift-apiserver/apiserver-76f77b778f-pd9qw" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.971250 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-z2vsq\" (UniqueName: \"kubernetes.io/projected/0d65eec8-87c6-4fd9-9cca-d784e7e25232-kube-api-access-z2vsq\") pod \"migrator-59844c95c7-6tsc9\" (UID: \"0d65eec8-87c6-4fd9-9cca-d784e7e25232\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-6tsc9" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.972689 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.979863 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2495db50-8f94-4c1e-a23c-c16a3ee22bba-config-volume\") pod \"dns-default-fhx55\" (UID: \"2495db50-8f94-4c1e-a23c-c16a3ee22bba\") " pod="openshift-dns/dns-default-fhx55" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.992572 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 06:37:34 crc kubenswrapper[5017]: E0129 06:37:34.992826 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 06:37:35.492786958 +0000 UTC m=+141.867234608 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.993209 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sckkt\" (UID: \"4d05265b-0d73-42c3-be6a-12198c0109de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sckkt" Jan 29 06:37:34 crc kubenswrapper[5017]: E0129 06:37:34.993844 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 06:37:35.493825428 +0000 UTC m=+141.868273078 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sckkt" (UID: "4d05265b-0d73-42c3-be6a-12198c0109de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.994260 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 29 06:37:34 crc kubenswrapper[5017]: I0129 06:37:34.994588 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-fbqdg" Jan 29 06:37:35 crc kubenswrapper[5017]: I0129 06:37:35.013678 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-pd9qw" Jan 29 06:37:35 crc kubenswrapper[5017]: I0129 06:37:35.013693 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 29 06:37:35 crc kubenswrapper[5017]: I0129 06:37:35.024713 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2495db50-8f94-4c1e-a23c-c16a3ee22bba-metrics-tls\") pod \"dns-default-fhx55\" (UID: \"2495db50-8f94-4c1e-a23c-c16a3ee22bba\") " pod="openshift-dns/dns-default-fhx55" Jan 29 06:37:35 crc kubenswrapper[5017]: I0129 06:37:35.033628 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 29 06:37:35 crc kubenswrapper[5017]: I0129 06:37:35.041094 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ddv7s" Jan 29 06:37:35 crc kubenswrapper[5017]: I0129 06:37:35.056729 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 29 06:37:35 crc kubenswrapper[5017]: I0129 06:37:35.073835 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 29 06:37:35 crc kubenswrapper[5017]: I0129 06:37:35.085874 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f3ba182d-28e7-4be4-8ae1-68d140e5e285-cert\") pod \"ingress-canary-nstbg\" (UID: \"f3ba182d-28e7-4be4-8ae1-68d140e5e285\") " pod="openshift-ingress-canary/ingress-canary-nstbg" Jan 29 06:37:35 crc kubenswrapper[5017]: I0129 06:37:35.095349 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-6tsc9" Jan 29 06:37:35 crc kubenswrapper[5017]: I0129 06:37:35.095495 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 06:37:35 crc kubenswrapper[5017]: E0129 06:37:35.095587 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-29 06:37:35.595563563 +0000 UTC m=+141.970011173 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:37:35 crc kubenswrapper[5017]: I0129 06:37:35.096050 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 29 06:37:35 crc kubenswrapper[5017]: I0129 06:37:35.097093 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sckkt\" (UID: \"4d05265b-0d73-42c3-be6a-12198c0109de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sckkt" Jan 29 06:37:35 crc kubenswrapper[5017]: E0129 06:37:35.098282 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 06:37:35.59822334 +0000 UTC m=+141.972670990 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sckkt" (UID: "4d05265b-0d73-42c3-be6a-12198c0109de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:37:35 crc kubenswrapper[5017]: I0129 06:37:35.106765 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6tqcv"
Jan 29 06:37:35 crc kubenswrapper[5017]: I0129 06:37:35.116699 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Jan 29 06:37:35 crc kubenswrapper[5017]: I0129 06:37:35.140142 5017 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Jan 29 06:37:35 crc kubenswrapper[5017]: I0129 06:37:35.152020 5017 request.go:700] Waited for 1.873842771s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/hostpath-provisioner/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0
Jan 29 06:37:35 crc kubenswrapper[5017]: I0129 06:37:35.154646 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Jan 29 06:37:35 crc kubenswrapper[5017]: I0129 06:37:35.198339 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " 
Jan 29 06:37:35 crc kubenswrapper[5017]: E0129 06:37:35.198504 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 06:37:35.698467953 +0000 UTC m=+142.072915573 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 06:37:35 crc kubenswrapper[5017]: I0129 06:37:35.199642 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sckkt\" (UID: \"4d05265b-0d73-42c3-be6a-12198c0109de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sckkt"
Jan 29 06:37:35 crc kubenswrapper[5017]: E0129 06:37:35.200043 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 06:37:35.700033948 +0000 UTC m=+142.074481558 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sckkt" (UID: "4d05265b-0d73-42c3-be6a-12198c0109de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
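The request.go:700 entry above comes from client-go's client-side rate limiter, which, as the message says, is distinct from server-side API priority and fairness: kubelet's many reflectors all LIST their objects at startup, the shared token bucket drains, and later requests (here a hostpath-provisioner configmap LIST) queue for almost two seconds. A hedged sketch of where that limiter lives follows; the QPS and Burst values are illustrative only (kubelet's own limits come from its kubeAPIQPS/kubeAPIBurst configuration, not from code like this):

    package main

    import (
    	"fmt"

    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	// client-go throttles API calls through a client-side token bucket;
    	// when a burst of requests drains the bucket, request.go logs the
    	// "Waited for ... due to client-side throttling" message seen above.
    	cfg.QPS = 50    // sustained requests per second (illustrative)
    	cfg.Burst = 100 // extra headroom for startup spikes (illustrative)
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("client ready (QPS=%v, Burst=%v): %T\n", cfg.QPS, cfg.Burst, cs)
    }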
Jan 29 06:37:35 crc kubenswrapper[5017]: I0129 06:37:35.200454 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Jan 29 06:37:35 crc kubenswrapper[5017]: I0129 06:37:35.201975 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7da4e010-91cf-4920-ae5c-530bb20b2ba2-auth-proxy-config\") pod \"machine-approver-56656f9798-4kcwn\" (UID: \"7da4e010-91cf-4920-ae5c-530bb20b2ba2\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4kcwn"
Jan 29 06:37:35 crc kubenswrapper[5017]: I0129 06:37:35.216981 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Jan 29 06:37:35 crc kubenswrapper[5017]: I0129 06:37:35.225323 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7da4e010-91cf-4920-ae5c-530bb20b2ba2-config\") pod \"machine-approver-56656f9798-4kcwn\" (UID: \"7da4e010-91cf-4920-ae5c-530bb20b2ba2\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4kcwn"
Jan 29 06:37:35 crc kubenswrapper[5017]: I0129 06:37:35.236426 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Jan 29 06:37:35 crc kubenswrapper[5017]: I0129 06:37:35.255941 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-fbqdg"]
Jan 29 06:37:35 crc kubenswrapper[5017]: I0129 06:37:35.257280 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Jan 29 06:37:35 crc kubenswrapper[5017]: I0129 06:37:35.269659 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-pd9qw"]
Jan 29 06:37:35 crc kubenswrapper[5017]: W0129 06:37:35.270232 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod84ff2015_09a3_41e9_be73_b2952f0aebcf.slice/crio-2e705152000a6c1c9533399d806e5d1f08d1cd99b36d2a323a72902ca41dc64a WatchSource:0}: Error finding container 2e705152000a6c1c9533399d806e5d1f08d1cd99b36d2a323a72902ca41dc64a: Status 404 returned error can't find the container with id 2e705152000a6c1c9533399d806e5d1f08d1cd99b36d2a323a72902ca41dc64a
Jan 29 06:37:35 crc kubenswrapper[5017]: I0129 06:37:35.272541 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Jan 29 06:37:35 crc kubenswrapper[5017]: I0129 06:37:35.281728 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-ddv7s"]
Jan 29 06:37:35 crc kubenswrapper[5017]: W0129 06:37:35.287851 5017 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc4f1aae3_1ca6_43d6_9a83_be14081c28df.slice/crio-a4182e5d06292a783e3c6fcd565dfbe4479944e7e36ca16e15cbf242245c70a5 WatchSource:0}: Error finding container a4182e5d06292a783e3c6fcd565dfbe4479944e7e36ca16e15cbf242245c70a5: Status 404 returned error can't find the container with id a4182e5d06292a783e3c6fcd565dfbe4479944e7e36ca16e15cbf242245c70a5 Jan 29 06:37:35 crc kubenswrapper[5017]: I0129 06:37:35.292655 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 29 06:37:35 crc kubenswrapper[5017]: I0129 06:37:35.300520 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 06:37:35 crc kubenswrapper[5017]: E0129 06:37:35.300770 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 06:37:35.800740875 +0000 UTC m=+142.175188485 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:37:35 crc kubenswrapper[5017]: I0129 06:37:35.304535 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sckkt\" (UID: \"4d05265b-0d73-42c3-be6a-12198c0109de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sckkt" Jan 29 06:37:35 crc kubenswrapper[5017]: E0129 06:37:35.305516 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 06:37:35.805492121 +0000 UTC m=+142.179939741 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sckkt" (UID: "4d05265b-0d73-42c3-be6a-12198c0109de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:37:35 crc kubenswrapper[5017]: I0129 06:37:35.314397 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 29 06:37:35 crc kubenswrapper[5017]: I0129 06:37:35.332693 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 29 06:37:35 crc kubenswrapper[5017]: I0129 06:37:35.344070 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6tqcv"] Jan 29 06:37:35 crc kubenswrapper[5017]: E0129 06:37:35.351495 5017 secret.go:188] Couldn't get secret openshift-route-controller-manager/serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 29 06:37:35 crc kubenswrapper[5017]: E0129 06:37:35.351592 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3641c614-3691-442a-95e4-13582cfd16d2-serving-cert podName:3641c614-3691-442a-95e4-13582cfd16d2 nodeName:}" failed. No retries permitted until 2026-01-29 06:37:35.851569616 +0000 UTC m=+142.226017226 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/3641c614-3691-442a-95e4-13582cfd16d2-serving-cert") pod "route-controller-manager-6576b87f9c-wmnch" (UID: "3641c614-3691-442a-95e4-13582cfd16d2") : failed to sync secret cache: timed out waiting for the condition Jan 29 06:37:35 crc kubenswrapper[5017]: E0129 06:37:35.351802 5017 secret.go:188] Couldn't get secret openshift-controller-manager/serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 29 06:37:35 crc kubenswrapper[5017]: E0129 06:37:35.351921 5017 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: failed to sync configmap cache: timed out waiting for the condition Jan 29 06:37:35 crc kubenswrapper[5017]: E0129 06:37:35.351924 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d21d2c22-5085-4712-a8d5-de95dc8a69b3-serving-cert podName:d21d2c22-5085-4712-a8d5-de95dc8a69b3 nodeName:}" failed. No retries permitted until 2026-01-29 06:37:35.851899835 +0000 UTC m=+142.226347445 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/d21d2c22-5085-4712-a8d5-de95dc8a69b3-serving-cert") pod "controller-manager-879f6c89f-5zqqn" (UID: "d21d2c22-5085-4712-a8d5-de95dc8a69b3") : failed to sync secret cache: timed out waiting for the condition Jan 29 06:37:35 crc kubenswrapper[5017]: E0129 06:37:35.352092 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d21d2c22-5085-4712-a8d5-de95dc8a69b3-config podName:d21d2c22-5085-4712-a8d5-de95dc8a69b3 nodeName:}" failed. No retries permitted until 2026-01-29 06:37:35.852080131 +0000 UTC m=+142.226527741 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/d21d2c22-5085-4712-a8d5-de95dc8a69b3-config") pod "controller-manager-879f6c89f-5zqqn" (UID: "d21d2c22-5085-4712-a8d5-de95dc8a69b3") : failed to sync configmap cache: timed out waiting for the condition Jan 29 06:37:35 crc kubenswrapper[5017]: I0129 06:37:35.352804 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 29 06:37:35 crc kubenswrapper[5017]: E0129 06:37:35.353039 5017 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Jan 29 06:37:35 crc kubenswrapper[5017]: E0129 06:37:35.353074 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/250918e3-cdc9-40cb-b390-6dbb5afe9d1f-config podName:250918e3-cdc9-40cb-b390-6dbb5afe9d1f nodeName:}" failed. No retries permitted until 2026-01-29 06:37:35.853065089 +0000 UTC m=+142.227512699 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/250918e3-cdc9-40cb-b390-6dbb5afe9d1f-config") pod "machine-api-operator-5694c8668f-cbp4z" (UID: "250918e3-cdc9-40cb-b390-6dbb5afe9d1f") : failed to sync configmap cache: timed out waiting for the condition Jan 29 06:37:35 crc kubenswrapper[5017]: E0129 06:37:35.353122 5017 configmap.go:193] Couldn't get configMap openshift-machine-api/machine-api-operator-images: failed to sync configmap cache: timed out waiting for the condition Jan 29 06:37:35 crc kubenswrapper[5017]: E0129 06:37:35.353145 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/250918e3-cdc9-40cb-b390-6dbb5afe9d1f-images podName:250918e3-cdc9-40cb-b390-6dbb5afe9d1f nodeName:}" failed. No retries permitted until 2026-01-29 06:37:35.853138631 +0000 UTC m=+142.227586241 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/250918e3-cdc9-40cb-b390-6dbb5afe9d1f-images") pod "machine-api-operator-5694c8668f-cbp4z" (UID: "250918e3-cdc9-40cb-b390-6dbb5afe9d1f") : failed to sync configmap cache: timed out waiting for the condition Jan 29 06:37:35 crc kubenswrapper[5017]: E0129 06:37:35.353165 5017 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: failed to sync configmap cache: timed out waiting for the condition Jan 29 06:37:35 crc kubenswrapper[5017]: E0129 06:37:35.353210 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d21d2c22-5085-4712-a8d5-de95dc8a69b3-proxy-ca-bundles podName:d21d2c22-5085-4712-a8d5-de95dc8a69b3 nodeName:}" failed. No retries permitted until 2026-01-29 06:37:35.853202483 +0000 UTC m=+142.227650093 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/d21d2c22-5085-4712-a8d5-de95dc8a69b3-proxy-ca-bundles") pod "controller-manager-879f6c89f-5zqqn" (UID: "d21d2c22-5085-4712-a8d5-de95dc8a69b3") : failed to sync configmap cache: timed out waiting for the condition Jan 29 06:37:35 crc kubenswrapper[5017]: E0129 06:37:35.354886 5017 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/audit-1: failed to sync configmap cache: timed out waiting for the condition Jan 29 06:37:35 crc kubenswrapper[5017]: E0129 06:37:35.354939 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3c5c683b-3a25-4237-9b2c-e2ff822e0080-audit-policies podName:3c5c683b-3a25-4237-9b2c-e2ff822e0080 nodeName:}" failed. No retries permitted until 2026-01-29 06:37:35.854930603 +0000 UTC m=+142.229378213 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/3c5c683b-3a25-4237-9b2c-e2ff822e0080-audit-policies") pod "apiserver-7bbb656c7d-zlflf" (UID: "3c5c683b-3a25-4237-9b2c-e2ff822e0080") : failed to sync configmap cache: timed out waiting for the condition Jan 29 06:37:35 crc kubenswrapper[5017]: E0129 06:37:35.355018 5017 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: failed to sync configmap cache: timed out waiting for the condition Jan 29 06:37:35 crc kubenswrapper[5017]: E0129 06:37:35.355059 5017 secret.go:188] Couldn't get secret openshift-oauth-apiserver/encryption-config-1: failed to sync secret cache: timed out waiting for the condition Jan 29 06:37:35 crc kubenswrapper[5017]: E0129 06:37:35.355118 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3641c614-3691-442a-95e4-13582cfd16d2-client-ca podName:3641c614-3691-442a-95e4-13582cfd16d2 nodeName:}" failed. No retries permitted until 2026-01-29 06:37:35.855096068 +0000 UTC m=+142.229543678 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/3641c614-3691-442a-95e4-13582cfd16d2-client-ca") pod "route-controller-manager-6576b87f9c-wmnch" (UID: "3641c614-3691-442a-95e4-13582cfd16d2") : failed to sync configmap cache: timed out waiting for the condition Jan 29 06:37:35 crc kubenswrapper[5017]: E0129 06:37:35.355172 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3c5c683b-3a25-4237-9b2c-e2ff822e0080-encryption-config podName:3c5c683b-3a25-4237-9b2c-e2ff822e0080 nodeName:}" failed. No retries permitted until 2026-01-29 06:37:35.855138499 +0000 UTC m=+142.229586109 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/3c5c683b-3a25-4237-9b2c-e2ff822e0080-encryption-config") pod "apiserver-7bbb656c7d-zlflf" (UID: "3c5c683b-3a25-4237-9b2c-e2ff822e0080") : failed to sync secret cache: timed out waiting for the condition Jan 29 06:37:35 crc kubenswrapper[5017]: E0129 06:37:35.361065 5017 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/config: failed to sync configmap cache: timed out waiting for the condition Jan 29 06:37:35 crc kubenswrapper[5017]: E0129 06:37:35.361130 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3641c614-3691-442a-95e4-13582cfd16d2-config podName:3641c614-3691-442a-95e4-13582cfd16d2 nodeName:}" failed. 
No retries permitted until 2026-01-29 06:37:35.861117321 +0000 UTC m=+142.235564931 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/3641c614-3691-442a-95e4-13582cfd16d2-config") pod "route-controller-manager-6576b87f9c-wmnch" (UID: "3641c614-3691-442a-95e4-13582cfd16d2") : failed to sync configmap cache: timed out waiting for the condition Jan 29 06:37:35 crc kubenswrapper[5017]: E0129 06:37:35.361163 5017 secret.go:188] Couldn't get secret openshift-config-operator/config-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 29 06:37:35 crc kubenswrapper[5017]: E0129 06:37:35.361193 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e55b317a-9140-4b76-8119-a7de3f95dd34-serving-cert podName:e55b317a-9140-4b76-8119-a7de3f95dd34 nodeName:}" failed. No retries permitted until 2026-01-29 06:37:35.861187183 +0000 UTC m=+142.235634783 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e55b317a-9140-4b76-8119-a7de3f95dd34-serving-cert") pod "openshift-config-operator-7777fb866f-n8qjs" (UID: "e55b317a-9140-4b76-8119-a7de3f95dd34") : failed to sync secret cache: timed out waiting for the condition Jan 29 06:37:35 crc kubenswrapper[5017]: E0129 06:37:35.361229 5017 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/etcd-serving-ca: failed to sync configmap cache: timed out waiting for the condition Jan 29 06:37:35 crc kubenswrapper[5017]: E0129 06:37:35.361254 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3c5c683b-3a25-4237-9b2c-e2ff822e0080-etcd-serving-ca podName:3c5c683b-3a25-4237-9b2c-e2ff822e0080 nodeName:}" failed. No retries permitted until 2026-01-29 06:37:35.861245464 +0000 UTC m=+142.235693074 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/3c5c683b-3a25-4237-9b2c-e2ff822e0080-etcd-serving-ca") pod "apiserver-7bbb656c7d-zlflf" (UID: "3c5c683b-3a25-4237-9b2c-e2ff822e0080") : failed to sync configmap cache: timed out waiting for the condition Jan 29 06:37:35 crc kubenswrapper[5017]: E0129 06:37:35.361269 5017 secret.go:188] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: failed to sync secret cache: timed out waiting for the condition Jan 29 06:37:35 crc kubenswrapper[5017]: E0129 06:37:35.361293 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2beee4dd-2230-42f4-a6a7-6d459f7564b5-samples-operator-tls podName:2beee4dd-2230-42f4-a6a7-6d459f7564b5 nodeName:}" failed. No retries permitted until 2026-01-29 06:37:35.861285586 +0000 UTC m=+142.235733196 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/2beee4dd-2230-42f4-a6a7-6d459f7564b5-samples-operator-tls") pod "cluster-samples-operator-665b6dd947-44ss4" (UID: "2beee4dd-2230-42f4-a6a7-6d459f7564b5") : failed to sync secret cache: timed out waiting for the condition Jan 29 06:37:35 crc kubenswrapper[5017]: E0129 06:37:35.362138 5017 secret.go:188] Couldn't get secret openshift-cluster-machine-approver/machine-approver-tls: failed to sync secret cache: timed out waiting for the condition Jan 29 06:37:35 crc kubenswrapper[5017]: E0129 06:37:35.362268 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7da4e010-91cf-4920-ae5c-530bb20b2ba2-machine-approver-tls podName:7da4e010-91cf-4920-ae5c-530bb20b2ba2 nodeName:}" failed. No retries permitted until 2026-01-29 06:37:35.862238423 +0000 UTC m=+142.236686033 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "machine-approver-tls" (UniqueName: "kubernetes.io/secret/7da4e010-91cf-4920-ae5c-530bb20b2ba2-machine-approver-tls") pod "machine-approver-56656f9798-4kcwn" (UID: "7da4e010-91cf-4920-ae5c-530bb20b2ba2") : failed to sync secret cache: timed out waiting for the condition Jan 29 06:37:35 crc kubenswrapper[5017]: I0129 06:37:35.368364 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-6tsc9"] Jan 29 06:37:35 crc kubenswrapper[5017]: I0129 06:37:35.379628 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/250918e3-cdc9-40cb-b390-6dbb5afe9d1f-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-cbp4z\" (UID: \"250918e3-cdc9-40cb-b390-6dbb5afe9d1f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-cbp4z" Jan 29 06:37:35 crc kubenswrapper[5017]: W0129 06:37:35.385629 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9889e9bc_b25e_4272_ad18_474c0d2fa2ce.slice/crio-8888c2cbf3f3032e35d4353de43b43c0e080b88321a545b7f68a5a0516d3e1e9 WatchSource:0}: Error finding container 8888c2cbf3f3032e35d4353de43b43c0e080b88321a545b7f68a5a0516d3e1e9: Status 404 returned error can't find the container with id 8888c2cbf3f3032e35d4353de43b43c0e080b88321a545b7f68a5a0516d3e1e9 Jan 29 06:37:35 crc kubenswrapper[5017]: I0129 06:37:35.408120 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 06:37:35 crc kubenswrapper[5017]: E0129 06:37:35.409006 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 06:37:35.908988047 +0000 UTC m=+142.283435657 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:37:35 crc kubenswrapper[5017]: I0129 06:37:35.417337 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 29 06:37:35 crc kubenswrapper[5017]: I0129 06:37:35.432507 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 29 06:37:35 crc kubenswrapper[5017]: I0129 06:37:35.432719 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 29 06:37:35 crc kubenswrapper[5017]: I0129 06:37:35.438177 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 29 06:37:35 crc kubenswrapper[5017]: I0129 06:37:35.469599 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 29 06:37:35 crc kubenswrapper[5017]: I0129 06:37:35.482259 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 29 06:37:35 crc kubenswrapper[5017]: I0129 06:37:35.493407 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 29 06:37:35 crc kubenswrapper[5017]: I0129 06:37:35.516100 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sckkt\" (UID: \"4d05265b-0d73-42c3-be6a-12198c0109de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sckkt" Jan 29 06:37:35 crc kubenswrapper[5017]: E0129 06:37:35.516633 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 06:37:36.016606962 +0000 UTC m=+142.391054632 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sckkt" (UID: "4d05265b-0d73-42c3-be6a-12198c0109de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:37:35 crc kubenswrapper[5017]: I0129 06:37:35.518145 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 29 06:37:35 crc kubenswrapper[5017]: I0129 06:37:35.532844 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 29 06:37:35 crc kubenswrapper[5017]: I0129 06:37:35.552438 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 29 06:37:35 crc kubenswrapper[5017]: I0129 06:37:35.574585 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 29 06:37:35 crc kubenswrapper[5017]: I0129 06:37:35.592575 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 29 06:37:35 crc kubenswrapper[5017]: I0129 06:37:35.612791 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 29 06:37:35 crc kubenswrapper[5017]: I0129 06:37:35.618605 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 06:37:35 crc kubenswrapper[5017]: E0129 06:37:35.619258 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 06:37:36.119239914 +0000 UTC m=+142.493687524 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:37:35 crc kubenswrapper[5017]: I0129 06:37:35.633880 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 29 06:37:35 crc kubenswrapper[5017]: I0129 06:37:35.653232 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 29 06:37:35 crc kubenswrapper[5017]: I0129 06:37:35.672945 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 29 06:37:35 crc kubenswrapper[5017]: I0129 06:37:35.720755 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sckkt\" (UID: \"4d05265b-0d73-42c3-be6a-12198c0109de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sckkt" Jan 29 06:37:35 crc kubenswrapper[5017]: E0129 06:37:35.721134 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 06:37:36.221114204 +0000 UTC m=+142.595561814 (durationBeforeRetry 500ms). 
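
[Editor's note] Every TearDown and MountDevice failure for pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 reduces to one cause: no CSI driver named kubevirt.io.hostpath-provisioner is registered with this kubelet yet, and the csi-hostpathplugin-4pmt6 pod that provides it is itself only being started further down this log. Drivers a kubelet has registered are reflected in the node's CSINode object, which this sketch reads; the node name "crc" comes from the log, the kubeconfig path is illustrative:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// The CSINode object lists drivers the node's kubelet has registered;
	// an empty list here matches "not found in the list of registered CSI
	// drivers" in the log.
	csiNode, err := clientset.StorageV1().CSINodes().Get(context.TODO(), "crc", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	found := false
	for _, d := range csiNode.Spec.Drivers {
		fmt.Println("registered:", d.Name)
		if d.Name == "kubevirt.io.hostpath-provisioner" {
			found = true
		}
	}
	fmt.Println("kubevirt.io.hostpath-provisioner registered:", found)
}
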
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sckkt" (UID: "4d05265b-0d73-42c3-be6a-12198c0109de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:37:35 crc kubenswrapper[5017]: I0129 06:37:35.729500 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hck9l\" (UniqueName: \"kubernetes.io/projected/0e3a4b4e-acd1-426e-8d48-f2555ced71ec-kube-api-access-hck9l\") pod \"downloads-7954f5f757-9nv7f\" (UID: \"0e3a4b4e-acd1-426e-8d48-f2555ced71ec\") " pod="openshift-console/downloads-7954f5f757-9nv7f" Jan 29 06:37:35 crc kubenswrapper[5017]: I0129 06:37:35.766974 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w957c\" (UniqueName: \"kubernetes.io/projected/3c5c683b-3a25-4237-9b2c-e2ff822e0080-kube-api-access-w957c\") pod \"apiserver-7bbb656c7d-zlflf\" (UID: \"3c5c683b-3a25-4237-9b2c-e2ff822e0080\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zlflf" Jan 29 06:37:35 crc kubenswrapper[5017]: I0129 06:37:35.807281 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6rpd\" (UniqueName: \"kubernetes.io/projected/57af192f-0ed1-4738-b13e-a534669cdcaf-kube-api-access-r6rpd\") pod \"etcd-operator-b45778765-92jgp\" (UID: \"57af192f-0ed1-4738-b13e-a534669cdcaf\") " pod="openshift-etcd-operator/etcd-operator-b45778765-92jgp" Jan 29 06:37:35 crc kubenswrapper[5017]: I0129 06:37:35.822089 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 06:37:35 crc kubenswrapper[5017]: E0129 06:37:35.822279 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 06:37:36.322250892 +0000 UTC m=+142.696698512 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:37:35 crc kubenswrapper[5017]: I0129 06:37:35.822820 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sckkt\" (UID: \"4d05265b-0d73-42c3-be6a-12198c0109de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sckkt" Jan 29 06:37:35 crc kubenswrapper[5017]: E0129 06:37:35.823388 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 06:37:36.323365645 +0000 UTC m=+142.697813255 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sckkt" (UID: "4d05265b-0d73-42c3-be6a-12198c0109de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:37:35 crc kubenswrapper[5017]: I0129 06:37:35.828121 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/64c950af-fa9a-4e12-bc92-05a7ec5864f8-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-rbhd6\" (UID: \"64c950af-fa9a-4e12-bc92-05a7ec5864f8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rbhd6" Jan 29 06:37:35 crc kubenswrapper[5017]: I0129 06:37:35.847827 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7tcmb\" (UniqueName: \"kubernetes.io/projected/4d05265b-0d73-42c3-be6a-12198c0109de-kube-api-access-7tcmb\") pod \"image-registry-697d97f7c8-sckkt\" (UID: \"4d05265b-0d73-42c3-be6a-12198c0109de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sckkt" Jan 29 06:37:35 crc kubenswrapper[5017]: I0129 06:37:35.872321 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5dsg\" (UniqueName: \"kubernetes.io/projected/23943ec6-beb6-4bef-b4b1-e5c840ab997b-kube-api-access-j5dsg\") pod \"console-f9d7485db-z5brc\" (UID: \"23943ec6-beb6-4bef-b4b1-e5c840ab997b\") " pod="openshift-console/console-f9d7485db-z5brc" Jan 29 06:37:35 crc kubenswrapper[5017]: I0129 06:37:35.883600 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-9nv7f" Jan 29 06:37:35 crc kubenswrapper[5017]: I0129 06:37:35.914686 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4d05265b-0d73-42c3-be6a-12198c0109de-bound-sa-token\") pod \"image-registry-697d97f7c8-sckkt\" (UID: \"4d05265b-0d73-42c3-be6a-12198c0109de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sckkt" Jan 29 06:37:35 crc kubenswrapper[5017]: I0129 06:37:35.924135 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 06:37:35 crc kubenswrapper[5017]: I0129 06:37:35.924350 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3641c614-3691-442a-95e4-13582cfd16d2-config\") pod \"route-controller-manager-6576b87f9c-wmnch\" (UID: \"3641c614-3691-442a-95e4-13582cfd16d2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wmnch" Jan 29 06:37:35 crc kubenswrapper[5017]: E0129 06:37:35.924404 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 06:37:36.424358639 +0000 UTC m=+142.798806419 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:37:35 crc kubenswrapper[5017]: I0129 06:37:35.924538 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e55b317a-9140-4b76-8119-a7de3f95dd34-serving-cert\") pod \"openshift-config-operator-7777fb866f-n8qjs\" (UID: \"e55b317a-9140-4b76-8119-a7de3f95dd34\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-n8qjs" Jan 29 06:37:35 crc kubenswrapper[5017]: I0129 06:37:35.924627 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/2beee4dd-2230-42f4-a6a7-6d459f7564b5-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-44ss4\" (UID: \"2beee4dd-2230-42f4-a6a7-6d459f7564b5\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-44ss4" Jan 29 06:37:35 crc kubenswrapper[5017]: I0129 06:37:35.924693 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/3c5c683b-3a25-4237-9b2c-e2ff822e0080-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-zlflf\" (UID: \"3c5c683b-3a25-4237-9b2c-e2ff822e0080\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zlflf" Jan 29 06:37:35 crc kubenswrapper[5017]: I0129 06:37:35.924742 5017 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/7da4e010-91cf-4920-ae5c-530bb20b2ba2-machine-approver-tls\") pod \"machine-approver-56656f9798-4kcwn\" (UID: \"7da4e010-91cf-4920-ae5c-530bb20b2ba2\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4kcwn" Jan 29 06:37:35 crc kubenswrapper[5017]: I0129 06:37:35.924827 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d21d2c22-5085-4712-a8d5-de95dc8a69b3-serving-cert\") pod \"controller-manager-879f6c89f-5zqqn\" (UID: \"d21d2c22-5085-4712-a8d5-de95dc8a69b3\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5zqqn" Jan 29 06:37:35 crc kubenswrapper[5017]: I0129 06:37:35.924851 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3641c614-3691-442a-95e4-13582cfd16d2-serving-cert\") pod \"route-controller-manager-6576b87f9c-wmnch\" (UID: \"3641c614-3691-442a-95e4-13582cfd16d2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wmnch" Jan 29 06:37:35 crc kubenswrapper[5017]: I0129 06:37:35.924881 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d21d2c22-5085-4712-a8d5-de95dc8a69b3-config\") pod \"controller-manager-879f6c89f-5zqqn\" (UID: \"d21d2c22-5085-4712-a8d5-de95dc8a69b3\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5zqqn" Jan 29 06:37:35 crc kubenswrapper[5017]: I0129 06:37:35.925058 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d21d2c22-5085-4712-a8d5-de95dc8a69b3-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-5zqqn\" (UID: \"d21d2c22-5085-4712-a8d5-de95dc8a69b3\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5zqqn" Jan 29 06:37:35 crc kubenswrapper[5017]: I0129 06:37:35.925088 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/250918e3-cdc9-40cb-b390-6dbb5afe9d1f-config\") pod \"machine-api-operator-5694c8668f-cbp4z\" (UID: \"250918e3-cdc9-40cb-b390-6dbb5afe9d1f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-cbp4z" Jan 29 06:37:35 crc kubenswrapper[5017]: I0129 06:37:35.925130 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/250918e3-cdc9-40cb-b390-6dbb5afe9d1f-images\") pod \"machine-api-operator-5694c8668f-cbp4z\" (UID: \"250918e3-cdc9-40cb-b390-6dbb5afe9d1f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-cbp4z" Jan 29 06:37:35 crc kubenswrapper[5017]: I0129 06:37:35.925181 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3c5c683b-3a25-4237-9b2c-e2ff822e0080-audit-policies\") pod \"apiserver-7bbb656c7d-zlflf\" (UID: \"3c5c683b-3a25-4237-9b2c-e2ff822e0080\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zlflf" Jan 29 06:37:35 crc kubenswrapper[5017]: I0129 06:37:35.925209 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/3c5c683b-3a25-4237-9b2c-e2ff822e0080-encryption-config\") pod \"apiserver-7bbb656c7d-zlflf\" (UID: \"3c5c683b-3a25-4237-9b2c-e2ff822e0080\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zlflf" Jan 29 06:37:35 crc kubenswrapper[5017]: I0129 06:37:35.925336 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sckkt\" (UID: \"4d05265b-0d73-42c3-be6a-12198c0109de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sckkt" Jan 29 06:37:35 crc kubenswrapper[5017]: I0129 06:37:35.925381 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3641c614-3691-442a-95e4-13582cfd16d2-client-ca\") pod \"route-controller-manager-6576b87f9c-wmnch\" (UID: \"3641c614-3691-442a-95e4-13582cfd16d2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wmnch" Jan 29 06:37:35 crc kubenswrapper[5017]: I0129 06:37:35.926376 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3641c614-3691-442a-95e4-13582cfd16d2-client-ca\") pod \"route-controller-manager-6576b87f9c-wmnch\" (UID: \"3641c614-3691-442a-95e4-13582cfd16d2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wmnch" Jan 29 06:37:35 crc kubenswrapper[5017]: I0129 06:37:35.927581 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d21d2c22-5085-4712-a8d5-de95dc8a69b3-config\") pod \"controller-manager-879f6c89f-5zqqn\" (UID: \"d21d2c22-5085-4712-a8d5-de95dc8a69b3\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5zqqn" Jan 29 06:37:35 crc kubenswrapper[5017]: E0129 06:37:35.928643 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 06:37:36.428621452 +0000 UTC m=+142.803069212 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sckkt" (UID: "4d05265b-0d73-42c3-be6a-12198c0109de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:37:35 crc kubenswrapper[5017]: I0129 06:37:35.929079 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/3c5c683b-3a25-4237-9b2c-e2ff822e0080-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-zlflf\" (UID: \"3c5c683b-3a25-4237-9b2c-e2ff822e0080\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zlflf" Jan 29 06:37:35 crc kubenswrapper[5017]: I0129 06:37:35.929260 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3c5c683b-3a25-4237-9b2c-e2ff822e0080-audit-policies\") pod \"apiserver-7bbb656c7d-zlflf\" (UID: \"3c5c683b-3a25-4237-9b2c-e2ff822e0080\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zlflf" Jan 29 06:37:35 crc kubenswrapper[5017]: I0129 06:37:35.929095 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d21d2c22-5085-4712-a8d5-de95dc8a69b3-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-5zqqn\" (UID: \"d21d2c22-5085-4712-a8d5-de95dc8a69b3\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5zqqn" Jan 29 06:37:35 crc kubenswrapper[5017]: I0129 06:37:35.929404 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/250918e3-cdc9-40cb-b390-6dbb5afe9d1f-images\") pod \"machine-api-operator-5694c8668f-cbp4z\" (UID: \"250918e3-cdc9-40cb-b390-6dbb5afe9d1f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-cbp4z" Jan 29 06:37:35 crc kubenswrapper[5017]: I0129 06:37:35.929734 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e55b317a-9140-4b76-8119-a7de3f95dd34-serving-cert\") pod \"openshift-config-operator-7777fb866f-n8qjs\" (UID: \"e55b317a-9140-4b76-8119-a7de3f95dd34\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-n8qjs" Jan 29 06:37:35 crc kubenswrapper[5017]: I0129 06:37:35.930119 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/2beee4dd-2230-42f4-a6a7-6d459f7564b5-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-44ss4\" (UID: \"2beee4dd-2230-42f4-a6a7-6d459f7564b5\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-44ss4" Jan 29 06:37:35 crc kubenswrapper[5017]: I0129 06:37:35.931189 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/3c5c683b-3a25-4237-9b2c-e2ff822e0080-encryption-config\") pod \"apiserver-7bbb656c7d-zlflf\" (UID: \"3c5c683b-3a25-4237-9b2c-e2ff822e0080\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zlflf" Jan 29 06:37:35 crc kubenswrapper[5017]: I0129 06:37:35.931543 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d21d2c22-5085-4712-a8d5-de95dc8a69b3-serving-cert\") 
pod \"controller-manager-879f6c89f-5zqqn\" (UID: \"d21d2c22-5085-4712-a8d5-de95dc8a69b3\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5zqqn" Jan 29 06:37:35 crc kubenswrapper[5017]: I0129 06:37:35.932825 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/7da4e010-91cf-4920-ae5c-530bb20b2ba2-machine-approver-tls\") pod \"machine-approver-56656f9798-4kcwn\" (UID: \"7da4e010-91cf-4920-ae5c-530bb20b2ba2\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4kcwn" Jan 29 06:37:35 crc kubenswrapper[5017]: I0129 06:37:35.933134 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3641c614-3691-442a-95e4-13582cfd16d2-serving-cert\") pod \"route-controller-manager-6576b87f9c-wmnch\" (UID: \"3641c614-3691-442a-95e4-13582cfd16d2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wmnch" Jan 29 06:37:35 crc kubenswrapper[5017]: I0129 06:37:35.949323 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxj4x\" (UniqueName: \"kubernetes.io/projected/112cd557-09e1-4599-ba7c-42a24407956f-kube-api-access-pxj4x\") pod \"dns-operator-744455d44c-plx6n\" (UID: \"112cd557-09e1-4599-ba7c-42a24407956f\") " pod="openshift-dns-operator/dns-operator-744455d44c-plx6n" Jan 29 06:37:35 crc kubenswrapper[5017]: I0129 06:37:35.992319 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2w5f\" (UniqueName: \"kubernetes.io/projected/64c950af-fa9a-4e12-bc92-05a7ec5864f8-kube-api-access-z2w5f\") pod \"cluster-image-registry-operator-dc59b4c8b-rbhd6\" (UID: \"64c950af-fa9a-4e12-bc92-05a7ec5864f8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rbhd6" Jan 29 06:37:35 crc kubenswrapper[5017]: I0129 06:37:35.993751 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 29 06:37:35 crc kubenswrapper[5017]: I0129 06:37:35.995492 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3641c614-3691-442a-95e4-13582cfd16d2-config\") pod \"route-controller-manager-6576b87f9c-wmnch\" (UID: \"3641c614-3691-442a-95e4-13582cfd16d2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wmnch" Jan 29 06:37:36 crc kubenswrapper[5017]: I0129 06:37:36.012650 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-92jgp" Jan 29 06:37:36 crc kubenswrapper[5017]: I0129 06:37:36.017632 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 29 06:37:36 crc kubenswrapper[5017]: I0129 06:37:36.023076 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rbhd6" Jan 29 06:37:36 crc kubenswrapper[5017]: I0129 06:37:36.026993 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 06:37:36 crc kubenswrapper[5017]: E0129 06:37:36.027677 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 06:37:36.52765164 +0000 UTC m=+142.902099250 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:37:36 crc kubenswrapper[5017]: I0129 06:37:36.033850 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9lr8\" (UniqueName: \"kubernetes.io/projected/e55b317a-9140-4b76-8119-a7de3f95dd34-kube-api-access-d9lr8\") pod \"openshift-config-operator-7777fb866f-n8qjs\" (UID: \"e55b317a-9140-4b76-8119-a7de3f95dd34\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-n8qjs" Jan 29 06:37:36 crc kubenswrapper[5017]: I0129 06:37:36.033881 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-z5brc" Jan 29 06:37:36 crc kubenswrapper[5017]: I0129 06:37:36.036050 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 29 06:37:36 crc kubenswrapper[5017]: I0129 06:37:36.046249 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdwj9\" (UniqueName: \"kubernetes.io/projected/2beee4dd-2230-42f4-a6a7-6d459f7564b5-kube-api-access-fdwj9\") pod \"cluster-samples-operator-665b6dd947-44ss4\" (UID: \"2beee4dd-2230-42f4-a6a7-6d459f7564b5\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-44ss4" Jan 29 06:37:36 crc kubenswrapper[5017]: I0129 06:37:36.062670 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-9nv7f"] Jan 29 06:37:36 crc kubenswrapper[5017]: I0129 06:37:36.072740 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7x5gc\" (UniqueName: \"kubernetes.io/projected/af26b99f-54ed-4730-91b2-a13130823631-kube-api-access-7x5gc\") pod \"olm-operator-6b444d44fb-l5qpt\" (UID: \"af26b99f-54ed-4730-91b2-a13130823631\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-l5qpt" Jan 29 06:37:36 crc kubenswrapper[5017]: I0129 06:37:36.104063 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dj8nt\" (UniqueName: \"kubernetes.io/projected/f3ba182d-28e7-4be4-8ae1-68d140e5e285-kube-api-access-dj8nt\") pod \"ingress-canary-nstbg\" (UID: \"f3ba182d-28e7-4be4-8ae1-68d140e5e285\") " pod="openshift-ingress-canary/ingress-canary-nstbg" Jan 29 06:37:36 crc kubenswrapper[5017]: I0129 06:37:36.104559 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-l5qpt" Jan 29 06:37:36 crc kubenswrapper[5017]: I0129 06:37:36.127561 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rh9m4\" (UniqueName: \"kubernetes.io/projected/bb138e77-1a05-4fd5-9fd7-47843ef6aa0b-kube-api-access-rh9m4\") pod \"marketplace-operator-79b997595-f8chq\" (UID: \"bb138e77-1a05-4fd5-9fd7-47843ef6aa0b\") " pod="openshift-marketplace/marketplace-operator-79b997595-f8chq" Jan 29 06:37:36 crc kubenswrapper[5017]: I0129 06:37:36.128820 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sckkt\" (UID: \"4d05265b-0d73-42c3-be6a-12198c0109de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sckkt" Jan 29 06:37:36 crc kubenswrapper[5017]: E0129 06:37:36.129407 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 06:37:36.629389486 +0000 UTC m=+143.003837106 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sckkt" (UID: "4d05265b-0d73-42c3-be6a-12198c0109de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:37:36 crc kubenswrapper[5017]: I0129 06:37:36.138816 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sw7ss\" (UniqueName: \"kubernetes.io/projected/81866aa9-0a71-4fb1-8354-2cf5089e9e19-kube-api-access-sw7ss\") pod \"service-ca-9c57cc56f-k9gnr\" (UID: \"81866aa9-0a71-4fb1-8354-2cf5089e9e19\") " pod="openshift-service-ca/service-ca-9c57cc56f-k9gnr" Jan 29 06:37:36 crc kubenswrapper[5017]: I0129 06:37:36.148226 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-44ss4" Jan 29 06:37:36 crc kubenswrapper[5017]: I0129 06:37:36.150674 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/baaa9247-4265-4db5-a2c8-eaa993fd0971-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-tl7pb\" (UID: \"baaa9247-4265-4db5-a2c8-eaa993fd0971\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tl7pb" Jan 29 06:37:36 crc kubenswrapper[5017]: I0129 06:37:36.154365 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zlflf" Jan 29 06:37:36 crc kubenswrapper[5017]: I0129 06:37:36.169191 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-n8qjs" Jan 29 06:37:36 crc kubenswrapper[5017]: I0129 06:37:36.171234 5017 request.go:700] Waited for 1.704370647s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/serviceaccounts/machine-config-operator/token Jan 29 06:37:36 crc kubenswrapper[5017]: I0129 06:37:36.176428 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5db2271c-c63c-4066-b91d-3f132768eb09-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-777x9\" (UID: \"5db2271c-c63c-4066-b91d-3f132768eb09\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-777x9" Jan 29 06:37:36 crc kubenswrapper[5017]: I0129 06:37:36.193251 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9fv4\" (UniqueName: \"kubernetes.io/projected/5df97138-ff6c-44b4-ac93-9136235d5888-kube-api-access-p9fv4\") pod \"machine-config-operator-74547568cd-z6lr7\" (UID: \"5df97138-ff6c-44b4-ac93-9136235d5888\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-z6lr7" Jan 29 06:37:36 crc kubenswrapper[5017]: I0129 06:37:36.202600 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-fbqdg" event={"ID":"84ff2015-09a3-41e9-be73-b2952f0aebcf","Type":"ContainerStarted","Data":"69e834b56e753b1f254eaba4683928f7bb5b861f1b39b3de3c386f38fb3e3ad4"} Jan 29 06:37:36 crc kubenswrapper[5017]: I0129 06:37:36.202656 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-fbqdg" event={"ID":"84ff2015-09a3-41e9-be73-b2952f0aebcf","Type":"ContainerStarted","Data":"2e705152000a6c1c9533399d806e5d1f08d1cd99b36d2a323a72902ca41dc64a"} Jan 29 06:37:36 crc kubenswrapper[5017]: I0129 06:37:36.203108 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-fbqdg" Jan 29 06:37:36 crc kubenswrapper[5017]: I0129 06:37:36.204811 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-f8chq" Jan 29 06:37:36 crc kubenswrapper[5017]: I0129 06:37:36.205458 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-9nv7f" event={"ID":"0e3a4b4e-acd1-426e-8d48-f2555ced71ec","Type":"ContainerStarted","Data":"7e6bf0690562d997f82a2e43e544a0530cb24d3b9a278155338666780b455ea3"} Jan 29 06:37:36 crc kubenswrapper[5017]: I0129 06:37:36.206794 5017 generic.go:334] "Generic (PLEG): container finished" podID="c4f1aae3-1ca6-43d6-9a83-be14081c28df" containerID="08ea6b4e4c45d1037707eb7dca7aa949258634e8f5b5211d6b0fee78e7f77c91" exitCode=0 Jan 29 06:37:36 crc kubenswrapper[5017]: I0129 06:37:36.206839 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-pd9qw" event={"ID":"c4f1aae3-1ca6-43d6-9a83-be14081c28df","Type":"ContainerDied","Data":"08ea6b4e4c45d1037707eb7dca7aa949258634e8f5b5211d6b0fee78e7f77c91"} Jan 29 06:37:36 crc kubenswrapper[5017]: I0129 06:37:36.206860 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-pd9qw" event={"ID":"c4f1aae3-1ca6-43d6-9a83-be14081c28df","Type":"ContainerStarted","Data":"a4182e5d06292a783e3c6fcd565dfbe4479944e7e36ca16e15cbf242245c70a5"} Jan 29 06:37:36 crc kubenswrapper[5017]: I0129 06:37:36.214618 5017 patch_prober.go:28] interesting pod/console-operator-58897d9998-fbqdg container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/readyz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Jan 29 06:37:36 crc kubenswrapper[5017]: I0129 06:37:36.214680 5017 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-fbqdg" podUID="84ff2015-09a3-41e9-be73-b2952f0aebcf" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/readyz\": dial tcp 10.217.0.12:8443: connect: connection refused" Jan 29 06:37:36 crc kubenswrapper[5017]: I0129 06:37:36.216033 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dddkc\" (UniqueName: \"kubernetes.io/projected/283f0328-3eac-4df0-8933-4e3f7d823fa9-kube-api-access-dddkc\") pod \"service-ca-operator-777779d784-5jhwr\" (UID: \"283f0328-3eac-4df0-8933-4e3f7d823fa9\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-5jhwr" Jan 29 06:37:36 crc kubenswrapper[5017]: I0129 06:37:36.217614 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-5jhwr" Jan 29 06:37:36 crc kubenswrapper[5017]: I0129 06:37:36.219676 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6tqcv" event={"ID":"9889e9bc-b25e-4272-ad18-474c0d2fa2ce","Type":"ContainerStarted","Data":"f0b5aaec0f2a19ab336dafc0abf644219054e55b58455f3fcb8335c0c94dbaed"} Jan 29 06:37:36 crc kubenswrapper[5017]: I0129 06:37:36.219741 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6tqcv" event={"ID":"9889e9bc-b25e-4272-ad18-474c0d2fa2ce","Type":"ContainerStarted","Data":"8888c2cbf3f3032e35d4353de43b43c0e080b88321a545b7f68a5a0516d3e1e9"} Jan 29 06:37:36 crc kubenswrapper[5017]: I0129 06:37:36.226355 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-6tsc9" event={"ID":"0d65eec8-87c6-4fd9-9cca-d784e7e25232","Type":"ContainerStarted","Data":"3e2703daf4772be8b909e89ca013912a507deaf0d0b5540a2a9a0c7a1933f19d"} Jan 29 06:37:36 crc kubenswrapper[5017]: I0129 06:37:36.226511 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-6tsc9" event={"ID":"0d65eec8-87c6-4fd9-9cca-d784e7e25232","Type":"ContainerStarted","Data":"756564afb86dda83efc583c3dde655fca25e7474df11873c42d21c1a28549515"} Jan 29 06:37:36 crc kubenswrapper[5017]: I0129 06:37:36.226531 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-6tsc9" event={"ID":"0d65eec8-87c6-4fd9-9cca-d784e7e25232","Type":"ContainerStarted","Data":"a5b0244b9fb7401f6dbb21cd97c3abc0313b0495353548e1cf312580fb19c8a3"} Jan 29 06:37:36 crc kubenswrapper[5017]: I0129 06:37:36.229771 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwj2z\" (UniqueName: \"kubernetes.io/projected/688f2e96-86c1-45db-a699-28269469f6f0-kube-api-access-rwj2z\") pod \"kube-storage-version-migrator-operator-b67b599dd-j7bhs\" (UID: \"688f2e96-86c1-45db-a699-28269469f6f0\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-j7bhs" Jan 29 06:37:36 crc kubenswrapper[5017]: I0129 06:37:36.231693 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-plx6n" Jan 29 06:37:36 crc kubenswrapper[5017]: I0129 06:37:36.234355 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 06:37:36 crc kubenswrapper[5017]: E0129 06:37:36.235260 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 06:37:36.73523632 +0000 UTC m=+143.109683930 (durationBeforeRetry 500ms). 
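
[Editor's note] The console-operator readiness failure above ("Get https://10.217.0.12:8443/readyz ... connection refused") is the ordinary cold-start case: the kubelet probed /readyz before the freshly started container was listening, so the TCP connect was refused and the pod stays unready until a later probe succeeds. A sketch of the probe shape those entries imply, built with the client-go API types; the timing values are illustrative, the real ones live in the operator's deployment:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	// HTTPS GET /readyz on 8443, matching the probe URL in the log.
	probe := &corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{
			HTTPGet: &corev1.HTTPGetAction{
				Path:   "/readyz",
				Port:   intstr.FromInt(8443),
				Scheme: corev1.URISchemeHTTPS,
			},
		},
		InitialDelaySeconds: 5,  // illustrative
		PeriodSeconds:       10, // illustrative
		FailureThreshold:    3,  // illustrative
	}
	fmt.Printf("readiness probe: %+v\n", probe)
}
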
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:37:36 crc kubenswrapper[5017]: I0129 06:37:36.238635 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sckkt\" (UID: \"4d05265b-0d73-42c3-be6a-12198c0109de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sckkt" Jan 29 06:37:36 crc kubenswrapper[5017]: I0129 06:37:36.239301 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ddv7s" event={"ID":"8d97f5ac-52f0-43fe-9a35-662481fe2c83","Type":"ContainerStarted","Data":"88351a9160910fdcc7473fef723c52b80c04654383a18b40f32f839d8ab8c5b3"} Jan 29 06:37:36 crc kubenswrapper[5017]: I0129 06:37:36.239373 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ddv7s" event={"ID":"8d97f5ac-52f0-43fe-9a35-662481fe2c83","Type":"ContainerStarted","Data":"0cb71accd44e4e0668e14cfaadf46d0a597af54c44da0107080b463bfde0abc2"} Jan 29 06:37:36 crc kubenswrapper[5017]: I0129 06:37:36.239384 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ddv7s" event={"ID":"8d97f5ac-52f0-43fe-9a35-662481fe2c83","Type":"ContainerStarted","Data":"e47d988097fba5b9049616f66cafcd223fb2d49ec8167f5af547160b413f619d"} Jan 29 06:37:36 crc kubenswrapper[5017]: E0129 06:37:36.239729 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 06:37:36.739697458 +0000 UTC m=+143.114145068 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sckkt" (UID: "4d05265b-0d73-42c3-be6a-12198c0109de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:37:36 crc kubenswrapper[5017]: I0129 06:37:36.257503 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-92jgp"] Jan 29 06:37:36 crc kubenswrapper[5017]: I0129 06:37:36.263845 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkshx\" (UniqueName: \"kubernetes.io/projected/7fbf9f0a-9025-4eca-b0d2-00f87df0c16e-kube-api-access-dkshx\") pod \"collect-profiles-29494470-2sw2z\" (UID: \"7fbf9f0a-9025-4eca-b0d2-00f87df0c16e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494470-2sw2z" Jan 29 06:37:36 crc kubenswrapper[5017]: I0129 06:37:36.265473 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-nstbg" Jan 29 06:37:36 crc kubenswrapper[5017]: I0129 06:37:36.268274 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrj9k\" (UniqueName: \"kubernetes.io/projected/927c767b-89e3-46c0-b5a1-02edc60a959c-kube-api-access-hrj9k\") pod \"authentication-operator-69f744f599-dhb9x\" (UID: \"927c767b-89e3-46c0-b5a1-02edc60a959c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dhb9x" Jan 29 06:37:36 crc kubenswrapper[5017]: W0129 06:37:36.284590 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod57af192f_0ed1_4738_b13e_a534669cdcaf.slice/crio-81200d4b644952ce82a9cb4022c5de0cea7b48b49c35fab5086c06b597e48e1d WatchSource:0}: Error finding container 81200d4b644952ce82a9cb4022c5de0cea7b48b49c35fab5086c06b597e48e1d: Status 404 returned error can't find the container with id 81200d4b644952ce82a9cb4022c5de0cea7b48b49c35fab5086c06b597e48e1d Jan 29 06:37:36 crc kubenswrapper[5017]: I0129 06:37:36.312600 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wpfkb\" (UniqueName: \"kubernetes.io/projected/6bd9b034-cfec-4194-9b45-318ed8625994-kube-api-access-wpfkb\") pod \"csi-hostpathplugin-4pmt6\" (UID: \"6bd9b034-cfec-4194-9b45-318ed8625994\") " pod="hostpath-provisioner/csi-hostpathplugin-4pmt6" Jan 29 06:37:36 crc kubenswrapper[5017]: I0129 06:37:36.352984 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-z6lr7" Jan 29 06:37:36 crc kubenswrapper[5017]: I0129 06:37:36.354423 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 06:37:36 crc kubenswrapper[5017]: E0129 06:37:36.356975 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 06:37:36.856889078 +0000 UTC m=+143.231336688 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:37:36 crc kubenswrapper[5017]: I0129 06:37:36.362089 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tl7pb" Jan 29 06:37:36 crc kubenswrapper[5017]: I0129 06:37:36.363182 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvhkp\" (UniqueName: \"kubernetes.io/projected/6e2073b6-4205-42ab-8282-8bb749d7ef3d-kube-api-access-wvhkp\") pod \"router-default-5444994796-rlfl6\" (UID: \"6e2073b6-4205-42ab-8282-8bb749d7ef3d\") " pod="openshift-ingress/router-default-5444994796-rlfl6" Jan 29 06:37:36 crc kubenswrapper[5017]: I0129 06:37:36.370598 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-j7bhs" Jan 29 06:37:36 crc kubenswrapper[5017]: I0129 06:37:36.373203 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75cfd\" (UniqueName: \"kubernetes.io/projected/9ce02582-da6e-4ba4-8c7b-a1e1132eff04-kube-api-access-75cfd\") pod \"multus-admission-controller-857f4d67dd-cdswz\" (UID: \"9ce02582-da6e-4ba4-8c7b-a1e1132eff04\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-cdswz" Jan 29 06:37:36 crc kubenswrapper[5017]: I0129 06:37:36.378329 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-dhb9x" Jan 29 06:37:36 crc kubenswrapper[5017]: I0129 06:37:36.392815 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7cfw\" (UniqueName: \"kubernetes.io/projected/c666e0f9-29fc-4765-8d2c-0d7b5e1545fd-kube-api-access-f7cfw\") pod \"machine-config-controller-84d6567774-lthxv\" (UID: \"c666e0f9-29fc-4765-8d2c-0d7b5e1545fd\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lthxv" Jan 29 06:37:36 crc kubenswrapper[5017]: I0129 06:37:36.395404 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-cdswz" Jan 29 06:37:36 crc kubenswrapper[5017]: I0129 06:37:36.397439 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f1a99fb5-b091-4517-8519-59e68cd2366e-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-k92pt\" (UID: \"f1a99fb5-b091-4517-8519-59e68cd2366e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k92pt" Jan 29 06:37:36 crc kubenswrapper[5017]: I0129 06:37:36.421386 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-777x9" Jan 29 06:37:36 crc kubenswrapper[5017]: I0129 06:37:36.430761 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-k9gnr" Jan 29 06:37:36 crc kubenswrapper[5017]: I0129 06:37:36.435298 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 29 06:37:36 crc kubenswrapper[5017]: I0129 06:37:36.439721 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pggs7\" (UniqueName: \"kubernetes.io/projected/0f115c11-0ba8-4204-8f68-291e35b90b09-kube-api-access-pggs7\") pod \"machine-config-server-jnctv\" (UID: \"0f115c11-0ba8-4204-8f68-291e35b90b09\") " pod="openshift-machine-config-operator/machine-config-server-jnctv" Jan 29 06:37:36 crc kubenswrapper[5017]: I0129 06:37:36.447761 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29cgg\" (UniqueName: \"kubernetes.io/projected/250918e3-cdc9-40cb-b390-6dbb5afe9d1f-kube-api-access-29cgg\") pod \"machine-api-operator-5694c8668f-cbp4z\" (UID: \"250918e3-cdc9-40cb-b390-6dbb5afe9d1f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-cbp4z" Jan 29 06:37:36 crc kubenswrapper[5017]: I0129 06:37:36.447798 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zx9pk\" (UniqueName: \"kubernetes.io/projected/5ab0da1e-4133-488b-9472-83bde1f3bd25-kube-api-access-zx9pk\") pod \"control-plane-machine-set-operator-78cbb6b69f-kvgh9\" (UID: \"5ab0da1e-4133-488b-9472-83bde1f3bd25\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kvgh9" Jan 29 06:37:36 crc kubenswrapper[5017]: I0129 06:37:36.448118 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494470-2sw2z" Jan 29 06:37:36 crc kubenswrapper[5017]: I0129 06:37:36.450770 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ccwx\" (UniqueName: \"kubernetes.io/projected/2c053f98-2b15-48b7-9cf8-b8cdb0b29d81-kube-api-access-2ccwx\") pod \"oauth-openshift-558db77b4-pws6m\" (UID: \"2c053f98-2b15-48b7-9cf8-b8cdb0b29d81\") " pod="openshift-authentication/oauth-openshift-558db77b4-pws6m" Jan 29 06:37:36 crc kubenswrapper[5017]: I0129 06:37:36.459914 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sckkt\" (UID: \"4d05265b-0d73-42c3-be6a-12198c0109de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sckkt" Jan 29 06:37:36 crc kubenswrapper[5017]: E0129 06:37:36.460397 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 06:37:36.960383105 +0000 UTC m=+143.334830715 (durationBeforeRetry 500ms). 
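
[Editor's note] The earlier "Waited for 1.704370647s due to client-side throttling, not priority and fairness" entry is client-go's own token-bucket rate limiter delaying a request before it ever reaches the apiserver, which is expected while a restarting kubelet issues many service-account token and mount-related requests at once. A sketch of where that limiter is configured; the QPS/Burst numbers are illustrative, not the kubelet's defaults:

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	// client-go throttles on the client side once sustained load exceeds
	// QPS and more than Burst requests arrive in a spike; the "Waited for
	// ... due to client-side throttling" message is this limiter speaking.
	config.QPS = 5
	config.Burst = 10
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	fmt.Println("client ready:", clientset != nil)
}
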
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sckkt" (UID: "4d05265b-0d73-42c3-be6a-12198c0109de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:37:36 crc kubenswrapper[5017]: I0129 06:37:36.490355 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lthxv" Jan 29 06:37:36 crc kubenswrapper[5017]: I0129 06:37:36.490563 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qg4kz\" (UniqueName: \"kubernetes.io/projected/16ace874-46a3-4668-a109-845a7d4a75e7-kube-api-access-qg4kz\") pod \"openshift-apiserver-operator-796bbdcf4f-ksz9z\" (UID: \"16ace874-46a3-4668-a109-845a7d4a75e7\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ksz9z" Jan 29 06:37:36 crc kubenswrapper[5017]: I0129 06:37:36.490644 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m24fd\" (UniqueName: \"kubernetes.io/projected/c94822af-25f0-4380-9193-1554ff518daf-kube-api-access-m24fd\") pod \"package-server-manager-789f6589d5-chjv7\" (UID: \"c94822af-25f0-4380-9193-1554ff518daf\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-chjv7" Jan 29 06:37:36 crc kubenswrapper[5017]: I0129 06:37:36.497356 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-chjv7" Jan 29 06:37:36 crc kubenswrapper[5017]: I0129 06:37:36.516996 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56djz\" (UniqueName: \"kubernetes.io/projected/2495db50-8f94-4c1e-a23c-c16a3ee22bba-kube-api-access-56djz\") pod \"dns-default-fhx55\" (UID: \"2495db50-8f94-4c1e-a23c-c16a3ee22bba\") " pod="openshift-dns/dns-default-fhx55" Jan 29 06:37:36 crc kubenswrapper[5017]: I0129 06:37:36.527565 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-jnctv" Jan 29 06:37:36 crc kubenswrapper[5017]: I0129 06:37:36.529139 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-z5brc"] Jan 29 06:37:36 crc kubenswrapper[5017]: I0129 06:37:36.533247 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-fhx55" Jan 29 06:37:36 crc kubenswrapper[5017]: I0129 06:37:36.536018 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rbhd6"] Jan 29 06:37:36 crc kubenswrapper[5017]: I0129 06:37:36.544803 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v84zx\" (UniqueName: \"kubernetes.io/projected/ea36b9ef-bda8-411e-9d60-79e9ed3b514b-kube-api-access-v84zx\") pod \"catalog-operator-68c6474976-8pr49\" (UID: \"ea36b9ef-bda8-411e-9d60-79e9ed3b514b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8pr49" Jan 29 06:37:36 crc kubenswrapper[5017]: I0129 06:37:36.555188 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 29 06:37:36 crc kubenswrapper[5017]: I0129 06:37:36.557295 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-4pmt6" Jan 29 06:37:36 crc kubenswrapper[5017]: I0129 06:37:36.560358 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 06:37:36 crc kubenswrapper[5017]: E0129 06:37:36.560569 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 06:37:37.060531376 +0000 UTC m=+143.434978986 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:37:36 crc kubenswrapper[5017]: I0129 06:37:36.560674 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sckkt\" (UID: \"4d05265b-0d73-42c3-be6a-12198c0109de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sckkt" Jan 29 06:37:36 crc kubenswrapper[5017]: E0129 06:37:36.561228 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 06:37:37.061216865 +0000 UTC m=+143.435664545 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sckkt" (UID: "4d05265b-0d73-42c3-be6a-12198c0109de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:37:36 crc kubenswrapper[5017]: I0129 06:37:36.571723 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjmq4\" (UniqueName: \"kubernetes.io/projected/547c672d-11fe-48d2-857e-27d7ac4e1fc9-kube-api-access-vjmq4\") pod \"packageserver-d55dfcdfc-j2rsb\" (UID: \"547c672d-11fe-48d2-857e-27d7ac4e1fc9\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j2rsb" Jan 29 06:37:36 crc kubenswrapper[5017]: I0129 06:37:36.579380 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 29 06:37:36 crc kubenswrapper[5017]: I0129 06:37:36.597255 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 29 06:37:36 crc kubenswrapper[5017]: I0129 06:37:36.616261 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 29 06:37:36 crc kubenswrapper[5017]: I0129 06:37:36.619323 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/250918e3-cdc9-40cb-b390-6dbb5afe9d1f-config\") pod \"machine-api-operator-5694c8668f-cbp4z\" (UID: \"250918e3-cdc9-40cb-b390-6dbb5afe9d1f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-cbp4z" Jan 29 06:37:36 crc kubenswrapper[5017]: I0129 06:37:36.631389 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9sbhv\" (UniqueName: \"kubernetes.io/projected/d21d2c22-5085-4712-a8d5-de95dc8a69b3-kube-api-access-9sbhv\") pod \"controller-manager-879f6c89f-5zqqn\" (UID: \"d21d2c22-5085-4712-a8d5-de95dc8a69b3\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5zqqn" Jan 29 06:37:36 crc kubenswrapper[5017]: I0129 06:37:36.637373 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4crh5\" (UniqueName: \"kubernetes.io/projected/3641c614-3691-442a-95e4-13582cfd16d2-kube-api-access-4crh5\") pod \"route-controller-manager-6576b87f9c-wmnch\" (UID: \"3641c614-3691-442a-95e4-13582cfd16d2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wmnch" Jan 29 06:37:36 crc kubenswrapper[5017]: I0129 06:37:36.638402 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 29 06:37:36 crc kubenswrapper[5017]: I0129 06:37:36.641865 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k92pt" Jan 29 06:37:36 crc kubenswrapper[5017]: I0129 06:37:36.653605 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-rlfl6" Jan 29 06:37:36 crc kubenswrapper[5017]: I0129 06:37:36.653948 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-cbp4z" Jan 29 06:37:36 crc kubenswrapper[5017]: I0129 06:37:36.661977 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 06:37:36 crc kubenswrapper[5017]: I0129 06:37:36.662021 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5m6g\" (UniqueName: \"kubernetes.io/projected/7da4e010-91cf-4920-ae5c-530bb20b2ba2-kube-api-access-p5m6g\") pod \"machine-approver-56656f9798-4kcwn\" (UID: \"7da4e010-91cf-4920-ae5c-530bb20b2ba2\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4kcwn" Jan 29 06:37:36 crc kubenswrapper[5017]: E0129 06:37:36.662733 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 06:37:37.162677063 +0000 UTC m=+143.537124673 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:37:36 crc kubenswrapper[5017]: I0129 06:37:36.662801 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sckkt\" (UID: \"4d05265b-0d73-42c3-be6a-12198c0109de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sckkt" Jan 29 06:37:36 crc kubenswrapper[5017]: E0129 06:37:36.663285 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 06:37:37.16326833 +0000 UTC m=+143.537715940 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sckkt" (UID: "4d05265b-0d73-42c3-be6a-12198c0109de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:37:36 crc kubenswrapper[5017]: I0129 06:37:36.686714 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-pws6m" Jan 29 06:37:36 crc kubenswrapper[5017]: I0129 06:37:36.709027 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-5zqqn" Jan 29 06:37:36 crc kubenswrapper[5017]: I0129 06:37:36.721866 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j2rsb" Jan 29 06:37:36 crc kubenswrapper[5017]: I0129 06:37:36.741751 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kvgh9" Jan 29 06:37:36 crc kubenswrapper[5017]: I0129 06:37:36.758820 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8pr49" Jan 29 06:37:36 crc kubenswrapper[5017]: I0129 06:37:36.763641 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 06:37:36 crc kubenswrapper[5017]: E0129 06:37:36.763851 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 06:37:37.263821722 +0000 UTC m=+143.638269332 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:37:36 crc kubenswrapper[5017]: I0129 06:37:36.764221 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sckkt\" (UID: \"4d05265b-0d73-42c3-be6a-12198c0109de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sckkt" Jan 29 06:37:36 crc kubenswrapper[5017]: E0129 06:37:36.764835 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 06:37:37.26481785 +0000 UTC m=+143.639265460 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sckkt" (UID: "4d05265b-0d73-42c3-be6a-12198c0109de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:37:36 crc kubenswrapper[5017]: I0129 06:37:36.769122 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ksz9z" Jan 29 06:37:36 crc kubenswrapper[5017]: I0129 06:37:36.866739 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 06:37:36 crc kubenswrapper[5017]: E0129 06:37:36.868045 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 06:37:37.368021678 +0000 UTC m=+143.742469288 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:37:36 crc kubenswrapper[5017]: I0129 06:37:36.907651 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wmnch" Jan 29 06:37:36 crc kubenswrapper[5017]: I0129 06:37:36.962298 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4kcwn" Jan 29 06:37:36 crc kubenswrapper[5017]: I0129 06:37:36.969294 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sckkt\" (UID: \"4d05265b-0d73-42c3-be6a-12198c0109de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sckkt" Jan 29 06:37:36 crc kubenswrapper[5017]: E0129 06:37:36.969751 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 06:37:37.469736434 +0000 UTC m=+143.844184044 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sckkt" (UID: "4d05265b-0d73-42c3-be6a-12198c0109de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:37:37 crc kubenswrapper[5017]: I0129 06:37:37.070209 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 06:37:37 crc kubenswrapper[5017]: E0129 06:37:37.070464 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 06:37:37.57043677 +0000 UTC m=+143.944884380 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:37:37 crc kubenswrapper[5017]: I0129 06:37:37.071047 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sckkt\" (UID: \"4d05265b-0d73-42c3-be6a-12198c0109de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sckkt" Jan 29 06:37:37 crc kubenswrapper[5017]: E0129 06:37:37.071703 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 06:37:37.571683376 +0000 UTC m=+143.946130976 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sckkt" (UID: "4d05265b-0d73-42c3-be6a-12198c0109de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:37:37 crc kubenswrapper[5017]: I0129 06:37:37.088924 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-n8qjs"] Jan 29 06:37:37 crc kubenswrapper[5017]: I0129 06:37:37.154242 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-44ss4"] Jan 29 06:37:37 crc kubenswrapper[5017]: I0129 06:37:37.165287 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-l5qpt"] Jan 29 06:37:37 crc kubenswrapper[5017]: I0129 06:37:37.173886 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 06:37:37 crc kubenswrapper[5017]: E0129 06:37:37.174299 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 06:37:37.674240115 +0000 UTC m=+144.048687725 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:37:37 crc kubenswrapper[5017]: I0129 06:37:37.174658 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sckkt\" (UID: \"4d05265b-0d73-42c3-be6a-12198c0109de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sckkt" Jan 29 06:37:37 crc kubenswrapper[5017]: E0129 06:37:37.177913 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 06:37:37.677898961 +0000 UTC m=+144.052346571 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sckkt" (UID: "4d05265b-0d73-42c3-be6a-12198c0109de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:37:37 crc kubenswrapper[5017]: I0129 06:37:37.266116 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4kcwn" event={"ID":"7da4e010-91cf-4920-ae5c-530bb20b2ba2","Type":"ContainerStarted","Data":"3aecc05dc61659395d00267178c222044a2e1d0ce4c34748675a48853cf3dfa5"} Jan 29 06:37:37 crc kubenswrapper[5017]: I0129 06:37:37.270269 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-92jgp" event={"ID":"57af192f-0ed1-4738-b13e-a534669cdcaf","Type":"ContainerStarted","Data":"fda819e69ace4c8f5bd103f7c512d8284901b5385cd84c3ad6a74a1e3e5df071"} Jan 29 06:37:37 crc kubenswrapper[5017]: I0129 06:37:37.270330 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-92jgp" event={"ID":"57af192f-0ed1-4738-b13e-a534669cdcaf","Type":"ContainerStarted","Data":"81200d4b644952ce82a9cb4022c5de0cea7b48b49c35fab5086c06b597e48e1d"} Jan 29 06:37:37 crc kubenswrapper[5017]: I0129 06:37:37.278446 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 06:37:37 crc kubenswrapper[5017]: E0129 06:37:37.278949 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 06:37:37.778913026 +0000 UTC m=+144.153360626 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:37:37 crc kubenswrapper[5017]: I0129 06:37:37.281254 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-9nv7f" event={"ID":"0e3a4b4e-acd1-426e-8d48-f2555ced71ec","Type":"ContainerStarted","Data":"f30103be3eb24137f408b9ece679838d51e938f20fa8e8cc952c149182765fec"} Jan 29 06:37:37 crc kubenswrapper[5017]: I0129 06:37:37.282347 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-9nv7f" Jan 29 06:37:37 crc kubenswrapper[5017]: I0129 06:37:37.284776 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-rlfl6" event={"ID":"6e2073b6-4205-42ab-8282-8bb749d7ef3d","Type":"ContainerStarted","Data":"169ff3f7eb94c73b1f9d7247149fc8bbc1ee4c5ac2ed44c35051923f7f6fcc11"} Jan 29 06:37:37 crc kubenswrapper[5017]: I0129 06:37:37.286476 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-z5brc" event={"ID":"23943ec6-beb6-4bef-b4b1-e5c840ab997b","Type":"ContainerStarted","Data":"def5d1b497268ffd07f193c337b524c3b9efb3a7e71c236747cf8ef8f99f38d5"} Jan 29 06:37:37 crc kubenswrapper[5017]: I0129 06:37:37.298290 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rbhd6" event={"ID":"64c950af-fa9a-4e12-bc92-05a7ec5864f8","Type":"ContainerStarted","Data":"1786faefda0c5092247e3a9c65ead7b079861edd70a596241e3a6cce5de5302e"} Jan 29 06:37:37 crc kubenswrapper[5017]: I0129 06:37:37.305143 5017 patch_prober.go:28] interesting pod/downloads-7954f5f757-9nv7f container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Jan 29 06:37:37 crc kubenswrapper[5017]: I0129 06:37:37.305227 5017 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-9nv7f" podUID="0e3a4b4e-acd1-426e-8d48-f2555ced71ec" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 29 06:37:37 crc kubenswrapper[5017]: I0129 06:37:37.309091 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-jnctv" event={"ID":"0f115c11-0ba8-4204-8f68-291e35b90b09","Type":"ContainerStarted","Data":"fb0a094ee762d985ecb80adb6179b0f0d9a688a031a4d2434d10f7145cc91fa6"} Jan 29 06:37:37 crc kubenswrapper[5017]: I0129 06:37:37.388454 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sckkt\" (UID: \"4d05265b-0d73-42c3-be6a-12198c0109de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sckkt" Jan 29 06:37:37 crc kubenswrapper[5017]: E0129 06:37:37.394347 5017 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 06:37:37.894323104 +0000 UTC m=+144.268770714 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sckkt" (UID: "4d05265b-0d73-42c3-be6a-12198c0109de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:37:37 crc kubenswrapper[5017]: I0129 06:37:37.432662 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-pd9qw" event={"ID":"c4f1aae3-1ca6-43d6-9a83-be14081c28df","Type":"ContainerStarted","Data":"81140a3eae4256e6270e28f11a3b613267bdf07277279e92eadbef57f9fd5c57"} Jan 29 06:37:37 crc kubenswrapper[5017]: I0129 06:37:37.446842 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-fbqdg" Jan 29 06:37:37 crc kubenswrapper[5017]: I0129 06:37:37.491507 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 06:37:37 crc kubenswrapper[5017]: E0129 06:37:37.494371 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 06:37:37.994340291 +0000 UTC m=+144.368788081 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:37:37 crc kubenswrapper[5017]: I0129 06:37:37.594733 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sckkt\" (UID: \"4d05265b-0d73-42c3-be6a-12198c0109de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sckkt" Jan 29 06:37:37 crc kubenswrapper[5017]: E0129 06:37:37.595632 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 06:37:38.095606833 +0000 UTC m=+144.470054623 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sckkt" (UID: "4d05265b-0d73-42c3-be6a-12198c0109de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:37:37 crc kubenswrapper[5017]: I0129 06:37:37.646640 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6tqcv" podStartSLOduration=122.64661762 podStartE2EDuration="2m2.64661762s" podCreationTimestamp="2026-01-29 06:35:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:37:37.610368627 +0000 UTC m=+143.984816237" watchObservedRunningTime="2026-01-29 06:37:37.64661762 +0000 UTC m=+144.021065220" Jan 29 06:37:37 crc kubenswrapper[5017]: I0129 06:37:37.690133 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-fbqdg" podStartSLOduration=122.690106071 podStartE2EDuration="2m2.690106071s" podCreationTimestamp="2026-01-29 06:35:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:37:37.684783268 +0000 UTC m=+144.059230878" watchObservedRunningTime="2026-01-29 06:37:37.690106071 +0000 UTC m=+144.064553681" Jan 29 06:37:37 crc kubenswrapper[5017]: E0129 06:37:37.695920 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 06:37:38.195889867 +0000 UTC m=+144.570337477 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:37:37 crc kubenswrapper[5017]: I0129 06:37:37.695788 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 06:37:37 crc kubenswrapper[5017]: I0129 06:37:37.696722 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sckkt\" (UID: \"4d05265b-0d73-42c3-be6a-12198c0109de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sckkt" Jan 29 06:37:37 crc kubenswrapper[5017]: E0129 06:37:37.697227 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 06:37:38.197215295 +0000 UTC m=+144.571662915 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sckkt" (UID: "4d05265b-0d73-42c3-be6a-12198c0109de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:37:37 crc kubenswrapper[5017]: I0129 06:37:37.798679 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 06:37:37 crc kubenswrapper[5017]: E0129 06:37:37.799444 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 06:37:38.299424894 +0000 UTC m=+144.673872504 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:37:37 crc kubenswrapper[5017]: I0129 06:37:37.820275 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-6tsc9" podStartSLOduration=122.820248623 podStartE2EDuration="2m2.820248623s" podCreationTimestamp="2026-01-29 06:35:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:37:37.818144284 +0000 UTC m=+144.192591914" watchObservedRunningTime="2026-01-29 06:37:37.820248623 +0000 UTC m=+144.194696243" Jan 29 06:37:37 crc kubenswrapper[5017]: I0129 06:37:37.856179 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-92jgp" podStartSLOduration=122.856149046 podStartE2EDuration="2m2.856149046s" podCreationTimestamp="2026-01-29 06:35:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:37:37.854482659 +0000 UTC m=+144.228930279" watchObservedRunningTime="2026-01-29 06:37:37.856149046 +0000 UTC m=+144.230596656" Jan 29 06:37:37 crc kubenswrapper[5017]: I0129 06:37:37.901742 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sckkt\" (UID: \"4d05265b-0d73-42c3-be6a-12198c0109de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sckkt" Jan 29 06:37:37 crc kubenswrapper[5017]: E0129 06:37:37.902183 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 06:37:38.40216534 +0000 UTC m=+144.776612950 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sckkt" (UID: "4d05265b-0d73-42c3-be6a-12198c0109de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:37:37 crc kubenswrapper[5017]: I0129 06:37:37.974473 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-9nv7f" podStartSLOduration=122.974453388 podStartE2EDuration="2m2.974453388s" podCreationTimestamp="2026-01-29 06:35:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:37:37.971549105 +0000 UTC m=+144.345996725" watchObservedRunningTime="2026-01-29 06:37:37.974453388 +0000 UTC m=+144.348900998" Jan 29 06:37:37 crc kubenswrapper[5017]: I0129 06:37:37.982800 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-5jhwr"] Jan 29 06:37:37 crc kubenswrapper[5017]: I0129 06:37:37.984882 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-f8chq"] Jan 29 06:37:37 crc kubenswrapper[5017]: I0129 06:37:37.986840 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-zlflf"] Jan 29 06:37:38 crc kubenswrapper[5017]: I0129 06:37:38.002776 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 06:37:38 crc kubenswrapper[5017]: E0129 06:37:38.003425 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 06:37:38.503404691 +0000 UTC m=+144.877852301 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:37:38 crc kubenswrapper[5017]: I0129 06:37:38.024209 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-nstbg"] Jan 29 06:37:38 crc kubenswrapper[5017]: I0129 06:37:38.029359 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-plx6n"] Jan 29 06:37:38 crc kubenswrapper[5017]: I0129 06:37:38.104772 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sckkt\" (UID: \"4d05265b-0d73-42c3-be6a-12198c0109de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sckkt" Jan 29 06:37:38 crc kubenswrapper[5017]: E0129 06:37:38.105354 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 06:37:38.605336623 +0000 UTC m=+144.979784233 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sckkt" (UID: "4d05265b-0d73-42c3-be6a-12198c0109de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:37:38 crc kubenswrapper[5017]: I0129 06:37:38.227545 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 06:37:38 crc kubenswrapper[5017]: E0129 06:37:38.227999 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 06:37:38.72797829 +0000 UTC m=+145.102425910 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:37:38 crc kubenswrapper[5017]: I0129 06:37:38.329331 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sckkt\" (UID: \"4d05265b-0d73-42c3-be6a-12198c0109de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sckkt" Jan 29 06:37:38 crc kubenswrapper[5017]: E0129 06:37:38.330506 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 06:37:38.830491238 +0000 UTC m=+145.204938848 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sckkt" (UID: "4d05265b-0d73-42c3-be6a-12198c0109de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:37:38 crc kubenswrapper[5017]: I0129 06:37:38.435475 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 06:37:38 crc kubenswrapper[5017]: E0129 06:37:38.435975 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 06:37:38.935939871 +0000 UTC m=+145.310387481 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:37:38 crc kubenswrapper[5017]: I0129 06:37:38.448047 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-rlfl6" event={"ID":"6e2073b6-4205-42ab-8282-8bb749d7ef3d","Type":"ContainerStarted","Data":"b2a78cb341a634b2dbaaf38248b9b3711efce93d86be7acf34fe59398bad79bf"} Jan 29 06:37:38 crc kubenswrapper[5017]: I0129 06:37:38.481190 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ddv7s" podStartSLOduration=123.481162571 podStartE2EDuration="2m3.481162571s" podCreationTimestamp="2026-01-29 06:35:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:37:38.478558956 +0000 UTC m=+144.853006566" watchObservedRunningTime="2026-01-29 06:37:38.481162571 +0000 UTC m=+144.855610181" Jan 29 06:37:38 crc kubenswrapper[5017]: I0129 06:37:38.497340 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-z5brc" event={"ID":"23943ec6-beb6-4bef-b4b1-e5c840ab997b","Type":"ContainerStarted","Data":"bc8bc4e1d17964d470bae0356199b19c7a840f60ce40153c01bda65cc38e80c6"} Jan 29 06:37:38 crc kubenswrapper[5017]: I0129 06:37:38.510110 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-f8chq" event={"ID":"bb138e77-1a05-4fd5-9fd7-47843ef6aa0b","Type":"ContainerStarted","Data":"952eba734f33057dd178459401dd31a1f4583986ac49d044660d6aa52495bc24"} Jan 29 06:37:38 crc kubenswrapper[5017]: I0129 06:37:38.536791 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sckkt\" (UID: \"4d05265b-0d73-42c3-be6a-12198c0109de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sckkt" Jan 29 06:37:38 crc kubenswrapper[5017]: E0129 06:37:38.537940 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 06:37:39.037924744 +0000 UTC m=+145.412372354 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sckkt" (UID: "4d05265b-0d73-42c3-be6a-12198c0109de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 06:37:38 crc kubenswrapper[5017]: I0129 06:37:38.555381 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-5jhwr" event={"ID":"283f0328-3eac-4df0-8933-4e3f7d823fa9","Type":"ContainerStarted","Data":"a797ff5490a95d3dbc2464f81c681d2a29cd4ca91e81b3e4f4d5656147287616"}
Jan 29 06:37:38 crc kubenswrapper[5017]: I0129 06:37:38.560060 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4kcwn" event={"ID":"7da4e010-91cf-4920-ae5c-530bb20b2ba2","Type":"ContainerStarted","Data":"ed3b665215382fd5affa1c160b5cd934d3af33e77c038e2d7d2225d5f6bed19a"}
Jan 29 06:37:38 crc kubenswrapper[5017]: I0129 06:37:38.570006 5017 generic.go:334] "Generic (PLEG): container finished" podID="e55b317a-9140-4b76-8119-a7de3f95dd34" containerID="8b8c437ed651115e231c40ed2e06431002cad12f6137250d02eff427c0202f27" exitCode=0
Jan 29 06:37:38 crc kubenswrapper[5017]: I0129 06:37:38.570106 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-n8qjs" event={"ID":"e55b317a-9140-4b76-8119-a7de3f95dd34","Type":"ContainerDied","Data":"8b8c437ed651115e231c40ed2e06431002cad12f6137250d02eff427c0202f27"}
Jan 29 06:37:38 crc kubenswrapper[5017]: I0129 06:37:38.570138 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-n8qjs" event={"ID":"e55b317a-9140-4b76-8119-a7de3f95dd34","Type":"ContainerStarted","Data":"975e9a6931d2c08b735fc1015d56a86eebf4d879abb69b0144a3265fb2380fdf"}
Jan 29 06:37:38 crc kubenswrapper[5017]: I0129 06:37:38.586230 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-44ss4" event={"ID":"2beee4dd-2230-42f4-a6a7-6d459f7564b5","Type":"ContainerStarted","Data":"dd7620006dc09b85616334776748ae8fb54fc08a3b7998207fd067025576c183"}
Jan 29 06:37:38 crc kubenswrapper[5017]: I0129 06:37:38.621448 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rbhd6" event={"ID":"64c950af-fa9a-4e12-bc92-05a7ec5864f8","Type":"ContainerStarted","Data":"09478c1718c46af002c0ddbe82726dd80d9fb98310c5aacf5d86e3131d8136df"}
Jan 29 06:37:38 crc kubenswrapper[5017]: I0129 06:37:38.642669 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 06:37:38 crc kubenswrapper[5017]: E0129 06:37:38.644134 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 06:37:39.144114728 +0000 UTC m=+145.518562338 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 06:37:38 crc kubenswrapper[5017]: I0129 06:37:38.657060 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-rlfl6"
Jan 29 06:37:38 crc kubenswrapper[5017]: I0129 06:37:38.665257 5017 patch_prober.go:28] interesting pod/router-default-5444994796-rlfl6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 29 06:37:38 crc kubenswrapper[5017]: [-]has-synced failed: reason withheld
Jan 29 06:37:38 crc kubenswrapper[5017]: [+]process-running ok
Jan 29 06:37:38 crc kubenswrapper[5017]: healthz check failed
Jan 29 06:37:38 crc kubenswrapper[5017]: I0129 06:37:38.665335 5017 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rlfl6" podUID="6e2073b6-4205-42ab-8282-8bb749d7ef3d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 29 06:37:38 crc kubenswrapper[5017]: I0129 06:37:38.674492 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-z5brc" podStartSLOduration=123.67447276 podStartE2EDuration="2m3.67447276s" podCreationTimestamp="2026-01-29 06:35:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:37:38.673974676 +0000 UTC m=+145.048422286" watchObservedRunningTime="2026-01-29 06:37:38.67447276 +0000 UTC m=+145.048920360"
Jan 29 06:37:38 crc kubenswrapper[5017]: I0129 06:37:38.679078 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-pd9qw" event={"ID":"c4f1aae3-1ca6-43d6-9a83-be14081c28df","Type":"ContainerStarted","Data":"316b8a4669243ca7754372d65cfc963c01d8d82f8146514b5b17bb3cba01b2de"}
Jan 29 06:37:38 crc kubenswrapper[5017]: I0129 06:37:38.687097 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-z6lr7"]
Jan 29 06:37:38 crc kubenswrapper[5017]: I0129 06:37:38.698673 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-plx6n" event={"ID":"112cd557-09e1-4599-ba7c-42a24407956f","Type":"ContainerStarted","Data":"c33fa973b75676b01e6120f0ca68f31e46a9b68bf75c1fc34f106fe300e14542"}
Jan 29 06:37:38 crc kubenswrapper[5017]: I0129 06:37:38.736654 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zlflf" event={"ID":"3c5c683b-3a25-4237-9b2c-e2ff822e0080","Type":"ContainerStarted","Data":"727d5fc016af854efa32412ab3725ae5ed9ffc71119f67f3d59d75faa4ce231e"}
Jan 29 06:37:38 crc kubenswrapper[5017]: I0129 06:37:38.736873 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-rlfl6" podStartSLOduration=123.736844175 podStartE2EDuration="2m3.736844175s" podCreationTimestamp="2026-01-29 06:35:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:37:38.736846025 +0000 UTC m=+145.111293635" watchObservedRunningTime="2026-01-29 06:37:38.736844175 +0000 UTC m=+145.111291785"
Jan 29 06:37:38 crc kubenswrapper[5017]: I0129 06:37:38.747153 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-l5qpt" event={"ID":"af26b99f-54ed-4730-91b2-a13130823631","Type":"ContainerStarted","Data":"5eb8a7448becd3ecfaa82c5ff630d8512c7968167bd917aab1d8d5bf69024cff"}
Jan 29 06:37:38 crc kubenswrapper[5017]: I0129 06:37:38.747228 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-l5qpt" event={"ID":"af26b99f-54ed-4730-91b2-a13130823631","Type":"ContainerStarted","Data":"6d36e610dfbe3ed09815d49d7cb7c455821ca777a01b7233ea38324a3d702ecc"}
Jan 29 06:37:38 crc kubenswrapper[5017]: I0129 06:37:38.748771 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-l5qpt"
Jan 29 06:37:38 crc kubenswrapper[5017]: I0129 06:37:38.754122 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sckkt\" (UID: \"4d05265b-0d73-42c3-be6a-12198c0109de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sckkt"
Jan 29 06:37:38 crc kubenswrapper[5017]: E0129 06:37:38.757482 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 06:37:39.257465048 +0000 UTC m=+145.631912648 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sckkt" (UID: "4d05265b-0d73-42c3-be6a-12198c0109de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 06:37:38 crc kubenswrapper[5017]: I0129 06:37:38.768552 5017 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-l5qpt container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" start-of-body=
Jan 29 06:37:38 crc kubenswrapper[5017]: I0129 06:37:38.768694 5017 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-l5qpt" podUID="af26b99f-54ed-4730-91b2-a13130823631" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused"
Jan 29 06:37:38 crc kubenswrapper[5017]: I0129 06:37:38.795008 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k92pt"]
Jan 29 06:37:38 crc kubenswrapper[5017]: I0129 06:37:38.809502 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-lthxv"]
Jan 29 06:37:38 crc kubenswrapper[5017]: I0129 06:37:38.817827 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-nstbg" event={"ID":"f3ba182d-28e7-4be4-8ae1-68d140e5e285","Type":"ContainerStarted","Data":"014462db3904cf6fce263572874d87893adb656cac6d2cae2ecba5568d4f7625"}
Jan 29 06:37:38 crc kubenswrapper[5017]: I0129 06:37:38.833773 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-k9gnr"]
Jan 29 06:37:38 crc kubenswrapper[5017]: I0129 06:37:38.856365 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-jnctv" event={"ID":"0f115c11-0ba8-4204-8f68-291e35b90b09","Type":"ContainerStarted","Data":"903437c91f565024d12fdc2c1dbe779bd0f954aeb850d8c3768eeda73bf354a7"}
Jan 29 06:37:38 crc kubenswrapper[5017]: I0129 06:37:38.856569 5017 patch_prober.go:28] interesting pod/downloads-7954f5f757-9nv7f container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body=
Jan 29 06:37:38 crc kubenswrapper[5017]: I0129 06:37:38.856667 5017 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-9nv7f" podUID="0e3a4b4e-acd1-426e-8d48-f2555ced71ec" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused"
Jan 29 06:37:38 crc kubenswrapper[5017]: E0129 06:37:38.856678 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 06:37:39.356657901 +0000 UTC m=+145.731105511 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 06:37:38 crc kubenswrapper[5017]: I0129 06:37:38.856588 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 06:37:38 crc kubenswrapper[5017]: I0129 06:37:38.857205 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sckkt\" (UID: \"4d05265b-0d73-42c3-be6a-12198c0109de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sckkt"
Jan 29 06:37:38 crc kubenswrapper[5017]: E0129 06:37:38.857995 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 06:37:39.357930526 +0000 UTC m=+145.732378136 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sckkt" (UID: "4d05265b-0d73-42c3-be6a-12198c0109de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 06:37:38 crc kubenswrapper[5017]: I0129 06:37:38.865033 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rbhd6" podStartSLOduration=123.86500151 podStartE2EDuration="2m3.86500151s" podCreationTimestamp="2026-01-29 06:35:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:37:38.841042641 +0000 UTC m=+145.215490251" watchObservedRunningTime="2026-01-29 06:37:38.86500151 +0000 UTC m=+145.239449120"
Jan 29 06:37:38 crc kubenswrapper[5017]: I0129 06:37:38.888563 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-dhb9x"]
Jan 29 06:37:38 crc kubenswrapper[5017]: W0129 06:37:38.941337 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod927c767b_89e3_46c0_b5a1_02edc60a959c.slice/crio-8f0f9f10747990a8a57b6b57c9d595c6f6de510f06c20d5b604d53d518965692 WatchSource:0}: Error finding container 8f0f9f10747990a8a57b6b57c9d595c6f6de510f06c20d5b604d53d518965692: Status 404 returned error can't find the container with id 8f0f9f10747990a8a57b6b57c9d595c6f6de510f06c20d5b604d53d518965692
Jan 29 06:37:38 crc kubenswrapper[5017]: I0129 06:37:38.945214 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-pd9qw" podStartSLOduration=123.945189227 podStartE2EDuration="2m3.945189227s" podCreationTimestamp="2026-01-29 06:35:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:37:38.918654773 +0000 UTC m=+145.293102393" watchObservedRunningTime="2026-01-29 06:37:38.945189227 +0000 UTC m=+145.319636837"
Jan 29 06:37:38 crc kubenswrapper[5017]: I0129 06:37:38.948813 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-j7bhs"]
Jan 29 06:37:38 crc kubenswrapper[5017]: I0129 06:37:38.958641 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 06:37:38 crc kubenswrapper[5017]: E0129 06:37:38.961805 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 06:37:39.461773394 +0000 UTC m=+145.836221004 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 06:37:38 crc kubenswrapper[5017]: I0129 06:37:38.987295 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-777x9"]
Jan 29 06:37:39 crc kubenswrapper[5017]: I0129 06:37:39.001665 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-cdswz"]
Jan 29 06:37:39 crc kubenswrapper[5017]: I0129 06:37:39.001743 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-chjv7"]
Jan 29 06:37:39 crc kubenswrapper[5017]: I0129 06:37:39.002769 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-4pmt6"]
Jan 29 06:37:39 crc kubenswrapper[5017]: I0129 06:37:39.006621 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-fhx55"]
Jan 29 06:37:39 crc kubenswrapper[5017]: I0129 06:37:39.015993 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tl7pb"]
Jan 29 06:37:39 crc kubenswrapper[5017]: I0129 06:37:39.016553 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-l5qpt" podStartSLOduration=124.016541109 podStartE2EDuration="2m4.016541109s" podCreationTimestamp="2026-01-29 06:35:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:37:38.961694441 +0000 UTC m=+145.336142051" watchObservedRunningTime="2026-01-29 06:37:39.016541109 +0000 UTC m=+145.390988719"
Jan 29 06:37:39 crc kubenswrapper[5017]: I0129 06:37:39.016723 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-wmnch"]
Jan 29 06:37:39 crc kubenswrapper[5017]: I0129 06:37:39.027939 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-cbp4z"]
Jan 29 06:37:39 crc kubenswrapper[5017]: I0129 06:37:39.033732 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j2rsb"]
Jan 29 06:37:39 crc kubenswrapper[5017]: I0129 06:37:39.035182 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-jnctv" podStartSLOduration=6.035159804 podStartE2EDuration="6.035159804s" podCreationTimestamp="2026-01-29 06:37:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:37:38.987795362 +0000 UTC m=+145.362242972" watchObservedRunningTime="2026-01-29 06:37:39.035159804 +0000 UTC m=+145.409607414"
Jan 29 06:37:39 crc kubenswrapper[5017]: I0129 06:37:39.061353 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sckkt\" (UID: \"4d05265b-0d73-42c3-be6a-12198c0109de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sckkt"
Jan 29 06:37:39 crc kubenswrapper[5017]: E0129 06:37:39.062209 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 06:37:39.562187911 +0000 UTC m=+145.936635521 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sckkt" (UID: "4d05265b-0d73-42c3-be6a-12198c0109de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 06:37:39 crc kubenswrapper[5017]: I0129 06:37:39.093608 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-5zqqn"]
Jan 29 06:37:39 crc kubenswrapper[5017]: I0129 06:37:39.121907 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494470-2sw2z"]
Jan 29 06:37:39 crc kubenswrapper[5017]: I0129 06:37:39.144365 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8pr49"]
Jan 29 06:37:39 crc kubenswrapper[5017]: I0129 06:37:39.144408 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ksz9z"]
Jan 29 06:37:39 crc kubenswrapper[5017]: I0129 06:37:39.165096 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 06:37:39 crc kubenswrapper[5017]: E0129 06:37:39.165540 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 06:37:39.665497152 +0000 UTC m=+146.039944762 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 06:37:39 crc kubenswrapper[5017]: I0129 06:37:39.175199 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-pws6m"]
Jan 29 06:37:39 crc kubenswrapper[5017]: I0129 06:37:39.243027 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kvgh9"]
Jan 29 06:37:39 crc kubenswrapper[5017]: I0129 06:37:39.267065 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sckkt\" (UID: \"4d05265b-0d73-42c3-be6a-12198c0109de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sckkt"
Jan 29 06:37:39 crc kubenswrapper[5017]: E0129 06:37:39.267509 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 06:37:39.767492476 +0000 UTC m=+146.141940076 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sckkt" (UID: "4d05265b-0d73-42c3-be6a-12198c0109de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 06:37:39 crc kubenswrapper[5017]: I0129 06:37:39.373324 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 06:37:39 crc kubenswrapper[5017]: E0129 06:37:39.374203 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 06:37:39.874183294 +0000 UTC m=+146.248630904 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 06:37:39 crc kubenswrapper[5017]: I0129 06:37:39.476029 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sckkt\" (UID: \"4d05265b-0d73-42c3-be6a-12198c0109de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sckkt"
Jan 29 06:37:39 crc kubenswrapper[5017]: E0129 06:37:39.476420 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 06:37:39.976403064 +0000 UTC m=+146.350850674 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sckkt" (UID: "4d05265b-0d73-42c3-be6a-12198c0109de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 06:37:39 crc kubenswrapper[5017]: I0129 06:37:39.577645 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 06:37:39 crc kubenswrapper[5017]: E0129 06:37:39.581143 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 06:37:40.077939694 +0000 UTC m=+146.452387304 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 06:37:39 crc kubenswrapper[5017]: I0129 06:37:39.611503 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sckkt\" (UID: \"4d05265b-0d73-42c3-be6a-12198c0109de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sckkt"
Jan 29 06:37:39 crc kubenswrapper[5017]: E0129 06:37:39.618121 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 06:37:40.118098038 +0000 UTC m=+146.492545648 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sckkt" (UID: "4d05265b-0d73-42c3-be6a-12198c0109de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 06:37:39 crc kubenswrapper[5017]: I0129 06:37:39.662220 5017 patch_prober.go:28] interesting pod/router-default-5444994796-rlfl6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 29 06:37:39 crc kubenswrapper[5017]: [-]has-synced failed: reason withheld
Jan 29 06:37:39 crc kubenswrapper[5017]: [+]process-running ok
Jan 29 06:37:39 crc kubenswrapper[5017]: healthz check failed
Jan 29 06:37:39 crc kubenswrapper[5017]: I0129 06:37:39.662298 5017 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rlfl6" podUID="6e2073b6-4205-42ab-8282-8bb749d7ef3d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 29 06:37:39 crc kubenswrapper[5017]: I0129 06:37:39.729033 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 06:37:39 crc kubenswrapper[5017]: E0129 06:37:39.729517 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 06:37:40.229490532 +0000 UTC m=+146.603938142 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 06:37:39 crc kubenswrapper[5017]: I0129 06:37:39.830475 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sckkt\" (UID: \"4d05265b-0d73-42c3-be6a-12198c0109de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sckkt"
Jan 29 06:37:39 crc kubenswrapper[5017]: E0129 06:37:39.831585 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 06:37:40.331568518 +0000 UTC m=+146.706016128 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sckkt" (UID: "4d05265b-0d73-42c3-be6a-12198c0109de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 06:37:39 crc kubenswrapper[5017]: I0129 06:37:39.881910 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-plx6n" event={"ID":"112cd557-09e1-4599-ba7c-42a24407956f","Type":"ContainerStarted","Data":"4b8627e9b59618768e533b5b8adbeb59bfab048b81f52eeb103683c94be231dd"}
Jan 29 06:37:39 crc kubenswrapper[5017]: I0129 06:37:39.884234 5017 generic.go:334] "Generic (PLEG): container finished" podID="3c5c683b-3a25-4237-9b2c-e2ff822e0080" containerID="7eeed032eec4f9e85c7ad23d36bbabacd77de445234b09afaa2e42b163ca273a" exitCode=0
Jan 29 06:37:39 crc kubenswrapper[5017]: I0129 06:37:39.884300 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zlflf" event={"ID":"3c5c683b-3a25-4237-9b2c-e2ff822e0080","Type":"ContainerDied","Data":"7eeed032eec4f9e85c7ad23d36bbabacd77de445234b09afaa2e42b163ca273a"}
Jan 29 06:37:39 crc kubenswrapper[5017]: I0129 06:37:39.895356 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j2rsb" event={"ID":"547c672d-11fe-48d2-857e-27d7ac4e1fc9","Type":"ContainerStarted","Data":"3ed0f5d7195c8a6eb1749c675e2893dc7d5e875c82c33ee7733caca8d8d153f6"}
Jan 29 06:37:39 crc kubenswrapper[5017]: I0129 06:37:39.916485 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k92pt" event={"ID":"f1a99fb5-b091-4517-8519-59e68cd2366e","Type":"ContainerStarted","Data":"bf11ce27a6a4cf8b69a3fb0b4a44d0c9bf6e77635109796225c52d5f4d6c1a46"}
Jan 29 06:37:39 crc kubenswrapper[5017]: I0129 06:37:39.932187 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 06:37:39 crc kubenswrapper[5017]: E0129 06:37:39.934903 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 06:37:40.434871059 +0000 UTC m=+146.809318659 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 06:37:39 crc kubenswrapper[5017]: I0129 06:37:39.959788 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-cdswz" event={"ID":"9ce02582-da6e-4ba4-8c7b-a1e1132eff04","Type":"ContainerStarted","Data":"b2eee4a9c9bbe24e80fe70c1248d642620a6517df4137b4587920be52d8ac536"}
Jan 29 06:37:39 crc kubenswrapper[5017]: I0129 06:37:39.963980 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wmnch" event={"ID":"3641c614-3691-442a-95e4-13582cfd16d2","Type":"ContainerStarted","Data":"6215bef46e327fd950b2ee707c27eaed5f5060258802d409b3e0e891d9b98f28"}
Jan 29 06:37:39 crc kubenswrapper[5017]: I0129 06:37:39.964042 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wmnch" event={"ID":"3641c614-3691-442a-95e4-13582cfd16d2","Type":"ContainerStarted","Data":"620d8a4577172d8cf653e70941a47d1724fbbbd222dd5fdb960a02ed3731da5b"}
Jan 29 06:37:39 crc kubenswrapper[5017]: I0129 06:37:39.964185 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wmnch"
Jan 29 06:37:39 crc kubenswrapper[5017]: I0129 06:37:39.979719 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-n8qjs" event={"ID":"e55b317a-9140-4b76-8119-a7de3f95dd34","Type":"ContainerStarted","Data":"080f7fc350989bd36a6d758a44b705a10ddb42a73c804948a8fcf03145fa8f78"}
Jan 29 06:37:39 crc kubenswrapper[5017]: I0129 06:37:39.979910 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-n8qjs"
Jan 29 06:37:39 crc kubenswrapper[5017]: I0129 06:37:39.987947 5017 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-wmnch container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body=
Jan 29 06:37:39 crc kubenswrapper[5017]: I0129 06:37:39.988021 5017 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wmnch" podUID="3641c614-3691-442a-95e4-13582cfd16d2" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused"
Jan 29 06:37:40 crc kubenswrapper[5017]: I0129 06:37:40.008648 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wmnch" podStartSLOduration=124.00862668 podStartE2EDuration="2m4.00862668s" podCreationTimestamp="2026-01-29 06:35:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:37:40.0075683 +0000 UTC m=+146.382015910" watchObservedRunningTime="2026-01-29 06:37:40.00862668 +0000 UTC m=+146.383074290"
Jan 29 06:37:40 crc kubenswrapper[5017]: I0129 06:37:40.022044 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-pd9qw"
Jan 29 06:37:40 crc kubenswrapper[5017]: I0129 06:37:40.022369 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-pd9qw"
Jan 29 06:37:40 crc kubenswrapper[5017]: I0129 06:37:40.027048 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-nstbg" event={"ID":"f3ba182d-28e7-4be4-8ae1-68d140e5e285","Type":"ContainerStarted","Data":"c65b0b70e6cc8beead832da222f4ce314ec34b2aa9d6bce23e4cfe3b9b7c5a18"}
Jan 29 06:37:40 crc kubenswrapper[5017]: I0129 06:37:40.042157 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sckkt\" (UID: \"4d05265b-0d73-42c3-be6a-12198c0109de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sckkt"
Jan 29 06:37:40 crc kubenswrapper[5017]: E0129 06:37:40.042574 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 06:37:40.542553395 +0000 UTC m=+146.917001205 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sckkt" (UID: "4d05265b-0d73-42c3-be6a-12198c0109de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 06:37:40 crc kubenswrapper[5017]: I0129 06:37:40.043152 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ksz9z" event={"ID":"16ace874-46a3-4668-a109-845a7d4a75e7","Type":"ContainerStarted","Data":"e0370f2bb90e6f66fb8b4a2c5a57fcf28dc4fdb8c2d60896840bc6c6e5cde8db"}
Jan 29 06:37:40 crc kubenswrapper[5017]: I0129 06:37:40.049710 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-n8qjs" podStartSLOduration=125.049678461 podStartE2EDuration="2m5.049678461s" podCreationTimestamp="2026-01-29 06:35:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:37:40.043328538 +0000 UTC m=+146.417776158" watchObservedRunningTime="2026-01-29 06:37:40.049678461 +0000 UTC m=+146.424126071"
Jan 29 06:37:40 crc kubenswrapper[5017]: I0129 06:37:40.065878 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-cbp4z" event={"ID":"250918e3-cdc9-40cb-b390-6dbb5afe9d1f","Type":"ContainerStarted","Data":"cc15ce561e04603bf534b408581c1d9bfabeb1c1bad114ab2d0e24f6787d06a3"}
Jan 29 06:37:40 crc kubenswrapper[5017]: I0129 06:37:40.077943 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-777x9" event={"ID":"5db2271c-c63c-4066-b91d-3f132768eb09","Type":"ContainerStarted","Data":"f203983c5be002f38eac40200912b96ed6c68530110ccde284e6a89be7e68390"}
Jan 29 06:37:40 crc kubenswrapper[5017]: I0129 06:37:40.085782 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-nstbg" podStartSLOduration=7.085761058 podStartE2EDuration="7.085761058s" podCreationTimestamp="2026-01-29 06:37:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:37:40.083454602 +0000 UTC m=+146.457902212" watchObservedRunningTime="2026-01-29 06:37:40.085761058 +0000 UTC m=+146.460208668"
Jan 29 06:37:40 crc kubenswrapper[5017]: I0129 06:37:40.113071 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kvgh9" event={"ID":"5ab0da1e-4133-488b-9472-83bde1f3bd25","Type":"ContainerStarted","Data":"f1e28156416b41c296fde0c1be650c583bd1db8a0684a5cd728317da78351b36"}
Jan 29 06:37:40 crc kubenswrapper[5017]: I0129 06:37:40.113509 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-777x9" podStartSLOduration=125.113485346 podStartE2EDuration="2m5.113485346s" podCreationTimestamp="2026-01-29 06:35:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:37:40.112404874 +0000 UTC m=+146.486852494" watchObservedRunningTime="2026-01-29 06:37:40.113485346 +0000 UTC m=+146.487932956"
Jan 29 06:37:40 crc kubenswrapper[5017]: I0129 06:37:40.127461 5017 csr.go:261] certificate signing request csr-r99xw is approved, waiting to be issued
Jan 29 06:37:40 crc kubenswrapper[5017]: I0129 06:37:40.127518 5017 patch_prober.go:28] interesting pod/apiserver-76f77b778f-pd9qw container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Jan 29 06:37:40 crc kubenswrapper[5017]: [+]log ok
Jan 29 06:37:40 crc kubenswrapper[5017]: [+]etcd ok
Jan 29 06:37:40 crc kubenswrapper[5017]: [+]poststarthook/start-apiserver-admission-initializer ok
Jan 29 06:37:40 crc kubenswrapper[5017]: [+]poststarthook/generic-apiserver-start-informers ok
Jan 29 06:37:40 crc kubenswrapper[5017]: [+]poststarthook/max-in-flight-filter ok
Jan 29 06:37:40 crc kubenswrapper[5017]: [+]poststarthook/storage-object-count-tracker-hook ok
Jan 29 06:37:40 crc kubenswrapper[5017]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Jan 29 06:37:40 crc kubenswrapper[5017]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld
Jan 29 06:37:40 crc kubenswrapper[5017]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld
Jan 29 06:37:40 crc kubenswrapper[5017]: [+]poststarthook/project.openshift.io-projectcache ok
Jan 29 06:37:40 crc kubenswrapper[5017]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Jan 29 06:37:40 crc kubenswrapper[5017]: [+]poststarthook/openshift.io-startinformers ok
Jan 29 06:37:40 crc kubenswrapper[5017]: [+]poststarthook/openshift.io-restmapperupdater ok
Jan 29 06:37:40 crc kubenswrapper[5017]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Jan 29 06:37:40 crc kubenswrapper[5017]: livez check failed
Jan 29 06:37:40 crc kubenswrapper[5017]: I0129 06:37:40.127588 5017 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-pd9qw" podUID="c4f1aae3-1ca6-43d6-9a83-be14081c28df" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 29 06:37:40 crc kubenswrapper[5017]: I0129 06:37:40.131947 5017 csr.go:257] certificate signing request csr-r99xw is issued
Jan 29 06:37:40 crc kubenswrapper[5017]: I0129 06:37:40.135627 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-5zqqn" event={"ID":"d21d2c22-5085-4712-a8d5-de95dc8a69b3","Type":"ContainerStarted","Data":"a70dfab99b595854774ec19d1f688f6433ce3743bf735f78cc3b3c8f33435ab6"}
Jan 29 06:37:40 crc kubenswrapper[5017]: I0129 06:37:40.140617 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-dhb9x" event={"ID":"927c767b-89e3-46c0-b5a1-02edc60a959c","Type":"ContainerStarted","Data":"02b4bda9446da92e1f50598464d3b76229df8bfee092213e1c83cc731768a807"}
Jan 29 06:37:40 crc kubenswrapper[5017]: I0129 06:37:40.140680 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-dhb9x" event={"ID":"927c767b-89e3-46c0-b5a1-02edc60a959c","Type":"ContainerStarted","Data":"8f0f9f10747990a8a57b6b57c9d595c6f6de510f06c20d5b604d53d518965692"}
Jan 29 06:37:40 crc kubenswrapper[5017]: I0129 06:37:40.145523 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kvgh9" podStartSLOduration=125.145496556 podStartE2EDuration="2m5.145496556s" podCreationTimestamp="2026-01-29 06:35:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:37:40.142083249 +0000 UTC m=+146.516530849" watchObservedRunningTime="2026-01-29 06:37:40.145496556 +0000 UTC m=+146.519944166"
Jan 29 06:37:40 crc kubenswrapper[5017]: I0129 06:37:40.147017 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 06:37:40 crc kubenswrapper[5017]: E0129 06:37:40.148452 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 06:37:40.648432371 +0000 UTC m=+147.022879981 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 06:37:40 crc kubenswrapper[5017]: I0129 06:37:40.154526 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-pws6m" event={"ID":"2c053f98-2b15-48b7-9cf8-b8cdb0b29d81","Type":"ContainerStarted","Data":"f2f89508122e2fcb48241ca8b0251d71128f24d5d561f89148b40e603a333799"}
Jan 29 06:37:40 crc kubenswrapper[5017]: I0129 06:37:40.190772 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-dhb9x" podStartSLOduration=125.190734777 podStartE2EDuration="2m5.190734777s" podCreationTimestamp="2026-01-29 06:35:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:37:40.171833384 +0000 UTC m=+146.546281004" watchObservedRunningTime="2026-01-29 06:37:40.190734777 +0000 UTC m=+146.565182387"
Jan 29 06:37:40 crc kubenswrapper[5017]: I0129 06:37:40.195334 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-j7bhs" event={"ID":"688f2e96-86c1-45db-a699-28269469f6f0","Type":"ContainerStarted","Data":"9bdd8757be931d64a943969c701b0fd2663cdcf8919588b9486be2ce792f878c"}
Jan 29 06:37:40 crc kubenswrapper[5017]: I0129 06:37:40.228451 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-5jhwr" event={"ID":"283f0328-3eac-4df0-8933-4e3f7d823fa9","Type":"ContainerStarted","Data":"fd3737dcd321f637bd61ce203541b7752121ef104d7d4cd35dc4b87d583cf6c7"}
Jan 29 06:37:40 crc kubenswrapper[5017]: I0129 06:37:40.244246 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-j7bhs" podStartSLOduration=125.244221646 podStartE2EDuration="2m5.244221646s" podCreationTimestamp="2026-01-29 06:35:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:37:40.235472094 +0000 UTC m=+146.609919704" watchObservedRunningTime="2026-01-29 06:37:40.244221646 +0000 UTC m=+146.618669256"
Jan 29 06:37:40 crc kubenswrapper[5017]: I0129 06:37:40.248540 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sckkt\" (UID: \"4d05265b-0d73-42c3-be6a-12198c0109de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sckkt"
Jan 29 06:37:40 crc kubenswrapper[5017]: I0129 06:37:40.248917 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lthxv" event={"ID":"c666e0f9-29fc-4765-8d2c-0d7b5e1545fd","Type":"ContainerStarted","Data":"0c056a47fa1deddab5c9daec1943b355312b897dbe96098b24b6b025e768af50"}
Jan 29 06:37:40 crc kubenswrapper[5017]: I0129 06:37:40.248990 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lthxv" event={"ID":"c666e0f9-29fc-4765-8d2c-0d7b5e1545fd","Type":"ContainerStarted","Data":"d26a892650080a2a8632131fcdda769823d9b01c3db8aec21738ce22e233c2af"}
Jan 29 06:37:40 crc kubenswrapper[5017]: E0129 06:37:40.250148 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 06:37:40.750130866 +0000 UTC m=+147.124578566 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sckkt" (UID: "4d05265b-0d73-42c3-be6a-12198c0109de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 06:37:40 crc kubenswrapper[5017]: I0129 06:37:40.269667 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tl7pb" event={"ID":"baaa9247-4265-4db5-a2c8-eaa993fd0971","Type":"ContainerStarted","Data":"678758a04279a4b1bf96877da035ed0e8e0abff126ca3cf69eb2be6d496b5f9d"}
Jan 29 06:37:40 crc kubenswrapper[5017]: I0129 06:37:40.336180 5017 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-f8chq container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" start-of-body=
Jan 29 06:37:40 crc kubenswrapper[5017]: I0129 06:37:40.336276 5017 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-f8chq" podUID="bb138e77-1a05-4fd5-9fd7-47843ef6aa0b" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused"
Jan 29 06:37:40 crc kubenswrapper[5017]: I0129 06:37:40.341364 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-f8chq" event={"ID":"bb138e77-1a05-4fd5-9fd7-47843ef6aa0b","Type":"ContainerStarted","Data":"5a1eb558e705ef8e04d3029e75069a9a50821329543d1fe91036da43a3b2812a"}
Jan 29 06:37:40 crc kubenswrapper[5017]: I0129 06:37:40.341469 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494470-2sw2z" event={"ID":"7fbf9f0a-9025-4eca-b0d2-00f87df0c16e","Type":"ContainerStarted","Data":"c0b1761800414aa48ed7f86fade4fdfa4a56701f011a1788e821f89bfd9bc12f"}
Jan 29 06:37:40 crc kubenswrapper[5017]: I0129 06:37:40.341493 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-f8chq"
Jan 29 06:37:40 crc kubenswrapper[5017]: I0129 06:37:40.350448 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 06:37:40 crc kubenswrapper[5017]: E0129 06:37:40.351871 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 06:37:40.85182836 +0000 UTC m=+147.226276110 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 06:37:40 crc kubenswrapper[5017]: I0129 06:37:40.355693 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-f8chq" podStartSLOduration=124.355675761 podStartE2EDuration="2m4.355675761s" podCreationTimestamp="2026-01-29 06:35:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:37:40.355319781 +0000 UTC m=+146.729767391" watchObservedRunningTime="2026-01-29 06:37:40.355675761 +0000 UTC m=+146.730123371"
Jan 29 06:37:40 crc kubenswrapper[5017]: I0129 06:37:40.357594 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-5jhwr" podStartSLOduration=124.357587425 podStartE2EDuration="2m4.357587425s" podCreationTimestamp="2026-01-29 06:35:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:37:40.270079869 +0000 UTC m=+146.644527479" watchObservedRunningTime="2026-01-29 06:37:40.357587425 +0000 UTC m=+146.732035035"
Jan 29 06:37:40 crc kubenswrapper[5017]: I0129 06:37:40.390486 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-k9gnr" event={"ID":"81866aa9-0a71-4fb1-8354-2cf5089e9e19","Type":"ContainerStarted","Data":"dee5dc9b8dd5456b3eeed4dc662e4cbd379a32ab8d759d75ed1b1b14fba42e69"}
Jan 29 06:37:40 crc kubenswrapper[5017]: I0129 06:37:40.390534 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-k9gnr" event={"ID":"81866aa9-0a71-4fb1-8354-2cf5089e9e19","Type":"ContainerStarted","Data":"ba7dfdf696592eb174d85287d0ae0f975de8c47026f7aa24194a0d78e27f37d9"}
Jan 29 06:37:40 crc kubenswrapper[5017]: I0129 06:37:40.406404 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8pr49" event={"ID":"ea36b9ef-bda8-411e-9d60-79e9ed3b514b","Type":"ContainerStarted","Data":"6bf8fcca7936c2f4541a19b7fdc46e6bca21347a6b655ee58cd77d3603529008"}
Jan 29 06:37:40 crc kubenswrapper[5017]: I0129 06:37:40.458429 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sckkt\" (UID: \"4d05265b-0d73-42c3-be6a-12198c0109de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sckkt"
Jan 29 06:37:40 crc kubenswrapper[5017]: I0129 06:37:40.460315 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-4pmt6" event={"ID":"6bd9b034-cfec-4194-9b45-318ed8625994","Type":"ContainerStarted","Data":"273b28542882b904675c4621600a710e72517df79a6cdad8931718b5c28d5edb"}
Jan 29 06:37:40 crc kubenswrapper[5017]: E0129 06:37:40.476057 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 06:37:40.976033262 +0000 UTC m=+147.350480872 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sckkt" (UID: "4d05265b-0d73-42c3-be6a-12198c0109de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 06:37:40 crc kubenswrapper[5017]: I0129 06:37:40.532570 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-z6lr7" event={"ID":"5df97138-ff6c-44b4-ac93-9136235d5888","Type":"ContainerStarted","Data":"835a80c392ff5ba160e72d035f65ff9a0884f11507736eb1ba21f10c0cc2a2f6"}
Jan 29 06:37:40 crc kubenswrapper[5017]: I0129 06:37:40.532638 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-z6lr7" event={"ID":"5df97138-ff6c-44b4-ac93-9136235d5888","Type":"ContainerStarted","Data":"de55bbc30a8a24fc1ebab4f8d4adb14677077e0e9a6428a5f1b98392a775f5ed"}
Jan 29 06:37:40 crc kubenswrapper[5017]: I0129 06:37:40.543223 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4kcwn" event={"ID":"7da4e010-91cf-4920-ae5c-530bb20b2ba2","Type":"ContainerStarted","Data":"b86e94f586e4abdc85c7a3ae03d026ce2e7dbd817e17ef6a919d8ffbc7e91f36"}
Jan 29 06:37:40 crc kubenswrapper[5017]: I0129 06:37:40.560720 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 06:37:40 crc kubenswrapper[5017]: E0129 06:37:40.562542 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 06:37:41.06251317 +0000 UTC m=+147.436960780 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 06:37:40 crc kubenswrapper[5017]: I0129 06:37:40.569341 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-k9gnr" podStartSLOduration=124.569323595 podStartE2EDuration="2m4.569323595s" podCreationTimestamp="2026-01-29 06:35:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:37:40.423726658 +0000 UTC m=+146.798174268" watchObservedRunningTime="2026-01-29 06:37:40.569323595 +0000 UTC m=+146.943771205"
Jan 29 06:37:40 crc kubenswrapper[5017]: I0129 06:37:40.570154 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4kcwn" podStartSLOduration=126.570148739 podStartE2EDuration="2m6.570148739s" podCreationTimestamp="2026-01-29 06:35:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:37:40.568265565 +0000 UTC m=+146.942713165" watchObservedRunningTime="2026-01-29 06:37:40.570148739 +0000 UTC m=+146.944596349"
Jan 29 06:37:40 crc kubenswrapper[5017]: I0129 06:37:40.594214 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-44ss4" event={"ID":"2beee4dd-2230-42f4-a6a7-6d459f7564b5","Type":"ContainerStarted","Data":"767af4d1ff67e579c2932199cb625cdab5b574ced1da892f821dfd00cabecdbb"}
Jan 29 06:37:40 crc kubenswrapper[5017]: I0129 06:37:40.594275 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-44ss4" event={"ID":"2beee4dd-2230-42f4-a6a7-6d459f7564b5","Type":"ContainerStarted","Data":"63b7815170276bbc2df038cd34be59b6be17844406c24ca773d57422ed560777"}
Jan 29 06:37:40 crc kubenswrapper[5017]: I0129 06:37:40.602846 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-fhx55" event={"ID":"2495db50-8f94-4c1e-a23c-c16a3ee22bba","Type":"ContainerStarted","Data":"e0959e64ee97054b9c673e090c72e8a5420f0c059bf4f57439f824ba72ca5aa2"}
Jan 29 06:37:40 crc kubenswrapper[5017]: I0129 06:37:40.606984 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-chjv7" event={"ID":"c94822af-25f0-4380-9193-1554ff518daf","Type":"ContainerStarted","Data":"86c6263db0bc92eb9f209bc26cf542b8747593d0a1c89b2144efd9d810dba4d3"}
Jan 29 06:37:40 crc kubenswrapper[5017]: I0129 06:37:40.607815 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-chjv7" event={"ID":"c94822af-25f0-4380-9193-1554ff518daf","Type":"ContainerStarted","Data":"157f1e2bcb2f8f0961fea7617973d722d9d371e7aef4400d7a7ac25fc13b5439"}
Jan 29 06:37:40 crc kubenswrapper[5017]: I0129 06:37:40.610931 5017 patch_prober.go:28] interesting pod/downloads-7954f5f757-9nv7f container/download-server namespace/openshift-console: Readiness probe
status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Jan 29 06:37:40 crc kubenswrapper[5017]: I0129 06:37:40.611010 5017 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-9nv7f" podUID="0e3a4b4e-acd1-426e-8d48-f2555ced71ec" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 29 06:37:40 crc kubenswrapper[5017]: I0129 06:37:40.618336 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-44ss4" podStartSLOduration=126.618310864 podStartE2EDuration="2m6.618310864s" podCreationTimestamp="2026-01-29 06:35:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:37:40.615946466 +0000 UTC m=+146.990394066" watchObservedRunningTime="2026-01-29 06:37:40.618310864 +0000 UTC m=+146.992758474" Jan 29 06:37:40 crc kubenswrapper[5017]: I0129 06:37:40.621881 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-l5qpt" Jan 29 06:37:40 crc kubenswrapper[5017]: I0129 06:37:40.663078 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sckkt\" (UID: \"4d05265b-0d73-42c3-be6a-12198c0109de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sckkt" Jan 29 06:37:40 crc kubenswrapper[5017]: E0129 06:37:40.663543 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 06:37:41.163527735 +0000 UTC m=+147.537975335 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sckkt" (UID: "4d05265b-0d73-42c3-be6a-12198c0109de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:37:40 crc kubenswrapper[5017]: I0129 06:37:40.665307 5017 patch_prober.go:28] interesting pod/router-default-5444994796-rlfl6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 06:37:40 crc kubenswrapper[5017]: [-]has-synced failed: reason withheld Jan 29 06:37:40 crc kubenswrapper[5017]: [+]process-running ok Jan 29 06:37:40 crc kubenswrapper[5017]: healthz check failed Jan 29 06:37:40 crc kubenswrapper[5017]: I0129 06:37:40.665400 5017 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rlfl6" podUID="6e2073b6-4205-42ab-8282-8bb749d7ef3d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 06:37:40 crc kubenswrapper[5017]: I0129 06:37:40.768135 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 06:37:40 crc kubenswrapper[5017]: E0129 06:37:40.774405 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 06:37:41.274375172 +0000 UTC m=+147.648822982 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:37:40 crc kubenswrapper[5017]: I0129 06:37:40.875312 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sckkt\" (UID: \"4d05265b-0d73-42c3-be6a-12198c0109de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sckkt" Jan 29 06:37:40 crc kubenswrapper[5017]: E0129 06:37:40.876174 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 06:37:41.37616096 +0000 UTC m=+147.750608570 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sckkt" (UID: "4d05265b-0d73-42c3-be6a-12198c0109de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:37:40 crc kubenswrapper[5017]: I0129 06:37:40.976632 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 06:37:40 crc kubenswrapper[5017]: E0129 06:37:40.977036 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 06:37:41.47701813 +0000 UTC m=+147.851465740 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:37:41 crc kubenswrapper[5017]: I0129 06:37:41.082990 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sckkt\" (UID: \"4d05265b-0d73-42c3-be6a-12198c0109de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sckkt" Jan 29 06:37:41 crc kubenswrapper[5017]: E0129 06:37:41.083488 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 06:37:41.583466502 +0000 UTC m=+147.957914102 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sckkt" (UID: "4d05265b-0d73-42c3-be6a-12198c0109de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:37:41 crc kubenswrapper[5017]: I0129 06:37:41.133360 5017 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-29 06:32:40 +0000 UTC, rotation deadline is 2026-10-23 01:38:15.002391636 +0000 UTC Jan 29 06:37:41 crc kubenswrapper[5017]: I0129 06:37:41.133850 5017 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6403h0m33.868545185s for next certificate rotation Jan 29 06:37:41 crc kubenswrapper[5017]: I0129 06:37:41.184142 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 06:37:41 crc kubenswrapper[5017]: E0129 06:37:41.184364 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 06:37:41.684330272 +0000 UTC m=+148.058777882 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:37:41 crc kubenswrapper[5017]: I0129 06:37:41.184425 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sckkt\" (UID: \"4d05265b-0d73-42c3-be6a-12198c0109de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sckkt" Jan 29 06:37:41 crc kubenswrapper[5017]: E0129 06:37:41.185034 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 06:37:41.685027362 +0000 UTC m=+148.059474972 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sckkt" (UID: "4d05265b-0d73-42c3-be6a-12198c0109de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:37:41 crc kubenswrapper[5017]: I0129 06:37:41.285107 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 06:37:41 crc kubenswrapper[5017]: E0129 06:37:41.285484 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 06:37:41.78544835 +0000 UTC m=+148.159896020 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:37:41 crc kubenswrapper[5017]: I0129 06:37:41.386527 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sckkt\" (UID: \"4d05265b-0d73-42c3-be6a-12198c0109de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sckkt" Jan 29 06:37:41 crc kubenswrapper[5017]: E0129 06:37:41.387018 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 06:37:41.88694329 +0000 UTC m=+148.261390960 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sckkt" (UID: "4d05265b-0d73-42c3-be6a-12198c0109de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:37:41 crc kubenswrapper[5017]: I0129 06:37:41.487659 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 06:37:41 crc kubenswrapper[5017]: E0129 06:37:41.487867 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 06:37:41.987813941 +0000 UTC m=+148.362261551 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:37:41 crc kubenswrapper[5017]: I0129 06:37:41.488026 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sckkt\" (UID: \"4d05265b-0d73-42c3-be6a-12198c0109de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sckkt" Jan 29 06:37:41 crc kubenswrapper[5017]: E0129 06:37:41.488533 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 06:37:41.9885103 +0000 UTC m=+148.362957970 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sckkt" (UID: "4d05265b-0d73-42c3-be6a-12198c0109de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:37:41 crc kubenswrapper[5017]: I0129 06:37:41.589268 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 06:37:41 crc kubenswrapper[5017]: E0129 06:37:41.589418 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 06:37:42.089394411 +0000 UTC m=+148.463842021 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:37:41 crc kubenswrapper[5017]: I0129 06:37:41.589536 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sckkt\" (UID: \"4d05265b-0d73-42c3-be6a-12198c0109de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sckkt" Jan 29 06:37:41 crc kubenswrapper[5017]: E0129 06:37:41.589873 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 06:37:42.089864346 +0000 UTC m=+148.464311956 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sckkt" (UID: "4d05265b-0d73-42c3-be6a-12198c0109de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:37:41 crc kubenswrapper[5017]: I0129 06:37:41.614334 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zlflf" event={"ID":"3c5c683b-3a25-4237-9b2c-e2ff822e0080","Type":"ContainerStarted","Data":"afc01569fcf7a0a9e5e349d12d3560ccce39910bad333f20a26e1aec912c763a"} Jan 29 06:37:41 crc kubenswrapper[5017]: I0129 06:37:41.616220 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lthxv" event={"ID":"c666e0f9-29fc-4765-8d2c-0d7b5e1545fd","Type":"ContainerStarted","Data":"c64149752985671ac93121eec063708a2b20d4023fbc8352f634d1ebb7237681"} Jan 29 06:37:41 crc kubenswrapper[5017]: I0129 06:37:41.618144 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-cbp4z" event={"ID":"250918e3-cdc9-40cb-b390-6dbb5afe9d1f","Type":"ContainerStarted","Data":"7685a28712f0e24676678d9e0ddae485656b3ad0e17564dc0ffa5cda126a726e"} Jan 29 06:37:41 crc kubenswrapper[5017]: I0129 06:37:41.618221 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-cbp4z" event={"ID":"250918e3-cdc9-40cb-b390-6dbb5afe9d1f","Type":"ContainerStarted","Data":"f745f2ed69e6154b947f991cb49bd1e211f279b0d6e7ab629e73938dc7a2ff1f"} Jan 29 06:37:41 crc kubenswrapper[5017]: I0129 06:37:41.624259 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-777x9" event={"ID":"5db2271c-c63c-4066-b91d-3f132768eb09","Type":"ContainerStarted","Data":"042762f58b627121d4c8ea499607428f317981d179b1030439aa0ed02c6c29ec"} Jan 29 06:37:41 crc kubenswrapper[5017]: I0129 06:37:41.628324 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-j7bhs" event={"ID":"688f2e96-86c1-45db-a699-28269469f6f0","Type":"ContainerStarted","Data":"886199b31417ab0d3c6efa10a1f854964c504f4641911c9883d09cc3091c30d9"} Jan 29 06:37:41 crc kubenswrapper[5017]: I0129 06:37:41.631312 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494470-2sw2z" event={"ID":"7fbf9f0a-9025-4eca-b0d2-00f87df0c16e","Type":"ContainerStarted","Data":"222f60d0ab5105c4428d853d7b9c05abd56bf70465004e58e5a328e648702c30"} Jan 29 06:37:41 crc kubenswrapper[5017]: I0129 06:37:41.636665 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-chjv7" event={"ID":"c94822af-25f0-4380-9193-1554ff518daf","Type":"ContainerStarted","Data":"e90d7bcd0696de59d3a0e641a8a4ec167facf472d7effe925fa6cf64a12e1638"} Jan 29 06:37:41 crc kubenswrapper[5017]: I0129 06:37:41.636873 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-chjv7" Jan 29 06:37:41 crc kubenswrapper[5017]: I0129 06:37:41.639113 5017 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8pr49" event={"ID":"ea36b9ef-bda8-411e-9d60-79e9ed3b514b","Type":"ContainerStarted","Data":"3cadace4ca398dcaf5ffe42897053ea6561775eb2110e7a2832235fee594f350"} Jan 29 06:37:41 crc kubenswrapper[5017]: I0129 06:37:41.639705 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8pr49" Jan 29 06:37:41 crc kubenswrapper[5017]: I0129 06:37:41.641284 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ksz9z" event={"ID":"16ace874-46a3-4668-a109-845a7d4a75e7","Type":"ContainerStarted","Data":"63766d6344521394c6610b8a6a8f9816236a77256128fc6a7132cd6e39a58130"} Jan 29 06:37:41 crc kubenswrapper[5017]: I0129 06:37:41.641988 5017 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-8pr49 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused" start-of-body= Jan 29 06:37:41 crc kubenswrapper[5017]: I0129 06:37:41.642035 5017 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8pr49" podUID="ea36b9ef-bda8-411e-9d60-79e9ed3b514b" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused" Jan 29 06:37:41 crc kubenswrapper[5017]: I0129 06:37:41.643578 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zlflf" podStartSLOduration=125.64356399 podStartE2EDuration="2m5.64356399s" podCreationTimestamp="2026-01-29 06:35:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:37:41.640375308 +0000 UTC m=+148.014822928" watchObservedRunningTime="2026-01-29 06:37:41.64356399 +0000 UTC m=+148.018011630" Jan 29 06:37:41 crc kubenswrapper[5017]: I0129 06:37:41.647789 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-pws6m" event={"ID":"2c053f98-2b15-48b7-9cf8-b8cdb0b29d81","Type":"ContainerStarted","Data":"7674cfa68c7975ff5560b6a245a204195eebfe46e380a825f0cbeb60c0549a82"} Jan 29 06:37:41 crc kubenswrapper[5017]: I0129 06:37:41.647993 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-pws6m" Jan 29 06:37:41 crc kubenswrapper[5017]: I0129 06:37:41.650365 5017 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-pws6m container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.28:6443/healthz\": dial tcp 10.217.0.28:6443: connect: connection refused" start-of-body= Jan 29 06:37:41 crc kubenswrapper[5017]: I0129 06:37:41.650442 5017 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-pws6m" podUID="2c053f98-2b15-48b7-9cf8-b8cdb0b29d81" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.28:6443/healthz\": dial tcp 10.217.0.28:6443: connect: connection refused" Jan 29 06:37:41 crc kubenswrapper[5017]: I0129 06:37:41.651793 5017 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-controller-manager/controller-manager-879f6c89f-5zqqn" event={"ID":"d21d2c22-5085-4712-a8d5-de95dc8a69b3","Type":"ContainerStarted","Data":"e7e400cbd3423e76eaf410a1a7d86bd7bb8a9f1ce8405d781ab090852b44223a"} Jan 29 06:37:41 crc kubenswrapper[5017]: I0129 06:37:41.652753 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-5zqqn" Jan 29 06:37:41 crc kubenswrapper[5017]: I0129 06:37:41.654685 5017 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-5zqqn container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Jan 29 06:37:41 crc kubenswrapper[5017]: I0129 06:37:41.654765 5017 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-5zqqn" podUID="d21d2c22-5085-4712-a8d5-de95dc8a69b3" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Jan 29 06:37:41 crc kubenswrapper[5017]: I0129 06:37:41.655706 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-plx6n" event={"ID":"112cd557-09e1-4599-ba7c-42a24407956f","Type":"ContainerStarted","Data":"7318f9de88967d0d60d762ca7ea319ea6df1fdbd749383d94074a3bec51f00f9"} Jan 29 06:37:41 crc kubenswrapper[5017]: I0129 06:37:41.661216 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tl7pb" event={"ID":"baaa9247-4265-4db5-a2c8-eaa993fd0971","Type":"ContainerStarted","Data":"b49070b6530e597c0a354585979b820c50504aac9672db39ba1361473c37cd4a"} Jan 29 06:37:41 crc kubenswrapper[5017]: I0129 06:37:41.669830 5017 patch_prober.go:28] interesting pod/router-default-5444994796-rlfl6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 06:37:41 crc kubenswrapper[5017]: [-]has-synced failed: reason withheld Jan 29 06:37:41 crc kubenswrapper[5017]: [+]process-running ok Jan 29 06:37:41 crc kubenswrapper[5017]: healthz check failed Jan 29 06:37:41 crc kubenswrapper[5017]: I0129 06:37:41.669920 5017 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rlfl6" podUID="6e2073b6-4205-42ab-8282-8bb749d7ef3d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 06:37:41 crc kubenswrapper[5017]: I0129 06:37:41.672036 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-chjv7" podStartSLOduration=125.672003387 podStartE2EDuration="2m5.672003387s" podCreationTimestamp="2026-01-29 06:35:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:37:41.670251237 +0000 UTC m=+148.044698857" watchObservedRunningTime="2026-01-29 06:37:41.672003387 +0000 UTC m=+148.046451007" Jan 29 06:37:41 crc kubenswrapper[5017]: I0129 06:37:41.672207 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-cdswz" 
event={"ID":"9ce02582-da6e-4ba4-8c7b-a1e1132eff04","Type":"ContainerStarted","Data":"8e04f628b77f9169d1fffbe84c599cb2e7606d6a726694aedcccf9fc2e5a1eff"} Jan 29 06:37:41 crc kubenswrapper[5017]: I0129 06:37:41.672266 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-cdswz" event={"ID":"9ce02582-da6e-4ba4-8c7b-a1e1132eff04","Type":"ContainerStarted","Data":"7a8482fce57aadf67e50a1bfd037be529df9b13578664e4496cbbbbbbf86e3ff"} Jan 29 06:37:41 crc kubenswrapper[5017]: I0129 06:37:41.675831 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j2rsb" event={"ID":"547c672d-11fe-48d2-857e-27d7ac4e1fc9","Type":"ContainerStarted","Data":"1e86631476bac774bc31294c96dd99ae30fa219fbcbff6f0988d1b697474c8cb"} Jan 29 06:37:41 crc kubenswrapper[5017]: I0129 06:37:41.676796 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j2rsb" Jan 29 06:37:41 crc kubenswrapper[5017]: I0129 06:37:41.687761 5017 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-j2rsb container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.30:5443/healthz\": dial tcp 10.217.0.30:5443: connect: connection refused" start-of-body= Jan 29 06:37:41 crc kubenswrapper[5017]: I0129 06:37:41.687864 5017 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j2rsb" podUID="547c672d-11fe-48d2-857e-27d7ac4e1fc9" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.30:5443/healthz\": dial tcp 10.217.0.30:5443: connect: connection refused" Jan 29 06:37:41 crc kubenswrapper[5017]: I0129 06:37:41.691806 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 06:37:41 crc kubenswrapper[5017]: E0129 06:37:41.694380 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 06:37:42.19435901 +0000 UTC m=+148.568806620 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:37:41 crc kubenswrapper[5017]: I0129 06:37:41.699848 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kvgh9" event={"ID":"5ab0da1e-4133-488b-9472-83bde1f3bd25","Type":"ContainerStarted","Data":"a1b8018179ef6b33c4c09e2845ec2acea8e0cfee475f7d48f7b30b136d0dc122"} Jan 29 06:37:41 crc kubenswrapper[5017]: I0129 06:37:41.709886 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sckkt\" (UID: \"4d05265b-0d73-42c3-be6a-12198c0109de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sckkt" Jan 29 06:37:41 crc kubenswrapper[5017]: E0129 06:37:41.710295 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 06:37:42.210278619 +0000 UTC m=+148.584726229 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sckkt" (UID: "4d05265b-0d73-42c3-be6a-12198c0109de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:37:41 crc kubenswrapper[5017]: I0129 06:37:41.719892 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29494470-2sw2z" podStartSLOduration=126.719857303 podStartE2EDuration="2m6.719857303s" podCreationTimestamp="2026-01-29 06:35:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:37:41.712312947 +0000 UTC m=+148.086760557" watchObservedRunningTime="2026-01-29 06:37:41.719857303 +0000 UTC m=+148.094304913" Jan 29 06:37:41 crc kubenswrapper[5017]: I0129 06:37:41.726814 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-fhx55" event={"ID":"2495db50-8f94-4c1e-a23c-c16a3ee22bba","Type":"ContainerStarted","Data":"c41cbcaf0e20721d3207406305effe707d15b2addfc7ac08ab6000494ed6c10f"} Jan 29 06:37:41 crc kubenswrapper[5017]: I0129 06:37:41.726881 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-fhx55" event={"ID":"2495db50-8f94-4c1e-a23c-c16a3ee22bba","Type":"ContainerStarted","Data":"13b597b73c716408f653e4da9ca4f2460ccafbc61cde10249a3b1e16cb43248e"} Jan 29 06:37:41 crc kubenswrapper[5017]: I0129 06:37:41.727585 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-fhx55" Jan 29 06:37:41 crc kubenswrapper[5017]: I0129 06:37:41.753639 5017 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k92pt" event={"ID":"f1a99fb5-b091-4517-8519-59e68cd2366e","Type":"ContainerStarted","Data":"954c69b91232a4b5f0349185e6d35725a636033a3be7c1d94c04052c76bde543"} Jan 29 06:37:41 crc kubenswrapper[5017]: I0129 06:37:41.758587 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8pr49" podStartSLOduration=126.758549426 podStartE2EDuration="2m6.758549426s" podCreationTimestamp="2026-01-29 06:35:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:37:41.738576932 +0000 UTC m=+148.113024542" watchObservedRunningTime="2026-01-29 06:37:41.758549426 +0000 UTC m=+148.132997036" Jan 29 06:37:41 crc kubenswrapper[5017]: I0129 06:37:41.787195 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-z6lr7" event={"ID":"5df97138-ff6c-44b4-ac93-9136235d5888","Type":"ContainerStarted","Data":"9ae25000a612c9182754c42dcb22e536e1aa078ea1d479b7b3457b18df317a28"} Jan 29 06:37:41 crc kubenswrapper[5017]: I0129 06:37:41.810450 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wmnch" Jan 29 06:37:41 crc kubenswrapper[5017]: I0129 06:37:41.816037 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 06:37:41 crc kubenswrapper[5017]: E0129 06:37:41.819024 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 06:37:42.318994465 +0000 UTC m=+148.693442075 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:37:41 crc kubenswrapper[5017]: I0129 06:37:41.844919 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-cbp4z" podStartSLOduration=126.844874509 podStartE2EDuration="2m6.844874509s" podCreationTimestamp="2026-01-29 06:35:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:37:41.840782522 +0000 UTC m=+148.215230142" watchObservedRunningTime="2026-01-29 06:37:41.844874509 +0000 UTC m=+148.219322119" Jan 29 06:37:41 crc kubenswrapper[5017]: I0129 06:37:41.867308 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-f8chq" Jan 29 06:37:41 crc kubenswrapper[5017]: I0129 06:37:41.886970 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lthxv" podStartSLOduration=126.886929099 podStartE2EDuration="2m6.886929099s" podCreationTimestamp="2026-01-29 06:35:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:37:41.884712955 +0000 UTC m=+148.259160565" watchObservedRunningTime="2026-01-29 06:37:41.886929099 +0000 UTC m=+148.261376709" Jan 29 06:37:41 crc kubenswrapper[5017]: I0129 06:37:41.916146 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ksz9z" podStartSLOduration=126.916128569 podStartE2EDuration="2m6.916128569s" podCreationTimestamp="2026-01-29 06:35:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:37:41.913308288 +0000 UTC m=+148.287755898" watchObservedRunningTime="2026-01-29 06:37:41.916128569 +0000 UTC m=+148.290576179" Jan 29 06:37:41 crc kubenswrapper[5017]: I0129 06:37:41.919144 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sckkt\" (UID: \"4d05265b-0d73-42c3-be6a-12198c0109de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sckkt" Jan 29 06:37:41 crc kubenswrapper[5017]: E0129 06:37:41.919624 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 06:37:42.419608108 +0000 UTC m=+148.794055708 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sckkt" (UID: "4d05265b-0d73-42c3-be6a-12198c0109de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:37:41 crc kubenswrapper[5017]: I0129 06:37:41.944042 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tl7pb" podStartSLOduration=126.944019091 podStartE2EDuration="2m6.944019091s" podCreationTimestamp="2026-01-29 06:35:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:37:41.942698173 +0000 UTC m=+148.317145783" watchObservedRunningTime="2026-01-29 06:37:41.944019091 +0000 UTC m=+148.318466701" Jan 29 06:37:42 crc kubenswrapper[5017]: I0129 06:37:42.021781 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 06:37:42 crc kubenswrapper[5017]: E0129 06:37:42.022043 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 06:37:42.521997463 +0000 UTC m=+148.896445073 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:37:42 crc kubenswrapper[5017]: I0129 06:37:42.022234 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sckkt\" (UID: \"4d05265b-0d73-42c3-be6a-12198c0109de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sckkt" Jan 29 06:37:42 crc kubenswrapper[5017]: E0129 06:37:42.022623 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 06:37:42.522613601 +0000 UTC m=+148.897061211 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sckkt" (UID: "4d05265b-0d73-42c3-be6a-12198c0109de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:37:42 crc kubenswrapper[5017]: I0129 06:37:42.070904 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-cdswz" podStartSLOduration=127.070881499 podStartE2EDuration="2m7.070881499s" podCreationTimestamp="2026-01-29 06:35:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:37:42.069801778 +0000 UTC m=+148.444249388" watchObservedRunningTime="2026-01-29 06:37:42.070881499 +0000 UTC m=+148.445329109" Jan 29 06:37:42 crc kubenswrapper[5017]: I0129 06:37:42.123149 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 06:37:42 crc kubenswrapper[5017]: E0129 06:37:42.123369 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 06:37:42.623336148 +0000 UTC m=+148.997783758 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:37:42 crc kubenswrapper[5017]: I0129 06:37:42.123550 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 06:37:42 crc kubenswrapper[5017]: I0129 06:37:42.123583 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 06:37:42 crc kubenswrapper[5017]: I0129 06:37:42.123665 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sckkt\" (UID: \"4d05265b-0d73-42c3-be6a-12198c0109de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sckkt" Jan 29 06:37:42 crc kubenswrapper[5017]: I0129 06:37:42.124866 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 06:37:42 crc kubenswrapper[5017]: E0129 06:37:42.127403 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 06:37:42.627390445 +0000 UTC m=+149.001838055 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sckkt" (UID: "4d05265b-0d73-42c3-be6a-12198c0109de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 06:37:42 crc kubenswrapper[5017]: I0129 06:37:42.132104 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 06:37:42 crc kubenswrapper[5017]: I0129 06:37:42.144420 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j2rsb" podStartSLOduration=126.144398044 podStartE2EDuration="2m6.144398044s" podCreationTimestamp="2026-01-29 06:35:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:37:42.099929094 +0000 UTC m=+148.474376704" watchObservedRunningTime="2026-01-29 06:37:42.144398044 +0000 UTC m=+148.518845654"
Jan 29 06:37:42 crc kubenswrapper[5017]: I0129 06:37:42.147973 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 06:37:42 crc kubenswrapper[5017]: I0129 06:37:42.158234 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 06:37:42 crc kubenswrapper[5017]: I0129 06:37:42.181777 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-fhx55" podStartSLOduration=9.181755617 podStartE2EDuration="9.181755617s" podCreationTimestamp="2026-01-29 06:37:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:37:42.146564445 +0000 UTC m=+148.521012055" watchObservedRunningTime="2026-01-29 06:37:42.181755617 +0000 UTC m=+148.556203227"
Jan 29 06:37:42 crc kubenswrapper[5017]: I0129 06:37:42.203047 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-n8qjs"
Jan 29 06:37:42 crc kubenswrapper[5017]: I0129 06:37:42.226485 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 06:37:42 crc kubenswrapper[5017]: I0129 06:37:42.226772 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 06:37:42 crc kubenswrapper[5017]: E0129 06:37:42.229457 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 06:37:42.729430328 +0000 UTC m=+149.103877938 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 06:37:42 crc kubenswrapper[5017]: I0129 06:37:42.224929 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-pws6m" podStartSLOduration=127.224902938 podStartE2EDuration="2m7.224902938s" podCreationTimestamp="2026-01-29 06:35:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:37:42.222022516 +0000 UTC m=+148.596470126" watchObservedRunningTime="2026-01-29 06:37:42.224902938 +0000 UTC m=+148.599350548"
Jan 29 06:37:42 crc kubenswrapper[5017]: I0129 06:37:42.235343 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-plx6n" podStartSLOduration=127.235309138 podStartE2EDuration="2m7.235309138s" podCreationTimestamp="2026-01-29 06:35:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:37:42.182305164 +0000 UTC m=+148.556752774" watchObservedRunningTime="2026-01-29 06:37:42.235309138 +0000 UTC m=+148.609756748"
Jan 29 06:37:42 crc kubenswrapper[5017]: I0129 06:37:42.238586 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 06:37:42 crc kubenswrapper[5017]: I0129 06:37:42.300724 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k92pt" podStartSLOduration=127.300702658 podStartE2EDuration="2m7.300702658s" podCreationTimestamp="2026-01-29 06:35:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:37:42.262416387 +0000 UTC m=+148.636863997" watchObservedRunningTime="2026-01-29 06:37:42.300702658 +0000 UTC m=+148.675150268"
Jan 29 06:37:42 crc kubenswrapper[5017]: I0129 06:37:42.300840 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-z6lr7" podStartSLOduration=127.300835642 podStartE2EDuration="2m7.300835642s" podCreationTimestamp="2026-01-29 06:35:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:37:42.298624889 +0000 UTC m=+148.673072499" watchObservedRunningTime="2026-01-29 06:37:42.300835642 +0000 UTC m=+148.675283262"
Jan 29 06:37:42 crc kubenswrapper[5017]: I0129 06:37:42.331371 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sckkt\" (UID: \"4d05265b-0d73-42c3-be6a-12198c0109de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sckkt"
Jan 29 06:37:42 crc kubenswrapper[5017]: E0129 06:37:42.331730 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 06:37:42.83171755 +0000 UTC m=+149.206165160 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sckkt" (UID: "4d05265b-0d73-42c3-be6a-12198c0109de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 06:37:42 crc kubenswrapper[5017]: I0129 06:37:42.347515 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 06:37:42 crc kubenswrapper[5017]: I0129 06:37:42.356316 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 06:37:42 crc kubenswrapper[5017]: I0129 06:37:42.361683 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 06:37:42 crc kubenswrapper[5017]: I0129 06:37:42.378466 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-5zqqn" podStartSLOduration=127.378442964 podStartE2EDuration="2m7.378442964s" podCreationTimestamp="2026-01-29 06:35:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:37:42.329832926 +0000 UTC m=+148.704280536" watchObservedRunningTime="2026-01-29 06:37:42.378442964 +0000 UTC m=+148.752890574"
Jan 29 06:37:42 crc kubenswrapper[5017]: I0129 06:37:42.433884 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 06:37:42 crc kubenswrapper[5017]: E0129 06:37:42.434344 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 06:37:42.934323912 +0000 UTC m=+149.308771522 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 06:37:42 crc kubenswrapper[5017]: I0129 06:37:42.535921 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sckkt\" (UID: \"4d05265b-0d73-42c3-be6a-12198c0109de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sckkt"
Jan 29 06:37:42 crc kubenswrapper[5017]: E0129 06:37:42.536419 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 06:37:43.036395287 +0000 UTC m=+149.410842897 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sckkt" (UID: "4d05265b-0d73-42c3-be6a-12198c0109de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 06:37:42 crc kubenswrapper[5017]: I0129 06:37:42.637821 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 06:37:42 crc kubenswrapper[5017]: E0129 06:37:42.638545 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 06:37:43.138519584 +0000 UTC m=+149.512967204 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 06:37:42 crc kubenswrapper[5017]: I0129 06:37:42.694047 5017 patch_prober.go:28] interesting pod/router-default-5444994796-rlfl6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 29 06:37:42 crc kubenswrapper[5017]: [-]has-synced failed: reason withheld
Jan 29 06:37:42 crc kubenswrapper[5017]: [+]process-running ok
Jan 29 06:37:42 crc kubenswrapper[5017]: healthz check failed
Jan 29 06:37:42 crc kubenswrapper[5017]: I0129 06:37:42.694181 5017 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rlfl6" podUID="6e2073b6-4205-42ab-8282-8bb749d7ef3d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 29 06:37:42 crc kubenswrapper[5017]: I0129 06:37:42.741838 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sckkt\" (UID: \"4d05265b-0d73-42c3-be6a-12198c0109de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sckkt"
Jan 29 06:37:42 crc kubenswrapper[5017]: E0129 06:37:42.742320 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 06:37:43.242305019 +0000 UTC m=+149.616752629 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sckkt" (UID: "4d05265b-0d73-42c3-be6a-12198c0109de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 06:37:42 crc kubenswrapper[5017]: I0129 06:37:42.842552 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 06:37:42 crc kubenswrapper[5017]: E0129 06:37:42.843141 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 06:37:43.343123288 +0000 UTC m=+149.717570898 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 06:37:42 crc kubenswrapper[5017]: I0129 06:37:42.843831 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-4pmt6" event={"ID":"6bd9b034-cfec-4194-9b45-318ed8625994","Type":"ContainerStarted","Data":"c5e04c2f06593db730052e95ef7383192652824d303896d6e30766f571878eb0"}
Jan 29 06:37:42 crc kubenswrapper[5017]: I0129 06:37:42.872336 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8pr49"
Jan 29 06:37:42 crc kubenswrapper[5017]: I0129 06:37:42.881580 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-5zqqn"
Jan 29 06:37:42 crc kubenswrapper[5017]: I0129 06:37:42.953340 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sckkt\" (UID: \"4d05265b-0d73-42c3-be6a-12198c0109de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sckkt"
Jan 29 06:37:42 crc kubenswrapper[5017]: E0129 06:37:42.953726 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 06:37:43.453710419 +0000 UTC m=+149.828158029 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sckkt" (UID: "4d05265b-0d73-42c3-be6a-12198c0109de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 06:37:43 crc kubenswrapper[5017]: I0129 06:37:43.055671 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 06:37:43 crc kubenswrapper[5017]: E0129 06:37:43.056143 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 06:37:43.556119324 +0000 UTC m=+149.930566934 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 06:37:43 crc kubenswrapper[5017]: I0129 06:37:43.160743 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sckkt\" (UID: \"4d05265b-0d73-42c3-be6a-12198c0109de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sckkt"
Jan 29 06:37:43 crc kubenswrapper[5017]: E0129 06:37:43.161184 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 06:37:43.661169635 +0000 UTC m=+150.035617245 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sckkt" (UID: "4d05265b-0d73-42c3-be6a-12198c0109de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 06:37:43 crc kubenswrapper[5017]: I0129 06:37:43.264358 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 06:37:43 crc kubenswrapper[5017]: E0129 06:37:43.264790 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 06:37:43.764770665 +0000 UTC m=+150.139218275 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 06:37:43 crc kubenswrapper[5017]: I0129 06:37:43.265162 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sckkt\" (UID: \"4d05265b-0d73-42c3-be6a-12198c0109de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sckkt"
Jan 29 06:37:43 crc kubenswrapper[5017]: E0129 06:37:43.265464 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 06:37:43.765457764 +0000 UTC m=+150.139905374 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sckkt" (UID: "4d05265b-0d73-42c3-be6a-12198c0109de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 06:37:43 crc kubenswrapper[5017]: I0129 06:37:43.365931 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 06:37:43 crc kubenswrapper[5017]: E0129 06:37:43.366336 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 06:37:43.866315264 +0000 UTC m=+150.240762874 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 06:37:43 crc kubenswrapper[5017]: I0129 06:37:43.468505 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sckkt\" (UID: \"4d05265b-0d73-42c3-be6a-12198c0109de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sckkt"
Jan 29 06:37:43 crc kubenswrapper[5017]: E0129 06:37:43.469239 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 06:37:43.969216814 +0000 UTC m=+150.343664424 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sckkt" (UID: "4d05265b-0d73-42c3-be6a-12198c0109de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 06:37:43 crc kubenswrapper[5017]: I0129 06:37:43.569727 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 06:37:43 crc kubenswrapper[5017]: E0129 06:37:43.569939 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 06:37:44.06990286 +0000 UTC m=+150.444350470 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 06:37:43 crc kubenswrapper[5017]: I0129 06:37:43.570082 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sckkt\" (UID: \"4d05265b-0d73-42c3-be6a-12198c0109de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sckkt"
Jan 29 06:37:43 crc kubenswrapper[5017]: E0129 06:37:43.570561 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 06:37:44.070552478 +0000 UTC m=+150.445000098 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sckkt" (UID: "4d05265b-0d73-42c3-be6a-12198c0109de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 06:37:43 crc kubenswrapper[5017]: I0129 06:37:43.583531 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-pws6m"
Jan 29 06:37:43 crc kubenswrapper[5017]: I0129 06:37:43.659695 5017 patch_prober.go:28] interesting pod/router-default-5444994796-rlfl6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 29 06:37:43 crc kubenswrapper[5017]: [-]has-synced failed: reason withheld
Jan 29 06:37:43 crc kubenswrapper[5017]: [+]process-running ok
Jan 29 06:37:43 crc kubenswrapper[5017]: healthz check failed
Jan 29 06:37:43 crc kubenswrapper[5017]: I0129 06:37:43.659748 5017 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rlfl6" podUID="6e2073b6-4205-42ab-8282-8bb749d7ef3d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 29 06:37:43 crc kubenswrapper[5017]: I0129 06:37:43.671782 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 06:37:43 crc kubenswrapper[5017]: E0129 06:37:43.672173 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 06:37:44.172151571 +0000 UTC m=+150.546599181 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 06:37:43 crc kubenswrapper[5017]: I0129 06:37:43.690843 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j2rsb"
Jan 29 06:37:43 crc kubenswrapper[5017]: I0129 06:37:43.773336 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sckkt\" (UID: \"4d05265b-0d73-42c3-be6a-12198c0109de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sckkt"
Jan 29 06:37:43 crc kubenswrapper[5017]: E0129 06:37:43.774217 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 06:37:44.274199516 +0000 UTC m=+150.648647126 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sckkt" (UID: "4d05265b-0d73-42c3-be6a-12198c0109de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 06:37:43 crc kubenswrapper[5017]: I0129 06:37:43.828138 5017 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock"
Jan 29 06:37:43 crc kubenswrapper[5017]: I0129 06:37:43.857549 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"d63a7f721ca8c797e46b8eb63658c5799df06415e4a973754c9760d39fcff70a"}
Jan 29 06:37:43 crc kubenswrapper[5017]: I0129 06:37:43.857637 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"bb2f1552103d8de779f6179f41ce20d8f902fe08d139372cb78a40adb98d0768"}
Jan 29 06:37:43 crc kubenswrapper[5017]: I0129 06:37:43.865842 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-4pmt6" event={"ID":"6bd9b034-cfec-4194-9b45-318ed8625994","Type":"ContainerStarted","Data":"c54c24ad851a620e43c4ce50c1b62aec1649ba38125956875df351756f04d5fe"}
Jan 29 06:37:43 crc kubenswrapper[5017]: I0129 06:37:43.865893 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-4pmt6" event={"ID":"6bd9b034-cfec-4194-9b45-318ed8625994","Type":"ContainerStarted","Data":"ee18437885fcee5fbdcdf1836aeee21dbf4d050eec9bca79cc30118fce1c00f4"}
Jan 29 06:37:43 crc kubenswrapper[5017]: I0129 06:37:43.868123 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"9a2fad24b807658d616af8f0a0a2db5b1902464ebffbec16bbec802423d21e6e"}
Jan 29 06:37:43 crc kubenswrapper[5017]: I0129 06:37:43.868161 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"e8150a97d695d7bd44f0e086c436e146749de71d93969920ba7de70997dec835"}
Jan 29 06:37:43 crc kubenswrapper[5017]: I0129 06:37:43.868782 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 06:37:43 crc kubenswrapper[5017]: I0129 06:37:43.874304 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 06:37:43 crc kubenswrapper[5017]: E0129 06:37:43.874697 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 06:37:44.374678895 +0000 UTC m=+150.749126505 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 06:37:43 crc kubenswrapper[5017]: I0129 06:37:43.875325 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"68127499466f2ee07c0abd271a8b37104cb9f6686a7394eeb8515271adc3d1ab"}
Jan 29 06:37:43 crc kubenswrapper[5017]: I0129 06:37:43.875350 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"536cb3aac01de795d4baed60fdeecb146a25adbe19b5c8d1eaa1ff7f8df7f31a"}
Jan 29 06:37:43 crc kubenswrapper[5017]: I0129 06:37:43.955515 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-969vd"]
Jan 29 06:37:43 crc kubenswrapper[5017]: I0129 06:37:43.956466 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-969vd"
Jan 29 06:37:43 crc kubenswrapper[5017]: I0129 06:37:43.965656 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Jan 29 06:37:43 crc kubenswrapper[5017]: I0129 06:37:43.977060 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sckkt\" (UID: \"4d05265b-0d73-42c3-be6a-12198c0109de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sckkt"
Jan 29 06:37:43 crc kubenswrapper[5017]: I0129 06:37:43.978592 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-969vd"]
Jan 29 06:37:43 crc kubenswrapper[5017]: E0129 06:37:43.988164 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 06:37:44.488144158 +0000 UTC m=+150.862591768 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sckkt" (UID: "4d05265b-0d73-42c3-be6a-12198c0109de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 06:37:44 crc kubenswrapper[5017]: I0129 06:37:44.078263 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 06:37:44 crc kubenswrapper[5017]: E0129 06:37:44.084501 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 06:37:44.584456908 +0000 UTC m=+150.958904518 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 06:37:44 crc kubenswrapper[5017]: I0129 06:37:44.085222 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sckkt\" (UID: \"4d05265b-0d73-42c3-be6a-12198c0109de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sckkt"
Jan 29 06:37:44 crc kubenswrapper[5017]: I0129 06:37:44.085331 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95fabedb-25f2-43b3-a1dc-907c7e3ad4c2-catalog-content\") pod \"community-operators-969vd\" (UID: \"95fabedb-25f2-43b3-a1dc-907c7e3ad4c2\") " pod="openshift-marketplace/community-operators-969vd"
Jan 29 06:37:44 crc kubenswrapper[5017]: I0129 06:37:44.085519 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95fabedb-25f2-43b3-a1dc-907c7e3ad4c2-utilities\") pod \"community-operators-969vd\" (UID: \"95fabedb-25f2-43b3-a1dc-907c7e3ad4c2\") " pod="openshift-marketplace/community-operators-969vd"
Jan 29 06:37:44 crc kubenswrapper[5017]: I0129 06:37:44.085686 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vck8l\" (UniqueName: \"kubernetes.io/projected/95fabedb-25f2-43b3-a1dc-907c7e3ad4c2-kube-api-access-vck8l\") pod \"community-operators-969vd\" (UID: \"95fabedb-25f2-43b3-a1dc-907c7e3ad4c2\") " pod="openshift-marketplace/community-operators-969vd"
Jan 29 06:37:44 crc kubenswrapper[5017]: E0129 06:37:44.086522 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 06:37:44.586485767 +0000 UTC m=+150.960933377 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sckkt" (UID: "4d05265b-0d73-42c3-be6a-12198c0109de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 06:37:44 crc kubenswrapper[5017]: I0129 06:37:44.154228 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jxrzm"]
Jan 29 06:37:44 crc kubenswrapper[5017]: I0129 06:37:44.155741 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jxrzm"
Jan 29 06:37:44 crc kubenswrapper[5017]: I0129 06:37:44.180411 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Jan 29 06:37:44 crc kubenswrapper[5017]: I0129 06:37:44.184394 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jxrzm"]
Jan 29 06:37:44 crc kubenswrapper[5017]: I0129 06:37:44.186755 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 06:37:44 crc kubenswrapper[5017]: I0129 06:37:44.187106 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95fabedb-25f2-43b3-a1dc-907c7e3ad4c2-utilities\") pod \"community-operators-969vd\" (UID: \"95fabedb-25f2-43b3-a1dc-907c7e3ad4c2\") " pod="openshift-marketplace/community-operators-969vd"
Jan 29 06:37:44 crc kubenswrapper[5017]: I0129 06:37:44.187149 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/515e73a4-3f9f-40aa-bd4b-c4ac2d55f304-utilities\") pod \"certified-operators-jxrzm\" (UID: \"515e73a4-3f9f-40aa-bd4b-c4ac2d55f304\") " pod="openshift-marketplace/certified-operators-jxrzm"
Jan 29 06:37:44 crc kubenswrapper[5017]: I0129 06:37:44.187178 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vck8l\" (UniqueName: \"kubernetes.io/projected/95fabedb-25f2-43b3-a1dc-907c7e3ad4c2-kube-api-access-vck8l\") pod \"community-operators-969vd\" (UID: \"95fabedb-25f2-43b3-a1dc-907c7e3ad4c2\") " pod="openshift-marketplace/community-operators-969vd"
Jan 29 06:37:44 crc kubenswrapper[5017]: I0129 06:37:44.187204 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trlr7\" (UniqueName: \"kubernetes.io/projected/515e73a4-3f9f-40aa-bd4b-c4ac2d55f304-kube-api-access-trlr7\") pod \"certified-operators-jxrzm\" (UID: \"515e73a4-3f9f-40aa-bd4b-c4ac2d55f304\") " pod="openshift-marketplace/certified-operators-jxrzm"
Jan 29 06:37:44 crc kubenswrapper[5017]: I0129 06:37:44.187234 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/515e73a4-3f9f-40aa-bd4b-c4ac2d55f304-catalog-content\") pod \"certified-operators-jxrzm\" (UID: \"515e73a4-3f9f-40aa-bd4b-c4ac2d55f304\") " pod="openshift-marketplace/certified-operators-jxrzm"
Jan 29 06:37:44 crc kubenswrapper[5017]: I0129 06:37:44.187276 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95fabedb-25f2-43b3-a1dc-907c7e3ad4c2-catalog-content\") pod \"community-operators-969vd\" (UID: \"95fabedb-25f2-43b3-a1dc-907c7e3ad4c2\") " pod="openshift-marketplace/community-operators-969vd"
Jan 29 06:37:44 crc kubenswrapper[5017]: I0129 06:37:44.187694 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95fabedb-25f2-43b3-a1dc-907c7e3ad4c2-catalog-content\") pod \"community-operators-969vd\" (UID: \"95fabedb-25f2-43b3-a1dc-907c7e3ad4c2\") " pod="openshift-marketplace/community-operators-969vd"
Jan 29 06:37:44 crc kubenswrapper[5017]: E0129 06:37:44.187984 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 06:37:44.687949264 +0000 UTC m=+151.062396874 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 06:37:44 crc kubenswrapper[5017]: I0129 06:37:44.188201 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95fabedb-25f2-43b3-a1dc-907c7e3ad4c2-utilities\") pod \"community-operators-969vd\" (UID: \"95fabedb-25f2-43b3-a1dc-907c7e3ad4c2\") " pod="openshift-marketplace/community-operators-969vd"
Jan 29 06:37:44 crc kubenswrapper[5017]: I0129 06:37:44.238751 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vck8l\" (UniqueName: \"kubernetes.io/projected/95fabedb-25f2-43b3-a1dc-907c7e3ad4c2-kube-api-access-vck8l\") pod \"community-operators-969vd\" (UID: \"95fabedb-25f2-43b3-a1dc-907c7e3ad4c2\") " pod="openshift-marketplace/community-operators-969vd"
Jan 29 06:37:44 crc kubenswrapper[5017]: I0129 06:37:44.288861 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/515e73a4-3f9f-40aa-bd4b-c4ac2d55f304-utilities\") pod \"certified-operators-jxrzm\" (UID: \"515e73a4-3f9f-40aa-bd4b-c4ac2d55f304\") " pod="openshift-marketplace/certified-operators-jxrzm"
Jan 29 06:37:44 crc kubenswrapper[5017]: I0129 06:37:44.288939 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trlr7\" (UniqueName: \"kubernetes.io/projected/515e73a4-3f9f-40aa-bd4b-c4ac2d55f304-kube-api-access-trlr7\") pod \"certified-operators-jxrzm\" (UID: \"515e73a4-3f9f-40aa-bd4b-c4ac2d55f304\") " pod="openshift-marketplace/certified-operators-jxrzm"
Jan 29 06:37:44 crc kubenswrapper[5017]: I0129 06:37:44.289001 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/515e73a4-3f9f-40aa-bd4b-c4ac2d55f304-catalog-content\") pod \"certified-operators-jxrzm\" (UID: \"515e73a4-3f9f-40aa-bd4b-c4ac2d55f304\") " pod="openshift-marketplace/certified-operators-jxrzm"
Jan 29 06:37:44 crc kubenswrapper[5017]: I0129 06:37:44.289055 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sckkt\" (UID: \"4d05265b-0d73-42c3-be6a-12198c0109de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sckkt"
Jan 29 06:37:44 crc kubenswrapper[5017]: I0129 06:37:44.289315 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/515e73a4-3f9f-40aa-bd4b-c4ac2d55f304-utilities\") pod \"certified-operators-jxrzm\" (UID: \"515e73a4-3f9f-40aa-bd4b-c4ac2d55f304\") " pod="openshift-marketplace/certified-operators-jxrzm"
Jan 29 06:37:44 crc kubenswrapper[5017]: E0129 06:37:44.289463 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 06:37:44.789448214 +0000 UTC m=+151.163895824 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sckkt" (UID: "4d05265b-0d73-42c3-be6a-12198c0109de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 06:37:44 crc kubenswrapper[5017]: I0129 06:37:44.289536 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/515e73a4-3f9f-40aa-bd4b-c4ac2d55f304-catalog-content\") pod \"certified-operators-jxrzm\" (UID: \"515e73a4-3f9f-40aa-bd4b-c4ac2d55f304\") " pod="openshift-marketplace/certified-operators-jxrzm"
Jan 29 06:37:44 crc kubenswrapper[5017]: I0129 06:37:44.303008 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-969vd"
Jan 29 06:37:44 crc kubenswrapper[5017]: I0129 06:37:44.319723 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trlr7\" (UniqueName: \"kubernetes.io/projected/515e73a4-3f9f-40aa-bd4b-c4ac2d55f304-kube-api-access-trlr7\") pod \"certified-operators-jxrzm\" (UID: \"515e73a4-3f9f-40aa-bd4b-c4ac2d55f304\") " pod="openshift-marketplace/certified-operators-jxrzm"
Jan 29 06:37:44 crc kubenswrapper[5017]: I0129 06:37:44.359641 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-lgqwh"]
Jan 29 06:37:44 crc kubenswrapper[5017]: I0129 06:37:44.360695 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lgqwh"
Jan 29 06:37:44 crc kubenswrapper[5017]: I0129 06:37:44.371545 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lgqwh"]
Jan 29 06:37:44 crc kubenswrapper[5017]: I0129 06:37:44.390842 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 06:37:44 crc kubenswrapper[5017]: E0129 06:37:44.391422 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 06:37:44.891397845 +0000 UTC m=+151.265845455 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 06:37:44 crc kubenswrapper[5017]: I0129 06:37:44.472437 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jxrzm"
Jan 29 06:37:44 crc kubenswrapper[5017]: I0129 06:37:44.479281 5017 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-29T06:37:43.828168727Z","Handler":null,"Name":""}
Jan 29 06:37:44 crc kubenswrapper[5017]: I0129 06:37:44.488254 5017 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
Jan 29 06:37:44 crc kubenswrapper[5017]: I0129 06:37:44.488295 5017 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
Jan 29 06:37:44 crc kubenswrapper[5017]: I0129 06:37:44.493458 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmbk7\" (UniqueName: \"kubernetes.io/projected/f35dd9e9-ca07-483a-a5bf-2179d9d705c6-kube-api-access-dmbk7\") pod \"community-operators-lgqwh\" (UID: \"f35dd9e9-ca07-483a-a5bf-2179d9d705c6\") " pod="openshift-marketplace/community-operators-lgqwh"
Jan 29 06:37:44 crc kubenswrapper[5017]: I0129 06:37:44.493497 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f35dd9e9-ca07-483a-a5bf-2179d9d705c6-utilities\") pod \"community-operators-lgqwh\" (UID: \"f35dd9e9-ca07-483a-a5bf-2179d9d705c6\") " pod="openshift-marketplace/community-operators-lgqwh"
Jan 29 06:37:44 crc kubenswrapper[5017]: I0129 06:37:44.493622 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f35dd9e9-ca07-483a-a5bf-2179d9d705c6-catalog-content\") pod \"community-operators-lgqwh\" (UID: \"f35dd9e9-ca07-483a-a5bf-2179d9d705c6\") " pod="openshift-marketplace/community-operators-lgqwh"
Jan 29 06:37:44 crc kubenswrapper[5017]: I0129 06:37:44.493705 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sckkt\" (UID: \"4d05265b-0d73-42c3-be6a-12198c0109de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sckkt"
Jan 29 06:37:44 crc kubenswrapper[5017]: I0129 06:37:44.512602 5017 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
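Annotation (not part of the captured journal): the two and a half seconds of repeating MountVolume.MountDevice and UnmountVolume.TearDown failures above are a CSI registration race. The kubelet can only build a client for a CSI volume plugin after the driver's registration socket appears under /var/lib/kubelet/plugins_registry/ and is validated; until then, every mount or unmount that references kubevirt.io.hostpath-provisioner fails fast with "driver name ... not found in the list of registered CSI drivers" and nestedpendingoperations requeues it with a 500ms durationBeforeRetry. The records above show the race resolving: plugin_watcher picks up kubevirt.io.hostpath-provisioner-reg.sock (06:37:43.828138), RegisterPlugin runs (06:37:44.479281), and csi_plugin.go validates and registers the driver at /var/lib/kubelet/plugins/csi-hostpath/csi.sock (06:37:44.488254/.488295). As a minimal sketch, assuming the CSI spec's Go bindings and direct node access to that socket (nothing below appears in the log itself, and the file name is hypothetical), the name a plugin advertises, which must match the volume's driver name, can be queried from its Identity service:

	// csi_identity_check.go: illustrative sketch only. Asks a CSI plugin over
	// its UNIX socket which driver name it advertises; the socket path is
	// taken from the registration records above.
	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		"github.com/container-storage-interface/spec/lib/go/csi"
		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
	)

	func main() {
		// Endpoint as registered in the log; adjust for other drivers.
		const endpoint = "unix:///var/lib/kubelet/plugins/csi-hostpath/csi.sock"

		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// CSI node sockets speak plain gRPC with no transport security.
		conn, err := grpc.DialContext(ctx, endpoint,
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			log.Fatalf("dial %s: %v", endpoint, err)
		}
		defer conn.Close()

		// GetPluginInfo returns the name the kubelet must find in its list of
		// registered CSI drivers, e.g. "kubevirt.io.hostpath-provisioner".
		info, err := csi.NewIdentityClient(conn).GetPluginInfo(ctx, &csi.GetPluginInfoRequest{})
		if err != nil {
			log.Fatalf("GetPluginInfo: %v", err)
		}
		fmt.Printf("driver=%s vendorVersion=%s\n", info.GetName(), info.GetVendorVersion())
	}

Once the name is registered, the very next attempt goes through: the attacher sees the driver does not advertise the STAGE_UNSTAGE_VOLUME capability (06:37:44.512602), so MountDevice is skipped as a no-op, and the records that follow show MountDevice and SetUp succeeding for the image-registry PVC and TearDown finally succeeding for the orphaned pod volume.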
Jan 29 06:37:44 crc kubenswrapper[5017]: I0129 06:37:44.512652 5017 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sckkt\" (UID: \"4d05265b-0d73-42c3-be6a-12198c0109de\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-sckkt"
Jan 29 06:37:44 crc kubenswrapper[5017]: I0129 06:37:44.553154 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rnzhc"]
Jan 29 06:37:44 crc kubenswrapper[5017]: I0129 06:37:44.554283 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rnzhc"
Jan 29 06:37:44 crc kubenswrapper[5017]: I0129 06:37:44.565709 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rnzhc"]
Jan 29 06:37:44 crc kubenswrapper[5017]: I0129 06:37:44.596287 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfth9\" (UniqueName: \"kubernetes.io/projected/3deabecf-5a0e-454c-a622-b3b422c3a6bb-kube-api-access-dfth9\") pod \"certified-operators-rnzhc\" (UID: \"3deabecf-5a0e-454c-a622-b3b422c3a6bb\") " pod="openshift-marketplace/certified-operators-rnzhc"
Jan 29 06:37:44 crc kubenswrapper[5017]: I0129 06:37:44.596344 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3deabecf-5a0e-454c-a622-b3b422c3a6bb-utilities\") pod \"certified-operators-rnzhc\" (UID: \"3deabecf-5a0e-454c-a622-b3b422c3a6bb\") " pod="openshift-marketplace/certified-operators-rnzhc"
Jan 29 06:37:44 crc kubenswrapper[5017]: I0129 06:37:44.596387 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3deabecf-5a0e-454c-a622-b3b422c3a6bb-catalog-content\") pod \"certified-operators-rnzhc\" (UID: \"3deabecf-5a0e-454c-a622-b3b422c3a6bb\") " pod="openshift-marketplace/certified-operators-rnzhc"
Jan 29 06:37:44 crc kubenswrapper[5017]: I0129 06:37:44.596411 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmbk7\" (UniqueName: \"kubernetes.io/projected/f35dd9e9-ca07-483a-a5bf-2179d9d705c6-kube-api-access-dmbk7\") pod \"community-operators-lgqwh\" (UID: \"f35dd9e9-ca07-483a-a5bf-2179d9d705c6\") " pod="openshift-marketplace/community-operators-lgqwh"
Jan 29 06:37:44 crc kubenswrapper[5017]: I0129 06:37:44.596434 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f35dd9e9-ca07-483a-a5bf-2179d9d705c6-utilities\") pod \"community-operators-lgqwh\" (UID: \"f35dd9e9-ca07-483a-a5bf-2179d9d705c6\") " pod="openshift-marketplace/community-operators-lgqwh"
Jan 29 06:37:44 crc kubenswrapper[5017]: I0129 06:37:44.596478 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f35dd9e9-ca07-483a-a5bf-2179d9d705c6-catalog-content\") pod \"community-operators-lgqwh\" (UID: \"f35dd9e9-ca07-483a-a5bf-2179d9d705c6\") " pod="openshift-marketplace/community-operators-lgqwh"
Jan 29 06:37:44 crc kubenswrapper[5017]: I0129 06:37:44.597289 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f35dd9e9-ca07-483a-a5bf-2179d9d705c6-utilities\") pod \"community-operators-lgqwh\" (UID: \"f35dd9e9-ca07-483a-a5bf-2179d9d705c6\") " pod="openshift-marketplace/community-operators-lgqwh"
Jan 29 06:37:44 crc kubenswrapper[5017]: I0129 06:37:44.597507 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f35dd9e9-ca07-483a-a5bf-2179d9d705c6-catalog-content\") pod \"community-operators-lgqwh\" (UID: \"f35dd9e9-ca07-483a-a5bf-2179d9d705c6\") " pod="openshift-marketplace/community-operators-lgqwh"
Jan 29 06:37:44 crc kubenswrapper[5017]: I0129 06:37:44.602145 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sckkt\" (UID: \"4d05265b-0d73-42c3-be6a-12198c0109de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sckkt"
Jan 29 06:37:44 crc kubenswrapper[5017]: I0129 06:37:44.630918 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-969vd"]
Jan 29 06:37:44 crc kubenswrapper[5017]: I0129 06:37:44.644692 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmbk7\" (UniqueName: \"kubernetes.io/projected/f35dd9e9-ca07-483a-a5bf-2179d9d705c6-kube-api-access-dmbk7\") pod \"community-operators-lgqwh\" (UID: \"f35dd9e9-ca07-483a-a5bf-2179d9d705c6\") " pod="openshift-marketplace/community-operators-lgqwh"
Jan 29 06:37:44 crc kubenswrapper[5017]: I0129 06:37:44.660868 5017 patch_prober.go:28] interesting pod/router-default-5444994796-rlfl6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 29 06:37:44 crc kubenswrapper[5017]: [-]has-synced failed: reason withheld
Jan 29 06:37:44 crc kubenswrapper[5017]: [+]process-running ok
Jan 29 06:37:44 crc kubenswrapper[5017]: healthz check failed
Jan 29 06:37:44 crc kubenswrapper[5017]: I0129 06:37:44.660973 5017 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rlfl6" podUID="6e2073b6-4205-42ab-8282-8bb749d7ef3d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 29 06:37:44 crc kubenswrapper[5017]: I0129 06:37:44.693042 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-sckkt"
Jan 29 06:37:44 crc kubenswrapper[5017]: I0129 06:37:44.693402 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lgqwh"
Jan 29 06:37:44 crc kubenswrapper[5017]: I0129 06:37:44.698654 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 06:37:44 crc kubenswrapper[5017]: I0129 06:37:44.699080 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfth9\" (UniqueName: \"kubernetes.io/projected/3deabecf-5a0e-454c-a622-b3b422c3a6bb-kube-api-access-dfth9\") pod \"certified-operators-rnzhc\" (UID: \"3deabecf-5a0e-454c-a622-b3b422c3a6bb\") " pod="openshift-marketplace/certified-operators-rnzhc"
Jan 29 06:37:44 crc kubenswrapper[5017]: I0129 06:37:44.699123 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3deabecf-5a0e-454c-a622-b3b422c3a6bb-utilities\") pod \"certified-operators-rnzhc\" (UID: \"3deabecf-5a0e-454c-a622-b3b422c3a6bb\") " pod="openshift-marketplace/certified-operators-rnzhc"
Jan 29 06:37:44 crc kubenswrapper[5017]: I0129 06:37:44.699154 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3deabecf-5a0e-454c-a622-b3b422c3a6bb-catalog-content\") pod \"certified-operators-rnzhc\" (UID: \"3deabecf-5a0e-454c-a622-b3b422c3a6bb\") " pod="openshift-marketplace/certified-operators-rnzhc"
Jan 29 06:37:44 crc kubenswrapper[5017]: I0129 06:37:44.699613 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3deabecf-5a0e-454c-a622-b3b422c3a6bb-catalog-content\") pod \"certified-operators-rnzhc\" (UID: \"3deabecf-5a0e-454c-a622-b3b422c3a6bb\") " pod="openshift-marketplace/certified-operators-rnzhc"
Jan 29 06:37:44 crc kubenswrapper[5017]: I0129 06:37:44.700285 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3deabecf-5a0e-454c-a622-b3b422c3a6bb-utilities\") pod \"certified-operators-rnzhc\" (UID: \"3deabecf-5a0e-454c-a622-b3b422c3a6bb\") " pod="openshift-marketplace/certified-operators-rnzhc"
Jan 29 06:37:44 crc kubenswrapper[5017]: I0129 06:37:44.729669 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfth9\" (UniqueName: \"kubernetes.io/projected/3deabecf-5a0e-454c-a622-b3b422c3a6bb-kube-api-access-dfth9\") pod \"certified-operators-rnzhc\" (UID: \"3deabecf-5a0e-454c-a622-b3b422c3a6bb\") " pod="openshift-marketplace/certified-operators-rnzhc"
Jan 29 06:37:44 crc kubenswrapper[5017]: I0129 06:37:44.886321 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rnzhc"
Jan 29 06:37:44 crc kubenswrapper[5017]: I0129 06:37:44.904847 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-4pmt6" event={"ID":"6bd9b034-cfec-4194-9b45-318ed8625994","Type":"ContainerStarted","Data":"b5d46f55dcf81493d9c5c368306d670c8146e24b0edbee81d2b53410c05586aa"}
Jan 29 06:37:44 crc kubenswrapper[5017]: I0129 06:37:44.911669 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Jan 29 06:37:44 crc kubenswrapper[5017]: I0129 06:37:44.911797 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-969vd" event={"ID":"95fabedb-25f2-43b3-a1dc-907c7e3ad4c2","Type":"ContainerStarted","Data":"c7d4574b015ce0704d6160068b72ebf7e3202ea16b60ecc0b055c6d283614c14"}
Jan 29 06:37:44 crc kubenswrapper[5017]: I0129 06:37:44.963476 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-4pmt6" podStartSLOduration=11.963450367 podStartE2EDuration="11.963450367s" podCreationTimestamp="2026-01-29 06:37:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:37:44.947178879 +0000 UTC m=+151.321626489" watchObservedRunningTime="2026-01-29 06:37:44.963450367 +0000 UTC m=+151.337897977"
Jan 29 06:37:44 crc kubenswrapper[5017]: I0129 06:37:44.965180 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jxrzm"]
Jan 29 06:37:45 crc kubenswrapper[5017]: I0129 06:37:45.037913 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-pd9qw"
Jan 29 06:37:45 crc kubenswrapper[5017]: I0129 06:37:45.050991 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-pd9qw"
Jan 29 06:37:45 crc kubenswrapper[5017]: I0129 06:37:45.368458 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-sckkt"]
Jan 29 06:37:45 crc kubenswrapper[5017]: I0129 06:37:45.555617 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lgqwh"]
Jan 29 06:37:45 crc kubenswrapper[5017]: I0129 06:37:45.664841 5017 patch_prober.go:28] interesting pod/router-default-5444994796-rlfl6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 29 06:37:45 crc kubenswrapper[5017]: [-]has-synced failed: reason withheld
Jan 29 06:37:45 crc kubenswrapper[5017]: [+]process-running ok
Jan 29 06:37:45 crc kubenswrapper[5017]: healthz check failed
Jan 29 06:37:45 crc kubenswrapper[5017]: I0129 06:37:45.664906 5017 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rlfl6" podUID="6e2073b6-4205-42ab-8282-8bb749d7ef3d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 29 06:37:45 crc kubenswrapper[5017]: I0129 06:37:45.822784 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rnzhc"]
Jan 29 06:37:45 crc kubenswrapper[5017]: I0129 06:37:45.885046 5017 patch_prober.go:28] interesting pod/downloads-7954f5f757-9nv7f container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body=
Jan 29 06:37:45 crc kubenswrapper[5017]: I0129 06:37:45.885096 5017 patch_prober.go:28] interesting pod/downloads-7954f5f757-9nv7f container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body=
Jan 29 06:37:45 crc kubenswrapper[5017]: I0129 06:37:45.885133 5017 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-9nv7f" podUID="0e3a4b4e-acd1-426e-8d48-f2555ced71ec" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused"
Jan 29 06:37:45 crc kubenswrapper[5017]: I0129 06:37:45.885153 5017 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-9nv7f" podUID="0e3a4b4e-acd1-426e-8d48-f2555ced71ec" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused"
Jan 29 06:37:45 crc kubenswrapper[5017]: I0129 06:37:45.915904 5017 generic.go:334] "Generic (PLEG): container finished" podID="f35dd9e9-ca07-483a-a5bf-2179d9d705c6" containerID="d227d4631e82a4a8a838caa548ce81a63fd4d91077ffe67d3c58919820fdd898" exitCode=0
Jan 29 06:37:45 crc kubenswrapper[5017]: I0129 06:37:45.916221 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lgqwh" event={"ID":"f35dd9e9-ca07-483a-a5bf-2179d9d705c6","Type":"ContainerDied","Data":"d227d4631e82a4a8a838caa548ce81a63fd4d91077ffe67d3c58919820fdd898"}
Jan 29 06:37:45 crc kubenswrapper[5017]: I0129 06:37:45.916304 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lgqwh" event={"ID":"f35dd9e9-ca07-483a-a5bf-2179d9d705c6","Type":"ContainerStarted","Data":"cd1f7e90fc6c75952b343e6f8a71e4ea4efbb63818a410cc81377fd90f501065"}
Jan 29 06:37:45 crc kubenswrapper[5017]: I0129 06:37:45.918871 5017 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 29 06:37:45 crc kubenswrapper[5017]: I0129 06:37:45.921211 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-sckkt" event={"ID":"4d05265b-0d73-42c3-be6a-12198c0109de","Type":"ContainerStarted","Data":"b9a253929f61c27e435066e5143807f3c89fda56df16c74dca7ded458da29d33"}
Jan 29 06:37:45 crc kubenswrapper[5017]: I0129 06:37:45.921250 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-sckkt" event={"ID":"4d05265b-0d73-42c3-be6a-12198c0109de","Type":"ContainerStarted","Data":"609dcd668a95023a89412529d38c94b828333ec0e0c480f12e693c3ad9d3cd04"}
Jan 29 06:37:45 crc kubenswrapper[5017]: I0129 06:37:45.921521 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-sckkt"
Jan 29 06:37:45 crc kubenswrapper[5017]: I0129 06:37:45.924319
5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rnzhc" event={"ID":"3deabecf-5a0e-454c-a622-b3b422c3a6bb","Type":"ContainerStarted","Data":"79c574cacd5409f1ceea322251b8c5230cc0f79b4c896f928306ee74fdfb3955"} Jan 29 06:37:45 crc kubenswrapper[5017]: I0129 06:37:45.926128 5017 generic.go:334] "Generic (PLEG): container finished" podID="515e73a4-3f9f-40aa-bd4b-c4ac2d55f304" containerID="4d051237478e16dd0c73d03ea22ae429be0ecc2fec74acbeba1ecd2fc1046072" exitCode=0 Jan 29 06:37:45 crc kubenswrapper[5017]: I0129 06:37:45.926183 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jxrzm" event={"ID":"515e73a4-3f9f-40aa-bd4b-c4ac2d55f304","Type":"ContainerDied","Data":"4d051237478e16dd0c73d03ea22ae429be0ecc2fec74acbeba1ecd2fc1046072"} Jan 29 06:37:45 crc kubenswrapper[5017]: I0129 06:37:45.926201 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jxrzm" event={"ID":"515e73a4-3f9f-40aa-bd4b-c4ac2d55f304","Type":"ContainerStarted","Data":"4439033efc00639dd2e31ef73ef2d6ced28ccfec1ff6e0c9098e3723e218953a"} Jan 29 06:37:45 crc kubenswrapper[5017]: I0129 06:37:45.935882 5017 generic.go:334] "Generic (PLEG): container finished" podID="7fbf9f0a-9025-4eca-b0d2-00f87df0c16e" containerID="222f60d0ab5105c4428d853d7b9c05abd56bf70465004e58e5a328e648702c30" exitCode=0 Jan 29 06:37:45 crc kubenswrapper[5017]: I0129 06:37:45.935942 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494470-2sw2z" event={"ID":"7fbf9f0a-9025-4eca-b0d2-00f87df0c16e","Type":"ContainerDied","Data":"222f60d0ab5105c4428d853d7b9c05abd56bf70465004e58e5a328e648702c30"} Jan 29 06:37:45 crc kubenswrapper[5017]: I0129 06:37:45.949465 5017 generic.go:334] "Generic (PLEG): container finished" podID="95fabedb-25f2-43b3-a1dc-907c7e3ad4c2" containerID="f20c4f9240178a6b16d72036de81e1bfd4bd3eea795887e3d5113d33ffd2e9a8" exitCode=0 Jan 29 06:37:45 crc kubenswrapper[5017]: I0129 06:37:45.949858 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-969vd" event={"ID":"95fabedb-25f2-43b3-a1dc-907c7e3ad4c2","Type":"ContainerDied","Data":"f20c4f9240178a6b16d72036de81e1bfd4bd3eea795887e3d5113d33ffd2e9a8"} Jan 29 06:37:45 crc kubenswrapper[5017]: I0129 06:37:45.951640 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-zcr4n"] Jan 29 06:37:45 crc kubenswrapper[5017]: I0129 06:37:45.952996 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zcr4n" Jan 29 06:37:45 crc kubenswrapper[5017]: I0129 06:37:45.957781 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 29 06:37:45 crc kubenswrapper[5017]: I0129 06:37:45.975026 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zcr4n"] Jan 29 06:37:46 crc kubenswrapper[5017]: I0129 06:37:46.009034 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 29 06:37:46 crc kubenswrapper[5017]: I0129 06:37:46.009879 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 06:37:46 crc kubenswrapper[5017]: I0129 06:37:46.015943 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 29 06:37:46 crc kubenswrapper[5017]: I0129 06:37:46.017085 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 29 06:37:46 crc kubenswrapper[5017]: I0129 06:37:46.023017 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-sckkt" podStartSLOduration=131.022983789 podStartE2EDuration="2m11.022983789s" podCreationTimestamp="2026-01-29 06:35:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:37:45.999855353 +0000 UTC m=+152.374302963" watchObservedRunningTime="2026-01-29 06:37:46.022983789 +0000 UTC m=+152.397431399" Jan 29 06:37:46 crc kubenswrapper[5017]: I0129 06:37:46.028667 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d75df544-700b-4972-b1db-4f5d6ddf1ab7-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"d75df544-700b-4972-b1db-4f5d6ddf1ab7\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 06:37:46 crc kubenswrapper[5017]: I0129 06:37:46.028743 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d75df544-700b-4972-b1db-4f5d6ddf1ab7-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"d75df544-700b-4972-b1db-4f5d6ddf1ab7\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 06:37:46 crc kubenswrapper[5017]: I0129 06:37:46.032545 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 29 06:37:46 crc kubenswrapper[5017]: I0129 06:37:46.036010 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-z5brc" Jan 29 06:37:46 crc kubenswrapper[5017]: I0129 06:37:46.036050 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-z5brc" Jan 29 06:37:46 crc kubenswrapper[5017]: I0129 06:37:46.052019 5017 patch_prober.go:28] interesting pod/console-f9d7485db-z5brc container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.23:8443/health\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Jan 29 06:37:46 crc kubenswrapper[5017]: I0129 06:37:46.052094 5017 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-z5brc" podUID="23943ec6-beb6-4bef-b4b1-e5c840ab997b" containerName="console" probeResult="failure" output="Get \"https://10.217.0.23:8443/health\": dial tcp 10.217.0.23:8443: connect: connection refused" Jan 29 06:37:46 crc kubenswrapper[5017]: I0129 06:37:46.129936 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d75df544-700b-4972-b1db-4f5d6ddf1ab7-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"d75df544-700b-4972-b1db-4f5d6ddf1ab7\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 06:37:46 crc 
kubenswrapper[5017]: I0129 06:37:46.130023 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtkdc\" (UniqueName: \"kubernetes.io/projected/402aa844-38d7-44aa-bfa8-8db490d3aa4b-kube-api-access-qtkdc\") pod \"redhat-marketplace-zcr4n\" (UID: \"402aa844-38d7-44aa-bfa8-8db490d3aa4b\") " pod="openshift-marketplace/redhat-marketplace-zcr4n" Jan 29 06:37:46 crc kubenswrapper[5017]: I0129 06:37:46.130064 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d75df544-700b-4972-b1db-4f5d6ddf1ab7-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"d75df544-700b-4972-b1db-4f5d6ddf1ab7\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 06:37:46 crc kubenswrapper[5017]: I0129 06:37:46.130099 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/402aa844-38d7-44aa-bfa8-8db490d3aa4b-utilities\") pod \"redhat-marketplace-zcr4n\" (UID: \"402aa844-38d7-44aa-bfa8-8db490d3aa4b\") " pod="openshift-marketplace/redhat-marketplace-zcr4n" Jan 29 06:37:46 crc kubenswrapper[5017]: I0129 06:37:46.130580 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/402aa844-38d7-44aa-bfa8-8db490d3aa4b-catalog-content\") pod \"redhat-marketplace-zcr4n\" (UID: \"402aa844-38d7-44aa-bfa8-8db490d3aa4b\") " pod="openshift-marketplace/redhat-marketplace-zcr4n" Jan 29 06:37:46 crc kubenswrapper[5017]: I0129 06:37:46.130627 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d75df544-700b-4972-b1db-4f5d6ddf1ab7-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"d75df544-700b-4972-b1db-4f5d6ddf1ab7\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 06:37:46 crc kubenswrapper[5017]: I0129 06:37:46.154588 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d75df544-700b-4972-b1db-4f5d6ddf1ab7-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"d75df544-700b-4972-b1db-4f5d6ddf1ab7\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 06:37:46 crc kubenswrapper[5017]: I0129 06:37:46.155674 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zlflf" Jan 29 06:37:46 crc kubenswrapper[5017]: I0129 06:37:46.155709 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zlflf" Jan 29 06:37:46 crc kubenswrapper[5017]: I0129 06:37:46.170156 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zlflf" Jan 29 06:37:46 crc kubenswrapper[5017]: I0129 06:37:46.231862 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/402aa844-38d7-44aa-bfa8-8db490d3aa4b-catalog-content\") pod \"redhat-marketplace-zcr4n\" (UID: \"402aa844-38d7-44aa-bfa8-8db490d3aa4b\") " pod="openshift-marketplace/redhat-marketplace-zcr4n" Jan 29 06:37:46 crc kubenswrapper[5017]: I0129 06:37:46.232114 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtkdc\" 
(UniqueName: \"kubernetes.io/projected/402aa844-38d7-44aa-bfa8-8db490d3aa4b-kube-api-access-qtkdc\") pod \"redhat-marketplace-zcr4n\" (UID: \"402aa844-38d7-44aa-bfa8-8db490d3aa4b\") " pod="openshift-marketplace/redhat-marketplace-zcr4n" Jan 29 06:37:46 crc kubenswrapper[5017]: I0129 06:37:46.232165 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/402aa844-38d7-44aa-bfa8-8db490d3aa4b-utilities\") pod \"redhat-marketplace-zcr4n\" (UID: \"402aa844-38d7-44aa-bfa8-8db490d3aa4b\") " pod="openshift-marketplace/redhat-marketplace-zcr4n" Jan 29 06:37:46 crc kubenswrapper[5017]: I0129 06:37:46.232524 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/402aa844-38d7-44aa-bfa8-8db490d3aa4b-catalog-content\") pod \"redhat-marketplace-zcr4n\" (UID: \"402aa844-38d7-44aa-bfa8-8db490d3aa4b\") " pod="openshift-marketplace/redhat-marketplace-zcr4n" Jan 29 06:37:46 crc kubenswrapper[5017]: I0129 06:37:46.232647 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/402aa844-38d7-44aa-bfa8-8db490d3aa4b-utilities\") pod \"redhat-marketplace-zcr4n\" (UID: \"402aa844-38d7-44aa-bfa8-8db490d3aa4b\") " pod="openshift-marketplace/redhat-marketplace-zcr4n" Jan 29 06:37:46 crc kubenswrapper[5017]: I0129 06:37:46.251870 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtkdc\" (UniqueName: \"kubernetes.io/projected/402aa844-38d7-44aa-bfa8-8db490d3aa4b-kube-api-access-qtkdc\") pod \"redhat-marketplace-zcr4n\" (UID: \"402aa844-38d7-44aa-bfa8-8db490d3aa4b\") " pod="openshift-marketplace/redhat-marketplace-zcr4n" Jan 29 06:37:46 crc kubenswrapper[5017]: I0129 06:37:46.272721 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zcr4n" Jan 29 06:37:46 crc kubenswrapper[5017]: I0129 06:37:46.359694 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 29 06:37:46 crc kubenswrapper[5017]: I0129 06:37:46.360521 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 06:37:46 crc kubenswrapper[5017]: I0129 06:37:46.380915 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-8l5h7"] Jan 29 06:37:46 crc kubenswrapper[5017]: I0129 06:37:46.388289 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8l5h7" Jan 29 06:37:46 crc kubenswrapper[5017]: I0129 06:37:46.401177 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8l5h7"] Jan 29 06:37:46 crc kubenswrapper[5017]: I0129 06:37:46.541630 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a83caf08-35b0-460b-ba5e-1db0c6cab902-catalog-content\") pod \"redhat-marketplace-8l5h7\" (UID: \"a83caf08-35b0-460b-ba5e-1db0c6cab902\") " pod="openshift-marketplace/redhat-marketplace-8l5h7" Jan 29 06:37:46 crc kubenswrapper[5017]: I0129 06:37:46.542156 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a83caf08-35b0-460b-ba5e-1db0c6cab902-utilities\") pod \"redhat-marketplace-8l5h7\" (UID: \"a83caf08-35b0-460b-ba5e-1db0c6cab902\") " pod="openshift-marketplace/redhat-marketplace-8l5h7" Jan 29 06:37:46 crc kubenswrapper[5017]: I0129 06:37:46.542206 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrctg\" (UniqueName: \"kubernetes.io/projected/a83caf08-35b0-460b-ba5e-1db0c6cab902-kube-api-access-hrctg\") pod \"redhat-marketplace-8l5h7\" (UID: \"a83caf08-35b0-460b-ba5e-1db0c6cab902\") " pod="openshift-marketplace/redhat-marketplace-8l5h7" Jan 29 06:37:46 crc kubenswrapper[5017]: I0129 06:37:46.643708 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrctg\" (UniqueName: \"kubernetes.io/projected/a83caf08-35b0-460b-ba5e-1db0c6cab902-kube-api-access-hrctg\") pod \"redhat-marketplace-8l5h7\" (UID: \"a83caf08-35b0-460b-ba5e-1db0c6cab902\") " pod="openshift-marketplace/redhat-marketplace-8l5h7" Jan 29 06:37:46 crc kubenswrapper[5017]: I0129 06:37:46.643824 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a83caf08-35b0-460b-ba5e-1db0c6cab902-catalog-content\") pod \"redhat-marketplace-8l5h7\" (UID: \"a83caf08-35b0-460b-ba5e-1db0c6cab902\") " pod="openshift-marketplace/redhat-marketplace-8l5h7" Jan 29 06:37:46 crc kubenswrapper[5017]: I0129 06:37:46.643904 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a83caf08-35b0-460b-ba5e-1db0c6cab902-utilities\") pod \"redhat-marketplace-8l5h7\" (UID: \"a83caf08-35b0-460b-ba5e-1db0c6cab902\") " pod="openshift-marketplace/redhat-marketplace-8l5h7" Jan 29 06:37:46 crc kubenswrapper[5017]: I0129 06:37:46.645199 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a83caf08-35b0-460b-ba5e-1db0c6cab902-utilities\") pod \"redhat-marketplace-8l5h7\" (UID: \"a83caf08-35b0-460b-ba5e-1db0c6cab902\") " pod="openshift-marketplace/redhat-marketplace-8l5h7" Jan 29 06:37:46 crc kubenswrapper[5017]: I0129 06:37:46.645374 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a83caf08-35b0-460b-ba5e-1db0c6cab902-catalog-content\") pod \"redhat-marketplace-8l5h7\" (UID: \"a83caf08-35b0-460b-ba5e-1db0c6cab902\") " pod="openshift-marketplace/redhat-marketplace-8l5h7" Jan 29 06:37:46 crc kubenswrapper[5017]: I0129 06:37:46.653822 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-zcr4n"] Jan 29 06:37:46 crc kubenswrapper[5017]: I0129 06:37:46.655434 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-rlfl6" Jan 29 06:37:46 crc kubenswrapper[5017]: I0129 06:37:46.659078 5017 patch_prober.go:28] interesting pod/router-default-5444994796-rlfl6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 06:37:46 crc kubenswrapper[5017]: [-]has-synced failed: reason withheld Jan 29 06:37:46 crc kubenswrapper[5017]: [+]process-running ok Jan 29 06:37:46 crc kubenswrapper[5017]: healthz check failed Jan 29 06:37:46 crc kubenswrapper[5017]: I0129 06:37:46.659135 5017 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rlfl6" podUID="6e2073b6-4205-42ab-8282-8bb749d7ef3d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 06:37:46 crc kubenswrapper[5017]: I0129 06:37:46.669801 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrctg\" (UniqueName: \"kubernetes.io/projected/a83caf08-35b0-460b-ba5e-1db0c6cab902-kube-api-access-hrctg\") pod \"redhat-marketplace-8l5h7\" (UID: \"a83caf08-35b0-460b-ba5e-1db0c6cab902\") " pod="openshift-marketplace/redhat-marketplace-8l5h7" Jan 29 06:37:46 crc kubenswrapper[5017]: W0129 06:37:46.670902 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod402aa844_38d7_44aa_bfa8_8db490d3aa4b.slice/crio-06459df7bc080380ebcf3afcc4b465453086548a29da725a6500e333f0692a6c WatchSource:0}: Error finding container 06459df7bc080380ebcf3afcc4b465453086548a29da725a6500e333f0692a6c: Status 404 returned error can't find the container with id 06459df7bc080380ebcf3afcc4b465453086548a29da725a6500e333f0692a6c Jan 29 06:37:46 crc kubenswrapper[5017]: I0129 06:37:46.764554 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8l5h7" Jan 29 06:37:46 crc kubenswrapper[5017]: I0129 06:37:46.967047 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 29 06:37:46 crc kubenswrapper[5017]: I0129 06:37:46.968590 5017 generic.go:334] "Generic (PLEG): container finished" podID="3deabecf-5a0e-454c-a622-b3b422c3a6bb" containerID="e4719b5859fe2d77193f7a99b8cf52f589094058414e4305256890882e2bbb1d" exitCode=0 Jan 29 06:37:46 crc kubenswrapper[5017]: I0129 06:37:46.968675 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rnzhc" event={"ID":"3deabecf-5a0e-454c-a622-b3b422c3a6bb","Type":"ContainerDied","Data":"e4719b5859fe2d77193f7a99b8cf52f589094058414e4305256890882e2bbb1d"} Jan 29 06:37:46 crc kubenswrapper[5017]: I0129 06:37:46.975314 5017 generic.go:334] "Generic (PLEG): container finished" podID="402aa844-38d7-44aa-bfa8-8db490d3aa4b" containerID="7a49a7f353771517e8db8bc24d005720b0e7dcb915c7b93b3d84338c2ad6149c" exitCode=0 Jan 29 06:37:46 crc kubenswrapper[5017]: I0129 06:37:46.975617 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zcr4n" event={"ID":"402aa844-38d7-44aa-bfa8-8db490d3aa4b","Type":"ContainerDied","Data":"7a49a7f353771517e8db8bc24d005720b0e7dcb915c7b93b3d84338c2ad6149c"} Jan 29 06:37:46 crc kubenswrapper[5017]: I0129 06:37:46.975647 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zcr4n" event={"ID":"402aa844-38d7-44aa-bfa8-8db490d3aa4b","Type":"ContainerStarted","Data":"06459df7bc080380ebcf3afcc4b465453086548a29da725a6500e333f0692a6c"} Jan 29 06:37:46 crc kubenswrapper[5017]: I0129 06:37:46.983313 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zlflf" Jan 29 06:37:47 crc kubenswrapper[5017]: I0129 06:37:47.147161 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2cqwl"] Jan 29 06:37:47 crc kubenswrapper[5017]: I0129 06:37:47.149503 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2cqwl" Jan 29 06:37:47 crc kubenswrapper[5017]: I0129 06:37:47.158619 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2cqwl"] Jan 29 06:37:47 crc kubenswrapper[5017]: I0129 06:37:47.158889 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 29 06:37:47 crc kubenswrapper[5017]: I0129 06:37:47.254742 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87zhg\" (UniqueName: \"kubernetes.io/projected/d8f46fb7-929f-4d96-a5ca-4fc475b78342-kube-api-access-87zhg\") pod \"redhat-operators-2cqwl\" (UID: \"d8f46fb7-929f-4d96-a5ca-4fc475b78342\") " pod="openshift-marketplace/redhat-operators-2cqwl" Jan 29 06:37:47 crc kubenswrapper[5017]: I0129 06:37:47.254815 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8f46fb7-929f-4d96-a5ca-4fc475b78342-utilities\") pod \"redhat-operators-2cqwl\" (UID: \"d8f46fb7-929f-4d96-a5ca-4fc475b78342\") " pod="openshift-marketplace/redhat-operators-2cqwl" Jan 29 06:37:47 crc kubenswrapper[5017]: I0129 06:37:47.255189 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8f46fb7-929f-4d96-a5ca-4fc475b78342-catalog-content\") pod \"redhat-operators-2cqwl\" (UID: \"d8f46fb7-929f-4d96-a5ca-4fc475b78342\") " pod="openshift-marketplace/redhat-operators-2cqwl" Jan 29 06:37:47 crc kubenswrapper[5017]: I0129 06:37:47.361236 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87zhg\" (UniqueName: \"kubernetes.io/projected/d8f46fb7-929f-4d96-a5ca-4fc475b78342-kube-api-access-87zhg\") pod \"redhat-operators-2cqwl\" (UID: \"d8f46fb7-929f-4d96-a5ca-4fc475b78342\") " pod="openshift-marketplace/redhat-operators-2cqwl" Jan 29 06:37:47 crc kubenswrapper[5017]: I0129 06:37:47.361322 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8f46fb7-929f-4d96-a5ca-4fc475b78342-utilities\") pod \"redhat-operators-2cqwl\" (UID: \"d8f46fb7-929f-4d96-a5ca-4fc475b78342\") " pod="openshift-marketplace/redhat-operators-2cqwl" Jan 29 06:37:47 crc kubenswrapper[5017]: I0129 06:37:47.361474 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8f46fb7-929f-4d96-a5ca-4fc475b78342-catalog-content\") pod \"redhat-operators-2cqwl\" (UID: \"d8f46fb7-929f-4d96-a5ca-4fc475b78342\") " pod="openshift-marketplace/redhat-operators-2cqwl" Jan 29 06:37:47 crc kubenswrapper[5017]: I0129 06:37:47.361879 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8f46fb7-929f-4d96-a5ca-4fc475b78342-utilities\") pod \"redhat-operators-2cqwl\" (UID: \"d8f46fb7-929f-4d96-a5ca-4fc475b78342\") " pod="openshift-marketplace/redhat-operators-2cqwl" Jan 29 06:37:47 crc kubenswrapper[5017]: I0129 06:37:47.362011 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8f46fb7-929f-4d96-a5ca-4fc475b78342-catalog-content\") pod \"redhat-operators-2cqwl\" (UID: \"d8f46fb7-929f-4d96-a5ca-4fc475b78342\") " 
pod="openshift-marketplace/redhat-operators-2cqwl" Jan 29 06:37:47 crc kubenswrapper[5017]: I0129 06:37:47.391270 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87zhg\" (UniqueName: \"kubernetes.io/projected/d8f46fb7-929f-4d96-a5ca-4fc475b78342-kube-api-access-87zhg\") pod \"redhat-operators-2cqwl\" (UID: \"d8f46fb7-929f-4d96-a5ca-4fc475b78342\") " pod="openshift-marketplace/redhat-operators-2cqwl" Jan 29 06:37:47 crc kubenswrapper[5017]: I0129 06:37:47.456856 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494470-2sw2z" Jan 29 06:37:47 crc kubenswrapper[5017]: I0129 06:37:47.523710 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2cqwl" Jan 29 06:37:47 crc kubenswrapper[5017]: I0129 06:37:47.555496 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8l5h7"] Jan 29 06:37:47 crc kubenswrapper[5017]: I0129 06:37:47.573242 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4ljw2"] Jan 29 06:37:47 crc kubenswrapper[5017]: E0129 06:37:47.573770 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fbf9f0a-9025-4eca-b0d2-00f87df0c16e" containerName="collect-profiles" Jan 29 06:37:47 crc kubenswrapper[5017]: I0129 06:37:47.573784 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fbf9f0a-9025-4eca-b0d2-00f87df0c16e" containerName="collect-profiles" Jan 29 06:37:47 crc kubenswrapper[5017]: I0129 06:37:47.574022 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fbf9f0a-9025-4eca-b0d2-00f87df0c16e" containerName="collect-profiles" Jan 29 06:37:47 crc kubenswrapper[5017]: I0129 06:37:47.574523 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dkshx\" (UniqueName: \"kubernetes.io/projected/7fbf9f0a-9025-4eca-b0d2-00f87df0c16e-kube-api-access-dkshx\") pod \"7fbf9f0a-9025-4eca-b0d2-00f87df0c16e\" (UID: \"7fbf9f0a-9025-4eca-b0d2-00f87df0c16e\") " Jan 29 06:37:47 crc kubenswrapper[5017]: I0129 06:37:47.574617 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7fbf9f0a-9025-4eca-b0d2-00f87df0c16e-secret-volume\") pod \"7fbf9f0a-9025-4eca-b0d2-00f87df0c16e\" (UID: \"7fbf9f0a-9025-4eca-b0d2-00f87df0c16e\") " Jan 29 06:37:47 crc kubenswrapper[5017]: I0129 06:37:47.574701 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7fbf9f0a-9025-4eca-b0d2-00f87df0c16e-config-volume\") pod \"7fbf9f0a-9025-4eca-b0d2-00f87df0c16e\" (UID: \"7fbf9f0a-9025-4eca-b0d2-00f87df0c16e\") " Jan 29 06:37:47 crc kubenswrapper[5017]: I0129 06:37:47.578054 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7fbf9f0a-9025-4eca-b0d2-00f87df0c16e-config-volume" (OuterVolumeSpecName: "config-volume") pod "7fbf9f0a-9025-4eca-b0d2-00f87df0c16e" (UID: "7fbf9f0a-9025-4eca-b0d2-00f87df0c16e"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:37:47 crc kubenswrapper[5017]: I0129 06:37:47.586264 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4ljw2" Jan 29 06:37:47 crc kubenswrapper[5017]: I0129 06:37:47.590568 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fbf9f0a-9025-4eca-b0d2-00f87df0c16e-kube-api-access-dkshx" (OuterVolumeSpecName: "kube-api-access-dkshx") pod "7fbf9f0a-9025-4eca-b0d2-00f87df0c16e" (UID: "7fbf9f0a-9025-4eca-b0d2-00f87df0c16e"). InnerVolumeSpecName "kube-api-access-dkshx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:37:47 crc kubenswrapper[5017]: I0129 06:37:47.590578 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fbf9f0a-9025-4eca-b0d2-00f87df0c16e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7fbf9f0a-9025-4eca-b0d2-00f87df0c16e" (UID: "7fbf9f0a-9025-4eca-b0d2-00f87df0c16e"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:37:47 crc kubenswrapper[5017]: I0129 06:37:47.630826 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4ljw2"] Jan 29 06:37:47 crc kubenswrapper[5017]: I0129 06:37:47.671414 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-rlfl6" Jan 29 06:37:47 crc kubenswrapper[5017]: I0129 06:37:47.678433 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9749d75a-01c6-42c9-a642-8c51895c9cbf-catalog-content\") pod \"redhat-operators-4ljw2\" (UID: \"9749d75a-01c6-42c9-a642-8c51895c9cbf\") " pod="openshift-marketplace/redhat-operators-4ljw2" Jan 29 06:37:47 crc kubenswrapper[5017]: I0129 06:37:47.678498 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9749d75a-01c6-42c9-a642-8c51895c9cbf-utilities\") pod \"redhat-operators-4ljw2\" (UID: \"9749d75a-01c6-42c9-a642-8c51895c9cbf\") " pod="openshift-marketplace/redhat-operators-4ljw2" Jan 29 06:37:47 crc kubenswrapper[5017]: I0129 06:37:47.678547 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5z6k\" (UniqueName: \"kubernetes.io/projected/9749d75a-01c6-42c9-a642-8c51895c9cbf-kube-api-access-z5z6k\") pod \"redhat-operators-4ljw2\" (UID: \"9749d75a-01c6-42c9-a642-8c51895c9cbf\") " pod="openshift-marketplace/redhat-operators-4ljw2" Jan 29 06:37:47 crc kubenswrapper[5017]: I0129 06:37:47.678591 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dkshx\" (UniqueName: \"kubernetes.io/projected/7fbf9f0a-9025-4eca-b0d2-00f87df0c16e-kube-api-access-dkshx\") on node \"crc\" DevicePath \"\"" Jan 29 06:37:47 crc kubenswrapper[5017]: I0129 06:37:47.678602 5017 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7fbf9f0a-9025-4eca-b0d2-00f87df0c16e-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 29 06:37:47 crc kubenswrapper[5017]: I0129 06:37:47.678612 5017 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7fbf9f0a-9025-4eca-b0d2-00f87df0c16e-config-volume\") on node \"crc\" DevicePath \"\"" Jan 29 06:37:47 crc kubenswrapper[5017]: I0129 06:37:47.679218 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-ingress/router-default-5444994796-rlfl6" Jan 29 06:37:47 crc kubenswrapper[5017]: I0129 06:37:47.783785 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9749d75a-01c6-42c9-a642-8c51895c9cbf-catalog-content\") pod \"redhat-operators-4ljw2\" (UID: \"9749d75a-01c6-42c9-a642-8c51895c9cbf\") " pod="openshift-marketplace/redhat-operators-4ljw2" Jan 29 06:37:47 crc kubenswrapper[5017]: I0129 06:37:47.783843 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9749d75a-01c6-42c9-a642-8c51895c9cbf-utilities\") pod \"redhat-operators-4ljw2\" (UID: \"9749d75a-01c6-42c9-a642-8c51895c9cbf\") " pod="openshift-marketplace/redhat-operators-4ljw2" Jan 29 06:37:47 crc kubenswrapper[5017]: I0129 06:37:47.783898 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5z6k\" (UniqueName: \"kubernetes.io/projected/9749d75a-01c6-42c9-a642-8c51895c9cbf-kube-api-access-z5z6k\") pod \"redhat-operators-4ljw2\" (UID: \"9749d75a-01c6-42c9-a642-8c51895c9cbf\") " pod="openshift-marketplace/redhat-operators-4ljw2" Jan 29 06:37:47 crc kubenswrapper[5017]: I0129 06:37:47.785752 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9749d75a-01c6-42c9-a642-8c51895c9cbf-utilities\") pod \"redhat-operators-4ljw2\" (UID: \"9749d75a-01c6-42c9-a642-8c51895c9cbf\") " pod="openshift-marketplace/redhat-operators-4ljw2" Jan 29 06:37:47 crc kubenswrapper[5017]: I0129 06:37:47.786103 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9749d75a-01c6-42c9-a642-8c51895c9cbf-catalog-content\") pod \"redhat-operators-4ljw2\" (UID: \"9749d75a-01c6-42c9-a642-8c51895c9cbf\") " pod="openshift-marketplace/redhat-operators-4ljw2" Jan 29 06:37:47 crc kubenswrapper[5017]: I0129 06:37:47.841243 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5z6k\" (UniqueName: \"kubernetes.io/projected/9749d75a-01c6-42c9-a642-8c51895c9cbf-kube-api-access-z5z6k\") pod \"redhat-operators-4ljw2\" (UID: \"9749d75a-01c6-42c9-a642-8c51895c9cbf\") " pod="openshift-marketplace/redhat-operators-4ljw2" Jan 29 06:37:47 crc kubenswrapper[5017]: I0129 06:37:47.934327 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2cqwl"] Jan 29 06:37:47 crc kubenswrapper[5017]: I0129 06:37:47.958807 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4ljw2" Jan 29 06:37:48 crc kubenswrapper[5017]: I0129 06:37:48.067864 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494470-2sw2z" event={"ID":"7fbf9f0a-9025-4eca-b0d2-00f87df0c16e","Type":"ContainerDied","Data":"c0b1761800414aa48ed7f86fade4fdfa4a56701f011a1788e821f89bfd9bc12f"} Jan 29 06:37:48 crc kubenswrapper[5017]: I0129 06:37:48.067913 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c0b1761800414aa48ed7f86fade4fdfa4a56701f011a1788e821f89bfd9bc12f" Jan 29 06:37:48 crc kubenswrapper[5017]: I0129 06:37:48.068065 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494470-2sw2z" Jan 29 06:37:48 crc kubenswrapper[5017]: I0129 06:37:48.073977 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"d75df544-700b-4972-b1db-4f5d6ddf1ab7","Type":"ContainerStarted","Data":"ea46038a4b077e3e7c37007606f6917841573faf9fb30d407f16f723b1f8ad96"} Jan 29 06:37:48 crc kubenswrapper[5017]: I0129 06:37:48.092031 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2cqwl" event={"ID":"d8f46fb7-929f-4d96-a5ca-4fc475b78342","Type":"ContainerStarted","Data":"17185ad7dd95fe02645fee9ec954fbd2a276134de547f0bae383404856e7fa4d"} Jan 29 06:37:48 crc kubenswrapper[5017]: I0129 06:37:48.103259 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8l5h7" event={"ID":"a83caf08-35b0-460b-ba5e-1db0c6cab902","Type":"ContainerStarted","Data":"2b9100cb26692555c45b78da1ebcf6a0ff0fea60067691821b157d31b026e7a0"} Jan 29 06:37:48 crc kubenswrapper[5017]: I0129 06:37:48.557216 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 29 06:37:48 crc kubenswrapper[5017]: I0129 06:37:48.558564 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 06:37:48 crc kubenswrapper[5017]: I0129 06:37:48.564195 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 29 06:37:48 crc kubenswrapper[5017]: I0129 06:37:48.568933 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 29 06:37:48 crc kubenswrapper[5017]: I0129 06:37:48.569206 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 29 06:37:48 crc kubenswrapper[5017]: I0129 06:37:48.647580 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4ljw2"] Jan 29 06:37:48 crc kubenswrapper[5017]: W0129 06:37:48.700638 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9749d75a_01c6_42c9_a642_8c51895c9cbf.slice/crio-c4a0569b92481e287993ad4bab3712074e37ef86550ba852879c4a29f6896375 WatchSource:0}: Error finding container c4a0569b92481e287993ad4bab3712074e37ef86550ba852879c4a29f6896375: Status 404 returned error can't find the container with id c4a0569b92481e287993ad4bab3712074e37ef86550ba852879c4a29f6896375 Jan 29 06:37:48 crc kubenswrapper[5017]: I0129 06:37:48.728459 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/91035c25-228e-4c64-8f80-3f909c24299a-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"91035c25-228e-4c64-8f80-3f909c24299a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 06:37:48 crc kubenswrapper[5017]: I0129 06:37:48.728500 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/91035c25-228e-4c64-8f80-3f909c24299a-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"91035c25-228e-4c64-8f80-3f909c24299a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 06:37:48 crc kubenswrapper[5017]: I0129 
06:37:48.829527 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/91035c25-228e-4c64-8f80-3f909c24299a-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"91035c25-228e-4c64-8f80-3f909c24299a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 06:37:48 crc kubenswrapper[5017]: I0129 06:37:48.829595 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/91035c25-228e-4c64-8f80-3f909c24299a-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"91035c25-228e-4c64-8f80-3f909c24299a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 06:37:48 crc kubenswrapper[5017]: I0129 06:37:48.829723 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/91035c25-228e-4c64-8f80-3f909c24299a-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"91035c25-228e-4c64-8f80-3f909c24299a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 06:37:48 crc kubenswrapper[5017]: I0129 06:37:48.853717 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/91035c25-228e-4c64-8f80-3f909c24299a-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"91035c25-228e-4c64-8f80-3f909c24299a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 06:37:48 crc kubenswrapper[5017]: I0129 06:37:48.936383 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 06:37:49 crc kubenswrapper[5017]: I0129 06:37:49.179225 5017 generic.go:334] "Generic (PLEG): container finished" podID="a83caf08-35b0-460b-ba5e-1db0c6cab902" containerID="c936a40f83f79a706574e2b5115ede9054a6d3ac6993796bb06d5f6df4f34a56" exitCode=0 Jan 29 06:37:49 crc kubenswrapper[5017]: I0129 06:37:49.179331 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8l5h7" event={"ID":"a83caf08-35b0-460b-ba5e-1db0c6cab902","Type":"ContainerDied","Data":"c936a40f83f79a706574e2b5115ede9054a6d3ac6993796bb06d5f6df4f34a56"} Jan 29 06:37:49 crc kubenswrapper[5017]: I0129 06:37:49.185579 5017 generic.go:334] "Generic (PLEG): container finished" podID="9749d75a-01c6-42c9-a642-8c51895c9cbf" containerID="b6ed2b77f1234f13601058ef29bda01ffe04a830f5d71be8fe6d27b681c6175e" exitCode=0 Jan 29 06:37:49 crc kubenswrapper[5017]: I0129 06:37:49.185653 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4ljw2" event={"ID":"9749d75a-01c6-42c9-a642-8c51895c9cbf","Type":"ContainerDied","Data":"b6ed2b77f1234f13601058ef29bda01ffe04a830f5d71be8fe6d27b681c6175e"} Jan 29 06:37:49 crc kubenswrapper[5017]: I0129 06:37:49.185682 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4ljw2" event={"ID":"9749d75a-01c6-42c9-a642-8c51895c9cbf","Type":"ContainerStarted","Data":"c4a0569b92481e287993ad4bab3712074e37ef86550ba852879c4a29f6896375"} Jan 29 06:37:49 crc kubenswrapper[5017]: I0129 06:37:49.188591 5017 generic.go:334] "Generic (PLEG): container finished" podID="d8f46fb7-929f-4d96-a5ca-4fc475b78342" containerID="903dc8a1cde43c7ef514ea551e0095d4527883515b7255abebb3b65ef97f30d5" exitCode=0 Jan 29 06:37:49 crc kubenswrapper[5017]: I0129 06:37:49.188662 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-2cqwl" event={"ID":"d8f46fb7-929f-4d96-a5ca-4fc475b78342","Type":"ContainerDied","Data":"903dc8a1cde43c7ef514ea551e0095d4527883515b7255abebb3b65ef97f30d5"} Jan 29 06:37:49 crc kubenswrapper[5017]: I0129 06:37:49.192910 5017 generic.go:334] "Generic (PLEG): container finished" podID="d75df544-700b-4972-b1db-4f5d6ddf1ab7" containerID="e5afe606c4228e995449e3f172b9366bc7543ce5a0e62448f643ac056e39d2d9" exitCode=0 Jan 29 06:37:49 crc kubenswrapper[5017]: I0129 06:37:49.192936 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"d75df544-700b-4972-b1db-4f5d6ddf1ab7","Type":"ContainerDied","Data":"e5afe606c4228e995449e3f172b9366bc7543ce5a0e62448f643ac056e39d2d9"} Jan 29 06:37:49 crc kubenswrapper[5017]: I0129 06:37:49.313756 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 29 06:37:49 crc kubenswrapper[5017]: W0129 06:37:49.327180 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod91035c25_228e_4c64_8f80_3f909c24299a.slice/crio-71961e758c9bfd656ea71fb5641330170b3f67b8973b0e89d5fbad33a9c35760 WatchSource:0}: Error finding container 71961e758c9bfd656ea71fb5641330170b3f67b8973b0e89d5fbad33a9c35760: Status 404 returned error can't find the container with id 71961e758c9bfd656ea71fb5641330170b3f67b8973b0e89d5fbad33a9c35760 Jan 29 06:37:50 crc kubenswrapper[5017]: I0129 06:37:50.225427 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"91035c25-228e-4c64-8f80-3f909c24299a","Type":"ContainerStarted","Data":"708f70aed2b6c33847d767b838bb2e29eaeecaa54448f570d826c7db81d88186"} Jan 29 06:37:50 crc kubenswrapper[5017]: I0129 06:37:50.226114 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"91035c25-228e-4c64-8f80-3f909c24299a","Type":"ContainerStarted","Data":"71961e758c9bfd656ea71fb5641330170b3f67b8973b0e89d5fbad33a9c35760"} Jan 29 06:37:50 crc kubenswrapper[5017]: I0129 06:37:50.253770 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.2537493619999998 podStartE2EDuration="2.253749362s" podCreationTimestamp="2026-01-29 06:37:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:37:50.249909843 +0000 UTC m=+156.624357473" watchObservedRunningTime="2026-01-29 06:37:50.253749362 +0000 UTC m=+156.628196982" Jan 29 06:37:50 crc kubenswrapper[5017]: I0129 06:37:50.669269 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 06:37:50 crc kubenswrapper[5017]: I0129 06:37:50.795527 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d75df544-700b-4972-b1db-4f5d6ddf1ab7-kubelet-dir\") pod \"d75df544-700b-4972-b1db-4f5d6ddf1ab7\" (UID: \"d75df544-700b-4972-b1db-4f5d6ddf1ab7\") " Jan 29 06:37:50 crc kubenswrapper[5017]: I0129 06:37:50.795638 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d75df544-700b-4972-b1db-4f5d6ddf1ab7-kube-api-access\") pod \"d75df544-700b-4972-b1db-4f5d6ddf1ab7\" (UID: \"d75df544-700b-4972-b1db-4f5d6ddf1ab7\") " Jan 29 06:37:50 crc kubenswrapper[5017]: I0129 06:37:50.795663 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d75df544-700b-4972-b1db-4f5d6ddf1ab7-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "d75df544-700b-4972-b1db-4f5d6ddf1ab7" (UID: "d75df544-700b-4972-b1db-4f5d6ddf1ab7"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 06:37:50 crc kubenswrapper[5017]: I0129 06:37:50.795905 5017 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d75df544-700b-4972-b1db-4f5d6ddf1ab7-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 29 06:37:50 crc kubenswrapper[5017]: I0129 06:37:50.817217 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d75df544-700b-4972-b1db-4f5d6ddf1ab7-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d75df544-700b-4972-b1db-4f5d6ddf1ab7" (UID: "d75df544-700b-4972-b1db-4f5d6ddf1ab7"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:37:50 crc kubenswrapper[5017]: I0129 06:37:50.905946 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d75df544-700b-4972-b1db-4f5d6ddf1ab7-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 06:37:51 crc kubenswrapper[5017]: I0129 06:37:51.271749 5017 generic.go:334] "Generic (PLEG): container finished" podID="91035c25-228e-4c64-8f80-3f909c24299a" containerID="708f70aed2b6c33847d767b838bb2e29eaeecaa54448f570d826c7db81d88186" exitCode=0 Jan 29 06:37:51 crc kubenswrapper[5017]: I0129 06:37:51.272123 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"91035c25-228e-4c64-8f80-3f909c24299a","Type":"ContainerDied","Data":"708f70aed2b6c33847d767b838bb2e29eaeecaa54448f570d826c7db81d88186"} Jan 29 06:37:51 crc kubenswrapper[5017]: I0129 06:37:51.281896 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"d75df544-700b-4972-b1db-4f5d6ddf1ab7","Type":"ContainerDied","Data":"ea46038a4b077e3e7c37007606f6917841573faf9fb30d407f16f723b1f8ad96"} Jan 29 06:37:51 crc kubenswrapper[5017]: I0129 06:37:51.281938 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea46038a4b077e3e7c37007606f6917841573faf9fb30d407f16f723b1f8ad96" Jan 29 06:37:51 crc kubenswrapper[5017]: I0129 06:37:51.282020 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 06:37:51 crc kubenswrapper[5017]: I0129 06:37:51.538258 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-fhx55" Jan 29 06:37:52 crc kubenswrapper[5017]: I0129 06:37:52.644334 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 06:37:52 crc kubenswrapper[5017]: I0129 06:37:52.653904 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/91035c25-228e-4c64-8f80-3f909c24299a-kubelet-dir\") pod \"91035c25-228e-4c64-8f80-3f909c24299a\" (UID: \"91035c25-228e-4c64-8f80-3f909c24299a\") " Jan 29 06:37:52 crc kubenswrapper[5017]: I0129 06:37:52.653946 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/91035c25-228e-4c64-8f80-3f909c24299a-kube-api-access\") pod \"91035c25-228e-4c64-8f80-3f909c24299a\" (UID: \"91035c25-228e-4c64-8f80-3f909c24299a\") " Jan 29 06:37:52 crc kubenswrapper[5017]: I0129 06:37:52.654557 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91035c25-228e-4c64-8f80-3f909c24299a-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "91035c25-228e-4c64-8f80-3f909c24299a" (UID: "91035c25-228e-4c64-8f80-3f909c24299a"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 06:37:52 crc kubenswrapper[5017]: I0129 06:37:52.669708 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91035c25-228e-4c64-8f80-3f909c24299a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "91035c25-228e-4c64-8f80-3f909c24299a" (UID: "91035c25-228e-4c64-8f80-3f909c24299a"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:37:52 crc kubenswrapper[5017]: I0129 06:37:52.755701 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/91035c25-228e-4c64-8f80-3f909c24299a-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 06:37:52 crc kubenswrapper[5017]: I0129 06:37:52.755741 5017 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/91035c25-228e-4c64-8f80-3f909c24299a-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 29 06:37:53 crc kubenswrapper[5017]: I0129 06:37:53.307025 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"91035c25-228e-4c64-8f80-3f909c24299a","Type":"ContainerDied","Data":"71961e758c9bfd656ea71fb5641330170b3f67b8973b0e89d5fbad33a9c35760"} Jan 29 06:37:53 crc kubenswrapper[5017]: I0129 06:37:53.307084 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="71961e758c9bfd656ea71fb5641330170b3f67b8973b0e89d5fbad33a9c35760" Jan 29 06:37:53 crc kubenswrapper[5017]: I0129 06:37:53.307104 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 06:37:55 crc kubenswrapper[5017]: I0129 06:37:55.893685 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-9nv7f" Jan 29 06:37:56 crc kubenswrapper[5017]: I0129 06:37:56.039544 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-z5brc" Jan 29 06:37:56 crc kubenswrapper[5017]: I0129 06:37:56.043823 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-z5brc" Jan 29 06:37:56 crc kubenswrapper[5017]: I0129 06:37:56.539081 5017 patch_prober.go:28] interesting pod/machine-config-daemon-895pl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 06:37:56 crc kubenswrapper[5017]: I0129 06:37:56.539151 5017 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 06:37:57 crc kubenswrapper[5017]: I0129 06:37:57.164626 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f-metrics-certs\") pod \"network-metrics-daemon-xn4bq\" (UID: \"0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f\") " pod="openshift-multus/network-metrics-daemon-xn4bq" Jan 29 06:37:57 crc kubenswrapper[5017]: I0129 06:37:57.175331 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f-metrics-certs\") pod \"network-metrics-daemon-xn4bq\" (UID: \"0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f\") " pod="openshift-multus/network-metrics-daemon-xn4bq" Jan 29 06:37:57 crc kubenswrapper[5017]: I0129 06:37:57.243726 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-5zqqn"] Jan 29 06:37:57 crc kubenswrapper[5017]: I0129 06:37:57.244004 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-5zqqn" podUID="d21d2c22-5085-4712-a8d5-de95dc8a69b3" containerName="controller-manager" containerID="cri-o://e7e400cbd3423e76eaf410a1a7d86bd7bb8a9f1ce8405d781ab090852b44223a" gracePeriod=30 Jan 29 06:37:57 crc kubenswrapper[5017]: I0129 06:37:57.258628 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-wmnch"] Jan 29 06:37:57 crc kubenswrapper[5017]: I0129 06:37:57.258874 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wmnch" podUID="3641c614-3691-442a-95e4-13582cfd16d2" containerName="route-controller-manager" containerID="cri-o://6215bef46e327fd950b2ee707c27eaed5f5060258802d409b3e0e891d9b98f28" gracePeriod=30 Jan 29 06:37:57 crc kubenswrapper[5017]: I0129 06:37:57.432750 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-xn4bq" Jan 29 06:37:58 crc kubenswrapper[5017]: I0129 06:37:58.411556 5017 generic.go:334] "Generic (PLEG): container finished" podID="d21d2c22-5085-4712-a8d5-de95dc8a69b3" containerID="e7e400cbd3423e76eaf410a1a7d86bd7bb8a9f1ce8405d781ab090852b44223a" exitCode=0 Jan 29 06:37:58 crc kubenswrapper[5017]: I0129 06:37:58.411606 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-5zqqn" event={"ID":"d21d2c22-5085-4712-a8d5-de95dc8a69b3","Type":"ContainerDied","Data":"e7e400cbd3423e76eaf410a1a7d86bd7bb8a9f1ce8405d781ab090852b44223a"} Jan 29 06:38:03 crc kubenswrapper[5017]: I0129 06:38:03.444872 5017 generic.go:334] "Generic (PLEG): container finished" podID="3641c614-3691-442a-95e4-13582cfd16d2" containerID="6215bef46e327fd950b2ee707c27eaed5f5060258802d409b3e0e891d9b98f28" exitCode=0 Jan 29 06:38:03 crc kubenswrapper[5017]: I0129 06:38:03.444923 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wmnch" event={"ID":"3641c614-3691-442a-95e4-13582cfd16d2","Type":"ContainerDied","Data":"6215bef46e327fd950b2ee707c27eaed5f5060258802d409b3e0e891d9b98f28"} Jan 29 06:38:04 crc kubenswrapper[5017]: I0129 06:38:04.699885 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-sckkt" Jan 29 06:38:07 crc kubenswrapper[5017]: I0129 06:38:07.709831 5017 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-5zqqn container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 29 06:38:07 crc kubenswrapper[5017]: I0129 06:38:07.710847 5017 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-5zqqn" podUID="d21d2c22-5085-4712-a8d5-de95dc8a69b3" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 29 06:38:07 crc kubenswrapper[5017]: I0129 06:38:07.909427 5017 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-wmnch container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 29 06:38:07 crc kubenswrapper[5017]: I0129 06:38:07.909504 5017 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wmnch" podUID="3641c614-3691-442a-95e4-13582cfd16d2" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 29 06:38:11 crc kubenswrapper[5017]: I0129 06:38:11.868209 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-5zqqn" Jan 29 06:38:11 crc kubenswrapper[5017]: I0129 06:38:11.923932 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-78bc4cb977-llbq4"] Jan 29 06:38:11 crc kubenswrapper[5017]: E0129 06:38:11.924446 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d75df544-700b-4972-b1db-4f5d6ddf1ab7" containerName="pruner" Jan 29 06:38:11 crc kubenswrapper[5017]: I0129 06:38:11.924474 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="d75df544-700b-4972-b1db-4f5d6ddf1ab7" containerName="pruner" Jan 29 06:38:11 crc kubenswrapper[5017]: E0129 06:38:11.924496 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d21d2c22-5085-4712-a8d5-de95dc8a69b3" containerName="controller-manager" Jan 29 06:38:11 crc kubenswrapper[5017]: I0129 06:38:11.924510 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="d21d2c22-5085-4712-a8d5-de95dc8a69b3" containerName="controller-manager" Jan 29 06:38:11 crc kubenswrapper[5017]: E0129 06:38:11.924528 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91035c25-228e-4c64-8f80-3f909c24299a" containerName="pruner" Jan 29 06:38:11 crc kubenswrapper[5017]: I0129 06:38:11.924539 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="91035c25-228e-4c64-8f80-3f909c24299a" containerName="pruner" Jan 29 06:38:11 crc kubenswrapper[5017]: I0129 06:38:11.924702 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="d21d2c22-5085-4712-a8d5-de95dc8a69b3" containerName="controller-manager" Jan 29 06:38:11 crc kubenswrapper[5017]: I0129 06:38:11.924718 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="d75df544-700b-4972-b1db-4f5d6ddf1ab7" containerName="pruner" Jan 29 06:38:11 crc kubenswrapper[5017]: I0129 06:38:11.924736 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="91035c25-228e-4c64-8f80-3f909c24299a" containerName="pruner" Jan 29 06:38:11 crc kubenswrapper[5017]: I0129 06:38:11.925397 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-78bc4cb977-llbq4" Jan 29 06:38:11 crc kubenswrapper[5017]: I0129 06:38:11.926577 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d21d2c22-5085-4712-a8d5-de95dc8a69b3-client-ca\") pod \"d21d2c22-5085-4712-a8d5-de95dc8a69b3\" (UID: \"d21d2c22-5085-4712-a8d5-de95dc8a69b3\") " Jan 29 06:38:11 crc kubenswrapper[5017]: I0129 06:38:11.929258 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d21d2c22-5085-4712-a8d5-de95dc8a69b3-client-ca" (OuterVolumeSpecName: "client-ca") pod "d21d2c22-5085-4712-a8d5-de95dc8a69b3" (UID: "d21d2c22-5085-4712-a8d5-de95dc8a69b3"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:38:11 crc kubenswrapper[5017]: I0129 06:38:11.930544 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d21d2c22-5085-4712-a8d5-de95dc8a69b3-proxy-ca-bundles\") pod \"d21d2c22-5085-4712-a8d5-de95dc8a69b3\" (UID: \"d21d2c22-5085-4712-a8d5-de95dc8a69b3\") " Jan 29 06:38:11 crc kubenswrapper[5017]: I0129 06:38:11.931100 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d21d2c22-5085-4712-a8d5-de95dc8a69b3-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "d21d2c22-5085-4712-a8d5-de95dc8a69b3" (UID: "d21d2c22-5085-4712-a8d5-de95dc8a69b3"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:38:11 crc kubenswrapper[5017]: I0129 06:38:11.931167 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d21d2c22-5085-4712-a8d5-de95dc8a69b3-serving-cert\") pod \"d21d2c22-5085-4712-a8d5-de95dc8a69b3\" (UID: \"d21d2c22-5085-4712-a8d5-de95dc8a69b3\") " Jan 29 06:38:11 crc kubenswrapper[5017]: I0129 06:38:11.931231 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-78bc4cb977-llbq4"] Jan 29 06:38:11 crc kubenswrapper[5017]: I0129 06:38:11.932247 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9sbhv\" (UniqueName: \"kubernetes.io/projected/d21d2c22-5085-4712-a8d5-de95dc8a69b3-kube-api-access-9sbhv\") pod \"d21d2c22-5085-4712-a8d5-de95dc8a69b3\" (UID: \"d21d2c22-5085-4712-a8d5-de95dc8a69b3\") " Jan 29 06:38:11 crc kubenswrapper[5017]: I0129 06:38:11.932331 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d21d2c22-5085-4712-a8d5-de95dc8a69b3-config\") pod \"d21d2c22-5085-4712-a8d5-de95dc8a69b3\" (UID: \"d21d2c22-5085-4712-a8d5-de95dc8a69b3\") " Jan 29 06:38:11 crc kubenswrapper[5017]: I0129 06:38:11.932695 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcqw5\" (UniqueName: \"kubernetes.io/projected/d39be8cf-0d88-454e-ada8-9c846f90f4b6-kube-api-access-wcqw5\") pod \"controller-manager-78bc4cb977-llbq4\" (UID: \"d39be8cf-0d88-454e-ada8-9c846f90f4b6\") " pod="openshift-controller-manager/controller-manager-78bc4cb977-llbq4" Jan 29 06:38:11 crc kubenswrapper[5017]: I0129 06:38:11.932776 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d39be8cf-0d88-454e-ada8-9c846f90f4b6-config\") pod \"controller-manager-78bc4cb977-llbq4\" (UID: \"d39be8cf-0d88-454e-ada8-9c846f90f4b6\") " pod="openshift-controller-manager/controller-manager-78bc4cb977-llbq4" Jan 29 06:38:11 crc kubenswrapper[5017]: I0129 06:38:11.932821 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d39be8cf-0d88-454e-ada8-9c846f90f4b6-serving-cert\") pod \"controller-manager-78bc4cb977-llbq4\" (UID: \"d39be8cf-0d88-454e-ada8-9c846f90f4b6\") " pod="openshift-controller-manager/controller-manager-78bc4cb977-llbq4" Jan 29 06:38:11 crc kubenswrapper[5017]: I0129 06:38:11.932852 5017 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d39be8cf-0d88-454e-ada8-9c846f90f4b6-proxy-ca-bundles\") pod \"controller-manager-78bc4cb977-llbq4\" (UID: \"d39be8cf-0d88-454e-ada8-9c846f90f4b6\") " pod="openshift-controller-manager/controller-manager-78bc4cb977-llbq4" Jan 29 06:38:11 crc kubenswrapper[5017]: I0129 06:38:11.932917 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d39be8cf-0d88-454e-ada8-9c846f90f4b6-client-ca\") pod \"controller-manager-78bc4cb977-llbq4\" (UID: \"d39be8cf-0d88-454e-ada8-9c846f90f4b6\") " pod="openshift-controller-manager/controller-manager-78bc4cb977-llbq4" Jan 29 06:38:11 crc kubenswrapper[5017]: I0129 06:38:11.933280 5017 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d21d2c22-5085-4712-a8d5-de95dc8a69b3-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 29 06:38:11 crc kubenswrapper[5017]: I0129 06:38:11.933308 5017 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d21d2c22-5085-4712-a8d5-de95dc8a69b3-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 06:38:11 crc kubenswrapper[5017]: I0129 06:38:11.934119 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d21d2c22-5085-4712-a8d5-de95dc8a69b3-config" (OuterVolumeSpecName: "config") pod "d21d2c22-5085-4712-a8d5-de95dc8a69b3" (UID: "d21d2c22-5085-4712-a8d5-de95dc8a69b3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:38:11 crc kubenswrapper[5017]: I0129 06:38:11.947724 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d21d2c22-5085-4712-a8d5-de95dc8a69b3-kube-api-access-9sbhv" (OuterVolumeSpecName: "kube-api-access-9sbhv") pod "d21d2c22-5085-4712-a8d5-de95dc8a69b3" (UID: "d21d2c22-5085-4712-a8d5-de95dc8a69b3"). InnerVolumeSpecName "kube-api-access-9sbhv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:38:11 crc kubenswrapper[5017]: I0129 06:38:11.957977 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d21d2c22-5085-4712-a8d5-de95dc8a69b3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d21d2c22-5085-4712-a8d5-de95dc8a69b3" (UID: "d21d2c22-5085-4712-a8d5-de95dc8a69b3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:38:11 crc kubenswrapper[5017]: I0129 06:38:11.960102 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wmnch" Jan 29 06:38:12 crc kubenswrapper[5017]: I0129 06:38:12.034671 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3641c614-3691-442a-95e4-13582cfd16d2-config\") pod \"3641c614-3691-442a-95e4-13582cfd16d2\" (UID: \"3641c614-3691-442a-95e4-13582cfd16d2\") " Jan 29 06:38:12 crc kubenswrapper[5017]: I0129 06:38:12.034800 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3641c614-3691-442a-95e4-13582cfd16d2-client-ca\") pod \"3641c614-3691-442a-95e4-13582cfd16d2\" (UID: \"3641c614-3691-442a-95e4-13582cfd16d2\") " Jan 29 06:38:12 crc kubenswrapper[5017]: I0129 06:38:12.034912 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4crh5\" (UniqueName: \"kubernetes.io/projected/3641c614-3691-442a-95e4-13582cfd16d2-kube-api-access-4crh5\") pod \"3641c614-3691-442a-95e4-13582cfd16d2\" (UID: \"3641c614-3691-442a-95e4-13582cfd16d2\") " Jan 29 06:38:12 crc kubenswrapper[5017]: I0129 06:38:12.035001 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3641c614-3691-442a-95e4-13582cfd16d2-serving-cert\") pod \"3641c614-3691-442a-95e4-13582cfd16d2\" (UID: \"3641c614-3691-442a-95e4-13582cfd16d2\") " Jan 29 06:38:12 crc kubenswrapper[5017]: I0129 06:38:12.035301 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wcqw5\" (UniqueName: \"kubernetes.io/projected/d39be8cf-0d88-454e-ada8-9c846f90f4b6-kube-api-access-wcqw5\") pod \"controller-manager-78bc4cb977-llbq4\" (UID: \"d39be8cf-0d88-454e-ada8-9c846f90f4b6\") " pod="openshift-controller-manager/controller-manager-78bc4cb977-llbq4" Jan 29 06:38:12 crc kubenswrapper[5017]: I0129 06:38:12.035348 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d39be8cf-0d88-454e-ada8-9c846f90f4b6-config\") pod \"controller-manager-78bc4cb977-llbq4\" (UID: \"d39be8cf-0d88-454e-ada8-9c846f90f4b6\") " pod="openshift-controller-manager/controller-manager-78bc4cb977-llbq4" Jan 29 06:38:12 crc kubenswrapper[5017]: I0129 06:38:12.035373 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d39be8cf-0d88-454e-ada8-9c846f90f4b6-serving-cert\") pod \"controller-manager-78bc4cb977-llbq4\" (UID: \"d39be8cf-0d88-454e-ada8-9c846f90f4b6\") " pod="openshift-controller-manager/controller-manager-78bc4cb977-llbq4" Jan 29 06:38:12 crc kubenswrapper[5017]: I0129 06:38:12.035391 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d39be8cf-0d88-454e-ada8-9c846f90f4b6-proxy-ca-bundles\") pod \"controller-manager-78bc4cb977-llbq4\" (UID: \"d39be8cf-0d88-454e-ada8-9c846f90f4b6\") " pod="openshift-controller-manager/controller-manager-78bc4cb977-llbq4" Jan 29 06:38:12 crc kubenswrapper[5017]: I0129 06:38:12.035425 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d39be8cf-0d88-454e-ada8-9c846f90f4b6-client-ca\") pod \"controller-manager-78bc4cb977-llbq4\" (UID: \"d39be8cf-0d88-454e-ada8-9c846f90f4b6\") " 
pod="openshift-controller-manager/controller-manager-78bc4cb977-llbq4" Jan 29 06:38:12 crc kubenswrapper[5017]: I0129 06:38:12.035479 5017 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d21d2c22-5085-4712-a8d5-de95dc8a69b3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 06:38:12 crc kubenswrapper[5017]: I0129 06:38:12.035492 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9sbhv\" (UniqueName: \"kubernetes.io/projected/d21d2c22-5085-4712-a8d5-de95dc8a69b3-kube-api-access-9sbhv\") on node \"crc\" DevicePath \"\"" Jan 29 06:38:12 crc kubenswrapper[5017]: I0129 06:38:12.035508 5017 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d21d2c22-5085-4712-a8d5-de95dc8a69b3-config\") on node \"crc\" DevicePath \"\"" Jan 29 06:38:12 crc kubenswrapper[5017]: I0129 06:38:12.036588 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d39be8cf-0d88-454e-ada8-9c846f90f4b6-client-ca\") pod \"controller-manager-78bc4cb977-llbq4\" (UID: \"d39be8cf-0d88-454e-ada8-9c846f90f4b6\") " pod="openshift-controller-manager/controller-manager-78bc4cb977-llbq4" Jan 29 06:38:12 crc kubenswrapper[5017]: I0129 06:38:12.037194 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3641c614-3691-442a-95e4-13582cfd16d2-client-ca" (OuterVolumeSpecName: "client-ca") pod "3641c614-3691-442a-95e4-13582cfd16d2" (UID: "3641c614-3691-442a-95e4-13582cfd16d2"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:38:12 crc kubenswrapper[5017]: I0129 06:38:12.037562 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3641c614-3691-442a-95e4-13582cfd16d2-config" (OuterVolumeSpecName: "config") pod "3641c614-3691-442a-95e4-13582cfd16d2" (UID: "3641c614-3691-442a-95e4-13582cfd16d2"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:38:12 crc kubenswrapper[5017]: I0129 06:38:12.038052 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d39be8cf-0d88-454e-ada8-9c846f90f4b6-config\") pod \"controller-manager-78bc4cb977-llbq4\" (UID: \"d39be8cf-0d88-454e-ada8-9c846f90f4b6\") " pod="openshift-controller-manager/controller-manager-78bc4cb977-llbq4" Jan 29 06:38:12 crc kubenswrapper[5017]: I0129 06:38:12.038062 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d39be8cf-0d88-454e-ada8-9c846f90f4b6-proxy-ca-bundles\") pod \"controller-manager-78bc4cb977-llbq4\" (UID: \"d39be8cf-0d88-454e-ada8-9c846f90f4b6\") " pod="openshift-controller-manager/controller-manager-78bc4cb977-llbq4" Jan 29 06:38:12 crc kubenswrapper[5017]: I0129 06:38:12.041581 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d39be8cf-0d88-454e-ada8-9c846f90f4b6-serving-cert\") pod \"controller-manager-78bc4cb977-llbq4\" (UID: \"d39be8cf-0d88-454e-ada8-9c846f90f4b6\") " pod="openshift-controller-manager/controller-manager-78bc4cb977-llbq4" Jan 29 06:38:12 crc kubenswrapper[5017]: I0129 06:38:12.042096 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3641c614-3691-442a-95e4-13582cfd16d2-kube-api-access-4crh5" (OuterVolumeSpecName: "kube-api-access-4crh5") pod "3641c614-3691-442a-95e4-13582cfd16d2" (UID: "3641c614-3691-442a-95e4-13582cfd16d2"). InnerVolumeSpecName "kube-api-access-4crh5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:38:12 crc kubenswrapper[5017]: I0129 06:38:12.044764 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3641c614-3691-442a-95e4-13582cfd16d2-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "3641c614-3691-442a-95e4-13582cfd16d2" (UID: "3641c614-3691-442a-95e4-13582cfd16d2"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:38:12 crc kubenswrapper[5017]: I0129 06:38:12.056663 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wcqw5\" (UniqueName: \"kubernetes.io/projected/d39be8cf-0d88-454e-ada8-9c846f90f4b6-kube-api-access-wcqw5\") pod \"controller-manager-78bc4cb977-llbq4\" (UID: \"d39be8cf-0d88-454e-ada8-9c846f90f4b6\") " pod="openshift-controller-manager/controller-manager-78bc4cb977-llbq4" Jan 29 06:38:12 crc kubenswrapper[5017]: I0129 06:38:12.136497 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4crh5\" (UniqueName: \"kubernetes.io/projected/3641c614-3691-442a-95e4-13582cfd16d2-kube-api-access-4crh5\") on node \"crc\" DevicePath \"\"" Jan 29 06:38:12 crc kubenswrapper[5017]: I0129 06:38:12.136532 5017 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3641c614-3691-442a-95e4-13582cfd16d2-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 06:38:12 crc kubenswrapper[5017]: I0129 06:38:12.136544 5017 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3641c614-3691-442a-95e4-13582cfd16d2-config\") on node \"crc\" DevicePath \"\"" Jan 29 06:38:12 crc kubenswrapper[5017]: I0129 06:38:12.136553 5017 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3641c614-3691-442a-95e4-13582cfd16d2-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 06:38:12 crc kubenswrapper[5017]: I0129 06:38:12.273309 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-78bc4cb977-llbq4" Jan 29 06:38:12 crc kubenswrapper[5017]: I0129 06:38:12.497270 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wmnch" event={"ID":"3641c614-3691-442a-95e4-13582cfd16d2","Type":"ContainerDied","Data":"620d8a4577172d8cf653e70941a47d1724fbbbd222dd5fdb960a02ed3731da5b"} Jan 29 06:38:12 crc kubenswrapper[5017]: I0129 06:38:12.497328 5017 scope.go:117] "RemoveContainer" containerID="6215bef46e327fd950b2ee707c27eaed5f5060258802d409b3e0e891d9b98f28" Jan 29 06:38:12 crc kubenswrapper[5017]: I0129 06:38:12.497425 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wmnch" Jan 29 06:38:12 crc kubenswrapper[5017]: I0129 06:38:12.502652 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-5zqqn" event={"ID":"d21d2c22-5085-4712-a8d5-de95dc8a69b3","Type":"ContainerDied","Data":"a70dfab99b595854774ec19d1f688f6433ce3743bf735f78cc3b3c8f33435ab6"} Jan 29 06:38:12 crc kubenswrapper[5017]: I0129 06:38:12.502733 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-5zqqn" Jan 29 06:38:12 crc kubenswrapper[5017]: I0129 06:38:12.516054 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-wmnch"] Jan 29 06:38:12 crc kubenswrapper[5017]: I0129 06:38:12.522345 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-wmnch"] Jan 29 06:38:12 crc kubenswrapper[5017]: I0129 06:38:12.536772 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-5zqqn"] Jan 29 06:38:12 crc kubenswrapper[5017]: I0129 06:38:12.540305 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-5zqqn"] Jan 29 06:38:14 crc kubenswrapper[5017]: I0129 06:38:14.328237 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3641c614-3691-442a-95e4-13582cfd16d2" path="/var/lib/kubelet/pods/3641c614-3691-442a-95e4-13582cfd16d2/volumes" Jan 29 06:38:14 crc kubenswrapper[5017]: I0129 06:38:14.329430 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d21d2c22-5085-4712-a8d5-de95dc8a69b3" path="/var/lib/kubelet/pods/d21d2c22-5085-4712-a8d5-de95dc8a69b3/volumes" Jan 29 06:38:14 crc kubenswrapper[5017]: I0129 06:38:14.869448 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c588587d7-ch2nx"] Jan 29 06:38:14 crc kubenswrapper[5017]: E0129 06:38:14.869761 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3641c614-3691-442a-95e4-13582cfd16d2" containerName="route-controller-manager" Jan 29 06:38:14 crc kubenswrapper[5017]: I0129 06:38:14.869779 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="3641c614-3691-442a-95e4-13582cfd16d2" containerName="route-controller-manager" Jan 29 06:38:14 crc kubenswrapper[5017]: I0129 06:38:14.869885 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="3641c614-3691-442a-95e4-13582cfd16d2" containerName="route-controller-manager" Jan 29 06:38:14 crc kubenswrapper[5017]: I0129 06:38:14.870373 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7c588587d7-ch2nx" Jan 29 06:38:14 crc kubenswrapper[5017]: I0129 06:38:14.873692 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 29 06:38:14 crc kubenswrapper[5017]: I0129 06:38:14.873828 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 29 06:38:14 crc kubenswrapper[5017]: I0129 06:38:14.873940 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 29 06:38:14 crc kubenswrapper[5017]: I0129 06:38:14.874126 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 29 06:38:14 crc kubenswrapper[5017]: I0129 06:38:14.874358 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 29 06:38:14 crc kubenswrapper[5017]: I0129 06:38:14.878054 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7dbf1855-e8ae-4358-9712-88f925636310-client-ca\") pod \"route-controller-manager-7c588587d7-ch2nx\" (UID: \"7dbf1855-e8ae-4358-9712-88f925636310\") " pod="openshift-route-controller-manager/route-controller-manager-7c588587d7-ch2nx" Jan 29 06:38:14 crc kubenswrapper[5017]: I0129 06:38:14.878094 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7dbf1855-e8ae-4358-9712-88f925636310-config\") pod \"route-controller-manager-7c588587d7-ch2nx\" (UID: \"7dbf1855-e8ae-4358-9712-88f925636310\") " pod="openshift-route-controller-manager/route-controller-manager-7c588587d7-ch2nx" Jan 29 06:38:14 crc kubenswrapper[5017]: I0129 06:38:14.878118 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7dbf1855-e8ae-4358-9712-88f925636310-serving-cert\") pod \"route-controller-manager-7c588587d7-ch2nx\" (UID: \"7dbf1855-e8ae-4358-9712-88f925636310\") " pod="openshift-route-controller-manager/route-controller-manager-7c588587d7-ch2nx" Jan 29 06:38:14 crc kubenswrapper[5017]: I0129 06:38:14.878143 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mj7gc\" (UniqueName: \"kubernetes.io/projected/7dbf1855-e8ae-4358-9712-88f925636310-kube-api-access-mj7gc\") pod \"route-controller-manager-7c588587d7-ch2nx\" (UID: \"7dbf1855-e8ae-4358-9712-88f925636310\") " pod="openshift-route-controller-manager/route-controller-manager-7c588587d7-ch2nx" Jan 29 06:38:14 crc kubenswrapper[5017]: I0129 06:38:14.878847 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c588587d7-ch2nx"] Jan 29 06:38:14 crc kubenswrapper[5017]: I0129 06:38:14.880455 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 29 06:38:14 crc kubenswrapper[5017]: I0129 06:38:14.979516 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mj7gc\" (UniqueName: \"kubernetes.io/projected/7dbf1855-e8ae-4358-9712-88f925636310-kube-api-access-mj7gc\") pod 
\"route-controller-manager-7c588587d7-ch2nx\" (UID: \"7dbf1855-e8ae-4358-9712-88f925636310\") " pod="openshift-route-controller-manager/route-controller-manager-7c588587d7-ch2nx" Jan 29 06:38:14 crc kubenswrapper[5017]: I0129 06:38:14.979648 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7dbf1855-e8ae-4358-9712-88f925636310-client-ca\") pod \"route-controller-manager-7c588587d7-ch2nx\" (UID: \"7dbf1855-e8ae-4358-9712-88f925636310\") " pod="openshift-route-controller-manager/route-controller-manager-7c588587d7-ch2nx" Jan 29 06:38:14 crc kubenswrapper[5017]: I0129 06:38:14.979684 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7dbf1855-e8ae-4358-9712-88f925636310-config\") pod \"route-controller-manager-7c588587d7-ch2nx\" (UID: \"7dbf1855-e8ae-4358-9712-88f925636310\") " pod="openshift-route-controller-manager/route-controller-manager-7c588587d7-ch2nx" Jan 29 06:38:14 crc kubenswrapper[5017]: I0129 06:38:14.983845 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7dbf1855-e8ae-4358-9712-88f925636310-serving-cert\") pod \"route-controller-manager-7c588587d7-ch2nx\" (UID: \"7dbf1855-e8ae-4358-9712-88f925636310\") " pod="openshift-route-controller-manager/route-controller-manager-7c588587d7-ch2nx" Jan 29 06:38:14 crc kubenswrapper[5017]: I0129 06:38:14.985554 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7dbf1855-e8ae-4358-9712-88f925636310-config\") pod \"route-controller-manager-7c588587d7-ch2nx\" (UID: \"7dbf1855-e8ae-4358-9712-88f925636310\") " pod="openshift-route-controller-manager/route-controller-manager-7c588587d7-ch2nx" Jan 29 06:38:14 crc kubenswrapper[5017]: I0129 06:38:14.989999 5017 scope.go:117] "RemoveContainer" containerID="e7e400cbd3423e76eaf410a1a7d86bd7bb8a9f1ce8405d781ab090852b44223a" Jan 29 06:38:14 crc kubenswrapper[5017]: I0129 06:38:14.991041 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7dbf1855-e8ae-4358-9712-88f925636310-serving-cert\") pod \"route-controller-manager-7c588587d7-ch2nx\" (UID: \"7dbf1855-e8ae-4358-9712-88f925636310\") " pod="openshift-route-controller-manager/route-controller-manager-7c588587d7-ch2nx" Jan 29 06:38:14 crc kubenswrapper[5017]: I0129 06:38:14.993640 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7dbf1855-e8ae-4358-9712-88f925636310-client-ca\") pod \"route-controller-manager-7c588587d7-ch2nx\" (UID: \"7dbf1855-e8ae-4358-9712-88f925636310\") " pod="openshift-route-controller-manager/route-controller-manager-7c588587d7-ch2nx" Jan 29 06:38:15 crc kubenswrapper[5017]: E0129 06:38:15.000021 5017 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 29 06:38:15 crc kubenswrapper[5017]: E0129 06:38:15.000228 5017 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog 
--cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qtkdc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-zcr4n_openshift-marketplace(402aa844-38d7-44aa-bfa8-8db490d3aa4b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 06:38:15 crc kubenswrapper[5017]: E0129 06:38:15.001758 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-zcr4n" podUID="402aa844-38d7-44aa-bfa8-8db490d3aa4b" Jan 29 06:38:15 crc kubenswrapper[5017]: I0129 06:38:15.005182 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mj7gc\" (UniqueName: \"kubernetes.io/projected/7dbf1855-e8ae-4358-9712-88f925636310-kube-api-access-mj7gc\") pod \"route-controller-manager-7c588587d7-ch2nx\" (UID: \"7dbf1855-e8ae-4358-9712-88f925636310\") " pod="openshift-route-controller-manager/route-controller-manager-7c588587d7-ch2nx" Jan 29 06:38:15 crc kubenswrapper[5017]: E0129 06:38:15.025400 5017 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 29 06:38:15 crc kubenswrapper[5017]: E0129 06:38:15.025609 5017 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vck8l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-969vd_openshift-marketplace(95fabedb-25f2-43b3-a1dc-907c7e3ad4c2): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 06:38:15 crc kubenswrapper[5017]: E0129 06:38:15.027703 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-969vd" podUID="95fabedb-25f2-43b3-a1dc-907c7e3ad4c2" Jan 29 06:38:15 crc kubenswrapper[5017]: E0129 06:38:15.057460 5017 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 29 06:38:15 crc kubenswrapper[5017]: E0129 06:38:15.058210 5017 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hrctg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-8l5h7_openshift-marketplace(a83caf08-35b0-460b-ba5e-1db0c6cab902): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 06:38:15 crc kubenswrapper[5017]: E0129 06:38:15.060103 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-8l5h7" podUID="a83caf08-35b0-460b-ba5e-1db0c6cab902" Jan 29 06:38:15 crc kubenswrapper[5017]: E0129 06:38:15.175776 5017 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 29 06:38:15 crc kubenswrapper[5017]: E0129 06:38:15.175932 5017 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-87zhg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-2cqwl_openshift-marketplace(d8f46fb7-929f-4d96-a5ca-4fc475b78342): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 06:38:15 crc kubenswrapper[5017]: E0129 06:38:15.178069 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-2cqwl" podUID="d8f46fb7-929f-4d96-a5ca-4fc475b78342" Jan 29 06:38:15 crc kubenswrapper[5017]: I0129 06:38:15.194301 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7c588587d7-ch2nx" Jan 29 06:38:15 crc kubenswrapper[5017]: E0129 06:38:15.227535 5017 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 29 06:38:15 crc kubenswrapper[5017]: E0129 06:38:15.228260 5017 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-trlr7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-jxrzm_openshift-marketplace(515e73a4-3f9f-40aa-bd4b-c4ac2d55f304): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 06:38:15 crc kubenswrapper[5017]: E0129 06:38:15.230229 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-jxrzm" podUID="515e73a4-3f9f-40aa-bd4b-c4ac2d55f304" Jan 29 06:38:15 crc kubenswrapper[5017]: I0129 06:38:15.249299 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-xn4bq"] Jan 29 06:38:15 crc kubenswrapper[5017]: W0129 06:38:15.263804 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0c1eddbb_5454_45d2_bd9f_cf89bfe79b8f.slice/crio-fc895608fcbb1ba290f2a56eed16e6a7967e631e183d24cd178a03317b366836 WatchSource:0}: Error finding container fc895608fcbb1ba290f2a56eed16e6a7967e631e183d24cd178a03317b366836: Status 404 returned error can't find the container with id fc895608fcbb1ba290f2a56eed16e6a7967e631e183d24cd178a03317b366836 Jan 29 06:38:15 crc kubenswrapper[5017]: I0129 06:38:15.464373 5017 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c588587d7-ch2nx"] Jan 29 06:38:15 crc kubenswrapper[5017]: W0129 06:38:15.485130 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7dbf1855_e8ae_4358_9712_88f925636310.slice/crio-112b7b9729d5c76d825c2ea8f1ce1c9bf004abcdcc7c02d2bb4474342b5a0b67 WatchSource:0}: Error finding container 112b7b9729d5c76d825c2ea8f1ce1c9bf004abcdcc7c02d2bb4474342b5a0b67: Status 404 returned error can't find the container with id 112b7b9729d5c76d825c2ea8f1ce1c9bf004abcdcc7c02d2bb4474342b5a0b67 Jan 29 06:38:15 crc kubenswrapper[5017]: I0129 06:38:15.525821 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rnzhc" event={"ID":"3deabecf-5a0e-454c-a622-b3b422c3a6bb","Type":"ContainerStarted","Data":"f273d5d72547b80f864bb2a640f8ae5005bae48e30db37e7065e11ec6143facd"} Jan 29 06:38:15 crc kubenswrapper[5017]: I0129 06:38:15.530263 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7c588587d7-ch2nx" event={"ID":"7dbf1855-e8ae-4358-9712-88f925636310","Type":"ContainerStarted","Data":"112b7b9729d5c76d825c2ea8f1ce1c9bf004abcdcc7c02d2bb4474342b5a0b67"} Jan 29 06:38:15 crc kubenswrapper[5017]: I0129 06:38:15.532039 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-xn4bq" event={"ID":"0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f","Type":"ContainerStarted","Data":"fc895608fcbb1ba290f2a56eed16e6a7967e631e183d24cd178a03317b366836"} Jan 29 06:38:15 crc kubenswrapper[5017]: I0129 06:38:15.536254 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4ljw2" event={"ID":"9749d75a-01c6-42c9-a642-8c51895c9cbf","Type":"ContainerStarted","Data":"7bf1a17713c7ce6bec4714e5b284f357fc8be0bdbbfd9487714071dc4b036686"} Jan 29 06:38:15 crc kubenswrapper[5017]: I0129 06:38:15.545977 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lgqwh" event={"ID":"f35dd9e9-ca07-483a-a5bf-2179d9d705c6","Type":"ContainerStarted","Data":"7503daa34c084e53086b265e61a6f43b1d863854c6e943d90f8a248a6f0c5984"} Jan 29 06:38:15 crc kubenswrapper[5017]: E0129 06:38:15.554637 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-8l5h7" podUID="a83caf08-35b0-460b-ba5e-1db0c6cab902" Jan 29 06:38:15 crc kubenswrapper[5017]: E0129 06:38:15.554698 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-zcr4n" podUID="402aa844-38d7-44aa-bfa8-8db490d3aa4b" Jan 29 06:38:15 crc kubenswrapper[5017]: E0129 06:38:15.554809 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-2cqwl" podUID="d8f46fb7-929f-4d96-a5ca-4fc475b78342" Jan 29 06:38:15 crc kubenswrapper[5017]: E0129 06:38:15.555228 5017 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-969vd" podUID="95fabedb-25f2-43b3-a1dc-907c7e3ad4c2" Jan 29 06:38:15 crc kubenswrapper[5017]: I0129 06:38:15.559019 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-78bc4cb977-llbq4"] Jan 29 06:38:15 crc kubenswrapper[5017]: E0129 06:38:15.561715 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-jxrzm" podUID="515e73a4-3f9f-40aa-bd4b-c4ac2d55f304" Jan 29 06:38:16 crc kubenswrapper[5017]: I0129 06:38:16.503511 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-chjv7" Jan 29 06:38:16 crc kubenswrapper[5017]: I0129 06:38:16.563214 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7c588587d7-ch2nx" event={"ID":"7dbf1855-e8ae-4358-9712-88f925636310","Type":"ContainerStarted","Data":"ca6c74bbc3b9f1f544184c880c3c4b0d69e79c6f51f86b717003e7ecaef508ca"} Jan 29 06:38:16 crc kubenswrapper[5017]: I0129 06:38:16.564340 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7c588587d7-ch2nx" Jan 29 06:38:16 crc kubenswrapper[5017]: I0129 06:38:16.567742 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-xn4bq" event={"ID":"0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f","Type":"ContainerStarted","Data":"88fa1a5cfd4a56e814538448cf5a56df800d088a02c05be23d306ca626e30415"} Jan 29 06:38:16 crc kubenswrapper[5017]: I0129 06:38:16.567792 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-xn4bq" event={"ID":"0c1eddbb-5454-45d2-bd9f-cf89bfe79b8f","Type":"ContainerStarted","Data":"31f768364167022e99cc5f1a722edc9cc057aa8797017a36734d3790bc2e62dd"} Jan 29 06:38:16 crc kubenswrapper[5017]: I0129 06:38:16.569829 5017 generic.go:334] "Generic (PLEG): container finished" podID="9749d75a-01c6-42c9-a642-8c51895c9cbf" containerID="7bf1a17713c7ce6bec4714e5b284f357fc8be0bdbbfd9487714071dc4b036686" exitCode=0 Jan 29 06:38:16 crc kubenswrapper[5017]: I0129 06:38:16.569914 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4ljw2" event={"ID":"9749d75a-01c6-42c9-a642-8c51895c9cbf","Type":"ContainerDied","Data":"7bf1a17713c7ce6bec4714e5b284f357fc8be0bdbbfd9487714071dc4b036686"} Jan 29 06:38:16 crc kubenswrapper[5017]: I0129 06:38:16.574338 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7c588587d7-ch2nx" Jan 29 06:38:16 crc kubenswrapper[5017]: I0129 06:38:16.580421 5017 generic.go:334] "Generic (PLEG): container finished" podID="f35dd9e9-ca07-483a-a5bf-2179d9d705c6" containerID="7503daa34c084e53086b265e61a6f43b1d863854c6e943d90f8a248a6f0c5984" exitCode=0 Jan 29 06:38:16 crc kubenswrapper[5017]: I0129 06:38:16.580520 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lgqwh" 
event={"ID":"f35dd9e9-ca07-483a-a5bf-2179d9d705c6","Type":"ContainerDied","Data":"7503daa34c084e53086b265e61a6f43b1d863854c6e943d90f8a248a6f0c5984"} Jan 29 06:38:16 crc kubenswrapper[5017]: I0129 06:38:16.592307 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-78bc4cb977-llbq4" event={"ID":"d39be8cf-0d88-454e-ada8-9c846f90f4b6","Type":"ContainerStarted","Data":"2866269d861499b6c5aca24846818cd0fd2865fda1f6ee963f076d8811e17a23"} Jan 29 06:38:16 crc kubenswrapper[5017]: I0129 06:38:16.592350 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-78bc4cb977-llbq4" event={"ID":"d39be8cf-0d88-454e-ada8-9c846f90f4b6","Type":"ContainerStarted","Data":"43919f189a8f6420b1bfc239bd518c95f51718362950c466ef1284dcf83e220b"} Jan 29 06:38:16 crc kubenswrapper[5017]: I0129 06:38:16.593490 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-78bc4cb977-llbq4" Jan 29 06:38:16 crc kubenswrapper[5017]: I0129 06:38:16.601110 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-78bc4cb977-llbq4" Jan 29 06:38:16 crc kubenswrapper[5017]: I0129 06:38:16.612339 5017 generic.go:334] "Generic (PLEG): container finished" podID="3deabecf-5a0e-454c-a622-b3b422c3a6bb" containerID="f273d5d72547b80f864bb2a640f8ae5005bae48e30db37e7065e11ec6143facd" exitCode=0 Jan 29 06:38:16 crc kubenswrapper[5017]: I0129 06:38:16.612691 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rnzhc" event={"ID":"3deabecf-5a0e-454c-a622-b3b422c3a6bb","Type":"ContainerDied","Data":"f273d5d72547b80f864bb2a640f8ae5005bae48e30db37e7065e11ec6143facd"} Jan 29 06:38:16 crc kubenswrapper[5017]: I0129 06:38:16.613830 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7c588587d7-ch2nx" podStartSLOduration=19.613807373 podStartE2EDuration="19.613807373s" podCreationTimestamp="2026-01-29 06:37:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:38:16.589334189 +0000 UTC m=+182.963781799" watchObservedRunningTime="2026-01-29 06:38:16.613807373 +0000 UTC m=+182.988254983" Jan 29 06:38:16 crc kubenswrapper[5017]: I0129 06:38:16.697170 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-78bc4cb977-llbq4" podStartSLOduration=19.69713641 podStartE2EDuration="19.69713641s" podCreationTimestamp="2026-01-29 06:37:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:38:16.692462245 +0000 UTC m=+183.066909845" watchObservedRunningTime="2026-01-29 06:38:16.69713641 +0000 UTC m=+183.071584020" Jan 29 06:38:17 crc kubenswrapper[5017]: I0129 06:38:17.222405 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-78bc4cb977-llbq4"] Jan 29 06:38:17 crc kubenswrapper[5017]: I0129 06:38:17.330661 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c588587d7-ch2nx"] Jan 29 06:38:18 crc kubenswrapper[5017]: I0129 06:38:18.641165 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-lgqwh" event={"ID":"f35dd9e9-ca07-483a-a5bf-2179d9d705c6","Type":"ContainerStarted","Data":"52dcca43f80ae265bbf4bcebcc47fda0d53941739034da48bb3d28a2f0d75e2b"} Jan 29 06:38:18 crc kubenswrapper[5017]: I0129 06:38:18.647169 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rnzhc" event={"ID":"3deabecf-5a0e-454c-a622-b3b422c3a6bb","Type":"ContainerStarted","Data":"bac0a4d113a6b0bcd16f61fdd85a3b8def383761a73640cafcaeb5c96bcef7ad"} Jan 29 06:38:18 crc kubenswrapper[5017]: I0129 06:38:18.656331 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4ljw2" event={"ID":"9749d75a-01c6-42c9-a642-8c51895c9cbf","Type":"ContainerStarted","Data":"34d823b1b9240527cbfbd56ab5eb7f8fe49c9a3a3d383676bb9c45ad8460b537"} Jan 29 06:38:18 crc kubenswrapper[5017]: I0129 06:38:18.656488 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7c588587d7-ch2nx" podUID="7dbf1855-e8ae-4358-9712-88f925636310" containerName="route-controller-manager" containerID="cri-o://ca6c74bbc3b9f1f544184c880c3c4b0d69e79c6f51f86b717003e7ecaef508ca" gracePeriod=30 Jan 29 06:38:18 crc kubenswrapper[5017]: I0129 06:38:18.656699 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-78bc4cb977-llbq4" podUID="d39be8cf-0d88-454e-ada8-9c846f90f4b6" containerName="controller-manager" containerID="cri-o://2866269d861499b6c5aca24846818cd0fd2865fda1f6ee963f076d8811e17a23" gracePeriod=30 Jan 29 06:38:18 crc kubenswrapper[5017]: I0129 06:38:18.665563 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-xn4bq" podStartSLOduration=163.665534459 podStartE2EDuration="2m43.665534459s" podCreationTimestamp="2026-01-29 06:35:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:38:17.657817678 +0000 UTC m=+184.032265288" watchObservedRunningTime="2026-01-29 06:38:18.665534459 +0000 UTC m=+185.039982069" Jan 29 06:38:18 crc kubenswrapper[5017]: I0129 06:38:18.695827 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-lgqwh" podStartSLOduration=2.939810776 podStartE2EDuration="34.6958043s" podCreationTimestamp="2026-01-29 06:37:44 +0000 UTC" firstStartedPulling="2026-01-29 06:37:45.918476983 +0000 UTC m=+152.292924593" lastFinishedPulling="2026-01-29 06:38:17.674470507 +0000 UTC m=+184.048918117" observedRunningTime="2026-01-29 06:38:18.66766702 +0000 UTC m=+185.042114630" watchObservedRunningTime="2026-01-29 06:38:18.6958043 +0000 UTC m=+185.070251910" Jan 29 06:38:18 crc kubenswrapper[5017]: I0129 06:38:18.700815 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4ljw2" podStartSLOduration=3.296331558 podStartE2EDuration="31.700802214s" podCreationTimestamp="2026-01-29 06:37:47 +0000 UTC" firstStartedPulling="2026-01-29 06:37:49.187528989 +0000 UTC m=+155.561976599" lastFinishedPulling="2026-01-29 06:38:17.591999625 +0000 UTC m=+183.966447255" observedRunningTime="2026-01-29 06:38:18.698203469 +0000 UTC m=+185.072651079" watchObservedRunningTime="2026-01-29 06:38:18.700802214 +0000 UTC m=+185.075249824" Jan 29 06:38:18 crc kubenswrapper[5017]: I0129 06:38:18.736739 
5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rnzhc" podStartSLOduration=3.968020537 podStartE2EDuration="34.736715286s" podCreationTimestamp="2026-01-29 06:37:44 +0000 UTC" firstStartedPulling="2026-01-29 06:37:46.970613502 +0000 UTC m=+153.345061112" lastFinishedPulling="2026-01-29 06:38:17.739308251 +0000 UTC m=+184.113755861" observedRunningTime="2026-01-29 06:38:18.733488193 +0000 UTC m=+185.107935823" watchObservedRunningTime="2026-01-29 06:38:18.736715286 +0000 UTC m=+185.111162896" Jan 29 06:38:19 crc kubenswrapper[5017]: I0129 06:38:19.130660 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7c588587d7-ch2nx" Jan 29 06:38:19 crc kubenswrapper[5017]: I0129 06:38:19.149187 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7dbf1855-e8ae-4358-9712-88f925636310-serving-cert\") pod \"7dbf1855-e8ae-4358-9712-88f925636310\" (UID: \"7dbf1855-e8ae-4358-9712-88f925636310\") " Jan 29 06:38:19 crc kubenswrapper[5017]: I0129 06:38:19.149246 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7dbf1855-e8ae-4358-9712-88f925636310-config\") pod \"7dbf1855-e8ae-4358-9712-88f925636310\" (UID: \"7dbf1855-e8ae-4358-9712-88f925636310\") " Jan 29 06:38:19 crc kubenswrapper[5017]: I0129 06:38:19.149331 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mj7gc\" (UniqueName: \"kubernetes.io/projected/7dbf1855-e8ae-4358-9712-88f925636310-kube-api-access-mj7gc\") pod \"7dbf1855-e8ae-4358-9712-88f925636310\" (UID: \"7dbf1855-e8ae-4358-9712-88f925636310\") " Jan 29 06:38:19 crc kubenswrapper[5017]: I0129 06:38:19.149388 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7dbf1855-e8ae-4358-9712-88f925636310-client-ca\") pod \"7dbf1855-e8ae-4358-9712-88f925636310\" (UID: \"7dbf1855-e8ae-4358-9712-88f925636310\") " Jan 29 06:38:19 crc kubenswrapper[5017]: I0129 06:38:19.151264 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7dbf1855-e8ae-4358-9712-88f925636310-config" (OuterVolumeSpecName: "config") pod "7dbf1855-e8ae-4358-9712-88f925636310" (UID: "7dbf1855-e8ae-4358-9712-88f925636310"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:38:19 crc kubenswrapper[5017]: I0129 06:38:19.151394 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7dbf1855-e8ae-4358-9712-88f925636310-client-ca" (OuterVolumeSpecName: "client-ca") pod "7dbf1855-e8ae-4358-9712-88f925636310" (UID: "7dbf1855-e8ae-4358-9712-88f925636310"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:38:19 crc kubenswrapper[5017]: I0129 06:38:19.159413 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7dbf1855-e8ae-4358-9712-88f925636310-kube-api-access-mj7gc" (OuterVolumeSpecName: "kube-api-access-mj7gc") pod "7dbf1855-e8ae-4358-9712-88f925636310" (UID: "7dbf1855-e8ae-4358-9712-88f925636310"). InnerVolumeSpecName "kube-api-access-mj7gc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:38:19 crc kubenswrapper[5017]: I0129 06:38:19.167937 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7dbf1855-e8ae-4358-9712-88f925636310-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7dbf1855-e8ae-4358-9712-88f925636310" (UID: "7dbf1855-e8ae-4358-9712-88f925636310"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:38:19 crc kubenswrapper[5017]: I0129 06:38:19.172262 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59b656664b-b9lws"] Jan 29 06:38:19 crc kubenswrapper[5017]: E0129 06:38:19.172546 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7dbf1855-e8ae-4358-9712-88f925636310" containerName="route-controller-manager" Jan 29 06:38:19 crc kubenswrapper[5017]: I0129 06:38:19.172566 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dbf1855-e8ae-4358-9712-88f925636310" containerName="route-controller-manager" Jan 29 06:38:19 crc kubenswrapper[5017]: I0129 06:38:19.172688 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="7dbf1855-e8ae-4358-9712-88f925636310" containerName="route-controller-manager" Jan 29 06:38:19 crc kubenswrapper[5017]: I0129 06:38:19.173158 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-59b656664b-b9lws" Jan 29 06:38:19 crc kubenswrapper[5017]: I0129 06:38:19.187615 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59b656664b-b9lws"] Jan 29 06:38:19 crc kubenswrapper[5017]: I0129 06:38:19.215001 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-78bc4cb977-llbq4" Jan 29 06:38:19 crc kubenswrapper[5017]: I0129 06:38:19.250385 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wcqw5\" (UniqueName: \"kubernetes.io/projected/d39be8cf-0d88-454e-ada8-9c846f90f4b6-kube-api-access-wcqw5\") pod \"d39be8cf-0d88-454e-ada8-9c846f90f4b6\" (UID: \"d39be8cf-0d88-454e-ada8-9c846f90f4b6\") " Jan 29 06:38:19 crc kubenswrapper[5017]: I0129 06:38:19.250503 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d39be8cf-0d88-454e-ada8-9c846f90f4b6-config\") pod \"d39be8cf-0d88-454e-ada8-9c846f90f4b6\" (UID: \"d39be8cf-0d88-454e-ada8-9c846f90f4b6\") " Jan 29 06:38:19 crc kubenswrapper[5017]: I0129 06:38:19.250570 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d39be8cf-0d88-454e-ada8-9c846f90f4b6-proxy-ca-bundles\") pod \"d39be8cf-0d88-454e-ada8-9c846f90f4b6\" (UID: \"d39be8cf-0d88-454e-ada8-9c846f90f4b6\") " Jan 29 06:38:19 crc kubenswrapper[5017]: I0129 06:38:19.250606 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d39be8cf-0d88-454e-ada8-9c846f90f4b6-serving-cert\") pod \"d39be8cf-0d88-454e-ada8-9c846f90f4b6\" (UID: \"d39be8cf-0d88-454e-ada8-9c846f90f4b6\") " Jan 29 06:38:19 crc kubenswrapper[5017]: I0129 06:38:19.250660 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d39be8cf-0d88-454e-ada8-9c846f90f4b6-client-ca\") pod \"d39be8cf-0d88-454e-ada8-9c846f90f4b6\" (UID: \"d39be8cf-0d88-454e-ada8-9c846f90f4b6\") " Jan 29 06:38:19 crc kubenswrapper[5017]: I0129 06:38:19.250871 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b948744-9a86-4b73-bc52-f95e4b50c80e-config\") pod \"route-controller-manager-59b656664b-b9lws\" (UID: \"7b948744-9a86-4b73-bc52-f95e4b50c80e\") " pod="openshift-route-controller-manager/route-controller-manager-59b656664b-b9lws" Jan 29 06:38:19 crc kubenswrapper[5017]: I0129 06:38:19.250901 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7b948744-9a86-4b73-bc52-f95e4b50c80e-client-ca\") pod \"route-controller-manager-59b656664b-b9lws\" (UID: \"7b948744-9a86-4b73-bc52-f95e4b50c80e\") " pod="openshift-route-controller-manager/route-controller-manager-59b656664b-b9lws" Jan 29 06:38:19 crc kubenswrapper[5017]: I0129 06:38:19.250972 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjfp6\" (UniqueName: \"kubernetes.io/projected/7b948744-9a86-4b73-bc52-f95e4b50c80e-kube-api-access-vjfp6\") pod \"route-controller-manager-59b656664b-b9lws\" (UID: \"7b948744-9a86-4b73-bc52-f95e4b50c80e\") " pod="openshift-route-controller-manager/route-controller-manager-59b656664b-b9lws" Jan 29 06:38:19 crc kubenswrapper[5017]: I0129 06:38:19.250999 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7b948744-9a86-4b73-bc52-f95e4b50c80e-serving-cert\") pod \"route-controller-manager-59b656664b-b9lws\" (UID: 
\"7b948744-9a86-4b73-bc52-f95e4b50c80e\") " pod="openshift-route-controller-manager/route-controller-manager-59b656664b-b9lws" Jan 29 06:38:19 crc kubenswrapper[5017]: I0129 06:38:19.251065 5017 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7dbf1855-e8ae-4358-9712-88f925636310-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 06:38:19 crc kubenswrapper[5017]: I0129 06:38:19.251083 5017 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7dbf1855-e8ae-4358-9712-88f925636310-config\") on node \"crc\" DevicePath \"\"" Jan 29 06:38:19 crc kubenswrapper[5017]: I0129 06:38:19.251096 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mj7gc\" (UniqueName: \"kubernetes.io/projected/7dbf1855-e8ae-4358-9712-88f925636310-kube-api-access-mj7gc\") on node \"crc\" DevicePath \"\"" Jan 29 06:38:19 crc kubenswrapper[5017]: I0129 06:38:19.251107 5017 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7dbf1855-e8ae-4358-9712-88f925636310-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 06:38:19 crc kubenswrapper[5017]: I0129 06:38:19.251786 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d39be8cf-0d88-454e-ada8-9c846f90f4b6-client-ca" (OuterVolumeSpecName: "client-ca") pod "d39be8cf-0d88-454e-ada8-9c846f90f4b6" (UID: "d39be8cf-0d88-454e-ada8-9c846f90f4b6"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:38:19 crc kubenswrapper[5017]: I0129 06:38:19.252152 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d39be8cf-0d88-454e-ada8-9c846f90f4b6-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "d39be8cf-0d88-454e-ada8-9c846f90f4b6" (UID: "d39be8cf-0d88-454e-ada8-9c846f90f4b6"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:38:19 crc kubenswrapper[5017]: I0129 06:38:19.252255 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d39be8cf-0d88-454e-ada8-9c846f90f4b6-config" (OuterVolumeSpecName: "config") pod "d39be8cf-0d88-454e-ada8-9c846f90f4b6" (UID: "d39be8cf-0d88-454e-ada8-9c846f90f4b6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:38:19 crc kubenswrapper[5017]: I0129 06:38:19.255299 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d39be8cf-0d88-454e-ada8-9c846f90f4b6-kube-api-access-wcqw5" (OuterVolumeSpecName: "kube-api-access-wcqw5") pod "d39be8cf-0d88-454e-ada8-9c846f90f4b6" (UID: "d39be8cf-0d88-454e-ada8-9c846f90f4b6"). InnerVolumeSpecName "kube-api-access-wcqw5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:38:19 crc kubenswrapper[5017]: I0129 06:38:19.258062 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d39be8cf-0d88-454e-ada8-9c846f90f4b6-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d39be8cf-0d88-454e-ada8-9c846f90f4b6" (UID: "d39be8cf-0d88-454e-ada8-9c846f90f4b6"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:38:19 crc kubenswrapper[5017]: I0129 06:38:19.352374 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjfp6\" (UniqueName: \"kubernetes.io/projected/7b948744-9a86-4b73-bc52-f95e4b50c80e-kube-api-access-vjfp6\") pod \"route-controller-manager-59b656664b-b9lws\" (UID: \"7b948744-9a86-4b73-bc52-f95e4b50c80e\") " pod="openshift-route-controller-manager/route-controller-manager-59b656664b-b9lws" Jan 29 06:38:19 crc kubenswrapper[5017]: I0129 06:38:19.352463 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7b948744-9a86-4b73-bc52-f95e4b50c80e-serving-cert\") pod \"route-controller-manager-59b656664b-b9lws\" (UID: \"7b948744-9a86-4b73-bc52-f95e4b50c80e\") " pod="openshift-route-controller-manager/route-controller-manager-59b656664b-b9lws" Jan 29 06:38:19 crc kubenswrapper[5017]: I0129 06:38:19.352543 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b948744-9a86-4b73-bc52-f95e4b50c80e-config\") pod \"route-controller-manager-59b656664b-b9lws\" (UID: \"7b948744-9a86-4b73-bc52-f95e4b50c80e\") " pod="openshift-route-controller-manager/route-controller-manager-59b656664b-b9lws" Jan 29 06:38:19 crc kubenswrapper[5017]: I0129 06:38:19.352567 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7b948744-9a86-4b73-bc52-f95e4b50c80e-client-ca\") pod \"route-controller-manager-59b656664b-b9lws\" (UID: \"7b948744-9a86-4b73-bc52-f95e4b50c80e\") " pod="openshift-route-controller-manager/route-controller-manager-59b656664b-b9lws" Jan 29 06:38:19 crc kubenswrapper[5017]: I0129 06:38:19.352622 5017 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d39be8cf-0d88-454e-ada8-9c846f90f4b6-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 29 06:38:19 crc kubenswrapper[5017]: I0129 06:38:19.352639 5017 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d39be8cf-0d88-454e-ada8-9c846f90f4b6-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 06:38:19 crc kubenswrapper[5017]: I0129 06:38:19.352653 5017 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d39be8cf-0d88-454e-ada8-9c846f90f4b6-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 06:38:19 crc kubenswrapper[5017]: I0129 06:38:19.352668 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wcqw5\" (UniqueName: \"kubernetes.io/projected/d39be8cf-0d88-454e-ada8-9c846f90f4b6-kube-api-access-wcqw5\") on node \"crc\" DevicePath \"\"" Jan 29 06:38:19 crc kubenswrapper[5017]: I0129 06:38:19.352687 5017 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d39be8cf-0d88-454e-ada8-9c846f90f4b6-config\") on node \"crc\" DevicePath \"\"" Jan 29 06:38:19 crc kubenswrapper[5017]: I0129 06:38:19.354181 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7b948744-9a86-4b73-bc52-f95e4b50c80e-client-ca\") pod \"route-controller-manager-59b656664b-b9lws\" (UID: \"7b948744-9a86-4b73-bc52-f95e4b50c80e\") " pod="openshift-route-controller-manager/route-controller-manager-59b656664b-b9lws" Jan 29 
06:38:19 crc kubenswrapper[5017]: I0129 06:38:19.354291 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b948744-9a86-4b73-bc52-f95e4b50c80e-config\") pod \"route-controller-manager-59b656664b-b9lws\" (UID: \"7b948744-9a86-4b73-bc52-f95e4b50c80e\") " pod="openshift-route-controller-manager/route-controller-manager-59b656664b-b9lws" Jan 29 06:38:19 crc kubenswrapper[5017]: I0129 06:38:19.358025 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7b948744-9a86-4b73-bc52-f95e4b50c80e-serving-cert\") pod \"route-controller-manager-59b656664b-b9lws\" (UID: \"7b948744-9a86-4b73-bc52-f95e4b50c80e\") " pod="openshift-route-controller-manager/route-controller-manager-59b656664b-b9lws" Jan 29 06:38:19 crc kubenswrapper[5017]: I0129 06:38:19.372484 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjfp6\" (UniqueName: \"kubernetes.io/projected/7b948744-9a86-4b73-bc52-f95e4b50c80e-kube-api-access-vjfp6\") pod \"route-controller-manager-59b656664b-b9lws\" (UID: \"7b948744-9a86-4b73-bc52-f95e4b50c80e\") " pod="openshift-route-controller-manager/route-controller-manager-59b656664b-b9lws" Jan 29 06:38:19 crc kubenswrapper[5017]: I0129 06:38:19.530241 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-59b656664b-b9lws" Jan 29 06:38:19 crc kubenswrapper[5017]: I0129 06:38:19.678506 5017 generic.go:334] "Generic (PLEG): container finished" podID="7dbf1855-e8ae-4358-9712-88f925636310" containerID="ca6c74bbc3b9f1f544184c880c3c4b0d69e79c6f51f86b717003e7ecaef508ca" exitCode=0 Jan 29 06:38:19 crc kubenswrapper[5017]: I0129 06:38:19.679069 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7c588587d7-ch2nx" Jan 29 06:38:19 crc kubenswrapper[5017]: I0129 06:38:19.681138 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7c588587d7-ch2nx" event={"ID":"7dbf1855-e8ae-4358-9712-88f925636310","Type":"ContainerDied","Data":"ca6c74bbc3b9f1f544184c880c3c4b0d69e79c6f51f86b717003e7ecaef508ca"} Jan 29 06:38:19 crc kubenswrapper[5017]: I0129 06:38:19.681215 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7c588587d7-ch2nx" event={"ID":"7dbf1855-e8ae-4358-9712-88f925636310","Type":"ContainerDied","Data":"112b7b9729d5c76d825c2ea8f1ce1c9bf004abcdcc7c02d2bb4474342b5a0b67"} Jan 29 06:38:19 crc kubenswrapper[5017]: I0129 06:38:19.681238 5017 scope.go:117] "RemoveContainer" containerID="ca6c74bbc3b9f1f544184c880c3c4b0d69e79c6f51f86b717003e7ecaef508ca" Jan 29 06:38:19 crc kubenswrapper[5017]: I0129 06:38:19.688841 5017 generic.go:334] "Generic (PLEG): container finished" podID="d39be8cf-0d88-454e-ada8-9c846f90f4b6" containerID="2866269d861499b6c5aca24846818cd0fd2865fda1f6ee963f076d8811e17a23" exitCode=0 Jan 29 06:38:19 crc kubenswrapper[5017]: I0129 06:38:19.691747 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-78bc4cb977-llbq4" event={"ID":"d39be8cf-0d88-454e-ada8-9c846f90f4b6","Type":"ContainerDied","Data":"2866269d861499b6c5aca24846818cd0fd2865fda1f6ee963f076d8811e17a23"} Jan 29 06:38:19 crc kubenswrapper[5017]: I0129 06:38:19.691799 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-78bc4cb977-llbq4" event={"ID":"d39be8cf-0d88-454e-ada8-9c846f90f4b6","Type":"ContainerDied","Data":"43919f189a8f6420b1bfc239bd518c95f51718362950c466ef1284dcf83e220b"} Jan 29 06:38:19 crc kubenswrapper[5017]: I0129 06:38:19.691885 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-78bc4cb977-llbq4" Jan 29 06:38:19 crc kubenswrapper[5017]: I0129 06:38:19.739563 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c588587d7-ch2nx"] Jan 29 06:38:19 crc kubenswrapper[5017]: I0129 06:38:19.739822 5017 scope.go:117] "RemoveContainer" containerID="ca6c74bbc3b9f1f544184c880c3c4b0d69e79c6f51f86b717003e7ecaef508ca" Jan 29 06:38:19 crc kubenswrapper[5017]: E0129 06:38:19.740653 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca6c74bbc3b9f1f544184c880c3c4b0d69e79c6f51f86b717003e7ecaef508ca\": container with ID starting with ca6c74bbc3b9f1f544184c880c3c4b0d69e79c6f51f86b717003e7ecaef508ca not found: ID does not exist" containerID="ca6c74bbc3b9f1f544184c880c3c4b0d69e79c6f51f86b717003e7ecaef508ca" Jan 29 06:38:19 crc kubenswrapper[5017]: I0129 06:38:19.740690 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca6c74bbc3b9f1f544184c880c3c4b0d69e79c6f51f86b717003e7ecaef508ca"} err="failed to get container status \"ca6c74bbc3b9f1f544184c880c3c4b0d69e79c6f51f86b717003e7ecaef508ca\": rpc error: code = NotFound desc = could not find container \"ca6c74bbc3b9f1f544184c880c3c4b0d69e79c6f51f86b717003e7ecaef508ca\": container with ID starting with ca6c74bbc3b9f1f544184c880c3c4b0d69e79c6f51f86b717003e7ecaef508ca not found: ID does not exist" Jan 29 06:38:19 crc kubenswrapper[5017]: I0129 06:38:19.740754 5017 scope.go:117] "RemoveContainer" containerID="2866269d861499b6c5aca24846818cd0fd2865fda1f6ee963f076d8811e17a23" Jan 29 06:38:19 crc kubenswrapper[5017]: I0129 06:38:19.795086 5017 scope.go:117] "RemoveContainer" containerID="2866269d861499b6c5aca24846818cd0fd2865fda1f6ee963f076d8811e17a23" Jan 29 06:38:19 crc kubenswrapper[5017]: E0129 06:38:19.798538 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2866269d861499b6c5aca24846818cd0fd2865fda1f6ee963f076d8811e17a23\": container with ID starting with 2866269d861499b6c5aca24846818cd0fd2865fda1f6ee963f076d8811e17a23 not found: ID does not exist" containerID="2866269d861499b6c5aca24846818cd0fd2865fda1f6ee963f076d8811e17a23" Jan 29 06:38:19 crc kubenswrapper[5017]: I0129 06:38:19.798582 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2866269d861499b6c5aca24846818cd0fd2865fda1f6ee963f076d8811e17a23"} err="failed to get container status \"2866269d861499b6c5aca24846818cd0fd2865fda1f6ee963f076d8811e17a23\": rpc error: code = NotFound desc = could not find container \"2866269d861499b6c5aca24846818cd0fd2865fda1f6ee963f076d8811e17a23\": container with ID starting with 2866269d861499b6c5aca24846818cd0fd2865fda1f6ee963f076d8811e17a23 not found: ID does not exist" Jan 29 06:38:19 crc kubenswrapper[5017]: I0129 06:38:19.800229 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c588587d7-ch2nx"] Jan 29 06:38:19 crc kubenswrapper[5017]: I0129 06:38:19.805089 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-78bc4cb977-llbq4"] Jan 29 06:38:19 crc kubenswrapper[5017]: I0129 06:38:19.807902 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-78bc4cb977-llbq4"] Jan 29 06:38:19 crc 
kubenswrapper[5017]: I0129 06:38:19.843244 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59b656664b-b9lws"] Jan 29 06:38:20 crc kubenswrapper[5017]: I0129 06:38:20.326697 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7dbf1855-e8ae-4358-9712-88f925636310" path="/var/lib/kubelet/pods/7dbf1855-e8ae-4358-9712-88f925636310/volumes" Jan 29 06:38:20 crc kubenswrapper[5017]: I0129 06:38:20.328125 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d39be8cf-0d88-454e-ada8-9c846f90f4b6" path="/var/lib/kubelet/pods/d39be8cf-0d88-454e-ada8-9c846f90f4b6/volumes" Jan 29 06:38:20 crc kubenswrapper[5017]: I0129 06:38:20.698194 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-59b656664b-b9lws" event={"ID":"7b948744-9a86-4b73-bc52-f95e4b50c80e","Type":"ContainerStarted","Data":"f447e30a574151e5ab596f108e348d94567b20af938bc3d6fa84bf1306fa384f"} Jan 29 06:38:20 crc kubenswrapper[5017]: I0129 06:38:20.698261 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-59b656664b-b9lws" event={"ID":"7b948744-9a86-4b73-bc52-f95e4b50c80e","Type":"ContainerStarted","Data":"40da1908ca2c8e1a168765a255fa75bde54c90840c50e4a2a51f69a28b6d6173"} Jan 29 06:38:20 crc kubenswrapper[5017]: I0129 06:38:20.698469 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-59b656664b-b9lws" Jan 29 06:38:20 crc kubenswrapper[5017]: I0129 06:38:20.709597 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-59b656664b-b9lws" Jan 29 06:38:20 crc kubenswrapper[5017]: I0129 06:38:20.723345 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-59b656664b-b9lws" podStartSLOduration=3.7233274 podStartE2EDuration="3.7233274s" podCreationTimestamp="2026-01-29 06:38:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:38:20.722192227 +0000 UTC m=+187.096639867" watchObservedRunningTime="2026-01-29 06:38:20.7233274 +0000 UTC m=+187.097775010" Jan 29 06:38:21 crc kubenswrapper[5017]: I0129 06:38:21.866301 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5dc7fc5b5d-8h56n"] Jan 29 06:38:21 crc kubenswrapper[5017]: E0129 06:38:21.866685 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d39be8cf-0d88-454e-ada8-9c846f90f4b6" containerName="controller-manager" Jan 29 06:38:21 crc kubenswrapper[5017]: I0129 06:38:21.866704 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="d39be8cf-0d88-454e-ada8-9c846f90f4b6" containerName="controller-manager" Jan 29 06:38:21 crc kubenswrapper[5017]: I0129 06:38:21.866872 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="d39be8cf-0d88-454e-ada8-9c846f90f4b6" containerName="controller-manager" Jan 29 06:38:21 crc kubenswrapper[5017]: I0129 06:38:21.867570 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5dc7fc5b5d-8h56n" Jan 29 06:38:21 crc kubenswrapper[5017]: I0129 06:38:21.874890 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 29 06:38:21 crc kubenswrapper[5017]: I0129 06:38:21.874944 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 29 06:38:21 crc kubenswrapper[5017]: I0129 06:38:21.875071 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 29 06:38:21 crc kubenswrapper[5017]: I0129 06:38:21.875196 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 29 06:38:21 crc kubenswrapper[5017]: I0129 06:38:21.875533 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 29 06:38:21 crc kubenswrapper[5017]: I0129 06:38:21.875881 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 29 06:38:21 crc kubenswrapper[5017]: I0129 06:38:21.879236 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5dc7fc5b5d-8h56n"] Jan 29 06:38:21 crc kubenswrapper[5017]: I0129 06:38:21.881677 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 29 06:38:22 crc kubenswrapper[5017]: I0129 06:38:22.004303 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/71667a10-cad6-4d77-8f39-7221f358fc86-client-ca\") pod \"controller-manager-5dc7fc5b5d-8h56n\" (UID: \"71667a10-cad6-4d77-8f39-7221f358fc86\") " pod="openshift-controller-manager/controller-manager-5dc7fc5b5d-8h56n" Jan 29 06:38:22 crc kubenswrapper[5017]: I0129 06:38:22.004368 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdcv4\" (UniqueName: \"kubernetes.io/projected/71667a10-cad6-4d77-8f39-7221f358fc86-kube-api-access-sdcv4\") pod \"controller-manager-5dc7fc5b5d-8h56n\" (UID: \"71667a10-cad6-4d77-8f39-7221f358fc86\") " pod="openshift-controller-manager/controller-manager-5dc7fc5b5d-8h56n" Jan 29 06:38:22 crc kubenswrapper[5017]: I0129 06:38:22.004400 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/71667a10-cad6-4d77-8f39-7221f358fc86-proxy-ca-bundles\") pod \"controller-manager-5dc7fc5b5d-8h56n\" (UID: \"71667a10-cad6-4d77-8f39-7221f358fc86\") " pod="openshift-controller-manager/controller-manager-5dc7fc5b5d-8h56n" Jan 29 06:38:22 crc kubenswrapper[5017]: I0129 06:38:22.004608 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71667a10-cad6-4d77-8f39-7221f358fc86-serving-cert\") pod \"controller-manager-5dc7fc5b5d-8h56n\" (UID: \"71667a10-cad6-4d77-8f39-7221f358fc86\") " pod="openshift-controller-manager/controller-manager-5dc7fc5b5d-8h56n" Jan 29 06:38:22 crc kubenswrapper[5017]: I0129 06:38:22.004728 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/71667a10-cad6-4d77-8f39-7221f358fc86-config\") pod \"controller-manager-5dc7fc5b5d-8h56n\" (UID: \"71667a10-cad6-4d77-8f39-7221f358fc86\") " pod="openshift-controller-manager/controller-manager-5dc7fc5b5d-8h56n" Jan 29 06:38:22 crc kubenswrapper[5017]: I0129 06:38:22.106387 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71667a10-cad6-4d77-8f39-7221f358fc86-serving-cert\") pod \"controller-manager-5dc7fc5b5d-8h56n\" (UID: \"71667a10-cad6-4d77-8f39-7221f358fc86\") " pod="openshift-controller-manager/controller-manager-5dc7fc5b5d-8h56n" Jan 29 06:38:22 crc kubenswrapper[5017]: I0129 06:38:22.106458 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71667a10-cad6-4d77-8f39-7221f358fc86-config\") pod \"controller-manager-5dc7fc5b5d-8h56n\" (UID: \"71667a10-cad6-4d77-8f39-7221f358fc86\") " pod="openshift-controller-manager/controller-manager-5dc7fc5b5d-8h56n" Jan 29 06:38:22 crc kubenswrapper[5017]: I0129 06:38:22.106520 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/71667a10-cad6-4d77-8f39-7221f358fc86-client-ca\") pod \"controller-manager-5dc7fc5b5d-8h56n\" (UID: \"71667a10-cad6-4d77-8f39-7221f358fc86\") " pod="openshift-controller-manager/controller-manager-5dc7fc5b5d-8h56n" Jan 29 06:38:22 crc kubenswrapper[5017]: I0129 06:38:22.106547 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdcv4\" (UniqueName: \"kubernetes.io/projected/71667a10-cad6-4d77-8f39-7221f358fc86-kube-api-access-sdcv4\") pod \"controller-manager-5dc7fc5b5d-8h56n\" (UID: \"71667a10-cad6-4d77-8f39-7221f358fc86\") " pod="openshift-controller-manager/controller-manager-5dc7fc5b5d-8h56n" Jan 29 06:38:22 crc kubenswrapper[5017]: I0129 06:38:22.106576 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/71667a10-cad6-4d77-8f39-7221f358fc86-proxy-ca-bundles\") pod \"controller-manager-5dc7fc5b5d-8h56n\" (UID: \"71667a10-cad6-4d77-8f39-7221f358fc86\") " pod="openshift-controller-manager/controller-manager-5dc7fc5b5d-8h56n" Jan 29 06:38:22 crc kubenswrapper[5017]: I0129 06:38:22.108259 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/71667a10-cad6-4d77-8f39-7221f358fc86-proxy-ca-bundles\") pod \"controller-manager-5dc7fc5b5d-8h56n\" (UID: \"71667a10-cad6-4d77-8f39-7221f358fc86\") " pod="openshift-controller-manager/controller-manager-5dc7fc5b5d-8h56n" Jan 29 06:38:22 crc kubenswrapper[5017]: I0129 06:38:22.110624 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71667a10-cad6-4d77-8f39-7221f358fc86-config\") pod \"controller-manager-5dc7fc5b5d-8h56n\" (UID: \"71667a10-cad6-4d77-8f39-7221f358fc86\") " pod="openshift-controller-manager/controller-manager-5dc7fc5b5d-8h56n" Jan 29 06:38:22 crc kubenswrapper[5017]: I0129 06:38:22.110701 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/71667a10-cad6-4d77-8f39-7221f358fc86-client-ca\") pod \"controller-manager-5dc7fc5b5d-8h56n\" (UID: \"71667a10-cad6-4d77-8f39-7221f358fc86\") " pod="openshift-controller-manager/controller-manager-5dc7fc5b5d-8h56n" 
Jan 29 06:38:22 crc kubenswrapper[5017]: I0129 06:38:22.115204 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71667a10-cad6-4d77-8f39-7221f358fc86-serving-cert\") pod \"controller-manager-5dc7fc5b5d-8h56n\" (UID: \"71667a10-cad6-4d77-8f39-7221f358fc86\") " pod="openshift-controller-manager/controller-manager-5dc7fc5b5d-8h56n" Jan 29 06:38:22 crc kubenswrapper[5017]: I0129 06:38:22.131466 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdcv4\" (UniqueName: \"kubernetes.io/projected/71667a10-cad6-4d77-8f39-7221f358fc86-kube-api-access-sdcv4\") pod \"controller-manager-5dc7fc5b5d-8h56n\" (UID: \"71667a10-cad6-4d77-8f39-7221f358fc86\") " pod="openshift-controller-manager/controller-manager-5dc7fc5b5d-8h56n" Jan 29 06:38:22 crc kubenswrapper[5017]: I0129 06:38:22.190646 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5dc7fc5b5d-8h56n" Jan 29 06:38:22 crc kubenswrapper[5017]: I0129 06:38:22.370497 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 06:38:22 crc kubenswrapper[5017]: I0129 06:38:22.639300 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5dc7fc5b5d-8h56n"] Jan 29 06:38:22 crc kubenswrapper[5017]: W0129 06:38:22.657564 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71667a10_cad6_4d77_8f39_7221f358fc86.slice/crio-0a3771c373a3d35fd07a4c1b82ccbbd4a544ce6626e99a16fcd463de64338c49 WatchSource:0}: Error finding container 0a3771c373a3d35fd07a4c1b82ccbbd4a544ce6626e99a16fcd463de64338c49: Status 404 returned error can't find the container with id 0a3771c373a3d35fd07a4c1b82ccbbd4a544ce6626e99a16fcd463de64338c49 Jan 29 06:38:22 crc kubenswrapper[5017]: I0129 06:38:22.713985 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5dc7fc5b5d-8h56n" event={"ID":"71667a10-cad6-4d77-8f39-7221f358fc86","Type":"ContainerStarted","Data":"0a3771c373a3d35fd07a4c1b82ccbbd4a544ce6626e99a16fcd463de64338c49"} Jan 29 06:38:23 crc kubenswrapper[5017]: I0129 06:38:23.721041 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5dc7fc5b5d-8h56n" event={"ID":"71667a10-cad6-4d77-8f39-7221f358fc86","Type":"ContainerStarted","Data":"197429be8be6d99463e65644302b88e33df59886b864bdbe1dc4c3abff502229"} Jan 29 06:38:23 crc kubenswrapper[5017]: I0129 06:38:23.721487 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5dc7fc5b5d-8h56n" Jan 29 06:38:23 crc kubenswrapper[5017]: I0129 06:38:23.725512 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5dc7fc5b5d-8h56n" Jan 29 06:38:23 crc kubenswrapper[5017]: I0129 06:38:23.742290 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5dc7fc5b5d-8h56n" podStartSLOduration=6.742267612 podStartE2EDuration="6.742267612s" podCreationTimestamp="2026-01-29 06:38:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:38:23.737624759 +0000 
UTC m=+190.112072369" watchObservedRunningTime="2026-01-29 06:38:23.742267612 +0000 UTC m=+190.116715222" Jan 29 06:38:24 crc kubenswrapper[5017]: I0129 06:38:24.075151 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-pws6m"] Jan 29 06:38:24 crc kubenswrapper[5017]: I0129 06:38:24.551179 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 29 06:38:24 crc kubenswrapper[5017]: I0129 06:38:24.552050 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 06:38:24 crc kubenswrapper[5017]: I0129 06:38:24.554526 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 29 06:38:24 crc kubenswrapper[5017]: I0129 06:38:24.554872 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 29 06:38:24 crc kubenswrapper[5017]: I0129 06:38:24.561045 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 29 06:38:24 crc kubenswrapper[5017]: I0129 06:38:24.643030 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e4a2b28b-dbe5-420c-9df9-4b0237951d8b-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"e4a2b28b-dbe5-420c-9df9-4b0237951d8b\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 06:38:24 crc kubenswrapper[5017]: I0129 06:38:24.643515 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e4a2b28b-dbe5-420c-9df9-4b0237951d8b-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"e4a2b28b-dbe5-420c-9df9-4b0237951d8b\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 06:38:24 crc kubenswrapper[5017]: I0129 06:38:24.693939 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-lgqwh" Jan 29 06:38:24 crc kubenswrapper[5017]: I0129 06:38:24.694250 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-lgqwh" Jan 29 06:38:24 crc kubenswrapper[5017]: I0129 06:38:24.746160 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e4a2b28b-dbe5-420c-9df9-4b0237951d8b-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"e4a2b28b-dbe5-420c-9df9-4b0237951d8b\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 06:38:24 crc kubenswrapper[5017]: I0129 06:38:24.746220 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e4a2b28b-dbe5-420c-9df9-4b0237951d8b-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"e4a2b28b-dbe5-420c-9df9-4b0237951d8b\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 06:38:24 crc kubenswrapper[5017]: I0129 06:38:24.746363 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e4a2b28b-dbe5-420c-9df9-4b0237951d8b-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"e4a2b28b-dbe5-420c-9df9-4b0237951d8b\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 06:38:24 crc 
kubenswrapper[5017]: I0129 06:38:24.774522 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e4a2b28b-dbe5-420c-9df9-4b0237951d8b-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"e4a2b28b-dbe5-420c-9df9-4b0237951d8b\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 06:38:24 crc kubenswrapper[5017]: I0129 06:38:24.871005 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 06:38:24 crc kubenswrapper[5017]: I0129 06:38:24.887594 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rnzhc" Jan 29 06:38:24 crc kubenswrapper[5017]: I0129 06:38:24.887642 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rnzhc" Jan 29 06:38:25 crc kubenswrapper[5017]: I0129 06:38:25.099554 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 29 06:38:25 crc kubenswrapper[5017]: I0129 06:38:25.138884 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rnzhc" Jan 29 06:38:25 crc kubenswrapper[5017]: I0129 06:38:25.139011 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-lgqwh" Jan 29 06:38:25 crc kubenswrapper[5017]: I0129 06:38:25.733778 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"e4a2b28b-dbe5-420c-9df9-4b0237951d8b","Type":"ContainerStarted","Data":"a6178d13eaeffae1bd073d0c56af0400afd175b5119edd915e417d8c7bfc7366"} Jan 29 06:38:25 crc kubenswrapper[5017]: I0129 06:38:25.779430 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-lgqwh" Jan 29 06:38:25 crc kubenswrapper[5017]: I0129 06:38:25.798057 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rnzhc" Jan 29 06:38:26 crc kubenswrapper[5017]: I0129 06:38:26.539462 5017 patch_prober.go:28] interesting pod/machine-config-daemon-895pl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 06:38:26 crc kubenswrapper[5017]: I0129 06:38:26.539558 5017 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 06:38:26 crc kubenswrapper[5017]: I0129 06:38:26.749120 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"e4a2b28b-dbe5-420c-9df9-4b0237951d8b","Type":"ContainerStarted","Data":"a783ebea601d96ba8b5fd4956b2e4760428ee1c3c570876eb5c5eb1071ec14a3"} Jan 29 06:38:26 crc kubenswrapper[5017]: I0129 06:38:26.773498 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=2.773474978 podStartE2EDuration="2.773474978s" podCreationTimestamp="2026-01-29 
06:38:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:38:26.770858352 +0000 UTC m=+193.145305992" watchObservedRunningTime="2026-01-29 06:38:26.773474978 +0000 UTC m=+193.147922588" Jan 29 06:38:27 crc kubenswrapper[5017]: I0129 06:38:27.153687 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lgqwh"] Jan 29 06:38:27 crc kubenswrapper[5017]: I0129 06:38:27.757070 5017 generic.go:334] "Generic (PLEG): container finished" podID="e4a2b28b-dbe5-420c-9df9-4b0237951d8b" containerID="a783ebea601d96ba8b5fd4956b2e4760428ee1c3c570876eb5c5eb1071ec14a3" exitCode=0 Jan 29 06:38:27 crc kubenswrapper[5017]: I0129 06:38:27.757316 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-lgqwh" podUID="f35dd9e9-ca07-483a-a5bf-2179d9d705c6" containerName="registry-server" containerID="cri-o://52dcca43f80ae265bbf4bcebcc47fda0d53941739034da48bb3d28a2f0d75e2b" gracePeriod=2 Jan 29 06:38:27 crc kubenswrapper[5017]: I0129 06:38:27.757477 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"e4a2b28b-dbe5-420c-9df9-4b0237951d8b","Type":"ContainerDied","Data":"a783ebea601d96ba8b5fd4956b2e4760428ee1c3c570876eb5c5eb1071ec14a3"} Jan 29 06:38:27 crc kubenswrapper[5017]: I0129 06:38:27.959810 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4ljw2" Jan 29 06:38:27 crc kubenswrapper[5017]: I0129 06:38:27.960485 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4ljw2" Jan 29 06:38:28 crc kubenswrapper[5017]: I0129 06:38:28.005138 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4ljw2" Jan 29 06:38:28 crc kubenswrapper[5017]: I0129 06:38:28.156576 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rnzhc"] Jan 29 06:38:28 crc kubenswrapper[5017]: I0129 06:38:28.156855 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rnzhc" podUID="3deabecf-5a0e-454c-a622-b3b422c3a6bb" containerName="registry-server" containerID="cri-o://bac0a4d113a6b0bcd16f61fdd85a3b8def383761a73640cafcaeb5c96bcef7ad" gracePeriod=2 Jan 29 06:38:28 crc kubenswrapper[5017]: I0129 06:38:28.778904 5017 generic.go:334] "Generic (PLEG): container finished" podID="3deabecf-5a0e-454c-a622-b3b422c3a6bb" containerID="bac0a4d113a6b0bcd16f61fdd85a3b8def383761a73640cafcaeb5c96bcef7ad" exitCode=0 Jan 29 06:38:28 crc kubenswrapper[5017]: I0129 06:38:28.779008 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rnzhc" event={"ID":"3deabecf-5a0e-454c-a622-b3b422c3a6bb","Type":"ContainerDied","Data":"bac0a4d113a6b0bcd16f61fdd85a3b8def383761a73640cafcaeb5c96bcef7ad"} Jan 29 06:38:28 crc kubenswrapper[5017]: I0129 06:38:28.782786 5017 generic.go:334] "Generic (PLEG): container finished" podID="f35dd9e9-ca07-483a-a5bf-2179d9d705c6" containerID="52dcca43f80ae265bbf4bcebcc47fda0d53941739034da48bb3d28a2f0d75e2b" exitCode=0 Jan 29 06:38:28 crc kubenswrapper[5017]: I0129 06:38:28.783047 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lgqwh" 
event={"ID":"f35dd9e9-ca07-483a-a5bf-2179d9d705c6","Type":"ContainerDied","Data":"52dcca43f80ae265bbf4bcebcc47fda0d53941739034da48bb3d28a2f0d75e2b"} Jan 29 06:38:28 crc kubenswrapper[5017]: I0129 06:38:28.855887 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4ljw2" Jan 29 06:38:29 crc kubenswrapper[5017]: I0129 06:38:29.061250 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lgqwh" Jan 29 06:38:29 crc kubenswrapper[5017]: I0129 06:38:29.216428 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f35dd9e9-ca07-483a-a5bf-2179d9d705c6-utilities\") pod \"f35dd9e9-ca07-483a-a5bf-2179d9d705c6\" (UID: \"f35dd9e9-ca07-483a-a5bf-2179d9d705c6\") " Jan 29 06:38:29 crc kubenswrapper[5017]: I0129 06:38:29.216468 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f35dd9e9-ca07-483a-a5bf-2179d9d705c6-catalog-content\") pod \"f35dd9e9-ca07-483a-a5bf-2179d9d705c6\" (UID: \"f35dd9e9-ca07-483a-a5bf-2179d9d705c6\") " Jan 29 06:38:29 crc kubenswrapper[5017]: I0129 06:38:29.216542 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dmbk7\" (UniqueName: \"kubernetes.io/projected/f35dd9e9-ca07-483a-a5bf-2179d9d705c6-kube-api-access-dmbk7\") pod \"f35dd9e9-ca07-483a-a5bf-2179d9d705c6\" (UID: \"f35dd9e9-ca07-483a-a5bf-2179d9d705c6\") " Jan 29 06:38:29 crc kubenswrapper[5017]: I0129 06:38:29.217494 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f35dd9e9-ca07-483a-a5bf-2179d9d705c6-utilities" (OuterVolumeSpecName: "utilities") pod "f35dd9e9-ca07-483a-a5bf-2179d9d705c6" (UID: "f35dd9e9-ca07-483a-a5bf-2179d9d705c6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:38:29 crc kubenswrapper[5017]: I0129 06:38:29.226308 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f35dd9e9-ca07-483a-a5bf-2179d9d705c6-kube-api-access-dmbk7" (OuterVolumeSpecName: "kube-api-access-dmbk7") pod "f35dd9e9-ca07-483a-a5bf-2179d9d705c6" (UID: "f35dd9e9-ca07-483a-a5bf-2179d9d705c6"). InnerVolumeSpecName "kube-api-access-dmbk7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:38:29 crc kubenswrapper[5017]: I0129 06:38:29.233648 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 06:38:29 crc kubenswrapper[5017]: I0129 06:38:29.302841 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f35dd9e9-ca07-483a-a5bf-2179d9d705c6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f35dd9e9-ca07-483a-a5bf-2179d9d705c6" (UID: "f35dd9e9-ca07-483a-a5bf-2179d9d705c6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:38:29 crc kubenswrapper[5017]: I0129 06:38:29.317640 5017 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f35dd9e9-ca07-483a-a5bf-2179d9d705c6-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 06:38:29 crc kubenswrapper[5017]: I0129 06:38:29.317665 5017 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f35dd9e9-ca07-483a-a5bf-2179d9d705c6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 06:38:29 crc kubenswrapper[5017]: I0129 06:38:29.317677 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dmbk7\" (UniqueName: \"kubernetes.io/projected/f35dd9e9-ca07-483a-a5bf-2179d9d705c6-kube-api-access-dmbk7\") on node \"crc\" DevicePath \"\"" Jan 29 06:38:29 crc kubenswrapper[5017]: I0129 06:38:29.391544 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rnzhc" Jan 29 06:38:29 crc kubenswrapper[5017]: I0129 06:38:29.418213 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e4a2b28b-dbe5-420c-9df9-4b0237951d8b-kubelet-dir\") pod \"e4a2b28b-dbe5-420c-9df9-4b0237951d8b\" (UID: \"e4a2b28b-dbe5-420c-9df9-4b0237951d8b\") " Jan 29 06:38:29 crc kubenswrapper[5017]: I0129 06:38:29.418289 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e4a2b28b-dbe5-420c-9df9-4b0237951d8b-kube-api-access\") pod \"e4a2b28b-dbe5-420c-9df9-4b0237951d8b\" (UID: \"e4a2b28b-dbe5-420c-9df9-4b0237951d8b\") " Jan 29 06:38:29 crc kubenswrapper[5017]: I0129 06:38:29.418330 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e4a2b28b-dbe5-420c-9df9-4b0237951d8b-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "e4a2b28b-dbe5-420c-9df9-4b0237951d8b" (UID: "e4a2b28b-dbe5-420c-9df9-4b0237951d8b"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 06:38:29 crc kubenswrapper[5017]: I0129 06:38:29.418645 5017 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e4a2b28b-dbe5-420c-9df9-4b0237951d8b-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 29 06:38:29 crc kubenswrapper[5017]: I0129 06:38:29.424654 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4a2b28b-dbe5-420c-9df9-4b0237951d8b-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e4a2b28b-dbe5-420c-9df9-4b0237951d8b" (UID: "e4a2b28b-dbe5-420c-9df9-4b0237951d8b"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:38:29 crc kubenswrapper[5017]: I0129 06:38:29.519289 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3deabecf-5a0e-454c-a622-b3b422c3a6bb-catalog-content\") pod \"3deabecf-5a0e-454c-a622-b3b422c3a6bb\" (UID: \"3deabecf-5a0e-454c-a622-b3b422c3a6bb\") " Jan 29 06:38:29 crc kubenswrapper[5017]: I0129 06:38:29.519881 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dfth9\" (UniqueName: \"kubernetes.io/projected/3deabecf-5a0e-454c-a622-b3b422c3a6bb-kube-api-access-dfth9\") pod \"3deabecf-5a0e-454c-a622-b3b422c3a6bb\" (UID: \"3deabecf-5a0e-454c-a622-b3b422c3a6bb\") " Jan 29 06:38:29 crc kubenswrapper[5017]: I0129 06:38:29.520020 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3deabecf-5a0e-454c-a622-b3b422c3a6bb-utilities\") pod \"3deabecf-5a0e-454c-a622-b3b422c3a6bb\" (UID: \"3deabecf-5a0e-454c-a622-b3b422c3a6bb\") " Jan 29 06:38:29 crc kubenswrapper[5017]: I0129 06:38:29.520359 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e4a2b28b-dbe5-420c-9df9-4b0237951d8b-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 06:38:29 crc kubenswrapper[5017]: I0129 06:38:29.520599 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3deabecf-5a0e-454c-a622-b3b422c3a6bb-utilities" (OuterVolumeSpecName: "utilities") pod "3deabecf-5a0e-454c-a622-b3b422c3a6bb" (UID: "3deabecf-5a0e-454c-a622-b3b422c3a6bb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:38:29 crc kubenswrapper[5017]: I0129 06:38:29.524044 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3deabecf-5a0e-454c-a622-b3b422c3a6bb-kube-api-access-dfth9" (OuterVolumeSpecName: "kube-api-access-dfth9") pod "3deabecf-5a0e-454c-a622-b3b422c3a6bb" (UID: "3deabecf-5a0e-454c-a622-b3b422c3a6bb"). InnerVolumeSpecName "kube-api-access-dfth9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:38:29 crc kubenswrapper[5017]: I0129 06:38:29.587064 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3deabecf-5a0e-454c-a622-b3b422c3a6bb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3deabecf-5a0e-454c-a622-b3b422c3a6bb" (UID: "3deabecf-5a0e-454c-a622-b3b422c3a6bb"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:38:29 crc kubenswrapper[5017]: I0129 06:38:29.622618 5017 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3deabecf-5a0e-454c-a622-b3b422c3a6bb-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 06:38:29 crc kubenswrapper[5017]: I0129 06:38:29.622659 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dfth9\" (UniqueName: \"kubernetes.io/projected/3deabecf-5a0e-454c-a622-b3b422c3a6bb-kube-api-access-dfth9\") on node \"crc\" DevicePath \"\"" Jan 29 06:38:29 crc kubenswrapper[5017]: I0129 06:38:29.622674 5017 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3deabecf-5a0e-454c-a622-b3b422c3a6bb-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 06:38:29 crc kubenswrapper[5017]: I0129 06:38:29.790380 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2cqwl" event={"ID":"d8f46fb7-929f-4d96-a5ca-4fc475b78342","Type":"ContainerStarted","Data":"900c8d4c1518329eb7f4da32b0c4e2ca477dc4ceb5c0ddfe6d6707ead8441f10"} Jan 29 06:38:29 crc kubenswrapper[5017]: I0129 06:38:29.792669 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lgqwh" event={"ID":"f35dd9e9-ca07-483a-a5bf-2179d9d705c6","Type":"ContainerDied","Data":"cd1f7e90fc6c75952b343e6f8a71e4ea4efbb63818a410cc81377fd90f501065"} Jan 29 06:38:29 crc kubenswrapper[5017]: I0129 06:38:29.792825 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lgqwh" Jan 29 06:38:29 crc kubenswrapper[5017]: I0129 06:38:29.795702 5017 scope.go:117] "RemoveContainer" containerID="52dcca43f80ae265bbf4bcebcc47fda0d53941739034da48bb3d28a2f0d75e2b" Jan 29 06:38:29 crc kubenswrapper[5017]: I0129 06:38:29.797002 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"e4a2b28b-dbe5-420c-9df9-4b0237951d8b","Type":"ContainerDied","Data":"a6178d13eaeffae1bd073d0c56af0400afd175b5119edd915e417d8c7bfc7366"} Jan 29 06:38:29 crc kubenswrapper[5017]: I0129 06:38:29.797054 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a6178d13eaeffae1bd073d0c56af0400afd175b5119edd915e417d8c7bfc7366" Jan 29 06:38:29 crc kubenswrapper[5017]: I0129 06:38:29.798320 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 06:38:29 crc kubenswrapper[5017]: I0129 06:38:29.802145 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rnzhc" event={"ID":"3deabecf-5a0e-454c-a622-b3b422c3a6bb","Type":"ContainerDied","Data":"79c574cacd5409f1ceea322251b8c5230cc0f79b4c896f928306ee74fdfb3955"} Jan 29 06:38:29 crc kubenswrapper[5017]: I0129 06:38:29.802688 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rnzhc" Jan 29 06:38:29 crc kubenswrapper[5017]: I0129 06:38:29.803801 5017 generic.go:334] "Generic (PLEG): container finished" podID="402aa844-38d7-44aa-bfa8-8db490d3aa4b" containerID="57cda62141b6acdb714618a2736cbb70b693148a728f3f7924614e720127c108" exitCode=0 Jan 29 06:38:29 crc kubenswrapper[5017]: I0129 06:38:29.803870 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zcr4n" event={"ID":"402aa844-38d7-44aa-bfa8-8db490d3aa4b","Type":"ContainerDied","Data":"57cda62141b6acdb714618a2736cbb70b693148a728f3f7924614e720127c108"} Jan 29 06:38:29 crc kubenswrapper[5017]: I0129 06:38:29.813236 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8l5h7" event={"ID":"a83caf08-35b0-460b-ba5e-1db0c6cab902","Type":"ContainerStarted","Data":"84b911a398841097e37db133ee7a27d73db7f5a3cbe3e84ebb3788cea3514acf"} Jan 29 06:38:29 crc kubenswrapper[5017]: I0129 06:38:29.814828 5017 scope.go:117] "RemoveContainer" containerID="7503daa34c084e53086b265e61a6f43b1d863854c6e943d90f8a248a6f0c5984" Jan 29 06:38:29 crc kubenswrapper[5017]: I0129 06:38:29.841051 5017 scope.go:117] "RemoveContainer" containerID="d227d4631e82a4a8a838caa548ce81a63fd4d91077ffe67d3c58919820fdd898" Jan 29 06:38:29 crc kubenswrapper[5017]: I0129 06:38:29.858566 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rnzhc"] Jan 29 06:38:29 crc kubenswrapper[5017]: I0129 06:38:29.863227 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rnzhc"] Jan 29 06:38:29 crc kubenswrapper[5017]: I0129 06:38:29.864380 5017 scope.go:117] "RemoveContainer" containerID="bac0a4d113a6b0bcd16f61fdd85a3b8def383761a73640cafcaeb5c96bcef7ad" Jan 29 06:38:29 crc kubenswrapper[5017]: I0129 06:38:29.889903 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lgqwh"] Jan 29 06:38:29 crc kubenswrapper[5017]: I0129 06:38:29.893832 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-lgqwh"] Jan 29 06:38:29 crc kubenswrapper[5017]: I0129 06:38:29.895499 5017 scope.go:117] "RemoveContainer" containerID="f273d5d72547b80f864bb2a640f8ae5005bae48e30db37e7065e11ec6143facd" Jan 29 06:38:29 crc kubenswrapper[5017]: I0129 06:38:29.910758 5017 scope.go:117] "RemoveContainer" containerID="e4719b5859fe2d77193f7a99b8cf52f589094058414e4305256890882e2bbb1d" Jan 29 06:38:30 crc kubenswrapper[5017]: I0129 06:38:30.327157 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3deabecf-5a0e-454c-a622-b3b422c3a6bb" path="/var/lib/kubelet/pods/3deabecf-5a0e-454c-a622-b3b422c3a6bb/volumes" Jan 29 06:38:30 crc kubenswrapper[5017]: I0129 06:38:30.327857 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f35dd9e9-ca07-483a-a5bf-2179d9d705c6" path="/var/lib/kubelet/pods/f35dd9e9-ca07-483a-a5bf-2179d9d705c6/volumes" Jan 29 06:38:30 crc kubenswrapper[5017]: I0129 06:38:30.823567 5017 generic.go:334] "Generic (PLEG): container finished" podID="d8f46fb7-929f-4d96-a5ca-4fc475b78342" containerID="900c8d4c1518329eb7f4da32b0c4e2ca477dc4ceb5c0ddfe6d6707ead8441f10" exitCode=0 Jan 29 06:38:30 crc kubenswrapper[5017]: I0129 06:38:30.823688 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2cqwl" 
event={"ID":"d8f46fb7-929f-4d96-a5ca-4fc475b78342","Type":"ContainerDied","Data":"900c8d4c1518329eb7f4da32b0c4e2ca477dc4ceb5c0ddfe6d6707ead8441f10"} Jan 29 06:38:30 crc kubenswrapper[5017]: I0129 06:38:30.830259 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zcr4n" event={"ID":"402aa844-38d7-44aa-bfa8-8db490d3aa4b","Type":"ContainerStarted","Data":"2cd22b3eea5fc258b0b900a9a83d32c32404ac37942ed4f023fb927ef9bfb301"} Jan 29 06:38:30 crc kubenswrapper[5017]: I0129 06:38:30.831880 5017 generic.go:334] "Generic (PLEG): container finished" podID="a83caf08-35b0-460b-ba5e-1db0c6cab902" containerID="84b911a398841097e37db133ee7a27d73db7f5a3cbe3e84ebb3788cea3514acf" exitCode=0 Jan 29 06:38:30 crc kubenswrapper[5017]: I0129 06:38:30.831933 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8l5h7" event={"ID":"a83caf08-35b0-460b-ba5e-1db0c6cab902","Type":"ContainerDied","Data":"84b911a398841097e37db133ee7a27d73db7f5a3cbe3e84ebb3788cea3514acf"} Jan 29 06:38:30 crc kubenswrapper[5017]: I0129 06:38:30.835024 5017 generic.go:334] "Generic (PLEG): container finished" podID="95fabedb-25f2-43b3-a1dc-907c7e3ad4c2" containerID="dd6aa5cec1511c954980819c3362e07f28d70dfc4710319fef83738761cd2050" exitCode=0 Jan 29 06:38:30 crc kubenswrapper[5017]: I0129 06:38:30.835047 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-969vd" event={"ID":"95fabedb-25f2-43b3-a1dc-907c7e3ad4c2","Type":"ContainerDied","Data":"dd6aa5cec1511c954980819c3362e07f28d70dfc4710319fef83738761cd2050"} Jan 29 06:38:30 crc kubenswrapper[5017]: I0129 06:38:30.900627 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-zcr4n" podStartSLOduration=2.608857907 podStartE2EDuration="45.900608032s" podCreationTimestamp="2026-01-29 06:37:45 +0000 UTC" firstStartedPulling="2026-01-29 06:37:46.979810376 +0000 UTC m=+153.354257986" lastFinishedPulling="2026-01-29 06:38:30.271560501 +0000 UTC m=+196.646008111" observedRunningTime="2026-01-29 06:38:30.898786879 +0000 UTC m=+197.273234499" watchObservedRunningTime="2026-01-29 06:38:30.900608032 +0000 UTC m=+197.275055642" Jan 29 06:38:31 crc kubenswrapper[5017]: I0129 06:38:31.553982 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4ljw2"] Jan 29 06:38:31 crc kubenswrapper[5017]: I0129 06:38:31.554946 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-4ljw2" podUID="9749d75a-01c6-42c9-a642-8c51895c9cbf" containerName="registry-server" containerID="cri-o://34d823b1b9240527cbfbd56ab5eb7f8fe49c9a3a3d383676bb9c45ad8460b537" gracePeriod=2 Jan 29 06:38:31 crc kubenswrapper[5017]: I0129 06:38:31.847045 5017 generic.go:334] "Generic (PLEG): container finished" podID="9749d75a-01c6-42c9-a642-8c51895c9cbf" containerID="34d823b1b9240527cbfbd56ab5eb7f8fe49c9a3a3d383676bb9c45ad8460b537" exitCode=0 Jan 29 06:38:31 crc kubenswrapper[5017]: I0129 06:38:31.847141 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4ljw2" event={"ID":"9749d75a-01c6-42c9-a642-8c51895c9cbf","Type":"ContainerDied","Data":"34d823b1b9240527cbfbd56ab5eb7f8fe49c9a3a3d383676bb9c45ad8460b537"} Jan 29 06:38:31 crc kubenswrapper[5017]: I0129 06:38:31.849840 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-969vd" 
event={"ID":"95fabedb-25f2-43b3-a1dc-907c7e3ad4c2","Type":"ContainerStarted","Data":"16fad0f8ff0eea5356cbcb34689747c4929f7352e4bd570c1870737ec2b6006d"} Jan 29 06:38:31 crc kubenswrapper[5017]: I0129 06:38:31.853176 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2cqwl" event={"ID":"d8f46fb7-929f-4d96-a5ca-4fc475b78342","Type":"ContainerStarted","Data":"ab90f0beb742598659b96e89b85524ea4d3ef0a52fad36e1630c37806d1b9613"} Jan 29 06:38:31 crc kubenswrapper[5017]: I0129 06:38:31.855134 5017 generic.go:334] "Generic (PLEG): container finished" podID="515e73a4-3f9f-40aa-bd4b-c4ac2d55f304" containerID="a911113f9cd77647fa411174b38a432e61247a5040ddd928ec0b1d05f95be57a" exitCode=0 Jan 29 06:38:31 crc kubenswrapper[5017]: I0129 06:38:31.855222 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jxrzm" event={"ID":"515e73a4-3f9f-40aa-bd4b-c4ac2d55f304","Type":"ContainerDied","Data":"a911113f9cd77647fa411174b38a432e61247a5040ddd928ec0b1d05f95be57a"} Jan 29 06:38:31 crc kubenswrapper[5017]: I0129 06:38:31.861329 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8l5h7" event={"ID":"a83caf08-35b0-460b-ba5e-1db0c6cab902","Type":"ContainerStarted","Data":"b0d69d6ff3f2e0bb0fbc1e672103b56d50b076857ca6d6947071991faeb727fe"} Jan 29 06:38:31 crc kubenswrapper[5017]: I0129 06:38:31.869160 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-969vd" podStartSLOduration=3.483042783 podStartE2EDuration="48.869147549s" podCreationTimestamp="2026-01-29 06:37:43 +0000 UTC" firstStartedPulling="2026-01-29 06:37:45.953353146 +0000 UTC m=+152.327800756" lastFinishedPulling="2026-01-29 06:38:31.339457912 +0000 UTC m=+197.713905522" observedRunningTime="2026-01-29 06:38:31.868463928 +0000 UTC m=+198.242911538" watchObservedRunningTime="2026-01-29 06:38:31.869147549 +0000 UTC m=+198.243595159" Jan 29 06:38:31 crc kubenswrapper[5017]: I0129 06:38:31.923887 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-8l5h7" podStartSLOduration=3.806078475 podStartE2EDuration="45.923870396s" podCreationTimestamp="2026-01-29 06:37:46 +0000 UTC" firstStartedPulling="2026-01-29 06:37:49.182946918 +0000 UTC m=+155.557394528" lastFinishedPulling="2026-01-29 06:38:31.300738839 +0000 UTC m=+197.675186449" observedRunningTime="2026-01-29 06:38:31.920077013 +0000 UTC m=+198.294524633" watchObservedRunningTime="2026-01-29 06:38:31.923870396 +0000 UTC m=+198.298317996" Jan 29 06:38:31 crc kubenswrapper[5017]: I0129 06:38:31.924572 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2cqwl" podStartSLOduration=2.8919582139999997 podStartE2EDuration="44.924566446s" podCreationTimestamp="2026-01-29 06:37:47 +0000 UTC" firstStartedPulling="2026-01-29 06:37:49.190020091 +0000 UTC m=+155.564467701" lastFinishedPulling="2026-01-29 06:38:31.222628283 +0000 UTC m=+197.597075933" observedRunningTime="2026-01-29 06:38:31.901614728 +0000 UTC m=+198.276062338" watchObservedRunningTime="2026-01-29 06:38:31.924566446 +0000 UTC m=+198.299014056" Jan 29 06:38:32 crc kubenswrapper[5017]: I0129 06:38:32.068760 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4ljw2" Jan 29 06:38:32 crc kubenswrapper[5017]: I0129 06:38:32.156268 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5z6k\" (UniqueName: \"kubernetes.io/projected/9749d75a-01c6-42c9-a642-8c51895c9cbf-kube-api-access-z5z6k\") pod \"9749d75a-01c6-42c9-a642-8c51895c9cbf\" (UID: \"9749d75a-01c6-42c9-a642-8c51895c9cbf\") " Jan 29 06:38:32 crc kubenswrapper[5017]: I0129 06:38:32.156408 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9749d75a-01c6-42c9-a642-8c51895c9cbf-catalog-content\") pod \"9749d75a-01c6-42c9-a642-8c51895c9cbf\" (UID: \"9749d75a-01c6-42c9-a642-8c51895c9cbf\") " Jan 29 06:38:32 crc kubenswrapper[5017]: I0129 06:38:32.156505 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9749d75a-01c6-42c9-a642-8c51895c9cbf-utilities\") pod \"9749d75a-01c6-42c9-a642-8c51895c9cbf\" (UID: \"9749d75a-01c6-42c9-a642-8c51895c9cbf\") " Jan 29 06:38:32 crc kubenswrapper[5017]: I0129 06:38:32.157436 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9749d75a-01c6-42c9-a642-8c51895c9cbf-utilities" (OuterVolumeSpecName: "utilities") pod "9749d75a-01c6-42c9-a642-8c51895c9cbf" (UID: "9749d75a-01c6-42c9-a642-8c51895c9cbf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:38:32 crc kubenswrapper[5017]: I0129 06:38:32.164692 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9749d75a-01c6-42c9-a642-8c51895c9cbf-kube-api-access-z5z6k" (OuterVolumeSpecName: "kube-api-access-z5z6k") pod "9749d75a-01c6-42c9-a642-8c51895c9cbf" (UID: "9749d75a-01c6-42c9-a642-8c51895c9cbf"). InnerVolumeSpecName "kube-api-access-z5z6k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:38:32 crc kubenswrapper[5017]: I0129 06:38:32.257582 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z5z6k\" (UniqueName: \"kubernetes.io/projected/9749d75a-01c6-42c9-a642-8c51895c9cbf-kube-api-access-z5z6k\") on node \"crc\" DevicePath \"\"" Jan 29 06:38:32 crc kubenswrapper[5017]: I0129 06:38:32.257621 5017 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9749d75a-01c6-42c9-a642-8c51895c9cbf-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 06:38:32 crc kubenswrapper[5017]: I0129 06:38:32.302725 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9749d75a-01c6-42c9-a642-8c51895c9cbf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9749d75a-01c6-42c9-a642-8c51895c9cbf" (UID: "9749d75a-01c6-42c9-a642-8c51895c9cbf"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:38:32 crc kubenswrapper[5017]: I0129 06:38:32.358880 5017 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9749d75a-01c6-42c9-a642-8c51895c9cbf-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 06:38:32 crc kubenswrapper[5017]: I0129 06:38:32.870144 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jxrzm" event={"ID":"515e73a4-3f9f-40aa-bd4b-c4ac2d55f304","Type":"ContainerStarted","Data":"f6d46c6a33a3505b53f209f42e17502705f15c272a899a6e6a39d557d7beda37"} Jan 29 06:38:32 crc kubenswrapper[5017]: I0129 06:38:32.874080 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4ljw2" event={"ID":"9749d75a-01c6-42c9-a642-8c51895c9cbf","Type":"ContainerDied","Data":"c4a0569b92481e287993ad4bab3712074e37ef86550ba852879c4a29f6896375"} Jan 29 06:38:32 crc kubenswrapper[5017]: I0129 06:38:32.874122 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4ljw2" Jan 29 06:38:32 crc kubenswrapper[5017]: I0129 06:38:32.874158 5017 scope.go:117] "RemoveContainer" containerID="34d823b1b9240527cbfbd56ab5eb7f8fe49c9a3a3d383676bb9c45ad8460b537" Jan 29 06:38:32 crc kubenswrapper[5017]: I0129 06:38:32.893859 5017 scope.go:117] "RemoveContainer" containerID="7bf1a17713c7ce6bec4714e5b284f357fc8be0bdbbfd9487714071dc4b036686" Jan 29 06:38:32 crc kubenswrapper[5017]: I0129 06:38:32.896875 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jxrzm" podStartSLOduration=2.537753843 podStartE2EDuration="48.896862893s" podCreationTimestamp="2026-01-29 06:37:44 +0000 UTC" firstStartedPulling="2026-01-29 06:37:45.927567415 +0000 UTC m=+152.302015025" lastFinishedPulling="2026-01-29 06:38:32.286676465 +0000 UTC m=+198.661124075" observedRunningTime="2026-01-29 06:38:32.894562095 +0000 UTC m=+199.269009725" watchObservedRunningTime="2026-01-29 06:38:32.896862893 +0000 UTC m=+199.271310503" Jan 29 06:38:32 crc kubenswrapper[5017]: I0129 06:38:32.913047 5017 scope.go:117] "RemoveContainer" containerID="b6ed2b77f1234f13601058ef29bda01ffe04a830f5d71be8fe6d27b681c6175e" Jan 29 06:38:32 crc kubenswrapper[5017]: I0129 06:38:32.922543 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4ljw2"] Jan 29 06:38:32 crc kubenswrapper[5017]: I0129 06:38:32.927160 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-4ljw2"] Jan 29 06:38:33 crc kubenswrapper[5017]: I0129 06:38:33.348909 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 29 06:38:33 crc kubenswrapper[5017]: E0129 06:38:33.349143 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f35dd9e9-ca07-483a-a5bf-2179d9d705c6" containerName="registry-server" Jan 29 06:38:33 crc kubenswrapper[5017]: I0129 06:38:33.349157 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="f35dd9e9-ca07-483a-a5bf-2179d9d705c6" containerName="registry-server" Jan 29 06:38:33 crc kubenswrapper[5017]: E0129 06:38:33.349165 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3deabecf-5a0e-454c-a622-b3b422c3a6bb" containerName="registry-server" Jan 29 06:38:33 crc kubenswrapper[5017]: I0129 06:38:33.349172 5017 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="3deabecf-5a0e-454c-a622-b3b422c3a6bb" containerName="registry-server" Jan 29 06:38:33 crc kubenswrapper[5017]: E0129 06:38:33.349187 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9749d75a-01c6-42c9-a642-8c51895c9cbf" containerName="extract-utilities" Jan 29 06:38:33 crc kubenswrapper[5017]: I0129 06:38:33.349196 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="9749d75a-01c6-42c9-a642-8c51895c9cbf" containerName="extract-utilities" Jan 29 06:38:33 crc kubenswrapper[5017]: E0129 06:38:33.349202 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3deabecf-5a0e-454c-a622-b3b422c3a6bb" containerName="extract-content" Jan 29 06:38:33 crc kubenswrapper[5017]: I0129 06:38:33.349207 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="3deabecf-5a0e-454c-a622-b3b422c3a6bb" containerName="extract-content" Jan 29 06:38:33 crc kubenswrapper[5017]: E0129 06:38:33.349218 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f35dd9e9-ca07-483a-a5bf-2179d9d705c6" containerName="extract-utilities" Jan 29 06:38:33 crc kubenswrapper[5017]: I0129 06:38:33.349223 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="f35dd9e9-ca07-483a-a5bf-2179d9d705c6" containerName="extract-utilities" Jan 29 06:38:33 crc kubenswrapper[5017]: E0129 06:38:33.349230 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3deabecf-5a0e-454c-a622-b3b422c3a6bb" containerName="extract-utilities" Jan 29 06:38:33 crc kubenswrapper[5017]: I0129 06:38:33.349237 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="3deabecf-5a0e-454c-a622-b3b422c3a6bb" containerName="extract-utilities" Jan 29 06:38:33 crc kubenswrapper[5017]: E0129 06:38:33.349245 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4a2b28b-dbe5-420c-9df9-4b0237951d8b" containerName="pruner" Jan 29 06:38:33 crc kubenswrapper[5017]: I0129 06:38:33.349251 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4a2b28b-dbe5-420c-9df9-4b0237951d8b" containerName="pruner" Jan 29 06:38:33 crc kubenswrapper[5017]: E0129 06:38:33.349258 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9749d75a-01c6-42c9-a642-8c51895c9cbf" containerName="registry-server" Jan 29 06:38:33 crc kubenswrapper[5017]: I0129 06:38:33.349264 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="9749d75a-01c6-42c9-a642-8c51895c9cbf" containerName="registry-server" Jan 29 06:38:33 crc kubenswrapper[5017]: E0129 06:38:33.349272 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9749d75a-01c6-42c9-a642-8c51895c9cbf" containerName="extract-content" Jan 29 06:38:33 crc kubenswrapper[5017]: I0129 06:38:33.349279 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="9749d75a-01c6-42c9-a642-8c51895c9cbf" containerName="extract-content" Jan 29 06:38:33 crc kubenswrapper[5017]: E0129 06:38:33.349285 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f35dd9e9-ca07-483a-a5bf-2179d9d705c6" containerName="extract-content" Jan 29 06:38:33 crc kubenswrapper[5017]: I0129 06:38:33.349291 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="f35dd9e9-ca07-483a-a5bf-2179d9d705c6" containerName="extract-content" Jan 29 06:38:33 crc kubenswrapper[5017]: I0129 06:38:33.349383 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="3deabecf-5a0e-454c-a622-b3b422c3a6bb" containerName="registry-server" Jan 29 06:38:33 crc kubenswrapper[5017]: I0129 06:38:33.349393 5017 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="9749d75a-01c6-42c9-a642-8c51895c9cbf" containerName="registry-server" Jan 29 06:38:33 crc kubenswrapper[5017]: I0129 06:38:33.349407 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="f35dd9e9-ca07-483a-a5bf-2179d9d705c6" containerName="registry-server" Jan 29 06:38:33 crc kubenswrapper[5017]: I0129 06:38:33.349415 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4a2b28b-dbe5-420c-9df9-4b0237951d8b" containerName="pruner" Jan 29 06:38:33 crc kubenswrapper[5017]: I0129 06:38:33.349818 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 29 06:38:33 crc kubenswrapper[5017]: I0129 06:38:33.354542 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 29 06:38:33 crc kubenswrapper[5017]: I0129 06:38:33.356050 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 29 06:38:33 crc kubenswrapper[5017]: I0129 06:38:33.370866 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 29 06:38:33 crc kubenswrapper[5017]: I0129 06:38:33.476294 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/50301307-73f6-4035-92d0-e96ac1dcb9b9-var-lock\") pod \"installer-9-crc\" (UID: \"50301307-73f6-4035-92d0-e96ac1dcb9b9\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 06:38:33 crc kubenswrapper[5017]: I0129 06:38:33.476362 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/50301307-73f6-4035-92d0-e96ac1dcb9b9-kubelet-dir\") pod \"installer-9-crc\" (UID: \"50301307-73f6-4035-92d0-e96ac1dcb9b9\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 06:38:33 crc kubenswrapper[5017]: I0129 06:38:33.476407 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/50301307-73f6-4035-92d0-e96ac1dcb9b9-kube-api-access\") pod \"installer-9-crc\" (UID: \"50301307-73f6-4035-92d0-e96ac1dcb9b9\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 06:38:33 crc kubenswrapper[5017]: I0129 06:38:33.578242 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/50301307-73f6-4035-92d0-e96ac1dcb9b9-var-lock\") pod \"installer-9-crc\" (UID: \"50301307-73f6-4035-92d0-e96ac1dcb9b9\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 06:38:33 crc kubenswrapper[5017]: I0129 06:38:33.578291 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/50301307-73f6-4035-92d0-e96ac1dcb9b9-kubelet-dir\") pod \"installer-9-crc\" (UID: \"50301307-73f6-4035-92d0-e96ac1dcb9b9\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 06:38:33 crc kubenswrapper[5017]: I0129 06:38:33.578329 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/50301307-73f6-4035-92d0-e96ac1dcb9b9-kube-api-access\") pod \"installer-9-crc\" (UID: \"50301307-73f6-4035-92d0-e96ac1dcb9b9\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 06:38:33 crc 
kubenswrapper[5017]: I0129 06:38:33.578393 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/50301307-73f6-4035-92d0-e96ac1dcb9b9-var-lock\") pod \"installer-9-crc\" (UID: \"50301307-73f6-4035-92d0-e96ac1dcb9b9\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 06:38:33 crc kubenswrapper[5017]: I0129 06:38:33.578485 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/50301307-73f6-4035-92d0-e96ac1dcb9b9-kubelet-dir\") pod \"installer-9-crc\" (UID: \"50301307-73f6-4035-92d0-e96ac1dcb9b9\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 06:38:33 crc kubenswrapper[5017]: I0129 06:38:33.596301 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/50301307-73f6-4035-92d0-e96ac1dcb9b9-kube-api-access\") pod \"installer-9-crc\" (UID: \"50301307-73f6-4035-92d0-e96ac1dcb9b9\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 06:38:33 crc kubenswrapper[5017]: I0129 06:38:33.665912 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 29 06:38:34 crc kubenswrapper[5017]: I0129 06:38:34.129596 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 29 06:38:34 crc kubenswrapper[5017]: I0129 06:38:34.303760 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-969vd" Jan 29 06:38:34 crc kubenswrapper[5017]: I0129 06:38:34.304434 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-969vd" Jan 29 06:38:34 crc kubenswrapper[5017]: I0129 06:38:34.331560 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9749d75a-01c6-42c9-a642-8c51895c9cbf" path="/var/lib/kubelet/pods/9749d75a-01c6-42c9-a642-8c51895c9cbf/volumes" Jan 29 06:38:34 crc kubenswrapper[5017]: I0129 06:38:34.373034 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-969vd" Jan 29 06:38:34 crc kubenswrapper[5017]: I0129 06:38:34.473908 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jxrzm" Jan 29 06:38:34 crc kubenswrapper[5017]: I0129 06:38:34.473977 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jxrzm" Jan 29 06:38:34 crc kubenswrapper[5017]: I0129 06:38:34.521915 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jxrzm" Jan 29 06:38:34 crc kubenswrapper[5017]: I0129 06:38:34.900112 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"50301307-73f6-4035-92d0-e96ac1dcb9b9","Type":"ContainerStarted","Data":"9bb60e6e926b5a1364b194c833406e58255f0892ab95ebce24f26e56b9cfeb7b"} Jan 29 06:38:34 crc kubenswrapper[5017]: I0129 06:38:34.900619 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"50301307-73f6-4035-92d0-e96ac1dcb9b9","Type":"ContainerStarted","Data":"12822ef1008da815fb732bc32f98110317d583f7edb3bd77d1dfc2c91f832791"}
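The probe transitions above follow one pattern per catalog pod: the startup probe reports "unhealthy" until the registry server is listening, readiness stays empty because it is not run until startup passes, then startup flips to "started" and readiness follows. A sketch of probe objects that would produce this sequence, using the real k8s.io/api types; the exec command and thresholds are assumptions based on the `failed to connect service ":50051"` prober output near the end of this section, not values read from the actual pod spec:

```go
// Illustrative reconstruction of the probes behind the SyncLoop (probe)
// transitions for the registry-server containers. Assumed values are marked.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	check := corev1.ProbeHandler{
		Exec: &corev1.ExecAction{
			// Assumed grpc-health-probe style check on the registry port.
			Command: []string{"grpc_health_probe", "-addr=:50051"},
		},
	}
	container := corev1.Container{
		Name: "registry-server",
		// Until the startup probe succeeds, kubelet logs status="unhealthy"
		// and never runs the readiness probe (readiness status stays "").
		StartupProbe: &corev1.Probe{
			ProbeHandler:     check,
			TimeoutSeconds:   1, // matches "within 1s" in the prober output
			PeriodSeconds:    10,
			FailureThreshold: 15,
		},
		// Readiness takes over once startup reports "started".
		ReadinessProbe: &corev1.Probe{
			ProbeHandler:   check,
			TimeoutSeconds: 1,
		},
	}
	fmt.Println(container.Name, "configured with startup and readiness probes")
}
```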
"Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=1.920563133 podStartE2EDuration="1.920563133s" podCreationTimestamp="2026-01-29 06:38:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:38:34.920063818 +0000 UTC m=+201.294511458" watchObservedRunningTime="2026-01-29 06:38:34.920563133 +0000 UTC m=+201.295010753" Jan 29 06:38:36 crc kubenswrapper[5017]: I0129 06:38:36.273345 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-zcr4n" Jan 29 06:38:36 crc kubenswrapper[5017]: I0129 06:38:36.273722 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-zcr4n" Jan 29 06:38:36 crc kubenswrapper[5017]: I0129 06:38:36.336644 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-zcr4n" Jan 29 06:38:36 crc kubenswrapper[5017]: I0129 06:38:36.765310 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-8l5h7" Jan 29 06:38:36 crc kubenswrapper[5017]: I0129 06:38:36.765367 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8l5h7" Jan 29 06:38:36 crc kubenswrapper[5017]: I0129 06:38:36.818654 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8l5h7" Jan 29 06:38:36 crc kubenswrapper[5017]: I0129 06:38:36.958360 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8l5h7" Jan 29 06:38:36 crc kubenswrapper[5017]: I0129 06:38:36.968456 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-zcr4n" Jan 29 06:38:37 crc kubenswrapper[5017]: I0129 06:38:37.215268 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5dc7fc5b5d-8h56n"] Jan 29 06:38:37 crc kubenswrapper[5017]: I0129 06:38:37.216037 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5dc7fc5b5d-8h56n" podUID="71667a10-cad6-4d77-8f39-7221f358fc86" containerName="controller-manager" containerID="cri-o://197429be8be6d99463e65644302b88e33df59886b864bdbe1dc4c3abff502229" gracePeriod=30 Jan 29 06:38:37 crc kubenswrapper[5017]: I0129 06:38:37.228066 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59b656664b-b9lws"] Jan 29 06:38:37 crc kubenswrapper[5017]: I0129 06:38:37.228328 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-59b656664b-b9lws" podUID="7b948744-9a86-4b73-bc52-f95e4b50c80e" containerName="route-controller-manager" containerID="cri-o://f447e30a574151e5ab596f108e348d94567b20af938bc3d6fa84bf1306fa384f" gracePeriod=30 Jan 29 06:38:37 crc kubenswrapper[5017]: I0129 06:38:37.524823 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2cqwl" Jan 29 06:38:37 crc kubenswrapper[5017]: I0129 06:38:37.525844 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2cqwl" Jan 29 
06:38:37 crc kubenswrapper[5017]: I0129 06:38:37.756206 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-59b656664b-b9lws" Jan 29 06:38:37 crc kubenswrapper[5017]: I0129 06:38:37.803230 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5dc7fc5b5d-8h56n" Jan 29 06:38:37 crc kubenswrapper[5017]: I0129 06:38:37.842775 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7b948744-9a86-4b73-bc52-f95e4b50c80e-client-ca\") pod \"7b948744-9a86-4b73-bc52-f95e4b50c80e\" (UID: \"7b948744-9a86-4b73-bc52-f95e4b50c80e\") " Jan 29 06:38:37 crc kubenswrapper[5017]: I0129 06:38:37.842889 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vjfp6\" (UniqueName: \"kubernetes.io/projected/7b948744-9a86-4b73-bc52-f95e4b50c80e-kube-api-access-vjfp6\") pod \"7b948744-9a86-4b73-bc52-f95e4b50c80e\" (UID: \"7b948744-9a86-4b73-bc52-f95e4b50c80e\") " Jan 29 06:38:37 crc kubenswrapper[5017]: I0129 06:38:37.842934 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7b948744-9a86-4b73-bc52-f95e4b50c80e-serving-cert\") pod \"7b948744-9a86-4b73-bc52-f95e4b50c80e\" (UID: \"7b948744-9a86-4b73-bc52-f95e4b50c80e\") " Jan 29 06:38:37 crc kubenswrapper[5017]: I0129 06:38:37.843118 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b948744-9a86-4b73-bc52-f95e4b50c80e-config\") pod \"7b948744-9a86-4b73-bc52-f95e4b50c80e\" (UID: \"7b948744-9a86-4b73-bc52-f95e4b50c80e\") " Jan 29 06:38:37 crc kubenswrapper[5017]: I0129 06:38:37.843797 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b948744-9a86-4b73-bc52-f95e4b50c80e-client-ca" (OuterVolumeSpecName: "client-ca") pod "7b948744-9a86-4b73-bc52-f95e4b50c80e" (UID: "7b948744-9a86-4b73-bc52-f95e4b50c80e"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:38:37 crc kubenswrapper[5017]: I0129 06:38:37.843940 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b948744-9a86-4b73-bc52-f95e4b50c80e-config" (OuterVolumeSpecName: "config") pod "7b948744-9a86-4b73-bc52-f95e4b50c80e" (UID: "7b948744-9a86-4b73-bc52-f95e4b50c80e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:38:37 crc kubenswrapper[5017]: I0129 06:38:37.849233 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b948744-9a86-4b73-bc52-f95e4b50c80e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7b948744-9a86-4b73-bc52-f95e4b50c80e" (UID: "7b948744-9a86-4b73-bc52-f95e4b50c80e"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:38:37 crc kubenswrapper[5017]: I0129 06:38:37.849562 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b948744-9a86-4b73-bc52-f95e4b50c80e-kube-api-access-vjfp6" (OuterVolumeSpecName: "kube-api-access-vjfp6") pod "7b948744-9a86-4b73-bc52-f95e4b50c80e" (UID: "7b948744-9a86-4b73-bc52-f95e4b50c80e"). InnerVolumeSpecName "kube-api-access-vjfp6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:38:37 crc kubenswrapper[5017]: I0129 06:38:37.924619 5017 generic.go:334] "Generic (PLEG): container finished" podID="71667a10-cad6-4d77-8f39-7221f358fc86" containerID="197429be8be6d99463e65644302b88e33df59886b864bdbe1dc4c3abff502229" exitCode=0 Jan 29 06:38:37 crc kubenswrapper[5017]: I0129 06:38:37.924702 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5dc7fc5b5d-8h56n" event={"ID":"71667a10-cad6-4d77-8f39-7221f358fc86","Type":"ContainerDied","Data":"197429be8be6d99463e65644302b88e33df59886b864bdbe1dc4c3abff502229"} Jan 29 06:38:37 crc kubenswrapper[5017]: I0129 06:38:37.924705 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5dc7fc5b5d-8h56n" Jan 29 06:38:37 crc kubenswrapper[5017]: I0129 06:38:37.924731 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5dc7fc5b5d-8h56n" event={"ID":"71667a10-cad6-4d77-8f39-7221f358fc86","Type":"ContainerDied","Data":"0a3771c373a3d35fd07a4c1b82ccbbd4a544ce6626e99a16fcd463de64338c49"} Jan 29 06:38:37 crc kubenswrapper[5017]: I0129 06:38:37.924750 5017 scope.go:117] "RemoveContainer" containerID="197429be8be6d99463e65644302b88e33df59886b864bdbe1dc4c3abff502229" Jan 29 06:38:37 crc kubenswrapper[5017]: I0129 06:38:37.928122 5017 generic.go:334] "Generic (PLEG): container finished" podID="7b948744-9a86-4b73-bc52-f95e4b50c80e" containerID="f447e30a574151e5ab596f108e348d94567b20af938bc3d6fa84bf1306fa384f" exitCode=0 Jan 29 06:38:37 crc kubenswrapper[5017]: I0129 06:38:37.928202 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-59b656664b-b9lws" Jan 29 06:38:37 crc kubenswrapper[5017]: I0129 06:38:37.928277 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-59b656664b-b9lws" event={"ID":"7b948744-9a86-4b73-bc52-f95e4b50c80e","Type":"ContainerDied","Data":"f447e30a574151e5ab596f108e348d94567b20af938bc3d6fa84bf1306fa384f"} Jan 29 06:38:37 crc kubenswrapper[5017]: I0129 06:38:37.928300 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-59b656664b-b9lws" event={"ID":"7b948744-9a86-4b73-bc52-f95e4b50c80e","Type":"ContainerDied","Data":"40da1908ca2c8e1a168765a255fa75bde54c90840c50e4a2a51f69a28b6d6173"} Jan 29 06:38:37 crc kubenswrapper[5017]: I0129 06:38:37.943972 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/71667a10-cad6-4d77-8f39-7221f358fc86-proxy-ca-bundles\") pod \"71667a10-cad6-4d77-8f39-7221f358fc86\" (UID: \"71667a10-cad6-4d77-8f39-7221f358fc86\") " Jan 29 06:38:37 crc kubenswrapper[5017]: I0129 06:38:37.944025 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/71667a10-cad6-4d77-8f39-7221f358fc86-client-ca\") pod \"71667a10-cad6-4d77-8f39-7221f358fc86\" (UID: \"71667a10-cad6-4d77-8f39-7221f358fc86\") " Jan 29 06:38:37 crc kubenswrapper[5017]: I0129 06:38:37.944075 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71667a10-cad6-4d77-8f39-7221f358fc86-serving-cert\") pod 
\"71667a10-cad6-4d77-8f39-7221f358fc86\" (UID: \"71667a10-cad6-4d77-8f39-7221f358fc86\") " Jan 29 06:38:37 crc kubenswrapper[5017]: I0129 06:38:37.944211 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sdcv4\" (UniqueName: \"kubernetes.io/projected/71667a10-cad6-4d77-8f39-7221f358fc86-kube-api-access-sdcv4\") pod \"71667a10-cad6-4d77-8f39-7221f358fc86\" (UID: \"71667a10-cad6-4d77-8f39-7221f358fc86\") " Jan 29 06:38:37 crc kubenswrapper[5017]: I0129 06:38:37.944251 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71667a10-cad6-4d77-8f39-7221f358fc86-config\") pod \"71667a10-cad6-4d77-8f39-7221f358fc86\" (UID: \"71667a10-cad6-4d77-8f39-7221f358fc86\") " Jan 29 06:38:37 crc kubenswrapper[5017]: I0129 06:38:37.944469 5017 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b948744-9a86-4b73-bc52-f95e4b50c80e-config\") on node \"crc\" DevicePath \"\"" Jan 29 06:38:37 crc kubenswrapper[5017]: I0129 06:38:37.944481 5017 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7b948744-9a86-4b73-bc52-f95e4b50c80e-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 06:38:37 crc kubenswrapper[5017]: I0129 06:38:37.944493 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vjfp6\" (UniqueName: \"kubernetes.io/projected/7b948744-9a86-4b73-bc52-f95e4b50c80e-kube-api-access-vjfp6\") on node \"crc\" DevicePath \"\"" Jan 29 06:38:37 crc kubenswrapper[5017]: I0129 06:38:37.944504 5017 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7b948744-9a86-4b73-bc52-f95e4b50c80e-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 06:38:37 crc kubenswrapper[5017]: I0129 06:38:37.945210 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71667a10-cad6-4d77-8f39-7221f358fc86-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "71667a10-cad6-4d77-8f39-7221f358fc86" (UID: "71667a10-cad6-4d77-8f39-7221f358fc86"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:38:37 crc kubenswrapper[5017]: I0129 06:38:37.945249 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71667a10-cad6-4d77-8f39-7221f358fc86-client-ca" (OuterVolumeSpecName: "client-ca") pod "71667a10-cad6-4d77-8f39-7221f358fc86" (UID: "71667a10-cad6-4d77-8f39-7221f358fc86"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:38:37 crc kubenswrapper[5017]: I0129 06:38:37.945886 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71667a10-cad6-4d77-8f39-7221f358fc86-config" (OuterVolumeSpecName: "config") pod "71667a10-cad6-4d77-8f39-7221f358fc86" (UID: "71667a10-cad6-4d77-8f39-7221f358fc86"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:38:37 crc kubenswrapper[5017]: I0129 06:38:37.948370 5017 scope.go:117] "RemoveContainer" containerID="197429be8be6d99463e65644302b88e33df59886b864bdbe1dc4c3abff502229" Jan 29 06:38:37 crc kubenswrapper[5017]: I0129 06:38:37.948435 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71667a10-cad6-4d77-8f39-7221f358fc86-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "71667a10-cad6-4d77-8f39-7221f358fc86" (UID: "71667a10-cad6-4d77-8f39-7221f358fc86"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:38:37 crc kubenswrapper[5017]: E0129 06:38:37.949626 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"197429be8be6d99463e65644302b88e33df59886b864bdbe1dc4c3abff502229\": container with ID starting with 197429be8be6d99463e65644302b88e33df59886b864bdbe1dc4c3abff502229 not found: ID does not exist" containerID="197429be8be6d99463e65644302b88e33df59886b864bdbe1dc4c3abff502229" Jan 29 06:38:37 crc kubenswrapper[5017]: I0129 06:38:37.949695 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"197429be8be6d99463e65644302b88e33df59886b864bdbe1dc4c3abff502229"} err="failed to get container status \"197429be8be6d99463e65644302b88e33df59886b864bdbe1dc4c3abff502229\": rpc error: code = NotFound desc = could not find container \"197429be8be6d99463e65644302b88e33df59886b864bdbe1dc4c3abff502229\": container with ID starting with 197429be8be6d99463e65644302b88e33df59886b864bdbe1dc4c3abff502229 not found: ID does not exist" Jan 29 06:38:37 crc kubenswrapper[5017]: I0129 06:38:37.949735 5017 scope.go:117] "RemoveContainer" containerID="f447e30a574151e5ab596f108e348d94567b20af938bc3d6fa84bf1306fa384f" Jan 29 06:38:37 crc kubenswrapper[5017]: I0129 06:38:37.956616 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71667a10-cad6-4d77-8f39-7221f358fc86-kube-api-access-sdcv4" (OuterVolumeSpecName: "kube-api-access-sdcv4") pod "71667a10-cad6-4d77-8f39-7221f358fc86" (UID: "71667a10-cad6-4d77-8f39-7221f358fc86"). InnerVolumeSpecName "kube-api-access-sdcv4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:38:37 crc kubenswrapper[5017]: I0129 06:38:37.958323 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59b656664b-b9lws"] Jan 29 06:38:37 crc kubenswrapper[5017]: I0129 06:38:37.960866 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59b656664b-b9lws"] Jan 29 06:38:37 crc kubenswrapper[5017]: I0129 06:38:37.968212 5017 scope.go:117] "RemoveContainer" containerID="f447e30a574151e5ab596f108e348d94567b20af938bc3d6fa84bf1306fa384f" Jan 29 06:38:37 crc kubenswrapper[5017]: E0129 06:38:37.968664 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f447e30a574151e5ab596f108e348d94567b20af938bc3d6fa84bf1306fa384f\": container with ID starting with f447e30a574151e5ab596f108e348d94567b20af938bc3d6fa84bf1306fa384f not found: ID does not exist" containerID="f447e30a574151e5ab596f108e348d94567b20af938bc3d6fa84bf1306fa384f" Jan 29 06:38:37 crc kubenswrapper[5017]: I0129 06:38:37.968718 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f447e30a574151e5ab596f108e348d94567b20af938bc3d6fa84bf1306fa384f"} err="failed to get container status \"f447e30a574151e5ab596f108e348d94567b20af938bc3d6fa84bf1306fa384f\": rpc error: code = NotFound desc = could not find container \"f447e30a574151e5ab596f108e348d94567b20af938bc3d6fa84bf1306fa384f\": container with ID starting with f447e30a574151e5ab596f108e348d94567b20af938bc3d6fa84bf1306fa384f not found: ID does not exist" Jan 29 06:38:38 crc kubenswrapper[5017]: I0129 06:38:38.046085 5017 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/71667a10-cad6-4d77-8f39-7221f358fc86-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 29 06:38:38 crc kubenswrapper[5017]: I0129 06:38:38.046139 5017 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/71667a10-cad6-4d77-8f39-7221f358fc86-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 06:38:38 crc kubenswrapper[5017]: I0129 06:38:38.046151 5017 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71667a10-cad6-4d77-8f39-7221f358fc86-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 06:38:38 crc kubenswrapper[5017]: I0129 06:38:38.046162 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sdcv4\" (UniqueName: \"kubernetes.io/projected/71667a10-cad6-4d77-8f39-7221f358fc86-kube-api-access-sdcv4\") on node \"crc\" DevicePath \"\"" Jan 29 06:38:38 crc kubenswrapper[5017]: I0129 06:38:38.046177 5017 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71667a10-cad6-4d77-8f39-7221f358fc86-config\") on node \"crc\" DevicePath \"\"" Jan 29 06:38:38 crc kubenswrapper[5017]: I0129 06:38:38.258055 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5dc7fc5b5d-8h56n"] Jan 29 06:38:38 crc kubenswrapper[5017]: I0129 06:38:38.262993 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5dc7fc5b5d-8h56n"] Jan 29 06:38:38 crc kubenswrapper[5017]: I0129 06:38:38.328220 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="71667a10-cad6-4d77-8f39-7221f358fc86" path="/var/lib/kubelet/pods/71667a10-cad6-4d77-8f39-7221f358fc86/volumes" Jan 29 06:38:38 crc kubenswrapper[5017]: I0129 06:38:38.329335 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b948744-9a86-4b73-bc52-f95e4b50c80e" path="/var/lib/kubelet/pods/7b948744-9a86-4b73-bc52-f95e4b50c80e/volumes" Jan 29 06:38:38 crc kubenswrapper[5017]: I0129 06:38:38.578401 5017 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2cqwl" podUID="d8f46fb7-929f-4d96-a5ca-4fc475b78342" containerName="registry-server" probeResult="failure" output=< Jan 29 06:38:38 crc kubenswrapper[5017]: timeout: failed to connect service ":50051" within 1s Jan 29 06:38:38 crc kubenswrapper[5017]: > Jan 29 06:38:38 crc kubenswrapper[5017]: I0129 06:38:38.879383 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-8b898d5f7-bhj6k"] Jan 29 06:38:38 crc kubenswrapper[5017]: E0129 06:38:38.880159 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71667a10-cad6-4d77-8f39-7221f358fc86" containerName="controller-manager" Jan 29 06:38:38 crc kubenswrapper[5017]: I0129 06:38:38.880178 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="71667a10-cad6-4d77-8f39-7221f358fc86" containerName="controller-manager" Jan 29 06:38:38 crc kubenswrapper[5017]: E0129 06:38:38.880204 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b948744-9a86-4b73-bc52-f95e4b50c80e" containerName="route-controller-manager" Jan 29 06:38:38 crc kubenswrapper[5017]: I0129 06:38:38.880212 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b948744-9a86-4b73-bc52-f95e4b50c80e" containerName="route-controller-manager" Jan 29 06:38:38 crc kubenswrapper[5017]: I0129 06:38:38.880376 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="71667a10-cad6-4d77-8f39-7221f358fc86" containerName="controller-manager" Jan 29 06:38:38 crc kubenswrapper[5017]: I0129 06:38:38.880403 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b948744-9a86-4b73-bc52-f95e4b50c80e" containerName="route-controller-manager" Jan 29 06:38:38 crc kubenswrapper[5017]: I0129 06:38:38.881127 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-8b898d5f7-bhj6k" Jan 29 06:38:38 crc kubenswrapper[5017]: I0129 06:38:38.883700 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6487b66959-hhndj"] Jan 29 06:38:38 crc kubenswrapper[5017]: I0129 06:38:38.885224 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6487b66959-hhndj" Jan 29 06:38:38 crc kubenswrapper[5017]: I0129 06:38:38.886530 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 29 06:38:38 crc kubenswrapper[5017]: I0129 06:38:38.888046 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 29 06:38:38 crc kubenswrapper[5017]: I0129 06:38:38.888520 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 29 06:38:38 crc kubenswrapper[5017]: I0129 06:38:38.888617 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 29 06:38:38 crc kubenswrapper[5017]: I0129 06:38:38.888728 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 29 06:38:38 crc kubenswrapper[5017]: I0129 06:38:38.888793 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 29 06:38:38 crc kubenswrapper[5017]: I0129 06:38:38.890115 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 29 06:38:38 crc kubenswrapper[5017]: I0129 06:38:38.891533 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 29 06:38:38 crc kubenswrapper[5017]: I0129 06:38:38.893413 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-8b898d5f7-bhj6k"] Jan 29 06:38:38 crc kubenswrapper[5017]: I0129 06:38:38.892439 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 29 06:38:38 crc kubenswrapper[5017]: I0129 06:38:38.892638 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 29 06:38:38 crc kubenswrapper[5017]: I0129 06:38:38.893479 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 29 06:38:38 crc kubenswrapper[5017]: I0129 06:38:38.893544 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 29 06:38:38 crc kubenswrapper[5017]: I0129 06:38:38.895186 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6487b66959-hhndj"] Jan 29 06:38:38 crc kubenswrapper[5017]: I0129 06:38:38.896130 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 29 06:38:39 crc kubenswrapper[5017]: I0129 06:38:39.061908 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cbf7b775-959f-433d-9d7a-748f1004bfc1-client-ca\") pod \"controller-manager-8b898d5f7-bhj6k\" (UID: \"cbf7b775-959f-433d-9d7a-748f1004bfc1\") " pod="openshift-controller-manager/controller-manager-8b898d5f7-bhj6k" Jan 29 06:38:39 crc kubenswrapper[5017]: I0129 06:38:39.062046 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g54sf\" 
(UniqueName: \"kubernetes.io/projected/19513823-f10b-42e2-a8ea-9d3d423bd0cf-kube-api-access-g54sf\") pod \"route-controller-manager-6487b66959-hhndj\" (UID: \"19513823-f10b-42e2-a8ea-9d3d423bd0cf\") " pod="openshift-route-controller-manager/route-controller-manager-6487b66959-hhndj" Jan 29 06:38:39 crc kubenswrapper[5017]: I0129 06:38:39.062074 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j86rk\" (UniqueName: \"kubernetes.io/projected/cbf7b775-959f-433d-9d7a-748f1004bfc1-kube-api-access-j86rk\") pod \"controller-manager-8b898d5f7-bhj6k\" (UID: \"cbf7b775-959f-433d-9d7a-748f1004bfc1\") " pod="openshift-controller-manager/controller-manager-8b898d5f7-bhj6k" Jan 29 06:38:39 crc kubenswrapper[5017]: I0129 06:38:39.062123 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/19513823-f10b-42e2-a8ea-9d3d423bd0cf-serving-cert\") pod \"route-controller-manager-6487b66959-hhndj\" (UID: \"19513823-f10b-42e2-a8ea-9d3d423bd0cf\") " pod="openshift-route-controller-manager/route-controller-manager-6487b66959-hhndj" Jan 29 06:38:39 crc kubenswrapper[5017]: I0129 06:38:39.062191 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cbf7b775-959f-433d-9d7a-748f1004bfc1-config\") pod \"controller-manager-8b898d5f7-bhj6k\" (UID: \"cbf7b775-959f-433d-9d7a-748f1004bfc1\") " pod="openshift-controller-manager/controller-manager-8b898d5f7-bhj6k" Jan 29 06:38:39 crc kubenswrapper[5017]: I0129 06:38:39.062257 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19513823-f10b-42e2-a8ea-9d3d423bd0cf-config\") pod \"route-controller-manager-6487b66959-hhndj\" (UID: \"19513823-f10b-42e2-a8ea-9d3d423bd0cf\") " pod="openshift-route-controller-manager/route-controller-manager-6487b66959-hhndj" Jan 29 06:38:39 crc kubenswrapper[5017]: I0129 06:38:39.062279 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cbf7b775-959f-433d-9d7a-748f1004bfc1-proxy-ca-bundles\") pod \"controller-manager-8b898d5f7-bhj6k\" (UID: \"cbf7b775-959f-433d-9d7a-748f1004bfc1\") " pod="openshift-controller-manager/controller-manager-8b898d5f7-bhj6k" Jan 29 06:38:39 crc kubenswrapper[5017]: I0129 06:38:39.062299 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/19513823-f10b-42e2-a8ea-9d3d423bd0cf-client-ca\") pod \"route-controller-manager-6487b66959-hhndj\" (UID: \"19513823-f10b-42e2-a8ea-9d3d423bd0cf\") " pod="openshift-route-controller-manager/route-controller-manager-6487b66959-hhndj" Jan 29 06:38:39 crc kubenswrapper[5017]: I0129 06:38:39.062337 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cbf7b775-959f-433d-9d7a-748f1004bfc1-serving-cert\") pod \"controller-manager-8b898d5f7-bhj6k\" (UID: \"cbf7b775-959f-433d-9d7a-748f1004bfc1\") " pod="openshift-controller-manager/controller-manager-8b898d5f7-bhj6k" Jan 29 06:38:39 crc kubenswrapper[5017]: I0129 06:38:39.155484 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8l5h7"] Jan 
29 06:38:39 crc kubenswrapper[5017]: I0129 06:38:39.156001 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-8l5h7" podUID="a83caf08-35b0-460b-ba5e-1db0c6cab902" containerName="registry-server" containerID="cri-o://b0d69d6ff3f2e0bb0fbc1e672103b56d50b076857ca6d6947071991faeb727fe" gracePeriod=2 Jan 29 06:38:39 crc kubenswrapper[5017]: I0129 06:38:39.163894 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cbf7b775-959f-433d-9d7a-748f1004bfc1-config\") pod \"controller-manager-8b898d5f7-bhj6k\" (UID: \"cbf7b775-959f-433d-9d7a-748f1004bfc1\") " pod="openshift-controller-manager/controller-manager-8b898d5f7-bhj6k" Jan 29 06:38:39 crc kubenswrapper[5017]: I0129 06:38:39.163994 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19513823-f10b-42e2-a8ea-9d3d423bd0cf-config\") pod \"route-controller-manager-6487b66959-hhndj\" (UID: \"19513823-f10b-42e2-a8ea-9d3d423bd0cf\") " pod="openshift-route-controller-manager/route-controller-manager-6487b66959-hhndj" Jan 29 06:38:39 crc kubenswrapper[5017]: I0129 06:38:39.164058 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cbf7b775-959f-433d-9d7a-748f1004bfc1-proxy-ca-bundles\") pod \"controller-manager-8b898d5f7-bhj6k\" (UID: \"cbf7b775-959f-433d-9d7a-748f1004bfc1\") " pod="openshift-controller-manager/controller-manager-8b898d5f7-bhj6k" Jan 29 06:38:39 crc kubenswrapper[5017]: I0129 06:38:39.164081 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/19513823-f10b-42e2-a8ea-9d3d423bd0cf-client-ca\") pod \"route-controller-manager-6487b66959-hhndj\" (UID: \"19513823-f10b-42e2-a8ea-9d3d423bd0cf\") " pod="openshift-route-controller-manager/route-controller-manager-6487b66959-hhndj" Jan 29 06:38:39 crc kubenswrapper[5017]: I0129 06:38:39.164098 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cbf7b775-959f-433d-9d7a-748f1004bfc1-serving-cert\") pod \"controller-manager-8b898d5f7-bhj6k\" (UID: \"cbf7b775-959f-433d-9d7a-748f1004bfc1\") " pod="openshift-controller-manager/controller-manager-8b898d5f7-bhj6k" Jan 29 06:38:39 crc kubenswrapper[5017]: I0129 06:38:39.164122 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cbf7b775-959f-433d-9d7a-748f1004bfc1-client-ca\") pod \"controller-manager-8b898d5f7-bhj6k\" (UID: \"cbf7b775-959f-433d-9d7a-748f1004bfc1\") " pod="openshift-controller-manager/controller-manager-8b898d5f7-bhj6k" Jan 29 06:38:39 crc kubenswrapper[5017]: I0129 06:38:39.164170 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g54sf\" (UniqueName: \"kubernetes.io/projected/19513823-f10b-42e2-a8ea-9d3d423bd0cf-kube-api-access-g54sf\") pod \"route-controller-manager-6487b66959-hhndj\" (UID: \"19513823-f10b-42e2-a8ea-9d3d423bd0cf\") " pod="openshift-route-controller-manager/route-controller-manager-6487b66959-hhndj" Jan 29 06:38:39 crc kubenswrapper[5017]: I0129 06:38:39.164190 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j86rk\" (UniqueName: 
\"kubernetes.io/projected/cbf7b775-959f-433d-9d7a-748f1004bfc1-kube-api-access-j86rk\") pod \"controller-manager-8b898d5f7-bhj6k\" (UID: \"cbf7b775-959f-433d-9d7a-748f1004bfc1\") " pod="openshift-controller-manager/controller-manager-8b898d5f7-bhj6k" Jan 29 06:38:39 crc kubenswrapper[5017]: I0129 06:38:39.164206 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/19513823-f10b-42e2-a8ea-9d3d423bd0cf-serving-cert\") pod \"route-controller-manager-6487b66959-hhndj\" (UID: \"19513823-f10b-42e2-a8ea-9d3d423bd0cf\") " pod="openshift-route-controller-manager/route-controller-manager-6487b66959-hhndj" Jan 29 06:38:39 crc kubenswrapper[5017]: I0129 06:38:39.165808 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cbf7b775-959f-433d-9d7a-748f1004bfc1-client-ca\") pod \"controller-manager-8b898d5f7-bhj6k\" (UID: \"cbf7b775-959f-433d-9d7a-748f1004bfc1\") " pod="openshift-controller-manager/controller-manager-8b898d5f7-bhj6k" Jan 29 06:38:39 crc kubenswrapper[5017]: I0129 06:38:39.166094 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cbf7b775-959f-433d-9d7a-748f1004bfc1-config\") pod \"controller-manager-8b898d5f7-bhj6k\" (UID: \"cbf7b775-959f-433d-9d7a-748f1004bfc1\") " pod="openshift-controller-manager/controller-manager-8b898d5f7-bhj6k" Jan 29 06:38:39 crc kubenswrapper[5017]: I0129 06:38:39.166463 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19513823-f10b-42e2-a8ea-9d3d423bd0cf-config\") pod \"route-controller-manager-6487b66959-hhndj\" (UID: \"19513823-f10b-42e2-a8ea-9d3d423bd0cf\") " pod="openshift-route-controller-manager/route-controller-manager-6487b66959-hhndj" Jan 29 06:38:39 crc kubenswrapper[5017]: I0129 06:38:39.166474 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/19513823-f10b-42e2-a8ea-9d3d423bd0cf-client-ca\") pod \"route-controller-manager-6487b66959-hhndj\" (UID: \"19513823-f10b-42e2-a8ea-9d3d423bd0cf\") " pod="openshift-route-controller-manager/route-controller-manager-6487b66959-hhndj" Jan 29 06:38:39 crc kubenswrapper[5017]: I0129 06:38:39.166641 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cbf7b775-959f-433d-9d7a-748f1004bfc1-proxy-ca-bundles\") pod \"controller-manager-8b898d5f7-bhj6k\" (UID: \"cbf7b775-959f-433d-9d7a-748f1004bfc1\") " pod="openshift-controller-manager/controller-manager-8b898d5f7-bhj6k" Jan 29 06:38:39 crc kubenswrapper[5017]: I0129 06:38:39.174997 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/19513823-f10b-42e2-a8ea-9d3d423bd0cf-serving-cert\") pod \"route-controller-manager-6487b66959-hhndj\" (UID: \"19513823-f10b-42e2-a8ea-9d3d423bd0cf\") " pod="openshift-route-controller-manager/route-controller-manager-6487b66959-hhndj" Jan 29 06:38:39 crc kubenswrapper[5017]: I0129 06:38:39.175530 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cbf7b775-959f-433d-9d7a-748f1004bfc1-serving-cert\") pod \"controller-manager-8b898d5f7-bhj6k\" (UID: \"cbf7b775-959f-433d-9d7a-748f1004bfc1\") " 
pod="openshift-controller-manager/controller-manager-8b898d5f7-bhj6k" Jan 29 06:38:39 crc kubenswrapper[5017]: I0129 06:38:39.183299 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g54sf\" (UniqueName: \"kubernetes.io/projected/19513823-f10b-42e2-a8ea-9d3d423bd0cf-kube-api-access-g54sf\") pod \"route-controller-manager-6487b66959-hhndj\" (UID: \"19513823-f10b-42e2-a8ea-9d3d423bd0cf\") " pod="openshift-route-controller-manager/route-controller-manager-6487b66959-hhndj" Jan 29 06:38:39 crc kubenswrapper[5017]: I0129 06:38:39.190293 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j86rk\" (UniqueName: \"kubernetes.io/projected/cbf7b775-959f-433d-9d7a-748f1004bfc1-kube-api-access-j86rk\") pod \"controller-manager-8b898d5f7-bhj6k\" (UID: \"cbf7b775-959f-433d-9d7a-748f1004bfc1\") " pod="openshift-controller-manager/controller-manager-8b898d5f7-bhj6k" Jan 29 06:38:39 crc kubenswrapper[5017]: I0129 06:38:39.220274 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-8b898d5f7-bhj6k" Jan 29 06:38:39 crc kubenswrapper[5017]: I0129 06:38:39.239001 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6487b66959-hhndj" Jan 29 06:38:39 crc kubenswrapper[5017]: I0129 06:38:39.534202 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-8b898d5f7-bhj6k"] Jan 29 06:38:39 crc kubenswrapper[5017]: W0129 06:38:39.545460 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcbf7b775_959f_433d_9d7a_748f1004bfc1.slice/crio-d9cbcaec563e81ba005af4e92d6f95c2b4513b87cb453fa27e7c97b28c305212 WatchSource:0}: Error finding container d9cbcaec563e81ba005af4e92d6f95c2b4513b87cb453fa27e7c97b28c305212: Status 404 returned error can't find the container with id d9cbcaec563e81ba005af4e92d6f95c2b4513b87cb453fa27e7c97b28c305212 Jan 29 06:38:39 crc kubenswrapper[5017]: I0129 06:38:39.603417 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8l5h7" Jan 29 06:38:39 crc kubenswrapper[5017]: I0129 06:38:39.662442 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6487b66959-hhndj"] Jan 29 06:38:39 crc kubenswrapper[5017]: W0129 06:38:39.667100 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod19513823_f10b_42e2_a8ea_9d3d423bd0cf.slice/crio-5c9bd524fb9e64b66faa0accad9fcd141c08270ea5138c3919fe65e343a58e46 WatchSource:0}: Error finding container 5c9bd524fb9e64b66faa0accad9fcd141c08270ea5138c3919fe65e343a58e46: Status 404 returned error can't find the container with id 5c9bd524fb9e64b66faa0accad9fcd141c08270ea5138c3919fe65e343a58e46 Jan 29 06:38:39 crc kubenswrapper[5017]: I0129 06:38:39.773513 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hrctg\" (UniqueName: \"kubernetes.io/projected/a83caf08-35b0-460b-ba5e-1db0c6cab902-kube-api-access-hrctg\") pod \"a83caf08-35b0-460b-ba5e-1db0c6cab902\" (UID: \"a83caf08-35b0-460b-ba5e-1db0c6cab902\") " Jan 29 06:38:39 crc kubenswrapper[5017]: I0129 06:38:39.774068 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a83caf08-35b0-460b-ba5e-1db0c6cab902-utilities\") pod \"a83caf08-35b0-460b-ba5e-1db0c6cab902\" (UID: \"a83caf08-35b0-460b-ba5e-1db0c6cab902\") " Jan 29 06:38:39 crc kubenswrapper[5017]: I0129 06:38:39.774135 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a83caf08-35b0-460b-ba5e-1db0c6cab902-catalog-content\") pod \"a83caf08-35b0-460b-ba5e-1db0c6cab902\" (UID: \"a83caf08-35b0-460b-ba5e-1db0c6cab902\") " Jan 29 06:38:39 crc kubenswrapper[5017]: I0129 06:38:39.775291 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a83caf08-35b0-460b-ba5e-1db0c6cab902-utilities" (OuterVolumeSpecName: "utilities") pod "a83caf08-35b0-460b-ba5e-1db0c6cab902" (UID: "a83caf08-35b0-460b-ba5e-1db0c6cab902"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:38:39 crc kubenswrapper[5017]: I0129 06:38:39.780130 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a83caf08-35b0-460b-ba5e-1db0c6cab902-kube-api-access-hrctg" (OuterVolumeSpecName: "kube-api-access-hrctg") pod "a83caf08-35b0-460b-ba5e-1db0c6cab902" (UID: "a83caf08-35b0-460b-ba5e-1db0c6cab902"). InnerVolumeSpecName "kube-api-access-hrctg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:38:39 crc kubenswrapper[5017]: I0129 06:38:39.805374 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a83caf08-35b0-460b-ba5e-1db0c6cab902-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a83caf08-35b0-460b-ba5e-1db0c6cab902" (UID: "a83caf08-35b0-460b-ba5e-1db0c6cab902"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:38:39 crc kubenswrapper[5017]: I0129 06:38:39.875884 5017 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a83caf08-35b0-460b-ba5e-1db0c6cab902-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 06:38:39 crc kubenswrapper[5017]: I0129 06:38:39.875931 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hrctg\" (UniqueName: \"kubernetes.io/projected/a83caf08-35b0-460b-ba5e-1db0c6cab902-kube-api-access-hrctg\") on node \"crc\" DevicePath \"\"" Jan 29 06:38:39 crc kubenswrapper[5017]: I0129 06:38:39.875948 5017 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a83caf08-35b0-460b-ba5e-1db0c6cab902-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 06:38:39 crc kubenswrapper[5017]: I0129 06:38:39.944907 5017 generic.go:334] "Generic (PLEG): container finished" podID="a83caf08-35b0-460b-ba5e-1db0c6cab902" containerID="b0d69d6ff3f2e0bb0fbc1e672103b56d50b076857ca6d6947071991faeb727fe" exitCode=0 Jan 29 06:38:39 crc kubenswrapper[5017]: I0129 06:38:39.944992 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8l5h7" Jan 29 06:38:39 crc kubenswrapper[5017]: I0129 06:38:39.945026 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8l5h7" event={"ID":"a83caf08-35b0-460b-ba5e-1db0c6cab902","Type":"ContainerDied","Data":"b0d69d6ff3f2e0bb0fbc1e672103b56d50b076857ca6d6947071991faeb727fe"} Jan 29 06:38:39 crc kubenswrapper[5017]: I0129 06:38:39.945150 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8l5h7" event={"ID":"a83caf08-35b0-460b-ba5e-1db0c6cab902","Type":"ContainerDied","Data":"2b9100cb26692555c45b78da1ebcf6a0ff0fea60067691821b157d31b026e7a0"} Jan 29 06:38:39 crc kubenswrapper[5017]: I0129 06:38:39.945181 5017 scope.go:117] "RemoveContainer" containerID="b0d69d6ff3f2e0bb0fbc1e672103b56d50b076857ca6d6947071991faeb727fe" Jan 29 06:38:39 crc kubenswrapper[5017]: I0129 06:38:39.947642 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8b898d5f7-bhj6k" event={"ID":"cbf7b775-959f-433d-9d7a-748f1004bfc1","Type":"ContainerStarted","Data":"63d6727b0ed8010f241350237d21e5baa1b75074cc10d7ffe30c6d996ca6f802"} Jan 29 06:38:39 crc kubenswrapper[5017]: I0129 06:38:39.947707 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8b898d5f7-bhj6k" event={"ID":"cbf7b775-959f-433d-9d7a-748f1004bfc1","Type":"ContainerStarted","Data":"d9cbcaec563e81ba005af4e92d6f95c2b4513b87cb453fa27e7c97b28c305212"} Jan 29 06:38:39 crc kubenswrapper[5017]: I0129 06:38:39.948048 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-8b898d5f7-bhj6k" Jan 29 06:38:39 crc kubenswrapper[5017]: I0129 06:38:39.955175 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6487b66959-hhndj" event={"ID":"19513823-f10b-42e2-a8ea-9d3d423bd0cf","Type":"ContainerStarted","Data":"f91b5135ac88a3df30c611a014a04d0b5c596554704363c9ec848ec54777bccd"} Jan 29 06:38:39 crc kubenswrapper[5017]: I0129 06:38:39.955228 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-6487b66959-hhndj" event={"ID":"19513823-f10b-42e2-a8ea-9d3d423bd0cf","Type":"ContainerStarted","Data":"5c9bd524fb9e64b66faa0accad9fcd141c08270ea5138c3919fe65e343a58e46"} Jan 29 06:38:39 crc kubenswrapper[5017]: I0129 06:38:39.955755 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6487b66959-hhndj" Jan 29 06:38:39 crc kubenswrapper[5017]: I0129 06:38:39.957916 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-8b898d5f7-bhj6k" Jan 29 06:38:39 crc kubenswrapper[5017]: I0129 06:38:39.961477 5017 scope.go:117] "RemoveContainer" containerID="84b911a398841097e37db133ee7a27d73db7f5a3cbe3e84ebb3788cea3514acf" Jan 29 06:38:39 crc kubenswrapper[5017]: I0129 06:38:39.983862 5017 scope.go:117] "RemoveContainer" containerID="c936a40f83f79a706574e2b5115ede9054a6d3ac6993796bb06d5f6df4f34a56" Jan 29 06:38:40 crc kubenswrapper[5017]: I0129 06:38:40.018037 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-8b898d5f7-bhj6k" podStartSLOduration=3.018020546 podStartE2EDuration="3.018020546s" podCreationTimestamp="2026-01-29 06:38:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:38:40.015861162 +0000 UTC m=+206.390308772" watchObservedRunningTime="2026-01-29 06:38:40.018020546 +0000 UTC m=+206.392468156" Jan 29 06:38:40 crc kubenswrapper[5017]: I0129 06:38:40.022042 5017 scope.go:117] "RemoveContainer" containerID="b0d69d6ff3f2e0bb0fbc1e672103b56d50b076857ca6d6947071991faeb727fe" Jan 29 06:38:40 crc kubenswrapper[5017]: E0129 06:38:40.026113 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0d69d6ff3f2e0bb0fbc1e672103b56d50b076857ca6d6947071991faeb727fe\": container with ID starting with b0d69d6ff3f2e0bb0fbc1e672103b56d50b076857ca6d6947071991faeb727fe not found: ID does not exist" containerID="b0d69d6ff3f2e0bb0fbc1e672103b56d50b076857ca6d6947071991faeb727fe" Jan 29 06:38:40 crc kubenswrapper[5017]: I0129 06:38:40.026164 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0d69d6ff3f2e0bb0fbc1e672103b56d50b076857ca6d6947071991faeb727fe"} err="failed to get container status \"b0d69d6ff3f2e0bb0fbc1e672103b56d50b076857ca6d6947071991faeb727fe\": rpc error: code = NotFound desc = could not find container \"b0d69d6ff3f2e0bb0fbc1e672103b56d50b076857ca6d6947071991faeb727fe\": container with ID starting with b0d69d6ff3f2e0bb0fbc1e672103b56d50b076857ca6d6947071991faeb727fe not found: ID does not exist" Jan 29 06:38:40 crc kubenswrapper[5017]: I0129 06:38:40.026194 5017 scope.go:117] "RemoveContainer" containerID="84b911a398841097e37db133ee7a27d73db7f5a3cbe3e84ebb3788cea3514acf" Jan 29 06:38:40 crc kubenswrapper[5017]: E0129 06:38:40.030077 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84b911a398841097e37db133ee7a27d73db7f5a3cbe3e84ebb3788cea3514acf\": container with ID starting with 84b911a398841097e37db133ee7a27d73db7f5a3cbe3e84ebb3788cea3514acf not found: ID does not exist" containerID="84b911a398841097e37db133ee7a27d73db7f5a3cbe3e84ebb3788cea3514acf" Jan 29 06:38:40 crc kubenswrapper[5017]: I0129 06:38:40.030122 
5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84b911a398841097e37db133ee7a27d73db7f5a3cbe3e84ebb3788cea3514acf"} err="failed to get container status \"84b911a398841097e37db133ee7a27d73db7f5a3cbe3e84ebb3788cea3514acf\": rpc error: code = NotFound desc = could not find container \"84b911a398841097e37db133ee7a27d73db7f5a3cbe3e84ebb3788cea3514acf\": container with ID starting with 84b911a398841097e37db133ee7a27d73db7f5a3cbe3e84ebb3788cea3514acf not found: ID does not exist" Jan 29 06:38:40 crc kubenswrapper[5017]: I0129 06:38:40.030153 5017 scope.go:117] "RemoveContainer" containerID="c936a40f83f79a706574e2b5115ede9054a6d3ac6993796bb06d5f6df4f34a56" Jan 29 06:38:40 crc kubenswrapper[5017]: E0129 06:38:40.034615 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c936a40f83f79a706574e2b5115ede9054a6d3ac6993796bb06d5f6df4f34a56\": container with ID starting with c936a40f83f79a706574e2b5115ede9054a6d3ac6993796bb06d5f6df4f34a56 not found: ID does not exist" containerID="c936a40f83f79a706574e2b5115ede9054a6d3ac6993796bb06d5f6df4f34a56" Jan 29 06:38:40 crc kubenswrapper[5017]: I0129 06:38:40.034660 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c936a40f83f79a706574e2b5115ede9054a6d3ac6993796bb06d5f6df4f34a56"} err="failed to get container status \"c936a40f83f79a706574e2b5115ede9054a6d3ac6993796bb06d5f6df4f34a56\": rpc error: code = NotFound desc = could not find container \"c936a40f83f79a706574e2b5115ede9054a6d3ac6993796bb06d5f6df4f34a56\": container with ID starting with c936a40f83f79a706574e2b5115ede9054a6d3ac6993796bb06d5f6df4f34a56 not found: ID does not exist" Jan 29 06:38:40 crc kubenswrapper[5017]: I0129 06:38:40.038007 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8l5h7"] Jan 29 06:38:40 crc kubenswrapper[5017]: I0129 06:38:40.053480 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-8l5h7"] Jan 29 06:38:40 crc kubenswrapper[5017]: I0129 06:38:40.109972 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6487b66959-hhndj" podStartSLOduration=3.109931581 podStartE2EDuration="3.109931581s" podCreationTimestamp="2026-01-29 06:38:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:38:40.108242412 +0000 UTC m=+206.482690022" watchObservedRunningTime="2026-01-29 06:38:40.109931581 +0000 UTC m=+206.484379191" Jan 29 06:38:40 crc kubenswrapper[5017]: I0129 06:38:40.330779 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a83caf08-35b0-460b-ba5e-1db0c6cab902" path="/var/lib/kubelet/pods/a83caf08-35b0-460b-ba5e-1db0c6cab902/volumes" Jan 29 06:38:40 crc kubenswrapper[5017]: I0129 06:38:40.375375 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6487b66959-hhndj" Jan 29 06:38:44 crc kubenswrapper[5017]: I0129 06:38:44.384325 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-969vd" Jan 29 06:38:44 crc kubenswrapper[5017]: I0129 06:38:44.528445 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/certified-operators-jxrzm" Jan 29 06:38:47 crc kubenswrapper[5017]: I0129 06:38:47.598084 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2cqwl" Jan 29 06:38:47 crc kubenswrapper[5017]: I0129 06:38:47.679255 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2cqwl" Jan 29 06:38:49 crc kubenswrapper[5017]: I0129 06:38:49.107579 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-pws6m" podUID="2c053f98-2b15-48b7-9cf8-b8cdb0b29d81" containerName="oauth-openshift" containerID="cri-o://7674cfa68c7975ff5560b6a245a204195eebfe46e380a825f0cbeb60c0549a82" gracePeriod=15 Jan 29 06:38:49 crc kubenswrapper[5017]: I0129 06:38:49.770158 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-pws6m" Jan 29 06:38:49 crc kubenswrapper[5017]: I0129 06:38:49.930663 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2c053f98-2b15-48b7-9cf8-b8cdb0b29d81-v4-0-config-system-session\") pod \"2c053f98-2b15-48b7-9cf8-b8cdb0b29d81\" (UID: \"2c053f98-2b15-48b7-9cf8-b8cdb0b29d81\") " Jan 29 06:38:49 crc kubenswrapper[5017]: I0129 06:38:49.930761 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2c053f98-2b15-48b7-9cf8-b8cdb0b29d81-v4-0-config-system-service-ca\") pod \"2c053f98-2b15-48b7-9cf8-b8cdb0b29d81\" (UID: \"2c053f98-2b15-48b7-9cf8-b8cdb0b29d81\") " Jan 29 06:38:49 crc kubenswrapper[5017]: I0129 06:38:49.930798 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2c053f98-2b15-48b7-9cf8-b8cdb0b29d81-audit-dir\") pod \"2c053f98-2b15-48b7-9cf8-b8cdb0b29d81\" (UID: \"2c053f98-2b15-48b7-9cf8-b8cdb0b29d81\") " Jan 29 06:38:49 crc kubenswrapper[5017]: I0129 06:38:49.930864 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2c053f98-2b15-48b7-9cf8-b8cdb0b29d81-audit-policies\") pod \"2c053f98-2b15-48b7-9cf8-b8cdb0b29d81\" (UID: \"2c053f98-2b15-48b7-9cf8-b8cdb0b29d81\") " Jan 29 06:38:49 crc kubenswrapper[5017]: I0129 06:38:49.930907 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2c053f98-2b15-48b7-9cf8-b8cdb0b29d81-v4-0-config-system-ocp-branding-template\") pod \"2c053f98-2b15-48b7-9cf8-b8cdb0b29d81\" (UID: \"2c053f98-2b15-48b7-9cf8-b8cdb0b29d81\") " Jan 29 06:38:49 crc kubenswrapper[5017]: I0129 06:38:49.930931 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2c053f98-2b15-48b7-9cf8-b8cdb0b29d81-v4-0-config-system-trusted-ca-bundle\") pod \"2c053f98-2b15-48b7-9cf8-b8cdb0b29d81\" (UID: \"2c053f98-2b15-48b7-9cf8-b8cdb0b29d81\") " Jan 29 06:38:49 crc kubenswrapper[5017]: I0129 06:38:49.930995 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/2c053f98-2b15-48b7-9cf8-b8cdb0b29d81-v4-0-config-user-template-provider-selection\") pod \"2c053f98-2b15-48b7-9cf8-b8cdb0b29d81\" (UID: \"2c053f98-2b15-48b7-9cf8-b8cdb0b29d81\") " Jan 29 06:38:49 crc kubenswrapper[5017]: I0129 06:38:49.931025 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2c053f98-2b15-48b7-9cf8-b8cdb0b29d81-v4-0-config-user-template-error\") pod \"2c053f98-2b15-48b7-9cf8-b8cdb0b29d81\" (UID: \"2c053f98-2b15-48b7-9cf8-b8cdb0b29d81\") " Jan 29 06:38:49 crc kubenswrapper[5017]: I0129 06:38:49.931011 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c053f98-2b15-48b7-9cf8-b8cdb0b29d81-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "2c053f98-2b15-48b7-9cf8-b8cdb0b29d81" (UID: "2c053f98-2b15-48b7-9cf8-b8cdb0b29d81"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 06:38:49 crc kubenswrapper[5017]: I0129 06:38:49.931063 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2c053f98-2b15-48b7-9cf8-b8cdb0b29d81-v4-0-config-system-serving-cert\") pod \"2c053f98-2b15-48b7-9cf8-b8cdb0b29d81\" (UID: \"2c053f98-2b15-48b7-9cf8-b8cdb0b29d81\") " Jan 29 06:38:49 crc kubenswrapper[5017]: I0129 06:38:49.931098 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/2c053f98-2b15-48b7-9cf8-b8cdb0b29d81-v4-0-config-user-idp-0-file-data\") pod \"2c053f98-2b15-48b7-9cf8-b8cdb0b29d81\" (UID: \"2c053f98-2b15-48b7-9cf8-b8cdb0b29d81\") " Jan 29 06:38:49 crc kubenswrapper[5017]: I0129 06:38:49.931146 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2c053f98-2b15-48b7-9cf8-b8cdb0b29d81-v4-0-config-user-template-login\") pod \"2c053f98-2b15-48b7-9cf8-b8cdb0b29d81\" (UID: \"2c053f98-2b15-48b7-9cf8-b8cdb0b29d81\") " Jan 29 06:38:49 crc kubenswrapper[5017]: I0129 06:38:49.931186 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2ccwx\" (UniqueName: \"kubernetes.io/projected/2c053f98-2b15-48b7-9cf8-b8cdb0b29d81-kube-api-access-2ccwx\") pod \"2c053f98-2b15-48b7-9cf8-b8cdb0b29d81\" (UID: \"2c053f98-2b15-48b7-9cf8-b8cdb0b29d81\") " Jan 29 06:38:49 crc kubenswrapper[5017]: I0129 06:38:49.931251 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2c053f98-2b15-48b7-9cf8-b8cdb0b29d81-v4-0-config-system-cliconfig\") pod \"2c053f98-2b15-48b7-9cf8-b8cdb0b29d81\" (UID: \"2c053f98-2b15-48b7-9cf8-b8cdb0b29d81\") " Jan 29 06:38:49 crc kubenswrapper[5017]: I0129 06:38:49.931296 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2c053f98-2b15-48b7-9cf8-b8cdb0b29d81-v4-0-config-system-router-certs\") pod \"2c053f98-2b15-48b7-9cf8-b8cdb0b29d81\" (UID: \"2c053f98-2b15-48b7-9cf8-b8cdb0b29d81\") " Jan 29 06:38:49 crc kubenswrapper[5017]: I0129 06:38:49.931584 5017 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2c053f98-2b15-48b7-9cf8-b8cdb0b29d81-audit-dir\") on node \"crc\" 
DevicePath \"\"" Jan 29 06:38:49 crc kubenswrapper[5017]: I0129 06:38:49.932442 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c053f98-2b15-48b7-9cf8-b8cdb0b29d81-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "2c053f98-2b15-48b7-9cf8-b8cdb0b29d81" (UID: "2c053f98-2b15-48b7-9cf8-b8cdb0b29d81"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:38:49 crc kubenswrapper[5017]: I0129 06:38:49.932471 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c053f98-2b15-48b7-9cf8-b8cdb0b29d81-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "2c053f98-2b15-48b7-9cf8-b8cdb0b29d81" (UID: "2c053f98-2b15-48b7-9cf8-b8cdb0b29d81"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:38:49 crc kubenswrapper[5017]: I0129 06:38:49.933005 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c053f98-2b15-48b7-9cf8-b8cdb0b29d81-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "2c053f98-2b15-48b7-9cf8-b8cdb0b29d81" (UID: "2c053f98-2b15-48b7-9cf8-b8cdb0b29d81"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:38:49 crc kubenswrapper[5017]: I0129 06:38:49.934868 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c053f98-2b15-48b7-9cf8-b8cdb0b29d81-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "2c053f98-2b15-48b7-9cf8-b8cdb0b29d81" (UID: "2c053f98-2b15-48b7-9cf8-b8cdb0b29d81"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:38:49 crc kubenswrapper[5017]: I0129 06:38:49.941271 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c053f98-2b15-48b7-9cf8-b8cdb0b29d81-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "2c053f98-2b15-48b7-9cf8-b8cdb0b29d81" (UID: "2c053f98-2b15-48b7-9cf8-b8cdb0b29d81"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:38:49 crc kubenswrapper[5017]: I0129 06:38:49.944287 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c053f98-2b15-48b7-9cf8-b8cdb0b29d81-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "2c053f98-2b15-48b7-9cf8-b8cdb0b29d81" (UID: "2c053f98-2b15-48b7-9cf8-b8cdb0b29d81"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:38:49 crc kubenswrapper[5017]: I0129 06:38:49.944795 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c053f98-2b15-48b7-9cf8-b8cdb0b29d81-kube-api-access-2ccwx" (OuterVolumeSpecName: "kube-api-access-2ccwx") pod "2c053f98-2b15-48b7-9cf8-b8cdb0b29d81" (UID: "2c053f98-2b15-48b7-9cf8-b8cdb0b29d81"). InnerVolumeSpecName "kube-api-access-2ccwx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:38:49 crc kubenswrapper[5017]: I0129 06:38:49.946744 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c053f98-2b15-48b7-9cf8-b8cdb0b29d81-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "2c053f98-2b15-48b7-9cf8-b8cdb0b29d81" (UID: "2c053f98-2b15-48b7-9cf8-b8cdb0b29d81"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:38:49 crc kubenswrapper[5017]: I0129 06:38:49.947048 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c053f98-2b15-48b7-9cf8-b8cdb0b29d81-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "2c053f98-2b15-48b7-9cf8-b8cdb0b29d81" (UID: "2c053f98-2b15-48b7-9cf8-b8cdb0b29d81"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:38:49 crc kubenswrapper[5017]: I0129 06:38:49.947992 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c053f98-2b15-48b7-9cf8-b8cdb0b29d81-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "2c053f98-2b15-48b7-9cf8-b8cdb0b29d81" (UID: "2c053f98-2b15-48b7-9cf8-b8cdb0b29d81"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:38:49 crc kubenswrapper[5017]: I0129 06:38:49.959767 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c053f98-2b15-48b7-9cf8-b8cdb0b29d81-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "2c053f98-2b15-48b7-9cf8-b8cdb0b29d81" (UID: "2c053f98-2b15-48b7-9cf8-b8cdb0b29d81"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:38:49 crc kubenswrapper[5017]: I0129 06:38:49.962017 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c053f98-2b15-48b7-9cf8-b8cdb0b29d81-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "2c053f98-2b15-48b7-9cf8-b8cdb0b29d81" (UID: "2c053f98-2b15-48b7-9cf8-b8cdb0b29d81"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:38:49 crc kubenswrapper[5017]: I0129 06:38:49.964334 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c053f98-2b15-48b7-9cf8-b8cdb0b29d81-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "2c053f98-2b15-48b7-9cf8-b8cdb0b29d81" (UID: "2c053f98-2b15-48b7-9cf8-b8cdb0b29d81"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:38:50 crc kubenswrapper[5017]: I0129 06:38:50.032500 5017 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2c053f98-2b15-48b7-9cf8-b8cdb0b29d81-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 29 06:38:50 crc kubenswrapper[5017]: I0129 06:38:50.032538 5017 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2c053f98-2b15-48b7-9cf8-b8cdb0b29d81-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 29 06:38:50 crc kubenswrapper[5017]: I0129 06:38:50.032556 5017 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2c053f98-2b15-48b7-9cf8-b8cdb0b29d81-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 06:38:50 crc kubenswrapper[5017]: I0129 06:38:50.032575 5017 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2c053f98-2b15-48b7-9cf8-b8cdb0b29d81-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 29 06:38:50 crc kubenswrapper[5017]: I0129 06:38:50.032593 5017 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2c053f98-2b15-48b7-9cf8-b8cdb0b29d81-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 29 06:38:50 crc kubenswrapper[5017]: I0129 06:38:50.032608 5017 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2c053f98-2b15-48b7-9cf8-b8cdb0b29d81-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 06:38:50 crc kubenswrapper[5017]: I0129 06:38:50.032624 5017 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/2c053f98-2b15-48b7-9cf8-b8cdb0b29d81-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 29 06:38:50 crc kubenswrapper[5017]: I0129 06:38:50.032639 5017 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2c053f98-2b15-48b7-9cf8-b8cdb0b29d81-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 29 06:38:50 crc kubenswrapper[5017]: I0129 06:38:50.032654 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2ccwx\" (UniqueName: \"kubernetes.io/projected/2c053f98-2b15-48b7-9cf8-b8cdb0b29d81-kube-api-access-2ccwx\") on node \"crc\" DevicePath \"\"" Jan 29 06:38:50 crc kubenswrapper[5017]: I0129 06:38:50.032667 5017 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2c053f98-2b15-48b7-9cf8-b8cdb0b29d81-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 29 06:38:50 crc kubenswrapper[5017]: I0129 06:38:50.032681 5017 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2c053f98-2b15-48b7-9cf8-b8cdb0b29d81-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 29 06:38:50 crc kubenswrapper[5017]: I0129 06:38:50.032695 5017 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/2c053f98-2b15-48b7-9cf8-b8cdb0b29d81-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 29 06:38:50 crc kubenswrapper[5017]: I0129 06:38:50.032712 5017 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2c053f98-2b15-48b7-9cf8-b8cdb0b29d81-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 29 06:38:50 crc kubenswrapper[5017]: I0129 06:38:50.033470 5017 generic.go:334] "Generic (PLEG): container finished" podID="2c053f98-2b15-48b7-9cf8-b8cdb0b29d81" containerID="7674cfa68c7975ff5560b6a245a204195eebfe46e380a825f0cbeb60c0549a82" exitCode=0 Jan 29 06:38:50 crc kubenswrapper[5017]: I0129 06:38:50.033511 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-pws6m" event={"ID":"2c053f98-2b15-48b7-9cf8-b8cdb0b29d81","Type":"ContainerDied","Data":"7674cfa68c7975ff5560b6a245a204195eebfe46e380a825f0cbeb60c0549a82"} Jan 29 06:38:50 crc kubenswrapper[5017]: I0129 06:38:50.033547 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-pws6m" event={"ID":"2c053f98-2b15-48b7-9cf8-b8cdb0b29d81","Type":"ContainerDied","Data":"f2f89508122e2fcb48241ca8b0251d71128f24d5d561f89148b40e603a333799"} Jan 29 06:38:50 crc kubenswrapper[5017]: I0129 06:38:50.033567 5017 scope.go:117] "RemoveContainer" containerID="7674cfa68c7975ff5560b6a245a204195eebfe46e380a825f0cbeb60c0549a82" Jan 29 06:38:50 crc kubenswrapper[5017]: I0129 06:38:50.033753 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-pws6m" Jan 29 06:38:50 crc kubenswrapper[5017]: I0129 06:38:50.065250 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-pws6m"] Jan 29 06:38:50 crc kubenswrapper[5017]: I0129 06:38:50.065610 5017 scope.go:117] "RemoveContainer" containerID="7674cfa68c7975ff5560b6a245a204195eebfe46e380a825f0cbeb60c0549a82" Jan 29 06:38:50 crc kubenswrapper[5017]: E0129 06:38:50.066084 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7674cfa68c7975ff5560b6a245a204195eebfe46e380a825f0cbeb60c0549a82\": container with ID starting with 7674cfa68c7975ff5560b6a245a204195eebfe46e380a825f0cbeb60c0549a82 not found: ID does not exist" containerID="7674cfa68c7975ff5560b6a245a204195eebfe46e380a825f0cbeb60c0549a82" Jan 29 06:38:50 crc kubenswrapper[5017]: I0129 06:38:50.066121 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7674cfa68c7975ff5560b6a245a204195eebfe46e380a825f0cbeb60c0549a82"} err="failed to get container status \"7674cfa68c7975ff5560b6a245a204195eebfe46e380a825f0cbeb60c0549a82\": rpc error: code = NotFound desc = could not find container \"7674cfa68c7975ff5560b6a245a204195eebfe46e380a825f0cbeb60c0549a82\": container with ID starting with 7674cfa68c7975ff5560b6a245a204195eebfe46e380a825f0cbeb60c0549a82 not found: ID does not exist" Jan 29 06:38:50 crc kubenswrapper[5017]: I0129 06:38:50.072917 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-pws6m"] Jan 29 06:38:50 crc kubenswrapper[5017]: I0129 06:38:50.324775 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c053f98-2b15-48b7-9cf8-b8cdb0b29d81" 
path="/var/lib/kubelet/pods/2c053f98-2b15-48b7-9cf8-b8cdb0b29d81/volumes" Jan 29 06:38:53 crc kubenswrapper[5017]: I0129 06:38:53.891612 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-7f54ff7574-wftrb"] Jan 29 06:38:53 crc kubenswrapper[5017]: E0129 06:38:53.892152 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a83caf08-35b0-460b-ba5e-1db0c6cab902" containerName="registry-server" Jan 29 06:38:53 crc kubenswrapper[5017]: I0129 06:38:53.892167 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="a83caf08-35b0-460b-ba5e-1db0c6cab902" containerName="registry-server" Jan 29 06:38:53 crc kubenswrapper[5017]: E0129 06:38:53.892178 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c053f98-2b15-48b7-9cf8-b8cdb0b29d81" containerName="oauth-openshift" Jan 29 06:38:53 crc kubenswrapper[5017]: I0129 06:38:53.892184 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c053f98-2b15-48b7-9cf8-b8cdb0b29d81" containerName="oauth-openshift" Jan 29 06:38:53 crc kubenswrapper[5017]: E0129 06:38:53.892192 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a83caf08-35b0-460b-ba5e-1db0c6cab902" containerName="extract-content" Jan 29 06:38:53 crc kubenswrapper[5017]: I0129 06:38:53.892198 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="a83caf08-35b0-460b-ba5e-1db0c6cab902" containerName="extract-content" Jan 29 06:38:53 crc kubenswrapper[5017]: E0129 06:38:53.892207 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a83caf08-35b0-460b-ba5e-1db0c6cab902" containerName="extract-utilities" Jan 29 06:38:53 crc kubenswrapper[5017]: I0129 06:38:53.892213 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="a83caf08-35b0-460b-ba5e-1db0c6cab902" containerName="extract-utilities" Jan 29 06:38:53 crc kubenswrapper[5017]: I0129 06:38:53.892299 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="a83caf08-35b0-460b-ba5e-1db0c6cab902" containerName="registry-server" Jan 29 06:38:53 crc kubenswrapper[5017]: I0129 06:38:53.892314 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c053f98-2b15-48b7-9cf8-b8cdb0b29d81" containerName="oauth-openshift" Jan 29 06:38:53 crc kubenswrapper[5017]: I0129 06:38:53.892738 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-7f54ff7574-wftrb" Jan 29 06:38:53 crc kubenswrapper[5017]: I0129 06:38:53.895090 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 29 06:38:53 crc kubenswrapper[5017]: I0129 06:38:53.895509 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 29 06:38:53 crc kubenswrapper[5017]: I0129 06:38:53.896091 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 29 06:38:53 crc kubenswrapper[5017]: I0129 06:38:53.896256 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 29 06:38:53 crc kubenswrapper[5017]: I0129 06:38:53.896335 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 29 06:38:53 crc kubenswrapper[5017]: I0129 06:38:53.897251 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 29 06:38:53 crc kubenswrapper[5017]: I0129 06:38:53.897423 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 29 06:38:53 crc kubenswrapper[5017]: I0129 06:38:53.897442 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 29 06:38:53 crc kubenswrapper[5017]: I0129 06:38:53.897725 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 29 06:38:53 crc kubenswrapper[5017]: I0129 06:38:53.897823 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 29 06:38:53 crc kubenswrapper[5017]: I0129 06:38:53.898414 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 29 06:38:53 crc kubenswrapper[5017]: I0129 06:38:53.906759 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 29 06:38:53 crc kubenswrapper[5017]: I0129 06:38:53.907898 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 29 06:38:53 crc kubenswrapper[5017]: I0129 06:38:53.908592 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7f54ff7574-wftrb"] Jan 29 06:38:53 crc kubenswrapper[5017]: I0129 06:38:53.914039 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 29 06:38:53 crc kubenswrapper[5017]: I0129 06:38:53.915648 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 29 06:38:53 crc kubenswrapper[5017]: I0129 06:38:53.995485 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f25d33d3-6ae5-4735-af78-acd7de2143e1-audit-dir\") pod \"oauth-openshift-7f54ff7574-wftrb\" (UID: \"f25d33d3-6ae5-4735-af78-acd7de2143e1\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-wftrb" Jan 29 
06:38:53 crc kubenswrapper[5017]: I0129 06:38:53.995557 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f25d33d3-6ae5-4735-af78-acd7de2143e1-v4-0-config-system-service-ca\") pod \"oauth-openshift-7f54ff7574-wftrb\" (UID: \"f25d33d3-6ae5-4735-af78-acd7de2143e1\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-wftrb" Jan 29 06:38:53 crc kubenswrapper[5017]: I0129 06:38:53.995596 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f25d33d3-6ae5-4735-af78-acd7de2143e1-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7f54ff7574-wftrb\" (UID: \"f25d33d3-6ae5-4735-af78-acd7de2143e1\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-wftrb" Jan 29 06:38:53 crc kubenswrapper[5017]: I0129 06:38:53.995619 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f25d33d3-6ae5-4735-af78-acd7de2143e1-v4-0-config-user-template-login\") pod \"oauth-openshift-7f54ff7574-wftrb\" (UID: \"f25d33d3-6ae5-4735-af78-acd7de2143e1\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-wftrb" Jan 29 06:38:53 crc kubenswrapper[5017]: I0129 06:38:53.995643 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f25d33d3-6ae5-4735-af78-acd7de2143e1-v4-0-config-system-session\") pod \"oauth-openshift-7f54ff7574-wftrb\" (UID: \"f25d33d3-6ae5-4735-af78-acd7de2143e1\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-wftrb" Jan 29 06:38:53 crc kubenswrapper[5017]: I0129 06:38:53.995670 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f25d33d3-6ae5-4735-af78-acd7de2143e1-v4-0-config-user-template-error\") pod \"oauth-openshift-7f54ff7574-wftrb\" (UID: \"f25d33d3-6ae5-4735-af78-acd7de2143e1\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-wftrb" Jan 29 06:38:53 crc kubenswrapper[5017]: I0129 06:38:53.995700 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2h7x\" (UniqueName: \"kubernetes.io/projected/f25d33d3-6ae5-4735-af78-acd7de2143e1-kube-api-access-d2h7x\") pod \"oauth-openshift-7f54ff7574-wftrb\" (UID: \"f25d33d3-6ae5-4735-af78-acd7de2143e1\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-wftrb" Jan 29 06:38:53 crc kubenswrapper[5017]: I0129 06:38:53.995738 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f25d33d3-6ae5-4735-af78-acd7de2143e1-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7f54ff7574-wftrb\" (UID: \"f25d33d3-6ae5-4735-af78-acd7de2143e1\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-wftrb" Jan 29 06:38:53 crc kubenswrapper[5017]: I0129 06:38:53.995757 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f25d33d3-6ae5-4735-af78-acd7de2143e1-v4-0-config-user-template-provider-selection\") pod 
\"oauth-openshift-7f54ff7574-wftrb\" (UID: \"f25d33d3-6ae5-4735-af78-acd7de2143e1\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-wftrb" Jan 29 06:38:53 crc kubenswrapper[5017]: I0129 06:38:53.995780 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f25d33d3-6ae5-4735-af78-acd7de2143e1-v4-0-config-system-router-certs\") pod \"oauth-openshift-7f54ff7574-wftrb\" (UID: \"f25d33d3-6ae5-4735-af78-acd7de2143e1\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-wftrb" Jan 29 06:38:53 crc kubenswrapper[5017]: I0129 06:38:53.995800 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f25d33d3-6ae5-4735-af78-acd7de2143e1-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7f54ff7574-wftrb\" (UID: \"f25d33d3-6ae5-4735-af78-acd7de2143e1\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-wftrb" Jan 29 06:38:53 crc kubenswrapper[5017]: I0129 06:38:53.995821 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f25d33d3-6ae5-4735-af78-acd7de2143e1-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7f54ff7574-wftrb\" (UID: \"f25d33d3-6ae5-4735-af78-acd7de2143e1\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-wftrb" Jan 29 06:38:53 crc kubenswrapper[5017]: I0129 06:38:53.995847 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f25d33d3-6ae5-4735-af78-acd7de2143e1-audit-policies\") pod \"oauth-openshift-7f54ff7574-wftrb\" (UID: \"f25d33d3-6ae5-4735-af78-acd7de2143e1\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-wftrb" Jan 29 06:38:53 crc kubenswrapper[5017]: I0129 06:38:53.995866 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f25d33d3-6ae5-4735-af78-acd7de2143e1-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7f54ff7574-wftrb\" (UID: \"f25d33d3-6ae5-4735-af78-acd7de2143e1\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-wftrb" Jan 29 06:38:54 crc kubenswrapper[5017]: I0129 06:38:54.097446 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f25d33d3-6ae5-4735-af78-acd7de2143e1-v4-0-config-user-template-login\") pod \"oauth-openshift-7f54ff7574-wftrb\" (UID: \"f25d33d3-6ae5-4735-af78-acd7de2143e1\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-wftrb" Jan 29 06:38:54 crc kubenswrapper[5017]: I0129 06:38:54.097534 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f25d33d3-6ae5-4735-af78-acd7de2143e1-v4-0-config-system-session\") pod \"oauth-openshift-7f54ff7574-wftrb\" (UID: \"f25d33d3-6ae5-4735-af78-acd7de2143e1\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-wftrb" Jan 29 06:38:54 crc kubenswrapper[5017]: I0129 06:38:54.097577 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/f25d33d3-6ae5-4735-af78-acd7de2143e1-v4-0-config-user-template-error\") pod \"oauth-openshift-7f54ff7574-wftrb\" (UID: \"f25d33d3-6ae5-4735-af78-acd7de2143e1\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-wftrb" Jan 29 06:38:54 crc kubenswrapper[5017]: I0129 06:38:54.097610 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2h7x\" (UniqueName: \"kubernetes.io/projected/f25d33d3-6ae5-4735-af78-acd7de2143e1-kube-api-access-d2h7x\") pod \"oauth-openshift-7f54ff7574-wftrb\" (UID: \"f25d33d3-6ae5-4735-af78-acd7de2143e1\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-wftrb" Jan 29 06:38:54 crc kubenswrapper[5017]: I0129 06:38:54.097665 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f25d33d3-6ae5-4735-af78-acd7de2143e1-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7f54ff7574-wftrb\" (UID: \"f25d33d3-6ae5-4735-af78-acd7de2143e1\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-wftrb" Jan 29 06:38:54 crc kubenswrapper[5017]: I0129 06:38:54.097709 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f25d33d3-6ae5-4735-af78-acd7de2143e1-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7f54ff7574-wftrb\" (UID: \"f25d33d3-6ae5-4735-af78-acd7de2143e1\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-wftrb" Jan 29 06:38:54 crc kubenswrapper[5017]: I0129 06:38:54.097741 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f25d33d3-6ae5-4735-af78-acd7de2143e1-v4-0-config-system-router-certs\") pod \"oauth-openshift-7f54ff7574-wftrb\" (UID: \"f25d33d3-6ae5-4735-af78-acd7de2143e1\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-wftrb" Jan 29 06:38:54 crc kubenswrapper[5017]: I0129 06:38:54.097776 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f25d33d3-6ae5-4735-af78-acd7de2143e1-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7f54ff7574-wftrb\" (UID: \"f25d33d3-6ae5-4735-af78-acd7de2143e1\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-wftrb" Jan 29 06:38:54 crc kubenswrapper[5017]: I0129 06:38:54.097803 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f25d33d3-6ae5-4735-af78-acd7de2143e1-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7f54ff7574-wftrb\" (UID: \"f25d33d3-6ae5-4735-af78-acd7de2143e1\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-wftrb" Jan 29 06:38:54 crc kubenswrapper[5017]: I0129 06:38:54.097849 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f25d33d3-6ae5-4735-af78-acd7de2143e1-audit-policies\") pod \"oauth-openshift-7f54ff7574-wftrb\" (UID: \"f25d33d3-6ae5-4735-af78-acd7de2143e1\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-wftrb" Jan 29 06:38:54 crc kubenswrapper[5017]: I0129 06:38:54.097881 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/f25d33d3-6ae5-4735-af78-acd7de2143e1-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7f54ff7574-wftrb\" (UID: \"f25d33d3-6ae5-4735-af78-acd7de2143e1\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-wftrb" Jan 29 06:38:54 crc kubenswrapper[5017]: I0129 06:38:54.097916 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f25d33d3-6ae5-4735-af78-acd7de2143e1-audit-dir\") pod \"oauth-openshift-7f54ff7574-wftrb\" (UID: \"f25d33d3-6ae5-4735-af78-acd7de2143e1\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-wftrb" Jan 29 06:38:54 crc kubenswrapper[5017]: I0129 06:38:54.097973 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f25d33d3-6ae5-4735-af78-acd7de2143e1-v4-0-config-system-service-ca\") pod \"oauth-openshift-7f54ff7574-wftrb\" (UID: \"f25d33d3-6ae5-4735-af78-acd7de2143e1\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-wftrb" Jan 29 06:38:54 crc kubenswrapper[5017]: I0129 06:38:54.098011 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f25d33d3-6ae5-4735-af78-acd7de2143e1-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7f54ff7574-wftrb\" (UID: \"f25d33d3-6ae5-4735-af78-acd7de2143e1\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-wftrb" Jan 29 06:38:54 crc kubenswrapper[5017]: I0129 06:38:54.098350 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f25d33d3-6ae5-4735-af78-acd7de2143e1-audit-dir\") pod \"oauth-openshift-7f54ff7574-wftrb\" (UID: \"f25d33d3-6ae5-4735-af78-acd7de2143e1\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-wftrb" Jan 29 06:38:54 crc kubenswrapper[5017]: I0129 06:38:54.098634 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f25d33d3-6ae5-4735-af78-acd7de2143e1-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7f54ff7574-wftrb\" (UID: \"f25d33d3-6ae5-4735-af78-acd7de2143e1\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-wftrb" Jan 29 06:38:54 crc kubenswrapper[5017]: I0129 06:38:54.099028 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f25d33d3-6ae5-4735-af78-acd7de2143e1-audit-policies\") pod \"oauth-openshift-7f54ff7574-wftrb\" (UID: \"f25d33d3-6ae5-4735-af78-acd7de2143e1\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-wftrb" Jan 29 06:38:54 crc kubenswrapper[5017]: I0129 06:38:54.099213 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f25d33d3-6ae5-4735-af78-acd7de2143e1-v4-0-config-system-service-ca\") pod \"oauth-openshift-7f54ff7574-wftrb\" (UID: \"f25d33d3-6ae5-4735-af78-acd7de2143e1\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-wftrb" Jan 29 06:38:54 crc kubenswrapper[5017]: I0129 06:38:54.099370 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f25d33d3-6ae5-4735-af78-acd7de2143e1-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7f54ff7574-wftrb\" (UID: 
\"f25d33d3-6ae5-4735-af78-acd7de2143e1\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-wftrb" Jan 29 06:38:54 crc kubenswrapper[5017]: I0129 06:38:54.101720 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f25d33d3-6ae5-4735-af78-acd7de2143e1-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7f54ff7574-wftrb\" (UID: \"f25d33d3-6ae5-4735-af78-acd7de2143e1\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-wftrb" Jan 29 06:38:54 crc kubenswrapper[5017]: I0129 06:38:54.103411 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f25d33d3-6ae5-4735-af78-acd7de2143e1-v4-0-config-user-template-login\") pod \"oauth-openshift-7f54ff7574-wftrb\" (UID: \"f25d33d3-6ae5-4735-af78-acd7de2143e1\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-wftrb" Jan 29 06:38:54 crc kubenswrapper[5017]: I0129 06:38:54.103691 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f25d33d3-6ae5-4735-af78-acd7de2143e1-v4-0-config-system-session\") pod \"oauth-openshift-7f54ff7574-wftrb\" (UID: \"f25d33d3-6ae5-4735-af78-acd7de2143e1\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-wftrb" Jan 29 06:38:54 crc kubenswrapper[5017]: I0129 06:38:54.103878 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f25d33d3-6ae5-4735-af78-acd7de2143e1-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7f54ff7574-wftrb\" (UID: \"f25d33d3-6ae5-4735-af78-acd7de2143e1\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-wftrb" Jan 29 06:38:54 crc kubenswrapper[5017]: I0129 06:38:54.105001 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f25d33d3-6ae5-4735-af78-acd7de2143e1-v4-0-config-system-router-certs\") pod \"oauth-openshift-7f54ff7574-wftrb\" (UID: \"f25d33d3-6ae5-4735-af78-acd7de2143e1\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-wftrb" Jan 29 06:38:54 crc kubenswrapper[5017]: I0129 06:38:54.108345 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f25d33d3-6ae5-4735-af78-acd7de2143e1-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7f54ff7574-wftrb\" (UID: \"f25d33d3-6ae5-4735-af78-acd7de2143e1\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-wftrb" Jan 29 06:38:54 crc kubenswrapper[5017]: I0129 06:38:54.110612 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f25d33d3-6ae5-4735-af78-acd7de2143e1-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7f54ff7574-wftrb\" (UID: \"f25d33d3-6ae5-4735-af78-acd7de2143e1\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-wftrb" Jan 29 06:38:54 crc kubenswrapper[5017]: I0129 06:38:54.119293 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f25d33d3-6ae5-4735-af78-acd7de2143e1-v4-0-config-user-template-error\") pod \"oauth-openshift-7f54ff7574-wftrb\" (UID: 
\"f25d33d3-6ae5-4735-af78-acd7de2143e1\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-wftrb" Jan 29 06:38:54 crc kubenswrapper[5017]: I0129 06:38:54.121694 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2h7x\" (UniqueName: \"kubernetes.io/projected/f25d33d3-6ae5-4735-af78-acd7de2143e1-kube-api-access-d2h7x\") pod \"oauth-openshift-7f54ff7574-wftrb\" (UID: \"f25d33d3-6ae5-4735-af78-acd7de2143e1\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-wftrb" Jan 29 06:38:54 crc kubenswrapper[5017]: I0129 06:38:54.216568 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7f54ff7574-wftrb" Jan 29 06:38:54 crc kubenswrapper[5017]: I0129 06:38:54.705517 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7f54ff7574-wftrb"] Jan 29 06:38:54 crc kubenswrapper[5017]: W0129 06:38:54.718079 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf25d33d3_6ae5_4735_af78_acd7de2143e1.slice/crio-ea08d6eec6e91cf3f6d214fd4d2541905220d450fd2b4243fa055085d74c453c WatchSource:0}: Error finding container ea08d6eec6e91cf3f6d214fd4d2541905220d450fd2b4243fa055085d74c453c: Status 404 returned error can't find the container with id ea08d6eec6e91cf3f6d214fd4d2541905220d450fd2b4243fa055085d74c453c Jan 29 06:38:55 crc kubenswrapper[5017]: I0129 06:38:55.068424 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7f54ff7574-wftrb" event={"ID":"f25d33d3-6ae5-4735-af78-acd7de2143e1","Type":"ContainerStarted","Data":"830106c0d6b31c4724de91edf63a828ecf8ea3e512cf9ddd6fed068dbb3a8df3"} Jan 29 06:38:55 crc kubenswrapper[5017]: I0129 06:38:55.068484 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7f54ff7574-wftrb" event={"ID":"f25d33d3-6ae5-4735-af78-acd7de2143e1","Type":"ContainerStarted","Data":"ea08d6eec6e91cf3f6d214fd4d2541905220d450fd2b4243fa055085d74c453c"} Jan 29 06:38:55 crc kubenswrapper[5017]: I0129 06:38:55.068889 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-7f54ff7574-wftrb" Jan 29 06:38:55 crc kubenswrapper[5017]: I0129 06:38:55.092184 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-7f54ff7574-wftrb" podStartSLOduration=31.092162861 podStartE2EDuration="31.092162861s" podCreationTimestamp="2026-01-29 06:38:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:38:55.090300995 +0000 UTC m=+221.464748605" watchObservedRunningTime="2026-01-29 06:38:55.092162861 +0000 UTC m=+221.466610471" Jan 29 06:38:55 crc kubenswrapper[5017]: I0129 06:38:55.323469 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-7f54ff7574-wftrb" Jan 29 06:38:56 crc kubenswrapper[5017]: I0129 06:38:56.539277 5017 patch_prober.go:28] interesting pod/machine-config-daemon-895pl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 06:38:56 crc kubenswrapper[5017]: I0129 06:38:56.539730 5017 prober.go:107] "Probe 
failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 06:38:56 crc kubenswrapper[5017]: I0129 06:38:56.539796 5017 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-895pl" Jan 29 06:38:56 crc kubenswrapper[5017]: I0129 06:38:56.540854 5017 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ddd299f15e4338652a37e1e3f09560a50e6c59d7bf29e827462d060a56294438"} pod="openshift-machine-config-operator/machine-config-daemon-895pl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 06:38:56 crc kubenswrapper[5017]: I0129 06:38:56.540993 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" containerID="cri-o://ddd299f15e4338652a37e1e3f09560a50e6c59d7bf29e827462d060a56294438" gracePeriod=600 Jan 29 06:38:57 crc kubenswrapper[5017]: I0129 06:38:57.090071 5017 generic.go:334] "Generic (PLEG): container finished" podID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerID="ddd299f15e4338652a37e1e3f09560a50e6c59d7bf29e827462d060a56294438" exitCode=0 Jan 29 06:38:57 crc kubenswrapper[5017]: I0129 06:38:57.090138 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-895pl" event={"ID":"2672ef63-7861-4c3d-a1b4-03cc9d18f8e2","Type":"ContainerDied","Data":"ddd299f15e4338652a37e1e3f09560a50e6c59d7bf29e827462d060a56294438"} Jan 29 06:38:57 crc kubenswrapper[5017]: I0129 06:38:57.090565 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-895pl" event={"ID":"2672ef63-7861-4c3d-a1b4-03cc9d18f8e2","Type":"ContainerStarted","Data":"3028d53bd201fdd844bd103a5d0e85a942fa53d5fbdd5f5e360ee9ecec025248"} Jan 29 06:38:57 crc kubenswrapper[5017]: I0129 06:38:57.241945 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-8b898d5f7-bhj6k"] Jan 29 06:38:57 crc kubenswrapper[5017]: I0129 06:38:57.242374 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-8b898d5f7-bhj6k" podUID="cbf7b775-959f-433d-9d7a-748f1004bfc1" containerName="controller-manager" containerID="cri-o://63d6727b0ed8010f241350237d21e5baa1b75074cc10d7ffe30c6d996ca6f802" gracePeriod=30 Jan 29 06:38:57 crc kubenswrapper[5017]: I0129 06:38:57.336386 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6487b66959-hhndj"] Jan 29 06:38:57 crc kubenswrapper[5017]: I0129 06:38:57.336667 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6487b66959-hhndj" podUID="19513823-f10b-42e2-a8ea-9d3d423bd0cf" containerName="route-controller-manager" containerID="cri-o://f91b5135ac88a3df30c611a014a04d0b5c596554704363c9ec848ec54777bccd" gracePeriod=30 Jan 29 06:38:57 crc kubenswrapper[5017]: I0129 06:38:57.854028 5017 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6487b66959-hhndj" Jan 29 06:38:57 crc kubenswrapper[5017]: I0129 06:38:57.864027 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-8b898d5f7-bhj6k" Jan 29 06:38:57 crc kubenswrapper[5017]: I0129 06:38:57.950458 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/19513823-f10b-42e2-a8ea-9d3d423bd0cf-client-ca\") pod \"19513823-f10b-42e2-a8ea-9d3d423bd0cf\" (UID: \"19513823-f10b-42e2-a8ea-9d3d423bd0cf\") " Jan 29 06:38:57 crc kubenswrapper[5017]: I0129 06:38:57.950631 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g54sf\" (UniqueName: \"kubernetes.io/projected/19513823-f10b-42e2-a8ea-9d3d423bd0cf-kube-api-access-g54sf\") pod \"19513823-f10b-42e2-a8ea-9d3d423bd0cf\" (UID: \"19513823-f10b-42e2-a8ea-9d3d423bd0cf\") " Jan 29 06:38:57 crc kubenswrapper[5017]: I0129 06:38:57.951640 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19513823-f10b-42e2-a8ea-9d3d423bd0cf-client-ca" (OuterVolumeSpecName: "client-ca") pod "19513823-f10b-42e2-a8ea-9d3d423bd0cf" (UID: "19513823-f10b-42e2-a8ea-9d3d423bd0cf"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:38:57 crc kubenswrapper[5017]: I0129 06:38:57.952326 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j86rk\" (UniqueName: \"kubernetes.io/projected/cbf7b775-959f-433d-9d7a-748f1004bfc1-kube-api-access-j86rk\") pod \"cbf7b775-959f-433d-9d7a-748f1004bfc1\" (UID: \"cbf7b775-959f-433d-9d7a-748f1004bfc1\") " Jan 29 06:38:57 crc kubenswrapper[5017]: I0129 06:38:57.952358 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cbf7b775-959f-433d-9d7a-748f1004bfc1-proxy-ca-bundles\") pod \"cbf7b775-959f-433d-9d7a-748f1004bfc1\" (UID: \"cbf7b775-959f-433d-9d7a-748f1004bfc1\") " Jan 29 06:38:57 crc kubenswrapper[5017]: I0129 06:38:57.952395 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/19513823-f10b-42e2-a8ea-9d3d423bd0cf-serving-cert\") pod \"19513823-f10b-42e2-a8ea-9d3d423bd0cf\" (UID: \"19513823-f10b-42e2-a8ea-9d3d423bd0cf\") " Jan 29 06:38:57 crc kubenswrapper[5017]: I0129 06:38:57.952420 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cbf7b775-959f-433d-9d7a-748f1004bfc1-config\") pod \"cbf7b775-959f-433d-9d7a-748f1004bfc1\" (UID: \"cbf7b775-959f-433d-9d7a-748f1004bfc1\") " Jan 29 06:38:57 crc kubenswrapper[5017]: I0129 06:38:57.952467 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cbf7b775-959f-433d-9d7a-748f1004bfc1-client-ca\") pod \"cbf7b775-959f-433d-9d7a-748f1004bfc1\" (UID: \"cbf7b775-959f-433d-9d7a-748f1004bfc1\") " Jan 29 06:38:57 crc kubenswrapper[5017]: I0129 06:38:57.952530 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cbf7b775-959f-433d-9d7a-748f1004bfc1-serving-cert\") pod \"cbf7b775-959f-433d-9d7a-748f1004bfc1\" (UID: 
\"cbf7b775-959f-433d-9d7a-748f1004bfc1\") " Jan 29 06:38:57 crc kubenswrapper[5017]: I0129 06:38:57.952573 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19513823-f10b-42e2-a8ea-9d3d423bd0cf-config\") pod \"19513823-f10b-42e2-a8ea-9d3d423bd0cf\" (UID: \"19513823-f10b-42e2-a8ea-9d3d423bd0cf\") " Jan 29 06:38:57 crc kubenswrapper[5017]: I0129 06:38:57.952907 5017 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/19513823-f10b-42e2-a8ea-9d3d423bd0cf-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 06:38:57 crc kubenswrapper[5017]: I0129 06:38:57.953414 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cbf7b775-959f-433d-9d7a-748f1004bfc1-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "cbf7b775-959f-433d-9d7a-748f1004bfc1" (UID: "cbf7b775-959f-433d-9d7a-748f1004bfc1"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:38:57 crc kubenswrapper[5017]: I0129 06:38:57.953521 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cbf7b775-959f-433d-9d7a-748f1004bfc1-client-ca" (OuterVolumeSpecName: "client-ca") pod "cbf7b775-959f-433d-9d7a-748f1004bfc1" (UID: "cbf7b775-959f-433d-9d7a-748f1004bfc1"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:38:57 crc kubenswrapper[5017]: I0129 06:38:57.953553 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cbf7b775-959f-433d-9d7a-748f1004bfc1-config" (OuterVolumeSpecName: "config") pod "cbf7b775-959f-433d-9d7a-748f1004bfc1" (UID: "cbf7b775-959f-433d-9d7a-748f1004bfc1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:38:57 crc kubenswrapper[5017]: I0129 06:38:57.953526 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19513823-f10b-42e2-a8ea-9d3d423bd0cf-config" (OuterVolumeSpecName: "config") pod "19513823-f10b-42e2-a8ea-9d3d423bd0cf" (UID: "19513823-f10b-42e2-a8ea-9d3d423bd0cf"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:38:57 crc kubenswrapper[5017]: I0129 06:38:57.958046 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbf7b775-959f-433d-9d7a-748f1004bfc1-kube-api-access-j86rk" (OuterVolumeSpecName: "kube-api-access-j86rk") pod "cbf7b775-959f-433d-9d7a-748f1004bfc1" (UID: "cbf7b775-959f-433d-9d7a-748f1004bfc1"). InnerVolumeSpecName "kube-api-access-j86rk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:38:57 crc kubenswrapper[5017]: I0129 06:38:57.958641 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbf7b775-959f-433d-9d7a-748f1004bfc1-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "cbf7b775-959f-433d-9d7a-748f1004bfc1" (UID: "cbf7b775-959f-433d-9d7a-748f1004bfc1"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:38:57 crc kubenswrapper[5017]: I0129 06:38:57.958694 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19513823-f10b-42e2-a8ea-9d3d423bd0cf-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "19513823-f10b-42e2-a8ea-9d3d423bd0cf" (UID: "19513823-f10b-42e2-a8ea-9d3d423bd0cf"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:38:57 crc kubenswrapper[5017]: I0129 06:38:57.962825 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19513823-f10b-42e2-a8ea-9d3d423bd0cf-kube-api-access-g54sf" (OuterVolumeSpecName: "kube-api-access-g54sf") pod "19513823-f10b-42e2-a8ea-9d3d423bd0cf" (UID: "19513823-f10b-42e2-a8ea-9d3d423bd0cf"). InnerVolumeSpecName "kube-api-access-g54sf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:38:58 crc kubenswrapper[5017]: I0129 06:38:58.054143 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g54sf\" (UniqueName: \"kubernetes.io/projected/19513823-f10b-42e2-a8ea-9d3d423bd0cf-kube-api-access-g54sf\") on node \"crc\" DevicePath \"\"" Jan 29 06:38:58 crc kubenswrapper[5017]: I0129 06:38:58.054177 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j86rk\" (UniqueName: \"kubernetes.io/projected/cbf7b775-959f-433d-9d7a-748f1004bfc1-kube-api-access-j86rk\") on node \"crc\" DevicePath \"\"" Jan 29 06:38:58 crc kubenswrapper[5017]: I0129 06:38:58.054189 5017 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cbf7b775-959f-433d-9d7a-748f1004bfc1-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 29 06:38:58 crc kubenswrapper[5017]: I0129 06:38:58.054202 5017 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/19513823-f10b-42e2-a8ea-9d3d423bd0cf-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 06:38:58 crc kubenswrapper[5017]: I0129 06:38:58.054214 5017 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cbf7b775-959f-433d-9d7a-748f1004bfc1-config\") on node \"crc\" DevicePath \"\"" Jan 29 06:38:58 crc kubenswrapper[5017]: I0129 06:38:58.054223 5017 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cbf7b775-959f-433d-9d7a-748f1004bfc1-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 06:38:58 crc kubenswrapper[5017]: I0129 06:38:58.054233 5017 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cbf7b775-959f-433d-9d7a-748f1004bfc1-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 06:38:58 crc kubenswrapper[5017]: I0129 06:38:58.054244 5017 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19513823-f10b-42e2-a8ea-9d3d423bd0cf-config\") on node \"crc\" DevicePath \"\"" Jan 29 06:38:58 crc kubenswrapper[5017]: I0129 06:38:58.100908 5017 generic.go:334] "Generic (PLEG): container finished" podID="19513823-f10b-42e2-a8ea-9d3d423bd0cf" containerID="f91b5135ac88a3df30c611a014a04d0b5c596554704363c9ec848ec54777bccd" exitCode=0 Jan 29 06:38:58 crc kubenswrapper[5017]: I0129 06:38:58.101014 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6487b66959-hhndj" 
event={"ID":"19513823-f10b-42e2-a8ea-9d3d423bd0cf","Type":"ContainerDied","Data":"f91b5135ac88a3df30c611a014a04d0b5c596554704363c9ec848ec54777bccd"} Jan 29 06:38:58 crc kubenswrapper[5017]: I0129 06:38:58.101066 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6487b66959-hhndj" event={"ID":"19513823-f10b-42e2-a8ea-9d3d423bd0cf","Type":"ContainerDied","Data":"5c9bd524fb9e64b66faa0accad9fcd141c08270ea5138c3919fe65e343a58e46"} Jan 29 06:38:58 crc kubenswrapper[5017]: I0129 06:38:58.101086 5017 scope.go:117] "RemoveContainer" containerID="f91b5135ac88a3df30c611a014a04d0b5c596554704363c9ec848ec54777bccd" Jan 29 06:38:58 crc kubenswrapper[5017]: I0129 06:38:58.101216 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6487b66959-hhndj" Jan 29 06:38:58 crc kubenswrapper[5017]: I0129 06:38:58.104073 5017 generic.go:334] "Generic (PLEG): container finished" podID="cbf7b775-959f-433d-9d7a-748f1004bfc1" containerID="63d6727b0ed8010f241350237d21e5baa1b75074cc10d7ffe30c6d996ca6f802" exitCode=0 Jan 29 06:38:58 crc kubenswrapper[5017]: I0129 06:38:58.104139 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8b898d5f7-bhj6k" event={"ID":"cbf7b775-959f-433d-9d7a-748f1004bfc1","Type":"ContainerDied","Data":"63d6727b0ed8010f241350237d21e5baa1b75074cc10d7ffe30c6d996ca6f802"} Jan 29 06:38:58 crc kubenswrapper[5017]: I0129 06:38:58.104182 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8b898d5f7-bhj6k" event={"ID":"cbf7b775-959f-433d-9d7a-748f1004bfc1","Type":"ContainerDied","Data":"d9cbcaec563e81ba005af4e92d6f95c2b4513b87cb453fa27e7c97b28c305212"} Jan 29 06:38:58 crc kubenswrapper[5017]: I0129 06:38:58.104279 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-8b898d5f7-bhj6k" Jan 29 06:38:58 crc kubenswrapper[5017]: I0129 06:38:58.374443 5017 scope.go:117] "RemoveContainer" containerID="f91b5135ac88a3df30c611a014a04d0b5c596554704363c9ec848ec54777bccd" Jan 29 06:38:58 crc kubenswrapper[5017]: E0129 06:38:58.399816 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f91b5135ac88a3df30c611a014a04d0b5c596554704363c9ec848ec54777bccd\": container with ID starting with f91b5135ac88a3df30c611a014a04d0b5c596554704363c9ec848ec54777bccd not found: ID does not exist" containerID="f91b5135ac88a3df30c611a014a04d0b5c596554704363c9ec848ec54777bccd" Jan 29 06:38:58 crc kubenswrapper[5017]: I0129 06:38:58.399873 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f91b5135ac88a3df30c611a014a04d0b5c596554704363c9ec848ec54777bccd"} err="failed to get container status \"f91b5135ac88a3df30c611a014a04d0b5c596554704363c9ec848ec54777bccd\": rpc error: code = NotFound desc = could not find container \"f91b5135ac88a3df30c611a014a04d0b5c596554704363c9ec848ec54777bccd\": container with ID starting with f91b5135ac88a3df30c611a014a04d0b5c596554704363c9ec848ec54777bccd not found: ID does not exist" Jan 29 06:38:58 crc kubenswrapper[5017]: I0129 06:38:58.399920 5017 scope.go:117] "RemoveContainer" containerID="63d6727b0ed8010f241350237d21e5baa1b75074cc10d7ffe30c6d996ca6f802" Jan 29 06:38:58 crc kubenswrapper[5017]: I0129 06:38:58.506294 5017 scope.go:117] "RemoveContainer" containerID="63d6727b0ed8010f241350237d21e5baa1b75074cc10d7ffe30c6d996ca6f802" Jan 29 06:38:58 crc kubenswrapper[5017]: E0129 06:38:58.508087 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"63d6727b0ed8010f241350237d21e5baa1b75074cc10d7ffe30c6d996ca6f802\": container with ID starting with 63d6727b0ed8010f241350237d21e5baa1b75074cc10d7ffe30c6d996ca6f802 not found: ID does not exist" containerID="63d6727b0ed8010f241350237d21e5baa1b75074cc10d7ffe30c6d996ca6f802" Jan 29 06:38:58 crc kubenswrapper[5017]: I0129 06:38:58.508149 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63d6727b0ed8010f241350237d21e5baa1b75074cc10d7ffe30c6d996ca6f802"} err="failed to get container status \"63d6727b0ed8010f241350237d21e5baa1b75074cc10d7ffe30c6d996ca6f802\": rpc error: code = NotFound desc = could not find container \"63d6727b0ed8010f241350237d21e5baa1b75074cc10d7ffe30c6d996ca6f802\": container with ID starting with 63d6727b0ed8010f241350237d21e5baa1b75074cc10d7ffe30c6d996ca6f802 not found: ID does not exist" Jan 29 06:38:58 crc kubenswrapper[5017]: I0129 06:38:58.512170 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-8b898d5f7-bhj6k"] Jan 29 06:38:58 crc kubenswrapper[5017]: I0129 06:38:58.516777 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-8b898d5f7-bhj6k"] Jan 29 06:38:58 crc kubenswrapper[5017]: I0129 06:38:58.542587 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6487b66959-hhndj"] Jan 29 06:38:58 crc kubenswrapper[5017]: I0129 06:38:58.551018 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6487b66959-hhndj"] Jan 29 06:38:58 crc 
kubenswrapper[5017]: I0129 06:38:58.888864 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-bfd95c495-sszz7"] Jan 29 06:38:58 crc kubenswrapper[5017]: E0129 06:38:58.889206 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19513823-f10b-42e2-a8ea-9d3d423bd0cf" containerName="route-controller-manager" Jan 29 06:38:58 crc kubenswrapper[5017]: I0129 06:38:58.889221 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="19513823-f10b-42e2-a8ea-9d3d423bd0cf" containerName="route-controller-manager" Jan 29 06:38:58 crc kubenswrapper[5017]: E0129 06:38:58.889238 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbf7b775-959f-433d-9d7a-748f1004bfc1" containerName="controller-manager" Jan 29 06:38:58 crc kubenswrapper[5017]: I0129 06:38:58.889247 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbf7b775-959f-433d-9d7a-748f1004bfc1" containerName="controller-manager" Jan 29 06:38:58 crc kubenswrapper[5017]: I0129 06:38:58.889380 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="19513823-f10b-42e2-a8ea-9d3d423bd0cf" containerName="route-controller-manager" Jan 29 06:38:58 crc kubenswrapper[5017]: I0129 06:38:58.889406 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbf7b775-959f-433d-9d7a-748f1004bfc1" containerName="controller-manager" Jan 29 06:38:58 crc kubenswrapper[5017]: I0129 06:38:58.889901 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-bfd95c495-sszz7" Jan 29 06:38:58 crc kubenswrapper[5017]: I0129 06:38:58.892584 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 29 06:38:58 crc kubenswrapper[5017]: I0129 06:38:58.893001 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 29 06:38:58 crc kubenswrapper[5017]: I0129 06:38:58.893134 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 29 06:38:58 crc kubenswrapper[5017]: I0129 06:38:58.893370 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 29 06:38:58 crc kubenswrapper[5017]: I0129 06:38:58.893560 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 29 06:38:58 crc kubenswrapper[5017]: I0129 06:38:58.893734 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-676669c895-rnsqn"] Jan 29 06:38:58 crc kubenswrapper[5017]: I0129 06:38:58.893806 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 29 06:38:58 crc kubenswrapper[5017]: I0129 06:38:58.894522 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-676669c895-rnsqn" Jan 29 06:38:58 crc kubenswrapper[5017]: I0129 06:38:58.896534 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 29 06:38:58 crc kubenswrapper[5017]: I0129 06:38:58.896746 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 29 06:38:58 crc kubenswrapper[5017]: I0129 06:38:58.896789 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 29 06:38:58 crc kubenswrapper[5017]: I0129 06:38:58.896810 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 29 06:38:58 crc kubenswrapper[5017]: I0129 06:38:58.901810 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 29 06:38:58 crc kubenswrapper[5017]: I0129 06:38:58.901837 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 29 06:38:58 crc kubenswrapper[5017]: I0129 06:38:58.905521 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-676669c895-rnsqn"] Jan 29 06:38:58 crc kubenswrapper[5017]: I0129 06:38:58.908198 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-bfd95c495-sszz7"] Jan 29 06:38:58 crc kubenswrapper[5017]: I0129 06:38:58.909866 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 29 06:38:59 crc kubenswrapper[5017]: I0129 06:38:59.009748 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/13a8e2b1-10e2-4f3b-a996-32d3a15a79ed-serving-cert\") pod \"route-controller-manager-bfd95c495-sszz7\" (UID: \"13a8e2b1-10e2-4f3b-a996-32d3a15a79ed\") " pod="openshift-route-controller-manager/route-controller-manager-bfd95c495-sszz7" Jan 29 06:38:59 crc kubenswrapper[5017]: I0129 06:38:59.009798 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/51bbb732-9a92-4298-8137-bf5c80592af1-proxy-ca-bundles\") pod \"controller-manager-676669c895-rnsqn\" (UID: \"51bbb732-9a92-4298-8137-bf5c80592af1\") " pod="openshift-controller-manager/controller-manager-676669c895-rnsqn" Jan 29 06:38:59 crc kubenswrapper[5017]: I0129 06:38:59.009851 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vk45v\" (UniqueName: \"kubernetes.io/projected/13a8e2b1-10e2-4f3b-a996-32d3a15a79ed-kube-api-access-vk45v\") pod \"route-controller-manager-bfd95c495-sszz7\" (UID: \"13a8e2b1-10e2-4f3b-a996-32d3a15a79ed\") " pod="openshift-route-controller-manager/route-controller-manager-bfd95c495-sszz7" Jan 29 06:38:59 crc kubenswrapper[5017]: I0129 06:38:59.009901 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13a8e2b1-10e2-4f3b-a996-32d3a15a79ed-config\") pod \"route-controller-manager-bfd95c495-sszz7\" (UID: \"13a8e2b1-10e2-4f3b-a996-32d3a15a79ed\") " 
pod="openshift-route-controller-manager/route-controller-manager-bfd95c495-sszz7" Jan 29 06:38:59 crc kubenswrapper[5017]: I0129 06:38:59.009949 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/13a8e2b1-10e2-4f3b-a996-32d3a15a79ed-client-ca\") pod \"route-controller-manager-bfd95c495-sszz7\" (UID: \"13a8e2b1-10e2-4f3b-a996-32d3a15a79ed\") " pod="openshift-route-controller-manager/route-controller-manager-bfd95c495-sszz7" Jan 29 06:38:59 crc kubenswrapper[5017]: I0129 06:38:59.010009 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7j2hs\" (UniqueName: \"kubernetes.io/projected/51bbb732-9a92-4298-8137-bf5c80592af1-kube-api-access-7j2hs\") pod \"controller-manager-676669c895-rnsqn\" (UID: \"51bbb732-9a92-4298-8137-bf5c80592af1\") " pod="openshift-controller-manager/controller-manager-676669c895-rnsqn" Jan 29 06:38:59 crc kubenswrapper[5017]: I0129 06:38:59.010091 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/51bbb732-9a92-4298-8137-bf5c80592af1-client-ca\") pod \"controller-manager-676669c895-rnsqn\" (UID: \"51bbb732-9a92-4298-8137-bf5c80592af1\") " pod="openshift-controller-manager/controller-manager-676669c895-rnsqn" Jan 29 06:38:59 crc kubenswrapper[5017]: I0129 06:38:59.010142 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51bbb732-9a92-4298-8137-bf5c80592af1-config\") pod \"controller-manager-676669c895-rnsqn\" (UID: \"51bbb732-9a92-4298-8137-bf5c80592af1\") " pod="openshift-controller-manager/controller-manager-676669c895-rnsqn" Jan 29 06:38:59 crc kubenswrapper[5017]: I0129 06:38:59.010195 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/51bbb732-9a92-4298-8137-bf5c80592af1-serving-cert\") pod \"controller-manager-676669c895-rnsqn\" (UID: \"51bbb732-9a92-4298-8137-bf5c80592af1\") " pod="openshift-controller-manager/controller-manager-676669c895-rnsqn" Jan 29 06:38:59 crc kubenswrapper[5017]: I0129 06:38:59.111321 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/13a8e2b1-10e2-4f3b-a996-32d3a15a79ed-serving-cert\") pod \"route-controller-manager-bfd95c495-sszz7\" (UID: \"13a8e2b1-10e2-4f3b-a996-32d3a15a79ed\") " pod="openshift-route-controller-manager/route-controller-manager-bfd95c495-sszz7" Jan 29 06:38:59 crc kubenswrapper[5017]: I0129 06:38:59.111387 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/51bbb732-9a92-4298-8137-bf5c80592af1-proxy-ca-bundles\") pod \"controller-manager-676669c895-rnsqn\" (UID: \"51bbb732-9a92-4298-8137-bf5c80592af1\") " pod="openshift-controller-manager/controller-manager-676669c895-rnsqn" Jan 29 06:38:59 crc kubenswrapper[5017]: I0129 06:38:59.111426 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vk45v\" (UniqueName: \"kubernetes.io/projected/13a8e2b1-10e2-4f3b-a996-32d3a15a79ed-kube-api-access-vk45v\") pod \"route-controller-manager-bfd95c495-sszz7\" (UID: \"13a8e2b1-10e2-4f3b-a996-32d3a15a79ed\") " 
pod="openshift-route-controller-manager/route-controller-manager-bfd95c495-sszz7" Jan 29 06:38:59 crc kubenswrapper[5017]: I0129 06:38:59.111457 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13a8e2b1-10e2-4f3b-a996-32d3a15a79ed-config\") pod \"route-controller-manager-bfd95c495-sszz7\" (UID: \"13a8e2b1-10e2-4f3b-a996-32d3a15a79ed\") " pod="openshift-route-controller-manager/route-controller-manager-bfd95c495-sszz7" Jan 29 06:38:59 crc kubenswrapper[5017]: I0129 06:38:59.111491 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/13a8e2b1-10e2-4f3b-a996-32d3a15a79ed-client-ca\") pod \"route-controller-manager-bfd95c495-sszz7\" (UID: \"13a8e2b1-10e2-4f3b-a996-32d3a15a79ed\") " pod="openshift-route-controller-manager/route-controller-manager-bfd95c495-sszz7" Jan 29 06:38:59 crc kubenswrapper[5017]: I0129 06:38:59.111518 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7j2hs\" (UniqueName: \"kubernetes.io/projected/51bbb732-9a92-4298-8137-bf5c80592af1-kube-api-access-7j2hs\") pod \"controller-manager-676669c895-rnsqn\" (UID: \"51bbb732-9a92-4298-8137-bf5c80592af1\") " pod="openshift-controller-manager/controller-manager-676669c895-rnsqn" Jan 29 06:38:59 crc kubenswrapper[5017]: I0129 06:38:59.111552 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/51bbb732-9a92-4298-8137-bf5c80592af1-client-ca\") pod \"controller-manager-676669c895-rnsqn\" (UID: \"51bbb732-9a92-4298-8137-bf5c80592af1\") " pod="openshift-controller-manager/controller-manager-676669c895-rnsqn" Jan 29 06:38:59 crc kubenswrapper[5017]: I0129 06:38:59.111579 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51bbb732-9a92-4298-8137-bf5c80592af1-config\") pod \"controller-manager-676669c895-rnsqn\" (UID: \"51bbb732-9a92-4298-8137-bf5c80592af1\") " pod="openshift-controller-manager/controller-manager-676669c895-rnsqn" Jan 29 06:38:59 crc kubenswrapper[5017]: I0129 06:38:59.111618 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/51bbb732-9a92-4298-8137-bf5c80592af1-serving-cert\") pod \"controller-manager-676669c895-rnsqn\" (UID: \"51bbb732-9a92-4298-8137-bf5c80592af1\") " pod="openshift-controller-manager/controller-manager-676669c895-rnsqn" Jan 29 06:38:59 crc kubenswrapper[5017]: I0129 06:38:59.113289 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/51bbb732-9a92-4298-8137-bf5c80592af1-client-ca\") pod \"controller-manager-676669c895-rnsqn\" (UID: \"51bbb732-9a92-4298-8137-bf5c80592af1\") " pod="openshift-controller-manager/controller-manager-676669c895-rnsqn" Jan 29 06:38:59 crc kubenswrapper[5017]: I0129 06:38:59.113378 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13a8e2b1-10e2-4f3b-a996-32d3a15a79ed-config\") pod \"route-controller-manager-bfd95c495-sszz7\" (UID: \"13a8e2b1-10e2-4f3b-a996-32d3a15a79ed\") " pod="openshift-route-controller-manager/route-controller-manager-bfd95c495-sszz7" Jan 29 06:38:59 crc kubenswrapper[5017]: I0129 06:38:59.113946 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"client-ca\" (UniqueName: \"kubernetes.io/configmap/13a8e2b1-10e2-4f3b-a996-32d3a15a79ed-client-ca\") pod \"route-controller-manager-bfd95c495-sszz7\" (UID: \"13a8e2b1-10e2-4f3b-a996-32d3a15a79ed\") " pod="openshift-route-controller-manager/route-controller-manager-bfd95c495-sszz7" Jan 29 06:38:59 crc kubenswrapper[5017]: I0129 06:38:59.114051 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/51bbb732-9a92-4298-8137-bf5c80592af1-proxy-ca-bundles\") pod \"controller-manager-676669c895-rnsqn\" (UID: \"51bbb732-9a92-4298-8137-bf5c80592af1\") " pod="openshift-controller-manager/controller-manager-676669c895-rnsqn" Jan 29 06:38:59 crc kubenswrapper[5017]: I0129 06:38:59.114579 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51bbb732-9a92-4298-8137-bf5c80592af1-config\") pod \"controller-manager-676669c895-rnsqn\" (UID: \"51bbb732-9a92-4298-8137-bf5c80592af1\") " pod="openshift-controller-manager/controller-manager-676669c895-rnsqn" Jan 29 06:38:59 crc kubenswrapper[5017]: I0129 06:38:59.118193 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/51bbb732-9a92-4298-8137-bf5c80592af1-serving-cert\") pod \"controller-manager-676669c895-rnsqn\" (UID: \"51bbb732-9a92-4298-8137-bf5c80592af1\") " pod="openshift-controller-manager/controller-manager-676669c895-rnsqn" Jan 29 06:38:59 crc kubenswrapper[5017]: I0129 06:38:59.118932 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/13a8e2b1-10e2-4f3b-a996-32d3a15a79ed-serving-cert\") pod \"route-controller-manager-bfd95c495-sszz7\" (UID: \"13a8e2b1-10e2-4f3b-a996-32d3a15a79ed\") " pod="openshift-route-controller-manager/route-controller-manager-bfd95c495-sszz7" Jan 29 06:38:59 crc kubenswrapper[5017]: I0129 06:38:59.130007 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vk45v\" (UniqueName: \"kubernetes.io/projected/13a8e2b1-10e2-4f3b-a996-32d3a15a79ed-kube-api-access-vk45v\") pod \"route-controller-manager-bfd95c495-sszz7\" (UID: \"13a8e2b1-10e2-4f3b-a996-32d3a15a79ed\") " pod="openshift-route-controller-manager/route-controller-manager-bfd95c495-sszz7" Jan 29 06:38:59 crc kubenswrapper[5017]: I0129 06:38:59.146654 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7j2hs\" (UniqueName: \"kubernetes.io/projected/51bbb732-9a92-4298-8137-bf5c80592af1-kube-api-access-7j2hs\") pod \"controller-manager-676669c895-rnsqn\" (UID: \"51bbb732-9a92-4298-8137-bf5c80592af1\") " pod="openshift-controller-manager/controller-manager-676669c895-rnsqn" Jan 29 06:38:59 crc kubenswrapper[5017]: I0129 06:38:59.218636 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-bfd95c495-sszz7" Jan 29 06:38:59 crc kubenswrapper[5017]: I0129 06:38:59.234520 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-676669c895-rnsqn" Jan 29 06:38:59 crc kubenswrapper[5017]: I0129 06:38:59.470321 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-bfd95c495-sszz7"] Jan 29 06:38:59 crc kubenswrapper[5017]: W0129 06:38:59.479225 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod13a8e2b1_10e2_4f3b_a996_32d3a15a79ed.slice/crio-56f26eb2032a3efb99f2df55b25fe05e3afaaa9a83cc8714d644bf6b67198627 WatchSource:0}: Error finding container 56f26eb2032a3efb99f2df55b25fe05e3afaaa9a83cc8714d644bf6b67198627: Status 404 returned error can't find the container with id 56f26eb2032a3efb99f2df55b25fe05e3afaaa9a83cc8714d644bf6b67198627 Jan 29 06:38:59 crc kubenswrapper[5017]: I0129 06:38:59.562085 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-676669c895-rnsqn"] Jan 29 06:38:59 crc kubenswrapper[5017]: W0129 06:38:59.569830 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod51bbb732_9a92_4298_8137_bf5c80592af1.slice/crio-1b1c8e02223091fc2cd164698cf67c302ab41dcb09e8a4c6e69e743e3b27082f WatchSource:0}: Error finding container 1b1c8e02223091fc2cd164698cf67c302ab41dcb09e8a4c6e69e743e3b27082f: Status 404 returned error can't find the container with id 1b1c8e02223091fc2cd164698cf67c302ab41dcb09e8a4c6e69e743e3b27082f Jan 29 06:39:00 crc kubenswrapper[5017]: I0129 06:39:00.119916 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-676669c895-rnsqn" event={"ID":"51bbb732-9a92-4298-8137-bf5c80592af1","Type":"ContainerStarted","Data":"825bcfd8d934fb1c693d4aa0355129447f8975f4c5fef10826844fd517831786"} Jan 29 06:39:00 crc kubenswrapper[5017]: I0129 06:39:00.120468 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-676669c895-rnsqn" event={"ID":"51bbb732-9a92-4298-8137-bf5c80592af1","Type":"ContainerStarted","Data":"1b1c8e02223091fc2cd164698cf67c302ab41dcb09e8a4c6e69e743e3b27082f"} Jan 29 06:39:00 crc kubenswrapper[5017]: I0129 06:39:00.122077 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-676669c895-rnsqn" Jan 29 06:39:00 crc kubenswrapper[5017]: I0129 06:39:00.123234 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-bfd95c495-sszz7" event={"ID":"13a8e2b1-10e2-4f3b-a996-32d3a15a79ed","Type":"ContainerStarted","Data":"ac9f17e07e13d20b1bf5cd58a4c6e8fc40bd31ad71a392ba681df79db804f504"} Jan 29 06:39:00 crc kubenswrapper[5017]: I0129 06:39:00.123266 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-bfd95c495-sszz7" event={"ID":"13a8e2b1-10e2-4f3b-a996-32d3a15a79ed","Type":"ContainerStarted","Data":"56f26eb2032a3efb99f2df55b25fe05e3afaaa9a83cc8714d644bf6b67198627"} Jan 29 06:39:00 crc kubenswrapper[5017]: I0129 06:39:00.124084 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-bfd95c495-sszz7" Jan 29 06:39:00 crc kubenswrapper[5017]: I0129 06:39:00.134688 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-controller-manager/controller-manager-676669c895-rnsqn" Jan 29 06:39:00 crc kubenswrapper[5017]: I0129 06:39:00.139825 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-bfd95c495-sszz7" Jan 29 06:39:00 crc kubenswrapper[5017]: I0129 06:39:00.144887 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-676669c895-rnsqn" podStartSLOduration=3.144864572 podStartE2EDuration="3.144864572s" podCreationTimestamp="2026-01-29 06:38:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:39:00.143834793 +0000 UTC m=+226.518282413" watchObservedRunningTime="2026-01-29 06:39:00.144864572 +0000 UTC m=+226.519312182" Jan 29 06:39:00 crc kubenswrapper[5017]: I0129 06:39:00.165133 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-bfd95c495-sszz7" podStartSLOduration=3.165107141 podStartE2EDuration="3.165107141s" podCreationTimestamp="2026-01-29 06:38:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:39:00.164618217 +0000 UTC m=+226.539065827" watchObservedRunningTime="2026-01-29 06:39:00.165107141 +0000 UTC m=+226.539554751" Jan 29 06:39:00 crc kubenswrapper[5017]: I0129 06:39:00.322305 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19513823-f10b-42e2-a8ea-9d3d423bd0cf" path="/var/lib/kubelet/pods/19513823-f10b-42e2-a8ea-9d3d423bd0cf/volumes" Jan 29 06:39:00 crc kubenswrapper[5017]: I0129 06:39:00.323132 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cbf7b775-959f-433d-9d7a-748f1004bfc1" path="/var/lib/kubelet/pods/cbf7b775-959f-433d-9d7a-748f1004bfc1/volumes" Jan 29 06:39:12 crc kubenswrapper[5017]: I0129 06:39:12.212139 5017 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 29 06:39:12 crc kubenswrapper[5017]: I0129 06:39:12.213461 5017 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 29 06:39:12 crc kubenswrapper[5017]: I0129 06:39:12.213628 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 06:39:12 crc kubenswrapper[5017]: I0129 06:39:12.213729 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://fe28ec8d64ea50ecca5d4531440159b5277aff7de80b18813bb91f5c32923326" gracePeriod=15 Jan 29 06:39:12 crc kubenswrapper[5017]: I0129 06:39:12.213824 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://c256463618132e5305eb6fed3ef923e6a3c200bbc4b917e7f678e1c78af05084" gracePeriod=15 Jan 29 06:39:12 crc kubenswrapper[5017]: I0129 06:39:12.213824 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://a82e8858b15c840a63737058d3af382c1a6469d7d1d3ba6e44759f33e55f2543" gracePeriod=15 Jan 29 06:39:12 crc kubenswrapper[5017]: I0129 06:39:12.213849 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://2ef12beaf08b33d7694b3ea5a4705e37ec4a5b53f9cf1a7e0dd41a78d99bc8f2" gracePeriod=15 Jan 29 06:39:12 crc kubenswrapper[5017]: I0129 06:39:12.213846 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://4a303c0532bb4a555700875ed27e56205d709658417a72f47868fb7bc8291a77" gracePeriod=15 Jan 29 06:39:12 crc kubenswrapper[5017]: I0129 06:39:12.213978 5017 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 29 06:39:12 crc kubenswrapper[5017]: E0129 06:39:12.214538 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 29 06:39:12 crc kubenswrapper[5017]: I0129 06:39:12.214590 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 29 06:39:12 crc kubenswrapper[5017]: E0129 06:39:12.214606 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 29 06:39:12 crc kubenswrapper[5017]: I0129 06:39:12.214614 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 29 06:39:12 crc kubenswrapper[5017]: E0129 06:39:12.214628 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 29 06:39:12 crc kubenswrapper[5017]: I0129 06:39:12.214635 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 29 06:39:12 crc kubenswrapper[5017]: E0129 06:39:12.214644 5017 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 06:39:12 crc kubenswrapper[5017]: I0129 06:39:12.214651 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 06:39:12 crc kubenswrapper[5017]: E0129 06:39:12.214662 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 29 06:39:12 crc kubenswrapper[5017]: I0129 06:39:12.214669 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 29 06:39:12 crc kubenswrapper[5017]: E0129 06:39:12.214677 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 06:39:12 crc kubenswrapper[5017]: I0129 06:39:12.214686 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 06:39:12 crc kubenswrapper[5017]: E0129 06:39:12.214709 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 29 06:39:12 crc kubenswrapper[5017]: I0129 06:39:12.214718 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 29 06:39:12 crc kubenswrapper[5017]: I0129 06:39:12.214861 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 29 06:39:12 crc kubenswrapper[5017]: I0129 06:39:12.214878 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 29 06:39:12 crc kubenswrapper[5017]: I0129 06:39:12.214887 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 06:39:12 crc kubenswrapper[5017]: I0129 06:39:12.214898 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 29 06:39:12 crc kubenswrapper[5017]: I0129 06:39:12.214905 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 29 06:39:12 crc kubenswrapper[5017]: I0129 06:39:12.214916 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 06:39:12 crc kubenswrapper[5017]: I0129 06:39:12.217484 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 06:39:12 crc kubenswrapper[5017]: I0129 06:39:12.217519 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 
06:39:12 crc kubenswrapper[5017]: I0129 06:39:12.217543 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 06:39:12 crc kubenswrapper[5017]: I0129 06:39:12.217576 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 06:39:12 crc kubenswrapper[5017]: I0129 06:39:12.217591 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 06:39:12 crc kubenswrapper[5017]: I0129 06:39:12.217611 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 06:39:12 crc kubenswrapper[5017]: I0129 06:39:12.217679 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 06:39:12 crc kubenswrapper[5017]: I0129 06:39:12.217740 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 06:39:12 crc kubenswrapper[5017]: I0129 06:39:12.258101 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 29 06:39:12 crc kubenswrapper[5017]: E0129 06:39:12.318500 5017 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.154:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-sckkt" volumeName="registry-storage" Jan 29 06:39:12 crc kubenswrapper[5017]: I0129 06:39:12.318989 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 06:39:12 crc 
kubenswrapper[5017]: I0129 06:39:12.319029 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 06:39:12 crc kubenswrapper[5017]: I0129 06:39:12.319059 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 06:39:12 crc kubenswrapper[5017]: I0129 06:39:12.319117 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 06:39:12 crc kubenswrapper[5017]: I0129 06:39:12.319133 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 06:39:12 crc kubenswrapper[5017]: I0129 06:39:12.319160 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 06:39:12 crc kubenswrapper[5017]: I0129 06:39:12.319200 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 06:39:12 crc kubenswrapper[5017]: I0129 06:39:12.319217 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 06:39:12 crc kubenswrapper[5017]: I0129 06:39:12.319272 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 06:39:12 crc kubenswrapper[5017]: I0129 06:39:12.320259 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 06:39:12 crc kubenswrapper[5017]: I0129 06:39:12.320446 5017 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 06:39:12 crc kubenswrapper[5017]: I0129 06:39:12.320496 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 06:39:12 crc kubenswrapper[5017]: I0129 06:39:12.320514 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 06:39:12 crc kubenswrapper[5017]: I0129 06:39:12.320708 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 06:39:12 crc kubenswrapper[5017]: I0129 06:39:12.320891 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 06:39:12 crc kubenswrapper[5017]: I0129 06:39:12.321000 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 06:39:12 crc kubenswrapper[5017]: I0129 06:39:12.557510 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 06:39:12 crc kubenswrapper[5017]: W0129 06:39:12.588662 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-6531d9130ce4e11f759cdf5f60f23634e608e5b268e14b126846909e162b43ef WatchSource:0}: Error finding container 6531d9130ce4e11f759cdf5f60f23634e608e5b268e14b126846909e162b43ef: Status 404 returned error can't find the container with id 6531d9130ce4e11f759cdf5f60f23634e608e5b268e14b126846909e162b43ef Jan 29 06:39:12 crc kubenswrapper[5017]: E0129 06:39:12.591570 5017 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.154:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188f205ab849cbba openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-29 06:39:12.59088377 +0000 UTC m=+238.965331370,LastTimestamp:2026-01-29 06:39:12.59088377 +0000 UTC m=+238.965331370,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 29 06:39:13 crc kubenswrapper[5017]: I0129 06:39:13.207417 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 29 06:39:13 crc kubenswrapper[5017]: I0129 06:39:13.209650 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 29 06:39:13 crc kubenswrapper[5017]: I0129 06:39:13.210751 5017 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="2ef12beaf08b33d7694b3ea5a4705e37ec4a5b53f9cf1a7e0dd41a78d99bc8f2" exitCode=0 Jan 29 06:39:13 crc kubenswrapper[5017]: I0129 06:39:13.210778 5017 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="c256463618132e5305eb6fed3ef923e6a3c200bbc4b917e7f678e1c78af05084" exitCode=0 Jan 29 06:39:13 crc kubenswrapper[5017]: I0129 06:39:13.210786 5017 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a82e8858b15c840a63737058d3af382c1a6469d7d1d3ba6e44759f33e55f2543" exitCode=0 Jan 29 06:39:13 crc kubenswrapper[5017]: I0129 06:39:13.210794 5017 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="4a303c0532bb4a555700875ed27e56205d709658417a72f47868fb7bc8291a77" exitCode=2 Jan 29 06:39:13 crc kubenswrapper[5017]: I0129 06:39:13.210864 5017 scope.go:117] "RemoveContainer" containerID="27dd6068095b58038216d1837a6b7966c514c7d8a9db5d144b043d918b757394" Jan 29 06:39:13 crc kubenswrapper[5017]: I0129 
06:39:13.213589 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"86e64265182bfae95e67dcf9629cdd2d5938ec048f6340a37327840b5f4efe09"} Jan 29 06:39:13 crc kubenswrapper[5017]: I0129 06:39:13.215118 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"6531d9130ce4e11f759cdf5f60f23634e608e5b268e14b126846909e162b43ef"} Jan 29 06:39:13 crc kubenswrapper[5017]: I0129 06:39:13.214392 5017 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Jan 29 06:39:13 crc kubenswrapper[5017]: I0129 06:39:13.216640 5017 generic.go:334] "Generic (PLEG): container finished" podID="50301307-73f6-4035-92d0-e96ac1dcb9b9" containerID="9bb60e6e926b5a1364b194c833406e58255f0892ab95ebce24f26e56b9cfeb7b" exitCode=0 Jan 29 06:39:13 crc kubenswrapper[5017]: I0129 06:39:13.216684 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"50301307-73f6-4035-92d0-e96ac1dcb9b9","Type":"ContainerDied","Data":"9bb60e6e926b5a1364b194c833406e58255f0892ab95ebce24f26e56b9cfeb7b"} Jan 29 06:39:13 crc kubenswrapper[5017]: I0129 06:39:13.217175 5017 status_manager.go:851] "Failed to get status for pod" podUID="50301307-73f6-4035-92d0-e96ac1dcb9b9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Jan 29 06:39:13 crc kubenswrapper[5017]: I0129 06:39:13.217609 5017 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Jan 29 06:39:14 crc kubenswrapper[5017]: E0129 06:39:14.083976 5017 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.154:6443: connect: connection refused" Jan 29 06:39:14 crc kubenswrapper[5017]: E0129 06:39:14.084460 5017 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.154:6443: connect: connection refused" Jan 29 06:39:14 crc kubenswrapper[5017]: E0129 06:39:14.084633 5017 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.154:6443: connect: connection refused" Jan 29 06:39:14 crc kubenswrapper[5017]: E0129 06:39:14.084783 5017 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 
38.102.83.154:6443: connect: connection refused" Jan 29 06:39:14 crc kubenswrapper[5017]: E0129 06:39:14.084933 5017 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.154:6443: connect: connection refused" Jan 29 06:39:14 crc kubenswrapper[5017]: I0129 06:39:14.084973 5017 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 29 06:39:14 crc kubenswrapper[5017]: E0129 06:39:14.085114 5017 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.154:6443: connect: connection refused" interval="200ms" Jan 29 06:39:14 crc kubenswrapper[5017]: I0129 06:39:14.224287 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 29 06:39:14 crc kubenswrapper[5017]: E0129 06:39:14.286810 5017 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.154:6443: connect: connection refused" interval="400ms" Jan 29 06:39:14 crc kubenswrapper[5017]: I0129 06:39:14.329947 5017 status_manager.go:851] "Failed to get status for pod" podUID="50301307-73f6-4035-92d0-e96ac1dcb9b9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Jan 29 06:39:14 crc kubenswrapper[5017]: I0129 06:39:14.331390 5017 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Jan 29 06:39:14 crc kubenswrapper[5017]: E0129 06:39:14.687411 5017 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.154:6443: connect: connection refused" interval="800ms" Jan 29 06:39:14 crc kubenswrapper[5017]: I0129 06:39:14.688340 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 29 06:39:14 crc kubenswrapper[5017]: I0129 06:39:14.688696 5017 status_manager.go:851] "Failed to get status for pod" podUID="50301307-73f6-4035-92d0-e96ac1dcb9b9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Jan 29 06:39:14 crc kubenswrapper[5017]: I0129 06:39:14.688925 5017 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Jan 29 06:39:14 crc kubenswrapper[5017]: I0129 06:39:14.693813 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 29 06:39:14 crc kubenswrapper[5017]: I0129 06:39:14.694734 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 06:39:14 crc kubenswrapper[5017]: I0129 06:39:14.695067 5017 status_manager.go:851] "Failed to get status for pod" podUID="50301307-73f6-4035-92d0-e96ac1dcb9b9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Jan 29 06:39:14 crc kubenswrapper[5017]: I0129 06:39:14.695280 5017 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Jan 29 06:39:14 crc kubenswrapper[5017]: I0129 06:39:14.695562 5017 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Jan 29 06:39:14 crc kubenswrapper[5017]: I0129 06:39:14.852577 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 29 06:39:14 crc kubenswrapper[5017]: I0129 06:39:14.852923 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 29 06:39:14 crc kubenswrapper[5017]: I0129 06:39:14.853070 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 29 06:39:14 crc kubenswrapper[5017]: I0129 
06:39:14.853135 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/50301307-73f6-4035-92d0-e96ac1dcb9b9-kube-api-access\") pod \"50301307-73f6-4035-92d0-e96ac1dcb9b9\" (UID: \"50301307-73f6-4035-92d0-e96ac1dcb9b9\") " Jan 29 06:39:14 crc kubenswrapper[5017]: I0129 06:39:14.853168 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/50301307-73f6-4035-92d0-e96ac1dcb9b9-kubelet-dir\") pod \"50301307-73f6-4035-92d0-e96ac1dcb9b9\" (UID: \"50301307-73f6-4035-92d0-e96ac1dcb9b9\") " Jan 29 06:39:14 crc kubenswrapper[5017]: I0129 06:39:14.853206 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/50301307-73f6-4035-92d0-e96ac1dcb9b9-var-lock\") pod \"50301307-73f6-4035-92d0-e96ac1dcb9b9\" (UID: \"50301307-73f6-4035-92d0-e96ac1dcb9b9\") " Jan 29 06:39:14 crc kubenswrapper[5017]: I0129 06:39:14.853523 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50301307-73f6-4035-92d0-e96ac1dcb9b9-var-lock" (OuterVolumeSpecName: "var-lock") pod "50301307-73f6-4035-92d0-e96ac1dcb9b9" (UID: "50301307-73f6-4035-92d0-e96ac1dcb9b9"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 06:39:14 crc kubenswrapper[5017]: I0129 06:39:14.853576 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 06:39:14 crc kubenswrapper[5017]: I0129 06:39:14.853596 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 06:39:14 crc kubenswrapper[5017]: I0129 06:39:14.853614 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 06:39:14 crc kubenswrapper[5017]: I0129 06:39:14.855090 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50301307-73f6-4035-92d0-e96ac1dcb9b9-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "50301307-73f6-4035-92d0-e96ac1dcb9b9" (UID: "50301307-73f6-4035-92d0-e96ac1dcb9b9"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 06:39:14 crc kubenswrapper[5017]: I0129 06:39:14.865282 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50301307-73f6-4035-92d0-e96ac1dcb9b9-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "50301307-73f6-4035-92d0-e96ac1dcb9b9" (UID: "50301307-73f6-4035-92d0-e96ac1dcb9b9"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:39:14 crc kubenswrapper[5017]: I0129 06:39:14.954556 5017 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/50301307-73f6-4035-92d0-e96ac1dcb9b9-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 29 06:39:14 crc kubenswrapper[5017]: I0129 06:39:14.954587 5017 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/50301307-73f6-4035-92d0-e96ac1dcb9b9-var-lock\") on node \"crc\" DevicePath \"\"" Jan 29 06:39:14 crc kubenswrapper[5017]: I0129 06:39:14.954601 5017 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 29 06:39:14 crc kubenswrapper[5017]: I0129 06:39:14.954609 5017 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 29 06:39:14 crc kubenswrapper[5017]: I0129 06:39:14.954617 5017 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 29 06:39:14 crc kubenswrapper[5017]: I0129 06:39:14.954624 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/50301307-73f6-4035-92d0-e96ac1dcb9b9-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 06:39:15 crc kubenswrapper[5017]: I0129 06:39:15.231086 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"50301307-73f6-4035-92d0-e96ac1dcb9b9","Type":"ContainerDied","Data":"12822ef1008da815fb732bc32f98110317d583f7edb3bd77d1dfc2c91f832791"} Jan 29 06:39:15 crc kubenswrapper[5017]: I0129 06:39:15.231135 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="12822ef1008da815fb732bc32f98110317d583f7edb3bd77d1dfc2c91f832791" Jan 29 06:39:15 crc kubenswrapper[5017]: I0129 06:39:15.231133 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 29 06:39:15 crc kubenswrapper[5017]: I0129 06:39:15.233980 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 29 06:39:15 crc kubenswrapper[5017]: I0129 06:39:15.235139 5017 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="fe28ec8d64ea50ecca5d4531440159b5277aff7de80b18813bb91f5c32923326" exitCode=0 Jan 29 06:39:15 crc kubenswrapper[5017]: I0129 06:39:15.235667 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 06:39:15 crc kubenswrapper[5017]: I0129 06:39:15.236088 5017 scope.go:117] "RemoveContainer" containerID="2ef12beaf08b33d7694b3ea5a4705e37ec4a5b53f9cf1a7e0dd41a78d99bc8f2" Jan 29 06:39:15 crc kubenswrapper[5017]: I0129 06:39:15.245032 5017 status_manager.go:851] "Failed to get status for pod" podUID="50301307-73f6-4035-92d0-e96ac1dcb9b9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Jan 29 06:39:15 crc kubenswrapper[5017]: I0129 06:39:15.245255 5017 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Jan 29 06:39:15 crc kubenswrapper[5017]: I0129 06:39:15.245447 5017 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Jan 29 06:39:15 crc kubenswrapper[5017]: I0129 06:39:15.249904 5017 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Jan 29 06:39:15 crc kubenswrapper[5017]: I0129 06:39:15.250228 5017 status_manager.go:851] "Failed to get status for pod" podUID="50301307-73f6-4035-92d0-e96ac1dcb9b9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Jan 29 06:39:15 crc kubenswrapper[5017]: I0129 06:39:15.250502 5017 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Jan 29 06:39:15 crc kubenswrapper[5017]: I0129 06:39:15.254407 5017 scope.go:117] "RemoveContainer" containerID="c256463618132e5305eb6fed3ef923e6a3c200bbc4b917e7f678e1c78af05084" Jan 29 06:39:15 crc kubenswrapper[5017]: I0129 06:39:15.271218 5017 scope.go:117] "RemoveContainer" containerID="a82e8858b15c840a63737058d3af382c1a6469d7d1d3ba6e44759f33e55f2543" Jan 29 06:39:15 crc kubenswrapper[5017]: I0129 06:39:15.288998 5017 scope.go:117] "RemoveContainer" containerID="4a303c0532bb4a555700875ed27e56205d709658417a72f47868fb7bc8291a77" Jan 29 06:39:15 crc kubenswrapper[5017]: I0129 06:39:15.303157 5017 scope.go:117] "RemoveContainer" containerID="fe28ec8d64ea50ecca5d4531440159b5277aff7de80b18813bb91f5c32923326" Jan 29 06:39:15 crc kubenswrapper[5017]: I0129 06:39:15.331614 5017 scope.go:117] "RemoveContainer" containerID="6f97561ea3b9baa3440e5148060c80d57e9cf17837e7799b057f618d9084e8a3" Jan 29 06:39:15 crc 
kubenswrapper[5017]: I0129 06:39:15.352926 5017 scope.go:117] "RemoveContainer" containerID="2ef12beaf08b33d7694b3ea5a4705e37ec4a5b53f9cf1a7e0dd41a78d99bc8f2" Jan 29 06:39:15 crc kubenswrapper[5017]: E0129 06:39:15.353481 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ef12beaf08b33d7694b3ea5a4705e37ec4a5b53f9cf1a7e0dd41a78d99bc8f2\": container with ID starting with 2ef12beaf08b33d7694b3ea5a4705e37ec4a5b53f9cf1a7e0dd41a78d99bc8f2 not found: ID does not exist" containerID="2ef12beaf08b33d7694b3ea5a4705e37ec4a5b53f9cf1a7e0dd41a78d99bc8f2" Jan 29 06:39:15 crc kubenswrapper[5017]: I0129 06:39:15.353536 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ef12beaf08b33d7694b3ea5a4705e37ec4a5b53f9cf1a7e0dd41a78d99bc8f2"} err="failed to get container status \"2ef12beaf08b33d7694b3ea5a4705e37ec4a5b53f9cf1a7e0dd41a78d99bc8f2\": rpc error: code = NotFound desc = could not find container \"2ef12beaf08b33d7694b3ea5a4705e37ec4a5b53f9cf1a7e0dd41a78d99bc8f2\": container with ID starting with 2ef12beaf08b33d7694b3ea5a4705e37ec4a5b53f9cf1a7e0dd41a78d99bc8f2 not found: ID does not exist" Jan 29 06:39:15 crc kubenswrapper[5017]: I0129 06:39:15.353566 5017 scope.go:117] "RemoveContainer" containerID="c256463618132e5305eb6fed3ef923e6a3c200bbc4b917e7f678e1c78af05084" Jan 29 06:39:15 crc kubenswrapper[5017]: E0129 06:39:15.353943 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c256463618132e5305eb6fed3ef923e6a3c200bbc4b917e7f678e1c78af05084\": container with ID starting with c256463618132e5305eb6fed3ef923e6a3c200bbc4b917e7f678e1c78af05084 not found: ID does not exist" containerID="c256463618132e5305eb6fed3ef923e6a3c200bbc4b917e7f678e1c78af05084" Jan 29 06:39:15 crc kubenswrapper[5017]: I0129 06:39:15.353985 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c256463618132e5305eb6fed3ef923e6a3c200bbc4b917e7f678e1c78af05084"} err="failed to get container status \"c256463618132e5305eb6fed3ef923e6a3c200bbc4b917e7f678e1c78af05084\": rpc error: code = NotFound desc = could not find container \"c256463618132e5305eb6fed3ef923e6a3c200bbc4b917e7f678e1c78af05084\": container with ID starting with c256463618132e5305eb6fed3ef923e6a3c200bbc4b917e7f678e1c78af05084 not found: ID does not exist" Jan 29 06:39:15 crc kubenswrapper[5017]: I0129 06:39:15.354011 5017 scope.go:117] "RemoveContainer" containerID="a82e8858b15c840a63737058d3af382c1a6469d7d1d3ba6e44759f33e55f2543" Jan 29 06:39:15 crc kubenswrapper[5017]: E0129 06:39:15.354292 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a82e8858b15c840a63737058d3af382c1a6469d7d1d3ba6e44759f33e55f2543\": container with ID starting with a82e8858b15c840a63737058d3af382c1a6469d7d1d3ba6e44759f33e55f2543 not found: ID does not exist" containerID="a82e8858b15c840a63737058d3af382c1a6469d7d1d3ba6e44759f33e55f2543" Jan 29 06:39:15 crc kubenswrapper[5017]: I0129 06:39:15.354326 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a82e8858b15c840a63737058d3af382c1a6469d7d1d3ba6e44759f33e55f2543"} err="failed to get container status \"a82e8858b15c840a63737058d3af382c1a6469d7d1d3ba6e44759f33e55f2543\": rpc error: code = NotFound desc = could not find container 
\"a82e8858b15c840a63737058d3af382c1a6469d7d1d3ba6e44759f33e55f2543\": container with ID starting with a82e8858b15c840a63737058d3af382c1a6469d7d1d3ba6e44759f33e55f2543 not found: ID does not exist" Jan 29 06:39:15 crc kubenswrapper[5017]: I0129 06:39:15.354347 5017 scope.go:117] "RemoveContainer" containerID="4a303c0532bb4a555700875ed27e56205d709658417a72f47868fb7bc8291a77" Jan 29 06:39:15 crc kubenswrapper[5017]: E0129 06:39:15.355383 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a303c0532bb4a555700875ed27e56205d709658417a72f47868fb7bc8291a77\": container with ID starting with 4a303c0532bb4a555700875ed27e56205d709658417a72f47868fb7bc8291a77 not found: ID does not exist" containerID="4a303c0532bb4a555700875ed27e56205d709658417a72f47868fb7bc8291a77" Jan 29 06:39:15 crc kubenswrapper[5017]: I0129 06:39:15.355433 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a303c0532bb4a555700875ed27e56205d709658417a72f47868fb7bc8291a77"} err="failed to get container status \"4a303c0532bb4a555700875ed27e56205d709658417a72f47868fb7bc8291a77\": rpc error: code = NotFound desc = could not find container \"4a303c0532bb4a555700875ed27e56205d709658417a72f47868fb7bc8291a77\": container with ID starting with 4a303c0532bb4a555700875ed27e56205d709658417a72f47868fb7bc8291a77 not found: ID does not exist" Jan 29 06:39:15 crc kubenswrapper[5017]: I0129 06:39:15.355475 5017 scope.go:117] "RemoveContainer" containerID="fe28ec8d64ea50ecca5d4531440159b5277aff7de80b18813bb91f5c32923326" Jan 29 06:39:15 crc kubenswrapper[5017]: E0129 06:39:15.355902 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe28ec8d64ea50ecca5d4531440159b5277aff7de80b18813bb91f5c32923326\": container with ID starting with fe28ec8d64ea50ecca5d4531440159b5277aff7de80b18813bb91f5c32923326 not found: ID does not exist" containerID="fe28ec8d64ea50ecca5d4531440159b5277aff7de80b18813bb91f5c32923326" Jan 29 06:39:15 crc kubenswrapper[5017]: I0129 06:39:15.355930 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe28ec8d64ea50ecca5d4531440159b5277aff7de80b18813bb91f5c32923326"} err="failed to get container status \"fe28ec8d64ea50ecca5d4531440159b5277aff7de80b18813bb91f5c32923326\": rpc error: code = NotFound desc = could not find container \"fe28ec8d64ea50ecca5d4531440159b5277aff7de80b18813bb91f5c32923326\": container with ID starting with fe28ec8d64ea50ecca5d4531440159b5277aff7de80b18813bb91f5c32923326 not found: ID does not exist" Jan 29 06:39:15 crc kubenswrapper[5017]: I0129 06:39:15.355946 5017 scope.go:117] "RemoveContainer" containerID="6f97561ea3b9baa3440e5148060c80d57e9cf17837e7799b057f618d9084e8a3" Jan 29 06:39:15 crc kubenswrapper[5017]: E0129 06:39:15.357030 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f97561ea3b9baa3440e5148060c80d57e9cf17837e7799b057f618d9084e8a3\": container with ID starting with 6f97561ea3b9baa3440e5148060c80d57e9cf17837e7799b057f618d9084e8a3 not found: ID does not exist" containerID="6f97561ea3b9baa3440e5148060c80d57e9cf17837e7799b057f618d9084e8a3" Jan 29 06:39:15 crc kubenswrapper[5017]: I0129 06:39:15.357061 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f97561ea3b9baa3440e5148060c80d57e9cf17837e7799b057f618d9084e8a3"} 
err="failed to get container status \"6f97561ea3b9baa3440e5148060c80d57e9cf17837e7799b057f618d9084e8a3\": rpc error: code = NotFound desc = could not find container \"6f97561ea3b9baa3440e5148060c80d57e9cf17837e7799b057f618d9084e8a3\": container with ID starting with 6f97561ea3b9baa3440e5148060c80d57e9cf17837e7799b057f618d9084e8a3 not found: ID does not exist" Jan 29 06:39:15 crc kubenswrapper[5017]: E0129 06:39:15.489796 5017 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.154:6443: connect: connection refused" interval="1.6s" Jan 29 06:39:15 crc kubenswrapper[5017]: E0129 06:39:15.697531 5017 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.154:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188f205ab849cbba openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-29 06:39:12.59088377 +0000 UTC m=+238.965331370,LastTimestamp:2026-01-29 06:39:12.59088377 +0000 UTC m=+238.965331370,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 29 06:39:16 crc kubenswrapper[5017]: I0129 06:39:16.324028 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 29 06:39:17 crc kubenswrapper[5017]: E0129 06:39:17.090616 5017 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.154:6443: connect: connection refused" interval="3.2s" Jan 29 06:39:20 crc kubenswrapper[5017]: E0129 06:39:20.292647 5017 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.154:6443: connect: connection refused" interval="6.4s" Jan 29 06:39:24 crc kubenswrapper[5017]: I0129 06:39:24.322640 5017 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Jan 29 06:39:24 crc kubenswrapper[5017]: I0129 06:39:24.323592 5017 status_manager.go:851] "Failed to get status for pod" podUID="50301307-73f6-4035-92d0-e96ac1dcb9b9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial 
tcp 38.102.83.154:6443: connect: connection refused"
Jan 29 06:39:25 crc kubenswrapper[5017]: E0129 06:39:25.057479 5017 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:39:25Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:39:25Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:39:25Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:39:25Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.154:6443: connect: connection refused"
Jan 29 06:39:25 crc kubenswrapper[5017]: E0129 06:39:25.059108 5017 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.154:6443: connect: connection refused"
Jan 29 06:39:25 crc kubenswrapper[5017]: E0129 06:39:25.059638 5017 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.154:6443: connect: connection refused"
Jan 29 06:39:25 crc kubenswrapper[5017]: E0129 06:39:25.060040 5017 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.154:6443: connect: connection refused"
Jan 29 06:39:25 crc kubenswrapper[5017]: E0129 06:39:25.060517 5017 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.154:6443: connect: connection refused"
Jan 29 06:39:25 crc kubenswrapper[5017]: E0129 06:39:25.060550 5017 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Jan 29 06:39:25 crc kubenswrapper[5017]: E0129 06:39:25.698553 5017 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.154:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188f205ab849cbba openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-29 06:39:12.59088377 +0000 UTC m=+238.965331370,LastTimestamp:2026-01-29 06:39:12.59088377 +0000 UTC m=+238.965331370,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 29 06:39:26 crc kubenswrapper[5017]: I0129 06:39:26.315755 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 06:39:26 crc kubenswrapper[5017]: I0129 06:39:26.317368 5017 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.154:6443: connect: connection refused"
Jan 29 06:39:26 crc kubenswrapper[5017]: I0129 06:39:26.317804 5017 status_manager.go:851] "Failed to get status for pod" podUID="50301307-73f6-4035-92d0-e96ac1dcb9b9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.154:6443: connect: connection refused"
Jan 29 06:39:26 crc kubenswrapper[5017]: I0129 06:39:26.334422 5017 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8c67cd79-8431-401b-8f03-9387813b30ed"
Jan 29 06:39:26 crc kubenswrapper[5017]: I0129 06:39:26.335080 5017 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8c67cd79-8431-401b-8f03-9387813b30ed"
Jan 29 06:39:26 crc kubenswrapper[5017]: E0129 06:39:26.336086 5017 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 06:39:26 crc kubenswrapper[5017]: I0129 06:39:26.336998 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 06:39:26 crc kubenswrapper[5017]: E0129 06:39:26.693548 5017 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.154:6443: connect: connection refused" interval="7s"
Jan 29 06:39:27 crc kubenswrapper[5017]: I0129 06:39:27.316977 5017 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="51294c5a54fa8b46c8805bb916c74b615649f97ed94b44ca4535eed857a877d6" exitCode=0
Jan 29 06:39:27 crc kubenswrapper[5017]: I0129 06:39:27.317070 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"51294c5a54fa8b46c8805bb916c74b615649f97ed94b44ca4535eed857a877d6"}
Jan 29 06:39:27 crc kubenswrapper[5017]: I0129 06:39:27.317526 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"7b8a22da9d74ce10e07493dae24a091f4356309af59c5a80a7046cb93959e399"}
Jan 29 06:39:27 crc kubenswrapper[5017]: I0129 06:39:27.318276 5017 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8c67cd79-8431-401b-8f03-9387813b30ed"
Jan 29 06:39:27 crc kubenswrapper[5017]: I0129 06:39:27.318376 5017 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8c67cd79-8431-401b-8f03-9387813b30ed"
Jan 29 06:39:27 crc kubenswrapper[5017]: I0129 06:39:27.318926 5017 status_manager.go:851] "Failed to get status for pod" podUID="50301307-73f6-4035-92d0-e96ac1dcb9b9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.154:6443: connect: connection refused"
Jan 29 06:39:27 crc kubenswrapper[5017]: I0129 06:39:27.319320 5017 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.154:6443: connect: connection refused"
Jan 29 06:39:27 crc kubenswrapper[5017]: E0129 06:39:27.319919 5017 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 06:39:27 crc kubenswrapper[5017]: I0129 06:39:27.321477 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log"
Jan 29 06:39:27 crc kubenswrapper[5017]: I0129 06:39:27.321533 5017 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="1045c0f8fc7c60a5aa6d2e2b449230e4603b08b0b4244c319e296f89b04ab98d" exitCode=1
Jan 29 06:39:27 crc kubenswrapper[5017]: I0129 06:39:27.321567 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"1045c0f8fc7c60a5aa6d2e2b449230e4603b08b0b4244c319e296f89b04ab98d"}
Jan 29 06:39:27 crc kubenswrapper[5017]: I0129 06:39:27.322201 5017 scope.go:117] "RemoveContainer" containerID="1045c0f8fc7c60a5aa6d2e2b449230e4603b08b0b4244c319e296f89b04ab98d"
Jan 29 06:39:27 crc kubenswrapper[5017]: I0129 06:39:27.322201 5017 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.154:6443: connect: connection refused"
Jan 29 06:39:27 crc kubenswrapper[5017]: I0129 06:39:27.322855 5017 status_manager.go:851] "Failed to get status for pod" podUID="50301307-73f6-4035-92d0-e96ac1dcb9b9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.154:6443: connect: connection refused"
Jan 29 06:39:27 crc kubenswrapper[5017]: I0129 06:39:27.323226 5017 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.154:6443: connect: connection refused"
Jan 29 06:39:28 crc kubenswrapper[5017]: I0129 06:39:28.330868 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"4231f6e1b77f88c30c8188e210b04d55a2a7b71dffb9545803616a7345fe0f9b"}
Jan 29 06:39:28 crc kubenswrapper[5017]: I0129 06:39:28.330924 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"32d90691d2085a9d81483dbb24019549dd04e6425bc2609b7b4f5ca7439e8a33"}
Jan 29 06:39:28 crc kubenswrapper[5017]: I0129 06:39:28.330938 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"75b36e4ed772694d3dbe6c2c8aff4db1ecd0e0fd6a98c8ab229b49d72eb796f8"}
Jan 29 06:39:28 crc kubenswrapper[5017]: I0129 06:39:28.330985 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"5de45cc2f24cd6e23b9bb84ae8b142b5fff04c964a63db839744b986d5135872"}
Jan 29 06:39:28 crc kubenswrapper[5017]: I0129 06:39:28.334423 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log"
Jan 29 06:39:28 crc kubenswrapper[5017]: I0129 06:39:28.334487 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"2d59a765e8869fab8fdbe11b48752f4b3ede6e9d2d1836f3e134650348775255"}
Jan 29 06:39:29 crc kubenswrapper[5017]: I0129 06:39:29.344879 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"56c67efc686031603eadc959b9e5a1abab687948e5695fb6fe2f279a8b1b7cdb"}
Jan 29 06:39:29 crc kubenswrapper[5017]: I0129 06:39:29.345579 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 06:39:29 crc kubenswrapper[5017]: I0129 06:39:29.345336 5017 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8c67cd79-8431-401b-8f03-9387813b30ed"
Jan 29 06:39:29 crc kubenswrapper[5017]: I0129 06:39:29.345610 5017 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8c67cd79-8431-401b-8f03-9387813b30ed"
Jan 29 06:39:30 crc kubenswrapper[5017]: I0129 06:39:30.979062 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 29 06:39:31 crc kubenswrapper[5017]: I0129 06:39:31.337175 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 06:39:31 crc kubenswrapper[5017]: I0129 06:39:31.337473 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 06:39:31 crc kubenswrapper[5017]: I0129 06:39:31.346145 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 06:39:34 crc kubenswrapper[5017]: I0129 06:39:34.354762 5017 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 06:39:35 crc kubenswrapper[5017]: I0129 06:39:35.387375 5017 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8c67cd79-8431-401b-8f03-9387813b30ed"
Jan 29 06:39:35 crc kubenswrapper[5017]: I0129 06:39:35.387428 5017 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8c67cd79-8431-401b-8f03-9387813b30ed"
Jan 29 06:39:35 crc kubenswrapper[5017]: I0129 06:39:35.392508 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 06:39:35 crc kubenswrapper[5017]: I0129 06:39:35.396100 5017 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="f6ec597e-e103-492e-9891-4426a5ef69e2"
Jan 29 06:39:35 crc kubenswrapper[5017]: I0129 06:39:35.508518 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 29 06:39:35 crc kubenswrapper[5017]: I0129 06:39:35.515155 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 29 06:39:36 crc kubenswrapper[5017]: I0129 06:39:36.393031 5017 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8c67cd79-8431-401b-8f03-9387813b30ed"
Jan 29 06:39:36 crc kubenswrapper[5017]: I0129 06:39:36.393480 5017 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8c67cd79-8431-401b-8f03-9387813b30ed"
Jan 29 06:39:36 crc kubenswrapper[5017]: I0129 06:39:36.396119 5017 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="f6ec597e-e103-492e-9891-4426a5ef69e2"
Jan 29 06:39:40 crc kubenswrapper[5017]: I0129 06:39:40.978541 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 29 06:39:44 crc kubenswrapper[5017]: I0129 06:39:44.055182 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Jan 29 06:39:44 crc kubenswrapper[5017]: I0129 06:39:44.593480 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Jan 29 06:39:44 crc kubenswrapper[5017]: I0129 06:39:44.624154 5017 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Jan 29 06:39:44 crc kubenswrapper[5017]: I0129 06:39:44.627862 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=32.627841877 podStartE2EDuration="32.627841877s" podCreationTimestamp="2026-01-29 06:39:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:39:34.292260179 +0000 UTC m=+260.666707859" watchObservedRunningTime="2026-01-29 06:39:44.627841877 +0000 UTC m=+271.002289487"
Jan 29 06:39:44 crc kubenswrapper[5017]: I0129 06:39:44.629913 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Jan 29 06:39:44 crc kubenswrapper[5017]: I0129 06:39:44.629994 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Jan 29 06:39:44 crc kubenswrapper[5017]: I0129 06:39:44.634119 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 06:39:44 crc kubenswrapper[5017]: I0129 06:39:44.673694 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=10.673669203 podStartE2EDuration="10.673669203s" podCreationTimestamp="2026-01-29 06:39:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:39:44.655537754 +0000 UTC m=+271.029985384" watchObservedRunningTime="2026-01-29 06:39:44.673669203 +0000 UTC m=+271.048116823"
Jan 29 06:39:44 crc kubenswrapper[5017]: I0129 06:39:44.786608 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Jan 29 06:39:44 crc kubenswrapper[5017]: I0129 06:39:44.933156 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Jan 29 06:39:44 crc kubenswrapper[5017]: I0129 06:39:44.978170 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Jan 29 06:39:45 crc kubenswrapper[5017]: I0129 06:39:45.235495 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Jan 29 06:39:45 crc kubenswrapper[5017]: I0129 06:39:45.249058 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
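
The reflector.go:368 "Caches populated" lines that begin here mark the kubelet's client-go reflectors finishing their initial list-and-watch against the now-reachable API server, one line per watched ConfigMap/Secret source. A minimal sketch of the same mechanism using the standard client-go shared-informer API follows; the kubeconfig path, resync interval, and watched types are arbitrary choices for illustration, not kubelet internals.

package main

import (
	"context"
	"log"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the default kubeconfig; in-cluster config would
	// work the same way. (Illustrative setup, not the kubelet's own.)
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// Each Informer() spawns a reflector that LISTs its type, stores the
	// result in a local cache, then WATCHes for changes. 10m resync is an
	// arbitrary value for this sketch.
	factory := informers.NewSharedInformerFactory(client, 10*time.Minute)
	cmInformer := factory.Core().V1().ConfigMaps().Informer()
	secretInformer := factory.Core().V1().Secrets().Informer()

	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()
	factory.Start(ctx.Done())

	// The moment WaitForCacheSync returns true corresponds to the
	// "Caches populated" milestone in the log: the initial LIST landed.
	if !cache.WaitForCacheSync(ctx.Done(), cmInformer.HasSynced, secretInformer.HasSynced) {
		log.Fatal("caches never synced; API server still unreachable?")
	}
	log.Println("caches populated")
}

While the API server was refusing connections (the dial tcp 38.102.83.154:6443 errors above), none of these reflectors could complete their initial LIST, which is why the entire burst of cache-population messages appears only after the kube-apiserver-crc pod passes its readiness probe.
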
Jan 29 06:39:45 crc kubenswrapper[5017]: I0129 06:39:45.282796 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Jan 29 06:39:45 crc kubenswrapper[5017]: I0129 06:39:45.455220 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Jan 29 06:39:45 crc kubenswrapper[5017]: I0129 06:39:45.613602 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Jan 29 06:39:45 crc kubenswrapper[5017]: I0129 06:39:45.695513 5017 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 29 06:39:45 crc kubenswrapper[5017]: I0129 06:39:45.695868 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://86e64265182bfae95e67dcf9629cdd2d5938ec048f6340a37327840b5f4efe09" gracePeriod=5
Jan 29 06:39:46 crc kubenswrapper[5017]: I0129 06:39:46.153499 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Jan 29 06:39:46 crc kubenswrapper[5017]: I0129 06:39:46.180658 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Jan 29 06:39:46 crc kubenswrapper[5017]: I0129 06:39:46.510357 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Jan 29 06:39:46 crc kubenswrapper[5017]: I0129 06:39:46.633165 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Jan 29 06:39:46 crc kubenswrapper[5017]: I0129 06:39:46.734676 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Jan 29 06:39:46 crc kubenswrapper[5017]: I0129 06:39:46.886472 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Jan 29 06:39:46 crc kubenswrapper[5017]: I0129 06:39:46.919771 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Jan 29 06:39:46 crc kubenswrapper[5017]: I0129 06:39:46.956044 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Jan 29 06:39:47 crc kubenswrapper[5017]: I0129 06:39:47.102647 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Jan 29 06:39:47 crc kubenswrapper[5017]: I0129 06:39:47.139795 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Jan 29 06:39:47 crc kubenswrapper[5017]: I0129 06:39:47.148981 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Jan 29 06:39:47 crc kubenswrapper[5017]: I0129 06:39:47.166586 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Jan 29 06:39:47 crc kubenswrapper[5017]: I0129 06:39:47.300151 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Jan 29 06:39:47 crc kubenswrapper[5017]: I0129 06:39:47.400378 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Jan 29 06:39:47 crc kubenswrapper[5017]: I0129 06:39:47.428061 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Jan 29 06:39:47 crc kubenswrapper[5017]: I0129 06:39:47.602926 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Jan 29 06:39:47 crc kubenswrapper[5017]: I0129 06:39:47.615543 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Jan 29 06:39:47 crc kubenswrapper[5017]: I0129 06:39:47.624730 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Jan 29 06:39:47 crc kubenswrapper[5017]: I0129 06:39:47.735697 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Jan 29 06:39:47 crc kubenswrapper[5017]: I0129 06:39:47.781080 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Jan 29 06:39:47 crc kubenswrapper[5017]: I0129 06:39:47.797235 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Jan 29 06:39:47 crc kubenswrapper[5017]: I0129 06:39:47.798841 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Jan 29 06:39:47 crc kubenswrapper[5017]: I0129 06:39:47.870558 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Jan 29 06:39:47 crc kubenswrapper[5017]: I0129 06:39:47.886735 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Jan 29 06:39:48 crc kubenswrapper[5017]: I0129 06:39:48.028855 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Jan 29 06:39:48 crc kubenswrapper[5017]: I0129 06:39:48.183417 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Jan 29 06:39:48 crc kubenswrapper[5017]: I0129 06:39:48.248778 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Jan 29 06:39:48 crc kubenswrapper[5017]: I0129 06:39:48.371108 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Jan 29 06:39:48 crc kubenswrapper[5017]: I0129 06:39:48.411344 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Jan 29 06:39:48 crc kubenswrapper[5017]: I0129 06:39:48.425607 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Jan 29 06:39:48 crc kubenswrapper[5017]: I0129 06:39:48.488423 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Jan 29 06:39:48 crc kubenswrapper[5017]: I0129 06:39:48.617283 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Jan 29 06:39:48 crc kubenswrapper[5017]: I0129 06:39:48.701469 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Jan 29 06:39:48 crc kubenswrapper[5017]: I0129 06:39:48.709709 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Jan 29 06:39:48 crc kubenswrapper[5017]: I0129 06:39:48.807587 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Jan 29 06:39:48 crc kubenswrapper[5017]: I0129 06:39:48.836422 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd"
Jan 29 06:39:48 crc kubenswrapper[5017]: I0129 06:39:48.984976 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Jan 29 06:39:49 crc kubenswrapper[5017]: I0129 06:39:49.011764 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7"
Jan 29 06:39:49 crc kubenswrapper[5017]: I0129 06:39:49.095328 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Jan 29 06:39:49 crc kubenswrapper[5017]: I0129 06:39:49.138640 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Jan 29 06:39:49 crc kubenswrapper[5017]: I0129 06:39:49.189172 5017 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Jan 29 06:39:49 crc kubenswrapper[5017]: I0129 06:39:49.255472 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Jan 29 06:39:49 crc kubenswrapper[5017]: I0129 06:39:49.304384 5017 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Jan 29 06:39:49 crc kubenswrapper[5017]: I0129 06:39:49.348321 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Jan 29 06:39:49 crc kubenswrapper[5017]: I0129 06:39:49.446165 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Jan 29 06:39:49 crc kubenswrapper[5017]: I0129 06:39:49.448859 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Jan 29 06:39:49 crc kubenswrapper[5017]: I0129 06:39:49.455151 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Jan 29 06:39:49 crc kubenswrapper[5017]: I0129 06:39:49.492316 5017 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Jan 29 06:39:49 crc kubenswrapper[5017]: I0129 06:39:49.504508 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Jan 29 06:39:49 crc kubenswrapper[5017]: I0129 06:39:49.658422 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Jan 29 06:39:49 crc kubenswrapper[5017]: I0129 06:39:49.686007 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Jan 29 06:39:49 crc kubenswrapper[5017]: I0129 06:39:49.687802 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Jan 29 06:39:49 crc kubenswrapper[5017]: I0129 06:39:49.790156 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Jan 29 06:39:49 crc kubenswrapper[5017]: I0129 06:39:49.823884 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Jan 29 06:39:49 crc kubenswrapper[5017]: I0129 06:39:49.914706 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Jan 29 06:39:49 crc kubenswrapper[5017]: I0129 06:39:49.994330 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Jan 29 06:39:50 crc kubenswrapper[5017]: I0129 06:39:50.001876 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Jan 29 06:39:50 crc kubenswrapper[5017]: I0129 06:39:50.011704 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Jan 29 06:39:50 crc kubenswrapper[5017]: I0129 06:39:50.172151 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Jan 29 06:39:50 crc kubenswrapper[5017]: I0129 06:39:50.215866 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Jan 29 06:39:50 crc kubenswrapper[5017]: I0129 06:39:50.232016 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Jan 29 06:39:50 crc kubenswrapper[5017]: I0129 06:39:50.260109 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl"
Jan 29 06:39:50 crc kubenswrapper[5017]: I0129 06:39:50.350672 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Jan 29 06:39:50 crc kubenswrapper[5017]: I0129 06:39:50.368536 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Jan 29 06:39:50 crc kubenswrapper[5017]: I0129 06:39:50.401743 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Jan 29 06:39:50 crc kubenswrapper[5017]: I0129 06:39:50.422321 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Jan 29 06:39:50 crc kubenswrapper[5017]: I0129 06:39:50.449839 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Jan 29 06:39:50 crc kubenswrapper[5017]: I0129 06:39:50.469799 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 29 06:39:50 crc kubenswrapper[5017]: I0129 06:39:50.607313 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Jan 29 06:39:50 crc kubenswrapper[5017]: I0129 06:39:50.639529 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
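
At 06:39:45.695868 above, the kubelet removes the file-sourced (static) kube-apiserver-startup-monitor-crc pod and kills its startup-monitor container with gracePeriod=5. A minimal Go sketch of that TERM-then-wait-then-KILL contract follows; the kubelet actually performs this through the container runtime (the CRI StopContainer call), so the process-level code here is illustrative only, and the helper name stopWithGracePeriod is made up for the sketch.

package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

// stopWithGracePeriod mirrors the graceful-kill sequence: send SIGTERM,
// wait up to the grace period, then SIGKILL if the process is still alive.
func stopWithGracePeriod(cmd *exec.Cmd, grace time.Duration) error {
	if err := cmd.Process.Signal(syscall.SIGTERM); err != nil {
		return err
	}
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()
	select {
	case err := <-done:
		return err // exited on its own within the grace period
	case <-time.After(grace):
		// Grace period elapsed (5s in the log): force-kill. A process that
		// dies this way reports exit code 137 (128 + SIGKILL), which is
		// exactly the exitCode=137 the PLEG records for startup-monitor
		// at 06:39:51 further down.
		if err := cmd.Process.Kill(); err != nil {
			return err
		}
		return <-done
	}
}

func main() {
	cmd := exec.Command("sleep", "60")
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	fmt.Println(stopWithGracePeriod(cmd, 5*time.Second))
}

The startup monitor ignores (or cannot finish handling) the TERM signal within 5 seconds, so the runtime escalates to SIGKILL; the subsequent volume teardown and "Cleaned up orphaned pod volumes dir" entries below are the normal cleanup that follows once the container is gone.
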
Jan 29 06:39:50 crc kubenswrapper[5017]: I0129 06:39:50.662422 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 29 06:39:50 crc kubenswrapper[5017]: I0129 06:39:50.667859 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 29 06:39:50 crc kubenswrapper[5017]: I0129 06:39:50.678390 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 29 06:39:50 crc kubenswrapper[5017]: I0129 06:39:50.715818 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 29 06:39:50 crc kubenswrapper[5017]: I0129 06:39:50.912127 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 29 06:39:50 crc kubenswrapper[5017]: I0129 06:39:50.944983 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 29 06:39:50 crc kubenswrapper[5017]: I0129 06:39:50.958274 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 29 06:39:51 crc kubenswrapper[5017]: I0129 06:39:51.049723 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 29 06:39:51 crc kubenswrapper[5017]: I0129 06:39:51.153692 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 29 06:39:51 crc kubenswrapper[5017]: I0129 06:39:51.191013 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 29 06:39:51 crc kubenswrapper[5017]: I0129 06:39:51.211999 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 29 06:39:51 crc kubenswrapper[5017]: I0129 06:39:51.212372 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 29 06:39:51 crc kubenswrapper[5017]: I0129 06:39:51.265031 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 29 06:39:51 crc kubenswrapper[5017]: I0129 06:39:51.283239 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 29 06:39:51 crc kubenswrapper[5017]: I0129 06:39:51.283313 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 06:39:51 crc kubenswrapper[5017]: I0129 06:39:51.295983 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 29 06:39:51 crc kubenswrapper[5017]: I0129 06:39:51.314693 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 29 06:39:51 crc kubenswrapper[5017]: I0129 06:39:51.314832 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 29 06:39:51 crc kubenswrapper[5017]: I0129 06:39:51.314884 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 29 06:39:51 crc kubenswrapper[5017]: I0129 06:39:51.314889 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 06:39:51 crc kubenswrapper[5017]: I0129 06:39:51.314943 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 29 06:39:51 crc kubenswrapper[5017]: I0129 06:39:51.314998 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 06:39:51 crc kubenswrapper[5017]: I0129 06:39:51.315070 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 29 06:39:51 crc kubenswrapper[5017]: I0129 06:39:51.315197 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 06:39:51 crc kubenswrapper[5017]: I0129 06:39:51.315245 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). 
InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 06:39:51 crc kubenswrapper[5017]: I0129 06:39:51.315602 5017 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 29 06:39:51 crc kubenswrapper[5017]: I0129 06:39:51.315639 5017 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 29 06:39:51 crc kubenswrapper[5017]: I0129 06:39:51.315659 5017 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 29 06:39:51 crc kubenswrapper[5017]: I0129 06:39:51.315678 5017 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 29 06:39:51 crc kubenswrapper[5017]: I0129 06:39:51.325489 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 06:39:51 crc kubenswrapper[5017]: I0129 06:39:51.341060 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 29 06:39:51 crc kubenswrapper[5017]: I0129 06:39:51.383298 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 29 06:39:51 crc kubenswrapper[5017]: I0129 06:39:51.415337 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 29 06:39:51 crc kubenswrapper[5017]: I0129 06:39:51.417228 5017 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 29 06:39:51 crc kubenswrapper[5017]: I0129 06:39:51.422929 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 29 06:39:51 crc kubenswrapper[5017]: I0129 06:39:51.429707 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 29 06:39:51 crc kubenswrapper[5017]: I0129 06:39:51.447790 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 29 06:39:51 crc kubenswrapper[5017]: I0129 06:39:51.459652 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 29 06:39:51 crc kubenswrapper[5017]: I0129 06:39:51.459714 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 29 06:39:51 crc kubenswrapper[5017]: I0129 06:39:51.494140 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 29 06:39:51 crc kubenswrapper[5017]: I0129 
06:39:51.494216 5017 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="86e64265182bfae95e67dcf9629cdd2d5938ec048f6340a37327840b5f4efe09" exitCode=137 Jan 29 06:39:51 crc kubenswrapper[5017]: I0129 06:39:51.494282 5017 scope.go:117] "RemoveContainer" containerID="86e64265182bfae95e67dcf9629cdd2d5938ec048f6340a37327840b5f4efe09" Jan 29 06:39:51 crc kubenswrapper[5017]: I0129 06:39:51.494365 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 06:39:51 crc kubenswrapper[5017]: I0129 06:39:51.516467 5017 scope.go:117] "RemoveContainer" containerID="86e64265182bfae95e67dcf9629cdd2d5938ec048f6340a37327840b5f4efe09" Jan 29 06:39:51 crc kubenswrapper[5017]: E0129 06:39:51.518713 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86e64265182bfae95e67dcf9629cdd2d5938ec048f6340a37327840b5f4efe09\": container with ID starting with 86e64265182bfae95e67dcf9629cdd2d5938ec048f6340a37327840b5f4efe09 not found: ID does not exist" containerID="86e64265182bfae95e67dcf9629cdd2d5938ec048f6340a37327840b5f4efe09" Jan 29 06:39:51 crc kubenswrapper[5017]: I0129 06:39:51.518797 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86e64265182bfae95e67dcf9629cdd2d5938ec048f6340a37327840b5f4efe09"} err="failed to get container status \"86e64265182bfae95e67dcf9629cdd2d5938ec048f6340a37327840b5f4efe09\": rpc error: code = NotFound desc = could not find container \"86e64265182bfae95e67dcf9629cdd2d5938ec048f6340a37327840b5f4efe09\": container with ID starting with 86e64265182bfae95e67dcf9629cdd2d5938ec048f6340a37327840b5f4efe09 not found: ID does not exist" Jan 29 06:39:51 crc kubenswrapper[5017]: I0129 06:39:51.531758 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 29 06:39:51 crc kubenswrapper[5017]: I0129 06:39:51.547512 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 29 06:39:51 crc kubenswrapper[5017]: I0129 06:39:51.612881 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 29 06:39:51 crc kubenswrapper[5017]: I0129 06:39:51.684003 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 29 06:39:51 crc kubenswrapper[5017]: I0129 06:39:51.695265 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 29 06:39:51 crc kubenswrapper[5017]: I0129 06:39:51.705478 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 29 06:39:51 crc kubenswrapper[5017]: I0129 06:39:51.834484 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 29 06:39:51 crc kubenswrapper[5017]: I0129 06:39:51.854394 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 29 06:39:51 crc kubenswrapper[5017]: I0129 06:39:51.911458 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 29 06:39:51 crc 
kubenswrapper[5017]: I0129 06:39:51.950839 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 29 06:39:52 crc kubenswrapper[5017]: I0129 06:39:52.007883 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 29 06:39:52 crc kubenswrapper[5017]: I0129 06:39:52.085339 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 29 06:39:52 crc kubenswrapper[5017]: I0129 06:39:52.090116 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 29 06:39:52 crc kubenswrapper[5017]: I0129 06:39:52.146395 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 29 06:39:52 crc kubenswrapper[5017]: I0129 06:39:52.331177 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 29 06:39:52 crc kubenswrapper[5017]: I0129 06:39:52.331797 5017 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Jan 29 06:39:52 crc kubenswrapper[5017]: I0129 06:39:52.346846 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 29 06:39:52 crc kubenswrapper[5017]: I0129 06:39:52.354137 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 29 06:39:52 crc kubenswrapper[5017]: I0129 06:39:52.354216 5017 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="8e0cec49-7530-427c-b0a6-7dfc784b0df4" Jan 29 06:39:52 crc kubenswrapper[5017]: I0129 06:39:52.362767 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 29 06:39:52 crc kubenswrapper[5017]: I0129 06:39:52.362818 5017 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="8e0cec49-7530-427c-b0a6-7dfc784b0df4" Jan 29 06:39:52 crc kubenswrapper[5017]: I0129 06:39:52.366073 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 29 06:39:52 crc kubenswrapper[5017]: I0129 06:39:52.421139 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 29 06:39:52 crc kubenswrapper[5017]: I0129 06:39:52.435537 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 29 06:39:52 crc kubenswrapper[5017]: I0129 06:39:52.452525 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 29 06:39:52 crc kubenswrapper[5017]: I0129 06:39:52.555044 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 29 06:39:52 crc kubenswrapper[5017]: I0129 06:39:52.710152 5017 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 29 06:39:52 crc kubenswrapper[5017]: I0129 06:39:52.723208 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 29 06:39:52 crc kubenswrapper[5017]: I0129 06:39:52.724917 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 29 06:39:52 crc kubenswrapper[5017]: I0129 06:39:52.749625 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 29 06:39:52 crc kubenswrapper[5017]: I0129 06:39:52.754383 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 29 06:39:52 crc kubenswrapper[5017]: I0129 06:39:52.774559 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 29 06:39:52 crc kubenswrapper[5017]: I0129 06:39:52.790777 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 29 06:39:52 crc kubenswrapper[5017]: I0129 06:39:52.828265 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 29 06:39:52 crc kubenswrapper[5017]: I0129 06:39:52.846992 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 29 06:39:52 crc kubenswrapper[5017]: I0129 06:39:52.914568 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 29 06:39:52 crc kubenswrapper[5017]: I0129 06:39:52.946248 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 29 06:39:53 crc kubenswrapper[5017]: I0129 06:39:53.055009 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 29 06:39:53 crc kubenswrapper[5017]: I0129 06:39:53.064634 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 29 06:39:53 crc kubenswrapper[5017]: I0129 06:39:53.095329 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 29 06:39:53 crc kubenswrapper[5017]: I0129 06:39:53.183655 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 29 06:39:53 crc kubenswrapper[5017]: I0129 06:39:53.224079 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 29 06:39:53 crc kubenswrapper[5017]: I0129 06:39:53.371807 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 29 06:39:53 crc kubenswrapper[5017]: I0129 06:39:53.379556 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 29 06:39:53 crc kubenswrapper[5017]: I0129 06:39:53.568867 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 29 
06:39:53 crc kubenswrapper[5017]: I0129 06:39:53.724343 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 29 06:39:53 crc kubenswrapper[5017]: I0129 06:39:53.816524 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 29 06:39:53 crc kubenswrapper[5017]: I0129 06:39:53.833632 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 29 06:39:53 crc kubenswrapper[5017]: I0129 06:39:53.839362 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 29 06:39:54 crc kubenswrapper[5017]: I0129 06:39:54.020610 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 29 06:39:54 crc kubenswrapper[5017]: I0129 06:39:54.047975 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 29 06:39:54 crc kubenswrapper[5017]: I0129 06:39:54.106251 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 29 06:39:54 crc kubenswrapper[5017]: I0129 06:39:54.181085 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 29 06:39:54 crc kubenswrapper[5017]: I0129 06:39:54.210843 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 29 06:39:54 crc kubenswrapper[5017]: I0129 06:39:54.236350 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 29 06:39:54 crc kubenswrapper[5017]: I0129 06:39:54.264495 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 29 06:39:54 crc kubenswrapper[5017]: I0129 06:39:54.266231 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 29 06:39:54 crc kubenswrapper[5017]: I0129 06:39:54.303345 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 29 06:39:54 crc kubenswrapper[5017]: I0129 06:39:54.324313 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 29 06:39:54 crc kubenswrapper[5017]: I0129 06:39:54.328860 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 29 06:39:54 crc kubenswrapper[5017]: I0129 06:39:54.343269 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 29 06:39:54 crc kubenswrapper[5017]: I0129 06:39:54.361148 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 29 06:39:54 crc kubenswrapper[5017]: I0129 06:39:54.377367 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 29 06:39:54 crc kubenswrapper[5017]: I0129 06:39:54.388679 5017 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 29 06:39:54 crc kubenswrapper[5017]: I0129 06:39:54.409451 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 29 06:39:54 crc kubenswrapper[5017]: I0129 06:39:54.484722 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 29 06:39:54 crc kubenswrapper[5017]: I0129 06:39:54.651838 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 29 06:39:54 crc kubenswrapper[5017]: I0129 06:39:54.707366 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 29 06:39:54 crc kubenswrapper[5017]: I0129 06:39:54.712316 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 29 06:39:54 crc kubenswrapper[5017]: I0129 06:39:54.875470 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 29 06:39:54 crc kubenswrapper[5017]: I0129 06:39:54.902907 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 29 06:39:54 crc kubenswrapper[5017]: I0129 06:39:54.915784 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 29 06:39:55 crc kubenswrapper[5017]: I0129 06:39:55.020452 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 29 06:39:55 crc kubenswrapper[5017]: I0129 06:39:55.037932 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 29 06:39:55 crc kubenswrapper[5017]: I0129 06:39:55.054854 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 29 06:39:55 crc kubenswrapper[5017]: I0129 06:39:55.081593 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 29 06:39:55 crc kubenswrapper[5017]: I0129 06:39:55.190209 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 29 06:39:55 crc kubenswrapper[5017]: I0129 06:39:55.207496 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 29 06:39:55 crc kubenswrapper[5017]: I0129 06:39:55.345881 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 29 06:39:55 crc kubenswrapper[5017]: I0129 06:39:55.379131 5017 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 29 06:39:55 crc kubenswrapper[5017]: I0129 06:39:55.392574 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 29 06:39:55 crc kubenswrapper[5017]: I0129 06:39:55.400586 5017 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 29 06:39:55 crc kubenswrapper[5017]: I0129 06:39:55.433327 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 29 06:39:55 crc 
kubenswrapper[5017]: I0129 06:39:55.464455 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 29 06:39:55 crc kubenswrapper[5017]: I0129 06:39:55.480211 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 29 06:39:55 crc kubenswrapper[5017]: I0129 06:39:55.523514 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 29 06:39:55 crc kubenswrapper[5017]: I0129 06:39:55.566779 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 29 06:39:55 crc kubenswrapper[5017]: I0129 06:39:55.618866 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 29 06:39:55 crc kubenswrapper[5017]: I0129 06:39:55.626236 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 29 06:39:55 crc kubenswrapper[5017]: I0129 06:39:55.722471 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 29 06:39:55 crc kubenswrapper[5017]: I0129 06:39:55.745605 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 29 06:39:55 crc kubenswrapper[5017]: I0129 06:39:55.974151 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 29 06:39:56 crc kubenswrapper[5017]: I0129 06:39:56.058210 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 29 06:39:56 crc kubenswrapper[5017]: I0129 06:39:56.070025 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 29 06:39:56 crc kubenswrapper[5017]: I0129 06:39:56.094096 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 29 06:39:56 crc kubenswrapper[5017]: I0129 06:39:56.236246 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 29 06:39:56 crc kubenswrapper[5017]: I0129 06:39:56.245255 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 29 06:39:56 crc kubenswrapper[5017]: I0129 06:39:56.286127 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 29 06:39:56 crc kubenswrapper[5017]: I0129 06:39:56.350644 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 29 06:39:56 crc kubenswrapper[5017]: I0129 06:39:56.383255 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 29 06:39:56 crc kubenswrapper[5017]: I0129 06:39:56.410509 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 29 06:39:56 crc kubenswrapper[5017]: I0129 06:39:56.440365 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 29 06:39:56 crc kubenswrapper[5017]: I0129 06:39:56.467436 5017 
Jan 29 06:39:56 crc kubenswrapper[5017]: I0129 06:39:56.588096 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Jan 29 06:39:56 crc kubenswrapper[5017]: I0129 06:39:56.594398 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Jan 29 06:39:56 crc kubenswrapper[5017]: I0129 06:39:56.603459 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Jan 29 06:39:56 crc kubenswrapper[5017]: I0129 06:39:56.620280 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Jan 29 06:39:56 crc kubenswrapper[5017]: I0129 06:39:56.653073 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Jan 29 06:39:56 crc kubenswrapper[5017]: I0129 06:39:56.737769 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Jan 29 06:39:56 crc kubenswrapper[5017]: I0129 06:39:56.782805 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Jan 29 06:39:56 crc kubenswrapper[5017]: I0129 06:39:56.801032 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Jan 29 06:39:56 crc kubenswrapper[5017]: I0129 06:39:56.806145 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Jan 29 06:39:57 crc kubenswrapper[5017]: I0129 06:39:57.116477 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Jan 29 06:39:57 crc kubenswrapper[5017]: I0129 06:39:57.154864 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Jan 29 06:39:57 crc kubenswrapper[5017]: I0129 06:39:57.236291 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Jan 29 06:39:57 crc kubenswrapper[5017]: I0129 06:39:57.244940 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Jan 29 06:39:57 crc kubenswrapper[5017]: I0129 06:39:57.275140 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Jan 29 06:39:57 crc kubenswrapper[5017]: I0129 06:39:57.275711 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 29 06:39:57 crc kubenswrapper[5017]: I0129 06:39:57.275907 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Jan 29 06:39:57 crc kubenswrapper[5017]: I0129 06:39:57.397528 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Jan 29 06:39:57 crc kubenswrapper[5017]: I0129 06:39:57.439920 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Jan 29 06:39:57 crc kubenswrapper[5017]: I0129 06:39:57.454682 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 29 06:39:57 crc kubenswrapper[5017]: I0129 06:39:57.460511 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Jan 29 06:39:57 crc kubenswrapper[5017]: I0129 06:39:57.650246 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Jan 29 06:39:57 crc kubenswrapper[5017]: I0129 06:39:57.667571 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Jan 29 06:39:57 crc kubenswrapper[5017]: I0129 06:39:57.772872 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Jan 29 06:39:57 crc kubenswrapper[5017]: I0129 06:39:57.884803 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Jan 29 06:39:57 crc kubenswrapper[5017]: I0129 06:39:57.916500 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Jan 29 06:39:57 crc kubenswrapper[5017]: I0129 06:39:57.955120 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Jan 29 06:39:58 crc kubenswrapper[5017]: I0129 06:39:58.040145 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Jan 29 06:39:58 crc kubenswrapper[5017]: I0129 06:39:58.048198 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Jan 29 06:39:58 crc kubenswrapper[5017]: I0129 06:39:58.075734 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Jan 29 06:39:58 crc kubenswrapper[5017]: I0129 06:39:58.125095 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Jan 29 06:39:58 crc kubenswrapper[5017]: I0129 06:39:58.471120 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p"
Jan 29 06:39:58 crc kubenswrapper[5017]: I0129 06:39:58.543659 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Jan 29 06:39:58 crc kubenswrapper[5017]: I0129 06:39:58.600010 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Jan 29 06:39:58 crc kubenswrapper[5017]: I0129 06:39:58.638362 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Jan 29 06:39:59 crc kubenswrapper[5017]: I0129 06:39:59.135536 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Jan 29 06:39:59 crc kubenswrapper[5017]: I0129 06:39:59.278746 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Jan 29 06:39:59 crc kubenswrapper[5017]: I0129 06:39:59.358145 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 29 06:39:59 crc kubenswrapper[5017]: I0129 06:39:59.421614 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 29 06:39:59 crc kubenswrapper[5017]: I0129 06:39:59.421885 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 29 06:39:59 crc kubenswrapper[5017]: I0129 06:39:59.451864 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 29 06:39:59 crc kubenswrapper[5017]: I0129 06:39:59.591793 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 29 06:39:59 crc kubenswrapper[5017]: I0129 06:39:59.595751 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 29 06:39:59 crc kubenswrapper[5017]: I0129 06:39:59.668120 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 29 06:39:59 crc kubenswrapper[5017]: I0129 06:39:59.750004 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 29 06:39:59 crc kubenswrapper[5017]: I0129 06:39:59.837210 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 29 06:39:59 crc kubenswrapper[5017]: I0129 06:39:59.986122 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 29 06:40:00 crc kubenswrapper[5017]: I0129 06:40:00.237998 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 29 06:40:00 crc kubenswrapper[5017]: I0129 06:40:00.380743 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 29 06:40:00 crc kubenswrapper[5017]: I0129 06:40:00.435159 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 29 06:40:00 crc kubenswrapper[5017]: I0129 06:40:00.479827 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 29 06:40:00 crc kubenswrapper[5017]: I0129 06:40:00.643556 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 29 06:40:02 crc kubenswrapper[5017]: I0129 06:40:02.345854 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 29 06:40:09 crc kubenswrapper[5017]: I0129 06:40:09.796984 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jxrzm"] Jan 29 06:40:09 crc kubenswrapper[5017]: I0129 06:40:09.798751 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-jxrzm" podUID="515e73a4-3f9f-40aa-bd4b-c4ac2d55f304" containerName="registry-server" containerID="cri-o://f6d46c6a33a3505b53f209f42e17502705f15c272a899a6e6a39d557d7beda37" gracePeriod=30 Jan 29 06:40:09 crc kubenswrapper[5017]: I0129 06:40:09.802029 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-969vd"] Jan 29 06:40:09 crc kubenswrapper[5017]: I0129 
06:40:09.802924 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-969vd" podUID="95fabedb-25f2-43b3-a1dc-907c7e3ad4c2" containerName="registry-server" containerID="cri-o://16fad0f8ff0eea5356cbcb34689747c4929f7352e4bd570c1870737ec2b6006d" gracePeriod=30 Jan 29 06:40:09 crc kubenswrapper[5017]: I0129 06:40:09.811382 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-f8chq"] Jan 29 06:40:09 crc kubenswrapper[5017]: I0129 06:40:09.814913 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-f8chq" podUID="bb138e77-1a05-4fd5-9fd7-47843ef6aa0b" containerName="marketplace-operator" containerID="cri-o://5a1eb558e705ef8e04d3029e75069a9a50821329543d1fe91036da43a3b2812a" gracePeriod=30 Jan 29 06:40:09 crc kubenswrapper[5017]: I0129 06:40:09.842687 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zcr4n"] Jan 29 06:40:09 crc kubenswrapper[5017]: I0129 06:40:09.843261 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-zcr4n" podUID="402aa844-38d7-44aa-bfa8-8db490d3aa4b" containerName="registry-server" containerID="cri-o://2cd22b3eea5fc258b0b900a9a83d32c32404ac37942ed4f023fb927ef9bfb301" gracePeriod=30 Jan 29 06:40:09 crc kubenswrapper[5017]: I0129 06:40:09.861860 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2cqwl"] Jan 29 06:40:09 crc kubenswrapper[5017]: I0129 06:40:09.862241 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2cqwl" podUID="d8f46fb7-929f-4d96-a5ca-4fc475b78342" containerName="registry-server" containerID="cri-o://ab90f0beb742598659b96e89b85524ea4d3ef0a52fad36e1630c37806d1b9613" gracePeriod=30 Jan 29 06:40:09 crc kubenswrapper[5017]: I0129 06:40:09.868710 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-6hp88"] Jan 29 06:40:09 crc kubenswrapper[5017]: E0129 06:40:09.869127 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50301307-73f6-4035-92d0-e96ac1dcb9b9" containerName="installer" Jan 29 06:40:09 crc kubenswrapper[5017]: I0129 06:40:09.869161 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="50301307-73f6-4035-92d0-e96ac1dcb9b9" containerName="installer" Jan 29 06:40:09 crc kubenswrapper[5017]: E0129 06:40:09.869200 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 29 06:40:09 crc kubenswrapper[5017]: I0129 06:40:09.869213 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 29 06:40:09 crc kubenswrapper[5017]: I0129 06:40:09.869389 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 29 06:40:09 crc kubenswrapper[5017]: I0129 06:40:09.869420 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="50301307-73f6-4035-92d0-e96ac1dcb9b9" containerName="installer" Jan 29 06:40:09 crc kubenswrapper[5017]: I0129 06:40:09.870144 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-6hp88" Jan 29 06:40:09 crc kubenswrapper[5017]: I0129 06:40:09.876637 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-6hp88"] Jan 29 06:40:09 crc kubenswrapper[5017]: I0129 06:40:09.901756 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3fc9269a-d09b-426d-988d-05995e1d4014-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-6hp88\" (UID: \"3fc9269a-d09b-426d-988d-05995e1d4014\") " pod="openshift-marketplace/marketplace-operator-79b997595-6hp88" Jan 29 06:40:09 crc kubenswrapper[5017]: I0129 06:40:09.901814 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3fc9269a-d09b-426d-988d-05995e1d4014-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-6hp88\" (UID: \"3fc9269a-d09b-426d-988d-05995e1d4014\") " pod="openshift-marketplace/marketplace-operator-79b997595-6hp88" Jan 29 06:40:09 crc kubenswrapper[5017]: I0129 06:40:09.901856 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsbfl\" (UniqueName: \"kubernetes.io/projected/3fc9269a-d09b-426d-988d-05995e1d4014-kube-api-access-vsbfl\") pod \"marketplace-operator-79b997595-6hp88\" (UID: \"3fc9269a-d09b-426d-988d-05995e1d4014\") " pod="openshift-marketplace/marketplace-operator-79b997595-6hp88" Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.003869 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3fc9269a-d09b-426d-988d-05995e1d4014-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-6hp88\" (UID: \"3fc9269a-d09b-426d-988d-05995e1d4014\") " pod="openshift-marketplace/marketplace-operator-79b997595-6hp88" Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.003993 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vsbfl\" (UniqueName: \"kubernetes.io/projected/3fc9269a-d09b-426d-988d-05995e1d4014-kube-api-access-vsbfl\") pod \"marketplace-operator-79b997595-6hp88\" (UID: \"3fc9269a-d09b-426d-988d-05995e1d4014\") " pod="openshift-marketplace/marketplace-operator-79b997595-6hp88" Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.004197 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3fc9269a-d09b-426d-988d-05995e1d4014-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-6hp88\" (UID: \"3fc9269a-d09b-426d-988d-05995e1d4014\") " pod="openshift-marketplace/marketplace-operator-79b997595-6hp88" Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.005784 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3fc9269a-d09b-426d-988d-05995e1d4014-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-6hp88\" (UID: \"3fc9269a-d09b-426d-988d-05995e1d4014\") " pod="openshift-marketplace/marketplace-operator-79b997595-6hp88" Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.024024 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/3fc9269a-d09b-426d-988d-05995e1d4014-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-6hp88\" (UID: \"3fc9269a-d09b-426d-988d-05995e1d4014\") " pod="openshift-marketplace/marketplace-operator-79b997595-6hp88" Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.029549 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vsbfl\" (UniqueName: \"kubernetes.io/projected/3fc9269a-d09b-426d-988d-05995e1d4014-kube-api-access-vsbfl\") pod \"marketplace-operator-79b997595-6hp88\" (UID: \"3fc9269a-d09b-426d-988d-05995e1d4014\") " pod="openshift-marketplace/marketplace-operator-79b997595-6hp88" Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.328499 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-6hp88" Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.334301 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-f8chq" Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.340758 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2cqwl" Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.348262 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jxrzm" Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.375669 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zcr4n" Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.377545 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-969vd" Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.408242 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bb138e77-1a05-4fd5-9fd7-47843ef6aa0b-marketplace-trusted-ca\") pod \"bb138e77-1a05-4fd5-9fd7-47843ef6aa0b\" (UID: \"bb138e77-1a05-4fd5-9fd7-47843ef6aa0b\") " Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.408311 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qtkdc\" (UniqueName: \"kubernetes.io/projected/402aa844-38d7-44aa-bfa8-8db490d3aa4b-kube-api-access-qtkdc\") pod \"402aa844-38d7-44aa-bfa8-8db490d3aa4b\" (UID: \"402aa844-38d7-44aa-bfa8-8db490d3aa4b\") " Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.408374 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/402aa844-38d7-44aa-bfa8-8db490d3aa4b-utilities\") pod \"402aa844-38d7-44aa-bfa8-8db490d3aa4b\" (UID: \"402aa844-38d7-44aa-bfa8-8db490d3aa4b\") " Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.408410 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/bb138e77-1a05-4fd5-9fd7-47843ef6aa0b-marketplace-operator-metrics\") pod \"bb138e77-1a05-4fd5-9fd7-47843ef6aa0b\" (UID: \"bb138e77-1a05-4fd5-9fd7-47843ef6aa0b\") " Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.408448 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95fabedb-25f2-43b3-a1dc-907c7e3ad4c2-utilities\") pod \"95fabedb-25f2-43b3-a1dc-907c7e3ad4c2\" (UID: \"95fabedb-25f2-43b3-a1dc-907c7e3ad4c2\") " Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.408465 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95fabedb-25f2-43b3-a1dc-907c7e3ad4c2-catalog-content\") pod \"95fabedb-25f2-43b3-a1dc-907c7e3ad4c2\" (UID: \"95fabedb-25f2-43b3-a1dc-907c7e3ad4c2\") " Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.408518 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/515e73a4-3f9f-40aa-bd4b-c4ac2d55f304-utilities\") pod \"515e73a4-3f9f-40aa-bd4b-c4ac2d55f304\" (UID: \"515e73a4-3f9f-40aa-bd4b-c4ac2d55f304\") " Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.408544 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-87zhg\" (UniqueName: \"kubernetes.io/projected/d8f46fb7-929f-4d96-a5ca-4fc475b78342-kube-api-access-87zhg\") pod \"d8f46fb7-929f-4d96-a5ca-4fc475b78342\" (UID: \"d8f46fb7-929f-4d96-a5ca-4fc475b78342\") " Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.408626 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/402aa844-38d7-44aa-bfa8-8db490d3aa4b-catalog-content\") pod \"402aa844-38d7-44aa-bfa8-8db490d3aa4b\" (UID: \"402aa844-38d7-44aa-bfa8-8db490d3aa4b\") " Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.408650 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rh9m4\" (UniqueName: 
\"kubernetes.io/projected/bb138e77-1a05-4fd5-9fd7-47843ef6aa0b-kube-api-access-rh9m4\") pod \"bb138e77-1a05-4fd5-9fd7-47843ef6aa0b\" (UID: \"bb138e77-1a05-4fd5-9fd7-47843ef6aa0b\") " Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.408692 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vck8l\" (UniqueName: \"kubernetes.io/projected/95fabedb-25f2-43b3-a1dc-907c7e3ad4c2-kube-api-access-vck8l\") pod \"95fabedb-25f2-43b3-a1dc-907c7e3ad4c2\" (UID: \"95fabedb-25f2-43b3-a1dc-907c7e3ad4c2\") " Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.408728 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/515e73a4-3f9f-40aa-bd4b-c4ac2d55f304-catalog-content\") pod \"515e73a4-3f9f-40aa-bd4b-c4ac2d55f304\" (UID: \"515e73a4-3f9f-40aa-bd4b-c4ac2d55f304\") " Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.408762 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8f46fb7-929f-4d96-a5ca-4fc475b78342-utilities\") pod \"d8f46fb7-929f-4d96-a5ca-4fc475b78342\" (UID: \"d8f46fb7-929f-4d96-a5ca-4fc475b78342\") " Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.408779 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8f46fb7-929f-4d96-a5ca-4fc475b78342-catalog-content\") pod \"d8f46fb7-929f-4d96-a5ca-4fc475b78342\" (UID: \"d8f46fb7-929f-4d96-a5ca-4fc475b78342\") " Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.408870 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-trlr7\" (UniqueName: \"kubernetes.io/projected/515e73a4-3f9f-40aa-bd4b-c4ac2d55f304-kube-api-access-trlr7\") pod \"515e73a4-3f9f-40aa-bd4b-c4ac2d55f304\" (UID: \"515e73a4-3f9f-40aa-bd4b-c4ac2d55f304\") " Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.409385 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb138e77-1a05-4fd5-9fd7-47843ef6aa0b-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "bb138e77-1a05-4fd5-9fd7-47843ef6aa0b" (UID: "bb138e77-1a05-4fd5-9fd7-47843ef6aa0b"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.411737 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d8f46fb7-929f-4d96-a5ca-4fc475b78342-utilities" (OuterVolumeSpecName: "utilities") pod "d8f46fb7-929f-4d96-a5ca-4fc475b78342" (UID: "d8f46fb7-929f-4d96-a5ca-4fc475b78342"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.412204 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/402aa844-38d7-44aa-bfa8-8db490d3aa4b-utilities" (OuterVolumeSpecName: "utilities") pod "402aa844-38d7-44aa-bfa8-8db490d3aa4b" (UID: "402aa844-38d7-44aa-bfa8-8db490d3aa4b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.412259 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/515e73a4-3f9f-40aa-bd4b-c4ac2d55f304-utilities" (OuterVolumeSpecName: "utilities") pod "515e73a4-3f9f-40aa-bd4b-c4ac2d55f304" (UID: "515e73a4-3f9f-40aa-bd4b-c4ac2d55f304"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.412684 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95fabedb-25f2-43b3-a1dc-907c7e3ad4c2-utilities" (OuterVolumeSpecName: "utilities") pod "95fabedb-25f2-43b3-a1dc-907c7e3ad4c2" (UID: "95fabedb-25f2-43b3-a1dc-907c7e3ad4c2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.418440 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/515e73a4-3f9f-40aa-bd4b-c4ac2d55f304-kube-api-access-trlr7" (OuterVolumeSpecName: "kube-api-access-trlr7") pod "515e73a4-3f9f-40aa-bd4b-c4ac2d55f304" (UID: "515e73a4-3f9f-40aa-bd4b-c4ac2d55f304"). InnerVolumeSpecName "kube-api-access-trlr7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.419980 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb138e77-1a05-4fd5-9fd7-47843ef6aa0b-kube-api-access-rh9m4" (OuterVolumeSpecName: "kube-api-access-rh9m4") pod "bb138e77-1a05-4fd5-9fd7-47843ef6aa0b" (UID: "bb138e77-1a05-4fd5-9fd7-47843ef6aa0b"). InnerVolumeSpecName "kube-api-access-rh9m4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.420625 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/402aa844-38d7-44aa-bfa8-8db490d3aa4b-kube-api-access-qtkdc" (OuterVolumeSpecName: "kube-api-access-qtkdc") pod "402aa844-38d7-44aa-bfa8-8db490d3aa4b" (UID: "402aa844-38d7-44aa-bfa8-8db490d3aa4b"). InnerVolumeSpecName "kube-api-access-qtkdc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.421199 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb138e77-1a05-4fd5-9fd7-47843ef6aa0b-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "bb138e77-1a05-4fd5-9fd7-47843ef6aa0b" (UID: "bb138e77-1a05-4fd5-9fd7-47843ef6aa0b"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.421484 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8f46fb7-929f-4d96-a5ca-4fc475b78342-kube-api-access-87zhg" (OuterVolumeSpecName: "kube-api-access-87zhg") pod "d8f46fb7-929f-4d96-a5ca-4fc475b78342" (UID: "d8f46fb7-929f-4d96-a5ca-4fc475b78342"). InnerVolumeSpecName "kube-api-access-87zhg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.429538 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95fabedb-25f2-43b3-a1dc-907c7e3ad4c2-kube-api-access-vck8l" (OuterVolumeSpecName: "kube-api-access-vck8l") pod "95fabedb-25f2-43b3-a1dc-907c7e3ad4c2" (UID: "95fabedb-25f2-43b3-a1dc-907c7e3ad4c2"). InnerVolumeSpecName "kube-api-access-vck8l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.469380 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/402aa844-38d7-44aa-bfa8-8db490d3aa4b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "402aa844-38d7-44aa-bfa8-8db490d3aa4b" (UID: "402aa844-38d7-44aa-bfa8-8db490d3aa4b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.482670 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/515e73a4-3f9f-40aa-bd4b-c4ac2d55f304-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "515e73a4-3f9f-40aa-bd4b-c4ac2d55f304" (UID: "515e73a4-3f9f-40aa-bd4b-c4ac2d55f304"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.493907 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95fabedb-25f2-43b3-a1dc-907c7e3ad4c2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "95fabedb-25f2-43b3-a1dc-907c7e3ad4c2" (UID: "95fabedb-25f2-43b3-a1dc-907c7e3ad4c2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.511377 5017 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/515e73a4-3f9f-40aa-bd4b-c4ac2d55f304-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.511412 5017 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8f46fb7-929f-4d96-a5ca-4fc475b78342-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.511422 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-trlr7\" (UniqueName: \"kubernetes.io/projected/515e73a4-3f9f-40aa-bd4b-c4ac2d55f304-kube-api-access-trlr7\") on node \"crc\" DevicePath \"\"" Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.511434 5017 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bb138e77-1a05-4fd5-9fd7-47843ef6aa0b-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.511444 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qtkdc\" (UniqueName: \"kubernetes.io/projected/402aa844-38d7-44aa-bfa8-8db490d3aa4b-kube-api-access-qtkdc\") on node \"crc\" DevicePath \"\"" Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.511453 5017 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/402aa844-38d7-44aa-bfa8-8db490d3aa4b-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.511462 5017 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95fabedb-25f2-43b3-a1dc-907c7e3ad4c2-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.511473 5017 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95fabedb-25f2-43b3-a1dc-907c7e3ad4c2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.511534 5017 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/bb138e77-1a05-4fd5-9fd7-47843ef6aa0b-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.511574 5017 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/515e73a4-3f9f-40aa-bd4b-c4ac2d55f304-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.511585 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-87zhg\" (UniqueName: \"kubernetes.io/projected/d8f46fb7-929f-4d96-a5ca-4fc475b78342-kube-api-access-87zhg\") on node \"crc\" DevicePath \"\"" Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.511595 5017 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/402aa844-38d7-44aa-bfa8-8db490d3aa4b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.511603 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rh9m4\" (UniqueName: 
\"kubernetes.io/projected/bb138e77-1a05-4fd5-9fd7-47843ef6aa0b-kube-api-access-rh9m4\") on node \"crc\" DevicePath \"\"" Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.511612 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vck8l\" (UniqueName: \"kubernetes.io/projected/95fabedb-25f2-43b3-a1dc-907c7e3ad4c2-kube-api-access-vck8l\") on node \"crc\" DevicePath \"\"" Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.581684 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d8f46fb7-929f-4d96-a5ca-4fc475b78342-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d8f46fb7-929f-4d96-a5ca-4fc475b78342" (UID: "d8f46fb7-929f-4d96-a5ca-4fc475b78342"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.613031 5017 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8f46fb7-929f-4d96-a5ca-4fc475b78342-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.622858 5017 generic.go:334] "Generic (PLEG): container finished" podID="95fabedb-25f2-43b3-a1dc-907c7e3ad4c2" containerID="16fad0f8ff0eea5356cbcb34689747c4929f7352e4bd570c1870737ec2b6006d" exitCode=0 Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.622985 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-969vd" Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.623023 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-969vd" event={"ID":"95fabedb-25f2-43b3-a1dc-907c7e3ad4c2","Type":"ContainerDied","Data":"16fad0f8ff0eea5356cbcb34689747c4929f7352e4bd570c1870737ec2b6006d"} Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.623134 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-969vd" event={"ID":"95fabedb-25f2-43b3-a1dc-907c7e3ad4c2","Type":"ContainerDied","Data":"c7d4574b015ce0704d6160068b72ebf7e3202ea16b60ecc0b055c6d283614c14"} Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.623157 5017 scope.go:117] "RemoveContainer" containerID="16fad0f8ff0eea5356cbcb34689747c4929f7352e4bd570c1870737ec2b6006d" Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.626611 5017 generic.go:334] "Generic (PLEG): container finished" podID="d8f46fb7-929f-4d96-a5ca-4fc475b78342" containerID="ab90f0beb742598659b96e89b85524ea4d3ef0a52fad36e1630c37806d1b9613" exitCode=0 Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.626674 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2cqwl" event={"ID":"d8f46fb7-929f-4d96-a5ca-4fc475b78342","Type":"ContainerDied","Data":"ab90f0beb742598659b96e89b85524ea4d3ef0a52fad36e1630c37806d1b9613"} Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.626737 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2cqwl" event={"ID":"d8f46fb7-929f-4d96-a5ca-4fc475b78342","Type":"ContainerDied","Data":"17185ad7dd95fe02645fee9ec954fbd2a276134de547f0bae383404856e7fa4d"} Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.626800 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2cqwl" Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.628160 5017 generic.go:334] "Generic (PLEG): container finished" podID="bb138e77-1a05-4fd5-9fd7-47843ef6aa0b" containerID="5a1eb558e705ef8e04d3029e75069a9a50821329543d1fe91036da43a3b2812a" exitCode=0 Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.628216 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-f8chq" Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.628249 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-f8chq" event={"ID":"bb138e77-1a05-4fd5-9fd7-47843ef6aa0b","Type":"ContainerDied","Data":"5a1eb558e705ef8e04d3029e75069a9a50821329543d1fe91036da43a3b2812a"} Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.628316 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-f8chq" event={"ID":"bb138e77-1a05-4fd5-9fd7-47843ef6aa0b","Type":"ContainerDied","Data":"952eba734f33057dd178459401dd31a1f4583986ac49d044660d6aa52495bc24"} Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.630839 5017 generic.go:334] "Generic (PLEG): container finished" podID="515e73a4-3f9f-40aa-bd4b-c4ac2d55f304" containerID="f6d46c6a33a3505b53f209f42e17502705f15c272a899a6e6a39d557d7beda37" exitCode=0 Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.630909 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jxrzm" event={"ID":"515e73a4-3f9f-40aa-bd4b-c4ac2d55f304","Type":"ContainerDied","Data":"f6d46c6a33a3505b53f209f42e17502705f15c272a899a6e6a39d557d7beda37"} Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.630938 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jxrzm" event={"ID":"515e73a4-3f9f-40aa-bd4b-c4ac2d55f304","Type":"ContainerDied","Data":"4439033efc00639dd2e31ef73ef2d6ced28ccfec1ff6e0c9098e3723e218953a"} Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.631027 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jxrzm" Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.634574 5017 generic.go:334] "Generic (PLEG): container finished" podID="402aa844-38d7-44aa-bfa8-8db490d3aa4b" containerID="2cd22b3eea5fc258b0b900a9a83d32c32404ac37942ed4f023fb927ef9bfb301" exitCode=0 Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.634623 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zcr4n" event={"ID":"402aa844-38d7-44aa-bfa8-8db490d3aa4b","Type":"ContainerDied","Data":"2cd22b3eea5fc258b0b900a9a83d32c32404ac37942ed4f023fb927ef9bfb301"} Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.634663 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zcr4n" event={"ID":"402aa844-38d7-44aa-bfa8-8db490d3aa4b","Type":"ContainerDied","Data":"06459df7bc080380ebcf3afcc4b465453086548a29da725a6500e333f0692a6c"} Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.634745 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zcr4n" Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.648674 5017 scope.go:117] "RemoveContainer" containerID="dd6aa5cec1511c954980819c3362e07f28d70dfc4710319fef83738761cd2050" Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.669346 5017 scope.go:117] "RemoveContainer" containerID="f20c4f9240178a6b16d72036de81e1bfd4bd3eea795887e3d5113d33ffd2e9a8" Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.676010 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-969vd"] Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.683465 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-969vd"] Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.692839 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2cqwl"] Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.700448 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2cqwl"] Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.705885 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-f8chq"] Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.712220 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-f8chq"] Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.714051 5017 scope.go:117] "RemoveContainer" containerID="16fad0f8ff0eea5356cbcb34689747c4929f7352e4bd570c1870737ec2b6006d" Jan 29 06:40:10 crc kubenswrapper[5017]: E0129 06:40:10.714892 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16fad0f8ff0eea5356cbcb34689747c4929f7352e4bd570c1870737ec2b6006d\": container with ID starting with 16fad0f8ff0eea5356cbcb34689747c4929f7352e4bd570c1870737ec2b6006d not found: ID does not exist" containerID="16fad0f8ff0eea5356cbcb34689747c4929f7352e4bd570c1870737ec2b6006d" Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.714932 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16fad0f8ff0eea5356cbcb34689747c4929f7352e4bd570c1870737ec2b6006d"} err="failed to get container status \"16fad0f8ff0eea5356cbcb34689747c4929f7352e4bd570c1870737ec2b6006d\": rpc error: code = NotFound desc = could not find container \"16fad0f8ff0eea5356cbcb34689747c4929f7352e4bd570c1870737ec2b6006d\": container with ID starting with 16fad0f8ff0eea5356cbcb34689747c4929f7352e4bd570c1870737ec2b6006d not found: ID does not exist" Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.714971 5017 scope.go:117] "RemoveContainer" containerID="dd6aa5cec1511c954980819c3362e07f28d70dfc4710319fef83738761cd2050" Jan 29 06:40:10 crc kubenswrapper[5017]: E0129 06:40:10.716193 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd6aa5cec1511c954980819c3362e07f28d70dfc4710319fef83738761cd2050\": container with ID starting with dd6aa5cec1511c954980819c3362e07f28d70dfc4710319fef83738761cd2050 not found: ID does not exist" containerID="dd6aa5cec1511c954980819c3362e07f28d70dfc4710319fef83738761cd2050" Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.716238 5017 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"dd6aa5cec1511c954980819c3362e07f28d70dfc4710319fef83738761cd2050"} err="failed to get container status \"dd6aa5cec1511c954980819c3362e07f28d70dfc4710319fef83738761cd2050\": rpc error: code = NotFound desc = could not find container \"dd6aa5cec1511c954980819c3362e07f28d70dfc4710319fef83738761cd2050\": container with ID starting with dd6aa5cec1511c954980819c3362e07f28d70dfc4710319fef83738761cd2050 not found: ID does not exist" Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.716271 5017 scope.go:117] "RemoveContainer" containerID="f20c4f9240178a6b16d72036de81e1bfd4bd3eea795887e3d5113d33ffd2e9a8" Jan 29 06:40:10 crc kubenswrapper[5017]: E0129 06:40:10.716582 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f20c4f9240178a6b16d72036de81e1bfd4bd3eea795887e3d5113d33ffd2e9a8\": container with ID starting with f20c4f9240178a6b16d72036de81e1bfd4bd3eea795887e3d5113d33ffd2e9a8 not found: ID does not exist" containerID="f20c4f9240178a6b16d72036de81e1bfd4bd3eea795887e3d5113d33ffd2e9a8" Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.716608 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f20c4f9240178a6b16d72036de81e1bfd4bd3eea795887e3d5113d33ffd2e9a8"} err="failed to get container status \"f20c4f9240178a6b16d72036de81e1bfd4bd3eea795887e3d5113d33ffd2e9a8\": rpc error: code = NotFound desc = could not find container \"f20c4f9240178a6b16d72036de81e1bfd4bd3eea795887e3d5113d33ffd2e9a8\": container with ID starting with f20c4f9240178a6b16d72036de81e1bfd4bd3eea795887e3d5113d33ffd2e9a8 not found: ID does not exist" Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.716625 5017 scope.go:117] "RemoveContainer" containerID="ab90f0beb742598659b96e89b85524ea4d3ef0a52fad36e1630c37806d1b9613" Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.717631 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jxrzm"] Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.720633 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-jxrzm"] Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.723397 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zcr4n"] Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.730901 5017 scope.go:117] "RemoveContainer" containerID="900c8d4c1518329eb7f4da32b0c4e2ca477dc4ceb5c0ddfe6d6707ead8441f10" Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.731822 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-zcr4n"] Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.753282 5017 scope.go:117] "RemoveContainer" containerID="903dc8a1cde43c7ef514ea551e0095d4527883515b7255abebb3b65ef97f30d5" Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.764030 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-6hp88"] Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.773320 5017 scope.go:117] "RemoveContainer" containerID="ab90f0beb742598659b96e89b85524ea4d3ef0a52fad36e1630c37806d1b9613" Jan 29 06:40:10 crc kubenswrapper[5017]: E0129 06:40:10.773828 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab90f0beb742598659b96e89b85524ea4d3ef0a52fad36e1630c37806d1b9613\": 
container with ID starting with ab90f0beb742598659b96e89b85524ea4d3ef0a52fad36e1630c37806d1b9613 not found: ID does not exist" containerID="ab90f0beb742598659b96e89b85524ea4d3ef0a52fad36e1630c37806d1b9613" Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.773913 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab90f0beb742598659b96e89b85524ea4d3ef0a52fad36e1630c37806d1b9613"} err="failed to get container status \"ab90f0beb742598659b96e89b85524ea4d3ef0a52fad36e1630c37806d1b9613\": rpc error: code = NotFound desc = could not find container \"ab90f0beb742598659b96e89b85524ea4d3ef0a52fad36e1630c37806d1b9613\": container with ID starting with ab90f0beb742598659b96e89b85524ea4d3ef0a52fad36e1630c37806d1b9613 not found: ID does not exist" Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.773996 5017 scope.go:117] "RemoveContainer" containerID="900c8d4c1518329eb7f4da32b0c4e2ca477dc4ceb5c0ddfe6d6707ead8441f10" Jan 29 06:40:10 crc kubenswrapper[5017]: E0129 06:40:10.774412 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"900c8d4c1518329eb7f4da32b0c4e2ca477dc4ceb5c0ddfe6d6707ead8441f10\": container with ID starting with 900c8d4c1518329eb7f4da32b0c4e2ca477dc4ceb5c0ddfe6d6707ead8441f10 not found: ID does not exist" containerID="900c8d4c1518329eb7f4da32b0c4e2ca477dc4ceb5c0ddfe6d6707ead8441f10" Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.774441 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"900c8d4c1518329eb7f4da32b0c4e2ca477dc4ceb5c0ddfe6d6707ead8441f10"} err="failed to get container status \"900c8d4c1518329eb7f4da32b0c4e2ca477dc4ceb5c0ddfe6d6707ead8441f10\": rpc error: code = NotFound desc = could not find container \"900c8d4c1518329eb7f4da32b0c4e2ca477dc4ceb5c0ddfe6d6707ead8441f10\": container with ID starting with 900c8d4c1518329eb7f4da32b0c4e2ca477dc4ceb5c0ddfe6d6707ead8441f10 not found: ID does not exist" Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.774466 5017 scope.go:117] "RemoveContainer" containerID="903dc8a1cde43c7ef514ea551e0095d4527883515b7255abebb3b65ef97f30d5" Jan 29 06:40:10 crc kubenswrapper[5017]: E0129 06:40:10.774838 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"903dc8a1cde43c7ef514ea551e0095d4527883515b7255abebb3b65ef97f30d5\": container with ID starting with 903dc8a1cde43c7ef514ea551e0095d4527883515b7255abebb3b65ef97f30d5 not found: ID does not exist" containerID="903dc8a1cde43c7ef514ea551e0095d4527883515b7255abebb3b65ef97f30d5" Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.774861 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"903dc8a1cde43c7ef514ea551e0095d4527883515b7255abebb3b65ef97f30d5"} err="failed to get container status \"903dc8a1cde43c7ef514ea551e0095d4527883515b7255abebb3b65ef97f30d5\": rpc error: code = NotFound desc = could not find container \"903dc8a1cde43c7ef514ea551e0095d4527883515b7255abebb3b65ef97f30d5\": container with ID starting with 903dc8a1cde43c7ef514ea551e0095d4527883515b7255abebb3b65ef97f30d5 not found: ID does not exist" Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.774873 5017 scope.go:117] "RemoveContainer" containerID="5a1eb558e705ef8e04d3029e75069a9a50821329543d1fe91036da43a3b2812a" Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.789697 5017 scope.go:117] "RemoveContainer" 
containerID="5a1eb558e705ef8e04d3029e75069a9a50821329543d1fe91036da43a3b2812a" Jan 29 06:40:10 crc kubenswrapper[5017]: E0129 06:40:10.790265 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a1eb558e705ef8e04d3029e75069a9a50821329543d1fe91036da43a3b2812a\": container with ID starting with 5a1eb558e705ef8e04d3029e75069a9a50821329543d1fe91036da43a3b2812a not found: ID does not exist" containerID="5a1eb558e705ef8e04d3029e75069a9a50821329543d1fe91036da43a3b2812a" Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.790310 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a1eb558e705ef8e04d3029e75069a9a50821329543d1fe91036da43a3b2812a"} err="failed to get container status \"5a1eb558e705ef8e04d3029e75069a9a50821329543d1fe91036da43a3b2812a\": rpc error: code = NotFound desc = could not find container \"5a1eb558e705ef8e04d3029e75069a9a50821329543d1fe91036da43a3b2812a\": container with ID starting with 5a1eb558e705ef8e04d3029e75069a9a50821329543d1fe91036da43a3b2812a not found: ID does not exist" Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.790343 5017 scope.go:117] "RemoveContainer" containerID="f6d46c6a33a3505b53f209f42e17502705f15c272a899a6e6a39d557d7beda37" Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.804453 5017 scope.go:117] "RemoveContainer" containerID="a911113f9cd77647fa411174b38a432e61247a5040ddd928ec0b1d05f95be57a" Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.832162 5017 scope.go:117] "RemoveContainer" containerID="4d051237478e16dd0c73d03ea22ae429be0ecc2fec74acbeba1ecd2fc1046072" Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.851240 5017 scope.go:117] "RemoveContainer" containerID="f6d46c6a33a3505b53f209f42e17502705f15c272a899a6e6a39d557d7beda37" Jan 29 06:40:10 crc kubenswrapper[5017]: E0129 06:40:10.851789 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6d46c6a33a3505b53f209f42e17502705f15c272a899a6e6a39d557d7beda37\": container with ID starting with f6d46c6a33a3505b53f209f42e17502705f15c272a899a6e6a39d557d7beda37 not found: ID does not exist" containerID="f6d46c6a33a3505b53f209f42e17502705f15c272a899a6e6a39d557d7beda37" Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.851825 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6d46c6a33a3505b53f209f42e17502705f15c272a899a6e6a39d557d7beda37"} err="failed to get container status \"f6d46c6a33a3505b53f209f42e17502705f15c272a899a6e6a39d557d7beda37\": rpc error: code = NotFound desc = could not find container \"f6d46c6a33a3505b53f209f42e17502705f15c272a899a6e6a39d557d7beda37\": container with ID starting with f6d46c6a33a3505b53f209f42e17502705f15c272a899a6e6a39d557d7beda37 not found: ID does not exist" Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.851886 5017 scope.go:117] "RemoveContainer" containerID="a911113f9cd77647fa411174b38a432e61247a5040ddd928ec0b1d05f95be57a" Jan 29 06:40:10 crc kubenswrapper[5017]: E0129 06:40:10.852362 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a911113f9cd77647fa411174b38a432e61247a5040ddd928ec0b1d05f95be57a\": container with ID starting with a911113f9cd77647fa411174b38a432e61247a5040ddd928ec0b1d05f95be57a not found: ID does not exist" containerID="a911113f9cd77647fa411174b38a432e61247a5040ddd928ec0b1d05f95be57a" 
Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.852387 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a911113f9cd77647fa411174b38a432e61247a5040ddd928ec0b1d05f95be57a"} err="failed to get container status \"a911113f9cd77647fa411174b38a432e61247a5040ddd928ec0b1d05f95be57a\": rpc error: code = NotFound desc = could not find container \"a911113f9cd77647fa411174b38a432e61247a5040ddd928ec0b1d05f95be57a\": container with ID starting with a911113f9cd77647fa411174b38a432e61247a5040ddd928ec0b1d05f95be57a not found: ID does not exist" Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.852406 5017 scope.go:117] "RemoveContainer" containerID="4d051237478e16dd0c73d03ea22ae429be0ecc2fec74acbeba1ecd2fc1046072" Jan 29 06:40:10 crc kubenswrapper[5017]: E0129 06:40:10.852757 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d051237478e16dd0c73d03ea22ae429be0ecc2fec74acbeba1ecd2fc1046072\": container with ID starting with 4d051237478e16dd0c73d03ea22ae429be0ecc2fec74acbeba1ecd2fc1046072 not found: ID does not exist" containerID="4d051237478e16dd0c73d03ea22ae429be0ecc2fec74acbeba1ecd2fc1046072" Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.852785 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d051237478e16dd0c73d03ea22ae429be0ecc2fec74acbeba1ecd2fc1046072"} err="failed to get container status \"4d051237478e16dd0c73d03ea22ae429be0ecc2fec74acbeba1ecd2fc1046072\": rpc error: code = NotFound desc = could not find container \"4d051237478e16dd0c73d03ea22ae429be0ecc2fec74acbeba1ecd2fc1046072\": container with ID starting with 4d051237478e16dd0c73d03ea22ae429be0ecc2fec74acbeba1ecd2fc1046072 not found: ID does not exist" Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.852805 5017 scope.go:117] "RemoveContainer" containerID="2cd22b3eea5fc258b0b900a9a83d32c32404ac37942ed4f023fb927ef9bfb301" Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.869767 5017 scope.go:117] "RemoveContainer" containerID="57cda62141b6acdb714618a2736cbb70b693148a728f3f7924614e720127c108" Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.884309 5017 scope.go:117] "RemoveContainer" containerID="7a49a7f353771517e8db8bc24d005720b0e7dcb915c7b93b3d84338c2ad6149c" Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.895660 5017 scope.go:117] "RemoveContainer" containerID="2cd22b3eea5fc258b0b900a9a83d32c32404ac37942ed4f023fb927ef9bfb301" Jan 29 06:40:10 crc kubenswrapper[5017]: E0129 06:40:10.896129 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2cd22b3eea5fc258b0b900a9a83d32c32404ac37942ed4f023fb927ef9bfb301\": container with ID starting with 2cd22b3eea5fc258b0b900a9a83d32c32404ac37942ed4f023fb927ef9bfb301 not found: ID does not exist" containerID="2cd22b3eea5fc258b0b900a9a83d32c32404ac37942ed4f023fb927ef9bfb301" Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.896168 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2cd22b3eea5fc258b0b900a9a83d32c32404ac37942ed4f023fb927ef9bfb301"} err="failed to get container status \"2cd22b3eea5fc258b0b900a9a83d32c32404ac37942ed4f023fb927ef9bfb301\": rpc error: code = NotFound desc = could not find container \"2cd22b3eea5fc258b0b900a9a83d32c32404ac37942ed4f023fb927ef9bfb301\": container with ID starting with 
2cd22b3eea5fc258b0b900a9a83d32c32404ac37942ed4f023fb927ef9bfb301 not found: ID does not exist" Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.896200 5017 scope.go:117] "RemoveContainer" containerID="57cda62141b6acdb714618a2736cbb70b693148a728f3f7924614e720127c108" Jan 29 06:40:10 crc kubenswrapper[5017]: E0129 06:40:10.896526 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57cda62141b6acdb714618a2736cbb70b693148a728f3f7924614e720127c108\": container with ID starting with 57cda62141b6acdb714618a2736cbb70b693148a728f3f7924614e720127c108 not found: ID does not exist" containerID="57cda62141b6acdb714618a2736cbb70b693148a728f3f7924614e720127c108" Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.896549 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57cda62141b6acdb714618a2736cbb70b693148a728f3f7924614e720127c108"} err="failed to get container status \"57cda62141b6acdb714618a2736cbb70b693148a728f3f7924614e720127c108\": rpc error: code = NotFound desc = could not find container \"57cda62141b6acdb714618a2736cbb70b693148a728f3f7924614e720127c108\": container with ID starting with 57cda62141b6acdb714618a2736cbb70b693148a728f3f7924614e720127c108 not found: ID does not exist" Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.896562 5017 scope.go:117] "RemoveContainer" containerID="7a49a7f353771517e8db8bc24d005720b0e7dcb915c7b93b3d84338c2ad6149c" Jan 29 06:40:10 crc kubenswrapper[5017]: E0129 06:40:10.897252 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a49a7f353771517e8db8bc24d005720b0e7dcb915c7b93b3d84338c2ad6149c\": container with ID starting with 7a49a7f353771517e8db8bc24d005720b0e7dcb915c7b93b3d84338c2ad6149c not found: ID does not exist" containerID="7a49a7f353771517e8db8bc24d005720b0e7dcb915c7b93b3d84338c2ad6149c" Jan 29 06:40:10 crc kubenswrapper[5017]: I0129 06:40:10.897401 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a49a7f353771517e8db8bc24d005720b0e7dcb915c7b93b3d84338c2ad6149c"} err="failed to get container status \"7a49a7f353771517e8db8bc24d005720b0e7dcb915c7b93b3d84338c2ad6149c\": rpc error: code = NotFound desc = could not find container \"7a49a7f353771517e8db8bc24d005720b0e7dcb915c7b93b3d84338c2ad6149c\": container with ID starting with 7a49a7f353771517e8db8bc24d005720b0e7dcb915c7b93b3d84338c2ad6149c not found: ID does not exist" Jan 29 06:40:11 crc kubenswrapper[5017]: I0129 06:40:11.644734 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-6hp88" event={"ID":"3fc9269a-d09b-426d-988d-05995e1d4014","Type":"ContainerStarted","Data":"68e8ad9dce895f3e1b12373dcf0e387544aa865ad392f25ae32e178e960887d6"} Jan 29 06:40:11 crc kubenswrapper[5017]: I0129 06:40:11.644772 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-6hp88" event={"ID":"3fc9269a-d09b-426d-988d-05995e1d4014","Type":"ContainerStarted","Data":"d838dd3654b4367310fb1d17299944967e2e488ca1479ef1696daf1f05832a6f"} Jan 29 06:40:11 crc kubenswrapper[5017]: I0129 06:40:11.645193 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-6hp88" Jan 29 06:40:11 crc kubenswrapper[5017]: I0129 06:40:11.649916 5017 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-6hp88" Jan 29 06:40:11 crc kubenswrapper[5017]: I0129 06:40:11.662803 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-6hp88" podStartSLOduration=2.662780219 podStartE2EDuration="2.662780219s" podCreationTimestamp="2026-01-29 06:40:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:40:11.660708594 +0000 UTC m=+298.035156204" watchObservedRunningTime="2026-01-29 06:40:11.662780219 +0000 UTC m=+298.037227849" Jan 29 06:40:12 crc kubenswrapper[5017]: I0129 06:40:12.326250 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="402aa844-38d7-44aa-bfa8-8db490d3aa4b" path="/var/lib/kubelet/pods/402aa844-38d7-44aa-bfa8-8db490d3aa4b/volumes" Jan 29 06:40:12 crc kubenswrapper[5017]: I0129 06:40:12.327394 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="515e73a4-3f9f-40aa-bd4b-c4ac2d55f304" path="/var/lib/kubelet/pods/515e73a4-3f9f-40aa-bd4b-c4ac2d55f304/volumes" Jan 29 06:40:12 crc kubenswrapper[5017]: I0129 06:40:12.328175 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95fabedb-25f2-43b3-a1dc-907c7e3ad4c2" path="/var/lib/kubelet/pods/95fabedb-25f2-43b3-a1dc-907c7e3ad4c2/volumes" Jan 29 06:40:12 crc kubenswrapper[5017]: I0129 06:40:12.329614 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb138e77-1a05-4fd5-9fd7-47843ef6aa0b" path="/var/lib/kubelet/pods/bb138e77-1a05-4fd5-9fd7-47843ef6aa0b/volumes" Jan 29 06:40:12 crc kubenswrapper[5017]: I0129 06:40:12.330232 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8f46fb7-929f-4d96-a5ca-4fc475b78342" path="/var/lib/kubelet/pods/d8f46fb7-929f-4d96-a5ca-4fc475b78342/volumes" Jan 29 06:40:14 crc kubenswrapper[5017]: I0129 06:40:14.097222 5017 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 29 06:40:56 crc kubenswrapper[5017]: I0129 06:40:56.539121 5017 patch_prober.go:28] interesting pod/machine-config-daemon-895pl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 06:40:56 crc kubenswrapper[5017]: I0129 06:40:56.540088 5017 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 06:41:12 crc kubenswrapper[5017]: I0129 06:41:12.132176 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-wm4xz"] Jan 29 06:41:12 crc kubenswrapper[5017]: E0129 06:41:12.132874 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb138e77-1a05-4fd5-9fd7-47843ef6aa0b" containerName="marketplace-operator" Jan 29 06:41:12 crc kubenswrapper[5017]: I0129 06:41:12.132887 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb138e77-1a05-4fd5-9fd7-47843ef6aa0b" containerName="marketplace-operator" Jan 29 06:41:12 crc kubenswrapper[5017]: E0129 06:41:12.132896 5017 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="95fabedb-25f2-43b3-a1dc-907c7e3ad4c2" containerName="extract-utilities" Jan 29 06:41:12 crc kubenswrapper[5017]: I0129 06:41:12.132903 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="95fabedb-25f2-43b3-a1dc-907c7e3ad4c2" containerName="extract-utilities" Jan 29 06:41:12 crc kubenswrapper[5017]: E0129 06:41:12.132912 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="402aa844-38d7-44aa-bfa8-8db490d3aa4b" containerName="extract-content" Jan 29 06:41:12 crc kubenswrapper[5017]: I0129 06:41:12.132919 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="402aa844-38d7-44aa-bfa8-8db490d3aa4b" containerName="extract-content" Jan 29 06:41:12 crc kubenswrapper[5017]: E0129 06:41:12.132929 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8f46fb7-929f-4d96-a5ca-4fc475b78342" containerName="extract-utilities" Jan 29 06:41:12 crc kubenswrapper[5017]: I0129 06:41:12.132935 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8f46fb7-929f-4d96-a5ca-4fc475b78342" containerName="extract-utilities" Jan 29 06:41:12 crc kubenswrapper[5017]: E0129 06:41:12.132944 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95fabedb-25f2-43b3-a1dc-907c7e3ad4c2" containerName="registry-server" Jan 29 06:41:12 crc kubenswrapper[5017]: I0129 06:41:12.132969 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="95fabedb-25f2-43b3-a1dc-907c7e3ad4c2" containerName="registry-server" Jan 29 06:41:12 crc kubenswrapper[5017]: E0129 06:41:12.132978 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="515e73a4-3f9f-40aa-bd4b-c4ac2d55f304" containerName="extract-utilities" Jan 29 06:41:12 crc kubenswrapper[5017]: I0129 06:41:12.132984 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="515e73a4-3f9f-40aa-bd4b-c4ac2d55f304" containerName="extract-utilities" Jan 29 06:41:12 crc kubenswrapper[5017]: E0129 06:41:12.132993 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95fabedb-25f2-43b3-a1dc-907c7e3ad4c2" containerName="extract-content" Jan 29 06:41:12 crc kubenswrapper[5017]: I0129 06:41:12.132999 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="95fabedb-25f2-43b3-a1dc-907c7e3ad4c2" containerName="extract-content" Jan 29 06:41:12 crc kubenswrapper[5017]: E0129 06:41:12.133010 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="515e73a4-3f9f-40aa-bd4b-c4ac2d55f304" containerName="registry-server" Jan 29 06:41:12 crc kubenswrapper[5017]: I0129 06:41:12.133016 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="515e73a4-3f9f-40aa-bd4b-c4ac2d55f304" containerName="registry-server" Jan 29 06:41:12 crc kubenswrapper[5017]: E0129 06:41:12.133023 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="402aa844-38d7-44aa-bfa8-8db490d3aa4b" containerName="extract-utilities" Jan 29 06:41:12 crc kubenswrapper[5017]: I0129 06:41:12.133029 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="402aa844-38d7-44aa-bfa8-8db490d3aa4b" containerName="extract-utilities" Jan 29 06:41:12 crc kubenswrapper[5017]: E0129 06:41:12.133038 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="402aa844-38d7-44aa-bfa8-8db490d3aa4b" containerName="registry-server" Jan 29 06:41:12 crc kubenswrapper[5017]: I0129 06:41:12.133043 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="402aa844-38d7-44aa-bfa8-8db490d3aa4b" containerName="registry-server" Jan 29 06:41:12 crc kubenswrapper[5017]: E0129 06:41:12.133049 5017 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="515e73a4-3f9f-40aa-bd4b-c4ac2d55f304" containerName="extract-content" Jan 29 06:41:12 crc kubenswrapper[5017]: I0129 06:41:12.133055 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="515e73a4-3f9f-40aa-bd4b-c4ac2d55f304" containerName="extract-content" Jan 29 06:41:12 crc kubenswrapper[5017]: E0129 06:41:12.133067 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8f46fb7-929f-4d96-a5ca-4fc475b78342" containerName="registry-server" Jan 29 06:41:12 crc kubenswrapper[5017]: I0129 06:41:12.133074 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8f46fb7-929f-4d96-a5ca-4fc475b78342" containerName="registry-server" Jan 29 06:41:12 crc kubenswrapper[5017]: E0129 06:41:12.133083 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8f46fb7-929f-4d96-a5ca-4fc475b78342" containerName="extract-content" Jan 29 06:41:12 crc kubenswrapper[5017]: I0129 06:41:12.133088 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8f46fb7-929f-4d96-a5ca-4fc475b78342" containerName="extract-content" Jan 29 06:41:12 crc kubenswrapper[5017]: I0129 06:41:12.133175 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8f46fb7-929f-4d96-a5ca-4fc475b78342" containerName="registry-server" Jan 29 06:41:12 crc kubenswrapper[5017]: I0129 06:41:12.133184 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="515e73a4-3f9f-40aa-bd4b-c4ac2d55f304" containerName="registry-server" Jan 29 06:41:12 crc kubenswrapper[5017]: I0129 06:41:12.133195 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="402aa844-38d7-44aa-bfa8-8db490d3aa4b" containerName="registry-server" Jan 29 06:41:12 crc kubenswrapper[5017]: I0129 06:41:12.133201 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="95fabedb-25f2-43b3-a1dc-907c7e3ad4c2" containerName="registry-server" Jan 29 06:41:12 crc kubenswrapper[5017]: I0129 06:41:12.133209 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb138e77-1a05-4fd5-9fd7-47843ef6aa0b" containerName="marketplace-operator" Jan 29 06:41:12 crc kubenswrapper[5017]: I0129 06:41:12.133909 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wm4xz" Jan 29 06:41:12 crc kubenswrapper[5017]: I0129 06:41:12.136617 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 29 06:41:12 crc kubenswrapper[5017]: I0129 06:41:12.152284 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wm4xz"] Jan 29 06:41:12 crc kubenswrapper[5017]: I0129 06:41:12.231902 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/604dcc3c-6617-4c60-9cf7-c6d75ed77584-utilities\") pod \"certified-operators-wm4xz\" (UID: \"604dcc3c-6617-4c60-9cf7-c6d75ed77584\") " pod="openshift-marketplace/certified-operators-wm4xz" Jan 29 06:41:12 crc kubenswrapper[5017]: I0129 06:41:12.231996 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbctm\" (UniqueName: \"kubernetes.io/projected/604dcc3c-6617-4c60-9cf7-c6d75ed77584-kube-api-access-hbctm\") pod \"certified-operators-wm4xz\" (UID: \"604dcc3c-6617-4c60-9cf7-c6d75ed77584\") " pod="openshift-marketplace/certified-operators-wm4xz" Jan 29 06:41:12 crc kubenswrapper[5017]: I0129 06:41:12.232080 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/604dcc3c-6617-4c60-9cf7-c6d75ed77584-catalog-content\") pod \"certified-operators-wm4xz\" (UID: \"604dcc3c-6617-4c60-9cf7-c6d75ed77584\") " pod="openshift-marketplace/certified-operators-wm4xz" Jan 29 06:41:12 crc kubenswrapper[5017]: I0129 06:41:12.333515 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/604dcc3c-6617-4c60-9cf7-c6d75ed77584-utilities\") pod \"certified-operators-wm4xz\" (UID: \"604dcc3c-6617-4c60-9cf7-c6d75ed77584\") " pod="openshift-marketplace/certified-operators-wm4xz" Jan 29 06:41:12 crc kubenswrapper[5017]: I0129 06:41:12.333589 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbctm\" (UniqueName: \"kubernetes.io/projected/604dcc3c-6617-4c60-9cf7-c6d75ed77584-kube-api-access-hbctm\") pod \"certified-operators-wm4xz\" (UID: \"604dcc3c-6617-4c60-9cf7-c6d75ed77584\") " pod="openshift-marketplace/certified-operators-wm4xz" Jan 29 06:41:12 crc kubenswrapper[5017]: I0129 06:41:12.333634 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/604dcc3c-6617-4c60-9cf7-c6d75ed77584-catalog-content\") pod \"certified-operators-wm4xz\" (UID: \"604dcc3c-6617-4c60-9cf7-c6d75ed77584\") " pod="openshift-marketplace/certified-operators-wm4xz" Jan 29 06:41:12 crc kubenswrapper[5017]: I0129 06:41:12.333908 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-k52x8"] Jan 29 06:41:12 crc kubenswrapper[5017]: I0129 06:41:12.334016 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/604dcc3c-6617-4c60-9cf7-c6d75ed77584-utilities\") pod \"certified-operators-wm4xz\" (UID: \"604dcc3c-6617-4c60-9cf7-c6d75ed77584\") " pod="openshift-marketplace/certified-operators-wm4xz" Jan 29 06:41:12 crc kubenswrapper[5017]: I0129 06:41:12.334114 5017 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/604dcc3c-6617-4c60-9cf7-c6d75ed77584-catalog-content\") pod \"certified-operators-wm4xz\" (UID: \"604dcc3c-6617-4c60-9cf7-c6d75ed77584\") " pod="openshift-marketplace/certified-operators-wm4xz" Jan 29 06:41:12 crc kubenswrapper[5017]: I0129 06:41:12.335144 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-k52x8" Jan 29 06:41:12 crc kubenswrapper[5017]: I0129 06:41:12.338515 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 29 06:41:12 crc kubenswrapper[5017]: I0129 06:41:12.346073 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-k52x8"] Jan 29 06:41:12 crc kubenswrapper[5017]: I0129 06:41:12.368593 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbctm\" (UniqueName: \"kubernetes.io/projected/604dcc3c-6617-4c60-9cf7-c6d75ed77584-kube-api-access-hbctm\") pod \"certified-operators-wm4xz\" (UID: \"604dcc3c-6617-4c60-9cf7-c6d75ed77584\") " pod="openshift-marketplace/certified-operators-wm4xz" Jan 29 06:41:12 crc kubenswrapper[5017]: I0129 06:41:12.435144 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6edd9c7c-c85c-4d56-9d3b-cff10dd5bb6d-utilities\") pod \"community-operators-k52x8\" (UID: \"6edd9c7c-c85c-4d56-9d3b-cff10dd5bb6d\") " pod="openshift-marketplace/community-operators-k52x8" Jan 29 06:41:12 crc kubenswrapper[5017]: I0129 06:41:12.435199 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjh5p\" (UniqueName: \"kubernetes.io/projected/6edd9c7c-c85c-4d56-9d3b-cff10dd5bb6d-kube-api-access-rjh5p\") pod \"community-operators-k52x8\" (UID: \"6edd9c7c-c85c-4d56-9d3b-cff10dd5bb6d\") " pod="openshift-marketplace/community-operators-k52x8" Jan 29 06:41:12 crc kubenswrapper[5017]: I0129 06:41:12.435320 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6edd9c7c-c85c-4d56-9d3b-cff10dd5bb6d-catalog-content\") pod \"community-operators-k52x8\" (UID: \"6edd9c7c-c85c-4d56-9d3b-cff10dd5bb6d\") " pod="openshift-marketplace/community-operators-k52x8" Jan 29 06:41:12 crc kubenswrapper[5017]: I0129 06:41:12.450100 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wm4xz" Jan 29 06:41:12 crc kubenswrapper[5017]: I0129 06:41:12.536071 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6edd9c7c-c85c-4d56-9d3b-cff10dd5bb6d-utilities\") pod \"community-operators-k52x8\" (UID: \"6edd9c7c-c85c-4d56-9d3b-cff10dd5bb6d\") " pod="openshift-marketplace/community-operators-k52x8" Jan 29 06:41:12 crc kubenswrapper[5017]: I0129 06:41:12.536386 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjh5p\" (UniqueName: \"kubernetes.io/projected/6edd9c7c-c85c-4d56-9d3b-cff10dd5bb6d-kube-api-access-rjh5p\") pod \"community-operators-k52x8\" (UID: \"6edd9c7c-c85c-4d56-9d3b-cff10dd5bb6d\") " pod="openshift-marketplace/community-operators-k52x8" Jan 29 06:41:12 crc kubenswrapper[5017]: I0129 06:41:12.536443 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6edd9c7c-c85c-4d56-9d3b-cff10dd5bb6d-catalog-content\") pod \"community-operators-k52x8\" (UID: \"6edd9c7c-c85c-4d56-9d3b-cff10dd5bb6d\") " pod="openshift-marketplace/community-operators-k52x8" Jan 29 06:41:12 crc kubenswrapper[5017]: I0129 06:41:12.536910 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6edd9c7c-c85c-4d56-9d3b-cff10dd5bb6d-catalog-content\") pod \"community-operators-k52x8\" (UID: \"6edd9c7c-c85c-4d56-9d3b-cff10dd5bb6d\") " pod="openshift-marketplace/community-operators-k52x8" Jan 29 06:41:12 crc kubenswrapper[5017]: I0129 06:41:12.536946 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6edd9c7c-c85c-4d56-9d3b-cff10dd5bb6d-utilities\") pod \"community-operators-k52x8\" (UID: \"6edd9c7c-c85c-4d56-9d3b-cff10dd5bb6d\") " pod="openshift-marketplace/community-operators-k52x8" Jan 29 06:41:12 crc kubenswrapper[5017]: I0129 06:41:12.567237 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjh5p\" (UniqueName: \"kubernetes.io/projected/6edd9c7c-c85c-4d56-9d3b-cff10dd5bb6d-kube-api-access-rjh5p\") pod \"community-operators-k52x8\" (UID: \"6edd9c7c-c85c-4d56-9d3b-cff10dd5bb6d\") " pod="openshift-marketplace/community-operators-k52x8" Jan 29 06:41:12 crc kubenswrapper[5017]: I0129 06:41:12.650090 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-k52x8" Jan 29 06:41:12 crc kubenswrapper[5017]: I0129 06:41:12.690764 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wm4xz"] Jan 29 06:41:13 crc kubenswrapper[5017]: I0129 06:41:13.034374 5017 generic.go:334] "Generic (PLEG): container finished" podID="604dcc3c-6617-4c60-9cf7-c6d75ed77584" containerID="04cbbe99c61c34d7e2a60439ff5254cbc0dfe8a276807a766a41c99a92f4559e" exitCode=0 Jan 29 06:41:13 crc kubenswrapper[5017]: I0129 06:41:13.034452 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wm4xz" event={"ID":"604dcc3c-6617-4c60-9cf7-c6d75ed77584","Type":"ContainerDied","Data":"04cbbe99c61c34d7e2a60439ff5254cbc0dfe8a276807a766a41c99a92f4559e"} Jan 29 06:41:13 crc kubenswrapper[5017]: I0129 06:41:13.034794 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wm4xz" event={"ID":"604dcc3c-6617-4c60-9cf7-c6d75ed77584","Type":"ContainerStarted","Data":"83901fa0c3cb104471d0d9194faa6e09241a982224d464591cfd1c5d6d060def"} Jan 29 06:41:13 crc kubenswrapper[5017]: I0129 06:41:13.108673 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-k52x8"] Jan 29 06:41:13 crc kubenswrapper[5017]: W0129 06:41:13.108710 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6edd9c7c_c85c_4d56_9d3b_cff10dd5bb6d.slice/crio-70e5d5cbbe076d03d43a6999fb53978d5f522c3ac7c393b42a39e2605fcca2dc WatchSource:0}: Error finding container 70e5d5cbbe076d03d43a6999fb53978d5f522c3ac7c393b42a39e2605fcca2dc: Status 404 returned error can't find the container with id 70e5d5cbbe076d03d43a6999fb53978d5f522c3ac7c393b42a39e2605fcca2dc Jan 29 06:41:14 crc kubenswrapper[5017]: I0129 06:41:14.044211 5017 generic.go:334] "Generic (PLEG): container finished" podID="6edd9c7c-c85c-4d56-9d3b-cff10dd5bb6d" containerID="3b33535ae50a7bbabfa6deca04bcbe1f1a389c4db271952c2f8bb6147184ff0f" exitCode=0 Jan 29 06:41:14 crc kubenswrapper[5017]: I0129 06:41:14.044614 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k52x8" event={"ID":"6edd9c7c-c85c-4d56-9d3b-cff10dd5bb6d","Type":"ContainerDied","Data":"3b33535ae50a7bbabfa6deca04bcbe1f1a389c4db271952c2f8bb6147184ff0f"} Jan 29 06:41:14 crc kubenswrapper[5017]: I0129 06:41:14.044644 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k52x8" event={"ID":"6edd9c7c-c85c-4d56-9d3b-cff10dd5bb6d","Type":"ContainerStarted","Data":"70e5d5cbbe076d03d43a6999fb53978d5f522c3ac7c393b42a39e2605fcca2dc"} Jan 29 06:41:14 crc kubenswrapper[5017]: I0129 06:41:14.728736 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9vqc2"] Jan 29 06:41:14 crc kubenswrapper[5017]: I0129 06:41:14.729994 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9vqc2" Jan 29 06:41:14 crc kubenswrapper[5017]: I0129 06:41:14.732363 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 29 06:41:14 crc kubenswrapper[5017]: I0129 06:41:14.739796 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9vqc2"] Jan 29 06:41:14 crc kubenswrapper[5017]: I0129 06:41:14.772938 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rj45d\" (UniqueName: \"kubernetes.io/projected/a60ce73c-bc91-4900-8bd3-4abf463391bc-kube-api-access-rj45d\") pod \"redhat-marketplace-9vqc2\" (UID: \"a60ce73c-bc91-4900-8bd3-4abf463391bc\") " pod="openshift-marketplace/redhat-marketplace-9vqc2" Jan 29 06:41:14 crc kubenswrapper[5017]: I0129 06:41:14.773053 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a60ce73c-bc91-4900-8bd3-4abf463391bc-utilities\") pod \"redhat-marketplace-9vqc2\" (UID: \"a60ce73c-bc91-4900-8bd3-4abf463391bc\") " pod="openshift-marketplace/redhat-marketplace-9vqc2" Jan 29 06:41:14 crc kubenswrapper[5017]: I0129 06:41:14.773078 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a60ce73c-bc91-4900-8bd3-4abf463391bc-catalog-content\") pod \"redhat-marketplace-9vqc2\" (UID: \"a60ce73c-bc91-4900-8bd3-4abf463391bc\") " pod="openshift-marketplace/redhat-marketplace-9vqc2" Jan 29 06:41:14 crc kubenswrapper[5017]: I0129 06:41:14.874709 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a60ce73c-bc91-4900-8bd3-4abf463391bc-utilities\") pod \"redhat-marketplace-9vqc2\" (UID: \"a60ce73c-bc91-4900-8bd3-4abf463391bc\") " pod="openshift-marketplace/redhat-marketplace-9vqc2" Jan 29 06:41:14 crc kubenswrapper[5017]: I0129 06:41:14.874752 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a60ce73c-bc91-4900-8bd3-4abf463391bc-catalog-content\") pod \"redhat-marketplace-9vqc2\" (UID: \"a60ce73c-bc91-4900-8bd3-4abf463391bc\") " pod="openshift-marketplace/redhat-marketplace-9vqc2" Jan 29 06:41:14 crc kubenswrapper[5017]: I0129 06:41:14.874815 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rj45d\" (UniqueName: \"kubernetes.io/projected/a60ce73c-bc91-4900-8bd3-4abf463391bc-kube-api-access-rj45d\") pod \"redhat-marketplace-9vqc2\" (UID: \"a60ce73c-bc91-4900-8bd3-4abf463391bc\") " pod="openshift-marketplace/redhat-marketplace-9vqc2" Jan 29 06:41:14 crc kubenswrapper[5017]: I0129 06:41:14.875353 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a60ce73c-bc91-4900-8bd3-4abf463391bc-catalog-content\") pod \"redhat-marketplace-9vqc2\" (UID: \"a60ce73c-bc91-4900-8bd3-4abf463391bc\") " pod="openshift-marketplace/redhat-marketplace-9vqc2" Jan 29 06:41:14 crc kubenswrapper[5017]: I0129 06:41:14.875683 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a60ce73c-bc91-4900-8bd3-4abf463391bc-utilities\") pod \"redhat-marketplace-9vqc2\" (UID: 
\"a60ce73c-bc91-4900-8bd3-4abf463391bc\") " pod="openshift-marketplace/redhat-marketplace-9vqc2" Jan 29 06:41:14 crc kubenswrapper[5017]: I0129 06:41:14.905933 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rj45d\" (UniqueName: \"kubernetes.io/projected/a60ce73c-bc91-4900-8bd3-4abf463391bc-kube-api-access-rj45d\") pod \"redhat-marketplace-9vqc2\" (UID: \"a60ce73c-bc91-4900-8bd3-4abf463391bc\") " pod="openshift-marketplace/redhat-marketplace-9vqc2" Jan 29 06:41:14 crc kubenswrapper[5017]: I0129 06:41:14.926359 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hk4hv"] Jan 29 06:41:14 crc kubenswrapper[5017]: I0129 06:41:14.928000 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hk4hv" Jan 29 06:41:14 crc kubenswrapper[5017]: I0129 06:41:14.930003 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 29 06:41:14 crc kubenswrapper[5017]: I0129 06:41:14.939003 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hk4hv"] Jan 29 06:41:14 crc kubenswrapper[5017]: I0129 06:41:14.976206 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/005e775d-7652-4282-af00-35a890d012a2-utilities\") pod \"redhat-operators-hk4hv\" (UID: \"005e775d-7652-4282-af00-35a890d012a2\") " pod="openshift-marketplace/redhat-operators-hk4hv" Jan 29 06:41:14 crc kubenswrapper[5017]: I0129 06:41:14.976256 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtj64\" (UniqueName: \"kubernetes.io/projected/005e775d-7652-4282-af00-35a890d012a2-kube-api-access-xtj64\") pod \"redhat-operators-hk4hv\" (UID: \"005e775d-7652-4282-af00-35a890d012a2\") " pod="openshift-marketplace/redhat-operators-hk4hv" Jan 29 06:41:14 crc kubenswrapper[5017]: I0129 06:41:14.976304 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/005e775d-7652-4282-af00-35a890d012a2-catalog-content\") pod \"redhat-operators-hk4hv\" (UID: \"005e775d-7652-4282-af00-35a890d012a2\") " pod="openshift-marketplace/redhat-operators-hk4hv" Jan 29 06:41:15 crc kubenswrapper[5017]: I0129 06:41:15.045181 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9vqc2" Jan 29 06:41:15 crc kubenswrapper[5017]: I0129 06:41:15.053707 5017 generic.go:334] "Generic (PLEG): container finished" podID="6edd9c7c-c85c-4d56-9d3b-cff10dd5bb6d" containerID="ba2963f322bba9101b3964dc3e2c02236860569780ce80360592e8a927d988aa" exitCode=0 Jan 29 06:41:15 crc kubenswrapper[5017]: I0129 06:41:15.053806 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k52x8" event={"ID":"6edd9c7c-c85c-4d56-9d3b-cff10dd5bb6d","Type":"ContainerDied","Data":"ba2963f322bba9101b3964dc3e2c02236860569780ce80360592e8a927d988aa"} Jan 29 06:41:15 crc kubenswrapper[5017]: I0129 06:41:15.055888 5017 generic.go:334] "Generic (PLEG): container finished" podID="604dcc3c-6617-4c60-9cf7-c6d75ed77584" containerID="e54bf2e9fbe2655a10f3eb4653280a5a27d079fc5322f26214bbc0cbad070a82" exitCode=0 Jan 29 06:41:15 crc kubenswrapper[5017]: I0129 06:41:15.055933 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wm4xz" event={"ID":"604dcc3c-6617-4c60-9cf7-c6d75ed77584","Type":"ContainerDied","Data":"e54bf2e9fbe2655a10f3eb4653280a5a27d079fc5322f26214bbc0cbad070a82"} Jan 29 06:41:15 crc kubenswrapper[5017]: I0129 06:41:15.077382 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/005e775d-7652-4282-af00-35a890d012a2-utilities\") pod \"redhat-operators-hk4hv\" (UID: \"005e775d-7652-4282-af00-35a890d012a2\") " pod="openshift-marketplace/redhat-operators-hk4hv" Jan 29 06:41:15 crc kubenswrapper[5017]: I0129 06:41:15.077431 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtj64\" (UniqueName: \"kubernetes.io/projected/005e775d-7652-4282-af00-35a890d012a2-kube-api-access-xtj64\") pod \"redhat-operators-hk4hv\" (UID: \"005e775d-7652-4282-af00-35a890d012a2\") " pod="openshift-marketplace/redhat-operators-hk4hv" Jan 29 06:41:15 crc kubenswrapper[5017]: I0129 06:41:15.077473 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/005e775d-7652-4282-af00-35a890d012a2-catalog-content\") pod \"redhat-operators-hk4hv\" (UID: \"005e775d-7652-4282-af00-35a890d012a2\") " pod="openshift-marketplace/redhat-operators-hk4hv" Jan 29 06:41:15 crc kubenswrapper[5017]: I0129 06:41:15.077868 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/005e775d-7652-4282-af00-35a890d012a2-utilities\") pod \"redhat-operators-hk4hv\" (UID: \"005e775d-7652-4282-af00-35a890d012a2\") " pod="openshift-marketplace/redhat-operators-hk4hv" Jan 29 06:41:15 crc kubenswrapper[5017]: I0129 06:41:15.078192 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/005e775d-7652-4282-af00-35a890d012a2-catalog-content\") pod \"redhat-operators-hk4hv\" (UID: \"005e775d-7652-4282-af00-35a890d012a2\") " pod="openshift-marketplace/redhat-operators-hk4hv" Jan 29 06:41:15 crc kubenswrapper[5017]: I0129 06:41:15.093789 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtj64\" (UniqueName: \"kubernetes.io/projected/005e775d-7652-4282-af00-35a890d012a2-kube-api-access-xtj64\") pod \"redhat-operators-hk4hv\" (UID: \"005e775d-7652-4282-af00-35a890d012a2\") " pod="openshift-marketplace/redhat-operators-hk4hv" 
Jan 29 06:41:15 crc kubenswrapper[5017]: I0129 06:41:15.240674 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9vqc2"] Jan 29 06:41:15 crc kubenswrapper[5017]: I0129 06:41:15.250551 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hk4hv" Jan 29 06:41:15 crc kubenswrapper[5017]: I0129 06:41:15.439348 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hk4hv"] Jan 29 06:41:15 crc kubenswrapper[5017]: W0129 06:41:15.446869 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod005e775d_7652_4282_af00_35a890d012a2.slice/crio-d69d9c61b60053d29b5392f4e275f01677ddcdfd62199b939c9cd684c5612be1 WatchSource:0}: Error finding container d69d9c61b60053d29b5392f4e275f01677ddcdfd62199b939c9cd684c5612be1: Status 404 returned error can't find the container with id d69d9c61b60053d29b5392f4e275f01677ddcdfd62199b939c9cd684c5612be1 Jan 29 06:41:16 crc kubenswrapper[5017]: I0129 06:41:16.062940 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k52x8" event={"ID":"6edd9c7c-c85c-4d56-9d3b-cff10dd5bb6d","Type":"ContainerStarted","Data":"09ba79b5ff7495a25ab0fd567c34a2f4e58d7f8bd8a6cfcd079e6733a416c287"} Jan 29 06:41:16 crc kubenswrapper[5017]: I0129 06:41:16.065908 5017 generic.go:334] "Generic (PLEG): container finished" podID="a60ce73c-bc91-4900-8bd3-4abf463391bc" containerID="7aa27b386f0077aa5d2c52252f7e97a4fa53aa4f944c9e1c04dd1e2b6bc2dc95" exitCode=0 Jan 29 06:41:16 crc kubenswrapper[5017]: I0129 06:41:16.065979 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9vqc2" event={"ID":"a60ce73c-bc91-4900-8bd3-4abf463391bc","Type":"ContainerDied","Data":"7aa27b386f0077aa5d2c52252f7e97a4fa53aa4f944c9e1c04dd1e2b6bc2dc95"} Jan 29 06:41:16 crc kubenswrapper[5017]: I0129 06:41:16.065996 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9vqc2" event={"ID":"a60ce73c-bc91-4900-8bd3-4abf463391bc","Type":"ContainerStarted","Data":"50cc25d3ca54dd38dc6f7f07deaf3960d455211ffd2593e8984681c4a49537bf"} Jan 29 06:41:16 crc kubenswrapper[5017]: I0129 06:41:16.069537 5017 generic.go:334] "Generic (PLEG): container finished" podID="005e775d-7652-4282-af00-35a890d012a2" containerID="10c04b540c3ef35e55418d7b2a81121bdfe690ce4a3d73bdbb9aab768692b22b" exitCode=0 Jan 29 06:41:16 crc kubenswrapper[5017]: I0129 06:41:16.069615 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hk4hv" event={"ID":"005e775d-7652-4282-af00-35a890d012a2","Type":"ContainerDied","Data":"10c04b540c3ef35e55418d7b2a81121bdfe690ce4a3d73bdbb9aab768692b22b"} Jan 29 06:41:16 crc kubenswrapper[5017]: I0129 06:41:16.069791 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hk4hv" event={"ID":"005e775d-7652-4282-af00-35a890d012a2","Type":"ContainerStarted","Data":"d69d9c61b60053d29b5392f4e275f01677ddcdfd62199b939c9cd684c5612be1"} Jan 29 06:41:16 crc kubenswrapper[5017]: I0129 06:41:16.072600 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wm4xz" event={"ID":"604dcc3c-6617-4c60-9cf7-c6d75ed77584","Type":"ContainerStarted","Data":"21063e779dc5631f5ca1d5e5a01c4de29373c14ce82a8aa6650748798b5cc297"} Jan 29 06:41:16 crc kubenswrapper[5017]: 
I0129 06:41:16.088286 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-k52x8" podStartSLOduration=2.6149587309999998 podStartE2EDuration="4.088258694s" podCreationTimestamp="2026-01-29 06:41:12 +0000 UTC" firstStartedPulling="2026-01-29 06:41:14.046276996 +0000 UTC m=+360.420724606" lastFinishedPulling="2026-01-29 06:41:15.519576969 +0000 UTC m=+361.894024569" observedRunningTime="2026-01-29 06:41:16.084462161 +0000 UTC m=+362.458909791" watchObservedRunningTime="2026-01-29 06:41:16.088258694 +0000 UTC m=+362.462706304" Jan 29 06:41:16 crc kubenswrapper[5017]: I0129 06:41:16.109264 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-wm4xz" podStartSLOduration=1.695281547 podStartE2EDuration="4.109228799s" podCreationTimestamp="2026-01-29 06:41:12 +0000 UTC" firstStartedPulling="2026-01-29 06:41:13.035838872 +0000 UTC m=+359.410286482" lastFinishedPulling="2026-01-29 06:41:15.449786124 +0000 UTC m=+361.824233734" observedRunningTime="2026-01-29 06:41:16.104018711 +0000 UTC m=+362.478466341" watchObservedRunningTime="2026-01-29 06:41:16.109228799 +0000 UTC m=+362.483676419" Jan 29 06:41:17 crc kubenswrapper[5017]: I0129 06:41:17.080085 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9vqc2" event={"ID":"a60ce73c-bc91-4900-8bd3-4abf463391bc","Type":"ContainerStarted","Data":"5eac32acad2e24e42fac2f232a50650216e4b6d1414e6aa2b8d6bf54341fb755"} Jan 29 06:41:17 crc kubenswrapper[5017]: I0129 06:41:17.083244 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hk4hv" event={"ID":"005e775d-7652-4282-af00-35a890d012a2","Type":"ContainerStarted","Data":"91ac210d896f0c8876257cec3f1661d0c3ece594b488ab8f2c1b510b41e91d49"} Jan 29 06:41:18 crc kubenswrapper[5017]: I0129 06:41:18.090350 5017 generic.go:334] "Generic (PLEG): container finished" podID="a60ce73c-bc91-4900-8bd3-4abf463391bc" containerID="5eac32acad2e24e42fac2f232a50650216e4b6d1414e6aa2b8d6bf54341fb755" exitCode=0 Jan 29 06:41:18 crc kubenswrapper[5017]: I0129 06:41:18.090410 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9vqc2" event={"ID":"a60ce73c-bc91-4900-8bd3-4abf463391bc","Type":"ContainerDied","Data":"5eac32acad2e24e42fac2f232a50650216e4b6d1414e6aa2b8d6bf54341fb755"} Jan 29 06:41:18 crc kubenswrapper[5017]: I0129 06:41:18.093419 5017 generic.go:334] "Generic (PLEG): container finished" podID="005e775d-7652-4282-af00-35a890d012a2" containerID="91ac210d896f0c8876257cec3f1661d0c3ece594b488ab8f2c1b510b41e91d49" exitCode=0 Jan 29 06:41:18 crc kubenswrapper[5017]: I0129 06:41:18.093457 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hk4hv" event={"ID":"005e775d-7652-4282-af00-35a890d012a2","Type":"ContainerDied","Data":"91ac210d896f0c8876257cec3f1661d0c3ece594b488ab8f2c1b510b41e91d49"} Jan 29 06:41:18 crc kubenswrapper[5017]: E0129 06:41:18.244181 5017 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod604dcc3c_6617_4c60_9cf7_c6d75ed77584.slice/crio-e54bf2e9fbe2655a10f3eb4653280a5a27d079fc5322f26214bbc0cbad070a82.scope\": RecentStats: unable to find data in memory cache]" Jan 29 06:41:19 crc kubenswrapper[5017]: I0129 06:41:19.102763 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-hk4hv" event={"ID":"005e775d-7652-4282-af00-35a890d012a2","Type":"ContainerStarted","Data":"9be33a2ede2cc63c2773b97d91c8166740332fa49a98a59f7a93e704caa19138"} Jan 29 06:41:19 crc kubenswrapper[5017]: I0129 06:41:19.107373 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9vqc2" event={"ID":"a60ce73c-bc91-4900-8bd3-4abf463391bc","Type":"ContainerStarted","Data":"e03e6a25e54a6e0bf5cc494c1e1d3bbafd3fb7725cef6c2cc67eaa9d0a837697"} Jan 29 06:41:19 crc kubenswrapper[5017]: I0129 06:41:19.152444 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hk4hv" podStartSLOduration=2.7322362289999997 podStartE2EDuration="5.152409715s" podCreationTimestamp="2026-01-29 06:41:14 +0000 UTC" firstStartedPulling="2026-01-29 06:41:16.070881727 +0000 UTC m=+362.445329337" lastFinishedPulling="2026-01-29 06:41:18.491055203 +0000 UTC m=+364.865502823" observedRunningTime="2026-01-29 06:41:19.128821366 +0000 UTC m=+365.503268976" watchObservedRunningTime="2026-01-29 06:41:19.152409715 +0000 UTC m=+365.526857325" Jan 29 06:41:19 crc kubenswrapper[5017]: I0129 06:41:19.153603 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9vqc2" podStartSLOduration=2.7152597419999998 podStartE2EDuration="5.153593014s" podCreationTimestamp="2026-01-29 06:41:14 +0000 UTC" firstStartedPulling="2026-01-29 06:41:16.067505294 +0000 UTC m=+362.441952914" lastFinishedPulling="2026-01-29 06:41:18.505838566 +0000 UTC m=+364.880286186" observedRunningTime="2026-01-29 06:41:19.149540045 +0000 UTC m=+365.523987665" watchObservedRunningTime="2026-01-29 06:41:19.153593014 +0000 UTC m=+365.528040614" Jan 29 06:41:22 crc kubenswrapper[5017]: I0129 06:41:22.450854 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-wm4xz" Jan 29 06:41:22 crc kubenswrapper[5017]: I0129 06:41:22.451364 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-wm4xz" Jan 29 06:41:22 crc kubenswrapper[5017]: I0129 06:41:22.494222 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-wm4xz" Jan 29 06:41:22 crc kubenswrapper[5017]: I0129 06:41:22.650626 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-k52x8" Jan 29 06:41:22 crc kubenswrapper[5017]: I0129 06:41:22.650692 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-k52x8" Jan 29 06:41:22 crc kubenswrapper[5017]: I0129 06:41:22.687950 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-k52x8" Jan 29 06:41:23 crc kubenswrapper[5017]: I0129 06:41:23.170978 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-k52x8" Jan 29 06:41:23 crc kubenswrapper[5017]: I0129 06:41:23.192021 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-wm4xz" Jan 29 06:41:25 crc kubenswrapper[5017]: I0129 06:41:25.046210 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9vqc2" Jan 29 06:41:25 crc kubenswrapper[5017]: 
I0129 06:41:25.046925 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9vqc2" Jan 29 06:41:25 crc kubenswrapper[5017]: I0129 06:41:25.105894 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9vqc2" Jan 29 06:41:25 crc kubenswrapper[5017]: I0129 06:41:25.175499 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9vqc2" Jan 29 06:41:25 crc kubenswrapper[5017]: I0129 06:41:25.251599 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hk4hv" Jan 29 06:41:25 crc kubenswrapper[5017]: I0129 06:41:25.251661 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hk4hv" Jan 29 06:41:25 crc kubenswrapper[5017]: I0129 06:41:25.308916 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hk4hv" Jan 29 06:41:26 crc kubenswrapper[5017]: I0129 06:41:26.189916 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hk4hv" Jan 29 06:41:26 crc kubenswrapper[5017]: I0129 06:41:26.539950 5017 patch_prober.go:28] interesting pod/machine-config-daemon-895pl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 06:41:26 crc kubenswrapper[5017]: I0129 06:41:26.540077 5017 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 06:41:27 crc kubenswrapper[5017]: I0129 06:41:27.067872 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-66t44"] Jan 29 06:41:27 crc kubenswrapper[5017]: I0129 06:41:27.068900 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-66t44" Jan 29 06:41:27 crc kubenswrapper[5017]: I0129 06:41:27.084327 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-66t44"] Jan 29 06:41:27 crc kubenswrapper[5017]: I0129 06:41:27.151242 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/147e841e-ef71-4143-98df-89bf9e81239f-ca-trust-extracted\") pod \"image-registry-66df7c8f76-66t44\" (UID: \"147e841e-ef71-4143-98df-89bf9e81239f\") " pod="openshift-image-registry/image-registry-66df7c8f76-66t44" Jan 29 06:41:27 crc kubenswrapper[5017]: I0129 06:41:27.151324 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/147e841e-ef71-4143-98df-89bf9e81239f-installation-pull-secrets\") pod \"image-registry-66df7c8f76-66t44\" (UID: \"147e841e-ef71-4143-98df-89bf9e81239f\") " pod="openshift-image-registry/image-registry-66df7c8f76-66t44" Jan 29 06:41:27 crc kubenswrapper[5017]: I0129 06:41:27.151367 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-66t44\" (UID: \"147e841e-ef71-4143-98df-89bf9e81239f\") " pod="openshift-image-registry/image-registry-66df7c8f76-66t44" Jan 29 06:41:27 crc kubenswrapper[5017]: I0129 06:41:27.151414 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/147e841e-ef71-4143-98df-89bf9e81239f-trusted-ca\") pod \"image-registry-66df7c8f76-66t44\" (UID: \"147e841e-ef71-4143-98df-89bf9e81239f\") " pod="openshift-image-registry/image-registry-66df7c8f76-66t44" Jan 29 06:41:27 crc kubenswrapper[5017]: I0129 06:41:27.151449 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/147e841e-ef71-4143-98df-89bf9e81239f-registry-tls\") pod \"image-registry-66df7c8f76-66t44\" (UID: \"147e841e-ef71-4143-98df-89bf9e81239f\") " pod="openshift-image-registry/image-registry-66df7c8f76-66t44" Jan 29 06:41:27 crc kubenswrapper[5017]: I0129 06:41:27.151470 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pgph\" (UniqueName: \"kubernetes.io/projected/147e841e-ef71-4143-98df-89bf9e81239f-kube-api-access-2pgph\") pod \"image-registry-66df7c8f76-66t44\" (UID: \"147e841e-ef71-4143-98df-89bf9e81239f\") " pod="openshift-image-registry/image-registry-66df7c8f76-66t44" Jan 29 06:41:27 crc kubenswrapper[5017]: I0129 06:41:27.151499 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/147e841e-ef71-4143-98df-89bf9e81239f-bound-sa-token\") pod \"image-registry-66df7c8f76-66t44\" (UID: \"147e841e-ef71-4143-98df-89bf9e81239f\") " pod="openshift-image-registry/image-registry-66df7c8f76-66t44" Jan 29 06:41:27 crc kubenswrapper[5017]: I0129 06:41:27.151528 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/147e841e-ef71-4143-98df-89bf9e81239f-registry-certificates\") pod \"image-registry-66df7c8f76-66t44\" (UID: \"147e841e-ef71-4143-98df-89bf9e81239f\") " pod="openshift-image-registry/image-registry-66df7c8f76-66t44" Jan 29 06:41:27 crc kubenswrapper[5017]: I0129 06:41:27.178598 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-66t44\" (UID: \"147e841e-ef71-4143-98df-89bf9e81239f\") " pod="openshift-image-registry/image-registry-66df7c8f76-66t44" Jan 29 06:41:27 crc kubenswrapper[5017]: I0129 06:41:27.252713 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/147e841e-ef71-4143-98df-89bf9e81239f-trusted-ca\") pod \"image-registry-66df7c8f76-66t44\" (UID: \"147e841e-ef71-4143-98df-89bf9e81239f\") " pod="openshift-image-registry/image-registry-66df7c8f76-66t44" Jan 29 06:41:27 crc kubenswrapper[5017]: I0129 06:41:27.252773 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/147e841e-ef71-4143-98df-89bf9e81239f-registry-tls\") pod \"image-registry-66df7c8f76-66t44\" (UID: \"147e841e-ef71-4143-98df-89bf9e81239f\") " pod="openshift-image-registry/image-registry-66df7c8f76-66t44" Jan 29 06:41:27 crc kubenswrapper[5017]: I0129 06:41:27.252803 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pgph\" (UniqueName: \"kubernetes.io/projected/147e841e-ef71-4143-98df-89bf9e81239f-kube-api-access-2pgph\") pod \"image-registry-66df7c8f76-66t44\" (UID: \"147e841e-ef71-4143-98df-89bf9e81239f\") " pod="openshift-image-registry/image-registry-66df7c8f76-66t44" Jan 29 06:41:27 crc kubenswrapper[5017]: I0129 06:41:27.252831 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/147e841e-ef71-4143-98df-89bf9e81239f-bound-sa-token\") pod \"image-registry-66df7c8f76-66t44\" (UID: \"147e841e-ef71-4143-98df-89bf9e81239f\") " pod="openshift-image-registry/image-registry-66df7c8f76-66t44" Jan 29 06:41:27 crc kubenswrapper[5017]: I0129 06:41:27.252859 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/147e841e-ef71-4143-98df-89bf9e81239f-registry-certificates\") pod \"image-registry-66df7c8f76-66t44\" (UID: \"147e841e-ef71-4143-98df-89bf9e81239f\") " pod="openshift-image-registry/image-registry-66df7c8f76-66t44" Jan 29 06:41:27 crc kubenswrapper[5017]: I0129 06:41:27.252909 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/147e841e-ef71-4143-98df-89bf9e81239f-ca-trust-extracted\") pod \"image-registry-66df7c8f76-66t44\" (UID: \"147e841e-ef71-4143-98df-89bf9e81239f\") " pod="openshift-image-registry/image-registry-66df7c8f76-66t44" Jan 29 06:41:27 crc kubenswrapper[5017]: I0129 06:41:27.252933 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/147e841e-ef71-4143-98df-89bf9e81239f-installation-pull-secrets\") pod \"image-registry-66df7c8f76-66t44\" (UID: \"147e841e-ef71-4143-98df-89bf9e81239f\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-66t44" Jan 29 06:41:27 crc kubenswrapper[5017]: I0129 06:41:27.253654 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/147e841e-ef71-4143-98df-89bf9e81239f-ca-trust-extracted\") pod \"image-registry-66df7c8f76-66t44\" (UID: \"147e841e-ef71-4143-98df-89bf9e81239f\") " pod="openshift-image-registry/image-registry-66df7c8f76-66t44" Jan 29 06:41:27 crc kubenswrapper[5017]: I0129 06:41:27.254592 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/147e841e-ef71-4143-98df-89bf9e81239f-registry-certificates\") pod \"image-registry-66df7c8f76-66t44\" (UID: \"147e841e-ef71-4143-98df-89bf9e81239f\") " pod="openshift-image-registry/image-registry-66df7c8f76-66t44" Jan 29 06:41:27 crc kubenswrapper[5017]: I0129 06:41:27.254669 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/147e841e-ef71-4143-98df-89bf9e81239f-trusted-ca\") pod \"image-registry-66df7c8f76-66t44\" (UID: \"147e841e-ef71-4143-98df-89bf9e81239f\") " pod="openshift-image-registry/image-registry-66df7c8f76-66t44" Jan 29 06:41:27 crc kubenswrapper[5017]: I0129 06:41:27.259368 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/147e841e-ef71-4143-98df-89bf9e81239f-registry-tls\") pod \"image-registry-66df7c8f76-66t44\" (UID: \"147e841e-ef71-4143-98df-89bf9e81239f\") " pod="openshift-image-registry/image-registry-66df7c8f76-66t44" Jan 29 06:41:27 crc kubenswrapper[5017]: I0129 06:41:27.259375 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/147e841e-ef71-4143-98df-89bf9e81239f-installation-pull-secrets\") pod \"image-registry-66df7c8f76-66t44\" (UID: \"147e841e-ef71-4143-98df-89bf9e81239f\") " pod="openshift-image-registry/image-registry-66df7c8f76-66t44" Jan 29 06:41:27 crc kubenswrapper[5017]: I0129 06:41:27.271507 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/147e841e-ef71-4143-98df-89bf9e81239f-bound-sa-token\") pod \"image-registry-66df7c8f76-66t44\" (UID: \"147e841e-ef71-4143-98df-89bf9e81239f\") " pod="openshift-image-registry/image-registry-66df7c8f76-66t44" Jan 29 06:41:27 crc kubenswrapper[5017]: I0129 06:41:27.271671 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pgph\" (UniqueName: \"kubernetes.io/projected/147e841e-ef71-4143-98df-89bf9e81239f-kube-api-access-2pgph\") pod \"image-registry-66df7c8f76-66t44\" (UID: \"147e841e-ef71-4143-98df-89bf9e81239f\") " pod="openshift-image-registry/image-registry-66df7c8f76-66t44" Jan 29 06:41:27 crc kubenswrapper[5017]: I0129 06:41:27.386866 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-66t44" Jan 29 06:41:27 crc kubenswrapper[5017]: I0129 06:41:27.576665 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-66t44"] Jan 29 06:41:27 crc kubenswrapper[5017]: W0129 06:41:27.581024 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod147e841e_ef71_4143_98df_89bf9e81239f.slice/crio-4ed563969e2fb2ef18165d41ededa5eccc7d20f93f40f99c2c943e069bc70bf1 WatchSource:0}: Error finding container 4ed563969e2fb2ef18165d41ededa5eccc7d20f93f40f99c2c943e069bc70bf1: Status 404 returned error can't find the container with id 4ed563969e2fb2ef18165d41ededa5eccc7d20f93f40f99c2c943e069bc70bf1 Jan 29 06:41:28 crc kubenswrapper[5017]: I0129 06:41:28.160358 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-66t44" event={"ID":"147e841e-ef71-4143-98df-89bf9e81239f","Type":"ContainerStarted","Data":"77b37999dce8202f85644317cbc977fd2cf1fd54242df2780c168fe7c0f31f39"} Jan 29 06:41:28 crc kubenswrapper[5017]: I0129 06:41:28.160409 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-66t44" event={"ID":"147e841e-ef71-4143-98df-89bf9e81239f","Type":"ContainerStarted","Data":"4ed563969e2fb2ef18165d41ededa5eccc7d20f93f40f99c2c943e069bc70bf1"} Jan 29 06:41:28 crc kubenswrapper[5017]: I0129 06:41:28.160506 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-66t44" Jan 29 06:41:28 crc kubenswrapper[5017]: E0129 06:41:28.403139 5017 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod604dcc3c_6617_4c60_9cf7_c6d75ed77584.slice/crio-e54bf2e9fbe2655a10f3eb4653280a5a27d079fc5322f26214bbc0cbad070a82.scope\": RecentStats: unable to find data in memory cache]" Jan 29 06:41:38 crc kubenswrapper[5017]: E0129 06:41:38.581568 5017 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod604dcc3c_6617_4c60_9cf7_c6d75ed77584.slice/crio-e54bf2e9fbe2655a10f3eb4653280a5a27d079fc5322f26214bbc0cbad070a82.scope\": RecentStats: unable to find data in memory cache]" Jan 29 06:41:47 crc kubenswrapper[5017]: I0129 06:41:47.401891 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-66t44" Jan 29 06:41:47 crc kubenswrapper[5017]: I0129 06:41:47.442150 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-66t44" podStartSLOduration=20.442114782 podStartE2EDuration="20.442114782s" podCreationTimestamp="2026-01-29 06:41:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:41:28.198497234 +0000 UTC m=+374.572944884" watchObservedRunningTime="2026-01-29 06:41:47.442114782 +0000 UTC m=+393.816562392" Jan 29 06:41:47 crc kubenswrapper[5017]: I0129 06:41:47.501787 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-sckkt"] Jan 29 06:41:48 crc kubenswrapper[5017]: E0129 06:41:48.751221 5017 
cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod604dcc3c_6617_4c60_9cf7_c6d75ed77584.slice/crio-e54bf2e9fbe2655a10f3eb4653280a5a27d079fc5322f26214bbc0cbad070a82.scope\": RecentStats: unable to find data in memory cache]" Jan 29 06:41:56 crc kubenswrapper[5017]: I0129 06:41:56.539272 5017 patch_prober.go:28] interesting pod/machine-config-daemon-895pl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 06:41:56 crc kubenswrapper[5017]: I0129 06:41:56.540270 5017 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 06:41:56 crc kubenswrapper[5017]: I0129 06:41:56.540346 5017 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-895pl" Jan 29 06:41:56 crc kubenswrapper[5017]: I0129 06:41:56.541237 5017 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3028d53bd201fdd844bd103a5d0e85a942fa53d5fbdd5f5e360ee9ecec025248"} pod="openshift-machine-config-operator/machine-config-daemon-895pl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 06:41:56 crc kubenswrapper[5017]: I0129 06:41:56.541321 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" containerID="cri-o://3028d53bd201fdd844bd103a5d0e85a942fa53d5fbdd5f5e360ee9ecec025248" gracePeriod=600 Jan 29 06:41:57 crc kubenswrapper[5017]: I0129 06:41:57.392002 5017 generic.go:334] "Generic (PLEG): container finished" podID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerID="3028d53bd201fdd844bd103a5d0e85a942fa53d5fbdd5f5e360ee9ecec025248" exitCode=0 Jan 29 06:41:57 crc kubenswrapper[5017]: I0129 06:41:57.392100 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-895pl" event={"ID":"2672ef63-7861-4c3d-a1b4-03cc9d18f8e2","Type":"ContainerDied","Data":"3028d53bd201fdd844bd103a5d0e85a942fa53d5fbdd5f5e360ee9ecec025248"} Jan 29 06:41:57 crc kubenswrapper[5017]: I0129 06:41:57.392364 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-895pl" event={"ID":"2672ef63-7861-4c3d-a1b4-03cc9d18f8e2","Type":"ContainerStarted","Data":"dabff3371a6e63297fd61484134b61895e9de875b2decf10108c281561fa90e3"} Jan 29 06:41:57 crc kubenswrapper[5017]: I0129 06:41:57.392393 5017 scope.go:117] "RemoveContainer" containerID="ddd299f15e4338652a37e1e3f09560a50e6c59d7bf29e827462d060a56294438" Jan 29 06:41:58 crc kubenswrapper[5017]: E0129 06:41:58.876068 5017 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod604dcc3c_6617_4c60_9cf7_c6d75ed77584.slice/crio-e54bf2e9fbe2655a10f3eb4653280a5a27d079fc5322f26214bbc0cbad070a82.scope\": RecentStats: unable to find data in memory cache]" Jan 29 06:42:09 crc kubenswrapper[5017]: E0129 06:42:09.051865 5017 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod604dcc3c_6617_4c60_9cf7_c6d75ed77584.slice/crio-e54bf2e9fbe2655a10f3eb4653280a5a27d079fc5322f26214bbc0cbad070a82.scope\": RecentStats: unable to find data in memory cache]" Jan 29 06:42:12 crc kubenswrapper[5017]: I0129 06:42:12.559403 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-sckkt" podUID="4d05265b-0d73-42c3-be6a-12198c0109de" containerName="registry" containerID="cri-o://b9a253929f61c27e435066e5143807f3c89fda56df16c74dca7ded458da29d33" gracePeriod=30 Jan 29 06:42:12 crc kubenswrapper[5017]: I0129 06:42:12.944486 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-sckkt" Jan 29 06:42:12 crc kubenswrapper[5017]: I0129 06:42:12.983709 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4d05265b-0d73-42c3-be6a-12198c0109de-registry-tls\") pod \"4d05265b-0d73-42c3-be6a-12198c0109de\" (UID: \"4d05265b-0d73-42c3-be6a-12198c0109de\") " Jan 29 06:42:12 crc kubenswrapper[5017]: I0129 06:42:12.983778 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7tcmb\" (UniqueName: \"kubernetes.io/projected/4d05265b-0d73-42c3-be6a-12198c0109de-kube-api-access-7tcmb\") pod \"4d05265b-0d73-42c3-be6a-12198c0109de\" (UID: \"4d05265b-0d73-42c3-be6a-12198c0109de\") " Jan 29 06:42:12 crc kubenswrapper[5017]: I0129 06:42:12.983820 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4d05265b-0d73-42c3-be6a-12198c0109de-registry-certificates\") pod \"4d05265b-0d73-42c3-be6a-12198c0109de\" (UID: \"4d05265b-0d73-42c3-be6a-12198c0109de\") " Jan 29 06:42:12 crc kubenswrapper[5017]: I0129 06:42:12.983866 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4d05265b-0d73-42c3-be6a-12198c0109de-installation-pull-secrets\") pod \"4d05265b-0d73-42c3-be6a-12198c0109de\" (UID: \"4d05265b-0d73-42c3-be6a-12198c0109de\") " Jan 29 06:42:12 crc kubenswrapper[5017]: I0129 06:42:12.984254 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"4d05265b-0d73-42c3-be6a-12198c0109de\" (UID: \"4d05265b-0d73-42c3-be6a-12198c0109de\") " Jan 29 06:42:12 crc kubenswrapper[5017]: I0129 06:42:12.984309 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4d05265b-0d73-42c3-be6a-12198c0109de-bound-sa-token\") pod \"4d05265b-0d73-42c3-be6a-12198c0109de\" (UID: \"4d05265b-0d73-42c3-be6a-12198c0109de\") " Jan 29 06:42:12 crc kubenswrapper[5017]: I0129 06:42:12.984340 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4d05265b-0d73-42c3-be6a-12198c0109de-trusted-ca\") pod \"4d05265b-0d73-42c3-be6a-12198c0109de\" (UID: \"4d05265b-0d73-42c3-be6a-12198c0109de\") " Jan 29 06:42:12 crc kubenswrapper[5017]: I0129 06:42:12.984383 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4d05265b-0d73-42c3-be6a-12198c0109de-ca-trust-extracted\") pod \"4d05265b-0d73-42c3-be6a-12198c0109de\" (UID: \"4d05265b-0d73-42c3-be6a-12198c0109de\") " Jan 29 06:42:12 crc kubenswrapper[5017]: I0129 06:42:12.985445 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d05265b-0d73-42c3-be6a-12198c0109de-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "4d05265b-0d73-42c3-be6a-12198c0109de" (UID: "4d05265b-0d73-42c3-be6a-12198c0109de"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:42:12 crc kubenswrapper[5017]: I0129 06:42:12.985685 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d05265b-0d73-42c3-be6a-12198c0109de-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "4d05265b-0d73-42c3-be6a-12198c0109de" (UID: "4d05265b-0d73-42c3-be6a-12198c0109de"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:42:12 crc kubenswrapper[5017]: I0129 06:42:12.999160 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d05265b-0d73-42c3-be6a-12198c0109de-kube-api-access-7tcmb" (OuterVolumeSpecName: "kube-api-access-7tcmb") pod "4d05265b-0d73-42c3-be6a-12198c0109de" (UID: "4d05265b-0d73-42c3-be6a-12198c0109de"). InnerVolumeSpecName "kube-api-access-7tcmb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:42:12 crc kubenswrapper[5017]: I0129 06:42:12.999357 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d05265b-0d73-42c3-be6a-12198c0109de-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "4d05265b-0d73-42c3-be6a-12198c0109de" (UID: "4d05265b-0d73-42c3-be6a-12198c0109de"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:42:13 crc kubenswrapper[5017]: I0129 06:42:13.000576 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d05265b-0d73-42c3-be6a-12198c0109de-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "4d05265b-0d73-42c3-be6a-12198c0109de" (UID: "4d05265b-0d73-42c3-be6a-12198c0109de"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:42:13 crc kubenswrapper[5017]: I0129 06:42:13.003638 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d05265b-0d73-42c3-be6a-12198c0109de-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "4d05265b-0d73-42c3-be6a-12198c0109de" (UID: "4d05265b-0d73-42c3-be6a-12198c0109de"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:42:13 crc kubenswrapper[5017]: I0129 06:42:13.003883 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "4d05265b-0d73-42c3-be6a-12198c0109de" (UID: "4d05265b-0d73-42c3-be6a-12198c0109de"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 29 06:42:13 crc kubenswrapper[5017]: I0129 06:42:13.022041 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d05265b-0d73-42c3-be6a-12198c0109de-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "4d05265b-0d73-42c3-be6a-12198c0109de" (UID: "4d05265b-0d73-42c3-be6a-12198c0109de"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:42:13 crc kubenswrapper[5017]: I0129 06:42:13.086288 5017 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4d05265b-0d73-42c3-be6a-12198c0109de-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 29 06:42:13 crc kubenswrapper[5017]: I0129 06:42:13.086339 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7tcmb\" (UniqueName: \"kubernetes.io/projected/4d05265b-0d73-42c3-be6a-12198c0109de-kube-api-access-7tcmb\") on node \"crc\" DevicePath \"\"" Jan 29 06:42:13 crc kubenswrapper[5017]: I0129 06:42:13.086353 5017 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4d05265b-0d73-42c3-be6a-12198c0109de-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 29 06:42:13 crc kubenswrapper[5017]: I0129 06:42:13.086363 5017 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4d05265b-0d73-42c3-be6a-12198c0109de-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 29 06:42:13 crc kubenswrapper[5017]: I0129 06:42:13.086374 5017 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4d05265b-0d73-42c3-be6a-12198c0109de-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 29 06:42:13 crc kubenswrapper[5017]: I0129 06:42:13.086384 5017 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4d05265b-0d73-42c3-be6a-12198c0109de-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 06:42:13 crc kubenswrapper[5017]: I0129 06:42:13.086393 5017 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4d05265b-0d73-42c3-be6a-12198c0109de-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 29 06:42:13 crc kubenswrapper[5017]: I0129 06:42:13.521856 5017 generic.go:334] "Generic (PLEG): container finished" podID="4d05265b-0d73-42c3-be6a-12198c0109de" containerID="b9a253929f61c27e435066e5143807f3c89fda56df16c74dca7ded458da29d33" exitCode=0 Jan 29 06:42:13 crc kubenswrapper[5017]: I0129 06:42:13.521936 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-sckkt" event={"ID":"4d05265b-0d73-42c3-be6a-12198c0109de","Type":"ContainerDied","Data":"b9a253929f61c27e435066e5143807f3c89fda56df16c74dca7ded458da29d33"} Jan 29 06:42:13 crc kubenswrapper[5017]: I0129 06:42:13.521994 
5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-sckkt" Jan 29 06:42:13 crc kubenswrapper[5017]: I0129 06:42:13.522019 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-sckkt" event={"ID":"4d05265b-0d73-42c3-be6a-12198c0109de","Type":"ContainerDied","Data":"609dcd668a95023a89412529d38c94b828333ec0e0c480f12e693c3ad9d3cd04"} Jan 29 06:42:13 crc kubenswrapper[5017]: I0129 06:42:13.522057 5017 scope.go:117] "RemoveContainer" containerID="b9a253929f61c27e435066e5143807f3c89fda56df16c74dca7ded458da29d33" Jan 29 06:42:13 crc kubenswrapper[5017]: I0129 06:42:13.549375 5017 scope.go:117] "RemoveContainer" containerID="b9a253929f61c27e435066e5143807f3c89fda56df16c74dca7ded458da29d33" Jan 29 06:42:13 crc kubenswrapper[5017]: E0129 06:42:13.550344 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9a253929f61c27e435066e5143807f3c89fda56df16c74dca7ded458da29d33\": container with ID starting with b9a253929f61c27e435066e5143807f3c89fda56df16c74dca7ded458da29d33 not found: ID does not exist" containerID="b9a253929f61c27e435066e5143807f3c89fda56df16c74dca7ded458da29d33" Jan 29 06:42:13 crc kubenswrapper[5017]: I0129 06:42:13.550411 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9a253929f61c27e435066e5143807f3c89fda56df16c74dca7ded458da29d33"} err="failed to get container status \"b9a253929f61c27e435066e5143807f3c89fda56df16c74dca7ded458da29d33\": rpc error: code = NotFound desc = could not find container \"b9a253929f61c27e435066e5143807f3c89fda56df16c74dca7ded458da29d33\": container with ID starting with b9a253929f61c27e435066e5143807f3c89fda56df16c74dca7ded458da29d33 not found: ID does not exist" Jan 29 06:42:13 crc kubenswrapper[5017]: I0129 06:42:13.575891 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-sckkt"] Jan 29 06:42:13 crc kubenswrapper[5017]: I0129 06:42:13.583916 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-sckkt"] Jan 29 06:42:14 crc kubenswrapper[5017]: I0129 06:42:14.345941 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d05265b-0d73-42c3-be6a-12198c0109de" path="/var/lib/kubelet/pods/4d05265b-0d73-42c3-be6a-12198c0109de/volumes" Jan 29 06:43:56 crc kubenswrapper[5017]: I0129 06:43:56.539568 5017 patch_prober.go:28] interesting pod/machine-config-daemon-895pl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 06:43:56 crc kubenswrapper[5017]: I0129 06:43:56.540460 5017 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 06:44:26 crc kubenswrapper[5017]: I0129 06:44:26.539783 5017 patch_prober.go:28] interesting pod/machine-config-daemon-895pl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 06:44:26 crc kubenswrapper[5017]: I0129 06:44:26.542351 5017 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 06:44:56 crc kubenswrapper[5017]: I0129 06:44:56.540102 5017 patch_prober.go:28] interesting pod/machine-config-daemon-895pl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 06:44:56 crc kubenswrapper[5017]: I0129 06:44:56.541032 5017 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 06:44:56 crc kubenswrapper[5017]: I0129 06:44:56.541110 5017 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-895pl" Jan 29 06:44:56 crc kubenswrapper[5017]: I0129 06:44:56.542228 5017 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"dabff3371a6e63297fd61484134b61895e9de875b2decf10108c281561fa90e3"} pod="openshift-machine-config-operator/machine-config-daemon-895pl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 06:44:56 crc kubenswrapper[5017]: I0129 06:44:56.542335 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" containerID="cri-o://dabff3371a6e63297fd61484134b61895e9de875b2decf10108c281561fa90e3" gracePeriod=600 Jan 29 06:44:56 crc kubenswrapper[5017]: I0129 06:44:56.870625 5017 generic.go:334] "Generic (PLEG): container finished" podID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerID="dabff3371a6e63297fd61484134b61895e9de875b2decf10108c281561fa90e3" exitCode=0 Jan 29 06:44:56 crc kubenswrapper[5017]: I0129 06:44:56.870719 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-895pl" event={"ID":"2672ef63-7861-4c3d-a1b4-03cc9d18f8e2","Type":"ContainerDied","Data":"dabff3371a6e63297fd61484134b61895e9de875b2decf10108c281561fa90e3"} Jan 29 06:44:56 crc kubenswrapper[5017]: I0129 06:44:56.871374 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-895pl" event={"ID":"2672ef63-7861-4c3d-a1b4-03cc9d18f8e2","Type":"ContainerStarted","Data":"b2452b80368892b3776d55e2c528464f9c2a090264bafafd3c6ec1fc4c343226"} Jan 29 06:44:56 crc kubenswrapper[5017]: I0129 06:44:56.871418 5017 scope.go:117] "RemoveContainer" containerID="3028d53bd201fdd844bd103a5d0e85a942fa53d5fbdd5f5e360ee9ecec025248" Jan 29 06:45:00 crc kubenswrapper[5017]: I0129 06:45:00.203293 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494485-pqldc"] Jan 29 06:45:00 crc 
kubenswrapper[5017]: E0129 06:45:00.204076 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d05265b-0d73-42c3-be6a-12198c0109de" containerName="registry" Jan 29 06:45:00 crc kubenswrapper[5017]: I0129 06:45:00.204097 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d05265b-0d73-42c3-be6a-12198c0109de" containerName="registry" Jan 29 06:45:00 crc kubenswrapper[5017]: I0129 06:45:00.204207 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d05265b-0d73-42c3-be6a-12198c0109de" containerName="registry" Jan 29 06:45:00 crc kubenswrapper[5017]: I0129 06:45:00.204694 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494485-pqldc" Jan 29 06:45:00 crc kubenswrapper[5017]: I0129 06:45:00.208164 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 29 06:45:00 crc kubenswrapper[5017]: I0129 06:45:00.209028 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 29 06:45:00 crc kubenswrapper[5017]: I0129 06:45:00.223160 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494485-pqldc"] Jan 29 06:45:00 crc kubenswrapper[5017]: I0129 06:45:00.368285 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cb349dfb-25a6-4a65-b09e-c237a5369ea2-secret-volume\") pod \"collect-profiles-29494485-pqldc\" (UID: \"cb349dfb-25a6-4a65-b09e-c237a5369ea2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494485-pqldc" Jan 29 06:45:00 crc kubenswrapper[5017]: I0129 06:45:00.368496 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2vzl\" (UniqueName: \"kubernetes.io/projected/cb349dfb-25a6-4a65-b09e-c237a5369ea2-kube-api-access-n2vzl\") pod \"collect-profiles-29494485-pqldc\" (UID: \"cb349dfb-25a6-4a65-b09e-c237a5369ea2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494485-pqldc" Jan 29 06:45:00 crc kubenswrapper[5017]: I0129 06:45:00.368551 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cb349dfb-25a6-4a65-b09e-c237a5369ea2-config-volume\") pod \"collect-profiles-29494485-pqldc\" (UID: \"cb349dfb-25a6-4a65-b09e-c237a5369ea2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494485-pqldc" Jan 29 06:45:00 crc kubenswrapper[5017]: I0129 06:45:00.469446 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cb349dfb-25a6-4a65-b09e-c237a5369ea2-secret-volume\") pod \"collect-profiles-29494485-pqldc\" (UID: \"cb349dfb-25a6-4a65-b09e-c237a5369ea2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494485-pqldc" Jan 29 06:45:00 crc kubenswrapper[5017]: I0129 06:45:00.469543 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2vzl\" (UniqueName: \"kubernetes.io/projected/cb349dfb-25a6-4a65-b09e-c237a5369ea2-kube-api-access-n2vzl\") pod \"collect-profiles-29494485-pqldc\" (UID: \"cb349dfb-25a6-4a65-b09e-c237a5369ea2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494485-pqldc" 
Jan 29 06:45:00 crc kubenswrapper[5017]: I0129 06:45:00.469582 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cb349dfb-25a6-4a65-b09e-c237a5369ea2-config-volume\") pod \"collect-profiles-29494485-pqldc\" (UID: \"cb349dfb-25a6-4a65-b09e-c237a5369ea2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494485-pqldc" Jan 29 06:45:00 crc kubenswrapper[5017]: I0129 06:45:00.470689 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cb349dfb-25a6-4a65-b09e-c237a5369ea2-config-volume\") pod \"collect-profiles-29494485-pqldc\" (UID: \"cb349dfb-25a6-4a65-b09e-c237a5369ea2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494485-pqldc" Jan 29 06:45:00 crc kubenswrapper[5017]: I0129 06:45:00.483593 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cb349dfb-25a6-4a65-b09e-c237a5369ea2-secret-volume\") pod \"collect-profiles-29494485-pqldc\" (UID: \"cb349dfb-25a6-4a65-b09e-c237a5369ea2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494485-pqldc" Jan 29 06:45:00 crc kubenswrapper[5017]: I0129 06:45:00.503297 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2vzl\" (UniqueName: \"kubernetes.io/projected/cb349dfb-25a6-4a65-b09e-c237a5369ea2-kube-api-access-n2vzl\") pod \"collect-profiles-29494485-pqldc\" (UID: \"cb349dfb-25a6-4a65-b09e-c237a5369ea2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494485-pqldc" Jan 29 06:45:00 crc kubenswrapper[5017]: I0129 06:45:00.532722 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494485-pqldc" Jan 29 06:45:00 crc kubenswrapper[5017]: I0129 06:45:00.798401 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494485-pqldc"] Jan 29 06:45:00 crc kubenswrapper[5017]: W0129 06:45:00.812284 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcb349dfb_25a6_4a65_b09e_c237a5369ea2.slice/crio-d723f53f52bcc28884fda913f0487197f953c93b0efceb4f39c920aeb1601d60 WatchSource:0}: Error finding container d723f53f52bcc28884fda913f0487197f953c93b0efceb4f39c920aeb1601d60: Status 404 returned error can't find the container with id d723f53f52bcc28884fda913f0487197f953c93b0efceb4f39c920aeb1601d60 Jan 29 06:45:00 crc kubenswrapper[5017]: I0129 06:45:00.909645 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494485-pqldc" event={"ID":"cb349dfb-25a6-4a65-b09e-c237a5369ea2","Type":"ContainerStarted","Data":"d723f53f52bcc28884fda913f0487197f953c93b0efceb4f39c920aeb1601d60"} Jan 29 06:45:01 crc kubenswrapper[5017]: I0129 06:45:01.916791 5017 generic.go:334] "Generic (PLEG): container finished" podID="cb349dfb-25a6-4a65-b09e-c237a5369ea2" containerID="cf0045d22bb998156763c4ec75b918062dddf28012b2e870b5e5a811dd76bb2b" exitCode=0 Jan 29 06:45:01 crc kubenswrapper[5017]: I0129 06:45:01.917272 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494485-pqldc" event={"ID":"cb349dfb-25a6-4a65-b09e-c237a5369ea2","Type":"ContainerDied","Data":"cf0045d22bb998156763c4ec75b918062dddf28012b2e870b5e5a811dd76bb2b"} Jan 29 06:45:03 crc kubenswrapper[5017]: I0129 06:45:03.235231 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494485-pqldc" Jan 29 06:45:03 crc kubenswrapper[5017]: I0129 06:45:03.411982 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cb349dfb-25a6-4a65-b09e-c237a5369ea2-secret-volume\") pod \"cb349dfb-25a6-4a65-b09e-c237a5369ea2\" (UID: \"cb349dfb-25a6-4a65-b09e-c237a5369ea2\") " Jan 29 06:45:03 crc kubenswrapper[5017]: I0129 06:45:03.412216 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n2vzl\" (UniqueName: \"kubernetes.io/projected/cb349dfb-25a6-4a65-b09e-c237a5369ea2-kube-api-access-n2vzl\") pod \"cb349dfb-25a6-4a65-b09e-c237a5369ea2\" (UID: \"cb349dfb-25a6-4a65-b09e-c237a5369ea2\") " Jan 29 06:45:03 crc kubenswrapper[5017]: I0129 06:45:03.412709 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cb349dfb-25a6-4a65-b09e-c237a5369ea2-config-volume\") pod \"cb349dfb-25a6-4a65-b09e-c237a5369ea2\" (UID: \"cb349dfb-25a6-4a65-b09e-c237a5369ea2\") " Jan 29 06:45:03 crc kubenswrapper[5017]: I0129 06:45:03.413774 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb349dfb-25a6-4a65-b09e-c237a5369ea2-config-volume" (OuterVolumeSpecName: "config-volume") pod "cb349dfb-25a6-4a65-b09e-c237a5369ea2" (UID: "cb349dfb-25a6-4a65-b09e-c237a5369ea2"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:45:03 crc kubenswrapper[5017]: I0129 06:45:03.420816 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb349dfb-25a6-4a65-b09e-c237a5369ea2-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "cb349dfb-25a6-4a65-b09e-c237a5369ea2" (UID: "cb349dfb-25a6-4a65-b09e-c237a5369ea2"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:45:03 crc kubenswrapper[5017]: I0129 06:45:03.422902 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb349dfb-25a6-4a65-b09e-c237a5369ea2-kube-api-access-n2vzl" (OuterVolumeSpecName: "kube-api-access-n2vzl") pod "cb349dfb-25a6-4a65-b09e-c237a5369ea2" (UID: "cb349dfb-25a6-4a65-b09e-c237a5369ea2"). InnerVolumeSpecName "kube-api-access-n2vzl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:45:03 crc kubenswrapper[5017]: I0129 06:45:03.514048 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n2vzl\" (UniqueName: \"kubernetes.io/projected/cb349dfb-25a6-4a65-b09e-c237a5369ea2-kube-api-access-n2vzl\") on node \"crc\" DevicePath \"\"" Jan 29 06:45:03 crc kubenswrapper[5017]: I0129 06:45:03.514085 5017 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cb349dfb-25a6-4a65-b09e-c237a5369ea2-config-volume\") on node \"crc\" DevicePath \"\"" Jan 29 06:45:03 crc kubenswrapper[5017]: I0129 06:45:03.514095 5017 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cb349dfb-25a6-4a65-b09e-c237a5369ea2-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 29 06:45:03 crc kubenswrapper[5017]: I0129 06:45:03.931153 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494485-pqldc" event={"ID":"cb349dfb-25a6-4a65-b09e-c237a5369ea2","Type":"ContainerDied","Data":"d723f53f52bcc28884fda913f0487197f953c93b0efceb4f39c920aeb1601d60"} Jan 29 06:45:03 crc kubenswrapper[5017]: I0129 06:45:03.931202 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d723f53f52bcc28884fda913f0487197f953c93b0efceb4f39c920aeb1601d60" Jan 29 06:45:03 crc kubenswrapper[5017]: I0129 06:45:03.931223 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494485-pqldc" Jan 29 06:46:10 crc kubenswrapper[5017]: I0129 06:46:10.782429 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-wqgmk"] Jan 29 06:46:10 crc kubenswrapper[5017]: I0129 06:46:10.783628 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" podUID="02dd5727-894c-4693-9bc7-83dd88ce118c" containerName="ovn-controller" containerID="cri-o://d6a16eea25767bb4fea94880a90ae883944934c09ff5ba0ced9641169691169d" gracePeriod=30 Jan 29 06:46:10 crc kubenswrapper[5017]: I0129 06:46:10.783815 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" podUID="02dd5727-894c-4693-9bc7-83dd88ce118c" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://7eec8c502e2516637a85d2b83f7f745f9cc5d6a420f15cea465d92f71e0b8e71" gracePeriod=30 Jan 29 06:46:10 crc kubenswrapper[5017]: I0129 06:46:10.783756 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" podUID="02dd5727-894c-4693-9bc7-83dd88ce118c" containerName="northd" containerID="cri-o://fc8fc0d9e440c8725e279e6649d3c1e2cdc55589f6b5786187602576e4acd835" gracePeriod=30 Jan 29 06:46:10 crc kubenswrapper[5017]: I0129 06:46:10.783883 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" podUID="02dd5727-894c-4693-9bc7-83dd88ce118c" containerName="kube-rbac-proxy-node" containerID="cri-o://177a3e8a7f63905262ffb4ea5f98c4fa97a8d48dd2ff075484a44ea87d66a072" gracePeriod=30 Jan 29 06:46:10 crc kubenswrapper[5017]: I0129 06:46:10.783833 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" podUID="02dd5727-894c-4693-9bc7-83dd88ce118c" containerName="nbdb" containerID="cri-o://a1fb588776e0029263904fa0ec3afa53e180b30843864f1dee6999e1cca64b18" gracePeriod=30 Jan 29 06:46:10 crc kubenswrapper[5017]: I0129 06:46:10.783912 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" podUID="02dd5727-894c-4693-9bc7-83dd88ce118c" containerName="ovn-acl-logging" containerID="cri-o://749973b1dc2bcf3fc4e35277f0893a240b3571cadc45529da1229cd7ebe1748e" gracePeriod=30 Jan 29 06:46:10 crc kubenswrapper[5017]: I0129 06:46:10.783831 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" podUID="02dd5727-894c-4693-9bc7-83dd88ce118c" containerName="sbdb" containerID="cri-o://f1e341b7acf35edc3ee9e5a4fec11e2270d45ea0f05df30543d2c9163b024646" gracePeriod=30 Jan 29 06:46:10 crc kubenswrapper[5017]: I0129 06:46:10.826946 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" podUID="02dd5727-894c-4693-9bc7-83dd88ce118c" containerName="ovnkube-controller" containerID="cri-o://a1cfa089e161c584c6087c42a29c10704135ccb63e4d59248cf9966e6f95c2a1" gracePeriod=30 Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.244102 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wqgmk_02dd5727-894c-4693-9bc7-83dd88ce118c/ovnkube-controller/3.log" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.247575 5017 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wqgmk_02dd5727-894c-4693-9bc7-83dd88ce118c/ovn-acl-logging/0.log" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.248188 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wqgmk_02dd5727-894c-4693-9bc7-83dd88ce118c/ovn-controller/0.log" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.248844 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.305490 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-xsxkf"] Jan 29 06:46:11 crc kubenswrapper[5017]: E0129 06:46:11.305751 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02dd5727-894c-4693-9bc7-83dd88ce118c" containerName="kubecfg-setup" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.305767 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="02dd5727-894c-4693-9bc7-83dd88ce118c" containerName="kubecfg-setup" Jan 29 06:46:11 crc kubenswrapper[5017]: E0129 06:46:11.305781 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02dd5727-894c-4693-9bc7-83dd88ce118c" containerName="northd" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.305787 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="02dd5727-894c-4693-9bc7-83dd88ce118c" containerName="northd" Jan 29 06:46:11 crc kubenswrapper[5017]: E0129 06:46:11.305795 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02dd5727-894c-4693-9bc7-83dd88ce118c" containerName="ovnkube-controller" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.305802 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="02dd5727-894c-4693-9bc7-83dd88ce118c" containerName="ovnkube-controller" Jan 29 06:46:11 crc kubenswrapper[5017]: E0129 06:46:11.305879 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02dd5727-894c-4693-9bc7-83dd88ce118c" containerName="kube-rbac-proxy-ovn-metrics" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.305890 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="02dd5727-894c-4693-9bc7-83dd88ce118c" containerName="kube-rbac-proxy-ovn-metrics" Jan 29 06:46:11 crc kubenswrapper[5017]: E0129 06:46:11.305898 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02dd5727-894c-4693-9bc7-83dd88ce118c" containerName="ovnkube-controller" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.305904 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="02dd5727-894c-4693-9bc7-83dd88ce118c" containerName="ovnkube-controller" Jan 29 06:46:11 crc kubenswrapper[5017]: E0129 06:46:11.305911 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02dd5727-894c-4693-9bc7-83dd88ce118c" containerName="ovn-acl-logging" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.305917 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="02dd5727-894c-4693-9bc7-83dd88ce118c" containerName="ovn-acl-logging" Jan 29 06:46:11 crc kubenswrapper[5017]: E0129 06:46:11.305925 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02dd5727-894c-4693-9bc7-83dd88ce118c" containerName="nbdb" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.305931 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="02dd5727-894c-4693-9bc7-83dd88ce118c" containerName="nbdb" Jan 29 06:46:11 crc kubenswrapper[5017]: E0129 06:46:11.305942 5017 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02dd5727-894c-4693-9bc7-83dd88ce118c" containerName="kube-rbac-proxy-node" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.305948 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="02dd5727-894c-4693-9bc7-83dd88ce118c" containerName="kube-rbac-proxy-node" Jan 29 06:46:11 crc kubenswrapper[5017]: E0129 06:46:11.305959 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02dd5727-894c-4693-9bc7-83dd88ce118c" containerName="ovnkube-controller" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.305966 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="02dd5727-894c-4693-9bc7-83dd88ce118c" containerName="ovnkube-controller" Jan 29 06:46:11 crc kubenswrapper[5017]: E0129 06:46:11.305978 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02dd5727-894c-4693-9bc7-83dd88ce118c" containerName="sbdb" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.305999 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="02dd5727-894c-4693-9bc7-83dd88ce118c" containerName="sbdb" Jan 29 06:46:11 crc kubenswrapper[5017]: E0129 06:46:11.306012 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02dd5727-894c-4693-9bc7-83dd88ce118c" containerName="ovn-controller" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.306017 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="02dd5727-894c-4693-9bc7-83dd88ce118c" containerName="ovn-controller" Jan 29 06:46:11 crc kubenswrapper[5017]: E0129 06:46:11.306025 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb349dfb-25a6-4a65-b09e-c237a5369ea2" containerName="collect-profiles" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.306031 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb349dfb-25a6-4a65-b09e-c237a5369ea2" containerName="collect-profiles" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.306153 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="02dd5727-894c-4693-9bc7-83dd88ce118c" containerName="nbdb" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.306166 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="02dd5727-894c-4693-9bc7-83dd88ce118c" containerName="ovnkube-controller" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.306176 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="02dd5727-894c-4693-9bc7-83dd88ce118c" containerName="kube-rbac-proxy-node" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.306185 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="02dd5727-894c-4693-9bc7-83dd88ce118c" containerName="ovnkube-controller" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.306192 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="02dd5727-894c-4693-9bc7-83dd88ce118c" containerName="ovn-acl-logging" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.306199 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="02dd5727-894c-4693-9bc7-83dd88ce118c" containerName="ovnkube-controller" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.306206 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="02dd5727-894c-4693-9bc7-83dd88ce118c" containerName="sbdb" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.306214 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="02dd5727-894c-4693-9bc7-83dd88ce118c" containerName="ovnkube-controller" Jan 29 06:46:11 crc 
kubenswrapper[5017]: I0129 06:46:11.306221 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb349dfb-25a6-4a65-b09e-c237a5369ea2" containerName="collect-profiles" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.306229 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="02dd5727-894c-4693-9bc7-83dd88ce118c" containerName="kube-rbac-proxy-ovn-metrics" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.306237 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="02dd5727-894c-4693-9bc7-83dd88ce118c" containerName="northd" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.306245 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="02dd5727-894c-4693-9bc7-83dd88ce118c" containerName="ovn-controller" Jan 29 06:46:11 crc kubenswrapper[5017]: E0129 06:46:11.306345 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02dd5727-894c-4693-9bc7-83dd88ce118c" containerName="ovnkube-controller" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.306352 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="02dd5727-894c-4693-9bc7-83dd88ce118c" containerName="ovnkube-controller" Jan 29 06:46:11 crc kubenswrapper[5017]: E0129 06:46:11.306361 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02dd5727-894c-4693-9bc7-83dd88ce118c" containerName="ovnkube-controller" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.306368 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="02dd5727-894c-4693-9bc7-83dd88ce118c" containerName="ovnkube-controller" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.306464 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="02dd5727-894c-4693-9bc7-83dd88ce118c" containerName="ovnkube-controller" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.308285 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-xsxkf" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.372638 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wqgmk_02dd5727-894c-4693-9bc7-83dd88ce118c/ovnkube-controller/3.log" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.375334 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wqgmk_02dd5727-894c-4693-9bc7-83dd88ce118c/ovn-acl-logging/0.log" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.376106 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wqgmk_02dd5727-894c-4693-9bc7-83dd88ce118c/ovn-controller/0.log" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.376576 5017 generic.go:334] "Generic (PLEG): container finished" podID="02dd5727-894c-4693-9bc7-83dd88ce118c" containerID="a1cfa089e161c584c6087c42a29c10704135ccb63e4d59248cf9966e6f95c2a1" exitCode=0 Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.376604 5017 generic.go:334] "Generic (PLEG): container finished" podID="02dd5727-894c-4693-9bc7-83dd88ce118c" containerID="f1e341b7acf35edc3ee9e5a4fec11e2270d45ea0f05df30543d2c9163b024646" exitCode=0 Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.376618 5017 generic.go:334] "Generic (PLEG): container finished" podID="02dd5727-894c-4693-9bc7-83dd88ce118c" containerID="a1fb588776e0029263904fa0ec3afa53e180b30843864f1dee6999e1cca64b18" exitCode=0 Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.376631 5017 generic.go:334] "Generic (PLEG): container finished" podID="02dd5727-894c-4693-9bc7-83dd88ce118c" containerID="fc8fc0d9e440c8725e279e6649d3c1e2cdc55589f6b5786187602576e4acd835" exitCode=0 Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.376643 5017 generic.go:334] "Generic (PLEG): container finished" podID="02dd5727-894c-4693-9bc7-83dd88ce118c" containerID="7eec8c502e2516637a85d2b83f7f745f9cc5d6a420f15cea465d92f71e0b8e71" exitCode=0 Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.376654 5017 generic.go:334] "Generic (PLEG): container finished" podID="02dd5727-894c-4693-9bc7-83dd88ce118c" containerID="177a3e8a7f63905262ffb4ea5f98c4fa97a8d48dd2ff075484a44ea87d66a072" exitCode=0 Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.376665 5017 generic.go:334] "Generic (PLEG): container finished" podID="02dd5727-894c-4693-9bc7-83dd88ce118c" containerID="749973b1dc2bcf3fc4e35277f0893a240b3571cadc45529da1229cd7ebe1748e" exitCode=143 Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.376675 5017 generic.go:334] "Generic (PLEG): container finished" podID="02dd5727-894c-4693-9bc7-83dd88ce118c" containerID="d6a16eea25767bb4fea94880a90ae883944934c09ff5ba0ced9641169691169d" exitCode=143 Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.376693 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.376713 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" event={"ID":"02dd5727-894c-4693-9bc7-83dd88ce118c","Type":"ContainerDied","Data":"a1cfa089e161c584c6087c42a29c10704135ccb63e4d59248cf9966e6f95c2a1"} Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.376775 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" event={"ID":"02dd5727-894c-4693-9bc7-83dd88ce118c","Type":"ContainerDied","Data":"f1e341b7acf35edc3ee9e5a4fec11e2270d45ea0f05df30543d2c9163b024646"} Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.376802 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" event={"ID":"02dd5727-894c-4693-9bc7-83dd88ce118c","Type":"ContainerDied","Data":"a1fb588776e0029263904fa0ec3afa53e180b30843864f1dee6999e1cca64b18"} Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.376819 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" event={"ID":"02dd5727-894c-4693-9bc7-83dd88ce118c","Type":"ContainerDied","Data":"fc8fc0d9e440c8725e279e6649d3c1e2cdc55589f6b5786187602576e4acd835"} Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.376835 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" event={"ID":"02dd5727-894c-4693-9bc7-83dd88ce118c","Type":"ContainerDied","Data":"7eec8c502e2516637a85d2b83f7f745f9cc5d6a420f15cea465d92f71e0b8e71"} Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.376851 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" event={"ID":"02dd5727-894c-4693-9bc7-83dd88ce118c","Type":"ContainerDied","Data":"177a3e8a7f63905262ffb4ea5f98c4fa97a8d48dd2ff075484a44ea87d66a072"} Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.376890 5017 scope.go:117] "RemoveContainer" containerID="a1cfa089e161c584c6087c42a29c10704135ccb63e4d59248cf9966e6f95c2a1" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.376899 5017 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"62449c59cb465c0cc04ebf89eb13daf672b92da42f90dad1f43ac106cf837b48"} Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.376993 5017 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f1e341b7acf35edc3ee9e5a4fec11e2270d45ea0f05df30543d2c9163b024646"} Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.377006 5017 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a1fb588776e0029263904fa0ec3afa53e180b30843864f1dee6999e1cca64b18"} Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.377015 5017 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fc8fc0d9e440c8725e279e6649d3c1e2cdc55589f6b5786187602576e4acd835"} Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.377024 5017 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7eec8c502e2516637a85d2b83f7f745f9cc5d6a420f15cea465d92f71e0b8e71"} Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.377032 5017 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"177a3e8a7f63905262ffb4ea5f98c4fa97a8d48dd2ff075484a44ea87d66a072"} Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.377042 5017 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"749973b1dc2bcf3fc4e35277f0893a240b3571cadc45529da1229cd7ebe1748e"} Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.377049 5017 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d6a16eea25767bb4fea94880a90ae883944934c09ff5ba0ced9641169691169d"} Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.377056 5017 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"13649086edb9d7ffe20a9040787d55449e6541a51f03432458f5fcb965fe4723"} Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.377069 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" event={"ID":"02dd5727-894c-4693-9bc7-83dd88ce118c","Type":"ContainerDied","Data":"749973b1dc2bcf3fc4e35277f0893a240b3571cadc45529da1229cd7ebe1748e"} Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.377085 5017 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a1cfa089e161c584c6087c42a29c10704135ccb63e4d59248cf9966e6f95c2a1"} Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.377093 5017 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"62449c59cb465c0cc04ebf89eb13daf672b92da42f90dad1f43ac106cf837b48"} Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.377100 5017 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f1e341b7acf35edc3ee9e5a4fec11e2270d45ea0f05df30543d2c9163b024646"} Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.377108 5017 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a1fb588776e0029263904fa0ec3afa53e180b30843864f1dee6999e1cca64b18"} Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.377115 5017 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fc8fc0d9e440c8725e279e6649d3c1e2cdc55589f6b5786187602576e4acd835"} Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.377123 5017 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7eec8c502e2516637a85d2b83f7f745f9cc5d6a420f15cea465d92f71e0b8e71"} Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.377131 5017 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"177a3e8a7f63905262ffb4ea5f98c4fa97a8d48dd2ff075484a44ea87d66a072"} Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.377139 5017 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"749973b1dc2bcf3fc4e35277f0893a240b3571cadc45529da1229cd7ebe1748e"} Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.377148 5017 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d6a16eea25767bb4fea94880a90ae883944934c09ff5ba0ced9641169691169d"} Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.377155 5017 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"13649086edb9d7ffe20a9040787d55449e6541a51f03432458f5fcb965fe4723"} Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.377166 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" event={"ID":"02dd5727-894c-4693-9bc7-83dd88ce118c","Type":"ContainerDied","Data":"d6a16eea25767bb4fea94880a90ae883944934c09ff5ba0ced9641169691169d"} Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.377179 5017 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a1cfa089e161c584c6087c42a29c10704135ccb63e4d59248cf9966e6f95c2a1"} Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.377191 5017 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"62449c59cb465c0cc04ebf89eb13daf672b92da42f90dad1f43ac106cf837b48"} Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.377198 5017 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f1e341b7acf35edc3ee9e5a4fec11e2270d45ea0f05df30543d2c9163b024646"} Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.377207 5017 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a1fb588776e0029263904fa0ec3afa53e180b30843864f1dee6999e1cca64b18"} Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.377214 5017 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fc8fc0d9e440c8725e279e6649d3c1e2cdc55589f6b5786187602576e4acd835"} Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.377221 5017 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7eec8c502e2516637a85d2b83f7f745f9cc5d6a420f15cea465d92f71e0b8e71"} Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.377229 5017 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"177a3e8a7f63905262ffb4ea5f98c4fa97a8d48dd2ff075484a44ea87d66a072"} Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.377237 5017 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"749973b1dc2bcf3fc4e35277f0893a240b3571cadc45529da1229cd7ebe1748e"} Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.377244 5017 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d6a16eea25767bb4fea94880a90ae883944934c09ff5ba0ced9641169691169d"} Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.377250 5017 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"13649086edb9d7ffe20a9040787d55449e6541a51f03432458f5fcb965fe4723"} Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.377260 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wqgmk" event={"ID":"02dd5727-894c-4693-9bc7-83dd88ce118c","Type":"ContainerDied","Data":"aad2e289f0666cf55714aad4f462a5accef4464aa7945ba6782dec4be64a04e9"} Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.377273 5017 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a1cfa089e161c584c6087c42a29c10704135ccb63e4d59248cf9966e6f95c2a1"} Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.377283 5017 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"62449c59cb465c0cc04ebf89eb13daf672b92da42f90dad1f43ac106cf837b48"} Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.377291 5017 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f1e341b7acf35edc3ee9e5a4fec11e2270d45ea0f05df30543d2c9163b024646"} Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.377299 5017 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a1fb588776e0029263904fa0ec3afa53e180b30843864f1dee6999e1cca64b18"} Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.377307 5017 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fc8fc0d9e440c8725e279e6649d3c1e2cdc55589f6b5786187602576e4acd835"} Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.377314 5017 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7eec8c502e2516637a85d2b83f7f745f9cc5d6a420f15cea465d92f71e0b8e71"} Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.377321 5017 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"177a3e8a7f63905262ffb4ea5f98c4fa97a8d48dd2ff075484a44ea87d66a072"} Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.377328 5017 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"749973b1dc2bcf3fc4e35277f0893a240b3571cadc45529da1229cd7ebe1748e"} Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.377334 5017 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d6a16eea25767bb4fea94880a90ae883944934c09ff5ba0ced9641169691169d"} Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.377341 5017 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"13649086edb9d7ffe20a9040787d55449e6541a51f03432458f5fcb965fe4723"} Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.380145 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9jkcd_8ae056f0-e054-45da-9638-73074b7c8a3b/kube-multus/2.log" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.380801 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9jkcd_8ae056f0-e054-45da-9638-73074b7c8a3b/kube-multus/1.log" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.380843 5017 generic.go:334] "Generic (PLEG): container finished" podID="8ae056f0-e054-45da-9638-73074b7c8a3b" containerID="81abd23058c4d929cf01618abcab59dae4c62bd31ead39da8d9483cff0713526" exitCode=2 Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.380870 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9jkcd" event={"ID":"8ae056f0-e054-45da-9638-73074b7c8a3b","Type":"ContainerDied","Data":"81abd23058c4d929cf01618abcab59dae4c62bd31ead39da8d9483cff0713526"} Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.380889 5017 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a7429ec96a9c4de8ad2620003f7b591d38fef6a019e78098e9ba927e3fd79252"} Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.381515 5017 scope.go:117] "RemoveContainer" 
containerID="81abd23058c4d929cf01618abcab59dae4c62bd31ead39da8d9483cff0713526" Jan 29 06:46:11 crc kubenswrapper[5017]: E0129 06:46:11.381733 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-9jkcd_openshift-multus(8ae056f0-e054-45da-9638-73074b7c8a3b)\"" pod="openshift-multus/multus-9jkcd" podUID="8ae056f0-e054-45da-9638-73074b7c8a3b" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.384481 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/02dd5727-894c-4693-9bc7-83dd88ce118c-host-cni-bin\") pod \"02dd5727-894c-4693-9bc7-83dd88ce118c\" (UID: \"02dd5727-894c-4693-9bc7-83dd88ce118c\") " Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.384538 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/02dd5727-894c-4693-9bc7-83dd88ce118c-host-slash\") pod \"02dd5727-894c-4693-9bc7-83dd88ce118c\" (UID: \"02dd5727-894c-4693-9bc7-83dd88ce118c\") " Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.384576 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/02dd5727-894c-4693-9bc7-83dd88ce118c-host-run-ovn-kubernetes\") pod \"02dd5727-894c-4693-9bc7-83dd88ce118c\" (UID: \"02dd5727-894c-4693-9bc7-83dd88ce118c\") " Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.384593 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/02dd5727-894c-4693-9bc7-83dd88ce118c-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "02dd5727-894c-4693-9bc7-83dd88ce118c" (UID: "02dd5727-894c-4693-9bc7-83dd88ce118c"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.384613 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/02dd5727-894c-4693-9bc7-83dd88ce118c-host-slash" (OuterVolumeSpecName: "host-slash") pod "02dd5727-894c-4693-9bc7-83dd88ce118c" (UID: "02dd5727-894c-4693-9bc7-83dd88ce118c"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.384654 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/02dd5727-894c-4693-9bc7-83dd88ce118c-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "02dd5727-894c-4693-9bc7-83dd88ce118c" (UID: "02dd5727-894c-4693-9bc7-83dd88ce118c"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.384671 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/02dd5727-894c-4693-9bc7-83dd88ce118c-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "02dd5727-894c-4693-9bc7-83dd88ce118c" (UID: "02dd5727-894c-4693-9bc7-83dd88ce118c"). InnerVolumeSpecName "etc-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.384608 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/02dd5727-894c-4693-9bc7-83dd88ce118c-etc-openvswitch\") pod \"02dd5727-894c-4693-9bc7-83dd88ce118c\" (UID: \"02dd5727-894c-4693-9bc7-83dd88ce118c\") " Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.385046 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/02dd5727-894c-4693-9bc7-83dd88ce118c-host-kubelet\") pod \"02dd5727-894c-4693-9bc7-83dd88ce118c\" (UID: \"02dd5727-894c-4693-9bc7-83dd88ce118c\") " Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.385086 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/02dd5727-894c-4693-9bc7-83dd88ce118c-node-log\") pod \"02dd5727-894c-4693-9bc7-83dd88ce118c\" (UID: \"02dd5727-894c-4693-9bc7-83dd88ce118c\") " Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.385191 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/02dd5727-894c-4693-9bc7-83dd88ce118c-env-overrides\") pod \"02dd5727-894c-4693-9bc7-83dd88ce118c\" (UID: \"02dd5727-894c-4693-9bc7-83dd88ce118c\") " Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.385218 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tr2h2\" (UniqueName: \"kubernetes.io/projected/02dd5727-894c-4693-9bc7-83dd88ce118c-kube-api-access-tr2h2\") pod \"02dd5727-894c-4693-9bc7-83dd88ce118c\" (UID: \"02dd5727-894c-4693-9bc7-83dd88ce118c\") " Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.385546 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/02dd5727-894c-4693-9bc7-83dd88ce118c-node-log" (OuterVolumeSpecName: "node-log") pod "02dd5727-894c-4693-9bc7-83dd88ce118c" (UID: "02dd5727-894c-4693-9bc7-83dd88ce118c"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.385590 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/02dd5727-894c-4693-9bc7-83dd88ce118c-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "02dd5727-894c-4693-9bc7-83dd88ce118c" (UID: "02dd5727-894c-4693-9bc7-83dd88ce118c"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.385809 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/02dd5727-894c-4693-9bc7-83dd88ce118c-run-openvswitch\") pod \"02dd5727-894c-4693-9bc7-83dd88ce118c\" (UID: \"02dd5727-894c-4693-9bc7-83dd88ce118c\") " Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.385805 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/02dd5727-894c-4693-9bc7-83dd88ce118c-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "02dd5727-894c-4693-9bc7-83dd88ce118c" (UID: "02dd5727-894c-4693-9bc7-83dd88ce118c"). InnerVolumeSpecName "run-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.386092 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/02dd5727-894c-4693-9bc7-83dd88ce118c-run-systemd\") pod \"02dd5727-894c-4693-9bc7-83dd88ce118c\" (UID: \"02dd5727-894c-4693-9bc7-83dd88ce118c\") " Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.386131 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02dd5727-894c-4693-9bc7-83dd88ce118c-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "02dd5727-894c-4693-9bc7-83dd88ce118c" (UID: "02dd5727-894c-4693-9bc7-83dd88ce118c"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.386147 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/02dd5727-894c-4693-9bc7-83dd88ce118c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"02dd5727-894c-4693-9bc7-83dd88ce118c\" (UID: \"02dd5727-894c-4693-9bc7-83dd88ce118c\") " Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.386189 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/02dd5727-894c-4693-9bc7-83dd88ce118c-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "02dd5727-894c-4693-9bc7-83dd88ce118c" (UID: "02dd5727-894c-4693-9bc7-83dd88ce118c"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.386322 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/02dd5727-894c-4693-9bc7-83dd88ce118c-host-cni-netd\") pod \"02dd5727-894c-4693-9bc7-83dd88ce118c\" (UID: \"02dd5727-894c-4693-9bc7-83dd88ce118c\") " Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.386362 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/02dd5727-894c-4693-9bc7-83dd88ce118c-ovnkube-config\") pod \"02dd5727-894c-4693-9bc7-83dd88ce118c\" (UID: \"02dd5727-894c-4693-9bc7-83dd88ce118c\") " Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.386391 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/02dd5727-894c-4693-9bc7-83dd88ce118c-var-lib-openvswitch\") pod \"02dd5727-894c-4693-9bc7-83dd88ce118c\" (UID: \"02dd5727-894c-4693-9bc7-83dd88ce118c\") " Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.386425 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/02dd5727-894c-4693-9bc7-83dd88ce118c-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "02dd5727-894c-4693-9bc7-83dd88ce118c" (UID: "02dd5727-894c-4693-9bc7-83dd88ce118c"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.386454 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/02dd5727-894c-4693-9bc7-83dd88ce118c-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "02dd5727-894c-4693-9bc7-83dd88ce118c" (UID: "02dd5727-894c-4693-9bc7-83dd88ce118c"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.386521 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/02dd5727-894c-4693-9bc7-83dd88ce118c-run-ovn\") pod \"02dd5727-894c-4693-9bc7-83dd88ce118c\" (UID: \"02dd5727-894c-4693-9bc7-83dd88ce118c\") " Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.386592 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/02dd5727-894c-4693-9bc7-83dd88ce118c-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "02dd5727-894c-4693-9bc7-83dd88ce118c" (UID: "02dd5727-894c-4693-9bc7-83dd88ce118c"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.386789 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/02dd5727-894c-4693-9bc7-83dd88ce118c-ovnkube-script-lib\") pod \"02dd5727-894c-4693-9bc7-83dd88ce118c\" (UID: \"02dd5727-894c-4693-9bc7-83dd88ce118c\") " Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.386848 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/02dd5727-894c-4693-9bc7-83dd88ce118c-host-run-netns\") pod \"02dd5727-894c-4693-9bc7-83dd88ce118c\" (UID: \"02dd5727-894c-4693-9bc7-83dd88ce118c\") " Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.386882 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/02dd5727-894c-4693-9bc7-83dd88ce118c-ovn-node-metrics-cert\") pod \"02dd5727-894c-4693-9bc7-83dd88ce118c\" (UID: \"02dd5727-894c-4693-9bc7-83dd88ce118c\") " Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.386885 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02dd5727-894c-4693-9bc7-83dd88ce118c-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "02dd5727-894c-4693-9bc7-83dd88ce118c" (UID: "02dd5727-894c-4693-9bc7-83dd88ce118c"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.386907 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/02dd5727-894c-4693-9bc7-83dd88ce118c-systemd-units\") pod \"02dd5727-894c-4693-9bc7-83dd88ce118c\" (UID: \"02dd5727-894c-4693-9bc7-83dd88ce118c\") " Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.386931 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/02dd5727-894c-4693-9bc7-83dd88ce118c-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "02dd5727-894c-4693-9bc7-83dd88ce118c" (UID: "02dd5727-894c-4693-9bc7-83dd88ce118c"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.386940 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/02dd5727-894c-4693-9bc7-83dd88ce118c-log-socket\") pod \"02dd5727-894c-4693-9bc7-83dd88ce118c\" (UID: \"02dd5727-894c-4693-9bc7-83dd88ce118c\") " Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.386974 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/02dd5727-894c-4693-9bc7-83dd88ce118c-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "02dd5727-894c-4693-9bc7-83dd88ce118c" (UID: "02dd5727-894c-4693-9bc7-83dd88ce118c"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.387144 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/02dd5727-894c-4693-9bc7-83dd88ce118c-log-socket" (OuterVolumeSpecName: "log-socket") pod "02dd5727-894c-4693-9bc7-83dd88ce118c" (UID: "02dd5727-894c-4693-9bc7-83dd88ce118c"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.387206 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4d925018-2d9e-4498-9f57-3bd7532b2461-ovn-node-metrics-cert\") pod \"ovnkube-node-xsxkf\" (UID: \"4d925018-2d9e-4498-9f57-3bd7532b2461\") " pod="openshift-ovn-kubernetes/ovnkube-node-xsxkf" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.387245 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4d925018-2d9e-4498-9f57-3bd7532b2461-var-lib-openvswitch\") pod \"ovnkube-node-xsxkf\" (UID: \"4d925018-2d9e-4498-9f57-3bd7532b2461\") " pod="openshift-ovn-kubernetes/ovnkube-node-xsxkf" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.387389 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4d925018-2d9e-4498-9f57-3bd7532b2461-systemd-units\") pod \"ovnkube-node-xsxkf\" (UID: \"4d925018-2d9e-4498-9f57-3bd7532b2461\") " pod="openshift-ovn-kubernetes/ovnkube-node-xsxkf" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.387421 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4d925018-2d9e-4498-9f57-3bd7532b2461-host-slash\") pod \"ovnkube-node-xsxkf\" (UID: \"4d925018-2d9e-4498-9f57-3bd7532b2461\") " pod="openshift-ovn-kubernetes/ovnkube-node-xsxkf" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.387445 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02dd5727-894c-4693-9bc7-83dd88ce118c-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "02dd5727-894c-4693-9bc7-83dd88ce118c" (UID: "02dd5727-894c-4693-9bc7-83dd88ce118c"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.387451 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4d925018-2d9e-4498-9f57-3bd7532b2461-etc-openvswitch\") pod \"ovnkube-node-xsxkf\" (UID: \"4d925018-2d9e-4498-9f57-3bd7532b2461\") " pod="openshift-ovn-kubernetes/ovnkube-node-xsxkf" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.387545 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4d925018-2d9e-4498-9f57-3bd7532b2461-env-overrides\") pod \"ovnkube-node-xsxkf\" (UID: \"4d925018-2d9e-4498-9f57-3bd7532b2461\") " pod="openshift-ovn-kubernetes/ovnkube-node-xsxkf" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.387588 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4d925018-2d9e-4498-9f57-3bd7532b2461-host-cni-bin\") pod \"ovnkube-node-xsxkf\" (UID: \"4d925018-2d9e-4498-9f57-3bd7532b2461\") " pod="openshift-ovn-kubernetes/ovnkube-node-xsxkf" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.387624 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4d925018-2d9e-4498-9f57-3bd7532b2461-host-run-ovn-kubernetes\") pod \"ovnkube-node-xsxkf\" (UID: \"4d925018-2d9e-4498-9f57-3bd7532b2461\") " pod="openshift-ovn-kubernetes/ovnkube-node-xsxkf" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.387661 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4d925018-2d9e-4498-9f57-3bd7532b2461-run-systemd\") pod \"ovnkube-node-xsxkf\" (UID: \"4d925018-2d9e-4498-9f57-3bd7532b2461\") " pod="openshift-ovn-kubernetes/ovnkube-node-xsxkf" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.387684 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4d925018-2d9e-4498-9f57-3bd7532b2461-ovnkube-script-lib\") pod \"ovnkube-node-xsxkf\" (UID: \"4d925018-2d9e-4498-9f57-3bd7532b2461\") " pod="openshift-ovn-kubernetes/ovnkube-node-xsxkf" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.387737 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4d925018-2d9e-4498-9f57-3bd7532b2461-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-xsxkf\" (UID: \"4d925018-2d9e-4498-9f57-3bd7532b2461\") " pod="openshift-ovn-kubernetes/ovnkube-node-xsxkf" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.387810 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4d925018-2d9e-4498-9f57-3bd7532b2461-node-log\") pod \"ovnkube-node-xsxkf\" (UID: \"4d925018-2d9e-4498-9f57-3bd7532b2461\") " pod="openshift-ovn-kubernetes/ovnkube-node-xsxkf" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.387838 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/4d925018-2d9e-4498-9f57-3bd7532b2461-ovnkube-config\") pod \"ovnkube-node-xsxkf\" (UID: \"4d925018-2d9e-4498-9f57-3bd7532b2461\") " pod="openshift-ovn-kubernetes/ovnkube-node-xsxkf" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.387866 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4d925018-2d9e-4498-9f57-3bd7532b2461-log-socket\") pod \"ovnkube-node-xsxkf\" (UID: \"4d925018-2d9e-4498-9f57-3bd7532b2461\") " pod="openshift-ovn-kubernetes/ovnkube-node-xsxkf" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.387922 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4d925018-2d9e-4498-9f57-3bd7532b2461-run-ovn\") pod \"ovnkube-node-xsxkf\" (UID: \"4d925018-2d9e-4498-9f57-3bd7532b2461\") " pod="openshift-ovn-kubernetes/ovnkube-node-xsxkf" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.388422 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kwbm\" (UniqueName: \"kubernetes.io/projected/4d925018-2d9e-4498-9f57-3bd7532b2461-kube-api-access-7kwbm\") pod \"ovnkube-node-xsxkf\" (UID: \"4d925018-2d9e-4498-9f57-3bd7532b2461\") " pod="openshift-ovn-kubernetes/ovnkube-node-xsxkf" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.388470 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4d925018-2d9e-4498-9f57-3bd7532b2461-host-cni-netd\") pod \"ovnkube-node-xsxkf\" (UID: \"4d925018-2d9e-4498-9f57-3bd7532b2461\") " pod="openshift-ovn-kubernetes/ovnkube-node-xsxkf" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.388543 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4d925018-2d9e-4498-9f57-3bd7532b2461-host-kubelet\") pod \"ovnkube-node-xsxkf\" (UID: \"4d925018-2d9e-4498-9f57-3bd7532b2461\") " pod="openshift-ovn-kubernetes/ovnkube-node-xsxkf" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.388610 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4d925018-2d9e-4498-9f57-3bd7532b2461-host-run-netns\") pod \"ovnkube-node-xsxkf\" (UID: \"4d925018-2d9e-4498-9f57-3bd7532b2461\") " pod="openshift-ovn-kubernetes/ovnkube-node-xsxkf" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.388644 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4d925018-2d9e-4498-9f57-3bd7532b2461-run-openvswitch\") pod \"ovnkube-node-xsxkf\" (UID: \"4d925018-2d9e-4498-9f57-3bd7532b2461\") " pod="openshift-ovn-kubernetes/ovnkube-node-xsxkf" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.388764 5017 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/02dd5727-894c-4693-9bc7-83dd88ce118c-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.388797 5017 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/02dd5727-894c-4693-9bc7-83dd88ce118c-host-slash\") on node \"crc\" DevicePath \"\"" Jan 29 
06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.388822 5017 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/02dd5727-894c-4693-9bc7-83dd88ce118c-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.389478 5017 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/02dd5727-894c-4693-9bc7-83dd88ce118c-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.389500 5017 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/02dd5727-894c-4693-9bc7-83dd88ce118c-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.389514 5017 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/02dd5727-894c-4693-9bc7-83dd88ce118c-node-log\") on node \"crc\" DevicePath \"\"" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.389529 5017 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/02dd5727-894c-4693-9bc7-83dd88ce118c-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.389541 5017 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/02dd5727-894c-4693-9bc7-83dd88ce118c-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.389555 5017 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/02dd5727-894c-4693-9bc7-83dd88ce118c-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.389567 5017 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/02dd5727-894c-4693-9bc7-83dd88ce118c-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.389579 5017 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/02dd5727-894c-4693-9bc7-83dd88ce118c-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.389591 5017 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/02dd5727-894c-4693-9bc7-83dd88ce118c-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.389603 5017 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/02dd5727-894c-4693-9bc7-83dd88ce118c-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.389615 5017 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/02dd5727-894c-4693-9bc7-83dd88ce118c-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.389626 5017 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/02dd5727-894c-4693-9bc7-83dd88ce118c-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 29 06:46:11 crc 
kubenswrapper[5017]: I0129 06:46:11.389641 5017 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/02dd5727-894c-4693-9bc7-83dd88ce118c-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.389774 5017 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/02dd5727-894c-4693-9bc7-83dd88ce118c-log-socket\") on node \"crc\" DevicePath \"\"" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.399821 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02dd5727-894c-4693-9bc7-83dd88ce118c-kube-api-access-tr2h2" (OuterVolumeSpecName: "kube-api-access-tr2h2") pod "02dd5727-894c-4693-9bc7-83dd88ce118c" (UID: "02dd5727-894c-4693-9bc7-83dd88ce118c"). InnerVolumeSpecName "kube-api-access-tr2h2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.402283 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02dd5727-894c-4693-9bc7-83dd88ce118c-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "02dd5727-894c-4693-9bc7-83dd88ce118c" (UID: "02dd5727-894c-4693-9bc7-83dd88ce118c"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.405690 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/02dd5727-894c-4693-9bc7-83dd88ce118c-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "02dd5727-894c-4693-9bc7-83dd88ce118c" (UID: "02dd5727-894c-4693-9bc7-83dd88ce118c"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.411395 5017 scope.go:117] "RemoveContainer" containerID="62449c59cb465c0cc04ebf89eb13daf672b92da42f90dad1f43ac106cf837b48" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.434539 5017 scope.go:117] "RemoveContainer" containerID="f1e341b7acf35edc3ee9e5a4fec11e2270d45ea0f05df30543d2c9163b024646" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.450168 5017 scope.go:117] "RemoveContainer" containerID="a1fb588776e0029263904fa0ec3afa53e180b30843864f1dee6999e1cca64b18" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.468490 5017 scope.go:117] "RemoveContainer" containerID="fc8fc0d9e440c8725e279e6649d3c1e2cdc55589f6b5786187602576e4acd835" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.484761 5017 scope.go:117] "RemoveContainer" containerID="7eec8c502e2516637a85d2b83f7f745f9cc5d6a420f15cea465d92f71e0b8e71" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.490863 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4d925018-2d9e-4498-9f57-3bd7532b2461-ovn-node-metrics-cert\") pod \"ovnkube-node-xsxkf\" (UID: \"4d925018-2d9e-4498-9f57-3bd7532b2461\") " pod="openshift-ovn-kubernetes/ovnkube-node-xsxkf" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.490921 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4d925018-2d9e-4498-9f57-3bd7532b2461-var-lib-openvswitch\") pod \"ovnkube-node-xsxkf\" (UID: \"4d925018-2d9e-4498-9f57-3bd7532b2461\") " pod="openshift-ovn-kubernetes/ovnkube-node-xsxkf" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.490983 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4d925018-2d9e-4498-9f57-3bd7532b2461-systemd-units\") pod \"ovnkube-node-xsxkf\" (UID: \"4d925018-2d9e-4498-9f57-3bd7532b2461\") " pod="openshift-ovn-kubernetes/ovnkube-node-xsxkf" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.491031 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4d925018-2d9e-4498-9f57-3bd7532b2461-host-slash\") pod \"ovnkube-node-xsxkf\" (UID: \"4d925018-2d9e-4498-9f57-3bd7532b2461\") " pod="openshift-ovn-kubernetes/ovnkube-node-xsxkf" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.491063 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4d925018-2d9e-4498-9f57-3bd7532b2461-etc-openvswitch\") pod \"ovnkube-node-xsxkf\" (UID: \"4d925018-2d9e-4498-9f57-3bd7532b2461\") " pod="openshift-ovn-kubernetes/ovnkube-node-xsxkf" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.491085 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4d925018-2d9e-4498-9f57-3bd7532b2461-var-lib-openvswitch\") pod \"ovnkube-node-xsxkf\" (UID: \"4d925018-2d9e-4498-9f57-3bd7532b2461\") " pod="openshift-ovn-kubernetes/ovnkube-node-xsxkf" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.491128 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4d925018-2d9e-4498-9f57-3bd7532b2461-host-cni-bin\") pod \"ovnkube-node-xsxkf\" (UID: 
\"4d925018-2d9e-4498-9f57-3bd7532b2461\") " pod="openshift-ovn-kubernetes/ovnkube-node-xsxkf" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.491091 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4d925018-2d9e-4498-9f57-3bd7532b2461-systemd-units\") pod \"ovnkube-node-xsxkf\" (UID: \"4d925018-2d9e-4498-9f57-3bd7532b2461\") " pod="openshift-ovn-kubernetes/ovnkube-node-xsxkf" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.491094 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4d925018-2d9e-4498-9f57-3bd7532b2461-host-cni-bin\") pod \"ovnkube-node-xsxkf\" (UID: \"4d925018-2d9e-4498-9f57-3bd7532b2461\") " pod="openshift-ovn-kubernetes/ovnkube-node-xsxkf" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.491187 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4d925018-2d9e-4498-9f57-3bd7532b2461-env-overrides\") pod \"ovnkube-node-xsxkf\" (UID: \"4d925018-2d9e-4498-9f57-3bd7532b2461\") " pod="openshift-ovn-kubernetes/ovnkube-node-xsxkf" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.491186 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4d925018-2d9e-4498-9f57-3bd7532b2461-host-slash\") pod \"ovnkube-node-xsxkf\" (UID: \"4d925018-2d9e-4498-9f57-3bd7532b2461\") " pod="openshift-ovn-kubernetes/ovnkube-node-xsxkf" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.491217 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4d925018-2d9e-4498-9f57-3bd7532b2461-host-run-ovn-kubernetes\") pod \"ovnkube-node-xsxkf\" (UID: \"4d925018-2d9e-4498-9f57-3bd7532b2461\") " pod="openshift-ovn-kubernetes/ovnkube-node-xsxkf" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.491189 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4d925018-2d9e-4498-9f57-3bd7532b2461-etc-openvswitch\") pod \"ovnkube-node-xsxkf\" (UID: \"4d925018-2d9e-4498-9f57-3bd7532b2461\") " pod="openshift-ovn-kubernetes/ovnkube-node-xsxkf" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.491275 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4d925018-2d9e-4498-9f57-3bd7532b2461-host-run-ovn-kubernetes\") pod \"ovnkube-node-xsxkf\" (UID: \"4d925018-2d9e-4498-9f57-3bd7532b2461\") " pod="openshift-ovn-kubernetes/ovnkube-node-xsxkf" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.491308 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4d925018-2d9e-4498-9f57-3bd7532b2461-run-systemd\") pod \"ovnkube-node-xsxkf\" (UID: \"4d925018-2d9e-4498-9f57-3bd7532b2461\") " pod="openshift-ovn-kubernetes/ovnkube-node-xsxkf" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.491355 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4d925018-2d9e-4498-9f57-3bd7532b2461-ovnkube-script-lib\") pod \"ovnkube-node-xsxkf\" (UID: \"4d925018-2d9e-4498-9f57-3bd7532b2461\") " pod="openshift-ovn-kubernetes/ovnkube-node-xsxkf" Jan 29 
06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.491393 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4d925018-2d9e-4498-9f57-3bd7532b2461-run-systemd\") pod \"ovnkube-node-xsxkf\" (UID: \"4d925018-2d9e-4498-9f57-3bd7532b2461\") " pod="openshift-ovn-kubernetes/ovnkube-node-xsxkf" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.491423 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4d925018-2d9e-4498-9f57-3bd7532b2461-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-xsxkf\" (UID: \"4d925018-2d9e-4498-9f57-3bd7532b2461\") " pod="openshift-ovn-kubernetes/ovnkube-node-xsxkf" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.491484 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4d925018-2d9e-4498-9f57-3bd7532b2461-node-log\") pod \"ovnkube-node-xsxkf\" (UID: \"4d925018-2d9e-4498-9f57-3bd7532b2461\") " pod="openshift-ovn-kubernetes/ovnkube-node-xsxkf" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.491534 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4d925018-2d9e-4498-9f57-3bd7532b2461-ovnkube-config\") pod \"ovnkube-node-xsxkf\" (UID: \"4d925018-2d9e-4498-9f57-3bd7532b2461\") " pod="openshift-ovn-kubernetes/ovnkube-node-xsxkf" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.491610 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4d925018-2d9e-4498-9f57-3bd7532b2461-log-socket\") pod \"ovnkube-node-xsxkf\" (UID: \"4d925018-2d9e-4498-9f57-3bd7532b2461\") " pod="openshift-ovn-kubernetes/ovnkube-node-xsxkf" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.491648 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4d925018-2d9e-4498-9f57-3bd7532b2461-node-log\") pod \"ovnkube-node-xsxkf\" (UID: \"4d925018-2d9e-4498-9f57-3bd7532b2461\") " pod="openshift-ovn-kubernetes/ovnkube-node-xsxkf" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.491742 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4d925018-2d9e-4498-9f57-3bd7532b2461-run-ovn\") pod \"ovnkube-node-xsxkf\" (UID: \"4d925018-2d9e-4498-9f57-3bd7532b2461\") " pod="openshift-ovn-kubernetes/ovnkube-node-xsxkf" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.491782 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4d925018-2d9e-4498-9f57-3bd7532b2461-log-socket\") pod \"ovnkube-node-xsxkf\" (UID: \"4d925018-2d9e-4498-9f57-3bd7532b2461\") " pod="openshift-ovn-kubernetes/ovnkube-node-xsxkf" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.491797 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kwbm\" (UniqueName: \"kubernetes.io/projected/4d925018-2d9e-4498-9f57-3bd7532b2461-kube-api-access-7kwbm\") pod \"ovnkube-node-xsxkf\" (UID: \"4d925018-2d9e-4498-9f57-3bd7532b2461\") " pod="openshift-ovn-kubernetes/ovnkube-node-xsxkf" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.491851 5017 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4d925018-2d9e-4498-9f57-3bd7532b2461-host-cni-netd\") pod \"ovnkube-node-xsxkf\" (UID: \"4d925018-2d9e-4498-9f57-3bd7532b2461\") " pod="openshift-ovn-kubernetes/ovnkube-node-xsxkf" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.491830 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4d925018-2d9e-4498-9f57-3bd7532b2461-run-ovn\") pod \"ovnkube-node-xsxkf\" (UID: \"4d925018-2d9e-4498-9f57-3bd7532b2461\") " pod="openshift-ovn-kubernetes/ovnkube-node-xsxkf" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.491610 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4d925018-2d9e-4498-9f57-3bd7532b2461-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-xsxkf\" (UID: \"4d925018-2d9e-4498-9f57-3bd7532b2461\") " pod="openshift-ovn-kubernetes/ovnkube-node-xsxkf" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.492290 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4d925018-2d9e-4498-9f57-3bd7532b2461-host-cni-netd\") pod \"ovnkube-node-xsxkf\" (UID: \"4d925018-2d9e-4498-9f57-3bd7532b2461\") " pod="openshift-ovn-kubernetes/ovnkube-node-xsxkf" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.492321 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4d925018-2d9e-4498-9f57-3bd7532b2461-ovnkube-script-lib\") pod \"ovnkube-node-xsxkf\" (UID: \"4d925018-2d9e-4498-9f57-3bd7532b2461\") " pod="openshift-ovn-kubernetes/ovnkube-node-xsxkf" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.492372 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4d925018-2d9e-4498-9f57-3bd7532b2461-host-kubelet\") pod \"ovnkube-node-xsxkf\" (UID: \"4d925018-2d9e-4498-9f57-3bd7532b2461\") " pod="openshift-ovn-kubernetes/ovnkube-node-xsxkf" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.492481 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4d925018-2d9e-4498-9f57-3bd7532b2461-env-overrides\") pod \"ovnkube-node-xsxkf\" (UID: \"4d925018-2d9e-4498-9f57-3bd7532b2461\") " pod="openshift-ovn-kubernetes/ovnkube-node-xsxkf" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.492577 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4d925018-2d9e-4498-9f57-3bd7532b2461-ovnkube-config\") pod \"ovnkube-node-xsxkf\" (UID: \"4d925018-2d9e-4498-9f57-3bd7532b2461\") " pod="openshift-ovn-kubernetes/ovnkube-node-xsxkf" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.492632 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4d925018-2d9e-4498-9f57-3bd7532b2461-host-kubelet\") pod \"ovnkube-node-xsxkf\" (UID: \"4d925018-2d9e-4498-9f57-3bd7532b2461\") " pod="openshift-ovn-kubernetes/ovnkube-node-xsxkf" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.492681 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/4d925018-2d9e-4498-9f57-3bd7532b2461-host-run-netns\") pod \"ovnkube-node-xsxkf\" (UID: \"4d925018-2d9e-4498-9f57-3bd7532b2461\") " pod="openshift-ovn-kubernetes/ovnkube-node-xsxkf" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.492710 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4d925018-2d9e-4498-9f57-3bd7532b2461-run-openvswitch\") pod \"ovnkube-node-xsxkf\" (UID: \"4d925018-2d9e-4498-9f57-3bd7532b2461\") " pod="openshift-ovn-kubernetes/ovnkube-node-xsxkf" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.492814 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4d925018-2d9e-4498-9f57-3bd7532b2461-host-run-netns\") pod \"ovnkube-node-xsxkf\" (UID: \"4d925018-2d9e-4498-9f57-3bd7532b2461\") " pod="openshift-ovn-kubernetes/ovnkube-node-xsxkf" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.492834 5017 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/02dd5727-894c-4693-9bc7-83dd88ce118c-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.492900 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4d925018-2d9e-4498-9f57-3bd7532b2461-run-openvswitch\") pod \"ovnkube-node-xsxkf\" (UID: \"4d925018-2d9e-4498-9f57-3bd7532b2461\") " pod="openshift-ovn-kubernetes/ovnkube-node-xsxkf" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.492921 5017 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/02dd5727-894c-4693-9bc7-83dd88ce118c-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.494102 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tr2h2\" (UniqueName: \"kubernetes.io/projected/02dd5727-894c-4693-9bc7-83dd88ce118c-kube-api-access-tr2h2\") on node \"crc\" DevicePath \"\"" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.497092 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4d925018-2d9e-4498-9f57-3bd7532b2461-ovn-node-metrics-cert\") pod \"ovnkube-node-xsxkf\" (UID: \"4d925018-2d9e-4498-9f57-3bd7532b2461\") " pod="openshift-ovn-kubernetes/ovnkube-node-xsxkf" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.503853 5017 scope.go:117] "RemoveContainer" containerID="177a3e8a7f63905262ffb4ea5f98c4fa97a8d48dd2ff075484a44ea87d66a072" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.516900 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kwbm\" (UniqueName: \"kubernetes.io/projected/4d925018-2d9e-4498-9f57-3bd7532b2461-kube-api-access-7kwbm\") pod \"ovnkube-node-xsxkf\" (UID: \"4d925018-2d9e-4498-9f57-3bd7532b2461\") " pod="openshift-ovn-kubernetes/ovnkube-node-xsxkf" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.523491 5017 scope.go:117] "RemoveContainer" containerID="749973b1dc2bcf3fc4e35277f0893a240b3571cadc45529da1229cd7ebe1748e" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.543457 5017 scope.go:117] "RemoveContainer" containerID="d6a16eea25767bb4fea94880a90ae883944934c09ff5ba0ced9641169691169d" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.560717 5017 
scope.go:117] "RemoveContainer" containerID="13649086edb9d7ffe20a9040787d55449e6541a51f03432458f5fcb965fe4723" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.575306 5017 scope.go:117] "RemoveContainer" containerID="a1cfa089e161c584c6087c42a29c10704135ccb63e4d59248cf9966e6f95c2a1" Jan 29 06:46:11 crc kubenswrapper[5017]: E0129 06:46:11.575727 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a1cfa089e161c584c6087c42a29c10704135ccb63e4d59248cf9966e6f95c2a1\": container with ID starting with a1cfa089e161c584c6087c42a29c10704135ccb63e4d59248cf9966e6f95c2a1 not found: ID does not exist" containerID="a1cfa089e161c584c6087c42a29c10704135ccb63e4d59248cf9966e6f95c2a1" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.575783 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1cfa089e161c584c6087c42a29c10704135ccb63e4d59248cf9966e6f95c2a1"} err="failed to get container status \"a1cfa089e161c584c6087c42a29c10704135ccb63e4d59248cf9966e6f95c2a1\": rpc error: code = NotFound desc = could not find container \"a1cfa089e161c584c6087c42a29c10704135ccb63e4d59248cf9966e6f95c2a1\": container with ID starting with a1cfa089e161c584c6087c42a29c10704135ccb63e4d59248cf9966e6f95c2a1 not found: ID does not exist" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.575830 5017 scope.go:117] "RemoveContainer" containerID="62449c59cb465c0cc04ebf89eb13daf672b92da42f90dad1f43ac106cf837b48" Jan 29 06:46:11 crc kubenswrapper[5017]: E0129 06:46:11.576424 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"62449c59cb465c0cc04ebf89eb13daf672b92da42f90dad1f43ac106cf837b48\": container with ID starting with 62449c59cb465c0cc04ebf89eb13daf672b92da42f90dad1f43ac106cf837b48 not found: ID does not exist" containerID="62449c59cb465c0cc04ebf89eb13daf672b92da42f90dad1f43ac106cf837b48" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.576448 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62449c59cb465c0cc04ebf89eb13daf672b92da42f90dad1f43ac106cf837b48"} err="failed to get container status \"62449c59cb465c0cc04ebf89eb13daf672b92da42f90dad1f43ac106cf837b48\": rpc error: code = NotFound desc = could not find container \"62449c59cb465c0cc04ebf89eb13daf672b92da42f90dad1f43ac106cf837b48\": container with ID starting with 62449c59cb465c0cc04ebf89eb13daf672b92da42f90dad1f43ac106cf837b48 not found: ID does not exist" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.576465 5017 scope.go:117] "RemoveContainer" containerID="f1e341b7acf35edc3ee9e5a4fec11e2270d45ea0f05df30543d2c9163b024646" Jan 29 06:46:11 crc kubenswrapper[5017]: E0129 06:46:11.576946 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1e341b7acf35edc3ee9e5a4fec11e2270d45ea0f05df30543d2c9163b024646\": container with ID starting with f1e341b7acf35edc3ee9e5a4fec11e2270d45ea0f05df30543d2c9163b024646 not found: ID does not exist" containerID="f1e341b7acf35edc3ee9e5a4fec11e2270d45ea0f05df30543d2c9163b024646" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.577014 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1e341b7acf35edc3ee9e5a4fec11e2270d45ea0f05df30543d2c9163b024646"} err="failed to get container status 
\"f1e341b7acf35edc3ee9e5a4fec11e2270d45ea0f05df30543d2c9163b024646\": rpc error: code = NotFound desc = could not find container \"f1e341b7acf35edc3ee9e5a4fec11e2270d45ea0f05df30543d2c9163b024646\": container with ID starting with f1e341b7acf35edc3ee9e5a4fec11e2270d45ea0f05df30543d2c9163b024646 not found: ID does not exist" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.577058 5017 scope.go:117] "RemoveContainer" containerID="a1fb588776e0029263904fa0ec3afa53e180b30843864f1dee6999e1cca64b18" Jan 29 06:46:11 crc kubenswrapper[5017]: E0129 06:46:11.577428 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a1fb588776e0029263904fa0ec3afa53e180b30843864f1dee6999e1cca64b18\": container with ID starting with a1fb588776e0029263904fa0ec3afa53e180b30843864f1dee6999e1cca64b18 not found: ID does not exist" containerID="a1fb588776e0029263904fa0ec3afa53e180b30843864f1dee6999e1cca64b18" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.577458 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1fb588776e0029263904fa0ec3afa53e180b30843864f1dee6999e1cca64b18"} err="failed to get container status \"a1fb588776e0029263904fa0ec3afa53e180b30843864f1dee6999e1cca64b18\": rpc error: code = NotFound desc = could not find container \"a1fb588776e0029263904fa0ec3afa53e180b30843864f1dee6999e1cca64b18\": container with ID starting with a1fb588776e0029263904fa0ec3afa53e180b30843864f1dee6999e1cca64b18 not found: ID does not exist" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.577477 5017 scope.go:117] "RemoveContainer" containerID="fc8fc0d9e440c8725e279e6649d3c1e2cdc55589f6b5786187602576e4acd835" Jan 29 06:46:11 crc kubenswrapper[5017]: E0129 06:46:11.577843 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc8fc0d9e440c8725e279e6649d3c1e2cdc55589f6b5786187602576e4acd835\": container with ID starting with fc8fc0d9e440c8725e279e6649d3c1e2cdc55589f6b5786187602576e4acd835 not found: ID does not exist" containerID="fc8fc0d9e440c8725e279e6649d3c1e2cdc55589f6b5786187602576e4acd835" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.577903 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc8fc0d9e440c8725e279e6649d3c1e2cdc55589f6b5786187602576e4acd835"} err="failed to get container status \"fc8fc0d9e440c8725e279e6649d3c1e2cdc55589f6b5786187602576e4acd835\": rpc error: code = NotFound desc = could not find container \"fc8fc0d9e440c8725e279e6649d3c1e2cdc55589f6b5786187602576e4acd835\": container with ID starting with fc8fc0d9e440c8725e279e6649d3c1e2cdc55589f6b5786187602576e4acd835 not found: ID does not exist" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.577943 5017 scope.go:117] "RemoveContainer" containerID="7eec8c502e2516637a85d2b83f7f745f9cc5d6a420f15cea465d92f71e0b8e71" Jan 29 06:46:11 crc kubenswrapper[5017]: E0129 06:46:11.578435 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7eec8c502e2516637a85d2b83f7f745f9cc5d6a420f15cea465d92f71e0b8e71\": container with ID starting with 7eec8c502e2516637a85d2b83f7f745f9cc5d6a420f15cea465d92f71e0b8e71 not found: ID does not exist" containerID="7eec8c502e2516637a85d2b83f7f745f9cc5d6a420f15cea465d92f71e0b8e71" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.578501 5017 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7eec8c502e2516637a85d2b83f7f745f9cc5d6a420f15cea465d92f71e0b8e71"} err="failed to get container status \"7eec8c502e2516637a85d2b83f7f745f9cc5d6a420f15cea465d92f71e0b8e71\": rpc error: code = NotFound desc = could not find container \"7eec8c502e2516637a85d2b83f7f745f9cc5d6a420f15cea465d92f71e0b8e71\": container with ID starting with 7eec8c502e2516637a85d2b83f7f745f9cc5d6a420f15cea465d92f71e0b8e71 not found: ID does not exist" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.578532 5017 scope.go:117] "RemoveContainer" containerID="177a3e8a7f63905262ffb4ea5f98c4fa97a8d48dd2ff075484a44ea87d66a072" Jan 29 06:46:11 crc kubenswrapper[5017]: E0129 06:46:11.578944 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"177a3e8a7f63905262ffb4ea5f98c4fa97a8d48dd2ff075484a44ea87d66a072\": container with ID starting with 177a3e8a7f63905262ffb4ea5f98c4fa97a8d48dd2ff075484a44ea87d66a072 not found: ID does not exist" containerID="177a3e8a7f63905262ffb4ea5f98c4fa97a8d48dd2ff075484a44ea87d66a072" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.579028 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"177a3e8a7f63905262ffb4ea5f98c4fa97a8d48dd2ff075484a44ea87d66a072"} err="failed to get container status \"177a3e8a7f63905262ffb4ea5f98c4fa97a8d48dd2ff075484a44ea87d66a072\": rpc error: code = NotFound desc = could not find container \"177a3e8a7f63905262ffb4ea5f98c4fa97a8d48dd2ff075484a44ea87d66a072\": container with ID starting with 177a3e8a7f63905262ffb4ea5f98c4fa97a8d48dd2ff075484a44ea87d66a072 not found: ID does not exist" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.579059 5017 scope.go:117] "RemoveContainer" containerID="749973b1dc2bcf3fc4e35277f0893a240b3571cadc45529da1229cd7ebe1748e" Jan 29 06:46:11 crc kubenswrapper[5017]: E0129 06:46:11.579710 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"749973b1dc2bcf3fc4e35277f0893a240b3571cadc45529da1229cd7ebe1748e\": container with ID starting with 749973b1dc2bcf3fc4e35277f0893a240b3571cadc45529da1229cd7ebe1748e not found: ID does not exist" containerID="749973b1dc2bcf3fc4e35277f0893a240b3571cadc45529da1229cd7ebe1748e" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.579772 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"749973b1dc2bcf3fc4e35277f0893a240b3571cadc45529da1229cd7ebe1748e"} err="failed to get container status \"749973b1dc2bcf3fc4e35277f0893a240b3571cadc45529da1229cd7ebe1748e\": rpc error: code = NotFound desc = could not find container \"749973b1dc2bcf3fc4e35277f0893a240b3571cadc45529da1229cd7ebe1748e\": container with ID starting with 749973b1dc2bcf3fc4e35277f0893a240b3571cadc45529da1229cd7ebe1748e not found: ID does not exist" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.579812 5017 scope.go:117] "RemoveContainer" containerID="d6a16eea25767bb4fea94880a90ae883944934c09ff5ba0ced9641169691169d" Jan 29 06:46:11 crc kubenswrapper[5017]: E0129 06:46:11.580312 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d6a16eea25767bb4fea94880a90ae883944934c09ff5ba0ced9641169691169d\": container with ID starting with d6a16eea25767bb4fea94880a90ae883944934c09ff5ba0ced9641169691169d not found: ID does not exist" 
containerID="d6a16eea25767bb4fea94880a90ae883944934c09ff5ba0ced9641169691169d" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.580342 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6a16eea25767bb4fea94880a90ae883944934c09ff5ba0ced9641169691169d"} err="failed to get container status \"d6a16eea25767bb4fea94880a90ae883944934c09ff5ba0ced9641169691169d\": rpc error: code = NotFound desc = could not find container \"d6a16eea25767bb4fea94880a90ae883944934c09ff5ba0ced9641169691169d\": container with ID starting with d6a16eea25767bb4fea94880a90ae883944934c09ff5ba0ced9641169691169d not found: ID does not exist" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.580360 5017 scope.go:117] "RemoveContainer" containerID="13649086edb9d7ffe20a9040787d55449e6541a51f03432458f5fcb965fe4723" Jan 29 06:46:11 crc kubenswrapper[5017]: E0129 06:46:11.580665 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13649086edb9d7ffe20a9040787d55449e6541a51f03432458f5fcb965fe4723\": container with ID starting with 13649086edb9d7ffe20a9040787d55449e6541a51f03432458f5fcb965fe4723 not found: ID does not exist" containerID="13649086edb9d7ffe20a9040787d55449e6541a51f03432458f5fcb965fe4723" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.580712 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13649086edb9d7ffe20a9040787d55449e6541a51f03432458f5fcb965fe4723"} err="failed to get container status \"13649086edb9d7ffe20a9040787d55449e6541a51f03432458f5fcb965fe4723\": rpc error: code = NotFound desc = could not find container \"13649086edb9d7ffe20a9040787d55449e6541a51f03432458f5fcb965fe4723\": container with ID starting with 13649086edb9d7ffe20a9040787d55449e6541a51f03432458f5fcb965fe4723 not found: ID does not exist" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.580740 5017 scope.go:117] "RemoveContainer" containerID="a1cfa089e161c584c6087c42a29c10704135ccb63e4d59248cf9966e6f95c2a1" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.581124 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1cfa089e161c584c6087c42a29c10704135ccb63e4d59248cf9966e6f95c2a1"} err="failed to get container status \"a1cfa089e161c584c6087c42a29c10704135ccb63e4d59248cf9966e6f95c2a1\": rpc error: code = NotFound desc = could not find container \"a1cfa089e161c584c6087c42a29c10704135ccb63e4d59248cf9966e6f95c2a1\": container with ID starting with a1cfa089e161c584c6087c42a29c10704135ccb63e4d59248cf9966e6f95c2a1 not found: ID does not exist" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.581149 5017 scope.go:117] "RemoveContainer" containerID="62449c59cb465c0cc04ebf89eb13daf672b92da42f90dad1f43ac106cf837b48" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.581483 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62449c59cb465c0cc04ebf89eb13daf672b92da42f90dad1f43ac106cf837b48"} err="failed to get container status \"62449c59cb465c0cc04ebf89eb13daf672b92da42f90dad1f43ac106cf837b48\": rpc error: code = NotFound desc = could not find container \"62449c59cb465c0cc04ebf89eb13daf672b92da42f90dad1f43ac106cf837b48\": container with ID starting with 62449c59cb465c0cc04ebf89eb13daf672b92da42f90dad1f43ac106cf837b48 not found: ID does not exist" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.581504 5017 scope.go:117] "RemoveContainer" 
containerID="f1e341b7acf35edc3ee9e5a4fec11e2270d45ea0f05df30543d2c9163b024646" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.581905 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1e341b7acf35edc3ee9e5a4fec11e2270d45ea0f05df30543d2c9163b024646"} err="failed to get container status \"f1e341b7acf35edc3ee9e5a4fec11e2270d45ea0f05df30543d2c9163b024646\": rpc error: code = NotFound desc = could not find container \"f1e341b7acf35edc3ee9e5a4fec11e2270d45ea0f05df30543d2c9163b024646\": container with ID starting with f1e341b7acf35edc3ee9e5a4fec11e2270d45ea0f05df30543d2c9163b024646 not found: ID does not exist" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.581953 5017 scope.go:117] "RemoveContainer" containerID="a1fb588776e0029263904fa0ec3afa53e180b30843864f1dee6999e1cca64b18" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.582418 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1fb588776e0029263904fa0ec3afa53e180b30843864f1dee6999e1cca64b18"} err="failed to get container status \"a1fb588776e0029263904fa0ec3afa53e180b30843864f1dee6999e1cca64b18\": rpc error: code = NotFound desc = could not find container \"a1fb588776e0029263904fa0ec3afa53e180b30843864f1dee6999e1cca64b18\": container with ID starting with a1fb588776e0029263904fa0ec3afa53e180b30843864f1dee6999e1cca64b18 not found: ID does not exist" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.582477 5017 scope.go:117] "RemoveContainer" containerID="fc8fc0d9e440c8725e279e6649d3c1e2cdc55589f6b5786187602576e4acd835" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.582836 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc8fc0d9e440c8725e279e6649d3c1e2cdc55589f6b5786187602576e4acd835"} err="failed to get container status \"fc8fc0d9e440c8725e279e6649d3c1e2cdc55589f6b5786187602576e4acd835\": rpc error: code = NotFound desc = could not find container \"fc8fc0d9e440c8725e279e6649d3c1e2cdc55589f6b5786187602576e4acd835\": container with ID starting with fc8fc0d9e440c8725e279e6649d3c1e2cdc55589f6b5786187602576e4acd835 not found: ID does not exist" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.582863 5017 scope.go:117] "RemoveContainer" containerID="7eec8c502e2516637a85d2b83f7f745f9cc5d6a420f15cea465d92f71e0b8e71" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.583195 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7eec8c502e2516637a85d2b83f7f745f9cc5d6a420f15cea465d92f71e0b8e71"} err="failed to get container status \"7eec8c502e2516637a85d2b83f7f745f9cc5d6a420f15cea465d92f71e0b8e71\": rpc error: code = NotFound desc = could not find container \"7eec8c502e2516637a85d2b83f7f745f9cc5d6a420f15cea465d92f71e0b8e71\": container with ID starting with 7eec8c502e2516637a85d2b83f7f745f9cc5d6a420f15cea465d92f71e0b8e71 not found: ID does not exist" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.583239 5017 scope.go:117] "RemoveContainer" containerID="177a3e8a7f63905262ffb4ea5f98c4fa97a8d48dd2ff075484a44ea87d66a072" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.583610 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"177a3e8a7f63905262ffb4ea5f98c4fa97a8d48dd2ff075484a44ea87d66a072"} err="failed to get container status \"177a3e8a7f63905262ffb4ea5f98c4fa97a8d48dd2ff075484a44ea87d66a072\": rpc error: code = NotFound desc = could not find 
container \"177a3e8a7f63905262ffb4ea5f98c4fa97a8d48dd2ff075484a44ea87d66a072\": container with ID starting with 177a3e8a7f63905262ffb4ea5f98c4fa97a8d48dd2ff075484a44ea87d66a072 not found: ID does not exist" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.583634 5017 scope.go:117] "RemoveContainer" containerID="749973b1dc2bcf3fc4e35277f0893a240b3571cadc45529da1229cd7ebe1748e" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.583945 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"749973b1dc2bcf3fc4e35277f0893a240b3571cadc45529da1229cd7ebe1748e"} err="failed to get container status \"749973b1dc2bcf3fc4e35277f0893a240b3571cadc45529da1229cd7ebe1748e\": rpc error: code = NotFound desc = could not find container \"749973b1dc2bcf3fc4e35277f0893a240b3571cadc45529da1229cd7ebe1748e\": container with ID starting with 749973b1dc2bcf3fc4e35277f0893a240b3571cadc45529da1229cd7ebe1748e not found: ID does not exist" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.584039 5017 scope.go:117] "RemoveContainer" containerID="d6a16eea25767bb4fea94880a90ae883944934c09ff5ba0ced9641169691169d" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.584395 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6a16eea25767bb4fea94880a90ae883944934c09ff5ba0ced9641169691169d"} err="failed to get container status \"d6a16eea25767bb4fea94880a90ae883944934c09ff5ba0ced9641169691169d\": rpc error: code = NotFound desc = could not find container \"d6a16eea25767bb4fea94880a90ae883944934c09ff5ba0ced9641169691169d\": container with ID starting with d6a16eea25767bb4fea94880a90ae883944934c09ff5ba0ced9641169691169d not found: ID does not exist" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.584415 5017 scope.go:117] "RemoveContainer" containerID="13649086edb9d7ffe20a9040787d55449e6541a51f03432458f5fcb965fe4723" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.584750 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13649086edb9d7ffe20a9040787d55449e6541a51f03432458f5fcb965fe4723"} err="failed to get container status \"13649086edb9d7ffe20a9040787d55449e6541a51f03432458f5fcb965fe4723\": rpc error: code = NotFound desc = could not find container \"13649086edb9d7ffe20a9040787d55449e6541a51f03432458f5fcb965fe4723\": container with ID starting with 13649086edb9d7ffe20a9040787d55449e6541a51f03432458f5fcb965fe4723 not found: ID does not exist" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.584784 5017 scope.go:117] "RemoveContainer" containerID="a1cfa089e161c584c6087c42a29c10704135ccb63e4d59248cf9966e6f95c2a1" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.585218 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1cfa089e161c584c6087c42a29c10704135ccb63e4d59248cf9966e6f95c2a1"} err="failed to get container status \"a1cfa089e161c584c6087c42a29c10704135ccb63e4d59248cf9966e6f95c2a1\": rpc error: code = NotFound desc = could not find container \"a1cfa089e161c584c6087c42a29c10704135ccb63e4d59248cf9966e6f95c2a1\": container with ID starting with a1cfa089e161c584c6087c42a29c10704135ccb63e4d59248cf9966e6f95c2a1 not found: ID does not exist" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.585239 5017 scope.go:117] "RemoveContainer" containerID="62449c59cb465c0cc04ebf89eb13daf672b92da42f90dad1f43ac106cf837b48" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.585622 5017 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62449c59cb465c0cc04ebf89eb13daf672b92da42f90dad1f43ac106cf837b48"} err="failed to get container status \"62449c59cb465c0cc04ebf89eb13daf672b92da42f90dad1f43ac106cf837b48\": rpc error: code = NotFound desc = could not find container \"62449c59cb465c0cc04ebf89eb13daf672b92da42f90dad1f43ac106cf837b48\": container with ID starting with 62449c59cb465c0cc04ebf89eb13daf672b92da42f90dad1f43ac106cf837b48 not found: ID does not exist" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.585643 5017 scope.go:117] "RemoveContainer" containerID="f1e341b7acf35edc3ee9e5a4fec11e2270d45ea0f05df30543d2c9163b024646" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.586117 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1e341b7acf35edc3ee9e5a4fec11e2270d45ea0f05df30543d2c9163b024646"} err="failed to get container status \"f1e341b7acf35edc3ee9e5a4fec11e2270d45ea0f05df30543d2c9163b024646\": rpc error: code = NotFound desc = could not find container \"f1e341b7acf35edc3ee9e5a4fec11e2270d45ea0f05df30543d2c9163b024646\": container with ID starting with f1e341b7acf35edc3ee9e5a4fec11e2270d45ea0f05df30543d2c9163b024646 not found: ID does not exist" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.586146 5017 scope.go:117] "RemoveContainer" containerID="a1fb588776e0029263904fa0ec3afa53e180b30843864f1dee6999e1cca64b18" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.586448 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1fb588776e0029263904fa0ec3afa53e180b30843864f1dee6999e1cca64b18"} err="failed to get container status \"a1fb588776e0029263904fa0ec3afa53e180b30843864f1dee6999e1cca64b18\": rpc error: code = NotFound desc = could not find container \"a1fb588776e0029263904fa0ec3afa53e180b30843864f1dee6999e1cca64b18\": container with ID starting with a1fb588776e0029263904fa0ec3afa53e180b30843864f1dee6999e1cca64b18 not found: ID does not exist" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.586488 5017 scope.go:117] "RemoveContainer" containerID="fc8fc0d9e440c8725e279e6649d3c1e2cdc55589f6b5786187602576e4acd835" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.586826 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc8fc0d9e440c8725e279e6649d3c1e2cdc55589f6b5786187602576e4acd835"} err="failed to get container status \"fc8fc0d9e440c8725e279e6649d3c1e2cdc55589f6b5786187602576e4acd835\": rpc error: code = NotFound desc = could not find container \"fc8fc0d9e440c8725e279e6649d3c1e2cdc55589f6b5786187602576e4acd835\": container with ID starting with fc8fc0d9e440c8725e279e6649d3c1e2cdc55589f6b5786187602576e4acd835 not found: ID does not exist" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.586852 5017 scope.go:117] "RemoveContainer" containerID="7eec8c502e2516637a85d2b83f7f745f9cc5d6a420f15cea465d92f71e0b8e71" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.587161 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7eec8c502e2516637a85d2b83f7f745f9cc5d6a420f15cea465d92f71e0b8e71"} err="failed to get container status \"7eec8c502e2516637a85d2b83f7f745f9cc5d6a420f15cea465d92f71e0b8e71\": rpc error: code = NotFound desc = could not find container \"7eec8c502e2516637a85d2b83f7f745f9cc5d6a420f15cea465d92f71e0b8e71\": container with ID starting with 
7eec8c502e2516637a85d2b83f7f745f9cc5d6a420f15cea465d92f71e0b8e71 not found: ID does not exist" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.587196 5017 scope.go:117] "RemoveContainer" containerID="177a3e8a7f63905262ffb4ea5f98c4fa97a8d48dd2ff075484a44ea87d66a072" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.587564 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"177a3e8a7f63905262ffb4ea5f98c4fa97a8d48dd2ff075484a44ea87d66a072"} err="failed to get container status \"177a3e8a7f63905262ffb4ea5f98c4fa97a8d48dd2ff075484a44ea87d66a072\": rpc error: code = NotFound desc = could not find container \"177a3e8a7f63905262ffb4ea5f98c4fa97a8d48dd2ff075484a44ea87d66a072\": container with ID starting with 177a3e8a7f63905262ffb4ea5f98c4fa97a8d48dd2ff075484a44ea87d66a072 not found: ID does not exist" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.587599 5017 scope.go:117] "RemoveContainer" containerID="749973b1dc2bcf3fc4e35277f0893a240b3571cadc45529da1229cd7ebe1748e" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.588064 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"749973b1dc2bcf3fc4e35277f0893a240b3571cadc45529da1229cd7ebe1748e"} err="failed to get container status \"749973b1dc2bcf3fc4e35277f0893a240b3571cadc45529da1229cd7ebe1748e\": rpc error: code = NotFound desc = could not find container \"749973b1dc2bcf3fc4e35277f0893a240b3571cadc45529da1229cd7ebe1748e\": container with ID starting with 749973b1dc2bcf3fc4e35277f0893a240b3571cadc45529da1229cd7ebe1748e not found: ID does not exist" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.588159 5017 scope.go:117] "RemoveContainer" containerID="d6a16eea25767bb4fea94880a90ae883944934c09ff5ba0ced9641169691169d" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.588556 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6a16eea25767bb4fea94880a90ae883944934c09ff5ba0ced9641169691169d"} err="failed to get container status \"d6a16eea25767bb4fea94880a90ae883944934c09ff5ba0ced9641169691169d\": rpc error: code = NotFound desc = could not find container \"d6a16eea25767bb4fea94880a90ae883944934c09ff5ba0ced9641169691169d\": container with ID starting with d6a16eea25767bb4fea94880a90ae883944934c09ff5ba0ced9641169691169d not found: ID does not exist" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.588611 5017 scope.go:117] "RemoveContainer" containerID="13649086edb9d7ffe20a9040787d55449e6541a51f03432458f5fcb965fe4723" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.588931 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13649086edb9d7ffe20a9040787d55449e6541a51f03432458f5fcb965fe4723"} err="failed to get container status \"13649086edb9d7ffe20a9040787d55449e6541a51f03432458f5fcb965fe4723\": rpc error: code = NotFound desc = could not find container \"13649086edb9d7ffe20a9040787d55449e6541a51f03432458f5fcb965fe4723\": container with ID starting with 13649086edb9d7ffe20a9040787d55449e6541a51f03432458f5fcb965fe4723 not found: ID does not exist" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.588982 5017 scope.go:117] "RemoveContainer" containerID="a1cfa089e161c584c6087c42a29c10704135ccb63e4d59248cf9966e6f95c2a1" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.589351 5017 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"a1cfa089e161c584c6087c42a29c10704135ccb63e4d59248cf9966e6f95c2a1"} err="failed to get container status \"a1cfa089e161c584c6087c42a29c10704135ccb63e4d59248cf9966e6f95c2a1\": rpc error: code = NotFound desc = could not find container \"a1cfa089e161c584c6087c42a29c10704135ccb63e4d59248cf9966e6f95c2a1\": container with ID starting with a1cfa089e161c584c6087c42a29c10704135ccb63e4d59248cf9966e6f95c2a1 not found: ID does not exist" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.589374 5017 scope.go:117] "RemoveContainer" containerID="62449c59cb465c0cc04ebf89eb13daf672b92da42f90dad1f43ac106cf837b48" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.589897 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62449c59cb465c0cc04ebf89eb13daf672b92da42f90dad1f43ac106cf837b48"} err="failed to get container status \"62449c59cb465c0cc04ebf89eb13daf672b92da42f90dad1f43ac106cf837b48\": rpc error: code = NotFound desc = could not find container \"62449c59cb465c0cc04ebf89eb13daf672b92da42f90dad1f43ac106cf837b48\": container with ID starting with 62449c59cb465c0cc04ebf89eb13daf672b92da42f90dad1f43ac106cf837b48 not found: ID does not exist" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.589930 5017 scope.go:117] "RemoveContainer" containerID="f1e341b7acf35edc3ee9e5a4fec11e2270d45ea0f05df30543d2c9163b024646" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.590337 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1e341b7acf35edc3ee9e5a4fec11e2270d45ea0f05df30543d2c9163b024646"} err="failed to get container status \"f1e341b7acf35edc3ee9e5a4fec11e2270d45ea0f05df30543d2c9163b024646\": rpc error: code = NotFound desc = could not find container \"f1e341b7acf35edc3ee9e5a4fec11e2270d45ea0f05df30543d2c9163b024646\": container with ID starting with f1e341b7acf35edc3ee9e5a4fec11e2270d45ea0f05df30543d2c9163b024646 not found: ID does not exist" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.590362 5017 scope.go:117] "RemoveContainer" containerID="a1fb588776e0029263904fa0ec3afa53e180b30843864f1dee6999e1cca64b18" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.590608 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1fb588776e0029263904fa0ec3afa53e180b30843864f1dee6999e1cca64b18"} err="failed to get container status \"a1fb588776e0029263904fa0ec3afa53e180b30843864f1dee6999e1cca64b18\": rpc error: code = NotFound desc = could not find container \"a1fb588776e0029263904fa0ec3afa53e180b30843864f1dee6999e1cca64b18\": container with ID starting with a1fb588776e0029263904fa0ec3afa53e180b30843864f1dee6999e1cca64b18 not found: ID does not exist" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.590634 5017 scope.go:117] "RemoveContainer" containerID="fc8fc0d9e440c8725e279e6649d3c1e2cdc55589f6b5786187602576e4acd835" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.590896 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc8fc0d9e440c8725e279e6649d3c1e2cdc55589f6b5786187602576e4acd835"} err="failed to get container status \"fc8fc0d9e440c8725e279e6649d3c1e2cdc55589f6b5786187602576e4acd835\": rpc error: code = NotFound desc = could not find container \"fc8fc0d9e440c8725e279e6649d3c1e2cdc55589f6b5786187602576e4acd835\": container with ID starting with fc8fc0d9e440c8725e279e6649d3c1e2cdc55589f6b5786187602576e4acd835 not found: ID does not exist" Jan 
29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.590929 5017 scope.go:117] "RemoveContainer" containerID="7eec8c502e2516637a85d2b83f7f745f9cc5d6a420f15cea465d92f71e0b8e71" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.591180 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7eec8c502e2516637a85d2b83f7f745f9cc5d6a420f15cea465d92f71e0b8e71"} err="failed to get container status \"7eec8c502e2516637a85d2b83f7f745f9cc5d6a420f15cea465d92f71e0b8e71\": rpc error: code = NotFound desc = could not find container \"7eec8c502e2516637a85d2b83f7f745f9cc5d6a420f15cea465d92f71e0b8e71\": container with ID starting with 7eec8c502e2516637a85d2b83f7f745f9cc5d6a420f15cea465d92f71e0b8e71 not found: ID does not exist" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.591207 5017 scope.go:117] "RemoveContainer" containerID="177a3e8a7f63905262ffb4ea5f98c4fa97a8d48dd2ff075484a44ea87d66a072" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.591407 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"177a3e8a7f63905262ffb4ea5f98c4fa97a8d48dd2ff075484a44ea87d66a072"} err="failed to get container status \"177a3e8a7f63905262ffb4ea5f98c4fa97a8d48dd2ff075484a44ea87d66a072\": rpc error: code = NotFound desc = could not find container \"177a3e8a7f63905262ffb4ea5f98c4fa97a8d48dd2ff075484a44ea87d66a072\": container with ID starting with 177a3e8a7f63905262ffb4ea5f98c4fa97a8d48dd2ff075484a44ea87d66a072 not found: ID does not exist" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.591431 5017 scope.go:117] "RemoveContainer" containerID="749973b1dc2bcf3fc4e35277f0893a240b3571cadc45529da1229cd7ebe1748e" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.591692 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"749973b1dc2bcf3fc4e35277f0893a240b3571cadc45529da1229cd7ebe1748e"} err="failed to get container status \"749973b1dc2bcf3fc4e35277f0893a240b3571cadc45529da1229cd7ebe1748e\": rpc error: code = NotFound desc = could not find container \"749973b1dc2bcf3fc4e35277f0893a240b3571cadc45529da1229cd7ebe1748e\": container with ID starting with 749973b1dc2bcf3fc4e35277f0893a240b3571cadc45529da1229cd7ebe1748e not found: ID does not exist" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.591734 5017 scope.go:117] "RemoveContainer" containerID="d6a16eea25767bb4fea94880a90ae883944934c09ff5ba0ced9641169691169d" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.592065 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6a16eea25767bb4fea94880a90ae883944934c09ff5ba0ced9641169691169d"} err="failed to get container status \"d6a16eea25767bb4fea94880a90ae883944934c09ff5ba0ced9641169691169d\": rpc error: code = NotFound desc = could not find container \"d6a16eea25767bb4fea94880a90ae883944934c09ff5ba0ced9641169691169d\": container with ID starting with d6a16eea25767bb4fea94880a90ae883944934c09ff5ba0ced9641169691169d not found: ID does not exist" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.592089 5017 scope.go:117] "RemoveContainer" containerID="13649086edb9d7ffe20a9040787d55449e6541a51f03432458f5fcb965fe4723" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.592364 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13649086edb9d7ffe20a9040787d55449e6541a51f03432458f5fcb965fe4723"} err="failed to get container status 
\"13649086edb9d7ffe20a9040787d55449e6541a51f03432458f5fcb965fe4723\": rpc error: code = NotFound desc = could not find container \"13649086edb9d7ffe20a9040787d55449e6541a51f03432458f5fcb965fe4723\": container with ID starting with 13649086edb9d7ffe20a9040787d55449e6541a51f03432458f5fcb965fe4723 not found: ID does not exist" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.592398 5017 scope.go:117] "RemoveContainer" containerID="a1cfa089e161c584c6087c42a29c10704135ccb63e4d59248cf9966e6f95c2a1" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.592646 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1cfa089e161c584c6087c42a29c10704135ccb63e4d59248cf9966e6f95c2a1"} err="failed to get container status \"a1cfa089e161c584c6087c42a29c10704135ccb63e4d59248cf9966e6f95c2a1\": rpc error: code = NotFound desc = could not find container \"a1cfa089e161c584c6087c42a29c10704135ccb63e4d59248cf9966e6f95c2a1\": container with ID starting with a1cfa089e161c584c6087c42a29c10704135ccb63e4d59248cf9966e6f95c2a1 not found: ID does not exist" Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.629474 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-xsxkf" Jan 29 06:46:11 crc kubenswrapper[5017]: W0129 06:46:11.652471 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4d925018_2d9e_4498_9f57_3bd7532b2461.slice/crio-497934c1f6a4b0ead8a58210abcf789867baed19a38b4bd1d532ffcad7c7292e WatchSource:0}: Error finding container 497934c1f6a4b0ead8a58210abcf789867baed19a38b4bd1d532ffcad7c7292e: Status 404 returned error can't find the container with id 497934c1f6a4b0ead8a58210abcf789867baed19a38b4bd1d532ffcad7c7292e Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.741722 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-wqgmk"] Jan 29 06:46:11 crc kubenswrapper[5017]: I0129 06:46:11.742387 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-wqgmk"] Jan 29 06:46:12 crc kubenswrapper[5017]: I0129 06:46:12.328059 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02dd5727-894c-4693-9bc7-83dd88ce118c" path="/var/lib/kubelet/pods/02dd5727-894c-4693-9bc7-83dd88ce118c/volumes" Jan 29 06:46:12 crc kubenswrapper[5017]: I0129 06:46:12.394167 5017 generic.go:334] "Generic (PLEG): container finished" podID="4d925018-2d9e-4498-9f57-3bd7532b2461" containerID="6fdf1921aebdd396b13df0cc28d66ed2e56a0b80d4be4ee2459df952ea651250" exitCode=0 Jan 29 06:46:12 crc kubenswrapper[5017]: I0129 06:46:12.394251 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xsxkf" event={"ID":"4d925018-2d9e-4498-9f57-3bd7532b2461","Type":"ContainerDied","Data":"6fdf1921aebdd396b13df0cc28d66ed2e56a0b80d4be4ee2459df952ea651250"} Jan 29 06:46:12 crc kubenswrapper[5017]: I0129 06:46:12.394308 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xsxkf" event={"ID":"4d925018-2d9e-4498-9f57-3bd7532b2461","Type":"ContainerStarted","Data":"497934c1f6a4b0ead8a58210abcf789867baed19a38b4bd1d532ffcad7c7292e"} Jan 29 06:46:13 crc kubenswrapper[5017]: I0129 06:46:13.405658 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xsxkf" 
event={"ID":"4d925018-2d9e-4498-9f57-3bd7532b2461","Type":"ContainerStarted","Data":"f2bed39dbc2242dad330330b08a5b889ca8d47fab814f85006473a9da4dc1516"} Jan 29 06:46:13 crc kubenswrapper[5017]: I0129 06:46:13.405962 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xsxkf" event={"ID":"4d925018-2d9e-4498-9f57-3bd7532b2461","Type":"ContainerStarted","Data":"3e2bd46d5b1f1234e0a80d3c16bc9b81e925971480440fdfa2a30280e8613487"} Jan 29 06:46:13 crc kubenswrapper[5017]: I0129 06:46:13.405973 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xsxkf" event={"ID":"4d925018-2d9e-4498-9f57-3bd7532b2461","Type":"ContainerStarted","Data":"b5a85823f1aee8b15d3f76a4b51df9660312570e6b29faaa61c8fb157b5d0a81"} Jan 29 06:46:13 crc kubenswrapper[5017]: I0129 06:46:13.406028 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xsxkf" event={"ID":"4d925018-2d9e-4498-9f57-3bd7532b2461","Type":"ContainerStarted","Data":"b022079c1824ad788623019bfdf6f5c4d67627a4c9da1b8e5687aa9dd495c1d8"} Jan 29 06:46:13 crc kubenswrapper[5017]: I0129 06:46:13.406038 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xsxkf" event={"ID":"4d925018-2d9e-4498-9f57-3bd7532b2461","Type":"ContainerStarted","Data":"54ca8614bdefa875da981565a6b3d0432bb09bfd99b4f73fc4b7b41116fd0f87"} Jan 29 06:46:13 crc kubenswrapper[5017]: I0129 06:46:13.406046 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xsxkf" event={"ID":"4d925018-2d9e-4498-9f57-3bd7532b2461","Type":"ContainerStarted","Data":"1b582d0ea6588707000b2fd3b4f79905bf1639097812a81785b5ed690e6ef1d6"} Jan 29 06:46:14 crc kubenswrapper[5017]: I0129 06:46:14.675158 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-hdf2s"] Jan 29 06:46:14 crc kubenswrapper[5017]: I0129 06:46:14.676760 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-hdf2s" Jan 29 06:46:14 crc kubenswrapper[5017]: I0129 06:46:14.679542 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Jan 29 06:46:14 crc kubenswrapper[5017]: I0129 06:46:14.680270 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Jan 29 06:46:14 crc kubenswrapper[5017]: I0129 06:46:14.680447 5017 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-bz6ff" Jan 29 06:46:14 crc kubenswrapper[5017]: I0129 06:46:14.681487 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Jan 29 06:46:14 crc kubenswrapper[5017]: I0129 06:46:14.746811 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gt2vp\" (UniqueName: \"kubernetes.io/projected/150cf209-f2d1-4a9a-b965-cd5c4f41106f-kube-api-access-gt2vp\") pod \"crc-storage-crc-hdf2s\" (UID: \"150cf209-f2d1-4a9a-b965-cd5c4f41106f\") " pod="crc-storage/crc-storage-crc-hdf2s" Jan 29 06:46:14 crc kubenswrapper[5017]: I0129 06:46:14.746868 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/150cf209-f2d1-4a9a-b965-cd5c4f41106f-crc-storage\") pod \"crc-storage-crc-hdf2s\" (UID: \"150cf209-f2d1-4a9a-b965-cd5c4f41106f\") " pod="crc-storage/crc-storage-crc-hdf2s" Jan 29 06:46:14 crc kubenswrapper[5017]: I0129 06:46:14.747062 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/150cf209-f2d1-4a9a-b965-cd5c4f41106f-node-mnt\") pod \"crc-storage-crc-hdf2s\" (UID: \"150cf209-f2d1-4a9a-b965-cd5c4f41106f\") " pod="crc-storage/crc-storage-crc-hdf2s" Jan 29 06:46:14 crc kubenswrapper[5017]: I0129 06:46:14.772705 5017 scope.go:117] "RemoveContainer" containerID="a7429ec96a9c4de8ad2620003f7b591d38fef6a019e78098e9ba927e3fd79252" Jan 29 06:46:14 crc kubenswrapper[5017]: I0129 06:46:14.848220 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gt2vp\" (UniqueName: \"kubernetes.io/projected/150cf209-f2d1-4a9a-b965-cd5c4f41106f-kube-api-access-gt2vp\") pod \"crc-storage-crc-hdf2s\" (UID: \"150cf209-f2d1-4a9a-b965-cd5c4f41106f\") " pod="crc-storage/crc-storage-crc-hdf2s" Jan 29 06:46:14 crc kubenswrapper[5017]: I0129 06:46:14.848280 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/150cf209-f2d1-4a9a-b965-cd5c4f41106f-crc-storage\") pod \"crc-storage-crc-hdf2s\" (UID: \"150cf209-f2d1-4a9a-b965-cd5c4f41106f\") " pod="crc-storage/crc-storage-crc-hdf2s" Jan 29 06:46:14 crc kubenswrapper[5017]: I0129 06:46:14.848940 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/150cf209-f2d1-4a9a-b965-cd5c4f41106f-crc-storage\") pod \"crc-storage-crc-hdf2s\" (UID: \"150cf209-f2d1-4a9a-b965-cd5c4f41106f\") " pod="crc-storage/crc-storage-crc-hdf2s" Jan 29 06:46:14 crc kubenswrapper[5017]: I0129 06:46:14.849003 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/150cf209-f2d1-4a9a-b965-cd5c4f41106f-node-mnt\") pod \"crc-storage-crc-hdf2s\" (UID: \"150cf209-f2d1-4a9a-b965-cd5c4f41106f\") " 
pod="crc-storage/crc-storage-crc-hdf2s" Jan 29 06:46:14 crc kubenswrapper[5017]: I0129 06:46:14.849039 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/150cf209-f2d1-4a9a-b965-cd5c4f41106f-node-mnt\") pod \"crc-storage-crc-hdf2s\" (UID: \"150cf209-f2d1-4a9a-b965-cd5c4f41106f\") " pod="crc-storage/crc-storage-crc-hdf2s" Jan 29 06:46:14 crc kubenswrapper[5017]: I0129 06:46:14.883645 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gt2vp\" (UniqueName: \"kubernetes.io/projected/150cf209-f2d1-4a9a-b965-cd5c4f41106f-kube-api-access-gt2vp\") pod \"crc-storage-crc-hdf2s\" (UID: \"150cf209-f2d1-4a9a-b965-cd5c4f41106f\") " pod="crc-storage/crc-storage-crc-hdf2s" Jan 29 06:46:15 crc kubenswrapper[5017]: I0129 06:46:15.009997 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-hdf2s" Jan 29 06:46:15 crc kubenswrapper[5017]: E0129 06:46:15.074872 5017 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-hdf2s_crc-storage_150cf209-f2d1-4a9a-b965-cd5c4f41106f_0(042d7e167070c7370d21e81e189889ee1218383693191254e7a25b5092f944ba): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 29 06:46:15 crc kubenswrapper[5017]: E0129 06:46:15.075629 5017 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-hdf2s_crc-storage_150cf209-f2d1-4a9a-b965-cd5c4f41106f_0(042d7e167070c7370d21e81e189889ee1218383693191254e7a25b5092f944ba): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-hdf2s" Jan 29 06:46:15 crc kubenswrapper[5017]: E0129 06:46:15.075676 5017 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-hdf2s_crc-storage_150cf209-f2d1-4a9a-b965-cd5c4f41106f_0(042d7e167070c7370d21e81e189889ee1218383693191254e7a25b5092f944ba): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-hdf2s" Jan 29 06:46:15 crc kubenswrapper[5017]: E0129 06:46:15.075761 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-hdf2s_crc-storage(150cf209-f2d1-4a9a-b965-cd5c4f41106f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-hdf2s_crc-storage(150cf209-f2d1-4a9a-b965-cd5c4f41106f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-hdf2s_crc-storage_150cf209-f2d1-4a9a-b965-cd5c4f41106f_0(042d7e167070c7370d21e81e189889ee1218383693191254e7a25b5092f944ba): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="crc-storage/crc-storage-crc-hdf2s" podUID="150cf209-f2d1-4a9a-b965-cd5c4f41106f" Jan 29 06:46:15 crc kubenswrapper[5017]: I0129 06:46:15.427233 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xsxkf" event={"ID":"4d925018-2d9e-4498-9f57-3bd7532b2461","Type":"ContainerStarted","Data":"783679cbf90994efe46625d8a039c673026481e28a83d9e11714658df768f4af"} Jan 29 06:46:15 crc kubenswrapper[5017]: I0129 06:46:15.429668 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9jkcd_8ae056f0-e054-45da-9638-73074b7c8a3b/kube-multus/2.log" Jan 29 06:46:18 crc kubenswrapper[5017]: I0129 06:46:18.459815 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xsxkf" event={"ID":"4d925018-2d9e-4498-9f57-3bd7532b2461","Type":"ContainerStarted","Data":"592df42fc172c6b381ea03efd728ed86501264d5e327cb5571bda9fa3f133fb9"} Jan 29 06:46:18 crc kubenswrapper[5017]: I0129 06:46:18.462188 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-xsxkf" Jan 29 06:46:18 crc kubenswrapper[5017]: I0129 06:46:18.462252 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-xsxkf" Jan 29 06:46:18 crc kubenswrapper[5017]: I0129 06:46:18.462279 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-xsxkf" Jan 29 06:46:18 crc kubenswrapper[5017]: I0129 06:46:18.491572 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-xsxkf" podStartSLOduration=7.491543977 podStartE2EDuration="7.491543977s" podCreationTimestamp="2026-01-29 06:46:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:46:18.489625802 +0000 UTC m=+664.864073432" watchObservedRunningTime="2026-01-29 06:46:18.491543977 +0000 UTC m=+664.865991587" Jan 29 06:46:18 crc kubenswrapper[5017]: I0129 06:46:18.498405 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-xsxkf" Jan 29 06:46:18 crc kubenswrapper[5017]: I0129 06:46:18.498758 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-xsxkf" Jan 29 06:46:18 crc kubenswrapper[5017]: I0129 06:46:18.561108 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-hdf2s"] Jan 29 06:46:18 crc kubenswrapper[5017]: I0129 06:46:18.561242 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-hdf2s" Jan 29 06:46:18 crc kubenswrapper[5017]: I0129 06:46:18.561801 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-hdf2s" Jan 29 06:46:18 crc kubenswrapper[5017]: E0129 06:46:18.587490 5017 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-hdf2s_crc-storage_150cf209-f2d1-4a9a-b965-cd5c4f41106f_0(54fd9d1eb29f09a6b5664e7994a4c9e44a54b0a7ceec4766b8f27bbbebd92ecf): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Jan 29 06:46:18 crc kubenswrapper[5017]: E0129 06:46:18.587602 5017 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-hdf2s_crc-storage_150cf209-f2d1-4a9a-b965-cd5c4f41106f_0(54fd9d1eb29f09a6b5664e7994a4c9e44a54b0a7ceec4766b8f27bbbebd92ecf): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-hdf2s" Jan 29 06:46:18 crc kubenswrapper[5017]: E0129 06:46:18.587639 5017 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-hdf2s_crc-storage_150cf209-f2d1-4a9a-b965-cd5c4f41106f_0(54fd9d1eb29f09a6b5664e7994a4c9e44a54b0a7ceec4766b8f27bbbebd92ecf): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-hdf2s" Jan 29 06:46:18 crc kubenswrapper[5017]: E0129 06:46:18.587721 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-hdf2s_crc-storage(150cf209-f2d1-4a9a-b965-cd5c4f41106f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-hdf2s_crc-storage(150cf209-f2d1-4a9a-b965-cd5c4f41106f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-hdf2s_crc-storage_150cf209-f2d1-4a9a-b965-cd5c4f41106f_0(54fd9d1eb29f09a6b5664e7994a4c9e44a54b0a7ceec4766b8f27bbbebd92ecf): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="crc-storage/crc-storage-crc-hdf2s" podUID="150cf209-f2d1-4a9a-b965-cd5c4f41106f" Jan 29 06:46:23 crc kubenswrapper[5017]: I0129 06:46:23.316316 5017 scope.go:117] "RemoveContainer" containerID="81abd23058c4d929cf01618abcab59dae4c62bd31ead39da8d9483cff0713526" Jan 29 06:46:23 crc kubenswrapper[5017]: E0129 06:46:23.317095 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-9jkcd_openshift-multus(8ae056f0-e054-45da-9638-73074b7c8a3b)\"" pod="openshift-multus/multus-9jkcd" podUID="8ae056f0-e054-45da-9638-73074b7c8a3b" Jan 29 06:46:30 crc kubenswrapper[5017]: I0129 06:46:30.316078 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-hdf2s" Jan 29 06:46:30 crc kubenswrapper[5017]: I0129 06:46:30.317427 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-hdf2s" Jan 29 06:46:30 crc kubenswrapper[5017]: E0129 06:46:30.372114 5017 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-hdf2s_crc-storage_150cf209-f2d1-4a9a-b965-cd5c4f41106f_0(7311a90aacfd3448b0fa4c4cfa71d12f6bc11ba64982a71f8fa100ad98199f0d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 29 06:46:30 crc kubenswrapper[5017]: E0129 06:46:30.372225 5017 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-hdf2s_crc-storage_150cf209-f2d1-4a9a-b965-cd5c4f41106f_0(7311a90aacfd3448b0fa4c4cfa71d12f6bc11ba64982a71f8fa100ad98199f0d): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="crc-storage/crc-storage-crc-hdf2s" Jan 29 06:46:30 crc kubenswrapper[5017]: E0129 06:46:30.372291 5017 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-hdf2s_crc-storage_150cf209-f2d1-4a9a-b965-cd5c4f41106f_0(7311a90aacfd3448b0fa4c4cfa71d12f6bc11ba64982a71f8fa100ad98199f0d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-hdf2s" Jan 29 06:46:30 crc kubenswrapper[5017]: E0129 06:46:30.372379 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-hdf2s_crc-storage(150cf209-f2d1-4a9a-b965-cd5c4f41106f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-hdf2s_crc-storage(150cf209-f2d1-4a9a-b965-cd5c4f41106f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-hdf2s_crc-storage_150cf209-f2d1-4a9a-b965-cd5c4f41106f_0(7311a90aacfd3448b0fa4c4cfa71d12f6bc11ba64982a71f8fa100ad98199f0d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="crc-storage/crc-storage-crc-hdf2s" podUID="150cf209-f2d1-4a9a-b965-cd5c4f41106f" Jan 29 06:46:38 crc kubenswrapper[5017]: I0129 06:46:38.315950 5017 scope.go:117] "RemoveContainer" containerID="81abd23058c4d929cf01618abcab59dae4c62bd31ead39da8d9483cff0713526" Jan 29 06:46:38 crc kubenswrapper[5017]: I0129 06:46:38.601372 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9jkcd_8ae056f0-e054-45da-9638-73074b7c8a3b/kube-multus/2.log" Jan 29 06:46:38 crc kubenswrapper[5017]: I0129 06:46:38.601870 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9jkcd" event={"ID":"8ae056f0-e054-45da-9638-73074b7c8a3b","Type":"ContainerStarted","Data":"13f99eda8bee0339fe6ebca8398d60ab4bb506a6b3711ab964d635dbc24c0728"} Jan 29 06:46:41 crc kubenswrapper[5017]: I0129 06:46:41.664434 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-xsxkf" Jan 29 06:46:43 crc kubenswrapper[5017]: I0129 06:46:43.315601 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-hdf2s" Jan 29 06:46:43 crc kubenswrapper[5017]: I0129 06:46:43.316459 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-hdf2s" Jan 29 06:46:43 crc kubenswrapper[5017]: I0129 06:46:43.762189 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-hdf2s"] Jan 29 06:46:43 crc kubenswrapper[5017]: I0129 06:46:43.774001 5017 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 06:46:44 crc kubenswrapper[5017]: I0129 06:46:44.643702 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-hdf2s" event={"ID":"150cf209-f2d1-4a9a-b965-cd5c4f41106f","Type":"ContainerStarted","Data":"dbb90ff374bba9c8a5bdc1851ec9b899d16219f0389ae1b052e44ceaabd3f83e"} Jan 29 06:46:45 crc kubenswrapper[5017]: I0129 06:46:45.651163 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-hdf2s" event={"ID":"150cf209-f2d1-4a9a-b965-cd5c4f41106f","Type":"ContainerStarted","Data":"b20f12e3c00f15d4ce71f1344cfa33b544ebcf307d9492ad38717f36f5cb8017"} Jan 29 06:46:46 crc kubenswrapper[5017]: I0129 06:46:46.662308 5017 generic.go:334] "Generic (PLEG): container finished" podID="150cf209-f2d1-4a9a-b965-cd5c4f41106f" containerID="b20f12e3c00f15d4ce71f1344cfa33b544ebcf307d9492ad38717f36f5cb8017" exitCode=0 Jan 29 06:46:46 crc kubenswrapper[5017]: I0129 06:46:46.662385 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-hdf2s" event={"ID":"150cf209-f2d1-4a9a-b965-cd5c4f41106f","Type":"ContainerDied","Data":"b20f12e3c00f15d4ce71f1344cfa33b544ebcf307d9492ad38717f36f5cb8017"} Jan 29 06:46:48 crc kubenswrapper[5017]: I0129 06:46:48.001212 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-hdf2s" Jan 29 06:46:48 crc kubenswrapper[5017]: I0129 06:46:48.166382 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/150cf209-f2d1-4a9a-b965-cd5c4f41106f-node-mnt\") pod \"150cf209-f2d1-4a9a-b965-cd5c4f41106f\" (UID: \"150cf209-f2d1-4a9a-b965-cd5c4f41106f\") " Jan 29 06:46:48 crc kubenswrapper[5017]: I0129 06:46:48.166569 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/150cf209-f2d1-4a9a-b965-cd5c4f41106f-crc-storage\") pod \"150cf209-f2d1-4a9a-b965-cd5c4f41106f\" (UID: \"150cf209-f2d1-4a9a-b965-cd5c4f41106f\") " Jan 29 06:46:48 crc kubenswrapper[5017]: I0129 06:46:48.166650 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gt2vp\" (UniqueName: \"kubernetes.io/projected/150cf209-f2d1-4a9a-b965-cd5c4f41106f-kube-api-access-gt2vp\") pod \"150cf209-f2d1-4a9a-b965-cd5c4f41106f\" (UID: \"150cf209-f2d1-4a9a-b965-cd5c4f41106f\") " Jan 29 06:46:48 crc kubenswrapper[5017]: I0129 06:46:48.166665 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/150cf209-f2d1-4a9a-b965-cd5c4f41106f-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "150cf209-f2d1-4a9a-b965-cd5c4f41106f" (UID: "150cf209-f2d1-4a9a-b965-cd5c4f41106f"). InnerVolumeSpecName "node-mnt". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 06:46:48 crc kubenswrapper[5017]: I0129 06:46:48.167797 5017 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/150cf209-f2d1-4a9a-b965-cd5c4f41106f-node-mnt\") on node \"crc\" DevicePath \"\"" Jan 29 06:46:48 crc kubenswrapper[5017]: I0129 06:46:48.175464 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/150cf209-f2d1-4a9a-b965-cd5c4f41106f-kube-api-access-gt2vp" (OuterVolumeSpecName: "kube-api-access-gt2vp") pod "150cf209-f2d1-4a9a-b965-cd5c4f41106f" (UID: "150cf209-f2d1-4a9a-b965-cd5c4f41106f"). InnerVolumeSpecName "kube-api-access-gt2vp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:46:48 crc kubenswrapper[5017]: I0129 06:46:48.187782 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/150cf209-f2d1-4a9a-b965-cd5c4f41106f-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "150cf209-f2d1-4a9a-b965-cd5c4f41106f" (UID: "150cf209-f2d1-4a9a-b965-cd5c4f41106f"). InnerVolumeSpecName "crc-storage". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:46:48 crc kubenswrapper[5017]: I0129 06:46:48.269655 5017 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/150cf209-f2d1-4a9a-b965-cd5c4f41106f-crc-storage\") on node \"crc\" DevicePath \"\"" Jan 29 06:46:48 crc kubenswrapper[5017]: I0129 06:46:48.269713 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gt2vp\" (UniqueName: \"kubernetes.io/projected/150cf209-f2d1-4a9a-b965-cd5c4f41106f-kube-api-access-gt2vp\") on node \"crc\" DevicePath \"\"" Jan 29 06:46:48 crc kubenswrapper[5017]: I0129 06:46:48.675793 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-hdf2s" event={"ID":"150cf209-f2d1-4a9a-b965-cd5c4f41106f","Type":"ContainerDied","Data":"dbb90ff374bba9c8a5bdc1851ec9b899d16219f0389ae1b052e44ceaabd3f83e"} Jan 29 06:46:48 crc kubenswrapper[5017]: I0129 06:46:48.676289 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dbb90ff374bba9c8a5bdc1851ec9b899d16219f0389ae1b052e44ceaabd3f83e" Jan 29 06:46:48 crc kubenswrapper[5017]: I0129 06:46:48.675913 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-hdf2s" Jan 29 06:46:55 crc kubenswrapper[5017]: I0129 06:46:55.621549 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l47fd"] Jan 29 06:46:55 crc kubenswrapper[5017]: E0129 06:46:55.622694 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="150cf209-f2d1-4a9a-b965-cd5c4f41106f" containerName="storage" Jan 29 06:46:55 crc kubenswrapper[5017]: I0129 06:46:55.622710 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="150cf209-f2d1-4a9a-b965-cd5c4f41106f" containerName="storage" Jan 29 06:46:55 crc kubenswrapper[5017]: I0129 06:46:55.622811 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="150cf209-f2d1-4a9a-b965-cd5c4f41106f" containerName="storage" Jan 29 06:46:55 crc kubenswrapper[5017]: I0129 06:46:55.623684 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l47fd" Jan 29 06:46:55 crc kubenswrapper[5017]: I0129 06:46:55.626099 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 29 06:46:55 crc kubenswrapper[5017]: I0129 06:46:55.633396 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l47fd"] Jan 29 06:46:55 crc kubenswrapper[5017]: I0129 06:46:55.677204 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bed5dfbe-e294-4d8a-b3c6-953287cb9057-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l47fd\" (UID: \"bed5dfbe-e294-4d8a-b3c6-953287cb9057\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l47fd" Jan 29 06:46:55 crc kubenswrapper[5017]: I0129 06:46:55.677278 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bed5dfbe-e294-4d8a-b3c6-953287cb9057-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l47fd\" (UID: \"bed5dfbe-e294-4d8a-b3c6-953287cb9057\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l47fd" Jan 29 06:46:55 crc kubenswrapper[5017]: I0129 06:46:55.677363 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvrzw\" (UniqueName: \"kubernetes.io/projected/bed5dfbe-e294-4d8a-b3c6-953287cb9057-kube-api-access-zvrzw\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l47fd\" (UID: \"bed5dfbe-e294-4d8a-b3c6-953287cb9057\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l47fd" Jan 29 06:46:55 crc kubenswrapper[5017]: I0129 06:46:55.777709 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bed5dfbe-e294-4d8a-b3c6-953287cb9057-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l47fd\" (UID: \"bed5dfbe-e294-4d8a-b3c6-953287cb9057\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l47fd" Jan 29 06:46:55 crc kubenswrapper[5017]: I0129 06:46:55.777809 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvrzw\" (UniqueName: \"kubernetes.io/projected/bed5dfbe-e294-4d8a-b3c6-953287cb9057-kube-api-access-zvrzw\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l47fd\" (UID: \"bed5dfbe-e294-4d8a-b3c6-953287cb9057\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l47fd" Jan 29 06:46:55 crc kubenswrapper[5017]: I0129 06:46:55.777847 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bed5dfbe-e294-4d8a-b3c6-953287cb9057-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l47fd\" (UID: \"bed5dfbe-e294-4d8a-b3c6-953287cb9057\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l47fd" Jan 29 06:46:55 crc kubenswrapper[5017]: I0129 06:46:55.778578 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/bed5dfbe-e294-4d8a-b3c6-953287cb9057-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l47fd\" (UID: \"bed5dfbe-e294-4d8a-b3c6-953287cb9057\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l47fd" Jan 29 06:46:55 crc kubenswrapper[5017]: I0129 06:46:55.778627 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bed5dfbe-e294-4d8a-b3c6-953287cb9057-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l47fd\" (UID: \"bed5dfbe-e294-4d8a-b3c6-953287cb9057\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l47fd" Jan 29 06:46:55 crc kubenswrapper[5017]: I0129 06:46:55.810237 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvrzw\" (UniqueName: \"kubernetes.io/projected/bed5dfbe-e294-4d8a-b3c6-953287cb9057-kube-api-access-zvrzw\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l47fd\" (UID: \"bed5dfbe-e294-4d8a-b3c6-953287cb9057\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l47fd" Jan 29 06:46:55 crc kubenswrapper[5017]: I0129 06:46:55.947615 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l47fd" Jan 29 06:46:56 crc kubenswrapper[5017]: I0129 06:46:56.248251 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l47fd"] Jan 29 06:46:56 crc kubenswrapper[5017]: W0129 06:46:56.257359 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbed5dfbe_e294_4d8a_b3c6_953287cb9057.slice/crio-f9b673d9fe89b08a198d363d3c694e9378643d6fcd46c00d41c6a3fbeddfbddd WatchSource:0}: Error finding container f9b673d9fe89b08a198d363d3c694e9378643d6fcd46c00d41c6a3fbeddfbddd: Status 404 returned error can't find the container with id f9b673d9fe89b08a198d363d3c694e9378643d6fcd46c00d41c6a3fbeddfbddd Jan 29 06:46:56 crc kubenswrapper[5017]: I0129 06:46:56.539574 5017 patch_prober.go:28] interesting pod/machine-config-daemon-895pl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 06:46:56 crc kubenswrapper[5017]: I0129 06:46:56.539665 5017 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 06:46:56 crc kubenswrapper[5017]: I0129 06:46:56.743948 5017 generic.go:334] "Generic (PLEG): container finished" podID="bed5dfbe-e294-4d8a-b3c6-953287cb9057" containerID="f07ca5608c6171f6914ba634c4e1f928c6d49b5245089914fa33016be34819bc" exitCode=0 Jan 29 06:46:56 crc kubenswrapper[5017]: I0129 06:46:56.744075 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l47fd" event={"ID":"bed5dfbe-e294-4d8a-b3c6-953287cb9057","Type":"ContainerDied","Data":"f07ca5608c6171f6914ba634c4e1f928c6d49b5245089914fa33016be34819bc"} Jan 
29 06:46:56 crc kubenswrapper[5017]: I0129 06:46:56.744169 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l47fd" event={"ID":"bed5dfbe-e294-4d8a-b3c6-953287cb9057","Type":"ContainerStarted","Data":"f9b673d9fe89b08a198d363d3c694e9378643d6fcd46c00d41c6a3fbeddfbddd"} Jan 29 06:46:58 crc kubenswrapper[5017]: I0129 06:46:58.761676 5017 generic.go:334] "Generic (PLEG): container finished" podID="bed5dfbe-e294-4d8a-b3c6-953287cb9057" containerID="db181417c8a161391627271e2dba75e0dd9e304033f8b0811a54fdbd792d70cb" exitCode=0 Jan 29 06:46:58 crc kubenswrapper[5017]: I0129 06:46:58.761912 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l47fd" event={"ID":"bed5dfbe-e294-4d8a-b3c6-953287cb9057","Type":"ContainerDied","Data":"db181417c8a161391627271e2dba75e0dd9e304033f8b0811a54fdbd792d70cb"} Jan 29 06:46:59 crc kubenswrapper[5017]: I0129 06:46:59.838256 5017 generic.go:334] "Generic (PLEG): container finished" podID="bed5dfbe-e294-4d8a-b3c6-953287cb9057" containerID="57c0756df1c06a233ba852a77d699246e33f1bdbbc043105b31ea92ce0373380" exitCode=0 Jan 29 06:46:59 crc kubenswrapper[5017]: I0129 06:46:59.838352 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l47fd" event={"ID":"bed5dfbe-e294-4d8a-b3c6-953287cb9057","Type":"ContainerDied","Data":"57c0756df1c06a233ba852a77d699246e33f1bdbbc043105b31ea92ce0373380"} Jan 29 06:47:01 crc kubenswrapper[5017]: I0129 06:47:01.159600 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l47fd" Jan 29 06:47:01 crc kubenswrapper[5017]: I0129 06:47:01.249197 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zvrzw\" (UniqueName: \"kubernetes.io/projected/bed5dfbe-e294-4d8a-b3c6-953287cb9057-kube-api-access-zvrzw\") pod \"bed5dfbe-e294-4d8a-b3c6-953287cb9057\" (UID: \"bed5dfbe-e294-4d8a-b3c6-953287cb9057\") " Jan 29 06:47:01 crc kubenswrapper[5017]: I0129 06:47:01.249439 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bed5dfbe-e294-4d8a-b3c6-953287cb9057-util\") pod \"bed5dfbe-e294-4d8a-b3c6-953287cb9057\" (UID: \"bed5dfbe-e294-4d8a-b3c6-953287cb9057\") " Jan 29 06:47:01 crc kubenswrapper[5017]: I0129 06:47:01.249581 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bed5dfbe-e294-4d8a-b3c6-953287cb9057-bundle\") pod \"bed5dfbe-e294-4d8a-b3c6-953287cb9057\" (UID: \"bed5dfbe-e294-4d8a-b3c6-953287cb9057\") " Jan 29 06:47:01 crc kubenswrapper[5017]: I0129 06:47:01.250237 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bed5dfbe-e294-4d8a-b3c6-953287cb9057-bundle" (OuterVolumeSpecName: "bundle") pod "bed5dfbe-e294-4d8a-b3c6-953287cb9057" (UID: "bed5dfbe-e294-4d8a-b3c6-953287cb9057"). InnerVolumeSpecName "bundle". 
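Editor's note: the two prober entries at 06:46:56 record a failed HTTP liveness check: a GET to http://127.0.0.1:8798/health was answered with "connection refused". The sketch below reproduces that check from the node, purely as an illustration; the endpoint comes from the log, the 5-second timeout is an assumed value, and the success rule (any status from 200 through 399) mirrors how kubelet classifies HTTP probe responses:

// probecheck.go - sketch: issue the same HTTP GET the kubelet liveness
// probe above reports, and classify the result the way a probe would.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 5 * time.Second} // probe-style timeout (assumed value)
	resp, err := client.Get("http://127.0.0.1:8798/health")
	if err != nil {
		// This is the state the log captures: dial tcp ... connect: connection refused.
		fmt.Println("probe failure:", err)
		return
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 200 && resp.StatusCode < 400 {
		fmt.Println("probe success:", resp.Status)
	} else {
		fmt.Println("probe failure: status", resp.Status)
	}
}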
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:47:01 crc kubenswrapper[5017]: I0129 06:47:01.256715 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bed5dfbe-e294-4d8a-b3c6-953287cb9057-kube-api-access-zvrzw" (OuterVolumeSpecName: "kube-api-access-zvrzw") pod "bed5dfbe-e294-4d8a-b3c6-953287cb9057" (UID: "bed5dfbe-e294-4d8a-b3c6-953287cb9057"). InnerVolumeSpecName "kube-api-access-zvrzw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:47:01 crc kubenswrapper[5017]: I0129 06:47:01.281478 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bed5dfbe-e294-4d8a-b3c6-953287cb9057-util" (OuterVolumeSpecName: "util") pod "bed5dfbe-e294-4d8a-b3c6-953287cb9057" (UID: "bed5dfbe-e294-4d8a-b3c6-953287cb9057"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:47:01 crc kubenswrapper[5017]: I0129 06:47:01.351223 5017 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bed5dfbe-e294-4d8a-b3c6-953287cb9057-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 06:47:01 crc kubenswrapper[5017]: I0129 06:47:01.351659 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zvrzw\" (UniqueName: \"kubernetes.io/projected/bed5dfbe-e294-4d8a-b3c6-953287cb9057-kube-api-access-zvrzw\") on node \"crc\" DevicePath \"\"" Jan 29 06:47:01 crc kubenswrapper[5017]: I0129 06:47:01.351790 5017 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bed5dfbe-e294-4d8a-b3c6-953287cb9057-util\") on node \"crc\" DevicePath \"\"" Jan 29 06:47:01 crc kubenswrapper[5017]: I0129 06:47:01.857683 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l47fd" event={"ID":"bed5dfbe-e294-4d8a-b3c6-953287cb9057","Type":"ContainerDied","Data":"f9b673d9fe89b08a198d363d3c694e9378643d6fcd46c00d41c6a3fbeddfbddd"} Jan 29 06:47:01 crc kubenswrapper[5017]: I0129 06:47:01.857726 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f9b673d9fe89b08a198d363d3c694e9378643d6fcd46c00d41c6a3fbeddfbddd" Jan 29 06:47:01 crc kubenswrapper[5017]: I0129 06:47:01.857837 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l47fd" Jan 29 06:47:03 crc kubenswrapper[5017]: I0129 06:47:03.807786 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-fnwvc"] Jan 29 06:47:03 crc kubenswrapper[5017]: E0129 06:47:03.808381 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bed5dfbe-e294-4d8a-b3c6-953287cb9057" containerName="pull" Jan 29 06:47:03 crc kubenswrapper[5017]: I0129 06:47:03.808395 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="bed5dfbe-e294-4d8a-b3c6-953287cb9057" containerName="pull" Jan 29 06:47:03 crc kubenswrapper[5017]: E0129 06:47:03.808405 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bed5dfbe-e294-4d8a-b3c6-953287cb9057" containerName="util" Jan 29 06:47:03 crc kubenswrapper[5017]: I0129 06:47:03.808411 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="bed5dfbe-e294-4d8a-b3c6-953287cb9057" containerName="util" Jan 29 06:47:03 crc kubenswrapper[5017]: E0129 06:47:03.808423 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bed5dfbe-e294-4d8a-b3c6-953287cb9057" containerName="extract" Jan 29 06:47:03 crc kubenswrapper[5017]: I0129 06:47:03.808429 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="bed5dfbe-e294-4d8a-b3c6-953287cb9057" containerName="extract" Jan 29 06:47:03 crc kubenswrapper[5017]: I0129 06:47:03.808561 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="bed5dfbe-e294-4d8a-b3c6-953287cb9057" containerName="extract" Jan 29 06:47:03 crc kubenswrapper[5017]: I0129 06:47:03.809003 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-fnwvc" Jan 29 06:47:03 crc kubenswrapper[5017]: I0129 06:47:03.811744 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-nbb8w" Jan 29 06:47:03 crc kubenswrapper[5017]: I0129 06:47:03.812281 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 29 06:47:03 crc kubenswrapper[5017]: I0129 06:47:03.814394 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 29 06:47:03 crc kubenswrapper[5017]: I0129 06:47:03.824734 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-fnwvc"] Jan 29 06:47:03 crc kubenswrapper[5017]: I0129 06:47:03.892285 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7dsp\" (UniqueName: \"kubernetes.io/projected/f4986274-c67e-4d15-a613-ed6e440526e5-kube-api-access-p7dsp\") pod \"nmstate-operator-646758c888-fnwvc\" (UID: \"f4986274-c67e-4d15-a613-ed6e440526e5\") " pod="openshift-nmstate/nmstate-operator-646758c888-fnwvc" Jan 29 06:47:03 crc kubenswrapper[5017]: I0129 06:47:03.994058 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7dsp\" (UniqueName: \"kubernetes.io/projected/f4986274-c67e-4d15-a613-ed6e440526e5-kube-api-access-p7dsp\") pod \"nmstate-operator-646758c888-fnwvc\" (UID: \"f4986274-c67e-4d15-a613-ed6e440526e5\") " pod="openshift-nmstate/nmstate-operator-646758c888-fnwvc" Jan 29 06:47:04 crc kubenswrapper[5017]: I0129 06:47:04.020179 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7dsp\" 
(UniqueName: \"kubernetes.io/projected/f4986274-c67e-4d15-a613-ed6e440526e5-kube-api-access-p7dsp\") pod \"nmstate-operator-646758c888-fnwvc\" (UID: \"f4986274-c67e-4d15-a613-ed6e440526e5\") " pod="openshift-nmstate/nmstate-operator-646758c888-fnwvc" Jan 29 06:47:04 crc kubenswrapper[5017]: I0129 06:47:04.127471 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-fnwvc" Jan 29 06:47:04 crc kubenswrapper[5017]: I0129 06:47:04.343346 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-fnwvc"] Jan 29 06:47:04 crc kubenswrapper[5017]: I0129 06:47:04.875219 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-fnwvc" event={"ID":"f4986274-c67e-4d15-a613-ed6e440526e5","Type":"ContainerStarted","Data":"4ba11a139f5e9ed8d788b96e0f46e36b0a402a84e6e01a3d7fb5519e853209ad"} Jan 29 06:47:06 crc kubenswrapper[5017]: I0129 06:47:06.894842 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-fnwvc" event={"ID":"f4986274-c67e-4d15-a613-ed6e440526e5","Type":"ContainerStarted","Data":"1d2fce9b9bb3098af9cdeeb816c949789bd707eb8870cf238ea1d7aa4fc9bb2e"} Jan 29 06:47:06 crc kubenswrapper[5017]: I0129 06:47:06.914348 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-fnwvc" podStartSLOduration=1.6675089939999999 podStartE2EDuration="3.914323232s" podCreationTimestamp="2026-01-29 06:47:03 +0000 UTC" firstStartedPulling="2026-01-29 06:47:04.355782565 +0000 UTC m=+710.730230175" lastFinishedPulling="2026-01-29 06:47:06.602596803 +0000 UTC m=+712.977044413" observedRunningTime="2026-01-29 06:47:06.911718781 +0000 UTC m=+713.286166411" watchObservedRunningTime="2026-01-29 06:47:06.914323232 +0000 UTC m=+713.288770842" Jan 29 06:47:07 crc kubenswrapper[5017]: I0129 06:47:07.786140 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-gpxgz"] Jan 29 06:47:07 crc kubenswrapper[5017]: I0129 06:47:07.787264 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-gpxgz" Jan 29 06:47:07 crc kubenswrapper[5017]: I0129 06:47:07.789255 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-n755t" Jan 29 06:47:07 crc kubenswrapper[5017]: I0129 06:47:07.810377 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-gpxgz"] Jan 29 06:47:07 crc kubenswrapper[5017]: I0129 06:47:07.819048 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-ljkjc"] Jan 29 06:47:07 crc kubenswrapper[5017]: I0129 06:47:07.819783 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-ljkjc" Jan 29 06:47:07 crc kubenswrapper[5017]: I0129 06:47:07.822359 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 29 06:47:07 crc kubenswrapper[5017]: I0129 06:47:07.838718 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-kwtrf"] Jan 29 06:47:07 crc kubenswrapper[5017]: I0129 06:47:07.839519 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-kwtrf" Jan 29 06:47:07 crc kubenswrapper[5017]: I0129 06:47:07.902159 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-ljkjc"] Jan 29 06:47:07 crc kubenswrapper[5017]: I0129 06:47:07.943060 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/bfe492e6-837a-4318-a908-125c9cc736d0-dbus-socket\") pod \"nmstate-handler-kwtrf\" (UID: \"bfe492e6-837a-4318-a908-125c9cc736d0\") " pod="openshift-nmstate/nmstate-handler-kwtrf" Jan 29 06:47:07 crc kubenswrapper[5017]: I0129 06:47:07.943108 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ddbg\" (UniqueName: \"kubernetes.io/projected/59ffce2a-c49d-42c3-b665-c2cab504e523-kube-api-access-5ddbg\") pod \"nmstate-metrics-54757c584b-gpxgz\" (UID: \"59ffce2a-c49d-42c3-b665-c2cab504e523\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-gpxgz" Jan 29 06:47:07 crc kubenswrapper[5017]: I0129 06:47:07.943203 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/bfe492e6-837a-4318-a908-125c9cc736d0-ovs-socket\") pod \"nmstate-handler-kwtrf\" (UID: \"bfe492e6-837a-4318-a908-125c9cc736d0\") " pod="openshift-nmstate/nmstate-handler-kwtrf" Jan 29 06:47:07 crc kubenswrapper[5017]: I0129 06:47:07.943245 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tt9lv\" (UniqueName: \"kubernetes.io/projected/8da9c51f-d7bf-499a-a29b-348743eb72ad-kube-api-access-tt9lv\") pod \"nmstate-webhook-8474b5b9d8-ljkjc\" (UID: \"8da9c51f-d7bf-499a-a29b-348743eb72ad\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-ljkjc" Jan 29 06:47:07 crc kubenswrapper[5017]: I0129 06:47:07.943275 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/8da9c51f-d7bf-499a-a29b-348743eb72ad-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-ljkjc\" (UID: \"8da9c51f-d7bf-499a-a29b-348743eb72ad\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-ljkjc" Jan 29 06:47:07 crc kubenswrapper[5017]: I0129 06:47:07.943334 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/bfe492e6-837a-4318-a908-125c9cc736d0-nmstate-lock\") pod \"nmstate-handler-kwtrf\" (UID: \"bfe492e6-837a-4318-a908-125c9cc736d0\") " pod="openshift-nmstate/nmstate-handler-kwtrf" Jan 29 06:47:07 crc kubenswrapper[5017]: I0129 06:47:07.943362 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wnlf\" (UniqueName: \"kubernetes.io/projected/bfe492e6-837a-4318-a908-125c9cc736d0-kube-api-access-8wnlf\") pod \"nmstate-handler-kwtrf\" (UID: \"bfe492e6-837a-4318-a908-125c9cc736d0\") " pod="openshift-nmstate/nmstate-handler-kwtrf" Jan 29 06:47:08 crc kubenswrapper[5017]: I0129 06:47:08.044934 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/bfe492e6-837a-4318-a908-125c9cc736d0-ovs-socket\") pod \"nmstate-handler-kwtrf\" (UID: \"bfe492e6-837a-4318-a908-125c9cc736d0\") " pod="openshift-nmstate/nmstate-handler-kwtrf" Jan 29 06:47:08 crc 
kubenswrapper[5017]: I0129 06:47:08.045001 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tt9lv\" (UniqueName: \"kubernetes.io/projected/8da9c51f-d7bf-499a-a29b-348743eb72ad-kube-api-access-tt9lv\") pod \"nmstate-webhook-8474b5b9d8-ljkjc\" (UID: \"8da9c51f-d7bf-499a-a29b-348743eb72ad\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-ljkjc" Jan 29 06:47:08 crc kubenswrapper[5017]: I0129 06:47:08.045022 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/8da9c51f-d7bf-499a-a29b-348743eb72ad-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-ljkjc\" (UID: \"8da9c51f-d7bf-499a-a29b-348743eb72ad\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-ljkjc" Jan 29 06:47:08 crc kubenswrapper[5017]: I0129 06:47:08.045032 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/bfe492e6-837a-4318-a908-125c9cc736d0-ovs-socket\") pod \"nmstate-handler-kwtrf\" (UID: \"bfe492e6-837a-4318-a908-125c9cc736d0\") " pod="openshift-nmstate/nmstate-handler-kwtrf" Jan 29 06:47:08 crc kubenswrapper[5017]: I0129 06:47:08.045072 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/bfe492e6-837a-4318-a908-125c9cc736d0-nmstate-lock\") pod \"nmstate-handler-kwtrf\" (UID: \"bfe492e6-837a-4318-a908-125c9cc736d0\") " pod="openshift-nmstate/nmstate-handler-kwtrf" Jan 29 06:47:08 crc kubenswrapper[5017]: I0129 06:47:08.045106 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wnlf\" (UniqueName: \"kubernetes.io/projected/bfe492e6-837a-4318-a908-125c9cc736d0-kube-api-access-8wnlf\") pod \"nmstate-handler-kwtrf\" (UID: \"bfe492e6-837a-4318-a908-125c9cc736d0\") " pod="openshift-nmstate/nmstate-handler-kwtrf" Jan 29 06:47:08 crc kubenswrapper[5017]: I0129 06:47:08.045144 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/bfe492e6-837a-4318-a908-125c9cc736d0-dbus-socket\") pod \"nmstate-handler-kwtrf\" (UID: \"bfe492e6-837a-4318-a908-125c9cc736d0\") " pod="openshift-nmstate/nmstate-handler-kwtrf" Jan 29 06:47:08 crc kubenswrapper[5017]: I0129 06:47:08.045163 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5ddbg\" (UniqueName: \"kubernetes.io/projected/59ffce2a-c49d-42c3-b665-c2cab504e523-kube-api-access-5ddbg\") pod \"nmstate-metrics-54757c584b-gpxgz\" (UID: \"59ffce2a-c49d-42c3-b665-c2cab504e523\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-gpxgz" Jan 29 06:47:08 crc kubenswrapper[5017]: I0129 06:47:08.045449 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/bfe492e6-837a-4318-a908-125c9cc736d0-nmstate-lock\") pod \"nmstate-handler-kwtrf\" (UID: \"bfe492e6-837a-4318-a908-125c9cc736d0\") " pod="openshift-nmstate/nmstate-handler-kwtrf" Jan 29 06:47:08 crc kubenswrapper[5017]: I0129 06:47:08.045690 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/bfe492e6-837a-4318-a908-125c9cc736d0-dbus-socket\") pod \"nmstate-handler-kwtrf\" (UID: \"bfe492e6-837a-4318-a908-125c9cc736d0\") " pod="openshift-nmstate/nmstate-handler-kwtrf" Jan 29 06:47:08 crc kubenswrapper[5017]: I0129 06:47:08.051005 5017 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/8da9c51f-d7bf-499a-a29b-348743eb72ad-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-ljkjc\" (UID: \"8da9c51f-d7bf-499a-a29b-348743eb72ad\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-ljkjc" Jan 29 06:47:08 crc kubenswrapper[5017]: I0129 06:47:08.080267 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ddbg\" (UniqueName: \"kubernetes.io/projected/59ffce2a-c49d-42c3-b665-c2cab504e523-kube-api-access-5ddbg\") pod \"nmstate-metrics-54757c584b-gpxgz\" (UID: \"59ffce2a-c49d-42c3-b665-c2cab504e523\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-gpxgz" Jan 29 06:47:08 crc kubenswrapper[5017]: I0129 06:47:08.081551 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wnlf\" (UniqueName: \"kubernetes.io/projected/bfe492e6-837a-4318-a908-125c9cc736d0-kube-api-access-8wnlf\") pod \"nmstate-handler-kwtrf\" (UID: \"bfe492e6-837a-4318-a908-125c9cc736d0\") " pod="openshift-nmstate/nmstate-handler-kwtrf" Jan 29 06:47:08 crc kubenswrapper[5017]: I0129 06:47:08.087533 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tt9lv\" (UniqueName: \"kubernetes.io/projected/8da9c51f-d7bf-499a-a29b-348743eb72ad-kube-api-access-tt9lv\") pod \"nmstate-webhook-8474b5b9d8-ljkjc\" (UID: \"8da9c51f-d7bf-499a-a29b-348743eb72ad\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-ljkjc" Jan 29 06:47:08 crc kubenswrapper[5017]: I0129 06:47:08.103363 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-c979l"] Jan 29 06:47:08 crc kubenswrapper[5017]: I0129 06:47:08.103563 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-gpxgz" Jan 29 06:47:08 crc kubenswrapper[5017]: I0129 06:47:08.104292 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-c979l" Jan 29 06:47:08 crc kubenswrapper[5017]: I0129 06:47:08.107941 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-qg9w8" Jan 29 06:47:08 crc kubenswrapper[5017]: I0129 06:47:08.109023 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 29 06:47:08 crc kubenswrapper[5017]: I0129 06:47:08.110351 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 29 06:47:08 crc kubenswrapper[5017]: I0129 06:47:08.114935 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-c979l"] Jan 29 06:47:08 crc kubenswrapper[5017]: I0129 06:47:08.136176 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-ljkjc" Jan 29 06:47:08 crc kubenswrapper[5017]: I0129 06:47:08.173702 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-kwtrf" Jan 29 06:47:08 crc kubenswrapper[5017]: I0129 06:47:08.249852 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/77f8b370-cba3-4b23-956d-85e2eac24634-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-c979l\" (UID: \"77f8b370-cba3-4b23-956d-85e2eac24634\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-c979l" Jan 29 06:47:08 crc kubenswrapper[5017]: I0129 06:47:08.250199 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hd2v\" (UniqueName: \"kubernetes.io/projected/77f8b370-cba3-4b23-956d-85e2eac24634-kube-api-access-8hd2v\") pod \"nmstate-console-plugin-7754f76f8b-c979l\" (UID: \"77f8b370-cba3-4b23-956d-85e2eac24634\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-c979l" Jan 29 06:47:08 crc kubenswrapper[5017]: I0129 06:47:08.250234 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/77f8b370-cba3-4b23-956d-85e2eac24634-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-c979l\" (UID: \"77f8b370-cba3-4b23-956d-85e2eac24634\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-c979l" Jan 29 06:47:08 crc kubenswrapper[5017]: I0129 06:47:08.333352 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-79b486fcf8-j7rvv"] Jan 29 06:47:08 crc kubenswrapper[5017]: I0129 06:47:08.334162 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-79b486fcf8-j7rvv" Jan 29 06:47:08 crc kubenswrapper[5017]: I0129 06:47:08.341645 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-79b486fcf8-j7rvv"] Jan 29 06:47:08 crc kubenswrapper[5017]: I0129 06:47:08.351069 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/77f8b370-cba3-4b23-956d-85e2eac24634-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-c979l\" (UID: \"77f8b370-cba3-4b23-956d-85e2eac24634\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-c979l" Jan 29 06:47:08 crc kubenswrapper[5017]: I0129 06:47:08.351119 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hd2v\" (UniqueName: \"kubernetes.io/projected/77f8b370-cba3-4b23-956d-85e2eac24634-kube-api-access-8hd2v\") pod \"nmstate-console-plugin-7754f76f8b-c979l\" (UID: \"77f8b370-cba3-4b23-956d-85e2eac24634\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-c979l" Jan 29 06:47:08 crc kubenswrapper[5017]: I0129 06:47:08.351165 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/77f8b370-cba3-4b23-956d-85e2eac24634-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-c979l\" (UID: \"77f8b370-cba3-4b23-956d-85e2eac24634\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-c979l" Jan 29 06:47:08 crc kubenswrapper[5017]: I0129 06:47:08.353881 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/77f8b370-cba3-4b23-956d-85e2eac24634-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-c979l\" (UID: \"77f8b370-cba3-4b23-956d-85e2eac24634\") " 
pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-c979l" Jan 29 06:47:08 crc kubenswrapper[5017]: I0129 06:47:08.360351 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/77f8b370-cba3-4b23-956d-85e2eac24634-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-c979l\" (UID: \"77f8b370-cba3-4b23-956d-85e2eac24634\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-c979l" Jan 29 06:47:08 crc kubenswrapper[5017]: I0129 06:47:08.378537 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hd2v\" (UniqueName: \"kubernetes.io/projected/77f8b370-cba3-4b23-956d-85e2eac24634-kube-api-access-8hd2v\") pod \"nmstate-console-plugin-7754f76f8b-c979l\" (UID: \"77f8b370-cba3-4b23-956d-85e2eac24634\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-c979l" Jan 29 06:47:08 crc kubenswrapper[5017]: I0129 06:47:08.394970 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-gpxgz"] Jan 29 06:47:08 crc kubenswrapper[5017]: W0129 06:47:08.400310 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod59ffce2a_c49d_42c3_b665_c2cab504e523.slice/crio-2397aad6e4efcd903596c390cef3fc11c036bc4fac20f971df3dfc20405aa86b WatchSource:0}: Error finding container 2397aad6e4efcd903596c390cef3fc11c036bc4fac20f971df3dfc20405aa86b: Status 404 returned error can't find the container with id 2397aad6e4efcd903596c390cef3fc11c036bc4fac20f971df3dfc20405aa86b Jan 29 06:47:08 crc kubenswrapper[5017]: I0129 06:47:08.452276 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d083e8c3-449b-4a41-a188-bdc63fd72ad1-trusted-ca-bundle\") pod \"console-79b486fcf8-j7rvv\" (UID: \"d083e8c3-449b-4a41-a188-bdc63fd72ad1\") " pod="openshift-console/console-79b486fcf8-j7rvv" Jan 29 06:47:08 crc kubenswrapper[5017]: I0129 06:47:08.452326 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d083e8c3-449b-4a41-a188-bdc63fd72ad1-console-config\") pod \"console-79b486fcf8-j7rvv\" (UID: \"d083e8c3-449b-4a41-a188-bdc63fd72ad1\") " pod="openshift-console/console-79b486fcf8-j7rvv" Jan 29 06:47:08 crc kubenswrapper[5017]: I0129 06:47:08.452351 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d083e8c3-449b-4a41-a188-bdc63fd72ad1-console-serving-cert\") pod \"console-79b486fcf8-j7rvv\" (UID: \"d083e8c3-449b-4a41-a188-bdc63fd72ad1\") " pod="openshift-console/console-79b486fcf8-j7rvv" Jan 29 06:47:08 crc kubenswrapper[5017]: I0129 06:47:08.452390 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d083e8c3-449b-4a41-a188-bdc63fd72ad1-oauth-serving-cert\") pod \"console-79b486fcf8-j7rvv\" (UID: \"d083e8c3-449b-4a41-a188-bdc63fd72ad1\") " pod="openshift-console/console-79b486fcf8-j7rvv" Jan 29 06:47:08 crc kubenswrapper[5017]: I0129 06:47:08.452410 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/d083e8c3-449b-4a41-a188-bdc63fd72ad1-console-oauth-config\") pod \"console-79b486fcf8-j7rvv\" (UID: \"d083e8c3-449b-4a41-a188-bdc63fd72ad1\") " pod="openshift-console/console-79b486fcf8-j7rvv" Jan 29 06:47:08 crc kubenswrapper[5017]: I0129 06:47:08.452429 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d083e8c3-449b-4a41-a188-bdc63fd72ad1-service-ca\") pod \"console-79b486fcf8-j7rvv\" (UID: \"d083e8c3-449b-4a41-a188-bdc63fd72ad1\") " pod="openshift-console/console-79b486fcf8-j7rvv" Jan 29 06:47:08 crc kubenswrapper[5017]: I0129 06:47:08.452457 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxtbd\" (UniqueName: \"kubernetes.io/projected/d083e8c3-449b-4a41-a188-bdc63fd72ad1-kube-api-access-gxtbd\") pod \"console-79b486fcf8-j7rvv\" (UID: \"d083e8c3-449b-4a41-a188-bdc63fd72ad1\") " pod="openshift-console/console-79b486fcf8-j7rvv" Jan 29 06:47:08 crc kubenswrapper[5017]: I0129 06:47:08.467233 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-c979l" Jan 29 06:47:08 crc kubenswrapper[5017]: I0129 06:47:08.486503 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-ljkjc"] Jan 29 06:47:08 crc kubenswrapper[5017]: W0129 06:47:08.494015 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8da9c51f_d7bf_499a_a29b_348743eb72ad.slice/crio-902c52b9645675d97ed9d256e34dcd38da02424c678056b465d68def420268c0 WatchSource:0}: Error finding container 902c52b9645675d97ed9d256e34dcd38da02424c678056b465d68def420268c0: Status 404 returned error can't find the container with id 902c52b9645675d97ed9d256e34dcd38da02424c678056b465d68def420268c0 Jan 29 06:47:08 crc kubenswrapper[5017]: I0129 06:47:08.553399 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d083e8c3-449b-4a41-a188-bdc63fd72ad1-console-oauth-config\") pod \"console-79b486fcf8-j7rvv\" (UID: \"d083e8c3-449b-4a41-a188-bdc63fd72ad1\") " pod="openshift-console/console-79b486fcf8-j7rvv" Jan 29 06:47:08 crc kubenswrapper[5017]: I0129 06:47:08.553447 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d083e8c3-449b-4a41-a188-bdc63fd72ad1-service-ca\") pod \"console-79b486fcf8-j7rvv\" (UID: \"d083e8c3-449b-4a41-a188-bdc63fd72ad1\") " pod="openshift-console/console-79b486fcf8-j7rvv" Jan 29 06:47:08 crc kubenswrapper[5017]: I0129 06:47:08.553481 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxtbd\" (UniqueName: \"kubernetes.io/projected/d083e8c3-449b-4a41-a188-bdc63fd72ad1-kube-api-access-gxtbd\") pod \"console-79b486fcf8-j7rvv\" (UID: \"d083e8c3-449b-4a41-a188-bdc63fd72ad1\") " pod="openshift-console/console-79b486fcf8-j7rvv" Jan 29 06:47:08 crc kubenswrapper[5017]: I0129 06:47:08.553512 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d083e8c3-449b-4a41-a188-bdc63fd72ad1-trusted-ca-bundle\") pod \"console-79b486fcf8-j7rvv\" (UID: \"d083e8c3-449b-4a41-a188-bdc63fd72ad1\") " pod="openshift-console/console-79b486fcf8-j7rvv" Jan 29 06:47:08 crc 
kubenswrapper[5017]: I0129 06:47:08.553535 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d083e8c3-449b-4a41-a188-bdc63fd72ad1-console-config\") pod \"console-79b486fcf8-j7rvv\" (UID: \"d083e8c3-449b-4a41-a188-bdc63fd72ad1\") " pod="openshift-console/console-79b486fcf8-j7rvv" Jan 29 06:47:08 crc kubenswrapper[5017]: I0129 06:47:08.553555 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d083e8c3-449b-4a41-a188-bdc63fd72ad1-console-serving-cert\") pod \"console-79b486fcf8-j7rvv\" (UID: \"d083e8c3-449b-4a41-a188-bdc63fd72ad1\") " pod="openshift-console/console-79b486fcf8-j7rvv" Jan 29 06:47:08 crc kubenswrapper[5017]: I0129 06:47:08.553594 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d083e8c3-449b-4a41-a188-bdc63fd72ad1-oauth-serving-cert\") pod \"console-79b486fcf8-j7rvv\" (UID: \"d083e8c3-449b-4a41-a188-bdc63fd72ad1\") " pod="openshift-console/console-79b486fcf8-j7rvv" Jan 29 06:47:08 crc kubenswrapper[5017]: I0129 06:47:08.554839 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d083e8c3-449b-4a41-a188-bdc63fd72ad1-oauth-serving-cert\") pod \"console-79b486fcf8-j7rvv\" (UID: \"d083e8c3-449b-4a41-a188-bdc63fd72ad1\") " pod="openshift-console/console-79b486fcf8-j7rvv" Jan 29 06:47:08 crc kubenswrapper[5017]: I0129 06:47:08.555460 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d083e8c3-449b-4a41-a188-bdc63fd72ad1-trusted-ca-bundle\") pod \"console-79b486fcf8-j7rvv\" (UID: \"d083e8c3-449b-4a41-a188-bdc63fd72ad1\") " pod="openshift-console/console-79b486fcf8-j7rvv" Jan 29 06:47:08 crc kubenswrapper[5017]: I0129 06:47:08.556889 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d083e8c3-449b-4a41-a188-bdc63fd72ad1-console-config\") pod \"console-79b486fcf8-j7rvv\" (UID: \"d083e8c3-449b-4a41-a188-bdc63fd72ad1\") " pod="openshift-console/console-79b486fcf8-j7rvv" Jan 29 06:47:08 crc kubenswrapper[5017]: I0129 06:47:08.559250 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d083e8c3-449b-4a41-a188-bdc63fd72ad1-console-serving-cert\") pod \"console-79b486fcf8-j7rvv\" (UID: \"d083e8c3-449b-4a41-a188-bdc63fd72ad1\") " pod="openshift-console/console-79b486fcf8-j7rvv" Jan 29 06:47:08 crc kubenswrapper[5017]: I0129 06:47:08.561090 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d083e8c3-449b-4a41-a188-bdc63fd72ad1-console-oauth-config\") pod \"console-79b486fcf8-j7rvv\" (UID: \"d083e8c3-449b-4a41-a188-bdc63fd72ad1\") " pod="openshift-console/console-79b486fcf8-j7rvv" Jan 29 06:47:08 crc kubenswrapper[5017]: I0129 06:47:08.563120 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d083e8c3-449b-4a41-a188-bdc63fd72ad1-service-ca\") pod \"console-79b486fcf8-j7rvv\" (UID: \"d083e8c3-449b-4a41-a188-bdc63fd72ad1\") " pod="openshift-console/console-79b486fcf8-j7rvv" Jan 29 06:47:08 crc kubenswrapper[5017]: I0129 06:47:08.578246 5017 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxtbd\" (UniqueName: \"kubernetes.io/projected/d083e8c3-449b-4a41-a188-bdc63fd72ad1-kube-api-access-gxtbd\") pod \"console-79b486fcf8-j7rvv\" (UID: \"d083e8c3-449b-4a41-a188-bdc63fd72ad1\") " pod="openshift-console/console-79b486fcf8-j7rvv" Jan 29 06:47:08 crc kubenswrapper[5017]: I0129 06:47:08.651458 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-79b486fcf8-j7rvv" Jan 29 06:47:08 crc kubenswrapper[5017]: I0129 06:47:08.690819 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-c979l"] Jan 29 06:47:08 crc kubenswrapper[5017]: W0129 06:47:08.711893 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod77f8b370_cba3_4b23_956d_85e2eac24634.slice/crio-71cbfbc6d0daef4f2e1e2f171fce4499a83e78a31d1a088651bb231d253093fd WatchSource:0}: Error finding container 71cbfbc6d0daef4f2e1e2f171fce4499a83e78a31d1a088651bb231d253093fd: Status 404 returned error can't find the container with id 71cbfbc6d0daef4f2e1e2f171fce4499a83e78a31d1a088651bb231d253093fd Jan 29 06:47:08 crc kubenswrapper[5017]: I0129 06:47:08.906711 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-79b486fcf8-j7rvv"] Jan 29 06:47:08 crc kubenswrapper[5017]: W0129 06:47:08.915232 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd083e8c3_449b_4a41_a188_bdc63fd72ad1.slice/crio-e0631da899491b879494875c2828f7e524f8c72a3dec171619367b9bfbffe974 WatchSource:0}: Error finding container e0631da899491b879494875c2828f7e524f8c72a3dec171619367b9bfbffe974: Status 404 returned error can't find the container with id e0631da899491b879494875c2828f7e524f8c72a3dec171619367b9bfbffe974 Jan 29 06:47:08 crc kubenswrapper[5017]: I0129 06:47:08.920242 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-c979l" event={"ID":"77f8b370-cba3-4b23-956d-85e2eac24634","Type":"ContainerStarted","Data":"71cbfbc6d0daef4f2e1e2f171fce4499a83e78a31d1a088651bb231d253093fd"} Jan 29 06:47:08 crc kubenswrapper[5017]: I0129 06:47:08.921848 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-kwtrf" event={"ID":"bfe492e6-837a-4318-a908-125c9cc736d0","Type":"ContainerStarted","Data":"4f2e485e6167a0cb81c88a1fcb7d944dd2a1e19278b18ec845c6fa3f19a6799a"} Jan 29 06:47:08 crc kubenswrapper[5017]: I0129 06:47:08.922968 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-ljkjc" event={"ID":"8da9c51f-d7bf-499a-a29b-348743eb72ad","Type":"ContainerStarted","Data":"902c52b9645675d97ed9d256e34dcd38da02424c678056b465d68def420268c0"} Jan 29 06:47:08 crc kubenswrapper[5017]: I0129 06:47:08.924768 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-gpxgz" event={"ID":"59ffce2a-c49d-42c3-b665-c2cab504e523","Type":"ContainerStarted","Data":"2397aad6e4efcd903596c390cef3fc11c036bc4fac20f971df3dfc20405aa86b"} Jan 29 06:47:09 crc kubenswrapper[5017]: I0129 06:47:09.933499 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-79b486fcf8-j7rvv" 
event={"ID":"d083e8c3-449b-4a41-a188-bdc63fd72ad1","Type":"ContainerStarted","Data":"62b3e0f8e239710ce73bb69f2cb05c26941049103cff92fbc41847a02c028094"} Jan 29 06:47:09 crc kubenswrapper[5017]: I0129 06:47:09.933884 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-79b486fcf8-j7rvv" event={"ID":"d083e8c3-449b-4a41-a188-bdc63fd72ad1","Type":"ContainerStarted","Data":"e0631da899491b879494875c2828f7e524f8c72a3dec171619367b9bfbffe974"} Jan 29 06:47:09 crc kubenswrapper[5017]: I0129 06:47:09.959591 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-79b486fcf8-j7rvv" podStartSLOduration=1.9595667140000002 podStartE2EDuration="1.959566714s" podCreationTimestamp="2026-01-29 06:47:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:47:09.959168734 +0000 UTC m=+716.333616364" watchObservedRunningTime="2026-01-29 06:47:09.959566714 +0000 UTC m=+716.334014324" Jan 29 06:47:10 crc kubenswrapper[5017]: I0129 06:47:10.941647 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-gpxgz" event={"ID":"59ffce2a-c49d-42c3-b665-c2cab504e523","Type":"ContainerStarted","Data":"9ad4815d26897317ef9a533f1a6b9906afd96fdff607f0e104c180c19ab4e2b0"} Jan 29 06:47:10 crc kubenswrapper[5017]: I0129 06:47:10.945193 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-ljkjc" event={"ID":"8da9c51f-d7bf-499a-a29b-348743eb72ad","Type":"ContainerStarted","Data":"962176d782609de26c6359a1c6ba74abc047cd6eea4752d3268d63ea97a07f25"} Jan 29 06:47:10 crc kubenswrapper[5017]: I0129 06:47:10.945255 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-ljkjc" Jan 29 06:47:11 crc kubenswrapper[5017]: I0129 06:47:11.957921 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-c979l" event={"ID":"77f8b370-cba3-4b23-956d-85e2eac24634","Type":"ContainerStarted","Data":"4b8229fc52f09c09f0ccdf432e7d83c58a4f3e565fda5e42619784967e9a1eae"} Jan 29 06:47:11 crc kubenswrapper[5017]: I0129 06:47:11.959536 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-kwtrf" event={"ID":"bfe492e6-837a-4318-a908-125c9cc736d0","Type":"ContainerStarted","Data":"7f055390557a5883a626d9e1b4c3c3d02d8822b6819902e803e69c989b70f7df"} Jan 29 06:47:11 crc kubenswrapper[5017]: I0129 06:47:11.959697 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-kwtrf" Jan 29 06:47:11 crc kubenswrapper[5017]: I0129 06:47:11.982658 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-ljkjc" podStartSLOduration=2.813462533 podStartE2EDuration="4.982629482s" podCreationTimestamp="2026-01-29 06:47:07 +0000 UTC" firstStartedPulling="2026-01-29 06:47:08.497257531 +0000 UTC m=+714.871705141" lastFinishedPulling="2026-01-29 06:47:10.66642448 +0000 UTC m=+717.040872090" observedRunningTime="2026-01-29 06:47:10.968334928 +0000 UTC m=+717.342782538" watchObservedRunningTime="2026-01-29 06:47:11.982629482 +0000 UTC m=+718.357077092" Jan 29 06:47:11 crc kubenswrapper[5017]: I0129 06:47:11.983263 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-c979l" 
podStartSLOduration=0.981277866 podStartE2EDuration="3.983255716s" podCreationTimestamp="2026-01-29 06:47:08 +0000 UTC" firstStartedPulling="2026-01-29 06:47:08.714892712 +0000 UTC m=+715.089340322" lastFinishedPulling="2026-01-29 06:47:11.716870562 +0000 UTC m=+718.091318172" observedRunningTime="2026-01-29 06:47:11.97914744 +0000 UTC m=+718.353595050" watchObservedRunningTime="2026-01-29 06:47:11.983255716 +0000 UTC m=+718.357703346" Jan 29 06:47:12 crc kubenswrapper[5017]: I0129 06:47:12.003720 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-kwtrf" podStartSLOduration=2.583913435 podStartE2EDuration="5.003695496s" podCreationTimestamp="2026-01-29 06:47:07 +0000 UTC" firstStartedPulling="2026-01-29 06:47:08.208614115 +0000 UTC m=+714.583061715" lastFinishedPulling="2026-01-29 06:47:10.628396156 +0000 UTC m=+717.002843776" observedRunningTime="2026-01-29 06:47:12.001103225 +0000 UTC m=+718.375550845" watchObservedRunningTime="2026-01-29 06:47:12.003695496 +0000 UTC m=+718.378143106" Jan 29 06:47:13 crc kubenswrapper[5017]: I0129 06:47:13.985464 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-gpxgz" event={"ID":"59ffce2a-c49d-42c3-b665-c2cab504e523","Type":"ContainerStarted","Data":"7264fc36170526915c1b2ff0a504a4a5dac673d600b328878c6f6e06fca4865e"} Jan 29 06:47:18 crc kubenswrapper[5017]: I0129 06:47:18.202025 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-kwtrf" Jan 29 06:47:18 crc kubenswrapper[5017]: I0129 06:47:18.231096 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-gpxgz" podStartSLOduration=6.590387619 podStartE2EDuration="11.231073402s" podCreationTimestamp="2026-01-29 06:47:07 +0000 UTC" firstStartedPulling="2026-01-29 06:47:08.40259292 +0000 UTC m=+714.777040530" lastFinishedPulling="2026-01-29 06:47:13.043278693 +0000 UTC m=+719.417726313" observedRunningTime="2026-01-29 06:47:14.008331461 +0000 UTC m=+720.382779091" watchObservedRunningTime="2026-01-29 06:47:18.231073402 +0000 UTC m=+724.605521012" Jan 29 06:47:18 crc kubenswrapper[5017]: I0129 06:47:18.652404 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-79b486fcf8-j7rvv" Jan 29 06:47:18 crc kubenswrapper[5017]: I0129 06:47:18.654923 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-79b486fcf8-j7rvv" Jan 29 06:47:18 crc kubenswrapper[5017]: I0129 06:47:18.661946 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-79b486fcf8-j7rvv" Jan 29 06:47:19 crc kubenswrapper[5017]: I0129 06:47:19.027105 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-79b486fcf8-j7rvv" Jan 29 06:47:19 crc kubenswrapper[5017]: I0129 06:47:19.098494 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-z5brc"] Jan 29 06:47:26 crc kubenswrapper[5017]: I0129 06:47:26.539607 5017 patch_prober.go:28] interesting pod/machine-config-daemon-895pl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 06:47:26 crc kubenswrapper[5017]: I0129 06:47:26.540692 5017 prober.go:107] 
"Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 06:47:28 crc kubenswrapper[5017]: I0129 06:47:28.147932 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-ljkjc" Jan 29 06:47:42 crc kubenswrapper[5017]: I0129 06:47:42.928219 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcx66d6"] Jan 29 06:47:42 crc kubenswrapper[5017]: I0129 06:47:42.929974 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcx66d6" Jan 29 06:47:42 crc kubenswrapper[5017]: I0129 06:47:42.932212 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 29 06:47:42 crc kubenswrapper[5017]: I0129 06:47:42.941293 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcx66d6"] Jan 29 06:47:42 crc kubenswrapper[5017]: I0129 06:47:42.955531 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8ab2e317-a111-486e-aff4-2bf131383d02-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcx66d6\" (UID: \"8ab2e317-a111-486e-aff4-2bf131383d02\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcx66d6" Jan 29 06:47:42 crc kubenswrapper[5017]: I0129 06:47:42.955594 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzb8s\" (UniqueName: \"kubernetes.io/projected/8ab2e317-a111-486e-aff4-2bf131383d02-kube-api-access-xzb8s\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcx66d6\" (UID: \"8ab2e317-a111-486e-aff4-2bf131383d02\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcx66d6" Jan 29 06:47:42 crc kubenswrapper[5017]: I0129 06:47:42.955666 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8ab2e317-a111-486e-aff4-2bf131383d02-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcx66d6\" (UID: \"8ab2e317-a111-486e-aff4-2bf131383d02\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcx66d6" Jan 29 06:47:43 crc kubenswrapper[5017]: I0129 06:47:43.056768 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8ab2e317-a111-486e-aff4-2bf131383d02-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcx66d6\" (UID: \"8ab2e317-a111-486e-aff4-2bf131383d02\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcx66d6" Jan 29 06:47:43 crc kubenswrapper[5017]: I0129 06:47:43.056823 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzb8s\" (UniqueName: \"kubernetes.io/projected/8ab2e317-a111-486e-aff4-2bf131383d02-kube-api-access-xzb8s\") pod 
\"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcx66d6\" (UID: \"8ab2e317-a111-486e-aff4-2bf131383d02\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcx66d6" Jan 29 06:47:43 crc kubenswrapper[5017]: I0129 06:47:43.056879 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8ab2e317-a111-486e-aff4-2bf131383d02-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcx66d6\" (UID: \"8ab2e317-a111-486e-aff4-2bf131383d02\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcx66d6" Jan 29 06:47:43 crc kubenswrapper[5017]: I0129 06:47:43.057352 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8ab2e317-a111-486e-aff4-2bf131383d02-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcx66d6\" (UID: \"8ab2e317-a111-486e-aff4-2bf131383d02\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcx66d6" Jan 29 06:47:43 crc kubenswrapper[5017]: I0129 06:47:43.057378 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8ab2e317-a111-486e-aff4-2bf131383d02-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcx66d6\" (UID: \"8ab2e317-a111-486e-aff4-2bf131383d02\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcx66d6" Jan 29 06:47:43 crc kubenswrapper[5017]: I0129 06:47:43.081564 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzb8s\" (UniqueName: \"kubernetes.io/projected/8ab2e317-a111-486e-aff4-2bf131383d02-kube-api-access-xzb8s\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcx66d6\" (UID: \"8ab2e317-a111-486e-aff4-2bf131383d02\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcx66d6" Jan 29 06:47:43 crc kubenswrapper[5017]: I0129 06:47:43.248342 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcx66d6" Jan 29 06:47:43 crc kubenswrapper[5017]: I0129 06:47:43.474209 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcx66d6"] Jan 29 06:47:44 crc kubenswrapper[5017]: I0129 06:47:44.165687 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-z5brc" podUID="23943ec6-beb6-4bef-b4b1-e5c840ab997b" containerName="console" containerID="cri-o://bc8bc4e1d17964d470bae0356199b19c7a840f60ce40153c01bda65cc38e80c6" gracePeriod=15 Jan 29 06:47:44 crc kubenswrapper[5017]: I0129 06:47:44.209128 5017 generic.go:334] "Generic (PLEG): container finished" podID="8ab2e317-a111-486e-aff4-2bf131383d02" containerID="66c3031f7ce875725e7db43da471c4f634a73a119fd3680cd0a907a4cb09d3cc" exitCode=0 Jan 29 06:47:44 crc kubenswrapper[5017]: I0129 06:47:44.209188 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcx66d6" event={"ID":"8ab2e317-a111-486e-aff4-2bf131383d02","Type":"ContainerDied","Data":"66c3031f7ce875725e7db43da471c4f634a73a119fd3680cd0a907a4cb09d3cc"} Jan 29 06:47:44 crc kubenswrapper[5017]: I0129 06:47:44.209223 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcx66d6" event={"ID":"8ab2e317-a111-486e-aff4-2bf131383d02","Type":"ContainerStarted","Data":"cf6eb20037bf10701082a492b121a8df3dbbc6c2ec7ec6729d16c080f9b61847"} Jan 29 06:47:44 crc kubenswrapper[5017]: I0129 06:47:44.536516 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-z5brc_23943ec6-beb6-4bef-b4b1-e5c840ab997b/console/0.log" Jan 29 06:47:44 crc kubenswrapper[5017]: I0129 06:47:44.536640 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-z5brc" Jan 29 06:47:44 crc kubenswrapper[5017]: I0129 06:47:44.688507 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/23943ec6-beb6-4bef-b4b1-e5c840ab997b-console-oauth-config\") pod \"23943ec6-beb6-4bef-b4b1-e5c840ab997b\" (UID: \"23943ec6-beb6-4bef-b4b1-e5c840ab997b\") " Jan 29 06:47:44 crc kubenswrapper[5017]: I0129 06:47:44.688716 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/23943ec6-beb6-4bef-b4b1-e5c840ab997b-console-serving-cert\") pod \"23943ec6-beb6-4bef-b4b1-e5c840ab997b\" (UID: \"23943ec6-beb6-4bef-b4b1-e5c840ab997b\") " Jan 29 06:47:44 crc kubenswrapper[5017]: I0129 06:47:44.688788 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/23943ec6-beb6-4bef-b4b1-e5c840ab997b-console-config\") pod \"23943ec6-beb6-4bef-b4b1-e5c840ab997b\" (UID: \"23943ec6-beb6-4bef-b4b1-e5c840ab997b\") " Jan 29 06:47:44 crc kubenswrapper[5017]: I0129 06:47:44.688831 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/23943ec6-beb6-4bef-b4b1-e5c840ab997b-trusted-ca-bundle\") pod \"23943ec6-beb6-4bef-b4b1-e5c840ab997b\" (UID: \"23943ec6-beb6-4bef-b4b1-e5c840ab997b\") " Jan 29 06:47:44 crc kubenswrapper[5017]: I0129 06:47:44.688862 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/23943ec6-beb6-4bef-b4b1-e5c840ab997b-service-ca\") pod \"23943ec6-beb6-4bef-b4b1-e5c840ab997b\" (UID: \"23943ec6-beb6-4bef-b4b1-e5c840ab997b\") " Jan 29 06:47:44 crc kubenswrapper[5017]: I0129 06:47:44.688904 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j5dsg\" (UniqueName: \"kubernetes.io/projected/23943ec6-beb6-4bef-b4b1-e5c840ab997b-kube-api-access-j5dsg\") pod \"23943ec6-beb6-4bef-b4b1-e5c840ab997b\" (UID: \"23943ec6-beb6-4bef-b4b1-e5c840ab997b\") " Jan 29 06:47:44 crc kubenswrapper[5017]: I0129 06:47:44.688978 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/23943ec6-beb6-4bef-b4b1-e5c840ab997b-oauth-serving-cert\") pod \"23943ec6-beb6-4bef-b4b1-e5c840ab997b\" (UID: \"23943ec6-beb6-4bef-b4b1-e5c840ab997b\") " Jan 29 06:47:44 crc kubenswrapper[5017]: I0129 06:47:44.690221 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23943ec6-beb6-4bef-b4b1-e5c840ab997b-console-config" (OuterVolumeSpecName: "console-config") pod "23943ec6-beb6-4bef-b4b1-e5c840ab997b" (UID: "23943ec6-beb6-4bef-b4b1-e5c840ab997b"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:47:44 crc kubenswrapper[5017]: I0129 06:47:44.690344 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23943ec6-beb6-4bef-b4b1-e5c840ab997b-service-ca" (OuterVolumeSpecName: "service-ca") pod "23943ec6-beb6-4bef-b4b1-e5c840ab997b" (UID: "23943ec6-beb6-4bef-b4b1-e5c840ab997b"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:47:44 crc kubenswrapper[5017]: I0129 06:47:44.690798 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23943ec6-beb6-4bef-b4b1-e5c840ab997b-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "23943ec6-beb6-4bef-b4b1-e5c840ab997b" (UID: "23943ec6-beb6-4bef-b4b1-e5c840ab997b"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:47:44 crc kubenswrapper[5017]: I0129 06:47:44.691263 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23943ec6-beb6-4bef-b4b1-e5c840ab997b-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "23943ec6-beb6-4bef-b4b1-e5c840ab997b" (UID: "23943ec6-beb6-4bef-b4b1-e5c840ab997b"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:47:44 crc kubenswrapper[5017]: I0129 06:47:44.704320 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23943ec6-beb6-4bef-b4b1-e5c840ab997b-kube-api-access-j5dsg" (OuterVolumeSpecName: "kube-api-access-j5dsg") pod "23943ec6-beb6-4bef-b4b1-e5c840ab997b" (UID: "23943ec6-beb6-4bef-b4b1-e5c840ab997b"). InnerVolumeSpecName "kube-api-access-j5dsg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:47:44 crc kubenswrapper[5017]: I0129 06:47:44.704395 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23943ec6-beb6-4bef-b4b1-e5c840ab997b-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "23943ec6-beb6-4bef-b4b1-e5c840ab997b" (UID: "23943ec6-beb6-4bef-b4b1-e5c840ab997b"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:47:44 crc kubenswrapper[5017]: I0129 06:47:44.705080 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23943ec6-beb6-4bef-b4b1-e5c840ab997b-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "23943ec6-beb6-4bef-b4b1-e5c840ab997b" (UID: "23943ec6-beb6-4bef-b4b1-e5c840ab997b"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:47:44 crc kubenswrapper[5017]: I0129 06:47:44.792530 5017 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/23943ec6-beb6-4bef-b4b1-e5c840ab997b-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 06:47:44 crc kubenswrapper[5017]: I0129 06:47:44.792573 5017 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/23943ec6-beb6-4bef-b4b1-e5c840ab997b-console-config\") on node \"crc\" DevicePath \"\"" Jan 29 06:47:44 crc kubenswrapper[5017]: I0129 06:47:44.792584 5017 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/23943ec6-beb6-4bef-b4b1-e5c840ab997b-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 06:47:44 crc kubenswrapper[5017]: I0129 06:47:44.792596 5017 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/23943ec6-beb6-4bef-b4b1-e5c840ab997b-service-ca\") on node \"crc\" DevicePath \"\"" Jan 29 06:47:44 crc kubenswrapper[5017]: I0129 06:47:44.792605 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j5dsg\" (UniqueName: \"kubernetes.io/projected/23943ec6-beb6-4bef-b4b1-e5c840ab997b-kube-api-access-j5dsg\") on node \"crc\" DevicePath \"\"" Jan 29 06:47:44 crc kubenswrapper[5017]: I0129 06:47:44.792617 5017 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/23943ec6-beb6-4bef-b4b1-e5c840ab997b-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 06:47:44 crc kubenswrapper[5017]: I0129 06:47:44.792626 5017 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/23943ec6-beb6-4bef-b4b1-e5c840ab997b-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 29 06:47:45 crc kubenswrapper[5017]: I0129 06:47:45.223227 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-z5brc_23943ec6-beb6-4bef-b4b1-e5c840ab997b/console/0.log" Jan 29 06:47:45 crc kubenswrapper[5017]: I0129 06:47:45.224092 5017 generic.go:334] "Generic (PLEG): container finished" podID="23943ec6-beb6-4bef-b4b1-e5c840ab997b" containerID="bc8bc4e1d17964d470bae0356199b19c7a840f60ce40153c01bda65cc38e80c6" exitCode=2 Jan 29 06:47:45 crc kubenswrapper[5017]: I0129 06:47:45.224162 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-z5brc" event={"ID":"23943ec6-beb6-4bef-b4b1-e5c840ab997b","Type":"ContainerDied","Data":"bc8bc4e1d17964d470bae0356199b19c7a840f60ce40153c01bda65cc38e80c6"} Jan 29 06:47:45 crc kubenswrapper[5017]: I0129 06:47:45.224206 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-z5brc" event={"ID":"23943ec6-beb6-4bef-b4b1-e5c840ab997b","Type":"ContainerDied","Data":"def5d1b497268ffd07f193c337b524c3b9efb3a7e71c236747cf8ef8f99f38d5"} Jan 29 06:47:45 crc kubenswrapper[5017]: I0129 06:47:45.224231 5017 scope.go:117] "RemoveContainer" containerID="bc8bc4e1d17964d470bae0356199b19c7a840f60ce40153c01bda65cc38e80c6" Jan 29 06:47:45 crc kubenswrapper[5017]: I0129 06:47:45.224425 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-z5brc" Jan 29 06:47:45 crc kubenswrapper[5017]: I0129 06:47:45.245858 5017 scope.go:117] "RemoveContainer" containerID="bc8bc4e1d17964d470bae0356199b19c7a840f60ce40153c01bda65cc38e80c6" Jan 29 06:47:45 crc kubenswrapper[5017]: E0129 06:47:45.246409 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc8bc4e1d17964d470bae0356199b19c7a840f60ce40153c01bda65cc38e80c6\": container with ID starting with bc8bc4e1d17964d470bae0356199b19c7a840f60ce40153c01bda65cc38e80c6 not found: ID does not exist" containerID="bc8bc4e1d17964d470bae0356199b19c7a840f60ce40153c01bda65cc38e80c6" Jan 29 06:47:45 crc kubenswrapper[5017]: I0129 06:47:45.246445 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc8bc4e1d17964d470bae0356199b19c7a840f60ce40153c01bda65cc38e80c6"} err="failed to get container status \"bc8bc4e1d17964d470bae0356199b19c7a840f60ce40153c01bda65cc38e80c6\": rpc error: code = NotFound desc = could not find container \"bc8bc4e1d17964d470bae0356199b19c7a840f60ce40153c01bda65cc38e80c6\": container with ID starting with bc8bc4e1d17964d470bae0356199b19c7a840f60ce40153c01bda65cc38e80c6 not found: ID does not exist" Jan 29 06:47:45 crc kubenswrapper[5017]: I0129 06:47:45.256106 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-z5brc"] Jan 29 06:47:45 crc kubenswrapper[5017]: I0129 06:47:45.259850 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-z5brc"] Jan 29 06:47:46 crc kubenswrapper[5017]: I0129 06:47:46.233917 5017 generic.go:334] "Generic (PLEG): container finished" podID="8ab2e317-a111-486e-aff4-2bf131383d02" containerID="0d913fc45e302b19cc66a2bc12ae4f88fd8e3bad8f432c63c16684d1dfdd0835" exitCode=0 Jan 29 06:47:46 crc kubenswrapper[5017]: I0129 06:47:46.234049 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcx66d6" event={"ID":"8ab2e317-a111-486e-aff4-2bf131383d02","Type":"ContainerDied","Data":"0d913fc45e302b19cc66a2bc12ae4f88fd8e3bad8f432c63c16684d1dfdd0835"} Jan 29 06:47:46 crc kubenswrapper[5017]: I0129 06:47:46.333725 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23943ec6-beb6-4bef-b4b1-e5c840ab997b" path="/var/lib/kubelet/pods/23943ec6-beb6-4bef-b4b1-e5c840ab997b/volumes" Jan 29 06:47:47 crc kubenswrapper[5017]: I0129 06:47:47.250091 5017 generic.go:334] "Generic (PLEG): container finished" podID="8ab2e317-a111-486e-aff4-2bf131383d02" containerID="d3fa1effba1aee95739b6ddc88a7daee4c7af179cc3bb6501a8d0aa970be4709" exitCode=0 Jan 29 06:47:47 crc kubenswrapper[5017]: I0129 06:47:47.250245 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcx66d6" event={"ID":"8ab2e317-a111-486e-aff4-2bf131383d02","Type":"ContainerDied","Data":"d3fa1effba1aee95739b6ddc88a7daee4c7af179cc3bb6501a8d0aa970be4709"} Jan 29 06:47:48 crc kubenswrapper[5017]: I0129 06:47:48.529096 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcx66d6" Jan 29 06:47:48 crc kubenswrapper[5017]: I0129 06:47:48.553251 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xzb8s\" (UniqueName: \"kubernetes.io/projected/8ab2e317-a111-486e-aff4-2bf131383d02-kube-api-access-xzb8s\") pod \"8ab2e317-a111-486e-aff4-2bf131383d02\" (UID: \"8ab2e317-a111-486e-aff4-2bf131383d02\") " Jan 29 06:47:48 crc kubenswrapper[5017]: I0129 06:47:48.553334 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8ab2e317-a111-486e-aff4-2bf131383d02-util\") pod \"8ab2e317-a111-486e-aff4-2bf131383d02\" (UID: \"8ab2e317-a111-486e-aff4-2bf131383d02\") " Jan 29 06:47:48 crc kubenswrapper[5017]: I0129 06:47:48.553458 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8ab2e317-a111-486e-aff4-2bf131383d02-bundle\") pod \"8ab2e317-a111-486e-aff4-2bf131383d02\" (UID: \"8ab2e317-a111-486e-aff4-2bf131383d02\") " Jan 29 06:47:48 crc kubenswrapper[5017]: I0129 06:47:48.556855 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ab2e317-a111-486e-aff4-2bf131383d02-bundle" (OuterVolumeSpecName: "bundle") pod "8ab2e317-a111-486e-aff4-2bf131383d02" (UID: "8ab2e317-a111-486e-aff4-2bf131383d02"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:47:48 crc kubenswrapper[5017]: I0129 06:47:48.561244 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ab2e317-a111-486e-aff4-2bf131383d02-kube-api-access-xzb8s" (OuterVolumeSpecName: "kube-api-access-xzb8s") pod "8ab2e317-a111-486e-aff4-2bf131383d02" (UID: "8ab2e317-a111-486e-aff4-2bf131383d02"). InnerVolumeSpecName "kube-api-access-xzb8s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:47:48 crc kubenswrapper[5017]: I0129 06:47:48.568882 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ab2e317-a111-486e-aff4-2bf131383d02-util" (OuterVolumeSpecName: "util") pod "8ab2e317-a111-486e-aff4-2bf131383d02" (UID: "8ab2e317-a111-486e-aff4-2bf131383d02"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:47:48 crc kubenswrapper[5017]: I0129 06:47:48.655844 5017 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8ab2e317-a111-486e-aff4-2bf131383d02-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 06:47:48 crc kubenswrapper[5017]: I0129 06:47:48.655949 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xzb8s\" (UniqueName: \"kubernetes.io/projected/8ab2e317-a111-486e-aff4-2bf131383d02-kube-api-access-xzb8s\") on node \"crc\" DevicePath \"\"" Jan 29 06:47:48 crc kubenswrapper[5017]: I0129 06:47:48.656004 5017 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8ab2e317-a111-486e-aff4-2bf131383d02-util\") on node \"crc\" DevicePath \"\"" Jan 29 06:47:48 crc kubenswrapper[5017]: I0129 06:47:48.829792 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jn5qr"] Jan 29 06:47:48 crc kubenswrapper[5017]: E0129 06:47:48.830045 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ab2e317-a111-486e-aff4-2bf131383d02" containerName="extract" Jan 29 06:47:48 crc kubenswrapper[5017]: I0129 06:47:48.830059 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ab2e317-a111-486e-aff4-2bf131383d02" containerName="extract" Jan 29 06:47:48 crc kubenswrapper[5017]: E0129 06:47:48.830070 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ab2e317-a111-486e-aff4-2bf131383d02" containerName="pull" Jan 29 06:47:48 crc kubenswrapper[5017]: I0129 06:47:48.830075 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ab2e317-a111-486e-aff4-2bf131383d02" containerName="pull" Jan 29 06:47:48 crc kubenswrapper[5017]: E0129 06:47:48.830084 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23943ec6-beb6-4bef-b4b1-e5c840ab997b" containerName="console" Jan 29 06:47:48 crc kubenswrapper[5017]: I0129 06:47:48.830091 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="23943ec6-beb6-4bef-b4b1-e5c840ab997b" containerName="console" Jan 29 06:47:48 crc kubenswrapper[5017]: E0129 06:47:48.830102 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ab2e317-a111-486e-aff4-2bf131383d02" containerName="util" Jan 29 06:47:48 crc kubenswrapper[5017]: I0129 06:47:48.830108 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ab2e317-a111-486e-aff4-2bf131383d02" containerName="util" Jan 29 06:47:48 crc kubenswrapper[5017]: I0129 06:47:48.830216 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ab2e317-a111-486e-aff4-2bf131383d02" containerName="extract" Jan 29 06:47:48 crc kubenswrapper[5017]: I0129 06:47:48.830227 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="23943ec6-beb6-4bef-b4b1-e5c840ab997b" containerName="console" Jan 29 06:47:48 crc kubenswrapper[5017]: I0129 06:47:48.831095 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jn5qr" Jan 29 06:47:48 crc kubenswrapper[5017]: I0129 06:47:48.857423 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jn5qr"] Jan 29 06:47:48 crc kubenswrapper[5017]: I0129 06:47:48.859001 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6c53a9d-0b62-4cc8-8bda-53c7cfdaf60d-catalog-content\") pod \"redhat-operators-jn5qr\" (UID: \"f6c53a9d-0b62-4cc8-8bda-53c7cfdaf60d\") " pod="openshift-marketplace/redhat-operators-jn5qr" Jan 29 06:47:48 crc kubenswrapper[5017]: I0129 06:47:48.859051 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vh657\" (UniqueName: \"kubernetes.io/projected/f6c53a9d-0b62-4cc8-8bda-53c7cfdaf60d-kube-api-access-vh657\") pod \"redhat-operators-jn5qr\" (UID: \"f6c53a9d-0b62-4cc8-8bda-53c7cfdaf60d\") " pod="openshift-marketplace/redhat-operators-jn5qr" Jan 29 06:47:48 crc kubenswrapper[5017]: I0129 06:47:48.859097 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6c53a9d-0b62-4cc8-8bda-53c7cfdaf60d-utilities\") pod \"redhat-operators-jn5qr\" (UID: \"f6c53a9d-0b62-4cc8-8bda-53c7cfdaf60d\") " pod="openshift-marketplace/redhat-operators-jn5qr" Jan 29 06:47:48 crc kubenswrapper[5017]: I0129 06:47:48.926929 5017 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 29 06:47:48 crc kubenswrapper[5017]: I0129 06:47:48.960876 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6c53a9d-0b62-4cc8-8bda-53c7cfdaf60d-utilities\") pod \"redhat-operators-jn5qr\" (UID: \"f6c53a9d-0b62-4cc8-8bda-53c7cfdaf60d\") " pod="openshift-marketplace/redhat-operators-jn5qr" Jan 29 06:47:48 crc kubenswrapper[5017]: I0129 06:47:48.961049 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6c53a9d-0b62-4cc8-8bda-53c7cfdaf60d-catalog-content\") pod \"redhat-operators-jn5qr\" (UID: \"f6c53a9d-0b62-4cc8-8bda-53c7cfdaf60d\") " pod="openshift-marketplace/redhat-operators-jn5qr" Jan 29 06:47:48 crc kubenswrapper[5017]: I0129 06:47:48.961738 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6c53a9d-0b62-4cc8-8bda-53c7cfdaf60d-utilities\") pod \"redhat-operators-jn5qr\" (UID: \"f6c53a9d-0b62-4cc8-8bda-53c7cfdaf60d\") " pod="openshift-marketplace/redhat-operators-jn5qr" Jan 29 06:47:48 crc kubenswrapper[5017]: I0129 06:47:48.961795 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vh657\" (UniqueName: \"kubernetes.io/projected/f6c53a9d-0b62-4cc8-8bda-53c7cfdaf60d-kube-api-access-vh657\") pod \"redhat-operators-jn5qr\" (UID: \"f6c53a9d-0b62-4cc8-8bda-53c7cfdaf60d\") " pod="openshift-marketplace/redhat-operators-jn5qr" Jan 29 06:47:48 crc kubenswrapper[5017]: I0129 06:47:48.961755 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6c53a9d-0b62-4cc8-8bda-53c7cfdaf60d-catalog-content\") pod \"redhat-operators-jn5qr\" (UID: \"f6c53a9d-0b62-4cc8-8bda-53c7cfdaf60d\") " 
pod="openshift-marketplace/redhat-operators-jn5qr" Jan 29 06:47:48 crc kubenswrapper[5017]: I0129 06:47:48.984215 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vh657\" (UniqueName: \"kubernetes.io/projected/f6c53a9d-0b62-4cc8-8bda-53c7cfdaf60d-kube-api-access-vh657\") pod \"redhat-operators-jn5qr\" (UID: \"f6c53a9d-0b62-4cc8-8bda-53c7cfdaf60d\") " pod="openshift-marketplace/redhat-operators-jn5qr" Jan 29 06:47:49 crc kubenswrapper[5017]: I0129 06:47:49.161442 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jn5qr" Jan 29 06:47:49 crc kubenswrapper[5017]: I0129 06:47:49.274352 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcx66d6" event={"ID":"8ab2e317-a111-486e-aff4-2bf131383d02","Type":"ContainerDied","Data":"cf6eb20037bf10701082a492b121a8df3dbbc6c2ec7ec6729d16c080f9b61847"} Jan 29 06:47:49 crc kubenswrapper[5017]: I0129 06:47:49.274406 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf6eb20037bf10701082a492b121a8df3dbbc6c2ec7ec6729d16c080f9b61847" Jan 29 06:47:49 crc kubenswrapper[5017]: I0129 06:47:49.274522 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcx66d6" Jan 29 06:47:49 crc kubenswrapper[5017]: I0129 06:47:49.627877 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jn5qr"] Jan 29 06:47:50 crc kubenswrapper[5017]: I0129 06:47:50.283406 5017 generic.go:334] "Generic (PLEG): container finished" podID="f6c53a9d-0b62-4cc8-8bda-53c7cfdaf60d" containerID="3073c85290fae6423b0646d0c405ed1037e425046a975ef1b5e4b306cdcac1ff" exitCode=0 Jan 29 06:47:50 crc kubenswrapper[5017]: I0129 06:47:50.284013 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jn5qr" event={"ID":"f6c53a9d-0b62-4cc8-8bda-53c7cfdaf60d","Type":"ContainerDied","Data":"3073c85290fae6423b0646d0c405ed1037e425046a975ef1b5e4b306cdcac1ff"} Jan 29 06:47:50 crc kubenswrapper[5017]: I0129 06:47:50.284060 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jn5qr" event={"ID":"f6c53a9d-0b62-4cc8-8bda-53c7cfdaf60d","Type":"ContainerStarted","Data":"d974724eb4e3ae3dd404dedb0b35d454dca242f24382103f6a24e6891c82b1a5"} Jan 29 06:47:51 crc kubenswrapper[5017]: I0129 06:47:51.292544 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jn5qr" event={"ID":"f6c53a9d-0b62-4cc8-8bda-53c7cfdaf60d","Type":"ContainerStarted","Data":"c9105c8b98d93494f74c990993656691024ef3ab2e062fb7b16ee1bff37770d6"} Jan 29 06:47:52 crc kubenswrapper[5017]: I0129 06:47:52.303442 5017 generic.go:334] "Generic (PLEG): container finished" podID="f6c53a9d-0b62-4cc8-8bda-53c7cfdaf60d" containerID="c9105c8b98d93494f74c990993656691024ef3ab2e062fb7b16ee1bff37770d6" exitCode=0 Jan 29 06:47:52 crc kubenswrapper[5017]: I0129 06:47:52.303690 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jn5qr" event={"ID":"f6c53a9d-0b62-4cc8-8bda-53c7cfdaf60d","Type":"ContainerDied","Data":"c9105c8b98d93494f74c990993656691024ef3ab2e062fb7b16ee1bff37770d6"} Jan 29 06:47:53 crc kubenswrapper[5017]: I0129 06:47:53.315999 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-jn5qr" event={"ID":"f6c53a9d-0b62-4cc8-8bda-53c7cfdaf60d","Type":"ContainerStarted","Data":"49ef703ad90532ba8554d044dc8504d0ec896b80072633130ffd6e8b31d7cc52"} Jan 29 06:47:53 crc kubenswrapper[5017]: I0129 06:47:53.363912 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jn5qr" podStartSLOduration=2.891905783 podStartE2EDuration="5.363885881s" podCreationTimestamp="2026-01-29 06:47:48 +0000 UTC" firstStartedPulling="2026-01-29 06:47:50.285405914 +0000 UTC m=+756.659853514" lastFinishedPulling="2026-01-29 06:47:52.757386002 +0000 UTC m=+759.131833612" observedRunningTime="2026-01-29 06:47:53.356919337 +0000 UTC m=+759.731367007" watchObservedRunningTime="2026-01-29 06:47:53.363885881 +0000 UTC m=+759.738333491" Jan 29 06:47:56 crc kubenswrapper[5017]: I0129 06:47:56.539203 5017 patch_prober.go:28] interesting pod/machine-config-daemon-895pl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 06:47:56 crc kubenswrapper[5017]: I0129 06:47:56.539852 5017 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 06:47:56 crc kubenswrapper[5017]: I0129 06:47:56.539902 5017 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-895pl" Jan 29 06:47:56 crc kubenswrapper[5017]: I0129 06:47:56.540615 5017 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b2452b80368892b3776d55e2c528464f9c2a090264bafafd3c6ec1fc4c343226"} pod="openshift-machine-config-operator/machine-config-daemon-895pl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 06:47:56 crc kubenswrapper[5017]: I0129 06:47:56.540666 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" containerID="cri-o://b2452b80368892b3776d55e2c528464f9c2a090264bafafd3c6ec1fc4c343226" gracePeriod=600 Jan 29 06:47:56 crc kubenswrapper[5017]: I0129 06:47:56.690352 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-67799b9d-txr78"] Jan 29 06:47:56 crc kubenswrapper[5017]: I0129 06:47:56.691182 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-67799b9d-txr78" Jan 29 06:47:56 crc kubenswrapper[5017]: I0129 06:47:56.694907 5017 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-2bgcd" Jan 29 06:47:56 crc kubenswrapper[5017]: I0129 06:47:56.695497 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 29 06:47:56 crc kubenswrapper[5017]: I0129 06:47:56.697892 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 29 06:47:56 crc kubenswrapper[5017]: I0129 06:47:56.699827 5017 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 29 06:47:56 crc kubenswrapper[5017]: I0129 06:47:56.701770 5017 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 29 06:47:56 crc kubenswrapper[5017]: I0129 06:47:56.725171 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-67799b9d-txr78"] Jan 29 06:47:56 crc kubenswrapper[5017]: I0129 06:47:56.883452 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4q6sl\" (UniqueName: \"kubernetes.io/projected/a0243db3-515f-470c-93fe-a2d3e043962e-kube-api-access-4q6sl\") pod \"metallb-operator-controller-manager-67799b9d-txr78\" (UID: \"a0243db3-515f-470c-93fe-a2d3e043962e\") " pod="metallb-system/metallb-operator-controller-manager-67799b9d-txr78" Jan 29 06:47:56 crc kubenswrapper[5017]: I0129 06:47:56.883587 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a0243db3-515f-470c-93fe-a2d3e043962e-apiservice-cert\") pod \"metallb-operator-controller-manager-67799b9d-txr78\" (UID: \"a0243db3-515f-470c-93fe-a2d3e043962e\") " pod="metallb-system/metallb-operator-controller-manager-67799b9d-txr78" Jan 29 06:47:56 crc kubenswrapper[5017]: I0129 06:47:56.883622 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a0243db3-515f-470c-93fe-a2d3e043962e-webhook-cert\") pod \"metallb-operator-controller-manager-67799b9d-txr78\" (UID: \"a0243db3-515f-470c-93fe-a2d3e043962e\") " pod="metallb-system/metallb-operator-controller-manager-67799b9d-txr78" Jan 29 06:47:56 crc kubenswrapper[5017]: I0129 06:47:56.984849 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a0243db3-515f-470c-93fe-a2d3e043962e-apiservice-cert\") pod \"metallb-operator-controller-manager-67799b9d-txr78\" (UID: \"a0243db3-515f-470c-93fe-a2d3e043962e\") " pod="metallb-system/metallb-operator-controller-manager-67799b9d-txr78" Jan 29 06:47:56 crc kubenswrapper[5017]: I0129 06:47:56.984898 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a0243db3-515f-470c-93fe-a2d3e043962e-webhook-cert\") pod \"metallb-operator-controller-manager-67799b9d-txr78\" (UID: \"a0243db3-515f-470c-93fe-a2d3e043962e\") " pod="metallb-system/metallb-operator-controller-manager-67799b9d-txr78" Jan 29 06:47:56 crc kubenswrapper[5017]: I0129 06:47:56.984954 5017 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-4q6sl\" (UniqueName: \"kubernetes.io/projected/a0243db3-515f-470c-93fe-a2d3e043962e-kube-api-access-4q6sl\") pod \"metallb-operator-controller-manager-67799b9d-txr78\" (UID: \"a0243db3-515f-470c-93fe-a2d3e043962e\") " pod="metallb-system/metallb-operator-controller-manager-67799b9d-txr78" Jan 29 06:47:56 crc kubenswrapper[5017]: I0129 06:47:56.993823 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a0243db3-515f-470c-93fe-a2d3e043962e-apiservice-cert\") pod \"metallb-operator-controller-manager-67799b9d-txr78\" (UID: \"a0243db3-515f-470c-93fe-a2d3e043962e\") " pod="metallb-system/metallb-operator-controller-manager-67799b9d-txr78" Jan 29 06:47:56 crc kubenswrapper[5017]: I0129 06:47:56.993906 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a0243db3-515f-470c-93fe-a2d3e043962e-webhook-cert\") pod \"metallb-operator-controller-manager-67799b9d-txr78\" (UID: \"a0243db3-515f-470c-93fe-a2d3e043962e\") " pod="metallb-system/metallb-operator-controller-manager-67799b9d-txr78" Jan 29 06:47:57 crc kubenswrapper[5017]: I0129 06:47:57.002837 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4q6sl\" (UniqueName: \"kubernetes.io/projected/a0243db3-515f-470c-93fe-a2d3e043962e-kube-api-access-4q6sl\") pod \"metallb-operator-controller-manager-67799b9d-txr78\" (UID: \"a0243db3-515f-470c-93fe-a2d3e043962e\") " pod="metallb-system/metallb-operator-controller-manager-67799b9d-txr78" Jan 29 06:47:57 crc kubenswrapper[5017]: I0129 06:47:57.007380 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-67799b9d-txr78" Jan 29 06:47:57 crc kubenswrapper[5017]: I0129 06:47:57.243266 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-57b844687f-2cx2w"] Jan 29 06:47:57 crc kubenswrapper[5017]: I0129 06:47:57.244439 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-57b844687f-2cx2w" Jan 29 06:47:57 crc kubenswrapper[5017]: I0129 06:47:57.248436 5017 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 29 06:47:57 crc kubenswrapper[5017]: I0129 06:47:57.248459 5017 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-4kdsr" Jan 29 06:47:57 crc kubenswrapper[5017]: I0129 06:47:57.250402 5017 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 29 06:47:57 crc kubenswrapper[5017]: I0129 06:47:57.276818 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-57b844687f-2cx2w"] Jan 29 06:47:57 crc kubenswrapper[5017]: I0129 06:47:57.374508 5017 generic.go:334] "Generic (PLEG): container finished" podID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerID="b2452b80368892b3776d55e2c528464f9c2a090264bafafd3c6ec1fc4c343226" exitCode=0 Jan 29 06:47:57 crc kubenswrapper[5017]: I0129 06:47:57.374593 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-895pl" event={"ID":"2672ef63-7861-4c3d-a1b4-03cc9d18f8e2","Type":"ContainerDied","Data":"b2452b80368892b3776d55e2c528464f9c2a090264bafafd3c6ec1fc4c343226"} Jan 29 06:47:57 crc kubenswrapper[5017]: I0129 06:47:57.374678 5017 scope.go:117] "RemoveContainer" containerID="dabff3371a6e63297fd61484134b61895e9de875b2decf10108c281561fa90e3" Jan 29 06:47:57 crc kubenswrapper[5017]: I0129 06:47:57.390919 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-67799b9d-txr78"] Jan 29 06:47:57 crc kubenswrapper[5017]: I0129 06:47:57.391248 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0714ab86-4883-4185-8f73-167cc7aa1bf0-apiservice-cert\") pod \"metallb-operator-webhook-server-57b844687f-2cx2w\" (UID: \"0714ab86-4883-4185-8f73-167cc7aa1bf0\") " pod="metallb-system/metallb-operator-webhook-server-57b844687f-2cx2w" Jan 29 06:47:57 crc kubenswrapper[5017]: I0129 06:47:57.391319 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0714ab86-4883-4185-8f73-167cc7aa1bf0-webhook-cert\") pod \"metallb-operator-webhook-server-57b844687f-2cx2w\" (UID: \"0714ab86-4883-4185-8f73-167cc7aa1bf0\") " pod="metallb-system/metallb-operator-webhook-server-57b844687f-2cx2w" Jan 29 06:47:57 crc kubenswrapper[5017]: I0129 06:47:57.391442 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tkcw\" (UniqueName: \"kubernetes.io/projected/0714ab86-4883-4185-8f73-167cc7aa1bf0-kube-api-access-8tkcw\") pod \"metallb-operator-webhook-server-57b844687f-2cx2w\" (UID: \"0714ab86-4883-4185-8f73-167cc7aa1bf0\") " pod="metallb-system/metallb-operator-webhook-server-57b844687f-2cx2w" Jan 29 06:47:57 crc kubenswrapper[5017]: W0129 06:47:57.393435 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda0243db3_515f_470c_93fe_a2d3e043962e.slice/crio-35e5fe5f66e35519728eba8045f6f9aaa224eac4d8f4163f8fad0179c46c1e94 WatchSource:0}: Error finding container 
35e5fe5f66e35519728eba8045f6f9aaa224eac4d8f4163f8fad0179c46c1e94: Status 404 returned error can't find the container with id 35e5fe5f66e35519728eba8045f6f9aaa224eac4d8f4163f8fad0179c46c1e94 Jan 29 06:47:57 crc kubenswrapper[5017]: I0129 06:47:57.493155 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8tkcw\" (UniqueName: \"kubernetes.io/projected/0714ab86-4883-4185-8f73-167cc7aa1bf0-kube-api-access-8tkcw\") pod \"metallb-operator-webhook-server-57b844687f-2cx2w\" (UID: \"0714ab86-4883-4185-8f73-167cc7aa1bf0\") " pod="metallb-system/metallb-operator-webhook-server-57b844687f-2cx2w" Jan 29 06:47:57 crc kubenswrapper[5017]: I0129 06:47:57.493241 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0714ab86-4883-4185-8f73-167cc7aa1bf0-apiservice-cert\") pod \"metallb-operator-webhook-server-57b844687f-2cx2w\" (UID: \"0714ab86-4883-4185-8f73-167cc7aa1bf0\") " pod="metallb-system/metallb-operator-webhook-server-57b844687f-2cx2w" Jan 29 06:47:57 crc kubenswrapper[5017]: I0129 06:47:57.493271 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0714ab86-4883-4185-8f73-167cc7aa1bf0-webhook-cert\") pod \"metallb-operator-webhook-server-57b844687f-2cx2w\" (UID: \"0714ab86-4883-4185-8f73-167cc7aa1bf0\") " pod="metallb-system/metallb-operator-webhook-server-57b844687f-2cx2w" Jan 29 06:47:57 crc kubenswrapper[5017]: I0129 06:47:57.500059 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0714ab86-4883-4185-8f73-167cc7aa1bf0-apiservice-cert\") pod \"metallb-operator-webhook-server-57b844687f-2cx2w\" (UID: \"0714ab86-4883-4185-8f73-167cc7aa1bf0\") " pod="metallb-system/metallb-operator-webhook-server-57b844687f-2cx2w" Jan 29 06:47:57 crc kubenswrapper[5017]: I0129 06:47:57.502453 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0714ab86-4883-4185-8f73-167cc7aa1bf0-webhook-cert\") pod \"metallb-operator-webhook-server-57b844687f-2cx2w\" (UID: \"0714ab86-4883-4185-8f73-167cc7aa1bf0\") " pod="metallb-system/metallb-operator-webhook-server-57b844687f-2cx2w" Jan 29 06:47:57 crc kubenswrapper[5017]: I0129 06:47:57.512873 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8tkcw\" (UniqueName: \"kubernetes.io/projected/0714ab86-4883-4185-8f73-167cc7aa1bf0-kube-api-access-8tkcw\") pod \"metallb-operator-webhook-server-57b844687f-2cx2w\" (UID: \"0714ab86-4883-4185-8f73-167cc7aa1bf0\") " pod="metallb-system/metallb-operator-webhook-server-57b844687f-2cx2w" Jan 29 06:47:57 crc kubenswrapper[5017]: I0129 06:47:57.567894 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-57b844687f-2cx2w" Jan 29 06:47:57 crc kubenswrapper[5017]: I0129 06:47:57.814119 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-57b844687f-2cx2w"] Jan 29 06:47:57 crc kubenswrapper[5017]: W0129 06:47:57.860877 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0714ab86_4883_4185_8f73_167cc7aa1bf0.slice/crio-920dfa468da1c281370731a782a3b4f6873e72750b1db123b6d33f15c68fe22b WatchSource:0}: Error finding container 920dfa468da1c281370731a782a3b4f6873e72750b1db123b6d33f15c68fe22b: Status 404 returned error can't find the container with id 920dfa468da1c281370731a782a3b4f6873e72750b1db123b6d33f15c68fe22b Jan 29 06:47:58 crc kubenswrapper[5017]: I0129 06:47:58.381145 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-57b844687f-2cx2w" event={"ID":"0714ab86-4883-4185-8f73-167cc7aa1bf0","Type":"ContainerStarted","Data":"920dfa468da1c281370731a782a3b4f6873e72750b1db123b6d33f15c68fe22b"} Jan 29 06:47:58 crc kubenswrapper[5017]: I0129 06:47:58.383241 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-67799b9d-txr78" event={"ID":"a0243db3-515f-470c-93fe-a2d3e043962e","Type":"ContainerStarted","Data":"35e5fe5f66e35519728eba8045f6f9aaa224eac4d8f4163f8fad0179c46c1e94"} Jan 29 06:47:58 crc kubenswrapper[5017]: I0129 06:47:58.385386 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-895pl" event={"ID":"2672ef63-7861-4c3d-a1b4-03cc9d18f8e2","Type":"ContainerStarted","Data":"8f7da3626486d0c22b65bfd4936f285f08c55d6461ba11e5bddb44e28f11086f"} Jan 29 06:47:59 crc kubenswrapper[5017]: I0129 06:47:59.162809 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jn5qr" Jan 29 06:47:59 crc kubenswrapper[5017]: I0129 06:47:59.162868 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jn5qr" Jan 29 06:48:00 crc kubenswrapper[5017]: I0129 06:48:00.258024 5017 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jn5qr" podUID="f6c53a9d-0b62-4cc8-8bda-53c7cfdaf60d" containerName="registry-server" probeResult="failure" output=< Jan 29 06:48:00 crc kubenswrapper[5017]: timeout: failed to connect service ":50051" within 1s Jan 29 06:48:00 crc kubenswrapper[5017]: > Jan 29 06:48:01 crc kubenswrapper[5017]: I0129 06:48:01.409731 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-67799b9d-txr78" event={"ID":"a0243db3-515f-470c-93fe-a2d3e043962e","Type":"ContainerStarted","Data":"600d707f65e4c6ae1a30a8844f11425ca1b490c3d89a1a4f144cb83dd18ea20e"} Jan 29 06:48:01 crc kubenswrapper[5017]: I0129 06:48:01.410305 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-67799b9d-txr78" Jan 29 06:48:01 crc kubenswrapper[5017]: I0129 06:48:01.434179 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-67799b9d-txr78" podStartSLOduration=1.823851042 podStartE2EDuration="5.434161745s" podCreationTimestamp="2026-01-29 06:47:56 +0000 UTC" firstStartedPulling="2026-01-29 
06:47:57.399456418 +0000 UTC m=+763.773904028" lastFinishedPulling="2026-01-29 06:48:01.009767121 +0000 UTC m=+767.384214731" observedRunningTime="2026-01-29 06:48:01.43100992 +0000 UTC m=+767.805457530" watchObservedRunningTime="2026-01-29 06:48:01.434161745 +0000 UTC m=+767.808609355" Jan 29 06:48:04 crc kubenswrapper[5017]: I0129 06:48:04.435910 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-57b844687f-2cx2w" event={"ID":"0714ab86-4883-4185-8f73-167cc7aa1bf0","Type":"ContainerStarted","Data":"63b208c74b897f26c1ba72b589480bd2e1286e18811bdd30e8a92d3b0ae9aee3"} Jan 29 06:48:04 crc kubenswrapper[5017]: I0129 06:48:04.436921 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-57b844687f-2cx2w" Jan 29 06:48:04 crc kubenswrapper[5017]: I0129 06:48:04.461927 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-57b844687f-2cx2w" podStartSLOduration=1.96315895 podStartE2EDuration="7.46190268s" podCreationTimestamp="2026-01-29 06:47:57 +0000 UTC" firstStartedPulling="2026-01-29 06:47:57.864694401 +0000 UTC m=+764.239142011" lastFinishedPulling="2026-01-29 06:48:03.363438141 +0000 UTC m=+769.737885741" observedRunningTime="2026-01-29 06:48:04.454108668 +0000 UTC m=+770.828556308" watchObservedRunningTime="2026-01-29 06:48:04.46190268 +0000 UTC m=+770.836350290" Jan 29 06:48:09 crc kubenswrapper[5017]: I0129 06:48:09.206007 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jn5qr" Jan 29 06:48:09 crc kubenswrapper[5017]: I0129 06:48:09.255093 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jn5qr" Jan 29 06:48:11 crc kubenswrapper[5017]: I0129 06:48:11.811835 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jn5qr"] Jan 29 06:48:11 crc kubenswrapper[5017]: I0129 06:48:11.812119 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jn5qr" podUID="f6c53a9d-0b62-4cc8-8bda-53c7cfdaf60d" containerName="registry-server" containerID="cri-o://49ef703ad90532ba8554d044dc8504d0ec896b80072633130ffd6e8b31d7cc52" gracePeriod=2 Jan 29 06:48:12 crc kubenswrapper[5017]: I0129 06:48:12.302943 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jn5qr" Jan 29 06:48:12 crc kubenswrapper[5017]: I0129 06:48:12.368281 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6c53a9d-0b62-4cc8-8bda-53c7cfdaf60d-catalog-content\") pod \"f6c53a9d-0b62-4cc8-8bda-53c7cfdaf60d\" (UID: \"f6c53a9d-0b62-4cc8-8bda-53c7cfdaf60d\") " Jan 29 06:48:12 crc kubenswrapper[5017]: I0129 06:48:12.368351 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6c53a9d-0b62-4cc8-8bda-53c7cfdaf60d-utilities\") pod \"f6c53a9d-0b62-4cc8-8bda-53c7cfdaf60d\" (UID: \"f6c53a9d-0b62-4cc8-8bda-53c7cfdaf60d\") " Jan 29 06:48:12 crc kubenswrapper[5017]: I0129 06:48:12.368489 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vh657\" (UniqueName: \"kubernetes.io/projected/f6c53a9d-0b62-4cc8-8bda-53c7cfdaf60d-kube-api-access-vh657\") pod \"f6c53a9d-0b62-4cc8-8bda-53c7cfdaf60d\" (UID: \"f6c53a9d-0b62-4cc8-8bda-53c7cfdaf60d\") " Jan 29 06:48:12 crc kubenswrapper[5017]: I0129 06:48:12.369266 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6c53a9d-0b62-4cc8-8bda-53c7cfdaf60d-utilities" (OuterVolumeSpecName: "utilities") pod "f6c53a9d-0b62-4cc8-8bda-53c7cfdaf60d" (UID: "f6c53a9d-0b62-4cc8-8bda-53c7cfdaf60d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:48:12 crc kubenswrapper[5017]: I0129 06:48:12.387179 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6c53a9d-0b62-4cc8-8bda-53c7cfdaf60d-kube-api-access-vh657" (OuterVolumeSpecName: "kube-api-access-vh657") pod "f6c53a9d-0b62-4cc8-8bda-53c7cfdaf60d" (UID: "f6c53a9d-0b62-4cc8-8bda-53c7cfdaf60d"). InnerVolumeSpecName "kube-api-access-vh657". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:48:12 crc kubenswrapper[5017]: I0129 06:48:12.470080 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vh657\" (UniqueName: \"kubernetes.io/projected/f6c53a9d-0b62-4cc8-8bda-53c7cfdaf60d-kube-api-access-vh657\") on node \"crc\" DevicePath \"\"" Jan 29 06:48:12 crc kubenswrapper[5017]: I0129 06:48:12.470126 5017 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6c53a9d-0b62-4cc8-8bda-53c7cfdaf60d-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 06:48:12 crc kubenswrapper[5017]: I0129 06:48:12.496973 5017 generic.go:334] "Generic (PLEG): container finished" podID="f6c53a9d-0b62-4cc8-8bda-53c7cfdaf60d" containerID="49ef703ad90532ba8554d044dc8504d0ec896b80072633130ffd6e8b31d7cc52" exitCode=0 Jan 29 06:48:12 crc kubenswrapper[5017]: I0129 06:48:12.497033 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jn5qr" event={"ID":"f6c53a9d-0b62-4cc8-8bda-53c7cfdaf60d","Type":"ContainerDied","Data":"49ef703ad90532ba8554d044dc8504d0ec896b80072633130ffd6e8b31d7cc52"} Jan 29 06:48:12 crc kubenswrapper[5017]: I0129 06:48:12.497067 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jn5qr" event={"ID":"f6c53a9d-0b62-4cc8-8bda-53c7cfdaf60d","Type":"ContainerDied","Data":"d974724eb4e3ae3dd404dedb0b35d454dca242f24382103f6a24e6891c82b1a5"} Jan 29 06:48:12 crc kubenswrapper[5017]: I0129 06:48:12.497088 5017 scope.go:117] "RemoveContainer" containerID="49ef703ad90532ba8554d044dc8504d0ec896b80072633130ffd6e8b31d7cc52" Jan 29 06:48:12 crc kubenswrapper[5017]: I0129 06:48:12.497231 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jn5qr" Jan 29 06:48:12 crc kubenswrapper[5017]: I0129 06:48:12.523848 5017 scope.go:117] "RemoveContainer" containerID="c9105c8b98d93494f74c990993656691024ef3ab2e062fb7b16ee1bff37770d6" Jan 29 06:48:12 crc kubenswrapper[5017]: I0129 06:48:12.525316 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6c53a9d-0b62-4cc8-8bda-53c7cfdaf60d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f6c53a9d-0b62-4cc8-8bda-53c7cfdaf60d" (UID: "f6c53a9d-0b62-4cc8-8bda-53c7cfdaf60d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:48:12 crc kubenswrapper[5017]: I0129 06:48:12.562593 5017 scope.go:117] "RemoveContainer" containerID="3073c85290fae6423b0646d0c405ed1037e425046a975ef1b5e4b306cdcac1ff" Jan 29 06:48:12 crc kubenswrapper[5017]: I0129 06:48:12.572184 5017 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6c53a9d-0b62-4cc8-8bda-53c7cfdaf60d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 06:48:12 crc kubenswrapper[5017]: I0129 06:48:12.579637 5017 scope.go:117] "RemoveContainer" containerID="49ef703ad90532ba8554d044dc8504d0ec896b80072633130ffd6e8b31d7cc52" Jan 29 06:48:12 crc kubenswrapper[5017]: E0129 06:48:12.580178 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49ef703ad90532ba8554d044dc8504d0ec896b80072633130ffd6e8b31d7cc52\": container with ID starting with 49ef703ad90532ba8554d044dc8504d0ec896b80072633130ffd6e8b31d7cc52 not found: ID does not exist" containerID="49ef703ad90532ba8554d044dc8504d0ec896b80072633130ffd6e8b31d7cc52" Jan 29 06:48:12 crc kubenswrapper[5017]: I0129 06:48:12.580228 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49ef703ad90532ba8554d044dc8504d0ec896b80072633130ffd6e8b31d7cc52"} err="failed to get container status \"49ef703ad90532ba8554d044dc8504d0ec896b80072633130ffd6e8b31d7cc52\": rpc error: code = NotFound desc = could not find container \"49ef703ad90532ba8554d044dc8504d0ec896b80072633130ffd6e8b31d7cc52\": container with ID starting with 49ef703ad90532ba8554d044dc8504d0ec896b80072633130ffd6e8b31d7cc52 not found: ID does not exist" Jan 29 06:48:12 crc kubenswrapper[5017]: I0129 06:48:12.580267 5017 scope.go:117] "RemoveContainer" containerID="c9105c8b98d93494f74c990993656691024ef3ab2e062fb7b16ee1bff37770d6" Jan 29 06:48:12 crc kubenswrapper[5017]: E0129 06:48:12.580647 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9105c8b98d93494f74c990993656691024ef3ab2e062fb7b16ee1bff37770d6\": container with ID starting with c9105c8b98d93494f74c990993656691024ef3ab2e062fb7b16ee1bff37770d6 not found: ID does not exist" containerID="c9105c8b98d93494f74c990993656691024ef3ab2e062fb7b16ee1bff37770d6" Jan 29 06:48:12 crc kubenswrapper[5017]: I0129 06:48:12.580667 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9105c8b98d93494f74c990993656691024ef3ab2e062fb7b16ee1bff37770d6"} err="failed to get container status \"c9105c8b98d93494f74c990993656691024ef3ab2e062fb7b16ee1bff37770d6\": rpc error: code = NotFound desc = could not find container \"c9105c8b98d93494f74c990993656691024ef3ab2e062fb7b16ee1bff37770d6\": container with ID starting with c9105c8b98d93494f74c990993656691024ef3ab2e062fb7b16ee1bff37770d6 not found: ID does not exist" Jan 29 06:48:12 crc kubenswrapper[5017]: I0129 06:48:12.580679 5017 scope.go:117] "RemoveContainer" containerID="3073c85290fae6423b0646d0c405ed1037e425046a975ef1b5e4b306cdcac1ff" Jan 29 06:48:12 crc kubenswrapper[5017]: E0129 06:48:12.581094 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3073c85290fae6423b0646d0c405ed1037e425046a975ef1b5e4b306cdcac1ff\": container with ID starting with 3073c85290fae6423b0646d0c405ed1037e425046a975ef1b5e4b306cdcac1ff not found: ID does not exist" 
containerID="3073c85290fae6423b0646d0c405ed1037e425046a975ef1b5e4b306cdcac1ff" Jan 29 06:48:12 crc kubenswrapper[5017]: I0129 06:48:12.581123 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3073c85290fae6423b0646d0c405ed1037e425046a975ef1b5e4b306cdcac1ff"} err="failed to get container status \"3073c85290fae6423b0646d0c405ed1037e425046a975ef1b5e4b306cdcac1ff\": rpc error: code = NotFound desc = could not find container \"3073c85290fae6423b0646d0c405ed1037e425046a975ef1b5e4b306cdcac1ff\": container with ID starting with 3073c85290fae6423b0646d0c405ed1037e425046a975ef1b5e4b306cdcac1ff not found: ID does not exist" Jan 29 06:48:12 crc kubenswrapper[5017]: I0129 06:48:12.826519 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jn5qr"] Jan 29 06:48:12 crc kubenswrapper[5017]: I0129 06:48:12.830541 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jn5qr"] Jan 29 06:48:14 crc kubenswrapper[5017]: I0129 06:48:14.343934 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6c53a9d-0b62-4cc8-8bda-53c7cfdaf60d" path="/var/lib/kubelet/pods/f6c53a9d-0b62-4cc8-8bda-53c7cfdaf60d/volumes" Jan 29 06:48:17 crc kubenswrapper[5017]: I0129 06:48:17.572686 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-57b844687f-2cx2w" Jan 29 06:48:37 crc kubenswrapper[5017]: I0129 06:48:37.012188 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-67799b9d-txr78" Jan 29 06:48:37 crc kubenswrapper[5017]: I0129 06:48:37.684106 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-cpp58"] Jan 29 06:48:37 crc kubenswrapper[5017]: E0129 06:48:37.684391 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6c53a9d-0b62-4cc8-8bda-53c7cfdaf60d" containerName="extract-content" Jan 29 06:48:37 crc kubenswrapper[5017]: I0129 06:48:37.684408 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6c53a9d-0b62-4cc8-8bda-53c7cfdaf60d" containerName="extract-content" Jan 29 06:48:37 crc kubenswrapper[5017]: E0129 06:48:37.684418 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6c53a9d-0b62-4cc8-8bda-53c7cfdaf60d" containerName="registry-server" Jan 29 06:48:37 crc kubenswrapper[5017]: I0129 06:48:37.684425 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6c53a9d-0b62-4cc8-8bda-53c7cfdaf60d" containerName="registry-server" Jan 29 06:48:37 crc kubenswrapper[5017]: E0129 06:48:37.684445 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6c53a9d-0b62-4cc8-8bda-53c7cfdaf60d" containerName="extract-utilities" Jan 29 06:48:37 crc kubenswrapper[5017]: I0129 06:48:37.684452 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6c53a9d-0b62-4cc8-8bda-53c7cfdaf60d" containerName="extract-utilities" Jan 29 06:48:37 crc kubenswrapper[5017]: I0129 06:48:37.684574 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6c53a9d-0b62-4cc8-8bda-53c7cfdaf60d" containerName="registry-server" Jan 29 06:48:37 crc kubenswrapper[5017]: I0129 06:48:37.685049 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-cpp58" Jan 29 06:48:37 crc kubenswrapper[5017]: I0129 06:48:37.688211 5017 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 29 06:48:37 crc kubenswrapper[5017]: I0129 06:48:37.691257 5017 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-8zsm5" Jan 29 06:48:37 crc kubenswrapper[5017]: I0129 06:48:37.695525 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-cvz64"] Jan 29 06:48:37 crc kubenswrapper[5017]: I0129 06:48:37.698913 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-cvz64" Jan 29 06:48:37 crc kubenswrapper[5017]: I0129 06:48:37.698980 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-cpp58"] Jan 29 06:48:37 crc kubenswrapper[5017]: I0129 06:48:37.700994 5017 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 29 06:48:37 crc kubenswrapper[5017]: I0129 06:48:37.702505 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 29 06:48:37 crc kubenswrapper[5017]: I0129 06:48:37.762528 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-48rj7"] Jan 29 06:48:37 crc kubenswrapper[5017]: I0129 06:48:37.763550 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-48rj7" Jan 29 06:48:37 crc kubenswrapper[5017]: I0129 06:48:37.765987 5017 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-8zcs5" Jan 29 06:48:37 crc kubenswrapper[5017]: I0129 06:48:37.766246 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 29 06:48:37 crc kubenswrapper[5017]: I0129 06:48:37.766912 5017 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 29 06:48:37 crc kubenswrapper[5017]: I0129 06:48:37.771247 5017 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 29 06:48:37 crc kubenswrapper[5017]: I0129 06:48:37.772685 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/2f35677d-147b-4d27-ac32-ab82b1ec29db-frr-startup\") pod \"frr-k8s-cvz64\" (UID: \"2f35677d-147b-4d27-ac32-ab82b1ec29db\") " pod="metallb-system/frr-k8s-cvz64" Jan 29 06:48:37 crc kubenswrapper[5017]: I0129 06:48:37.772727 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4hg8\" (UniqueName: \"kubernetes.io/projected/2f35677d-147b-4d27-ac32-ab82b1ec29db-kube-api-access-s4hg8\") pod \"frr-k8s-cvz64\" (UID: \"2f35677d-147b-4d27-ac32-ab82b1ec29db\") " pod="metallb-system/frr-k8s-cvz64" Jan 29 06:48:37 crc kubenswrapper[5017]: I0129 06:48:37.772761 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/1a51c751-7496-418f-ad06-10d8db26b0f6-metallb-excludel2\") pod \"speaker-48rj7\" (UID: \"1a51c751-7496-418f-ad06-10d8db26b0f6\") " pod="metallb-system/speaker-48rj7" Jan 29 06:48:37 crc kubenswrapper[5017]: I0129 06:48:37.772794 5017 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2jvd\" (UniqueName: \"kubernetes.io/projected/4b622c7b-c02d-4238-825f-daa2fd5879ca-kube-api-access-d2jvd\") pod \"frr-k8s-webhook-server-7df86c4f6c-cpp58\" (UID: \"4b622c7b-c02d-4238-825f-daa2fd5879ca\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-cpp58" Jan 29 06:48:37 crc kubenswrapper[5017]: I0129 06:48:37.772819 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnckh\" (UniqueName: \"kubernetes.io/projected/1a51c751-7496-418f-ad06-10d8db26b0f6-kube-api-access-lnckh\") pod \"speaker-48rj7\" (UID: \"1a51c751-7496-418f-ad06-10d8db26b0f6\") " pod="metallb-system/speaker-48rj7" Jan 29 06:48:37 crc kubenswrapper[5017]: I0129 06:48:37.772843 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1a51c751-7496-418f-ad06-10d8db26b0f6-metrics-certs\") pod \"speaker-48rj7\" (UID: \"1a51c751-7496-418f-ad06-10d8db26b0f6\") " pod="metallb-system/speaker-48rj7" Jan 29 06:48:37 crc kubenswrapper[5017]: I0129 06:48:37.772881 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/1a51c751-7496-418f-ad06-10d8db26b0f6-memberlist\") pod \"speaker-48rj7\" (UID: \"1a51c751-7496-418f-ad06-10d8db26b0f6\") " pod="metallb-system/speaker-48rj7" Jan 29 06:48:37 crc kubenswrapper[5017]: I0129 06:48:37.772902 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2f35677d-147b-4d27-ac32-ab82b1ec29db-metrics-certs\") pod \"frr-k8s-cvz64\" (UID: \"2f35677d-147b-4d27-ac32-ab82b1ec29db\") " pod="metallb-system/frr-k8s-cvz64" Jan 29 06:48:37 crc kubenswrapper[5017]: I0129 06:48:37.772935 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/2f35677d-147b-4d27-ac32-ab82b1ec29db-metrics\") pod \"frr-k8s-cvz64\" (UID: \"2f35677d-147b-4d27-ac32-ab82b1ec29db\") " pod="metallb-system/frr-k8s-cvz64" Jan 29 06:48:37 crc kubenswrapper[5017]: I0129 06:48:37.772980 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4b622c7b-c02d-4238-825f-daa2fd5879ca-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-cpp58\" (UID: \"4b622c7b-c02d-4238-825f-daa2fd5879ca\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-cpp58" Jan 29 06:48:37 crc kubenswrapper[5017]: I0129 06:48:37.772996 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/2f35677d-147b-4d27-ac32-ab82b1ec29db-frr-sockets\") pod \"frr-k8s-cvz64\" (UID: \"2f35677d-147b-4d27-ac32-ab82b1ec29db\") " pod="metallb-system/frr-k8s-cvz64" Jan 29 06:48:37 crc kubenswrapper[5017]: I0129 06:48:37.773010 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/2f35677d-147b-4d27-ac32-ab82b1ec29db-frr-conf\") pod \"frr-k8s-cvz64\" (UID: \"2f35677d-147b-4d27-ac32-ab82b1ec29db\") " pod="metallb-system/frr-k8s-cvz64" Jan 29 06:48:37 crc kubenswrapper[5017]: I0129 06:48:37.773025 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/2f35677d-147b-4d27-ac32-ab82b1ec29db-reloader\") pod \"frr-k8s-cvz64\" (UID: \"2f35677d-147b-4d27-ac32-ab82b1ec29db\") " pod="metallb-system/frr-k8s-cvz64" Jan 29 06:48:37 crc kubenswrapper[5017]: I0129 06:48:37.797632 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-8vxt7"] Jan 29 06:48:37 crc kubenswrapper[5017]: I0129 06:48:37.798905 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-8vxt7" Jan 29 06:48:37 crc kubenswrapper[5017]: I0129 06:48:37.800947 5017 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 29 06:48:37 crc kubenswrapper[5017]: I0129 06:48:37.817874 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-8vxt7"] Jan 29 06:48:37 crc kubenswrapper[5017]: I0129 06:48:37.874058 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1a51c751-7496-418f-ad06-10d8db26b0f6-metrics-certs\") pod \"speaker-48rj7\" (UID: \"1a51c751-7496-418f-ad06-10d8db26b0f6\") " pod="metallb-system/speaker-48rj7" Jan 29 06:48:37 crc kubenswrapper[5017]: I0129 06:48:37.874132 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/1a51c751-7496-418f-ad06-10d8db26b0f6-memberlist\") pod \"speaker-48rj7\" (UID: \"1a51c751-7496-418f-ad06-10d8db26b0f6\") " pod="metallb-system/speaker-48rj7" Jan 29 06:48:37 crc kubenswrapper[5017]: E0129 06:48:37.874250 5017 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 29 06:48:37 crc kubenswrapper[5017]: E0129 06:48:37.874295 5017 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Jan 29 06:48:37 crc kubenswrapper[5017]: E0129 06:48:37.874386 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1a51c751-7496-418f-ad06-10d8db26b0f6-metrics-certs podName:1a51c751-7496-418f-ad06-10d8db26b0f6 nodeName:}" failed. No retries permitted until 2026-01-29 06:48:38.374363078 +0000 UTC m=+804.748810688 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1a51c751-7496-418f-ad06-10d8db26b0f6-metrics-certs") pod "speaker-48rj7" (UID: "1a51c751-7496-418f-ad06-10d8db26b0f6") : secret "speaker-certs-secret" not found Jan 29 06:48:37 crc kubenswrapper[5017]: E0129 06:48:37.874535 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1a51c751-7496-418f-ad06-10d8db26b0f6-memberlist podName:1a51c751-7496-418f-ad06-10d8db26b0f6 nodeName:}" failed. No retries permitted until 2026-01-29 06:48:38.374507901 +0000 UTC m=+804.748955701 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/1a51c751-7496-418f-ad06-10d8db26b0f6-memberlist") pod "speaker-48rj7" (UID: "1a51c751-7496-418f-ad06-10d8db26b0f6") : secret "metallb-memberlist" not found Jan 29 06:48:37 crc kubenswrapper[5017]: I0129 06:48:37.874663 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2f35677d-147b-4d27-ac32-ab82b1ec29db-metrics-certs\") pod \"frr-k8s-cvz64\" (UID: \"2f35677d-147b-4d27-ac32-ab82b1ec29db\") " pod="metallb-system/frr-k8s-cvz64" Jan 29 06:48:37 crc kubenswrapper[5017]: I0129 06:48:37.874720 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/2f35677d-147b-4d27-ac32-ab82b1ec29db-metrics\") pod \"frr-k8s-cvz64\" (UID: \"2f35677d-147b-4d27-ac32-ab82b1ec29db\") " pod="metallb-system/frr-k8s-cvz64" Jan 29 06:48:37 crc kubenswrapper[5017]: E0129 06:48:37.874740 5017 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Jan 29 06:48:37 crc kubenswrapper[5017]: I0129 06:48:37.874766 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/2f35677d-147b-4d27-ac32-ab82b1ec29db-frr-sockets\") pod \"frr-k8s-cvz64\" (UID: \"2f35677d-147b-4d27-ac32-ab82b1ec29db\") " pod="metallb-system/frr-k8s-cvz64" Jan 29 06:48:37 crc kubenswrapper[5017]: E0129 06:48:37.874780 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2f35677d-147b-4d27-ac32-ab82b1ec29db-metrics-certs podName:2f35677d-147b-4d27-ac32-ab82b1ec29db nodeName:}" failed. No retries permitted until 2026-01-29 06:48:38.374770147 +0000 UTC m=+804.749217917 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2f35677d-147b-4d27-ac32-ab82b1ec29db-metrics-certs") pod "frr-k8s-cvz64" (UID: "2f35677d-147b-4d27-ac32-ab82b1ec29db") : secret "frr-k8s-certs-secret" not found Jan 29 06:48:37 crc kubenswrapper[5017]: I0129 06:48:37.874801 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4b622c7b-c02d-4238-825f-daa2fd5879ca-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-cpp58\" (UID: \"4b622c7b-c02d-4238-825f-daa2fd5879ca\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-cpp58" Jan 29 06:48:37 crc kubenswrapper[5017]: I0129 06:48:37.874826 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/2f35677d-147b-4d27-ac32-ab82b1ec29db-frr-conf\") pod \"frr-k8s-cvz64\" (UID: \"2f35677d-147b-4d27-ac32-ab82b1ec29db\") " pod="metallb-system/frr-k8s-cvz64" Jan 29 06:48:37 crc kubenswrapper[5017]: I0129 06:48:37.874844 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/2f35677d-147b-4d27-ac32-ab82b1ec29db-reloader\") pod \"frr-k8s-cvz64\" (UID: \"2f35677d-147b-4d27-ac32-ab82b1ec29db\") " pod="metallb-system/frr-k8s-cvz64" Jan 29 06:48:37 crc kubenswrapper[5017]: I0129 06:48:37.874876 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/2f35677d-147b-4d27-ac32-ab82b1ec29db-frr-startup\") pod \"frr-k8s-cvz64\" (UID: \"2f35677d-147b-4d27-ac32-ab82b1ec29db\") " pod="metallb-system/frr-k8s-cvz64" Jan 29 06:48:37 crc kubenswrapper[5017]: I0129 06:48:37.874896 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4hg8\" (UniqueName: \"kubernetes.io/projected/2f35677d-147b-4d27-ac32-ab82b1ec29db-kube-api-access-s4hg8\") pod \"frr-k8s-cvz64\" (UID: \"2f35677d-147b-4d27-ac32-ab82b1ec29db\") " pod="metallb-system/frr-k8s-cvz64" Jan 29 06:48:37 crc kubenswrapper[5017]: I0129 06:48:37.874925 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/1a51c751-7496-418f-ad06-10d8db26b0f6-metallb-excludel2\") pod \"speaker-48rj7\" (UID: \"1a51c751-7496-418f-ad06-10d8db26b0f6\") " pod="metallb-system/speaker-48rj7" Jan 29 06:48:37 crc kubenswrapper[5017]: I0129 06:48:37.874969 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2jvd\" (UniqueName: \"kubernetes.io/projected/4b622c7b-c02d-4238-825f-daa2fd5879ca-kube-api-access-d2jvd\") pod \"frr-k8s-webhook-server-7df86c4f6c-cpp58\" (UID: \"4b622c7b-c02d-4238-825f-daa2fd5879ca\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-cpp58" Jan 29 06:48:37 crc kubenswrapper[5017]: I0129 06:48:37.875000 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnckh\" (UniqueName: \"kubernetes.io/projected/1a51c751-7496-418f-ad06-10d8db26b0f6-kube-api-access-lnckh\") pod \"speaker-48rj7\" (UID: \"1a51c751-7496-418f-ad06-10d8db26b0f6\") " pod="metallb-system/speaker-48rj7" Jan 29 06:48:37 crc kubenswrapper[5017]: I0129 06:48:37.875424 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/2f35677d-147b-4d27-ac32-ab82b1ec29db-reloader\") pod \"frr-k8s-cvz64\" (UID: 
\"2f35677d-147b-4d27-ac32-ab82b1ec29db\") " pod="metallb-system/frr-k8s-cvz64" Jan 29 06:48:37 crc kubenswrapper[5017]: I0129 06:48:37.875753 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/2f35677d-147b-4d27-ac32-ab82b1ec29db-frr-sockets\") pod \"frr-k8s-cvz64\" (UID: \"2f35677d-147b-4d27-ac32-ab82b1ec29db\") " pod="metallb-system/frr-k8s-cvz64" Jan 29 06:48:37 crc kubenswrapper[5017]: I0129 06:48:37.876005 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/2f35677d-147b-4d27-ac32-ab82b1ec29db-metrics\") pod \"frr-k8s-cvz64\" (UID: \"2f35677d-147b-4d27-ac32-ab82b1ec29db\") " pod="metallb-system/frr-k8s-cvz64" Jan 29 06:48:37 crc kubenswrapper[5017]: I0129 06:48:37.876314 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/2f35677d-147b-4d27-ac32-ab82b1ec29db-frr-conf\") pod \"frr-k8s-cvz64\" (UID: \"2f35677d-147b-4d27-ac32-ab82b1ec29db\") " pod="metallb-system/frr-k8s-cvz64" Jan 29 06:48:37 crc kubenswrapper[5017]: I0129 06:48:37.876398 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/2f35677d-147b-4d27-ac32-ab82b1ec29db-frr-startup\") pod \"frr-k8s-cvz64\" (UID: \"2f35677d-147b-4d27-ac32-ab82b1ec29db\") " pod="metallb-system/frr-k8s-cvz64" Jan 29 06:48:37 crc kubenswrapper[5017]: I0129 06:48:37.876975 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/1a51c751-7496-418f-ad06-10d8db26b0f6-metallb-excludel2\") pod \"speaker-48rj7\" (UID: \"1a51c751-7496-418f-ad06-10d8db26b0f6\") " pod="metallb-system/speaker-48rj7" Jan 29 06:48:37 crc kubenswrapper[5017]: I0129 06:48:37.886949 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4b622c7b-c02d-4238-825f-daa2fd5879ca-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-cpp58\" (UID: \"4b622c7b-c02d-4238-825f-daa2fd5879ca\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-cpp58" Jan 29 06:48:37 crc kubenswrapper[5017]: I0129 06:48:37.895546 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnckh\" (UniqueName: \"kubernetes.io/projected/1a51c751-7496-418f-ad06-10d8db26b0f6-kube-api-access-lnckh\") pod \"speaker-48rj7\" (UID: \"1a51c751-7496-418f-ad06-10d8db26b0f6\") " pod="metallb-system/speaker-48rj7" Jan 29 06:48:37 crc kubenswrapper[5017]: I0129 06:48:37.915536 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4hg8\" (UniqueName: \"kubernetes.io/projected/2f35677d-147b-4d27-ac32-ab82b1ec29db-kube-api-access-s4hg8\") pod \"frr-k8s-cvz64\" (UID: \"2f35677d-147b-4d27-ac32-ab82b1ec29db\") " pod="metallb-system/frr-k8s-cvz64" Jan 29 06:48:37 crc kubenswrapper[5017]: I0129 06:48:37.915804 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2jvd\" (UniqueName: \"kubernetes.io/projected/4b622c7b-c02d-4238-825f-daa2fd5879ca-kube-api-access-d2jvd\") pod \"frr-k8s-webhook-server-7df86c4f6c-cpp58\" (UID: \"4b622c7b-c02d-4238-825f-daa2fd5879ca\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-cpp58" Jan 29 06:48:37 crc kubenswrapper[5017]: I0129 06:48:37.976058 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-mhdt9\" (UniqueName: \"kubernetes.io/projected/632ce719-ce24-4b7c-855b-1b348732dc19-kube-api-access-mhdt9\") pod \"controller-6968d8fdc4-8vxt7\" (UID: \"632ce719-ce24-4b7c-855b-1b348732dc19\") " pod="metallb-system/controller-6968d8fdc4-8vxt7" Jan 29 06:48:37 crc kubenswrapper[5017]: I0129 06:48:37.976171 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/632ce719-ce24-4b7c-855b-1b348732dc19-metrics-certs\") pod \"controller-6968d8fdc4-8vxt7\" (UID: \"632ce719-ce24-4b7c-855b-1b348732dc19\") " pod="metallb-system/controller-6968d8fdc4-8vxt7" Jan 29 06:48:37 crc kubenswrapper[5017]: I0129 06:48:37.976376 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/632ce719-ce24-4b7c-855b-1b348732dc19-cert\") pod \"controller-6968d8fdc4-8vxt7\" (UID: \"632ce719-ce24-4b7c-855b-1b348732dc19\") " pod="metallb-system/controller-6968d8fdc4-8vxt7" Jan 29 06:48:38 crc kubenswrapper[5017]: I0129 06:48:38.008312 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-cpp58" Jan 29 06:48:38 crc kubenswrapper[5017]: I0129 06:48:38.078371 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhdt9\" (UniqueName: \"kubernetes.io/projected/632ce719-ce24-4b7c-855b-1b348732dc19-kube-api-access-mhdt9\") pod \"controller-6968d8fdc4-8vxt7\" (UID: \"632ce719-ce24-4b7c-855b-1b348732dc19\") " pod="metallb-system/controller-6968d8fdc4-8vxt7" Jan 29 06:48:38 crc kubenswrapper[5017]: I0129 06:48:38.078930 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/632ce719-ce24-4b7c-855b-1b348732dc19-metrics-certs\") pod \"controller-6968d8fdc4-8vxt7\" (UID: \"632ce719-ce24-4b7c-855b-1b348732dc19\") " pod="metallb-system/controller-6968d8fdc4-8vxt7" Jan 29 06:48:38 crc kubenswrapper[5017]: I0129 06:48:38.079002 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/632ce719-ce24-4b7c-855b-1b348732dc19-cert\") pod \"controller-6968d8fdc4-8vxt7\" (UID: \"632ce719-ce24-4b7c-855b-1b348732dc19\") " pod="metallb-system/controller-6968d8fdc4-8vxt7" Jan 29 06:48:38 crc kubenswrapper[5017]: I0129 06:48:38.084687 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/632ce719-ce24-4b7c-855b-1b348732dc19-metrics-certs\") pod \"controller-6968d8fdc4-8vxt7\" (UID: \"632ce719-ce24-4b7c-855b-1b348732dc19\") " pod="metallb-system/controller-6968d8fdc4-8vxt7" Jan 29 06:48:38 crc kubenswrapper[5017]: I0129 06:48:38.089344 5017 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 29 06:48:38 crc kubenswrapper[5017]: I0129 06:48:38.104680 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/632ce719-ce24-4b7c-855b-1b348732dc19-cert\") pod \"controller-6968d8fdc4-8vxt7\" (UID: \"632ce719-ce24-4b7c-855b-1b348732dc19\") " pod="metallb-system/controller-6968d8fdc4-8vxt7" Jan 29 06:48:38 crc kubenswrapper[5017]: I0129 06:48:38.120735 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhdt9\" (UniqueName: 
\"kubernetes.io/projected/632ce719-ce24-4b7c-855b-1b348732dc19-kube-api-access-mhdt9\") pod \"controller-6968d8fdc4-8vxt7\" (UID: \"632ce719-ce24-4b7c-855b-1b348732dc19\") " pod="metallb-system/controller-6968d8fdc4-8vxt7" Jan 29 06:48:38 crc kubenswrapper[5017]: I0129 06:48:38.414727 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-8vxt7" Jan 29 06:48:38 crc kubenswrapper[5017]: E0129 06:48:38.448100 5017 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Jan 29 06:48:38 crc kubenswrapper[5017]: E0129 06:48:38.448215 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1a51c751-7496-418f-ad06-10d8db26b0f6-metrics-certs podName:1a51c751-7496-418f-ad06-10d8db26b0f6 nodeName:}" failed. No retries permitted until 2026-01-29 06:48:39.44817943 +0000 UTC m=+805.822627040 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1a51c751-7496-418f-ad06-10d8db26b0f6-metrics-certs") pod "speaker-48rj7" (UID: "1a51c751-7496-418f-ad06-10d8db26b0f6") : secret "speaker-certs-secret" not found Jan 29 06:48:38 crc kubenswrapper[5017]: I0129 06:48:38.448909 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1a51c751-7496-418f-ad06-10d8db26b0f6-metrics-certs\") pod \"speaker-48rj7\" (UID: \"1a51c751-7496-418f-ad06-10d8db26b0f6\") " pod="metallb-system/speaker-48rj7" Jan 29 06:48:38 crc kubenswrapper[5017]: I0129 06:48:38.449001 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/1a51c751-7496-418f-ad06-10d8db26b0f6-memberlist\") pod \"speaker-48rj7\" (UID: \"1a51c751-7496-418f-ad06-10d8db26b0f6\") " pod="metallb-system/speaker-48rj7" Jan 29 06:48:38 crc kubenswrapper[5017]: I0129 06:48:38.449042 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2f35677d-147b-4d27-ac32-ab82b1ec29db-metrics-certs\") pod \"frr-k8s-cvz64\" (UID: \"2f35677d-147b-4d27-ac32-ab82b1ec29db\") " pod="metallb-system/frr-k8s-cvz64" Jan 29 06:48:38 crc kubenswrapper[5017]: E0129 06:48:38.449167 5017 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 29 06:48:38 crc kubenswrapper[5017]: E0129 06:48:38.449221 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1a51c751-7496-418f-ad06-10d8db26b0f6-memberlist podName:1a51c751-7496-418f-ad06-10d8db26b0f6 nodeName:}" failed. No retries permitted until 2026-01-29 06:48:39.449204804 +0000 UTC m=+805.823652414 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/1a51c751-7496-418f-ad06-10d8db26b0f6-memberlist") pod "speaker-48rj7" (UID: "1a51c751-7496-418f-ad06-10d8db26b0f6") : secret "metallb-memberlist" not found Jan 29 06:48:38 crc kubenswrapper[5017]: I0129 06:48:38.456544 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2f35677d-147b-4d27-ac32-ab82b1ec29db-metrics-certs\") pod \"frr-k8s-cvz64\" (UID: \"2f35677d-147b-4d27-ac32-ab82b1ec29db\") " pod="metallb-system/frr-k8s-cvz64" Jan 29 06:48:38 crc kubenswrapper[5017]: I0129 06:48:38.556189 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-cpp58"] Jan 29 06:48:38 crc kubenswrapper[5017]: W0129 06:48:38.564072 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4b622c7b_c02d_4238_825f_daa2fd5879ca.slice/crio-dba7c103091f97c73553fd988d04a2239c67baf14fea943a68b1c7b7fb1f8c10 WatchSource:0}: Error finding container dba7c103091f97c73553fd988d04a2239c67baf14fea943a68b1c7b7fb1f8c10: Status 404 returned error can't find the container with id dba7c103091f97c73553fd988d04a2239c67baf14fea943a68b1c7b7fb1f8c10 Jan 29 06:48:38 crc kubenswrapper[5017]: I0129 06:48:38.630390 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-cvz64" Jan 29 06:48:38 crc kubenswrapper[5017]: I0129 06:48:38.642844 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-8vxt7"] Jan 29 06:48:38 crc kubenswrapper[5017]: W0129 06:48:38.647891 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod632ce719_ce24_4b7c_855b_1b348732dc19.slice/crio-2d5fe5275004a50b30c4d4a7b03e50f556a49589847154d172336e436297d848 WatchSource:0}: Error finding container 2d5fe5275004a50b30c4d4a7b03e50f556a49589847154d172336e436297d848: Status 404 returned error can't find the container with id 2d5fe5275004a50b30c4d4a7b03e50f556a49589847154d172336e436297d848 Jan 29 06:48:38 crc kubenswrapper[5017]: I0129 06:48:38.692200 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-cpp58" event={"ID":"4b622c7b-c02d-4238-825f-daa2fd5879ca","Type":"ContainerStarted","Data":"dba7c103091f97c73553fd988d04a2239c67baf14fea943a68b1c7b7fb1f8c10"} Jan 29 06:48:38 crc kubenswrapper[5017]: I0129 06:48:38.694285 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-8vxt7" event={"ID":"632ce719-ce24-4b7c-855b-1b348732dc19","Type":"ContainerStarted","Data":"2d5fe5275004a50b30c4d4a7b03e50f556a49589847154d172336e436297d848"} Jan 29 06:48:39 crc kubenswrapper[5017]: I0129 06:48:39.467846 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1a51c751-7496-418f-ad06-10d8db26b0f6-metrics-certs\") pod \"speaker-48rj7\" (UID: \"1a51c751-7496-418f-ad06-10d8db26b0f6\") " pod="metallb-system/speaker-48rj7" Jan 29 06:48:39 crc kubenswrapper[5017]: I0129 06:48:39.468507 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/1a51c751-7496-418f-ad06-10d8db26b0f6-memberlist\") pod \"speaker-48rj7\" (UID: \"1a51c751-7496-418f-ad06-10d8db26b0f6\") " 
pod="metallb-system/speaker-48rj7" Jan 29 06:48:39 crc kubenswrapper[5017]: I0129 06:48:39.474727 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1a51c751-7496-418f-ad06-10d8db26b0f6-metrics-certs\") pod \"speaker-48rj7\" (UID: \"1a51c751-7496-418f-ad06-10d8db26b0f6\") " pod="metallb-system/speaker-48rj7" Jan 29 06:48:39 crc kubenswrapper[5017]: I0129 06:48:39.475848 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/1a51c751-7496-418f-ad06-10d8db26b0f6-memberlist\") pod \"speaker-48rj7\" (UID: \"1a51c751-7496-418f-ad06-10d8db26b0f6\") " pod="metallb-system/speaker-48rj7" Jan 29 06:48:39 crc kubenswrapper[5017]: I0129 06:48:39.580003 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-48rj7" Jan 29 06:48:39 crc kubenswrapper[5017]: I0129 06:48:39.703732 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-8vxt7" event={"ID":"632ce719-ce24-4b7c-855b-1b348732dc19","Type":"ContainerStarted","Data":"743eeae3ccf18559dccad5066cf6113faf73c00578067153a96470ea794eca35"} Jan 29 06:48:39 crc kubenswrapper[5017]: I0129 06:48:39.703791 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-8vxt7" event={"ID":"632ce719-ce24-4b7c-855b-1b348732dc19","Type":"ContainerStarted","Data":"ec42c7260cda8c7ffb97aa2e9a74f7dcdbaa0e3efcc3cd2b2b94104169c746f7"} Jan 29 06:48:39 crc kubenswrapper[5017]: I0129 06:48:39.703916 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-8vxt7" Jan 29 06:48:39 crc kubenswrapper[5017]: I0129 06:48:39.705988 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-cvz64" event={"ID":"2f35677d-147b-4d27-ac32-ab82b1ec29db","Type":"ContainerStarted","Data":"8670e6c691acf14640b832e5793a883bf1e721986203dd23a67ffd4bdec97639"} Jan 29 06:48:39 crc kubenswrapper[5017]: I0129 06:48:39.707265 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-48rj7" event={"ID":"1a51c751-7496-418f-ad06-10d8db26b0f6","Type":"ContainerStarted","Data":"fe23a51e6c315e2a92295c3989c2eff1305033e788fce2526d8b4d28fb947c8c"} Jan 29 06:48:39 crc kubenswrapper[5017]: I0129 06:48:39.721577 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-8vxt7" podStartSLOduration=2.721557896 podStartE2EDuration="2.721557896s" podCreationTimestamp="2026-01-29 06:48:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:48:39.720881151 +0000 UTC m=+806.095328781" watchObservedRunningTime="2026-01-29 06:48:39.721557896 +0000 UTC m=+806.096005506" Jan 29 06:48:40 crc kubenswrapper[5017]: I0129 06:48:40.719785 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-48rj7" event={"ID":"1a51c751-7496-418f-ad06-10d8db26b0f6","Type":"ContainerStarted","Data":"7ab22a296dfc24b9acd506d77ef3fba3c1a0ad5930e89e7e6be78cd7a35c2a2b"} Jan 29 06:48:40 crc kubenswrapper[5017]: I0129 06:48:40.720185 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-48rj7" event={"ID":"1a51c751-7496-418f-ad06-10d8db26b0f6","Type":"ContainerStarted","Data":"652fb095a9f06148f44f5030a3fd92187c59a0bf126e5697e139d5c00d9c1110"} Jan 29 06:48:40 crc 
Jan 29 06:48:40 crc kubenswrapper[5017]: I0129 06:48:40.740339 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-48rj7" podStartSLOduration=3.740320315 podStartE2EDuration="3.740320315s" podCreationTimestamp="2026-01-29 06:48:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:48:40.739669679 +0000 UTC m=+807.114117289" watchObservedRunningTime="2026-01-29 06:48:40.740320315 +0000 UTC m=+807.114767925"
Jan 29 06:48:41 crc kubenswrapper[5017]: I0129 06:48:41.731833 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-48rj7"
Jan 29 06:48:46 crc kubenswrapper[5017]: I0129 06:48:46.782599 5017 generic.go:334] "Generic (PLEG): container finished" podID="2f35677d-147b-4d27-ac32-ab82b1ec29db" containerID="1bee2800c52e75f8274b4b052c92fc5635e411316fead1e4eb349bcc960da094" exitCode=0
Jan 29 06:48:46 crc kubenswrapper[5017]: I0129 06:48:46.782737 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-cvz64" event={"ID":"2f35677d-147b-4d27-ac32-ab82b1ec29db","Type":"ContainerDied","Data":"1bee2800c52e75f8274b4b052c92fc5635e411316fead1e4eb349bcc960da094"}
Jan 29 06:48:46 crc kubenswrapper[5017]: I0129 06:48:46.786223 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-cpp58" event={"ID":"4b622c7b-c02d-4238-825f-daa2fd5879ca","Type":"ContainerStarted","Data":"db9dbf19e08b88f5c28df2d9da2b525073014f3d24fa0dc62353f15e2dd2bce2"}
Jan 29 06:48:46 crc kubenswrapper[5017]: I0129 06:48:46.786499 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-cpp58"
Jan 29 06:48:47 crc kubenswrapper[5017]: I0129 06:48:47.800320 5017 generic.go:334] "Generic (PLEG): container finished" podID="2f35677d-147b-4d27-ac32-ab82b1ec29db" containerID="6a384a1a35b6accea049b14246285056f79ea5a21e6048a23f6a4b798dddcc97" exitCode=0
Jan 29 06:48:47 crc kubenswrapper[5017]: I0129 06:48:47.802455 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-cvz64" event={"ID":"2f35677d-147b-4d27-ac32-ab82b1ec29db","Type":"ContainerDied","Data":"6a384a1a35b6accea049b14246285056f79ea5a21e6048a23f6a4b798dddcc97"}
Jan 29 06:48:47 crc kubenswrapper[5017]: I0129 06:48:47.859295 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-cpp58" podStartSLOduration=2.88660512 podStartE2EDuration="10.859264993s" podCreationTimestamp="2026-01-29 06:48:37 +0000 UTC" firstStartedPulling="2026-01-29 06:48:38.566488037 +0000 UTC m=+804.940935647" lastFinishedPulling="2026-01-29 06:48:46.53914788 +0000 UTC m=+812.913595520" observedRunningTime="2026-01-29 06:48:46.832694642 +0000 UTC m=+813.207142262" watchObservedRunningTime="2026-01-29 06:48:47.859264993 +0000 UTC m=+814.233712633"
Jan 29 06:48:48 crc kubenswrapper[5017]: I0129 06:48:48.421718 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-8vxt7"
Jan 29 06:48:48 crc kubenswrapper[5017]: I0129 06:48:48.810498 5017 generic.go:334] "Generic (PLEG): container finished" podID="2f35677d-147b-4d27-ac32-ab82b1ec29db" containerID="76dd0c8c8a81112e3a4acc46a232acb3b6d5995238faefcff872c9e736b8a0a6" exitCode=0
Jan 29 06:48:48 crc kubenswrapper[5017]: I0129 06:48:48.810580 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-cvz64" event={"ID":"2f35677d-147b-4d27-ac32-ab82b1ec29db","Type":"ContainerDied","Data":"76dd0c8c8a81112e3a4acc46a232acb3b6d5995238faefcff872c9e736b8a0a6"}
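The frr-k8s-webhook-server latency entry above also shows how image pulls are accounted: podStartSLOduration is the end-to-end duration minus the pull window. Using the monotonic offsets (the m=+... values) from that entry, 10.859264993s - (812.913595520 - 804.940935647) = 2.88660512s, exactly the logged SLO figure. The arithmetic, with the numbers copied from the log:

    package main

    import "fmt"

    func main() {
        // Monotonic-clock offsets from the frr-k8s-webhook-server entry.
        const (
            firstStartedPulling = 804.940935647 // m=+ offset, seconds
            lastFinishedPulling = 812.913595520
            podStartE2E         = 10.859264993 // watchObservedRunningTime - podCreationTimestamp
        )
        pull := lastFinishedPulling - firstStartedPulling
        fmt.Printf("pull window %.9fs, SLO duration %.9fs\n", pull, podStartE2E-pull)
        // pull window 7.972659873s, SLO duration 2.886605120s (log: podStartSLOduration=2.88660512)
    }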
pod="metallb-system/frr-k8s-cvz64" event={"ID":"2f35677d-147b-4d27-ac32-ab82b1ec29db","Type":"ContainerDied","Data":"76dd0c8c8a81112e3a4acc46a232acb3b6d5995238faefcff872c9e736b8a0a6"} Jan 29 06:48:49 crc kubenswrapper[5017]: I0129 06:48:49.585159 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-48rj7" Jan 29 06:48:49 crc kubenswrapper[5017]: I0129 06:48:49.827908 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-cvz64" event={"ID":"2f35677d-147b-4d27-ac32-ab82b1ec29db","Type":"ContainerStarted","Data":"f08a949746883af3b92c1ad787bf411e66c3df1e8a079c7d70d588619d43f88e"} Jan 29 06:48:49 crc kubenswrapper[5017]: I0129 06:48:49.827999 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-cvz64" event={"ID":"2f35677d-147b-4d27-ac32-ab82b1ec29db","Type":"ContainerStarted","Data":"2bbbf571fbc626c81a838ad5468d5506baacd44cbe7b5bd36718a48f59376186"} Jan 29 06:48:49 crc kubenswrapper[5017]: I0129 06:48:49.828016 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-cvz64" event={"ID":"2f35677d-147b-4d27-ac32-ab82b1ec29db","Type":"ContainerStarted","Data":"64e80761f780da857b225151e48fd97ab4793c455158643ef368d341ef620cf1"} Jan 29 06:48:49 crc kubenswrapper[5017]: I0129 06:48:49.828029 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-cvz64" event={"ID":"2f35677d-147b-4d27-ac32-ab82b1ec29db","Type":"ContainerStarted","Data":"8862d15863c0cffe527baabe5e65c55dd50d268ddf5fd56eddfceb96fedbbdeb"} Jan 29 06:48:49 crc kubenswrapper[5017]: I0129 06:48:49.828041 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-cvz64" event={"ID":"2f35677d-147b-4d27-ac32-ab82b1ec29db","Type":"ContainerStarted","Data":"cce5986eec35bab61dbba7890eadc129e430f7186c65e54189b23d3e2215f2bc"} Jan 29 06:48:50 crc kubenswrapper[5017]: I0129 06:48:50.838310 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-cvz64" event={"ID":"2f35677d-147b-4d27-ac32-ab82b1ec29db","Type":"ContainerStarted","Data":"23ebf1d41b57374b7ede4c99dd94f159923779cc3505bf2d616a093753656354"} Jan 29 06:48:50 crc kubenswrapper[5017]: I0129 06:48:50.838785 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-cvz64" Jan 29 06:48:50 crc kubenswrapper[5017]: I0129 06:48:50.863761 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-cvz64" podStartSLOduration=6.076989726 podStartE2EDuration="13.863736703s" podCreationTimestamp="2026-01-29 06:48:37 +0000 UTC" firstStartedPulling="2026-01-29 06:48:38.756640912 +0000 UTC m=+805.131088522" lastFinishedPulling="2026-01-29 06:48:46.543387859 +0000 UTC m=+812.917835499" observedRunningTime="2026-01-29 06:48:50.858780196 +0000 UTC m=+817.233227816" watchObservedRunningTime="2026-01-29 06:48:50.863736703 +0000 UTC m=+817.238184323" Jan 29 06:48:51 crc kubenswrapper[5017]: I0129 06:48:51.139427 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e59t988"] Jan 29 06:48:51 crc kubenswrapper[5017]: I0129 06:48:51.140724 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e59t988" Jan 29 06:48:51 crc kubenswrapper[5017]: I0129 06:48:51.144701 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 29 06:48:51 crc kubenswrapper[5017]: I0129 06:48:51.155474 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e59t988"] Jan 29 06:48:51 crc kubenswrapper[5017]: I0129 06:48:51.170218 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5r9lp\" (UniqueName: \"kubernetes.io/projected/ecf4fcf6-9f93-4a65-a6e0-442cc5cd14b4-kube-api-access-5r9lp\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e59t988\" (UID: \"ecf4fcf6-9f93-4a65-a6e0-442cc5cd14b4\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e59t988" Jan 29 06:48:51 crc kubenswrapper[5017]: I0129 06:48:51.170307 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ecf4fcf6-9f93-4a65-a6e0-442cc5cd14b4-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e59t988\" (UID: \"ecf4fcf6-9f93-4a65-a6e0-442cc5cd14b4\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e59t988" Jan 29 06:48:51 crc kubenswrapper[5017]: I0129 06:48:51.170339 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ecf4fcf6-9f93-4a65-a6e0-442cc5cd14b4-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e59t988\" (UID: \"ecf4fcf6-9f93-4a65-a6e0-442cc5cd14b4\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e59t988" Jan 29 06:48:51 crc kubenswrapper[5017]: I0129 06:48:51.271430 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ecf4fcf6-9f93-4a65-a6e0-442cc5cd14b4-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e59t988\" (UID: \"ecf4fcf6-9f93-4a65-a6e0-442cc5cd14b4\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e59t988" Jan 29 06:48:51 crc kubenswrapper[5017]: I0129 06:48:51.271510 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ecf4fcf6-9f93-4a65-a6e0-442cc5cd14b4-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e59t988\" (UID: \"ecf4fcf6-9f93-4a65-a6e0-442cc5cd14b4\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e59t988" Jan 29 06:48:51 crc kubenswrapper[5017]: I0129 06:48:51.271558 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5r9lp\" (UniqueName: \"kubernetes.io/projected/ecf4fcf6-9f93-4a65-a6e0-442cc5cd14b4-kube-api-access-5r9lp\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e59t988\" (UID: \"ecf4fcf6-9f93-4a65-a6e0-442cc5cd14b4\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e59t988" Jan 29 06:48:51 crc kubenswrapper[5017]: I0129 06:48:51.272248 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/ecf4fcf6-9f93-4a65-a6e0-442cc5cd14b4-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e59t988\" (UID: \"ecf4fcf6-9f93-4a65-a6e0-442cc5cd14b4\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e59t988" Jan 29 06:48:51 crc kubenswrapper[5017]: I0129 06:48:51.272255 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ecf4fcf6-9f93-4a65-a6e0-442cc5cd14b4-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e59t988\" (UID: \"ecf4fcf6-9f93-4a65-a6e0-442cc5cd14b4\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e59t988" Jan 29 06:48:51 crc kubenswrapper[5017]: I0129 06:48:51.296302 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5r9lp\" (UniqueName: \"kubernetes.io/projected/ecf4fcf6-9f93-4a65-a6e0-442cc5cd14b4-kube-api-access-5r9lp\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e59t988\" (UID: \"ecf4fcf6-9f93-4a65-a6e0-442cc5cd14b4\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e59t988" Jan 29 06:48:51 crc kubenswrapper[5017]: I0129 06:48:51.457265 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e59t988" Jan 29 06:48:51 crc kubenswrapper[5017]: I0129 06:48:51.723526 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e59t988"] Jan 29 06:48:51 crc kubenswrapper[5017]: W0129 06:48:51.732164 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podecf4fcf6_9f93_4a65_a6e0_442cc5cd14b4.slice/crio-22c1100c102840befd6a3ade92344ea268d3023999d7e4dde7ab4b8c608350d6 WatchSource:0}: Error finding container 22c1100c102840befd6a3ade92344ea268d3023999d7e4dde7ab4b8c608350d6: Status 404 returned error can't find the container with id 22c1100c102840befd6a3ade92344ea268d3023999d7e4dde7ab4b8c608350d6 Jan 29 06:48:51 crc kubenswrapper[5017]: I0129 06:48:51.854251 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e59t988" event={"ID":"ecf4fcf6-9f93-4a65-a6e0-442cc5cd14b4","Type":"ContainerStarted","Data":"22c1100c102840befd6a3ade92344ea268d3023999d7e4dde7ab4b8c608350d6"} Jan 29 06:48:52 crc kubenswrapper[5017]: I0129 06:48:52.863820 5017 generic.go:334] "Generic (PLEG): container finished" podID="ecf4fcf6-9f93-4a65-a6e0-442cc5cd14b4" containerID="85e63f74af6f628770543804ac2425247b383dfa7044d4ad3a98b83397f907fd" exitCode=0 Jan 29 06:48:52 crc kubenswrapper[5017]: I0129 06:48:52.863939 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e59t988" event={"ID":"ecf4fcf6-9f93-4a65-a6e0-442cc5cd14b4","Type":"ContainerDied","Data":"85e63f74af6f628770543804ac2425247b383dfa7044d4ad3a98b83397f907fd"} Jan 29 06:48:53 crc kubenswrapper[5017]: I0129 06:48:53.632235 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-cvz64" Jan 29 06:48:53 crc kubenswrapper[5017]: I0129 06:48:53.672443 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-cvz64" Jan 29 06:48:56 crc kubenswrapper[5017]: 
Jan 29 06:48:56 crc kubenswrapper[5017]: I0129 06:48:56.911772 5017 generic.go:334] "Generic (PLEG): container finished" podID="ecf4fcf6-9f93-4a65-a6e0-442cc5cd14b4" containerID="d1c03b02cdb920733118b4259ed279f030a0aeb56e2b48b08a7a3ab393889dcb" exitCode=0
Jan 29 06:48:56 crc kubenswrapper[5017]: I0129 06:48:56.911988 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e59t988" event={"ID":"ecf4fcf6-9f93-4a65-a6e0-442cc5cd14b4","Type":"ContainerDied","Data":"d1c03b02cdb920733118b4259ed279f030a0aeb56e2b48b08a7a3ab393889dcb"}
Jan 29 06:48:57 crc kubenswrapper[5017]: I0129 06:48:57.925506 5017 generic.go:334] "Generic (PLEG): container finished" podID="ecf4fcf6-9f93-4a65-a6e0-442cc5cd14b4" containerID="d909aee421ee3e84d348a8ba1d41c7205c40c846677432189c2c32b4c8a6c530" exitCode=0
Jan 29 06:48:57 crc kubenswrapper[5017]: I0129 06:48:57.925580 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e59t988" event={"ID":"ecf4fcf6-9f93-4a65-a6e0-442cc5cd14b4","Type":"ContainerDied","Data":"d909aee421ee3e84d348a8ba1d41c7205c40c846677432189c2c32b4c8a6c530"}
Jan 29 06:48:58 crc kubenswrapper[5017]: I0129 06:48:58.014466 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-cpp58"
Jan 29 06:48:58 crc kubenswrapper[5017]: I0129 06:48:58.634789 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-cvz64"
Jan 29 06:48:59 crc kubenswrapper[5017]: I0129 06:48:59.192910 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e59t988"
Jan 29 06:48:59 crc kubenswrapper[5017]: I0129 06:48:59.319066 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ecf4fcf6-9f93-4a65-a6e0-442cc5cd14b4-bundle\") pod \"ecf4fcf6-9f93-4a65-a6e0-442cc5cd14b4\" (UID: \"ecf4fcf6-9f93-4a65-a6e0-442cc5cd14b4\") "
Jan 29 06:48:59 crc kubenswrapper[5017]: I0129 06:48:59.319163 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5r9lp\" (UniqueName: \"kubernetes.io/projected/ecf4fcf6-9f93-4a65-a6e0-442cc5cd14b4-kube-api-access-5r9lp\") pod \"ecf4fcf6-9f93-4a65-a6e0-442cc5cd14b4\" (UID: \"ecf4fcf6-9f93-4a65-a6e0-442cc5cd14b4\") "
Jan 29 06:48:59 crc kubenswrapper[5017]: I0129 06:48:59.319266 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ecf4fcf6-9f93-4a65-a6e0-442cc5cd14b4-util\") pod \"ecf4fcf6-9f93-4a65-a6e0-442cc5cd14b4\" (UID: \"ecf4fcf6-9f93-4a65-a6e0-442cc5cd14b4\") "
Jan 29 06:48:59 crc kubenswrapper[5017]: I0129 06:48:59.320623 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ecf4fcf6-9f93-4a65-a6e0-442cc5cd14b4-bundle" (OuterVolumeSpecName: "bundle") pod "ecf4fcf6-9f93-4a65-a6e0-442cc5cd14b4" (UID: "ecf4fcf6-9f93-4a65-a6e0-442cc5cd14b4"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 06:48:59 crc kubenswrapper[5017]: I0129 06:48:59.325886 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ecf4fcf6-9f93-4a65-a6e0-442cc5cd14b4-kube-api-access-5r9lp" (OuterVolumeSpecName: "kube-api-access-5r9lp") pod "ecf4fcf6-9f93-4a65-a6e0-442cc5cd14b4" (UID: "ecf4fcf6-9f93-4a65-a6e0-442cc5cd14b4"). InnerVolumeSpecName "kube-api-access-5r9lp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 06:48:59 crc kubenswrapper[5017]: I0129 06:48:59.330458 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ecf4fcf6-9f93-4a65-a6e0-442cc5cd14b4-util" (OuterVolumeSpecName: "util") pod "ecf4fcf6-9f93-4a65-a6e0-442cc5cd14b4" (UID: "ecf4fcf6-9f93-4a65-a6e0-442cc5cd14b4"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 06:48:59 crc kubenswrapper[5017]: I0129 06:48:59.421139 5017 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ecf4fcf6-9f93-4a65-a6e0-442cc5cd14b4-util\") on node \"crc\" DevicePath \"\""
Jan 29 06:48:59 crc kubenswrapper[5017]: I0129 06:48:59.421192 5017 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ecf4fcf6-9f93-4a65-a6e0-442cc5cd14b4-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 06:48:59 crc kubenswrapper[5017]: I0129 06:48:59.421203 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5r9lp\" (UniqueName: \"kubernetes.io/projected/ecf4fcf6-9f93-4a65-a6e0-442cc5cd14b4-kube-api-access-5r9lp\") on node \"crc\" DevicePath \"\""
Jan 29 06:48:59 crc kubenswrapper[5017]: I0129 06:48:59.942731 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e59t988" event={"ID":"ecf4fcf6-9f93-4a65-a6e0-442cc5cd14b4","Type":"ContainerDied","Data":"22c1100c102840befd6a3ade92344ea268d3023999d7e4dde7ab4b8c608350d6"}
Jan 29 06:48:59 crc kubenswrapper[5017]: I0129 06:48:59.943286 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="22c1100c102840befd6a3ade92344ea268d3023999d7e4dde7ab4b8c608350d6"
Jan 29 06:48:59 crc kubenswrapper[5017]: I0129 06:48:59.942804 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e59t988"
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e59t988" Jan 29 06:49:03 crc kubenswrapper[5017]: I0129 06:49:03.675207 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-lfbws"] Jan 29 06:49:03 crc kubenswrapper[5017]: E0129 06:49:03.676044 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecf4fcf6-9f93-4a65-a6e0-442cc5cd14b4" containerName="extract" Jan 29 06:49:03 crc kubenswrapper[5017]: I0129 06:49:03.676062 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecf4fcf6-9f93-4a65-a6e0-442cc5cd14b4" containerName="extract" Jan 29 06:49:03 crc kubenswrapper[5017]: E0129 06:49:03.676080 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecf4fcf6-9f93-4a65-a6e0-442cc5cd14b4" containerName="pull" Jan 29 06:49:03 crc kubenswrapper[5017]: I0129 06:49:03.676088 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecf4fcf6-9f93-4a65-a6e0-442cc5cd14b4" containerName="pull" Jan 29 06:49:03 crc kubenswrapper[5017]: E0129 06:49:03.676106 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecf4fcf6-9f93-4a65-a6e0-442cc5cd14b4" containerName="util" Jan 29 06:49:03 crc kubenswrapper[5017]: I0129 06:49:03.676120 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecf4fcf6-9f93-4a65-a6e0-442cc5cd14b4" containerName="util" Jan 29 06:49:03 crc kubenswrapper[5017]: I0129 06:49:03.676266 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecf4fcf6-9f93-4a65-a6e0-442cc5cd14b4" containerName="extract" Jan 29 06:49:03 crc kubenswrapper[5017]: I0129 06:49:03.676876 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-lfbws" Jan 29 06:49:03 crc kubenswrapper[5017]: I0129 06:49:03.679259 5017 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-n2wgw" Jan 29 06:49:03 crc kubenswrapper[5017]: I0129 06:49:03.682779 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt" Jan 29 06:49:03 crc kubenswrapper[5017]: I0129 06:49:03.684661 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt" Jan 29 06:49:03 crc kubenswrapper[5017]: I0129 06:49:03.727832 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-lfbws"] Jan 29 06:49:03 crc kubenswrapper[5017]: I0129 06:49:03.796246 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgbpt\" (UniqueName: \"kubernetes.io/projected/580c2a9a-8d5c-4b03-a1f6-0fdcfcf57347-kube-api-access-sgbpt\") pod \"cert-manager-operator-controller-manager-66c8bdd694-lfbws\" (UID: \"580c2a9a-8d5c-4b03-a1f6-0fdcfcf57347\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-lfbws" Jan 29 06:49:03 crc kubenswrapper[5017]: I0129 06:49:03.796312 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/580c2a9a-8d5c-4b03-a1f6-0fdcfcf57347-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-lfbws\" (UID: \"580c2a9a-8d5c-4b03-a1f6-0fdcfcf57347\") " 
pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-lfbws" Jan 29 06:49:03 crc kubenswrapper[5017]: I0129 06:49:03.897874 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgbpt\" (UniqueName: \"kubernetes.io/projected/580c2a9a-8d5c-4b03-a1f6-0fdcfcf57347-kube-api-access-sgbpt\") pod \"cert-manager-operator-controller-manager-66c8bdd694-lfbws\" (UID: \"580c2a9a-8d5c-4b03-a1f6-0fdcfcf57347\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-lfbws" Jan 29 06:49:03 crc kubenswrapper[5017]: I0129 06:49:03.897937 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/580c2a9a-8d5c-4b03-a1f6-0fdcfcf57347-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-lfbws\" (UID: \"580c2a9a-8d5c-4b03-a1f6-0fdcfcf57347\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-lfbws" Jan 29 06:49:03 crc kubenswrapper[5017]: I0129 06:49:03.898733 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/580c2a9a-8d5c-4b03-a1f6-0fdcfcf57347-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-lfbws\" (UID: \"580c2a9a-8d5c-4b03-a1f6-0fdcfcf57347\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-lfbws" Jan 29 06:49:03 crc kubenswrapper[5017]: I0129 06:49:03.920080 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgbpt\" (UniqueName: \"kubernetes.io/projected/580c2a9a-8d5c-4b03-a1f6-0fdcfcf57347-kube-api-access-sgbpt\") pod \"cert-manager-operator-controller-manager-66c8bdd694-lfbws\" (UID: \"580c2a9a-8d5c-4b03-a1f6-0fdcfcf57347\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-lfbws" Jan 29 06:49:04 crc kubenswrapper[5017]: I0129 06:49:04.012865 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-lfbws" Jan 29 06:49:04 crc kubenswrapper[5017]: I0129 06:49:04.290826 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-lfbws"] Jan 29 06:49:04 crc kubenswrapper[5017]: I0129 06:49:04.979169 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-lfbws" event={"ID":"580c2a9a-8d5c-4b03-a1f6-0fdcfcf57347","Type":"ContainerStarted","Data":"b32cce18933263e97a3bd92115bf70d14622a48f2fc0ee69827e739015c71501"} Jan 29 06:49:08 crc kubenswrapper[5017]: I0129 06:49:08.017534 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-lfbws" event={"ID":"580c2a9a-8d5c-4b03-a1f6-0fdcfcf57347","Type":"ContainerStarted","Data":"d8f7b1b18d85b1d0f7a4acf4bb874805b5b466401b75c58db1b08ddcfddeef33"} Jan 29 06:49:08 crc kubenswrapper[5017]: I0129 06:49:08.057093 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-lfbws" podStartSLOduration=2.504742176 podStartE2EDuration="5.057054888s" podCreationTimestamp="2026-01-29 06:49:03 +0000 UTC" firstStartedPulling="2026-01-29 06:49:04.319949039 +0000 UTC m=+830.694396649" lastFinishedPulling="2026-01-29 06:49:06.872261751 +0000 UTC m=+833.246709361" observedRunningTime="2026-01-29 06:49:08.043713015 +0000 UTC m=+834.418160625" watchObservedRunningTime="2026-01-29 06:49:08.057054888 +0000 UTC m=+834.431502518" Jan 29 06:49:13 crc kubenswrapper[5017]: I0129 06:49:13.754577 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-ttkcb"] Jan 29 06:49:13 crc kubenswrapper[5017]: I0129 06:49:13.756110 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-ttkcb" Jan 29 06:49:13 crc kubenswrapper[5017]: I0129 06:49:13.759864 5017 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-67mtf" Jan 29 06:49:13 crc kubenswrapper[5017]: I0129 06:49:13.760185 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 29 06:49:13 crc kubenswrapper[5017]: I0129 06:49:13.760341 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 29 06:49:13 crc kubenswrapper[5017]: I0129 06:49:13.770595 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-ttkcb"] Jan 29 06:49:13 crc kubenswrapper[5017]: I0129 06:49:13.853500 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a50c399e-cf7e-4906-82b7-44e8925508c1-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-ttkcb\" (UID: \"a50c399e-cf7e-4906-82b7-44e8925508c1\") " pod="cert-manager/cert-manager-webhook-6888856db4-ttkcb" Jan 29 06:49:13 crc kubenswrapper[5017]: I0129 06:49:13.853866 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrbdz\" (UniqueName: \"kubernetes.io/projected/a50c399e-cf7e-4906-82b7-44e8925508c1-kube-api-access-lrbdz\") pod \"cert-manager-webhook-6888856db4-ttkcb\" (UID: \"a50c399e-cf7e-4906-82b7-44e8925508c1\") " pod="cert-manager/cert-manager-webhook-6888856db4-ttkcb" Jan 29 06:49:13 crc kubenswrapper[5017]: I0129 06:49:13.955151 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrbdz\" (UniqueName: \"kubernetes.io/projected/a50c399e-cf7e-4906-82b7-44e8925508c1-kube-api-access-lrbdz\") pod \"cert-manager-webhook-6888856db4-ttkcb\" (UID: \"a50c399e-cf7e-4906-82b7-44e8925508c1\") " pod="cert-manager/cert-manager-webhook-6888856db4-ttkcb" Jan 29 06:49:13 crc kubenswrapper[5017]: I0129 06:49:13.955255 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a50c399e-cf7e-4906-82b7-44e8925508c1-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-ttkcb\" (UID: \"a50c399e-cf7e-4906-82b7-44e8925508c1\") " pod="cert-manager/cert-manager-webhook-6888856db4-ttkcb" Jan 29 06:49:13 crc kubenswrapper[5017]: I0129 06:49:13.998427 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrbdz\" (UniqueName: \"kubernetes.io/projected/a50c399e-cf7e-4906-82b7-44e8925508c1-kube-api-access-lrbdz\") pod \"cert-manager-webhook-6888856db4-ttkcb\" (UID: \"a50c399e-cf7e-4906-82b7-44e8925508c1\") " pod="cert-manager/cert-manager-webhook-6888856db4-ttkcb" Jan 29 06:49:14 crc kubenswrapper[5017]: I0129 06:49:14.001887 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a50c399e-cf7e-4906-82b7-44e8925508c1-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-ttkcb\" (UID: \"a50c399e-cf7e-4906-82b7-44e8925508c1\") " pod="cert-manager/cert-manager-webhook-6888856db4-ttkcb" Jan 29 06:49:14 crc kubenswrapper[5017]: I0129 06:49:14.069075 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-kt86q"] Jan 29 06:49:14 crc kubenswrapper[5017]: I0129 06:49:14.070400 5017 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-kt86q" Jan 29 06:49:14 crc kubenswrapper[5017]: I0129 06:49:14.077156 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-ttkcb" Jan 29 06:49:14 crc kubenswrapper[5017]: I0129 06:49:14.078462 5017 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-k9jw2" Jan 29 06:49:14 crc kubenswrapper[5017]: I0129 06:49:14.095487 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-kt86q"] Jan 29 06:49:14 crc kubenswrapper[5017]: I0129 06:49:14.160433 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58qrp\" (UniqueName: \"kubernetes.io/projected/a5489a52-a692-4be5-ad55-f4e3607180e9-kube-api-access-58qrp\") pod \"cert-manager-cainjector-5545bd876-kt86q\" (UID: \"a5489a52-a692-4be5-ad55-f4e3607180e9\") " pod="cert-manager/cert-manager-cainjector-5545bd876-kt86q" Jan 29 06:49:14 crc kubenswrapper[5017]: I0129 06:49:14.160520 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a5489a52-a692-4be5-ad55-f4e3607180e9-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-kt86q\" (UID: \"a5489a52-a692-4be5-ad55-f4e3607180e9\") " pod="cert-manager/cert-manager-cainjector-5545bd876-kt86q" Jan 29 06:49:14 crc kubenswrapper[5017]: I0129 06:49:14.262197 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a5489a52-a692-4be5-ad55-f4e3607180e9-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-kt86q\" (UID: \"a5489a52-a692-4be5-ad55-f4e3607180e9\") " pod="cert-manager/cert-manager-cainjector-5545bd876-kt86q" Jan 29 06:49:14 crc kubenswrapper[5017]: I0129 06:49:14.262290 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58qrp\" (UniqueName: \"kubernetes.io/projected/a5489a52-a692-4be5-ad55-f4e3607180e9-kube-api-access-58qrp\") pod \"cert-manager-cainjector-5545bd876-kt86q\" (UID: \"a5489a52-a692-4be5-ad55-f4e3607180e9\") " pod="cert-manager/cert-manager-cainjector-5545bd876-kt86q" Jan 29 06:49:14 crc kubenswrapper[5017]: I0129 06:49:14.286683 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58qrp\" (UniqueName: \"kubernetes.io/projected/a5489a52-a692-4be5-ad55-f4e3607180e9-kube-api-access-58qrp\") pod \"cert-manager-cainjector-5545bd876-kt86q\" (UID: \"a5489a52-a692-4be5-ad55-f4e3607180e9\") " pod="cert-manager/cert-manager-cainjector-5545bd876-kt86q" Jan 29 06:49:14 crc kubenswrapper[5017]: I0129 06:49:14.288190 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a5489a52-a692-4be5-ad55-f4e3607180e9-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-kt86q\" (UID: \"a5489a52-a692-4be5-ad55-f4e3607180e9\") " pod="cert-manager/cert-manager-cainjector-5545bd876-kt86q" Jan 29 06:49:14 crc kubenswrapper[5017]: I0129 06:49:14.393891 5017 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-k9jw2" Jan 29 06:49:14 crc kubenswrapper[5017]: I0129 06:49:14.402517 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-kt86q" Jan 29 06:49:14 crc kubenswrapper[5017]: I0129 06:49:14.584360 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-ttkcb"] Jan 29 06:49:14 crc kubenswrapper[5017]: I0129 06:49:14.653560 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-kt86q"] Jan 29 06:49:14 crc kubenswrapper[5017]: W0129 06:49:14.657364 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda5489a52_a692_4be5_ad55_f4e3607180e9.slice/crio-e6b7729e0313f1f675529beef515c100ff0dffc2565a4312f84cef1c35444aba WatchSource:0}: Error finding container e6b7729e0313f1f675529beef515c100ff0dffc2565a4312f84cef1c35444aba: Status 404 returned error can't find the container with id e6b7729e0313f1f675529beef515c100ff0dffc2565a4312f84cef1c35444aba Jan 29 06:49:15 crc kubenswrapper[5017]: I0129 06:49:15.063000 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-ttkcb" event={"ID":"a50c399e-cf7e-4906-82b7-44e8925508c1","Type":"ContainerStarted","Data":"808633e15394d3c24924cda89344a1260b1193e59cb8c94d38c3e883413e4050"} Jan 29 06:49:15 crc kubenswrapper[5017]: I0129 06:49:15.064225 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-kt86q" event={"ID":"a5489a52-a692-4be5-ad55-f4e3607180e9","Type":"ContainerStarted","Data":"e6b7729e0313f1f675529beef515c100ff0dffc2565a4312f84cef1c35444aba"} Jan 29 06:49:16 crc kubenswrapper[5017]: I0129 06:49:16.653261 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-phhs4"] Jan 29 06:49:16 crc kubenswrapper[5017]: I0129 06:49:16.655867 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-phhs4" Jan 29 06:49:16 crc kubenswrapper[5017]: I0129 06:49:16.669412 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-phhs4"] Jan 29 06:49:16 crc kubenswrapper[5017]: I0129 06:49:16.805214 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b797768-e39e-4e59-a657-e1c5af163fa0-utilities\") pod \"redhat-marketplace-phhs4\" (UID: \"1b797768-e39e-4e59-a657-e1c5af163fa0\") " pod="openshift-marketplace/redhat-marketplace-phhs4" Jan 29 06:49:16 crc kubenswrapper[5017]: I0129 06:49:16.805338 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tlm4\" (UniqueName: \"kubernetes.io/projected/1b797768-e39e-4e59-a657-e1c5af163fa0-kube-api-access-9tlm4\") pod \"redhat-marketplace-phhs4\" (UID: \"1b797768-e39e-4e59-a657-e1c5af163fa0\") " pod="openshift-marketplace/redhat-marketplace-phhs4" Jan 29 06:49:16 crc kubenswrapper[5017]: I0129 06:49:16.805365 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b797768-e39e-4e59-a657-e1c5af163fa0-catalog-content\") pod \"redhat-marketplace-phhs4\" (UID: \"1b797768-e39e-4e59-a657-e1c5af163fa0\") " pod="openshift-marketplace/redhat-marketplace-phhs4" Jan 29 06:49:16 crc kubenswrapper[5017]: I0129 06:49:16.907406 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9tlm4\" (UniqueName: \"kubernetes.io/projected/1b797768-e39e-4e59-a657-e1c5af163fa0-kube-api-access-9tlm4\") pod \"redhat-marketplace-phhs4\" (UID: \"1b797768-e39e-4e59-a657-e1c5af163fa0\") " pod="openshift-marketplace/redhat-marketplace-phhs4" Jan 29 06:49:16 crc kubenswrapper[5017]: I0129 06:49:16.907482 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b797768-e39e-4e59-a657-e1c5af163fa0-catalog-content\") pod \"redhat-marketplace-phhs4\" (UID: \"1b797768-e39e-4e59-a657-e1c5af163fa0\") " pod="openshift-marketplace/redhat-marketplace-phhs4" Jan 29 06:49:16 crc kubenswrapper[5017]: I0129 06:49:16.907516 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b797768-e39e-4e59-a657-e1c5af163fa0-utilities\") pod \"redhat-marketplace-phhs4\" (UID: \"1b797768-e39e-4e59-a657-e1c5af163fa0\") " pod="openshift-marketplace/redhat-marketplace-phhs4" Jan 29 06:49:16 crc kubenswrapper[5017]: I0129 06:49:16.908386 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b797768-e39e-4e59-a657-e1c5af163fa0-catalog-content\") pod \"redhat-marketplace-phhs4\" (UID: \"1b797768-e39e-4e59-a657-e1c5af163fa0\") " pod="openshift-marketplace/redhat-marketplace-phhs4" Jan 29 06:49:16 crc kubenswrapper[5017]: I0129 06:49:16.908532 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b797768-e39e-4e59-a657-e1c5af163fa0-utilities\") pod \"redhat-marketplace-phhs4\" (UID: \"1b797768-e39e-4e59-a657-e1c5af163fa0\") " pod="openshift-marketplace/redhat-marketplace-phhs4" Jan 29 06:49:16 crc kubenswrapper[5017]: I0129 06:49:16.931573 5017 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-9tlm4\" (UniqueName: \"kubernetes.io/projected/1b797768-e39e-4e59-a657-e1c5af163fa0-kube-api-access-9tlm4\") pod \"redhat-marketplace-phhs4\" (UID: \"1b797768-e39e-4e59-a657-e1c5af163fa0\") " pod="openshift-marketplace/redhat-marketplace-phhs4" Jan 29 06:49:16 crc kubenswrapper[5017]: I0129 06:49:16.979852 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-phhs4" Jan 29 06:49:17 crc kubenswrapper[5017]: I0129 06:49:17.454072 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-phhs4"] Jan 29 06:49:18 crc kubenswrapper[5017]: I0129 06:49:18.084127 5017 generic.go:334] "Generic (PLEG): container finished" podID="1b797768-e39e-4e59-a657-e1c5af163fa0" containerID="bc512ac4473dd86f61cb01c5002c75fdd4f4569a9ac444f9ab13d2e9ca431541" exitCode=0 Jan 29 06:49:18 crc kubenswrapper[5017]: I0129 06:49:18.084192 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-phhs4" event={"ID":"1b797768-e39e-4e59-a657-e1c5af163fa0","Type":"ContainerDied","Data":"bc512ac4473dd86f61cb01c5002c75fdd4f4569a9ac444f9ab13d2e9ca431541"} Jan 29 06:49:18 crc kubenswrapper[5017]: I0129 06:49:18.084233 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-phhs4" event={"ID":"1b797768-e39e-4e59-a657-e1c5af163fa0","Type":"ContainerStarted","Data":"625d7557dc9b405196248771671f07452c2061781e1eed38d64a0141af030e34"} Jan 29 06:49:20 crc kubenswrapper[5017]: I0129 06:49:20.104029 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-kt86q" event={"ID":"a5489a52-a692-4be5-ad55-f4e3607180e9","Type":"ContainerStarted","Data":"d937e8696b8127295a26152242edbd25b4c2ef88a9c3320a3995c14b5c10032f"} Jan 29 06:49:20 crc kubenswrapper[5017]: I0129 06:49:20.106267 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-ttkcb" event={"ID":"a50c399e-cf7e-4906-82b7-44e8925508c1","Type":"ContainerStarted","Data":"eae5623da6d4807bddc45d39b701a2583ea0f905e43cdea5bccc9d015d06429e"} Jan 29 06:49:20 crc kubenswrapper[5017]: I0129 06:49:20.106545 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-6888856db4-ttkcb" Jan 29 06:49:20 crc kubenswrapper[5017]: I0129 06:49:20.130359 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-5545bd876-kt86q" podStartSLOduration=1.026005399 podStartE2EDuration="6.130335641s" podCreationTimestamp="2026-01-29 06:49:14 +0000 UTC" firstStartedPulling="2026-01-29 06:49:14.659298706 +0000 UTC m=+841.033746316" lastFinishedPulling="2026-01-29 06:49:19.763628948 +0000 UTC m=+846.138076558" observedRunningTime="2026-01-29 06:49:20.124722892 +0000 UTC m=+846.499170512" watchObservedRunningTime="2026-01-29 06:49:20.130335641 +0000 UTC m=+846.504783261" Jan 29 06:49:20 crc kubenswrapper[5017]: I0129 06:49:20.137227 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-545d4d4674-kbdd8"] Jan 29 06:49:20 crc kubenswrapper[5017]: I0129 06:49:20.138159 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-kbdd8" Jan 29 06:49:20 crc kubenswrapper[5017]: I0129 06:49:20.140709 5017 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-d97p7" Jan 29 06:49:20 crc kubenswrapper[5017]: I0129 06:49:20.149270 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-6888856db4-ttkcb" podStartSLOduration=1.972058664 podStartE2EDuration="7.14924708s" podCreationTimestamp="2026-01-29 06:49:13 +0000 UTC" firstStartedPulling="2026-01-29 06:49:14.60412043 +0000 UTC m=+840.978568040" lastFinishedPulling="2026-01-29 06:49:19.781308846 +0000 UTC m=+846.155756456" observedRunningTime="2026-01-29 06:49:20.148633215 +0000 UTC m=+846.523080835" watchObservedRunningTime="2026-01-29 06:49:20.14924708 +0000 UTC m=+846.523694690" Jan 29 06:49:20 crc kubenswrapper[5017]: I0129 06:49:20.163176 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-kbdd8"] Jan 29 06:49:20 crc kubenswrapper[5017]: I0129 06:49:20.264926 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27b9p\" (UniqueName: \"kubernetes.io/projected/d340ec3a-8018-4c17-864a-4121ef63d989-kube-api-access-27b9p\") pod \"cert-manager-545d4d4674-kbdd8\" (UID: \"d340ec3a-8018-4c17-864a-4121ef63d989\") " pod="cert-manager/cert-manager-545d4d4674-kbdd8" Jan 29 06:49:20 crc kubenswrapper[5017]: I0129 06:49:20.265112 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d340ec3a-8018-4c17-864a-4121ef63d989-bound-sa-token\") pod \"cert-manager-545d4d4674-kbdd8\" (UID: \"d340ec3a-8018-4c17-864a-4121ef63d989\") " pod="cert-manager/cert-manager-545d4d4674-kbdd8" Jan 29 06:49:20 crc kubenswrapper[5017]: I0129 06:49:20.366788 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27b9p\" (UniqueName: \"kubernetes.io/projected/d340ec3a-8018-4c17-864a-4121ef63d989-kube-api-access-27b9p\") pod \"cert-manager-545d4d4674-kbdd8\" (UID: \"d340ec3a-8018-4c17-864a-4121ef63d989\") " pod="cert-manager/cert-manager-545d4d4674-kbdd8" Jan 29 06:49:20 crc kubenswrapper[5017]: I0129 06:49:20.366894 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d340ec3a-8018-4c17-864a-4121ef63d989-bound-sa-token\") pod \"cert-manager-545d4d4674-kbdd8\" (UID: \"d340ec3a-8018-4c17-864a-4121ef63d989\") " pod="cert-manager/cert-manager-545d4d4674-kbdd8" Jan 29 06:49:20 crc kubenswrapper[5017]: I0129 06:49:20.398454 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27b9p\" (UniqueName: \"kubernetes.io/projected/d340ec3a-8018-4c17-864a-4121ef63d989-kube-api-access-27b9p\") pod \"cert-manager-545d4d4674-kbdd8\" (UID: \"d340ec3a-8018-4c17-864a-4121ef63d989\") " pod="cert-manager/cert-manager-545d4d4674-kbdd8" Jan 29 06:49:20 crc kubenswrapper[5017]: I0129 06:49:20.401815 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d340ec3a-8018-4c17-864a-4121ef63d989-bound-sa-token\") pod \"cert-manager-545d4d4674-kbdd8\" (UID: \"d340ec3a-8018-4c17-864a-4121ef63d989\") " pod="cert-manager/cert-manager-545d4d4674-kbdd8" Jan 29 06:49:20 crc kubenswrapper[5017]: I0129 06:49:20.454911 5017 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-kbdd8" Jan 29 06:49:20 crc kubenswrapper[5017]: I0129 06:49:20.822729 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-kbdd8"] Jan 29 06:49:21 crc kubenswrapper[5017]: I0129 06:49:21.116834 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-kbdd8" event={"ID":"d340ec3a-8018-4c17-864a-4121ef63d989","Type":"ContainerStarted","Data":"594a02dd4173b4123cb880870dfc197fca513513331c6756453691565fd43b21"} Jan 29 06:49:21 crc kubenswrapper[5017]: I0129 06:49:21.117415 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-kbdd8" event={"ID":"d340ec3a-8018-4c17-864a-4121ef63d989","Type":"ContainerStarted","Data":"c3974e90142d9162f378d7638f373d8643621abe3a0b4116b05a9503c0c7cb25"} Jan 29 06:49:21 crc kubenswrapper[5017]: I0129 06:49:21.119282 5017 generic.go:334] "Generic (PLEG): container finished" podID="1b797768-e39e-4e59-a657-e1c5af163fa0" containerID="b6af8493bf98767b9c019a3d726e6ac5a29e3b1d10db0841c2641fc5a2f41abc" exitCode=0 Jan 29 06:49:21 crc kubenswrapper[5017]: I0129 06:49:21.119368 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-phhs4" event={"ID":"1b797768-e39e-4e59-a657-e1c5af163fa0","Type":"ContainerDied","Data":"b6af8493bf98767b9c019a3d726e6ac5a29e3b1d10db0841c2641fc5a2f41abc"} Jan 29 06:49:21 crc kubenswrapper[5017]: I0129 06:49:21.143479 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-545d4d4674-kbdd8" podStartSLOduration=1.143446734 podStartE2EDuration="1.143446734s" podCreationTimestamp="2026-01-29 06:49:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:49:21.141810474 +0000 UTC m=+847.516258104" watchObservedRunningTime="2026-01-29 06:49:21.143446734 +0000 UTC m=+847.517894384" Jan 29 06:49:22 crc kubenswrapper[5017]: I0129 06:49:22.130588 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-phhs4" event={"ID":"1b797768-e39e-4e59-a657-e1c5af163fa0","Type":"ContainerStarted","Data":"c615638a3bf789acc4eadc84a1ea460270dc7d1614864d44c0b9feff25a416f8"} Jan 29 06:49:22 crc kubenswrapper[5017]: I0129 06:49:22.180347 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-phhs4" podStartSLOduration=4.229216933 podStartE2EDuration="6.180307906s" podCreationTimestamp="2026-01-29 06:49:16 +0000 UTC" firstStartedPulling="2026-01-29 06:49:19.671635296 +0000 UTC m=+846.046082916" lastFinishedPulling="2026-01-29 06:49:21.622726279 +0000 UTC m=+847.997173889" observedRunningTime="2026-01-29 06:49:22.166519044 +0000 UTC m=+848.540966664" watchObservedRunningTime="2026-01-29 06:49:22.180307906 +0000 UTC m=+848.554755516" Jan 29 06:49:24 crc kubenswrapper[5017]: I0129 06:49:24.080494 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-6888856db4-ttkcb" Jan 29 06:49:26 crc kubenswrapper[5017]: I0129 06:49:26.981395 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-phhs4" Jan 29 06:49:26 crc kubenswrapper[5017]: I0129 06:49:26.982090 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-marketplace-phhs4" Jan 29 06:49:27 crc kubenswrapper[5017]: I0129 06:49:27.051909 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-phhs4" Jan 29 06:49:27 crc kubenswrapper[5017]: I0129 06:49:27.223416 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-phhs4" Jan 29 06:49:27 crc kubenswrapper[5017]: I0129 06:49:27.296129 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-59s2l"] Jan 29 06:49:27 crc kubenswrapper[5017]: I0129 06:49:27.296991 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-59s2l" Jan 29 06:49:27 crc kubenswrapper[5017]: I0129 06:49:27.299381 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-92sgw" Jan 29 06:49:27 crc kubenswrapper[5017]: I0129 06:49:27.299458 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 29 06:49:27 crc kubenswrapper[5017]: I0129 06:49:27.310510 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-59s2l"] Jan 29 06:49:27 crc kubenswrapper[5017]: I0129 06:49:27.310724 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 29 06:49:27 crc kubenswrapper[5017]: I0129 06:49:27.472098 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2h78\" (UniqueName: \"kubernetes.io/projected/5dd472bb-8962-446f-a584-b8ef0c607201-kube-api-access-n2h78\") pod \"openstack-operator-index-59s2l\" (UID: \"5dd472bb-8962-446f-a584-b8ef0c607201\") " pod="openstack-operators/openstack-operator-index-59s2l" Jan 29 06:49:27 crc kubenswrapper[5017]: I0129 06:49:27.573768 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2h78\" (UniqueName: \"kubernetes.io/projected/5dd472bb-8962-446f-a584-b8ef0c607201-kube-api-access-n2h78\") pod \"openstack-operator-index-59s2l\" (UID: \"5dd472bb-8962-446f-a584-b8ef0c607201\") " pod="openstack-operators/openstack-operator-index-59s2l" Jan 29 06:49:27 crc kubenswrapper[5017]: I0129 06:49:27.595644 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2h78\" (UniqueName: \"kubernetes.io/projected/5dd472bb-8962-446f-a584-b8ef0c607201-kube-api-access-n2h78\") pod \"openstack-operator-index-59s2l\" (UID: \"5dd472bb-8962-446f-a584-b8ef0c607201\") " pod="openstack-operators/openstack-operator-index-59s2l" Jan 29 06:49:27 crc kubenswrapper[5017]: I0129 06:49:27.614112 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-59s2l" Jan 29 06:49:27 crc kubenswrapper[5017]: I0129 06:49:27.841256 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-59s2l"] Jan 29 06:49:28 crc kubenswrapper[5017]: I0129 06:49:28.177519 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-59s2l" event={"ID":"5dd472bb-8962-446f-a584-b8ef0c607201","Type":"ContainerStarted","Data":"b733d16c65f4d20ea915d83cadd8246afb5a6fde019c386f8e740451b4347acb"} Jan 29 06:49:29 crc kubenswrapper[5017]: I0129 06:49:29.187078 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-59s2l" event={"ID":"5dd472bb-8962-446f-a584-b8ef0c607201","Type":"ContainerStarted","Data":"815ef596d2aea09fc35cb2c38478dc4a632809452fe5766b2a23ca79b4b4b2a7"} Jan 29 06:49:29 crc kubenswrapper[5017]: I0129 06:49:29.210306 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-59s2l" podStartSLOduration=1.397899329 podStartE2EDuration="2.210273005s" podCreationTimestamp="2026-01-29 06:49:27 +0000 UTC" firstStartedPulling="2026-01-29 06:49:27.857359015 +0000 UTC m=+854.231806625" lastFinishedPulling="2026-01-29 06:49:28.669732681 +0000 UTC m=+855.044180301" observedRunningTime="2026-01-29 06:49:29.208888801 +0000 UTC m=+855.583336451" watchObservedRunningTime="2026-01-29 06:49:29.210273005 +0000 UTC m=+855.584720655" Jan 29 06:49:31 crc kubenswrapper[5017]: I0129 06:49:31.894798 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-phhs4"] Jan 29 06:49:31 crc kubenswrapper[5017]: I0129 06:49:31.895550 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-phhs4" podUID="1b797768-e39e-4e59-a657-e1c5af163fa0" containerName="registry-server" containerID="cri-o://c615638a3bf789acc4eadc84a1ea460270dc7d1614864d44c0b9feff25a416f8" gracePeriod=2 Jan 29 06:49:32 crc kubenswrapper[5017]: I0129 06:49:32.226696 5017 generic.go:334] "Generic (PLEG): container finished" podID="1b797768-e39e-4e59-a657-e1c5af163fa0" containerID="c615638a3bf789acc4eadc84a1ea460270dc7d1614864d44c0b9feff25a416f8" exitCode=0 Jan 29 06:49:32 crc kubenswrapper[5017]: I0129 06:49:32.226772 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-phhs4" event={"ID":"1b797768-e39e-4e59-a657-e1c5af163fa0","Type":"ContainerDied","Data":"c615638a3bf789acc4eadc84a1ea460270dc7d1614864d44c0b9feff25a416f8"} Jan 29 06:49:32 crc kubenswrapper[5017]: I0129 06:49:32.360239 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-phhs4" Jan 29 06:49:32 crc kubenswrapper[5017]: I0129 06:49:32.489707 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b797768-e39e-4e59-a657-e1c5af163fa0-utilities\") pod \"1b797768-e39e-4e59-a657-e1c5af163fa0\" (UID: \"1b797768-e39e-4e59-a657-e1c5af163fa0\") " Jan 29 06:49:32 crc kubenswrapper[5017]: I0129 06:49:32.489775 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b797768-e39e-4e59-a657-e1c5af163fa0-catalog-content\") pod \"1b797768-e39e-4e59-a657-e1c5af163fa0\" (UID: \"1b797768-e39e-4e59-a657-e1c5af163fa0\") " Jan 29 06:49:32 crc kubenswrapper[5017]: I0129 06:49:32.489915 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9tlm4\" (UniqueName: \"kubernetes.io/projected/1b797768-e39e-4e59-a657-e1c5af163fa0-kube-api-access-9tlm4\") pod \"1b797768-e39e-4e59-a657-e1c5af163fa0\" (UID: \"1b797768-e39e-4e59-a657-e1c5af163fa0\") " Jan 29 06:49:32 crc kubenswrapper[5017]: I0129 06:49:32.495594 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b797768-e39e-4e59-a657-e1c5af163fa0-utilities" (OuterVolumeSpecName: "utilities") pod "1b797768-e39e-4e59-a657-e1c5af163fa0" (UID: "1b797768-e39e-4e59-a657-e1c5af163fa0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:49:32 crc kubenswrapper[5017]: I0129 06:49:32.502547 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b797768-e39e-4e59-a657-e1c5af163fa0-kube-api-access-9tlm4" (OuterVolumeSpecName: "kube-api-access-9tlm4") pod "1b797768-e39e-4e59-a657-e1c5af163fa0" (UID: "1b797768-e39e-4e59-a657-e1c5af163fa0"). InnerVolumeSpecName "kube-api-access-9tlm4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:49:32 crc kubenswrapper[5017]: I0129 06:49:32.523258 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b797768-e39e-4e59-a657-e1c5af163fa0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1b797768-e39e-4e59-a657-e1c5af163fa0" (UID: "1b797768-e39e-4e59-a657-e1c5af163fa0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:49:32 crc kubenswrapper[5017]: I0129 06:49:32.592338 5017 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b797768-e39e-4e59-a657-e1c5af163fa0-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 06:49:32 crc kubenswrapper[5017]: I0129 06:49:32.592394 5017 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b797768-e39e-4e59-a657-e1c5af163fa0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 06:49:32 crc kubenswrapper[5017]: I0129 06:49:32.592414 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9tlm4\" (UniqueName: \"kubernetes.io/projected/1b797768-e39e-4e59-a657-e1c5af163fa0-kube-api-access-9tlm4\") on node \"crc\" DevicePath \"\"" Jan 29 06:49:33 crc kubenswrapper[5017]: I0129 06:49:33.094896 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-59s2l"] Jan 29 06:49:33 crc kubenswrapper[5017]: I0129 06:49:33.095182 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-59s2l" podUID="5dd472bb-8962-446f-a584-b8ef0c607201" containerName="registry-server" containerID="cri-o://815ef596d2aea09fc35cb2c38478dc4a632809452fe5766b2a23ca79b4b4b2a7" gracePeriod=2 Jan 29 06:49:33 crc kubenswrapper[5017]: I0129 06:49:33.239687 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-phhs4" event={"ID":"1b797768-e39e-4e59-a657-e1c5af163fa0","Type":"ContainerDied","Data":"625d7557dc9b405196248771671f07452c2061781e1eed38d64a0141af030e34"} Jan 29 06:49:33 crc kubenswrapper[5017]: I0129 06:49:33.239704 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-phhs4" Jan 29 06:49:33 crc kubenswrapper[5017]: I0129 06:49:33.239793 5017 scope.go:117] "RemoveContainer" containerID="c615638a3bf789acc4eadc84a1ea460270dc7d1614864d44c0b9feff25a416f8" Jan 29 06:49:33 crc kubenswrapper[5017]: I0129 06:49:33.242745 5017 generic.go:334] "Generic (PLEG): container finished" podID="5dd472bb-8962-446f-a584-b8ef0c607201" containerID="815ef596d2aea09fc35cb2c38478dc4a632809452fe5766b2a23ca79b4b4b2a7" exitCode=0 Jan 29 06:49:33 crc kubenswrapper[5017]: I0129 06:49:33.242812 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-59s2l" event={"ID":"5dd472bb-8962-446f-a584-b8ef0c607201","Type":"ContainerDied","Data":"815ef596d2aea09fc35cb2c38478dc4a632809452fe5766b2a23ca79b4b4b2a7"} Jan 29 06:49:33 crc kubenswrapper[5017]: I0129 06:49:33.280478 5017 scope.go:117] "RemoveContainer" containerID="b6af8493bf98767b9c019a3d726e6ac5a29e3b1d10db0841c2641fc5a2f41abc" Jan 29 06:49:33 crc kubenswrapper[5017]: I0129 06:49:33.286304 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-phhs4"] Jan 29 06:49:33 crc kubenswrapper[5017]: I0129 06:49:33.293715 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-phhs4"] Jan 29 06:49:33 crc kubenswrapper[5017]: I0129 06:49:33.306678 5017 scope.go:117] "RemoveContainer" containerID="bc512ac4473dd86f61cb01c5002c75fdd4f4569a9ac444f9ab13d2e9ca431541" Jan 29 06:49:33 crc kubenswrapper[5017]: I0129 06:49:33.510177 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-59s2l" Jan 29 06:49:33 crc kubenswrapper[5017]: I0129 06:49:33.708808 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n2h78\" (UniqueName: \"kubernetes.io/projected/5dd472bb-8962-446f-a584-b8ef0c607201-kube-api-access-n2h78\") pod \"5dd472bb-8962-446f-a584-b8ef0c607201\" (UID: \"5dd472bb-8962-446f-a584-b8ef0c607201\") " Jan 29 06:49:33 crc kubenswrapper[5017]: I0129 06:49:33.713154 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5dd472bb-8962-446f-a584-b8ef0c607201-kube-api-access-n2h78" (OuterVolumeSpecName: "kube-api-access-n2h78") pod "5dd472bb-8962-446f-a584-b8ef0c607201" (UID: "5dd472bb-8962-446f-a584-b8ef0c607201"). InnerVolumeSpecName "kube-api-access-n2h78". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:49:33 crc kubenswrapper[5017]: I0129 06:49:33.810849 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n2h78\" (UniqueName: \"kubernetes.io/projected/5dd472bb-8962-446f-a584-b8ef0c607201-kube-api-access-n2h78\") on node \"crc\" DevicePath \"\"" Jan 29 06:49:33 crc kubenswrapper[5017]: I0129 06:49:33.896691 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-wwps8"] Jan 29 06:49:33 crc kubenswrapper[5017]: E0129 06:49:33.897021 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b797768-e39e-4e59-a657-e1c5af163fa0" containerName="extract-utilities" Jan 29 06:49:33 crc kubenswrapper[5017]: I0129 06:49:33.897041 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b797768-e39e-4e59-a657-e1c5af163fa0" containerName="extract-utilities" Jan 29 06:49:33 crc kubenswrapper[5017]: E0129 06:49:33.897054 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b797768-e39e-4e59-a657-e1c5af163fa0" containerName="registry-server" Jan 29 06:49:33 crc kubenswrapper[5017]: I0129 06:49:33.897063 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b797768-e39e-4e59-a657-e1c5af163fa0" containerName="registry-server" Jan 29 06:49:33 crc kubenswrapper[5017]: E0129 06:49:33.897074 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b797768-e39e-4e59-a657-e1c5af163fa0" containerName="extract-content" Jan 29 06:49:33 crc kubenswrapper[5017]: I0129 06:49:33.897083 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b797768-e39e-4e59-a657-e1c5af163fa0" containerName="extract-content" Jan 29 06:49:33 crc kubenswrapper[5017]: E0129 06:49:33.897115 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5dd472bb-8962-446f-a584-b8ef0c607201" containerName="registry-server" Jan 29 06:49:33 crc kubenswrapper[5017]: I0129 06:49:33.897125 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="5dd472bb-8962-446f-a584-b8ef0c607201" containerName="registry-server" Jan 29 06:49:33 crc kubenswrapper[5017]: I0129 06:49:33.897288 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b797768-e39e-4e59-a657-e1c5af163fa0" containerName="registry-server" Jan 29 06:49:33 crc kubenswrapper[5017]: I0129 06:49:33.897303 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="5dd472bb-8962-446f-a584-b8ef0c607201" containerName="registry-server" Jan 29 06:49:33 crc kubenswrapper[5017]: I0129 06:49:33.897775 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-wwps8" Jan 29 06:49:33 crc kubenswrapper[5017]: I0129 06:49:33.909616 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-wwps8"] Jan 29 06:49:33 crc kubenswrapper[5017]: I0129 06:49:33.912389 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbzd4\" (UniqueName: \"kubernetes.io/projected/dd0dee81-c421-43b8-8137-b56ad147be6a-kube-api-access-fbzd4\") pod \"openstack-operator-index-wwps8\" (UID: \"dd0dee81-c421-43b8-8137-b56ad147be6a\") " pod="openstack-operators/openstack-operator-index-wwps8" Jan 29 06:49:34 crc kubenswrapper[5017]: I0129 06:49:34.013420 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbzd4\" (UniqueName: \"kubernetes.io/projected/dd0dee81-c421-43b8-8137-b56ad147be6a-kube-api-access-fbzd4\") pod \"openstack-operator-index-wwps8\" (UID: \"dd0dee81-c421-43b8-8137-b56ad147be6a\") " pod="openstack-operators/openstack-operator-index-wwps8" Jan 29 06:49:34 crc kubenswrapper[5017]: I0129 06:49:34.033811 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbzd4\" (UniqueName: \"kubernetes.io/projected/dd0dee81-c421-43b8-8137-b56ad147be6a-kube-api-access-fbzd4\") pod \"openstack-operator-index-wwps8\" (UID: \"dd0dee81-c421-43b8-8137-b56ad147be6a\") " pod="openstack-operators/openstack-operator-index-wwps8" Jan 29 06:49:34 crc kubenswrapper[5017]: I0129 06:49:34.215598 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-wwps8" Jan 29 06:49:34 crc kubenswrapper[5017]: I0129 06:49:34.254675 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-59s2l" event={"ID":"5dd472bb-8962-446f-a584-b8ef0c607201","Type":"ContainerDied","Data":"b733d16c65f4d20ea915d83cadd8246afb5a6fde019c386f8e740451b4347acb"} Jan 29 06:49:34 crc kubenswrapper[5017]: I0129 06:49:34.254738 5017 scope.go:117] "RemoveContainer" containerID="815ef596d2aea09fc35cb2c38478dc4a632809452fe5766b2a23ca79b4b4b2a7" Jan 29 06:49:34 crc kubenswrapper[5017]: I0129 06:49:34.254774 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-59s2l" Jan 29 06:49:34 crc kubenswrapper[5017]: I0129 06:49:34.302111 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-59s2l"] Jan 29 06:49:34 crc kubenswrapper[5017]: I0129 06:49:34.336606 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b797768-e39e-4e59-a657-e1c5af163fa0" path="/var/lib/kubelet/pods/1b797768-e39e-4e59-a657-e1c5af163fa0/volumes" Jan 29 06:49:34 crc kubenswrapper[5017]: I0129 06:49:34.337928 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-59s2l"] Jan 29 06:49:34 crc kubenswrapper[5017]: I0129 06:49:34.715033 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-wwps8"] Jan 29 06:49:34 crc kubenswrapper[5017]: W0129 06:49:34.725282 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddd0dee81_c421_43b8_8137_b56ad147be6a.slice/crio-724f838de0d941a145b259b5de2cd1940375ea3959dafd2e13ae81073e93484f WatchSource:0}: Error finding container 724f838de0d941a145b259b5de2cd1940375ea3959dafd2e13ae81073e93484f: Status 404 returned error can't find the container with id 724f838de0d941a145b259b5de2cd1940375ea3959dafd2e13ae81073e93484f Jan 29 06:49:35 crc kubenswrapper[5017]: I0129 06:49:35.264081 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-wwps8" event={"ID":"dd0dee81-c421-43b8-8137-b56ad147be6a","Type":"ContainerStarted","Data":"724f838de0d941a145b259b5de2cd1940375ea3959dafd2e13ae81073e93484f"} Jan 29 06:49:36 crc kubenswrapper[5017]: I0129 06:49:36.275335 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-wwps8" event={"ID":"dd0dee81-c421-43b8-8137-b56ad147be6a","Type":"ContainerStarted","Data":"00bc863c5bb62e52017490c6deb96775078573f847c8ae3af5827cf3ed09fee4"} Jan 29 06:49:36 crc kubenswrapper[5017]: I0129 06:49:36.293202 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-wwps8" podStartSLOduration=2.875705892 podStartE2EDuration="3.293177665s" podCreationTimestamp="2026-01-29 06:49:33 +0000 UTC" firstStartedPulling="2026-01-29 06:49:34.731996181 +0000 UTC m=+861.106443791" lastFinishedPulling="2026-01-29 06:49:35.149467944 +0000 UTC m=+861.523915564" observedRunningTime="2026-01-29 06:49:36.293087003 +0000 UTC m=+862.667534613" watchObservedRunningTime="2026-01-29 06:49:36.293177665 +0000 UTC m=+862.667625285" Jan 29 06:49:36 crc kubenswrapper[5017]: I0129 06:49:36.324030 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5dd472bb-8962-446f-a584-b8ef0c607201" path="/var/lib/kubelet/pods/5dd472bb-8962-446f-a584-b8ef0c607201/volumes" Jan 29 06:49:43 crc kubenswrapper[5017]: I0129 06:49:43.913343 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2z2bt"] Jan 29 06:49:43 crc kubenswrapper[5017]: I0129 06:49:43.914949 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2z2bt" Jan 29 06:49:43 crc kubenswrapper[5017]: I0129 06:49:43.919872 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2z2bt"] Jan 29 06:49:43 crc kubenswrapper[5017]: I0129 06:49:43.978501 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/051b1044-83b6-48d0-87cd-cdbcb483176e-utilities\") pod \"community-operators-2z2bt\" (UID: \"051b1044-83b6-48d0-87cd-cdbcb483176e\") " pod="openshift-marketplace/community-operators-2z2bt" Jan 29 06:49:43 crc kubenswrapper[5017]: I0129 06:49:43.978613 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m495q\" (UniqueName: \"kubernetes.io/projected/051b1044-83b6-48d0-87cd-cdbcb483176e-kube-api-access-m495q\") pod \"community-operators-2z2bt\" (UID: \"051b1044-83b6-48d0-87cd-cdbcb483176e\") " pod="openshift-marketplace/community-operators-2z2bt" Jan 29 06:49:43 crc kubenswrapper[5017]: I0129 06:49:43.978749 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/051b1044-83b6-48d0-87cd-cdbcb483176e-catalog-content\") pod \"community-operators-2z2bt\" (UID: \"051b1044-83b6-48d0-87cd-cdbcb483176e\") " pod="openshift-marketplace/community-operators-2z2bt" Jan 29 06:49:44 crc kubenswrapper[5017]: I0129 06:49:44.080161 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/051b1044-83b6-48d0-87cd-cdbcb483176e-catalog-content\") pod \"community-operators-2z2bt\" (UID: \"051b1044-83b6-48d0-87cd-cdbcb483176e\") " pod="openshift-marketplace/community-operators-2z2bt" Jan 29 06:49:44 crc kubenswrapper[5017]: I0129 06:49:44.080324 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/051b1044-83b6-48d0-87cd-cdbcb483176e-utilities\") pod \"community-operators-2z2bt\" (UID: \"051b1044-83b6-48d0-87cd-cdbcb483176e\") " pod="openshift-marketplace/community-operators-2z2bt" Jan 29 06:49:44 crc kubenswrapper[5017]: I0129 06:49:44.080373 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m495q\" (UniqueName: \"kubernetes.io/projected/051b1044-83b6-48d0-87cd-cdbcb483176e-kube-api-access-m495q\") pod \"community-operators-2z2bt\" (UID: \"051b1044-83b6-48d0-87cd-cdbcb483176e\") " pod="openshift-marketplace/community-operators-2z2bt" Jan 29 06:49:44 crc kubenswrapper[5017]: I0129 06:49:44.080732 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/051b1044-83b6-48d0-87cd-cdbcb483176e-catalog-content\") pod \"community-operators-2z2bt\" (UID: \"051b1044-83b6-48d0-87cd-cdbcb483176e\") " pod="openshift-marketplace/community-operators-2z2bt" Jan 29 06:49:44 crc kubenswrapper[5017]: I0129 06:49:44.080817 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/051b1044-83b6-48d0-87cd-cdbcb483176e-utilities\") pod \"community-operators-2z2bt\" (UID: \"051b1044-83b6-48d0-87cd-cdbcb483176e\") " pod="openshift-marketplace/community-operators-2z2bt" Jan 29 06:49:44 crc kubenswrapper[5017]: I0129 06:49:44.101033 5017 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-m495q\" (UniqueName: \"kubernetes.io/projected/051b1044-83b6-48d0-87cd-cdbcb483176e-kube-api-access-m495q\") pod \"community-operators-2z2bt\" (UID: \"051b1044-83b6-48d0-87cd-cdbcb483176e\") " pod="openshift-marketplace/community-operators-2z2bt" Jan 29 06:49:44 crc kubenswrapper[5017]: I0129 06:49:44.215817 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-wwps8" Jan 29 06:49:44 crc kubenswrapper[5017]: I0129 06:49:44.216182 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-wwps8" Jan 29 06:49:44 crc kubenswrapper[5017]: I0129 06:49:44.241147 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2z2bt" Jan 29 06:49:44 crc kubenswrapper[5017]: I0129 06:49:44.259054 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-wwps8" Jan 29 06:49:44 crc kubenswrapper[5017]: I0129 06:49:44.428622 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-wwps8" Jan 29 06:49:44 crc kubenswrapper[5017]: I0129 06:49:44.803534 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2z2bt"] Jan 29 06:49:44 crc kubenswrapper[5017]: W0129 06:49:44.805632 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod051b1044_83b6_48d0_87cd_cdbcb483176e.slice/crio-16610928e22b3d39078e64c02edfbf361bb898b574e9d852baac57beacffe8d4 WatchSource:0}: Error finding container 16610928e22b3d39078e64c02edfbf361bb898b574e9d852baac57beacffe8d4: Status 404 returned error can't find the container with id 16610928e22b3d39078e64c02edfbf361bb898b574e9d852baac57beacffe8d4 Jan 29 06:49:45 crc kubenswrapper[5017]: I0129 06:49:45.341671 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/da00ab39b03ae6839a9374e1f0e5cf4d34a49b875de165714f9039924ds7cc6"] Jan 29 06:49:45 crc kubenswrapper[5017]: I0129 06:49:45.343082 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/da00ab39b03ae6839a9374e1f0e5cf4d34a49b875de165714f9039924ds7cc6" Jan 29 06:49:45 crc kubenswrapper[5017]: I0129 06:49:45.345371 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-lf6jd" Jan 29 06:49:45 crc kubenswrapper[5017]: I0129 06:49:45.354110 5017 generic.go:334] "Generic (PLEG): container finished" podID="051b1044-83b6-48d0-87cd-cdbcb483176e" containerID="436287f590b5f4eb8059b14df99b850388d672691a17b30d2b7902e21bc519e9" exitCode=0 Jan 29 06:49:45 crc kubenswrapper[5017]: I0129 06:49:45.355409 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2z2bt" event={"ID":"051b1044-83b6-48d0-87cd-cdbcb483176e","Type":"ContainerDied","Data":"436287f590b5f4eb8059b14df99b850388d672691a17b30d2b7902e21bc519e9"} Jan 29 06:49:45 crc kubenswrapper[5017]: I0129 06:49:45.355445 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2z2bt" event={"ID":"051b1044-83b6-48d0-87cd-cdbcb483176e","Type":"ContainerStarted","Data":"16610928e22b3d39078e64c02edfbf361bb898b574e9d852baac57beacffe8d4"} Jan 29 06:49:45 crc kubenswrapper[5017]: I0129 06:49:45.374517 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/da00ab39b03ae6839a9374e1f0e5cf4d34a49b875de165714f9039924ds7cc6"] Jan 29 06:49:45 crc kubenswrapper[5017]: I0129 06:49:45.402043 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3c0fb80a-a7e7-4978-8840-4307bc2529e3-bundle\") pod \"da00ab39b03ae6839a9374e1f0e5cf4d34a49b875de165714f9039924ds7cc6\" (UID: \"3c0fb80a-a7e7-4978-8840-4307bc2529e3\") " pod="openstack-operators/da00ab39b03ae6839a9374e1f0e5cf4d34a49b875de165714f9039924ds7cc6" Jan 29 06:49:45 crc kubenswrapper[5017]: I0129 06:49:45.402128 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bvqg\" (UniqueName: \"kubernetes.io/projected/3c0fb80a-a7e7-4978-8840-4307bc2529e3-kube-api-access-6bvqg\") pod \"da00ab39b03ae6839a9374e1f0e5cf4d34a49b875de165714f9039924ds7cc6\" (UID: \"3c0fb80a-a7e7-4978-8840-4307bc2529e3\") " pod="openstack-operators/da00ab39b03ae6839a9374e1f0e5cf4d34a49b875de165714f9039924ds7cc6" Jan 29 06:49:45 crc kubenswrapper[5017]: I0129 06:49:45.402271 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3c0fb80a-a7e7-4978-8840-4307bc2529e3-util\") pod \"da00ab39b03ae6839a9374e1f0e5cf4d34a49b875de165714f9039924ds7cc6\" (UID: \"3c0fb80a-a7e7-4978-8840-4307bc2529e3\") " pod="openstack-operators/da00ab39b03ae6839a9374e1f0e5cf4d34a49b875de165714f9039924ds7cc6" Jan 29 06:49:45 crc kubenswrapper[5017]: I0129 06:49:45.503883 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3c0fb80a-a7e7-4978-8840-4307bc2529e3-util\") pod \"da00ab39b03ae6839a9374e1f0e5cf4d34a49b875de165714f9039924ds7cc6\" (UID: \"3c0fb80a-a7e7-4978-8840-4307bc2529e3\") " pod="openstack-operators/da00ab39b03ae6839a9374e1f0e5cf4d34a49b875de165714f9039924ds7cc6" Jan 29 06:49:45 crc kubenswrapper[5017]: I0129 06:49:45.503984 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3c0fb80a-a7e7-4978-8840-4307bc2529e3-bundle\") pod 
\"da00ab39b03ae6839a9374e1f0e5cf4d34a49b875de165714f9039924ds7cc6\" (UID: \"3c0fb80a-a7e7-4978-8840-4307bc2529e3\") " pod="openstack-operators/da00ab39b03ae6839a9374e1f0e5cf4d34a49b875de165714f9039924ds7cc6" Jan 29 06:49:45 crc kubenswrapper[5017]: I0129 06:49:45.504027 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bvqg\" (UniqueName: \"kubernetes.io/projected/3c0fb80a-a7e7-4978-8840-4307bc2529e3-kube-api-access-6bvqg\") pod \"da00ab39b03ae6839a9374e1f0e5cf4d34a49b875de165714f9039924ds7cc6\" (UID: \"3c0fb80a-a7e7-4978-8840-4307bc2529e3\") " pod="openstack-operators/da00ab39b03ae6839a9374e1f0e5cf4d34a49b875de165714f9039924ds7cc6" Jan 29 06:49:45 crc kubenswrapper[5017]: I0129 06:49:45.504496 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3c0fb80a-a7e7-4978-8840-4307bc2529e3-util\") pod \"da00ab39b03ae6839a9374e1f0e5cf4d34a49b875de165714f9039924ds7cc6\" (UID: \"3c0fb80a-a7e7-4978-8840-4307bc2529e3\") " pod="openstack-operators/da00ab39b03ae6839a9374e1f0e5cf4d34a49b875de165714f9039924ds7cc6" Jan 29 06:49:45 crc kubenswrapper[5017]: I0129 06:49:45.504915 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3c0fb80a-a7e7-4978-8840-4307bc2529e3-bundle\") pod \"da00ab39b03ae6839a9374e1f0e5cf4d34a49b875de165714f9039924ds7cc6\" (UID: \"3c0fb80a-a7e7-4978-8840-4307bc2529e3\") " pod="openstack-operators/da00ab39b03ae6839a9374e1f0e5cf4d34a49b875de165714f9039924ds7cc6" Jan 29 06:49:45 crc kubenswrapper[5017]: I0129 06:49:45.527585 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bvqg\" (UniqueName: \"kubernetes.io/projected/3c0fb80a-a7e7-4978-8840-4307bc2529e3-kube-api-access-6bvqg\") pod \"da00ab39b03ae6839a9374e1f0e5cf4d34a49b875de165714f9039924ds7cc6\" (UID: \"3c0fb80a-a7e7-4978-8840-4307bc2529e3\") " pod="openstack-operators/da00ab39b03ae6839a9374e1f0e5cf4d34a49b875de165714f9039924ds7cc6" Jan 29 06:49:45 crc kubenswrapper[5017]: I0129 06:49:45.666107 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/da00ab39b03ae6839a9374e1f0e5cf4d34a49b875de165714f9039924ds7cc6" Jan 29 06:49:46 crc kubenswrapper[5017]: I0129 06:49:46.144335 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/da00ab39b03ae6839a9374e1f0e5cf4d34a49b875de165714f9039924ds7cc6"] Jan 29 06:49:46 crc kubenswrapper[5017]: I0129 06:49:46.363023 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/da00ab39b03ae6839a9374e1f0e5cf4d34a49b875de165714f9039924ds7cc6" event={"ID":"3c0fb80a-a7e7-4978-8840-4307bc2529e3","Type":"ContainerStarted","Data":"83cebed55a3df0dff76861bc5d726cf11b1c513b105e195215b422b0dadc001f"} Jan 29 06:49:46 crc kubenswrapper[5017]: I0129 06:49:46.363087 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/da00ab39b03ae6839a9374e1f0e5cf4d34a49b875de165714f9039924ds7cc6" event={"ID":"3c0fb80a-a7e7-4978-8840-4307bc2529e3","Type":"ContainerStarted","Data":"3818ad971b28293ac9e4188343ab8b2dca0556e297895614b876d80d458ed2ad"} Jan 29 06:49:46 crc kubenswrapper[5017]: I0129 06:49:46.367735 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2z2bt" event={"ID":"051b1044-83b6-48d0-87cd-cdbcb483176e","Type":"ContainerDied","Data":"5e10aefa58d728dadf0f7a2ac0c00bff7950681bc34750e6dc5b45c7400bc1e1"} Jan 29 06:49:46 crc kubenswrapper[5017]: I0129 06:49:46.367539 5017 generic.go:334] "Generic (PLEG): container finished" podID="051b1044-83b6-48d0-87cd-cdbcb483176e" containerID="5e10aefa58d728dadf0f7a2ac0c00bff7950681bc34750e6dc5b45c7400bc1e1" exitCode=0 Jan 29 06:49:47 crc kubenswrapper[5017]: I0129 06:49:47.388660 5017 generic.go:334] "Generic (PLEG): container finished" podID="3c0fb80a-a7e7-4978-8840-4307bc2529e3" containerID="83cebed55a3df0dff76861bc5d726cf11b1c513b105e195215b422b0dadc001f" exitCode=0 Jan 29 06:49:47 crc kubenswrapper[5017]: I0129 06:49:47.389138 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/da00ab39b03ae6839a9374e1f0e5cf4d34a49b875de165714f9039924ds7cc6" event={"ID":"3c0fb80a-a7e7-4978-8840-4307bc2529e3","Type":"ContainerDied","Data":"83cebed55a3df0dff76861bc5d726cf11b1c513b105e195215b422b0dadc001f"} Jan 29 06:49:47 crc kubenswrapper[5017]: I0129 06:49:47.398516 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2z2bt" event={"ID":"051b1044-83b6-48d0-87cd-cdbcb483176e","Type":"ContainerStarted","Data":"c4141620eb52e01ae03c4af2885c223199ff7c3b888e0113387939df99c623af"} Jan 29 06:49:47 crc kubenswrapper[5017]: I0129 06:49:47.441620 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2z2bt" podStartSLOduration=2.879903197 podStartE2EDuration="4.441597334s" podCreationTimestamp="2026-01-29 06:49:43 +0000 UTC" firstStartedPulling="2026-01-29 06:49:45.362782983 +0000 UTC m=+871.737230593" lastFinishedPulling="2026-01-29 06:49:46.92447712 +0000 UTC m=+873.298924730" observedRunningTime="2026-01-29 06:49:47.435342618 +0000 UTC m=+873.809790228" watchObservedRunningTime="2026-01-29 06:49:47.441597334 +0000 UTC m=+873.816044944" Jan 29 06:49:49 crc kubenswrapper[5017]: I0129 06:49:49.421065 5017 generic.go:334] "Generic (PLEG): container finished" podID="3c0fb80a-a7e7-4978-8840-4307bc2529e3" containerID="d202354d68be9043a976515826f2f5e319c0db38c14625253a9a736c960896cd" exitCode=0 Jan 29 06:49:49 crc kubenswrapper[5017]: I0129 06:49:49.421218 5017 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack-operators/da00ab39b03ae6839a9374e1f0e5cf4d34a49b875de165714f9039924ds7cc6" event={"ID":"3c0fb80a-a7e7-4978-8840-4307bc2529e3","Type":"ContainerDied","Data":"d202354d68be9043a976515826f2f5e319c0db38c14625253a9a736c960896cd"} Jan 29 06:49:50 crc kubenswrapper[5017]: I0129 06:49:50.434182 5017 generic.go:334] "Generic (PLEG): container finished" podID="3c0fb80a-a7e7-4978-8840-4307bc2529e3" containerID="42e9da05354be842f9310567b091b993385a565bf4efccf8f01fb03bcb44f559" exitCode=0 Jan 29 06:49:50 crc kubenswrapper[5017]: I0129 06:49:50.434291 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/da00ab39b03ae6839a9374e1f0e5cf4d34a49b875de165714f9039924ds7cc6" event={"ID":"3c0fb80a-a7e7-4978-8840-4307bc2529e3","Type":"ContainerDied","Data":"42e9da05354be842f9310567b091b993385a565bf4efccf8f01fb03bcb44f559"} Jan 29 06:49:51 crc kubenswrapper[5017]: I0129 06:49:51.735630 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/da00ab39b03ae6839a9374e1f0e5cf4d34a49b875de165714f9039924ds7cc6" Jan 29 06:49:51 crc kubenswrapper[5017]: I0129 06:49:51.817753 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3c0fb80a-a7e7-4978-8840-4307bc2529e3-util\") pod \"3c0fb80a-a7e7-4978-8840-4307bc2529e3\" (UID: \"3c0fb80a-a7e7-4978-8840-4307bc2529e3\") " Jan 29 06:49:51 crc kubenswrapper[5017]: I0129 06:49:51.817849 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6bvqg\" (UniqueName: \"kubernetes.io/projected/3c0fb80a-a7e7-4978-8840-4307bc2529e3-kube-api-access-6bvqg\") pod \"3c0fb80a-a7e7-4978-8840-4307bc2529e3\" (UID: \"3c0fb80a-a7e7-4978-8840-4307bc2529e3\") " Jan 29 06:49:51 crc kubenswrapper[5017]: I0129 06:49:51.817873 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3c0fb80a-a7e7-4978-8840-4307bc2529e3-bundle\") pod \"3c0fb80a-a7e7-4978-8840-4307bc2529e3\" (UID: \"3c0fb80a-a7e7-4978-8840-4307bc2529e3\") " Jan 29 06:49:51 crc kubenswrapper[5017]: I0129 06:49:51.819573 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c0fb80a-a7e7-4978-8840-4307bc2529e3-bundle" (OuterVolumeSpecName: "bundle") pod "3c0fb80a-a7e7-4978-8840-4307bc2529e3" (UID: "3c0fb80a-a7e7-4978-8840-4307bc2529e3"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:49:51 crc kubenswrapper[5017]: I0129 06:49:51.825913 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c0fb80a-a7e7-4978-8840-4307bc2529e3-kube-api-access-6bvqg" (OuterVolumeSpecName: "kube-api-access-6bvqg") pod "3c0fb80a-a7e7-4978-8840-4307bc2529e3" (UID: "3c0fb80a-a7e7-4978-8840-4307bc2529e3"). InnerVolumeSpecName "kube-api-access-6bvqg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:49:51 crc kubenswrapper[5017]: I0129 06:49:51.919321 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6bvqg\" (UniqueName: \"kubernetes.io/projected/3c0fb80a-a7e7-4978-8840-4307bc2529e3-kube-api-access-6bvqg\") on node \"crc\" DevicePath \"\"" Jan 29 06:49:51 crc kubenswrapper[5017]: I0129 06:49:51.919362 5017 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3c0fb80a-a7e7-4978-8840-4307bc2529e3-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 06:49:52 crc kubenswrapper[5017]: I0129 06:49:52.454222 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/da00ab39b03ae6839a9374e1f0e5cf4d34a49b875de165714f9039924ds7cc6" event={"ID":"3c0fb80a-a7e7-4978-8840-4307bc2529e3","Type":"ContainerDied","Data":"3818ad971b28293ac9e4188343ab8b2dca0556e297895614b876d80d458ed2ad"} Jan 29 06:49:52 crc kubenswrapper[5017]: I0129 06:49:52.454280 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3818ad971b28293ac9e4188343ab8b2dca0556e297895614b876d80d458ed2ad" Jan 29 06:49:52 crc kubenswrapper[5017]: I0129 06:49:52.454302 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/da00ab39b03ae6839a9374e1f0e5cf4d34a49b875de165714f9039924ds7cc6" Jan 29 06:49:52 crc kubenswrapper[5017]: I0129 06:49:52.525765 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c0fb80a-a7e7-4978-8840-4307bc2529e3-util" (OuterVolumeSpecName: "util") pod "3c0fb80a-a7e7-4978-8840-4307bc2529e3" (UID: "3c0fb80a-a7e7-4978-8840-4307bc2529e3"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:49:52 crc kubenswrapper[5017]: I0129 06:49:52.528219 5017 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3c0fb80a-a7e7-4978-8840-4307bc2529e3-util\") on node \"crc\" DevicePath \"\"" Jan 29 06:49:54 crc kubenswrapper[5017]: I0129 06:49:54.241448 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2z2bt" Jan 29 06:49:54 crc kubenswrapper[5017]: I0129 06:49:54.241788 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-2z2bt" Jan 29 06:49:54 crc kubenswrapper[5017]: I0129 06:49:54.280312 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2z2bt" Jan 29 06:49:54 crc kubenswrapper[5017]: I0129 06:49:54.510665 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2z2bt" Jan 29 06:49:55 crc kubenswrapper[5017]: I0129 06:49:55.438996 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-5c4cd4c8c8-qggc5"] Jan 29 06:49:55 crc kubenswrapper[5017]: E0129 06:49:55.439326 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c0fb80a-a7e7-4978-8840-4307bc2529e3" containerName="extract" Jan 29 06:49:55 crc kubenswrapper[5017]: I0129 06:49:55.439342 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c0fb80a-a7e7-4978-8840-4307bc2529e3" containerName="extract" Jan 29 06:49:55 crc kubenswrapper[5017]: E0129 06:49:55.439361 5017 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="3c0fb80a-a7e7-4978-8840-4307bc2529e3" containerName="util" Jan 29 06:49:55 crc kubenswrapper[5017]: I0129 06:49:55.439369 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c0fb80a-a7e7-4978-8840-4307bc2529e3" containerName="util" Jan 29 06:49:55 crc kubenswrapper[5017]: E0129 06:49:55.439380 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c0fb80a-a7e7-4978-8840-4307bc2529e3" containerName="pull" Jan 29 06:49:55 crc kubenswrapper[5017]: I0129 06:49:55.439386 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c0fb80a-a7e7-4978-8840-4307bc2529e3" containerName="pull" Jan 29 06:49:55 crc kubenswrapper[5017]: I0129 06:49:55.439503 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c0fb80a-a7e7-4978-8840-4307bc2529e3" containerName="extract" Jan 29 06:49:55 crc kubenswrapper[5017]: I0129 06:49:55.439971 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-5c4cd4c8c8-qggc5" Jan 29 06:49:55 crc kubenswrapper[5017]: I0129 06:49:55.443945 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-kg4lk" Jan 29 06:49:55 crc kubenswrapper[5017]: I0129 06:49:55.507730 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mph6j\" (UniqueName: \"kubernetes.io/projected/42582d12-6d4b-43cc-b843-7c425d6dbdf3-kube-api-access-mph6j\") pod \"openstack-operator-controller-init-5c4cd4c8c8-qggc5\" (UID: \"42582d12-6d4b-43cc-b843-7c425d6dbdf3\") " pod="openstack-operators/openstack-operator-controller-init-5c4cd4c8c8-qggc5" Jan 29 06:49:55 crc kubenswrapper[5017]: I0129 06:49:55.545120 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-5c4cd4c8c8-qggc5"] Jan 29 06:49:55 crc kubenswrapper[5017]: I0129 06:49:55.609370 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mph6j\" (UniqueName: \"kubernetes.io/projected/42582d12-6d4b-43cc-b843-7c425d6dbdf3-kube-api-access-mph6j\") pod \"openstack-operator-controller-init-5c4cd4c8c8-qggc5\" (UID: \"42582d12-6d4b-43cc-b843-7c425d6dbdf3\") " pod="openstack-operators/openstack-operator-controller-init-5c4cd4c8c8-qggc5" Jan 29 06:49:55 crc kubenswrapper[5017]: I0129 06:49:55.628875 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mph6j\" (UniqueName: \"kubernetes.io/projected/42582d12-6d4b-43cc-b843-7c425d6dbdf3-kube-api-access-mph6j\") pod \"openstack-operator-controller-init-5c4cd4c8c8-qggc5\" (UID: \"42582d12-6d4b-43cc-b843-7c425d6dbdf3\") " pod="openstack-operators/openstack-operator-controller-init-5c4cd4c8c8-qggc5" Jan 29 06:49:55 crc kubenswrapper[5017]: I0129 06:49:55.757106 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-5c4cd4c8c8-qggc5" Jan 29 06:49:56 crc kubenswrapper[5017]: I0129 06:49:56.351267 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-5c4cd4c8c8-qggc5"] Jan 29 06:49:56 crc kubenswrapper[5017]: I0129 06:49:56.481701 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-5c4cd4c8c8-qggc5" event={"ID":"42582d12-6d4b-43cc-b843-7c425d6dbdf3","Type":"ContainerStarted","Data":"372d45bd27cc73016ad0d506bbd688d253808eba74c02f5968ec4866b65e13b4"} Jan 29 06:49:56 crc kubenswrapper[5017]: I0129 06:49:56.693033 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2z2bt"] Jan 29 06:49:56 crc kubenswrapper[5017]: I0129 06:49:56.693328 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-2z2bt" podUID="051b1044-83b6-48d0-87cd-cdbcb483176e" containerName="registry-server" containerID="cri-o://c4141620eb52e01ae03c4af2885c223199ff7c3b888e0113387939df99c623af" gracePeriod=2 Jan 29 06:49:57 crc kubenswrapper[5017]: I0129 06:49:57.127791 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2z2bt" Jan 29 06:49:57 crc kubenswrapper[5017]: I0129 06:49:57.234864 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/051b1044-83b6-48d0-87cd-cdbcb483176e-utilities\") pod \"051b1044-83b6-48d0-87cd-cdbcb483176e\" (UID: \"051b1044-83b6-48d0-87cd-cdbcb483176e\") " Jan 29 06:49:57 crc kubenswrapper[5017]: I0129 06:49:57.235364 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/051b1044-83b6-48d0-87cd-cdbcb483176e-catalog-content\") pod \"051b1044-83b6-48d0-87cd-cdbcb483176e\" (UID: \"051b1044-83b6-48d0-87cd-cdbcb483176e\") " Jan 29 06:49:57 crc kubenswrapper[5017]: I0129 06:49:57.235394 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m495q\" (UniqueName: \"kubernetes.io/projected/051b1044-83b6-48d0-87cd-cdbcb483176e-kube-api-access-m495q\") pod \"051b1044-83b6-48d0-87cd-cdbcb483176e\" (UID: \"051b1044-83b6-48d0-87cd-cdbcb483176e\") " Jan 29 06:49:57 crc kubenswrapper[5017]: I0129 06:49:57.235936 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/051b1044-83b6-48d0-87cd-cdbcb483176e-utilities" (OuterVolumeSpecName: "utilities") pod "051b1044-83b6-48d0-87cd-cdbcb483176e" (UID: "051b1044-83b6-48d0-87cd-cdbcb483176e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:49:57 crc kubenswrapper[5017]: I0129 06:49:57.242476 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/051b1044-83b6-48d0-87cd-cdbcb483176e-kube-api-access-m495q" (OuterVolumeSpecName: "kube-api-access-m495q") pod "051b1044-83b6-48d0-87cd-cdbcb483176e" (UID: "051b1044-83b6-48d0-87cd-cdbcb483176e"). InnerVolumeSpecName "kube-api-access-m495q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:49:57 crc kubenswrapper[5017]: I0129 06:49:57.294114 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/051b1044-83b6-48d0-87cd-cdbcb483176e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "051b1044-83b6-48d0-87cd-cdbcb483176e" (UID: "051b1044-83b6-48d0-87cd-cdbcb483176e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:49:57 crc kubenswrapper[5017]: I0129 06:49:57.337388 5017 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/051b1044-83b6-48d0-87cd-cdbcb483176e-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 06:49:57 crc kubenswrapper[5017]: I0129 06:49:57.337428 5017 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/051b1044-83b6-48d0-87cd-cdbcb483176e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 06:49:57 crc kubenswrapper[5017]: I0129 06:49:57.337439 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m495q\" (UniqueName: \"kubernetes.io/projected/051b1044-83b6-48d0-87cd-cdbcb483176e-kube-api-access-m495q\") on node \"crc\" DevicePath \"\"" Jan 29 06:49:57 crc kubenswrapper[5017]: I0129 06:49:57.493214 5017 generic.go:334] "Generic (PLEG): container finished" podID="051b1044-83b6-48d0-87cd-cdbcb483176e" containerID="c4141620eb52e01ae03c4af2885c223199ff7c3b888e0113387939df99c623af" exitCode=0 Jan 29 06:49:57 crc kubenswrapper[5017]: I0129 06:49:57.493287 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2z2bt" event={"ID":"051b1044-83b6-48d0-87cd-cdbcb483176e","Type":"ContainerDied","Data":"c4141620eb52e01ae03c4af2885c223199ff7c3b888e0113387939df99c623af"} Jan 29 06:49:57 crc kubenswrapper[5017]: I0129 06:49:57.493333 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2z2bt" Jan 29 06:49:57 crc kubenswrapper[5017]: I0129 06:49:57.493352 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2z2bt" event={"ID":"051b1044-83b6-48d0-87cd-cdbcb483176e","Type":"ContainerDied","Data":"16610928e22b3d39078e64c02edfbf361bb898b574e9d852baac57beacffe8d4"} Jan 29 06:49:57 crc kubenswrapper[5017]: I0129 06:49:57.493369 5017 scope.go:117] "RemoveContainer" containerID="c4141620eb52e01ae03c4af2885c223199ff7c3b888e0113387939df99c623af" Jan 29 06:49:57 crc kubenswrapper[5017]: I0129 06:49:57.522230 5017 scope.go:117] "RemoveContainer" containerID="5e10aefa58d728dadf0f7a2ac0c00bff7950681bc34750e6dc5b45c7400bc1e1" Jan 29 06:49:57 crc kubenswrapper[5017]: I0129 06:49:57.524665 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2z2bt"] Jan 29 06:49:57 crc kubenswrapper[5017]: I0129 06:49:57.531453 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-2z2bt"] Jan 29 06:49:57 crc kubenswrapper[5017]: I0129 06:49:57.550548 5017 scope.go:117] "RemoveContainer" containerID="436287f590b5f4eb8059b14df99b850388d672691a17b30d2b7902e21bc519e9" Jan 29 06:49:57 crc kubenswrapper[5017]: I0129 06:49:57.576118 5017 scope.go:117] "RemoveContainer" containerID="c4141620eb52e01ae03c4af2885c223199ff7c3b888e0113387939df99c623af" Jan 29 06:49:57 crc kubenswrapper[5017]: E0129 06:49:57.576509 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c4141620eb52e01ae03c4af2885c223199ff7c3b888e0113387939df99c623af\": container with ID starting with c4141620eb52e01ae03c4af2885c223199ff7c3b888e0113387939df99c623af not found: ID does not exist" containerID="c4141620eb52e01ae03c4af2885c223199ff7c3b888e0113387939df99c623af" Jan 29 06:49:57 crc kubenswrapper[5017]: I0129 06:49:57.576540 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4141620eb52e01ae03c4af2885c223199ff7c3b888e0113387939df99c623af"} err="failed to get container status \"c4141620eb52e01ae03c4af2885c223199ff7c3b888e0113387939df99c623af\": rpc error: code = NotFound desc = could not find container \"c4141620eb52e01ae03c4af2885c223199ff7c3b888e0113387939df99c623af\": container with ID starting with c4141620eb52e01ae03c4af2885c223199ff7c3b888e0113387939df99c623af not found: ID does not exist" Jan 29 06:49:57 crc kubenswrapper[5017]: I0129 06:49:57.576564 5017 scope.go:117] "RemoveContainer" containerID="5e10aefa58d728dadf0f7a2ac0c00bff7950681bc34750e6dc5b45c7400bc1e1" Jan 29 06:49:57 crc kubenswrapper[5017]: E0129 06:49:57.576744 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e10aefa58d728dadf0f7a2ac0c00bff7950681bc34750e6dc5b45c7400bc1e1\": container with ID starting with 5e10aefa58d728dadf0f7a2ac0c00bff7950681bc34750e6dc5b45c7400bc1e1 not found: ID does not exist" containerID="5e10aefa58d728dadf0f7a2ac0c00bff7950681bc34750e6dc5b45c7400bc1e1" Jan 29 06:49:57 crc kubenswrapper[5017]: I0129 06:49:57.576769 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e10aefa58d728dadf0f7a2ac0c00bff7950681bc34750e6dc5b45c7400bc1e1"} err="failed to get container status \"5e10aefa58d728dadf0f7a2ac0c00bff7950681bc34750e6dc5b45c7400bc1e1\": rpc error: code = NotFound desc = could not find 
container \"5e10aefa58d728dadf0f7a2ac0c00bff7950681bc34750e6dc5b45c7400bc1e1\": container with ID starting with 5e10aefa58d728dadf0f7a2ac0c00bff7950681bc34750e6dc5b45c7400bc1e1 not found: ID does not exist" Jan 29 06:49:57 crc kubenswrapper[5017]: I0129 06:49:57.576783 5017 scope.go:117] "RemoveContainer" containerID="436287f590b5f4eb8059b14df99b850388d672691a17b30d2b7902e21bc519e9" Jan 29 06:49:57 crc kubenswrapper[5017]: E0129 06:49:57.577069 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"436287f590b5f4eb8059b14df99b850388d672691a17b30d2b7902e21bc519e9\": container with ID starting with 436287f590b5f4eb8059b14df99b850388d672691a17b30d2b7902e21bc519e9 not found: ID does not exist" containerID="436287f590b5f4eb8059b14df99b850388d672691a17b30d2b7902e21bc519e9" Jan 29 06:49:57 crc kubenswrapper[5017]: I0129 06:49:57.577094 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"436287f590b5f4eb8059b14df99b850388d672691a17b30d2b7902e21bc519e9"} err="failed to get container status \"436287f590b5f4eb8059b14df99b850388d672691a17b30d2b7902e21bc519e9\": rpc error: code = NotFound desc = could not find container \"436287f590b5f4eb8059b14df99b850388d672691a17b30d2b7902e21bc519e9\": container with ID starting with 436287f590b5f4eb8059b14df99b850388d672691a17b30d2b7902e21bc519e9 not found: ID does not exist" Jan 29 06:49:58 crc kubenswrapper[5017]: I0129 06:49:58.332560 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="051b1044-83b6-48d0-87cd-cdbcb483176e" path="/var/lib/kubelet/pods/051b1044-83b6-48d0-87cd-cdbcb483176e/volumes" Jan 29 06:50:02 crc kubenswrapper[5017]: I0129 06:50:02.535186 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-5c4cd4c8c8-qggc5" event={"ID":"42582d12-6d4b-43cc-b843-7c425d6dbdf3","Type":"ContainerStarted","Data":"cfc974fa8a6e12925e93c93be4b0c0df5f34c10c35e3cbf248a2d30335a138ec"} Jan 29 06:50:02 crc kubenswrapper[5017]: I0129 06:50:02.537537 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-5c4cd4c8c8-qggc5" Jan 29 06:50:02 crc kubenswrapper[5017]: I0129 06:50:02.566825 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-5c4cd4c8c8-qggc5" podStartSLOduration=2.499660903 podStartE2EDuration="7.566803317s" podCreationTimestamp="2026-01-29 06:49:55 +0000 UTC" firstStartedPulling="2026-01-29 06:49:56.361031767 +0000 UTC m=+882.735479377" lastFinishedPulling="2026-01-29 06:50:01.428174181 +0000 UTC m=+887.802621791" observedRunningTime="2026-01-29 06:50:02.566720565 +0000 UTC m=+888.941168175" watchObservedRunningTime="2026-01-29 06:50:02.566803317 +0000 UTC m=+888.941250927" Jan 29 06:50:15 crc kubenswrapper[5017]: I0129 06:50:15.762062 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-5c4cd4c8c8-qggc5" Jan 29 06:50:22 crc kubenswrapper[5017]: I0129 06:50:22.382764 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-j4vwv"] Jan 29 06:50:22 crc kubenswrapper[5017]: E0129 06:50:22.384319 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="051b1044-83b6-48d0-87cd-cdbcb483176e" containerName="registry-server" Jan 29 06:50:22 crc kubenswrapper[5017]: I0129 
06:50:22.384346 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="051b1044-83b6-48d0-87cd-cdbcb483176e" containerName="registry-server" Jan 29 06:50:22 crc kubenswrapper[5017]: E0129 06:50:22.384392 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="051b1044-83b6-48d0-87cd-cdbcb483176e" containerName="extract-content" Jan 29 06:50:22 crc kubenswrapper[5017]: I0129 06:50:22.384404 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="051b1044-83b6-48d0-87cd-cdbcb483176e" containerName="extract-content" Jan 29 06:50:22 crc kubenswrapper[5017]: E0129 06:50:22.384424 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="051b1044-83b6-48d0-87cd-cdbcb483176e" containerName="extract-utilities" Jan 29 06:50:22 crc kubenswrapper[5017]: I0129 06:50:22.384436 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="051b1044-83b6-48d0-87cd-cdbcb483176e" containerName="extract-utilities" Jan 29 06:50:22 crc kubenswrapper[5017]: I0129 06:50:22.384672 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="051b1044-83b6-48d0-87cd-cdbcb483176e" containerName="registry-server" Jan 29 06:50:22 crc kubenswrapper[5017]: I0129 06:50:22.388288 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-j4vwv" Jan 29 06:50:22 crc kubenswrapper[5017]: I0129 06:50:22.401040 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-j4vwv"] Jan 29 06:50:22 crc kubenswrapper[5017]: I0129 06:50:22.474767 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nhmw\" (UniqueName: \"kubernetes.io/projected/87b058e1-dc0b-4bcc-9ddf-3830cda03980-kube-api-access-8nhmw\") pod \"certified-operators-j4vwv\" (UID: \"87b058e1-dc0b-4bcc-9ddf-3830cda03980\") " pod="openshift-marketplace/certified-operators-j4vwv" Jan 29 06:50:22 crc kubenswrapper[5017]: I0129 06:50:22.474883 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87b058e1-dc0b-4bcc-9ddf-3830cda03980-utilities\") pod \"certified-operators-j4vwv\" (UID: \"87b058e1-dc0b-4bcc-9ddf-3830cda03980\") " pod="openshift-marketplace/certified-operators-j4vwv" Jan 29 06:50:22 crc kubenswrapper[5017]: I0129 06:50:22.475021 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87b058e1-dc0b-4bcc-9ddf-3830cda03980-catalog-content\") pod \"certified-operators-j4vwv\" (UID: \"87b058e1-dc0b-4bcc-9ddf-3830cda03980\") " pod="openshift-marketplace/certified-operators-j4vwv" Jan 29 06:50:22 crc kubenswrapper[5017]: I0129 06:50:22.575881 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87b058e1-dc0b-4bcc-9ddf-3830cda03980-utilities\") pod \"certified-operators-j4vwv\" (UID: \"87b058e1-dc0b-4bcc-9ddf-3830cda03980\") " pod="openshift-marketplace/certified-operators-j4vwv" Jan 29 06:50:22 crc kubenswrapper[5017]: I0129 06:50:22.576005 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87b058e1-dc0b-4bcc-9ddf-3830cda03980-catalog-content\") pod \"certified-operators-j4vwv\" (UID: \"87b058e1-dc0b-4bcc-9ddf-3830cda03980\") " pod="openshift-marketplace/certified-operators-j4vwv" Jan 29 
06:50:22 crc kubenswrapper[5017]: I0129 06:50:22.576063 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8nhmw\" (UniqueName: \"kubernetes.io/projected/87b058e1-dc0b-4bcc-9ddf-3830cda03980-kube-api-access-8nhmw\") pod \"certified-operators-j4vwv\" (UID: \"87b058e1-dc0b-4bcc-9ddf-3830cda03980\") " pod="openshift-marketplace/certified-operators-j4vwv" Jan 29 06:50:22 crc kubenswrapper[5017]: I0129 06:50:22.577080 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87b058e1-dc0b-4bcc-9ddf-3830cda03980-utilities\") pod \"certified-operators-j4vwv\" (UID: \"87b058e1-dc0b-4bcc-9ddf-3830cda03980\") " pod="openshift-marketplace/certified-operators-j4vwv" Jan 29 06:50:22 crc kubenswrapper[5017]: I0129 06:50:22.577471 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87b058e1-dc0b-4bcc-9ddf-3830cda03980-catalog-content\") pod \"certified-operators-j4vwv\" (UID: \"87b058e1-dc0b-4bcc-9ddf-3830cda03980\") " pod="openshift-marketplace/certified-operators-j4vwv" Jan 29 06:50:22 crc kubenswrapper[5017]: I0129 06:50:22.606178 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nhmw\" (UniqueName: \"kubernetes.io/projected/87b058e1-dc0b-4bcc-9ddf-3830cda03980-kube-api-access-8nhmw\") pod \"certified-operators-j4vwv\" (UID: \"87b058e1-dc0b-4bcc-9ddf-3830cda03980\") " pod="openshift-marketplace/certified-operators-j4vwv" Jan 29 06:50:22 crc kubenswrapper[5017]: I0129 06:50:22.723941 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-j4vwv" Jan 29 06:50:23 crc kubenswrapper[5017]: I0129 06:50:23.386667 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-j4vwv"] Jan 29 06:50:23 crc kubenswrapper[5017]: I0129 06:50:23.729814 5017 generic.go:334] "Generic (PLEG): container finished" podID="87b058e1-dc0b-4bcc-9ddf-3830cda03980" containerID="14d8d42d5657a1fc63b910e0f6c6b5c4fc8070211c8ea80c7ca1e6ff11a7ed5e" exitCode=0 Jan 29 06:50:23 crc kubenswrapper[5017]: I0129 06:50:23.729872 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j4vwv" event={"ID":"87b058e1-dc0b-4bcc-9ddf-3830cda03980","Type":"ContainerDied","Data":"14d8d42d5657a1fc63b910e0f6c6b5c4fc8070211c8ea80c7ca1e6ff11a7ed5e"} Jan 29 06:50:23 crc kubenswrapper[5017]: I0129 06:50:23.729910 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j4vwv" event={"ID":"87b058e1-dc0b-4bcc-9ddf-3830cda03980","Type":"ContainerStarted","Data":"3ac18b6ff1a9181cbb65e534b4d8a8a39e61962c8b239c92617609017ca2d80d"} Jan 29 06:50:24 crc kubenswrapper[5017]: I0129 06:50:24.738752 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j4vwv" event={"ID":"87b058e1-dc0b-4bcc-9ddf-3830cda03980","Type":"ContainerStarted","Data":"c12904aff80a283d346cdbe439800afba420cd91dca7f104229d29f871bce16e"} Jan 29 06:50:25 crc kubenswrapper[5017]: I0129 06:50:25.746736 5017 generic.go:334] "Generic (PLEG): container finished" podID="87b058e1-dc0b-4bcc-9ddf-3830cda03980" containerID="c12904aff80a283d346cdbe439800afba420cd91dca7f104229d29f871bce16e" exitCode=0 Jan 29 06:50:25 crc kubenswrapper[5017]: I0129 06:50:25.746825 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-j4vwv" event={"ID":"87b058e1-dc0b-4bcc-9ddf-3830cda03980","Type":"ContainerDied","Data":"c12904aff80a283d346cdbe439800afba420cd91dca7f104229d29f871bce16e"} Jan 29 06:50:26 crc kubenswrapper[5017]: I0129 06:50:26.539760 5017 patch_prober.go:28] interesting pod/machine-config-daemon-895pl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 06:50:26 crc kubenswrapper[5017]: I0129 06:50:26.540201 5017 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 06:50:27 crc kubenswrapper[5017]: I0129 06:50:27.764510 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j4vwv" event={"ID":"87b058e1-dc0b-4bcc-9ddf-3830cda03980","Type":"ContainerStarted","Data":"7e4d74c1d8d667f5c25c308108d50da462ad583a4b176a304a7c416171b0e654"} Jan 29 06:50:27 crc kubenswrapper[5017]: I0129 06:50:27.790666 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-j4vwv" podStartSLOduration=3.365811494 podStartE2EDuration="5.790642895s" podCreationTimestamp="2026-01-29 06:50:22 +0000 UTC" firstStartedPulling="2026-01-29 06:50:23.732527822 +0000 UTC m=+910.106975432" lastFinishedPulling="2026-01-29 06:50:26.157359203 +0000 UTC m=+912.531806833" observedRunningTime="2026-01-29 06:50:27.786242316 +0000 UTC m=+914.160689956" watchObservedRunningTime="2026-01-29 06:50:27.790642895 +0000 UTC m=+914.165090515" Jan 29 06:50:32 crc kubenswrapper[5017]: I0129 06:50:32.724552 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-j4vwv" Jan 29 06:50:32 crc kubenswrapper[5017]: I0129 06:50:32.724918 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-j4vwv" Jan 29 06:50:32 crc kubenswrapper[5017]: I0129 06:50:32.771182 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-j4vwv" Jan 29 06:50:32 crc kubenswrapper[5017]: I0129 06:50:32.898064 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-j4vwv" Jan 29 06:50:33 crc kubenswrapper[5017]: I0129 06:50:33.005246 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-j4vwv"] Jan 29 06:50:34 crc kubenswrapper[5017]: I0129 06:50:34.826675 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-j4vwv" podUID="87b058e1-dc0b-4bcc-9ddf-3830cda03980" containerName="registry-server" containerID="cri-o://7e4d74c1d8d667f5c25c308108d50da462ad583a4b176a304a7c416171b0e654" gracePeriod=2 Jan 29 06:50:34 crc kubenswrapper[5017]: I0129 06:50:34.919888 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-m9htc"] Jan 29 06:50:34 crc kubenswrapper[5017]: I0129 06:50:34.921121 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-m9htc" Jan 29 06:50:34 crc kubenswrapper[5017]: I0129 06:50:34.923639 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-5ncgm" Jan 29 06:50:34 crc kubenswrapper[5017]: I0129 06:50:34.924877 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-h4xwv"] Jan 29 06:50:34 crc kubenswrapper[5017]: I0129 06:50:34.930185 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-h4xwv" Jan 29 06:50:34 crc kubenswrapper[5017]: I0129 06:50:34.931718 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-r42kl" Jan 29 06:50:34 crc kubenswrapper[5017]: I0129 06:50:34.949716 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-4vthv"] Jan 29 06:50:34 crc kubenswrapper[5017]: I0129 06:50:34.950740 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-4vthv" Jan 29 06:50:34 crc kubenswrapper[5017]: I0129 06:50:34.992000 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-fc4cn" Jan 29 06:50:34 crc kubenswrapper[5017]: I0129 06:50:34.992846 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-h4xwv"] Jan 29 06:50:34 crc kubenswrapper[5017]: I0129 06:50:34.993342 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4nr2\" (UniqueName: \"kubernetes.io/projected/0cf68843-4944-46e5-940e-03273a49fd0a-kube-api-access-t4nr2\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-m9htc\" (UID: \"0cf68843-4944-46e5-940e-03273a49fd0a\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-m9htc" Jan 29 06:50:34 crc kubenswrapper[5017]: I0129 06:50:34.993475 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4gs7\" (UniqueName: \"kubernetes.io/projected/326882c7-bd9e-4141-95c3-e21dadfd560d-kube-api-access-m4gs7\") pod \"cinder-operator-controller-manager-8d874c8fc-h4xwv\" (UID: \"326882c7-bd9e-4141-95c3-e21dadfd560d\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-h4xwv" Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.064352 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-m9htc"] Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.094864 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4nr2\" (UniqueName: \"kubernetes.io/projected/0cf68843-4944-46e5-940e-03273a49fd0a-kube-api-access-t4nr2\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-m9htc\" (UID: \"0cf68843-4944-46e5-940e-03273a49fd0a\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-m9htc" Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.094942 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4gs7\" (UniqueName: 
\"kubernetes.io/projected/326882c7-bd9e-4141-95c3-e21dadfd560d-kube-api-access-m4gs7\") pod \"cinder-operator-controller-manager-8d874c8fc-h4xwv\" (UID: \"326882c7-bd9e-4141-95c3-e21dadfd560d\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-h4xwv" Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.095023 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srwfs\" (UniqueName: \"kubernetes.io/projected/bda8f50d-d263-450b-922d-9e9da95811b3-kube-api-access-srwfs\") pod \"designate-operator-controller-manager-6d9697b7f4-4vthv\" (UID: \"bda8f50d-d263-450b-922d-9e9da95811b3\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-4vthv" Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.099915 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-7whnz"] Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.102039 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-7whnz" Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.107823 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-hd8pj" Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.127476 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-4vthv"] Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.140870 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-7whnz"] Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.152153 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4nr2\" (UniqueName: \"kubernetes.io/projected/0cf68843-4944-46e5-940e-03273a49fd0a-kube-api-access-t4nr2\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-m9htc\" (UID: \"0cf68843-4944-46e5-940e-03273a49fd0a\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-m9htc" Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.158794 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4gs7\" (UniqueName: \"kubernetes.io/projected/326882c7-bd9e-4141-95c3-e21dadfd560d-kube-api-access-m4gs7\") pod \"cinder-operator-controller-manager-8d874c8fc-h4xwv\" (UID: \"326882c7-bd9e-4141-95c3-e21dadfd560d\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-h4xwv" Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.183358 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-7h92b"] Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.184343 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-7h92b" Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.197837 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tfxk\" (UniqueName: \"kubernetes.io/projected/2aa64d1e-6f8d-4c60-a26b-12ae9595051b-kube-api-access-6tfxk\") pod \"glance-operator-controller-manager-8886f4c47-7whnz\" (UID: \"2aa64d1e-6f8d-4c60-a26b-12ae9595051b\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-7whnz" Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.197972 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srwfs\" (UniqueName: \"kubernetes.io/projected/bda8f50d-d263-450b-922d-9e9da95811b3-kube-api-access-srwfs\") pod \"designate-operator-controller-manager-6d9697b7f4-4vthv\" (UID: \"bda8f50d-d263-450b-922d-9e9da95811b3\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-4vthv" Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.201803 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-7zkj8"] Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.203091 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-7zkj8" Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.208367 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-672x2" Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.208681 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-ssrlp" Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.232015 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-7h92b"] Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.238131 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-7zkj8"] Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.239191 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srwfs\" (UniqueName: \"kubernetes.io/projected/bda8f50d-d263-450b-922d-9e9da95811b3-kube-api-access-srwfs\") pod \"designate-operator-controller-manager-6d9697b7f4-4vthv\" (UID: \"bda8f50d-d263-450b-922d-9e9da95811b3\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-4vthv" Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.248026 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-67c59"] Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.249184 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-67c59" Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.257730 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-5wpxw" Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.257939 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.273850 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-67c59"] Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.300766 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-whkgr"] Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.305305 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-whkgr" Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.306719 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-bxzj9"] Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.306990 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0c9c357e-634d-49c9-84bc-642deb32fa88-cert\") pod \"infra-operator-controller-manager-79955696d6-67c59\" (UID: \"0c9c357e-634d-49c9-84bc-642deb32fa88\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-67c59" Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.307617 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-bxzj9" Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.308989 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nk7rq\" (UniqueName: \"kubernetes.io/projected/6b1e3dc5-6234-4b08-a023-459b6ef45d8a-kube-api-access-nk7rq\") pod \"heat-operator-controller-manager-69d6db494d-7h92b\" (UID: \"6b1e3dc5-6234-4b08-a023-459b6ef45d8a\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-7h92b" Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.309060 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6tfxk\" (UniqueName: \"kubernetes.io/projected/2aa64d1e-6f8d-4c60-a26b-12ae9595051b-kube-api-access-6tfxk\") pod \"glance-operator-controller-manager-8886f4c47-7whnz\" (UID: \"2aa64d1e-6f8d-4c60-a26b-12ae9595051b\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-7whnz" Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.309101 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjcq8\" (UniqueName: \"kubernetes.io/projected/a348ad8b-f3a0-4639-9839-2bb062e77e29-kube-api-access-fjcq8\") pod \"horizon-operator-controller-manager-5fb775575f-7zkj8\" (UID: \"a348ad8b-f3a0-4639-9839-2bb062e77e29\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-7zkj8" Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.309135 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxd45\" (UniqueName: \"kubernetes.io/projected/0c9c357e-634d-49c9-84bc-642deb32fa88-kube-api-access-vxd45\") pod \"infra-operator-controller-manager-79955696d6-67c59\" (UID: \"0c9c357e-634d-49c9-84bc-642deb32fa88\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-67c59" Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.318188 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-bxtjm" Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.318415 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-6b2f4" Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.335022 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-m9htc" Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.337240 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-whkgr"] Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.353901 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-bxzj9"] Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.363016 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-h4xwv" Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.363497 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6tfxk\" (UniqueName: \"kubernetes.io/projected/2aa64d1e-6f8d-4c60-a26b-12ae9595051b-kube-api-access-6tfxk\") pod \"glance-operator-controller-manager-8886f4c47-7whnz\" (UID: \"2aa64d1e-6f8d-4c60-a26b-12ae9595051b\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-7whnz" Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.363665 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-4vthv" Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.403536 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-5ckck"] Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.404668 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-5ckck" Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.408260 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-gr6jv" Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.414849 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0c9c357e-634d-49c9-84bc-642deb32fa88-cert\") pod \"infra-operator-controller-manager-79955696d6-67c59\" (UID: \"0c9c357e-634d-49c9-84bc-642deb32fa88\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-67c59" Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.415563 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nk7rq\" (UniqueName: \"kubernetes.io/projected/6b1e3dc5-6234-4b08-a023-459b6ef45d8a-kube-api-access-nk7rq\") pod \"heat-operator-controller-manager-69d6db494d-7h92b\" (UID: \"6b1e3dc5-6234-4b08-a023-459b6ef45d8a\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-7h92b" Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.424191 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9zhk\" (UniqueName: \"kubernetes.io/projected/f4577d7f-77c1-41dc-a6dc-37a8f967edd5-kube-api-access-s9zhk\") pod \"ironic-operator-controller-manager-5f4b8bd54d-whkgr\" (UID: \"f4577d7f-77c1-41dc-a6dc-37a8f967edd5\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-whkgr" Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.424315 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjcq8\" (UniqueName: \"kubernetes.io/projected/a348ad8b-f3a0-4639-9839-2bb062e77e29-kube-api-access-fjcq8\") pod \"horizon-operator-controller-manager-5fb775575f-7zkj8\" (UID: \"a348ad8b-f3a0-4639-9839-2bb062e77e29\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-7zkj8" Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.424385 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nfhd\" (UniqueName: \"kubernetes.io/projected/4d8182ea-62eb-455e-b34c-e5028514c4e1-kube-api-access-4nfhd\") pod \"keystone-operator-controller-manager-84f48565d4-bxzj9\" 
(UID: \"4d8182ea-62eb-455e-b34c-e5028514c4e1\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-bxzj9" Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.424472 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxd45\" (UniqueName: \"kubernetes.io/projected/0c9c357e-634d-49c9-84bc-642deb32fa88-kube-api-access-vxd45\") pod \"infra-operator-controller-manager-79955696d6-67c59\" (UID: \"0c9c357e-634d-49c9-84bc-642deb32fa88\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-67c59" Jan 29 06:50:35 crc kubenswrapper[5017]: E0129 06:50:35.415208 5017 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 29 06:50:35 crc kubenswrapper[5017]: E0129 06:50:35.425100 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0c9c357e-634d-49c9-84bc-642deb32fa88-cert podName:0c9c357e-634d-49c9-84bc-642deb32fa88 nodeName:}" failed. No retries permitted until 2026-01-29 06:50:35.925058593 +0000 UTC m=+922.299506203 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/0c9c357e-634d-49c9-84bc-642deb32fa88-cert") pod "infra-operator-controller-manager-79955696d6-67c59" (UID: "0c9c357e-634d-49c9-84bc-642deb32fa88") : secret "infra-operator-webhook-server-cert" not found Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.429869 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-5ckck"] Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.430723 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-7whnz" Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.462711 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nk7rq\" (UniqueName: \"kubernetes.io/projected/6b1e3dc5-6234-4b08-a023-459b6ef45d8a-kube-api-access-nk7rq\") pod \"heat-operator-controller-manager-69d6db494d-7h92b\" (UID: \"6b1e3dc5-6234-4b08-a023-459b6ef45d8a\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-7h92b" Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.463869 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxd45\" (UniqueName: \"kubernetes.io/projected/0c9c357e-634d-49c9-84bc-642deb32fa88-kube-api-access-vxd45\") pod \"infra-operator-controller-manager-79955696d6-67c59\" (UID: \"0c9c357e-634d-49c9-84bc-642deb32fa88\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-67c59" Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.473266 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjcq8\" (UniqueName: \"kubernetes.io/projected/a348ad8b-f3a0-4639-9839-2bb062e77e29-kube-api-access-fjcq8\") pod \"horizon-operator-controller-manager-5fb775575f-7zkj8\" (UID: \"a348ad8b-f3a0-4639-9839-2bb062e77e29\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-7zkj8" Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.482937 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-hlh7p"] Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.484298 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-hlh7p" Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.498156 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-tw9pv" Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.501424 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-hlh7p"] Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.522014 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-7h92b" Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.527554 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h658x\" (UniqueName: \"kubernetes.io/projected/f8aa8837-37c8-4461-bd3c-e2aae6e5dfab-kube-api-access-h658x\") pod \"manila-operator-controller-manager-7dd968899f-5ckck\" (UID: \"f8aa8837-37c8-4461-bd3c-e2aae6e5dfab\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-5ckck" Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.527607 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9zhk\" (UniqueName: \"kubernetes.io/projected/f4577d7f-77c1-41dc-a6dc-37a8f967edd5-kube-api-access-s9zhk\") pod \"ironic-operator-controller-manager-5f4b8bd54d-whkgr\" (UID: \"f4577d7f-77c1-41dc-a6dc-37a8f967edd5\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-whkgr" Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.527634 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nfhd\" (UniqueName: \"kubernetes.io/projected/4d8182ea-62eb-455e-b34c-e5028514c4e1-kube-api-access-4nfhd\") pod \"keystone-operator-controller-manager-84f48565d4-bxzj9\" (UID: \"4d8182ea-62eb-455e-b34c-e5028514c4e1\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-bxzj9" Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.538838 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-7zkj8" Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.551701 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9zhk\" (UniqueName: \"kubernetes.io/projected/f4577d7f-77c1-41dc-a6dc-37a8f967edd5-kube-api-access-s9zhk\") pod \"ironic-operator-controller-manager-5f4b8bd54d-whkgr\" (UID: \"f4577d7f-77c1-41dc-a6dc-37a8f967edd5\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-whkgr" Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.571023 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-sd7m9"] Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.571575 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nfhd\" (UniqueName: \"kubernetes.io/projected/4d8182ea-62eb-455e-b34c-e5028514c4e1-kube-api-access-4nfhd\") pod \"keystone-operator-controller-manager-84f48565d4-bxzj9\" (UID: \"4d8182ea-62eb-455e-b34c-e5028514c4e1\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-bxzj9" Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.571935 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-sd7m9" Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.582725 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-c4c92" Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.604282 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-sd7m9"] Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.631001 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhqh4\" (UniqueName: \"kubernetes.io/projected/b51d682b-635c-44de-8d9e-945127aaeb63-kube-api-access-jhqh4\") pod \"neutron-operator-controller-manager-585dbc889-sd7m9\" (UID: \"b51d682b-635c-44de-8d9e-945127aaeb63\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-sd7m9" Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.631058 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h658x\" (UniqueName: \"kubernetes.io/projected/f8aa8837-37c8-4461-bd3c-e2aae6e5dfab-kube-api-access-h658x\") pod \"manila-operator-controller-manager-7dd968899f-5ckck\" (UID: \"f8aa8837-37c8-4461-bd3c-e2aae6e5dfab\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-5ckck" Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.631103 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jg69s\" (UniqueName: \"kubernetes.io/projected/5aa1136e-d199-49c3-9bc3-5cbdaa19d552-kube-api-access-jg69s\") pod \"mariadb-operator-controller-manager-67bf948998-hlh7p\" (UID: \"5aa1136e-d199-49c3-9bc3-5cbdaa19d552\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-hlh7p" Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.658883 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-j4vwv" Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.661095 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-whkgr" Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.662555 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h658x\" (UniqueName: \"kubernetes.io/projected/f8aa8837-37c8-4461-bd3c-e2aae6e5dfab-kube-api-access-h658x\") pod \"manila-operator-controller-manager-7dd968899f-5ckck\" (UID: \"f8aa8837-37c8-4461-bd3c-e2aae6e5dfab\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-5ckck" Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.698571 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-l6gbj"] Jan 29 06:50:35 crc kubenswrapper[5017]: E0129 06:50:35.699826 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87b058e1-dc0b-4bcc-9ddf-3830cda03980" containerName="registry-server" Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.699849 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="87b058e1-dc0b-4bcc-9ddf-3830cda03980" containerName="registry-server" Jan 29 06:50:35 crc kubenswrapper[5017]: E0129 06:50:35.699869 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87b058e1-dc0b-4bcc-9ddf-3830cda03980" containerName="extract-content" Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.699879 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="87b058e1-dc0b-4bcc-9ddf-3830cda03980" containerName="extract-content" Jan 29 06:50:35 crc kubenswrapper[5017]: E0129 06:50:35.699902 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87b058e1-dc0b-4bcc-9ddf-3830cda03980" containerName="extract-utilities" Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.699911 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="87b058e1-dc0b-4bcc-9ddf-3830cda03980" containerName="extract-utilities" Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.700107 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="87b058e1-dc0b-4bcc-9ddf-3830cda03980" containerName="registry-server" Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.703754 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-l6gbj" Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.712386 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-n4tz9" Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.718649 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-bxzj9" Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.734474 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87b058e1-dc0b-4bcc-9ddf-3830cda03980-catalog-content\") pod \"87b058e1-dc0b-4bcc-9ddf-3830cda03980\" (UID: \"87b058e1-dc0b-4bcc-9ddf-3830cda03980\") " Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.734600 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87b058e1-dc0b-4bcc-9ddf-3830cda03980-utilities\") pod \"87b058e1-dc0b-4bcc-9ddf-3830cda03980\" (UID: \"87b058e1-dc0b-4bcc-9ddf-3830cda03980\") " Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.734652 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nhmw\" (UniqueName: \"kubernetes.io/projected/87b058e1-dc0b-4bcc-9ddf-3830cda03980-kube-api-access-8nhmw\") pod \"87b058e1-dc0b-4bcc-9ddf-3830cda03980\" (UID: \"87b058e1-dc0b-4bcc-9ddf-3830cda03980\") " Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.735152 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhqh4\" (UniqueName: \"kubernetes.io/projected/b51d682b-635c-44de-8d9e-945127aaeb63-kube-api-access-jhqh4\") pod \"neutron-operator-controller-manager-585dbc889-sd7m9\" (UID: \"b51d682b-635c-44de-8d9e-945127aaeb63\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-sd7m9" Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.735225 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jg69s\" (UniqueName: \"kubernetes.io/projected/5aa1136e-d199-49c3-9bc3-5cbdaa19d552-kube-api-access-jg69s\") pod \"mariadb-operator-controller-manager-67bf948998-hlh7p\" (UID: \"5aa1136e-d199-49c3-9bc3-5cbdaa19d552\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-hlh7p" Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.741891 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87b058e1-dc0b-4bcc-9ddf-3830cda03980-utilities" (OuterVolumeSpecName: "utilities") pod "87b058e1-dc0b-4bcc-9ddf-3830cda03980" (UID: "87b058e1-dc0b-4bcc-9ddf-3830cda03980"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.744826 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-bpdqt"] Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.745876 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-bpdqt" Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.750863 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-g28df" Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.751644 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-l6gbj"] Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.756714 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-bpdqt"] Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.759618 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jg69s\" (UniqueName: \"kubernetes.io/projected/5aa1136e-d199-49c3-9bc3-5cbdaa19d552-kube-api-access-jg69s\") pod \"mariadb-operator-controller-manager-67bf948998-hlh7p\" (UID: \"5aa1136e-d199-49c3-9bc3-5cbdaa19d552\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-hlh7p" Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.763626 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-d5d667db8-qqk7h"] Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.765419 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-d5d667db8-qqk7h" Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.765494 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-5ckck" Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.770243 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87b058e1-dc0b-4bcc-9ddf-3830cda03980-kube-api-access-8nhmw" (OuterVolumeSpecName: "kube-api-access-8nhmw") pod "87b058e1-dc0b-4bcc-9ddf-3830cda03980" (UID: "87b058e1-dc0b-4bcc-9ddf-3830cda03980"). InnerVolumeSpecName "kube-api-access-8nhmw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.770471 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-v6jdc" Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.770506 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.789660 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhqh4\" (UniqueName: \"kubernetes.io/projected/b51d682b-635c-44de-8d9e-945127aaeb63-kube-api-access-jhqh4\") pod \"neutron-operator-controller-manager-585dbc889-sd7m9\" (UID: \"b51d682b-635c-44de-8d9e-945127aaeb63\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-sd7m9" Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.832972 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-hlh7p" Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.861099 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pj54b\" (UniqueName: \"kubernetes.io/projected/a77928ad-eb54-45fc-a53e-b3f22cb62d53-kube-api-access-pj54b\") pod \"nova-operator-controller-manager-55bff696bd-l6gbj\" (UID: \"a77928ad-eb54-45fc-a53e-b3f22cb62d53\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-l6gbj" Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.861222 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7dd82efb-017d-4e70-86b1-f25e7026646a-cert\") pod \"openstack-baremetal-operator-controller-manager-d5d667db8-qqk7h\" (UID: \"7dd82efb-017d-4e70-86b1-f25e7026646a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-d5d667db8-qqk7h" Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.861252 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqtft\" (UniqueName: \"kubernetes.io/projected/7dd82efb-017d-4e70-86b1-f25e7026646a-kube-api-access-jqtft\") pod \"openstack-baremetal-operator-controller-manager-d5d667db8-qqk7h\" (UID: \"7dd82efb-017d-4e70-86b1-f25e7026646a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-d5d667db8-qqk7h" Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.861361 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrr9k\" (UniqueName: \"kubernetes.io/projected/863f4dee-1272-4cb9-8ced-84a5114d64af-kube-api-access-nrr9k\") pod \"octavia-operator-controller-manager-6687f8d877-bpdqt\" (UID: \"863f4dee-1272-4cb9-8ced-84a5114d64af\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-bpdqt" Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.861525 5017 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87b058e1-dc0b-4bcc-9ddf-3830cda03980-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.861548 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8nhmw\" (UniqueName: \"kubernetes.io/projected/87b058e1-dc0b-4bcc-9ddf-3830cda03980-kube-api-access-8nhmw\") on node \"crc\" DevicePath \"\"" Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.863736 5017 generic.go:334] "Generic (PLEG): container finished" podID="87b058e1-dc0b-4bcc-9ddf-3830cda03980" containerID="7e4d74c1d8d667f5c25c308108d50da462ad583a4b176a304a7c416171b0e654" exitCode=0 Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.863834 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j4vwv" event={"ID":"87b058e1-dc0b-4bcc-9ddf-3830cda03980","Type":"ContainerDied","Data":"7e4d74c1d8d667f5c25c308108d50da462ad583a4b176a304a7c416171b0e654"} Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.863869 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-j4vwv" Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.863898 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j4vwv" event={"ID":"87b058e1-dc0b-4bcc-9ddf-3830cda03980","Type":"ContainerDied","Data":"3ac18b6ff1a9181cbb65e534b4d8a8a39e61962c8b239c92617609017ca2d80d"} Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.863937 5017 scope.go:117] "RemoveContainer" containerID="7e4d74c1d8d667f5c25c308108d50da462ad583a4b176a304a7c416171b0e654" Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.867762 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-d5d667db8-qqk7h"] Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.899402 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-szcqv"] Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.901229 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-szcqv" Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.903552 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-55rgm" Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.906214 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-kjrxb"] Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.907315 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-kjrxb" Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.909386 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-lgvrk" Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.922813 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-szcqv"] Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.925834 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-sd7m9" Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.930232 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-kjrxb"] Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.933267 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-lrhwt"] Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.933578 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87b058e1-dc0b-4bcc-9ddf-3830cda03980-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "87b058e1-dc0b-4bcc-9ddf-3830cda03980" (UID: "87b058e1-dc0b-4bcc-9ddf-3830cda03980"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.939253 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-lrhwt" Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.942223 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-l6mcx"] Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.944232 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-l6mcx" Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.945241 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-m5qwl" Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.946466 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-kf4jp" Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.954146 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-lrhwt"] Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.964799 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7dd82efb-017d-4e70-86b1-f25e7026646a-cert\") pod \"openstack-baremetal-operator-controller-manager-d5d667db8-qqk7h\" (UID: \"7dd82efb-017d-4e70-86b1-f25e7026646a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-d5d667db8-qqk7h" Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.964852 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqtft\" (UniqueName: \"kubernetes.io/projected/7dd82efb-017d-4e70-86b1-f25e7026646a-kube-api-access-jqtft\") pod \"openstack-baremetal-operator-controller-manager-d5d667db8-qqk7h\" (UID: \"7dd82efb-017d-4e70-86b1-f25e7026646a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-d5d667db8-qqk7h" Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.964908 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0c9c357e-634d-49c9-84bc-642deb32fa88-cert\") pod \"infra-operator-controller-manager-79955696d6-67c59\" (UID: \"0c9c357e-634d-49c9-84bc-642deb32fa88\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-67c59" Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.964933 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrr9k\" (UniqueName: \"kubernetes.io/projected/863f4dee-1272-4cb9-8ced-84a5114d64af-kube-api-access-nrr9k\") pod \"octavia-operator-controller-manager-6687f8d877-bpdqt\" (UID: \"863f4dee-1272-4cb9-8ced-84a5114d64af\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-bpdqt" Jan 29 06:50:35 crc kubenswrapper[5017]: E0129 06:50:35.965584 5017 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 06:50:35 crc kubenswrapper[5017]: E0129 06:50:35.965636 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7dd82efb-017d-4e70-86b1-f25e7026646a-cert podName:7dd82efb-017d-4e70-86b1-f25e7026646a nodeName:}" failed. No retries permitted until 2026-01-29 06:50:36.465619808 +0000 UTC m=+922.840067418 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7dd82efb-017d-4e70-86b1-f25e7026646a-cert") pod "openstack-baremetal-operator-controller-manager-d5d667db8-qqk7h" (UID: "7dd82efb-017d-4e70-86b1-f25e7026646a") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 06:50:35 crc kubenswrapper[5017]: E0129 06:50:35.965946 5017 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 29 06:50:35 crc kubenswrapper[5017]: E0129 06:50:35.966137 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0c9c357e-634d-49c9-84bc-642deb32fa88-cert podName:0c9c357e-634d-49c9-84bc-642deb32fa88 nodeName:}" failed. No retries permitted until 2026-01-29 06:50:36.966127571 +0000 UTC m=+923.340575181 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/0c9c357e-634d-49c9-84bc-642deb32fa88-cert") pod "infra-operator-controller-manager-79955696d6-67c59" (UID: "0c9c357e-634d-49c9-84bc-642deb32fa88") : secret "infra-operator-webhook-server-cert" not found Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.966190 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pj54b\" (UniqueName: \"kubernetes.io/projected/a77928ad-eb54-45fc-a53e-b3f22cb62d53-kube-api-access-pj54b\") pod \"nova-operator-controller-manager-55bff696bd-l6gbj\" (UID: \"a77928ad-eb54-45fc-a53e-b3f22cb62d53\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-l6gbj" Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.966231 5017 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87b058e1-dc0b-4bcc-9ddf-3830cda03980-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.966463 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-f7d98"] Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.974209 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-l6mcx"] Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.974319 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-f7d98" Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.977077 5017 scope.go:117] "RemoveContainer" containerID="c12904aff80a283d346cdbe439800afba420cd91dca7f104229d29f871bce16e" Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.983687 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-lzvll" Jan 29 06:50:35 crc kubenswrapper[5017]: I0129 06:50:35.999665 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-f7d98"] Jan 29 06:50:36 crc kubenswrapper[5017]: I0129 06:50:36.011045 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqtft\" (UniqueName: \"kubernetes.io/projected/7dd82efb-017d-4e70-86b1-f25e7026646a-kube-api-access-jqtft\") pod \"openstack-baremetal-operator-controller-manager-d5d667db8-qqk7h\" (UID: \"7dd82efb-017d-4e70-86b1-f25e7026646a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-d5d667db8-qqk7h" Jan 29 06:50:36 crc kubenswrapper[5017]: I0129 06:50:36.013091 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrr9k\" (UniqueName: \"kubernetes.io/projected/863f4dee-1272-4cb9-8ced-84a5114d64af-kube-api-access-nrr9k\") pod \"octavia-operator-controller-manager-6687f8d877-bpdqt\" (UID: \"863f4dee-1272-4cb9-8ced-84a5114d64af\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-bpdqt" Jan 29 06:50:36 crc kubenswrapper[5017]: I0129 06:50:36.016712 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pj54b\" (UniqueName: \"kubernetes.io/projected/a77928ad-eb54-45fc-a53e-b3f22cb62d53-kube-api-access-pj54b\") pod \"nova-operator-controller-manager-55bff696bd-l6gbj\" (UID: \"a77928ad-eb54-45fc-a53e-b3f22cb62d53\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-l6gbj" Jan 29 06:50:36 crc kubenswrapper[5017]: I0129 06:50:36.030078 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-wtmjj"] Jan 29 06:50:36 crc kubenswrapper[5017]: I0129 06:50:36.031136 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-wtmjj" Jan 29 06:50:36 crc kubenswrapper[5017]: I0129 06:50:36.034933 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-qw9kl" Jan 29 06:50:36 crc kubenswrapper[5017]: I0129 06:50:36.043570 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-wtmjj"] Jan 29 06:50:36 crc kubenswrapper[5017]: I0129 06:50:36.068883 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8dp7\" (UniqueName: \"kubernetes.io/projected/d7bc466f-b955-4c7a-a5dc-806e4a89b432-kube-api-access-g8dp7\") pod \"test-operator-controller-manager-56f8bfcd9f-f7d98\" (UID: \"d7bc466f-b955-4c7a-a5dc-806e4a89b432\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-f7d98" Jan 29 06:50:36 crc kubenswrapper[5017]: I0129 06:50:36.068998 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7vs8\" (UniqueName: \"kubernetes.io/projected/ce3c279f-3bc1-4e6a-a0f5-cb46e55ede8c-kube-api-access-v7vs8\") pod \"telemetry-operator-controller-manager-64b5b76f97-lrhwt\" (UID: \"ce3c279f-3bc1-4e6a-a0f5-cb46e55ede8c\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-lrhwt" Jan 29 06:50:36 crc kubenswrapper[5017]: I0129 06:50:36.069050 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwzxj\" (UniqueName: \"kubernetes.io/projected/5558b938-90cc-4177-ae13-4c8d6f65ea6d-kube-api-access-rwzxj\") pod \"ovn-operator-controller-manager-788c46999f-kjrxb\" (UID: \"5558b938-90cc-4177-ae13-4c8d6f65ea6d\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-kjrxb" Jan 29 06:50:36 crc kubenswrapper[5017]: I0129 06:50:36.069074 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtcwj\" (UniqueName: \"kubernetes.io/projected/b9a03454-a7c9-47c6-9eda-6cf83e3140d7-kube-api-access-dtcwj\") pod \"swift-operator-controller-manager-68fc8c869-l6mcx\" (UID: \"b9a03454-a7c9-47c6-9eda-6cf83e3140d7\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-l6mcx" Jan 29 06:50:36 crc kubenswrapper[5017]: I0129 06:50:36.069129 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-584rv\" (UniqueName: \"kubernetes.io/projected/283799a2-6b66-4255-8864-3a561dd04e89-kube-api-access-584rv\") pod \"placement-operator-controller-manager-5b964cf4cd-szcqv\" (UID: \"283799a2-6b66-4255-8864-3a561dd04e89\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-szcqv" Jan 29 06:50:36 crc kubenswrapper[5017]: I0129 06:50:36.075928 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7b54f464f6-h4kbw"] Jan 29 06:50:36 crc kubenswrapper[5017]: I0129 06:50:36.076994 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-7b54f464f6-h4kbw" Jan 29 06:50:36 crc kubenswrapper[5017]: I0129 06:50:36.079304 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-58v9j" Jan 29 06:50:36 crc kubenswrapper[5017]: I0129 06:50:36.079560 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 29 06:50:36 crc kubenswrapper[5017]: I0129 06:50:36.079689 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 29 06:50:36 crc kubenswrapper[5017]: I0129 06:50:36.090215 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7b54f464f6-h4kbw"] Jan 29 06:50:36 crc kubenswrapper[5017]: I0129 06:50:36.145193 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-r8p8w"] Jan 29 06:50:36 crc kubenswrapper[5017]: I0129 06:50:36.146297 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-r8p8w" Jan 29 06:50:36 crc kubenswrapper[5017]: I0129 06:50:36.158883 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-ps2r8" Jan 29 06:50:36 crc kubenswrapper[5017]: I0129 06:50:36.162318 5017 scope.go:117] "RemoveContainer" containerID="14d8d42d5657a1fc63b910e0f6c6b5c4fc8070211c8ea80c7ca1e6ff11a7ed5e" Jan 29 06:50:36 crc kubenswrapper[5017]: I0129 06:50:36.169294 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-r8p8w"] Jan 29 06:50:36 crc kubenswrapper[5017]: I0129 06:50:36.170478 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/facf0821-eb7d-4510-bcb7-69387e467df9-metrics-certs\") pod \"openstack-operator-controller-manager-7b54f464f6-h4kbw\" (UID: \"facf0821-eb7d-4510-bcb7-69387e467df9\") " pod="openstack-operators/openstack-operator-controller-manager-7b54f464f6-h4kbw" Jan 29 06:50:36 crc kubenswrapper[5017]: I0129 06:50:36.170543 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7vs8\" (UniqueName: \"kubernetes.io/projected/ce3c279f-3bc1-4e6a-a0f5-cb46e55ede8c-kube-api-access-v7vs8\") pod \"telemetry-operator-controller-manager-64b5b76f97-lrhwt\" (UID: \"ce3c279f-3bc1-4e6a-a0f5-cb46e55ede8c\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-lrhwt" Jan 29 06:50:36 crc kubenswrapper[5017]: I0129 06:50:36.170574 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwzxj\" (UniqueName: \"kubernetes.io/projected/5558b938-90cc-4177-ae13-4c8d6f65ea6d-kube-api-access-rwzxj\") pod \"ovn-operator-controller-manager-788c46999f-kjrxb\" (UID: \"5558b938-90cc-4177-ae13-4c8d6f65ea6d\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-kjrxb" Jan 29 06:50:36 crc kubenswrapper[5017]: I0129 06:50:36.170599 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtcwj\" (UniqueName: \"kubernetes.io/projected/b9a03454-a7c9-47c6-9eda-6cf83e3140d7-kube-api-access-dtcwj\") pod 
\"swift-operator-controller-manager-68fc8c869-l6mcx\" (UID: \"b9a03454-a7c9-47c6-9eda-6cf83e3140d7\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-l6mcx" Jan 29 06:50:36 crc kubenswrapper[5017]: I0129 06:50:36.178706 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-584rv\" (UniqueName: \"kubernetes.io/projected/283799a2-6b66-4255-8864-3a561dd04e89-kube-api-access-584rv\") pod \"placement-operator-controller-manager-5b964cf4cd-szcqv\" (UID: \"283799a2-6b66-4255-8864-3a561dd04e89\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-szcqv" Jan 29 06:50:36 crc kubenswrapper[5017]: I0129 06:50:36.178815 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6h9l\" (UniqueName: \"kubernetes.io/projected/594ce113-eeb0-4eb4-9254-4f1695ced6c7-kube-api-access-n6h9l\") pod \"watcher-operator-controller-manager-564965969-wtmjj\" (UID: \"594ce113-eeb0-4eb4-9254-4f1695ced6c7\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-wtmjj" Jan 29 06:50:36 crc kubenswrapper[5017]: I0129 06:50:36.179125 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/facf0821-eb7d-4510-bcb7-69387e467df9-webhook-certs\") pod \"openstack-operator-controller-manager-7b54f464f6-h4kbw\" (UID: \"facf0821-eb7d-4510-bcb7-69387e467df9\") " pod="openstack-operators/openstack-operator-controller-manager-7b54f464f6-h4kbw" Jan 29 06:50:36 crc kubenswrapper[5017]: I0129 06:50:36.179218 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8dp7\" (UniqueName: \"kubernetes.io/projected/d7bc466f-b955-4c7a-a5dc-806e4a89b432-kube-api-access-g8dp7\") pod \"test-operator-controller-manager-56f8bfcd9f-f7d98\" (UID: \"d7bc466f-b955-4c7a-a5dc-806e4a89b432\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-f7d98" Jan 29 06:50:36 crc kubenswrapper[5017]: I0129 06:50:36.179585 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8br9d\" (UniqueName: \"kubernetes.io/projected/facf0821-eb7d-4510-bcb7-69387e467df9-kube-api-access-8br9d\") pod \"openstack-operator-controller-manager-7b54f464f6-h4kbw\" (UID: \"facf0821-eb7d-4510-bcb7-69387e467df9\") " pod="openstack-operators/openstack-operator-controller-manager-7b54f464f6-h4kbw" Jan 29 06:50:36 crc kubenswrapper[5017]: I0129 06:50:36.186577 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-bpdqt" Jan 29 06:50:36 crc kubenswrapper[5017]: I0129 06:50:36.216974 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-l6gbj" Jan 29 06:50:36 crc kubenswrapper[5017]: I0129 06:50:36.220123 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7vs8\" (UniqueName: \"kubernetes.io/projected/ce3c279f-3bc1-4e6a-a0f5-cb46e55ede8c-kube-api-access-v7vs8\") pod \"telemetry-operator-controller-manager-64b5b76f97-lrhwt\" (UID: \"ce3c279f-3bc1-4e6a-a0f5-cb46e55ede8c\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-lrhwt" Jan 29 06:50:36 crc kubenswrapper[5017]: I0129 06:50:36.220163 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwzxj\" (UniqueName: \"kubernetes.io/projected/5558b938-90cc-4177-ae13-4c8d6f65ea6d-kube-api-access-rwzxj\") pod \"ovn-operator-controller-manager-788c46999f-kjrxb\" (UID: \"5558b938-90cc-4177-ae13-4c8d6f65ea6d\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-kjrxb" Jan 29 06:50:36 crc kubenswrapper[5017]: I0129 06:50:36.221085 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8dp7\" (UniqueName: \"kubernetes.io/projected/d7bc466f-b955-4c7a-a5dc-806e4a89b432-kube-api-access-g8dp7\") pod \"test-operator-controller-manager-56f8bfcd9f-f7d98\" (UID: \"d7bc466f-b955-4c7a-a5dc-806e4a89b432\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-f7d98" Jan 29 06:50:36 crc kubenswrapper[5017]: I0129 06:50:36.221304 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtcwj\" (UniqueName: \"kubernetes.io/projected/b9a03454-a7c9-47c6-9eda-6cf83e3140d7-kube-api-access-dtcwj\") pod \"swift-operator-controller-manager-68fc8c869-l6mcx\" (UID: \"b9a03454-a7c9-47c6-9eda-6cf83e3140d7\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-l6mcx" Jan 29 06:50:36 crc kubenswrapper[5017]: I0129 06:50:36.221521 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-584rv\" (UniqueName: \"kubernetes.io/projected/283799a2-6b66-4255-8864-3a561dd04e89-kube-api-access-584rv\") pod \"placement-operator-controller-manager-5b964cf4cd-szcqv\" (UID: \"283799a2-6b66-4255-8864-3a561dd04e89\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-szcqv" Jan 29 06:50:36 crc kubenswrapper[5017]: I0129 06:50:36.281380 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8br9d\" (UniqueName: \"kubernetes.io/projected/facf0821-eb7d-4510-bcb7-69387e467df9-kube-api-access-8br9d\") pod \"openstack-operator-controller-manager-7b54f464f6-h4kbw\" (UID: \"facf0821-eb7d-4510-bcb7-69387e467df9\") " pod="openstack-operators/openstack-operator-controller-manager-7b54f464f6-h4kbw" Jan 29 06:50:36 crc kubenswrapper[5017]: I0129 06:50:36.281431 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/facf0821-eb7d-4510-bcb7-69387e467df9-metrics-certs\") pod \"openstack-operator-controller-manager-7b54f464f6-h4kbw\" (UID: \"facf0821-eb7d-4510-bcb7-69387e467df9\") " pod="openstack-operators/openstack-operator-controller-manager-7b54f464f6-h4kbw" Jan 29 06:50:36 crc kubenswrapper[5017]: I0129 06:50:36.281527 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6h9l\" (UniqueName: \"kubernetes.io/projected/594ce113-eeb0-4eb4-9254-4f1695ced6c7-kube-api-access-n6h9l\") pod 
\"watcher-operator-controller-manager-564965969-wtmjj\" (UID: \"594ce113-eeb0-4eb4-9254-4f1695ced6c7\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-wtmjj" Jan 29 06:50:36 crc kubenswrapper[5017]: I0129 06:50:36.281579 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qq6f\" (UniqueName: \"kubernetes.io/projected/e8379d4d-67d5-42f0-8c28-f0d617723886-kube-api-access-5qq6f\") pod \"rabbitmq-cluster-operator-manager-668c99d594-r8p8w\" (UID: \"e8379d4d-67d5-42f0-8c28-f0d617723886\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-r8p8w" Jan 29 06:50:36 crc kubenswrapper[5017]: I0129 06:50:36.281601 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/facf0821-eb7d-4510-bcb7-69387e467df9-webhook-certs\") pod \"openstack-operator-controller-manager-7b54f464f6-h4kbw\" (UID: \"facf0821-eb7d-4510-bcb7-69387e467df9\") " pod="openstack-operators/openstack-operator-controller-manager-7b54f464f6-h4kbw" Jan 29 06:50:36 crc kubenswrapper[5017]: E0129 06:50:36.281760 5017 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 29 06:50:36 crc kubenswrapper[5017]: E0129 06:50:36.281818 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/facf0821-eb7d-4510-bcb7-69387e467df9-webhook-certs podName:facf0821-eb7d-4510-bcb7-69387e467df9 nodeName:}" failed. No retries permitted until 2026-01-29 06:50:36.781801709 +0000 UTC m=+923.156249319 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/facf0821-eb7d-4510-bcb7-69387e467df9-webhook-certs") pod "openstack-operator-controller-manager-7b54f464f6-h4kbw" (UID: "facf0821-eb7d-4510-bcb7-69387e467df9") : secret "webhook-server-cert" not found Jan 29 06:50:36 crc kubenswrapper[5017]: E0129 06:50:36.282470 5017 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 29 06:50:36 crc kubenswrapper[5017]: E0129 06:50:36.282557 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/facf0821-eb7d-4510-bcb7-69387e467df9-metrics-certs podName:facf0821-eb7d-4510-bcb7-69387e467df9 nodeName:}" failed. No retries permitted until 2026-01-29 06:50:36.782529537 +0000 UTC m=+923.156977327 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/facf0821-eb7d-4510-bcb7-69387e467df9-metrics-certs") pod "openstack-operator-controller-manager-7b54f464f6-h4kbw" (UID: "facf0821-eb7d-4510-bcb7-69387e467df9") : secret "metrics-server-cert" not found Jan 29 06:50:36 crc kubenswrapper[5017]: I0129 06:50:36.291742 5017 scope.go:117] "RemoveContainer" containerID="7e4d74c1d8d667f5c25c308108d50da462ad583a4b176a304a7c416171b0e654" Jan 29 06:50:36 crc kubenswrapper[5017]: E0129 06:50:36.293383 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e4d74c1d8d667f5c25c308108d50da462ad583a4b176a304a7c416171b0e654\": container with ID starting with 7e4d74c1d8d667f5c25c308108d50da462ad583a4b176a304a7c416171b0e654 not found: ID does not exist" containerID="7e4d74c1d8d667f5c25c308108d50da462ad583a4b176a304a7c416171b0e654" Jan 29 06:50:36 crc kubenswrapper[5017]: I0129 06:50:36.293430 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e4d74c1d8d667f5c25c308108d50da462ad583a4b176a304a7c416171b0e654"} err="failed to get container status \"7e4d74c1d8d667f5c25c308108d50da462ad583a4b176a304a7c416171b0e654\": rpc error: code = NotFound desc = could not find container \"7e4d74c1d8d667f5c25c308108d50da462ad583a4b176a304a7c416171b0e654\": container with ID starting with 7e4d74c1d8d667f5c25c308108d50da462ad583a4b176a304a7c416171b0e654 not found: ID does not exist" Jan 29 06:50:36 crc kubenswrapper[5017]: I0129 06:50:36.293460 5017 scope.go:117] "RemoveContainer" containerID="c12904aff80a283d346cdbe439800afba420cd91dca7f104229d29f871bce16e" Jan 29 06:50:36 crc kubenswrapper[5017]: E0129 06:50:36.293828 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c12904aff80a283d346cdbe439800afba420cd91dca7f104229d29f871bce16e\": container with ID starting with c12904aff80a283d346cdbe439800afba420cd91dca7f104229d29f871bce16e not found: ID does not exist" containerID="c12904aff80a283d346cdbe439800afba420cd91dca7f104229d29f871bce16e" Jan 29 06:50:36 crc kubenswrapper[5017]: I0129 06:50:36.293876 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c12904aff80a283d346cdbe439800afba420cd91dca7f104229d29f871bce16e"} err="failed to get container status \"c12904aff80a283d346cdbe439800afba420cd91dca7f104229d29f871bce16e\": rpc error: code = NotFound desc = could not find container \"c12904aff80a283d346cdbe439800afba420cd91dca7f104229d29f871bce16e\": container with ID starting with c12904aff80a283d346cdbe439800afba420cd91dca7f104229d29f871bce16e not found: ID does not exist" Jan 29 06:50:36 crc kubenswrapper[5017]: I0129 06:50:36.293911 5017 scope.go:117] "RemoveContainer" containerID="14d8d42d5657a1fc63b910e0f6c6b5c4fc8070211c8ea80c7ca1e6ff11a7ed5e" Jan 29 06:50:36 crc kubenswrapper[5017]: E0129 06:50:36.294302 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"14d8d42d5657a1fc63b910e0f6c6b5c4fc8070211c8ea80c7ca1e6ff11a7ed5e\": container with ID starting with 14d8d42d5657a1fc63b910e0f6c6b5c4fc8070211c8ea80c7ca1e6ff11a7ed5e not found: ID does not exist" containerID="14d8d42d5657a1fc63b910e0f6c6b5c4fc8070211c8ea80c7ca1e6ff11a7ed5e" Jan 29 06:50:36 crc kubenswrapper[5017]: I0129 06:50:36.294353 5017 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"14d8d42d5657a1fc63b910e0f6c6b5c4fc8070211c8ea80c7ca1e6ff11a7ed5e"} err="failed to get container status \"14d8d42d5657a1fc63b910e0f6c6b5c4fc8070211c8ea80c7ca1e6ff11a7ed5e\": rpc error: code = NotFound desc = could not find container \"14d8d42d5657a1fc63b910e0f6c6b5c4fc8070211c8ea80c7ca1e6ff11a7ed5e\": container with ID starting with 14d8d42d5657a1fc63b910e0f6c6b5c4fc8070211c8ea80c7ca1e6ff11a7ed5e not found: ID does not exist" Jan 29 06:50:36 crc kubenswrapper[5017]: I0129 06:50:36.308681 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-lrhwt" Jan 29 06:50:36 crc kubenswrapper[5017]: I0129 06:50:36.311378 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8br9d\" (UniqueName: \"kubernetes.io/projected/facf0821-eb7d-4510-bcb7-69387e467df9-kube-api-access-8br9d\") pod \"openstack-operator-controller-manager-7b54f464f6-h4kbw\" (UID: \"facf0821-eb7d-4510-bcb7-69387e467df9\") " pod="openstack-operators/openstack-operator-controller-manager-7b54f464f6-h4kbw" Jan 29 06:50:36 crc kubenswrapper[5017]: I0129 06:50:36.311487 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-l6mcx" Jan 29 06:50:36 crc kubenswrapper[5017]: I0129 06:50:36.314980 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6h9l\" (UniqueName: \"kubernetes.io/projected/594ce113-eeb0-4eb4-9254-4f1695ced6c7-kube-api-access-n6h9l\") pod \"watcher-operator-controller-manager-564965969-wtmjj\" (UID: \"594ce113-eeb0-4eb4-9254-4f1695ced6c7\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-wtmjj" Jan 29 06:50:36 crc kubenswrapper[5017]: I0129 06:50:36.342134 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-f7d98" Jan 29 06:50:36 crc kubenswrapper[5017]: I0129 06:50:36.362312 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-wtmjj" Jan 29 06:50:36 crc kubenswrapper[5017]: I0129 06:50:36.401226 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qq6f\" (UniqueName: \"kubernetes.io/projected/e8379d4d-67d5-42f0-8c28-f0d617723886-kube-api-access-5qq6f\") pod \"rabbitmq-cluster-operator-manager-668c99d594-r8p8w\" (UID: \"e8379d4d-67d5-42f0-8c28-f0d617723886\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-r8p8w" Jan 29 06:50:36 crc kubenswrapper[5017]: I0129 06:50:36.406803 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-j4vwv"] Jan 29 06:50:36 crc kubenswrapper[5017]: I0129 06:50:36.406858 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-j4vwv"] Jan 29 06:50:36 crc kubenswrapper[5017]: I0129 06:50:36.425229 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qq6f\" (UniqueName: \"kubernetes.io/projected/e8379d4d-67d5-42f0-8c28-f0d617723886-kube-api-access-5qq6f\") pod \"rabbitmq-cluster-operator-manager-668c99d594-r8p8w\" (UID: \"e8379d4d-67d5-42f0-8c28-f0d617723886\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-r8p8w" Jan 29 06:50:36 crc kubenswrapper[5017]: I0129 06:50:36.427218 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-r8p8w" Jan 29 06:50:36 crc kubenswrapper[5017]: I0129 06:50:36.428887 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-szcqv" Jan 29 06:50:36 crc kubenswrapper[5017]: I0129 06:50:36.434662 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-4vthv"] Jan 29 06:50:36 crc kubenswrapper[5017]: I0129 06:50:36.477497 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-m9htc"] Jan 29 06:50:36 crc kubenswrapper[5017]: I0129 06:50:36.496415 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-7whnz"] Jan 29 06:50:36 crc kubenswrapper[5017]: I0129 06:50:36.504993 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7dd82efb-017d-4e70-86b1-f25e7026646a-cert\") pod \"openstack-baremetal-operator-controller-manager-d5d667db8-qqk7h\" (UID: \"7dd82efb-017d-4e70-86b1-f25e7026646a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-d5d667db8-qqk7h" Jan 29 06:50:36 crc kubenswrapper[5017]: E0129 06:50:36.508685 5017 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 06:50:36 crc kubenswrapper[5017]: E0129 06:50:36.508752 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7dd82efb-017d-4e70-86b1-f25e7026646a-cert podName:7dd82efb-017d-4e70-86b1-f25e7026646a nodeName:}" failed. No retries permitted until 2026-01-29 06:50:37.508735466 +0000 UTC m=+923.883183076 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7dd82efb-017d-4e70-86b1-f25e7026646a-cert") pod "openstack-baremetal-operator-controller-manager-d5d667db8-qqk7h" (UID: "7dd82efb-017d-4e70-86b1-f25e7026646a") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 06:50:36 crc kubenswrapper[5017]: I0129 06:50:36.513608 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-kjrxb" Jan 29 06:50:36 crc kubenswrapper[5017]: I0129 06:50:36.515798 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-h4xwv"] Jan 29 06:50:36 crc kubenswrapper[5017]: I0129 06:50:36.589794 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-7zkj8"] Jan 29 06:50:36 crc kubenswrapper[5017]: I0129 06:50:36.610458 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-whkgr"] Jan 29 06:50:36 crc kubenswrapper[5017]: I0129 06:50:36.635040 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-7h92b"] Jan 29 06:50:36 crc kubenswrapper[5017]: W0129 06:50:36.657076 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4577d7f_77c1_41dc_a6dc_37a8f967edd5.slice/crio-6427f86fde59b6efb1f136c23d2f9782297004f9938a8867a544118af2b45a99 WatchSource:0}: Error finding container 6427f86fde59b6efb1f136c23d2f9782297004f9938a8867a544118af2b45a99: Status 404 returned error can't find the container with id 6427f86fde59b6efb1f136c23d2f9782297004f9938a8867a544118af2b45a99 Jan 29 06:50:36 crc kubenswrapper[5017]: I0129 06:50:36.820175 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/facf0821-eb7d-4510-bcb7-69387e467df9-webhook-certs\") pod \"openstack-operator-controller-manager-7b54f464f6-h4kbw\" (UID: \"facf0821-eb7d-4510-bcb7-69387e467df9\") " pod="openstack-operators/openstack-operator-controller-manager-7b54f464f6-h4kbw" Jan 29 06:50:36 crc kubenswrapper[5017]: I0129 06:50:36.820245 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/facf0821-eb7d-4510-bcb7-69387e467df9-metrics-certs\") pod \"openstack-operator-controller-manager-7b54f464f6-h4kbw\" (UID: \"facf0821-eb7d-4510-bcb7-69387e467df9\") " pod="openstack-operators/openstack-operator-controller-manager-7b54f464f6-h4kbw" Jan 29 06:50:36 crc kubenswrapper[5017]: E0129 06:50:36.820432 5017 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 29 06:50:36 crc kubenswrapper[5017]: E0129 06:50:36.820495 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/facf0821-eb7d-4510-bcb7-69387e467df9-metrics-certs podName:facf0821-eb7d-4510-bcb7-69387e467df9 nodeName:}" failed. No retries permitted until 2026-01-29 06:50:37.820479347 +0000 UTC m=+924.194926957 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/facf0821-eb7d-4510-bcb7-69387e467df9-metrics-certs") pod "openstack-operator-controller-manager-7b54f464f6-h4kbw" (UID: "facf0821-eb7d-4510-bcb7-69387e467df9") : secret "metrics-server-cert" not found Jan 29 06:50:36 crc kubenswrapper[5017]: E0129 06:50:36.822223 5017 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 29 06:50:36 crc kubenswrapper[5017]: E0129 06:50:36.822317 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/facf0821-eb7d-4510-bcb7-69387e467df9-webhook-certs podName:facf0821-eb7d-4510-bcb7-69387e467df9 nodeName:}" failed. No retries permitted until 2026-01-29 06:50:37.822294582 +0000 UTC m=+924.196742192 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/facf0821-eb7d-4510-bcb7-69387e467df9-webhook-certs") pod "openstack-operator-controller-manager-7b54f464f6-h4kbw" (UID: "facf0821-eb7d-4510-bcb7-69387e467df9") : secret "webhook-server-cert" not found Jan 29 06:50:36 crc kubenswrapper[5017]: I0129 06:50:36.826435 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-sd7m9"] Jan 29 06:50:36 crc kubenswrapper[5017]: I0129 06:50:36.843011 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-5ckck"] Jan 29 06:50:36 crc kubenswrapper[5017]: I0129 06:50:36.850574 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-bxzj9"] Jan 29 06:50:36 crc kubenswrapper[5017]: W0129 06:50:36.861220 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4d8182ea_62eb_455e_b34c_e5028514c4e1.slice/crio-5a1e45b80cefdb2e46059d1aba6f61c08d63a0c510483e3ca9690967bfa67714 WatchSource:0}: Error finding container 5a1e45b80cefdb2e46059d1aba6f61c08d63a0c510483e3ca9690967bfa67714: Status 404 returned error can't find the container with id 5a1e45b80cefdb2e46059d1aba6f61c08d63a0c510483e3ca9690967bfa67714 Jan 29 06:50:36 crc kubenswrapper[5017]: I0129 06:50:36.866113 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-hlh7p"] Jan 29 06:50:36 crc kubenswrapper[5017]: I0129 06:50:36.887876 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-sd7m9" event={"ID":"b51d682b-635c-44de-8d9e-945127aaeb63","Type":"ContainerStarted","Data":"91bb5d1694c05ab1ee2871d2de36453609650b7b8e675bdf1f8d6a9d78268ca1"} Jan 29 06:50:36 crc kubenswrapper[5017]: I0129 06:50:36.894313 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-7whnz" event={"ID":"2aa64d1e-6f8d-4c60-a26b-12ae9595051b","Type":"ContainerStarted","Data":"51b14d75fb044e91a464b95769c0b4e781d02e7a47db0568b461561196ce1de3"} Jan 29 06:50:36 crc kubenswrapper[5017]: I0129 06:50:36.897663 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-bxzj9" event={"ID":"4d8182ea-62eb-455e-b34c-e5028514c4e1","Type":"ContainerStarted","Data":"5a1e45b80cefdb2e46059d1aba6f61c08d63a0c510483e3ca9690967bfa67714"} Jan 29 06:50:36 crc 
kubenswrapper[5017]: I0129 06:50:36.898799 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-h4xwv" event={"ID":"326882c7-bd9e-4141-95c3-e21dadfd560d","Type":"ContainerStarted","Data":"64b79367ea08d2945fcd9c05ef6849caaa8aa6b6adee8635de73a2a1bd1b5e5d"} Jan 29 06:50:36 crc kubenswrapper[5017]: I0129 06:50:36.900069 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-7h92b" event={"ID":"6b1e3dc5-6234-4b08-a023-459b6ef45d8a","Type":"ContainerStarted","Data":"332ead0f48bf20daf5cd8f9309af1ca54094c5ccb2c6524a3b8601bce3c0ad3b"} Jan 29 06:50:36 crc kubenswrapper[5017]: I0129 06:50:36.900921 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-4vthv" event={"ID":"bda8f50d-d263-450b-922d-9e9da95811b3","Type":"ContainerStarted","Data":"7a54611bee00f98814ba8d28d9aec5f9ce83c731df86f135a498dfde4f23b6fb"} Jan 29 06:50:36 crc kubenswrapper[5017]: I0129 06:50:36.901832 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-hlh7p" event={"ID":"5aa1136e-d199-49c3-9bc3-5cbdaa19d552","Type":"ContainerStarted","Data":"9158b56a8dc1912d703308831a7b9a434cc155c9535e4a523ab40b6b325a5b8d"} Jan 29 06:50:36 crc kubenswrapper[5017]: I0129 06:50:36.903173 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-m9htc" event={"ID":"0cf68843-4944-46e5-940e-03273a49fd0a","Type":"ContainerStarted","Data":"b2e03633d6df3aa2e72c3b5238f14e58db86f11b432eec99804d592a1e93c60c"} Jan 29 06:50:36 crc kubenswrapper[5017]: I0129 06:50:36.904161 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-7zkj8" event={"ID":"a348ad8b-f3a0-4639-9839-2bb062e77e29","Type":"ContainerStarted","Data":"5631faf1cbef3650fb256ae86ce482024875e5f1e2f4526042e830cf64e3ea5c"} Jan 29 06:50:36 crc kubenswrapper[5017]: I0129 06:50:36.905070 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-5ckck" event={"ID":"f8aa8837-37c8-4461-bd3c-e2aae6e5dfab","Type":"ContainerStarted","Data":"faed6afa9db9c8441d884e21eca29982fe702613b5f2d60090cef1da7a2d2a37"} Jan 29 06:50:36 crc kubenswrapper[5017]: I0129 06:50:36.906356 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-whkgr" event={"ID":"f4577d7f-77c1-41dc-a6dc-37a8f967edd5","Type":"ContainerStarted","Data":"6427f86fde59b6efb1f136c23d2f9782297004f9938a8867a544118af2b45a99"} Jan 29 06:50:36 crc kubenswrapper[5017]: I0129 06:50:36.970199 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-l6gbj"] Jan 29 06:50:37 crc kubenswrapper[5017]: W0129 06:50:37.018435 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podce3c279f_3bc1_4e6a_a0f5_cb46e55ede8c.slice/crio-1b19bc0d54cb262649274dba7032416f5efb74b06ef17c58d31323585623f4a4 WatchSource:0}: Error finding container 1b19bc0d54cb262649274dba7032416f5efb74b06ef17c58d31323585623f4a4: Status 404 returned error can't find the container with id 1b19bc0d54cb262649274dba7032416f5efb74b06ef17c58d31323585623f4a4 Jan 29 06:50:37 crc kubenswrapper[5017]: I0129 
06:50:37.020950 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-lrhwt"] Jan 29 06:50:37 crc kubenswrapper[5017]: W0129 06:50:37.025190 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod863f4dee_1272_4cb9_8ced_84a5114d64af.slice/crio-8de651c54fe8e9b640442dbb0923582d45e38f62df3e6af1c8456e6a5a59ebd5 WatchSource:0}: Error finding container 8de651c54fe8e9b640442dbb0923582d45e38f62df3e6af1c8456e6a5a59ebd5: Status 404 returned error can't find the container with id 8de651c54fe8e9b640442dbb0923582d45e38f62df3e6af1c8456e6a5a59ebd5 Jan 29 06:50:37 crc kubenswrapper[5017]: E0129 06:50:37.025503 5017 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:f9bf288cd0c13912404027a58ea3b90d4092b641e8265adc5c88644ea7fe901a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-v7vs8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-64b5b76f97-lrhwt_openstack-operators(ce3c279f-3bc1-4e6a-a0f5-cb46e55ede8c): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 29 06:50:37 crc kubenswrapper[5017]: I0129 06:50:37.026462 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0c9c357e-634d-49c9-84bc-642deb32fa88-cert\") pod \"infra-operator-controller-manager-79955696d6-67c59\" (UID: 
\"0c9c357e-634d-49c9-84bc-642deb32fa88\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-67c59" Jan 29 06:50:37 crc kubenswrapper[5017]: E0129 06:50:37.026845 5017 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 29 06:50:37 crc kubenswrapper[5017]: E0129 06:50:37.026970 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0c9c357e-634d-49c9-84bc-642deb32fa88-cert podName:0c9c357e-634d-49c9-84bc-642deb32fa88 nodeName:}" failed. No retries permitted until 2026-01-29 06:50:39.026938206 +0000 UTC m=+925.401385816 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/0c9c357e-634d-49c9-84bc-642deb32fa88-cert") pod "infra-operator-controller-manager-79955696d6-67c59" (UID: "0c9c357e-634d-49c9-84bc-642deb32fa88") : secret "infra-operator-webhook-server-cert" not found Jan 29 06:50:37 crc kubenswrapper[5017]: E0129 06:50:37.027027 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-lrhwt" podUID="ce3c279f-3bc1-4e6a-a0f5-cb46e55ede8c" Jan 29 06:50:37 crc kubenswrapper[5017]: E0129 06:50:37.043381 5017 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:e6f2f361f1dcbb321407a5884951e16ff96e7b88942b10b548f27ad4de14a0be,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nrr9k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-6687f8d877-bpdqt_openstack-operators(863f4dee-1272-4cb9-8ced-84a5114d64af): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 29 06:50:37 crc kubenswrapper[5017]: I0129 06:50:37.043473 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-bpdqt"] Jan 29 06:50:37 crc kubenswrapper[5017]: E0129 06:50:37.046167 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-bpdqt" podUID="863f4dee-1272-4cb9-8ced-84a5114d64af" Jan 29 06:50:37 crc kubenswrapper[5017]: I0129 06:50:37.145150 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-szcqv"] Jan 29 06:50:37 crc kubenswrapper[5017]: I0129 06:50:37.152969 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-f7d98"] Jan 29 06:50:37 crc kubenswrapper[5017]: I0129 06:50:37.160889 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-l6mcx"] Jan 29 06:50:37 crc kubenswrapper[5017]: I0129 06:50:37.167089 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-wtmjj"] Jan 29 06:50:37 crc kubenswrapper[5017]: W0129 06:50:37.167603 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb9a03454_a7c9_47c6_9eda_6cf83e3140d7.slice/crio-ee3c6d53d3f266673a3adab59f01fedc1885087470f5780cb8330b8694c687f8 WatchSource:0}: Error finding container ee3c6d53d3f266673a3adab59f01fedc1885087470f5780cb8330b8694c687f8: Status 404 returned error can't find the container with id ee3c6d53d3f266673a3adab59f01fedc1885087470f5780cb8330b8694c687f8 Jan 29 06:50:37 crc kubenswrapper[5017]: W0129 06:50:37.169395 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod594ce113_eeb0_4eb4_9254_4f1695ced6c7.slice/crio-7f43d55722f895affcee809037df96eaa94520dbb258845e1bd78fd8da2d7d57 WatchSource:0}: Error finding container 7f43d55722f895affcee809037df96eaa94520dbb258845e1bd78fd8da2d7d57: Status 404 returned error can't find the container with id 7f43d55722f895affcee809037df96eaa94520dbb258845e1bd78fd8da2d7d57 Jan 29 06:50:37 crc kubenswrapper[5017]: E0129 06:50:37.171242 5017 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dtcwj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-68fc8c869-l6mcx_openstack-operators(b9a03454-a7c9-47c6-9eda-6cf83e3140d7): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 29 06:50:37 crc kubenswrapper[5017]: W0129 06:50:37.171390 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd7bc466f_b955_4c7a_a5dc_806e4a89b432.slice/crio-17c86abf87c18a06f3575cf470a1b4f55084499013b1521b716cbfe2c82c16fd WatchSource:0}: Error finding container 17c86abf87c18a06f3575cf470a1b4f55084499013b1521b716cbfe2c82c16fd: Status 404 returned error can't find the container with id 17c86abf87c18a06f3575cf470a1b4f55084499013b1521b716cbfe2c82c16fd Jan 29 06:50:37 crc kubenswrapper[5017]: E0129 06:50:37.174038 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-l6mcx" podUID="b9a03454-a7c9-47c6-9eda-6cf83e3140d7" Jan 29 06:50:37 crc kubenswrapper[5017]: E0129 06:50:37.176061 5017 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-g8dp7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-56f8bfcd9f-f7d98_openstack-operators(d7bc466f-b955-4c7a-a5dc-806e4a89b432): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 29 06:50:37 crc kubenswrapper[5017]: E0129 06:50:37.176554 5017 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-n6h9l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-564965969-wtmjj_openstack-operators(594ce113-eeb0-4eb4-9254-4f1695ced6c7): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 29 06:50:37 crc kubenswrapper[5017]: E0129 06:50:37.178238 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-f7d98" podUID="d7bc466f-b955-4c7a-a5dc-806e4a89b432" Jan 29 06:50:37 crc kubenswrapper[5017]: E0129 06:50:37.178215 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-wtmjj" podUID="594ce113-eeb0-4eb4-9254-4f1695ced6c7" Jan 29 06:50:37 crc kubenswrapper[5017]: I0129 06:50:37.269041 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-kjrxb"] Jan 29 06:50:37 crc kubenswrapper[5017]: I0129 06:50:37.284967 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-r8p8w"] Jan 29 06:50:37 crc kubenswrapper[5017]: W0129 06:50:37.297842 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode8379d4d_67d5_42f0_8c28_f0d617723886.slice/crio-91ff60e052fccc978a0f167568ced69beaf62cecc37e232ed646c3659e23458b WatchSource:0}: Error finding container 91ff60e052fccc978a0f167568ced69beaf62cecc37e232ed646c3659e23458b: Status 404 returned error can't find the container with id 91ff60e052fccc978a0f167568ced69beaf62cecc37e232ed646c3659e23458b Jan 29 06:50:37 crc kubenswrapper[5017]: E0129 06:50:37.302570 5017 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5qq6f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-r8p8w_openstack-operators(e8379d4d-67d5-42f0-8c28-f0d617723886): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 29 06:50:37 crc kubenswrapper[5017]: E0129 06:50:37.304722 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-r8p8w" podUID="e8379d4d-67d5-42f0-8c28-f0d617723886" Jan 29 06:50:37 crc kubenswrapper[5017]: I0129 06:50:37.544803 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7dd82efb-017d-4e70-86b1-f25e7026646a-cert\") pod \"openstack-baremetal-operator-controller-manager-d5d667db8-qqk7h\" (UID: \"7dd82efb-017d-4e70-86b1-f25e7026646a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-d5d667db8-qqk7h" Jan 29 06:50:37 crc kubenswrapper[5017]: E0129 06:50:37.545157 5017 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 06:50:37 crc kubenswrapper[5017]: E0129 06:50:37.545300 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7dd82efb-017d-4e70-86b1-f25e7026646a-cert podName:7dd82efb-017d-4e70-86b1-f25e7026646a nodeName:}" failed. No retries permitted until 2026-01-29 06:50:39.54527004 +0000 UTC m=+925.919717650 (durationBeforeRetry 2s). 
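[Annotation] The ErrImagePull: "pull QPS exceeded" failures above are not registry errors: they come from the kubelet's own client-side rate limit on image pulls. With this many operator pods scheduled in the same instant, pulls beyond the per-second budget are rejected immediately and the affected pods fall into backoff. The limit corresponds to the registryPullQPS / registryBurst kubelet configuration fields; the 5 QPS / burst 10 figures below are the commonly documented kubelet defaults, assumed here rather than read from this log. A minimal token-bucket sketch of this kind of limiter (the TokenBucket class is illustrative, not kubelet code):

    import time

    # Sketch: client-side token bucket of the kind the kubelet applies
    # to image pulls. qps=5 / burst=10 are assumed defaults
    # (registryPullQPS / registryBurst), not values from this log.
    class TokenBucket:
        def __init__(self, qps: float = 5.0, burst: int = 10):
            self.qps, self.burst = qps, burst
            self.tokens = float(burst)
            self.last = time.monotonic()

        def try_acquire(self) -> bool:
            now = time.monotonic()
            # Refill proportionally to elapsed time, up to the burst size.
            self.tokens = min(self.burst,
                              self.tokens + (now - self.last) * self.qps)
            self.last = now
            if self.tokens >= 1.0:
                self.tokens -= 1.0
                return True
            return False  # caller surfaces this as "pull QPS exceeded"

    bucket = TokenBucket()
    # 30 near-simultaneous pulls: the first 10 ride the burst,
    # the remainder are rejected until tokens refill.
    print(sum(bucket.try_acquire() for _ in range(30)))  # -> 10

This is why the burst of simultaneous operator deployments in this log produces a wave of identical "pull QPS exceeded" errors rather than failures tied to any particular image.
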
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7dd82efb-017d-4e70-86b1-f25e7026646a-cert") pod "openstack-baremetal-operator-controller-manager-d5d667db8-qqk7h" (UID: "7dd82efb-017d-4e70-86b1-f25e7026646a") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 06:50:37 crc kubenswrapper[5017]: I0129 06:50:37.850303 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/facf0821-eb7d-4510-bcb7-69387e467df9-metrics-certs\") pod \"openstack-operator-controller-manager-7b54f464f6-h4kbw\" (UID: \"facf0821-eb7d-4510-bcb7-69387e467df9\") " pod="openstack-operators/openstack-operator-controller-manager-7b54f464f6-h4kbw" Jan 29 06:50:37 crc kubenswrapper[5017]: E0129 06:50:37.850505 5017 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 29 06:50:37 crc kubenswrapper[5017]: I0129 06:50:37.850574 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/facf0821-eb7d-4510-bcb7-69387e467df9-webhook-certs\") pod \"openstack-operator-controller-manager-7b54f464f6-h4kbw\" (UID: \"facf0821-eb7d-4510-bcb7-69387e467df9\") " pod="openstack-operators/openstack-operator-controller-manager-7b54f464f6-h4kbw" Jan 29 06:50:37 crc kubenswrapper[5017]: E0129 06:50:37.850632 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/facf0821-eb7d-4510-bcb7-69387e467df9-metrics-certs podName:facf0821-eb7d-4510-bcb7-69387e467df9 nodeName:}" failed. No retries permitted until 2026-01-29 06:50:39.850608871 +0000 UTC m=+926.225056481 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/facf0821-eb7d-4510-bcb7-69387e467df9-metrics-certs") pod "openstack-operator-controller-manager-7b54f464f6-h4kbw" (UID: "facf0821-eb7d-4510-bcb7-69387e467df9") : secret "metrics-server-cert" not found Jan 29 06:50:37 crc kubenswrapper[5017]: E0129 06:50:37.850777 5017 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 29 06:50:37 crc kubenswrapper[5017]: E0129 06:50:37.850860 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/facf0821-eb7d-4510-bcb7-69387e467df9-webhook-certs podName:facf0821-eb7d-4510-bcb7-69387e467df9 nodeName:}" failed. No retries permitted until 2026-01-29 06:50:39.850841018 +0000 UTC m=+926.225288628 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/facf0821-eb7d-4510-bcb7-69387e467df9-webhook-certs") pod "openstack-operator-controller-manager-7b54f464f6-h4kbw" (UID: "facf0821-eb7d-4510-bcb7-69387e467df9") : secret "webhook-server-cert" not found Jan 29 06:50:37 crc kubenswrapper[5017]: I0129 06:50:37.924914 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-r8p8w" event={"ID":"e8379d4d-67d5-42f0-8c28-f0d617723886","Type":"ContainerStarted","Data":"91ff60e052fccc978a0f167568ced69beaf62cecc37e232ed646c3659e23458b"} Jan 29 06:50:37 crc kubenswrapper[5017]: I0129 06:50:37.928664 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-l6gbj" event={"ID":"a77928ad-eb54-45fc-a53e-b3f22cb62d53","Type":"ContainerStarted","Data":"ddb47b2b53fd694da1772694c91059b85a30e3d83f389a22d42fbe1cfee98d00"} Jan 29 06:50:37 crc kubenswrapper[5017]: I0129 06:50:37.932694 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-kjrxb" event={"ID":"5558b938-90cc-4177-ae13-4c8d6f65ea6d","Type":"ContainerStarted","Data":"ace0e1b38ffb18a8c1b183a788948532a91b35d8d1d7d7054ccc220c7f03e326"} Jan 29 06:50:37 crc kubenswrapper[5017]: I0129 06:50:37.937380 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-l6mcx" event={"ID":"b9a03454-a7c9-47c6-9eda-6cf83e3140d7","Type":"ContainerStarted","Data":"ee3c6d53d3f266673a3adab59f01fedc1885087470f5780cb8330b8694c687f8"} Jan 29 06:50:37 crc kubenswrapper[5017]: I0129 06:50:37.953560 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-lrhwt" event={"ID":"ce3c279f-3bc1-4e6a-a0f5-cb46e55ede8c","Type":"ContainerStarted","Data":"1b19bc0d54cb262649274dba7032416f5efb74b06ef17c58d31323585623f4a4"} Jan 29 06:50:37 crc kubenswrapper[5017]: E0129 06:50:37.959039 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:f9bf288cd0c13912404027a58ea3b90d4092b641e8265adc5c88644ea7fe901a\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-lrhwt" podUID="ce3c279f-3bc1-4e6a-a0f5-cb46e55ede8c" Jan 29 06:50:37 crc kubenswrapper[5017]: E0129 06:50:37.959134 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-r8p8w" podUID="e8379d4d-67d5-42f0-8c28-f0d617723886" Jan 29 06:50:37 crc kubenswrapper[5017]: E0129 06:50:37.959303 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-l6mcx" podUID="b9a03454-a7c9-47c6-9eda-6cf83e3140d7" Jan 29 06:50:37 crc kubenswrapper[5017]: I0129 06:50:37.989479 5017 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-f7d98" event={"ID":"d7bc466f-b955-4c7a-a5dc-806e4a89b432","Type":"ContainerStarted","Data":"17c86abf87c18a06f3575cf470a1b4f55084499013b1521b716cbfe2c82c16fd"} Jan 29 06:50:37 crc kubenswrapper[5017]: E0129 06:50:37.995602 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-f7d98" podUID="d7bc466f-b955-4c7a-a5dc-806e4a89b432" Jan 29 06:50:38 crc kubenswrapper[5017]: I0129 06:50:38.024411 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-bpdqt" event={"ID":"863f4dee-1272-4cb9-8ced-84a5114d64af","Type":"ContainerStarted","Data":"8de651c54fe8e9b640442dbb0923582d45e38f62df3e6af1c8456e6a5a59ebd5"} Jan 29 06:50:38 crc kubenswrapper[5017]: E0129 06:50:38.031323 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:e6f2f361f1dcbb321407a5884951e16ff96e7b88942b10b548f27ad4de14a0be\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-bpdqt" podUID="863f4dee-1272-4cb9-8ced-84a5114d64af" Jan 29 06:50:38 crc kubenswrapper[5017]: I0129 06:50:38.049280 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-wtmjj" event={"ID":"594ce113-eeb0-4eb4-9254-4f1695ced6c7","Type":"ContainerStarted","Data":"7f43d55722f895affcee809037df96eaa94520dbb258845e1bd78fd8da2d7d57"} Jan 29 06:50:38 crc kubenswrapper[5017]: E0129 06:50:38.051785 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-wtmjj" podUID="594ce113-eeb0-4eb4-9254-4f1695ced6c7" Jan 29 06:50:38 crc kubenswrapper[5017]: I0129 06:50:38.055571 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-szcqv" event={"ID":"283799a2-6b66-4255-8864-3a561dd04e89","Type":"ContainerStarted","Data":"e1590538f7efe78fa84180e22eb1201d3d4a17af749371fa2295b402fddf5ca2"} Jan 29 06:50:38 crc kubenswrapper[5017]: I0129 06:50:38.328321 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87b058e1-dc0b-4bcc-9ddf-3830cda03980" path="/var/lib/kubelet/pods/87b058e1-dc0b-4bcc-9ddf-3830cda03980/volumes" Jan 29 06:50:39 crc kubenswrapper[5017]: E0129 06:50:39.067367 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:f9bf288cd0c13912404027a58ea3b90d4092b641e8265adc5c88644ea7fe901a\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-lrhwt" podUID="ce3c279f-3bc1-4e6a-a0f5-cb46e55ede8c" Jan 29 06:50:39 crc kubenswrapper[5017]: E0129 06:50:39.068109 5017 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-wtmjj" podUID="594ce113-eeb0-4eb4-9254-4f1695ced6c7" Jan 29 06:50:39 crc kubenswrapper[5017]: E0129 06:50:39.068791 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-f7d98" podUID="d7bc466f-b955-4c7a-a5dc-806e4a89b432" Jan 29 06:50:39 crc kubenswrapper[5017]: E0129 06:50:39.068825 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-l6mcx" podUID="b9a03454-a7c9-47c6-9eda-6cf83e3140d7" Jan 29 06:50:39 crc kubenswrapper[5017]: E0129 06:50:39.068851 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:e6f2f361f1dcbb321407a5884951e16ff96e7b88942b10b548f27ad4de14a0be\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-bpdqt" podUID="863f4dee-1272-4cb9-8ced-84a5114d64af" Jan 29 06:50:39 crc kubenswrapper[5017]: E0129 06:50:39.069841 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-r8p8w" podUID="e8379d4d-67d5-42f0-8c28-f0d617723886" Jan 29 06:50:39 crc kubenswrapper[5017]: I0129 06:50:39.081245 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0c9c357e-634d-49c9-84bc-642deb32fa88-cert\") pod \"infra-operator-controller-manager-79955696d6-67c59\" (UID: \"0c9c357e-634d-49c9-84bc-642deb32fa88\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-67c59" Jan 29 06:50:39 crc kubenswrapper[5017]: E0129 06:50:39.081446 5017 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 29 06:50:39 crc kubenswrapper[5017]: E0129 06:50:39.081490 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0c9c357e-634d-49c9-84bc-642deb32fa88-cert podName:0c9c357e-634d-49c9-84bc-642deb32fa88 nodeName:}" failed. No retries permitted until 2026-01-29 06:50:43.081476465 +0000 UTC m=+929.455924075 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/0c9c357e-634d-49c9-84bc-642deb32fa88-cert") pod "infra-operator-controller-manager-79955696d6-67c59" (UID: "0c9c357e-634d-49c9-84bc-642deb32fa88") : secret "infra-operator-webhook-server-cert" not found Jan 29 06:50:39 crc kubenswrapper[5017]: I0129 06:50:39.593057 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7dd82efb-017d-4e70-86b1-f25e7026646a-cert\") pod \"openstack-baremetal-operator-controller-manager-d5d667db8-qqk7h\" (UID: \"7dd82efb-017d-4e70-86b1-f25e7026646a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-d5d667db8-qqk7h" Jan 29 06:50:39 crc kubenswrapper[5017]: E0129 06:50:39.593243 5017 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 06:50:39 crc kubenswrapper[5017]: E0129 06:50:39.593312 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7dd82efb-017d-4e70-86b1-f25e7026646a-cert podName:7dd82efb-017d-4e70-86b1-f25e7026646a nodeName:}" failed. No retries permitted until 2026-01-29 06:50:43.593295857 +0000 UTC m=+929.967743467 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7dd82efb-017d-4e70-86b1-f25e7026646a-cert") pod "openstack-baremetal-operator-controller-manager-d5d667db8-qqk7h" (UID: "7dd82efb-017d-4e70-86b1-f25e7026646a") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 06:50:39 crc kubenswrapper[5017]: I0129 06:50:39.897919 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/facf0821-eb7d-4510-bcb7-69387e467df9-metrics-certs\") pod \"openstack-operator-controller-manager-7b54f464f6-h4kbw\" (UID: \"facf0821-eb7d-4510-bcb7-69387e467df9\") " pod="openstack-operators/openstack-operator-controller-manager-7b54f464f6-h4kbw" Jan 29 06:50:39 crc kubenswrapper[5017]: E0129 06:50:39.898156 5017 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 29 06:50:39 crc kubenswrapper[5017]: I0129 06:50:39.898184 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/facf0821-eb7d-4510-bcb7-69387e467df9-webhook-certs\") pod \"openstack-operator-controller-manager-7b54f464f6-h4kbw\" (UID: \"facf0821-eb7d-4510-bcb7-69387e467df9\") " pod="openstack-operators/openstack-operator-controller-manager-7b54f464f6-h4kbw" Jan 29 06:50:39 crc kubenswrapper[5017]: E0129 06:50:39.898262 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/facf0821-eb7d-4510-bcb7-69387e467df9-metrics-certs podName:facf0821-eb7d-4510-bcb7-69387e467df9 nodeName:}" failed. No retries permitted until 2026-01-29 06:50:43.898235639 +0000 UTC m=+930.272683449 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/facf0821-eb7d-4510-bcb7-69387e467df9-metrics-certs") pod "openstack-operator-controller-manager-7b54f464f6-h4kbw" (UID: "facf0821-eb7d-4510-bcb7-69387e467df9") : secret "metrics-server-cert" not found Jan 29 06:50:39 crc kubenswrapper[5017]: E0129 06:50:39.898440 5017 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 29 06:50:39 crc kubenswrapper[5017]: E0129 06:50:39.898542 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/facf0821-eb7d-4510-bcb7-69387e467df9-webhook-certs podName:facf0821-eb7d-4510-bcb7-69387e467df9 nodeName:}" failed. No retries permitted until 2026-01-29 06:50:43.898517525 +0000 UTC m=+930.272965145 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/facf0821-eb7d-4510-bcb7-69387e467df9-webhook-certs") pod "openstack-operator-controller-manager-7b54f464f6-h4kbw" (UID: "facf0821-eb7d-4510-bcb7-69387e467df9") : secret "webhook-server-cert" not found Jan 29 06:50:43 crc kubenswrapper[5017]: I0129 06:50:43.158705 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0c9c357e-634d-49c9-84bc-642deb32fa88-cert\") pod \"infra-operator-controller-manager-79955696d6-67c59\" (UID: \"0c9c357e-634d-49c9-84bc-642deb32fa88\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-67c59" Jan 29 06:50:43 crc kubenswrapper[5017]: E0129 06:50:43.158893 5017 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 29 06:50:43 crc kubenswrapper[5017]: E0129 06:50:43.159296 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0c9c357e-634d-49c9-84bc-642deb32fa88-cert podName:0c9c357e-634d-49c9-84bc-642deb32fa88 nodeName:}" failed. No retries permitted until 2026-01-29 06:50:51.159273895 +0000 UTC m=+937.533721505 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/0c9c357e-634d-49c9-84bc-642deb32fa88-cert") pod "infra-operator-controller-manager-79955696d6-67c59" (UID: "0c9c357e-634d-49c9-84bc-642deb32fa88") : secret "infra-operator-webhook-server-cert" not found Jan 29 06:50:43 crc kubenswrapper[5017]: I0129 06:50:43.666905 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7dd82efb-017d-4e70-86b1-f25e7026646a-cert\") pod \"openstack-baremetal-operator-controller-manager-d5d667db8-qqk7h\" (UID: \"7dd82efb-017d-4e70-86b1-f25e7026646a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-d5d667db8-qqk7h" Jan 29 06:50:43 crc kubenswrapper[5017]: E0129 06:50:43.667178 5017 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 06:50:43 crc kubenswrapper[5017]: E0129 06:50:43.667458 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7dd82efb-017d-4e70-86b1-f25e7026646a-cert podName:7dd82efb-017d-4e70-86b1-f25e7026646a nodeName:}" failed. No retries permitted until 2026-01-29 06:50:51.667438177 +0000 UTC m=+938.041885787 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7dd82efb-017d-4e70-86b1-f25e7026646a-cert") pod "openstack-baremetal-operator-controller-manager-d5d667db8-qqk7h" (UID: "7dd82efb-017d-4e70-86b1-f25e7026646a") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 06:50:43 crc kubenswrapper[5017]: I0129 06:50:43.973674 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/facf0821-eb7d-4510-bcb7-69387e467df9-webhook-certs\") pod \"openstack-operator-controller-manager-7b54f464f6-h4kbw\" (UID: \"facf0821-eb7d-4510-bcb7-69387e467df9\") " pod="openstack-operators/openstack-operator-controller-manager-7b54f464f6-h4kbw" Jan 29 06:50:43 crc kubenswrapper[5017]: I0129 06:50:43.973845 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/facf0821-eb7d-4510-bcb7-69387e467df9-metrics-certs\") pod \"openstack-operator-controller-manager-7b54f464f6-h4kbw\" (UID: \"facf0821-eb7d-4510-bcb7-69387e467df9\") " pod="openstack-operators/openstack-operator-controller-manager-7b54f464f6-h4kbw" Jan 29 06:50:43 crc kubenswrapper[5017]: E0129 06:50:43.973883 5017 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 29 06:50:43 crc kubenswrapper[5017]: E0129 06:50:43.973989 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/facf0821-eb7d-4510-bcb7-69387e467df9-webhook-certs podName:facf0821-eb7d-4510-bcb7-69387e467df9 nodeName:}" failed. No retries permitted until 2026-01-29 06:50:51.973939728 +0000 UTC m=+938.348387338 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/facf0821-eb7d-4510-bcb7-69387e467df9-webhook-certs") pod "openstack-operator-controller-manager-7b54f464f6-h4kbw" (UID: "facf0821-eb7d-4510-bcb7-69387e467df9") : secret "webhook-server-cert" not found Jan 29 06:50:43 crc kubenswrapper[5017]: E0129 06:50:43.974169 5017 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 29 06:50:43 crc kubenswrapper[5017]: E0129 06:50:43.974304 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/facf0821-eb7d-4510-bcb7-69387e467df9-metrics-certs podName:facf0821-eb7d-4510-bcb7-69387e467df9 nodeName:}" failed. No retries permitted until 2026-01-29 06:50:51.974276906 +0000 UTC m=+938.348724536 (durationBeforeRetry 8s). 
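[Annotation] By this point the mount retries for the infra-operator, openstack-baremetal-operator and openstack-operator pods have backed off to 8s and the same four secrets are still absent. When triaging a journal like this, tallying the distinct missing secrets is faster than reading every retry. A small, hypothetical helper (missing_secrets and PATTERN are illustrative names; the regular expression matches the plain-quoted secret "..." not found phrasing used in the Error: lines of these kubelet messages):

    import re
    from collections import Counter

    # Hypothetical triage helper: tally the distinct missing secrets
    # reported by kubelet messages of the form shown in this journal.
    PATTERN = re.compile(r'secret "([^"]+)" not found')

    def missing_secrets(journal_text: str) -> Counter:
        return Counter(PATTERN.findall(journal_text))

    # Fed this section, it would yield counts for
    # openstack-baremetal-operator-webhook-server-cert,
    # metrics-server-cert, webhook-server-cert and
    # infra-operator-webhook-server-cert.
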
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/facf0821-eb7d-4510-bcb7-69387e467df9-metrics-certs") pod "openstack-operator-controller-manager-7b54f464f6-h4kbw" (UID: "facf0821-eb7d-4510-bcb7-69387e467df9") : secret "metrics-server-cert" not found Jan 29 06:50:49 crc kubenswrapper[5017]: E0129 06:50:49.306744 5017 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:2d493137559b74e23edb4788b7fbdb38b3e239df0f2d7e6e540e50b2355fc3cf" Jan 29 06:50:49 crc kubenswrapper[5017]: E0129 06:50:49.307620 5017 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:2d493137559b74e23edb4788b7fbdb38b3e239df0f2d7e6e540e50b2355fc3cf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jg69s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-67bf948998-hlh7p_openstack-operators(5aa1136e-d199-49c3-9bc3-5cbdaa19d552): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 06:50:49 crc kubenswrapper[5017]: E0129 06:50:49.308752 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-hlh7p" 
podUID="5aa1136e-d199-49c3-9bc3-5cbdaa19d552" Jan 29 06:50:49 crc kubenswrapper[5017]: E0129 06:50:49.872809 5017 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ironic-operator@sha256:bead175f27e5f074f723694f3b66e5aa7238411bf8a27a267b9a2936e4465521" Jan 29 06:50:49 crc kubenswrapper[5017]: E0129 06:50:49.873081 5017 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:bead175f27e5f074f723694f3b66e5aa7238411bf8a27a267b9a2936e4465521,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-s9zhk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-5f4b8bd54d-whkgr_openstack-operators(f4577d7f-77c1-41dc-a6dc-37a8f967edd5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 06:50:49 crc kubenswrapper[5017]: E0129 06:50:49.874324 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-whkgr" podUID="f4577d7f-77c1-41dc-a6dc-37a8f967edd5" Jan 29 06:50:50 crc kubenswrapper[5017]: E0129 06:50:50.150453 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling 
image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:bead175f27e5f074f723694f3b66e5aa7238411bf8a27a267b9a2936e4465521\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-whkgr" podUID="f4577d7f-77c1-41dc-a6dc-37a8f967edd5" Jan 29 06:50:50 crc kubenswrapper[5017]: E0129 06:50:50.150458 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:2d493137559b74e23edb4788b7fbdb38b3e239df0f2d7e6e540e50b2355fc3cf\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-hlh7p" podUID="5aa1136e-d199-49c3-9bc3-5cbdaa19d552" Jan 29 06:50:50 crc kubenswrapper[5017]: E0129 06:50:50.555445 5017 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:cd911e8d7a7a1104d77691dbaaf54370015cbb82859337746db5a9186d5dc566" Jan 29 06:50:50 crc kubenswrapper[5017]: E0129 06:50:50.555668 5017 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:cd911e8d7a7a1104d77691dbaaf54370015cbb82859337746db5a9186d5dc566,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-h658x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-7dd968899f-5ckck_openstack-operators(f8aa8837-37c8-4461-bd3c-e2aae6e5dfab): 
ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 06:50:50 crc kubenswrapper[5017]: E0129 06:50:50.557556 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-5ckck" podUID="f8aa8837-37c8-4461-bd3c-e2aae6e5dfab" Jan 29 06:50:51 crc kubenswrapper[5017]: E0129 06:50:51.093606 5017 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:319c969e88f109b26487a9f5a67203682803d7386424703ab7ca0340be99ae17" Jan 29 06:50:51 crc kubenswrapper[5017]: E0129 06:50:51.093836 5017 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:319c969e88f109b26487a9f5a67203682803d7386424703ab7ca0340be99ae17,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4nfhd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-84f48565d4-bxzj9_openstack-operators(4d8182ea-62eb-455e-b34c-e5028514c4e1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 06:50:51 crc kubenswrapper[5017]: E0129 06:50:51.095029 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-bxzj9" podUID="4d8182ea-62eb-455e-b34c-e5028514c4e1" Jan 29 06:50:51 crc kubenswrapper[5017]: E0129 06:50:51.156227 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:319c969e88f109b26487a9f5a67203682803d7386424703ab7ca0340be99ae17\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-bxzj9" podUID="4d8182ea-62eb-455e-b34c-e5028514c4e1" Jan 29 06:50:51 crc kubenswrapper[5017]: E0129 06:50:51.157235 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:cd911e8d7a7a1104d77691dbaaf54370015cbb82859337746db5a9186d5dc566\\\"\"" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-5ckck" podUID="f8aa8837-37c8-4461-bd3c-e2aae6e5dfab" Jan 29 06:50:51 crc kubenswrapper[5017]: I0129 06:50:51.213201 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0c9c357e-634d-49c9-84bc-642deb32fa88-cert\") pod \"infra-operator-controller-manager-79955696d6-67c59\" (UID: \"0c9c357e-634d-49c9-84bc-642deb32fa88\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-67c59" Jan 29 06:50:51 crc kubenswrapper[5017]: I0129 06:50:51.221696 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0c9c357e-634d-49c9-84bc-642deb32fa88-cert\") pod \"infra-operator-controller-manager-79955696d6-67c59\" (UID: \"0c9c357e-634d-49c9-84bc-642deb32fa88\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-67c59" Jan 29 06:50:51 crc kubenswrapper[5017]: I0129 06:50:51.512147 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-67c59" Jan 29 06:50:51 crc kubenswrapper[5017]: I0129 06:50:51.721419 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7dd82efb-017d-4e70-86b1-f25e7026646a-cert\") pod \"openstack-baremetal-operator-controller-manager-d5d667db8-qqk7h\" (UID: \"7dd82efb-017d-4e70-86b1-f25e7026646a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-d5d667db8-qqk7h" Jan 29 06:50:51 crc kubenswrapper[5017]: E0129 06:50:51.721800 5017 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 06:50:51 crc kubenswrapper[5017]: E0129 06:50:51.721872 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7dd82efb-017d-4e70-86b1-f25e7026646a-cert podName:7dd82efb-017d-4e70-86b1-f25e7026646a nodeName:}" failed. No retries permitted until 2026-01-29 06:51:07.721851089 +0000 UTC m=+954.096298699 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7dd82efb-017d-4e70-86b1-f25e7026646a-cert") pod "openstack-baremetal-operator-controller-manager-d5d667db8-qqk7h" (UID: "7dd82efb-017d-4e70-86b1-f25e7026646a") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 06:50:51 crc kubenswrapper[5017]: E0129 06:50:51.724345 5017 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488" Jan 29 06:50:51 crc kubenswrapper[5017]: E0129 06:50:51.724555 5017 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-584rv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-5b964cf4cd-szcqv_openstack-operators(283799a2-6b66-4255-8864-3a561dd04e89): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 06:50:51 crc kubenswrapper[5017]: E0129 06:50:51.725854 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-szcqv" podUID="283799a2-6b66-4255-8864-3a561dd04e89" Jan 29 06:50:52 crc kubenswrapper[5017]: I0129 06:50:52.029199 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/facf0821-eb7d-4510-bcb7-69387e467df9-webhook-certs\") pod \"openstack-operator-controller-manager-7b54f464f6-h4kbw\" (UID: \"facf0821-eb7d-4510-bcb7-69387e467df9\") " pod="openstack-operators/openstack-operator-controller-manager-7b54f464f6-h4kbw" Jan 29 06:50:52 crc kubenswrapper[5017]: I0129 06:50:52.029284 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/facf0821-eb7d-4510-bcb7-69387e467df9-metrics-certs\") pod \"openstack-operator-controller-manager-7b54f464f6-h4kbw\" (UID: \"facf0821-eb7d-4510-bcb7-69387e467df9\") " pod="openstack-operators/openstack-operator-controller-manager-7b54f464f6-h4kbw" Jan 29 06:50:52 crc kubenswrapper[5017]: E0129 06:50:52.029378 5017 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 29 06:50:52 crc kubenswrapper[5017]: E0129 06:50:52.029394 5017 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 29 06:50:52 crc kubenswrapper[5017]: E0129 06:50:52.029452 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/facf0821-eb7d-4510-bcb7-69387e467df9-webhook-certs podName:facf0821-eb7d-4510-bcb7-69387e467df9 nodeName:}" failed. No retries permitted until 2026-01-29 06:51:08.029434717 +0000 UTC m=+954.403882327 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/facf0821-eb7d-4510-bcb7-69387e467df9-webhook-certs") pod "openstack-operator-controller-manager-7b54f464f6-h4kbw" (UID: "facf0821-eb7d-4510-bcb7-69387e467df9") : secret "webhook-server-cert" not found Jan 29 06:50:52 crc kubenswrapper[5017]: E0129 06:50:52.029469 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/facf0821-eb7d-4510-bcb7-69387e467df9-metrics-certs podName:facf0821-eb7d-4510-bcb7-69387e467df9 nodeName:}" failed. No retries permitted until 2026-01-29 06:51:08.029464117 +0000 UTC m=+954.403911727 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/facf0821-eb7d-4510-bcb7-69387e467df9-metrics-certs") pod "openstack-operator-controller-manager-7b54f464f6-h4kbw" (UID: "facf0821-eb7d-4510-bcb7-69387e467df9") : secret "metrics-server-cert" not found Jan 29 06:50:52 crc kubenswrapper[5017]: E0129 06:50:52.164582 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-szcqv" podUID="283799a2-6b66-4255-8864-3a561dd04e89" Jan 29 06:50:52 crc kubenswrapper[5017]: E0129 06:50:52.444890 5017 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:bbb46b8b3b69fdfad7bafc10a7e88f6ea58bcdc3c91e30beb79e24417d52e0f6" Jan 29 06:50:52 crc kubenswrapper[5017]: E0129 06:50:52.445125 5017 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:bbb46b8b3b69fdfad7bafc10a7e88f6ea58bcdc3c91e30beb79e24417d52e0f6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jhqh4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
neutron-operator-controller-manager-585dbc889-sd7m9_openstack-operators(b51d682b-635c-44de-8d9e-945127aaeb63): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 06:50:52 crc kubenswrapper[5017]: E0129 06:50:52.446300 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-sd7m9" podUID="b51d682b-635c-44de-8d9e-945127aaeb63" Jan 29 06:50:53 crc kubenswrapper[5017]: E0129 06:50:53.047851 5017 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:027cd7ab61ef5071d9ad6b729c95a98e51cd254642f01dc019d44cc98a9232f8" Jan 29 06:50:53 crc kubenswrapper[5017]: E0129 06:50:53.048478 5017 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:027cd7ab61ef5071d9ad6b729c95a98e51cd254642f01dc019d44cc98a9232f8,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fjcq8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-5fb775575f-7zkj8_openstack-operators(a348ad8b-f3a0-4639-9839-2bb062e77e29): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 06:50:53 crc 
kubenswrapper[5017]: E0129 06:50:53.049661 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-7zkj8" podUID="a348ad8b-f3a0-4639-9839-2bb062e77e29" Jan 29 06:50:53 crc kubenswrapper[5017]: E0129 06:50:53.173939 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:bbb46b8b3b69fdfad7bafc10a7e88f6ea58bcdc3c91e30beb79e24417d52e0f6\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-sd7m9" podUID="b51d682b-635c-44de-8d9e-945127aaeb63" Jan 29 06:50:53 crc kubenswrapper[5017]: E0129 06:50:53.175442 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:027cd7ab61ef5071d9ad6b729c95a98e51cd254642f01dc019d44cc98a9232f8\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-7zkj8" podUID="a348ad8b-f3a0-4639-9839-2bb062e77e29" Jan 29 06:50:53 crc kubenswrapper[5017]: E0129 06:50:53.654553 5017 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e" Jan 29 06:50:53 crc kubenswrapper[5017]: E0129 06:50:53.654775 5017 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pj54b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-55bff696bd-l6gbj_openstack-operators(a77928ad-eb54-45fc-a53e-b3f22cb62d53): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 06:50:53 crc kubenswrapper[5017]: E0129 06:50:53.655931 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-l6gbj" podUID="a77928ad-eb54-45fc-a53e-b3f22cb62d53" Jan 29 06:50:53 crc kubenswrapper[5017]: I0129 06:50:53.942991 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-67c59"] Jan 29 06:50:53 crc kubenswrapper[5017]: W0129 06:50:53.957899 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0c9c357e_634d_49c9_84bc_642deb32fa88.slice/crio-e3be73571fd92748fb38cb5e4d677543b6e44279ac1733e9309ad961cc9019ad WatchSource:0}: Error finding container e3be73571fd92748fb38cb5e4d677543b6e44279ac1733e9309ad961cc9019ad: Status 404 returned error can't find the container with id e3be73571fd92748fb38cb5e4d677543b6e44279ac1733e9309ad961cc9019ad Jan 29 06:50:54 crc kubenswrapper[5017]: I0129 06:50:54.183903 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-67c59" event={"ID":"0c9c357e-634d-49c9-84bc-642deb32fa88","Type":"ContainerStarted","Data":"e3be73571fd92748fb38cb5e4d677543b6e44279ac1733e9309ad961cc9019ad"} Jan 29 06:50:54 crc kubenswrapper[5017]: I0129 06:50:54.185892 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-h4xwv" event={"ID":"326882c7-bd9e-4141-95c3-e21dadfd560d","Type":"ContainerStarted","Data":"4ba876fe0d92081b4b34fd5dcbed1ab14674303c894d1a3b421d17078d774c1e"} Jan 29 06:50:54 crc kubenswrapper[5017]: I0129 06:50:54.186919 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-h4xwv" Jan 29 06:50:54 crc kubenswrapper[5017]: I0129 06:50:54.195143 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-kjrxb" event={"ID":"5558b938-90cc-4177-ae13-4c8d6f65ea6d","Type":"ContainerStarted","Data":"a1d645efb0b1031ef642e2361e519905e2d136af11bd86092d8e440d3e279e08"} Jan 29 06:50:54 crc kubenswrapper[5017]: I0129 06:50:54.195998 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/ovn-operator-controller-manager-788c46999f-kjrxb" Jan 29 06:50:54 crc kubenswrapper[5017]: I0129 06:50:54.198562 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-7h92b" event={"ID":"6b1e3dc5-6234-4b08-a023-459b6ef45d8a","Type":"ContainerStarted","Data":"3b1eb1523d4cbefc8133b1e7859d547d4c296f7fafb089b83d476c9d854fc33c"} Jan 29 06:50:54 crc kubenswrapper[5017]: I0129 06:50:54.199030 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-7h92b" Jan 29 06:50:54 crc kubenswrapper[5017]: I0129 06:50:54.203186 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-4vthv" event={"ID":"bda8f50d-d263-450b-922d-9e9da95811b3","Type":"ContainerStarted","Data":"9a4bfce47eece677de2cc6bdf914524ca81cd7f39f15d6050af917c0d495b606"} Jan 29 06:50:54 crc kubenswrapper[5017]: I0129 06:50:54.204004 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-4vthv" Jan 29 06:50:54 crc kubenswrapper[5017]: I0129 06:50:54.210342 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-m9htc" event={"ID":"0cf68843-4944-46e5-940e-03273a49fd0a","Type":"ContainerStarted","Data":"fc071e96eaa598630c05bb07694d1ca858dccec54f6d62abb275b60532da6a10"} Jan 29 06:50:54 crc kubenswrapper[5017]: I0129 06:50:54.210456 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-m9htc" Jan 29 06:50:54 crc kubenswrapper[5017]: I0129 06:50:54.217771 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-7whnz" event={"ID":"2aa64d1e-6f8d-4c60-a26b-12ae9595051b","Type":"ContainerStarted","Data":"3d4457921759a52516a3a0118b1d984d305f40b94d07597dbe205f540e40df05"} Jan 29 06:50:54 crc kubenswrapper[5017]: I0129 06:50:54.217996 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-7whnz" Jan 29 06:50:54 crc kubenswrapper[5017]: E0129 06:50:54.226234 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e\\\"\"" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-l6gbj" podUID="a77928ad-eb54-45fc-a53e-b3f22cb62d53" Jan 29 06:50:54 crc kubenswrapper[5017]: I0129 06:50:54.237791 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-h4xwv" podStartSLOduration=3.128114135 podStartE2EDuration="20.237765788s" podCreationTimestamp="2026-01-29 06:50:34 +0000 UTC" firstStartedPulling="2026-01-29 06:50:36.581941201 +0000 UTC m=+922.956388811" lastFinishedPulling="2026-01-29 06:50:53.691592854 +0000 UTC m=+940.066040464" observedRunningTime="2026-01-29 06:50:54.209839746 +0000 UTC m=+940.584287356" watchObservedRunningTime="2026-01-29 06:50:54.237765788 +0000 UTC m=+940.612213398" Jan 29 06:50:54 crc kubenswrapper[5017]: I0129 06:50:54.238773 5017 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-kjrxb" podStartSLOduration=2.822656097 podStartE2EDuration="19.238764283s" podCreationTimestamp="2026-01-29 06:50:35 +0000 UTC" firstStartedPulling="2026-01-29 06:50:37.278610317 +0000 UTC m=+923.653057927" lastFinishedPulling="2026-01-29 06:50:53.694718503 +0000 UTC m=+940.069166113" observedRunningTime="2026-01-29 06:50:54.231829261 +0000 UTC m=+940.606276871" watchObservedRunningTime="2026-01-29 06:50:54.238764283 +0000 UTC m=+940.613211893" Jan 29 06:50:54 crc kubenswrapper[5017]: I0129 06:50:54.267017 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-7h92b" podStartSLOduration=2.228441033 podStartE2EDuration="19.266936402s" podCreationTimestamp="2026-01-29 06:50:35 +0000 UTC" firstStartedPulling="2026-01-29 06:50:36.653029994 +0000 UTC m=+923.027477604" lastFinishedPulling="2026-01-29 06:50:53.691525363 +0000 UTC m=+940.065972973" observedRunningTime="2026-01-29 06:50:54.265427434 +0000 UTC m=+940.639875044" watchObservedRunningTime="2026-01-29 06:50:54.266936402 +0000 UTC m=+940.641384002" Jan 29 06:50:54 crc kubenswrapper[5017]: I0129 06:50:54.295755 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-4vthv" podStartSLOduration=3.061552485 podStartE2EDuration="20.295725846s" podCreationTimestamp="2026-01-29 06:50:34 +0000 UTC" firstStartedPulling="2026-01-29 06:50:36.455591578 +0000 UTC m=+922.830039188" lastFinishedPulling="2026-01-29 06:50:53.689764939 +0000 UTC m=+940.064212549" observedRunningTime="2026-01-29 06:50:54.292374562 +0000 UTC m=+940.666822192" watchObservedRunningTime="2026-01-29 06:50:54.295725846 +0000 UTC m=+940.670173466" Jan 29 06:50:54 crc kubenswrapper[5017]: I0129 06:50:54.381550 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-m9htc" podStartSLOduration=3.145528477 podStartE2EDuration="20.381522903s" podCreationTimestamp="2026-01-29 06:50:34 +0000 UTC" firstStartedPulling="2026-01-29 06:50:36.455204639 +0000 UTC m=+922.829652249" lastFinishedPulling="2026-01-29 06:50:53.691199065 +0000 UTC m=+940.065646675" observedRunningTime="2026-01-29 06:50:54.342814493 +0000 UTC m=+940.717262103" watchObservedRunningTime="2026-01-29 06:50:54.381522903 +0000 UTC m=+940.755970503" Jan 29 06:50:54 crc kubenswrapper[5017]: I0129 06:50:54.410589 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-7whnz" podStartSLOduration=3.329010088 podStartE2EDuration="20.410561404s" podCreationTimestamp="2026-01-29 06:50:34 +0000 UTC" firstStartedPulling="2026-01-29 06:50:36.610835148 +0000 UTC m=+922.985282758" lastFinishedPulling="2026-01-29 06:50:53.692386464 +0000 UTC m=+940.066834074" observedRunningTime="2026-01-29 06:50:54.364092262 +0000 UTC m=+940.738539882" watchObservedRunningTime="2026-01-29 06:50:54.410561404 +0000 UTC m=+940.785009014" Jan 29 06:50:56 crc kubenswrapper[5017]: I0129 06:50:56.539461 5017 patch_prober.go:28] interesting pod/machine-config-daemon-895pl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 06:50:56 crc 
kubenswrapper[5017]: I0129 06:50:56.540127 5017 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 06:51:02 crc kubenswrapper[5017]: I0129 06:51:02.293578 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-bpdqt" event={"ID":"863f4dee-1272-4cb9-8ced-84a5114d64af","Type":"ContainerStarted","Data":"cc96e18f4411f4c23a09b481a318d5aee756a8a55970685568bce2b9fca816a2"} Jan 29 06:51:02 crc kubenswrapper[5017]: I0129 06:51:02.295159 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-bpdqt" Jan 29 06:51:02 crc kubenswrapper[5017]: I0129 06:51:02.298621 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-67c59" event={"ID":"0c9c357e-634d-49c9-84bc-642deb32fa88","Type":"ContainerStarted","Data":"d9713e02051d13c4fb8ef6d62006d1db26c0c69a5dfad5dc5b4df17ec9e93937"} Jan 29 06:51:02 crc kubenswrapper[5017]: I0129 06:51:02.298770 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79955696d6-67c59" Jan 29 06:51:02 crc kubenswrapper[5017]: I0129 06:51:02.302507 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-wtmjj" event={"ID":"594ce113-eeb0-4eb4-9254-4f1695ced6c7","Type":"ContainerStarted","Data":"673dec3e44cff54e25895702fc8074588bbd81b5ec35ca89f69b106cdd0c2c1f"} Jan 29 06:51:02 crc kubenswrapper[5017]: I0129 06:51:02.302779 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-564965969-wtmjj" Jan 29 06:51:02 crc kubenswrapper[5017]: I0129 06:51:02.304529 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-l6mcx" event={"ID":"b9a03454-a7c9-47c6-9eda-6cf83e3140d7","Type":"ContainerStarted","Data":"1f9e70179f4373d84ed248b283da41a804d1de0f137320816ffe1545ee2ca673"} Jan 29 06:51:02 crc kubenswrapper[5017]: I0129 06:51:02.304762 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-l6mcx" Jan 29 06:51:02 crc kubenswrapper[5017]: I0129 06:51:02.306135 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-lrhwt" event={"ID":"ce3c279f-3bc1-4e6a-a0f5-cb46e55ede8c","Type":"ContainerStarted","Data":"e9afd5da2f9a55deb610e0fb57c4f8a2a643159eac61d715b6e796740f477324"} Jan 29 06:51:02 crc kubenswrapper[5017]: I0129 06:51:02.306626 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-lrhwt" Jan 29 06:51:02 crc kubenswrapper[5017]: I0129 06:51:02.310034 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-r8p8w" event={"ID":"e8379d4d-67d5-42f0-8c28-f0d617723886","Type":"ContainerStarted","Data":"4ad1cd524ecc4187e253ea2f9d5c3b1d8371a24f16d437406218b846cb2e1c58"} Jan 29 06:51:02 crc kubenswrapper[5017]: 
I0129 06:51:02.314206 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-f7d98" event={"ID":"d7bc466f-b955-4c7a-a5dc-806e4a89b432","Type":"ContainerStarted","Data":"6649a228b3117a2e6fdb34663afc941e3d62f3c0078472fbf0b9e463dbeafebd"} Jan 29 06:51:02 crc kubenswrapper[5017]: I0129 06:51:02.314550 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-f7d98" Jan 29 06:51:02 crc kubenswrapper[5017]: I0129 06:51:02.334787 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-bpdqt" podStartSLOduration=2.607135264 podStartE2EDuration="27.334762137s" podCreationTimestamp="2026-01-29 06:50:35 +0000 UTC" firstStartedPulling="2026-01-29 06:50:37.043169789 +0000 UTC m=+923.417617409" lastFinishedPulling="2026-01-29 06:51:01.770796672 +0000 UTC m=+948.145244282" observedRunningTime="2026-01-29 06:51:02.330998444 +0000 UTC m=+948.705446054" watchObservedRunningTime="2026-01-29 06:51:02.334762137 +0000 UTC m=+948.709209747" Jan 29 06:51:02 crc kubenswrapper[5017]: I0129 06:51:02.410124 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-lrhwt" podStartSLOduration=2.742705016 podStartE2EDuration="27.410088835s" podCreationTimestamp="2026-01-29 06:50:35 +0000 UTC" firstStartedPulling="2026-01-29 06:50:37.025213114 +0000 UTC m=+923.399660724" lastFinishedPulling="2026-01-29 06:51:01.692596913 +0000 UTC m=+948.067044543" observedRunningTime="2026-01-29 06:51:02.366944875 +0000 UTC m=+948.741392665" watchObservedRunningTime="2026-01-29 06:51:02.410088835 +0000 UTC m=+948.784536445" Jan 29 06:51:02 crc kubenswrapper[5017]: I0129 06:51:02.423258 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-r8p8w" podStartSLOduration=3.032778019 podStartE2EDuration="27.423232201s" podCreationTimestamp="2026-01-29 06:50:35 +0000 UTC" firstStartedPulling="2026-01-29 06:50:37.302238403 +0000 UTC m=+923.676686013" lastFinishedPulling="2026-01-29 06:51:01.692692585 +0000 UTC m=+948.067140195" observedRunningTime="2026-01-29 06:51:02.400738413 +0000 UTC m=+948.775186023" watchObservedRunningTime="2026-01-29 06:51:02.423232201 +0000 UTC m=+948.797679811" Jan 29 06:51:02 crc kubenswrapper[5017]: I0129 06:51:02.439839 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-f7d98" podStartSLOduration=2.922048873 podStartE2EDuration="27.439820142s" podCreationTimestamp="2026-01-29 06:50:35 +0000 UTC" firstStartedPulling="2026-01-29 06:50:37.175751636 +0000 UTC m=+923.550199246" lastFinishedPulling="2026-01-29 06:51:01.693522905 +0000 UTC m=+948.067970515" observedRunningTime="2026-01-29 06:51:02.435558346 +0000 UTC m=+948.810005956" watchObservedRunningTime="2026-01-29 06:51:02.439820142 +0000 UTC m=+948.814267752" Jan 29 06:51:02 crc kubenswrapper[5017]: I0129 06:51:02.509135 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-564965969-wtmjj" podStartSLOduration=2.99327322 podStartE2EDuration="27.509112081s" podCreationTimestamp="2026-01-29 06:50:35 +0000 UTC" firstStartedPulling="2026-01-29 06:50:37.176403913 +0000 UTC m=+923.550851523" 
lastFinishedPulling="2026-01-29 06:51:01.692242764 +0000 UTC m=+948.066690384" observedRunningTime="2026-01-29 06:51:02.474145194 +0000 UTC m=+948.848592804" watchObservedRunningTime="2026-01-29 06:51:02.509112081 +0000 UTC m=+948.883559701" Jan 29 06:51:02 crc kubenswrapper[5017]: I0129 06:51:02.596544 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-l6mcx" podStartSLOduration=3.070972675 podStartE2EDuration="27.596523848s" podCreationTimestamp="2026-01-29 06:50:35 +0000 UTC" firstStartedPulling="2026-01-29 06:50:37.171016959 +0000 UTC m=+923.545464569" lastFinishedPulling="2026-01-29 06:51:01.696568122 +0000 UTC m=+948.071015742" observedRunningTime="2026-01-29 06:51:02.594944409 +0000 UTC m=+948.969392019" watchObservedRunningTime="2026-01-29 06:51:02.596523848 +0000 UTC m=+948.970971458" Jan 29 06:51:02 crc kubenswrapper[5017]: I0129 06:51:02.600485 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79955696d6-67c59" podStartSLOduration=19.87098099 podStartE2EDuration="27.600471286s" podCreationTimestamp="2026-01-29 06:50:35 +0000 UTC" firstStartedPulling="2026-01-29 06:50:53.962833131 +0000 UTC m=+940.337280741" lastFinishedPulling="2026-01-29 06:51:01.692323427 +0000 UTC m=+948.066771037" observedRunningTime="2026-01-29 06:51:02.562103555 +0000 UTC m=+948.936551165" watchObservedRunningTime="2026-01-29 06:51:02.600471286 +0000 UTC m=+948.974918896" Jan 29 06:51:03 crc kubenswrapper[5017]: I0129 06:51:03.324897 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-whkgr" event={"ID":"f4577d7f-77c1-41dc-a6dc-37a8f967edd5","Type":"ContainerStarted","Data":"a8203bfaa86ca98b72cd10b127062ac067bfe52fa9a5d8f8232a7c6cc846b6af"} Jan 29 06:51:03 crc kubenswrapper[5017]: I0129 06:51:03.326651 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-whkgr" Jan 29 06:51:03 crc kubenswrapper[5017]: I0129 06:51:03.359792 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-whkgr" podStartSLOduration=2.249929126 podStartE2EDuration="28.359772005s" podCreationTimestamp="2026-01-29 06:50:35 +0000 UTC" firstStartedPulling="2026-01-29 06:50:36.681840099 +0000 UTC m=+923.056287709" lastFinishedPulling="2026-01-29 06:51:02.791682978 +0000 UTC m=+949.166130588" observedRunningTime="2026-01-29 06:51:03.355806727 +0000 UTC m=+949.730254337" watchObservedRunningTime="2026-01-29 06:51:03.359772005 +0000 UTC m=+949.734219615" Jan 29 06:51:04 crc kubenswrapper[5017]: I0129 06:51:04.336557 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-5ckck" event={"ID":"f8aa8837-37c8-4461-bd3c-e2aae6e5dfab","Type":"ContainerStarted","Data":"8cae0100d7e1de599127017ffc9751c523ea2dcf73eaa684ae1e1dc7917a0296"} Jan 29 06:51:04 crc kubenswrapper[5017]: I0129 06:51:04.340278 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-5ckck" Jan 29 06:51:04 crc kubenswrapper[5017]: I0129 06:51:04.405272 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-5ckck" 
podStartSLOduration=2.522031944 podStartE2EDuration="29.405250131s" podCreationTimestamp="2026-01-29 06:50:35 +0000 UTC" firstStartedPulling="2026-01-29 06:50:36.871340178 +0000 UTC m=+923.245787788" lastFinishedPulling="2026-01-29 06:51:03.754558365 +0000 UTC m=+950.129005975" observedRunningTime="2026-01-29 06:51:04.399614801 +0000 UTC m=+950.774062421" watchObservedRunningTime="2026-01-29 06:51:04.405250131 +0000 UTC m=+950.779697731" Jan 29 06:51:05 crc kubenswrapper[5017]: I0129 06:51:05.340710 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-m9htc" Jan 29 06:51:05 crc kubenswrapper[5017]: I0129 06:51:05.346787 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-bxzj9" event={"ID":"4d8182ea-62eb-455e-b34c-e5028514c4e1","Type":"ContainerStarted","Data":"739634c70a924a01515dab870eb032afb4f5fc80321f368e7c3994f76a8e208a"} Jan 29 06:51:05 crc kubenswrapper[5017]: I0129 06:51:05.347118 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-bxzj9" Jan 29 06:51:05 crc kubenswrapper[5017]: I0129 06:51:05.348823 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-hlh7p" event={"ID":"5aa1136e-d199-49c3-9bc3-5cbdaa19d552","Type":"ContainerStarted","Data":"854d73f40050f19dcb6fb6c66ccaf838dcb8a57335a5db9ce1abe37811d9d89a"} Jan 29 06:51:05 crc kubenswrapper[5017]: I0129 06:51:05.349127 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-hlh7p" Jan 29 06:51:05 crc kubenswrapper[5017]: I0129 06:51:05.351258 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-7zkj8" event={"ID":"a348ad8b-f3a0-4639-9839-2bb062e77e29","Type":"ContainerStarted","Data":"2e8b776fabaf8bb5bee298078e26e5ba8dce6f252c61a3213dc940d026c843a9"} Jan 29 06:51:05 crc kubenswrapper[5017]: I0129 06:51:05.351532 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-7zkj8" Jan 29 06:51:05 crc kubenswrapper[5017]: I0129 06:51:05.367298 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-4vthv" Jan 29 06:51:05 crc kubenswrapper[5017]: I0129 06:51:05.367381 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-h4xwv" Jan 29 06:51:05 crc kubenswrapper[5017]: I0129 06:51:05.411193 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-hlh7p" podStartSLOduration=2.480530205 podStartE2EDuration="30.411171016s" podCreationTimestamp="2026-01-29 06:50:35 +0000 UTC" firstStartedPulling="2026-01-29 06:50:36.866351214 +0000 UTC m=+923.240798824" lastFinishedPulling="2026-01-29 06:51:04.796992025 +0000 UTC m=+951.171439635" observedRunningTime="2026-01-29 06:51:05.407996327 +0000 UTC m=+951.782443947" watchObservedRunningTime="2026-01-29 06:51:05.411171016 +0000 UTC m=+951.785618626" Jan 29 06:51:05 crc kubenswrapper[5017]: I0129 06:51:05.414468 5017 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-7zkj8" podStartSLOduration=2.271116341 podStartE2EDuration="30.414458397s" podCreationTimestamp="2026-01-29 06:50:35 +0000 UTC" firstStartedPulling="2026-01-29 06:50:36.655938366 +0000 UTC m=+923.030385976" lastFinishedPulling="2026-01-29 06:51:04.799280422 +0000 UTC m=+951.173728032" observedRunningTime="2026-01-29 06:51:05.377755367 +0000 UTC m=+951.752202987" watchObservedRunningTime="2026-01-29 06:51:05.414458397 +0000 UTC m=+951.788906007" Jan 29 06:51:05 crc kubenswrapper[5017]: I0129 06:51:05.434477 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-7whnz" Jan 29 06:51:05 crc kubenswrapper[5017]: I0129 06:51:05.437805 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-bxzj9" podStartSLOduration=2.5226431590000002 podStartE2EDuration="30.437787206s" podCreationTimestamp="2026-01-29 06:50:35 +0000 UTC" firstStartedPulling="2026-01-29 06:50:36.881418148 +0000 UTC m=+923.255865758" lastFinishedPulling="2026-01-29 06:51:04.796562195 +0000 UTC m=+951.171009805" observedRunningTime="2026-01-29 06:51:05.432056623 +0000 UTC m=+951.806504233" watchObservedRunningTime="2026-01-29 06:51:05.437787206 +0000 UTC m=+951.812234806" Jan 29 06:51:05 crc kubenswrapper[5017]: I0129 06:51:05.530103 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-7h92b" Jan 29 06:51:06 crc kubenswrapper[5017]: I0129 06:51:06.360927 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-sd7m9" event={"ID":"b51d682b-635c-44de-8d9e-945127aaeb63","Type":"ContainerStarted","Data":"ae72596998d1fb8f7d795ab5a3f7b631710ffe5da88c9c6d142b2c33d7dbc862"} Jan 29 06:51:06 crc kubenswrapper[5017]: I0129 06:51:06.361813 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-sd7m9" Jan 29 06:51:06 crc kubenswrapper[5017]: I0129 06:51:06.406895 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-sd7m9" podStartSLOduration=2.34445985 podStartE2EDuration="31.406871497s" podCreationTimestamp="2026-01-29 06:50:35 +0000 UTC" firstStartedPulling="2026-01-29 06:50:36.866709263 +0000 UTC m=+923.241156873" lastFinishedPulling="2026-01-29 06:51:05.92912091 +0000 UTC m=+952.303568520" observedRunningTime="2026-01-29 06:51:06.402297293 +0000 UTC m=+952.776744913" watchObservedRunningTime="2026-01-29 06:51:06.406871497 +0000 UTC m=+952.781319117" Jan 29 06:51:06 crc kubenswrapper[5017]: I0129 06:51:06.517212 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-kjrxb" Jan 29 06:51:07 crc kubenswrapper[5017]: I0129 06:51:07.379194 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-szcqv" event={"ID":"283799a2-6b66-4255-8864-3a561dd04e89","Type":"ContainerStarted","Data":"5fb64a27f8cb966e8190a99ed5d30b47c1fd6379d7f423377423e58d6b361a30"} Jan 29 06:51:07 crc kubenswrapper[5017]: I0129 06:51:07.380258 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-szcqv" Jan 29 06:51:07 crc kubenswrapper[5017]: I0129 06:51:07.415664 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-szcqv" podStartSLOduration=2.601717249 podStartE2EDuration="32.415635541s" podCreationTimestamp="2026-01-29 06:50:35 +0000 UTC" firstStartedPulling="2026-01-29 06:50:37.131149901 +0000 UTC m=+923.505597511" lastFinishedPulling="2026-01-29 06:51:06.945068193 +0000 UTC m=+953.319515803" observedRunningTime="2026-01-29 06:51:07.414312699 +0000 UTC m=+953.788760329" watchObservedRunningTime="2026-01-29 06:51:07.415635541 +0000 UTC m=+953.790083171" Jan 29 06:51:07 crc kubenswrapper[5017]: I0129 06:51:07.774806 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7dd82efb-017d-4e70-86b1-f25e7026646a-cert\") pod \"openstack-baremetal-operator-controller-manager-d5d667db8-qqk7h\" (UID: \"7dd82efb-017d-4e70-86b1-f25e7026646a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-d5d667db8-qqk7h" Jan 29 06:51:07 crc kubenswrapper[5017]: I0129 06:51:07.782683 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7dd82efb-017d-4e70-86b1-f25e7026646a-cert\") pod \"openstack-baremetal-operator-controller-manager-d5d667db8-qqk7h\" (UID: \"7dd82efb-017d-4e70-86b1-f25e7026646a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-d5d667db8-qqk7h" Jan 29 06:51:08 crc kubenswrapper[5017]: I0129 06:51:08.077571 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/facf0821-eb7d-4510-bcb7-69387e467df9-metrics-certs\") pod \"openstack-operator-controller-manager-7b54f464f6-h4kbw\" (UID: \"facf0821-eb7d-4510-bcb7-69387e467df9\") " pod="openstack-operators/openstack-operator-controller-manager-7b54f464f6-h4kbw" Jan 29 06:51:08 crc kubenswrapper[5017]: I0129 06:51:08.077709 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/facf0821-eb7d-4510-bcb7-69387e467df9-webhook-certs\") pod \"openstack-operator-controller-manager-7b54f464f6-h4kbw\" (UID: \"facf0821-eb7d-4510-bcb7-69387e467df9\") " pod="openstack-operators/openstack-operator-controller-manager-7b54f464f6-h4kbw" Jan 29 06:51:08 crc kubenswrapper[5017]: I0129 06:51:08.079740 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-d5d667db8-qqk7h" Jan 29 06:51:08 crc kubenswrapper[5017]: I0129 06:51:08.082734 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/facf0821-eb7d-4510-bcb7-69387e467df9-webhook-certs\") pod \"openstack-operator-controller-manager-7b54f464f6-h4kbw\" (UID: \"facf0821-eb7d-4510-bcb7-69387e467df9\") " pod="openstack-operators/openstack-operator-controller-manager-7b54f464f6-h4kbw" Jan 29 06:51:08 crc kubenswrapper[5017]: I0129 06:51:08.084149 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/facf0821-eb7d-4510-bcb7-69387e467df9-metrics-certs\") pod \"openstack-operator-controller-manager-7b54f464f6-h4kbw\" (UID: \"facf0821-eb7d-4510-bcb7-69387e467df9\") " pod="openstack-operators/openstack-operator-controller-manager-7b54f464f6-h4kbw" Jan 29 06:51:08 crc kubenswrapper[5017]: I0129 06:51:08.204294 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-7b54f464f6-h4kbw" Jan 29 06:51:08 crc kubenswrapper[5017]: I0129 06:51:08.411322 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-d5d667db8-qqk7h"] Jan 29 06:51:08 crc kubenswrapper[5017]: I0129 06:51:08.726836 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7b54f464f6-h4kbw"] Jan 29 06:51:08 crc kubenswrapper[5017]: W0129 06:51:08.731238 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfacf0821_eb7d_4510_bcb7_69387e467df9.slice/crio-e04ee34b634f42da55a95a4d9f725181c9c7877d5ec0e490e01dc4202936c34d WatchSource:0}: Error finding container e04ee34b634f42da55a95a4d9f725181c9c7877d5ec0e490e01dc4202936c34d: Status 404 returned error can't find the container with id e04ee34b634f42da55a95a4d9f725181c9c7877d5ec0e490e01dc4202936c34d Jan 29 06:51:09 crc kubenswrapper[5017]: I0129 06:51:09.410919 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-l6gbj" event={"ID":"a77928ad-eb54-45fc-a53e-b3f22cb62d53","Type":"ContainerStarted","Data":"d03298cc31352d0383f68f5f40274ec11a042740153d620c7729c19cbdc078f8"} Jan 29 06:51:09 crc kubenswrapper[5017]: I0129 06:51:09.411671 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-l6gbj" Jan 29 06:51:09 crc kubenswrapper[5017]: I0129 06:51:09.413842 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7b54f464f6-h4kbw" event={"ID":"facf0821-eb7d-4510-bcb7-69387e467df9","Type":"ContainerStarted","Data":"4ba7ce292e6b5b7d3e3322e97142fddc00ef751531dc4d879c0d177d7ab78bbd"} Jan 29 06:51:09 crc kubenswrapper[5017]: I0129 06:51:09.413880 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7b54f464f6-h4kbw" event={"ID":"facf0821-eb7d-4510-bcb7-69387e467df9","Type":"ContainerStarted","Data":"e04ee34b634f42da55a95a4d9f725181c9c7877d5ec0e490e01dc4202936c34d"} Jan 29 06:51:09 crc kubenswrapper[5017]: I0129 06:51:09.413992 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/openstack-operator-controller-manager-7b54f464f6-h4kbw" Jan 29 06:51:09 crc kubenswrapper[5017]: I0129 06:51:09.415601 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-d5d667db8-qqk7h" event={"ID":"7dd82efb-017d-4e70-86b1-f25e7026646a","Type":"ContainerStarted","Data":"90c86aae3ddc39d3e9ae7b0b52e8efef58607d01feed6ec84edd0d7426d91e00"} Jan 29 06:51:09 crc kubenswrapper[5017]: I0129 06:51:09.425779 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-l6gbj" podStartSLOduration=2.667596344 podStartE2EDuration="34.425753939s" podCreationTimestamp="2026-01-29 06:50:35 +0000 UTC" firstStartedPulling="2026-01-29 06:50:37.019053971 +0000 UTC m=+923.393501581" lastFinishedPulling="2026-01-29 06:51:08.777211566 +0000 UTC m=+955.151659176" observedRunningTime="2026-01-29 06:51:09.424852266 +0000 UTC m=+955.799299876" watchObservedRunningTime="2026-01-29 06:51:09.425753939 +0000 UTC m=+955.800201549" Jan 29 06:51:09 crc kubenswrapper[5017]: I0129 06:51:09.469642 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-7b54f464f6-h4kbw" podStartSLOduration=34.469620276 podStartE2EDuration="34.469620276s" podCreationTimestamp="2026-01-29 06:50:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:51:09.46047813 +0000 UTC m=+955.834925750" watchObservedRunningTime="2026-01-29 06:51:09.469620276 +0000 UTC m=+955.844067886" Jan 29 06:51:11 crc kubenswrapper[5017]: I0129 06:51:11.435717 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-d5d667db8-qqk7h" event={"ID":"7dd82efb-017d-4e70-86b1-f25e7026646a","Type":"ContainerStarted","Data":"d6c9deb1242f55fdbe3c9b4b2e6705d4eeabbd18c39bc0b4dc8445f0410d7a66"} Jan 29 06:51:11 crc kubenswrapper[5017]: I0129 06:51:11.436330 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-d5d667db8-qqk7h" Jan 29 06:51:11 crc kubenswrapper[5017]: I0129 06:51:11.462340 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-d5d667db8-qqk7h" podStartSLOduration=34.283259455 podStartE2EDuration="36.462322021s" podCreationTimestamp="2026-01-29 06:50:35 +0000 UTC" firstStartedPulling="2026-01-29 06:51:08.442392654 +0000 UTC m=+954.816840264" lastFinishedPulling="2026-01-29 06:51:10.62145522 +0000 UTC m=+956.995902830" observedRunningTime="2026-01-29 06:51:11.458922537 +0000 UTC m=+957.833370147" watchObservedRunningTime="2026-01-29 06:51:11.462322021 +0000 UTC m=+957.836769631" Jan 29 06:51:11 crc kubenswrapper[5017]: I0129 06:51:11.518246 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79955696d6-67c59" Jan 29 06:51:15 crc kubenswrapper[5017]: I0129 06:51:15.543187 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-7zkj8" Jan 29 06:51:15 crc kubenswrapper[5017]: I0129 06:51:15.665268 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-whkgr" Jan 29 06:51:15 crc kubenswrapper[5017]: I0129 06:51:15.722323 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-bxzj9" Jan 29 06:51:15 crc kubenswrapper[5017]: I0129 06:51:15.768528 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-5ckck" Jan 29 06:51:15 crc kubenswrapper[5017]: I0129 06:51:15.842829 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-hlh7p" Jan 29 06:51:15 crc kubenswrapper[5017]: I0129 06:51:15.932751 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-sd7m9" Jan 29 06:51:16 crc kubenswrapper[5017]: I0129 06:51:16.190910 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-bpdqt" Jan 29 06:51:16 crc kubenswrapper[5017]: I0129 06:51:16.221110 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-l6gbj" Jan 29 06:51:16 crc kubenswrapper[5017]: I0129 06:51:16.313106 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-lrhwt" Jan 29 06:51:16 crc kubenswrapper[5017]: I0129 06:51:16.314080 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-l6mcx" Jan 29 06:51:16 crc kubenswrapper[5017]: I0129 06:51:16.352447 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-f7d98" Jan 29 06:51:16 crc kubenswrapper[5017]: I0129 06:51:16.367238 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-564965969-wtmjj" Jan 29 06:51:16 crc kubenswrapper[5017]: I0129 06:51:16.433536 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-szcqv" Jan 29 06:51:18 crc kubenswrapper[5017]: I0129 06:51:18.087837 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-d5d667db8-qqk7h" Jan 29 06:51:18 crc kubenswrapper[5017]: I0129 06:51:18.217805 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-7b54f464f6-h4kbw" Jan 29 06:51:26 crc kubenswrapper[5017]: I0129 06:51:26.538989 5017 patch_prober.go:28] interesting pod/machine-config-daemon-895pl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 06:51:26 crc kubenswrapper[5017]: I0129 06:51:26.539811 5017 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 06:51:26 crc kubenswrapper[5017]: I0129 06:51:26.539909 5017 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-895pl" Jan 29 06:51:26 crc kubenswrapper[5017]: I0129 06:51:26.540944 5017 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8f7da3626486d0c22b65bfd4936f285f08c55d6461ba11e5bddb44e28f11086f"} pod="openshift-machine-config-operator/machine-config-daemon-895pl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 06:51:26 crc kubenswrapper[5017]: I0129 06:51:26.541056 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" containerID="cri-o://8f7da3626486d0c22b65bfd4936f285f08c55d6461ba11e5bddb44e28f11086f" gracePeriod=600 Jan 29 06:51:27 crc kubenswrapper[5017]: I0129 06:51:27.575326 5017 generic.go:334] "Generic (PLEG): container finished" podID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerID="8f7da3626486d0c22b65bfd4936f285f08c55d6461ba11e5bddb44e28f11086f" exitCode=0 Jan 29 06:51:27 crc kubenswrapper[5017]: I0129 06:51:27.575399 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-895pl" event={"ID":"2672ef63-7861-4c3d-a1b4-03cc9d18f8e2","Type":"ContainerDied","Data":"8f7da3626486d0c22b65bfd4936f285f08c55d6461ba11e5bddb44e28f11086f"} Jan 29 06:51:27 crc kubenswrapper[5017]: I0129 06:51:27.575771 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-895pl" event={"ID":"2672ef63-7861-4c3d-a1b4-03cc9d18f8e2","Type":"ContainerStarted","Data":"70dfeea0251012308950e213d0ab72466a324bea818357cd6a2957c1747ca4d2"} Jan 29 06:51:27 crc kubenswrapper[5017]: I0129 06:51:27.575834 5017 scope.go:117] "RemoveContainer" containerID="b2452b80368892b3776d55e2c528464f9c2a090264bafafd3c6ec1fc4c343226" Jan 29 06:51:32 crc kubenswrapper[5017]: I0129 06:51:32.356358 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-4nxns"] Jan 29 06:51:32 crc kubenswrapper[5017]: I0129 06:51:32.358490 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84bb9d8bd9-4nxns" Jan 29 06:51:32 crc kubenswrapper[5017]: I0129 06:51:32.362420 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 29 06:51:32 crc kubenswrapper[5017]: I0129 06:51:32.362506 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-9x7ph" Jan 29 06:51:32 crc kubenswrapper[5017]: I0129 06:51:32.362759 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 29 06:51:32 crc kubenswrapper[5017]: I0129 06:51:32.362783 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 29 06:51:32 crc kubenswrapper[5017]: I0129 06:51:32.372350 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-4nxns"] Jan 29 06:51:32 crc kubenswrapper[5017]: I0129 06:51:32.425521 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-2lstr"] Jan 29 06:51:32 crc kubenswrapper[5017]: I0129 06:51:32.432516 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f854695bc-2lstr" Jan 29 06:51:32 crc kubenswrapper[5017]: I0129 06:51:32.434857 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 29 06:51:32 crc kubenswrapper[5017]: I0129 06:51:32.436945 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-2lstr"] Jan 29 06:51:32 crc kubenswrapper[5017]: I0129 06:51:32.485225 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fba91fed-6ade-404d-8774-3269e450278b-config\") pod \"dnsmasq-dns-5f854695bc-2lstr\" (UID: \"fba91fed-6ade-404d-8774-3269e450278b\") " pod="openstack/dnsmasq-dns-5f854695bc-2lstr" Jan 29 06:51:32 crc kubenswrapper[5017]: I0129 06:51:32.485594 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fba91fed-6ade-404d-8774-3269e450278b-dns-svc\") pod \"dnsmasq-dns-5f854695bc-2lstr\" (UID: \"fba91fed-6ade-404d-8774-3269e450278b\") " pod="openstack/dnsmasq-dns-5f854695bc-2lstr" Jan 29 06:51:32 crc kubenswrapper[5017]: I0129 06:51:32.485684 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbkjv\" (UniqueName: \"kubernetes.io/projected/6e65fd67-ea35-422a-be2f-bf4f914fff56-kube-api-access-gbkjv\") pod \"dnsmasq-dns-84bb9d8bd9-4nxns\" (UID: \"6e65fd67-ea35-422a-be2f-bf4f914fff56\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-4nxns" Jan 29 06:51:32 crc kubenswrapper[5017]: I0129 06:51:32.485790 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e65fd67-ea35-422a-be2f-bf4f914fff56-config\") pod \"dnsmasq-dns-84bb9d8bd9-4nxns\" (UID: \"6e65fd67-ea35-422a-be2f-bf4f914fff56\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-4nxns" Jan 29 06:51:32 crc kubenswrapper[5017]: I0129 06:51:32.485878 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jss5n\" (UniqueName: \"kubernetes.io/projected/fba91fed-6ade-404d-8774-3269e450278b-kube-api-access-jss5n\") pod \"dnsmasq-dns-5f854695bc-2lstr\" (UID: \"fba91fed-6ade-404d-8774-3269e450278b\") " 
pod="openstack/dnsmasq-dns-5f854695bc-2lstr" Jan 29 06:51:32 crc kubenswrapper[5017]: I0129 06:51:32.587607 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fba91fed-6ade-404d-8774-3269e450278b-dns-svc\") pod \"dnsmasq-dns-5f854695bc-2lstr\" (UID: \"fba91fed-6ade-404d-8774-3269e450278b\") " pod="openstack/dnsmasq-dns-5f854695bc-2lstr" Jan 29 06:51:32 crc kubenswrapper[5017]: I0129 06:51:32.587682 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbkjv\" (UniqueName: \"kubernetes.io/projected/6e65fd67-ea35-422a-be2f-bf4f914fff56-kube-api-access-gbkjv\") pod \"dnsmasq-dns-84bb9d8bd9-4nxns\" (UID: \"6e65fd67-ea35-422a-be2f-bf4f914fff56\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-4nxns" Jan 29 06:51:32 crc kubenswrapper[5017]: I0129 06:51:32.587712 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e65fd67-ea35-422a-be2f-bf4f914fff56-config\") pod \"dnsmasq-dns-84bb9d8bd9-4nxns\" (UID: \"6e65fd67-ea35-422a-be2f-bf4f914fff56\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-4nxns" Jan 29 06:51:32 crc kubenswrapper[5017]: I0129 06:51:32.587746 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jss5n\" (UniqueName: \"kubernetes.io/projected/fba91fed-6ade-404d-8774-3269e450278b-kube-api-access-jss5n\") pod \"dnsmasq-dns-5f854695bc-2lstr\" (UID: \"fba91fed-6ade-404d-8774-3269e450278b\") " pod="openstack/dnsmasq-dns-5f854695bc-2lstr" Jan 29 06:51:32 crc kubenswrapper[5017]: I0129 06:51:32.587805 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fba91fed-6ade-404d-8774-3269e450278b-config\") pod \"dnsmasq-dns-5f854695bc-2lstr\" (UID: \"fba91fed-6ade-404d-8774-3269e450278b\") " pod="openstack/dnsmasq-dns-5f854695bc-2lstr" Jan 29 06:51:32 crc kubenswrapper[5017]: I0129 06:51:32.590101 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fba91fed-6ade-404d-8774-3269e450278b-dns-svc\") pod \"dnsmasq-dns-5f854695bc-2lstr\" (UID: \"fba91fed-6ade-404d-8774-3269e450278b\") " pod="openstack/dnsmasq-dns-5f854695bc-2lstr" Jan 29 06:51:32 crc kubenswrapper[5017]: I0129 06:51:32.590221 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fba91fed-6ade-404d-8774-3269e450278b-config\") pod \"dnsmasq-dns-5f854695bc-2lstr\" (UID: \"fba91fed-6ade-404d-8774-3269e450278b\") " pod="openstack/dnsmasq-dns-5f854695bc-2lstr" Jan 29 06:51:32 crc kubenswrapper[5017]: I0129 06:51:32.591640 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e65fd67-ea35-422a-be2f-bf4f914fff56-config\") pod \"dnsmasq-dns-84bb9d8bd9-4nxns\" (UID: \"6e65fd67-ea35-422a-be2f-bf4f914fff56\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-4nxns" Jan 29 06:51:32 crc kubenswrapper[5017]: I0129 06:51:32.608639 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jss5n\" (UniqueName: \"kubernetes.io/projected/fba91fed-6ade-404d-8774-3269e450278b-kube-api-access-jss5n\") pod \"dnsmasq-dns-5f854695bc-2lstr\" (UID: \"fba91fed-6ade-404d-8774-3269e450278b\") " pod="openstack/dnsmasq-dns-5f854695bc-2lstr" Jan 29 06:51:32 crc kubenswrapper[5017]: I0129 06:51:32.609864 5017 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbkjv\" (UniqueName: \"kubernetes.io/projected/6e65fd67-ea35-422a-be2f-bf4f914fff56-kube-api-access-gbkjv\") pod \"dnsmasq-dns-84bb9d8bd9-4nxns\" (UID: \"6e65fd67-ea35-422a-be2f-bf4f914fff56\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-4nxns" Jan 29 06:51:32 crc kubenswrapper[5017]: I0129 06:51:32.681060 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84bb9d8bd9-4nxns" Jan 29 06:51:32 crc kubenswrapper[5017]: I0129 06:51:32.749158 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f854695bc-2lstr" Jan 29 06:51:33 crc kubenswrapper[5017]: I0129 06:51:33.105982 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-2lstr"] Jan 29 06:51:33 crc kubenswrapper[5017]: I0129 06:51:33.148619 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-2lstr"] Jan 29 06:51:33 crc kubenswrapper[5017]: I0129 06:51:33.162032 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-c7cbb8f79-khsnm"] Jan 29 06:51:33 crc kubenswrapper[5017]: I0129 06:51:33.164275 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-c7cbb8f79-khsnm" Jan 29 06:51:33 crc kubenswrapper[5017]: I0129 06:51:33.179326 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-c7cbb8f79-khsnm"] Jan 29 06:51:33 crc kubenswrapper[5017]: I0129 06:51:33.212522 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-4nxns"] Jan 29 06:51:33 crc kubenswrapper[5017]: I0129 06:51:33.307576 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8f8a4d22-67c6-4e51-9adb-6c96c07e01bb-dns-svc\") pod \"dnsmasq-dns-c7cbb8f79-khsnm\" (UID: \"8f8a4d22-67c6-4e51-9adb-6c96c07e01bb\") " pod="openstack/dnsmasq-dns-c7cbb8f79-khsnm" Jan 29 06:51:33 crc kubenswrapper[5017]: I0129 06:51:33.307650 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f8a4d22-67c6-4e51-9adb-6c96c07e01bb-config\") pod \"dnsmasq-dns-c7cbb8f79-khsnm\" (UID: \"8f8a4d22-67c6-4e51-9adb-6c96c07e01bb\") " pod="openstack/dnsmasq-dns-c7cbb8f79-khsnm" Jan 29 06:51:33 crc kubenswrapper[5017]: I0129 06:51:33.307687 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9df4t\" (UniqueName: \"kubernetes.io/projected/8f8a4d22-67c6-4e51-9adb-6c96c07e01bb-kube-api-access-9df4t\") pod \"dnsmasq-dns-c7cbb8f79-khsnm\" (UID: \"8f8a4d22-67c6-4e51-9adb-6c96c07e01bb\") " pod="openstack/dnsmasq-dns-c7cbb8f79-khsnm" Jan 29 06:51:33 crc kubenswrapper[5017]: I0129 06:51:33.409492 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9df4t\" (UniqueName: \"kubernetes.io/projected/8f8a4d22-67c6-4e51-9adb-6c96c07e01bb-kube-api-access-9df4t\") pod \"dnsmasq-dns-c7cbb8f79-khsnm\" (UID: \"8f8a4d22-67c6-4e51-9adb-6c96c07e01bb\") " pod="openstack/dnsmasq-dns-c7cbb8f79-khsnm" Jan 29 06:51:33 crc kubenswrapper[5017]: I0129 06:51:33.411000 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8f8a4d22-67c6-4e51-9adb-6c96c07e01bb-dns-svc\") pod 
\"dnsmasq-dns-c7cbb8f79-khsnm\" (UID: \"8f8a4d22-67c6-4e51-9adb-6c96c07e01bb\") " pod="openstack/dnsmasq-dns-c7cbb8f79-khsnm" Jan 29 06:51:33 crc kubenswrapper[5017]: I0129 06:51:33.411145 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f8a4d22-67c6-4e51-9adb-6c96c07e01bb-config\") pod \"dnsmasq-dns-c7cbb8f79-khsnm\" (UID: \"8f8a4d22-67c6-4e51-9adb-6c96c07e01bb\") " pod="openstack/dnsmasq-dns-c7cbb8f79-khsnm" Jan 29 06:51:33 crc kubenswrapper[5017]: I0129 06:51:33.413213 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f8a4d22-67c6-4e51-9adb-6c96c07e01bb-config\") pod \"dnsmasq-dns-c7cbb8f79-khsnm\" (UID: \"8f8a4d22-67c6-4e51-9adb-6c96c07e01bb\") " pod="openstack/dnsmasq-dns-c7cbb8f79-khsnm" Jan 29 06:51:33 crc kubenswrapper[5017]: I0129 06:51:33.413268 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8f8a4d22-67c6-4e51-9adb-6c96c07e01bb-dns-svc\") pod \"dnsmasq-dns-c7cbb8f79-khsnm\" (UID: \"8f8a4d22-67c6-4e51-9adb-6c96c07e01bb\") " pod="openstack/dnsmasq-dns-c7cbb8f79-khsnm" Jan 29 06:51:33 crc kubenswrapper[5017]: I0129 06:51:33.428621 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9df4t\" (UniqueName: \"kubernetes.io/projected/8f8a4d22-67c6-4e51-9adb-6c96c07e01bb-kube-api-access-9df4t\") pod \"dnsmasq-dns-c7cbb8f79-khsnm\" (UID: \"8f8a4d22-67c6-4e51-9adb-6c96c07e01bb\") " pod="openstack/dnsmasq-dns-c7cbb8f79-khsnm" Jan 29 06:51:33 crc kubenswrapper[5017]: I0129 06:51:33.506467 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-c7cbb8f79-khsnm" Jan 29 06:51:33 crc kubenswrapper[5017]: I0129 06:51:33.654612 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f854695bc-2lstr" event={"ID":"fba91fed-6ade-404d-8774-3269e450278b","Type":"ContainerStarted","Data":"4a138d133603793aafb36c5490006d1698baee629fd3e7cf673ad2ec243ec6dc"} Jan 29 06:51:33 crc kubenswrapper[5017]: I0129 06:51:33.655453 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84bb9d8bd9-4nxns" event={"ID":"6e65fd67-ea35-422a-be2f-bf4f914fff56","Type":"ContainerStarted","Data":"3379cdbf546d41d44d6a586f143502e4700353d93d7d912a13f3bc5f84b74cc3"} Jan 29 06:51:33 crc kubenswrapper[5017]: I0129 06:51:33.897873 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-4nxns"] Jan 29 06:51:33 crc kubenswrapper[5017]: I0129 06:51:33.924781 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-jm24m"] Jan 29 06:51:33 crc kubenswrapper[5017]: I0129 06:51:33.926599 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-95f5f6995-jm24m" Jan 29 06:51:33 crc kubenswrapper[5017]: I0129 06:51:33.937727 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-jm24m"] Jan 29 06:51:34 crc kubenswrapper[5017]: I0129 06:51:34.025212 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8001b26-5c08-42b9-9d59-f4422d318af8-config\") pod \"dnsmasq-dns-95f5f6995-jm24m\" (UID: \"c8001b26-5c08-42b9-9d59-f4422d318af8\") " pod="openstack/dnsmasq-dns-95f5f6995-jm24m" Jan 29 06:51:34 crc kubenswrapper[5017]: I0129 06:51:34.025307 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c8001b26-5c08-42b9-9d59-f4422d318af8-dns-svc\") pod \"dnsmasq-dns-95f5f6995-jm24m\" (UID: \"c8001b26-5c08-42b9-9d59-f4422d318af8\") " pod="openstack/dnsmasq-dns-95f5f6995-jm24m" Jan 29 06:51:34 crc kubenswrapper[5017]: I0129 06:51:34.025340 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tssx\" (UniqueName: \"kubernetes.io/projected/c8001b26-5c08-42b9-9d59-f4422d318af8-kube-api-access-4tssx\") pod \"dnsmasq-dns-95f5f6995-jm24m\" (UID: \"c8001b26-5c08-42b9-9d59-f4422d318af8\") " pod="openstack/dnsmasq-dns-95f5f6995-jm24m" Jan 29 06:51:34 crc kubenswrapper[5017]: I0129 06:51:34.026176 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-c7cbb8f79-khsnm"] Jan 29 06:51:34 crc kubenswrapper[5017]: W0129 06:51:34.073386 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8f8a4d22_67c6_4e51_9adb_6c96c07e01bb.slice/crio-1904b3717edc8623392cfc7d026f50dceac73a77a3544a3425eb7bc8ae6160a9 WatchSource:0}: Error finding container 1904b3717edc8623392cfc7d026f50dceac73a77a3544a3425eb7bc8ae6160a9: Status 404 returned error can't find the container with id 1904b3717edc8623392cfc7d026f50dceac73a77a3544a3425eb7bc8ae6160a9 Jan 29 06:51:34 crc kubenswrapper[5017]: I0129 06:51:34.126543 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8001b26-5c08-42b9-9d59-f4422d318af8-config\") pod \"dnsmasq-dns-95f5f6995-jm24m\" (UID: \"c8001b26-5c08-42b9-9d59-f4422d318af8\") " pod="openstack/dnsmasq-dns-95f5f6995-jm24m" Jan 29 06:51:34 crc kubenswrapper[5017]: I0129 06:51:34.126637 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c8001b26-5c08-42b9-9d59-f4422d318af8-dns-svc\") pod \"dnsmasq-dns-95f5f6995-jm24m\" (UID: \"c8001b26-5c08-42b9-9d59-f4422d318af8\") " pod="openstack/dnsmasq-dns-95f5f6995-jm24m" Jan 29 06:51:34 crc kubenswrapper[5017]: I0129 06:51:34.126668 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4tssx\" (UniqueName: \"kubernetes.io/projected/c8001b26-5c08-42b9-9d59-f4422d318af8-kube-api-access-4tssx\") pod \"dnsmasq-dns-95f5f6995-jm24m\" (UID: \"c8001b26-5c08-42b9-9d59-f4422d318af8\") " pod="openstack/dnsmasq-dns-95f5f6995-jm24m" Jan 29 06:51:34 crc kubenswrapper[5017]: I0129 06:51:34.127979 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8001b26-5c08-42b9-9d59-f4422d318af8-config\") pod \"dnsmasq-dns-95f5f6995-jm24m\" 
(UID: \"c8001b26-5c08-42b9-9d59-f4422d318af8\") " pod="openstack/dnsmasq-dns-95f5f6995-jm24m" Jan 29 06:51:34 crc kubenswrapper[5017]: I0129 06:51:34.129025 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c8001b26-5c08-42b9-9d59-f4422d318af8-dns-svc\") pod \"dnsmasq-dns-95f5f6995-jm24m\" (UID: \"c8001b26-5c08-42b9-9d59-f4422d318af8\") " pod="openstack/dnsmasq-dns-95f5f6995-jm24m" Jan 29 06:51:34 crc kubenswrapper[5017]: I0129 06:51:34.172264 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4tssx\" (UniqueName: \"kubernetes.io/projected/c8001b26-5c08-42b9-9d59-f4422d318af8-kube-api-access-4tssx\") pod \"dnsmasq-dns-95f5f6995-jm24m\" (UID: \"c8001b26-5c08-42b9-9d59-f4422d318af8\") " pod="openstack/dnsmasq-dns-95f5f6995-jm24m" Jan 29 06:51:34 crc kubenswrapper[5017]: I0129 06:51:34.285371 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-95f5f6995-jm24m" Jan 29 06:51:34 crc kubenswrapper[5017]: I0129 06:51:34.341172 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 29 06:51:34 crc kubenswrapper[5017]: I0129 06:51:34.351230 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 29 06:51:34 crc kubenswrapper[5017]: I0129 06:51:34.351372 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 29 06:51:34 crc kubenswrapper[5017]: I0129 06:51:34.353382 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-2ggfr" Jan 29 06:51:34 crc kubenswrapper[5017]: I0129 06:51:34.355836 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 29 06:51:34 crc kubenswrapper[5017]: I0129 06:51:34.356541 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 29 06:51:34 crc kubenswrapper[5017]: I0129 06:51:34.356709 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 29 06:51:34 crc kubenswrapper[5017]: I0129 06:51:34.356905 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 29 06:51:34 crc kubenswrapper[5017]: I0129 06:51:34.356947 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 29 06:51:34 crc kubenswrapper[5017]: I0129 06:51:34.356991 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 29 06:51:34 crc kubenswrapper[5017]: I0129 06:51:34.434749 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d30b013f-453f-4282-8b22-2a5270027828-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"d30b013f-453f-4282-8b22-2a5270027828\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 06:51:34 crc kubenswrapper[5017]: I0129 06:51:34.434797 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2h9jd\" (UniqueName: \"kubernetes.io/projected/d30b013f-453f-4282-8b22-2a5270027828-kube-api-access-2h9jd\") pod \"rabbitmq-cell1-server-0\" (UID: \"d30b013f-453f-4282-8b22-2a5270027828\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 
06:51:34 crc kubenswrapper[5017]: I0129 06:51:34.434823 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"d30b013f-453f-4282-8b22-2a5270027828\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 06:51:34 crc kubenswrapper[5017]: I0129 06:51:34.434869 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d30b013f-453f-4282-8b22-2a5270027828-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"d30b013f-453f-4282-8b22-2a5270027828\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 06:51:34 crc kubenswrapper[5017]: I0129 06:51:34.434892 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d30b013f-453f-4282-8b22-2a5270027828-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d30b013f-453f-4282-8b22-2a5270027828\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 06:51:34 crc kubenswrapper[5017]: I0129 06:51:34.434925 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d30b013f-453f-4282-8b22-2a5270027828-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"d30b013f-453f-4282-8b22-2a5270027828\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 06:51:34 crc kubenswrapper[5017]: I0129 06:51:34.434966 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d30b013f-453f-4282-8b22-2a5270027828-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"d30b013f-453f-4282-8b22-2a5270027828\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 06:51:34 crc kubenswrapper[5017]: I0129 06:51:34.434992 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d30b013f-453f-4282-8b22-2a5270027828-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d30b013f-453f-4282-8b22-2a5270027828\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 06:51:34 crc kubenswrapper[5017]: I0129 06:51:34.435017 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d30b013f-453f-4282-8b22-2a5270027828-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"d30b013f-453f-4282-8b22-2a5270027828\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 06:51:34 crc kubenswrapper[5017]: I0129 06:51:34.435078 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d30b013f-453f-4282-8b22-2a5270027828-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"d30b013f-453f-4282-8b22-2a5270027828\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 06:51:34 crc kubenswrapper[5017]: I0129 06:51:34.435098 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d30b013f-453f-4282-8b22-2a5270027828-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"d30b013f-453f-4282-8b22-2a5270027828\") " 
pod="openstack/rabbitmq-cell1-server-0" Jan 29 06:51:34 crc kubenswrapper[5017]: I0129 06:51:34.538484 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d30b013f-453f-4282-8b22-2a5270027828-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"d30b013f-453f-4282-8b22-2a5270027828\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 06:51:34 crc kubenswrapper[5017]: I0129 06:51:34.539536 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d30b013f-453f-4282-8b22-2a5270027828-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"d30b013f-453f-4282-8b22-2a5270027828\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 06:51:34 crc kubenswrapper[5017]: I0129 06:51:34.539590 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d30b013f-453f-4282-8b22-2a5270027828-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"d30b013f-453f-4282-8b22-2a5270027828\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 06:51:34 crc kubenswrapper[5017]: I0129 06:51:34.539612 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2h9jd\" (UniqueName: \"kubernetes.io/projected/d30b013f-453f-4282-8b22-2a5270027828-kube-api-access-2h9jd\") pod \"rabbitmq-cell1-server-0\" (UID: \"d30b013f-453f-4282-8b22-2a5270027828\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 06:51:34 crc kubenswrapper[5017]: I0129 06:51:34.539636 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"d30b013f-453f-4282-8b22-2a5270027828\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 06:51:34 crc kubenswrapper[5017]: I0129 06:51:34.539687 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d30b013f-453f-4282-8b22-2a5270027828-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"d30b013f-453f-4282-8b22-2a5270027828\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 06:51:34 crc kubenswrapper[5017]: I0129 06:51:34.539718 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d30b013f-453f-4282-8b22-2a5270027828-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d30b013f-453f-4282-8b22-2a5270027828\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 06:51:34 crc kubenswrapper[5017]: I0129 06:51:34.539782 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d30b013f-453f-4282-8b22-2a5270027828-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"d30b013f-453f-4282-8b22-2a5270027828\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 06:51:34 crc kubenswrapper[5017]: I0129 06:51:34.539823 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d30b013f-453f-4282-8b22-2a5270027828-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"d30b013f-453f-4282-8b22-2a5270027828\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 06:51:34 crc kubenswrapper[5017]: I0129 06:51:34.539854 5017 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d30b013f-453f-4282-8b22-2a5270027828-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d30b013f-453f-4282-8b22-2a5270027828\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 06:51:34 crc kubenswrapper[5017]: I0129 06:51:34.539906 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d30b013f-453f-4282-8b22-2a5270027828-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"d30b013f-453f-4282-8b22-2a5270027828\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 06:51:34 crc kubenswrapper[5017]: I0129 06:51:34.540933 5017 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"d30b013f-453f-4282-8b22-2a5270027828\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/rabbitmq-cell1-server-0" Jan 29 06:51:34 crc kubenswrapper[5017]: I0129 06:51:34.541265 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d30b013f-453f-4282-8b22-2a5270027828-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"d30b013f-453f-4282-8b22-2a5270027828\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 06:51:34 crc kubenswrapper[5017]: I0129 06:51:34.541450 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d30b013f-453f-4282-8b22-2a5270027828-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d30b013f-453f-4282-8b22-2a5270027828\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 06:51:34 crc kubenswrapper[5017]: I0129 06:51:34.542856 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d30b013f-453f-4282-8b22-2a5270027828-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"d30b013f-453f-4282-8b22-2a5270027828\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 06:51:34 crc kubenswrapper[5017]: I0129 06:51:34.542969 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d30b013f-453f-4282-8b22-2a5270027828-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"d30b013f-453f-4282-8b22-2a5270027828\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 06:51:34 crc kubenswrapper[5017]: I0129 06:51:34.543708 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d30b013f-453f-4282-8b22-2a5270027828-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d30b013f-453f-4282-8b22-2a5270027828\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 06:51:34 crc kubenswrapper[5017]: I0129 06:51:34.549268 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d30b013f-453f-4282-8b22-2a5270027828-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"d30b013f-453f-4282-8b22-2a5270027828\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 06:51:34 crc kubenswrapper[5017]: I0129 06:51:34.550250 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d30b013f-453f-4282-8b22-2a5270027828-rabbitmq-tls\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"d30b013f-453f-4282-8b22-2a5270027828\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 06:51:34 crc kubenswrapper[5017]: I0129 06:51:34.556702 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d30b013f-453f-4282-8b22-2a5270027828-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"d30b013f-453f-4282-8b22-2a5270027828\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 06:51:34 crc kubenswrapper[5017]: I0129 06:51:34.557642 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d30b013f-453f-4282-8b22-2a5270027828-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"d30b013f-453f-4282-8b22-2a5270027828\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 06:51:34 crc kubenswrapper[5017]: I0129 06:51:34.562998 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2h9jd\" (UniqueName: \"kubernetes.io/projected/d30b013f-453f-4282-8b22-2a5270027828-kube-api-access-2h9jd\") pod \"rabbitmq-cell1-server-0\" (UID: \"d30b013f-453f-4282-8b22-2a5270027828\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 06:51:34 crc kubenswrapper[5017]: I0129 06:51:34.572515 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"d30b013f-453f-4282-8b22-2a5270027828\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 06:51:34 crc kubenswrapper[5017]: I0129 06:51:34.665434 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c7cbb8f79-khsnm" event={"ID":"8f8a4d22-67c6-4e51-9adb-6c96c07e01bb","Type":"ContainerStarted","Data":"1904b3717edc8623392cfc7d026f50dceac73a77a3544a3425eb7bc8ae6160a9"} Jan 29 06:51:34 crc kubenswrapper[5017]: I0129 06:51:34.696304 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 29 06:51:34 crc kubenswrapper[5017]: I0129 06:51:34.830424 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-jm24m"] Jan 29 06:51:34 crc kubenswrapper[5017]: W0129 06:51:34.842270 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc8001b26_5c08_42b9_9d59_f4422d318af8.slice/crio-0477d5bec0de16d99bf717aa806141f3e7191eea49afd94d3bd158b6c1565ce5 WatchSource:0}: Error finding container 0477d5bec0de16d99bf717aa806141f3e7191eea49afd94d3bd158b6c1565ce5: Status 404 returned error can't find the container with id 0477d5bec0de16d99bf717aa806141f3e7191eea49afd94d3bd158b6c1565ce5 Jan 29 06:51:35 crc kubenswrapper[5017]: I0129 06:51:35.043109 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 29 06:51:35 crc kubenswrapper[5017]: I0129 06:51:35.045209 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 29 06:51:35 crc kubenswrapper[5017]: I0129 06:51:35.048485 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 29 06:51:35 crc kubenswrapper[5017]: I0129 06:51:35.048563 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 29 06:51:35 crc kubenswrapper[5017]: I0129 06:51:35.048485 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 29 06:51:35 crc kubenswrapper[5017]: I0129 06:51:35.048997 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 29 06:51:35 crc kubenswrapper[5017]: I0129 06:51:35.049306 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 29 06:51:35 crc kubenswrapper[5017]: I0129 06:51:35.048616 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 29 06:51:35 crc kubenswrapper[5017]: I0129 06:51:35.056700 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-t2zb4" Jan 29 06:51:35 crc kubenswrapper[5017]: I0129 06:51:35.080587 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 29 06:51:35 crc kubenswrapper[5017]: I0129 06:51:35.121611 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 29 06:51:35 crc kubenswrapper[5017]: I0129 06:51:35.177348 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a\") " pod="openstack/rabbitmq-server-0" Jan 29 06:51:35 crc kubenswrapper[5017]: I0129 06:51:35.177424 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a\") " pod="openstack/rabbitmq-server-0" Jan 29 06:51:35 crc kubenswrapper[5017]: I0129 06:51:35.177476 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a\") " pod="openstack/rabbitmq-server-0" Jan 29 06:51:35 crc kubenswrapper[5017]: I0129 06:51:35.177524 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a\") " pod="openstack/rabbitmq-server-0" Jan 29 06:51:35 crc kubenswrapper[5017]: I0129 06:51:35.177548 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a\") " pod="openstack/rabbitmq-server-0" Jan 29 06:51:35 crc kubenswrapper[5017]: I0129 06:51:35.177568 5017 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a\") " pod="openstack/rabbitmq-server-0" Jan 29 06:51:35 crc kubenswrapper[5017]: I0129 06:51:35.177595 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a-server-conf\") pod \"rabbitmq-server-0\" (UID: \"5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a\") " pod="openstack/rabbitmq-server-0" Jan 29 06:51:35 crc kubenswrapper[5017]: I0129 06:51:35.177625 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a\") " pod="openstack/rabbitmq-server-0" Jan 29 06:51:35 crc kubenswrapper[5017]: I0129 06:51:35.177644 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a-config-data\") pod \"rabbitmq-server-0\" (UID: \"5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a\") " pod="openstack/rabbitmq-server-0" Jan 29 06:51:35 crc kubenswrapper[5017]: I0129 06:51:35.177670 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a-pod-info\") pod \"rabbitmq-server-0\" (UID: \"5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a\") " pod="openstack/rabbitmq-server-0" Jan 29 06:51:35 crc kubenswrapper[5017]: I0129 06:51:35.177708 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5m7bw\" (UniqueName: \"kubernetes.io/projected/5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a-kube-api-access-5m7bw\") pod \"rabbitmq-server-0\" (UID: \"5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a\") " pod="openstack/rabbitmq-server-0" Jan 29 06:51:35 crc kubenswrapper[5017]: W0129 06:51:35.191721 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd30b013f_453f_4282_8b22_2a5270027828.slice/crio-73bca61a4de82d0b94453a14f55affd2ad9d19c9af4fc95e53d4f3a04ce53200 WatchSource:0}: Error finding container 73bca61a4de82d0b94453a14f55affd2ad9d19c9af4fc95e53d4f3a04ce53200: Status 404 returned error can't find the container with id 73bca61a4de82d0b94453a14f55affd2ad9d19c9af4fc95e53d4f3a04ce53200 Jan 29 06:51:35 crc kubenswrapper[5017]: I0129 06:51:35.280048 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a\") " pod="openstack/rabbitmq-server-0" Jan 29 06:51:35 crc kubenswrapper[5017]: I0129 06:51:35.280120 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a\") " pod="openstack/rabbitmq-server-0" Jan 29 06:51:35 crc 
kubenswrapper[5017]: I0129 06:51:35.280173 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a\") " pod="openstack/rabbitmq-server-0" Jan 29 06:51:35 crc kubenswrapper[5017]: I0129 06:51:35.280201 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a\") " pod="openstack/rabbitmq-server-0" Jan 29 06:51:35 crc kubenswrapper[5017]: I0129 06:51:35.280217 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a\") " pod="openstack/rabbitmq-server-0" Jan 29 06:51:35 crc kubenswrapper[5017]: I0129 06:51:35.280238 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a-server-conf\") pod \"rabbitmq-server-0\" (UID: \"5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a\") " pod="openstack/rabbitmq-server-0" Jan 29 06:51:35 crc kubenswrapper[5017]: I0129 06:51:35.280266 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a\") " pod="openstack/rabbitmq-server-0" Jan 29 06:51:35 crc kubenswrapper[5017]: I0129 06:51:35.280285 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a-config-data\") pod \"rabbitmq-server-0\" (UID: \"5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a\") " pod="openstack/rabbitmq-server-0" Jan 29 06:51:35 crc kubenswrapper[5017]: I0129 06:51:35.280307 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a-pod-info\") pod \"rabbitmq-server-0\" (UID: \"5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a\") " pod="openstack/rabbitmq-server-0" Jan 29 06:51:35 crc kubenswrapper[5017]: I0129 06:51:35.280348 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5m7bw\" (UniqueName: \"kubernetes.io/projected/5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a-kube-api-access-5m7bw\") pod \"rabbitmq-server-0\" (UID: \"5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a\") " pod="openstack/rabbitmq-server-0" Jan 29 06:51:35 crc kubenswrapper[5017]: I0129 06:51:35.280371 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a\") " pod="openstack/rabbitmq-server-0" Jan 29 06:51:35 crc kubenswrapper[5017]: I0129 06:51:35.280711 5017 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: 
\"5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-server-0" Jan 29 06:51:35 crc kubenswrapper[5017]: I0129 06:51:35.280778 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a\") " pod="openstack/rabbitmq-server-0" Jan 29 06:51:35 crc kubenswrapper[5017]: I0129 06:51:35.280945 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a\") " pod="openstack/rabbitmq-server-0" Jan 29 06:51:35 crc kubenswrapper[5017]: I0129 06:51:35.282049 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a\") " pod="openstack/rabbitmq-server-0" Jan 29 06:51:35 crc kubenswrapper[5017]: I0129 06:51:35.282091 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a-server-conf\") pod \"rabbitmq-server-0\" (UID: \"5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a\") " pod="openstack/rabbitmq-server-0" Jan 29 06:51:35 crc kubenswrapper[5017]: I0129 06:51:35.288497 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a\") " pod="openstack/rabbitmq-server-0" Jan 29 06:51:35 crc kubenswrapper[5017]: I0129 06:51:35.290680 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a\") " pod="openstack/rabbitmq-server-0" Jan 29 06:51:35 crc kubenswrapper[5017]: I0129 06:51:35.292804 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a-config-data\") pod \"rabbitmq-server-0\" (UID: \"5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a\") " pod="openstack/rabbitmq-server-0" Jan 29 06:51:35 crc kubenswrapper[5017]: I0129 06:51:35.295773 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a-pod-info\") pod \"rabbitmq-server-0\" (UID: \"5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a\") " pod="openstack/rabbitmq-server-0" Jan 29 06:51:35 crc kubenswrapper[5017]: I0129 06:51:35.300636 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5m7bw\" (UniqueName: \"kubernetes.io/projected/5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a-kube-api-access-5m7bw\") pod \"rabbitmq-server-0\" (UID: \"5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a\") " pod="openstack/rabbitmq-server-0" Jan 29 06:51:35 crc kubenswrapper[5017]: I0129 06:51:35.308994 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a\") " pod="openstack/rabbitmq-server-0" Jan 29 06:51:35 crc kubenswrapper[5017]: I0129 06:51:35.316877 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a\") " pod="openstack/rabbitmq-server-0" Jan 29 06:51:35 crc kubenswrapper[5017]: I0129 06:51:35.375982 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 29 06:51:35 crc kubenswrapper[5017]: I0129 06:51:35.686413 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95f5f6995-jm24m" event={"ID":"c8001b26-5c08-42b9-9d59-f4422d318af8","Type":"ContainerStarted","Data":"0477d5bec0de16d99bf717aa806141f3e7191eea49afd94d3bd158b6c1565ce5"} Jan 29 06:51:35 crc kubenswrapper[5017]: I0129 06:51:35.696087 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d30b013f-453f-4282-8b22-2a5270027828","Type":"ContainerStarted","Data":"73bca61a4de82d0b94453a14f55affd2ad9d19c9af4fc95e53d4f3a04ce53200"} Jan 29 06:51:36 crc kubenswrapper[5017]: I0129 06:51:36.030497 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 29 06:51:36 crc kubenswrapper[5017]: W0129 06:51:36.083744 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5f660da1_1d97_4b3b_ae3f_fb7ee90bf25a.slice/crio-efc0ef0c88363f230aa57bc0b472deaabef0bff3fed1c77fa3da2720414d7710 WatchSource:0}: Error finding container efc0ef0c88363f230aa57bc0b472deaabef0bff3fed1c77fa3da2720414d7710: Status 404 returned error can't find the container with id efc0ef0c88363f230aa57bc0b472deaabef0bff3fed1c77fa3da2720414d7710 Jan 29 06:51:36 crc kubenswrapper[5017]: I0129 06:51:36.650141 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 29 06:51:36 crc kubenswrapper[5017]: I0129 06:51:36.651507 5017 util.go:30] "No sandbox for pod can be found. 
Jan 29 06:51:36 crc kubenswrapper[5017]: I0129 06:51:36.651507 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Jan 29 06:51:36 crc kubenswrapper[5017]: I0129 06:51:36.662494 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Jan 29 06:51:36 crc kubenswrapper[5017]: I0129 06:51:36.663094 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data"
Jan 29 06:51:36 crc kubenswrapper[5017]: I0129 06:51:36.666669 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-nkr49"
Jan 29 06:51:36 crc kubenswrapper[5017]: I0129 06:51:36.666869 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts"
Jan 29 06:51:36 crc kubenswrapper[5017]: I0129 06:51:36.667024 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc"
Jan 29 06:51:36 crc kubenswrapper[5017]: I0129 06:51:36.667339 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle"
Jan 29 06:51:36 crc kubenswrapper[5017]: I0129 06:51:36.715903 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a","Type":"ContainerStarted","Data":"efc0ef0c88363f230aa57bc0b472deaabef0bff3fed1c77fa3da2720414d7710"}
Jan 29 06:51:36 crc kubenswrapper[5017]: I0129 06:51:36.809653 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9af88cca-e43b-483d-beae-d6a56940aff7-operator-scripts\") pod \"openstack-galera-0\" (UID: \"9af88cca-e43b-483d-beae-d6a56940aff7\") " pod="openstack/openstack-galera-0"
Jan 29 06:51:36 crc kubenswrapper[5017]: I0129 06:51:36.809723 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9af88cca-e43b-483d-beae-d6a56940aff7-config-data-generated\") pod \"openstack-galera-0\" (UID: \"9af88cca-e43b-483d-beae-d6a56940aff7\") " pod="openstack/openstack-galera-0"
Jan 29 06:51:36 crc kubenswrapper[5017]: I0129 06:51:36.809755 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9af88cca-e43b-483d-beae-d6a56940aff7-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"9af88cca-e43b-483d-beae-d6a56940aff7\") " pod="openstack/openstack-galera-0"
Jan 29 06:51:36 crc kubenswrapper[5017]: I0129 06:51:36.810116 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-galera-0\" (UID: \"9af88cca-e43b-483d-beae-d6a56940aff7\") " pod="openstack/openstack-galera-0"
Jan 29 06:51:36 crc kubenswrapper[5017]: I0129 06:51:36.810175 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brhn7\" (UniqueName: \"kubernetes.io/projected/9af88cca-e43b-483d-beae-d6a56940aff7-kube-api-access-brhn7\") pod \"openstack-galera-0\" (UID: \"9af88cca-e43b-483d-beae-d6a56940aff7\") " pod="openstack/openstack-galera-0"
Jan 29 06:51:36 crc kubenswrapper[5017]: I0129 06:51:36.810203 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9af88cca-e43b-483d-beae-d6a56940aff7-config-data-default\") pod \"openstack-galera-0\" (UID: \"9af88cca-e43b-483d-beae-d6a56940aff7\") " pod="openstack/openstack-galera-0"
Jan 29 06:51:36 crc kubenswrapper[5017]: I0129 06:51:36.810365 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9af88cca-e43b-483d-beae-d6a56940aff7-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"9af88cca-e43b-483d-beae-d6a56940aff7\") " pod="openstack/openstack-galera-0"
Jan 29 06:51:36 crc kubenswrapper[5017]: I0129 06:51:36.810635 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9af88cca-e43b-483d-beae-d6a56940aff7-kolla-config\") pod \"openstack-galera-0\" (UID: \"9af88cca-e43b-483d-beae-d6a56940aff7\") " pod="openstack/openstack-galera-0"
Jan 29 06:51:36 crc kubenswrapper[5017]: I0129 06:51:36.913166 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9af88cca-e43b-483d-beae-d6a56940aff7-config-data-default\") pod \"openstack-galera-0\" (UID: \"9af88cca-e43b-483d-beae-d6a56940aff7\") " pod="openstack/openstack-galera-0"
Jan 29 06:51:36 crc kubenswrapper[5017]: I0129 06:51:36.913250 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9af88cca-e43b-483d-beae-d6a56940aff7-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"9af88cca-e43b-483d-beae-d6a56940aff7\") " pod="openstack/openstack-galera-0"
Jan 29 06:51:36 crc kubenswrapper[5017]: I0129 06:51:36.913332 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9af88cca-e43b-483d-beae-d6a56940aff7-kolla-config\") pod \"openstack-galera-0\" (UID: \"9af88cca-e43b-483d-beae-d6a56940aff7\") " pod="openstack/openstack-galera-0"
Jan 29 06:51:36 crc kubenswrapper[5017]: I0129 06:51:36.913363 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9af88cca-e43b-483d-beae-d6a56940aff7-operator-scripts\") pod \"openstack-galera-0\" (UID: \"9af88cca-e43b-483d-beae-d6a56940aff7\") " pod="openstack/openstack-galera-0"
Jan 29 06:51:36 crc kubenswrapper[5017]: I0129 06:51:36.913399 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9af88cca-e43b-483d-beae-d6a56940aff7-config-data-generated\") pod \"openstack-galera-0\" (UID: \"9af88cca-e43b-483d-beae-d6a56940aff7\") " pod="openstack/openstack-galera-0"
Jan 29 06:51:36 crc kubenswrapper[5017]: I0129 06:51:36.913419 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9af88cca-e43b-483d-beae-d6a56940aff7-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"9af88cca-e43b-483d-beae-d6a56940aff7\") " pod="openstack/openstack-galera-0"
Jan 29 06:51:36 crc kubenswrapper[5017]: I0129 06:51:36.913454 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-galera-0\" (UID: \"9af88cca-e43b-483d-beae-d6a56940aff7\") " pod="openstack/openstack-galera-0"
Jan 29 06:51:36 crc kubenswrapper[5017]: I0129 06:51:36.913472 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brhn7\" (UniqueName: \"kubernetes.io/projected/9af88cca-e43b-483d-beae-d6a56940aff7-kube-api-access-brhn7\") pod \"openstack-galera-0\" (UID: \"9af88cca-e43b-483d-beae-d6a56940aff7\") " pod="openstack/openstack-galera-0"
Jan 29 06:51:36 crc kubenswrapper[5017]: I0129 06:51:36.917582 5017 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-galera-0\" (UID: \"9af88cca-e43b-483d-beae-d6a56940aff7\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/openstack-galera-0"
Jan 29 06:51:36 crc kubenswrapper[5017]: I0129 06:51:36.924059 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9af88cca-e43b-483d-beae-d6a56940aff7-config-data-generated\") pod \"openstack-galera-0\" (UID: \"9af88cca-e43b-483d-beae-d6a56940aff7\") " pod="openstack/openstack-galera-0"
Jan 29 06:51:36 crc kubenswrapper[5017]: I0129 06:51:36.925613 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9af88cca-e43b-483d-beae-d6a56940aff7-config-data-default\") pod \"openstack-galera-0\" (UID: \"9af88cca-e43b-483d-beae-d6a56940aff7\") " pod="openstack/openstack-galera-0"
Jan 29 06:51:36 crc kubenswrapper[5017]: I0129 06:51:36.926890 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9af88cca-e43b-483d-beae-d6a56940aff7-kolla-config\") pod \"openstack-galera-0\" (UID: \"9af88cca-e43b-483d-beae-d6a56940aff7\") " pod="openstack/openstack-galera-0"
Jan 29 06:51:36 crc kubenswrapper[5017]: I0129 06:51:36.927064 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9af88cca-e43b-483d-beae-d6a56940aff7-operator-scripts\") pod \"openstack-galera-0\" (UID: \"9af88cca-e43b-483d-beae-d6a56940aff7\") " pod="openstack/openstack-galera-0"
Jan 29 06:51:36 crc kubenswrapper[5017]: I0129 06:51:36.938018 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brhn7\" (UniqueName: \"kubernetes.io/projected/9af88cca-e43b-483d-beae-d6a56940aff7-kube-api-access-brhn7\") pod \"openstack-galera-0\" (UID: \"9af88cca-e43b-483d-beae-d6a56940aff7\") " pod="openstack/openstack-galera-0"
Jan 29 06:51:36 crc kubenswrapper[5017]: I0129 06:51:36.938018 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9af88cca-e43b-483d-beae-d6a56940aff7-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"9af88cca-e43b-483d-beae-d6a56940aff7\") " pod="openstack/openstack-galera-0"
Jan 29 06:51:36 crc kubenswrapper[5017]: I0129 06:51:36.945838 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9af88cca-e43b-483d-beae-d6a56940aff7-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"9af88cca-e43b-483d-beae-d6a56940aff7\") " pod="openstack/openstack-galera-0"
Jan 29 06:51:36 crc kubenswrapper[5017]: I0129 06:51:36.947013 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-galera-0\" (UID: \"9af88cca-e43b-483d-beae-d6a56940aff7\") " pod="openstack/openstack-galera-0"
Jan 29 06:51:37 crc kubenswrapper[5017]: I0129 06:51:37.024616 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Jan 29 06:51:37 crc kubenswrapper[5017]: I0129 06:51:37.628889 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Jan 29 06:51:37 crc kubenswrapper[5017]: W0129 06:51:37.706612 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9af88cca_e43b_483d_beae_d6a56940aff7.slice/crio-0fec5135c279637e92e3b0c3d2b0bf33aa48e0bf317089e9a5a7e6486f6514c8 WatchSource:0}: Error finding container 0fec5135c279637e92e3b0c3d2b0bf33aa48e0bf317089e9a5a7e6486f6514c8: Status 404 returned error can't find the container with id 0fec5135c279637e92e3b0c3d2b0bf33aa48e0bf317089e9a5a7e6486f6514c8
Jan 29 06:51:37 crc kubenswrapper[5017]: I0129 06:51:37.740351 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"9af88cca-e43b-483d-beae-d6a56940aff7","Type":"ContainerStarted","Data":"0fec5135c279637e92e3b0c3d2b0bf33aa48e0bf317089e9a5a7e6486f6514c8"}
Jan 29 06:51:38 crc kubenswrapper[5017]: I0129 06:51:38.024502 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"]
Jan 29 06:51:38 crc kubenswrapper[5017]: I0129 06:51:38.026449 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Jan 29 06:51:38 crc kubenswrapper[5017]: I0129 06:51:38.032033 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts"
Jan 29 06:51:38 crc kubenswrapper[5017]: I0129 06:51:38.032777 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-wjt4v"
Jan 29 06:51:38 crc kubenswrapper[5017]: I0129 06:51:38.032938 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data"
Jan 29 06:51:38 crc kubenswrapper[5017]: I0129 06:51:38.033447 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc"
Jan 29 06:51:38 crc kubenswrapper[5017]: I0129 06:51:38.059241 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Jan 29 06:51:38 crc kubenswrapper[5017]: I0129 06:51:38.136969 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqk5q\" (UniqueName: \"kubernetes.io/projected/ec5c09bc-f98c-4587-b4e3-ec9269c04a71-kube-api-access-mqk5q\") pod \"openstack-cell1-galera-0\" (UID: \"ec5c09bc-f98c-4587-b4e3-ec9269c04a71\") " pod="openstack/openstack-cell1-galera-0"
Jan 29 06:51:38 crc kubenswrapper[5017]: I0129 06:51:38.137029 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-cell1-galera-0\" (UID: \"ec5c09bc-f98c-4587-b4e3-ec9269c04a71\") " pod="openstack/openstack-cell1-galera-0"
\"kubernetes.io/configmap/ec5c09bc-f98c-4587-b4e3-ec9269c04a71-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"ec5c09bc-f98c-4587-b4e3-ec9269c04a71\") " pod="openstack/openstack-cell1-galera-0" Jan 29 06:51:38 crc kubenswrapper[5017]: I0129 06:51:38.137097 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec5c09bc-f98c-4587-b4e3-ec9269c04a71-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"ec5c09bc-f98c-4587-b4e3-ec9269c04a71\") " pod="openstack/openstack-cell1-galera-0" Jan 29 06:51:38 crc kubenswrapper[5017]: I0129 06:51:38.137132 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec5c09bc-f98c-4587-b4e3-ec9269c04a71-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"ec5c09bc-f98c-4587-b4e3-ec9269c04a71\") " pod="openstack/openstack-cell1-galera-0" Jan 29 06:51:38 crc kubenswrapper[5017]: I0129 06:51:38.137158 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ec5c09bc-f98c-4587-b4e3-ec9269c04a71-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"ec5c09bc-f98c-4587-b4e3-ec9269c04a71\") " pod="openstack/openstack-cell1-galera-0" Jan 29 06:51:38 crc kubenswrapper[5017]: I0129 06:51:38.137198 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec5c09bc-f98c-4587-b4e3-ec9269c04a71-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"ec5c09bc-f98c-4587-b4e3-ec9269c04a71\") " pod="openstack/openstack-cell1-galera-0" Jan 29 06:51:38 crc kubenswrapper[5017]: I0129 06:51:38.138209 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ec5c09bc-f98c-4587-b4e3-ec9269c04a71-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"ec5c09bc-f98c-4587-b4e3-ec9269c04a71\") " pod="openstack/openstack-cell1-galera-0" Jan 29 06:51:38 crc kubenswrapper[5017]: I0129 06:51:38.239918 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ec5c09bc-f98c-4587-b4e3-ec9269c04a71-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"ec5c09bc-f98c-4587-b4e3-ec9269c04a71\") " pod="openstack/openstack-cell1-galera-0" Jan 29 06:51:38 crc kubenswrapper[5017]: I0129 06:51:38.240020 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec5c09bc-f98c-4587-b4e3-ec9269c04a71-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"ec5c09bc-f98c-4587-b4e3-ec9269c04a71\") " pod="openstack/openstack-cell1-galera-0" Jan 29 06:51:38 crc kubenswrapper[5017]: I0129 06:51:38.242986 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ec5c09bc-f98c-4587-b4e3-ec9269c04a71-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"ec5c09bc-f98c-4587-b4e3-ec9269c04a71\") " pod="openstack/openstack-cell1-galera-0" Jan 29 06:51:38 crc kubenswrapper[5017]: I0129 06:51:38.243040 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-mqk5q\" (UniqueName: \"kubernetes.io/projected/ec5c09bc-f98c-4587-b4e3-ec9269c04a71-kube-api-access-mqk5q\") pod \"openstack-cell1-galera-0\" (UID: \"ec5c09bc-f98c-4587-b4e3-ec9269c04a71\") " pod="openstack/openstack-cell1-galera-0" Jan 29 06:51:38 crc kubenswrapper[5017]: I0129 06:51:38.243176 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-cell1-galera-0\" (UID: \"ec5c09bc-f98c-4587-b4e3-ec9269c04a71\") " pod="openstack/openstack-cell1-galera-0" Jan 29 06:51:38 crc kubenswrapper[5017]: I0129 06:51:38.243268 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ec5c09bc-f98c-4587-b4e3-ec9269c04a71-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"ec5c09bc-f98c-4587-b4e3-ec9269c04a71\") " pod="openstack/openstack-cell1-galera-0" Jan 29 06:51:38 crc kubenswrapper[5017]: I0129 06:51:38.243369 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec5c09bc-f98c-4587-b4e3-ec9269c04a71-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"ec5c09bc-f98c-4587-b4e3-ec9269c04a71\") " pod="openstack/openstack-cell1-galera-0" Jan 29 06:51:38 crc kubenswrapper[5017]: I0129 06:51:38.243448 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec5c09bc-f98c-4587-b4e3-ec9269c04a71-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"ec5c09bc-f98c-4587-b4e3-ec9269c04a71\") " pod="openstack/openstack-cell1-galera-0" Jan 29 06:51:38 crc kubenswrapper[5017]: I0129 06:51:38.254494 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec5c09bc-f98c-4587-b4e3-ec9269c04a71-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"ec5c09bc-f98c-4587-b4e3-ec9269c04a71\") " pod="openstack/openstack-cell1-galera-0" Jan 29 06:51:38 crc kubenswrapper[5017]: I0129 06:51:38.254495 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ec5c09bc-f98c-4587-b4e3-ec9269c04a71-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"ec5c09bc-f98c-4587-b4e3-ec9269c04a71\") " pod="openstack/openstack-cell1-galera-0" Jan 29 06:51:38 crc kubenswrapper[5017]: I0129 06:51:38.261064 5017 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-cell1-galera-0\" (UID: \"ec5c09bc-f98c-4587-b4e3-ec9269c04a71\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/openstack-cell1-galera-0" Jan 29 06:51:38 crc kubenswrapper[5017]: I0129 06:51:38.261526 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ec5c09bc-f98c-4587-b4e3-ec9269c04a71-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"ec5c09bc-f98c-4587-b4e3-ec9269c04a71\") " pod="openstack/openstack-cell1-galera-0" Jan 29 06:51:38 crc kubenswrapper[5017]: I0129 06:51:38.272936 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ec5c09bc-f98c-4587-b4e3-ec9269c04a71-config-data-default\") 
pod \"openstack-cell1-galera-0\" (UID: \"ec5c09bc-f98c-4587-b4e3-ec9269c04a71\") " pod="openstack/openstack-cell1-galera-0" Jan 29 06:51:38 crc kubenswrapper[5017]: I0129 06:51:38.284132 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 29 06:51:38 crc kubenswrapper[5017]: I0129 06:51:38.286801 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 29 06:51:38 crc kubenswrapper[5017]: I0129 06:51:38.291495 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqk5q\" (UniqueName: \"kubernetes.io/projected/ec5c09bc-f98c-4587-b4e3-ec9269c04a71-kube-api-access-mqk5q\") pod \"openstack-cell1-galera-0\" (UID: \"ec5c09bc-f98c-4587-b4e3-ec9269c04a71\") " pod="openstack/openstack-cell1-galera-0" Jan 29 06:51:38 crc kubenswrapper[5017]: I0129 06:51:38.295496 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec5c09bc-f98c-4587-b4e3-ec9269c04a71-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"ec5c09bc-f98c-4587-b4e3-ec9269c04a71\") " pod="openstack/openstack-cell1-galera-0" Jan 29 06:51:38 crc kubenswrapper[5017]: I0129 06:51:38.314646 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 29 06:51:38 crc kubenswrapper[5017]: I0129 06:51:38.315928 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-bkhv4" Jan 29 06:51:38 crc kubenswrapper[5017]: I0129 06:51:38.316428 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 29 06:51:38 crc kubenswrapper[5017]: I0129 06:51:38.318833 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec5c09bc-f98c-4587-b4e3-ec9269c04a71-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"ec5c09bc-f98c-4587-b4e3-ec9269c04a71\") " pod="openstack/openstack-cell1-galera-0" Jan 29 06:51:38 crc kubenswrapper[5017]: I0129 06:51:38.338779 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-cell1-galera-0\" (UID: \"ec5c09bc-f98c-4587-b4e3-ec9269c04a71\") " pod="openstack/openstack-cell1-galera-0" Jan 29 06:51:38 crc kubenswrapper[5017]: I0129 06:51:38.346452 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc46a149-0256-4061-9e32-936b2ec12588-memcached-tls-certs\") pod \"memcached-0\" (UID: \"cc46a149-0256-4061-9e32-936b2ec12588\") " pod="openstack/memcached-0" Jan 29 06:51:38 crc kubenswrapper[5017]: I0129 06:51:38.346557 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cc46a149-0256-4061-9e32-936b2ec12588-config-data\") pod \"memcached-0\" (UID: \"cc46a149-0256-4061-9e32-936b2ec12588\") " pod="openstack/memcached-0" Jan 29 06:51:38 crc kubenswrapper[5017]: I0129 06:51:38.346647 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbh96\" (UniqueName: \"kubernetes.io/projected/cc46a149-0256-4061-9e32-936b2ec12588-kube-api-access-tbh96\") pod \"memcached-0\" (UID: \"cc46a149-0256-4061-9e32-936b2ec12588\") " 
pod="openstack/memcached-0" Jan 29 06:51:38 crc kubenswrapper[5017]: I0129 06:51:38.346695 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/cc46a149-0256-4061-9e32-936b2ec12588-kolla-config\") pod \"memcached-0\" (UID: \"cc46a149-0256-4061-9e32-936b2ec12588\") " pod="openstack/memcached-0" Jan 29 06:51:38 crc kubenswrapper[5017]: I0129 06:51:38.346722 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc46a149-0256-4061-9e32-936b2ec12588-combined-ca-bundle\") pod \"memcached-0\" (UID: \"cc46a149-0256-4061-9e32-936b2ec12588\") " pod="openstack/memcached-0" Jan 29 06:51:38 crc kubenswrapper[5017]: I0129 06:51:38.376773 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 29 06:51:38 crc kubenswrapper[5017]: I0129 06:51:38.386496 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 29 06:51:38 crc kubenswrapper[5017]: I0129 06:51:38.451479 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cc46a149-0256-4061-9e32-936b2ec12588-config-data\") pod \"memcached-0\" (UID: \"cc46a149-0256-4061-9e32-936b2ec12588\") " pod="openstack/memcached-0" Jan 29 06:51:38 crc kubenswrapper[5017]: I0129 06:51:38.451708 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbh96\" (UniqueName: \"kubernetes.io/projected/cc46a149-0256-4061-9e32-936b2ec12588-kube-api-access-tbh96\") pod \"memcached-0\" (UID: \"cc46a149-0256-4061-9e32-936b2ec12588\") " pod="openstack/memcached-0" Jan 29 06:51:38 crc kubenswrapper[5017]: I0129 06:51:38.451792 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/cc46a149-0256-4061-9e32-936b2ec12588-kolla-config\") pod \"memcached-0\" (UID: \"cc46a149-0256-4061-9e32-936b2ec12588\") " pod="openstack/memcached-0" Jan 29 06:51:38 crc kubenswrapper[5017]: I0129 06:51:38.451836 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc46a149-0256-4061-9e32-936b2ec12588-combined-ca-bundle\") pod \"memcached-0\" (UID: \"cc46a149-0256-4061-9e32-936b2ec12588\") " pod="openstack/memcached-0" Jan 29 06:51:38 crc kubenswrapper[5017]: I0129 06:51:38.452144 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc46a149-0256-4061-9e32-936b2ec12588-memcached-tls-certs\") pod \"memcached-0\" (UID: \"cc46a149-0256-4061-9e32-936b2ec12588\") " pod="openstack/memcached-0" Jan 29 06:51:38 crc kubenswrapper[5017]: I0129 06:51:38.459163 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cc46a149-0256-4061-9e32-936b2ec12588-config-data\") pod \"memcached-0\" (UID: \"cc46a149-0256-4061-9e32-936b2ec12588\") " pod="openstack/memcached-0" Jan 29 06:51:38 crc kubenswrapper[5017]: I0129 06:51:38.461508 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc46a149-0256-4061-9e32-936b2ec12588-memcached-tls-certs\") pod \"memcached-0\" (UID: 
\"cc46a149-0256-4061-9e32-936b2ec12588\") " pod="openstack/memcached-0" Jan 29 06:51:38 crc kubenswrapper[5017]: I0129 06:51:38.462326 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/cc46a149-0256-4061-9e32-936b2ec12588-kolla-config\") pod \"memcached-0\" (UID: \"cc46a149-0256-4061-9e32-936b2ec12588\") " pod="openstack/memcached-0" Jan 29 06:51:38 crc kubenswrapper[5017]: I0129 06:51:38.470549 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc46a149-0256-4061-9e32-936b2ec12588-combined-ca-bundle\") pod \"memcached-0\" (UID: \"cc46a149-0256-4061-9e32-936b2ec12588\") " pod="openstack/memcached-0" Jan 29 06:51:38 crc kubenswrapper[5017]: I0129 06:51:38.484664 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbh96\" (UniqueName: \"kubernetes.io/projected/cc46a149-0256-4061-9e32-936b2ec12588-kube-api-access-tbh96\") pod \"memcached-0\" (UID: \"cc46a149-0256-4061-9e32-936b2ec12588\") " pod="openstack/memcached-0" Jan 29 06:51:38 crc kubenswrapper[5017]: I0129 06:51:38.703038 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 29 06:51:38 crc kubenswrapper[5017]: I0129 06:51:38.890373 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 29 06:51:39 crc kubenswrapper[5017]: I0129 06:51:39.465741 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 29 06:51:39 crc kubenswrapper[5017]: W0129 06:51:39.492784 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcc46a149_0256_4061_9e32_936b2ec12588.slice/crio-6cfd9eabfdf0d74b868de543d9950402606b69f655914f3f12b54ce3ced9f61f WatchSource:0}: Error finding container 6cfd9eabfdf0d74b868de543d9950402606b69f655914f3f12b54ce3ced9f61f: Status 404 returned error can't find the container with id 6cfd9eabfdf0d74b868de543d9950402606b69f655914f3f12b54ce3ced9f61f Jan 29 06:51:39 crc kubenswrapper[5017]: I0129 06:51:39.816668 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"cc46a149-0256-4061-9e32-936b2ec12588","Type":"ContainerStarted","Data":"6cfd9eabfdf0d74b868de543d9950402606b69f655914f3f12b54ce3ced9f61f"} Jan 29 06:51:39 crc kubenswrapper[5017]: I0129 06:51:39.827997 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"ec5c09bc-f98c-4587-b4e3-ec9269c04a71","Type":"ContainerStarted","Data":"b6f89290b0ead73d280d1ba42ae5b55ae34456852e504b13a8dff845a0427d80"} Jan 29 06:51:40 crc kubenswrapper[5017]: I0129 06:51:40.144534 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 06:51:40 crc kubenswrapper[5017]: I0129 06:51:40.149123 5017 util.go:30] "No sandbox for pod can be found. 
Jan 29 06:51:40 crc kubenswrapper[5017]: I0129 06:51:40.149123 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Jan 29 06:51:40 crc kubenswrapper[5017]: I0129 06:51:40.152273 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 29 06:51:40 crc kubenswrapper[5017]: I0129 06:51:40.154580 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-9qjnz"
Jan 29 06:51:40 crc kubenswrapper[5017]: I0129 06:51:40.205412 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cqn2\" (UniqueName: \"kubernetes.io/projected/e0b13e83-038a-4d46-8a03-48f09dc18e43-kube-api-access-2cqn2\") pod \"kube-state-metrics-0\" (UID: \"e0b13e83-038a-4d46-8a03-48f09dc18e43\") " pod="openstack/kube-state-metrics-0"
Jan 29 06:51:40 crc kubenswrapper[5017]: I0129 06:51:40.313481 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cqn2\" (UniqueName: \"kubernetes.io/projected/e0b13e83-038a-4d46-8a03-48f09dc18e43-kube-api-access-2cqn2\") pod \"kube-state-metrics-0\" (UID: \"e0b13e83-038a-4d46-8a03-48f09dc18e43\") " pod="openstack/kube-state-metrics-0"
Jan 29 06:51:40 crc kubenswrapper[5017]: I0129 06:51:40.342272 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2cqn2\" (UniqueName: \"kubernetes.io/projected/e0b13e83-038a-4d46-8a03-48f09dc18e43-kube-api-access-2cqn2\") pod \"kube-state-metrics-0\" (UID: \"e0b13e83-038a-4d46-8a03-48f09dc18e43\") " pod="openstack/kube-state-metrics-0"
Jan 29 06:51:40 crc kubenswrapper[5017]: I0129 06:51:40.487588 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Jan 29 06:51:43 crc kubenswrapper[5017]: I0129 06:51:43.270135 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-rtkrb"]
Jan 29 06:51:43 crc kubenswrapper[5017]: I0129 06:51:43.271786 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-rtkrb"
Jan 29 06:51:43 crc kubenswrapper[5017]: I0129 06:51:43.274911 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs"
Jan 29 06:51:43 crc kubenswrapper[5017]: I0129 06:51:43.275664 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts"
Jan 29 06:51:43 crc kubenswrapper[5017]: I0129 06:51:43.275749 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-2hwpv"
Jan 29 06:51:43 crc kubenswrapper[5017]: I0129 06:51:43.287375 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-rtkrb"]
Jan 29 06:51:43 crc kubenswrapper[5017]: I0129 06:51:43.301518 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-mrhnf"]
Jan 29 06:51:43 crc kubenswrapper[5017]: I0129 06:51:43.303397 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-mrhnf"
Jan 29 06:51:43 crc kubenswrapper[5017]: I0129 06:51:43.327991 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-mrhnf"]
Jan 29 06:51:43 crc kubenswrapper[5017]: I0129 06:51:43.405334 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c57c864-37e8-46b9-b30d-1762f3858984-ovn-controller-tls-certs\") pod \"ovn-controller-rtkrb\" (UID: \"4c57c864-37e8-46b9-b30d-1762f3858984\") " pod="openstack/ovn-controller-rtkrb"
Jan 29 06:51:43 crc kubenswrapper[5017]: I0129 06:51:43.405574 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/131b99f7-3558-4aec-a0bf-7c1ef0f35a2b-scripts\") pod \"ovn-controller-ovs-mrhnf\" (UID: \"131b99f7-3558-4aec-a0bf-7c1ef0f35a2b\") " pod="openstack/ovn-controller-ovs-mrhnf"
Jan 29 06:51:43 crc kubenswrapper[5017]: I0129 06:51:43.405666 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/131b99f7-3558-4aec-a0bf-7c1ef0f35a2b-etc-ovs\") pod \"ovn-controller-ovs-mrhnf\" (UID: \"131b99f7-3558-4aec-a0bf-7c1ef0f35a2b\") " pod="openstack/ovn-controller-ovs-mrhnf"
Jan 29 06:51:43 crc kubenswrapper[5017]: I0129 06:51:43.405740 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89q2r\" (UniqueName: \"kubernetes.io/projected/131b99f7-3558-4aec-a0bf-7c1ef0f35a2b-kube-api-access-89q2r\") pod \"ovn-controller-ovs-mrhnf\" (UID: \"131b99f7-3558-4aec-a0bf-7c1ef0f35a2b\") " pod="openstack/ovn-controller-ovs-mrhnf"
Jan 29 06:51:43 crc kubenswrapper[5017]: I0129 06:51:43.405795 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/131b99f7-3558-4aec-a0bf-7c1ef0f35a2b-var-run\") pod \"ovn-controller-ovs-mrhnf\" (UID: \"131b99f7-3558-4aec-a0bf-7c1ef0f35a2b\") " pod="openstack/ovn-controller-ovs-mrhnf"
Jan 29 06:51:43 crc kubenswrapper[5017]: I0129 06:51:43.405826 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4c57c864-37e8-46b9-b30d-1762f3858984-scripts\") pod \"ovn-controller-rtkrb\" (UID: \"4c57c864-37e8-46b9-b30d-1762f3858984\") " pod="openstack/ovn-controller-rtkrb"
Jan 29 06:51:43 crc kubenswrapper[5017]: I0129 06:51:43.405879 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/131b99f7-3558-4aec-a0bf-7c1ef0f35a2b-var-log\") pod \"ovn-controller-ovs-mrhnf\" (UID: \"131b99f7-3558-4aec-a0bf-7c1ef0f35a2b\") " pod="openstack/ovn-controller-ovs-mrhnf"
Jan 29 06:51:43 crc kubenswrapper[5017]: I0129 06:51:43.405904 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4c57c864-37e8-46b9-b30d-1762f3858984-var-log-ovn\") pod \"ovn-controller-rtkrb\" (UID: \"4c57c864-37e8-46b9-b30d-1762f3858984\") " pod="openstack/ovn-controller-rtkrb"
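
Every UniqueName in these reconciler lines embeds its volume plugin: kubernetes.io/configmap, secret, projected, empty-dir, downward-api and local-volume all appeared earlier, and the two OVN pods here add kubernetes.io/host-path for etc-ovs and the var-run/var-log/var-lib directories. A sketch tallying plugin usage per pod from the VerifyControllerAttachedVolume lines, under the same format assumptions as the earlier sketches:

    import re
    import sys
    from collections import Counter

    PLUGIN = re.compile(r'UniqueName: \\"kubernetes\.io/([a-z-]+)/')
    POD = re.compile(r'pod="([^"]+)"')

    counts = Counter()
    for line in sys.stdin:
        if "VerifyControllerAttachedVolume" not in line:
            continue
        g, p = PLUGIN.search(line), POD.search(line)
        if g and p:
            counts[(p.group(1), g.group(1))] += 1

    for (pod, plugin), n in sorted(counts.items()):
        print(f"{pod:40} {plugin:14} {n}")
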
(UniqueName: \"kubernetes.io/projected/4c57c864-37e8-46b9-b30d-1762f3858984-kube-api-access-c65nd\") pod \"ovn-controller-rtkrb\" (UID: \"4c57c864-37e8-46b9-b30d-1762f3858984\") " pod="openstack/ovn-controller-rtkrb" Jan 29 06:51:43 crc kubenswrapper[5017]: I0129 06:51:43.406019 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4c57c864-37e8-46b9-b30d-1762f3858984-var-run-ovn\") pod \"ovn-controller-rtkrb\" (UID: \"4c57c864-37e8-46b9-b30d-1762f3858984\") " pod="openstack/ovn-controller-rtkrb" Jan 29 06:51:43 crc kubenswrapper[5017]: I0129 06:51:43.406051 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4c57c864-37e8-46b9-b30d-1762f3858984-var-run\") pod \"ovn-controller-rtkrb\" (UID: \"4c57c864-37e8-46b9-b30d-1762f3858984\") " pod="openstack/ovn-controller-rtkrb" Jan 29 06:51:43 crc kubenswrapper[5017]: I0129 06:51:43.406072 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/131b99f7-3558-4aec-a0bf-7c1ef0f35a2b-var-lib\") pod \"ovn-controller-ovs-mrhnf\" (UID: \"131b99f7-3558-4aec-a0bf-7c1ef0f35a2b\") " pod="openstack/ovn-controller-ovs-mrhnf" Jan 29 06:51:43 crc kubenswrapper[5017]: I0129 06:51:43.406100 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c57c864-37e8-46b9-b30d-1762f3858984-combined-ca-bundle\") pod \"ovn-controller-rtkrb\" (UID: \"4c57c864-37e8-46b9-b30d-1762f3858984\") " pod="openstack/ovn-controller-rtkrb" Jan 29 06:51:43 crc kubenswrapper[5017]: I0129 06:51:43.507143 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4c57c864-37e8-46b9-b30d-1762f3858984-scripts\") pod \"ovn-controller-rtkrb\" (UID: \"4c57c864-37e8-46b9-b30d-1762f3858984\") " pod="openstack/ovn-controller-rtkrb" Jan 29 06:51:43 crc kubenswrapper[5017]: I0129 06:51:43.507217 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/131b99f7-3558-4aec-a0bf-7c1ef0f35a2b-var-log\") pod \"ovn-controller-ovs-mrhnf\" (UID: \"131b99f7-3558-4aec-a0bf-7c1ef0f35a2b\") " pod="openstack/ovn-controller-ovs-mrhnf" Jan 29 06:51:43 crc kubenswrapper[5017]: I0129 06:51:43.507241 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4c57c864-37e8-46b9-b30d-1762f3858984-var-log-ovn\") pod \"ovn-controller-rtkrb\" (UID: \"4c57c864-37e8-46b9-b30d-1762f3858984\") " pod="openstack/ovn-controller-rtkrb" Jan 29 06:51:43 crc kubenswrapper[5017]: I0129 06:51:43.507270 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c65nd\" (UniqueName: \"kubernetes.io/projected/4c57c864-37e8-46b9-b30d-1762f3858984-kube-api-access-c65nd\") pod \"ovn-controller-rtkrb\" (UID: \"4c57c864-37e8-46b9-b30d-1762f3858984\") " pod="openstack/ovn-controller-rtkrb" Jan 29 06:51:43 crc kubenswrapper[5017]: I0129 06:51:43.507324 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4c57c864-37e8-46b9-b30d-1762f3858984-var-run-ovn\") pod \"ovn-controller-rtkrb\" (UID: 
\"4c57c864-37e8-46b9-b30d-1762f3858984\") " pod="openstack/ovn-controller-rtkrb" Jan 29 06:51:43 crc kubenswrapper[5017]: I0129 06:51:43.507351 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/131b99f7-3558-4aec-a0bf-7c1ef0f35a2b-var-lib\") pod \"ovn-controller-ovs-mrhnf\" (UID: \"131b99f7-3558-4aec-a0bf-7c1ef0f35a2b\") " pod="openstack/ovn-controller-ovs-mrhnf" Jan 29 06:51:43 crc kubenswrapper[5017]: I0129 06:51:43.507371 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4c57c864-37e8-46b9-b30d-1762f3858984-var-run\") pod \"ovn-controller-rtkrb\" (UID: \"4c57c864-37e8-46b9-b30d-1762f3858984\") " pod="openstack/ovn-controller-rtkrb" Jan 29 06:51:43 crc kubenswrapper[5017]: I0129 06:51:43.507397 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c57c864-37e8-46b9-b30d-1762f3858984-combined-ca-bundle\") pod \"ovn-controller-rtkrb\" (UID: \"4c57c864-37e8-46b9-b30d-1762f3858984\") " pod="openstack/ovn-controller-rtkrb" Jan 29 06:51:43 crc kubenswrapper[5017]: I0129 06:51:43.507427 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c57c864-37e8-46b9-b30d-1762f3858984-ovn-controller-tls-certs\") pod \"ovn-controller-rtkrb\" (UID: \"4c57c864-37e8-46b9-b30d-1762f3858984\") " pod="openstack/ovn-controller-rtkrb" Jan 29 06:51:43 crc kubenswrapper[5017]: I0129 06:51:43.507473 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/131b99f7-3558-4aec-a0bf-7c1ef0f35a2b-scripts\") pod \"ovn-controller-ovs-mrhnf\" (UID: \"131b99f7-3558-4aec-a0bf-7c1ef0f35a2b\") " pod="openstack/ovn-controller-ovs-mrhnf" Jan 29 06:51:43 crc kubenswrapper[5017]: I0129 06:51:43.507499 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/131b99f7-3558-4aec-a0bf-7c1ef0f35a2b-etc-ovs\") pod \"ovn-controller-ovs-mrhnf\" (UID: \"131b99f7-3558-4aec-a0bf-7c1ef0f35a2b\") " pod="openstack/ovn-controller-ovs-mrhnf" Jan 29 06:51:43 crc kubenswrapper[5017]: I0129 06:51:43.507528 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89q2r\" (UniqueName: \"kubernetes.io/projected/131b99f7-3558-4aec-a0bf-7c1ef0f35a2b-kube-api-access-89q2r\") pod \"ovn-controller-ovs-mrhnf\" (UID: \"131b99f7-3558-4aec-a0bf-7c1ef0f35a2b\") " pod="openstack/ovn-controller-ovs-mrhnf" Jan 29 06:51:43 crc kubenswrapper[5017]: I0129 06:51:43.507566 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/131b99f7-3558-4aec-a0bf-7c1ef0f35a2b-var-run\") pod \"ovn-controller-ovs-mrhnf\" (UID: \"131b99f7-3558-4aec-a0bf-7c1ef0f35a2b\") " pod="openstack/ovn-controller-ovs-mrhnf" Jan 29 06:51:43 crc kubenswrapper[5017]: I0129 06:51:43.508010 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/131b99f7-3558-4aec-a0bf-7c1ef0f35a2b-var-log\") pod \"ovn-controller-ovs-mrhnf\" (UID: \"131b99f7-3558-4aec-a0bf-7c1ef0f35a2b\") " pod="openstack/ovn-controller-ovs-mrhnf" Jan 29 06:51:43 crc kubenswrapper[5017]: I0129 06:51:43.508063 5017 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4c57c864-37e8-46b9-b30d-1762f3858984-var-log-ovn\") pod \"ovn-controller-rtkrb\" (UID: \"4c57c864-37e8-46b9-b30d-1762f3858984\") " pod="openstack/ovn-controller-rtkrb" Jan 29 06:51:43 crc kubenswrapper[5017]: I0129 06:51:43.508113 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/131b99f7-3558-4aec-a0bf-7c1ef0f35a2b-var-run\") pod \"ovn-controller-ovs-mrhnf\" (UID: \"131b99f7-3558-4aec-a0bf-7c1ef0f35a2b\") " pod="openstack/ovn-controller-ovs-mrhnf" Jan 29 06:51:43 crc kubenswrapper[5017]: I0129 06:51:43.508173 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4c57c864-37e8-46b9-b30d-1762f3858984-var-run-ovn\") pod \"ovn-controller-rtkrb\" (UID: \"4c57c864-37e8-46b9-b30d-1762f3858984\") " pod="openstack/ovn-controller-rtkrb" Jan 29 06:51:43 crc kubenswrapper[5017]: I0129 06:51:43.508580 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/131b99f7-3558-4aec-a0bf-7c1ef0f35a2b-etc-ovs\") pod \"ovn-controller-ovs-mrhnf\" (UID: \"131b99f7-3558-4aec-a0bf-7c1ef0f35a2b\") " pod="openstack/ovn-controller-ovs-mrhnf" Jan 29 06:51:43 crc kubenswrapper[5017]: I0129 06:51:43.508723 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/131b99f7-3558-4aec-a0bf-7c1ef0f35a2b-var-lib\") pod \"ovn-controller-ovs-mrhnf\" (UID: \"131b99f7-3558-4aec-a0bf-7c1ef0f35a2b\") " pod="openstack/ovn-controller-ovs-mrhnf" Jan 29 06:51:43 crc kubenswrapper[5017]: I0129 06:51:43.508834 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4c57c864-37e8-46b9-b30d-1762f3858984-var-run\") pod \"ovn-controller-rtkrb\" (UID: \"4c57c864-37e8-46b9-b30d-1762f3858984\") " pod="openstack/ovn-controller-rtkrb" Jan 29 06:51:43 crc kubenswrapper[5017]: I0129 06:51:43.510691 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/131b99f7-3558-4aec-a0bf-7c1ef0f35a2b-scripts\") pod \"ovn-controller-ovs-mrhnf\" (UID: \"131b99f7-3558-4aec-a0bf-7c1ef0f35a2b\") " pod="openstack/ovn-controller-ovs-mrhnf" Jan 29 06:51:43 crc kubenswrapper[5017]: I0129 06:51:43.524098 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c57c864-37e8-46b9-b30d-1762f3858984-combined-ca-bundle\") pod \"ovn-controller-rtkrb\" (UID: \"4c57c864-37e8-46b9-b30d-1762f3858984\") " pod="openstack/ovn-controller-rtkrb" Jan 29 06:51:43 crc kubenswrapper[5017]: I0129 06:51:43.525325 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c57c864-37e8-46b9-b30d-1762f3858984-ovn-controller-tls-certs\") pod \"ovn-controller-rtkrb\" (UID: \"4c57c864-37e8-46b9-b30d-1762f3858984\") " pod="openstack/ovn-controller-rtkrb" Jan 29 06:51:43 crc kubenswrapper[5017]: I0129 06:51:43.528263 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4c57c864-37e8-46b9-b30d-1762f3858984-scripts\") pod \"ovn-controller-rtkrb\" (UID: \"4c57c864-37e8-46b9-b30d-1762f3858984\") " pod="openstack/ovn-controller-rtkrb" Jan 29 06:51:43 crc kubenswrapper[5017]: I0129 06:51:43.529848 5017 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c65nd\" (UniqueName: \"kubernetes.io/projected/4c57c864-37e8-46b9-b30d-1762f3858984-kube-api-access-c65nd\") pod \"ovn-controller-rtkrb\" (UID: \"4c57c864-37e8-46b9-b30d-1762f3858984\") " pod="openstack/ovn-controller-rtkrb" Jan 29 06:51:43 crc kubenswrapper[5017]: I0129 06:51:43.531235 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89q2r\" (UniqueName: \"kubernetes.io/projected/131b99f7-3558-4aec-a0bf-7c1ef0f35a2b-kube-api-access-89q2r\") pod \"ovn-controller-ovs-mrhnf\" (UID: \"131b99f7-3558-4aec-a0bf-7c1ef0f35a2b\") " pod="openstack/ovn-controller-ovs-mrhnf" Jan 29 06:51:43 crc kubenswrapper[5017]: I0129 06:51:43.598072 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-rtkrb" Jan 29 06:51:43 crc kubenswrapper[5017]: I0129 06:51:43.622422 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-mrhnf" Jan 29 06:51:44 crc kubenswrapper[5017]: I0129 06:51:44.168562 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 29 06:51:44 crc kubenswrapper[5017]: I0129 06:51:44.170564 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 29 06:51:44 crc kubenswrapper[5017]: I0129 06:51:44.174097 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 29 06:51:44 crc kubenswrapper[5017]: I0129 06:51:44.174155 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-chc76" Jan 29 06:51:44 crc kubenswrapper[5017]: I0129 06:51:44.174261 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 29 06:51:44 crc kubenswrapper[5017]: I0129 06:51:44.174581 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 29 06:51:44 crc kubenswrapper[5017]: I0129 06:51:44.176016 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 29 06:51:44 crc kubenswrapper[5017]: I0129 06:51:44.178888 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 29 06:51:44 crc kubenswrapper[5017]: I0129 06:51:44.323668 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-nb-0\" (UID: \"58a02d03-f3a8-4193-ba1d-623ecaa62fe9\") " pod="openstack/ovsdbserver-nb-0" Jan 29 06:51:44 crc kubenswrapper[5017]: I0129 06:51:44.323792 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/58a02d03-f3a8-4193-ba1d-623ecaa62fe9-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"58a02d03-f3a8-4193-ba1d-623ecaa62fe9\") " pod="openstack/ovsdbserver-nb-0" Jan 29 06:51:44 crc kubenswrapper[5017]: I0129 06:51:44.323856 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/58a02d03-f3a8-4193-ba1d-623ecaa62fe9-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"58a02d03-f3a8-4193-ba1d-623ecaa62fe9\") " pod="openstack/ovsdbserver-nb-0" Jan 29 06:51:44 crc 
kubenswrapper[5017]: I0129 06:51:44.323979 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/58a02d03-f3a8-4193-ba1d-623ecaa62fe9-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"58a02d03-f3a8-4193-ba1d-623ecaa62fe9\") " pod="openstack/ovsdbserver-nb-0" Jan 29 06:51:44 crc kubenswrapper[5017]: I0129 06:51:44.324065 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2pk9\" (UniqueName: \"kubernetes.io/projected/58a02d03-f3a8-4193-ba1d-623ecaa62fe9-kube-api-access-w2pk9\") pod \"ovsdbserver-nb-0\" (UID: \"58a02d03-f3a8-4193-ba1d-623ecaa62fe9\") " pod="openstack/ovsdbserver-nb-0" Jan 29 06:51:44 crc kubenswrapper[5017]: I0129 06:51:44.324108 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58a02d03-f3a8-4193-ba1d-623ecaa62fe9-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"58a02d03-f3a8-4193-ba1d-623ecaa62fe9\") " pod="openstack/ovsdbserver-nb-0" Jan 29 06:51:44 crc kubenswrapper[5017]: I0129 06:51:44.324155 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/58a02d03-f3a8-4193-ba1d-623ecaa62fe9-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"58a02d03-f3a8-4193-ba1d-623ecaa62fe9\") " pod="openstack/ovsdbserver-nb-0" Jan 29 06:51:44 crc kubenswrapper[5017]: I0129 06:51:44.324200 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58a02d03-f3a8-4193-ba1d-623ecaa62fe9-config\") pod \"ovsdbserver-nb-0\" (UID: \"58a02d03-f3a8-4193-ba1d-623ecaa62fe9\") " pod="openstack/ovsdbserver-nb-0" Jan 29 06:51:44 crc kubenswrapper[5017]: I0129 06:51:44.427881 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/58a02d03-f3a8-4193-ba1d-623ecaa62fe9-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"58a02d03-f3a8-4193-ba1d-623ecaa62fe9\") " pod="openstack/ovsdbserver-nb-0" Jan 29 06:51:44 crc kubenswrapper[5017]: I0129 06:51:44.428062 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2pk9\" (UniqueName: \"kubernetes.io/projected/58a02d03-f3a8-4193-ba1d-623ecaa62fe9-kube-api-access-w2pk9\") pod \"ovsdbserver-nb-0\" (UID: \"58a02d03-f3a8-4193-ba1d-623ecaa62fe9\") " pod="openstack/ovsdbserver-nb-0" Jan 29 06:51:44 crc kubenswrapper[5017]: I0129 06:51:44.428091 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58a02d03-f3a8-4193-ba1d-623ecaa62fe9-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"58a02d03-f3a8-4193-ba1d-623ecaa62fe9\") " pod="openstack/ovsdbserver-nb-0" Jan 29 06:51:44 crc kubenswrapper[5017]: I0129 06:51:44.428143 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/58a02d03-f3a8-4193-ba1d-623ecaa62fe9-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"58a02d03-f3a8-4193-ba1d-623ecaa62fe9\") " pod="openstack/ovsdbserver-nb-0" Jan 29 06:51:44 crc kubenswrapper[5017]: I0129 06:51:44.428169 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/58a02d03-f3a8-4193-ba1d-623ecaa62fe9-config\") pod \"ovsdbserver-nb-0\" (UID: \"58a02d03-f3a8-4193-ba1d-623ecaa62fe9\") " pod="openstack/ovsdbserver-nb-0" Jan 29 06:51:44 crc kubenswrapper[5017]: I0129 06:51:44.430347 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/58a02d03-f3a8-4193-ba1d-623ecaa62fe9-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"58a02d03-f3a8-4193-ba1d-623ecaa62fe9\") " pod="openstack/ovsdbserver-nb-0" Jan 29 06:51:44 crc kubenswrapper[5017]: I0129 06:51:44.430916 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58a02d03-f3a8-4193-ba1d-623ecaa62fe9-config\") pod \"ovsdbserver-nb-0\" (UID: \"58a02d03-f3a8-4193-ba1d-623ecaa62fe9\") " pod="openstack/ovsdbserver-nb-0" Jan 29 06:51:44 crc kubenswrapper[5017]: I0129 06:51:44.432370 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-nb-0\" (UID: \"58a02d03-f3a8-4193-ba1d-623ecaa62fe9\") " pod="openstack/ovsdbserver-nb-0" Jan 29 06:51:44 crc kubenswrapper[5017]: I0129 06:51:44.433166 5017 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-nb-0\" (UID: \"58a02d03-f3a8-4193-ba1d-623ecaa62fe9\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/ovsdbserver-nb-0" Jan 29 06:51:44 crc kubenswrapper[5017]: I0129 06:51:44.433738 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/58a02d03-f3a8-4193-ba1d-623ecaa62fe9-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"58a02d03-f3a8-4193-ba1d-623ecaa62fe9\") " pod="openstack/ovsdbserver-nb-0" Jan 29 06:51:44 crc kubenswrapper[5017]: I0129 06:51:44.434832 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/58a02d03-f3a8-4193-ba1d-623ecaa62fe9-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"58a02d03-f3a8-4193-ba1d-623ecaa62fe9\") " pod="openstack/ovsdbserver-nb-0" Jan 29 06:51:44 crc kubenswrapper[5017]: I0129 06:51:44.436587 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/58a02d03-f3a8-4193-ba1d-623ecaa62fe9-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"58a02d03-f3a8-4193-ba1d-623ecaa62fe9\") " pod="openstack/ovsdbserver-nb-0" Jan 29 06:51:44 crc kubenswrapper[5017]: I0129 06:51:44.442060 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/58a02d03-f3a8-4193-ba1d-623ecaa62fe9-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"58a02d03-f3a8-4193-ba1d-623ecaa62fe9\") " pod="openstack/ovsdbserver-nb-0" Jan 29 06:51:44 crc kubenswrapper[5017]: I0129 06:51:44.444995 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/58a02d03-f3a8-4193-ba1d-623ecaa62fe9-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"58a02d03-f3a8-4193-ba1d-623ecaa62fe9\") " pod="openstack/ovsdbserver-nb-0" Jan 29 06:51:44 crc kubenswrapper[5017]: I0129 06:51:44.449135 5017 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58a02d03-f3a8-4193-ba1d-623ecaa62fe9-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"58a02d03-f3a8-4193-ba1d-623ecaa62fe9\") " pod="openstack/ovsdbserver-nb-0" Jan 29 06:51:44 crc kubenswrapper[5017]: I0129 06:51:44.453184 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2pk9\" (UniqueName: \"kubernetes.io/projected/58a02d03-f3a8-4193-ba1d-623ecaa62fe9-kube-api-access-w2pk9\") pod \"ovsdbserver-nb-0\" (UID: \"58a02d03-f3a8-4193-ba1d-623ecaa62fe9\") " pod="openstack/ovsdbserver-nb-0" Jan 29 06:51:44 crc kubenswrapper[5017]: I0129 06:51:44.473854 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-nb-0\" (UID: \"58a02d03-f3a8-4193-ba1d-623ecaa62fe9\") " pod="openstack/ovsdbserver-nb-0" Jan 29 06:51:44 crc kubenswrapper[5017]: I0129 06:51:44.493247 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 29 06:51:48 crc kubenswrapper[5017]: I0129 06:51:48.255641 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 29 06:51:48 crc kubenswrapper[5017]: I0129 06:51:48.261380 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 29 06:51:48 crc kubenswrapper[5017]: I0129 06:51:48.265502 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 29 06:51:48 crc kubenswrapper[5017]: I0129 06:51:48.267082 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 29 06:51:48 crc kubenswrapper[5017]: I0129 06:51:48.267348 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 29 06:51:48 crc kubenswrapper[5017]: I0129 06:51:48.267531 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-hngbq" Jan 29 06:51:48 crc kubenswrapper[5017]: I0129 06:51:48.285390 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 29 06:51:48 crc kubenswrapper[5017]: I0129 06:51:48.320540 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9v65\" (UniqueName: \"kubernetes.io/projected/08c15cf8-f386-428a-a94a-c33598b182a9-kube-api-access-x9v65\") pod \"ovsdbserver-sb-0\" (UID: \"08c15cf8-f386-428a-a94a-c33598b182a9\") " pod="openstack/ovsdbserver-sb-0" Jan 29 06:51:48 crc kubenswrapper[5017]: I0129 06:51:48.320612 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"08c15cf8-f386-428a-a94a-c33598b182a9\") " pod="openstack/ovsdbserver-sb-0" Jan 29 06:51:48 crc kubenswrapper[5017]: I0129 06:51:48.320678 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08c15cf8-f386-428a-a94a-c33598b182a9-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"08c15cf8-f386-428a-a94a-c33598b182a9\") " pod="openstack/ovsdbserver-sb-0" Jan 29 06:51:48 crc kubenswrapper[5017]: I0129 06:51:48.320735 5017 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/08c15cf8-f386-428a-a94a-c33598b182a9-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"08c15cf8-f386-428a-a94a-c33598b182a9\") " pod="openstack/ovsdbserver-sb-0" Jan 29 06:51:48 crc kubenswrapper[5017]: I0129 06:51:48.321042 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/08c15cf8-f386-428a-a94a-c33598b182a9-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"08c15cf8-f386-428a-a94a-c33598b182a9\") " pod="openstack/ovsdbserver-sb-0" Jan 29 06:51:48 crc kubenswrapper[5017]: I0129 06:51:48.321144 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08c15cf8-f386-428a-a94a-c33598b182a9-config\") pod \"ovsdbserver-sb-0\" (UID: \"08c15cf8-f386-428a-a94a-c33598b182a9\") " pod="openstack/ovsdbserver-sb-0" Jan 29 06:51:48 crc kubenswrapper[5017]: I0129 06:51:48.321232 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/08c15cf8-f386-428a-a94a-c33598b182a9-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"08c15cf8-f386-428a-a94a-c33598b182a9\") " pod="openstack/ovsdbserver-sb-0" Jan 29 06:51:48 crc kubenswrapper[5017]: I0129 06:51:48.321287 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/08c15cf8-f386-428a-a94a-c33598b182a9-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"08c15cf8-f386-428a-a94a-c33598b182a9\") " pod="openstack/ovsdbserver-sb-0" Jan 29 06:51:48 crc kubenswrapper[5017]: I0129 06:51:48.423715 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"08c15cf8-f386-428a-a94a-c33598b182a9\") " pod="openstack/ovsdbserver-sb-0" Jan 29 06:51:48 crc kubenswrapper[5017]: I0129 06:51:48.423811 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08c15cf8-f386-428a-a94a-c33598b182a9-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"08c15cf8-f386-428a-a94a-c33598b182a9\") " pod="openstack/ovsdbserver-sb-0" Jan 29 06:51:48 crc kubenswrapper[5017]: I0129 06:51:48.423859 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/08c15cf8-f386-428a-a94a-c33598b182a9-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"08c15cf8-f386-428a-a94a-c33598b182a9\") " pod="openstack/ovsdbserver-sb-0" Jan 29 06:51:48 crc kubenswrapper[5017]: I0129 06:51:48.423897 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/08c15cf8-f386-428a-a94a-c33598b182a9-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"08c15cf8-f386-428a-a94a-c33598b182a9\") " pod="openstack/ovsdbserver-sb-0" Jan 29 06:51:48 crc kubenswrapper[5017]: I0129 06:51:48.423920 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08c15cf8-f386-428a-a94a-c33598b182a9-config\") pod 
\"ovsdbserver-sb-0\" (UID: \"08c15cf8-f386-428a-a94a-c33598b182a9\") " pod="openstack/ovsdbserver-sb-0" Jan 29 06:51:48 crc kubenswrapper[5017]: I0129 06:51:48.423966 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/08c15cf8-f386-428a-a94a-c33598b182a9-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"08c15cf8-f386-428a-a94a-c33598b182a9\") " pod="openstack/ovsdbserver-sb-0" Jan 29 06:51:48 crc kubenswrapper[5017]: I0129 06:51:48.423987 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/08c15cf8-f386-428a-a94a-c33598b182a9-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"08c15cf8-f386-428a-a94a-c33598b182a9\") " pod="openstack/ovsdbserver-sb-0" Jan 29 06:51:48 crc kubenswrapper[5017]: I0129 06:51:48.424027 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9v65\" (UniqueName: \"kubernetes.io/projected/08c15cf8-f386-428a-a94a-c33598b182a9-kube-api-access-x9v65\") pod \"ovsdbserver-sb-0\" (UID: \"08c15cf8-f386-428a-a94a-c33598b182a9\") " pod="openstack/ovsdbserver-sb-0" Jan 29 06:51:48 crc kubenswrapper[5017]: I0129 06:51:48.424244 5017 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"08c15cf8-f386-428a-a94a-c33598b182a9\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/ovsdbserver-sb-0" Jan 29 06:51:48 crc kubenswrapper[5017]: I0129 06:51:48.425237 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/08c15cf8-f386-428a-a94a-c33598b182a9-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"08c15cf8-f386-428a-a94a-c33598b182a9\") " pod="openstack/ovsdbserver-sb-0" Jan 29 06:51:48 crc kubenswrapper[5017]: I0129 06:51:48.425549 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/08c15cf8-f386-428a-a94a-c33598b182a9-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"08c15cf8-f386-428a-a94a-c33598b182a9\") " pod="openstack/ovsdbserver-sb-0" Jan 29 06:51:48 crc kubenswrapper[5017]: I0129 06:51:48.426101 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08c15cf8-f386-428a-a94a-c33598b182a9-config\") pod \"ovsdbserver-sb-0\" (UID: \"08c15cf8-f386-428a-a94a-c33598b182a9\") " pod="openstack/ovsdbserver-sb-0" Jan 29 06:51:48 crc kubenswrapper[5017]: I0129 06:51:48.432144 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/08c15cf8-f386-428a-a94a-c33598b182a9-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"08c15cf8-f386-428a-a94a-c33598b182a9\") " pod="openstack/ovsdbserver-sb-0" Jan 29 06:51:48 crc kubenswrapper[5017]: I0129 06:51:48.432185 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/08c15cf8-f386-428a-a94a-c33598b182a9-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"08c15cf8-f386-428a-a94a-c33598b182a9\") " pod="openstack/ovsdbserver-sb-0" Jan 29 06:51:48 crc kubenswrapper[5017]: I0129 06:51:48.442466 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/08c15cf8-f386-428a-a94a-c33598b182a9-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"08c15cf8-f386-428a-a94a-c33598b182a9\") " pod="openstack/ovsdbserver-sb-0" Jan 29 06:51:48 crc kubenswrapper[5017]: I0129 06:51:48.445127 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9v65\" (UniqueName: \"kubernetes.io/projected/08c15cf8-f386-428a-a94a-c33598b182a9-kube-api-access-x9v65\") pod \"ovsdbserver-sb-0\" (UID: \"08c15cf8-f386-428a-a94a-c33598b182a9\") " pod="openstack/ovsdbserver-sb-0" Jan 29 06:51:48 crc kubenswrapper[5017]: I0129 06:51:48.457871 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"08c15cf8-f386-428a-a94a-c33598b182a9\") " pod="openstack/ovsdbserver-sb-0" Jan 29 06:51:48 crc kubenswrapper[5017]: I0129 06:51:48.597440 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 29 06:51:56 crc kubenswrapper[5017]: E0129 06:51:56.401338 5017 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:e733252aab7f4bc0efbdd712bcd88e44c5498bf1773dba843bc9dcfac324fe3d" Jan 29 06:51:56 crc kubenswrapper[5017]: E0129 06:51:56.402688 5017 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:e733252aab7f4bc0efbdd712bcd88e44c5498bf1773dba843bc9dcfac324fe3d,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2h9jd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(d30b013f-453f-4282-8b22-2a5270027828): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 06:51:56 crc kubenswrapper[5017]: E0129 06:51:56.404222 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="d30b013f-453f-4282-8b22-2a5270027828" Jan 29 06:51:56 crc kubenswrapper[5017]: E0129 06:51:56.410156 5017 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:e733252aab7f4bc0efbdd712bcd88e44c5498bf1773dba843bc9dcfac324fe3d" Jan 29 06:51:56 crc kubenswrapper[5017]: E0129 06:51:56.410429 5017 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:e733252aab7f4bc0efbdd712bcd88e44c5498bf1773dba843bc9dcfac324fe3d,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m 
DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5m7bw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 06:51:56 crc kubenswrapper[5017]: E0129 06:51:56.411878 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a" Jan 29 06:51:57 crc kubenswrapper[5017]: E0129 06:51:57.008480 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:e733252aab7f4bc0efbdd712bcd88e44c5498bf1773dba843bc9dcfac324fe3d\\\"\"" pod="openstack/rabbitmq-server-0" podUID="5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a" Jan 29 06:51:57 crc kubenswrapper[5017]: E0129 06:51:57.009136 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:e733252aab7f4bc0efbdd712bcd88e44c5498bf1773dba843bc9dcfac324fe3d\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="d30b013f-453f-4282-8b22-2a5270027828" Jan 29 06:51:58 crc kubenswrapper[5017]: E0129 06:51:58.579633 5017 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13" Jan 29 06:51:58 crc kubenswrapper[5017]: E0129 06:51:58.580300 5017 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-brhn7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(9af88cca-e43b-483d-beae-d6a56940aff7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 06:51:58 crc kubenswrapper[5017]: E0129 06:51:58.582311 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="9af88cca-e43b-483d-beae-d6a56940aff7" Jan 29 06:51:59 crc kubenswrapper[5017]: E0129 06:51:59.022074 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13\\\"\"" pod="openstack/openstack-galera-0" podUID="9af88cca-e43b-483d-beae-d6a56940aff7" Jan 29 06:51:59 crc kubenswrapper[5017]: E0129 06:51:59.380977 5017 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/podified-antelope-centos9/openstack-memcached@sha256:e47191ba776414b781b3e27b856ab45a03b9480c7dc2b1addb939608794882dc" Jan 29 06:51:59 crc kubenswrapper[5017]: E0129 06:51:59.381680 5017 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:memcached,Image:quay.io/podified-antelope-centos9/openstack-memcached@sha256:e47191ba776414b781b3e27b856ab45a03b9480c7dc2b1addb939608794882dc,Command:[/usr/bin/dumb-init -- /usr/local/bin/kolla_start],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:memcached,HostPort:0,ContainerPort:11211,Protocol:TCP,HostIP:,},ContainerPort{Name:memcached-tls,HostPort:0,ContainerPort:11212,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:POD_IPS,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIPs,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CONFIG_HASH,Value:n5d4h586h545h698h65bh68bh576h677h65ch699h84h5ffh645h95h68fh589h5d4h579h579h56fh694hbh56dh595h689h5d8hcdh57ch59chcdh5dbh55q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/src,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/certs/memcached.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/private/memcached.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tbh96,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42457,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42457,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod memcached-0_openstack(cc46a149-0256-4061-9e32-936b2ec12588): ErrImagePull: rpc error: code = Canceled desc = copying config: 
context canceled" logger="UnhandledError" Jan 29 06:51:59 crc kubenswrapper[5017]: E0129 06:51:59.383345 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/memcached-0" podUID="cc46a149-0256-4061-9e32-936b2ec12588" Jan 29 06:51:59 crc kubenswrapper[5017]: E0129 06:51:59.390401 5017 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13" Jan 29 06:51:59 crc kubenswrapper[5017]: E0129 06:51:59.390969 5017 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mqk5q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-cell1-galera-0_openstack(ec5c09bc-f98c-4587-b4e3-ec9269c04a71): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 06:51:59 crc kubenswrapper[5017]: E0129 06:51:59.394194 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-cell1-galera-0" podUID="ec5c09bc-f98c-4587-b4e3-ec9269c04a71" Jan 29 06:52:00 crc kubenswrapper[5017]: E0129 06:52:00.046401 5017 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-memcached@sha256:e47191ba776414b781b3e27b856ab45a03b9480c7dc2b1addb939608794882dc\\\"\"" pod="openstack/memcached-0" podUID="cc46a149-0256-4061-9e32-936b2ec12588" Jan 29 06:52:00 crc kubenswrapper[5017]: E0129 06:52:00.052534 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13\\\"\"" pod="openstack/openstack-cell1-galera-0" podUID="ec5c09bc-f98c-4587-b4e3-ec9269c04a71" Jan 29 06:52:04 crc kubenswrapper[5017]: E0129 06:52:04.997612 5017 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33" Jan 29 06:52:05 crc kubenswrapper[5017]: E0129 06:52:04.998409 5017 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nfdh5dfhb6h64h676hc4h78h97h669h54chfbh696hb5h54bh5d4h6bh64h644h677h584h5cbh698h9dh5bbh5f8h5b8hcdh644h5c7h694hbfh589q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9df4t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-c7cbb8f79-khsnm_openstack(8f8a4d22-67c6-4e51-9adb-6c96c07e01bb): ErrImagePull: rpc error: code = Canceled desc = 
copying config: context canceled" logger="UnhandledError" Jan 29 06:52:05 crc kubenswrapper[5017]: E0129 06:52:04.999891 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-c7cbb8f79-khsnm" podUID="8f8a4d22-67c6-4e51-9adb-6c96c07e01bb" Jan 29 06:52:05 crc kubenswrapper[5017]: E0129 06:52:05.007633 5017 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33" Jan 29 06:52:05 crc kubenswrapper[5017]: E0129 06:52:05.007899 5017 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4tssx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-95f5f6995-jm24m_openstack(c8001b26-5c08-42b9-9d59-f4422d318af8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 06:52:05 crc kubenswrapper[5017]: E0129 06:52:05.012526 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-95f5f6995-jm24m" podUID="c8001b26-5c08-42b9-9d59-f4422d318af8" Jan 
29 06:52:05 crc kubenswrapper[5017]: E0129 06:52:05.025151 5017 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33" Jan 29 06:52:05 crc kubenswrapper[5017]: E0129 06:52:05.025320 5017 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gbkjv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-84bb9d8bd9-4nxns_openstack(6e65fd67-ea35-422a-be2f-bf4f914fff56): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 06:52:05 crc kubenswrapper[5017]: E0129 06:52:05.026466 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-84bb9d8bd9-4nxns" podUID="6e65fd67-ea35-422a-be2f-bf4f914fff56" Jan 29 06:52:05 crc kubenswrapper[5017]: E0129 06:52:05.046845 5017 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33" Jan 29 06:52:05 crc kubenswrapper[5017]: E0129 06:52:05.047026 5017 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jss5n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-5f854695bc-2lstr_openstack(fba91fed-6ade-404d-8774-3269e450278b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 06:52:05 crc kubenswrapper[5017]: E0129 06:52:05.048415 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-5f854695bc-2lstr" podUID="fba91fed-6ade-404d-8774-3269e450278b" Jan 29 06:52:05 crc kubenswrapper[5017]: E0129 06:52:05.101373 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33\\\"\"" pod="openstack/dnsmasq-dns-c7cbb8f79-khsnm" podUID="8f8a4d22-67c6-4e51-9adb-6c96c07e01bb" Jan 29 06:52:05 crc kubenswrapper[5017]: E0129 06:52:05.101712 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33\\\"\"" pod="openstack/dnsmasq-dns-95f5f6995-jm24m" 
podUID="c8001b26-5c08-42b9-9d59-f4422d318af8" Jan 29 06:52:05 crc kubenswrapper[5017]: I0129 06:52:05.581481 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-rtkrb"] Jan 29 06:52:05 crc kubenswrapper[5017]: I0129 06:52:05.601421 5017 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 06:52:05 crc kubenswrapper[5017]: I0129 06:52:05.711487 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84bb9d8bd9-4nxns" Jan 29 06:52:05 crc kubenswrapper[5017]: I0129 06:52:05.718931 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f854695bc-2lstr" Jan 29 06:52:05 crc kubenswrapper[5017]: I0129 06:52:05.789704 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 06:52:05 crc kubenswrapper[5017]: W0129 06:52:05.789864 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode0b13e83_038a_4d46_8a03_48f09dc18e43.slice/crio-813927801d9778f7647be6eb82d23767b4830daf4233271ec53c80c3c357bf13 WatchSource:0}: Error finding container 813927801d9778f7647be6eb82d23767b4830daf4233271ec53c80c3c357bf13: Status 404 returned error can't find the container with id 813927801d9778f7647be6eb82d23767b4830daf4233271ec53c80c3c357bf13 Jan 29 06:52:05 crc kubenswrapper[5017]: I0129 06:52:05.811157 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-mrhnf"] Jan 29 06:52:05 crc kubenswrapper[5017]: I0129 06:52:05.852259 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fba91fed-6ade-404d-8774-3269e450278b-dns-svc\") pod \"fba91fed-6ade-404d-8774-3269e450278b\" (UID: \"fba91fed-6ade-404d-8774-3269e450278b\") " Jan 29 06:52:05 crc kubenswrapper[5017]: I0129 06:52:05.852400 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gbkjv\" (UniqueName: \"kubernetes.io/projected/6e65fd67-ea35-422a-be2f-bf4f914fff56-kube-api-access-gbkjv\") pod \"6e65fd67-ea35-422a-be2f-bf4f914fff56\" (UID: \"6e65fd67-ea35-422a-be2f-bf4f914fff56\") " Jan 29 06:52:05 crc kubenswrapper[5017]: I0129 06:52:05.852468 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e65fd67-ea35-422a-be2f-bf4f914fff56-config\") pod \"6e65fd67-ea35-422a-be2f-bf4f914fff56\" (UID: \"6e65fd67-ea35-422a-be2f-bf4f914fff56\") " Jan 29 06:52:05 crc kubenswrapper[5017]: I0129 06:52:05.852558 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jss5n\" (UniqueName: \"kubernetes.io/projected/fba91fed-6ade-404d-8774-3269e450278b-kube-api-access-jss5n\") pod \"fba91fed-6ade-404d-8774-3269e450278b\" (UID: \"fba91fed-6ade-404d-8774-3269e450278b\") " Jan 29 06:52:05 crc kubenswrapper[5017]: I0129 06:52:05.852687 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fba91fed-6ade-404d-8774-3269e450278b-config\") pod \"fba91fed-6ade-404d-8774-3269e450278b\" (UID: \"fba91fed-6ade-404d-8774-3269e450278b\") " Jan 29 06:52:05 crc kubenswrapper[5017]: I0129 06:52:05.853009 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/fba91fed-6ade-404d-8774-3269e450278b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fba91fed-6ade-404d-8774-3269e450278b" (UID: "fba91fed-6ade-404d-8774-3269e450278b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:52:05 crc kubenswrapper[5017]: I0129 06:52:05.853106 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e65fd67-ea35-422a-be2f-bf4f914fff56-config" (OuterVolumeSpecName: "config") pod "6e65fd67-ea35-422a-be2f-bf4f914fff56" (UID: "6e65fd67-ea35-422a-be2f-bf4f914fff56"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:52:05 crc kubenswrapper[5017]: I0129 06:52:05.853147 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fba91fed-6ade-404d-8774-3269e450278b-config" (OuterVolumeSpecName: "config") pod "fba91fed-6ade-404d-8774-3269e450278b" (UID: "fba91fed-6ade-404d-8774-3269e450278b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:52:05 crc kubenswrapper[5017]: I0129 06:52:05.853701 5017 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fba91fed-6ade-404d-8774-3269e450278b-config\") on node \"crc\" DevicePath \"\"" Jan 29 06:52:05 crc kubenswrapper[5017]: I0129 06:52:05.853731 5017 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fba91fed-6ade-404d-8774-3269e450278b-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 06:52:05 crc kubenswrapper[5017]: I0129 06:52:05.853741 5017 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e65fd67-ea35-422a-be2f-bf4f914fff56-config\") on node \"crc\" DevicePath \"\"" Jan 29 06:52:05 crc kubenswrapper[5017]: I0129 06:52:05.859445 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fba91fed-6ade-404d-8774-3269e450278b-kube-api-access-jss5n" (OuterVolumeSpecName: "kube-api-access-jss5n") pod "fba91fed-6ade-404d-8774-3269e450278b" (UID: "fba91fed-6ade-404d-8774-3269e450278b"). InnerVolumeSpecName "kube-api-access-jss5n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:52:05 crc kubenswrapper[5017]: I0129 06:52:05.859616 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e65fd67-ea35-422a-be2f-bf4f914fff56-kube-api-access-gbkjv" (OuterVolumeSpecName: "kube-api-access-gbkjv") pod "6e65fd67-ea35-422a-be2f-bf4f914fff56" (UID: "6e65fd67-ea35-422a-be2f-bf4f914fff56"). InnerVolumeSpecName "kube-api-access-gbkjv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:52:05 crc kubenswrapper[5017]: I0129 06:52:05.884225 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 29 06:52:05 crc kubenswrapper[5017]: I0129 06:52:05.955944 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gbkjv\" (UniqueName: \"kubernetes.io/projected/6e65fd67-ea35-422a-be2f-bf4f914fff56-kube-api-access-gbkjv\") on node \"crc\" DevicePath \"\"" Jan 29 06:52:05 crc kubenswrapper[5017]: I0129 06:52:05.956143 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jss5n\" (UniqueName: \"kubernetes.io/projected/fba91fed-6ade-404d-8774-3269e450278b-kube-api-access-jss5n\") on node \"crc\" DevicePath \"\"" Jan 29 06:52:06 crc kubenswrapper[5017]: I0129 06:52:06.102574 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84bb9d8bd9-4nxns" Jan 29 06:52:06 crc kubenswrapper[5017]: I0129 06:52:06.102591 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84bb9d8bd9-4nxns" event={"ID":"6e65fd67-ea35-422a-be2f-bf4f914fff56","Type":"ContainerDied","Data":"3379cdbf546d41d44d6a586f143502e4700353d93d7d912a13f3bc5f84b74cc3"} Jan 29 06:52:06 crc kubenswrapper[5017]: I0129 06:52:06.106082 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"e0b13e83-038a-4d46-8a03-48f09dc18e43","Type":"ContainerStarted","Data":"813927801d9778f7647be6eb82d23767b4830daf4233271ec53c80c3c357bf13"} Jan 29 06:52:06 crc kubenswrapper[5017]: I0129 06:52:06.107975 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f854695bc-2lstr" event={"ID":"fba91fed-6ade-404d-8774-3269e450278b","Type":"ContainerDied","Data":"4a138d133603793aafb36c5490006d1698baee629fd3e7cf673ad2ec243ec6dc"} Jan 29 06:52:06 crc kubenswrapper[5017]: I0129 06:52:06.108088 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f854695bc-2lstr" Jan 29 06:52:06 crc kubenswrapper[5017]: I0129 06:52:06.110086 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-mrhnf" event={"ID":"131b99f7-3558-4aec-a0bf-7c1ef0f35a2b","Type":"ContainerStarted","Data":"adb8db4c014230916e54c16802b5b3256625e42c4eae64360e39d804ad5a1d3c"} Jan 29 06:52:06 crc kubenswrapper[5017]: I0129 06:52:06.111890 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"58a02d03-f3a8-4193-ba1d-623ecaa62fe9","Type":"ContainerStarted","Data":"257c03e8b266b9980b73d4a25d25ca786ed3703bb6b78721aadea28a4e400741"} Jan 29 06:52:06 crc kubenswrapper[5017]: I0129 06:52:06.113354 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-rtkrb" event={"ID":"4c57c864-37e8-46b9-b30d-1762f3858984","Type":"ContainerStarted","Data":"3a688b9e84a4c0454301d83a62563ec3756b59cb38ace244748884f01028e25f"} Jan 29 06:52:06 crc kubenswrapper[5017]: I0129 06:52:06.189169 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-4nxns"] Jan 29 06:52:06 crc kubenswrapper[5017]: I0129 06:52:06.208934 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-4nxns"] Jan 29 06:52:06 crc kubenswrapper[5017]: I0129 06:52:06.224977 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-2lstr"] Jan 29 06:52:06 crc kubenswrapper[5017]: I0129 06:52:06.231932 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-2lstr"] Jan 29 06:52:06 crc kubenswrapper[5017]: I0129 06:52:06.328070 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e65fd67-ea35-422a-be2f-bf4f914fff56" path="/var/lib/kubelet/pods/6e65fd67-ea35-422a-be2f-bf4f914fff56/volumes" Jan 29 06:52:06 crc kubenswrapper[5017]: I0129 06:52:06.328731 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fba91fed-6ade-404d-8774-3269e450278b" path="/var/lib/kubelet/pods/fba91fed-6ade-404d-8774-3269e450278b/volumes" Jan 29 06:52:06 crc kubenswrapper[5017]: I0129 06:52:06.652029 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 29 06:52:07 crc kubenswrapper[5017]: W0129 06:52:07.129529 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod08c15cf8_f386_428a_a94a_c33598b182a9.slice/crio-b218be6dd477f68645c3bee8616fad21795a1b638eaff7f69a46aa2776a00322 WatchSource:0}: Error finding container b218be6dd477f68645c3bee8616fad21795a1b638eaff7f69a46aa2776a00322: Status 404 returned error can't find the container with id b218be6dd477f68645c3bee8616fad21795a1b638eaff7f69a46aa2776a00322 Jan 29 06:52:08 crc kubenswrapper[5017]: I0129 06:52:08.139702 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"08c15cf8-f386-428a-a94a-c33598b182a9","Type":"ContainerStarted","Data":"b218be6dd477f68645c3bee8616fad21795a1b638eaff7f69a46aa2776a00322"} Jan 29 06:52:11 crc kubenswrapper[5017]: I0129 06:52:11.176796 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"08c15cf8-f386-428a-a94a-c33598b182a9","Type":"ContainerStarted","Data":"96a4605d01e3b489afb2e1c96ec4c37e5d1cf81a538304c82d2cf924de1a26ac"} Jan 29 06:52:11 crc kubenswrapper[5017]: I0129 06:52:11.179409 5017 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/kube-state-metrics-0" event={"ID":"e0b13e83-038a-4d46-8a03-48f09dc18e43","Type":"ContainerStarted","Data":"f82af6768cb039827e81b07eb73693e5eba73d98bc49a2556e0d429cec64be8c"} Jan 29 06:52:11 crc kubenswrapper[5017]: I0129 06:52:11.180200 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 29 06:52:11 crc kubenswrapper[5017]: I0129 06:52:11.181695 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-mrhnf" event={"ID":"131b99f7-3558-4aec-a0bf-7c1ef0f35a2b","Type":"ContainerStarted","Data":"4ab1a2f79367075930dce611800526649c246682eb93e1f1f2b26e57e36b8756"} Jan 29 06:52:11 crc kubenswrapper[5017]: I0129 06:52:11.184083 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"58a02d03-f3a8-4193-ba1d-623ecaa62fe9","Type":"ContainerStarted","Data":"8b0e655613ff88e4336cd67c78361a1d00ccdd95612e304ec5f2445083d8f735"} Jan 29 06:52:11 crc kubenswrapper[5017]: I0129 06:52:11.187197 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-rtkrb" event={"ID":"4c57c864-37e8-46b9-b30d-1762f3858984","Type":"ContainerStarted","Data":"050af2fa573ee2d4aef1b31ce94271336586d6f6093ebcaf8cd51c675652f905"} Jan 29 06:52:11 crc kubenswrapper[5017]: I0129 06:52:11.187424 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-rtkrb" Jan 29 06:52:11 crc kubenswrapper[5017]: I0129 06:52:11.204086 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=26.950111905 podStartE2EDuration="31.20406184s" podCreationTimestamp="2026-01-29 06:51:40 +0000 UTC" firstStartedPulling="2026-01-29 06:52:05.796371266 +0000 UTC m=+1012.170818886" lastFinishedPulling="2026-01-29 06:52:10.050321171 +0000 UTC m=+1016.424768821" observedRunningTime="2026-01-29 06:52:11.200729018 +0000 UTC m=+1017.575176618" watchObservedRunningTime="2026-01-29 06:52:11.20406184 +0000 UTC m=+1017.578509450" Jan 29 06:52:11 crc kubenswrapper[5017]: I0129 06:52:11.225860 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-rtkrb" podStartSLOduration=24.610039594 podStartE2EDuration="28.225833243s" podCreationTimestamp="2026-01-29 06:51:43 +0000 UTC" firstStartedPulling="2026-01-29 06:52:05.601197969 +0000 UTC m=+1011.975645579" lastFinishedPulling="2026-01-29 06:52:09.216991618 +0000 UTC m=+1015.591439228" observedRunningTime="2026-01-29 06:52:11.219073327 +0000 UTC m=+1017.593520937" watchObservedRunningTime="2026-01-29 06:52:11.225833243 +0000 UTC m=+1017.600280853" Jan 29 06:52:12 crc kubenswrapper[5017]: I0129 06:52:12.204276 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a","Type":"ContainerStarted","Data":"5620e40e37943966428bb68f9e7fc93c03515596e012174ed760e46fcfaeefcb"} Jan 29 06:52:12 crc kubenswrapper[5017]: I0129 06:52:12.206912 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d30b013f-453f-4282-8b22-2a5270027828","Type":"ContainerStarted","Data":"afda212927d2b2e8cc6edc8253fe7720c2d3d32a79a9247dbc74ea8bdcd22913"} Jan 29 06:52:12 crc kubenswrapper[5017]: I0129 06:52:12.209425 5017 generic.go:334] "Generic (PLEG): container finished" podID="131b99f7-3558-4aec-a0bf-7c1ef0f35a2b" containerID="4ab1a2f79367075930dce611800526649c246682eb93e1f1f2b26e57e36b8756" 
exitCode=0 Jan 29 06:52:12 crc kubenswrapper[5017]: I0129 06:52:12.209838 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-mrhnf" event={"ID":"131b99f7-3558-4aec-a0bf-7c1ef0f35a2b","Type":"ContainerDied","Data":"4ab1a2f79367075930dce611800526649c246682eb93e1f1f2b26e57e36b8756"} Jan 29 06:52:14 crc kubenswrapper[5017]: I0129 06:52:14.231534 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-mrhnf" event={"ID":"131b99f7-3558-4aec-a0bf-7c1ef0f35a2b","Type":"ContainerStarted","Data":"f3ad8c4f20bf955fbc05781f196725d983e26995a8d00c19f11477369f5a08b1"} Jan 29 06:52:14 crc kubenswrapper[5017]: I0129 06:52:14.232509 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-mrhnf" event={"ID":"131b99f7-3558-4aec-a0bf-7c1ef0f35a2b","Type":"ContainerStarted","Data":"1e3d13255a0abc9a24e35788246958c76121b82d0582fabb929e39b38e4591aa"} Jan 29 06:52:14 crc kubenswrapper[5017]: I0129 06:52:14.232569 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-mrhnf" Jan 29 06:52:14 crc kubenswrapper[5017]: I0129 06:52:14.232594 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-mrhnf" Jan 29 06:52:14 crc kubenswrapper[5017]: I0129 06:52:14.237472 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"58a02d03-f3a8-4193-ba1d-623ecaa62fe9","Type":"ContainerStarted","Data":"dd1494bb8f06d376a772d0890f42484f96047c14209cf64f5a4fb14363143583"} Jan 29 06:52:14 crc kubenswrapper[5017]: I0129 06:52:14.245039 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"ec5c09bc-f98c-4587-b4e3-ec9269c04a71","Type":"ContainerStarted","Data":"541eef9ed8a601fb50010147d0b92594e48943cf7dfddc1493041868a74ebb85"} Jan 29 06:52:14 crc kubenswrapper[5017]: I0129 06:52:14.248618 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"9af88cca-e43b-483d-beae-d6a56940aff7","Type":"ContainerStarted","Data":"3bbbc1fc8a72c66dc23676db10ea62a991b138c8e95fdb7d35a472153c5b43f7"} Jan 29 06:52:14 crc kubenswrapper[5017]: I0129 06:52:14.253853 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"08c15cf8-f386-428a-a94a-c33598b182a9","Type":"ContainerStarted","Data":"970018d4bd2130b0277d105c14cb252f0b9fe0e15de053637b5663a6a7609e01"} Jan 29 06:52:14 crc kubenswrapper[5017]: I0129 06:52:14.262364 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-mrhnf" podStartSLOduration=28.005355087 podStartE2EDuration="31.262342157s" podCreationTimestamp="2026-01-29 06:51:43 +0000 UTC" firstStartedPulling="2026-01-29 06:52:05.819189375 +0000 UTC m=+1012.193636985" lastFinishedPulling="2026-01-29 06:52:09.076176435 +0000 UTC m=+1015.450624055" observedRunningTime="2026-01-29 06:52:14.252808814 +0000 UTC m=+1020.627256424" watchObservedRunningTime="2026-01-29 06:52:14.262342157 +0000 UTC m=+1020.636789777" Jan 29 06:52:14 crc kubenswrapper[5017]: I0129 06:52:14.283213 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=23.758360004 podStartE2EDuration="31.283180679s" podCreationTimestamp="2026-01-29 06:51:43 +0000 UTC" firstStartedPulling="2026-01-29 06:52:05.890191876 +0000 UTC m=+1012.264639486" lastFinishedPulling="2026-01-29 06:52:13.415012551 
+0000 UTC m=+1019.789460161" observedRunningTime="2026-01-29 06:52:14.278121524 +0000 UTC m=+1020.652569134" watchObservedRunningTime="2026-01-29 06:52:14.283180679 +0000 UTC m=+1020.657628289" Jan 29 06:52:14 crc kubenswrapper[5017]: I0129 06:52:14.415625 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=21.134528386 podStartE2EDuration="27.415607105s" podCreationTimestamp="2026-01-29 06:51:47 +0000 UTC" firstStartedPulling="2026-01-29 06:52:07.133678346 +0000 UTC m=+1013.508125956" lastFinishedPulling="2026-01-29 06:52:13.414757065 +0000 UTC m=+1019.789204675" observedRunningTime="2026-01-29 06:52:14.411415262 +0000 UTC m=+1020.785862872" watchObservedRunningTime="2026-01-29 06:52:14.415607105 +0000 UTC m=+1020.790054715" Jan 29 06:52:14 crc kubenswrapper[5017]: I0129 06:52:14.494143 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 29 06:52:14 crc kubenswrapper[5017]: I0129 06:52:14.494575 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 29 06:52:14 crc kubenswrapper[5017]: I0129 06:52:14.535529 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 29 06:52:15 crc kubenswrapper[5017]: I0129 06:52:15.265039 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"cc46a149-0256-4061-9e32-936b2ec12588","Type":"ContainerStarted","Data":"7c791894b1734b0ef6f635f2bdcbd5ede8f7115df23c5068ed2fb5212e72b15f"} Jan 29 06:52:15 crc kubenswrapper[5017]: I0129 06:52:15.265778 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 29 06:52:15 crc kubenswrapper[5017]: I0129 06:52:15.288279 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=2.00359712 podStartE2EDuration="37.288255143s" podCreationTimestamp="2026-01-29 06:51:38 +0000 UTC" firstStartedPulling="2026-01-29 06:51:39.498861623 +0000 UTC m=+985.873309233" lastFinishedPulling="2026-01-29 06:52:14.783519626 +0000 UTC m=+1021.157967256" observedRunningTime="2026-01-29 06:52:15.285657619 +0000 UTC m=+1021.660105269" watchObservedRunningTime="2026-01-29 06:52:15.288255143 +0000 UTC m=+1021.662702753" Jan 29 06:52:15 crc kubenswrapper[5017]: I0129 06:52:15.310782 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 29 06:52:15 crc kubenswrapper[5017]: I0129 06:52:15.596267 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-jm24m"] Jan 29 06:52:15 crc kubenswrapper[5017]: I0129 06:52:15.598564 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 29 06:52:15 crc kubenswrapper[5017]: I0129 06:52:15.659414 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-bnw77"] Jan 29 06:52:15 crc kubenswrapper[5017]: I0129 06:52:15.660523 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-bnw77" Jan 29 06:52:15 crc kubenswrapper[5017]: I0129 06:52:15.667261 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7878659675-sqsrk"] Jan 29 06:52:15 crc kubenswrapper[5017]: I0129 06:52:15.668524 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 29 06:52:15 crc kubenswrapper[5017]: I0129 06:52:15.669396 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7878659675-sqsrk" Jan 29 06:52:15 crc kubenswrapper[5017]: I0129 06:52:15.672402 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 29 06:52:15 crc kubenswrapper[5017]: I0129 06:52:15.673941 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 29 06:52:15 crc kubenswrapper[5017]: I0129 06:52:15.677005 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7878659675-sqsrk"] Jan 29 06:52:15 crc kubenswrapper[5017]: I0129 06:52:15.695072 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-bnw77"] Jan 29 06:52:15 crc kubenswrapper[5017]: I0129 06:52:15.774000 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/572c6985-85a2-4a6d-8581-75b8c6b87322-config\") pod \"ovn-controller-metrics-bnw77\" (UID: \"572c6985-85a2-4a6d-8581-75b8c6b87322\") " pod="openstack/ovn-controller-metrics-bnw77" Jan 29 06:52:15 crc kubenswrapper[5017]: I0129 06:52:15.774125 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0626d429-6c5e-4712-8e88-06f3efd0c2f4-ovsdbserver-nb\") pod \"dnsmasq-dns-7878659675-sqsrk\" (UID: \"0626d429-6c5e-4712-8e88-06f3efd0c2f4\") " pod="openstack/dnsmasq-dns-7878659675-sqsrk" Jan 29 06:52:15 crc kubenswrapper[5017]: I0129 06:52:15.774635 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0626d429-6c5e-4712-8e88-06f3efd0c2f4-dns-svc\") pod \"dnsmasq-dns-7878659675-sqsrk\" (UID: \"0626d429-6c5e-4712-8e88-06f3efd0c2f4\") " pod="openstack/dnsmasq-dns-7878659675-sqsrk" Jan 29 06:52:15 crc kubenswrapper[5017]: I0129 06:52:15.774849 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/572c6985-85a2-4a6d-8581-75b8c6b87322-ovn-rundir\") pod \"ovn-controller-metrics-bnw77\" (UID: \"572c6985-85a2-4a6d-8581-75b8c6b87322\") " pod="openstack/ovn-controller-metrics-bnw77" Jan 29 06:52:15 crc kubenswrapper[5017]: I0129 06:52:15.774929 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4d7hl\" (UniqueName: \"kubernetes.io/projected/572c6985-85a2-4a6d-8581-75b8c6b87322-kube-api-access-4d7hl\") pod \"ovn-controller-metrics-bnw77\" (UID: \"572c6985-85a2-4a6d-8581-75b8c6b87322\") " pod="openstack/ovn-controller-metrics-bnw77" Jan 29 06:52:15 crc kubenswrapper[5017]: I0129 06:52:15.775030 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/572c6985-85a2-4a6d-8581-75b8c6b87322-combined-ca-bundle\") pod 
\"ovn-controller-metrics-bnw77\" (UID: \"572c6985-85a2-4a6d-8581-75b8c6b87322\") " pod="openstack/ovn-controller-metrics-bnw77" Jan 29 06:52:15 crc kubenswrapper[5017]: I0129 06:52:15.780318 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/572c6985-85a2-4a6d-8581-75b8c6b87322-ovs-rundir\") pod \"ovn-controller-metrics-bnw77\" (UID: \"572c6985-85a2-4a6d-8581-75b8c6b87322\") " pod="openstack/ovn-controller-metrics-bnw77" Jan 29 06:52:15 crc kubenswrapper[5017]: I0129 06:52:15.780400 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77fws\" (UniqueName: \"kubernetes.io/projected/0626d429-6c5e-4712-8e88-06f3efd0c2f4-kube-api-access-77fws\") pod \"dnsmasq-dns-7878659675-sqsrk\" (UID: \"0626d429-6c5e-4712-8e88-06f3efd0c2f4\") " pod="openstack/dnsmasq-dns-7878659675-sqsrk" Jan 29 06:52:15 crc kubenswrapper[5017]: I0129 06:52:15.780434 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0626d429-6c5e-4712-8e88-06f3efd0c2f4-config\") pod \"dnsmasq-dns-7878659675-sqsrk\" (UID: \"0626d429-6c5e-4712-8e88-06f3efd0c2f4\") " pod="openstack/dnsmasq-dns-7878659675-sqsrk" Jan 29 06:52:15 crc kubenswrapper[5017]: I0129 06:52:15.780489 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/572c6985-85a2-4a6d-8581-75b8c6b87322-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-bnw77\" (UID: \"572c6985-85a2-4a6d-8581-75b8c6b87322\") " pod="openstack/ovn-controller-metrics-bnw77" Jan 29 06:52:15 crc kubenswrapper[5017]: I0129 06:52:15.857762 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-c7cbb8f79-khsnm"] Jan 29 06:52:15 crc kubenswrapper[5017]: I0129 06:52:15.888844 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/572c6985-85a2-4a6d-8581-75b8c6b87322-ovs-rundir\") pod \"ovn-controller-metrics-bnw77\" (UID: \"572c6985-85a2-4a6d-8581-75b8c6b87322\") " pod="openstack/ovn-controller-metrics-bnw77" Jan 29 06:52:15 crc kubenswrapper[5017]: I0129 06:52:15.888943 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77fws\" (UniqueName: \"kubernetes.io/projected/0626d429-6c5e-4712-8e88-06f3efd0c2f4-kube-api-access-77fws\") pod \"dnsmasq-dns-7878659675-sqsrk\" (UID: \"0626d429-6c5e-4712-8e88-06f3efd0c2f4\") " pod="openstack/dnsmasq-dns-7878659675-sqsrk" Jan 29 06:52:15 crc kubenswrapper[5017]: I0129 06:52:15.888985 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0626d429-6c5e-4712-8e88-06f3efd0c2f4-config\") pod \"dnsmasq-dns-7878659675-sqsrk\" (UID: \"0626d429-6c5e-4712-8e88-06f3efd0c2f4\") " pod="openstack/dnsmasq-dns-7878659675-sqsrk" Jan 29 06:52:15 crc kubenswrapper[5017]: I0129 06:52:15.889012 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/572c6985-85a2-4a6d-8581-75b8c6b87322-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-bnw77\" (UID: \"572c6985-85a2-4a6d-8581-75b8c6b87322\") " pod="openstack/ovn-controller-metrics-bnw77" Jan 29 06:52:15 crc kubenswrapper[5017]: I0129 06:52:15.889058 
5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/572c6985-85a2-4a6d-8581-75b8c6b87322-config\") pod \"ovn-controller-metrics-bnw77\" (UID: \"572c6985-85a2-4a6d-8581-75b8c6b87322\") " pod="openstack/ovn-controller-metrics-bnw77" Jan 29 06:52:15 crc kubenswrapper[5017]: I0129 06:52:15.889075 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0626d429-6c5e-4712-8e88-06f3efd0c2f4-ovsdbserver-nb\") pod \"dnsmasq-dns-7878659675-sqsrk\" (UID: \"0626d429-6c5e-4712-8e88-06f3efd0c2f4\") " pod="openstack/dnsmasq-dns-7878659675-sqsrk" Jan 29 06:52:15 crc kubenswrapper[5017]: I0129 06:52:15.889119 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0626d429-6c5e-4712-8e88-06f3efd0c2f4-dns-svc\") pod \"dnsmasq-dns-7878659675-sqsrk\" (UID: \"0626d429-6c5e-4712-8e88-06f3efd0c2f4\") " pod="openstack/dnsmasq-dns-7878659675-sqsrk" Jan 29 06:52:15 crc kubenswrapper[5017]: I0129 06:52:15.889135 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/572c6985-85a2-4a6d-8581-75b8c6b87322-ovn-rundir\") pod \"ovn-controller-metrics-bnw77\" (UID: \"572c6985-85a2-4a6d-8581-75b8c6b87322\") " pod="openstack/ovn-controller-metrics-bnw77" Jan 29 06:52:15 crc kubenswrapper[5017]: I0129 06:52:15.889157 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4d7hl\" (UniqueName: \"kubernetes.io/projected/572c6985-85a2-4a6d-8581-75b8c6b87322-kube-api-access-4d7hl\") pod \"ovn-controller-metrics-bnw77\" (UID: \"572c6985-85a2-4a6d-8581-75b8c6b87322\") " pod="openstack/ovn-controller-metrics-bnw77" Jan 29 06:52:15 crc kubenswrapper[5017]: I0129 06:52:15.889181 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/572c6985-85a2-4a6d-8581-75b8c6b87322-combined-ca-bundle\") pod \"ovn-controller-metrics-bnw77\" (UID: \"572c6985-85a2-4a6d-8581-75b8c6b87322\") " pod="openstack/ovn-controller-metrics-bnw77" Jan 29 06:52:15 crc kubenswrapper[5017]: I0129 06:52:15.891991 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/572c6985-85a2-4a6d-8581-75b8c6b87322-config\") pod \"ovn-controller-metrics-bnw77\" (UID: \"572c6985-85a2-4a6d-8581-75b8c6b87322\") " pod="openstack/ovn-controller-metrics-bnw77" Jan 29 06:52:15 crc kubenswrapper[5017]: I0129 06:52:15.892352 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/572c6985-85a2-4a6d-8581-75b8c6b87322-ovs-rundir\") pod \"ovn-controller-metrics-bnw77\" (UID: \"572c6985-85a2-4a6d-8581-75b8c6b87322\") " pod="openstack/ovn-controller-metrics-bnw77" Jan 29 06:52:15 crc kubenswrapper[5017]: I0129 06:52:15.893351 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0626d429-6c5e-4712-8e88-06f3efd0c2f4-config\") pod \"dnsmasq-dns-7878659675-sqsrk\" (UID: \"0626d429-6c5e-4712-8e88-06f3efd0c2f4\") " pod="openstack/dnsmasq-dns-7878659675-sqsrk" Jan 29 06:52:15 crc kubenswrapper[5017]: I0129 06:52:15.895760 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/0626d429-6c5e-4712-8e88-06f3efd0c2f4-dns-svc\") pod \"dnsmasq-dns-7878659675-sqsrk\" (UID: \"0626d429-6c5e-4712-8e88-06f3efd0c2f4\") " pod="openstack/dnsmasq-dns-7878659675-sqsrk" Jan 29 06:52:15 crc kubenswrapper[5017]: I0129 06:52:15.896082 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/572c6985-85a2-4a6d-8581-75b8c6b87322-ovn-rundir\") pod \"ovn-controller-metrics-bnw77\" (UID: \"572c6985-85a2-4a6d-8581-75b8c6b87322\") " pod="openstack/ovn-controller-metrics-bnw77" Jan 29 06:52:15 crc kubenswrapper[5017]: I0129 06:52:15.896412 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0626d429-6c5e-4712-8e88-06f3efd0c2f4-ovsdbserver-nb\") pod \"dnsmasq-dns-7878659675-sqsrk\" (UID: \"0626d429-6c5e-4712-8e88-06f3efd0c2f4\") " pod="openstack/dnsmasq-dns-7878659675-sqsrk" Jan 29 06:52:15 crc kubenswrapper[5017]: I0129 06:52:15.902667 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/572c6985-85a2-4a6d-8581-75b8c6b87322-combined-ca-bundle\") pod \"ovn-controller-metrics-bnw77\" (UID: \"572c6985-85a2-4a6d-8581-75b8c6b87322\") " pod="openstack/ovn-controller-metrics-bnw77" Jan 29 06:52:15 crc kubenswrapper[5017]: I0129 06:52:15.923567 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/572c6985-85a2-4a6d-8581-75b8c6b87322-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-bnw77\" (UID: \"572c6985-85a2-4a6d-8581-75b8c6b87322\") " pod="openstack/ovn-controller-metrics-bnw77" Jan 29 06:52:15 crc kubenswrapper[5017]: I0129 06:52:15.924105 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4d7hl\" (UniqueName: \"kubernetes.io/projected/572c6985-85a2-4a6d-8581-75b8c6b87322-kube-api-access-4d7hl\") pod \"ovn-controller-metrics-bnw77\" (UID: \"572c6985-85a2-4a6d-8581-75b8c6b87322\") " pod="openstack/ovn-controller-metrics-bnw77" Jan 29 06:52:15 crc kubenswrapper[5017]: I0129 06:52:15.932256 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77fws\" (UniqueName: \"kubernetes.io/projected/0626d429-6c5e-4712-8e88-06f3efd0c2f4-kube-api-access-77fws\") pod \"dnsmasq-dns-7878659675-sqsrk\" (UID: \"0626d429-6c5e-4712-8e88-06f3efd0c2f4\") " pod="openstack/dnsmasq-dns-7878659675-sqsrk" Jan 29 06:52:15 crc kubenswrapper[5017]: I0129 06:52:15.938478 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-586b989cdc-7t4wb"] Jan 29 06:52:15 crc kubenswrapper[5017]: I0129 06:52:15.940392 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-586b989cdc-7t4wb" Jan 29 06:52:15 crc kubenswrapper[5017]: I0129 06:52:15.949149 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 29 06:52:15 crc kubenswrapper[5017]: I0129 06:52:15.957235 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-586b989cdc-7t4wb"] Jan 29 06:52:15 crc kubenswrapper[5017]: I0129 06:52:15.987466 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-bnw77" Jan 29 06:52:15 crc kubenswrapper[5017]: I0129 06:52:15.995083 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/03be44d6-3f68-4a0b-9137-836e5545ae9f-ovsdbserver-sb\") pod \"dnsmasq-dns-586b989cdc-7t4wb\" (UID: \"03be44d6-3f68-4a0b-9137-836e5545ae9f\") " pod="openstack/dnsmasq-dns-586b989cdc-7t4wb" Jan 29 06:52:15 crc kubenswrapper[5017]: I0129 06:52:15.995139 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/03be44d6-3f68-4a0b-9137-836e5545ae9f-ovsdbserver-nb\") pod \"dnsmasq-dns-586b989cdc-7t4wb\" (UID: \"03be44d6-3f68-4a0b-9137-836e5545ae9f\") " pod="openstack/dnsmasq-dns-586b989cdc-7t4wb" Jan 29 06:52:15 crc kubenswrapper[5017]: I0129 06:52:15.995242 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/03be44d6-3f68-4a0b-9137-836e5545ae9f-dns-svc\") pod \"dnsmasq-dns-586b989cdc-7t4wb\" (UID: \"03be44d6-3f68-4a0b-9137-836e5545ae9f\") " pod="openstack/dnsmasq-dns-586b989cdc-7t4wb" Jan 29 06:52:15 crc kubenswrapper[5017]: I0129 06:52:15.995265 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03be44d6-3f68-4a0b-9137-836e5545ae9f-config\") pod \"dnsmasq-dns-586b989cdc-7t4wb\" (UID: \"03be44d6-3f68-4a0b-9137-836e5545ae9f\") " pod="openstack/dnsmasq-dns-586b989cdc-7t4wb" Jan 29 06:52:15 crc kubenswrapper[5017]: I0129 06:52:15.995347 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9wk7\" (UniqueName: \"kubernetes.io/projected/03be44d6-3f68-4a0b-9137-836e5545ae9f-kube-api-access-k9wk7\") pod \"dnsmasq-dns-586b989cdc-7t4wb\" (UID: \"03be44d6-3f68-4a0b-9137-836e5545ae9f\") " pod="openstack/dnsmasq-dns-586b989cdc-7t4wb" Jan 29 06:52:16 crc kubenswrapper[5017]: I0129 06:52:16.007224 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7878659675-sqsrk" Jan 29 06:52:16 crc kubenswrapper[5017]: I0129 06:52:16.117706 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9wk7\" (UniqueName: \"kubernetes.io/projected/03be44d6-3f68-4a0b-9137-836e5545ae9f-kube-api-access-k9wk7\") pod \"dnsmasq-dns-586b989cdc-7t4wb\" (UID: \"03be44d6-3f68-4a0b-9137-836e5545ae9f\") " pod="openstack/dnsmasq-dns-586b989cdc-7t4wb" Jan 29 06:52:16 crc kubenswrapper[5017]: I0129 06:52:16.122799 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/03be44d6-3f68-4a0b-9137-836e5545ae9f-ovsdbserver-sb\") pod \"dnsmasq-dns-586b989cdc-7t4wb\" (UID: \"03be44d6-3f68-4a0b-9137-836e5545ae9f\") " pod="openstack/dnsmasq-dns-586b989cdc-7t4wb" Jan 29 06:52:16 crc kubenswrapper[5017]: I0129 06:52:16.122895 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/03be44d6-3f68-4a0b-9137-836e5545ae9f-ovsdbserver-nb\") pod \"dnsmasq-dns-586b989cdc-7t4wb\" (UID: \"03be44d6-3f68-4a0b-9137-836e5545ae9f\") " pod="openstack/dnsmasq-dns-586b989cdc-7t4wb" Jan 29 06:52:16 crc kubenswrapper[5017]: I0129 06:52:16.123182 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/03be44d6-3f68-4a0b-9137-836e5545ae9f-dns-svc\") pod \"dnsmasq-dns-586b989cdc-7t4wb\" (UID: \"03be44d6-3f68-4a0b-9137-836e5545ae9f\") " pod="openstack/dnsmasq-dns-586b989cdc-7t4wb" Jan 29 06:52:16 crc kubenswrapper[5017]: I0129 06:52:16.123211 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03be44d6-3f68-4a0b-9137-836e5545ae9f-config\") pod \"dnsmasq-dns-586b989cdc-7t4wb\" (UID: \"03be44d6-3f68-4a0b-9137-836e5545ae9f\") " pod="openstack/dnsmasq-dns-586b989cdc-7t4wb" Jan 29 06:52:16 crc kubenswrapper[5017]: I0129 06:52:16.124318 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03be44d6-3f68-4a0b-9137-836e5545ae9f-config\") pod \"dnsmasq-dns-586b989cdc-7t4wb\" (UID: \"03be44d6-3f68-4a0b-9137-836e5545ae9f\") " pod="openstack/dnsmasq-dns-586b989cdc-7t4wb" Jan 29 06:52:16 crc kubenswrapper[5017]: I0129 06:52:16.124413 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/03be44d6-3f68-4a0b-9137-836e5545ae9f-ovsdbserver-nb\") pod \"dnsmasq-dns-586b989cdc-7t4wb\" (UID: \"03be44d6-3f68-4a0b-9137-836e5545ae9f\") " pod="openstack/dnsmasq-dns-586b989cdc-7t4wb" Jan 29 06:52:16 crc kubenswrapper[5017]: I0129 06:52:16.125110 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/03be44d6-3f68-4a0b-9137-836e5545ae9f-dns-svc\") pod \"dnsmasq-dns-586b989cdc-7t4wb\" (UID: \"03be44d6-3f68-4a0b-9137-836e5545ae9f\") " pod="openstack/dnsmasq-dns-586b989cdc-7t4wb" Jan 29 06:52:16 crc kubenswrapper[5017]: I0129 06:52:16.125050 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/03be44d6-3f68-4a0b-9137-836e5545ae9f-ovsdbserver-sb\") pod \"dnsmasq-dns-586b989cdc-7t4wb\" (UID: \"03be44d6-3f68-4a0b-9137-836e5545ae9f\") " pod="openstack/dnsmasq-dns-586b989cdc-7t4wb" Jan 29 06:52:16 crc kubenswrapper[5017]: I0129 
06:52:16.147977 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9wk7\" (UniqueName: \"kubernetes.io/projected/03be44d6-3f68-4a0b-9137-836e5545ae9f-kube-api-access-k9wk7\") pod \"dnsmasq-dns-586b989cdc-7t4wb\" (UID: \"03be44d6-3f68-4a0b-9137-836e5545ae9f\") " pod="openstack/dnsmasq-dns-586b989cdc-7t4wb" Jan 29 06:52:16 crc kubenswrapper[5017]: I0129 06:52:16.181895 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-95f5f6995-jm24m" Jan 29 06:52:16 crc kubenswrapper[5017]: I0129 06:52:16.224162 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c8001b26-5c08-42b9-9d59-f4422d318af8-dns-svc\") pod \"c8001b26-5c08-42b9-9d59-f4422d318af8\" (UID: \"c8001b26-5c08-42b9-9d59-f4422d318af8\") " Jan 29 06:52:16 crc kubenswrapper[5017]: I0129 06:52:16.224272 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4tssx\" (UniqueName: \"kubernetes.io/projected/c8001b26-5c08-42b9-9d59-f4422d318af8-kube-api-access-4tssx\") pod \"c8001b26-5c08-42b9-9d59-f4422d318af8\" (UID: \"c8001b26-5c08-42b9-9d59-f4422d318af8\") " Jan 29 06:52:16 crc kubenswrapper[5017]: I0129 06:52:16.224345 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8001b26-5c08-42b9-9d59-f4422d318af8-config\") pod \"c8001b26-5c08-42b9-9d59-f4422d318af8\" (UID: \"c8001b26-5c08-42b9-9d59-f4422d318af8\") " Jan 29 06:52:16 crc kubenswrapper[5017]: I0129 06:52:16.225113 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8001b26-5c08-42b9-9d59-f4422d318af8-config" (OuterVolumeSpecName: "config") pod "c8001b26-5c08-42b9-9d59-f4422d318af8" (UID: "c8001b26-5c08-42b9-9d59-f4422d318af8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:52:16 crc kubenswrapper[5017]: I0129 06:52:16.225490 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8001b26-5c08-42b9-9d59-f4422d318af8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c8001b26-5c08-42b9-9d59-f4422d318af8" (UID: "c8001b26-5c08-42b9-9d59-f4422d318af8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:52:16 crc kubenswrapper[5017]: I0129 06:52:16.228824 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8001b26-5c08-42b9-9d59-f4422d318af8-kube-api-access-4tssx" (OuterVolumeSpecName: "kube-api-access-4tssx") pod "c8001b26-5c08-42b9-9d59-f4422d318af8" (UID: "c8001b26-5c08-42b9-9d59-f4422d318af8"). InnerVolumeSpecName "kube-api-access-4tssx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:52:16 crc kubenswrapper[5017]: I0129 06:52:16.272159 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95f5f6995-jm24m" event={"ID":"c8001b26-5c08-42b9-9d59-f4422d318af8","Type":"ContainerDied","Data":"0477d5bec0de16d99bf717aa806141f3e7191eea49afd94d3bd158b6c1565ce5"} Jan 29 06:52:16 crc kubenswrapper[5017]: I0129 06:52:16.272240 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-95f5f6995-jm24m" Jan 29 06:52:16 crc kubenswrapper[5017]: I0129 06:52:16.272507 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 29 06:52:16 crc kubenswrapper[5017]: I0129 06:52:16.307634 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-586b989cdc-7t4wb" Jan 29 06:52:16 crc kubenswrapper[5017]: I0129 06:52:16.326169 5017 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8001b26-5c08-42b9-9d59-f4422d318af8-config\") on node \"crc\" DevicePath \"\"" Jan 29 06:52:16 crc kubenswrapper[5017]: I0129 06:52:16.326202 5017 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c8001b26-5c08-42b9-9d59-f4422d318af8-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 06:52:16 crc kubenswrapper[5017]: I0129 06:52:16.326215 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4tssx\" (UniqueName: \"kubernetes.io/projected/c8001b26-5c08-42b9-9d59-f4422d318af8-kube-api-access-4tssx\") on node \"crc\" DevicePath \"\"" Jan 29 06:52:16 crc kubenswrapper[5017]: I0129 06:52:16.354694 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-jm24m"] Jan 29 06:52:16 crc kubenswrapper[5017]: I0129 06:52:16.359729 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-jm24m"] Jan 29 06:52:16 crc kubenswrapper[5017]: I0129 06:52:16.360153 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 29 06:52:16 crc kubenswrapper[5017]: I0129 06:52:16.362842 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-c7cbb8f79-khsnm" Jan 29 06:52:16 crc kubenswrapper[5017]: I0129 06:52:16.529123 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f8a4d22-67c6-4e51-9adb-6c96c07e01bb-config\") pod \"8f8a4d22-67c6-4e51-9adb-6c96c07e01bb\" (UID: \"8f8a4d22-67c6-4e51-9adb-6c96c07e01bb\") " Jan 29 06:52:16 crc kubenswrapper[5017]: I0129 06:52:16.529248 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8f8a4d22-67c6-4e51-9adb-6c96c07e01bb-dns-svc\") pod \"8f8a4d22-67c6-4e51-9adb-6c96c07e01bb\" (UID: \"8f8a4d22-67c6-4e51-9adb-6c96c07e01bb\") " Jan 29 06:52:16 crc kubenswrapper[5017]: I0129 06:52:16.529367 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9df4t\" (UniqueName: \"kubernetes.io/projected/8f8a4d22-67c6-4e51-9adb-6c96c07e01bb-kube-api-access-9df4t\") pod \"8f8a4d22-67c6-4e51-9adb-6c96c07e01bb\" (UID: \"8f8a4d22-67c6-4e51-9adb-6c96c07e01bb\") " Jan 29 06:52:16 crc kubenswrapper[5017]: I0129 06:52:16.529749 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f8a4d22-67c6-4e51-9adb-6c96c07e01bb-config" (OuterVolumeSpecName: "config") pod "8f8a4d22-67c6-4e51-9adb-6c96c07e01bb" (UID: "8f8a4d22-67c6-4e51-9adb-6c96c07e01bb"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:52:16 crc kubenswrapper[5017]: I0129 06:52:16.530293 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f8a4d22-67c6-4e51-9adb-6c96c07e01bb-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8f8a4d22-67c6-4e51-9adb-6c96c07e01bb" (UID: "8f8a4d22-67c6-4e51-9adb-6c96c07e01bb"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:52:16 crc kubenswrapper[5017]: I0129 06:52:16.535215 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f8a4d22-67c6-4e51-9adb-6c96c07e01bb-kube-api-access-9df4t" (OuterVolumeSpecName: "kube-api-access-9df4t") pod "8f8a4d22-67c6-4e51-9adb-6c96c07e01bb" (UID: "8f8a4d22-67c6-4e51-9adb-6c96c07e01bb"). InnerVolumeSpecName "kube-api-access-9df4t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:52:16 crc kubenswrapper[5017]: I0129 06:52:16.545033 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-bnw77"] Jan 29 06:52:16 crc kubenswrapper[5017]: I0129 06:52:16.602205 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 29 06:52:16 crc kubenswrapper[5017]: I0129 06:52:16.603976 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 29 06:52:16 crc kubenswrapper[5017]: I0129 06:52:16.607522 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-mmqhr" Jan 29 06:52:16 crc kubenswrapper[5017]: I0129 06:52:16.607777 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 29 06:52:16 crc kubenswrapper[5017]: I0129 06:52:16.607915 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 29 06:52:16 crc kubenswrapper[5017]: I0129 06:52:16.608048 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 29 06:52:16 crc kubenswrapper[5017]: I0129 06:52:16.626934 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 29 06:52:16 crc kubenswrapper[5017]: I0129 06:52:16.631950 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9df4t\" (UniqueName: \"kubernetes.io/projected/8f8a4d22-67c6-4e51-9adb-6c96c07e01bb-kube-api-access-9df4t\") on node \"crc\" DevicePath \"\"" Jan 29 06:52:16 crc kubenswrapper[5017]: I0129 06:52:16.632015 5017 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f8a4d22-67c6-4e51-9adb-6c96c07e01bb-config\") on node \"crc\" DevicePath \"\"" Jan 29 06:52:16 crc kubenswrapper[5017]: I0129 06:52:16.632033 5017 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8f8a4d22-67c6-4e51-9adb-6c96c07e01bb-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 06:52:16 crc kubenswrapper[5017]: I0129 06:52:16.660739 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7878659675-sqsrk"] Jan 29 06:52:16 crc kubenswrapper[5017]: W0129 06:52:16.662769 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0626d429_6c5e_4712_8e88_06f3efd0c2f4.slice/crio-06183f8bc6589f27f600e7415b4de2fc7c3619200563af7582877550f44b9ed1 WatchSource:0}: Error finding container 
06183f8bc6589f27f600e7415b4de2fc7c3619200563af7582877550f44b9ed1: Status 404 returned error can't find the container with id 06183f8bc6589f27f600e7415b4de2fc7c3619200563af7582877550f44b9ed1 Jan 29 06:52:16 crc kubenswrapper[5017]: I0129 06:52:16.734499 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/02965a93-9a4c-4118-a030-0271f53a61a1-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"02965a93-9a4c-4118-a030-0271f53a61a1\") " pod="openstack/ovn-northd-0" Jan 29 06:52:16 crc kubenswrapper[5017]: I0129 06:52:16.735809 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/02965a93-9a4c-4118-a030-0271f53a61a1-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"02965a93-9a4c-4118-a030-0271f53a61a1\") " pod="openstack/ovn-northd-0" Jan 29 06:52:16 crc kubenswrapper[5017]: I0129 06:52:16.736005 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02965a93-9a4c-4118-a030-0271f53a61a1-config\") pod \"ovn-northd-0\" (UID: \"02965a93-9a4c-4118-a030-0271f53a61a1\") " pod="openstack/ovn-northd-0" Jan 29 06:52:16 crc kubenswrapper[5017]: I0129 06:52:16.736036 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5qh2\" (UniqueName: \"kubernetes.io/projected/02965a93-9a4c-4118-a030-0271f53a61a1-kube-api-access-k5qh2\") pod \"ovn-northd-0\" (UID: \"02965a93-9a4c-4118-a030-0271f53a61a1\") " pod="openstack/ovn-northd-0" Jan 29 06:52:16 crc kubenswrapper[5017]: I0129 06:52:16.736082 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/02965a93-9a4c-4118-a030-0271f53a61a1-scripts\") pod \"ovn-northd-0\" (UID: \"02965a93-9a4c-4118-a030-0271f53a61a1\") " pod="openstack/ovn-northd-0" Jan 29 06:52:16 crc kubenswrapper[5017]: I0129 06:52:16.736132 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02965a93-9a4c-4118-a030-0271f53a61a1-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"02965a93-9a4c-4118-a030-0271f53a61a1\") " pod="openstack/ovn-northd-0" Jan 29 06:52:16 crc kubenswrapper[5017]: I0129 06:52:16.736252 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/02965a93-9a4c-4118-a030-0271f53a61a1-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"02965a93-9a4c-4118-a030-0271f53a61a1\") " pod="openstack/ovn-northd-0" Jan 29 06:52:16 crc kubenswrapper[5017]: I0129 06:52:16.837559 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/02965a93-9a4c-4118-a030-0271f53a61a1-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"02965a93-9a4c-4118-a030-0271f53a61a1\") " pod="openstack/ovn-northd-0" Jan 29 06:52:16 crc kubenswrapper[5017]: I0129 06:52:16.837860 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/02965a93-9a4c-4118-a030-0271f53a61a1-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"02965a93-9a4c-4118-a030-0271f53a61a1\") " pod="openstack/ovn-northd-0" Jan 29 
06:52:16 crc kubenswrapper[5017]: I0129 06:52:16.838055 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02965a93-9a4c-4118-a030-0271f53a61a1-config\") pod \"ovn-northd-0\" (UID: \"02965a93-9a4c-4118-a030-0271f53a61a1\") " pod="openstack/ovn-northd-0" Jan 29 06:52:16 crc kubenswrapper[5017]: I0129 06:52:16.838131 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5qh2\" (UniqueName: \"kubernetes.io/projected/02965a93-9a4c-4118-a030-0271f53a61a1-kube-api-access-k5qh2\") pod \"ovn-northd-0\" (UID: \"02965a93-9a4c-4118-a030-0271f53a61a1\") " pod="openstack/ovn-northd-0" Jan 29 06:52:16 crc kubenswrapper[5017]: I0129 06:52:16.838231 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/02965a93-9a4c-4118-a030-0271f53a61a1-scripts\") pod \"ovn-northd-0\" (UID: \"02965a93-9a4c-4118-a030-0271f53a61a1\") " pod="openstack/ovn-northd-0" Jan 29 06:52:16 crc kubenswrapper[5017]: I0129 06:52:16.838310 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02965a93-9a4c-4118-a030-0271f53a61a1-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"02965a93-9a4c-4118-a030-0271f53a61a1\") " pod="openstack/ovn-northd-0" Jan 29 06:52:16 crc kubenswrapper[5017]: I0129 06:52:16.838418 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/02965a93-9a4c-4118-a030-0271f53a61a1-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"02965a93-9a4c-4118-a030-0271f53a61a1\") " pod="openstack/ovn-northd-0" Jan 29 06:52:16 crc kubenswrapper[5017]: I0129 06:52:16.838168 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/02965a93-9a4c-4118-a030-0271f53a61a1-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"02965a93-9a4c-4118-a030-0271f53a61a1\") " pod="openstack/ovn-northd-0" Jan 29 06:52:16 crc kubenswrapper[5017]: I0129 06:52:16.839187 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02965a93-9a4c-4118-a030-0271f53a61a1-config\") pod \"ovn-northd-0\" (UID: \"02965a93-9a4c-4118-a030-0271f53a61a1\") " pod="openstack/ovn-northd-0" Jan 29 06:52:16 crc kubenswrapper[5017]: I0129 06:52:16.840122 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/02965a93-9a4c-4118-a030-0271f53a61a1-scripts\") pod \"ovn-northd-0\" (UID: \"02965a93-9a4c-4118-a030-0271f53a61a1\") " pod="openstack/ovn-northd-0" Jan 29 06:52:16 crc kubenswrapper[5017]: I0129 06:52:16.846709 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02965a93-9a4c-4118-a030-0271f53a61a1-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"02965a93-9a4c-4118-a030-0271f53a61a1\") " pod="openstack/ovn-northd-0" Jan 29 06:52:16 crc kubenswrapper[5017]: I0129 06:52:16.847046 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/02965a93-9a4c-4118-a030-0271f53a61a1-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"02965a93-9a4c-4118-a030-0271f53a61a1\") " pod="openstack/ovn-northd-0" Jan 29 06:52:16 crc kubenswrapper[5017]: 
I0129 06:52:16.847125 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/02965a93-9a4c-4118-a030-0271f53a61a1-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"02965a93-9a4c-4118-a030-0271f53a61a1\") " pod="openstack/ovn-northd-0" Jan 29 06:52:16 crc kubenswrapper[5017]: I0129 06:52:16.856702 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5qh2\" (UniqueName: \"kubernetes.io/projected/02965a93-9a4c-4118-a030-0271f53a61a1-kube-api-access-k5qh2\") pod \"ovn-northd-0\" (UID: \"02965a93-9a4c-4118-a030-0271f53a61a1\") " pod="openstack/ovn-northd-0" Jan 29 06:52:16 crc kubenswrapper[5017]: I0129 06:52:16.933731 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-586b989cdc-7t4wb"] Jan 29 06:52:16 crc kubenswrapper[5017]: I0129 06:52:16.933819 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 29 06:52:17 crc kubenswrapper[5017]: I0129 06:52:17.259901 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 29 06:52:17 crc kubenswrapper[5017]: W0129 06:52:17.276304 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod02965a93_9a4c_4118_a030_0271f53a61a1.slice/crio-533ac51d2177f27c0e78ae10a85fb56411098e1ca3096e2098cdf5b6a1f1ab71 WatchSource:0}: Error finding container 533ac51d2177f27c0e78ae10a85fb56411098e1ca3096e2098cdf5b6a1f1ab71: Status 404 returned error can't find the container with id 533ac51d2177f27c0e78ae10a85fb56411098e1ca3096e2098cdf5b6a1f1ab71 Jan 29 06:52:17 crc kubenswrapper[5017]: I0129 06:52:17.282030 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7878659675-sqsrk" event={"ID":"0626d429-6c5e-4712-8e88-06f3efd0c2f4","Type":"ContainerStarted","Data":"06183f8bc6589f27f600e7415b4de2fc7c3619200563af7582877550f44b9ed1"} Jan 29 06:52:17 crc kubenswrapper[5017]: I0129 06:52:17.284112 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586b989cdc-7t4wb" event={"ID":"03be44d6-3f68-4a0b-9137-836e5545ae9f","Type":"ContainerStarted","Data":"492ce0bffd2b4b3af98457a58850a52f46bf706d703c4f9e36ac0caee70cc5f4"} Jan 29 06:52:17 crc kubenswrapper[5017]: I0129 06:52:17.286155 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-bnw77" event={"ID":"572c6985-85a2-4a6d-8581-75b8c6b87322","Type":"ContainerStarted","Data":"5d0c3be2b0978eb8f6117644b27656fb862fbe247178603731b87946cc5367a6"} Jan 29 06:52:17 crc kubenswrapper[5017]: I0129 06:52:17.286208 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-bnw77" event={"ID":"572c6985-85a2-4a6d-8581-75b8c6b87322","Type":"ContainerStarted","Data":"6d0b0faeb9116bb5d02f252746b7a3998741e5b939a484e8c9e14bcb4d4f7969"} Jan 29 06:52:17 crc kubenswrapper[5017]: I0129 06:52:17.288250 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c7cbb8f79-khsnm" event={"ID":"8f8a4d22-67c6-4e51-9adb-6c96c07e01bb","Type":"ContainerDied","Data":"1904b3717edc8623392cfc7d026f50dceac73a77a3544a3425eb7bc8ae6160a9"} Jan 29 06:52:17 crc kubenswrapper[5017]: I0129 06:52:17.288344 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-c7cbb8f79-khsnm" Jan 29 06:52:17 crc kubenswrapper[5017]: I0129 06:52:17.307769 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-bnw77" podStartSLOduration=2.30774821 podStartE2EDuration="2.30774821s" podCreationTimestamp="2026-01-29 06:52:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:52:17.306452428 +0000 UTC m=+1023.680900038" watchObservedRunningTime="2026-01-29 06:52:17.30774821 +0000 UTC m=+1023.682195820" Jan 29 06:52:17 crc kubenswrapper[5017]: I0129 06:52:17.433297 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-c7cbb8f79-khsnm"] Jan 29 06:52:17 crc kubenswrapper[5017]: I0129 06:52:17.442365 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-c7cbb8f79-khsnm"] Jan 29 06:52:18 crc kubenswrapper[5017]: I0129 06:52:18.301355 5017 generic.go:334] "Generic (PLEG): container finished" podID="0626d429-6c5e-4712-8e88-06f3efd0c2f4" containerID="c95b489cda784c7b9116007b29f75650f045ed5fb8b8c6e7dc20adfba630b94d" exitCode=0 Jan 29 06:52:18 crc kubenswrapper[5017]: I0129 06:52:18.301525 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7878659675-sqsrk" event={"ID":"0626d429-6c5e-4712-8e88-06f3efd0c2f4","Type":"ContainerDied","Data":"c95b489cda784c7b9116007b29f75650f045ed5fb8b8c6e7dc20adfba630b94d"} Jan 29 06:52:18 crc kubenswrapper[5017]: I0129 06:52:18.303651 5017 generic.go:334] "Generic (PLEG): container finished" podID="03be44d6-3f68-4a0b-9137-836e5545ae9f" containerID="ef002f1724c6d73e833aef38af5e6388e907913bb0a89c761c26353d5577253e" exitCode=0 Jan 29 06:52:18 crc kubenswrapper[5017]: I0129 06:52:18.303731 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586b989cdc-7t4wb" event={"ID":"03be44d6-3f68-4a0b-9137-836e5545ae9f","Type":"ContainerDied","Data":"ef002f1724c6d73e833aef38af5e6388e907913bb0a89c761c26353d5577253e"} Jan 29 06:52:18 crc kubenswrapper[5017]: I0129 06:52:18.306210 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"02965a93-9a4c-4118-a030-0271f53a61a1","Type":"ContainerStarted","Data":"533ac51d2177f27c0e78ae10a85fb56411098e1ca3096e2098cdf5b6a1f1ab71"} Jan 29 06:52:18 crc kubenswrapper[5017]: I0129 06:52:18.350690 5017 generic.go:334] "Generic (PLEG): container finished" podID="ec5c09bc-f98c-4587-b4e3-ec9269c04a71" containerID="541eef9ed8a601fb50010147d0b92594e48943cf7dfddc1493041868a74ebb85" exitCode=0 Jan 29 06:52:18 crc kubenswrapper[5017]: I0129 06:52:18.355297 5017 generic.go:334] "Generic (PLEG): container finished" podID="9af88cca-e43b-483d-beae-d6a56940aff7" containerID="3bbbc1fc8a72c66dc23676db10ea62a991b138c8e95fdb7d35a472153c5b43f7" exitCode=0 Jan 29 06:52:18 crc kubenswrapper[5017]: I0129 06:52:18.375394 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f8a4d22-67c6-4e51-9adb-6c96c07e01bb" path="/var/lib/kubelet/pods/8f8a4d22-67c6-4e51-9adb-6c96c07e01bb/volumes" Jan 29 06:52:18 crc kubenswrapper[5017]: I0129 06:52:18.375991 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8001b26-5c08-42b9-9d59-f4422d318af8" path="/var/lib/kubelet/pods/c8001b26-5c08-42b9-9d59-f4422d318af8/volumes" Jan 29 06:52:18 crc kubenswrapper[5017]: I0129 06:52:18.376458 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/openstack-cell1-galera-0" event={"ID":"ec5c09bc-f98c-4587-b4e3-ec9269c04a71","Type":"ContainerDied","Data":"541eef9ed8a601fb50010147d0b92594e48943cf7dfddc1493041868a74ebb85"} Jan 29 06:52:18 crc kubenswrapper[5017]: I0129 06:52:18.376505 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"9af88cca-e43b-483d-beae-d6a56940aff7","Type":"ContainerDied","Data":"3bbbc1fc8a72c66dc23676db10ea62a991b138c8e95fdb7d35a472153c5b43f7"} Jan 29 06:52:19 crc kubenswrapper[5017]: I0129 06:52:19.368316 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"9af88cca-e43b-483d-beae-d6a56940aff7","Type":"ContainerStarted","Data":"ec537ea4d90113835bbfd7b41bd980ad15b540d9718e92b22b059584ee668478"} Jan 29 06:52:19 crc kubenswrapper[5017]: I0129 06:52:19.370742 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7878659675-sqsrk" event={"ID":"0626d429-6c5e-4712-8e88-06f3efd0c2f4","Type":"ContainerStarted","Data":"4e31cb3bae36a0382d7a0f2782bd1833cdae42e171b54727d16796a62fb77195"} Jan 29 06:52:19 crc kubenswrapper[5017]: I0129 06:52:19.371208 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7878659675-sqsrk" Jan 29 06:52:19 crc kubenswrapper[5017]: I0129 06:52:19.373396 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586b989cdc-7t4wb" event={"ID":"03be44d6-3f68-4a0b-9137-836e5545ae9f","Type":"ContainerStarted","Data":"88eef209623272dbe38674d64997ef421c0a3e91c4ad990fb61873b09e46a7d7"} Jan 29 06:52:19 crc kubenswrapper[5017]: I0129 06:52:19.373730 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-586b989cdc-7t4wb" Jan 29 06:52:19 crc kubenswrapper[5017]: I0129 06:52:19.376328 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"02965a93-9a4c-4118-a030-0271f53a61a1","Type":"ContainerStarted","Data":"2d5d0c8760d913b9ab3eeaa636e31dd474ed2dad3b92862aa8e197299b972bee"} Jan 29 06:52:19 crc kubenswrapper[5017]: I0129 06:52:19.376376 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"02965a93-9a4c-4118-a030-0271f53a61a1","Type":"ContainerStarted","Data":"5856b1139cb9ac964f5c7a59c84bd77817a7664b0ef60849ed25c5857c226d37"} Jan 29 06:52:19 crc kubenswrapper[5017]: I0129 06:52:19.376454 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 29 06:52:19 crc kubenswrapper[5017]: I0129 06:52:19.381018 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"ec5c09bc-f98c-4587-b4e3-ec9269c04a71","Type":"ContainerStarted","Data":"0762eb515121f428ad670c99dbbc9df148f038b1dfcf835e5b82100fdb4c0a75"} Jan 29 06:52:19 crc kubenswrapper[5017]: I0129 06:52:19.402639 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=8.792862031 podStartE2EDuration="44.402619305s" podCreationTimestamp="2026-01-29 06:51:35 +0000 UTC" firstStartedPulling="2026-01-29 06:51:37.715485966 +0000 UTC m=+984.089933576" lastFinishedPulling="2026-01-29 06:52:13.32524321 +0000 UTC m=+1019.699690850" observedRunningTime="2026-01-29 06:52:19.398015052 +0000 UTC m=+1025.772462692" watchObservedRunningTime="2026-01-29 06:52:19.402619305 +0000 UTC m=+1025.777066915" Jan 29 06:52:19 crc kubenswrapper[5017]: I0129 06:52:19.426308 5017 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7878659675-sqsrk" podStartSLOduration=3.956742872 podStartE2EDuration="4.426281235s" podCreationTimestamp="2026-01-29 06:52:15 +0000 UTC" firstStartedPulling="2026-01-29 06:52:16.665203025 +0000 UTC m=+1023.039650635" lastFinishedPulling="2026-01-29 06:52:17.134741388 +0000 UTC m=+1023.509188998" observedRunningTime="2026-01-29 06:52:19.421194601 +0000 UTC m=+1025.795642291" watchObservedRunningTime="2026-01-29 06:52:19.426281235 +0000 UTC m=+1025.800728885" Jan 29 06:52:19 crc kubenswrapper[5017]: I0129 06:52:19.449506 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-586b989cdc-7t4wb" podStartSLOduration=4.023305626 podStartE2EDuration="4.449476785s" podCreationTimestamp="2026-01-29 06:52:15 +0000 UTC" firstStartedPulling="2026-01-29 06:52:16.916459166 +0000 UTC m=+1023.290906776" lastFinishedPulling="2026-01-29 06:52:17.342630325 +0000 UTC m=+1023.717077935" observedRunningTime="2026-01-29 06:52:19.446289146 +0000 UTC m=+1025.820736786" watchObservedRunningTime="2026-01-29 06:52:19.449476785 +0000 UTC m=+1025.823924435" Jan 29 06:52:19 crc kubenswrapper[5017]: I0129 06:52:19.471044 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=-9223371993.383768 podStartE2EDuration="43.471008282s" podCreationTimestamp="2026-01-29 06:51:36 +0000 UTC" firstStartedPulling="2026-01-29 06:51:38.929675017 +0000 UTC m=+985.304122627" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:52:19.465026676 +0000 UTC m=+1025.839474276" watchObservedRunningTime="2026-01-29 06:52:19.471008282 +0000 UTC m=+1025.845455922" Jan 29 06:52:19 crc kubenswrapper[5017]: I0129 06:52:19.498144 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.187865 podStartE2EDuration="3.498118437s" podCreationTimestamp="2026-01-29 06:52:16 +0000 UTC" firstStartedPulling="2026-01-29 06:52:17.279440886 +0000 UTC m=+1023.653888496" lastFinishedPulling="2026-01-29 06:52:18.589694313 +0000 UTC m=+1024.964141933" observedRunningTime="2026-01-29 06:52:19.491598667 +0000 UTC m=+1025.866046297" watchObservedRunningTime="2026-01-29 06:52:19.498118437 +0000 UTC m=+1025.872566047" Jan 29 06:52:19 crc kubenswrapper[5017]: E0129 06:52:19.550167 5017 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.154:35120->38.102.83.154:45933: write tcp 38.102.83.154:35120->38.102.83.154:45933: write: broken pipe Jan 29 06:52:20 crc kubenswrapper[5017]: I0129 06:52:20.500654 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 29 06:52:23 crc kubenswrapper[5017]: I0129 06:52:23.704711 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 29 06:52:26 crc kubenswrapper[5017]: I0129 06:52:26.009221 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7878659675-sqsrk" Jan 29 06:52:26 crc kubenswrapper[5017]: I0129 06:52:26.310143 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-586b989cdc-7t4wb" Jan 29 06:52:26 crc kubenswrapper[5017]: I0129 06:52:26.366940 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7878659675-sqsrk"] Jan 29 06:52:26 crc 
kubenswrapper[5017]: I0129 06:52:26.454619 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7878659675-sqsrk" podUID="0626d429-6c5e-4712-8e88-06f3efd0c2f4" containerName="dnsmasq-dns" containerID="cri-o://4e31cb3bae36a0382d7a0f2782bd1833cdae42e171b54727d16796a62fb77195" gracePeriod=10 Jan 29 06:52:26 crc kubenswrapper[5017]: I0129 06:52:26.883373 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7878659675-sqsrk" Jan 29 06:52:27 crc kubenswrapper[5017]: I0129 06:52:27.024890 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 29 06:52:27 crc kubenswrapper[5017]: I0129 06:52:27.024972 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 29 06:52:27 crc kubenswrapper[5017]: I0129 06:52:27.057108 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0626d429-6c5e-4712-8e88-06f3efd0c2f4-dns-svc\") pod \"0626d429-6c5e-4712-8e88-06f3efd0c2f4\" (UID: \"0626d429-6c5e-4712-8e88-06f3efd0c2f4\") " Jan 29 06:52:27 crc kubenswrapper[5017]: I0129 06:52:27.057178 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0626d429-6c5e-4712-8e88-06f3efd0c2f4-ovsdbserver-nb\") pod \"0626d429-6c5e-4712-8e88-06f3efd0c2f4\" (UID: \"0626d429-6c5e-4712-8e88-06f3efd0c2f4\") " Jan 29 06:52:27 crc kubenswrapper[5017]: I0129 06:52:27.057301 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0626d429-6c5e-4712-8e88-06f3efd0c2f4-config\") pod \"0626d429-6c5e-4712-8e88-06f3efd0c2f4\" (UID: \"0626d429-6c5e-4712-8e88-06f3efd0c2f4\") " Jan 29 06:52:27 crc kubenswrapper[5017]: I0129 06:52:27.057429 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-77fws\" (UniqueName: \"kubernetes.io/projected/0626d429-6c5e-4712-8e88-06f3efd0c2f4-kube-api-access-77fws\") pod \"0626d429-6c5e-4712-8e88-06f3efd0c2f4\" (UID: \"0626d429-6c5e-4712-8e88-06f3efd0c2f4\") " Jan 29 06:52:27 crc kubenswrapper[5017]: I0129 06:52:27.065192 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0626d429-6c5e-4712-8e88-06f3efd0c2f4-kube-api-access-77fws" (OuterVolumeSpecName: "kube-api-access-77fws") pod "0626d429-6c5e-4712-8e88-06f3efd0c2f4" (UID: "0626d429-6c5e-4712-8e88-06f3efd0c2f4"). InnerVolumeSpecName "kube-api-access-77fws". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:52:27 crc kubenswrapper[5017]: I0129 06:52:27.115806 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0626d429-6c5e-4712-8e88-06f3efd0c2f4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0626d429-6c5e-4712-8e88-06f3efd0c2f4" (UID: "0626d429-6c5e-4712-8e88-06f3efd0c2f4"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:52:27 crc kubenswrapper[5017]: I0129 06:52:27.118659 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0626d429-6c5e-4712-8e88-06f3efd0c2f4-config" (OuterVolumeSpecName: "config") pod "0626d429-6c5e-4712-8e88-06f3efd0c2f4" (UID: "0626d429-6c5e-4712-8e88-06f3efd0c2f4"). InnerVolumeSpecName "config". 
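
Note on the pod_startup_latency_tracker entries earlier in this window (06:52:19): podStartSLOduration is the end-to-end startup time minus the image-pull interval, and the monotonic m=+... offsets reproduce the logged values exactly. For openstack/openstack-galera-0: pull = 1019.699690850 - 984.089933576 = 35.609757274s, and 44.402619305 - 35.609757274 = 8.792862031s, the logged SLO duration. For openstack/openstack-cell1-galera-0 the pull never finished, so lastFinishedPulling is the zero time ("0001-01-01"); subtracting a year-2026 timestamp from it saturates Go's time.Duration at its minimum, and subtracting that minimum in turn wraps int64. That inferred mechanism reproduces the logged podStartSLOduration=-9223371993.383768 to the last digit:

    package main

    import (
        "fmt"
        "math"
        "time"
    )

    func main() {
        e2e := 43471008282 * time.Nanosecond // 43.471008282s, the logged E2EDuration
        pull := time.Duration(math.MinInt64) // time.Time.Sub saturated against the zero time
        fmt.Printf("%.6f\n", float64(e2e-pull)/1e9) // int64 wrap-around: -9223371993.383768
    }

So the absurd negative duration is an arithmetic artifact of a pull that was never observed to finish, not a real regression.
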
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:52:27 crc kubenswrapper[5017]: I0129 06:52:27.122754 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0626d429-6c5e-4712-8e88-06f3efd0c2f4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0626d429-6c5e-4712-8e88-06f3efd0c2f4" (UID: "0626d429-6c5e-4712-8e88-06f3efd0c2f4"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:52:27 crc kubenswrapper[5017]: I0129 06:52:27.133977 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 29 06:52:27 crc kubenswrapper[5017]: I0129 06:52:27.160073 5017 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0626d429-6c5e-4712-8e88-06f3efd0c2f4-config\") on node \"crc\" DevicePath \"\"" Jan 29 06:52:27 crc kubenswrapper[5017]: I0129 06:52:27.160126 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-77fws\" (UniqueName: \"kubernetes.io/projected/0626d429-6c5e-4712-8e88-06f3efd0c2f4-kube-api-access-77fws\") on node \"crc\" DevicePath \"\"" Jan 29 06:52:27 crc kubenswrapper[5017]: I0129 06:52:27.160140 5017 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0626d429-6c5e-4712-8e88-06f3efd0c2f4-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 06:52:27 crc kubenswrapper[5017]: I0129 06:52:27.160152 5017 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0626d429-6c5e-4712-8e88-06f3efd0c2f4-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 06:52:27 crc kubenswrapper[5017]: I0129 06:52:27.465154 5017 generic.go:334] "Generic (PLEG): container finished" podID="0626d429-6c5e-4712-8e88-06f3efd0c2f4" containerID="4e31cb3bae36a0382d7a0f2782bd1833cdae42e171b54727d16796a62fb77195" exitCode=0 Jan 29 06:52:27 crc kubenswrapper[5017]: I0129 06:52:27.465389 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7878659675-sqsrk" Jan 29 06:52:27 crc kubenswrapper[5017]: I0129 06:52:27.465379 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7878659675-sqsrk" event={"ID":"0626d429-6c5e-4712-8e88-06f3efd0c2f4","Type":"ContainerDied","Data":"4e31cb3bae36a0382d7a0f2782bd1833cdae42e171b54727d16796a62fb77195"} Jan 29 06:52:27 crc kubenswrapper[5017]: I0129 06:52:27.465492 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7878659675-sqsrk" event={"ID":"0626d429-6c5e-4712-8e88-06f3efd0c2f4","Type":"ContainerDied","Data":"06183f8bc6589f27f600e7415b4de2fc7c3619200563af7582877550f44b9ed1"} Jan 29 06:52:27 crc kubenswrapper[5017]: I0129 06:52:27.465522 5017 scope.go:117] "RemoveContainer" containerID="4e31cb3bae36a0382d7a0f2782bd1833cdae42e171b54727d16796a62fb77195" Jan 29 06:52:27 crc kubenswrapper[5017]: I0129 06:52:27.497026 5017 scope.go:117] "RemoveContainer" containerID="c95b489cda784c7b9116007b29f75650f045ed5fb8b8c6e7dc20adfba630b94d" Jan 29 06:52:27 crc kubenswrapper[5017]: I0129 06:52:27.506475 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7878659675-sqsrk"] Jan 29 06:52:27 crc kubenswrapper[5017]: I0129 06:52:27.541569 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7878659675-sqsrk"] Jan 29 06:52:27 crc kubenswrapper[5017]: I0129 06:52:27.547191 5017 scope.go:117] "RemoveContainer" containerID="4e31cb3bae36a0382d7a0f2782bd1833cdae42e171b54727d16796a62fb77195" Jan 29 06:52:27 crc kubenswrapper[5017]: E0129 06:52:27.547812 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e31cb3bae36a0382d7a0f2782bd1833cdae42e171b54727d16796a62fb77195\": container with ID starting with 4e31cb3bae36a0382d7a0f2782bd1833cdae42e171b54727d16796a62fb77195 not found: ID does not exist" containerID="4e31cb3bae36a0382d7a0f2782bd1833cdae42e171b54727d16796a62fb77195" Jan 29 06:52:27 crc kubenswrapper[5017]: I0129 06:52:27.547869 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e31cb3bae36a0382d7a0f2782bd1833cdae42e171b54727d16796a62fb77195"} err="failed to get container status \"4e31cb3bae36a0382d7a0f2782bd1833cdae42e171b54727d16796a62fb77195\": rpc error: code = NotFound desc = could not find container \"4e31cb3bae36a0382d7a0f2782bd1833cdae42e171b54727d16796a62fb77195\": container with ID starting with 4e31cb3bae36a0382d7a0f2782bd1833cdae42e171b54727d16796a62fb77195 not found: ID does not exist" Jan 29 06:52:27 crc kubenswrapper[5017]: I0129 06:52:27.547914 5017 scope.go:117] "RemoveContainer" containerID="c95b489cda784c7b9116007b29f75650f045ed5fb8b8c6e7dc20adfba630b94d" Jan 29 06:52:27 crc kubenswrapper[5017]: E0129 06:52:27.548415 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c95b489cda784c7b9116007b29f75650f045ed5fb8b8c6e7dc20adfba630b94d\": container with ID starting with c95b489cda784c7b9116007b29f75650f045ed5fb8b8c6e7dc20adfba630b94d not found: ID does not exist" containerID="c95b489cda784c7b9116007b29f75650f045ed5fb8b8c6e7dc20adfba630b94d" Jan 29 06:52:27 crc kubenswrapper[5017]: I0129 06:52:27.548494 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c95b489cda784c7b9116007b29f75650f045ed5fb8b8c6e7dc20adfba630b94d"} err="failed to get container status 
\"c95b489cda784c7b9116007b29f75650f045ed5fb8b8c6e7dc20adfba630b94d\": rpc error: code = NotFound desc = could not find container \"c95b489cda784c7b9116007b29f75650f045ed5fb8b8c6e7dc20adfba630b94d\": container with ID starting with c95b489cda784c7b9116007b29f75650f045ed5fb8b8c6e7dc20adfba630b94d not found: ID does not exist" Jan 29 06:52:27 crc kubenswrapper[5017]: I0129 06:52:27.567725 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 29 06:52:28 crc kubenswrapper[5017]: I0129 06:52:28.271797 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-89zdg"] Jan 29 06:52:28 crc kubenswrapper[5017]: E0129 06:52:28.272383 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0626d429-6c5e-4712-8e88-06f3efd0c2f4" containerName="dnsmasq-dns" Jan 29 06:52:28 crc kubenswrapper[5017]: I0129 06:52:28.272404 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="0626d429-6c5e-4712-8e88-06f3efd0c2f4" containerName="dnsmasq-dns" Jan 29 06:52:28 crc kubenswrapper[5017]: E0129 06:52:28.272427 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0626d429-6c5e-4712-8e88-06f3efd0c2f4" containerName="init" Jan 29 06:52:28 crc kubenswrapper[5017]: I0129 06:52:28.272435 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="0626d429-6c5e-4712-8e88-06f3efd0c2f4" containerName="init" Jan 29 06:52:28 crc kubenswrapper[5017]: I0129 06:52:28.272655 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="0626d429-6c5e-4712-8e88-06f3efd0c2f4" containerName="dnsmasq-dns" Jan 29 06:52:28 crc kubenswrapper[5017]: I0129 06:52:28.273465 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-89zdg" Jan 29 06:52:28 crc kubenswrapper[5017]: I0129 06:52:28.287902 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-89zdg"] Jan 29 06:52:28 crc kubenswrapper[5017]: I0129 06:52:28.341795 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0626d429-6c5e-4712-8e88-06f3efd0c2f4" path="/var/lib/kubelet/pods/0626d429-6c5e-4712-8e88-06f3efd0c2f4/volumes" Jan 29 06:52:28 crc kubenswrapper[5017]: I0129 06:52:28.342493 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-7be5-account-create-update-4lc27"] Jan 29 06:52:28 crc kubenswrapper[5017]: I0129 06:52:28.343557 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-7be5-account-create-update-4lc27" Jan 29 06:52:28 crc kubenswrapper[5017]: I0129 06:52:28.346232 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 29 06:52:28 crc kubenswrapper[5017]: I0129 06:52:28.359600 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7be5-account-create-update-4lc27"] Jan 29 06:52:28 crc kubenswrapper[5017]: I0129 06:52:28.383615 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 29 06:52:28 crc kubenswrapper[5017]: I0129 06:52:28.384487 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba8a2800-04ff-44da-866f-1c4cabfe809f-operator-scripts\") pod \"keystone-db-create-89zdg\" (UID: \"ba8a2800-04ff-44da-866f-1c4cabfe809f\") " pod="openstack/keystone-db-create-89zdg" Jan 29 06:52:28 crc kubenswrapper[5017]: I0129 06:52:28.384660 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rxfh\" (UniqueName: \"kubernetes.io/projected/ba8a2800-04ff-44da-866f-1c4cabfe809f-kube-api-access-5rxfh\") pod \"keystone-db-create-89zdg\" (UID: \"ba8a2800-04ff-44da-866f-1c4cabfe809f\") " pod="openstack/keystone-db-create-89zdg" Jan 29 06:52:28 crc kubenswrapper[5017]: I0129 06:52:28.385265 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 29 06:52:28 crc kubenswrapper[5017]: I0129 06:52:28.477986 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 29 06:52:28 crc kubenswrapper[5017]: I0129 06:52:28.486142 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba8a2800-04ff-44da-866f-1c4cabfe809f-operator-scripts\") pod \"keystone-db-create-89zdg\" (UID: \"ba8a2800-04ff-44da-866f-1c4cabfe809f\") " pod="openstack/keystone-db-create-89zdg" Jan 29 06:52:28 crc kubenswrapper[5017]: I0129 06:52:28.486306 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zj265\" (UniqueName: \"kubernetes.io/projected/7fcb9744-2ad4-4c60-a132-0b9769b6b97a-kube-api-access-zj265\") pod \"keystone-7be5-account-create-update-4lc27\" (UID: \"7fcb9744-2ad4-4c60-a132-0b9769b6b97a\") " pod="openstack/keystone-7be5-account-create-update-4lc27" Jan 29 06:52:28 crc kubenswrapper[5017]: I0129 06:52:28.486380 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rxfh\" (UniqueName: \"kubernetes.io/projected/ba8a2800-04ff-44da-866f-1c4cabfe809f-kube-api-access-5rxfh\") pod \"keystone-db-create-89zdg\" (UID: \"ba8a2800-04ff-44da-866f-1c4cabfe809f\") " pod="openstack/keystone-db-create-89zdg" Jan 29 06:52:28 crc kubenswrapper[5017]: I0129 06:52:28.486413 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7fcb9744-2ad4-4c60-a132-0b9769b6b97a-operator-scripts\") pod \"keystone-7be5-account-create-update-4lc27\" (UID: \"7fcb9744-2ad4-4c60-a132-0b9769b6b97a\") " pod="openstack/keystone-7be5-account-create-update-4lc27" Jan 29 06:52:28 crc kubenswrapper[5017]: I0129 06:52:28.487335 5017 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba8a2800-04ff-44da-866f-1c4cabfe809f-operator-scripts\") pod \"keystone-db-create-89zdg\" (UID: \"ba8a2800-04ff-44da-866f-1c4cabfe809f\") " pod="openstack/keystone-db-create-89zdg" Jan 29 06:52:28 crc kubenswrapper[5017]: I0129 06:52:28.511344 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rxfh\" (UniqueName: \"kubernetes.io/projected/ba8a2800-04ff-44da-866f-1c4cabfe809f-kube-api-access-5rxfh\") pod \"keystone-db-create-89zdg\" (UID: \"ba8a2800-04ff-44da-866f-1c4cabfe809f\") " pod="openstack/keystone-db-create-89zdg" Jan 29 06:52:28 crc kubenswrapper[5017]: I0129 06:52:28.589016 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7fcb9744-2ad4-4c60-a132-0b9769b6b97a-operator-scripts\") pod \"keystone-7be5-account-create-update-4lc27\" (UID: \"7fcb9744-2ad4-4c60-a132-0b9769b6b97a\") " pod="openstack/keystone-7be5-account-create-update-4lc27" Jan 29 06:52:28 crc kubenswrapper[5017]: I0129 06:52:28.589244 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zj265\" (UniqueName: \"kubernetes.io/projected/7fcb9744-2ad4-4c60-a132-0b9769b6b97a-kube-api-access-zj265\") pod \"keystone-7be5-account-create-update-4lc27\" (UID: \"7fcb9744-2ad4-4c60-a132-0b9769b6b97a\") " pod="openstack/keystone-7be5-account-create-update-4lc27" Jan 29 06:52:28 crc kubenswrapper[5017]: I0129 06:52:28.589849 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7fcb9744-2ad4-4c60-a132-0b9769b6b97a-operator-scripts\") pod \"keystone-7be5-account-create-update-4lc27\" (UID: \"7fcb9744-2ad4-4c60-a132-0b9769b6b97a\") " pod="openstack/keystone-7be5-account-create-update-4lc27" Jan 29 06:52:28 crc kubenswrapper[5017]: I0129 06:52:28.605024 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-4038-account-create-update-sv5xr"] Jan 29 06:52:28 crc kubenswrapper[5017]: I0129 06:52:28.606573 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-4038-account-create-update-sv5xr" Jan 29 06:52:28 crc kubenswrapper[5017]: I0129 06:52:28.606748 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-89zdg" Jan 29 06:52:28 crc kubenswrapper[5017]: I0129 06:52:28.612489 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 29 06:52:28 crc kubenswrapper[5017]: I0129 06:52:28.616117 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-wdd2j"] Jan 29 06:52:28 crc kubenswrapper[5017]: I0129 06:52:28.617744 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zj265\" (UniqueName: \"kubernetes.io/projected/7fcb9744-2ad4-4c60-a132-0b9769b6b97a-kube-api-access-zj265\") pod \"keystone-7be5-account-create-update-4lc27\" (UID: \"7fcb9744-2ad4-4c60-a132-0b9769b6b97a\") " pod="openstack/keystone-7be5-account-create-update-4lc27" Jan 29 06:52:28 crc kubenswrapper[5017]: I0129 06:52:28.633029 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-4038-account-create-update-sv5xr"] Jan 29 06:52:28 crc kubenswrapper[5017]: I0129 06:52:28.633169 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-wdd2j" Jan 29 06:52:28 crc kubenswrapper[5017]: I0129 06:52:28.639035 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-wdd2j"] Jan 29 06:52:28 crc kubenswrapper[5017]: I0129 06:52:28.664124 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7be5-account-create-update-4lc27" Jan 29 06:52:28 crc kubenswrapper[5017]: I0129 06:52:28.692845 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c77a9d60-239f-4a28-b30a-6a9c4bdecb2b-operator-scripts\") pod \"placement-4038-account-create-update-sv5xr\" (UID: \"c77a9d60-239f-4a28-b30a-6a9c4bdecb2b\") " pod="openstack/placement-4038-account-create-update-sv5xr" Jan 29 06:52:28 crc kubenswrapper[5017]: I0129 06:52:28.693011 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gf8rp\" (UniqueName: \"kubernetes.io/projected/e892cce7-8414-428d-a4f2-7aaff4b6bdd9-kube-api-access-gf8rp\") pod \"placement-db-create-wdd2j\" (UID: \"e892cce7-8414-428d-a4f2-7aaff4b6bdd9\") " pod="openstack/placement-db-create-wdd2j" Jan 29 06:52:28 crc kubenswrapper[5017]: I0129 06:52:28.693044 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e892cce7-8414-428d-a4f2-7aaff4b6bdd9-operator-scripts\") pod \"placement-db-create-wdd2j\" (UID: \"e892cce7-8414-428d-a4f2-7aaff4b6bdd9\") " pod="openstack/placement-db-create-wdd2j" Jan 29 06:52:28 crc kubenswrapper[5017]: I0129 06:52:28.693398 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlbb6\" (UniqueName: \"kubernetes.io/projected/c77a9d60-239f-4a28-b30a-6a9c4bdecb2b-kube-api-access-nlbb6\") pod \"placement-4038-account-create-update-sv5xr\" (UID: \"c77a9d60-239f-4a28-b30a-6a9c4bdecb2b\") " pod="openstack/placement-4038-account-create-update-sv5xr" Jan 29 06:52:28 crc kubenswrapper[5017]: I0129 06:52:28.800561 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nlbb6\" (UniqueName: \"kubernetes.io/projected/c77a9d60-239f-4a28-b30a-6a9c4bdecb2b-kube-api-access-nlbb6\") pod \"placement-4038-account-create-update-sv5xr\" (UID: \"c77a9d60-239f-4a28-b30a-6a9c4bdecb2b\") " pod="openstack/placement-4038-account-create-update-sv5xr" Jan 29 06:52:28 crc kubenswrapper[5017]: I0129 06:52:28.800652 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c77a9d60-239f-4a28-b30a-6a9c4bdecb2b-operator-scripts\") pod \"placement-4038-account-create-update-sv5xr\" (UID: \"c77a9d60-239f-4a28-b30a-6a9c4bdecb2b\") " pod="openstack/placement-4038-account-create-update-sv5xr" Jan 29 06:52:28 crc kubenswrapper[5017]: I0129 06:52:28.800680 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gf8rp\" (UniqueName: \"kubernetes.io/projected/e892cce7-8414-428d-a4f2-7aaff4b6bdd9-kube-api-access-gf8rp\") pod \"placement-db-create-wdd2j\" (UID: \"e892cce7-8414-428d-a4f2-7aaff4b6bdd9\") " pod="openstack/placement-db-create-wdd2j" Jan 29 06:52:28 crc kubenswrapper[5017]: I0129 06:52:28.800708 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/e892cce7-8414-428d-a4f2-7aaff4b6bdd9-operator-scripts\") pod \"placement-db-create-wdd2j\" (UID: \"e892cce7-8414-428d-a4f2-7aaff4b6bdd9\") " pod="openstack/placement-db-create-wdd2j" Jan 29 06:52:28 crc kubenswrapper[5017]: I0129 06:52:28.801753 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e892cce7-8414-428d-a4f2-7aaff4b6bdd9-operator-scripts\") pod \"placement-db-create-wdd2j\" (UID: \"e892cce7-8414-428d-a4f2-7aaff4b6bdd9\") " pod="openstack/placement-db-create-wdd2j" Jan 29 06:52:28 crc kubenswrapper[5017]: I0129 06:52:28.802280 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c77a9d60-239f-4a28-b30a-6a9c4bdecb2b-operator-scripts\") pod \"placement-4038-account-create-update-sv5xr\" (UID: \"c77a9d60-239f-4a28-b30a-6a9c4bdecb2b\") " pod="openstack/placement-4038-account-create-update-sv5xr" Jan 29 06:52:28 crc kubenswrapper[5017]: I0129 06:52:28.823593 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nlbb6\" (UniqueName: \"kubernetes.io/projected/c77a9d60-239f-4a28-b30a-6a9c4bdecb2b-kube-api-access-nlbb6\") pod \"placement-4038-account-create-update-sv5xr\" (UID: \"c77a9d60-239f-4a28-b30a-6a9c4bdecb2b\") " pod="openstack/placement-4038-account-create-update-sv5xr" Jan 29 06:52:28 crc kubenswrapper[5017]: I0129 06:52:28.823611 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gf8rp\" (UniqueName: \"kubernetes.io/projected/e892cce7-8414-428d-a4f2-7aaff4b6bdd9-kube-api-access-gf8rp\") pod \"placement-db-create-wdd2j\" (UID: \"e892cce7-8414-428d-a4f2-7aaff4b6bdd9\") " pod="openstack/placement-db-create-wdd2j" Jan 29 06:52:29 crc kubenswrapper[5017]: I0129 06:52:29.072246 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-wdd2j" Jan 29 06:52:29 crc kubenswrapper[5017]: I0129 06:52:29.075055 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-4038-account-create-update-sv5xr" Jan 29 06:52:29 crc kubenswrapper[5017]: I0129 06:52:29.110046 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-89zdg"] Jan 29 06:52:29 crc kubenswrapper[5017]: I0129 06:52:29.214020 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7be5-account-create-update-4lc27"] Jan 29 06:52:29 crc kubenswrapper[5017]: W0129 06:52:29.297837 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7fcb9744_2ad4_4c60_a132_0b9769b6b97a.slice/crio-a92fb1c08175eba55e5110f80d33e7d16a1c69659f378d641e36ac0b4c92a08d WatchSource:0}: Error finding container a92fb1c08175eba55e5110f80d33e7d16a1c69659f378d641e36ac0b4c92a08d: Status 404 returned error can't find the container with id a92fb1c08175eba55e5110f80d33e7d16a1c69659f378d641e36ac0b4c92a08d Jan 29 06:52:29 crc kubenswrapper[5017]: I0129 06:52:29.495256 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7be5-account-create-update-4lc27" event={"ID":"7fcb9744-2ad4-4c60-a132-0b9769b6b97a","Type":"ContainerStarted","Data":"a92fb1c08175eba55e5110f80d33e7d16a1c69659f378d641e36ac0b4c92a08d"} Jan 29 06:52:29 crc kubenswrapper[5017]: I0129 06:52:29.500584 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-89zdg" event={"ID":"ba8a2800-04ff-44da-866f-1c4cabfe809f","Type":"ContainerStarted","Data":"2f645a2b34a7b23c32b5ae9082fb6698bb9e0e0dd2c92e99e4f7cb9a5d290fe2"} Jan 29 06:52:29 crc kubenswrapper[5017]: I0129 06:52:29.500657 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-89zdg" event={"ID":"ba8a2800-04ff-44da-866f-1c4cabfe809f","Type":"ContainerStarted","Data":"66bc282515f9aa26c84f5aae9eadb8bbff38016bb65e6a0c355f4a4531cd6baf"} Jan 29 06:52:29 crc kubenswrapper[5017]: I0129 06:52:29.542232 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-89zdg" podStartSLOduration=1.542203974 podStartE2EDuration="1.542203974s" podCreationTimestamp="2026-01-29 06:52:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:52:29.532118967 +0000 UTC m=+1035.906566587" watchObservedRunningTime="2026-01-29 06:52:29.542203974 +0000 UTC m=+1035.916651584" Jan 29 06:52:29 crc kubenswrapper[5017]: I0129 06:52:29.617914 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 29 06:52:29 crc kubenswrapper[5017]: I0129 06:52:29.644407 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-wdd2j"] Jan 29 06:52:29 crc kubenswrapper[5017]: I0129 06:52:29.804649 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-4038-account-create-update-sv5xr"] Jan 29 06:52:30 crc kubenswrapper[5017]: I0129 06:52:30.519567 5017 generic.go:334] "Generic (PLEG): container finished" podID="e892cce7-8414-428d-a4f2-7aaff4b6bdd9" containerID="4968edbc67d199e151c3cfadae23c59a5bd0ef66ae75d9c98b233b354687e22a" exitCode=0 Jan 29 06:52:30 crc kubenswrapper[5017]: I0129 06:52:30.520075 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-wdd2j" 
event={"ID":"e892cce7-8414-428d-a4f2-7aaff4b6bdd9","Type":"ContainerDied","Data":"4968edbc67d199e151c3cfadae23c59a5bd0ef66ae75d9c98b233b354687e22a"} Jan 29 06:52:30 crc kubenswrapper[5017]: I0129 06:52:30.520110 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-wdd2j" event={"ID":"e892cce7-8414-428d-a4f2-7aaff4b6bdd9","Type":"ContainerStarted","Data":"2bf1e13a5505b7b6e90c25a1e8131cfee53847455023378502acf796a823288b"} Jan 29 06:52:30 crc kubenswrapper[5017]: I0129 06:52:30.521987 5017 generic.go:334] "Generic (PLEG): container finished" podID="7fcb9744-2ad4-4c60-a132-0b9769b6b97a" containerID="d6de78aeb2689362859b99dec9103a593ca802a2f6c8ce748edc505d9f000880" exitCode=0 Jan 29 06:52:30 crc kubenswrapper[5017]: I0129 06:52:30.522025 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7be5-account-create-update-4lc27" event={"ID":"7fcb9744-2ad4-4c60-a132-0b9769b6b97a","Type":"ContainerDied","Data":"d6de78aeb2689362859b99dec9103a593ca802a2f6c8ce748edc505d9f000880"} Jan 29 06:52:30 crc kubenswrapper[5017]: I0129 06:52:30.523347 5017 generic.go:334] "Generic (PLEG): container finished" podID="ba8a2800-04ff-44da-866f-1c4cabfe809f" containerID="2f645a2b34a7b23c32b5ae9082fb6698bb9e0e0dd2c92e99e4f7cb9a5d290fe2" exitCode=0 Jan 29 06:52:30 crc kubenswrapper[5017]: I0129 06:52:30.523415 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-89zdg" event={"ID":"ba8a2800-04ff-44da-866f-1c4cabfe809f","Type":"ContainerDied","Data":"2f645a2b34a7b23c32b5ae9082fb6698bb9e0e0dd2c92e99e4f7cb9a5d290fe2"} Jan 29 06:52:30 crc kubenswrapper[5017]: I0129 06:52:30.524415 5017 generic.go:334] "Generic (PLEG): container finished" podID="c77a9d60-239f-4a28-b30a-6a9c4bdecb2b" containerID="7f718e4405e859db950088c39e7a46f62463a642095bd1f0bb62bcdf8b9c85bc" exitCode=0 Jan 29 06:52:30 crc kubenswrapper[5017]: I0129 06:52:30.525487 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-4038-account-create-update-sv5xr" event={"ID":"c77a9d60-239f-4a28-b30a-6a9c4bdecb2b","Type":"ContainerDied","Data":"7f718e4405e859db950088c39e7a46f62463a642095bd1f0bb62bcdf8b9c85bc"} Jan 29 06:52:30 crc kubenswrapper[5017]: I0129 06:52:30.525515 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-4038-account-create-update-sv5xr" event={"ID":"c77a9d60-239f-4a28-b30a-6a9c4bdecb2b","Type":"ContainerStarted","Data":"22ed775e25eb7057147b670f71bb307c26a3f1439b9161a9864f1322fa04da94"} Jan 29 06:52:30 crc kubenswrapper[5017]: I0129 06:52:30.623092 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-67fdf7998c-vkl9j"] Jan 29 06:52:30 crc kubenswrapper[5017]: I0129 06:52:30.629129 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67fdf7998c-vkl9j" Jan 29 06:52:30 crc kubenswrapper[5017]: I0129 06:52:30.649488 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3f100233-315d-4338-815e-8f12beaeaaae-ovsdbserver-nb\") pod \"dnsmasq-dns-67fdf7998c-vkl9j\" (UID: \"3f100233-315d-4338-815e-8f12beaeaaae\") " pod="openstack/dnsmasq-dns-67fdf7998c-vkl9j" Jan 29 06:52:30 crc kubenswrapper[5017]: I0129 06:52:30.649543 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3f100233-315d-4338-815e-8f12beaeaaae-ovsdbserver-sb\") pod \"dnsmasq-dns-67fdf7998c-vkl9j\" (UID: \"3f100233-315d-4338-815e-8f12beaeaaae\") " pod="openstack/dnsmasq-dns-67fdf7998c-vkl9j" Jan 29 06:52:30 crc kubenswrapper[5017]: I0129 06:52:30.649571 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vx7mk\" (UniqueName: \"kubernetes.io/projected/3f100233-315d-4338-815e-8f12beaeaaae-kube-api-access-vx7mk\") pod \"dnsmasq-dns-67fdf7998c-vkl9j\" (UID: \"3f100233-315d-4338-815e-8f12beaeaaae\") " pod="openstack/dnsmasq-dns-67fdf7998c-vkl9j" Jan 29 06:52:30 crc kubenswrapper[5017]: I0129 06:52:30.649631 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3f100233-315d-4338-815e-8f12beaeaaae-dns-svc\") pod \"dnsmasq-dns-67fdf7998c-vkl9j\" (UID: \"3f100233-315d-4338-815e-8f12beaeaaae\") " pod="openstack/dnsmasq-dns-67fdf7998c-vkl9j" Jan 29 06:52:30 crc kubenswrapper[5017]: I0129 06:52:30.649732 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f100233-315d-4338-815e-8f12beaeaaae-config\") pod \"dnsmasq-dns-67fdf7998c-vkl9j\" (UID: \"3f100233-315d-4338-815e-8f12beaeaaae\") " pod="openstack/dnsmasq-dns-67fdf7998c-vkl9j" Jan 29 06:52:30 crc kubenswrapper[5017]: I0129 06:52:30.671400 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67fdf7998c-vkl9j"] Jan 29 06:52:30 crc kubenswrapper[5017]: I0129 06:52:30.752408 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f100233-315d-4338-815e-8f12beaeaaae-config\") pod \"dnsmasq-dns-67fdf7998c-vkl9j\" (UID: \"3f100233-315d-4338-815e-8f12beaeaaae\") " pod="openstack/dnsmasq-dns-67fdf7998c-vkl9j" Jan 29 06:52:30 crc kubenswrapper[5017]: I0129 06:52:30.752495 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3f100233-315d-4338-815e-8f12beaeaaae-ovsdbserver-nb\") pod \"dnsmasq-dns-67fdf7998c-vkl9j\" (UID: \"3f100233-315d-4338-815e-8f12beaeaaae\") " pod="openstack/dnsmasq-dns-67fdf7998c-vkl9j" Jan 29 06:52:30 crc kubenswrapper[5017]: I0129 06:52:30.752523 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3f100233-315d-4338-815e-8f12beaeaaae-ovsdbserver-sb\") pod \"dnsmasq-dns-67fdf7998c-vkl9j\" (UID: \"3f100233-315d-4338-815e-8f12beaeaaae\") " pod="openstack/dnsmasq-dns-67fdf7998c-vkl9j" Jan 29 06:52:30 crc kubenswrapper[5017]: I0129 06:52:30.752544 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-vx7mk\" (UniqueName: \"kubernetes.io/projected/3f100233-315d-4338-815e-8f12beaeaaae-kube-api-access-vx7mk\") pod \"dnsmasq-dns-67fdf7998c-vkl9j\" (UID: \"3f100233-315d-4338-815e-8f12beaeaaae\") " pod="openstack/dnsmasq-dns-67fdf7998c-vkl9j" Jan 29 06:52:30 crc kubenswrapper[5017]: I0129 06:52:30.752564 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3f100233-315d-4338-815e-8f12beaeaaae-dns-svc\") pod \"dnsmasq-dns-67fdf7998c-vkl9j\" (UID: \"3f100233-315d-4338-815e-8f12beaeaaae\") " pod="openstack/dnsmasq-dns-67fdf7998c-vkl9j" Jan 29 06:52:30 crc kubenswrapper[5017]: I0129 06:52:30.753544 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3f100233-315d-4338-815e-8f12beaeaaae-dns-svc\") pod \"dnsmasq-dns-67fdf7998c-vkl9j\" (UID: \"3f100233-315d-4338-815e-8f12beaeaaae\") " pod="openstack/dnsmasq-dns-67fdf7998c-vkl9j" Jan 29 06:52:30 crc kubenswrapper[5017]: I0129 06:52:30.754142 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f100233-315d-4338-815e-8f12beaeaaae-config\") pod \"dnsmasq-dns-67fdf7998c-vkl9j\" (UID: \"3f100233-315d-4338-815e-8f12beaeaaae\") " pod="openstack/dnsmasq-dns-67fdf7998c-vkl9j" Jan 29 06:52:30 crc kubenswrapper[5017]: I0129 06:52:30.754836 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3f100233-315d-4338-815e-8f12beaeaaae-ovsdbserver-sb\") pod \"dnsmasq-dns-67fdf7998c-vkl9j\" (UID: \"3f100233-315d-4338-815e-8f12beaeaaae\") " pod="openstack/dnsmasq-dns-67fdf7998c-vkl9j" Jan 29 06:52:30 crc kubenswrapper[5017]: I0129 06:52:30.755177 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3f100233-315d-4338-815e-8f12beaeaaae-ovsdbserver-nb\") pod \"dnsmasq-dns-67fdf7998c-vkl9j\" (UID: \"3f100233-315d-4338-815e-8f12beaeaaae\") " pod="openstack/dnsmasq-dns-67fdf7998c-vkl9j" Jan 29 06:52:30 crc kubenswrapper[5017]: I0129 06:52:30.792565 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vx7mk\" (UniqueName: \"kubernetes.io/projected/3f100233-315d-4338-815e-8f12beaeaaae-kube-api-access-vx7mk\") pod \"dnsmasq-dns-67fdf7998c-vkl9j\" (UID: \"3f100233-315d-4338-815e-8f12beaeaaae\") " pod="openstack/dnsmasq-dns-67fdf7998c-vkl9j" Jan 29 06:52:30 crc kubenswrapper[5017]: I0129 06:52:30.965973 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67fdf7998c-vkl9j" Jan 29 06:52:31 crc kubenswrapper[5017]: I0129 06:52:31.470222 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67fdf7998c-vkl9j"] Jan 29 06:52:31 crc kubenswrapper[5017]: I0129 06:52:31.545674 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67fdf7998c-vkl9j" event={"ID":"3f100233-315d-4338-815e-8f12beaeaaae","Type":"ContainerStarted","Data":"32e52e6b80bfde4a230f944fe4a3edecee8e079f5de667b479a866bc6a2f6ec9"} Jan 29 06:52:31 crc kubenswrapper[5017]: I0129 06:52:31.766848 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Jan 29 06:52:31 crc kubenswrapper[5017]: I0129 06:52:31.772617 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Jan 29 06:52:31 crc kubenswrapper[5017]: I0129 06:52:31.774507 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-9zv94" Jan 29 06:52:31 crc kubenswrapper[5017]: I0129 06:52:31.776780 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Jan 29 06:52:31 crc kubenswrapper[5017]: I0129 06:52:31.776855 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Jan 29 06:52:31 crc kubenswrapper[5017]: I0129 06:52:31.776929 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Jan 29 06:52:31 crc kubenswrapper[5017]: I0129 06:52:31.852145 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 29 06:52:31 crc kubenswrapper[5017]: I0129 06:52:31.975197 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/6d082326-495c-4078-974e-714379243884-lock\") pod \"swift-storage-0\" (UID: \"6d082326-495c-4078-974e-714379243884\") " pod="openstack/swift-storage-0" Jan 29 06:52:31 crc kubenswrapper[5017]: I0129 06:52:31.975266 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/6d082326-495c-4078-974e-714379243884-etc-swift\") pod \"swift-storage-0\" (UID: \"6d082326-495c-4078-974e-714379243884\") " pod="openstack/swift-storage-0" Jan 29 06:52:31 crc kubenswrapper[5017]: I0129 06:52:31.975305 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"swift-storage-0\" (UID: \"6d082326-495c-4078-974e-714379243884\") " pod="openstack/swift-storage-0" Jan 29 06:52:31 crc kubenswrapper[5017]: I0129 06:52:31.975348 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d082326-495c-4078-974e-714379243884-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"6d082326-495c-4078-974e-714379243884\") " pod="openstack/swift-storage-0" Jan 29 06:52:31 crc kubenswrapper[5017]: I0129 06:52:31.975415 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqwrd\" (UniqueName: \"kubernetes.io/projected/6d082326-495c-4078-974e-714379243884-kube-api-access-nqwrd\") pod \"swift-storage-0\" (UID: \"6d082326-495c-4078-974e-714379243884\") " pod="openstack/swift-storage-0" Jan 29 06:52:31 crc kubenswrapper[5017]: I0129 06:52:31.975445 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/6d082326-495c-4078-974e-714379243884-cache\") pod \"swift-storage-0\" (UID: \"6d082326-495c-4078-974e-714379243884\") " pod="openstack/swift-storage-0" Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.031801 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-89zdg" Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.081878 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqwrd\" (UniqueName: \"kubernetes.io/projected/6d082326-495c-4078-974e-714379243884-kube-api-access-nqwrd\") pod \"swift-storage-0\" (UID: \"6d082326-495c-4078-974e-714379243884\") " pod="openstack/swift-storage-0" Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.082008 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/6d082326-495c-4078-974e-714379243884-cache\") pod \"swift-storage-0\" (UID: \"6d082326-495c-4078-974e-714379243884\") " pod="openstack/swift-storage-0" Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.082767 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/6d082326-495c-4078-974e-714379243884-cache\") pod \"swift-storage-0\" (UID: \"6d082326-495c-4078-974e-714379243884\") " pod="openstack/swift-storage-0" Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.082855 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/6d082326-495c-4078-974e-714379243884-lock\") pod \"swift-storage-0\" (UID: \"6d082326-495c-4078-974e-714379243884\") " pod="openstack/swift-storage-0" Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.083229 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/6d082326-495c-4078-974e-714379243884-etc-swift\") pod \"swift-storage-0\" (UID: \"6d082326-495c-4078-974e-714379243884\") " pod="openstack/swift-storage-0" Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.083293 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"swift-storage-0\" (UID: \"6d082326-495c-4078-974e-714379243884\") " pod="openstack/swift-storage-0" Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.083353 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d082326-495c-4078-974e-714379243884-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"6d082326-495c-4078-974e-714379243884\") " pod="openstack/swift-storage-0" Jan 29 06:52:32 crc kubenswrapper[5017]: E0129 06:52:32.089204 5017 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 29 06:52:32 crc kubenswrapper[5017]: E0129 06:52:32.089251 5017 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 29 06:52:32 crc kubenswrapper[5017]: E0129 06:52:32.089335 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6d082326-495c-4078-974e-714379243884-etc-swift podName:6d082326-495c-4078-974e-714379243884 nodeName:}" failed. No retries permitted until 2026-01-29 06:52:32.589303816 +0000 UTC m=+1038.963751426 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/6d082326-495c-4078-974e-714379243884-etc-swift") pod "swift-storage-0" (UID: "6d082326-495c-4078-974e-714379243884") : configmap "swift-ring-files" not found Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.083174 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/6d082326-495c-4078-974e-714379243884-lock\") pod \"swift-storage-0\" (UID: \"6d082326-495c-4078-974e-714379243884\") " pod="openstack/swift-storage-0" Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.090007 5017 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"swift-storage-0\" (UID: \"6d082326-495c-4078-974e-714379243884\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/swift-storage-0" Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.118148 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d082326-495c-4078-974e-714379243884-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"6d082326-495c-4078-974e-714379243884\") " pod="openstack/swift-storage-0" Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.129920 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqwrd\" (UniqueName: \"kubernetes.io/projected/6d082326-495c-4078-974e-714379243884-kube-api-access-nqwrd\") pod \"swift-storage-0\" (UID: \"6d082326-495c-4078-974e-714379243884\") " pod="openstack/swift-storage-0" Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.164362 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"swift-storage-0\" (UID: \"6d082326-495c-4078-974e-714379243884\") " pod="openstack/swift-storage-0" Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.184810 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba8a2800-04ff-44da-866f-1c4cabfe809f-operator-scripts\") pod \"ba8a2800-04ff-44da-866f-1c4cabfe809f\" (UID: \"ba8a2800-04ff-44da-866f-1c4cabfe809f\") " Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.185039 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5rxfh\" (UniqueName: \"kubernetes.io/projected/ba8a2800-04ff-44da-866f-1c4cabfe809f-kube-api-access-5rxfh\") pod \"ba8a2800-04ff-44da-866f-1c4cabfe809f\" (UID: \"ba8a2800-04ff-44da-866f-1c4cabfe809f\") " Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.189604 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba8a2800-04ff-44da-866f-1c4cabfe809f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ba8a2800-04ff-44da-866f-1c4cabfe809f" (UID: "ba8a2800-04ff-44da-866f-1c4cabfe809f"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.200731 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba8a2800-04ff-44da-866f-1c4cabfe809f-kube-api-access-5rxfh" (OuterVolumeSpecName: "kube-api-access-5rxfh") pod "ba8a2800-04ff-44da-866f-1c4cabfe809f" (UID: "ba8a2800-04ff-44da-866f-1c4cabfe809f"). InnerVolumeSpecName "kube-api-access-5rxfh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.266552 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-4038-account-create-update-sv5xr" Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.278210 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-wdd2j" Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.287739 5017 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba8a2800-04ff-44da-866f-1c4cabfe809f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.288162 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5rxfh\" (UniqueName: \"kubernetes.io/projected/ba8a2800-04ff-44da-866f-1c4cabfe809f-kube-api-access-5rxfh\") on node \"crc\" DevicePath \"\"" Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.316402 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7be5-account-create-update-4lc27" Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.375036 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-qfxk8"] Jan 29 06:52:32 crc kubenswrapper[5017]: E0129 06:52:32.375571 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e892cce7-8414-428d-a4f2-7aaff4b6bdd9" containerName="mariadb-database-create" Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.375592 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="e892cce7-8414-428d-a4f2-7aaff4b6bdd9" containerName="mariadb-database-create" Jan 29 06:52:32 crc kubenswrapper[5017]: E0129 06:52:32.375619 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fcb9744-2ad4-4c60-a132-0b9769b6b97a" containerName="mariadb-account-create-update" Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.375628 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fcb9744-2ad4-4c60-a132-0b9769b6b97a" containerName="mariadb-account-create-update" Jan 29 06:52:32 crc kubenswrapper[5017]: E0129 06:52:32.375653 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba8a2800-04ff-44da-866f-1c4cabfe809f" containerName="mariadb-database-create" Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.375660 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba8a2800-04ff-44da-866f-1c4cabfe809f" containerName="mariadb-database-create" Jan 29 06:52:32 crc kubenswrapper[5017]: E0129 06:52:32.375685 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c77a9d60-239f-4a28-b30a-6a9c4bdecb2b" containerName="mariadb-account-create-update" Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.375691 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="c77a9d60-239f-4a28-b30a-6a9c4bdecb2b" containerName="mariadb-account-create-update" Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 
06:52:32.375880 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fcb9744-2ad4-4c60-a132-0b9769b6b97a" containerName="mariadb-account-create-update" Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.375898 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="c77a9d60-239f-4a28-b30a-6a9c4bdecb2b" containerName="mariadb-account-create-update" Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.375914 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba8a2800-04ff-44da-866f-1c4cabfe809f" containerName="mariadb-database-create" Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.375929 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="e892cce7-8414-428d-a4f2-7aaff4b6bdd9" containerName="mariadb-database-create" Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.376628 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-qfxk8" Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.383266 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.383875 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.387224 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.390653 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zj265\" (UniqueName: \"kubernetes.io/projected/7fcb9744-2ad4-4c60-a132-0b9769b6b97a-kube-api-access-zj265\") pod \"7fcb9744-2ad4-4c60-a132-0b9769b6b97a\" (UID: \"7fcb9744-2ad4-4c60-a132-0b9769b6b97a\") " Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.390851 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nlbb6\" (UniqueName: \"kubernetes.io/projected/c77a9d60-239f-4a28-b30a-6a9c4bdecb2b-kube-api-access-nlbb6\") pod \"c77a9d60-239f-4a28-b30a-6a9c4bdecb2b\" (UID: \"c77a9d60-239f-4a28-b30a-6a9c4bdecb2b\") " Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.391018 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7fcb9744-2ad4-4c60-a132-0b9769b6b97a-operator-scripts\") pod \"7fcb9744-2ad4-4c60-a132-0b9769b6b97a\" (UID: \"7fcb9744-2ad4-4c60-a132-0b9769b6b97a\") " Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.391059 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c77a9d60-239f-4a28-b30a-6a9c4bdecb2b-operator-scripts\") pod \"c77a9d60-239f-4a28-b30a-6a9c4bdecb2b\" (UID: \"c77a9d60-239f-4a28-b30a-6a9c4bdecb2b\") " Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.391088 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e892cce7-8414-428d-a4f2-7aaff4b6bdd9-operator-scripts\") pod \"e892cce7-8414-428d-a4f2-7aaff4b6bdd9\" (UID: \"e892cce7-8414-428d-a4f2-7aaff4b6bdd9\") " Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.391114 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf8rp\" (UniqueName: 
\"kubernetes.io/projected/e892cce7-8414-428d-a4f2-7aaff4b6bdd9-kube-api-access-gf8rp\") pod \"e892cce7-8414-428d-a4f2-7aaff4b6bdd9\" (UID: \"e892cce7-8414-428d-a4f2-7aaff4b6bdd9\") " Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.391789 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7fcb9744-2ad4-4c60-a132-0b9769b6b97a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7fcb9744-2ad4-4c60-a132-0b9769b6b97a" (UID: "7fcb9744-2ad4-4c60-a132-0b9769b6b97a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.392342 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e892cce7-8414-428d-a4f2-7aaff4b6bdd9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e892cce7-8414-428d-a4f2-7aaff4b6bdd9" (UID: "e892cce7-8414-428d-a4f2-7aaff4b6bdd9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.392853 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c77a9d60-239f-4a28-b30a-6a9c4bdecb2b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c77a9d60-239f-4a28-b30a-6a9c4bdecb2b" (UID: "c77a9d60-239f-4a28-b30a-6a9c4bdecb2b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.397449 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fcb9744-2ad4-4c60-a132-0b9769b6b97a-kube-api-access-zj265" (OuterVolumeSpecName: "kube-api-access-zj265") pod "7fcb9744-2ad4-4c60-a132-0b9769b6b97a" (UID: "7fcb9744-2ad4-4c60-a132-0b9769b6b97a"). InnerVolumeSpecName "kube-api-access-zj265". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.399101 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e892cce7-8414-428d-a4f2-7aaff4b6bdd9-kube-api-access-gf8rp" (OuterVolumeSpecName: "kube-api-access-gf8rp") pod "e892cce7-8414-428d-a4f2-7aaff4b6bdd9" (UID: "e892cce7-8414-428d-a4f2-7aaff4b6bdd9"). InnerVolumeSpecName "kube-api-access-gf8rp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.400292 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c77a9d60-239f-4a28-b30a-6a9c4bdecb2b-kube-api-access-nlbb6" (OuterVolumeSpecName: "kube-api-access-nlbb6") pod "c77a9d60-239f-4a28-b30a-6a9c4bdecb2b" (UID: "c77a9d60-239f-4a28-b30a-6a9c4bdecb2b"). InnerVolumeSpecName "kube-api-access-nlbb6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.409071 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-qfxk8"] Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.425077 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-qfxk8"] Jan 29 06:52:32 crc kubenswrapper[5017]: E0129 06:52:32.425917 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle dispersionconf etc-swift kube-api-access-dvg74 ring-data-devices scripts swiftconf], unattached volumes=[], failed to process volumes=[combined-ca-bundle dispersionconf etc-swift kube-api-access-dvg74 ring-data-devices scripts swiftconf]: context canceled" pod="openstack/swift-ring-rebalance-qfxk8" podUID="a81bb985-4565-4aaa-b521-8d63d7d158af" Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.429733 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-fc5n9"] Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.431305 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-fc5n9" Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.455117 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-fc5n9"] Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.496201 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/4991fdbc-2d83-45dd-91a3-b312347ff317-ring-data-devices\") pod \"swift-ring-rebalance-fc5n9\" (UID: \"4991fdbc-2d83-45dd-91a3-b312347ff317\") " pod="openstack/swift-ring-rebalance-fc5n9" Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.496286 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/4991fdbc-2d83-45dd-91a3-b312347ff317-etc-swift\") pod \"swift-ring-rebalance-fc5n9\" (UID: \"4991fdbc-2d83-45dd-91a3-b312347ff317\") " pod="openstack/swift-ring-rebalance-fc5n9" Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.496318 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a81bb985-4565-4aaa-b521-8d63d7d158af-scripts\") pod \"swift-ring-rebalance-qfxk8\" (UID: \"a81bb985-4565-4aaa-b521-8d63d7d158af\") " pod="openstack/swift-ring-rebalance-qfxk8" Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.496342 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a81bb985-4565-4aaa-b521-8d63d7d158af-combined-ca-bundle\") pod \"swift-ring-rebalance-qfxk8\" (UID: \"a81bb985-4565-4aaa-b521-8d63d7d158af\") " pod="openstack/swift-ring-rebalance-qfxk8" Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.496366 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4991fdbc-2d83-45dd-91a3-b312347ff317-combined-ca-bundle\") pod \"swift-ring-rebalance-fc5n9\" (UID: \"4991fdbc-2d83-45dd-91a3-b312347ff317\") " pod="openstack/swift-ring-rebalance-fc5n9" Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.496388 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-dvg74\" (UniqueName: \"kubernetes.io/projected/a81bb985-4565-4aaa-b521-8d63d7d158af-kube-api-access-dvg74\") pod \"swift-ring-rebalance-qfxk8\" (UID: \"a81bb985-4565-4aaa-b521-8d63d7d158af\") " pod="openstack/swift-ring-rebalance-qfxk8" Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.496411 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4991fdbc-2d83-45dd-91a3-b312347ff317-scripts\") pod \"swift-ring-rebalance-fc5n9\" (UID: \"4991fdbc-2d83-45dd-91a3-b312347ff317\") " pod="openstack/swift-ring-rebalance-fc5n9" Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.496437 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/a81bb985-4565-4aaa-b521-8d63d7d158af-swiftconf\") pod \"swift-ring-rebalance-qfxk8\" (UID: \"a81bb985-4565-4aaa-b521-8d63d7d158af\") " pod="openstack/swift-ring-rebalance-qfxk8" Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.496457 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/4991fdbc-2d83-45dd-91a3-b312347ff317-swiftconf\") pod \"swift-ring-rebalance-fc5n9\" (UID: \"4991fdbc-2d83-45dd-91a3-b312347ff317\") " pod="openstack/swift-ring-rebalance-fc5n9" Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.496489 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/a81bb985-4565-4aaa-b521-8d63d7d158af-etc-swift\") pod \"swift-ring-rebalance-qfxk8\" (UID: \"a81bb985-4565-4aaa-b521-8d63d7d158af\") " pod="openstack/swift-ring-rebalance-qfxk8" Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.496517 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxlr8\" (UniqueName: \"kubernetes.io/projected/4991fdbc-2d83-45dd-91a3-b312347ff317-kube-api-access-kxlr8\") pod \"swift-ring-rebalance-fc5n9\" (UID: \"4991fdbc-2d83-45dd-91a3-b312347ff317\") " pod="openstack/swift-ring-rebalance-fc5n9" Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.496544 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/a81bb985-4565-4aaa-b521-8d63d7d158af-dispersionconf\") pod \"swift-ring-rebalance-qfxk8\" (UID: \"a81bb985-4565-4aaa-b521-8d63d7d158af\") " pod="openstack/swift-ring-rebalance-qfxk8" Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.496574 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/4991fdbc-2d83-45dd-91a3-b312347ff317-dispersionconf\") pod \"swift-ring-rebalance-fc5n9\" (UID: \"4991fdbc-2d83-45dd-91a3-b312347ff317\") " pod="openstack/swift-ring-rebalance-fc5n9" Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.496593 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/a81bb985-4565-4aaa-b521-8d63d7d158af-ring-data-devices\") pod \"swift-ring-rebalance-qfxk8\" (UID: \"a81bb985-4565-4aaa-b521-8d63d7d158af\") " pod="openstack/swift-ring-rebalance-qfxk8" Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.496653 5017 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nlbb6\" (UniqueName: \"kubernetes.io/projected/c77a9d60-239f-4a28-b30a-6a9c4bdecb2b-kube-api-access-nlbb6\") on node \"crc\" DevicePath \"\"" Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.496667 5017 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7fcb9744-2ad4-4c60-a132-0b9769b6b97a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.496677 5017 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c77a9d60-239f-4a28-b30a-6a9c4bdecb2b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.496687 5017 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e892cce7-8414-428d-a4f2-7aaff4b6bdd9-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.496697 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf8rp\" (UniqueName: \"kubernetes.io/projected/e892cce7-8414-428d-a4f2-7aaff4b6bdd9-kube-api-access-gf8rp\") on node \"crc\" DevicePath \"\"" Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.496706 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zj265\" (UniqueName: \"kubernetes.io/projected/7fcb9744-2ad4-4c60-a132-0b9769b6b97a-kube-api-access-zj265\") on node \"crc\" DevicePath \"\"" Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.571587 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7be5-account-create-update-4lc27" event={"ID":"7fcb9744-2ad4-4c60-a132-0b9769b6b97a","Type":"ContainerDied","Data":"a92fb1c08175eba55e5110f80d33e7d16a1c69659f378d641e36ac0b4c92a08d"} Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.572063 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a92fb1c08175eba55e5110f80d33e7d16a1c69659f378d641e36ac0b4c92a08d" Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.571626 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7be5-account-create-update-4lc27" Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.583094 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-89zdg" Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.584257 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-89zdg" event={"ID":"ba8a2800-04ff-44da-866f-1c4cabfe809f","Type":"ContainerDied","Data":"66bc282515f9aa26c84f5aae9eadb8bbff38016bb65e6a0c355f4a4531cd6baf"} Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.584321 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="66bc282515f9aa26c84f5aae9eadb8bbff38016bb65e6a0c355f4a4531cd6baf" Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.587918 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-4038-account-create-update-sv5xr" event={"ID":"c77a9d60-239f-4a28-b30a-6a9c4bdecb2b","Type":"ContainerDied","Data":"22ed775e25eb7057147b670f71bb307c26a3f1439b9161a9864f1322fa04da94"} Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.587949 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="22ed775e25eb7057147b670f71bb307c26a3f1439b9161a9864f1322fa04da94" Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.588066 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-4038-account-create-update-sv5xr" Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.590837 5017 generic.go:334] "Generic (PLEG): container finished" podID="3f100233-315d-4338-815e-8f12beaeaaae" containerID="0fbcb35f5e1442eb859e4c507b0be33fb72bc533f83d4e1a7af7b7a85ab697de" exitCode=0 Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.590909 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67fdf7998c-vkl9j" event={"ID":"3f100233-315d-4338-815e-8f12beaeaaae","Type":"ContainerDied","Data":"0fbcb35f5e1442eb859e4c507b0be33fb72bc533f83d4e1a7af7b7a85ab697de"} Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.593647 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-qfxk8" Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.593704 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-wdd2j" Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.593741 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-wdd2j" event={"ID":"e892cce7-8414-428d-a4f2-7aaff4b6bdd9","Type":"ContainerDied","Data":"2bf1e13a5505b7b6e90c25a1e8131cfee53847455023378502acf796a823288b"} Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.593763 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2bf1e13a5505b7b6e90c25a1e8131cfee53847455023378502acf796a823288b" Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.600619 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/a81bb985-4565-4aaa-b521-8d63d7d158af-etc-swift\") pod \"swift-ring-rebalance-qfxk8\" (UID: \"a81bb985-4565-4aaa-b521-8d63d7d158af\") " pod="openstack/swift-ring-rebalance-qfxk8" Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.600665 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxlr8\" (UniqueName: \"kubernetes.io/projected/4991fdbc-2d83-45dd-91a3-b312347ff317-kube-api-access-kxlr8\") pod \"swift-ring-rebalance-fc5n9\" (UID: \"4991fdbc-2d83-45dd-91a3-b312347ff317\") " pod="openstack/swift-ring-rebalance-fc5n9" Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.600699 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/a81bb985-4565-4aaa-b521-8d63d7d158af-dispersionconf\") pod \"swift-ring-rebalance-qfxk8\" (UID: \"a81bb985-4565-4aaa-b521-8d63d7d158af\") " pod="openstack/swift-ring-rebalance-qfxk8" Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.600731 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/4991fdbc-2d83-45dd-91a3-b312347ff317-dispersionconf\") pod \"swift-ring-rebalance-fc5n9\" (UID: \"4991fdbc-2d83-45dd-91a3-b312347ff317\") " pod="openstack/swift-ring-rebalance-fc5n9" Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.600756 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/a81bb985-4565-4aaa-b521-8d63d7d158af-ring-data-devices\") pod \"swift-ring-rebalance-qfxk8\" (UID: \"a81bb985-4565-4aaa-b521-8d63d7d158af\") " pod="openstack/swift-ring-rebalance-qfxk8" Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.600804 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/6d082326-495c-4078-974e-714379243884-etc-swift\") pod \"swift-storage-0\" (UID: \"6d082326-495c-4078-974e-714379243884\") " pod="openstack/swift-storage-0" Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.600834 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/4991fdbc-2d83-45dd-91a3-b312347ff317-ring-data-devices\") pod \"swift-ring-rebalance-fc5n9\" (UID: \"4991fdbc-2d83-45dd-91a3-b312347ff317\") " pod="openstack/swift-ring-rebalance-fc5n9" Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.600860 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/4991fdbc-2d83-45dd-91a3-b312347ff317-etc-swift\") pod \"swift-ring-rebalance-fc5n9\" (UID: 
\"4991fdbc-2d83-45dd-91a3-b312347ff317\") " pod="openstack/swift-ring-rebalance-fc5n9" Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.600885 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a81bb985-4565-4aaa-b521-8d63d7d158af-scripts\") pod \"swift-ring-rebalance-qfxk8\" (UID: \"a81bb985-4565-4aaa-b521-8d63d7d158af\") " pod="openstack/swift-ring-rebalance-qfxk8" Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.600911 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a81bb985-4565-4aaa-b521-8d63d7d158af-combined-ca-bundle\") pod \"swift-ring-rebalance-qfxk8\" (UID: \"a81bb985-4565-4aaa-b521-8d63d7d158af\") " pod="openstack/swift-ring-rebalance-qfxk8" Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.600934 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4991fdbc-2d83-45dd-91a3-b312347ff317-combined-ca-bundle\") pod \"swift-ring-rebalance-fc5n9\" (UID: \"4991fdbc-2d83-45dd-91a3-b312347ff317\") " pod="openstack/swift-ring-rebalance-fc5n9" Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.600969 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvg74\" (UniqueName: \"kubernetes.io/projected/a81bb985-4565-4aaa-b521-8d63d7d158af-kube-api-access-dvg74\") pod \"swift-ring-rebalance-qfxk8\" (UID: \"a81bb985-4565-4aaa-b521-8d63d7d158af\") " pod="openstack/swift-ring-rebalance-qfxk8" Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.600991 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4991fdbc-2d83-45dd-91a3-b312347ff317-scripts\") pod \"swift-ring-rebalance-fc5n9\" (UID: \"4991fdbc-2d83-45dd-91a3-b312347ff317\") " pod="openstack/swift-ring-rebalance-fc5n9" Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.601012 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/a81bb985-4565-4aaa-b521-8d63d7d158af-swiftconf\") pod \"swift-ring-rebalance-qfxk8\" (UID: \"a81bb985-4565-4aaa-b521-8d63d7d158af\") " pod="openstack/swift-ring-rebalance-qfxk8" Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.601030 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/4991fdbc-2d83-45dd-91a3-b312347ff317-swiftconf\") pod \"swift-ring-rebalance-fc5n9\" (UID: \"4991fdbc-2d83-45dd-91a3-b312347ff317\") " pod="openstack/swift-ring-rebalance-fc5n9" Jan 29 06:52:32 crc kubenswrapper[5017]: E0129 06:52:32.601129 5017 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 29 06:52:32 crc kubenswrapper[5017]: E0129 06:52:32.601188 5017 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 29 06:52:32 crc kubenswrapper[5017]: E0129 06:52:32.601254 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6d082326-495c-4078-974e-714379243884-etc-swift podName:6d082326-495c-4078-974e-714379243884 nodeName:}" failed. No retries permitted until 2026-01-29 06:52:33.601218128 +0000 UTC m=+1039.975665738 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/6d082326-495c-4078-974e-714379243884-etc-swift") pod "swift-storage-0" (UID: "6d082326-495c-4078-974e-714379243884") : configmap "swift-ring-files" not found Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.601317 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/a81bb985-4565-4aaa-b521-8d63d7d158af-etc-swift\") pod \"swift-ring-rebalance-qfxk8\" (UID: \"a81bb985-4565-4aaa-b521-8d63d7d158af\") " pod="openstack/swift-ring-rebalance-qfxk8" Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.602666 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4991fdbc-2d83-45dd-91a3-b312347ff317-scripts\") pod \"swift-ring-rebalance-fc5n9\" (UID: \"4991fdbc-2d83-45dd-91a3-b312347ff317\") " pod="openstack/swift-ring-rebalance-fc5n9" Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.602864 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a81bb985-4565-4aaa-b521-8d63d7d158af-scripts\") pod \"swift-ring-rebalance-qfxk8\" (UID: \"a81bb985-4565-4aaa-b521-8d63d7d158af\") " pod="openstack/swift-ring-rebalance-qfxk8" Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.601949 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/a81bb985-4565-4aaa-b521-8d63d7d158af-ring-data-devices\") pod \"swift-ring-rebalance-qfxk8\" (UID: \"a81bb985-4565-4aaa-b521-8d63d7d158af\") " pod="openstack/swift-ring-rebalance-qfxk8" Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.603673 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/4991fdbc-2d83-45dd-91a3-b312347ff317-etc-swift\") pod \"swift-ring-rebalance-fc5n9\" (UID: \"4991fdbc-2d83-45dd-91a3-b312347ff317\") " pod="openstack/swift-ring-rebalance-fc5n9" Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.604927 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/4991fdbc-2d83-45dd-91a3-b312347ff317-ring-data-devices\") pod \"swift-ring-rebalance-fc5n9\" (UID: \"4991fdbc-2d83-45dd-91a3-b312347ff317\") " pod="openstack/swift-ring-rebalance-fc5n9" Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.605630 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/4991fdbc-2d83-45dd-91a3-b312347ff317-swiftconf\") pod \"swift-ring-rebalance-fc5n9\" (UID: \"4991fdbc-2d83-45dd-91a3-b312347ff317\") " pod="openstack/swift-ring-rebalance-fc5n9" Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.608032 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a81bb985-4565-4aaa-b521-8d63d7d158af-combined-ca-bundle\") pod \"swift-ring-rebalance-qfxk8\" (UID: \"a81bb985-4565-4aaa-b521-8d63d7d158af\") " pod="openstack/swift-ring-rebalance-qfxk8" Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.608800 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/a81bb985-4565-4aaa-b521-8d63d7d158af-dispersionconf\") pod \"swift-ring-rebalance-qfxk8\" (UID: \"a81bb985-4565-4aaa-b521-8d63d7d158af\") " 
pod="openstack/swift-ring-rebalance-qfxk8" Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.610977 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4991fdbc-2d83-45dd-91a3-b312347ff317-combined-ca-bundle\") pod \"swift-ring-rebalance-fc5n9\" (UID: \"4991fdbc-2d83-45dd-91a3-b312347ff317\") " pod="openstack/swift-ring-rebalance-fc5n9" Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.611936 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/a81bb985-4565-4aaa-b521-8d63d7d158af-swiftconf\") pod \"swift-ring-rebalance-qfxk8\" (UID: \"a81bb985-4565-4aaa-b521-8d63d7d158af\") " pod="openstack/swift-ring-rebalance-qfxk8" Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.612478 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/4991fdbc-2d83-45dd-91a3-b312347ff317-dispersionconf\") pod \"swift-ring-rebalance-fc5n9\" (UID: \"4991fdbc-2d83-45dd-91a3-b312347ff317\") " pod="openstack/swift-ring-rebalance-fc5n9" Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.613809 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-qfxk8" Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.620144 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxlr8\" (UniqueName: \"kubernetes.io/projected/4991fdbc-2d83-45dd-91a3-b312347ff317-kube-api-access-kxlr8\") pod \"swift-ring-rebalance-fc5n9\" (UID: \"4991fdbc-2d83-45dd-91a3-b312347ff317\") " pod="openstack/swift-ring-rebalance-fc5n9" Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.620710 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvg74\" (UniqueName: \"kubernetes.io/projected/a81bb985-4565-4aaa-b521-8d63d7d158af-kube-api-access-dvg74\") pod \"swift-ring-rebalance-qfxk8\" (UID: \"a81bb985-4565-4aaa-b521-8d63d7d158af\") " pod="openstack/swift-ring-rebalance-qfxk8" Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.702352 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a81bb985-4565-4aaa-b521-8d63d7d158af-scripts\") pod \"a81bb985-4565-4aaa-b521-8d63d7d158af\" (UID: \"a81bb985-4565-4aaa-b521-8d63d7d158af\") " Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.702410 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dvg74\" (UniqueName: \"kubernetes.io/projected/a81bb985-4565-4aaa-b521-8d63d7d158af-kube-api-access-dvg74\") pod \"a81bb985-4565-4aaa-b521-8d63d7d158af\" (UID: \"a81bb985-4565-4aaa-b521-8d63d7d158af\") " Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.702538 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/a81bb985-4565-4aaa-b521-8d63d7d158af-ring-data-devices\") pod \"a81bb985-4565-4aaa-b521-8d63d7d158af\" (UID: \"a81bb985-4565-4aaa-b521-8d63d7d158af\") " Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.702594 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/a81bb985-4565-4aaa-b521-8d63d7d158af-dispersionconf\") pod \"a81bb985-4565-4aaa-b521-8d63d7d158af\" (UID: \"a81bb985-4565-4aaa-b521-8d63d7d158af\") " Jan 29 
06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.702627 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/a81bb985-4565-4aaa-b521-8d63d7d158af-swiftconf\") pod \"a81bb985-4565-4aaa-b521-8d63d7d158af\" (UID: \"a81bb985-4565-4aaa-b521-8d63d7d158af\") " Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.702688 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a81bb985-4565-4aaa-b521-8d63d7d158af-combined-ca-bundle\") pod \"a81bb985-4565-4aaa-b521-8d63d7d158af\" (UID: \"a81bb985-4565-4aaa-b521-8d63d7d158af\") " Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.702738 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/a81bb985-4565-4aaa-b521-8d63d7d158af-etc-swift\") pod \"a81bb985-4565-4aaa-b521-8d63d7d158af\" (UID: \"a81bb985-4565-4aaa-b521-8d63d7d158af\") " Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.702945 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a81bb985-4565-4aaa-b521-8d63d7d158af-scripts" (OuterVolumeSpecName: "scripts") pod "a81bb985-4565-4aaa-b521-8d63d7d158af" (UID: "a81bb985-4565-4aaa-b521-8d63d7d158af"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.704474 5017 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a81bb985-4565-4aaa-b521-8d63d7d158af-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.704262 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a81bb985-4565-4aaa-b521-8d63d7d158af-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "a81bb985-4565-4aaa-b521-8d63d7d158af" (UID: "a81bb985-4565-4aaa-b521-8d63d7d158af"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.704906 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a81bb985-4565-4aaa-b521-8d63d7d158af-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "a81bb985-4565-4aaa-b521-8d63d7d158af" (UID: "a81bb985-4565-4aaa-b521-8d63d7d158af"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.707771 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a81bb985-4565-4aaa-b521-8d63d7d158af-kube-api-access-dvg74" (OuterVolumeSpecName: "kube-api-access-dvg74") pod "a81bb985-4565-4aaa-b521-8d63d7d158af" (UID: "a81bb985-4565-4aaa-b521-8d63d7d158af"). InnerVolumeSpecName "kube-api-access-dvg74". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.708227 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a81bb985-4565-4aaa-b521-8d63d7d158af-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a81bb985-4565-4aaa-b521-8d63d7d158af" (UID: "a81bb985-4565-4aaa-b521-8d63d7d158af"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.708761 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a81bb985-4565-4aaa-b521-8d63d7d158af-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "a81bb985-4565-4aaa-b521-8d63d7d158af" (UID: "a81bb985-4565-4aaa-b521-8d63d7d158af"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.710901 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a81bb985-4565-4aaa-b521-8d63d7d158af-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "a81bb985-4565-4aaa-b521-8d63d7d158af" (UID: "a81bb985-4565-4aaa-b521-8d63d7d158af"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.762322 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-fc5n9" Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.806206 5017 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/a81bb985-4565-4aaa-b521-8d63d7d158af-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.806255 5017 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/a81bb985-4565-4aaa-b521-8d63d7d158af-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.806268 5017 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/a81bb985-4565-4aaa-b521-8d63d7d158af-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.806277 5017 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a81bb985-4565-4aaa-b521-8d63d7d158af-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.806292 5017 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/a81bb985-4565-4aaa-b521-8d63d7d158af-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 29 06:52:32 crc kubenswrapper[5017]: I0129 06:52:32.806303 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dvg74\" (UniqueName: \"kubernetes.io/projected/a81bb985-4565-4aaa-b521-8d63d7d158af-kube-api-access-dvg74\") on node \"crc\" DevicePath \"\"" Jan 29 06:52:33 crc kubenswrapper[5017]: I0129 06:52:33.262410 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-fc5n9"] Jan 29 06:52:33 crc kubenswrapper[5017]: W0129 06:52:33.270693 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4991fdbc_2d83_45dd_91a3_b312347ff317.slice/crio-9c663bf50ccd2dd17d93c053806918f93c57fda9a2bab7348ff9c942a9e43a62 WatchSource:0}: Error finding container 9c663bf50ccd2dd17d93c053806918f93c57fda9a2bab7348ff9c942a9e43a62: Status 404 returned error can't find the container with id 9c663bf50ccd2dd17d93c053806918f93c57fda9a2bab7348ff9c942a9e43a62 Jan 29 06:52:33 crc kubenswrapper[5017]: I0129 06:52:33.604557 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-67fdf7998c-vkl9j" event={"ID":"3f100233-315d-4338-815e-8f12beaeaaae","Type":"ContainerStarted","Data":"fccc9561a7f3897ac77ff53249df9d30469e8ab1cc6f35cc649eef07cd4b5527"} Jan 29 06:52:33 crc kubenswrapper[5017]: I0129 06:52:33.607171 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-qfxk8" Jan 29 06:52:33 crc kubenswrapper[5017]: I0129 06:52:33.608140 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-fc5n9" event={"ID":"4991fdbc-2d83-45dd-91a3-b312347ff317","Type":"ContainerStarted","Data":"9c663bf50ccd2dd17d93c053806918f93c57fda9a2bab7348ff9c942a9e43a62"} Jan 29 06:52:33 crc kubenswrapper[5017]: I0129 06:52:33.626454 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/6d082326-495c-4078-974e-714379243884-etc-swift\") pod \"swift-storage-0\" (UID: \"6d082326-495c-4078-974e-714379243884\") " pod="openstack/swift-storage-0" Jan 29 06:52:33 crc kubenswrapper[5017]: E0129 06:52:33.626665 5017 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 29 06:52:33 crc kubenswrapper[5017]: E0129 06:52:33.627129 5017 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 29 06:52:33 crc kubenswrapper[5017]: E0129 06:52:33.627309 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6d082326-495c-4078-974e-714379243884-etc-swift podName:6d082326-495c-4078-974e-714379243884 nodeName:}" failed. No retries permitted until 2026-01-29 06:52:35.627285217 +0000 UTC m=+1042.001732827 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/6d082326-495c-4078-974e-714379243884-etc-swift") pod "swift-storage-0" (UID: "6d082326-495c-4078-974e-714379243884") : configmap "swift-ring-files" not found Jan 29 06:52:33 crc kubenswrapper[5017]: I0129 06:52:33.663571 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-qfxk8"] Jan 29 06:52:33 crc kubenswrapper[5017]: I0129 06:52:33.672656 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-qfxk8"] Jan 29 06:52:33 crc kubenswrapper[5017]: I0129 06:52:33.863053 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-hbwpv"] Jan 29 06:52:33 crc kubenswrapper[5017]: I0129 06:52:33.864179 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-hbwpv" Jan 29 06:52:33 crc kubenswrapper[5017]: I0129 06:52:33.876148 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-hbwpv"] Jan 29 06:52:33 crc kubenswrapper[5017]: I0129 06:52:33.939716 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b192039c-4ffa-451a-8149-e15c107ac8f2-operator-scripts\") pod \"glance-db-create-hbwpv\" (UID: \"b192039c-4ffa-451a-8149-e15c107ac8f2\") " pod="openstack/glance-db-create-hbwpv" Jan 29 06:52:33 crc kubenswrapper[5017]: I0129 06:52:33.939948 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tlrp\" (UniqueName: \"kubernetes.io/projected/b192039c-4ffa-451a-8149-e15c107ac8f2-kube-api-access-6tlrp\") pod \"glance-db-create-hbwpv\" (UID: \"b192039c-4ffa-451a-8149-e15c107ac8f2\") " pod="openstack/glance-db-create-hbwpv" Jan 29 06:52:33 crc kubenswrapper[5017]: I0129 06:52:33.973327 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-a25d-account-create-update-fzx89"] Jan 29 06:52:33 crc kubenswrapper[5017]: I0129 06:52:33.975590 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-a25d-account-create-update-fzx89" Jan 29 06:52:33 crc kubenswrapper[5017]: I0129 06:52:33.977659 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 29 06:52:33 crc kubenswrapper[5017]: I0129 06:52:33.986214 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-a25d-account-create-update-fzx89"] Jan 29 06:52:34 crc kubenswrapper[5017]: I0129 06:52:34.042113 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93ce45d0-5d2b-42bf-8601-da8dbce0d3da-operator-scripts\") pod \"glance-a25d-account-create-update-fzx89\" (UID: \"93ce45d0-5d2b-42bf-8601-da8dbce0d3da\") " pod="openstack/glance-a25d-account-create-update-fzx89" Jan 29 06:52:34 crc kubenswrapper[5017]: I0129 06:52:34.042374 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6tlrp\" (UniqueName: \"kubernetes.io/projected/b192039c-4ffa-451a-8149-e15c107ac8f2-kube-api-access-6tlrp\") pod \"glance-db-create-hbwpv\" (UID: \"b192039c-4ffa-451a-8149-e15c107ac8f2\") " pod="openstack/glance-db-create-hbwpv" Jan 29 06:52:34 crc kubenswrapper[5017]: I0129 06:52:34.042466 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njn9j\" (UniqueName: \"kubernetes.io/projected/93ce45d0-5d2b-42bf-8601-da8dbce0d3da-kube-api-access-njn9j\") pod \"glance-a25d-account-create-update-fzx89\" (UID: \"93ce45d0-5d2b-42bf-8601-da8dbce0d3da\") " pod="openstack/glance-a25d-account-create-update-fzx89" Jan 29 06:52:34 crc kubenswrapper[5017]: I0129 06:52:34.042651 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b192039c-4ffa-451a-8149-e15c107ac8f2-operator-scripts\") pod \"glance-db-create-hbwpv\" (UID: \"b192039c-4ffa-451a-8149-e15c107ac8f2\") " pod="openstack/glance-db-create-hbwpv" Jan 29 06:52:34 crc kubenswrapper[5017]: I0129 06:52:34.044206 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/b192039c-4ffa-451a-8149-e15c107ac8f2-operator-scripts\") pod \"glance-db-create-hbwpv\" (UID: \"b192039c-4ffa-451a-8149-e15c107ac8f2\") " pod="openstack/glance-db-create-hbwpv" Jan 29 06:52:34 crc kubenswrapper[5017]: I0129 06:52:34.063703 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6tlrp\" (UniqueName: \"kubernetes.io/projected/b192039c-4ffa-451a-8149-e15c107ac8f2-kube-api-access-6tlrp\") pod \"glance-db-create-hbwpv\" (UID: \"b192039c-4ffa-451a-8149-e15c107ac8f2\") " pod="openstack/glance-db-create-hbwpv" Jan 29 06:52:34 crc kubenswrapper[5017]: I0129 06:52:34.145041 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njn9j\" (UniqueName: \"kubernetes.io/projected/93ce45d0-5d2b-42bf-8601-da8dbce0d3da-kube-api-access-njn9j\") pod \"glance-a25d-account-create-update-fzx89\" (UID: \"93ce45d0-5d2b-42bf-8601-da8dbce0d3da\") " pod="openstack/glance-a25d-account-create-update-fzx89" Jan 29 06:52:34 crc kubenswrapper[5017]: I0129 06:52:34.145560 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93ce45d0-5d2b-42bf-8601-da8dbce0d3da-operator-scripts\") pod \"glance-a25d-account-create-update-fzx89\" (UID: \"93ce45d0-5d2b-42bf-8601-da8dbce0d3da\") " pod="openstack/glance-a25d-account-create-update-fzx89" Jan 29 06:52:34 crc kubenswrapper[5017]: I0129 06:52:34.146406 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93ce45d0-5d2b-42bf-8601-da8dbce0d3da-operator-scripts\") pod \"glance-a25d-account-create-update-fzx89\" (UID: \"93ce45d0-5d2b-42bf-8601-da8dbce0d3da\") " pod="openstack/glance-a25d-account-create-update-fzx89" Jan 29 06:52:34 crc kubenswrapper[5017]: I0129 06:52:34.164593 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njn9j\" (UniqueName: \"kubernetes.io/projected/93ce45d0-5d2b-42bf-8601-da8dbce0d3da-kube-api-access-njn9j\") pod \"glance-a25d-account-create-update-fzx89\" (UID: \"93ce45d0-5d2b-42bf-8601-da8dbce0d3da\") " pod="openstack/glance-a25d-account-create-update-fzx89" Jan 29 06:52:34 crc kubenswrapper[5017]: I0129 06:52:34.182592 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-hbwpv" Jan 29 06:52:34 crc kubenswrapper[5017]: I0129 06:52:34.293701 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-a25d-account-create-update-fzx89" Jan 29 06:52:34 crc kubenswrapper[5017]: I0129 06:52:34.391758 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a81bb985-4565-4aaa-b521-8d63d7d158af" path="/var/lib/kubelet/pods/a81bb985-4565-4aaa-b521-8d63d7d158af/volumes" Jan 29 06:52:34 crc kubenswrapper[5017]: I0129 06:52:34.627728 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-67fdf7998c-vkl9j" Jan 29 06:52:34 crc kubenswrapper[5017]: I0129 06:52:34.708891 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-67fdf7998c-vkl9j" podStartSLOduration=4.708777405 podStartE2EDuration="4.708777405s" podCreationTimestamp="2026-01-29 06:52:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:52:34.70002613 +0000 UTC m=+1041.074473740" watchObservedRunningTime="2026-01-29 06:52:34.708777405 +0000 UTC m=+1041.083225015" Jan 29 06:52:34 crc kubenswrapper[5017]: I0129 06:52:34.738342 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-a25d-account-create-update-fzx89"] Jan 29 06:52:34 crc kubenswrapper[5017]: W0129 06:52:34.745600 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod93ce45d0_5d2b_42bf_8601_da8dbce0d3da.slice/crio-09ead01b0913a02035e44d64c0e0e3d9526cc02e3c7404bc6f1cf7390f4e07d4 WatchSource:0}: Error finding container 09ead01b0913a02035e44d64c0e0e3d9526cc02e3c7404bc6f1cf7390f4e07d4: Status 404 returned error can't find the container with id 09ead01b0913a02035e44d64c0e0e3d9526cc02e3c7404bc6f1cf7390f4e07d4 Jan 29 06:52:34 crc kubenswrapper[5017]: I0129 06:52:34.790389 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-hbwpv"] Jan 29 06:52:35 crc kubenswrapper[5017]: I0129 06:52:35.653917 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-cmvrz"] Jan 29 06:52:35 crc kubenswrapper[5017]: I0129 06:52:35.657440 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-cmvrz" Jan 29 06:52:35 crc kubenswrapper[5017]: I0129 06:52:35.662870 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-cmvrz"] Jan 29 06:52:35 crc kubenswrapper[5017]: I0129 06:52:35.679304 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 29 06:52:35 crc kubenswrapper[5017]: I0129 06:52:35.681866 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/6d082326-495c-4078-974e-714379243884-etc-swift\") pod \"swift-storage-0\" (UID: \"6d082326-495c-4078-974e-714379243884\") " pod="openstack/swift-storage-0" Jan 29 06:52:35 crc kubenswrapper[5017]: E0129 06:52:35.682136 5017 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 29 06:52:35 crc kubenswrapper[5017]: E0129 06:52:35.682216 5017 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 29 06:52:35 crc kubenswrapper[5017]: E0129 06:52:35.682362 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6d082326-495c-4078-974e-714379243884-etc-swift podName:6d082326-495c-4078-974e-714379243884 nodeName:}" failed. No retries permitted until 2026-01-29 06:52:39.682296585 +0000 UTC m=+1046.056744195 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/6d082326-495c-4078-974e-714379243884-etc-swift") pod "swift-storage-0" (UID: "6d082326-495c-4078-974e-714379243884") : configmap "swift-ring-files" not found Jan 29 06:52:35 crc kubenswrapper[5017]: I0129 06:52:35.699711 5017 generic.go:334] "Generic (PLEG): container finished" podID="b192039c-4ffa-451a-8149-e15c107ac8f2" containerID="05b0917387915cab5bed915929f39d6918f89117f895626caf87657510319aca" exitCode=0 Jan 29 06:52:35 crc kubenswrapper[5017]: I0129 06:52:35.699791 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-hbwpv" event={"ID":"b192039c-4ffa-451a-8149-e15c107ac8f2","Type":"ContainerDied","Data":"05b0917387915cab5bed915929f39d6918f89117f895626caf87657510319aca"} Jan 29 06:52:35 crc kubenswrapper[5017]: I0129 06:52:35.699825 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-hbwpv" event={"ID":"b192039c-4ffa-451a-8149-e15c107ac8f2","Type":"ContainerStarted","Data":"fa7b96e506177a47b74864ddf37350d2f4812db36895a5a8b3216e889116b12d"} Jan 29 06:52:35 crc kubenswrapper[5017]: I0129 06:52:35.702662 5017 generic.go:334] "Generic (PLEG): container finished" podID="93ce45d0-5d2b-42bf-8601-da8dbce0d3da" containerID="b62cfb03118814e317118c0c2e8d9bf6c156cdb23506386fa46d085f58c451c9" exitCode=0 Jan 29 06:52:35 crc kubenswrapper[5017]: I0129 06:52:35.703512 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-a25d-account-create-update-fzx89" event={"ID":"93ce45d0-5d2b-42bf-8601-da8dbce0d3da","Type":"ContainerDied","Data":"b62cfb03118814e317118c0c2e8d9bf6c156cdb23506386fa46d085f58c451c9"} Jan 29 06:52:35 crc kubenswrapper[5017]: I0129 06:52:35.703537 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-a25d-account-create-update-fzx89" 
event={"ID":"93ce45d0-5d2b-42bf-8601-da8dbce0d3da","Type":"ContainerStarted","Data":"09ead01b0913a02035e44d64c0e0e3d9526cc02e3c7404bc6f1cf7390f4e07d4"} Jan 29 06:52:35 crc kubenswrapper[5017]: I0129 06:52:35.784119 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de138628-d2b3-44ef-8043-b9aaf8f11615-operator-scripts\") pod \"root-account-create-update-cmvrz\" (UID: \"de138628-d2b3-44ef-8043-b9aaf8f11615\") " pod="openstack/root-account-create-update-cmvrz" Jan 29 06:52:35 crc kubenswrapper[5017]: I0129 06:52:35.784195 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n68k8\" (UniqueName: \"kubernetes.io/projected/de138628-d2b3-44ef-8043-b9aaf8f11615-kube-api-access-n68k8\") pod \"root-account-create-update-cmvrz\" (UID: \"de138628-d2b3-44ef-8043-b9aaf8f11615\") " pod="openstack/root-account-create-update-cmvrz" Jan 29 06:52:35 crc kubenswrapper[5017]: I0129 06:52:35.885916 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n68k8\" (UniqueName: \"kubernetes.io/projected/de138628-d2b3-44ef-8043-b9aaf8f11615-kube-api-access-n68k8\") pod \"root-account-create-update-cmvrz\" (UID: \"de138628-d2b3-44ef-8043-b9aaf8f11615\") " pod="openstack/root-account-create-update-cmvrz" Jan 29 06:52:35 crc kubenswrapper[5017]: I0129 06:52:35.886539 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de138628-d2b3-44ef-8043-b9aaf8f11615-operator-scripts\") pod \"root-account-create-update-cmvrz\" (UID: \"de138628-d2b3-44ef-8043-b9aaf8f11615\") " pod="openstack/root-account-create-update-cmvrz" Jan 29 06:52:35 crc kubenswrapper[5017]: I0129 06:52:35.892370 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de138628-d2b3-44ef-8043-b9aaf8f11615-operator-scripts\") pod \"root-account-create-update-cmvrz\" (UID: \"de138628-d2b3-44ef-8043-b9aaf8f11615\") " pod="openstack/root-account-create-update-cmvrz" Jan 29 06:52:35 crc kubenswrapper[5017]: I0129 06:52:35.910101 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n68k8\" (UniqueName: \"kubernetes.io/projected/de138628-d2b3-44ef-8043-b9aaf8f11615-kube-api-access-n68k8\") pod \"root-account-create-update-cmvrz\" (UID: \"de138628-d2b3-44ef-8043-b9aaf8f11615\") " pod="openstack/root-account-create-update-cmvrz" Jan 29 06:52:36 crc kubenswrapper[5017]: I0129 06:52:36.008377 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-cmvrz" Jan 29 06:52:37 crc kubenswrapper[5017]: I0129 06:52:37.038763 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 29 06:52:38 crc kubenswrapper[5017]: I0129 06:52:38.130241 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-hbwpv" Jan 29 06:52:38 crc kubenswrapper[5017]: I0129 06:52:38.240723 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b192039c-4ffa-451a-8149-e15c107ac8f2-operator-scripts\") pod \"b192039c-4ffa-451a-8149-e15c107ac8f2\" (UID: \"b192039c-4ffa-451a-8149-e15c107ac8f2\") " Jan 29 06:52:38 crc kubenswrapper[5017]: I0129 06:52:38.240878 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6tlrp\" (UniqueName: \"kubernetes.io/projected/b192039c-4ffa-451a-8149-e15c107ac8f2-kube-api-access-6tlrp\") pod \"b192039c-4ffa-451a-8149-e15c107ac8f2\" (UID: \"b192039c-4ffa-451a-8149-e15c107ac8f2\") " Jan 29 06:52:38 crc kubenswrapper[5017]: I0129 06:52:38.249832 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b192039c-4ffa-451a-8149-e15c107ac8f2-kube-api-access-6tlrp" (OuterVolumeSpecName: "kube-api-access-6tlrp") pod "b192039c-4ffa-451a-8149-e15c107ac8f2" (UID: "b192039c-4ffa-451a-8149-e15c107ac8f2"). InnerVolumeSpecName "kube-api-access-6tlrp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:52:38 crc kubenswrapper[5017]: I0129 06:52:38.257638 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b192039c-4ffa-451a-8149-e15c107ac8f2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b192039c-4ffa-451a-8149-e15c107ac8f2" (UID: "b192039c-4ffa-451a-8149-e15c107ac8f2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:52:38 crc kubenswrapper[5017]: I0129 06:52:38.285782 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-a25d-account-create-update-fzx89" Jan 29 06:52:38 crc kubenswrapper[5017]: I0129 06:52:38.345620 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-njn9j\" (UniqueName: \"kubernetes.io/projected/93ce45d0-5d2b-42bf-8601-da8dbce0d3da-kube-api-access-njn9j\") pod \"93ce45d0-5d2b-42bf-8601-da8dbce0d3da\" (UID: \"93ce45d0-5d2b-42bf-8601-da8dbce0d3da\") " Jan 29 06:52:38 crc kubenswrapper[5017]: I0129 06:52:38.345903 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93ce45d0-5d2b-42bf-8601-da8dbce0d3da-operator-scripts\") pod \"93ce45d0-5d2b-42bf-8601-da8dbce0d3da\" (UID: \"93ce45d0-5d2b-42bf-8601-da8dbce0d3da\") " Jan 29 06:52:38 crc kubenswrapper[5017]: I0129 06:52:38.347207 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6tlrp\" (UniqueName: \"kubernetes.io/projected/b192039c-4ffa-451a-8149-e15c107ac8f2-kube-api-access-6tlrp\") on node \"crc\" DevicePath \"\"" Jan 29 06:52:38 crc kubenswrapper[5017]: I0129 06:52:38.347223 5017 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b192039c-4ffa-451a-8149-e15c107ac8f2-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 06:52:38 crc kubenswrapper[5017]: I0129 06:52:38.352317 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93ce45d0-5d2b-42bf-8601-da8dbce0d3da-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "93ce45d0-5d2b-42bf-8601-da8dbce0d3da" (UID: "93ce45d0-5d2b-42bf-8601-da8dbce0d3da"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:52:38 crc kubenswrapper[5017]: I0129 06:52:38.364259 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93ce45d0-5d2b-42bf-8601-da8dbce0d3da-kube-api-access-njn9j" (OuterVolumeSpecName: "kube-api-access-njn9j") pod "93ce45d0-5d2b-42bf-8601-da8dbce0d3da" (UID: "93ce45d0-5d2b-42bf-8601-da8dbce0d3da"). InnerVolumeSpecName "kube-api-access-njn9j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:52:38 crc kubenswrapper[5017]: I0129 06:52:38.451427 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-njn9j\" (UniqueName: \"kubernetes.io/projected/93ce45d0-5d2b-42bf-8601-da8dbce0d3da-kube-api-access-njn9j\") on node \"crc\" DevicePath \"\"" Jan 29 06:52:38 crc kubenswrapper[5017]: I0129 06:52:38.451476 5017 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93ce45d0-5d2b-42bf-8601-da8dbce0d3da-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 06:52:38 crc kubenswrapper[5017]: I0129 06:52:38.750170 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-a25d-account-create-update-fzx89" Jan 29 06:52:38 crc kubenswrapper[5017]: I0129 06:52:38.750214 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-a25d-account-create-update-fzx89" event={"ID":"93ce45d0-5d2b-42bf-8601-da8dbce0d3da","Type":"ContainerDied","Data":"09ead01b0913a02035e44d64c0e0e3d9526cc02e3c7404bc6f1cf7390f4e07d4"} Jan 29 06:52:38 crc kubenswrapper[5017]: I0129 06:52:38.750319 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="09ead01b0913a02035e44d64c0e0e3d9526cc02e3c7404bc6f1cf7390f4e07d4" Jan 29 06:52:38 crc kubenswrapper[5017]: I0129 06:52:38.752300 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-cmvrz"] Jan 29 06:52:38 crc kubenswrapper[5017]: I0129 06:52:38.753386 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-hbwpv" event={"ID":"b192039c-4ffa-451a-8149-e15c107ac8f2","Type":"ContainerDied","Data":"fa7b96e506177a47b74864ddf37350d2f4812db36895a5a8b3216e889116b12d"} Jan 29 06:52:38 crc kubenswrapper[5017]: I0129 06:52:38.753443 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa7b96e506177a47b74864ddf37350d2f4812db36895a5a8b3216e889116b12d" Jan 29 06:52:38 crc kubenswrapper[5017]: I0129 06:52:38.753459 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-hbwpv" Jan 29 06:52:38 crc kubenswrapper[5017]: W0129 06:52:38.778270 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podde138628_d2b3_44ef_8043_b9aaf8f11615.slice/crio-4fed626273089afc06780cc18d333b4e14156794bdeba6d52e91ce042c0881f5 WatchSource:0}: Error finding container 4fed626273089afc06780cc18d333b4e14156794bdeba6d52e91ce042c0881f5: Status 404 returned error can't find the container with id 4fed626273089afc06780cc18d333b4e14156794bdeba6d52e91ce042c0881f5 Jan 29 06:52:39 crc kubenswrapper[5017]: I0129 06:52:39.780895 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/6d082326-495c-4078-974e-714379243884-etc-swift\") pod \"swift-storage-0\" (UID: \"6d082326-495c-4078-974e-714379243884\") " pod="openstack/swift-storage-0" Jan 29 06:52:39 crc kubenswrapper[5017]: E0129 06:52:39.781142 5017 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 29 06:52:39 crc kubenswrapper[5017]: E0129 06:52:39.781539 5017 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 29 06:52:39 crc kubenswrapper[5017]: E0129 06:52:39.781634 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6d082326-495c-4078-974e-714379243884-etc-swift podName:6d082326-495c-4078-974e-714379243884 nodeName:}" failed. No retries permitted until 2026-01-29 06:52:47.781606528 +0000 UTC m=+1054.156054148 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/6d082326-495c-4078-974e-714379243884-etc-swift") pod "swift-storage-0" (UID: "6d082326-495c-4078-974e-714379243884") : configmap "swift-ring-files" not found Jan 29 06:52:39 crc kubenswrapper[5017]: I0129 06:52:39.783661 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-fc5n9" event={"ID":"4991fdbc-2d83-45dd-91a3-b312347ff317","Type":"ContainerStarted","Data":"d6a842890092872bc99ef3cc4b7a0a17a34027b3b1b59d333f9b2250a24984af"} Jan 29 06:52:39 crc kubenswrapper[5017]: I0129 06:52:39.787259 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-cmvrz" event={"ID":"de138628-d2b3-44ef-8043-b9aaf8f11615","Type":"ContainerDied","Data":"d1aae7294767caefa69112c93ebe589a451ce29b834f11fdb8bd104f755b2457"} Jan 29 06:52:39 crc kubenswrapper[5017]: I0129 06:52:39.787984 5017 generic.go:334] "Generic (PLEG): container finished" podID="de138628-d2b3-44ef-8043-b9aaf8f11615" containerID="d1aae7294767caefa69112c93ebe589a451ce29b834f11fdb8bd104f755b2457" exitCode=0 Jan 29 06:52:39 crc kubenswrapper[5017]: I0129 06:52:39.788080 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-cmvrz" event={"ID":"de138628-d2b3-44ef-8043-b9aaf8f11615","Type":"ContainerStarted","Data":"4fed626273089afc06780cc18d333b4e14156794bdeba6d52e91ce042c0881f5"} Jan 29 06:52:39 crc kubenswrapper[5017]: I0129 06:52:39.813386 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-fc5n9" podStartSLOduration=2.965563102 podStartE2EDuration="7.813358137s" podCreationTimestamp="2026-01-29 06:52:32 +0000 UTC" firstStartedPulling="2026-01-29 06:52:33.27582965 +0000 UTC m=+1039.650277260" 
lastFinishedPulling="2026-01-29 06:52:38.123624685 +0000 UTC m=+1044.498072295" observedRunningTime="2026-01-29 06:52:39.809757008 +0000 UTC m=+1046.184204618" watchObservedRunningTime="2026-01-29 06:52:39.813358137 +0000 UTC m=+1046.187805747" Jan 29 06:52:40 crc kubenswrapper[5017]: I0129 06:52:40.968162 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-67fdf7998c-vkl9j" Jan 29 06:52:41 crc kubenswrapper[5017]: I0129 06:52:41.036564 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-586b989cdc-7t4wb"] Jan 29 06:52:41 crc kubenswrapper[5017]: I0129 06:52:41.036980 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-586b989cdc-7t4wb" podUID="03be44d6-3f68-4a0b-9137-836e5545ae9f" containerName="dnsmasq-dns" containerID="cri-o://88eef209623272dbe38674d64997ef421c0a3e91c4ad990fb61873b09e46a7d7" gracePeriod=10 Jan 29 06:52:41 crc kubenswrapper[5017]: I0129 06:52:41.220445 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-cmvrz" Jan 29 06:52:41 crc kubenswrapper[5017]: I0129 06:52:41.309945 5017 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-586b989cdc-7t4wb" podUID="03be44d6-3f68-4a0b-9137-836e5545ae9f" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.113:5353: connect: connection refused" Jan 29 06:52:41 crc kubenswrapper[5017]: I0129 06:52:41.312196 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de138628-d2b3-44ef-8043-b9aaf8f11615-operator-scripts\") pod \"de138628-d2b3-44ef-8043-b9aaf8f11615\" (UID: \"de138628-d2b3-44ef-8043-b9aaf8f11615\") " Jan 29 06:52:41 crc kubenswrapper[5017]: I0129 06:52:41.312285 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n68k8\" (UniqueName: \"kubernetes.io/projected/de138628-d2b3-44ef-8043-b9aaf8f11615-kube-api-access-n68k8\") pod \"de138628-d2b3-44ef-8043-b9aaf8f11615\" (UID: \"de138628-d2b3-44ef-8043-b9aaf8f11615\") " Jan 29 06:52:41 crc kubenswrapper[5017]: I0129 06:52:41.314022 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de138628-d2b3-44ef-8043-b9aaf8f11615-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "de138628-d2b3-44ef-8043-b9aaf8f11615" (UID: "de138628-d2b3-44ef-8043-b9aaf8f11615"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:52:41 crc kubenswrapper[5017]: I0129 06:52:41.321781 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de138628-d2b3-44ef-8043-b9aaf8f11615-kube-api-access-n68k8" (OuterVolumeSpecName: "kube-api-access-n68k8") pod "de138628-d2b3-44ef-8043-b9aaf8f11615" (UID: "de138628-d2b3-44ef-8043-b9aaf8f11615"). InnerVolumeSpecName "kube-api-access-n68k8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:52:41 crc kubenswrapper[5017]: I0129 06:52:41.414952 5017 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de138628-d2b3-44ef-8043-b9aaf8f11615-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 06:52:41 crc kubenswrapper[5017]: I0129 06:52:41.415016 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n68k8\" (UniqueName: \"kubernetes.io/projected/de138628-d2b3-44ef-8043-b9aaf8f11615-kube-api-access-n68k8\") on node \"crc\" DevicePath \"\"" Jan 29 06:52:41 crc kubenswrapper[5017]: I0129 06:52:41.625485 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-586b989cdc-7t4wb" Jan 29 06:52:41 crc kubenswrapper[5017]: I0129 06:52:41.722986 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k9wk7\" (UniqueName: \"kubernetes.io/projected/03be44d6-3f68-4a0b-9137-836e5545ae9f-kube-api-access-k9wk7\") pod \"03be44d6-3f68-4a0b-9137-836e5545ae9f\" (UID: \"03be44d6-3f68-4a0b-9137-836e5545ae9f\") " Jan 29 06:52:41 crc kubenswrapper[5017]: I0129 06:52:41.723087 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/03be44d6-3f68-4a0b-9137-836e5545ae9f-dns-svc\") pod \"03be44d6-3f68-4a0b-9137-836e5545ae9f\" (UID: \"03be44d6-3f68-4a0b-9137-836e5545ae9f\") " Jan 29 06:52:41 crc kubenswrapper[5017]: I0129 06:52:41.723114 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03be44d6-3f68-4a0b-9137-836e5545ae9f-config\") pod \"03be44d6-3f68-4a0b-9137-836e5545ae9f\" (UID: \"03be44d6-3f68-4a0b-9137-836e5545ae9f\") " Jan 29 06:52:41 crc kubenswrapper[5017]: I0129 06:52:41.723241 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/03be44d6-3f68-4a0b-9137-836e5545ae9f-ovsdbserver-nb\") pod \"03be44d6-3f68-4a0b-9137-836e5545ae9f\" (UID: \"03be44d6-3f68-4a0b-9137-836e5545ae9f\") " Jan 29 06:52:41 crc kubenswrapper[5017]: I0129 06:52:41.723280 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/03be44d6-3f68-4a0b-9137-836e5545ae9f-ovsdbserver-sb\") pod \"03be44d6-3f68-4a0b-9137-836e5545ae9f\" (UID: \"03be44d6-3f68-4a0b-9137-836e5545ae9f\") " Jan 29 06:52:41 crc kubenswrapper[5017]: I0129 06:52:41.729398 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03be44d6-3f68-4a0b-9137-836e5545ae9f-kube-api-access-k9wk7" (OuterVolumeSpecName: "kube-api-access-k9wk7") pod "03be44d6-3f68-4a0b-9137-836e5545ae9f" (UID: "03be44d6-3f68-4a0b-9137-836e5545ae9f"). InnerVolumeSpecName "kube-api-access-k9wk7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:52:41 crc kubenswrapper[5017]: I0129 06:52:41.765071 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03be44d6-3f68-4a0b-9137-836e5545ae9f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "03be44d6-3f68-4a0b-9137-836e5545ae9f" (UID: "03be44d6-3f68-4a0b-9137-836e5545ae9f"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:52:41 crc kubenswrapper[5017]: I0129 06:52:41.765176 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03be44d6-3f68-4a0b-9137-836e5545ae9f-config" (OuterVolumeSpecName: "config") pod "03be44d6-3f68-4a0b-9137-836e5545ae9f" (UID: "03be44d6-3f68-4a0b-9137-836e5545ae9f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:52:41 crc kubenswrapper[5017]: I0129 06:52:41.769447 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03be44d6-3f68-4a0b-9137-836e5545ae9f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "03be44d6-3f68-4a0b-9137-836e5545ae9f" (UID: "03be44d6-3f68-4a0b-9137-836e5545ae9f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:52:41 crc kubenswrapper[5017]: I0129 06:52:41.780536 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03be44d6-3f68-4a0b-9137-836e5545ae9f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "03be44d6-3f68-4a0b-9137-836e5545ae9f" (UID: "03be44d6-3f68-4a0b-9137-836e5545ae9f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:52:41 crc kubenswrapper[5017]: I0129 06:52:41.825885 5017 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/03be44d6-3f68-4a0b-9137-836e5545ae9f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 06:52:41 crc kubenswrapper[5017]: I0129 06:52:41.825923 5017 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/03be44d6-3f68-4a0b-9137-836e5545ae9f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 06:52:41 crc kubenswrapper[5017]: I0129 06:52:41.825938 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k9wk7\" (UniqueName: \"kubernetes.io/projected/03be44d6-3f68-4a0b-9137-836e5545ae9f-kube-api-access-k9wk7\") on node \"crc\" DevicePath \"\"" Jan 29 06:52:41 crc kubenswrapper[5017]: I0129 06:52:41.825967 5017 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/03be44d6-3f68-4a0b-9137-836e5545ae9f-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 06:52:41 crc kubenswrapper[5017]: I0129 06:52:41.825977 5017 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03be44d6-3f68-4a0b-9137-836e5545ae9f-config\") on node \"crc\" DevicePath \"\"" Jan 29 06:52:41 crc kubenswrapper[5017]: I0129 06:52:41.844179 5017 generic.go:334] "Generic (PLEG): container finished" podID="03be44d6-3f68-4a0b-9137-836e5545ae9f" containerID="88eef209623272dbe38674d64997ef421c0a3e91c4ad990fb61873b09e46a7d7" exitCode=0 Jan 29 06:52:41 crc kubenswrapper[5017]: I0129 06:52:41.844287 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-586b989cdc-7t4wb" Jan 29 06:52:41 crc kubenswrapper[5017]: I0129 06:52:41.844289 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586b989cdc-7t4wb" event={"ID":"03be44d6-3f68-4a0b-9137-836e5545ae9f","Type":"ContainerDied","Data":"88eef209623272dbe38674d64997ef421c0a3e91c4ad990fb61873b09e46a7d7"} Jan 29 06:52:41 crc kubenswrapper[5017]: I0129 06:52:41.844435 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586b989cdc-7t4wb" event={"ID":"03be44d6-3f68-4a0b-9137-836e5545ae9f","Type":"ContainerDied","Data":"492ce0bffd2b4b3af98457a58850a52f46bf706d703c4f9e36ac0caee70cc5f4"} Jan 29 06:52:41 crc kubenswrapper[5017]: I0129 06:52:41.844463 5017 scope.go:117] "RemoveContainer" containerID="88eef209623272dbe38674d64997ef421c0a3e91c4ad990fb61873b09e46a7d7" Jan 29 06:52:41 crc kubenswrapper[5017]: I0129 06:52:41.845811 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-cmvrz" event={"ID":"de138628-d2b3-44ef-8043-b9aaf8f11615","Type":"ContainerDied","Data":"4fed626273089afc06780cc18d333b4e14156794bdeba6d52e91ce042c0881f5"} Jan 29 06:52:41 crc kubenswrapper[5017]: I0129 06:52:41.845840 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4fed626273089afc06780cc18d333b4e14156794bdeba6d52e91ce042c0881f5" Jan 29 06:52:41 crc kubenswrapper[5017]: I0129 06:52:41.845888 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-cmvrz" Jan 29 06:52:41 crc kubenswrapper[5017]: I0129 06:52:41.907239 5017 scope.go:117] "RemoveContainer" containerID="ef002f1724c6d73e833aef38af5e6388e907913bb0a89c761c26353d5577253e" Jan 29 06:52:41 crc kubenswrapper[5017]: I0129 06:52:41.914901 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-586b989cdc-7t4wb"] Jan 29 06:52:41 crc kubenswrapper[5017]: I0129 06:52:41.937087 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-586b989cdc-7t4wb"] Jan 29 06:52:41 crc kubenswrapper[5017]: I0129 06:52:41.949171 5017 scope.go:117] "RemoveContainer" containerID="88eef209623272dbe38674d64997ef421c0a3e91c4ad990fb61873b09e46a7d7" Jan 29 06:52:41 crc kubenswrapper[5017]: E0129 06:52:41.953049 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88eef209623272dbe38674d64997ef421c0a3e91c4ad990fb61873b09e46a7d7\": container with ID starting with 88eef209623272dbe38674d64997ef421c0a3e91c4ad990fb61873b09e46a7d7 not found: ID does not exist" containerID="88eef209623272dbe38674d64997ef421c0a3e91c4ad990fb61873b09e46a7d7" Jan 29 06:52:41 crc kubenswrapper[5017]: I0129 06:52:41.953090 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88eef209623272dbe38674d64997ef421c0a3e91c4ad990fb61873b09e46a7d7"} err="failed to get container status \"88eef209623272dbe38674d64997ef421c0a3e91c4ad990fb61873b09e46a7d7\": rpc error: code = NotFound desc = could not find container \"88eef209623272dbe38674d64997ef421c0a3e91c4ad990fb61873b09e46a7d7\": container with ID starting with 88eef209623272dbe38674d64997ef421c0a3e91c4ad990fb61873b09e46a7d7 not found: ID does not exist" Jan 29 06:52:41 crc kubenswrapper[5017]: I0129 06:52:41.953119 5017 scope.go:117] "RemoveContainer" containerID="ef002f1724c6d73e833aef38af5e6388e907913bb0a89c761c26353d5577253e" Jan 29 06:52:41 crc 
kubenswrapper[5017]: E0129 06:52:41.957027 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef002f1724c6d73e833aef38af5e6388e907913bb0a89c761c26353d5577253e\": container with ID starting with ef002f1724c6d73e833aef38af5e6388e907913bb0a89c761c26353d5577253e not found: ID does not exist" containerID="ef002f1724c6d73e833aef38af5e6388e907913bb0a89c761c26353d5577253e" Jan 29 06:52:41 crc kubenswrapper[5017]: I0129 06:52:41.957053 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef002f1724c6d73e833aef38af5e6388e907913bb0a89c761c26353d5577253e"} err="failed to get container status \"ef002f1724c6d73e833aef38af5e6388e907913bb0a89c761c26353d5577253e\": rpc error: code = NotFound desc = could not find container \"ef002f1724c6d73e833aef38af5e6388e907913bb0a89c761c26353d5577253e\": container with ID starting with ef002f1724c6d73e833aef38af5e6388e907913bb0a89c761c26353d5577253e not found: ID does not exist" Jan 29 06:52:42 crc kubenswrapper[5017]: I0129 06:52:42.327218 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03be44d6-3f68-4a0b-9137-836e5545ae9f" path="/var/lib/kubelet/pods/03be44d6-3f68-4a0b-9137-836e5545ae9f/volumes" Jan 29 06:52:43 crc kubenswrapper[5017]: I0129 06:52:43.649771 5017 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-rtkrb" podUID="4c57c864-37e8-46b9-b30d-1762f3858984" containerName="ovn-controller" probeResult="failure" output=< Jan 29 06:52:43 crc kubenswrapper[5017]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 29 06:52:43 crc kubenswrapper[5017]: > Jan 29 06:52:43 crc kubenswrapper[5017]: I0129 06:52:43.689029 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-mrhnf" Jan 29 06:52:43 crc kubenswrapper[5017]: I0129 06:52:43.725524 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-mrhnf" Jan 29 06:52:43 crc kubenswrapper[5017]: I0129 06:52:43.869989 5017 generic.go:334] "Generic (PLEG): container finished" podID="d30b013f-453f-4282-8b22-2a5270027828" containerID="afda212927d2b2e8cc6edc8253fe7720c2d3d32a79a9247dbc74ea8bdcd22913" exitCode=0 Jan 29 06:52:43 crc kubenswrapper[5017]: I0129 06:52:43.870066 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d30b013f-453f-4282-8b22-2a5270027828","Type":"ContainerDied","Data":"afda212927d2b2e8cc6edc8253fe7720c2d3d32a79a9247dbc74ea8bdcd22913"} Jan 29 06:52:43 crc kubenswrapper[5017]: I0129 06:52:43.973525 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-rtkrb-config-5mbld"] Jan 29 06:52:43 crc kubenswrapper[5017]: E0129 06:52:43.974588 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b192039c-4ffa-451a-8149-e15c107ac8f2" containerName="mariadb-database-create" Jan 29 06:52:43 crc kubenswrapper[5017]: I0129 06:52:43.974615 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="b192039c-4ffa-451a-8149-e15c107ac8f2" containerName="mariadb-database-create" Jan 29 06:52:43 crc kubenswrapper[5017]: E0129 06:52:43.974646 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03be44d6-3f68-4a0b-9137-836e5545ae9f" containerName="init" Jan 29 06:52:43 crc kubenswrapper[5017]: I0129 06:52:43.974657 5017 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="03be44d6-3f68-4a0b-9137-836e5545ae9f" containerName="init" Jan 29 06:52:43 crc kubenswrapper[5017]: E0129 06:52:43.974674 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de138628-d2b3-44ef-8043-b9aaf8f11615" containerName="mariadb-account-create-update" Jan 29 06:52:43 crc kubenswrapper[5017]: I0129 06:52:43.974683 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="de138628-d2b3-44ef-8043-b9aaf8f11615" containerName="mariadb-account-create-update" Jan 29 06:52:43 crc kubenswrapper[5017]: E0129 06:52:43.974694 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93ce45d0-5d2b-42bf-8601-da8dbce0d3da" containerName="mariadb-account-create-update" Jan 29 06:52:43 crc kubenswrapper[5017]: I0129 06:52:43.974704 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="93ce45d0-5d2b-42bf-8601-da8dbce0d3da" containerName="mariadb-account-create-update" Jan 29 06:52:43 crc kubenswrapper[5017]: E0129 06:52:43.974723 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03be44d6-3f68-4a0b-9137-836e5545ae9f" containerName="dnsmasq-dns" Jan 29 06:52:43 crc kubenswrapper[5017]: I0129 06:52:43.974731 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="03be44d6-3f68-4a0b-9137-836e5545ae9f" containerName="dnsmasq-dns" Jan 29 06:52:43 crc kubenswrapper[5017]: I0129 06:52:43.974941 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="b192039c-4ffa-451a-8149-e15c107ac8f2" containerName="mariadb-database-create" Jan 29 06:52:43 crc kubenswrapper[5017]: I0129 06:52:43.974974 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="93ce45d0-5d2b-42bf-8601-da8dbce0d3da" containerName="mariadb-account-create-update" Jan 29 06:52:43 crc kubenswrapper[5017]: I0129 06:52:43.974993 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="03be44d6-3f68-4a0b-9137-836e5545ae9f" containerName="dnsmasq-dns" Jan 29 06:52:43 crc kubenswrapper[5017]: I0129 06:52:43.975005 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="de138628-d2b3-44ef-8043-b9aaf8f11615" containerName="mariadb-account-create-update" Jan 29 06:52:43 crc kubenswrapper[5017]: I0129 06:52:43.975740 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-rtkrb-config-5mbld" Jan 29 06:52:43 crc kubenswrapper[5017]: I0129 06:52:43.979904 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 29 06:52:44 crc kubenswrapper[5017]: I0129 06:52:44.000334 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-rtkrb-config-5mbld"] Jan 29 06:52:44 crc kubenswrapper[5017]: I0129 06:52:44.072036 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/23bad58b-4655-4cb9-8de5-9b5a4c37a23a-var-run-ovn\") pod \"ovn-controller-rtkrb-config-5mbld\" (UID: \"23bad58b-4655-4cb9-8de5-9b5a4c37a23a\") " pod="openstack/ovn-controller-rtkrb-config-5mbld" Jan 29 06:52:44 crc kubenswrapper[5017]: I0129 06:52:44.072115 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffvg9\" (UniqueName: \"kubernetes.io/projected/23bad58b-4655-4cb9-8de5-9b5a4c37a23a-kube-api-access-ffvg9\") pod \"ovn-controller-rtkrb-config-5mbld\" (UID: \"23bad58b-4655-4cb9-8de5-9b5a4c37a23a\") " pod="openstack/ovn-controller-rtkrb-config-5mbld" Jan 29 06:52:44 crc kubenswrapper[5017]: I0129 06:52:44.072382 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/23bad58b-4655-4cb9-8de5-9b5a4c37a23a-var-run\") pod \"ovn-controller-rtkrb-config-5mbld\" (UID: \"23bad58b-4655-4cb9-8de5-9b5a4c37a23a\") " pod="openstack/ovn-controller-rtkrb-config-5mbld" Jan 29 06:52:44 crc kubenswrapper[5017]: I0129 06:52:44.072458 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/23bad58b-4655-4cb9-8de5-9b5a4c37a23a-additional-scripts\") pod \"ovn-controller-rtkrb-config-5mbld\" (UID: \"23bad58b-4655-4cb9-8de5-9b5a4c37a23a\") " pod="openstack/ovn-controller-rtkrb-config-5mbld" Jan 29 06:52:44 crc kubenswrapper[5017]: I0129 06:52:44.072487 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/23bad58b-4655-4cb9-8de5-9b5a4c37a23a-scripts\") pod \"ovn-controller-rtkrb-config-5mbld\" (UID: \"23bad58b-4655-4cb9-8de5-9b5a4c37a23a\") " pod="openstack/ovn-controller-rtkrb-config-5mbld" Jan 29 06:52:44 crc kubenswrapper[5017]: I0129 06:52:44.072534 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/23bad58b-4655-4cb9-8de5-9b5a4c37a23a-var-log-ovn\") pod \"ovn-controller-rtkrb-config-5mbld\" (UID: \"23bad58b-4655-4cb9-8de5-9b5a4c37a23a\") " pod="openstack/ovn-controller-rtkrb-config-5mbld" Jan 29 06:52:44 crc kubenswrapper[5017]: I0129 06:52:44.174543 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/23bad58b-4655-4cb9-8de5-9b5a4c37a23a-var-run\") pod \"ovn-controller-rtkrb-config-5mbld\" (UID: \"23bad58b-4655-4cb9-8de5-9b5a4c37a23a\") " pod="openstack/ovn-controller-rtkrb-config-5mbld" Jan 29 06:52:44 crc kubenswrapper[5017]: I0129 06:52:44.174614 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: 
\"kubernetes.io/configmap/23bad58b-4655-4cb9-8de5-9b5a4c37a23a-additional-scripts\") pod \"ovn-controller-rtkrb-config-5mbld\" (UID: \"23bad58b-4655-4cb9-8de5-9b5a4c37a23a\") " pod="openstack/ovn-controller-rtkrb-config-5mbld" Jan 29 06:52:44 crc kubenswrapper[5017]: I0129 06:52:44.174634 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/23bad58b-4655-4cb9-8de5-9b5a4c37a23a-scripts\") pod \"ovn-controller-rtkrb-config-5mbld\" (UID: \"23bad58b-4655-4cb9-8de5-9b5a4c37a23a\") " pod="openstack/ovn-controller-rtkrb-config-5mbld" Jan 29 06:52:44 crc kubenswrapper[5017]: I0129 06:52:44.174659 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/23bad58b-4655-4cb9-8de5-9b5a4c37a23a-var-log-ovn\") pod \"ovn-controller-rtkrb-config-5mbld\" (UID: \"23bad58b-4655-4cb9-8de5-9b5a4c37a23a\") " pod="openstack/ovn-controller-rtkrb-config-5mbld" Jan 29 06:52:44 crc kubenswrapper[5017]: I0129 06:52:44.174703 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/23bad58b-4655-4cb9-8de5-9b5a4c37a23a-var-run-ovn\") pod \"ovn-controller-rtkrb-config-5mbld\" (UID: \"23bad58b-4655-4cb9-8de5-9b5a4c37a23a\") " pod="openstack/ovn-controller-rtkrb-config-5mbld" Jan 29 06:52:44 crc kubenswrapper[5017]: I0129 06:52:44.174731 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ffvg9\" (UniqueName: \"kubernetes.io/projected/23bad58b-4655-4cb9-8de5-9b5a4c37a23a-kube-api-access-ffvg9\") pod \"ovn-controller-rtkrb-config-5mbld\" (UID: \"23bad58b-4655-4cb9-8de5-9b5a4c37a23a\") " pod="openstack/ovn-controller-rtkrb-config-5mbld" Jan 29 06:52:44 crc kubenswrapper[5017]: I0129 06:52:44.175568 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/23bad58b-4655-4cb9-8de5-9b5a4c37a23a-var-log-ovn\") pod \"ovn-controller-rtkrb-config-5mbld\" (UID: \"23bad58b-4655-4cb9-8de5-9b5a4c37a23a\") " pod="openstack/ovn-controller-rtkrb-config-5mbld" Jan 29 06:52:44 crc kubenswrapper[5017]: I0129 06:52:44.175674 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/23bad58b-4655-4cb9-8de5-9b5a4c37a23a-var-run-ovn\") pod \"ovn-controller-rtkrb-config-5mbld\" (UID: \"23bad58b-4655-4cb9-8de5-9b5a4c37a23a\") " pod="openstack/ovn-controller-rtkrb-config-5mbld" Jan 29 06:52:44 crc kubenswrapper[5017]: I0129 06:52:44.176347 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/23bad58b-4655-4cb9-8de5-9b5a4c37a23a-additional-scripts\") pod \"ovn-controller-rtkrb-config-5mbld\" (UID: \"23bad58b-4655-4cb9-8de5-9b5a4c37a23a\") " pod="openstack/ovn-controller-rtkrb-config-5mbld" Jan 29 06:52:44 crc kubenswrapper[5017]: I0129 06:52:44.177407 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/23bad58b-4655-4cb9-8de5-9b5a4c37a23a-scripts\") pod \"ovn-controller-rtkrb-config-5mbld\" (UID: \"23bad58b-4655-4cb9-8de5-9b5a4c37a23a\") " pod="openstack/ovn-controller-rtkrb-config-5mbld" Jan 29 06:52:44 crc kubenswrapper[5017]: I0129 06:52:44.177464 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: 
\"kubernetes.io/host-path/23bad58b-4655-4cb9-8de5-9b5a4c37a23a-var-run\") pod \"ovn-controller-rtkrb-config-5mbld\" (UID: \"23bad58b-4655-4cb9-8de5-9b5a4c37a23a\") " pod="openstack/ovn-controller-rtkrb-config-5mbld" Jan 29 06:52:44 crc kubenswrapper[5017]: I0129 06:52:44.200695 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ffvg9\" (UniqueName: \"kubernetes.io/projected/23bad58b-4655-4cb9-8de5-9b5a4c37a23a-kube-api-access-ffvg9\") pod \"ovn-controller-rtkrb-config-5mbld\" (UID: \"23bad58b-4655-4cb9-8de5-9b5a4c37a23a\") " pod="openstack/ovn-controller-rtkrb-config-5mbld" Jan 29 06:52:44 crc kubenswrapper[5017]: I0129 06:52:44.291683 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-2qnw2"] Jan 29 06:52:44 crc kubenswrapper[5017]: I0129 06:52:44.292940 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-2qnw2" Jan 29 06:52:44 crc kubenswrapper[5017]: I0129 06:52:44.298100 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 29 06:52:44 crc kubenswrapper[5017]: I0129 06:52:44.298715 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-5m2bw" Jan 29 06:52:44 crc kubenswrapper[5017]: I0129 06:52:44.312784 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-2qnw2"] Jan 29 06:52:44 crc kubenswrapper[5017]: I0129 06:52:44.378378 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c06828d9-6c4d-4228-adc6-3788f22ae732-config-data\") pod \"glance-db-sync-2qnw2\" (UID: \"c06828d9-6c4d-4228-adc6-3788f22ae732\") " pod="openstack/glance-db-sync-2qnw2" Jan 29 06:52:44 crc kubenswrapper[5017]: I0129 06:52:44.378474 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c06828d9-6c4d-4228-adc6-3788f22ae732-db-sync-config-data\") pod \"glance-db-sync-2qnw2\" (UID: \"c06828d9-6c4d-4228-adc6-3788f22ae732\") " pod="openstack/glance-db-sync-2qnw2" Jan 29 06:52:44 crc kubenswrapper[5017]: I0129 06:52:44.378531 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8jgl\" (UniqueName: \"kubernetes.io/projected/c06828d9-6c4d-4228-adc6-3788f22ae732-kube-api-access-v8jgl\") pod \"glance-db-sync-2qnw2\" (UID: \"c06828d9-6c4d-4228-adc6-3788f22ae732\") " pod="openstack/glance-db-sync-2qnw2" Jan 29 06:52:44 crc kubenswrapper[5017]: I0129 06:52:44.378717 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c06828d9-6c4d-4228-adc6-3788f22ae732-combined-ca-bundle\") pod \"glance-db-sync-2qnw2\" (UID: \"c06828d9-6c4d-4228-adc6-3788f22ae732\") " pod="openstack/glance-db-sync-2qnw2" Jan 29 06:52:44 crc kubenswrapper[5017]: I0129 06:52:44.404776 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-rtkrb-config-5mbld" Jan 29 06:52:44 crc kubenswrapper[5017]: I0129 06:52:44.480510 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c06828d9-6c4d-4228-adc6-3788f22ae732-db-sync-config-data\") pod \"glance-db-sync-2qnw2\" (UID: \"c06828d9-6c4d-4228-adc6-3788f22ae732\") " pod="openstack/glance-db-sync-2qnw2" Jan 29 06:52:44 crc kubenswrapper[5017]: I0129 06:52:44.480631 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8jgl\" (UniqueName: \"kubernetes.io/projected/c06828d9-6c4d-4228-adc6-3788f22ae732-kube-api-access-v8jgl\") pod \"glance-db-sync-2qnw2\" (UID: \"c06828d9-6c4d-4228-adc6-3788f22ae732\") " pod="openstack/glance-db-sync-2qnw2" Jan 29 06:52:44 crc kubenswrapper[5017]: I0129 06:52:44.480690 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c06828d9-6c4d-4228-adc6-3788f22ae732-combined-ca-bundle\") pod \"glance-db-sync-2qnw2\" (UID: \"c06828d9-6c4d-4228-adc6-3788f22ae732\") " pod="openstack/glance-db-sync-2qnw2" Jan 29 06:52:44 crc kubenswrapper[5017]: I0129 06:52:44.480802 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c06828d9-6c4d-4228-adc6-3788f22ae732-config-data\") pod \"glance-db-sync-2qnw2\" (UID: \"c06828d9-6c4d-4228-adc6-3788f22ae732\") " pod="openstack/glance-db-sync-2qnw2" Jan 29 06:52:44 crc kubenswrapper[5017]: I0129 06:52:44.485069 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c06828d9-6c4d-4228-adc6-3788f22ae732-db-sync-config-data\") pod \"glance-db-sync-2qnw2\" (UID: \"c06828d9-6c4d-4228-adc6-3788f22ae732\") " pod="openstack/glance-db-sync-2qnw2" Jan 29 06:52:44 crc kubenswrapper[5017]: I0129 06:52:44.485145 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c06828d9-6c4d-4228-adc6-3788f22ae732-config-data\") pod \"glance-db-sync-2qnw2\" (UID: \"c06828d9-6c4d-4228-adc6-3788f22ae732\") " pod="openstack/glance-db-sync-2qnw2" Jan 29 06:52:44 crc kubenswrapper[5017]: I0129 06:52:44.486554 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c06828d9-6c4d-4228-adc6-3788f22ae732-combined-ca-bundle\") pod \"glance-db-sync-2qnw2\" (UID: \"c06828d9-6c4d-4228-adc6-3788f22ae732\") " pod="openstack/glance-db-sync-2qnw2" Jan 29 06:52:44 crc kubenswrapper[5017]: I0129 06:52:44.504737 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8jgl\" (UniqueName: \"kubernetes.io/projected/c06828d9-6c4d-4228-adc6-3788f22ae732-kube-api-access-v8jgl\") pod \"glance-db-sync-2qnw2\" (UID: \"c06828d9-6c4d-4228-adc6-3788f22ae732\") " pod="openstack/glance-db-sync-2qnw2" Jan 29 06:52:44 crc kubenswrapper[5017]: I0129 06:52:44.610524 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-2qnw2" Jan 29 06:52:44 crc kubenswrapper[5017]: I0129 06:52:44.865826 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-rtkrb-config-5mbld"] Jan 29 06:52:44 crc kubenswrapper[5017]: I0129 06:52:44.924779 5017 generic.go:334] "Generic (PLEG): container finished" podID="5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a" containerID="5620e40e37943966428bb68f9e7fc93c03515596e012174ed760e46fcfaeefcb" exitCode=0 Jan 29 06:52:44 crc kubenswrapper[5017]: I0129 06:52:44.924841 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a","Type":"ContainerDied","Data":"5620e40e37943966428bb68f9e7fc93c03515596e012174ed760e46fcfaeefcb"} Jan 29 06:52:45 crc kubenswrapper[5017]: I0129 06:52:45.422332 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-2qnw2"] Jan 29 06:52:45 crc kubenswrapper[5017]: I0129 06:52:45.952967 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d30b013f-453f-4282-8b22-2a5270027828","Type":"ContainerStarted","Data":"023038cdbf9696dd93c47d604450e36f0a426b06c4dd797fb3a475e6afec3f3d"} Jan 29 06:52:45 crc kubenswrapper[5017]: I0129 06:52:45.953739 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 29 06:52:45 crc kubenswrapper[5017]: I0129 06:52:45.957145 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-rtkrb-config-5mbld" event={"ID":"23bad58b-4655-4cb9-8de5-9b5a4c37a23a","Type":"ContainerStarted","Data":"3fbc7f0307ba48e49b763df81a77e31cf68a82ad47bf30dc6675ba61085c7a13"} Jan 29 06:52:45 crc kubenswrapper[5017]: I0129 06:52:45.957196 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-rtkrb-config-5mbld" event={"ID":"23bad58b-4655-4cb9-8de5-9b5a4c37a23a","Type":"ContainerStarted","Data":"0b142b786a8b3c5e6fdb79864071eec11f62d20796a2ff35d6e036c591b5c715"} Jan 29 06:52:45 crc kubenswrapper[5017]: I0129 06:52:45.959402 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a","Type":"ContainerStarted","Data":"31ad6d38c7cac48292e7dddc2454aef26b5d737c3c99ba9b9446defaee858a9a"} Jan 29 06:52:45 crc kubenswrapper[5017]: I0129 06:52:45.959785 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 29 06:52:45 crc kubenswrapper[5017]: I0129 06:52:45.962373 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-2qnw2" event={"ID":"c06828d9-6c4d-4228-adc6-3788f22ae732","Type":"ContainerStarted","Data":"22e58967521156eea427d7e0c0c5d6385477761824e6da4b0838f191913aaa57"} Jan 29 06:52:45 crc kubenswrapper[5017]: I0129 06:52:45.989859 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=38.137722674 podStartE2EDuration="1m12.989821511s" podCreationTimestamp="2026-01-29 06:51:33 +0000 UTC" firstStartedPulling="2026-01-29 06:51:35.198241184 +0000 UTC m=+981.572688794" lastFinishedPulling="2026-01-29 06:52:10.050340021 +0000 UTC m=+1016.424787631" observedRunningTime="2026-01-29 06:52:45.985030814 +0000 UTC m=+1052.359478424" watchObservedRunningTime="2026-01-29 06:52:45.989821511 +0000 UTC m=+1052.364269121" Jan 29 06:52:46 crc kubenswrapper[5017]: I0129 06:52:46.049613 5017 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=38.189367609 podStartE2EDuration="1m12.049584936s" podCreationTimestamp="2026-01-29 06:51:34 +0000 UTC" firstStartedPulling="2026-01-29 06:51:36.091709531 +0000 UTC m=+982.466157141" lastFinishedPulling="2026-01-29 06:52:09.951926828 +0000 UTC m=+1016.326374468" observedRunningTime="2026-01-29 06:52:46.041444697 +0000 UTC m=+1052.415892317" watchObservedRunningTime="2026-01-29 06:52:46.049584936 +0000 UTC m=+1052.424032556" Jan 29 06:52:46 crc kubenswrapper[5017]: I0129 06:52:46.976362 5017 generic.go:334] "Generic (PLEG): container finished" podID="23bad58b-4655-4cb9-8de5-9b5a4c37a23a" containerID="3fbc7f0307ba48e49b763df81a77e31cf68a82ad47bf30dc6675ba61085c7a13" exitCode=0 Jan 29 06:52:46 crc kubenswrapper[5017]: I0129 06:52:46.976537 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-rtkrb-config-5mbld" event={"ID":"23bad58b-4655-4cb9-8de5-9b5a4c37a23a","Type":"ContainerDied","Data":"3fbc7f0307ba48e49b763df81a77e31cf68a82ad47bf30dc6675ba61085c7a13"} Jan 29 06:52:47 crc kubenswrapper[5017]: I0129 06:52:47.009520 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-cmvrz"] Jan 29 06:52:47 crc kubenswrapper[5017]: I0129 06:52:47.017737 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-cmvrz"] Jan 29 06:52:47 crc kubenswrapper[5017]: I0129 06:52:47.349999 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-rtkrb-config-5mbld" Jan 29 06:52:47 crc kubenswrapper[5017]: I0129 06:52:47.471155 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/23bad58b-4655-4cb9-8de5-9b5a4c37a23a-var-run-ovn\") pod \"23bad58b-4655-4cb9-8de5-9b5a4c37a23a\" (UID: \"23bad58b-4655-4cb9-8de5-9b5a4c37a23a\") " Jan 29 06:52:47 crc kubenswrapper[5017]: I0129 06:52:47.471288 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/23bad58b-4655-4cb9-8de5-9b5a4c37a23a-additional-scripts\") pod \"23bad58b-4655-4cb9-8de5-9b5a4c37a23a\" (UID: \"23bad58b-4655-4cb9-8de5-9b5a4c37a23a\") " Jan 29 06:52:47 crc kubenswrapper[5017]: I0129 06:52:47.471333 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/23bad58b-4655-4cb9-8de5-9b5a4c37a23a-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "23bad58b-4655-4cb9-8de5-9b5a4c37a23a" (UID: "23bad58b-4655-4cb9-8de5-9b5a4c37a23a"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 06:52:47 crc kubenswrapper[5017]: I0129 06:52:47.471372 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/23bad58b-4655-4cb9-8de5-9b5a4c37a23a-var-run\") pod \"23bad58b-4655-4cb9-8de5-9b5a4c37a23a\" (UID: \"23bad58b-4655-4cb9-8de5-9b5a4c37a23a\") " Jan 29 06:52:47 crc kubenswrapper[5017]: I0129 06:52:47.471398 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/23bad58b-4655-4cb9-8de5-9b5a4c37a23a-var-log-ovn\") pod \"23bad58b-4655-4cb9-8de5-9b5a4c37a23a\" (UID: \"23bad58b-4655-4cb9-8de5-9b5a4c37a23a\") " Jan 29 06:52:47 crc kubenswrapper[5017]: I0129 06:52:47.471444 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/23bad58b-4655-4cb9-8de5-9b5a4c37a23a-scripts\") pod \"23bad58b-4655-4cb9-8de5-9b5a4c37a23a\" (UID: \"23bad58b-4655-4cb9-8de5-9b5a4c37a23a\") " Jan 29 06:52:47 crc kubenswrapper[5017]: I0129 06:52:47.471492 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ffvg9\" (UniqueName: \"kubernetes.io/projected/23bad58b-4655-4cb9-8de5-9b5a4c37a23a-kube-api-access-ffvg9\") pod \"23bad58b-4655-4cb9-8de5-9b5a4c37a23a\" (UID: \"23bad58b-4655-4cb9-8de5-9b5a4c37a23a\") " Jan 29 06:52:47 crc kubenswrapper[5017]: I0129 06:52:47.472929 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/23bad58b-4655-4cb9-8de5-9b5a4c37a23a-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "23bad58b-4655-4cb9-8de5-9b5a4c37a23a" (UID: "23bad58b-4655-4cb9-8de5-9b5a4c37a23a"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 06:52:47 crc kubenswrapper[5017]: I0129 06:52:47.472965 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/23bad58b-4655-4cb9-8de5-9b5a4c37a23a-var-run" (OuterVolumeSpecName: "var-run") pod "23bad58b-4655-4cb9-8de5-9b5a4c37a23a" (UID: "23bad58b-4655-4cb9-8de5-9b5a4c37a23a"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 06:52:47 crc kubenswrapper[5017]: I0129 06:52:47.485509 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23bad58b-4655-4cb9-8de5-9b5a4c37a23a-scripts" (OuterVolumeSpecName: "scripts") pod "23bad58b-4655-4cb9-8de5-9b5a4c37a23a" (UID: "23bad58b-4655-4cb9-8de5-9b5a4c37a23a"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:52:47 crc kubenswrapper[5017]: I0129 06:52:47.492244 5017 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/23bad58b-4655-4cb9-8de5-9b5a4c37a23a-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 29 06:52:47 crc kubenswrapper[5017]: I0129 06:52:47.492290 5017 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/23bad58b-4655-4cb9-8de5-9b5a4c37a23a-var-run\") on node \"crc\" DevicePath \"\"" Jan 29 06:52:47 crc kubenswrapper[5017]: I0129 06:52:47.492305 5017 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/23bad58b-4655-4cb9-8de5-9b5a4c37a23a-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 29 06:52:47 crc kubenswrapper[5017]: I0129 06:52:47.492318 5017 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/23bad58b-4655-4cb9-8de5-9b5a4c37a23a-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 06:52:47 crc kubenswrapper[5017]: I0129 06:52:47.496109 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23bad58b-4655-4cb9-8de5-9b5a4c37a23a-kube-api-access-ffvg9" (OuterVolumeSpecName: "kube-api-access-ffvg9") pod "23bad58b-4655-4cb9-8de5-9b5a4c37a23a" (UID: "23bad58b-4655-4cb9-8de5-9b5a4c37a23a"). InnerVolumeSpecName "kube-api-access-ffvg9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:52:47 crc kubenswrapper[5017]: I0129 06:52:47.497556 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23bad58b-4655-4cb9-8de5-9b5a4c37a23a-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "23bad58b-4655-4cb9-8de5-9b5a4c37a23a" (UID: "23bad58b-4655-4cb9-8de5-9b5a4c37a23a"). InnerVolumeSpecName "additional-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:52:47 crc kubenswrapper[5017]: I0129 06:52:47.594790 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ffvg9\" (UniqueName: \"kubernetes.io/projected/23bad58b-4655-4cb9-8de5-9b5a4c37a23a-kube-api-access-ffvg9\") on node \"crc\" DevicePath \"\"" Jan 29 06:52:47 crc kubenswrapper[5017]: I0129 06:52:47.594845 5017 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/23bad58b-4655-4cb9-8de5-9b5a4c37a23a-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 06:52:47 crc kubenswrapper[5017]: I0129 06:52:47.799401 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/6d082326-495c-4078-974e-714379243884-etc-swift\") pod \"swift-storage-0\" (UID: \"6d082326-495c-4078-974e-714379243884\") " pod="openstack/swift-storage-0" Jan 29 06:52:47 crc kubenswrapper[5017]: E0129 06:52:47.799599 5017 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 29 06:52:47 crc kubenswrapper[5017]: E0129 06:52:47.799700 5017 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 29 06:52:47 crc kubenswrapper[5017]: E0129 06:52:47.799777 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6d082326-495c-4078-974e-714379243884-etc-swift podName:6d082326-495c-4078-974e-714379243884 nodeName:}" failed. No retries permitted until 2026-01-29 06:53:03.79975635 +0000 UTC m=+1070.174203960 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/6d082326-495c-4078-974e-714379243884-etc-swift") pod "swift-storage-0" (UID: "6d082326-495c-4078-974e-714379243884") : configmap "swift-ring-files" not found Jan 29 06:52:47 crc kubenswrapper[5017]: I0129 06:52:47.988750 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-rtkrb-config-5mbld" event={"ID":"23bad58b-4655-4cb9-8de5-9b5a4c37a23a","Type":"ContainerDied","Data":"0b142b786a8b3c5e6fdb79864071eec11f62d20796a2ff35d6e036c591b5c715"} Jan 29 06:52:47 crc kubenswrapper[5017]: I0129 06:52:47.988804 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0b142b786a8b3c5e6fdb79864071eec11f62d20796a2ff35d6e036c591b5c715" Jan 29 06:52:47 crc kubenswrapper[5017]: I0129 06:52:47.988819 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-rtkrb-config-5mbld" Jan 29 06:52:47 crc kubenswrapper[5017]: I0129 06:52:47.992833 5017 generic.go:334] "Generic (PLEG): container finished" podID="4991fdbc-2d83-45dd-91a3-b312347ff317" containerID="d6a842890092872bc99ef3cc4b7a0a17a34027b3b1b59d333f9b2250a24984af" exitCode=0 Jan 29 06:52:47 crc kubenswrapper[5017]: I0129 06:52:47.992883 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-fc5n9" event={"ID":"4991fdbc-2d83-45dd-91a3-b312347ff317","Type":"ContainerDied","Data":"d6a842890092872bc99ef3cc4b7a0a17a34027b3b1b59d333f9b2250a24984af"} Jan 29 06:52:48 crc kubenswrapper[5017]: I0129 06:52:48.329653 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de138628-d2b3-44ef-8043-b9aaf8f11615" path="/var/lib/kubelet/pods/de138628-d2b3-44ef-8043-b9aaf8f11615/volumes" Jan 29 06:52:48 crc kubenswrapper[5017]: I0129 06:52:48.478889 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-rtkrb-config-5mbld"] Jan 29 06:52:48 crc kubenswrapper[5017]: I0129 06:52:48.487508 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-rtkrb-config-5mbld"] Jan 29 06:52:48 crc kubenswrapper[5017]: I0129 06:52:48.574408 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-rtkrb-config-747fl"] Jan 29 06:52:48 crc kubenswrapper[5017]: E0129 06:52:48.574938 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23bad58b-4655-4cb9-8de5-9b5a4c37a23a" containerName="ovn-config" Jan 29 06:52:48 crc kubenswrapper[5017]: I0129 06:52:48.574979 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="23bad58b-4655-4cb9-8de5-9b5a4c37a23a" containerName="ovn-config" Jan 29 06:52:48 crc kubenswrapper[5017]: I0129 06:52:48.576973 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="23bad58b-4655-4cb9-8de5-9b5a4c37a23a" containerName="ovn-config" Jan 29 06:52:48 crc kubenswrapper[5017]: I0129 06:52:48.577583 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-rtkrb-config-747fl" Jan 29 06:52:48 crc kubenswrapper[5017]: I0129 06:52:48.582340 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 29 06:52:48 crc kubenswrapper[5017]: I0129 06:52:48.592673 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-rtkrb-config-747fl"] Jan 29 06:52:48 crc kubenswrapper[5017]: I0129 06:52:48.622149 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/23f5eb62-17f5-4d9a-a81a-5981e448f29c-scripts\") pod \"ovn-controller-rtkrb-config-747fl\" (UID: \"23f5eb62-17f5-4d9a-a81a-5981e448f29c\") " pod="openstack/ovn-controller-rtkrb-config-747fl" Jan 29 06:52:48 crc kubenswrapper[5017]: I0129 06:52:48.622207 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/23f5eb62-17f5-4d9a-a81a-5981e448f29c-var-run\") pod \"ovn-controller-rtkrb-config-747fl\" (UID: \"23f5eb62-17f5-4d9a-a81a-5981e448f29c\") " pod="openstack/ovn-controller-rtkrb-config-747fl" Jan 29 06:52:48 crc kubenswrapper[5017]: I0129 06:52:48.622259 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/23f5eb62-17f5-4d9a-a81a-5981e448f29c-additional-scripts\") pod \"ovn-controller-rtkrb-config-747fl\" (UID: \"23f5eb62-17f5-4d9a-a81a-5981e448f29c\") " pod="openstack/ovn-controller-rtkrb-config-747fl" Jan 29 06:52:48 crc kubenswrapper[5017]: I0129 06:52:48.622344 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zttcf\" (UniqueName: \"kubernetes.io/projected/23f5eb62-17f5-4d9a-a81a-5981e448f29c-kube-api-access-zttcf\") pod \"ovn-controller-rtkrb-config-747fl\" (UID: \"23f5eb62-17f5-4d9a-a81a-5981e448f29c\") " pod="openstack/ovn-controller-rtkrb-config-747fl" Jan 29 06:52:48 crc kubenswrapper[5017]: I0129 06:52:48.622376 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/23f5eb62-17f5-4d9a-a81a-5981e448f29c-var-log-ovn\") pod \"ovn-controller-rtkrb-config-747fl\" (UID: \"23f5eb62-17f5-4d9a-a81a-5981e448f29c\") " pod="openstack/ovn-controller-rtkrb-config-747fl" Jan 29 06:52:48 crc kubenswrapper[5017]: I0129 06:52:48.622408 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/23f5eb62-17f5-4d9a-a81a-5981e448f29c-var-run-ovn\") pod \"ovn-controller-rtkrb-config-747fl\" (UID: \"23f5eb62-17f5-4d9a-a81a-5981e448f29c\") " pod="openstack/ovn-controller-rtkrb-config-747fl" Jan 29 06:52:48 crc kubenswrapper[5017]: I0129 06:52:48.673531 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-rtkrb" Jan 29 06:52:48 crc kubenswrapper[5017]: I0129 06:52:48.724036 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/23f5eb62-17f5-4d9a-a81a-5981e448f29c-var-run-ovn\") pod \"ovn-controller-rtkrb-config-747fl\" (UID: \"23f5eb62-17f5-4d9a-a81a-5981e448f29c\") " pod="openstack/ovn-controller-rtkrb-config-747fl" Jan 29 06:52:48 crc kubenswrapper[5017]: I0129 06:52:48.724167 5017 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/23f5eb62-17f5-4d9a-a81a-5981e448f29c-scripts\") pod \"ovn-controller-rtkrb-config-747fl\" (UID: \"23f5eb62-17f5-4d9a-a81a-5981e448f29c\") " pod="openstack/ovn-controller-rtkrb-config-747fl" Jan 29 06:52:48 crc kubenswrapper[5017]: I0129 06:52:48.724191 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/23f5eb62-17f5-4d9a-a81a-5981e448f29c-var-run\") pod \"ovn-controller-rtkrb-config-747fl\" (UID: \"23f5eb62-17f5-4d9a-a81a-5981e448f29c\") " pod="openstack/ovn-controller-rtkrb-config-747fl" Jan 29 06:52:48 crc kubenswrapper[5017]: I0129 06:52:48.724250 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/23f5eb62-17f5-4d9a-a81a-5981e448f29c-additional-scripts\") pod \"ovn-controller-rtkrb-config-747fl\" (UID: \"23f5eb62-17f5-4d9a-a81a-5981e448f29c\") " pod="openstack/ovn-controller-rtkrb-config-747fl" Jan 29 06:52:48 crc kubenswrapper[5017]: I0129 06:52:48.724292 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zttcf\" (UniqueName: \"kubernetes.io/projected/23f5eb62-17f5-4d9a-a81a-5981e448f29c-kube-api-access-zttcf\") pod \"ovn-controller-rtkrb-config-747fl\" (UID: \"23f5eb62-17f5-4d9a-a81a-5981e448f29c\") " pod="openstack/ovn-controller-rtkrb-config-747fl" Jan 29 06:52:48 crc kubenswrapper[5017]: I0129 06:52:48.724337 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/23f5eb62-17f5-4d9a-a81a-5981e448f29c-var-log-ovn\") pod \"ovn-controller-rtkrb-config-747fl\" (UID: \"23f5eb62-17f5-4d9a-a81a-5981e448f29c\") " pod="openstack/ovn-controller-rtkrb-config-747fl" Jan 29 06:52:48 crc kubenswrapper[5017]: I0129 06:52:48.724420 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/23f5eb62-17f5-4d9a-a81a-5981e448f29c-var-run-ovn\") pod \"ovn-controller-rtkrb-config-747fl\" (UID: \"23f5eb62-17f5-4d9a-a81a-5981e448f29c\") " pod="openstack/ovn-controller-rtkrb-config-747fl" Jan 29 06:52:48 crc kubenswrapper[5017]: I0129 06:52:48.724457 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/23f5eb62-17f5-4d9a-a81a-5981e448f29c-var-log-ovn\") pod \"ovn-controller-rtkrb-config-747fl\" (UID: \"23f5eb62-17f5-4d9a-a81a-5981e448f29c\") " pod="openstack/ovn-controller-rtkrb-config-747fl" Jan 29 06:52:48 crc kubenswrapper[5017]: I0129 06:52:48.724511 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/23f5eb62-17f5-4d9a-a81a-5981e448f29c-var-run\") pod \"ovn-controller-rtkrb-config-747fl\" (UID: \"23f5eb62-17f5-4d9a-a81a-5981e448f29c\") " pod="openstack/ovn-controller-rtkrb-config-747fl" Jan 29 06:52:48 crc kubenswrapper[5017]: I0129 06:52:48.725460 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/23f5eb62-17f5-4d9a-a81a-5981e448f29c-additional-scripts\") pod \"ovn-controller-rtkrb-config-747fl\" (UID: \"23f5eb62-17f5-4d9a-a81a-5981e448f29c\") " pod="openstack/ovn-controller-rtkrb-config-747fl" Jan 29 06:52:48 crc kubenswrapper[5017]: I0129 06:52:48.726418 5017 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/23f5eb62-17f5-4d9a-a81a-5981e448f29c-scripts\") pod \"ovn-controller-rtkrb-config-747fl\" (UID: \"23f5eb62-17f5-4d9a-a81a-5981e448f29c\") " pod="openstack/ovn-controller-rtkrb-config-747fl" Jan 29 06:52:48 crc kubenswrapper[5017]: I0129 06:52:48.766619 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zttcf\" (UniqueName: \"kubernetes.io/projected/23f5eb62-17f5-4d9a-a81a-5981e448f29c-kube-api-access-zttcf\") pod \"ovn-controller-rtkrb-config-747fl\" (UID: \"23f5eb62-17f5-4d9a-a81a-5981e448f29c\") " pod="openstack/ovn-controller-rtkrb-config-747fl" Jan 29 06:52:48 crc kubenswrapper[5017]: I0129 06:52:48.900250 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-rtkrb-config-747fl" Jan 29 06:52:49 crc kubenswrapper[5017]: I0129 06:52:49.446997 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-fc5n9" Jan 29 06:52:49 crc kubenswrapper[5017]: I0129 06:52:49.540130 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/4991fdbc-2d83-45dd-91a3-b312347ff317-ring-data-devices\") pod \"4991fdbc-2d83-45dd-91a3-b312347ff317\" (UID: \"4991fdbc-2d83-45dd-91a3-b312347ff317\") " Jan 29 06:52:49 crc kubenswrapper[5017]: I0129 06:52:49.540212 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/4991fdbc-2d83-45dd-91a3-b312347ff317-etc-swift\") pod \"4991fdbc-2d83-45dd-91a3-b312347ff317\" (UID: \"4991fdbc-2d83-45dd-91a3-b312347ff317\") " Jan 29 06:52:49 crc kubenswrapper[5017]: I0129 06:52:49.540270 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kxlr8\" (UniqueName: \"kubernetes.io/projected/4991fdbc-2d83-45dd-91a3-b312347ff317-kube-api-access-kxlr8\") pod \"4991fdbc-2d83-45dd-91a3-b312347ff317\" (UID: \"4991fdbc-2d83-45dd-91a3-b312347ff317\") " Jan 29 06:52:49 crc kubenswrapper[5017]: I0129 06:52:49.540406 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/4991fdbc-2d83-45dd-91a3-b312347ff317-swiftconf\") pod \"4991fdbc-2d83-45dd-91a3-b312347ff317\" (UID: \"4991fdbc-2d83-45dd-91a3-b312347ff317\") " Jan 29 06:52:49 crc kubenswrapper[5017]: I0129 06:52:49.540455 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4991fdbc-2d83-45dd-91a3-b312347ff317-scripts\") pod \"4991fdbc-2d83-45dd-91a3-b312347ff317\" (UID: \"4991fdbc-2d83-45dd-91a3-b312347ff317\") " Jan 29 06:52:49 crc kubenswrapper[5017]: I0129 06:52:49.540507 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/4991fdbc-2d83-45dd-91a3-b312347ff317-dispersionconf\") pod \"4991fdbc-2d83-45dd-91a3-b312347ff317\" (UID: \"4991fdbc-2d83-45dd-91a3-b312347ff317\") " Jan 29 06:52:49 crc kubenswrapper[5017]: I0129 06:52:49.540620 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4991fdbc-2d83-45dd-91a3-b312347ff317-combined-ca-bundle\") pod \"4991fdbc-2d83-45dd-91a3-b312347ff317\" (UID: \"4991fdbc-2d83-45dd-91a3-b312347ff317\") " 
Jan 29 06:52:49 crc kubenswrapper[5017]: I0129 06:52:49.543113 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4991fdbc-2d83-45dd-91a3-b312347ff317-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "4991fdbc-2d83-45dd-91a3-b312347ff317" (UID: "4991fdbc-2d83-45dd-91a3-b312347ff317"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:52:49 crc kubenswrapper[5017]: I0129 06:52:49.543470 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4991fdbc-2d83-45dd-91a3-b312347ff317-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "4991fdbc-2d83-45dd-91a3-b312347ff317" (UID: "4991fdbc-2d83-45dd-91a3-b312347ff317"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:52:49 crc kubenswrapper[5017]: I0129 06:52:49.547085 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4991fdbc-2d83-45dd-91a3-b312347ff317-kube-api-access-kxlr8" (OuterVolumeSpecName: "kube-api-access-kxlr8") pod "4991fdbc-2d83-45dd-91a3-b312347ff317" (UID: "4991fdbc-2d83-45dd-91a3-b312347ff317"). InnerVolumeSpecName "kube-api-access-kxlr8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:52:49 crc kubenswrapper[5017]: I0129 06:52:49.559864 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4991fdbc-2d83-45dd-91a3-b312347ff317-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "4991fdbc-2d83-45dd-91a3-b312347ff317" (UID: "4991fdbc-2d83-45dd-91a3-b312347ff317"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:52:49 crc kubenswrapper[5017]: I0129 06:52:49.567808 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4991fdbc-2d83-45dd-91a3-b312347ff317-scripts" (OuterVolumeSpecName: "scripts") pod "4991fdbc-2d83-45dd-91a3-b312347ff317" (UID: "4991fdbc-2d83-45dd-91a3-b312347ff317"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:52:49 crc kubenswrapper[5017]: I0129 06:52:49.569847 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4991fdbc-2d83-45dd-91a3-b312347ff317-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4991fdbc-2d83-45dd-91a3-b312347ff317" (UID: "4991fdbc-2d83-45dd-91a3-b312347ff317"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:52:49 crc kubenswrapper[5017]: I0129 06:52:49.586273 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4991fdbc-2d83-45dd-91a3-b312347ff317-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "4991fdbc-2d83-45dd-91a3-b312347ff317" (UID: "4991fdbc-2d83-45dd-91a3-b312347ff317"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:52:49 crc kubenswrapper[5017]: I0129 06:52:49.599607 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-rtkrb-config-747fl"] Jan 29 06:52:49 crc kubenswrapper[5017]: I0129 06:52:49.643509 5017 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/4991fdbc-2d83-45dd-91a3-b312347ff317-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 29 06:52:49 crc kubenswrapper[5017]: I0129 06:52:49.643555 5017 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/4991fdbc-2d83-45dd-91a3-b312347ff317-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 29 06:52:49 crc kubenswrapper[5017]: I0129 06:52:49.643568 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kxlr8\" (UniqueName: \"kubernetes.io/projected/4991fdbc-2d83-45dd-91a3-b312347ff317-kube-api-access-kxlr8\") on node \"crc\" DevicePath \"\"" Jan 29 06:52:49 crc kubenswrapper[5017]: I0129 06:52:49.643580 5017 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/4991fdbc-2d83-45dd-91a3-b312347ff317-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 29 06:52:49 crc kubenswrapper[5017]: I0129 06:52:49.643592 5017 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4991fdbc-2d83-45dd-91a3-b312347ff317-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 06:52:49 crc kubenswrapper[5017]: I0129 06:52:49.643618 5017 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/4991fdbc-2d83-45dd-91a3-b312347ff317-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 29 06:52:49 crc kubenswrapper[5017]: I0129 06:52:49.643627 5017 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4991fdbc-2d83-45dd-91a3-b312347ff317-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 06:52:50 crc kubenswrapper[5017]: I0129 06:52:50.031861 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-rtkrb-config-747fl" event={"ID":"23f5eb62-17f5-4d9a-a81a-5981e448f29c","Type":"ContainerStarted","Data":"d8893e43c97db3642630ad5123ebf5beb52136d708962a0e4af92dc35927a00f"} Jan 29 06:52:50 crc kubenswrapper[5017]: I0129 06:52:50.032437 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-rtkrb-config-747fl" event={"ID":"23f5eb62-17f5-4d9a-a81a-5981e448f29c","Type":"ContainerStarted","Data":"c8ae61f5ebe7d94966c2ce080086027c19dc4598d9bdc3bccf809e398754f71d"} Jan 29 06:52:50 crc kubenswrapper[5017]: I0129 06:52:50.038896 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-fc5n9" event={"ID":"4991fdbc-2d83-45dd-91a3-b312347ff317","Type":"ContainerDied","Data":"9c663bf50ccd2dd17d93c053806918f93c57fda9a2bab7348ff9c942a9e43a62"} Jan 29 06:52:50 crc kubenswrapper[5017]: I0129 06:52:50.039321 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9c663bf50ccd2dd17d93c053806918f93c57fda9a2bab7348ff9c942a9e43a62" Jan 29 06:52:50 crc kubenswrapper[5017]: I0129 06:52:50.039600 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-fc5n9" Jan 29 06:52:50 crc kubenswrapper[5017]: I0129 06:52:50.087710 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-rtkrb-config-747fl" podStartSLOduration=2.087681079 podStartE2EDuration="2.087681079s" podCreationTimestamp="2026-01-29 06:52:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:52:50.059877277 +0000 UTC m=+1056.434324887" watchObservedRunningTime="2026-01-29 06:52:50.087681079 +0000 UTC m=+1056.462128689" Jan 29 06:52:50 crc kubenswrapper[5017]: I0129 06:52:50.329022 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23bad58b-4655-4cb9-8de5-9b5a4c37a23a" path="/var/lib/kubelet/pods/23bad58b-4655-4cb9-8de5-9b5a4c37a23a/volumes" Jan 29 06:52:51 crc kubenswrapper[5017]: I0129 06:52:51.050623 5017 generic.go:334] "Generic (PLEG): container finished" podID="23f5eb62-17f5-4d9a-a81a-5981e448f29c" containerID="d8893e43c97db3642630ad5123ebf5beb52136d708962a0e4af92dc35927a00f" exitCode=0 Jan 29 06:52:51 crc kubenswrapper[5017]: I0129 06:52:51.050702 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-rtkrb-config-747fl" event={"ID":"23f5eb62-17f5-4d9a-a81a-5981e448f29c","Type":"ContainerDied","Data":"d8893e43c97db3642630ad5123ebf5beb52136d708962a0e4af92dc35927a00f"} Jan 29 06:52:52 crc kubenswrapper[5017]: I0129 06:52:52.032024 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-btm9w"] Jan 29 06:52:52 crc kubenswrapper[5017]: E0129 06:52:52.032548 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4991fdbc-2d83-45dd-91a3-b312347ff317" containerName="swift-ring-rebalance" Jan 29 06:52:52 crc kubenswrapper[5017]: I0129 06:52:52.032569 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="4991fdbc-2d83-45dd-91a3-b312347ff317" containerName="swift-ring-rebalance" Jan 29 06:52:52 crc kubenswrapper[5017]: I0129 06:52:52.032758 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="4991fdbc-2d83-45dd-91a3-b312347ff317" containerName="swift-ring-rebalance" Jan 29 06:52:52 crc kubenswrapper[5017]: I0129 06:52:52.033626 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-btm9w" Jan 29 06:52:52 crc kubenswrapper[5017]: I0129 06:52:52.037336 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 29 06:52:52 crc kubenswrapper[5017]: I0129 06:52:52.047284 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-btm9w"] Jan 29 06:52:52 crc kubenswrapper[5017]: I0129 06:52:52.105855 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6c059fe-689f-478d-8e75-83d893147d85-operator-scripts\") pod \"root-account-create-update-btm9w\" (UID: \"e6c059fe-689f-478d-8e75-83d893147d85\") " pod="openstack/root-account-create-update-btm9w" Jan 29 06:52:52 crc kubenswrapper[5017]: I0129 06:52:52.106246 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4wmx\" (UniqueName: \"kubernetes.io/projected/e6c059fe-689f-478d-8e75-83d893147d85-kube-api-access-q4wmx\") pod \"root-account-create-update-btm9w\" (UID: \"e6c059fe-689f-478d-8e75-83d893147d85\") " pod="openstack/root-account-create-update-btm9w" Jan 29 06:52:52 crc kubenswrapper[5017]: I0129 06:52:52.211131 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6c059fe-689f-478d-8e75-83d893147d85-operator-scripts\") pod \"root-account-create-update-btm9w\" (UID: \"e6c059fe-689f-478d-8e75-83d893147d85\") " pod="openstack/root-account-create-update-btm9w" Jan 29 06:52:52 crc kubenswrapper[5017]: I0129 06:52:52.211224 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4wmx\" (UniqueName: \"kubernetes.io/projected/e6c059fe-689f-478d-8e75-83d893147d85-kube-api-access-q4wmx\") pod \"root-account-create-update-btm9w\" (UID: \"e6c059fe-689f-478d-8e75-83d893147d85\") " pod="openstack/root-account-create-update-btm9w" Jan 29 06:52:52 crc kubenswrapper[5017]: I0129 06:52:52.216010 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6c059fe-689f-478d-8e75-83d893147d85-operator-scripts\") pod \"root-account-create-update-btm9w\" (UID: \"e6c059fe-689f-478d-8e75-83d893147d85\") " pod="openstack/root-account-create-update-btm9w" Jan 29 06:52:52 crc kubenswrapper[5017]: I0129 06:52:52.233155 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4wmx\" (UniqueName: \"kubernetes.io/projected/e6c059fe-689f-478d-8e75-83d893147d85-kube-api-access-q4wmx\") pod \"root-account-create-update-btm9w\" (UID: \"e6c059fe-689f-478d-8e75-83d893147d85\") " pod="openstack/root-account-create-update-btm9w" Jan 29 06:52:52 crc kubenswrapper[5017]: I0129 06:52:52.369515 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-btm9w" Jan 29 06:52:55 crc kubenswrapper[5017]: I0129 06:52:55.378358 5017 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.102:5671: connect: connection refused" Jan 29 06:53:01 crc kubenswrapper[5017]: I0129 06:53:01.404614 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-rtkrb-config-747fl" Jan 29 06:53:01 crc kubenswrapper[5017]: I0129 06:53:01.538920 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/23f5eb62-17f5-4d9a-a81a-5981e448f29c-var-log-ovn\") pod \"23f5eb62-17f5-4d9a-a81a-5981e448f29c\" (UID: \"23f5eb62-17f5-4d9a-a81a-5981e448f29c\") " Jan 29 06:53:01 crc kubenswrapper[5017]: I0129 06:53:01.539054 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/23f5eb62-17f5-4d9a-a81a-5981e448f29c-scripts\") pod \"23f5eb62-17f5-4d9a-a81a-5981e448f29c\" (UID: \"23f5eb62-17f5-4d9a-a81a-5981e448f29c\") " Jan 29 06:53:01 crc kubenswrapper[5017]: I0129 06:53:01.539098 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/23f5eb62-17f5-4d9a-a81a-5981e448f29c-var-run-ovn\") pod \"23f5eb62-17f5-4d9a-a81a-5981e448f29c\" (UID: \"23f5eb62-17f5-4d9a-a81a-5981e448f29c\") " Jan 29 06:53:01 crc kubenswrapper[5017]: I0129 06:53:01.539165 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/23f5eb62-17f5-4d9a-a81a-5981e448f29c-additional-scripts\") pod \"23f5eb62-17f5-4d9a-a81a-5981e448f29c\" (UID: \"23f5eb62-17f5-4d9a-a81a-5981e448f29c\") " Jan 29 06:53:01 crc kubenswrapper[5017]: I0129 06:53:01.539163 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/23f5eb62-17f5-4d9a-a81a-5981e448f29c-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "23f5eb62-17f5-4d9a-a81a-5981e448f29c" (UID: "23f5eb62-17f5-4d9a-a81a-5981e448f29c"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 06:53:01 crc kubenswrapper[5017]: I0129 06:53:01.539296 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/23f5eb62-17f5-4d9a-a81a-5981e448f29c-var-run\") pod \"23f5eb62-17f5-4d9a-a81a-5981e448f29c\" (UID: \"23f5eb62-17f5-4d9a-a81a-5981e448f29c\") " Jan 29 06:53:01 crc kubenswrapper[5017]: I0129 06:53:01.539408 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zttcf\" (UniqueName: \"kubernetes.io/projected/23f5eb62-17f5-4d9a-a81a-5981e448f29c-kube-api-access-zttcf\") pod \"23f5eb62-17f5-4d9a-a81a-5981e448f29c\" (UID: \"23f5eb62-17f5-4d9a-a81a-5981e448f29c\") " Jan 29 06:53:01 crc kubenswrapper[5017]: I0129 06:53:01.539424 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/23f5eb62-17f5-4d9a-a81a-5981e448f29c-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "23f5eb62-17f5-4d9a-a81a-5981e448f29c" (UID: "23f5eb62-17f5-4d9a-a81a-5981e448f29c"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 06:53:01 crc kubenswrapper[5017]: I0129 06:53:01.539706 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/23f5eb62-17f5-4d9a-a81a-5981e448f29c-var-run" (OuterVolumeSpecName: "var-run") pod "23f5eb62-17f5-4d9a-a81a-5981e448f29c" (UID: "23f5eb62-17f5-4d9a-a81a-5981e448f29c"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 06:53:01 crc kubenswrapper[5017]: I0129 06:53:01.540573 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23f5eb62-17f5-4d9a-a81a-5981e448f29c-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "23f5eb62-17f5-4d9a-a81a-5981e448f29c" (UID: "23f5eb62-17f5-4d9a-a81a-5981e448f29c"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:53:01 crc kubenswrapper[5017]: I0129 06:53:01.542761 5017 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/23f5eb62-17f5-4d9a-a81a-5981e448f29c-var-run\") on node \"crc\" DevicePath \"\"" Jan 29 06:53:01 crc kubenswrapper[5017]: I0129 06:53:01.542788 5017 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/23f5eb62-17f5-4d9a-a81a-5981e448f29c-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 29 06:53:01 crc kubenswrapper[5017]: I0129 06:53:01.542802 5017 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/23f5eb62-17f5-4d9a-a81a-5981e448f29c-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 29 06:53:01 crc kubenswrapper[5017]: I0129 06:53:01.542816 5017 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/23f5eb62-17f5-4d9a-a81a-5981e448f29c-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 06:53:01 crc kubenswrapper[5017]: I0129 06:53:01.544071 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23f5eb62-17f5-4d9a-a81a-5981e448f29c-scripts" (OuterVolumeSpecName: "scripts") pod "23f5eb62-17f5-4d9a-a81a-5981e448f29c" (UID: "23f5eb62-17f5-4d9a-a81a-5981e448f29c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:53:01 crc kubenswrapper[5017]: I0129 06:53:01.544543 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23f5eb62-17f5-4d9a-a81a-5981e448f29c-kube-api-access-zttcf" (OuterVolumeSpecName: "kube-api-access-zttcf") pod "23f5eb62-17f5-4d9a-a81a-5981e448f29c" (UID: "23f5eb62-17f5-4d9a-a81a-5981e448f29c"). InnerVolumeSpecName "kube-api-access-zttcf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:53:01 crc kubenswrapper[5017]: I0129 06:53:01.644222 5017 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/23f5eb62-17f5-4d9a-a81a-5981e448f29c-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 06:53:01 crc kubenswrapper[5017]: I0129 06:53:01.644265 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zttcf\" (UniqueName: \"kubernetes.io/projected/23f5eb62-17f5-4d9a-a81a-5981e448f29c-kube-api-access-zttcf\") on node \"crc\" DevicePath \"\"" Jan 29 06:53:01 crc kubenswrapper[5017]: I0129 06:53:01.718571 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-btm9w"] Jan 29 06:53:02 crc kubenswrapper[5017]: I0129 06:53:02.189519 5017 generic.go:334] "Generic (PLEG): container finished" podID="e6c059fe-689f-478d-8e75-83d893147d85" containerID="ae39ebd849460fe9df6348a2030070d4372c857c8e320a52e9f26c2c698a8ef8" exitCode=0 Jan 29 06:53:02 crc kubenswrapper[5017]: I0129 06:53:02.189589 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-btm9w" event={"ID":"e6c059fe-689f-478d-8e75-83d893147d85","Type":"ContainerDied","Data":"ae39ebd849460fe9df6348a2030070d4372c857c8e320a52e9f26c2c698a8ef8"} Jan 29 06:53:02 crc kubenswrapper[5017]: I0129 06:53:02.189882 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-btm9w" event={"ID":"e6c059fe-689f-478d-8e75-83d893147d85","Type":"ContainerStarted","Data":"0db99d94ad54e02bbc1d2c748fc8f6ec50863b5c639be139b779dfe4bf1b36de"} Jan 29 06:53:02 crc kubenswrapper[5017]: I0129 06:53:02.191415 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-2qnw2" event={"ID":"c06828d9-6c4d-4228-adc6-3788f22ae732","Type":"ContainerStarted","Data":"ad39d19c0eae6d091513b0dda8def0c81ac302c26687b45f9f7175e673527e1c"} Jan 29 06:53:02 crc kubenswrapper[5017]: I0129 06:53:02.194157 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-rtkrb-config-747fl" event={"ID":"23f5eb62-17f5-4d9a-a81a-5981e448f29c","Type":"ContainerDied","Data":"c8ae61f5ebe7d94966c2ce080086027c19dc4598d9bdc3bccf809e398754f71d"} Jan 29 06:53:02 crc kubenswrapper[5017]: I0129 06:53:02.194238 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c8ae61f5ebe7d94966c2ce080086027c19dc4598d9bdc3bccf809e398754f71d" Jan 29 06:53:02 crc kubenswrapper[5017]: I0129 06:53:02.194183 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-rtkrb-config-747fl" Jan 29 06:53:02 crc kubenswrapper[5017]: I0129 06:53:02.241382 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-2qnw2" podStartSLOduration=2.381457383 podStartE2EDuration="18.241359542s" podCreationTimestamp="2026-01-29 06:52:44 +0000 UTC" firstStartedPulling="2026-01-29 06:52:45.434591797 +0000 UTC m=+1051.809039407" lastFinishedPulling="2026-01-29 06:53:01.294493946 +0000 UTC m=+1067.668941566" observedRunningTime="2026-01-29 06:53:02.239088297 +0000 UTC m=+1068.613535907" watchObservedRunningTime="2026-01-29 06:53:02.241359542 +0000 UTC m=+1068.615807162" Jan 29 06:53:02 crc kubenswrapper[5017]: I0129 06:53:02.510584 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-rtkrb-config-747fl"] Jan 29 06:53:02 crc kubenswrapper[5017]: I0129 06:53:02.519558 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-rtkrb-config-747fl"] Jan 29 06:53:03 crc kubenswrapper[5017]: I0129 06:53:03.666302 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-btm9w" Jan 29 06:53:03 crc kubenswrapper[5017]: I0129 06:53:03.691285 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4wmx\" (UniqueName: \"kubernetes.io/projected/e6c059fe-689f-478d-8e75-83d893147d85-kube-api-access-q4wmx\") pod \"e6c059fe-689f-478d-8e75-83d893147d85\" (UID: \"e6c059fe-689f-478d-8e75-83d893147d85\") " Jan 29 06:53:03 crc kubenswrapper[5017]: I0129 06:53:03.691430 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6c059fe-689f-478d-8e75-83d893147d85-operator-scripts\") pod \"e6c059fe-689f-478d-8e75-83d893147d85\" (UID: \"e6c059fe-689f-478d-8e75-83d893147d85\") " Jan 29 06:53:03 crc kubenswrapper[5017]: I0129 06:53:03.692670 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6c059fe-689f-478d-8e75-83d893147d85-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e6c059fe-689f-478d-8e75-83d893147d85" (UID: "e6c059fe-689f-478d-8e75-83d893147d85"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:53:03 crc kubenswrapper[5017]: I0129 06:53:03.711766 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6c059fe-689f-478d-8e75-83d893147d85-kube-api-access-q4wmx" (OuterVolumeSpecName: "kube-api-access-q4wmx") pod "e6c059fe-689f-478d-8e75-83d893147d85" (UID: "e6c059fe-689f-478d-8e75-83d893147d85"). InnerVolumeSpecName "kube-api-access-q4wmx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:53:03 crc kubenswrapper[5017]: I0129 06:53:03.793418 5017 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6c059fe-689f-478d-8e75-83d893147d85-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 06:53:03 crc kubenswrapper[5017]: I0129 06:53:03.793456 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q4wmx\" (UniqueName: \"kubernetes.io/projected/e6c059fe-689f-478d-8e75-83d893147d85-kube-api-access-q4wmx\") on node \"crc\" DevicePath \"\"" Jan 29 06:53:03 crc kubenswrapper[5017]: I0129 06:53:03.894274 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/6d082326-495c-4078-974e-714379243884-etc-swift\") pod \"swift-storage-0\" (UID: \"6d082326-495c-4078-974e-714379243884\") " pod="openstack/swift-storage-0" Jan 29 06:53:03 crc kubenswrapper[5017]: I0129 06:53:03.902667 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/6d082326-495c-4078-974e-714379243884-etc-swift\") pod \"swift-storage-0\" (UID: \"6d082326-495c-4078-974e-714379243884\") " pod="openstack/swift-storage-0" Jan 29 06:53:03 crc kubenswrapper[5017]: I0129 06:53:03.982680 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 29 06:53:04 crc kubenswrapper[5017]: I0129 06:53:04.229129 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-btm9w" event={"ID":"e6c059fe-689f-478d-8e75-83d893147d85","Type":"ContainerDied","Data":"0db99d94ad54e02bbc1d2c748fc8f6ec50863b5c639be139b779dfe4bf1b36de"} Jan 29 06:53:04 crc kubenswrapper[5017]: I0129 06:53:04.229578 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0db99d94ad54e02bbc1d2c748fc8f6ec50863b5c639be139b779dfe4bf1b36de" Jan 29 06:53:04 crc kubenswrapper[5017]: I0129 06:53:04.229720 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-btm9w" Jan 29 06:53:04 crc kubenswrapper[5017]: I0129 06:53:04.330679 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23f5eb62-17f5-4d9a-a81a-5981e448f29c" path="/var/lib/kubelet/pods/23f5eb62-17f5-4d9a-a81a-5981e448f29c/volumes" Jan 29 06:53:04 crc kubenswrapper[5017]: I0129 06:53:04.565716 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 29 06:53:04 crc kubenswrapper[5017]: W0129 06:53:04.571494 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6d082326_495c_4078_974e_714379243884.slice/crio-77205f1116f100fd849b9e7c1d3100fdca7c6527b145fa74621d2a15a78e0d5d WatchSource:0}: Error finding container 77205f1116f100fd849b9e7c1d3100fdca7c6527b145fa74621d2a15a78e0d5d: Status 404 returned error can't find the container with id 77205f1116f100fd849b9e7c1d3100fdca7c6527b145fa74621d2a15a78e0d5d Jan 29 06:53:04 crc kubenswrapper[5017]: I0129 06:53:04.701431 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 29 06:53:05 crc kubenswrapper[5017]: I0129 06:53:05.245326 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"6d082326-495c-4078-974e-714379243884","Type":"ContainerStarted","Data":"77205f1116f100fd849b9e7c1d3100fdca7c6527b145fa74621d2a15a78e0d5d"} Jan 29 06:53:05 crc kubenswrapper[5017]: I0129 06:53:05.380263 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 29 06:53:07 crc kubenswrapper[5017]: I0129 06:53:07.100069 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-381c-account-create-update-d4zjm"] Jan 29 06:53:07 crc kubenswrapper[5017]: E0129 06:53:07.101583 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23f5eb62-17f5-4d9a-a81a-5981e448f29c" containerName="ovn-config" Jan 29 06:53:07 crc kubenswrapper[5017]: I0129 06:53:07.101602 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="23f5eb62-17f5-4d9a-a81a-5981e448f29c" containerName="ovn-config" Jan 29 06:53:07 crc kubenswrapper[5017]: E0129 06:53:07.101627 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6c059fe-689f-478d-8e75-83d893147d85" containerName="mariadb-account-create-update" Jan 29 06:53:07 crc kubenswrapper[5017]: I0129 06:53:07.101634 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6c059fe-689f-478d-8e75-83d893147d85" containerName="mariadb-account-create-update" Jan 29 06:53:07 crc kubenswrapper[5017]: I0129 06:53:07.101827 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6c059fe-689f-478d-8e75-83d893147d85" containerName="mariadb-account-create-update" Jan 29 06:53:07 crc kubenswrapper[5017]: I0129 06:53:07.101846 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="23f5eb62-17f5-4d9a-a81a-5981e448f29c" containerName="ovn-config" Jan 29 06:53:07 crc kubenswrapper[5017]: I0129 06:53:07.102583 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-381c-account-create-update-d4zjm" Jan 29 06:53:07 crc kubenswrapper[5017]: I0129 06:53:07.112775 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-5997q"] Jan 29 06:53:07 crc kubenswrapper[5017]: I0129 06:53:07.114262 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-5997q" Jan 29 06:53:07 crc kubenswrapper[5017]: I0129 06:53:07.116523 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 29 06:53:07 crc kubenswrapper[5017]: I0129 06:53:07.125798 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-381c-account-create-update-d4zjm"] Jan 29 06:53:07 crc kubenswrapper[5017]: I0129 06:53:07.140927 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-5997q"] Jan 29 06:53:07 crc kubenswrapper[5017]: I0129 06:53:07.241019 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-t6sx7"] Jan 29 06:53:07 crc kubenswrapper[5017]: I0129 06:53:07.242288 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-t6sx7" Jan 29 06:53:07 crc kubenswrapper[5017]: I0129 06:53:07.255240 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dp5q7\" (UniqueName: \"kubernetes.io/projected/2179f134-a047-4000-b58b-4755df9f56b7-kube-api-access-dp5q7\") pod \"barbican-381c-account-create-update-d4zjm\" (UID: \"2179f134-a047-4000-b58b-4755df9f56b7\") " pod="openstack/barbican-381c-account-create-update-d4zjm" Jan 29 06:53:07 crc kubenswrapper[5017]: I0129 06:53:07.255381 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2179f134-a047-4000-b58b-4755df9f56b7-operator-scripts\") pod \"barbican-381c-account-create-update-d4zjm\" (UID: \"2179f134-a047-4000-b58b-4755df9f56b7\") " pod="openstack/barbican-381c-account-create-update-d4zjm" Jan 29 06:53:07 crc kubenswrapper[5017]: I0129 06:53:07.255423 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhx2t\" (UniqueName: \"kubernetes.io/projected/66c78bbb-a5a9-45e4-8825-a4d05dfa23b3-kube-api-access-nhx2t\") pod \"cinder-db-create-5997q\" (UID: \"66c78bbb-a5a9-45e4-8825-a4d05dfa23b3\") " pod="openstack/cinder-db-create-5997q" Jan 29 06:53:07 crc kubenswrapper[5017]: I0129 06:53:07.255669 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66c78bbb-a5a9-45e4-8825-a4d05dfa23b3-operator-scripts\") pod \"cinder-db-create-5997q\" (UID: \"66c78bbb-a5a9-45e4-8825-a4d05dfa23b3\") " pod="openstack/cinder-db-create-5997q" Jan 29 06:53:07 crc kubenswrapper[5017]: I0129 06:53:07.284149 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-t6sx7"] Jan 29 06:53:07 crc kubenswrapper[5017]: I0129 06:53:07.287407 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"6d082326-495c-4078-974e-714379243884","Type":"ContainerStarted","Data":"78c56e210bdc11cf82c13cdc564f8b588d7f490c0c73661ab691a65e75fe9237"} Jan 29 06:53:07 crc kubenswrapper[5017]: I0129 06:53:07.357563 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66c78bbb-a5a9-45e4-8825-a4d05dfa23b3-operator-scripts\") pod \"cinder-db-create-5997q\" (UID: \"66c78bbb-a5a9-45e4-8825-a4d05dfa23b3\") " pod="openstack/cinder-db-create-5997q" Jan 29 06:53:07 crc kubenswrapper[5017]: I0129 06:53:07.357711 5017 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8600c4aa-101a-4803-b8c8-7313e2742c6c-operator-scripts\") pod \"barbican-db-create-t6sx7\" (UID: \"8600c4aa-101a-4803-b8c8-7313e2742c6c\") " pod="openstack/barbican-db-create-t6sx7" Jan 29 06:53:07 crc kubenswrapper[5017]: I0129 06:53:07.357783 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dp5q7\" (UniqueName: \"kubernetes.io/projected/2179f134-a047-4000-b58b-4755df9f56b7-kube-api-access-dp5q7\") pod \"barbican-381c-account-create-update-d4zjm\" (UID: \"2179f134-a047-4000-b58b-4755df9f56b7\") " pod="openstack/barbican-381c-account-create-update-d4zjm" Jan 29 06:53:07 crc kubenswrapper[5017]: I0129 06:53:07.357845 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjjxj\" (UniqueName: \"kubernetes.io/projected/8600c4aa-101a-4803-b8c8-7313e2742c6c-kube-api-access-vjjxj\") pod \"barbican-db-create-t6sx7\" (UID: \"8600c4aa-101a-4803-b8c8-7313e2742c6c\") " pod="openstack/barbican-db-create-t6sx7" Jan 29 06:53:07 crc kubenswrapper[5017]: I0129 06:53:07.357877 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2179f134-a047-4000-b58b-4755df9f56b7-operator-scripts\") pod \"barbican-381c-account-create-update-d4zjm\" (UID: \"2179f134-a047-4000-b58b-4755df9f56b7\") " pod="openstack/barbican-381c-account-create-update-d4zjm" Jan 29 06:53:07 crc kubenswrapper[5017]: I0129 06:53:07.357899 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhx2t\" (UniqueName: \"kubernetes.io/projected/66c78bbb-a5a9-45e4-8825-a4d05dfa23b3-kube-api-access-nhx2t\") pod \"cinder-db-create-5997q\" (UID: \"66c78bbb-a5a9-45e4-8825-a4d05dfa23b3\") " pod="openstack/cinder-db-create-5997q" Jan 29 06:53:07 crc kubenswrapper[5017]: I0129 06:53:07.360898 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66c78bbb-a5a9-45e4-8825-a4d05dfa23b3-operator-scripts\") pod \"cinder-db-create-5997q\" (UID: \"66c78bbb-a5a9-45e4-8825-a4d05dfa23b3\") " pod="openstack/cinder-db-create-5997q" Jan 29 06:53:07 crc kubenswrapper[5017]: I0129 06:53:07.360923 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2179f134-a047-4000-b58b-4755df9f56b7-operator-scripts\") pod \"barbican-381c-account-create-update-d4zjm\" (UID: \"2179f134-a047-4000-b58b-4755df9f56b7\") " pod="openstack/barbican-381c-account-create-update-d4zjm" Jan 29 06:53:07 crc kubenswrapper[5017]: I0129 06:53:07.374611 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-np6m4"] Jan 29 06:53:07 crc kubenswrapper[5017]: I0129 06:53:07.375702 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-np6m4" Jan 29 06:53:07 crc kubenswrapper[5017]: I0129 06:53:07.394570 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhx2t\" (UniqueName: \"kubernetes.io/projected/66c78bbb-a5a9-45e4-8825-a4d05dfa23b3-kube-api-access-nhx2t\") pod \"cinder-db-create-5997q\" (UID: \"66c78bbb-a5a9-45e4-8825-a4d05dfa23b3\") " pod="openstack/cinder-db-create-5997q" Jan 29 06:53:07 crc kubenswrapper[5017]: I0129 06:53:07.400065 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 29 06:53:07 crc kubenswrapper[5017]: I0129 06:53:07.400700 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-rv2nk" Jan 29 06:53:07 crc kubenswrapper[5017]: I0129 06:53:07.401085 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 29 06:53:07 crc kubenswrapper[5017]: I0129 06:53:07.401406 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 29 06:53:07 crc kubenswrapper[5017]: I0129 06:53:07.404484 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dp5q7\" (UniqueName: \"kubernetes.io/projected/2179f134-a047-4000-b58b-4755df9f56b7-kube-api-access-dp5q7\") pod \"barbican-381c-account-create-update-d4zjm\" (UID: \"2179f134-a047-4000-b58b-4755df9f56b7\") " pod="openstack/barbican-381c-account-create-update-d4zjm" Jan 29 06:53:07 crc kubenswrapper[5017]: I0129 06:53:07.423714 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-381c-account-create-update-d4zjm" Jan 29 06:53:07 crc kubenswrapper[5017]: I0129 06:53:07.442171 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-np6m4"] Jan 29 06:53:07 crc kubenswrapper[5017]: I0129 06:53:07.459640 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxmbq\" (UniqueName: \"kubernetes.io/projected/b1e28709-36ce-4df4-8ec9-2ac9458b87da-kube-api-access-fxmbq\") pod \"keystone-db-sync-np6m4\" (UID: \"b1e28709-36ce-4df4-8ec9-2ac9458b87da\") " pod="openstack/keystone-db-sync-np6m4" Jan 29 06:53:07 crc kubenswrapper[5017]: I0129 06:53:07.459875 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1e28709-36ce-4df4-8ec9-2ac9458b87da-config-data\") pod \"keystone-db-sync-np6m4\" (UID: \"b1e28709-36ce-4df4-8ec9-2ac9458b87da\") " pod="openstack/keystone-db-sync-np6m4" Jan 29 06:53:07 crc kubenswrapper[5017]: I0129 06:53:07.460039 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjjxj\" (UniqueName: \"kubernetes.io/projected/8600c4aa-101a-4803-b8c8-7313e2742c6c-kube-api-access-vjjxj\") pod \"barbican-db-create-t6sx7\" (UID: \"8600c4aa-101a-4803-b8c8-7313e2742c6c\") " pod="openstack/barbican-db-create-t6sx7" Jan 29 06:53:07 crc kubenswrapper[5017]: I0129 06:53:07.460196 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1e28709-36ce-4df4-8ec9-2ac9458b87da-combined-ca-bundle\") pod \"keystone-db-sync-np6m4\" (UID: \"b1e28709-36ce-4df4-8ec9-2ac9458b87da\") " pod="openstack/keystone-db-sync-np6m4" Jan 29 06:53:07 crc kubenswrapper[5017]: I0129 06:53:07.460339 5017 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8600c4aa-101a-4803-b8c8-7313e2742c6c-operator-scripts\") pod \"barbican-db-create-t6sx7\" (UID: \"8600c4aa-101a-4803-b8c8-7313e2742c6c\") " pod="openstack/barbican-db-create-t6sx7" Jan 29 06:53:07 crc kubenswrapper[5017]: I0129 06:53:07.474469 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8600c4aa-101a-4803-b8c8-7313e2742c6c-operator-scripts\") pod \"barbican-db-create-t6sx7\" (UID: \"8600c4aa-101a-4803-b8c8-7313e2742c6c\") " pod="openstack/barbican-db-create-t6sx7" Jan 29 06:53:07 crc kubenswrapper[5017]: I0129 06:53:07.485076 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-5997q" Jan 29 06:53:07 crc kubenswrapper[5017]: I0129 06:53:07.510801 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjjxj\" (UniqueName: \"kubernetes.io/projected/8600c4aa-101a-4803-b8c8-7313e2742c6c-kube-api-access-vjjxj\") pod \"barbican-db-create-t6sx7\" (UID: \"8600c4aa-101a-4803-b8c8-7313e2742c6c\") " pod="openstack/barbican-db-create-t6sx7" Jan 29 06:53:07 crc kubenswrapper[5017]: I0129 06:53:07.565751 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-t6sx7" Jan 29 06:53:07 crc kubenswrapper[5017]: I0129 06:53:07.566512 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-gmq8s"] Jan 29 06:53:07 crc kubenswrapper[5017]: I0129 06:53:07.567873 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxmbq\" (UniqueName: \"kubernetes.io/projected/b1e28709-36ce-4df4-8ec9-2ac9458b87da-kube-api-access-fxmbq\") pod \"keystone-db-sync-np6m4\" (UID: \"b1e28709-36ce-4df4-8ec9-2ac9458b87da\") " pod="openstack/keystone-db-sync-np6m4" Jan 29 06:53:07 crc kubenswrapper[5017]: I0129 06:53:07.570756 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1e28709-36ce-4df4-8ec9-2ac9458b87da-config-data\") pod \"keystone-db-sync-np6m4\" (UID: \"b1e28709-36ce-4df4-8ec9-2ac9458b87da\") " pod="openstack/keystone-db-sync-np6m4" Jan 29 06:53:07 crc kubenswrapper[5017]: I0129 06:53:07.570922 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1e28709-36ce-4df4-8ec9-2ac9458b87da-combined-ca-bundle\") pod \"keystone-db-sync-np6m4\" (UID: \"b1e28709-36ce-4df4-8ec9-2ac9458b87da\") " pod="openstack/keystone-db-sync-np6m4" Jan 29 06:53:07 crc kubenswrapper[5017]: I0129 06:53:07.571236 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-gmq8s" Jan 29 06:53:07 crc kubenswrapper[5017]: I0129 06:53:07.583029 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1e28709-36ce-4df4-8ec9-2ac9458b87da-combined-ca-bundle\") pod \"keystone-db-sync-np6m4\" (UID: \"b1e28709-36ce-4df4-8ec9-2ac9458b87da\") " pod="openstack/keystone-db-sync-np6m4" Jan 29 06:53:07 crc kubenswrapper[5017]: I0129 06:53:07.583905 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1e28709-36ce-4df4-8ec9-2ac9458b87da-config-data\") pod \"keystone-db-sync-np6m4\" (UID: \"b1e28709-36ce-4df4-8ec9-2ac9458b87da\") " pod="openstack/keystone-db-sync-np6m4" Jan 29 06:53:07 crc kubenswrapper[5017]: I0129 06:53:07.589993 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-gmq8s"] Jan 29 06:53:07 crc kubenswrapper[5017]: I0129 06:53:07.614920 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-2d72-account-create-update-cdsn8"] Jan 29 06:53:07 crc kubenswrapper[5017]: I0129 06:53:07.616495 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-2d72-account-create-update-cdsn8" Jan 29 06:53:07 crc kubenswrapper[5017]: I0129 06:53:07.618713 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 29 06:53:07 crc kubenswrapper[5017]: I0129 06:53:07.626901 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxmbq\" (UniqueName: \"kubernetes.io/projected/b1e28709-36ce-4df4-8ec9-2ac9458b87da-kube-api-access-fxmbq\") pod \"keystone-db-sync-np6m4\" (UID: \"b1e28709-36ce-4df4-8ec9-2ac9458b87da\") " pod="openstack/keystone-db-sync-np6m4" Jan 29 06:53:07 crc kubenswrapper[5017]: I0129 06:53:07.638498 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-2d72-account-create-update-cdsn8"] Jan 29 06:53:07 crc kubenswrapper[5017]: I0129 06:53:07.675315 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5g75\" (UniqueName: \"kubernetes.io/projected/3505bbb1-d190-470a-84e7-18e9b3330a2f-kube-api-access-f5g75\") pod \"neutron-db-create-gmq8s\" (UID: \"3505bbb1-d190-470a-84e7-18e9b3330a2f\") " pod="openstack/neutron-db-create-gmq8s" Jan 29 06:53:07 crc kubenswrapper[5017]: I0129 06:53:07.675664 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3505bbb1-d190-470a-84e7-18e9b3330a2f-operator-scripts\") pod \"neutron-db-create-gmq8s\" (UID: \"3505bbb1-d190-470a-84e7-18e9b3330a2f\") " pod="openstack/neutron-db-create-gmq8s" Jan 29 06:53:10 crc kubenswrapper[5017]: I0129 06:53:07.778677 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hgdz\" (UniqueName: \"kubernetes.io/projected/d90f5e4e-4114-4f2d-9dc8-ff0b0167cdd1-kube-api-access-4hgdz\") pod \"neutron-2d72-account-create-update-cdsn8\" (UID: \"d90f5e4e-4114-4f2d-9dc8-ff0b0167cdd1\") " pod="openstack/neutron-2d72-account-create-update-cdsn8" Jan 29 06:53:10 crc kubenswrapper[5017]: I0129 06:53:07.778760 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5g75\" (UniqueName: 
\"kubernetes.io/projected/3505bbb1-d190-470a-84e7-18e9b3330a2f-kube-api-access-f5g75\") pod \"neutron-db-create-gmq8s\" (UID: \"3505bbb1-d190-470a-84e7-18e9b3330a2f\") " pod="openstack/neutron-db-create-gmq8s" Jan 29 06:53:10 crc kubenswrapper[5017]: I0129 06:53:07.778802 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d90f5e4e-4114-4f2d-9dc8-ff0b0167cdd1-operator-scripts\") pod \"neutron-2d72-account-create-update-cdsn8\" (UID: \"d90f5e4e-4114-4f2d-9dc8-ff0b0167cdd1\") " pod="openstack/neutron-2d72-account-create-update-cdsn8" Jan 29 06:53:10 crc kubenswrapper[5017]: I0129 06:53:07.778838 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3505bbb1-d190-470a-84e7-18e9b3330a2f-operator-scripts\") pod \"neutron-db-create-gmq8s\" (UID: \"3505bbb1-d190-470a-84e7-18e9b3330a2f\") " pod="openstack/neutron-db-create-gmq8s" Jan 29 06:53:10 crc kubenswrapper[5017]: I0129 06:53:07.781399 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3505bbb1-d190-470a-84e7-18e9b3330a2f-operator-scripts\") pod \"neutron-db-create-gmq8s\" (UID: \"3505bbb1-d190-470a-84e7-18e9b3330a2f\") " pod="openstack/neutron-db-create-gmq8s" Jan 29 06:53:10 crc kubenswrapper[5017]: I0129 06:53:07.791640 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-01d7-account-create-update-bpmlf"] Jan 29 06:53:10 crc kubenswrapper[5017]: I0129 06:53:07.793190 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-01d7-account-create-update-bpmlf" Jan 29 06:53:10 crc kubenswrapper[5017]: I0129 06:53:07.797706 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 29 06:53:10 crc kubenswrapper[5017]: I0129 06:53:07.806417 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5g75\" (UniqueName: \"kubernetes.io/projected/3505bbb1-d190-470a-84e7-18e9b3330a2f-kube-api-access-f5g75\") pod \"neutron-db-create-gmq8s\" (UID: \"3505bbb1-d190-470a-84e7-18e9b3330a2f\") " pod="openstack/neutron-db-create-gmq8s" Jan 29 06:53:10 crc kubenswrapper[5017]: I0129 06:53:07.812348 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-01d7-account-create-update-bpmlf"] Jan 29 06:53:10 crc kubenswrapper[5017]: I0129 06:53:07.880724 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hlx8\" (UniqueName: \"kubernetes.io/projected/ea1b3f91-cc75-43b7-838d-837b273b3509-kube-api-access-5hlx8\") pod \"cinder-01d7-account-create-update-bpmlf\" (UID: \"ea1b3f91-cc75-43b7-838d-837b273b3509\") " pod="openstack/cinder-01d7-account-create-update-bpmlf" Jan 29 06:53:10 crc kubenswrapper[5017]: I0129 06:53:07.880805 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d90f5e4e-4114-4f2d-9dc8-ff0b0167cdd1-operator-scripts\") pod \"neutron-2d72-account-create-update-cdsn8\" (UID: \"d90f5e4e-4114-4f2d-9dc8-ff0b0167cdd1\") " pod="openstack/neutron-2d72-account-create-update-cdsn8" Jan 29 06:53:10 crc kubenswrapper[5017]: I0129 06:53:07.880844 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/ea1b3f91-cc75-43b7-838d-837b273b3509-operator-scripts\") pod \"cinder-01d7-account-create-update-bpmlf\" (UID: \"ea1b3f91-cc75-43b7-838d-837b273b3509\") " pod="openstack/cinder-01d7-account-create-update-bpmlf" Jan 29 06:53:10 crc kubenswrapper[5017]: I0129 06:53:07.881332 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hgdz\" (UniqueName: \"kubernetes.io/projected/d90f5e4e-4114-4f2d-9dc8-ff0b0167cdd1-kube-api-access-4hgdz\") pod \"neutron-2d72-account-create-update-cdsn8\" (UID: \"d90f5e4e-4114-4f2d-9dc8-ff0b0167cdd1\") " pod="openstack/neutron-2d72-account-create-update-cdsn8" Jan 29 06:53:10 crc kubenswrapper[5017]: I0129 06:53:07.881807 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d90f5e4e-4114-4f2d-9dc8-ff0b0167cdd1-operator-scripts\") pod \"neutron-2d72-account-create-update-cdsn8\" (UID: \"d90f5e4e-4114-4f2d-9dc8-ff0b0167cdd1\") " pod="openstack/neutron-2d72-account-create-update-cdsn8" Jan 29 06:53:10 crc kubenswrapper[5017]: I0129 06:53:07.903862 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hgdz\" (UniqueName: \"kubernetes.io/projected/d90f5e4e-4114-4f2d-9dc8-ff0b0167cdd1-kube-api-access-4hgdz\") pod \"neutron-2d72-account-create-update-cdsn8\" (UID: \"d90f5e4e-4114-4f2d-9dc8-ff0b0167cdd1\") " pod="openstack/neutron-2d72-account-create-update-cdsn8" Jan 29 06:53:10 crc kubenswrapper[5017]: I0129 06:53:07.918682 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-np6m4" Jan 29 06:53:10 crc kubenswrapper[5017]: I0129 06:53:07.936734 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-gmq8s" Jan 29 06:53:10 crc kubenswrapper[5017]: I0129 06:53:07.978064 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-2d72-account-create-update-cdsn8" Jan 29 06:53:10 crc kubenswrapper[5017]: I0129 06:53:07.983902 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5hlx8\" (UniqueName: \"kubernetes.io/projected/ea1b3f91-cc75-43b7-838d-837b273b3509-kube-api-access-5hlx8\") pod \"cinder-01d7-account-create-update-bpmlf\" (UID: \"ea1b3f91-cc75-43b7-838d-837b273b3509\") " pod="openstack/cinder-01d7-account-create-update-bpmlf" Jan 29 06:53:10 crc kubenswrapper[5017]: I0129 06:53:07.984023 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ea1b3f91-cc75-43b7-838d-837b273b3509-operator-scripts\") pod \"cinder-01d7-account-create-update-bpmlf\" (UID: \"ea1b3f91-cc75-43b7-838d-837b273b3509\") " pod="openstack/cinder-01d7-account-create-update-bpmlf" Jan 29 06:53:10 crc kubenswrapper[5017]: I0129 06:53:07.985083 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ea1b3f91-cc75-43b7-838d-837b273b3509-operator-scripts\") pod \"cinder-01d7-account-create-update-bpmlf\" (UID: \"ea1b3f91-cc75-43b7-838d-837b273b3509\") " pod="openstack/cinder-01d7-account-create-update-bpmlf" Jan 29 06:53:10 crc kubenswrapper[5017]: I0129 06:53:08.006521 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5hlx8\" (UniqueName: \"kubernetes.io/projected/ea1b3f91-cc75-43b7-838d-837b273b3509-kube-api-access-5hlx8\") pod \"cinder-01d7-account-create-update-bpmlf\" (UID: \"ea1b3f91-cc75-43b7-838d-837b273b3509\") " pod="openstack/cinder-01d7-account-create-update-bpmlf" Jan 29 06:53:10 crc kubenswrapper[5017]: I0129 06:53:08.132619 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-01d7-account-create-update-bpmlf" Jan 29 06:53:11 crc kubenswrapper[5017]: I0129 06:53:11.288661 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-381c-account-create-update-d4zjm"] Jan 29 06:53:11 crc kubenswrapper[5017]: I0129 06:53:11.385813 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"6d082326-495c-4078-974e-714379243884","Type":"ContainerStarted","Data":"06cfdaa53a75f84fd7f6330f5db9c5c157e9d48c62c75510196e0abae551fc02"} Jan 29 06:53:11 crc kubenswrapper[5017]: I0129 06:53:11.385918 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"6d082326-495c-4078-974e-714379243884","Type":"ContainerStarted","Data":"999b2815ecbfda6809422021c729d05da84256a8ca21f081cadf4bf9e49655ea"} Jan 29 06:53:11 crc kubenswrapper[5017]: I0129 06:53:11.385931 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"6d082326-495c-4078-974e-714379243884","Type":"ContainerStarted","Data":"51ab0b4a34c76b32e8b560b2df9c2b6bedc67ec5c4d63d10fdd36cb7f8ef1edc"} Jan 29 06:53:11 crc kubenswrapper[5017]: I0129 06:53:11.396074 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-np6m4"] Jan 29 06:53:11 crc kubenswrapper[5017]: I0129 06:53:11.396777 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-381c-account-create-update-d4zjm" event={"ID":"2179f134-a047-4000-b58b-4755df9f56b7","Type":"ContainerStarted","Data":"a094ae5c3e6508bb7c6c76f4dd0f735ab36bae9ee243bd954a1bdeae7e8886b7"} Jan 29 06:53:11 crc kubenswrapper[5017]: W0129 06:53:11.408268 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb1e28709_36ce_4df4_8ec9_2ac9458b87da.slice/crio-65250481f952d1649d781d3a6eb31da6aaa397b79c96591b2018c7f9a006d9d8 WatchSource:0}: Error finding container 65250481f952d1649d781d3a6eb31da6aaa397b79c96591b2018c7f9a006d9d8: Status 404 returned error can't find the container with id 65250481f952d1649d781d3a6eb31da6aaa397b79c96591b2018c7f9a006d9d8 Jan 29 06:53:11 crc kubenswrapper[5017]: W0129 06:53:11.409135 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3505bbb1_d190_470a_84e7_18e9b3330a2f.slice/crio-46ba0806091e0a50d2c34a14dea8866345f79ca6ad4fa723ba519bf349a73a27 WatchSource:0}: Error finding container 46ba0806091e0a50d2c34a14dea8866345f79ca6ad4fa723ba519bf349a73a27: Status 404 returned error can't find the container with id 46ba0806091e0a50d2c34a14dea8866345f79ca6ad4fa723ba519bf349a73a27 Jan 29 06:53:11 crc kubenswrapper[5017]: I0129 06:53:11.410204 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-gmq8s"] Jan 29 06:53:11 crc kubenswrapper[5017]: I0129 06:53:11.602188 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-01d7-account-create-update-bpmlf"] Jan 29 06:53:11 crc kubenswrapper[5017]: W0129 06:53:11.611912 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8600c4aa_101a_4803_b8c8_7313e2742c6c.slice/crio-96a4528a96d3b5aa0e70eed9b87fdd1ed62245e6fee9980daae0a8ce6bbe4a02 WatchSource:0}: Error finding container 96a4528a96d3b5aa0e70eed9b87fdd1ed62245e6fee9980daae0a8ce6bbe4a02: Status 404 returned error can't find the container with id 
96a4528a96d3b5aa0e70eed9b87fdd1ed62245e6fee9980daae0a8ce6bbe4a02 Jan 29 06:53:11 crc kubenswrapper[5017]: I0129 06:53:11.612767 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-t6sx7"] Jan 29 06:53:11 crc kubenswrapper[5017]: I0129 06:53:11.643467 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-5997q"] Jan 29 06:53:11 crc kubenswrapper[5017]: I0129 06:53:11.651925 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-2d72-account-create-update-cdsn8"] Jan 29 06:53:12 crc kubenswrapper[5017]: I0129 06:53:12.414771 5017 generic.go:334] "Generic (PLEG): container finished" podID="ea1b3f91-cc75-43b7-838d-837b273b3509" containerID="7524bbc873f647c6883ae3bbc0be1f236c76e6c7fba1d892c0445d6d6604fd55" exitCode=0 Jan 29 06:53:12 crc kubenswrapper[5017]: I0129 06:53:12.415251 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-01d7-account-create-update-bpmlf" event={"ID":"ea1b3f91-cc75-43b7-838d-837b273b3509","Type":"ContainerDied","Data":"7524bbc873f647c6883ae3bbc0be1f236c76e6c7fba1d892c0445d6d6604fd55"} Jan 29 06:53:12 crc kubenswrapper[5017]: I0129 06:53:12.415612 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-01d7-account-create-update-bpmlf" event={"ID":"ea1b3f91-cc75-43b7-838d-837b273b3509","Type":"ContainerStarted","Data":"e7ec26d9fcfe7e4167ce0af425b2eec51ee9aff88e357ee30c993562d9ea9a8a"} Jan 29 06:53:12 crc kubenswrapper[5017]: I0129 06:53:12.422410 5017 generic.go:334] "Generic (PLEG): container finished" podID="3505bbb1-d190-470a-84e7-18e9b3330a2f" containerID="7ba63192b2f1d1f17f0843b49c0e7a26e36bba8db193c114f4b86dffcf9c4f57" exitCode=0 Jan 29 06:53:12 crc kubenswrapper[5017]: I0129 06:53:12.422481 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-gmq8s" event={"ID":"3505bbb1-d190-470a-84e7-18e9b3330a2f","Type":"ContainerDied","Data":"7ba63192b2f1d1f17f0843b49c0e7a26e36bba8db193c114f4b86dffcf9c4f57"} Jan 29 06:53:12 crc kubenswrapper[5017]: I0129 06:53:12.422514 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-gmq8s" event={"ID":"3505bbb1-d190-470a-84e7-18e9b3330a2f","Type":"ContainerStarted","Data":"46ba0806091e0a50d2c34a14dea8866345f79ca6ad4fa723ba519bf349a73a27"} Jan 29 06:53:12 crc kubenswrapper[5017]: I0129 06:53:12.427649 5017 generic.go:334] "Generic (PLEG): container finished" podID="d90f5e4e-4114-4f2d-9dc8-ff0b0167cdd1" containerID="6db3967a16e3e221bcaee7e55539ba2cd58d8ae575785bf003c4558191c1cde0" exitCode=0 Jan 29 06:53:12 crc kubenswrapper[5017]: I0129 06:53:12.427819 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-2d72-account-create-update-cdsn8" event={"ID":"d90f5e4e-4114-4f2d-9dc8-ff0b0167cdd1","Type":"ContainerDied","Data":"6db3967a16e3e221bcaee7e55539ba2cd58d8ae575785bf003c4558191c1cde0"} Jan 29 06:53:12 crc kubenswrapper[5017]: I0129 06:53:12.427846 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-2d72-account-create-update-cdsn8" event={"ID":"d90f5e4e-4114-4f2d-9dc8-ff0b0167cdd1","Type":"ContainerStarted","Data":"525ef9ca6f9c0d0f5747b250c55a9af1e805a545fc5fc22e62bfa043c2b6ecd7"} Jan 29 06:53:12 crc kubenswrapper[5017]: I0129 06:53:12.432939 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-np6m4" event={"ID":"b1e28709-36ce-4df4-8ec9-2ac9458b87da","Type":"ContainerStarted","Data":"65250481f952d1649d781d3a6eb31da6aaa397b79c96591b2018c7f9a006d9d8"} 
Jan 29 06:53:12 crc kubenswrapper[5017]: I0129 06:53:12.435107 5017 generic.go:334] "Generic (PLEG): container finished" podID="66c78bbb-a5a9-45e4-8825-a4d05dfa23b3" containerID="c7c3ba3b05749f587e1869bb2967a2104143dfa655977bf2b117fd010ff26d71" exitCode=0 Jan 29 06:53:12 crc kubenswrapper[5017]: I0129 06:53:12.435244 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-5997q" event={"ID":"66c78bbb-a5a9-45e4-8825-a4d05dfa23b3","Type":"ContainerDied","Data":"c7c3ba3b05749f587e1869bb2967a2104143dfa655977bf2b117fd010ff26d71"} Jan 29 06:53:12 crc kubenswrapper[5017]: I0129 06:53:12.435272 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-5997q" event={"ID":"66c78bbb-a5a9-45e4-8825-a4d05dfa23b3","Type":"ContainerStarted","Data":"d38590427b8f85dc4348c5b5e3786e9beb234b9a6d1d20c81c7826082aa6119b"} Jan 29 06:53:12 crc kubenswrapper[5017]: I0129 06:53:12.460470 5017 generic.go:334] "Generic (PLEG): container finished" podID="8600c4aa-101a-4803-b8c8-7313e2742c6c" containerID="05d82a67cf4b6d0a7b95e554fe5b8dc77f21d424eec76141ed3b8b739537b1dc" exitCode=0 Jan 29 06:53:12 crc kubenswrapper[5017]: I0129 06:53:12.460568 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-t6sx7" event={"ID":"8600c4aa-101a-4803-b8c8-7313e2742c6c","Type":"ContainerDied","Data":"05d82a67cf4b6d0a7b95e554fe5b8dc77f21d424eec76141ed3b8b739537b1dc"} Jan 29 06:53:12 crc kubenswrapper[5017]: I0129 06:53:12.460608 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-t6sx7" event={"ID":"8600c4aa-101a-4803-b8c8-7313e2742c6c","Type":"ContainerStarted","Data":"96a4528a96d3b5aa0e70eed9b87fdd1ed62245e6fee9980daae0a8ce6bbe4a02"} Jan 29 06:53:12 crc kubenswrapper[5017]: I0129 06:53:12.464061 5017 generic.go:334] "Generic (PLEG): container finished" podID="2179f134-a047-4000-b58b-4755df9f56b7" containerID="8045cd391fa98d128b172401a18c6cadab6e557db29267f45341f19f2caf8280" exitCode=0 Jan 29 06:53:12 crc kubenswrapper[5017]: I0129 06:53:12.464171 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-381c-account-create-update-d4zjm" event={"ID":"2179f134-a047-4000-b58b-4755df9f56b7","Type":"ContainerDied","Data":"8045cd391fa98d128b172401a18c6cadab6e557db29267f45341f19f2caf8280"} Jan 29 06:53:13 crc kubenswrapper[5017]: I0129 06:53:13.476862 5017 generic.go:334] "Generic (PLEG): container finished" podID="c06828d9-6c4d-4228-adc6-3788f22ae732" containerID="ad39d19c0eae6d091513b0dda8def0c81ac302c26687b45f9f7175e673527e1c" exitCode=0 Jan 29 06:53:13 crc kubenswrapper[5017]: I0129 06:53:13.477028 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-2qnw2" event={"ID":"c06828d9-6c4d-4228-adc6-3788f22ae732","Type":"ContainerDied","Data":"ad39d19c0eae6d091513b0dda8def0c81ac302c26687b45f9f7175e673527e1c"} Jan 29 06:53:13 crc kubenswrapper[5017]: I0129 06:53:13.484867 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"6d082326-495c-4078-974e-714379243884","Type":"ContainerStarted","Data":"3f1da0c6b7bd9861d571d1a8d0e937d175d1e5c1f0e245d87e1db566ec6c9a8c"} Jan 29 06:53:13 crc kubenswrapper[5017]: I0129 06:53:13.484942 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"6d082326-495c-4078-974e-714379243884","Type":"ContainerStarted","Data":"fd46ff39c6745d2a3794eec673948c7c169964d18cc7c9ac4b9f4a6f60045a35"} Jan 29 06:53:13 crc kubenswrapper[5017]: I0129 06:53:13.485024 
5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"6d082326-495c-4078-974e-714379243884","Type":"ContainerStarted","Data":"c652a43e8305c1a9910a6bd5b5ee06ee2fb25e0723a0ebf769650ee3997e0aeb"} Jan 29 06:53:13 crc kubenswrapper[5017]: I0129 06:53:13.485048 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"6d082326-495c-4078-974e-714379243884","Type":"ContainerStarted","Data":"18d811ebf9ab9b398503916cdeff50dc63ec6ab67d8237d525beefff9dcb167e"} Jan 29 06:53:17 crc kubenswrapper[5017]: I0129 06:53:17.105361 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-gmq8s" Jan 29 06:53:17 crc kubenswrapper[5017]: I0129 06:53:17.112009 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-5997q" Jan 29 06:53:17 crc kubenswrapper[5017]: I0129 06:53:17.156632 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-t6sx7" Jan 29 06:53:17 crc kubenswrapper[5017]: I0129 06:53:17.177351 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-2d72-account-create-update-cdsn8" Jan 29 06:53:17 crc kubenswrapper[5017]: I0129 06:53:17.187711 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-381c-account-create-update-d4zjm" Jan 29 06:53:17 crc kubenswrapper[5017]: I0129 06:53:17.201440 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nhx2t\" (UniqueName: \"kubernetes.io/projected/66c78bbb-a5a9-45e4-8825-a4d05dfa23b3-kube-api-access-nhx2t\") pod \"66c78bbb-a5a9-45e4-8825-a4d05dfa23b3\" (UID: \"66c78bbb-a5a9-45e4-8825-a4d05dfa23b3\") " Jan 29 06:53:17 crc kubenswrapper[5017]: I0129 06:53:17.201512 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f5g75\" (UniqueName: \"kubernetes.io/projected/3505bbb1-d190-470a-84e7-18e9b3330a2f-kube-api-access-f5g75\") pod \"3505bbb1-d190-470a-84e7-18e9b3330a2f\" (UID: \"3505bbb1-d190-470a-84e7-18e9b3330a2f\") " Jan 29 06:53:17 crc kubenswrapper[5017]: I0129 06:53:17.201559 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3505bbb1-d190-470a-84e7-18e9b3330a2f-operator-scripts\") pod \"3505bbb1-d190-470a-84e7-18e9b3330a2f\" (UID: \"3505bbb1-d190-470a-84e7-18e9b3330a2f\") " Jan 29 06:53:17 crc kubenswrapper[5017]: I0129 06:53:17.201727 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66c78bbb-a5a9-45e4-8825-a4d05dfa23b3-operator-scripts\") pod \"66c78bbb-a5a9-45e4-8825-a4d05dfa23b3\" (UID: \"66c78bbb-a5a9-45e4-8825-a4d05dfa23b3\") " Jan 29 06:53:17 crc kubenswrapper[5017]: I0129 06:53:17.202884 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/66c78bbb-a5a9-45e4-8825-a4d05dfa23b3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "66c78bbb-a5a9-45e4-8825-a4d05dfa23b3" (UID: "66c78bbb-a5a9-45e4-8825-a4d05dfa23b3"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:53:17 crc kubenswrapper[5017]: I0129 06:53:17.204416 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3505bbb1-d190-470a-84e7-18e9b3330a2f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3505bbb1-d190-470a-84e7-18e9b3330a2f" (UID: "3505bbb1-d190-470a-84e7-18e9b3330a2f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:53:17 crc kubenswrapper[5017]: I0129 06:53:17.209828 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3505bbb1-d190-470a-84e7-18e9b3330a2f-kube-api-access-f5g75" (OuterVolumeSpecName: "kube-api-access-f5g75") pod "3505bbb1-d190-470a-84e7-18e9b3330a2f" (UID: "3505bbb1-d190-470a-84e7-18e9b3330a2f"). InnerVolumeSpecName "kube-api-access-f5g75". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:53:17 crc kubenswrapper[5017]: I0129 06:53:17.210026 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-01d7-account-create-update-bpmlf" Jan 29 06:53:17 crc kubenswrapper[5017]: I0129 06:53:17.211303 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66c78bbb-a5a9-45e4-8825-a4d05dfa23b3-kube-api-access-nhx2t" (OuterVolumeSpecName: "kube-api-access-nhx2t") pod "66c78bbb-a5a9-45e4-8825-a4d05dfa23b3" (UID: "66c78bbb-a5a9-45e4-8825-a4d05dfa23b3"). InnerVolumeSpecName "kube-api-access-nhx2t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:53:17 crc kubenswrapper[5017]: I0129 06:53:17.211350 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-2qnw2" Jan 29 06:53:17 crc kubenswrapper[5017]: I0129 06:53:17.303200 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5hlx8\" (UniqueName: \"kubernetes.io/projected/ea1b3f91-cc75-43b7-838d-837b273b3509-kube-api-access-5hlx8\") pod \"ea1b3f91-cc75-43b7-838d-837b273b3509\" (UID: \"ea1b3f91-cc75-43b7-838d-837b273b3509\") " Jan 29 06:53:17 crc kubenswrapper[5017]: I0129 06:53:17.303282 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c06828d9-6c4d-4228-adc6-3788f22ae732-db-sync-config-data\") pod \"c06828d9-6c4d-4228-adc6-3788f22ae732\" (UID: \"c06828d9-6c4d-4228-adc6-3788f22ae732\") " Jan 29 06:53:17 crc kubenswrapper[5017]: I0129 06:53:17.303332 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hgdz\" (UniqueName: \"kubernetes.io/projected/d90f5e4e-4114-4f2d-9dc8-ff0b0167cdd1-kube-api-access-4hgdz\") pod \"d90f5e4e-4114-4f2d-9dc8-ff0b0167cdd1\" (UID: \"d90f5e4e-4114-4f2d-9dc8-ff0b0167cdd1\") " Jan 29 06:53:17 crc kubenswrapper[5017]: I0129 06:53:17.303388 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c06828d9-6c4d-4228-adc6-3788f22ae732-config-data\") pod \"c06828d9-6c4d-4228-adc6-3788f22ae732\" (UID: \"c06828d9-6c4d-4228-adc6-3788f22ae732\") " Jan 29 06:53:17 crc kubenswrapper[5017]: I0129 06:53:17.303425 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vjjxj\" (UniqueName: \"kubernetes.io/projected/8600c4aa-101a-4803-b8c8-7313e2742c6c-kube-api-access-vjjxj\") pod 
\"8600c4aa-101a-4803-b8c8-7313e2742c6c\" (UID: \"8600c4aa-101a-4803-b8c8-7313e2742c6c\") " Jan 29 06:53:17 crc kubenswrapper[5017]: I0129 06:53:17.303482 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ea1b3f91-cc75-43b7-838d-837b273b3509-operator-scripts\") pod \"ea1b3f91-cc75-43b7-838d-837b273b3509\" (UID: \"ea1b3f91-cc75-43b7-838d-837b273b3509\") " Jan 29 06:53:17 crc kubenswrapper[5017]: I0129 06:53:17.303513 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dp5q7\" (UniqueName: \"kubernetes.io/projected/2179f134-a047-4000-b58b-4755df9f56b7-kube-api-access-dp5q7\") pod \"2179f134-a047-4000-b58b-4755df9f56b7\" (UID: \"2179f134-a047-4000-b58b-4755df9f56b7\") " Jan 29 06:53:17 crc kubenswrapper[5017]: I0129 06:53:17.303577 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2179f134-a047-4000-b58b-4755df9f56b7-operator-scripts\") pod \"2179f134-a047-4000-b58b-4755df9f56b7\" (UID: \"2179f134-a047-4000-b58b-4755df9f56b7\") " Jan 29 06:53:17 crc kubenswrapper[5017]: I0129 06:53:17.303609 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8600c4aa-101a-4803-b8c8-7313e2742c6c-operator-scripts\") pod \"8600c4aa-101a-4803-b8c8-7313e2742c6c\" (UID: \"8600c4aa-101a-4803-b8c8-7313e2742c6c\") " Jan 29 06:53:17 crc kubenswrapper[5017]: I0129 06:53:17.303664 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c06828d9-6c4d-4228-adc6-3788f22ae732-combined-ca-bundle\") pod \"c06828d9-6c4d-4228-adc6-3788f22ae732\" (UID: \"c06828d9-6c4d-4228-adc6-3788f22ae732\") " Jan 29 06:53:17 crc kubenswrapper[5017]: I0129 06:53:17.303748 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v8jgl\" (UniqueName: \"kubernetes.io/projected/c06828d9-6c4d-4228-adc6-3788f22ae732-kube-api-access-v8jgl\") pod \"c06828d9-6c4d-4228-adc6-3788f22ae732\" (UID: \"c06828d9-6c4d-4228-adc6-3788f22ae732\") " Jan 29 06:53:17 crc kubenswrapper[5017]: I0129 06:53:17.303810 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d90f5e4e-4114-4f2d-9dc8-ff0b0167cdd1-operator-scripts\") pod \"d90f5e4e-4114-4f2d-9dc8-ff0b0167cdd1\" (UID: \"d90f5e4e-4114-4f2d-9dc8-ff0b0167cdd1\") " Jan 29 06:53:17 crc kubenswrapper[5017]: I0129 06:53:17.304213 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2179f134-a047-4000-b58b-4755df9f56b7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2179f134-a047-4000-b58b-4755df9f56b7" (UID: "2179f134-a047-4000-b58b-4755df9f56b7"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:53:17 crc kubenswrapper[5017]: I0129 06:53:17.304251 5017 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66c78bbb-a5a9-45e4-8825-a4d05dfa23b3-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 06:53:17 crc kubenswrapper[5017]: I0129 06:53:17.304433 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nhx2t\" (UniqueName: \"kubernetes.io/projected/66c78bbb-a5a9-45e4-8825-a4d05dfa23b3-kube-api-access-nhx2t\") on node \"crc\" DevicePath \"\"" Jan 29 06:53:17 crc kubenswrapper[5017]: I0129 06:53:17.304648 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f5g75\" (UniqueName: \"kubernetes.io/projected/3505bbb1-d190-470a-84e7-18e9b3330a2f-kube-api-access-f5g75\") on node \"crc\" DevicePath \"\"" Jan 29 06:53:17 crc kubenswrapper[5017]: I0129 06:53:17.304715 5017 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3505bbb1-d190-470a-84e7-18e9b3330a2f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 06:53:17 crc kubenswrapper[5017]: I0129 06:53:17.304756 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d90f5e4e-4114-4f2d-9dc8-ff0b0167cdd1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d90f5e4e-4114-4f2d-9dc8-ff0b0167cdd1" (UID: "d90f5e4e-4114-4f2d-9dc8-ff0b0167cdd1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:53:17 crc kubenswrapper[5017]: I0129 06:53:17.305127 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea1b3f91-cc75-43b7-838d-837b273b3509-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ea1b3f91-cc75-43b7-838d-837b273b3509" (UID: "ea1b3f91-cc75-43b7-838d-837b273b3509"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:53:17 crc kubenswrapper[5017]: I0129 06:53:17.306213 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8600c4aa-101a-4803-b8c8-7313e2742c6c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8600c4aa-101a-4803-b8c8-7313e2742c6c" (UID: "8600c4aa-101a-4803-b8c8-7313e2742c6c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:53:17 crc kubenswrapper[5017]: I0129 06:53:17.309650 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c06828d9-6c4d-4228-adc6-3788f22ae732-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "c06828d9-6c4d-4228-adc6-3788f22ae732" (UID: "c06828d9-6c4d-4228-adc6-3788f22ae732"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:53:17 crc kubenswrapper[5017]: I0129 06:53:17.310555 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2179f134-a047-4000-b58b-4755df9f56b7-kube-api-access-dp5q7" (OuterVolumeSpecName: "kube-api-access-dp5q7") pod "2179f134-a047-4000-b58b-4755df9f56b7" (UID: "2179f134-a047-4000-b58b-4755df9f56b7"). InnerVolumeSpecName "kube-api-access-dp5q7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:53:17 crc kubenswrapper[5017]: I0129 06:53:17.311367 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea1b3f91-cc75-43b7-838d-837b273b3509-kube-api-access-5hlx8" (OuterVolumeSpecName: "kube-api-access-5hlx8") pod "ea1b3f91-cc75-43b7-838d-837b273b3509" (UID: "ea1b3f91-cc75-43b7-838d-837b273b3509"). InnerVolumeSpecName "kube-api-access-5hlx8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:53:17 crc kubenswrapper[5017]: I0129 06:53:17.312416 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8600c4aa-101a-4803-b8c8-7313e2742c6c-kube-api-access-vjjxj" (OuterVolumeSpecName: "kube-api-access-vjjxj") pod "8600c4aa-101a-4803-b8c8-7313e2742c6c" (UID: "8600c4aa-101a-4803-b8c8-7313e2742c6c"). InnerVolumeSpecName "kube-api-access-vjjxj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:53:17 crc kubenswrapper[5017]: I0129 06:53:17.313525 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d90f5e4e-4114-4f2d-9dc8-ff0b0167cdd1-kube-api-access-4hgdz" (OuterVolumeSpecName: "kube-api-access-4hgdz") pod "d90f5e4e-4114-4f2d-9dc8-ff0b0167cdd1" (UID: "d90f5e4e-4114-4f2d-9dc8-ff0b0167cdd1"). InnerVolumeSpecName "kube-api-access-4hgdz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:53:17 crc kubenswrapper[5017]: I0129 06:53:17.314226 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c06828d9-6c4d-4228-adc6-3788f22ae732-kube-api-access-v8jgl" (OuterVolumeSpecName: "kube-api-access-v8jgl") pod "c06828d9-6c4d-4228-adc6-3788f22ae732" (UID: "c06828d9-6c4d-4228-adc6-3788f22ae732"). InnerVolumeSpecName "kube-api-access-v8jgl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:53:17 crc kubenswrapper[5017]: I0129 06:53:17.336504 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c06828d9-6c4d-4228-adc6-3788f22ae732-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c06828d9-6c4d-4228-adc6-3788f22ae732" (UID: "c06828d9-6c4d-4228-adc6-3788f22ae732"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:53:17 crc kubenswrapper[5017]: I0129 06:53:17.374367 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c06828d9-6c4d-4228-adc6-3788f22ae732-config-data" (OuterVolumeSpecName: "config-data") pod "c06828d9-6c4d-4228-adc6-3788f22ae732" (UID: "c06828d9-6c4d-4228-adc6-3788f22ae732"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:53:17 crc kubenswrapper[5017]: I0129 06:53:17.406248 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4hgdz\" (UniqueName: \"kubernetes.io/projected/d90f5e4e-4114-4f2d-9dc8-ff0b0167cdd1-kube-api-access-4hgdz\") on node \"crc\" DevicePath \"\"" Jan 29 06:53:17 crc kubenswrapper[5017]: I0129 06:53:17.406290 5017 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c06828d9-6c4d-4228-adc6-3788f22ae732-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 06:53:17 crc kubenswrapper[5017]: I0129 06:53:17.406302 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vjjxj\" (UniqueName: \"kubernetes.io/projected/8600c4aa-101a-4803-b8c8-7313e2742c6c-kube-api-access-vjjxj\") on node \"crc\" DevicePath \"\"" Jan 29 06:53:17 crc kubenswrapper[5017]: I0129 06:53:17.406314 5017 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ea1b3f91-cc75-43b7-838d-837b273b3509-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 06:53:17 crc kubenswrapper[5017]: I0129 06:53:17.406328 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dp5q7\" (UniqueName: \"kubernetes.io/projected/2179f134-a047-4000-b58b-4755df9f56b7-kube-api-access-dp5q7\") on node \"crc\" DevicePath \"\"" Jan 29 06:53:17 crc kubenswrapper[5017]: I0129 06:53:17.406339 5017 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2179f134-a047-4000-b58b-4755df9f56b7-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 06:53:17 crc kubenswrapper[5017]: I0129 06:53:17.406351 5017 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8600c4aa-101a-4803-b8c8-7313e2742c6c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 06:53:17 crc kubenswrapper[5017]: I0129 06:53:17.406361 5017 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c06828d9-6c4d-4228-adc6-3788f22ae732-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 06:53:17 crc kubenswrapper[5017]: I0129 06:53:17.406371 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v8jgl\" (UniqueName: \"kubernetes.io/projected/c06828d9-6c4d-4228-adc6-3788f22ae732-kube-api-access-v8jgl\") on node \"crc\" DevicePath \"\"" Jan 29 06:53:17 crc kubenswrapper[5017]: I0129 06:53:17.406381 5017 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d90f5e4e-4114-4f2d-9dc8-ff0b0167cdd1-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 06:53:17 crc kubenswrapper[5017]: I0129 06:53:17.406395 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5hlx8\" (UniqueName: \"kubernetes.io/projected/ea1b3f91-cc75-43b7-838d-837b273b3509-kube-api-access-5hlx8\") on node \"crc\" DevicePath \"\"" Jan 29 06:53:17 crc kubenswrapper[5017]: I0129 06:53:17.406406 5017 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c06828d9-6c4d-4228-adc6-3788f22ae732-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 06:53:17 crc kubenswrapper[5017]: I0129 06:53:17.562429 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-np6m4" 
event={"ID":"b1e28709-36ce-4df4-8ec9-2ac9458b87da","Type":"ContainerStarted","Data":"fc8181badbf22d0a0ca2a38752e9a72d9fc5dfd63d92985c7fd235fc34836d5a"} Jan 29 06:53:17 crc kubenswrapper[5017]: I0129 06:53:17.580595 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-5997q" event={"ID":"66c78bbb-a5a9-45e4-8825-a4d05dfa23b3","Type":"ContainerDied","Data":"d38590427b8f85dc4348c5b5e3786e9beb234b9a6d1d20c81c7826082aa6119b"} Jan 29 06:53:17 crc kubenswrapper[5017]: I0129 06:53:17.580654 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d38590427b8f85dc4348c5b5e3786e9beb234b9a6d1d20c81c7826082aa6119b" Jan 29 06:53:17 crc kubenswrapper[5017]: I0129 06:53:17.580866 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-5997q" Jan 29 06:53:17 crc kubenswrapper[5017]: I0129 06:53:17.594831 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-t6sx7" event={"ID":"8600c4aa-101a-4803-b8c8-7313e2742c6c","Type":"ContainerDied","Data":"96a4528a96d3b5aa0e70eed9b87fdd1ed62245e6fee9980daae0a8ce6bbe4a02"} Jan 29 06:53:17 crc kubenswrapper[5017]: I0129 06:53:17.594876 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="96a4528a96d3b5aa0e70eed9b87fdd1ed62245e6fee9980daae0a8ce6bbe4a02" Jan 29 06:53:17 crc kubenswrapper[5017]: I0129 06:53:17.594948 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-t6sx7" Jan 29 06:53:17 crc kubenswrapper[5017]: I0129 06:53:17.595949 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-np6m4" podStartSLOduration=5.086206804 podStartE2EDuration="10.59592699s" podCreationTimestamp="2026-01-29 06:53:07 +0000 UTC" firstStartedPulling="2026-01-29 06:53:11.413698094 +0000 UTC m=+1077.788145704" lastFinishedPulling="2026-01-29 06:53:16.92341828 +0000 UTC m=+1083.297865890" observedRunningTime="2026-01-29 06:53:17.589582184 +0000 UTC m=+1083.964029794" watchObservedRunningTime="2026-01-29 06:53:17.59592699 +0000 UTC m=+1083.970374590" Jan 29 06:53:17 crc kubenswrapper[5017]: I0129 06:53:17.615977 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-gmq8s" event={"ID":"3505bbb1-d190-470a-84e7-18e9b3330a2f","Type":"ContainerDied","Data":"46ba0806091e0a50d2c34a14dea8866345f79ca6ad4fa723ba519bf349a73a27"} Jan 29 06:53:17 crc kubenswrapper[5017]: I0129 06:53:17.616003 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-gmq8s" Jan 29 06:53:17 crc kubenswrapper[5017]: I0129 06:53:17.616020 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="46ba0806091e0a50d2c34a14dea8866345f79ca6ad4fa723ba519bf349a73a27" Jan 29 06:53:17 crc kubenswrapper[5017]: I0129 06:53:17.630909 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-01d7-account-create-update-bpmlf" event={"ID":"ea1b3f91-cc75-43b7-838d-837b273b3509","Type":"ContainerDied","Data":"e7ec26d9fcfe7e4167ce0af425b2eec51ee9aff88e357ee30c993562d9ea9a8a"} Jan 29 06:53:17 crc kubenswrapper[5017]: I0129 06:53:17.630971 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e7ec26d9fcfe7e4167ce0af425b2eec51ee9aff88e357ee30c993562d9ea9a8a" Jan 29 06:53:17 crc kubenswrapper[5017]: I0129 06:53:17.631041 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-01d7-account-create-update-bpmlf" Jan 29 06:53:17 crc kubenswrapper[5017]: I0129 06:53:17.641323 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-2qnw2" event={"ID":"c06828d9-6c4d-4228-adc6-3788f22ae732","Type":"ContainerDied","Data":"22e58967521156eea427d7e0c0c5d6385477761824e6da4b0838f191913aaa57"} Jan 29 06:53:17 crc kubenswrapper[5017]: I0129 06:53:17.641361 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="22e58967521156eea427d7e0c0c5d6385477761824e6da4b0838f191913aaa57" Jan 29 06:53:17 crc kubenswrapper[5017]: I0129 06:53:17.641421 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-2qnw2" Jan 29 06:53:17 crc kubenswrapper[5017]: I0129 06:53:17.662533 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-2d72-account-create-update-cdsn8" event={"ID":"d90f5e4e-4114-4f2d-9dc8-ff0b0167cdd1","Type":"ContainerDied","Data":"525ef9ca6f9c0d0f5747b250c55a9af1e805a545fc5fc22e62bfa043c2b6ecd7"} Jan 29 06:53:17 crc kubenswrapper[5017]: I0129 06:53:17.662576 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="525ef9ca6f9c0d0f5747b250c55a9af1e805a545fc5fc22e62bfa043c2b6ecd7" Jan 29 06:53:17 crc kubenswrapper[5017]: I0129 06:53:17.662668 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-2d72-account-create-update-cdsn8" Jan 29 06:53:17 crc kubenswrapper[5017]: I0129 06:53:17.672746 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-381c-account-create-update-d4zjm" event={"ID":"2179f134-a047-4000-b58b-4755df9f56b7","Type":"ContainerDied","Data":"a094ae5c3e6508bb7c6c76f4dd0f735ab36bae9ee243bd954a1bdeae7e8886b7"} Jan 29 06:53:17 crc kubenswrapper[5017]: I0129 06:53:17.672799 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a094ae5c3e6508bb7c6c76f4dd0f735ab36bae9ee243bd954a1bdeae7e8886b7" Jan 29 06:53:17 crc kubenswrapper[5017]: I0129 06:53:17.672871 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-381c-account-create-update-d4zjm" Jan 29 06:53:18 crc kubenswrapper[5017]: I0129 06:53:18.724872 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6bfd654465-8mrhp"] Jan 29 06:53:18 crc kubenswrapper[5017]: E0129 06:53:18.725746 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66c78bbb-a5a9-45e4-8825-a4d05dfa23b3" containerName="mariadb-database-create" Jan 29 06:53:18 crc kubenswrapper[5017]: I0129 06:53:18.725827 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="66c78bbb-a5a9-45e4-8825-a4d05dfa23b3" containerName="mariadb-database-create" Jan 29 06:53:18 crc kubenswrapper[5017]: E0129 06:53:18.725848 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c06828d9-6c4d-4228-adc6-3788f22ae732" containerName="glance-db-sync" Jan 29 06:53:18 crc kubenswrapper[5017]: I0129 06:53:18.725857 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="c06828d9-6c4d-4228-adc6-3788f22ae732" containerName="glance-db-sync" Jan 29 06:53:18 crc kubenswrapper[5017]: E0129 06:53:18.725873 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2179f134-a047-4000-b58b-4755df9f56b7" containerName="mariadb-account-create-update" Jan 29 06:53:18 crc kubenswrapper[5017]: I0129 06:53:18.725879 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="2179f134-a047-4000-b58b-4755df9f56b7" containerName="mariadb-account-create-update" Jan 29 06:53:18 crc kubenswrapper[5017]: E0129 06:53:18.725896 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea1b3f91-cc75-43b7-838d-837b273b3509" containerName="mariadb-account-create-update" Jan 29 06:53:18 crc kubenswrapper[5017]: I0129 06:53:18.725902 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea1b3f91-cc75-43b7-838d-837b273b3509" containerName="mariadb-account-create-update" Jan 29 06:53:18 crc kubenswrapper[5017]: E0129 06:53:18.725911 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8600c4aa-101a-4803-b8c8-7313e2742c6c" containerName="mariadb-database-create" Jan 29 06:53:18 crc kubenswrapper[5017]: I0129 06:53:18.725918 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="8600c4aa-101a-4803-b8c8-7313e2742c6c" containerName="mariadb-database-create" Jan 29 06:53:18 crc kubenswrapper[5017]: E0129 06:53:18.725938 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3505bbb1-d190-470a-84e7-18e9b3330a2f" containerName="mariadb-database-create" Jan 29 06:53:18 crc kubenswrapper[5017]: I0129 06:53:18.725944 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="3505bbb1-d190-470a-84e7-18e9b3330a2f" containerName="mariadb-database-create" Jan 29 06:53:18 crc kubenswrapper[5017]: E0129 06:53:18.725990 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d90f5e4e-4114-4f2d-9dc8-ff0b0167cdd1" containerName="mariadb-account-create-update" Jan 29 06:53:18 crc kubenswrapper[5017]: I0129 06:53:18.725996 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="d90f5e4e-4114-4f2d-9dc8-ff0b0167cdd1" containerName="mariadb-account-create-update" Jan 29 06:53:18 crc kubenswrapper[5017]: I0129 06:53:18.726172 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="8600c4aa-101a-4803-b8c8-7313e2742c6c" containerName="mariadb-database-create" Jan 29 06:53:18 crc kubenswrapper[5017]: I0129 06:53:18.726187 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="66c78bbb-a5a9-45e4-8825-a4d05dfa23b3" 
containerName="mariadb-database-create" Jan 29 06:53:18 crc kubenswrapper[5017]: I0129 06:53:18.726197 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="3505bbb1-d190-470a-84e7-18e9b3330a2f" containerName="mariadb-database-create" Jan 29 06:53:18 crc kubenswrapper[5017]: I0129 06:53:18.726210 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="c06828d9-6c4d-4228-adc6-3788f22ae732" containerName="glance-db-sync" Jan 29 06:53:18 crc kubenswrapper[5017]: I0129 06:53:18.726218 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="d90f5e4e-4114-4f2d-9dc8-ff0b0167cdd1" containerName="mariadb-account-create-update" Jan 29 06:53:18 crc kubenswrapper[5017]: I0129 06:53:18.726226 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea1b3f91-cc75-43b7-838d-837b273b3509" containerName="mariadb-account-create-update" Jan 29 06:53:18 crc kubenswrapper[5017]: I0129 06:53:18.726236 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="2179f134-a047-4000-b58b-4755df9f56b7" containerName="mariadb-account-create-update" Jan 29 06:53:18 crc kubenswrapper[5017]: I0129 06:53:18.727170 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bfd654465-8mrhp" Jan 29 06:53:18 crc kubenswrapper[5017]: I0129 06:53:18.761265 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bfd654465-8mrhp"] Jan 29 06:53:18 crc kubenswrapper[5017]: I0129 06:53:18.836764 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a0583a3e-d131-40df-b122-39cb73579f76-ovsdbserver-sb\") pod \"dnsmasq-dns-6bfd654465-8mrhp\" (UID: \"a0583a3e-d131-40df-b122-39cb73579f76\") " pod="openstack/dnsmasq-dns-6bfd654465-8mrhp" Jan 29 06:53:18 crc kubenswrapper[5017]: I0129 06:53:18.837028 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgmrj\" (UniqueName: \"kubernetes.io/projected/a0583a3e-d131-40df-b122-39cb73579f76-kube-api-access-pgmrj\") pod \"dnsmasq-dns-6bfd654465-8mrhp\" (UID: \"a0583a3e-d131-40df-b122-39cb73579f76\") " pod="openstack/dnsmasq-dns-6bfd654465-8mrhp" Jan 29 06:53:18 crc kubenswrapper[5017]: I0129 06:53:18.837207 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a0583a3e-d131-40df-b122-39cb73579f76-dns-svc\") pod \"dnsmasq-dns-6bfd654465-8mrhp\" (UID: \"a0583a3e-d131-40df-b122-39cb73579f76\") " pod="openstack/dnsmasq-dns-6bfd654465-8mrhp" Jan 29 06:53:18 crc kubenswrapper[5017]: I0129 06:53:18.837276 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0583a3e-d131-40df-b122-39cb73579f76-config\") pod \"dnsmasq-dns-6bfd654465-8mrhp\" (UID: \"a0583a3e-d131-40df-b122-39cb73579f76\") " pod="openstack/dnsmasq-dns-6bfd654465-8mrhp" Jan 29 06:53:18 crc kubenswrapper[5017]: I0129 06:53:18.837346 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a0583a3e-d131-40df-b122-39cb73579f76-ovsdbserver-nb\") pod \"dnsmasq-dns-6bfd654465-8mrhp\" (UID: \"a0583a3e-d131-40df-b122-39cb73579f76\") " pod="openstack/dnsmasq-dns-6bfd654465-8mrhp" Jan 29 06:53:18 crc kubenswrapper[5017]: I0129 06:53:18.841539 5017 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"6d082326-495c-4078-974e-714379243884","Type":"ContainerStarted","Data":"54137e09fc7407811fd958b7fb19f8b5d07304823df63ae17c627a6330f09db1"} Jan 29 06:53:18 crc kubenswrapper[5017]: I0129 06:53:18.841589 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"6d082326-495c-4078-974e-714379243884","Type":"ContainerStarted","Data":"b9a4e6b5332e31b3b4226cbbd2d49f4c6e57429f3879656243b9be909902ea39"} Jan 29 06:53:18 crc kubenswrapper[5017]: I0129 06:53:18.841600 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"6d082326-495c-4078-974e-714379243884","Type":"ContainerStarted","Data":"ffd302baed269f819a892350ed39e7c2a2d614097fd561de04df743407c07d40"} Jan 29 06:53:18 crc kubenswrapper[5017]: I0129 06:53:18.939177 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a0583a3e-d131-40df-b122-39cb73579f76-ovsdbserver-sb\") pod \"dnsmasq-dns-6bfd654465-8mrhp\" (UID: \"a0583a3e-d131-40df-b122-39cb73579f76\") " pod="openstack/dnsmasq-dns-6bfd654465-8mrhp" Jan 29 06:53:18 crc kubenswrapper[5017]: I0129 06:53:18.939320 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pgmrj\" (UniqueName: \"kubernetes.io/projected/a0583a3e-d131-40df-b122-39cb73579f76-kube-api-access-pgmrj\") pod \"dnsmasq-dns-6bfd654465-8mrhp\" (UID: \"a0583a3e-d131-40df-b122-39cb73579f76\") " pod="openstack/dnsmasq-dns-6bfd654465-8mrhp" Jan 29 06:53:18 crc kubenswrapper[5017]: I0129 06:53:18.939380 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a0583a3e-d131-40df-b122-39cb73579f76-dns-svc\") pod \"dnsmasq-dns-6bfd654465-8mrhp\" (UID: \"a0583a3e-d131-40df-b122-39cb73579f76\") " pod="openstack/dnsmasq-dns-6bfd654465-8mrhp" Jan 29 06:53:18 crc kubenswrapper[5017]: I0129 06:53:18.939402 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0583a3e-d131-40df-b122-39cb73579f76-config\") pod \"dnsmasq-dns-6bfd654465-8mrhp\" (UID: \"a0583a3e-d131-40df-b122-39cb73579f76\") " pod="openstack/dnsmasq-dns-6bfd654465-8mrhp" Jan 29 06:53:18 crc kubenswrapper[5017]: I0129 06:53:18.939425 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a0583a3e-d131-40df-b122-39cb73579f76-ovsdbserver-nb\") pod \"dnsmasq-dns-6bfd654465-8mrhp\" (UID: \"a0583a3e-d131-40df-b122-39cb73579f76\") " pod="openstack/dnsmasq-dns-6bfd654465-8mrhp" Jan 29 06:53:18 crc kubenswrapper[5017]: I0129 06:53:18.940442 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a0583a3e-d131-40df-b122-39cb73579f76-ovsdbserver-sb\") pod \"dnsmasq-dns-6bfd654465-8mrhp\" (UID: \"a0583a3e-d131-40df-b122-39cb73579f76\") " pod="openstack/dnsmasq-dns-6bfd654465-8mrhp" Jan 29 06:53:18 crc kubenswrapper[5017]: I0129 06:53:18.940533 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a0583a3e-d131-40df-b122-39cb73579f76-dns-svc\") pod \"dnsmasq-dns-6bfd654465-8mrhp\" (UID: \"a0583a3e-d131-40df-b122-39cb73579f76\") " pod="openstack/dnsmasq-dns-6bfd654465-8mrhp" Jan 29 06:53:18 crc 
kubenswrapper[5017]: I0129 06:53:18.940616 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a0583a3e-d131-40df-b122-39cb73579f76-ovsdbserver-nb\") pod \"dnsmasq-dns-6bfd654465-8mrhp\" (UID: \"a0583a3e-d131-40df-b122-39cb73579f76\") " pod="openstack/dnsmasq-dns-6bfd654465-8mrhp" Jan 29 06:53:18 crc kubenswrapper[5017]: I0129 06:53:18.940736 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0583a3e-d131-40df-b122-39cb73579f76-config\") pod \"dnsmasq-dns-6bfd654465-8mrhp\" (UID: \"a0583a3e-d131-40df-b122-39cb73579f76\") " pod="openstack/dnsmasq-dns-6bfd654465-8mrhp" Jan 29 06:53:18 crc kubenswrapper[5017]: I0129 06:53:18.981170 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pgmrj\" (UniqueName: \"kubernetes.io/projected/a0583a3e-d131-40df-b122-39cb73579f76-kube-api-access-pgmrj\") pod \"dnsmasq-dns-6bfd654465-8mrhp\" (UID: \"a0583a3e-d131-40df-b122-39cb73579f76\") " pod="openstack/dnsmasq-dns-6bfd654465-8mrhp" Jan 29 06:53:19 crc kubenswrapper[5017]: I0129 06:53:19.074211 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bfd654465-8mrhp" Jan 29 06:53:19 crc kubenswrapper[5017]: I0129 06:53:19.540674 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bfd654465-8mrhp"] Jan 29 06:53:19 crc kubenswrapper[5017]: I0129 06:53:19.852436 5017 generic.go:334] "Generic (PLEG): container finished" podID="a0583a3e-d131-40df-b122-39cb73579f76" containerID="1c205268a19051c0297b504fcfe7be6dcc5b334c659dae38b0e81ea43bda92c4" exitCode=0 Jan 29 06:53:19 crc kubenswrapper[5017]: I0129 06:53:19.853096 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bfd654465-8mrhp" event={"ID":"a0583a3e-d131-40df-b122-39cb73579f76","Type":"ContainerDied","Data":"1c205268a19051c0297b504fcfe7be6dcc5b334c659dae38b0e81ea43bda92c4"} Jan 29 06:53:19 crc kubenswrapper[5017]: I0129 06:53:19.853130 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bfd654465-8mrhp" event={"ID":"a0583a3e-d131-40df-b122-39cb73579f76","Type":"ContainerStarted","Data":"671d4358cf586127785706f0f87addc92c6c6d3edf9363ff45b5ccea1684647c"} Jan 29 06:53:19 crc kubenswrapper[5017]: I0129 06:53:19.877740 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"6d082326-495c-4078-974e-714379243884","Type":"ContainerStarted","Data":"eea4192399327817cb3d1432e2e02524e5b714a0b135c1bb086ec0b7a261cb45"} Jan 29 06:53:19 crc kubenswrapper[5017]: I0129 06:53:19.877820 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"6d082326-495c-4078-974e-714379243884","Type":"ContainerStarted","Data":"bde85bd88a643b945fa8485a5e4e6b486a320b6804c92a8b2d9a9aa7bebed684"} Jan 29 06:53:19 crc kubenswrapper[5017]: I0129 06:53:19.877833 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"6d082326-495c-4078-974e-714379243884","Type":"ContainerStarted","Data":"1a932253de89bf41d237a07b2d680b87b0b80d0fe5212e5b251a5b4e577d88f0"} Jan 29 06:53:20 crc kubenswrapper[5017]: I0129 06:53:20.888634 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bfd654465-8mrhp" 
event={"ID":"a0583a3e-d131-40df-b122-39cb73579f76","Type":"ContainerStarted","Data":"9b166609f4efc0f2bf25857ddfa4a7f61072df5328c242a1252a0b16887d7d93"}
Jan 29 06:53:20 crc kubenswrapper[5017]: I0129 06:53:20.888986 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6bfd654465-8mrhp"
Jan 29 06:53:20 crc kubenswrapper[5017]: I0129 06:53:20.896659 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"6d082326-495c-4078-974e-714379243884","Type":"ContainerStarted","Data":"a2aa1b791b58fdf107ef7908d8702f09965cf473d76f85f3f451236315223345"}
Jan 29 06:53:20 crc kubenswrapper[5017]: I0129 06:53:20.914840 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6bfd654465-8mrhp" podStartSLOduration=2.914813547 podStartE2EDuration="2.914813547s" podCreationTimestamp="2026-01-29 06:53:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:53:20.913370912 +0000 UTC m=+1087.287818542" watchObservedRunningTime="2026-01-29 06:53:20.914813547 +0000 UTC m=+1087.289261157"
Jan 29 06:53:20 crc kubenswrapper[5017]: I0129 06:53:20.958567 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=37.686250738 podStartE2EDuration="50.958542539s" podCreationTimestamp="2026-01-29 06:52:30 +0000 UTC" firstStartedPulling="2026-01-29 06:53:04.575263338 +0000 UTC m=+1070.949710948" lastFinishedPulling="2026-01-29 06:53:17.847555149 +0000 UTC m=+1084.222002749" observedRunningTime="2026-01-29 06:53:20.951726533 +0000 UTC m=+1087.326174153" watchObservedRunningTime="2026-01-29 06:53:20.958542539 +0000 UTC m=+1087.332990149"
Jan 29 06:53:21 crc kubenswrapper[5017]: I0129 06:53:21.261313 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bfd654465-8mrhp"]
Jan 29 06:53:21 crc kubenswrapper[5017]: I0129 06:53:21.296592 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-74dfc89d77-4jvlb"]
Jan 29 06:53:21 crc kubenswrapper[5017]: I0129 06:53:21.299019 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74dfc89d77-4jvlb"
Jan 29 06:53:21 crc kubenswrapper[5017]: I0129 06:53:21.302039 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0"
Jan 29 06:53:21 crc kubenswrapper[5017]: I0129 06:53:21.311737 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74dfc89d77-4jvlb"]
Jan 29 06:53:21 crc kubenswrapper[5017]: I0129 06:53:21.393941 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjhln\" (UniqueName: \"kubernetes.io/projected/b51f16b0-5c41-45c9-b558-4487b8b05874-kube-api-access-zjhln\") pod \"dnsmasq-dns-74dfc89d77-4jvlb\" (UID: \"b51f16b0-5c41-45c9-b558-4487b8b05874\") " pod="openstack/dnsmasq-dns-74dfc89d77-4jvlb"
Jan 29 06:53:21 crc kubenswrapper[5017]: I0129 06:53:21.393997 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b51f16b0-5c41-45c9-b558-4487b8b05874-ovsdbserver-nb\") pod \"dnsmasq-dns-74dfc89d77-4jvlb\" (UID: \"b51f16b0-5c41-45c9-b558-4487b8b05874\") " pod="openstack/dnsmasq-dns-74dfc89d77-4jvlb"
Jan 29 06:53:21 crc kubenswrapper[5017]: I0129 06:53:21.394032 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b51f16b0-5c41-45c9-b558-4487b8b05874-dns-swift-storage-0\") pod \"dnsmasq-dns-74dfc89d77-4jvlb\" (UID: \"b51f16b0-5c41-45c9-b558-4487b8b05874\") " pod="openstack/dnsmasq-dns-74dfc89d77-4jvlb"
Jan 29 06:53:21 crc kubenswrapper[5017]: I0129 06:53:21.394076 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b51f16b0-5c41-45c9-b558-4487b8b05874-dns-svc\") pod \"dnsmasq-dns-74dfc89d77-4jvlb\" (UID: \"b51f16b0-5c41-45c9-b558-4487b8b05874\") " pod="openstack/dnsmasq-dns-74dfc89d77-4jvlb"
Jan 29 06:53:21 crc kubenswrapper[5017]: I0129 06:53:21.394096 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b51f16b0-5c41-45c9-b558-4487b8b05874-config\") pod \"dnsmasq-dns-74dfc89d77-4jvlb\" (UID: \"b51f16b0-5c41-45c9-b558-4487b8b05874\") " pod="openstack/dnsmasq-dns-74dfc89d77-4jvlb"
Jan 29 06:53:21 crc kubenswrapper[5017]: I0129 06:53:21.394125 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b51f16b0-5c41-45c9-b558-4487b8b05874-ovsdbserver-sb\") pod \"dnsmasq-dns-74dfc89d77-4jvlb\" (UID: \"b51f16b0-5c41-45c9-b558-4487b8b05874\") " pod="openstack/dnsmasq-dns-74dfc89d77-4jvlb"
Jan 29 06:53:21 crc kubenswrapper[5017]: I0129 06:53:21.495849 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjhln\" (UniqueName: \"kubernetes.io/projected/b51f16b0-5c41-45c9-b558-4487b8b05874-kube-api-access-zjhln\") pod \"dnsmasq-dns-74dfc89d77-4jvlb\" (UID: \"b51f16b0-5c41-45c9-b558-4487b8b05874\") " pod="openstack/dnsmasq-dns-74dfc89d77-4jvlb"
Jan 29 06:53:21 crc kubenswrapper[5017]: I0129 06:53:21.495907 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b51f16b0-5c41-45c9-b558-4487b8b05874-ovsdbserver-nb\") pod \"dnsmasq-dns-74dfc89d77-4jvlb\" (UID: \"b51f16b0-5c41-45c9-b558-4487b8b05874\") " pod="openstack/dnsmasq-dns-74dfc89d77-4jvlb"
Jan 29 06:53:21 crc kubenswrapper[5017]: I0129 06:53:21.495941 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b51f16b0-5c41-45c9-b558-4487b8b05874-dns-swift-storage-0\") pod \"dnsmasq-dns-74dfc89d77-4jvlb\" (UID: \"b51f16b0-5c41-45c9-b558-4487b8b05874\") " pod="openstack/dnsmasq-dns-74dfc89d77-4jvlb"
Jan 29 06:53:21 crc kubenswrapper[5017]: I0129 06:53:21.496004 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b51f16b0-5c41-45c9-b558-4487b8b05874-dns-svc\") pod \"dnsmasq-dns-74dfc89d77-4jvlb\" (UID: \"b51f16b0-5c41-45c9-b558-4487b8b05874\") " pod="openstack/dnsmasq-dns-74dfc89d77-4jvlb"
Jan 29 06:53:21 crc kubenswrapper[5017]: I0129 06:53:21.496029 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b51f16b0-5c41-45c9-b558-4487b8b05874-config\") pod \"dnsmasq-dns-74dfc89d77-4jvlb\" (UID: \"b51f16b0-5c41-45c9-b558-4487b8b05874\") " pod="openstack/dnsmasq-dns-74dfc89d77-4jvlb"
Jan 29 06:53:21 crc kubenswrapper[5017]: I0129 06:53:21.496062 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b51f16b0-5c41-45c9-b558-4487b8b05874-ovsdbserver-sb\") pod \"dnsmasq-dns-74dfc89d77-4jvlb\" (UID: \"b51f16b0-5c41-45c9-b558-4487b8b05874\") " pod="openstack/dnsmasq-dns-74dfc89d77-4jvlb"
Jan 29 06:53:21 crc kubenswrapper[5017]: I0129 06:53:21.496998 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b51f16b0-5c41-45c9-b558-4487b8b05874-config\") pod \"dnsmasq-dns-74dfc89d77-4jvlb\" (UID: \"b51f16b0-5c41-45c9-b558-4487b8b05874\") " pod="openstack/dnsmasq-dns-74dfc89d77-4jvlb"
Jan 29 06:53:21 crc kubenswrapper[5017]: I0129 06:53:21.496998 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b51f16b0-5c41-45c9-b558-4487b8b05874-ovsdbserver-nb\") pod \"dnsmasq-dns-74dfc89d77-4jvlb\" (UID: \"b51f16b0-5c41-45c9-b558-4487b8b05874\") " pod="openstack/dnsmasq-dns-74dfc89d77-4jvlb"
Jan 29 06:53:21 crc kubenswrapper[5017]: I0129 06:53:21.497193 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b51f16b0-5c41-45c9-b558-4487b8b05874-dns-svc\") pod \"dnsmasq-dns-74dfc89d77-4jvlb\" (UID: \"b51f16b0-5c41-45c9-b558-4487b8b05874\") " pod="openstack/dnsmasq-dns-74dfc89d77-4jvlb"
Jan 29 06:53:21 crc kubenswrapper[5017]: I0129 06:53:21.497202 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b51f16b0-5c41-45c9-b558-4487b8b05874-dns-swift-storage-0\") pod \"dnsmasq-dns-74dfc89d77-4jvlb\" (UID: \"b51f16b0-5c41-45c9-b558-4487b8b05874\") " pod="openstack/dnsmasq-dns-74dfc89d77-4jvlb"
Jan 29 06:53:21 crc kubenswrapper[5017]: I0129 06:53:21.497594 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b51f16b0-5c41-45c9-b558-4487b8b05874-ovsdbserver-sb\") pod \"dnsmasq-dns-74dfc89d77-4jvlb\" (UID: \"b51f16b0-5c41-45c9-b558-4487b8b05874\") " pod="openstack/dnsmasq-dns-74dfc89d77-4jvlb"
Jan 29 06:53:21 crc kubenswrapper[5017]: I0129 06:53:21.526226 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjhln\" (UniqueName: \"kubernetes.io/projected/b51f16b0-5c41-45c9-b558-4487b8b05874-kube-api-access-zjhln\") pod \"dnsmasq-dns-74dfc89d77-4jvlb\" (UID: \"b51f16b0-5c41-45c9-b558-4487b8b05874\") " pod="openstack/dnsmasq-dns-74dfc89d77-4jvlb"
Jan 29 06:53:21 crc kubenswrapper[5017]: I0129 06:53:21.621725 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74dfc89d77-4jvlb"
Jan 29 06:53:21 crc kubenswrapper[5017]: I0129 06:53:21.907295 5017 generic.go:334] "Generic (PLEG): container finished" podID="b1e28709-36ce-4df4-8ec9-2ac9458b87da" containerID="fc8181badbf22d0a0ca2a38752e9a72d9fc5dfd63d92985c7fd235fc34836d5a" exitCode=0
Jan 29 06:53:21 crc kubenswrapper[5017]: I0129 06:53:21.908541 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-np6m4" event={"ID":"b1e28709-36ce-4df4-8ec9-2ac9458b87da","Type":"ContainerDied","Data":"fc8181badbf22d0a0ca2a38752e9a72d9fc5dfd63d92985c7fd235fc34836d5a"}
Jan 29 06:53:22 crc kubenswrapper[5017]: I0129 06:53:22.078165 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74dfc89d77-4jvlb"]
Jan 29 06:53:22 crc kubenswrapper[5017]: W0129 06:53:22.084889 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb51f16b0_5c41_45c9_b558_4487b8b05874.slice/crio-c05fa98a6c2c5e210a820812a88af2c8b373f4ce1b6717cfda517552c749a42c WatchSource:0}: Error finding container c05fa98a6c2c5e210a820812a88af2c8b373f4ce1b6717cfda517552c749a42c: Status 404 returned error can't find the container with id c05fa98a6c2c5e210a820812a88af2c8b373f4ce1b6717cfda517552c749a42c
Jan 29 06:53:22 crc kubenswrapper[5017]: I0129 06:53:22.919408 5017 generic.go:334] "Generic (PLEG): container finished" podID="b51f16b0-5c41-45c9-b558-4487b8b05874" containerID="26c90e0e4cb48d07ad5a5af2a3851b59cfa81ca9d8c1d2704e9e22daf77ada7e" exitCode=0
Jan 29 06:53:22 crc kubenswrapper[5017]: I0129 06:53:22.919519 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dfc89d77-4jvlb" event={"ID":"b51f16b0-5c41-45c9-b558-4487b8b05874","Type":"ContainerDied","Data":"26c90e0e4cb48d07ad5a5af2a3851b59cfa81ca9d8c1d2704e9e22daf77ada7e"}
Jan 29 06:53:22 crc kubenswrapper[5017]: I0129 06:53:22.920003 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dfc89d77-4jvlb" event={"ID":"b51f16b0-5c41-45c9-b558-4487b8b05874","Type":"ContainerStarted","Data":"c05fa98a6c2c5e210a820812a88af2c8b373f4ce1b6717cfda517552c749a42c"}
Jan 29 06:53:22 crc kubenswrapper[5017]: I0129 06:53:22.920259 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6bfd654465-8mrhp" podUID="a0583a3e-d131-40df-b122-39cb73579f76" containerName="dnsmasq-dns" containerID="cri-o://9b166609f4efc0f2bf25857ddfa4a7f61072df5328c242a1252a0b16887d7d93" gracePeriod=10
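The two pod_startup_latency_tracker entries above show what the tracker actually measures: for swift-storage-0, podStartE2EDuration (50.96s) spans pod creation to observed running, while podStartSLOduration (37.69s) is that same span minus the image-pull window (firstStartedPulling 06:53:04.575 to lastFinishedPulling 06:53:17.848, about 13.27s); dnsmasq-dns-6bfd654465-8mrhp pulled nothing, so its zero-value pull timestamps leave the two durations equal. A minimal sketch for extracting those fields from a journal slice like this one; the regex and function name are illustrative, not kubelet code:

import re

# Pull (pod, SLO duration, E2E duration) out of kubelet
# pod_startup_latency_tracker entries. The field names are copied
# verbatim from the log text above; everything else is an assumption.
LINE_RE = re.compile(
    r'"Observed pod startup duration" pod="(?P<pod>[^"]+)"'
    r' podStartSLOduration=(?P<slo>[0-9.]+)'
    r' podStartE2EDuration="(?P<e2e>[^"]+)"'
)

def startup_durations(journal_lines):
    """Yield (pod, slo_seconds, e2e_duration) for each tracker entry."""
    for line in journal_lines:
        m = LINE_RE.search(line)
        if m:
            yield m.group("pod"), float(m.group("slo")), m.group("e2e")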
Jan 29 06:53:23 crc kubenswrapper[5017]: I0129 06:53:23.276861 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-np6m4"
Jan 29 06:53:23 crc kubenswrapper[5017]: I0129 06:53:23.329012 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1e28709-36ce-4df4-8ec9-2ac9458b87da-config-data\") pod \"b1e28709-36ce-4df4-8ec9-2ac9458b87da\" (UID: \"b1e28709-36ce-4df4-8ec9-2ac9458b87da\") "
Jan 29 06:53:23 crc kubenswrapper[5017]: I0129 06:53:23.329296 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1e28709-36ce-4df4-8ec9-2ac9458b87da-combined-ca-bundle\") pod \"b1e28709-36ce-4df4-8ec9-2ac9458b87da\" (UID: \"b1e28709-36ce-4df4-8ec9-2ac9458b87da\") "
Jan 29 06:53:23 crc kubenswrapper[5017]: I0129 06:53:23.329329 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fxmbq\" (UniqueName: \"kubernetes.io/projected/b1e28709-36ce-4df4-8ec9-2ac9458b87da-kube-api-access-fxmbq\") pod \"b1e28709-36ce-4df4-8ec9-2ac9458b87da\" (UID: \"b1e28709-36ce-4df4-8ec9-2ac9458b87da\") "
Jan 29 06:53:23 crc kubenswrapper[5017]: I0129 06:53:23.334701 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1e28709-36ce-4df4-8ec9-2ac9458b87da-kube-api-access-fxmbq" (OuterVolumeSpecName: "kube-api-access-fxmbq") pod "b1e28709-36ce-4df4-8ec9-2ac9458b87da" (UID: "b1e28709-36ce-4df4-8ec9-2ac9458b87da"). InnerVolumeSpecName "kube-api-access-fxmbq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 06:53:23 crc kubenswrapper[5017]: I0129 06:53:23.369596 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1e28709-36ce-4df4-8ec9-2ac9458b87da-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b1e28709-36ce-4df4-8ec9-2ac9458b87da" (UID: "b1e28709-36ce-4df4-8ec9-2ac9458b87da"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 06:53:23 crc kubenswrapper[5017]: I0129 06:53:23.374098 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1e28709-36ce-4df4-8ec9-2ac9458b87da-config-data" (OuterVolumeSpecName: "config-data") pod "b1e28709-36ce-4df4-8ec9-2ac9458b87da" (UID: "b1e28709-36ce-4df4-8ec9-2ac9458b87da"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 06:53:23 crc kubenswrapper[5017]: I0129 06:53:23.398178 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bfd654465-8mrhp"
Jan 29 06:53:23 crc kubenswrapper[5017]: I0129 06:53:23.431754 5017 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1e28709-36ce-4df4-8ec9-2ac9458b87da-config-data\") on node \"crc\" DevicePath \"\""
Jan 29 06:53:23 crc kubenswrapper[5017]: I0129 06:53:23.431796 5017 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1e28709-36ce-4df4-8ec9-2ac9458b87da-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 06:53:23 crc kubenswrapper[5017]: I0129 06:53:23.431811 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fxmbq\" (UniqueName: \"kubernetes.io/projected/b1e28709-36ce-4df4-8ec9-2ac9458b87da-kube-api-access-fxmbq\") on node \"crc\" DevicePath \"\""
Jan 29 06:53:23 crc kubenswrapper[5017]: I0129 06:53:23.535037 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgmrj\" (UniqueName: \"kubernetes.io/projected/a0583a3e-d131-40df-b122-39cb73579f76-kube-api-access-pgmrj\") pod \"a0583a3e-d131-40df-b122-39cb73579f76\" (UID: \"a0583a3e-d131-40df-b122-39cb73579f76\") "
Jan 29 06:53:23 crc kubenswrapper[5017]: I0129 06:53:23.535109 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0583a3e-d131-40df-b122-39cb73579f76-config\") pod \"a0583a3e-d131-40df-b122-39cb73579f76\" (UID: \"a0583a3e-d131-40df-b122-39cb73579f76\") "
Jan 29 06:53:23 crc kubenswrapper[5017]: I0129 06:53:23.535180 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a0583a3e-d131-40df-b122-39cb73579f76-ovsdbserver-sb\") pod \"a0583a3e-d131-40df-b122-39cb73579f76\" (UID: \"a0583a3e-d131-40df-b122-39cb73579f76\") "
Jan 29 06:53:23 crc kubenswrapper[5017]: I0129 06:53:23.535358 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a0583a3e-d131-40df-b122-39cb73579f76-ovsdbserver-nb\") pod \"a0583a3e-d131-40df-b122-39cb73579f76\" (UID: \"a0583a3e-d131-40df-b122-39cb73579f76\") "
Jan 29 06:53:23 crc kubenswrapper[5017]: I0129 06:53:23.535856 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a0583a3e-d131-40df-b122-39cb73579f76-dns-svc\") pod \"a0583a3e-d131-40df-b122-39cb73579f76\" (UID: \"a0583a3e-d131-40df-b122-39cb73579f76\") "
Jan 29 06:53:23 crc kubenswrapper[5017]: I0129 06:53:23.540396 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0583a3e-d131-40df-b122-39cb73579f76-kube-api-access-pgmrj" (OuterVolumeSpecName: "kube-api-access-pgmrj") pod "a0583a3e-d131-40df-b122-39cb73579f76" (UID: "a0583a3e-d131-40df-b122-39cb73579f76"). InnerVolumeSpecName "kube-api-access-pgmrj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 06:53:23 crc kubenswrapper[5017]: I0129 06:53:23.581005 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0583a3e-d131-40df-b122-39cb73579f76-config" (OuterVolumeSpecName: "config") pod "a0583a3e-d131-40df-b122-39cb73579f76" (UID: "a0583a3e-d131-40df-b122-39cb73579f76"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 06:53:23 crc kubenswrapper[5017]: I0129 06:53:23.583296 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0583a3e-d131-40df-b122-39cb73579f76-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a0583a3e-d131-40df-b122-39cb73579f76" (UID: "a0583a3e-d131-40df-b122-39cb73579f76"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 06:53:23 crc kubenswrapper[5017]: I0129 06:53:23.584680 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0583a3e-d131-40df-b122-39cb73579f76-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a0583a3e-d131-40df-b122-39cb73579f76" (UID: "a0583a3e-d131-40df-b122-39cb73579f76"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 06:53:23 crc kubenswrapper[5017]: I0129 06:53:23.607004 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0583a3e-d131-40df-b122-39cb73579f76-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a0583a3e-d131-40df-b122-39cb73579f76" (UID: "a0583a3e-d131-40df-b122-39cb73579f76"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 06:53:23 crc kubenswrapper[5017]: I0129 06:53:23.639004 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pgmrj\" (UniqueName: \"kubernetes.io/projected/a0583a3e-d131-40df-b122-39cb73579f76-kube-api-access-pgmrj\") on node \"crc\" DevicePath \"\""
Jan 29 06:53:23 crc kubenswrapper[5017]: I0129 06:53:23.639356 5017 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0583a3e-d131-40df-b122-39cb73579f76-config\") on node \"crc\" DevicePath \"\""
Jan 29 06:53:23 crc kubenswrapper[5017]: I0129 06:53:23.639373 5017 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a0583a3e-d131-40df-b122-39cb73579f76-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 29 06:53:23 crc kubenswrapper[5017]: I0129 06:53:23.639382 5017 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a0583a3e-d131-40df-b122-39cb73579f76-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 29 06:53:23 crc kubenswrapper[5017]: I0129 06:53:23.639397 5017 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a0583a3e-d131-40df-b122-39cb73579f76-dns-svc\") on node \"crc\" DevicePath \"\""
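The teardown above mirrors the mount sequence from 06:53:21: on startup each volume moves through operationExecutor.VerifyControllerAttachedVolume started, MountVolume started, and MountVolume.SetUp succeeded, and on pod deletion through UnmountVolume started, UnmountVolume.TearDown succeeded, and finally Volume detached. A hedged sketch that flattens these reconciler entries into (timestamp, phase, volume) tuples so the ordering is easy to eyeball; note the mount-side messages quote the spec name (e.g. config-data) while TearDown quotes the plugin path, so the captured volume field is whatever the entry itself quotes first:

import re

# Not kubelet code: a reading aid for reconciler/operation_generator
# entries like the ones above. Phase strings are copied from the log.
PHASE_RE = re.compile(
    r"I(?P<ts>\d{4} \d{2}:\d{2}:\d{2}\.\d{6}).*?"
    r"(?P<phase>VerifyControllerAttachedVolume started|MountVolume started|"
    r"MountVolume\.SetUp succeeded|UnmountVolume started|"
    r"UnmountVolume\.TearDown succeeded|Volume detached)"
    r".*?volume \\?\"(?P<vol>[^\\\"]+)\\?\""
)

def reconciler_events(journal_lines):
    """Yield (klog_timestamp, phase, volume) for each reconciler entry."""
    for line in journal_lines:
        m = PHASE_RE.search(line)
        if m:
            yield m.group("ts"), m.group("phase"), m.group("vol")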
Jan 29 06:53:23 crc kubenswrapper[5017]: I0129 06:53:23.933570 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-np6m4"
Jan 29 06:53:23 crc kubenswrapper[5017]: I0129 06:53:23.933575 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-np6m4" event={"ID":"b1e28709-36ce-4df4-8ec9-2ac9458b87da","Type":"ContainerDied","Data":"65250481f952d1649d781d3a6eb31da6aaa397b79c96591b2018c7f9a006d9d8"}
Jan 29 06:53:23 crc kubenswrapper[5017]: I0129 06:53:23.933659 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="65250481f952d1649d781d3a6eb31da6aaa397b79c96591b2018c7f9a006d9d8"
Jan 29 06:53:23 crc kubenswrapper[5017]: I0129 06:53:23.939281 5017 generic.go:334] "Generic (PLEG): container finished" podID="a0583a3e-d131-40df-b122-39cb73579f76" containerID="9b166609f4efc0f2bf25857ddfa4a7f61072df5328c242a1252a0b16887d7d93" exitCode=0
Jan 29 06:53:23 crc kubenswrapper[5017]: I0129 06:53:23.939432 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bfd654465-8mrhp" event={"ID":"a0583a3e-d131-40df-b122-39cb73579f76","Type":"ContainerDied","Data":"9b166609f4efc0f2bf25857ddfa4a7f61072df5328c242a1252a0b16887d7d93"}
Jan 29 06:53:23 crc kubenswrapper[5017]: I0129 06:53:23.939483 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bfd654465-8mrhp" event={"ID":"a0583a3e-d131-40df-b122-39cb73579f76","Type":"ContainerDied","Data":"671d4358cf586127785706f0f87addc92c6c6d3edf9363ff45b5ccea1684647c"}
Jan 29 06:53:23 crc kubenswrapper[5017]: I0129 06:53:23.939492 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bfd654465-8mrhp"
Jan 29 06:53:23 crc kubenswrapper[5017]: I0129 06:53:23.939542 5017 scope.go:117] "RemoveContainer" containerID="9b166609f4efc0f2bf25857ddfa4a7f61072df5328c242a1252a0b16887d7d93"
Jan 29 06:53:23 crc kubenswrapper[5017]: I0129 06:53:23.944435 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dfc89d77-4jvlb" event={"ID":"b51f16b0-5c41-45c9-b558-4487b8b05874","Type":"ContainerStarted","Data":"cf3ad65962837ffb76c5e0d9d43f4d04d1b82e6ea1c4997fb0aa98585462fc97"}
Jan 29 06:53:23 crc kubenswrapper[5017]: I0129 06:53:23.945100 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-74dfc89d77-4jvlb"
Jan 29 06:53:23 crc kubenswrapper[5017]: I0129 06:53:23.985393 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-74dfc89d77-4jvlb" podStartSLOduration=2.985369786 podStartE2EDuration="2.985369786s" podCreationTimestamp="2026-01-29 06:53:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:53:23.9781782 +0000 UTC m=+1090.352625830" watchObservedRunningTime="2026-01-29 06:53:23.985369786 +0000 UTC m=+1090.359817396"
Jan 29 06:53:23 crc kubenswrapper[5017]: I0129 06:53:23.989144 5017 scope.go:117] "RemoveContainer" containerID="1c205268a19051c0297b504fcfe7be6dcc5b334c659dae38b0e81ea43bda92c4"
Jan 29 06:53:24 crc kubenswrapper[5017]: I0129 06:53:24.009892 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bfd654465-8mrhp"]
Jan 29 06:53:24 crc kubenswrapper[5017]: I0129 06:53:24.017018 5017 scope.go:117] "RemoveContainer" containerID="9b166609f4efc0f2bf25857ddfa4a7f61072df5328c242a1252a0b16887d7d93"
Jan 29 06:53:24 crc kubenswrapper[5017]: E0129 06:53:24.017650 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b166609f4efc0f2bf25857ddfa4a7f61072df5328c242a1252a0b16887d7d93\": container with ID starting with 9b166609f4efc0f2bf25857ddfa4a7f61072df5328c242a1252a0b16887d7d93 not found: ID does not exist" containerID="9b166609f4efc0f2bf25857ddfa4a7f61072df5328c242a1252a0b16887d7d93"
Jan 29 06:53:24 crc kubenswrapper[5017]: I0129 06:53:24.017693 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b166609f4efc0f2bf25857ddfa4a7f61072df5328c242a1252a0b16887d7d93"} err="failed to get container status \"9b166609f4efc0f2bf25857ddfa4a7f61072df5328c242a1252a0b16887d7d93\": rpc error: code = NotFound desc = could not find container \"9b166609f4efc0f2bf25857ddfa4a7f61072df5328c242a1252a0b16887d7d93\": container with ID starting with 9b166609f4efc0f2bf25857ddfa4a7f61072df5328c242a1252a0b16887d7d93 not found: ID does not exist"
Jan 29 06:53:24 crc kubenswrapper[5017]: I0129 06:53:24.017725 5017 scope.go:117] "RemoveContainer" containerID="1c205268a19051c0297b504fcfe7be6dcc5b334c659dae38b0e81ea43bda92c4"
Jan 29 06:53:24 crc kubenswrapper[5017]: E0129 06:53:24.018503 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c205268a19051c0297b504fcfe7be6dcc5b334c659dae38b0e81ea43bda92c4\": container with ID starting with 1c205268a19051c0297b504fcfe7be6dcc5b334c659dae38b0e81ea43bda92c4 not found: ID does not exist" containerID="1c205268a19051c0297b504fcfe7be6dcc5b334c659dae38b0e81ea43bda92c4"
Jan 29 06:53:24 crc kubenswrapper[5017]: I0129 06:53:24.018537 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c205268a19051c0297b504fcfe7be6dcc5b334c659dae38b0e81ea43bda92c4"} err="failed to get container status \"1c205268a19051c0297b504fcfe7be6dcc5b334c659dae38b0e81ea43bda92c4\": rpc error: code = NotFound desc = could not find container \"1c205268a19051c0297b504fcfe7be6dcc5b334c659dae38b0e81ea43bda92c4\": container with ID starting with 1c205268a19051c0297b504fcfe7be6dcc5b334c659dae38b0e81ea43bda92c4 not found: ID does not exist"
Jan 29 06:53:24 crc kubenswrapper[5017]: I0129 06:53:24.020481 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6bfd654465-8mrhp"]
Jan 29 06:53:24 crc kubenswrapper[5017]: I0129 06:53:24.256797 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74dfc89d77-4jvlb"]
Jan 29 06:53:24 crc kubenswrapper[5017]: I0129 06:53:24.280349 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-rksqd"]
Jan 29 06:53:24 crc kubenswrapper[5017]: E0129 06:53:24.280813 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0583a3e-d131-40df-b122-39cb73579f76" containerName="init"
Jan 29 06:53:24 crc kubenswrapper[5017]: I0129 06:53:24.280835 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0583a3e-d131-40df-b122-39cb73579f76" containerName="init"
Jan 29 06:53:24 crc kubenswrapper[5017]: E0129 06:53:24.280849 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1e28709-36ce-4df4-8ec9-2ac9458b87da" containerName="keystone-db-sync"
Jan 29 06:53:24 crc kubenswrapper[5017]: I0129 06:53:24.280858 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1e28709-36ce-4df4-8ec9-2ac9458b87da" containerName="keystone-db-sync"
Jan 29 06:53:24 crc kubenswrapper[5017]: E0129 06:53:24.280885 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0583a3e-d131-40df-b122-39cb73579f76" containerName="dnsmasq-dns"
Jan 29 06:53:24 crc kubenswrapper[5017]: I0129 06:53:24.280893 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0583a3e-d131-40df-b122-39cb73579f76" containerName="dnsmasq-dns"
Jan 29 06:53:24 crc kubenswrapper[5017]: I0129 06:53:24.281081 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0583a3e-d131-40df-b122-39cb73579f76" containerName="dnsmasq-dns"
Jan 29 06:53:24 crc kubenswrapper[5017]: I0129 06:53:24.281107 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1e28709-36ce-4df4-8ec9-2ac9458b87da" containerName="keystone-db-sync"
Jan 29 06:53:24 crc kubenswrapper[5017]: I0129 06:53:24.281722 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-rksqd"
Jan 29 06:53:24 crc kubenswrapper[5017]: I0129 06:53:24.287317 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-rv2nk"
Jan 29 06:53:24 crc kubenswrapper[5017]: I0129 06:53:24.287447 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Jan 29 06:53:24 crc kubenswrapper[5017]: I0129 06:53:24.287550 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Jan 29 06:53:24 crc kubenswrapper[5017]: I0129 06:53:24.287708 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-rksqd"]
Jan 29 06:53:24 crc kubenswrapper[5017]: I0129 06:53:24.289845 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Jan 29 06:53:24 crc kubenswrapper[5017]: I0129 06:53:24.290062 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Jan 29 06:53:24 crc kubenswrapper[5017]: I0129 06:53:24.318064 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5fdbfbc95f-2vhdr"]
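The NotFound pair above (ContainerStatus, then DeleteContainer) is the usual benign race: by the time the kubelet's RemoveContainer path re-queries the runtime, CRI-O has already discarded 9b166609... and 1c205268..., so the RPC fails with code = NotFound, the deletor logs the error, and cleanup proceeds. A sketch of that idempotent-delete pattern against a generic gRPC stub; runtime_stub and its RemoveContainer method are hypothetical stand-ins, not the real CRI client:

import grpc

# Treat "already gone" as success when removing a container, mirroring
# the behavior visible in the entries above. runtime_stub is an assumed
# gRPC client handle, not a real kubelet or CRI-O API object.
def remove_container_idempotent(runtime_stub, container_id: str) -> None:
    try:
        runtime_stub.RemoveContainer(container_id)
    except grpc.RpcError as err:
        if err.code() == grpc.StatusCode.NOT_FOUND:
            return  # ID does not exist anymore: nothing left to delete
        raise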
Jan 29 06:53:24 crc kubenswrapper[5017]: I0129 06:53:24.319970 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fdbfbc95f-2vhdr"
Jan 29 06:53:24 crc kubenswrapper[5017]: I0129 06:53:24.362212 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqhvv\" (UniqueName: \"kubernetes.io/projected/6cc0f3ac-f0bb-434c-8d72-d22fdedf2f2d-kube-api-access-zqhvv\") pod \"keystone-bootstrap-rksqd\" (UID: \"6cc0f3ac-f0bb-434c-8d72-d22fdedf2f2d\") " pod="openstack/keystone-bootstrap-rksqd"
Jan 29 06:53:24 crc kubenswrapper[5017]: I0129 06:53:24.362281 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6cc0f3ac-f0bb-434c-8d72-d22fdedf2f2d-config-data\") pod \"keystone-bootstrap-rksqd\" (UID: \"6cc0f3ac-f0bb-434c-8d72-d22fdedf2f2d\") " pod="openstack/keystone-bootstrap-rksqd"
Jan 29 06:53:24 crc kubenswrapper[5017]: I0129 06:53:24.362339 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6cc0f3ac-f0bb-434c-8d72-d22fdedf2f2d-scripts\") pod \"keystone-bootstrap-rksqd\" (UID: \"6cc0f3ac-f0bb-434c-8d72-d22fdedf2f2d\") " pod="openstack/keystone-bootstrap-rksqd"
Jan 29 06:53:24 crc kubenswrapper[5017]: I0129 06:53:24.362357 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6cc0f3ac-f0bb-434c-8d72-d22fdedf2f2d-credential-keys\") pod \"keystone-bootstrap-rksqd\" (UID: \"6cc0f3ac-f0bb-434c-8d72-d22fdedf2f2d\") " pod="openstack/keystone-bootstrap-rksqd"
Jan 29 06:53:24 crc kubenswrapper[5017]: I0129 06:53:24.362392 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6cc0f3ac-f0bb-434c-8d72-d22fdedf2f2d-fernet-keys\") pod \"keystone-bootstrap-rksqd\" (UID: \"6cc0f3ac-f0bb-434c-8d72-d22fdedf2f2d\") " pod="openstack/keystone-bootstrap-rksqd"
Jan 29 06:53:24 crc kubenswrapper[5017]: I0129 06:53:24.362444 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cc0f3ac-f0bb-434c-8d72-d22fdedf2f2d-combined-ca-bundle\") pod \"keystone-bootstrap-rksqd\" (UID: \"6cc0f3ac-f0bb-434c-8d72-d22fdedf2f2d\") " pod="openstack/keystone-bootstrap-rksqd"
Jan 29 06:53:24 crc kubenswrapper[5017]: I0129 06:53:24.372336 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0583a3e-d131-40df-b122-39cb73579f76" path="/var/lib/kubelet/pods/a0583a3e-d131-40df-b122-39cb73579f76/volumes"
Jan 29 06:53:24 crc kubenswrapper[5017]: I0129 06:53:24.373051 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5fdbfbc95f-2vhdr"]
Jan 29 06:53:24 crc kubenswrapper[5017]: I0129 06:53:24.464133 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6cc0f3ac-f0bb-434c-8d72-d22fdedf2f2d-scripts\") pod \"keystone-bootstrap-rksqd\" (UID: \"6cc0f3ac-f0bb-434c-8d72-d22fdedf2f2d\") " pod="openstack/keystone-bootstrap-rksqd"
Jan 29 06:53:24 crc kubenswrapper[5017]: I0129 06:53:24.464181 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6cc0f3ac-f0bb-434c-8d72-d22fdedf2f2d-credential-keys\") pod \"keystone-bootstrap-rksqd\" (UID: \"6cc0f3ac-f0bb-434c-8d72-d22fdedf2f2d\") " pod="openstack/keystone-bootstrap-rksqd"
Jan 29 06:53:24 crc kubenswrapper[5017]: I0129 06:53:24.464221 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6cc0f3ac-f0bb-434c-8d72-d22fdedf2f2d-fernet-keys\") pod \"keystone-bootstrap-rksqd\" (UID: \"6cc0f3ac-f0bb-434c-8d72-d22fdedf2f2d\") " pod="openstack/keystone-bootstrap-rksqd"
Jan 29 06:53:24 crc kubenswrapper[5017]: I0129 06:53:24.464274 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8dc605ff-16c6-4d30-9f9e-5ce2587b356b-dns-svc\") pod \"dnsmasq-dns-5fdbfbc95f-2vhdr\" (UID: \"8dc605ff-16c6-4d30-9f9e-5ce2587b356b\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-2vhdr"
Jan 29 06:53:24 crc kubenswrapper[5017]: I0129 06:53:24.464317 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95rm8\" (UniqueName: \"kubernetes.io/projected/8dc605ff-16c6-4d30-9f9e-5ce2587b356b-kube-api-access-95rm8\") pod \"dnsmasq-dns-5fdbfbc95f-2vhdr\" (UID: \"8dc605ff-16c6-4d30-9f9e-5ce2587b356b\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-2vhdr"
Jan 29 06:53:24 crc kubenswrapper[5017]: I0129 06:53:24.464347 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8dc605ff-16c6-4d30-9f9e-5ce2587b356b-ovsdbserver-nb\") pod \"dnsmasq-dns-5fdbfbc95f-2vhdr\" (UID: \"8dc605ff-16c6-4d30-9f9e-5ce2587b356b\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-2vhdr"
Jan 29 06:53:24 crc kubenswrapper[5017]: I0129 06:53:24.464387 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cc0f3ac-f0bb-434c-8d72-d22fdedf2f2d-combined-ca-bundle\") pod \"keystone-bootstrap-rksqd\" (UID: \"6cc0f3ac-f0bb-434c-8d72-d22fdedf2f2d\") " pod="openstack/keystone-bootstrap-rksqd"
Jan 29 06:53:24 crc kubenswrapper[5017]: I0129 06:53:24.464418 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8dc605ff-16c6-4d30-9f9e-5ce2587b356b-dns-swift-storage-0\") pod \"dnsmasq-dns-5fdbfbc95f-2vhdr\" (UID: \"8dc605ff-16c6-4d30-9f9e-5ce2587b356b\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-2vhdr"
Jan 29 06:53:24 crc kubenswrapper[5017]: I0129 06:53:24.464463 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqhvv\" (UniqueName: \"kubernetes.io/projected/6cc0f3ac-f0bb-434c-8d72-d22fdedf2f2d-kube-api-access-zqhvv\") pod \"keystone-bootstrap-rksqd\" (UID: \"6cc0f3ac-f0bb-434c-8d72-d22fdedf2f2d\") " pod="openstack/keystone-bootstrap-rksqd"
Jan 29 06:53:24 crc kubenswrapper[5017]: I0129 06:53:24.464483 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8dc605ff-16c6-4d30-9f9e-5ce2587b356b-ovsdbserver-sb\") pod \"dnsmasq-dns-5fdbfbc95f-2vhdr\" (UID: \"8dc605ff-16c6-4d30-9f9e-5ce2587b356b\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-2vhdr"
Jan 29 06:53:24 crc kubenswrapper[5017]: I0129 06:53:24.464517 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8dc605ff-16c6-4d30-9f9e-5ce2587b356b-config\") pod \"dnsmasq-dns-5fdbfbc95f-2vhdr\" (UID: \"8dc605ff-16c6-4d30-9f9e-5ce2587b356b\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-2vhdr"
Jan 29 06:53:24 crc kubenswrapper[5017]: I0129 06:53:24.464540 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6cc0f3ac-f0bb-434c-8d72-d22fdedf2f2d-config-data\") pod \"keystone-bootstrap-rksqd\" (UID: \"6cc0f3ac-f0bb-434c-8d72-d22fdedf2f2d\") " pod="openstack/keystone-bootstrap-rksqd"
Jan 29 06:53:24 crc kubenswrapper[5017]: I0129 06:53:24.472128 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6cc0f3ac-f0bb-434c-8d72-d22fdedf2f2d-fernet-keys\") pod \"keystone-bootstrap-rksqd\" (UID: \"6cc0f3ac-f0bb-434c-8d72-d22fdedf2f2d\") " pod="openstack/keystone-bootstrap-rksqd"
Jan 29 06:53:24 crc kubenswrapper[5017]: I0129 06:53:24.472343 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6cc0f3ac-f0bb-434c-8d72-d22fdedf2f2d-scripts\") pod \"keystone-bootstrap-rksqd\" (UID: \"6cc0f3ac-f0bb-434c-8d72-d22fdedf2f2d\") " pod="openstack/keystone-bootstrap-rksqd"
Jan 29 06:53:24 crc kubenswrapper[5017]: I0129 06:53:24.472633 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6cc0f3ac-f0bb-434c-8d72-d22fdedf2f2d-credential-keys\") pod \"keystone-bootstrap-rksqd\" (UID: \"6cc0f3ac-f0bb-434c-8d72-d22fdedf2f2d\") " pod="openstack/keystone-bootstrap-rksqd"
Jan 29 06:53:24 crc kubenswrapper[5017]: I0129 06:53:24.479145 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6cc0f3ac-f0bb-434c-8d72-d22fdedf2f2d-config-data\") pod \"keystone-bootstrap-rksqd\" (UID: \"6cc0f3ac-f0bb-434c-8d72-d22fdedf2f2d\") " pod="openstack/keystone-bootstrap-rksqd"
Jan 29 06:53:24 crc kubenswrapper[5017]: I0129 06:53:24.487553 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cc0f3ac-f0bb-434c-8d72-d22fdedf2f2d-combined-ca-bundle\") pod \"keystone-bootstrap-rksqd\" (UID: \"6cc0f3ac-f0bb-434c-8d72-d22fdedf2f2d\") " pod="openstack/keystone-bootstrap-rksqd"
Jan 29 06:53:24 crc kubenswrapper[5017]: I0129 06:53:24.487634 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqhvv\" (UniqueName: \"kubernetes.io/projected/6cc0f3ac-f0bb-434c-8d72-d22fdedf2f2d-kube-api-access-zqhvv\") pod \"keystone-bootstrap-rksqd\" (UID: \"6cc0f3ac-f0bb-434c-8d72-d22fdedf2f2d\") " pod="openstack/keystone-bootstrap-rksqd"
Jan 29 06:53:24 crc kubenswrapper[5017]: I0129 06:53:24.567320 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8dc605ff-16c6-4d30-9f9e-5ce2587b356b-config\") pod \"dnsmasq-dns-5fdbfbc95f-2vhdr\" (UID: \"8dc605ff-16c6-4d30-9f9e-5ce2587b356b\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-2vhdr"
Jan 29 06:53:24 crc kubenswrapper[5017]: I0129 06:53:24.576637 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8dc605ff-16c6-4d30-9f9e-5ce2587b356b-dns-svc\") pod \"dnsmasq-dns-5fdbfbc95f-2vhdr\" (UID: \"8dc605ff-16c6-4d30-9f9e-5ce2587b356b\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-2vhdr"
Jan 29 06:53:24 crc kubenswrapper[5017]: I0129 06:53:24.576784 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95rm8\" (UniqueName: \"kubernetes.io/projected/8dc605ff-16c6-4d30-9f9e-5ce2587b356b-kube-api-access-95rm8\") pod \"dnsmasq-dns-5fdbfbc95f-2vhdr\" (UID: \"8dc605ff-16c6-4d30-9f9e-5ce2587b356b\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-2vhdr"
Jan 29 06:53:24 crc kubenswrapper[5017]: I0129 06:53:24.576819 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8dc605ff-16c6-4d30-9f9e-5ce2587b356b-ovsdbserver-nb\") pod \"dnsmasq-dns-5fdbfbc95f-2vhdr\" (UID: \"8dc605ff-16c6-4d30-9f9e-5ce2587b356b\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-2vhdr"
Jan 29 06:53:24 crc kubenswrapper[5017]: I0129 06:53:24.576867 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8dc605ff-16c6-4d30-9f9e-5ce2587b356b-dns-swift-storage-0\") pod \"dnsmasq-dns-5fdbfbc95f-2vhdr\" (UID: \"8dc605ff-16c6-4d30-9f9e-5ce2587b356b\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-2vhdr"
Jan 29 06:53:24 crc kubenswrapper[5017]: I0129 06:53:24.576948 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8dc605ff-16c6-4d30-9f9e-5ce2587b356b-ovsdbserver-sb\") pod \"dnsmasq-dns-5fdbfbc95f-2vhdr\" (UID: \"8dc605ff-16c6-4d30-9f9e-5ce2587b356b\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-2vhdr"
Jan 29 06:53:24 crc kubenswrapper[5017]: I0129 06:53:24.577893 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8dc605ff-16c6-4d30-9f9e-5ce2587b356b-ovsdbserver-sb\") pod \"dnsmasq-dns-5fdbfbc95f-2vhdr\" (UID: \"8dc605ff-16c6-4d30-9f9e-5ce2587b356b\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-2vhdr"
Jan 29 06:53:24 crc kubenswrapper[5017]: I0129 06:53:24.570861 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8dc605ff-16c6-4d30-9f9e-5ce2587b356b-config\") pod \"dnsmasq-dns-5fdbfbc95f-2vhdr\" (UID: \"8dc605ff-16c6-4d30-9f9e-5ce2587b356b\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-2vhdr"
Jan 29 06:53:24 crc kubenswrapper[5017]: I0129 06:53:24.578531 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8dc605ff-16c6-4d30-9f9e-5ce2587b356b-dns-svc\") pod \"dnsmasq-dns-5fdbfbc95f-2vhdr\" (UID: \"8dc605ff-16c6-4d30-9f9e-5ce2587b356b\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-2vhdr"
Jan 29 06:53:24 crc kubenswrapper[5017]: I0129 06:53:24.579510 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8dc605ff-16c6-4d30-9f9e-5ce2587b356b-ovsdbserver-nb\") pod \"dnsmasq-dns-5fdbfbc95f-2vhdr\" (UID: \"8dc605ff-16c6-4d30-9f9e-5ce2587b356b\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-2vhdr"
Jan 29 06:53:24 crc kubenswrapper[5017]: I0129 06:53:24.580063 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8dc605ff-16c6-4d30-9f9e-5ce2587b356b-dns-swift-storage-0\") pod \"dnsmasq-dns-5fdbfbc95f-2vhdr\" (UID: \"8dc605ff-16c6-4d30-9f9e-5ce2587b356b\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-2vhdr"
Jan 29 06:53:24 crc kubenswrapper[5017]: I0129 06:53:24.580108 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-c99rc"]
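One readability hazard in this stretch: the kubelet logs from many goroutines, so journal order can lag the klog clock. Above, the MountVolume.SetUp for "config" stamped 06:53:24.570861 lands after the 06:53:24.577893 entry, and the neutron reflector lines a little further on (.734373 before .734334) do the same. When strict ordering matters in an analysis of a capture like this, re-sorting by the embedded timestamp is enough; a small illustrative sketch, not something the kubelet itself does:

import re

# klog lines carry their own microsecond timestamp (I/W/E + MMDD + time);
# re-sorting by it restores clock order when journald interleaves output.
KLOG_TS = re.compile(r"[IWE](\d{4}) (\d{2}):(\d{2}):(\d{2})\.(\d{6})")

def klog_key(line: str):
    m = KLOG_TS.search(line)
    if not m:
        return ()  # non-klog lines sort first; sorted() is stable
    mmdd, h, mi, s, us = m.groups()
    return (mmdd, int(h), int(mi), int(s), int(us))

def in_clock_order(entries):
    return sorted(entries, key=klog_key)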
Jan 29 06:53:24 crc kubenswrapper[5017]: I0129 06:53:24.590709 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-c99rc"
Jan 29 06:53:24 crc kubenswrapper[5017]: I0129 06:53:24.600881 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts"
Jan 29 06:53:24 crc kubenswrapper[5017]: I0129 06:53:24.602425 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-xq7sw"
Jan 29 06:53:24 crc kubenswrapper[5017]: I0129 06:53:24.602737 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data"
Jan 29 06:53:24 crc kubenswrapper[5017]: I0129 06:53:24.619504 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-rksqd"
Jan 29 06:53:24 crc kubenswrapper[5017]: I0129 06:53:24.647653 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95rm8\" (UniqueName: \"kubernetes.io/projected/8dc605ff-16c6-4d30-9f9e-5ce2587b356b-kube-api-access-95rm8\") pod \"dnsmasq-dns-5fdbfbc95f-2vhdr\" (UID: \"8dc605ff-16c6-4d30-9f9e-5ce2587b356b\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-2vhdr"
Jan 29 06:53:24 crc kubenswrapper[5017]: I0129 06:53:24.679919 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-q5xff"]
Jan 29 06:53:24 crc kubenswrapper[5017]: I0129 06:53:24.685608 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fdbfbc95f-2vhdr"
Jan 29 06:53:24 crc kubenswrapper[5017]: I0129 06:53:24.685757 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/31e5ea57-0c73-4c76-bbcb-6d3b665b6226-etc-machine-id\") pod \"cinder-db-sync-c99rc\" (UID: \"31e5ea57-0c73-4c76-bbcb-6d3b665b6226\") " pod="openstack/cinder-db-sync-c99rc"
Jan 29 06:53:24 crc kubenswrapper[5017]: I0129 06:53:24.685838 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/31e5ea57-0c73-4c76-bbcb-6d3b665b6226-db-sync-config-data\") pod \"cinder-db-sync-c99rc\" (UID: \"31e5ea57-0c73-4c76-bbcb-6d3b665b6226\") " pod="openstack/cinder-db-sync-c99rc"
Jan 29 06:53:24 crc kubenswrapper[5017]: I0129 06:53:24.685879 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/31e5ea57-0c73-4c76-bbcb-6d3b665b6226-scripts\") pod \"cinder-db-sync-c99rc\" (UID: \"31e5ea57-0c73-4c76-bbcb-6d3b665b6226\") " pod="openstack/cinder-db-sync-c99rc"
Jan 29 06:53:24 crc kubenswrapper[5017]: I0129 06:53:24.685979 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31e5ea57-0c73-4c76-bbcb-6d3b665b6226-combined-ca-bundle\") pod \"cinder-db-sync-c99rc\" (UID: \"31e5ea57-0c73-4c76-bbcb-6d3b665b6226\") " pod="openstack/cinder-db-sync-c99rc"
Jan 29 06:53:24 crc kubenswrapper[5017]: I0129 06:53:24.688740 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31e5ea57-0c73-4c76-bbcb-6d3b665b6226-config-data\") pod \"cinder-db-sync-c99rc\" (UID: \"31e5ea57-0c73-4c76-bbcb-6d3b665b6226\") " pod="openstack/cinder-db-sync-c99rc"
Jan 29 06:53:24 crc kubenswrapper[5017]: I0129 06:53:24.688888 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xdz4\" (UniqueName: \"kubernetes.io/projected/31e5ea57-0c73-4c76-bbcb-6d3b665b6226-kube-api-access-6xdz4\") pod \"cinder-db-sync-c99rc\" (UID: \"31e5ea57-0c73-4c76-bbcb-6d3b665b6226\") " pod="openstack/cinder-db-sync-c99rc"
Jan 29 06:53:24 crc kubenswrapper[5017]: I0129 06:53:24.724625 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-c99rc"]
Jan 29 06:53:24 crc kubenswrapper[5017]: I0129 06:53:24.724739 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-q5xff"
Jan 29 06:53:24 crc kubenswrapper[5017]: I0129 06:53:24.734322 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Jan 29 06:53:24 crc kubenswrapper[5017]: I0129 06:53:24.734373 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-7nc5b"
Jan 29 06:53:24 crc kubenswrapper[5017]: I0129 06:53:24.734334 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Jan 29 06:53:24 crc kubenswrapper[5017]: I0129 06:53:24.754129 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-q5xff"]
Jan 29 06:53:24 crc kubenswrapper[5017]: I0129 06:53:24.772978 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-wjsjf"]
Jan 29 06:53:24 crc kubenswrapper[5017]: I0129 06:53:24.774744 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-wjsjf"
Jan 29 06:53:24 crc kubenswrapper[5017]: I0129 06:53:24.778379 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-8t8cw"
Jan 29 06:53:24 crc kubenswrapper[5017]: I0129 06:53:24.778683 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data"
Jan 29 06:53:24 crc kubenswrapper[5017]: I0129 06:53:24.813423 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d79ee9c-086e-405e-a8d6-478823059f00-combined-ca-bundle\") pod \"neutron-db-sync-q5xff\" (UID: \"7d79ee9c-086e-405e-a8d6-478823059f00\") " pod="openstack/neutron-db-sync-q5xff"
Jan 29 06:53:24 crc kubenswrapper[5017]: I0129 06:53:24.813484 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/31e5ea57-0c73-4c76-bbcb-6d3b665b6226-etc-machine-id\") pod \"cinder-db-sync-c99rc\" (UID: \"31e5ea57-0c73-4c76-bbcb-6d3b665b6226\") " pod="openstack/cinder-db-sync-c99rc"
Jan 29 06:53:24 crc kubenswrapper[5017]: I0129 06:53:24.813546 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/31e5ea57-0c73-4c76-bbcb-6d3b665b6226-etc-machine-id\") pod \"cinder-db-sync-c99rc\" (UID: \"31e5ea57-0c73-4c76-bbcb-6d3b665b6226\") " pod="openstack/cinder-db-sync-c99rc"
Jan 29 06:53:24 crc kubenswrapper[5017]: I0129 06:53:24.813707 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/31e5ea57-0c73-4c76-bbcb-6d3b665b6226-db-sync-config-data\") pod \"cinder-db-sync-c99rc\" (UID: \"31e5ea57-0c73-4c76-bbcb-6d3b665b6226\") " pod="openstack/cinder-db-sync-c99rc"
Jan 29 06:53:24 crc kubenswrapper[5017]: I0129 06:53:24.813824 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/31e5ea57-0c73-4c76-bbcb-6d3b665b6226-scripts\") pod \"cinder-db-sync-c99rc\" (UID: \"31e5ea57-0c73-4c76-bbcb-6d3b665b6226\") " pod="openstack/cinder-db-sync-c99rc"
Jan 29 06:53:24 crc kubenswrapper[5017]: I0129 06:53:24.813917 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7d79ee9c-086e-405e-a8d6-478823059f00-config\") pod \"neutron-db-sync-q5xff\" (UID: \"7d79ee9c-086e-405e-a8d6-478823059f00\") " pod="openstack/neutron-db-sync-q5xff"
Jan 29 06:53:24 crc kubenswrapper[5017]: I0129 06:53:24.814037 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31e5ea57-0c73-4c76-bbcb-6d3b665b6226-combined-ca-bundle\") pod \"cinder-db-sync-c99rc\" (UID: \"31e5ea57-0c73-4c76-bbcb-6d3b665b6226\") " pod="openstack/cinder-db-sync-c99rc"
Jan 29 06:53:24 crc kubenswrapper[5017]: I0129 06:53:24.814450 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31e5ea57-0c73-4c76-bbcb-6d3b665b6226-config-data\") pod \"cinder-db-sync-c99rc\" (UID: \"31e5ea57-0c73-4c76-bbcb-6d3b665b6226\") " pod="openstack/cinder-db-sync-c99rc"
Jan 29 06:53:24 crc kubenswrapper[5017]: I0129 06:53:24.814702 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xdz4\" (UniqueName: \"kubernetes.io/projected/31e5ea57-0c73-4c76-bbcb-6d3b665b6226-kube-api-access-6xdz4\") pod \"cinder-db-sync-c99rc\" (UID: \"31e5ea57-0c73-4c76-bbcb-6d3b665b6226\") " pod="openstack/cinder-db-sync-c99rc"
Jan 29 06:53:24 crc kubenswrapper[5017]: I0129 06:53:24.814821 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ps5t9\" (UniqueName: \"kubernetes.io/projected/7d79ee9c-086e-405e-a8d6-478823059f00-kube-api-access-ps5t9\") pod \"neutron-db-sync-q5xff\" (UID: \"7d79ee9c-086e-405e-a8d6-478823059f00\") " pod="openstack/neutron-db-sync-q5xff"
Jan 29 06:53:24 crc kubenswrapper[5017]: I0129 06:53:24.825625 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/31e5ea57-0c73-4c76-bbcb-6d3b665b6226-db-sync-config-data\") pod \"cinder-db-sync-c99rc\" (UID: \"31e5ea57-0c73-4c76-bbcb-6d3b665b6226\") " pod="openstack/cinder-db-sync-c99rc"
Jan 29 06:53:24 crc kubenswrapper[5017]: I0129 06:53:24.825743 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31e5ea57-0c73-4c76-bbcb-6d3b665b6226-combined-ca-bundle\") pod \"cinder-db-sync-c99rc\" (UID: \"31e5ea57-0c73-4c76-bbcb-6d3b665b6226\") " pod="openstack/cinder-db-sync-c99rc"
Jan 29 06:53:24 crc kubenswrapper[5017]: I0129 06:53:24.829493 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/31e5ea57-0c73-4c76-bbcb-6d3b665b6226-scripts\") pod \"cinder-db-sync-c99rc\" (UID: \"31e5ea57-0c73-4c76-bbcb-6d3b665b6226\") " pod="openstack/cinder-db-sync-c99rc"
Jan 29 06:53:24 crc kubenswrapper[5017]: I0129 06:53:24.836769 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31e5ea57-0c73-4c76-bbcb-6d3b665b6226-config-data\") pod \"cinder-db-sync-c99rc\" (UID: \"31e5ea57-0c73-4c76-bbcb-6d3b665b6226\") " pod="openstack/cinder-db-sync-c99rc"
Jan 29 06:53:24 crc kubenswrapper[5017]: I0129 06:53:24.871778 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-wjsjf"]
Jan 29 06:53:24 crc kubenswrapper[5017]: I0129 06:53:24.905057 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xdz4\" (UniqueName: \"kubernetes.io/projected/31e5ea57-0c73-4c76-bbcb-6d3b665b6226-kube-api-access-6xdz4\") pod \"cinder-db-sync-c99rc\" (UID: \"31e5ea57-0c73-4c76-bbcb-6d3b665b6226\") " pod="openstack/cinder-db-sync-c99rc"
Jan 29 06:53:24 crc kubenswrapper[5017]: I0129 06:53:24.916856 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ps5t9\" (UniqueName: \"kubernetes.io/projected/7d79ee9c-086e-405e-a8d6-478823059f00-kube-api-access-ps5t9\") pod \"neutron-db-sync-q5xff\" (UID: \"7d79ee9c-086e-405e-a8d6-478823059f00\") " pod="openstack/neutron-db-sync-q5xff"
Jan 29 06:53:24 crc kubenswrapper[5017]: I0129 06:53:24.916913 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af5eea12-20bf-45b6-b989-f77529ea2f04-combined-ca-bundle\") pod \"barbican-db-sync-wjsjf\" (UID: \"af5eea12-20bf-45b6-b989-f77529ea2f04\") " pod="openstack/barbican-db-sync-wjsjf"
Jan 29 06:53:24 crc kubenswrapper[5017]: I0129 06:53:24.916985 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d79ee9c-086e-405e-a8d6-478823059f00-combined-ca-bundle\") pod \"neutron-db-sync-q5xff\" (UID: \"7d79ee9c-086e-405e-a8d6-478823059f00\") " pod="openstack/neutron-db-sync-q5xff"
Jan 29 06:53:24 crc kubenswrapper[5017]: I0129 06:53:24.917037 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7d79ee9c-086e-405e-a8d6-478823059f00-config\") pod \"neutron-db-sync-q5xff\" (UID: \"7d79ee9c-086e-405e-a8d6-478823059f00\") " pod="openstack/neutron-db-sync-q5xff"
Jan 29 06:53:24 crc kubenswrapper[5017]: I0129 06:53:24.917058 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8w6jt\" (UniqueName: \"kubernetes.io/projected/af5eea12-20bf-45b6-b989-f77529ea2f04-kube-api-access-8w6jt\") pod \"barbican-db-sync-wjsjf\" (UID: \"af5eea12-20bf-45b6-b989-f77529ea2f04\") " pod="openstack/barbican-db-sync-wjsjf"
Jan 29 06:53:24 crc kubenswrapper[5017]: I0129 06:53:24.917080 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/af5eea12-20bf-45b6-b989-f77529ea2f04-db-sync-config-data\") pod \"barbican-db-sync-wjsjf\" (UID: \"af5eea12-20bf-45b6-b989-f77529ea2f04\") " pod="openstack/barbican-db-sync-wjsjf"
Jan 29 06:53:24 crc kubenswrapper[5017]: I0129 06:53:24.921629 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
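Everything from 06:53:24 onward is the same pattern repeated for each job the openstack operators create (cinder, neutron, barbican, placement db-syncs, ceilometer): SyncLoop ADD, secret caches populated, volumes verified and mounted, then a fresh sandbox. The node-side SyncLoop ADD/UPDATE/DELETE lines correspond to API watch events; a sketch of the API-side view using the kubernetes Python client, assuming a reachable kubeconfig and the same namespace as this log:

from kubernetes import client, config, watch

# Stream pod events for the "openstack" namespace; the event types
# (ADDED/MODIFIED/DELETED) line up with the kubelet's SyncLoop
# ADD/UPDATE/DELETE entries above. Illustrative observer, not kubelet code.
def tail_openstack_pods():
    config.load_kube_config()
    v1 = client.CoreV1Api()
    w = watch.Watch()
    for event in w.stream(v1.list_namespaced_pod, namespace="openstack"):
        print(event["type"], event["object"].metadata.name)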
Jan 29 06:53:24 crc kubenswrapper[5017]: I0129 06:53:24.924274 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 29 06:53:24 crc kubenswrapper[5017]: I0129 06:53:24.939804 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 29 06:53:24 crc kubenswrapper[5017]: I0129 06:53:24.940494 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 29 06:53:24 crc kubenswrapper[5017]: I0129 06:53:24.960986 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d79ee9c-086e-405e-a8d6-478823059f00-combined-ca-bundle\") pod \"neutron-db-sync-q5xff\" (UID: \"7d79ee9c-086e-405e-a8d6-478823059f00\") " pod="openstack/neutron-db-sync-q5xff"
Jan 29 06:53:24 crc kubenswrapper[5017]: I0129 06:53:24.963591 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/7d79ee9c-086e-405e-a8d6-478823059f00-config\") pod \"neutron-db-sync-q5xff\" (UID: \"7d79ee9c-086e-405e-a8d6-478823059f00\") " pod="openstack/neutron-db-sync-q5xff"
Jan 29 06:53:24 crc kubenswrapper[5017]: I0129 06:53:24.968385 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 29 06:53:24 crc kubenswrapper[5017]: I0129 06:53:24.982028 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5fdbfbc95f-2vhdr"]
Jan 29 06:53:24 crc kubenswrapper[5017]: I0129 06:53:24.982818 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-c99rc"
Jan 29 06:53:24 crc kubenswrapper[5017]: I0129 06:53:24.989251 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ps5t9\" (UniqueName: \"kubernetes.io/projected/7d79ee9c-086e-405e-a8d6-478823059f00-kube-api-access-ps5t9\") pod \"neutron-db-sync-q5xff\" (UID: \"7d79ee9c-086e-405e-a8d6-478823059f00\") " pod="openstack/neutron-db-sync-q5xff"
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.007059 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6f6f8cb849-tgzgj"]
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.008744 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f6f8cb849-tgzgj"
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.016055 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f6f8cb849-tgzgj"]
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.021097 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/68d01bb7-534e-47c7-854c-c96384ad8df4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"68d01bb7-534e-47c7-854c-c96384ad8df4\") " pod="openstack/ceilometer-0"
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.021371 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/68d01bb7-534e-47c7-854c-c96384ad8df4-run-httpd\") pod \"ceilometer-0\" (UID: \"68d01bb7-534e-47c7-854c-c96384ad8df4\") " pod="openstack/ceilometer-0"
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.021565 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68d01bb7-534e-47c7-854c-c96384ad8df4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"68d01bb7-534e-47c7-854c-c96384ad8df4\") " pod="openstack/ceilometer-0"
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.021658 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8w6jt\" (UniqueName: \"kubernetes.io/projected/af5eea12-20bf-45b6-b989-f77529ea2f04-kube-api-access-8w6jt\") pod \"barbican-db-sync-wjsjf\" (UID: \"af5eea12-20bf-45b6-b989-f77529ea2f04\") " pod="openstack/barbican-db-sync-wjsjf"
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.021727 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68d01bb7-534e-47c7-854c-c96384ad8df4-config-data\") pod \"ceilometer-0\" (UID: \"68d01bb7-534e-47c7-854c-c96384ad8df4\") " pod="openstack/ceilometer-0"
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.021799 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68d01bb7-534e-47c7-854c-c96384ad8df4-scripts\") pod \"ceilometer-0\" (UID: \"68d01bb7-534e-47c7-854c-c96384ad8df4\") " pod="openstack/ceilometer-0"
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.027436 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/af5eea12-20bf-45b6-b989-f77529ea2f04-db-sync-config-data\") pod \"barbican-db-sync-wjsjf\" (UID: \"af5eea12-20bf-45b6-b989-f77529ea2f04\") " pod="openstack/barbican-db-sync-wjsjf"
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.027879 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/68d01bb7-534e-47c7-854c-c96384ad8df4-log-httpd\") pod \"ceilometer-0\" (UID: \"68d01bb7-534e-47c7-854c-c96384ad8df4\") " pod="openstack/ceilometer-0"
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.028004 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af5eea12-20bf-45b6-b989-f77529ea2f04-combined-ca-bundle\") pod \"barbican-db-sync-wjsjf\" (UID: \"af5eea12-20bf-45b6-b989-f77529ea2f04\") " pod="openstack/barbican-db-sync-wjsjf"
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.028095 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfrn8\" (UniqueName: \"kubernetes.io/projected/68d01bb7-534e-47c7-854c-c96384ad8df4-kube-api-access-rfrn8\") pod \"ceilometer-0\" (UID: \"68d01bb7-534e-47c7-854c-c96384ad8df4\") " pod="openstack/ceilometer-0"
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.041404 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/af5eea12-20bf-45b6-b989-f77529ea2f04-db-sync-config-data\") pod \"barbican-db-sync-wjsjf\" (UID: \"af5eea12-20bf-45b6-b989-f77529ea2f04\") " pod="openstack/barbican-db-sync-wjsjf"
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.050180 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af5eea12-20bf-45b6-b989-f77529ea2f04-combined-ca-bundle\") pod \"barbican-db-sync-wjsjf\" (UID: \"af5eea12-20bf-45b6-b989-f77529ea2f04\") " pod="openstack/barbican-db-sync-wjsjf"
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.078335 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-tmkrp"]
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.079908 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-tmkrp"
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.082821 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8w6jt\" (UniqueName: \"kubernetes.io/projected/af5eea12-20bf-45b6-b989-f77529ea2f04-kube-api-access-8w6jt\") pod \"barbican-db-sync-wjsjf\" (UID: \"af5eea12-20bf-45b6-b989-f77529ea2f04\") " pod="openstack/barbican-db-sync-wjsjf"
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.098049 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-tmkrp"]
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.107095 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts"
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.107584 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data"
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.108033 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-wtn27"
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.132769 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3dfc8258-7043-4cfd-ac09-d9e481f12e9d-dns-swift-storage-0\") pod \"dnsmasq-dns-6f6f8cb849-tgzgj\" (UID: \"3dfc8258-7043-4cfd-ac09-d9e481f12e9d\") " pod="openstack/dnsmasq-dns-6f6f8cb849-tgzgj"
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.132851 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/68d01bb7-534e-47c7-854c-c96384ad8df4-run-httpd\") pod \"ceilometer-0\" (UID: \"68d01bb7-534e-47c7-854c-c96384ad8df4\") " pod="openstack/ceilometer-0"
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.132875 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68d01bb7-534e-47c7-854c-c96384ad8df4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"68d01bb7-534e-47c7-854c-c96384ad8df4\") " pod="openstack/ceilometer-0"
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.132894 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68d01bb7-534e-47c7-854c-c96384ad8df4-config-data\") pod \"ceilometer-0\" (UID: \"68d01bb7-534e-47c7-854c-c96384ad8df4\") " pod="openstack/ceilometer-0"
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.132918 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68d01bb7-534e-47c7-854c-c96384ad8df4-scripts\") pod \"ceilometer-0\" (UID: \"68d01bb7-534e-47c7-854c-c96384ad8df4\") " pod="openstack/ceilometer-0"
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.133007 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3dfc8258-7043-4cfd-ac09-d9e481f12e9d-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6f8cb849-tgzgj\" (UID: \"3dfc8258-7043-4cfd-ac09-d9e481f12e9d\") " pod="openstack/dnsmasq-dns-6f6f8cb849-tgzgj"
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.133072 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3dfc8258-7043-4cfd-ac09-d9e481f12e9d-config\") pod \"dnsmasq-dns-6f6f8cb849-tgzgj\" (UID: \"3dfc8258-7043-4cfd-ac09-d9e481f12e9d\") " pod="openstack/dnsmasq-dns-6f6f8cb849-tgzgj"
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.133089 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3dfc8258-7043-4cfd-ac09-d9e481f12e9d-dns-svc\") pod \"dnsmasq-dns-6f6f8cb849-tgzgj\" (UID: \"3dfc8258-7043-4cfd-ac09-d9e481f12e9d\") " pod="openstack/dnsmasq-dns-6f6f8cb849-tgzgj"
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.133111 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpzrn\" (UniqueName: \"kubernetes.io/projected/3dfc8258-7043-4cfd-ac09-d9e481f12e9d-kube-api-access-jpzrn\") pod \"dnsmasq-dns-6f6f8cb849-tgzgj\" (UID: \"3dfc8258-7043-4cfd-ac09-d9e481f12e9d\") " pod="openstack/dnsmasq-dns-6f6f8cb849-tgzgj"
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.133157 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/68d01bb7-534e-47c7-854c-c96384ad8df4-log-httpd\") pod \"ceilometer-0\" (UID: \"68d01bb7-534e-47c7-854c-c96384ad8df4\") " pod="openstack/ceilometer-0"
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.133179 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfrn8\" (UniqueName: \"kubernetes.io/projected/68d01bb7-534e-47c7-854c-c96384ad8df4-kube-api-access-rfrn8\") pod \"ceilometer-0\" (UID: \"68d01bb7-534e-47c7-854c-c96384ad8df4\") " pod="openstack/ceilometer-0"
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.133212 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/68d01bb7-534e-47c7-854c-c96384ad8df4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"68d01bb7-534e-47c7-854c-c96384ad8df4\") " 
pod="openstack/ceilometer-0" Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.133249 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3dfc8258-7043-4cfd-ac09-d9e481f12e9d-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6f8cb849-tgzgj\" (UID: \"3dfc8258-7043-4cfd-ac09-d9e481f12e9d\") " pod="openstack/dnsmasq-dns-6f6f8cb849-tgzgj" Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.133738 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/68d01bb7-534e-47c7-854c-c96384ad8df4-run-httpd\") pod \"ceilometer-0\" (UID: \"68d01bb7-534e-47c7-854c-c96384ad8df4\") " pod="openstack/ceilometer-0" Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.141367 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/68d01bb7-534e-47c7-854c-c96384ad8df4-log-httpd\") pod \"ceilometer-0\" (UID: \"68d01bb7-534e-47c7-854c-c96384ad8df4\") " pod="openstack/ceilometer-0" Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.159260 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68d01bb7-534e-47c7-854c-c96384ad8df4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"68d01bb7-534e-47c7-854c-c96384ad8df4\") " pod="openstack/ceilometer-0" Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.160835 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68d01bb7-534e-47c7-854c-c96384ad8df4-config-data\") pod \"ceilometer-0\" (UID: \"68d01bb7-534e-47c7-854c-c96384ad8df4\") " pod="openstack/ceilometer-0" Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.174487 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/68d01bb7-534e-47c7-854c-c96384ad8df4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"68d01bb7-534e-47c7-854c-c96384ad8df4\") " pod="openstack/ceilometer-0" Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.192096 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68d01bb7-534e-47c7-854c-c96384ad8df4-scripts\") pod \"ceilometer-0\" (UID: \"68d01bb7-534e-47c7-854c-c96384ad8df4\") " pod="openstack/ceilometer-0" Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.209025 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfrn8\" (UniqueName: \"kubernetes.io/projected/68d01bb7-534e-47c7-854c-c96384ad8df4-kube-api-access-rfrn8\") pod \"ceilometer-0\" (UID: \"68d01bb7-534e-47c7-854c-c96384ad8df4\") " pod="openstack/ceilometer-0" Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.231503 5017 util.go:30] "No sandbox for pod can be found. 
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.231503 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-q5xff"
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.241434 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3dfc8258-7043-4cfd-ac09-d9e481f12e9d-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6f8cb849-tgzgj\" (UID: \"3dfc8258-7043-4cfd-ac09-d9e481f12e9d\") " pod="openstack/dnsmasq-dns-6f6f8cb849-tgzgj"
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.241470 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3dfc8258-7043-4cfd-ac09-d9e481f12e9d-dns-swift-storage-0\") pod \"dnsmasq-dns-6f6f8cb849-tgzgj\" (UID: \"3dfc8258-7043-4cfd-ac09-d9e481f12e9d\") " pod="openstack/dnsmasq-dns-6f6f8cb849-tgzgj"
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.241503 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3cc3448b-f305-47b7-b2f9-32b61477ac21-scripts\") pod \"placement-db-sync-tmkrp\" (UID: \"3cc3448b-f305-47b7-b2f9-32b61477ac21\") " pod="openstack/placement-db-sync-tmkrp"
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.241546 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3cc3448b-f305-47b7-b2f9-32b61477ac21-logs\") pod \"placement-db-sync-tmkrp\" (UID: \"3cc3448b-f305-47b7-b2f9-32b61477ac21\") " pod="openstack/placement-db-sync-tmkrp"
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.241563 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vnqp\" (UniqueName: \"kubernetes.io/projected/3cc3448b-f305-47b7-b2f9-32b61477ac21-kube-api-access-5vnqp\") pod \"placement-db-sync-tmkrp\" (UID: \"3cc3448b-f305-47b7-b2f9-32b61477ac21\") " pod="openstack/placement-db-sync-tmkrp"
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.241582 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3dfc8258-7043-4cfd-ac09-d9e481f12e9d-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6f8cb849-tgzgj\" (UID: \"3dfc8258-7043-4cfd-ac09-d9e481f12e9d\") " pod="openstack/dnsmasq-dns-6f6f8cb849-tgzgj"
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.241631 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3dfc8258-7043-4cfd-ac09-d9e481f12e9d-config\") pod \"dnsmasq-dns-6f6f8cb849-tgzgj\" (UID: \"3dfc8258-7043-4cfd-ac09-d9e481f12e9d\") " pod="openstack/dnsmasq-dns-6f6f8cb849-tgzgj"
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.241649 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3dfc8258-7043-4cfd-ac09-d9e481f12e9d-dns-svc\") pod \"dnsmasq-dns-6f6f8cb849-tgzgj\" (UID: \"3dfc8258-7043-4cfd-ac09-d9e481f12e9d\") " pod="openstack/dnsmasq-dns-6f6f8cb849-tgzgj"
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.241665 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cc3448b-f305-47b7-b2f9-32b61477ac21-config-data\") pod \"placement-db-sync-tmkrp\" (UID: \"3cc3448b-f305-47b7-b2f9-32b61477ac21\") " pod="openstack/placement-db-sync-tmkrp"
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.241682 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cc3448b-f305-47b7-b2f9-32b61477ac21-combined-ca-bundle\") pod \"placement-db-sync-tmkrp\" (UID: \"3cc3448b-f305-47b7-b2f9-32b61477ac21\") " pod="openstack/placement-db-sync-tmkrp"
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.241701 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jpzrn\" (UniqueName: \"kubernetes.io/projected/3dfc8258-7043-4cfd-ac09-d9e481f12e9d-kube-api-access-jpzrn\") pod \"dnsmasq-dns-6f6f8cb849-tgzgj\" (UID: \"3dfc8258-7043-4cfd-ac09-d9e481f12e9d\") " pod="openstack/dnsmasq-dns-6f6f8cb849-tgzgj"
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.242864 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3dfc8258-7043-4cfd-ac09-d9e481f12e9d-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6f8cb849-tgzgj\" (UID: \"3dfc8258-7043-4cfd-ac09-d9e481f12e9d\") " pod="openstack/dnsmasq-dns-6f6f8cb849-tgzgj"
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.243402 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3dfc8258-7043-4cfd-ac09-d9e481f12e9d-dns-swift-storage-0\") pod \"dnsmasq-dns-6f6f8cb849-tgzgj\" (UID: \"3dfc8258-7043-4cfd-ac09-d9e481f12e9d\") " pod="openstack/dnsmasq-dns-6f6f8cb849-tgzgj"
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.243942 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3dfc8258-7043-4cfd-ac09-d9e481f12e9d-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6f8cb849-tgzgj\" (UID: \"3dfc8258-7043-4cfd-ac09-d9e481f12e9d\") " pod="openstack/dnsmasq-dns-6f6f8cb849-tgzgj"
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.246507 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-wjsjf"
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.248728 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3dfc8258-7043-4cfd-ac09-d9e481f12e9d-config\") pod \"dnsmasq-dns-6f6f8cb849-tgzgj\" (UID: \"3dfc8258-7043-4cfd-ac09-d9e481f12e9d\") " pod="openstack/dnsmasq-dns-6f6f8cb849-tgzgj"
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.256848 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3dfc8258-7043-4cfd-ac09-d9e481f12e9d-dns-svc\") pod \"dnsmasq-dns-6f6f8cb849-tgzgj\" (UID: \"3dfc8258-7043-4cfd-ac09-d9e481f12e9d\") " pod="openstack/dnsmasq-dns-6f6f8cb849-tgzgj"
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.282720 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.283942 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpzrn\" (UniqueName: \"kubernetes.io/projected/3dfc8258-7043-4cfd-ac09-d9e481f12e9d-kube-api-access-jpzrn\") pod \"dnsmasq-dns-6f6f8cb849-tgzgj\" (UID: \"3dfc8258-7043-4cfd-ac09-d9e481f12e9d\") " pod="openstack/dnsmasq-dns-6f6f8cb849-tgzgj"
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.343381 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3cc3448b-f305-47b7-b2f9-32b61477ac21-scripts\") pod \"placement-db-sync-tmkrp\" (UID: \"3cc3448b-f305-47b7-b2f9-32b61477ac21\") " pod="openstack/placement-db-sync-tmkrp"
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.343913 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3cc3448b-f305-47b7-b2f9-32b61477ac21-logs\") pod \"placement-db-sync-tmkrp\" (UID: \"3cc3448b-f305-47b7-b2f9-32b61477ac21\") " pod="openstack/placement-db-sync-tmkrp"
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.343940 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vnqp\" (UniqueName: \"kubernetes.io/projected/3cc3448b-f305-47b7-b2f9-32b61477ac21-kube-api-access-5vnqp\") pod \"placement-db-sync-tmkrp\" (UID: \"3cc3448b-f305-47b7-b2f9-32b61477ac21\") " pod="openstack/placement-db-sync-tmkrp"
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.344013 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cc3448b-f305-47b7-b2f9-32b61477ac21-config-data\") pod \"placement-db-sync-tmkrp\" (UID: \"3cc3448b-f305-47b7-b2f9-32b61477ac21\") " pod="openstack/placement-db-sync-tmkrp"
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.344033 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cc3448b-f305-47b7-b2f9-32b61477ac21-combined-ca-bundle\") pod \"placement-db-sync-tmkrp\" (UID: \"3cc3448b-f305-47b7-b2f9-32b61477ac21\") " pod="openstack/placement-db-sync-tmkrp"
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.345026 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3cc3448b-f305-47b7-b2f9-32b61477ac21-logs\") pod \"placement-db-sync-tmkrp\" (UID: \"3cc3448b-f305-47b7-b2f9-32b61477ac21\") " pod="openstack/placement-db-sync-tmkrp"
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.347380 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3cc3448b-f305-47b7-b2f9-32b61477ac21-scripts\") pod \"placement-db-sync-tmkrp\" (UID: \"3cc3448b-f305-47b7-b2f9-32b61477ac21\") " pod="openstack/placement-db-sync-tmkrp"
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.349431 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cc3448b-f305-47b7-b2f9-32b61477ac21-config-data\") pod \"placement-db-sync-tmkrp\" (UID: \"3cc3448b-f305-47b7-b2f9-32b61477ac21\") " pod="openstack/placement-db-sync-tmkrp"
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.353724 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cc3448b-f305-47b7-b2f9-32b61477ac21-combined-ca-bundle\") pod \"placement-db-sync-tmkrp\" (UID: \"3cc3448b-f305-47b7-b2f9-32b61477ac21\") " pod="openstack/placement-db-sync-tmkrp"
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.384968 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vnqp\" (UniqueName: \"kubernetes.io/projected/3cc3448b-f305-47b7-b2f9-32b61477ac21-kube-api-access-5vnqp\") pod \"placement-db-sync-tmkrp\" (UID: \"3cc3448b-f305-47b7-b2f9-32b61477ac21\") " pod="openstack/placement-db-sync-tmkrp"
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.409652 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f6f8cb849-tgzgj"
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.410752 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.415354 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.423921 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.424575 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-5m2bw"
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.424773 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts"
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.425053 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.425210 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.473233 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-tmkrp"
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.480593 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.484249 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.490787 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.491136 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.494987 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.550337 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad2298ef-344e-4c91-8c8e-83d7254a24d7-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"ad2298ef-344e-4c91-8c8e-83d7254a24d7\") " pod="openstack/glance-default-internal-api-0"
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.550406 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4cpf\" (UniqueName: \"kubernetes.io/projected/9b5e182e-95d6-49b6-8acc-b7d2cb607b29-kube-api-access-k4cpf\") pod \"glance-default-external-api-0\" (UID: \"9b5e182e-95d6-49b6-8acc-b7d2cb607b29\") " pod="openstack/glance-default-external-api-0"
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.550457 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9b5e182e-95d6-49b6-8acc-b7d2cb607b29-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"9b5e182e-95d6-49b6-8acc-b7d2cb607b29\") " pod="openstack/glance-default-external-api-0"
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.550512 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ad2298ef-344e-4c91-8c8e-83d7254a24d7-logs\") pod \"glance-default-internal-api-0\" (UID: \"ad2298ef-344e-4c91-8c8e-83d7254a24d7\") " pod="openstack/glance-default-internal-api-0"
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.550550 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b5e182e-95d6-49b6-8acc-b7d2cb607b29-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"9b5e182e-95d6-49b6-8acc-b7d2cb607b29\") " pod="openstack/glance-default-external-api-0"
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.550590 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"9b5e182e-95d6-49b6-8acc-b7d2cb607b29\") " pod="openstack/glance-default-external-api-0"
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.550814 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ad2298ef-344e-4c91-8c8e-83d7254a24d7-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ad2298ef-344e-4c91-8c8e-83d7254a24d7\") " pod="openstack/glance-default-internal-api-0"
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.550837 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b5e182e-95d6-49b6-8acc-b7d2cb607b29-scripts\") pod \"glance-default-external-api-0\" (UID: \"9b5e182e-95d6-49b6-8acc-b7d2cb607b29\") " pod="openstack/glance-default-external-api-0"
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.550861 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hd277\" (UniqueName: \"kubernetes.io/projected/ad2298ef-344e-4c91-8c8e-83d7254a24d7-kube-api-access-hd277\") pod \"glance-default-internal-api-0\" (UID: \"ad2298ef-344e-4c91-8c8e-83d7254a24d7\") " pod="openstack/glance-default-internal-api-0"
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.550897 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"ad2298ef-344e-4c91-8c8e-83d7254a24d7\") " pod="openstack/glance-default-internal-api-0"
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.550974 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b5e182e-95d6-49b6-8acc-b7d2cb607b29-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"9b5e182e-95d6-49b6-8acc-b7d2cb607b29\") " pod="openstack/glance-default-external-api-0"
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.551007 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad2298ef-344e-4c91-8c8e-83d7254a24d7-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ad2298ef-344e-4c91-8c8e-83d7254a24d7\") " pod="openstack/glance-default-internal-api-0"
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.551044 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b5e182e-95d6-49b6-8acc-b7d2cb607b29-config-data\") pod \"glance-default-external-api-0\" (UID: \"9b5e182e-95d6-49b6-8acc-b7d2cb607b29\") " pod="openstack/glance-default-external-api-0"
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.551084 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad2298ef-344e-4c91-8c8e-83d7254a24d7-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ad2298ef-344e-4c91-8c8e-83d7254a24d7\") " pod="openstack/glance-default-internal-api-0"
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.551168 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9b5e182e-95d6-49b6-8acc-b7d2cb607b29-logs\") pod \"glance-default-external-api-0\" (UID: \"9b5e182e-95d6-49b6-8acc-b7d2cb607b29\") " pod="openstack/glance-default-external-api-0"
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.551329 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad2298ef-344e-4c91-8c8e-83d7254a24d7-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ad2298ef-344e-4c91-8c8e-83d7254a24d7\") " pod="openstack/glance-default-internal-api-0"
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.668140 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b5e182e-95d6-49b6-8acc-b7d2cb607b29-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"9b5e182e-95d6-49b6-8acc-b7d2cb607b29\") " pod="openstack/glance-default-external-api-0"
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.668190 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad2298ef-344e-4c91-8c8e-83d7254a24d7-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ad2298ef-344e-4c91-8c8e-83d7254a24d7\") " pod="openstack/glance-default-internal-api-0"
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.668222 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b5e182e-95d6-49b6-8acc-b7d2cb607b29-config-data\") pod \"glance-default-external-api-0\" (UID: \"9b5e182e-95d6-49b6-8acc-b7d2cb607b29\") " pod="openstack/glance-default-external-api-0"
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.668255 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad2298ef-344e-4c91-8c8e-83d7254a24d7-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ad2298ef-344e-4c91-8c8e-83d7254a24d7\") " pod="openstack/glance-default-internal-api-0"
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.668271 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9b5e182e-95d6-49b6-8acc-b7d2cb607b29-logs\") pod \"glance-default-external-api-0\" (UID: \"9b5e182e-95d6-49b6-8acc-b7d2cb607b29\") " pod="openstack/glance-default-external-api-0"
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.668297 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad2298ef-344e-4c91-8c8e-83d7254a24d7-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ad2298ef-344e-4c91-8c8e-83d7254a24d7\") " pod="openstack/glance-default-internal-api-0"
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.668335 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad2298ef-344e-4c91-8c8e-83d7254a24d7-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"ad2298ef-344e-4c91-8c8e-83d7254a24d7\") " pod="openstack/glance-default-internal-api-0"
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.668357 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4cpf\" (UniqueName: \"kubernetes.io/projected/9b5e182e-95d6-49b6-8acc-b7d2cb607b29-kube-api-access-k4cpf\") pod \"glance-default-external-api-0\" (UID: \"9b5e182e-95d6-49b6-8acc-b7d2cb607b29\") " pod="openstack/glance-default-external-api-0"
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.668388 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9b5e182e-95d6-49b6-8acc-b7d2cb607b29-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"9b5e182e-95d6-49b6-8acc-b7d2cb607b29\") " pod="openstack/glance-default-external-api-0"
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.668424 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ad2298ef-344e-4c91-8c8e-83d7254a24d7-logs\") pod \"glance-default-internal-api-0\" (UID: \"ad2298ef-344e-4c91-8c8e-83d7254a24d7\") " pod="openstack/glance-default-internal-api-0"
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.668453 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b5e182e-95d6-49b6-8acc-b7d2cb607b29-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"9b5e182e-95d6-49b6-8acc-b7d2cb607b29\") " pod="openstack/glance-default-external-api-0"
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.668474 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"9b5e182e-95d6-49b6-8acc-b7d2cb607b29\") " pod="openstack/glance-default-external-api-0"
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.668495 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ad2298ef-344e-4c91-8c8e-83d7254a24d7-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ad2298ef-344e-4c91-8c8e-83d7254a24d7\") " pod="openstack/glance-default-internal-api-0"
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.668511 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b5e182e-95d6-49b6-8acc-b7d2cb607b29-scripts\") pod \"glance-default-external-api-0\" (UID: \"9b5e182e-95d6-49b6-8acc-b7d2cb607b29\") " pod="openstack/glance-default-external-api-0"
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.668529 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hd277\" (UniqueName: \"kubernetes.io/projected/ad2298ef-344e-4c91-8c8e-83d7254a24d7-kube-api-access-hd277\") pod \"glance-default-internal-api-0\" (UID: \"ad2298ef-344e-4c91-8c8e-83d7254a24d7\") " pod="openstack/glance-default-internal-api-0"
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.668553 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"ad2298ef-344e-4c91-8c8e-83d7254a24d7\") " pod="openstack/glance-default-internal-api-0"
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.668934 5017 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"ad2298ef-344e-4c91-8c8e-83d7254a24d7\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-internal-api-0"
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.669707 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ad2298ef-344e-4c91-8c8e-83d7254a24d7-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ad2298ef-344e-4c91-8c8e-83d7254a24d7\") " pod="openstack/glance-default-internal-api-0"
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.669752 5017 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"9b5e182e-95d6-49b6-8acc-b7d2cb607b29\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-external-api-0"
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.670298 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ad2298ef-344e-4c91-8c8e-83d7254a24d7-logs\") pod \"glance-default-internal-api-0\" (UID: \"ad2298ef-344e-4c91-8c8e-83d7254a24d7\") " pod="openstack/glance-default-internal-api-0"
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.679446 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9b5e182e-95d6-49b6-8acc-b7d2cb607b29-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"9b5e182e-95d6-49b6-8acc-b7d2cb607b29\") " pod="openstack/glance-default-external-api-0"
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.690078 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad2298ef-344e-4c91-8c8e-83d7254a24d7-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"ad2298ef-344e-4c91-8c8e-83d7254a24d7\") " pod="openstack/glance-default-internal-api-0"
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.690245 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad2298ef-344e-4c91-8c8e-83d7254a24d7-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ad2298ef-344e-4c91-8c8e-83d7254a24d7\") " pod="openstack/glance-default-internal-api-0"
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.692998 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad2298ef-344e-4c91-8c8e-83d7254a24d7-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ad2298ef-344e-4c91-8c8e-83d7254a24d7\") " pod="openstack/glance-default-internal-api-0"
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.699854 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9b5e182e-95d6-49b6-8acc-b7d2cb607b29-logs\") pod \"glance-default-external-api-0\" (UID: \"9b5e182e-95d6-49b6-8acc-b7d2cb607b29\") " pod="openstack/glance-default-external-api-0"
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.701080 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b5e182e-95d6-49b6-8acc-b7d2cb607b29-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"9b5e182e-95d6-49b6-8acc-b7d2cb607b29\") " pod="openstack/glance-default-external-api-0"
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.702097 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b5e182e-95d6-49b6-8acc-b7d2cb607b29-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"9b5e182e-95d6-49b6-8acc-b7d2cb607b29\") " pod="openstack/glance-default-external-api-0"
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.712523 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b5e182e-95d6-49b6-8acc-b7d2cb607b29-scripts\") pod \"glance-default-external-api-0\" (UID: \"9b5e182e-95d6-49b6-8acc-b7d2cb607b29\") " pod="openstack/glance-default-external-api-0"
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.728995 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5fdbfbc95f-2vhdr"]
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.745085 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b5e182e-95d6-49b6-8acc-b7d2cb607b29-config-data\") pod \"glance-default-external-api-0\" (UID: \"9b5e182e-95d6-49b6-8acc-b7d2cb607b29\") " pod="openstack/glance-default-external-api-0"
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.753736 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hd277\" (UniqueName: \"kubernetes.io/projected/ad2298ef-344e-4c91-8c8e-83d7254a24d7-kube-api-access-hd277\") pod \"glance-default-internal-api-0\" (UID: \"ad2298ef-344e-4c91-8c8e-83d7254a24d7\") " pod="openstack/glance-default-internal-api-0"
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.754627 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad2298ef-344e-4c91-8c8e-83d7254a24d7-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ad2298ef-344e-4c91-8c8e-83d7254a24d7\") " pod="openstack/glance-default-internal-api-0"
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.756731 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"ad2298ef-344e-4c91-8c8e-83d7254a24d7\") " pod="openstack/glance-default-internal-api-0"
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.758335 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4cpf\" (UniqueName: \"kubernetes.io/projected/9b5e182e-95d6-49b6-8acc-b7d2cb607b29-kube-api-access-k4cpf\") pod \"glance-default-external-api-0\" (UID: \"9b5e182e-95d6-49b6-8acc-b7d2cb607b29\") " pod="openstack/glance-default-external-api-0"
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.764066 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"9b5e182e-95d6-49b6-8acc-b7d2cb607b29\") " pod="openstack/glance-default-external-api-0"
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.822657 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.897821 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-c99rc"]
Jan 29 06:53:25 crc kubenswrapper[5017]: I0129 06:53:25.950801 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-rksqd"]
Jan 29 06:53:26 crc kubenswrapper[5017]: I0129 06:53:26.025285 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-c99rc" event={"ID":"31e5ea57-0c73-4c76-bbcb-6d3b665b6226","Type":"ContainerStarted","Data":"ef233d6afa94d0d61415e612f31f562704cb3ee6431670bbd5b96b9258cd885c"}
Jan 29 06:53:26 crc kubenswrapper[5017]: I0129 06:53:26.027155 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-rksqd" event={"ID":"6cc0f3ac-f0bb-434c-8d72-d22fdedf2f2d","Type":"ContainerStarted","Data":"13d3672c38a12267d4ac8a863dd4c6bd47e9db2161a656d291977509363a4f47"}
Jan 29 06:53:26 crc kubenswrapper[5017]: I0129 06:53:26.028883 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fdbfbc95f-2vhdr" event={"ID":"8dc605ff-16c6-4d30-9f9e-5ce2587b356b","Type":"ContainerStarted","Data":"49909de6870911097c44dd9451bec6d1ea5112722e75c929760ba1799173851d"}
Jan 29 06:53:26 crc kubenswrapper[5017]: I0129 06:53:26.029075 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-74dfc89d77-4jvlb" podUID="b51f16b0-5c41-45c9-b558-4487b8b05874" containerName="dnsmasq-dns" containerID="cri-o://cf3ad65962837ffb76c5e0d9d43f4d04d1b82e6ea1c4997fb0aa98585462fc97" gracePeriod=10
Jan 29 06:53:26 crc kubenswrapper[5017]: I0129 06:53:26.046812 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 29 06:53:26 crc kubenswrapper[5017]: I0129 06:53:26.135631 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-q5xff"]
Jan 29 06:53:26 crc kubenswrapper[5017]: W0129 06:53:26.334146 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaf5eea12_20bf_45b6_b989_f77529ea2f04.slice/crio-61c28c89cb52f5d58983deca85a9c3b01059418b67e6ce0d6f64bfcc6566cbdf WatchSource:0}: Error finding container 61c28c89cb52f5d58983deca85a9c3b01059418b67e6ce0d6f64bfcc6566cbdf: Status 404 returned error can't find the container with id 61c28c89cb52f5d58983deca85a9c3b01059418b67e6ce0d6f64bfcc6566cbdf
Jan 29 06:53:26 crc kubenswrapper[5017]: I0129 06:53:26.339913 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 29 06:53:26 crc kubenswrapper[5017]: I0129 06:53:26.346355 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-wjsjf"]
Jan 29 06:53:26 crc kubenswrapper[5017]: I0129 06:53:26.445366 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-tmkrp"]
Jan 29 06:53:26 crc kubenswrapper[5017]: I0129 06:53:26.464498 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f6f8cb849-tgzgj"]
Jan 29 06:53:26 crc kubenswrapper[5017]: W0129 06:53:26.467360 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3dfc8258_7043_4cfd_ac09_d9e481f12e9d.slice/crio-8e6098ea02dca90bb48f50fc8fd64698beb47dd1ce451e017e358aec3da250e7 WatchSource:0}: Error finding container 8e6098ea02dca90bb48f50fc8fd64698beb47dd1ce451e017e358aec3da250e7: Status 404 returned error can't find the container with id 8e6098ea02dca90bb48f50fc8fd64698beb47dd1ce451e017e358aec3da250e7
Jan 29 06:53:26 crc kubenswrapper[5017]: I0129 06:53:26.539275 5017 patch_prober.go:28] interesting pod/machine-config-daemon-895pl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 29 06:53:26 crc kubenswrapper[5017]: I0129 06:53:26.539783 5017 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
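The two prober entries above record kubelet's HTTP liveness probe against http://127.0.0.1:8798/health failing with connection refused. A minimal sketch of that kind of check, for illustration only (this is not kubelet's prober implementation, and the one-second timeout is an assumption); it follows the common HTTP-probe convention that a 2xx/3xx response is success and anything else, including a transport error, is a failure:

```go
// healthcheck.go - an illustrative HTTP liveness-style check.
package main

import (
	"fmt"
	"net/http"
	"time"
)

// probe returns nil for a 2xx/3xx response; a transport error (such as the
// "connection refused" in the log above) or any other status is a failure.
func probe(url string) error {
	client := &http.Client{Timeout: 1 * time.Second} // assumed timeout
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. dial tcp 127.0.0.1:8798: connect: connection refused
	}
	defer resp.Body.Close()
	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
		return fmt.Errorf("probe failed: status %d", resp.StatusCode)
	}
	return nil
}

func main() {
	if err := probe("http://127.0.0.1:8798/health"); err != nil {
		fmt.Println("Probe failed:", err)
	}
}
```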
Jan 29 06:53:26 crc kubenswrapper[5017]: I0129 06:53:26.793068 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 29 06:53:26 crc kubenswrapper[5017]: I0129 06:53:26.998249 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74dfc89d77-4jvlb"
Jan 29 06:53:27 crc kubenswrapper[5017]: I0129 06:53:27.011891 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 29 06:53:27 crc kubenswrapper[5017]: I0129 06:53:27.018031 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b51f16b0-5c41-45c9-b558-4487b8b05874-config\") pod \"b51f16b0-5c41-45c9-b558-4487b8b05874\" (UID: \"b51f16b0-5c41-45c9-b558-4487b8b05874\") "
Jan 29 06:53:27 crc kubenswrapper[5017]: I0129 06:53:27.018205 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b51f16b0-5c41-45c9-b558-4487b8b05874-dns-swift-storage-0\") pod \"b51f16b0-5c41-45c9-b558-4487b8b05874\" (UID: \"b51f16b0-5c41-45c9-b558-4487b8b05874\") "
Jan 29 06:53:27 crc kubenswrapper[5017]: I0129 06:53:27.021546 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zjhln\" (UniqueName: \"kubernetes.io/projected/b51f16b0-5c41-45c9-b558-4487b8b05874-kube-api-access-zjhln\") pod \"b51f16b0-5c41-45c9-b558-4487b8b05874\" (UID: \"b51f16b0-5c41-45c9-b558-4487b8b05874\") "
Jan 29 06:53:27 crc kubenswrapper[5017]: I0129 06:53:27.021605 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b51f16b0-5c41-45c9-b558-4487b8b05874-ovsdbserver-nb\") pod \"b51f16b0-5c41-45c9-b558-4487b8b05874\" (UID: \"b51f16b0-5c41-45c9-b558-4487b8b05874\") "
Jan 29 06:53:27 crc kubenswrapper[5017]: I0129 06:53:27.023448 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b51f16b0-5c41-45c9-b558-4487b8b05874-dns-svc\") pod \"b51f16b0-5c41-45c9-b558-4487b8b05874\" (UID: \"b51f16b0-5c41-45c9-b558-4487b8b05874\") "
Jan 29 06:53:27 crc kubenswrapper[5017]: I0129 06:53:27.023749 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b51f16b0-5c41-45c9-b558-4487b8b05874-ovsdbserver-sb\") pod \"b51f16b0-5c41-45c9-b558-4487b8b05874\" (UID: \"b51f16b0-5c41-45c9-b558-4487b8b05874\") "
Jan 29 06:53:27 crc kubenswrapper[5017]: I0129 06:53:27.031881 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b51f16b0-5c41-45c9-b558-4487b8b05874-kube-api-access-zjhln" (OuterVolumeSpecName: "kube-api-access-zjhln") pod "b51f16b0-5c41-45c9-b558-4487b8b05874" (UID: "b51f16b0-5c41-45c9-b558-4487b8b05874"). InnerVolumeSpecName "kube-api-access-zjhln". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 06:53:27 crc kubenswrapper[5017]: I0129 06:53:27.050237 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 29 06:53:27 crc kubenswrapper[5017]: I0129 06:53:27.079270 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-q5xff" event={"ID":"7d79ee9c-086e-405e-a8d6-478823059f00","Type":"ContainerStarted","Data":"d610e404469ca76ec8329edcb79d636e2d0e6b2aa686e2a81c9c8111278f86cd"}
Jan 29 06:53:27 crc kubenswrapper[5017]: I0129 06:53:27.079404 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-q5xff" event={"ID":"7d79ee9c-086e-405e-a8d6-478823059f00","Type":"ContainerStarted","Data":"02f54322aa96a68a67b48e92e7782eb38f981cbb47df2051b00eaec734db8b96"}
Jan 29 06:53:27 crc kubenswrapper[5017]: I0129 06:53:27.130397 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 29 06:53:27 crc kubenswrapper[5017]: I0129 06:53:27.132100 5017 generic.go:334] "Generic (PLEG): container finished" podID="3dfc8258-7043-4cfd-ac09-d9e481f12e9d" containerID="2845b77e0f3271d0d2a2567d78b6fb737b425004ac8039b7a726a4ed76dce87d" exitCode=0
Jan 29 06:53:27 crc kubenswrapper[5017]: I0129 06:53:27.132255 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6f8cb849-tgzgj" event={"ID":"3dfc8258-7043-4cfd-ac09-d9e481f12e9d","Type":"ContainerDied","Data":"2845b77e0f3271d0d2a2567d78b6fb737b425004ac8039b7a726a4ed76dce87d"}
Jan 29 06:53:27 crc kubenswrapper[5017]: I0129 06:53:27.132294 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6f8cb849-tgzgj" event={"ID":"3dfc8258-7043-4cfd-ac09-d9e481f12e9d","Type":"ContainerStarted","Data":"8e6098ea02dca90bb48f50fc8fd64698beb47dd1ce451e017e358aec3da250e7"}
Jan 29 06:53:27 crc kubenswrapper[5017]: I0129 06:53:27.143576 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zjhln\" (UniqueName: \"kubernetes.io/projected/b51f16b0-5c41-45c9-b558-4487b8b05874-kube-api-access-zjhln\") on node \"crc\" DevicePath \"\""
Jan 29 06:53:27 crc kubenswrapper[5017]: I0129 06:53:27.160859 5017 generic.go:334] "Generic (PLEG): container finished" podID="8dc605ff-16c6-4d30-9f9e-5ce2587b356b" containerID="4646929dc8bc8428c95270a26497f0c06fbb97caf3eb99f931896c7106128d7d" exitCode=0
Jan 29 06:53:27 crc kubenswrapper[5017]: I0129 06:53:27.161173 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fdbfbc95f-2vhdr" event={"ID":"8dc605ff-16c6-4d30-9f9e-5ce2587b356b","Type":"ContainerDied","Data":"4646929dc8bc8428c95270a26497f0c06fbb97caf3eb99f931896c7106128d7d"}
Jan 29 06:53:27 crc kubenswrapper[5017]: I0129 06:53:27.177726 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b51f16b0-5c41-45c9-b558-4487b8b05874-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b51f16b0-5c41-45c9-b558-4487b8b05874" (UID: "b51f16b0-5c41-45c9-b558-4487b8b05874"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 06:53:27 crc kubenswrapper[5017]: I0129 06:53:27.183754 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-q5xff" podStartSLOduration=3.183736039 podStartE2EDuration="3.183736039s" podCreationTimestamp="2026-01-29 06:53:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:53:27.130325419 +0000 UTC m=+1093.504773029" watchObservedRunningTime="2026-01-29 06:53:27.183736039 +0000 UTC m=+1093.558183649"
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:53:27 crc kubenswrapper[5017]: I0129 06:53:27.236909 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"68d01bb7-534e-47c7-854c-c96384ad8df4","Type":"ContainerStarted","Data":"c016d1e0810fbefe9cd1515d777f4eaaf6962031d79318f13af8ca6b88098fdb"} Jan 29 06:53:27 crc kubenswrapper[5017]: I0129 06:53:27.251440 5017 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b51f16b0-5c41-45c9-b558-4487b8b05874-config\") on node \"crc\" DevicePath \"\"" Jan 29 06:53:27 crc kubenswrapper[5017]: I0129 06:53:27.251472 5017 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b51f16b0-5c41-45c9-b558-4487b8b05874-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 29 06:53:27 crc kubenswrapper[5017]: I0129 06:53:27.251487 5017 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b51f16b0-5c41-45c9-b558-4487b8b05874-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 06:53:27 crc kubenswrapper[5017]: I0129 06:53:27.251499 5017 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b51f16b0-5c41-45c9-b558-4487b8b05874-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 06:53:27 crc kubenswrapper[5017]: I0129 06:53:27.251823 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-tmkrp" event={"ID":"3cc3448b-f305-47b7-b2f9-32b61477ac21","Type":"ContainerStarted","Data":"7fd2e8f3ad7833e49965f38edb5ba2926bef07ecde903562be671bc910027370"} Jan 29 06:53:27 crc kubenswrapper[5017]: I0129 06:53:27.253603 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-wjsjf" event={"ID":"af5eea12-20bf-45b6-b989-f77529ea2f04","Type":"ContainerStarted","Data":"61c28c89cb52f5d58983deca85a9c3b01059418b67e6ce0d6f64bfcc6566cbdf"} Jan 29 06:53:27 crc kubenswrapper[5017]: I0129 06:53:27.278283 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b51f16b0-5c41-45c9-b558-4487b8b05874-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b51f16b0-5c41-45c9-b558-4487b8b05874" (UID: "b51f16b0-5c41-45c9-b558-4487b8b05874"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:53:27 crc kubenswrapper[5017]: I0129 06:53:27.288969 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ad2298ef-344e-4c91-8c8e-83d7254a24d7","Type":"ContainerStarted","Data":"277f670ddf29d756bc4c5d6109462dcddf7dbf596d62e68f605e375f8e2f8d2f"} Jan 29 06:53:27 crc kubenswrapper[5017]: I0129 06:53:27.353536 5017 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b51f16b0-5c41-45c9-b558-4487b8b05874-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 06:53:27 crc kubenswrapper[5017]: I0129 06:53:27.424456 5017 generic.go:334] "Generic (PLEG): container finished" podID="b51f16b0-5c41-45c9-b558-4487b8b05874" containerID="cf3ad65962837ffb76c5e0d9d43f4d04d1b82e6ea1c4997fb0aa98585462fc97" exitCode=0 Jan 29 06:53:27 crc kubenswrapper[5017]: I0129 06:53:27.424588 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dfc89d77-4jvlb" event={"ID":"b51f16b0-5c41-45c9-b558-4487b8b05874","Type":"ContainerDied","Data":"cf3ad65962837ffb76c5e0d9d43f4d04d1b82e6ea1c4997fb0aa98585462fc97"} Jan 29 06:53:27 crc kubenswrapper[5017]: I0129 06:53:27.424629 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dfc89d77-4jvlb" event={"ID":"b51f16b0-5c41-45c9-b558-4487b8b05874","Type":"ContainerDied","Data":"c05fa98a6c2c5e210a820812a88af2c8b373f4ce1b6717cfda517552c749a42c"} Jan 29 06:53:27 crc kubenswrapper[5017]: I0129 06:53:27.424651 5017 scope.go:117] "RemoveContainer" containerID="cf3ad65962837ffb76c5e0d9d43f4d04d1b82e6ea1c4997fb0aa98585462fc97" Jan 29 06:53:27 crc kubenswrapper[5017]: I0129 06:53:27.424814 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74dfc89d77-4jvlb" Jan 29 06:53:27 crc kubenswrapper[5017]: I0129 06:53:27.450299 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-rksqd" event={"ID":"6cc0f3ac-f0bb-434c-8d72-d22fdedf2f2d","Type":"ContainerStarted","Data":"ca433794f725d27e083742abb5dc11becd94156c42fefff51806909b06722939"} Jan 29 06:53:27 crc kubenswrapper[5017]: I0129 06:53:27.517796 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-rksqd" podStartSLOduration=3.517774749 podStartE2EDuration="3.517774749s" podCreationTimestamp="2026-01-29 06:53:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:53:27.511352702 +0000 UTC m=+1093.885800332" watchObservedRunningTime="2026-01-29 06:53:27.517774749 +0000 UTC m=+1093.892222359" Jan 29 06:53:27 crc kubenswrapper[5017]: I0129 06:53:27.602731 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74dfc89d77-4jvlb"] Jan 29 06:53:27 crc kubenswrapper[5017]: I0129 06:53:27.680111 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-74dfc89d77-4jvlb"] Jan 29 06:53:27 crc kubenswrapper[5017]: I0129 06:53:27.729995 5017 scope.go:117] "RemoveContainer" containerID="26c90e0e4cb48d07ad5a5af2a3851b59cfa81ca9d8c1d2704e9e22daf77ada7e" Jan 29 06:53:27 crc kubenswrapper[5017]: I0129 06:53:27.921351 5017 scope.go:117] "RemoveContainer" containerID="cf3ad65962837ffb76c5e0d9d43f4d04d1b82e6ea1c4997fb0aa98585462fc97" Jan 29 06:53:27 crc kubenswrapper[5017]: E0129 06:53:27.922065 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf3ad65962837ffb76c5e0d9d43f4d04d1b82e6ea1c4997fb0aa98585462fc97\": container with ID starting with cf3ad65962837ffb76c5e0d9d43f4d04d1b82e6ea1c4997fb0aa98585462fc97 not found: ID does not exist" containerID="cf3ad65962837ffb76c5e0d9d43f4d04d1b82e6ea1c4997fb0aa98585462fc97" Jan 29 06:53:27 crc kubenswrapper[5017]: I0129 06:53:27.922117 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf3ad65962837ffb76c5e0d9d43f4d04d1b82e6ea1c4997fb0aa98585462fc97"} err="failed to get container status \"cf3ad65962837ffb76c5e0d9d43f4d04d1b82e6ea1c4997fb0aa98585462fc97\": rpc error: code = NotFound desc = could not find container \"cf3ad65962837ffb76c5e0d9d43f4d04d1b82e6ea1c4997fb0aa98585462fc97\": container with ID starting with cf3ad65962837ffb76c5e0d9d43f4d04d1b82e6ea1c4997fb0aa98585462fc97 not found: ID does not exist" Jan 29 06:53:27 crc kubenswrapper[5017]: I0129 06:53:27.922143 5017 scope.go:117] "RemoveContainer" containerID="26c90e0e4cb48d07ad5a5af2a3851b59cfa81ca9d8c1d2704e9e22daf77ada7e" Jan 29 06:53:27 crc kubenswrapper[5017]: E0129 06:53:27.922934 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"26c90e0e4cb48d07ad5a5af2a3851b59cfa81ca9d8c1d2704e9e22daf77ada7e\": container with ID starting with 26c90e0e4cb48d07ad5a5af2a3851b59cfa81ca9d8c1d2704e9e22daf77ada7e not found: ID does not exist" containerID="26c90e0e4cb48d07ad5a5af2a3851b59cfa81ca9d8c1d2704e9e22daf77ada7e" Jan 29 06:53:27 crc kubenswrapper[5017]: I0129 06:53:27.922980 5017 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"26c90e0e4cb48d07ad5a5af2a3851b59cfa81ca9d8c1d2704e9e22daf77ada7e"} err="failed to get container status \"26c90e0e4cb48d07ad5a5af2a3851b59cfa81ca9d8c1d2704e9e22daf77ada7e\": rpc error: code = NotFound desc = could not find container \"26c90e0e4cb48d07ad5a5af2a3851b59cfa81ca9d8c1d2704e9e22daf77ada7e\": container with ID starting with 26c90e0e4cb48d07ad5a5af2a3851b59cfa81ca9d8c1d2704e9e22daf77ada7e not found: ID does not exist" Jan 29 06:53:28 crc kubenswrapper[5017]: I0129 06:53:28.032165 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fdbfbc95f-2vhdr" Jan 29 06:53:28 crc kubenswrapper[5017]: I0129 06:53:28.082317 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8dc605ff-16c6-4d30-9f9e-5ce2587b356b-ovsdbserver-nb\") pod \"8dc605ff-16c6-4d30-9f9e-5ce2587b356b\" (UID: \"8dc605ff-16c6-4d30-9f9e-5ce2587b356b\") " Jan 29 06:53:28 crc kubenswrapper[5017]: I0129 06:53:28.082367 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8dc605ff-16c6-4d30-9f9e-5ce2587b356b-config\") pod \"8dc605ff-16c6-4d30-9f9e-5ce2587b356b\" (UID: \"8dc605ff-16c6-4d30-9f9e-5ce2587b356b\") " Jan 29 06:53:28 crc kubenswrapper[5017]: I0129 06:53:28.082487 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8dc605ff-16c6-4d30-9f9e-5ce2587b356b-ovsdbserver-sb\") pod \"8dc605ff-16c6-4d30-9f9e-5ce2587b356b\" (UID: \"8dc605ff-16c6-4d30-9f9e-5ce2587b356b\") " Jan 29 06:53:28 crc kubenswrapper[5017]: I0129 06:53:28.082595 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-95rm8\" (UniqueName: \"kubernetes.io/projected/8dc605ff-16c6-4d30-9f9e-5ce2587b356b-kube-api-access-95rm8\") pod \"8dc605ff-16c6-4d30-9f9e-5ce2587b356b\" (UID: \"8dc605ff-16c6-4d30-9f9e-5ce2587b356b\") " Jan 29 06:53:28 crc kubenswrapper[5017]: I0129 06:53:28.082658 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8dc605ff-16c6-4d30-9f9e-5ce2587b356b-dns-swift-storage-0\") pod \"8dc605ff-16c6-4d30-9f9e-5ce2587b356b\" (UID: \"8dc605ff-16c6-4d30-9f9e-5ce2587b356b\") " Jan 29 06:53:28 crc kubenswrapper[5017]: I0129 06:53:28.083078 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8dc605ff-16c6-4d30-9f9e-5ce2587b356b-dns-svc\") pod \"8dc605ff-16c6-4d30-9f9e-5ce2587b356b\" (UID: \"8dc605ff-16c6-4d30-9f9e-5ce2587b356b\") " Jan 29 06:53:28 crc kubenswrapper[5017]: I0129 06:53:28.090586 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8dc605ff-16c6-4d30-9f9e-5ce2587b356b-kube-api-access-95rm8" (OuterVolumeSpecName: "kube-api-access-95rm8") pod "8dc605ff-16c6-4d30-9f9e-5ce2587b356b" (UID: "8dc605ff-16c6-4d30-9f9e-5ce2587b356b"). InnerVolumeSpecName "kube-api-access-95rm8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:53:28 crc kubenswrapper[5017]: I0129 06:53:28.148304 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8dc605ff-16c6-4d30-9f9e-5ce2587b356b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8dc605ff-16c6-4d30-9f9e-5ce2587b356b" (UID: "8dc605ff-16c6-4d30-9f9e-5ce2587b356b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:53:28 crc kubenswrapper[5017]: I0129 06:53:28.187662 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8dc605ff-16c6-4d30-9f9e-5ce2587b356b-config" (OuterVolumeSpecName: "config") pod "8dc605ff-16c6-4d30-9f9e-5ce2587b356b" (UID: "8dc605ff-16c6-4d30-9f9e-5ce2587b356b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:53:28 crc kubenswrapper[5017]: I0129 06:53:28.188323 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8dc605ff-16c6-4d30-9f9e-5ce2587b356b-config\") pod \"8dc605ff-16c6-4d30-9f9e-5ce2587b356b\" (UID: \"8dc605ff-16c6-4d30-9f9e-5ce2587b356b\") " Jan 29 06:53:28 crc kubenswrapper[5017]: W0129 06:53:28.188783 5017 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/8dc605ff-16c6-4d30-9f9e-5ce2587b356b/volumes/kubernetes.io~configmap/config Jan 29 06:53:28 crc kubenswrapper[5017]: I0129 06:53:28.188822 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8dc605ff-16c6-4d30-9f9e-5ce2587b356b-config" (OuterVolumeSpecName: "config") pod "8dc605ff-16c6-4d30-9f9e-5ce2587b356b" (UID: "8dc605ff-16c6-4d30-9f9e-5ce2587b356b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:53:28 crc kubenswrapper[5017]: I0129 06:53:28.188915 5017 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8dc605ff-16c6-4d30-9f9e-5ce2587b356b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 06:53:28 crc kubenswrapper[5017]: I0129 06:53:28.188933 5017 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8dc605ff-16c6-4d30-9f9e-5ce2587b356b-config\") on node \"crc\" DevicePath \"\"" Jan 29 06:53:28 crc kubenswrapper[5017]: I0129 06:53:28.188943 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-95rm8\" (UniqueName: \"kubernetes.io/projected/8dc605ff-16c6-4d30-9f9e-5ce2587b356b-kube-api-access-95rm8\") on node \"crc\" DevicePath \"\"" Jan 29 06:53:28 crc kubenswrapper[5017]: I0129 06:53:28.192532 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8dc605ff-16c6-4d30-9f9e-5ce2587b356b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8dc605ff-16c6-4d30-9f9e-5ce2587b356b" (UID: "8dc605ff-16c6-4d30-9f9e-5ce2587b356b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:53:28 crc kubenswrapper[5017]: I0129 06:53:28.206371 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8dc605ff-16c6-4d30-9f9e-5ce2587b356b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "8dc605ff-16c6-4d30-9f9e-5ce2587b356b" (UID: "8dc605ff-16c6-4d30-9f9e-5ce2587b356b"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:53:28 crc kubenswrapper[5017]: I0129 06:53:28.207924 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8dc605ff-16c6-4d30-9f9e-5ce2587b356b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8dc605ff-16c6-4d30-9f9e-5ce2587b356b" (UID: "8dc605ff-16c6-4d30-9f9e-5ce2587b356b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:53:28 crc kubenswrapper[5017]: I0129 06:53:28.298221 5017 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8dc605ff-16c6-4d30-9f9e-5ce2587b356b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 06:53:28 crc kubenswrapper[5017]: I0129 06:53:28.298291 5017 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8dc605ff-16c6-4d30-9f9e-5ce2587b356b-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 29 06:53:28 crc kubenswrapper[5017]: I0129 06:53:28.298306 5017 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8dc605ff-16c6-4d30-9f9e-5ce2587b356b-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 06:53:28 crc kubenswrapper[5017]: I0129 06:53:28.339468 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b51f16b0-5c41-45c9-b558-4487b8b05874" path="/var/lib/kubelet/pods/b51f16b0-5c41-45c9-b558-4487b8b05874/volumes" Jan 29 06:53:28 crc kubenswrapper[5017]: I0129 06:53:28.465972 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6f8cb849-tgzgj" event={"ID":"3dfc8258-7043-4cfd-ac09-d9e481f12e9d","Type":"ContainerStarted","Data":"8409cac67da09953edb1b7c242528a3e53c74e0361aa1204be66d67e4ac620c2"} Jan 29 06:53:28 crc kubenswrapper[5017]: I0129 06:53:28.468181 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fdbfbc95f-2vhdr" event={"ID":"8dc605ff-16c6-4d30-9f9e-5ce2587b356b","Type":"ContainerDied","Data":"49909de6870911097c44dd9451bec6d1ea5112722e75c929760ba1799173851d"} Jan 29 06:53:28 crc kubenswrapper[5017]: I0129 06:53:28.468254 5017 scope.go:117] "RemoveContainer" containerID="4646929dc8bc8428c95270a26497f0c06fbb97caf3eb99f931896c7106128d7d" Jan 29 06:53:28 crc kubenswrapper[5017]: I0129 06:53:28.468194 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5fdbfbc95f-2vhdr" Jan 29 06:53:28 crc kubenswrapper[5017]: I0129 06:53:28.470758 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9b5e182e-95d6-49b6-8acc-b7d2cb607b29","Type":"ContainerStarted","Data":"89a894893563f38dffc84c5fa3ebb43a228704a056bd57274fa6a885728e5110"} Jan 29 06:53:28 crc kubenswrapper[5017]: I0129 06:53:28.540728 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5fdbfbc95f-2vhdr"] Jan 29 06:53:28 crc kubenswrapper[5017]: I0129 06:53:28.550243 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5fdbfbc95f-2vhdr"] Jan 29 06:53:29 crc kubenswrapper[5017]: I0129 06:53:29.495705 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ad2298ef-344e-4c91-8c8e-83d7254a24d7","Type":"ContainerStarted","Data":"ae263717d3e735843abaf06749a5fb6e17d0f605a5b2b17b3dfcb3956d2a0f68"} Jan 29 06:53:29 crc kubenswrapper[5017]: I0129 06:53:29.506931 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9b5e182e-95d6-49b6-8acc-b7d2cb607b29","Type":"ContainerStarted","Data":"73b41c7ec0e8ce8af263593d1d1d71a4003a290f0896849f9c2df13b89459a06"} Jan 29 06:53:29 crc kubenswrapper[5017]: I0129 06:53:29.507008 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9b5e182e-95d6-49b6-8acc-b7d2cb607b29","Type":"ContainerStarted","Data":"23071a56c89fca6ad0208917e44ff4c66a8c08c0ddd9819e2cf1883b6c9f515a"} Jan 29 06:53:29 crc kubenswrapper[5017]: I0129 06:53:29.507132 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6f6f8cb849-tgzgj" Jan 29 06:53:29 crc kubenswrapper[5017]: I0129 06:53:29.507678 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="9b5e182e-95d6-49b6-8acc-b7d2cb607b29" containerName="glance-log" containerID="cri-o://23071a56c89fca6ad0208917e44ff4c66a8c08c0ddd9819e2cf1883b6c9f515a" gracePeriod=30 Jan 29 06:53:29 crc kubenswrapper[5017]: I0129 06:53:29.507803 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="9b5e182e-95d6-49b6-8acc-b7d2cb607b29" containerName="glance-httpd" containerID="cri-o://73b41c7ec0e8ce8af263593d1d1d71a4003a290f0896849f9c2df13b89459a06" gracePeriod=30 Jan 29 06:53:29 crc kubenswrapper[5017]: I0129 06:53:29.541132 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6f6f8cb849-tgzgj" podStartSLOduration=5.541104581 podStartE2EDuration="5.541104581s" podCreationTimestamp="2026-01-29 06:53:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:53:29.534402366 +0000 UTC m=+1095.908849986" watchObservedRunningTime="2026-01-29 06:53:29.541104581 +0000 UTC m=+1095.915552191" Jan 29 06:53:29 crc kubenswrapper[5017]: I0129 06:53:29.585091 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.585066589 podStartE2EDuration="5.585066589s" podCreationTimestamp="2026-01-29 06:53:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-29 06:53:29.570376729 +0000 UTC m=+1095.944824339" watchObservedRunningTime="2026-01-29 06:53:29.585066589 +0000 UTC m=+1095.959514199" Jan 29 06:53:30 crc kubenswrapper[5017]: I0129 06:53:30.134808 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 06:53:30 crc kubenswrapper[5017]: I0129 06:53:30.247714 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"9b5e182e-95d6-49b6-8acc-b7d2cb607b29\" (UID: \"9b5e182e-95d6-49b6-8acc-b7d2cb607b29\") " Jan 29 06:53:30 crc kubenswrapper[5017]: I0129 06:53:30.247792 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k4cpf\" (UniqueName: \"kubernetes.io/projected/9b5e182e-95d6-49b6-8acc-b7d2cb607b29-kube-api-access-k4cpf\") pod \"9b5e182e-95d6-49b6-8acc-b7d2cb607b29\" (UID: \"9b5e182e-95d6-49b6-8acc-b7d2cb607b29\") " Jan 29 06:53:30 crc kubenswrapper[5017]: I0129 06:53:30.247837 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b5e182e-95d6-49b6-8acc-b7d2cb607b29-config-data\") pod \"9b5e182e-95d6-49b6-8acc-b7d2cb607b29\" (UID: \"9b5e182e-95d6-49b6-8acc-b7d2cb607b29\") " Jan 29 06:53:30 crc kubenswrapper[5017]: I0129 06:53:30.247865 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b5e182e-95d6-49b6-8acc-b7d2cb607b29-public-tls-certs\") pod \"9b5e182e-95d6-49b6-8acc-b7d2cb607b29\" (UID: \"9b5e182e-95d6-49b6-8acc-b7d2cb607b29\") " Jan 29 06:53:30 crc kubenswrapper[5017]: I0129 06:53:30.247915 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9b5e182e-95d6-49b6-8acc-b7d2cb607b29-logs\") pod \"9b5e182e-95d6-49b6-8acc-b7d2cb607b29\" (UID: \"9b5e182e-95d6-49b6-8acc-b7d2cb607b29\") " Jan 29 06:53:30 crc kubenswrapper[5017]: I0129 06:53:30.248105 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b5e182e-95d6-49b6-8acc-b7d2cb607b29-combined-ca-bundle\") pod \"9b5e182e-95d6-49b6-8acc-b7d2cb607b29\" (UID: \"9b5e182e-95d6-49b6-8acc-b7d2cb607b29\") " Jan 29 06:53:30 crc kubenswrapper[5017]: I0129 06:53:30.248143 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9b5e182e-95d6-49b6-8acc-b7d2cb607b29-httpd-run\") pod \"9b5e182e-95d6-49b6-8acc-b7d2cb607b29\" (UID: \"9b5e182e-95d6-49b6-8acc-b7d2cb607b29\") " Jan 29 06:53:30 crc kubenswrapper[5017]: I0129 06:53:30.248176 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b5e182e-95d6-49b6-8acc-b7d2cb607b29-scripts\") pod \"9b5e182e-95d6-49b6-8acc-b7d2cb607b29\" (UID: \"9b5e182e-95d6-49b6-8acc-b7d2cb607b29\") " Jan 29 06:53:30 crc kubenswrapper[5017]: I0129 06:53:30.249336 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b5e182e-95d6-49b6-8acc-b7d2cb607b29-logs" (OuterVolumeSpecName: "logs") pod "9b5e182e-95d6-49b6-8acc-b7d2cb607b29" (UID: "9b5e182e-95d6-49b6-8acc-b7d2cb607b29"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:53:30 crc kubenswrapper[5017]: I0129 06:53:30.249673 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b5e182e-95d6-49b6-8acc-b7d2cb607b29-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "9b5e182e-95d6-49b6-8acc-b7d2cb607b29" (UID: "9b5e182e-95d6-49b6-8acc-b7d2cb607b29"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:53:30 crc kubenswrapper[5017]: I0129 06:53:30.255567 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b5e182e-95d6-49b6-8acc-b7d2cb607b29-scripts" (OuterVolumeSpecName: "scripts") pod "9b5e182e-95d6-49b6-8acc-b7d2cb607b29" (UID: "9b5e182e-95d6-49b6-8acc-b7d2cb607b29"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:53:30 crc kubenswrapper[5017]: I0129 06:53:30.268210 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "glance") pod "9b5e182e-95d6-49b6-8acc-b7d2cb607b29" (UID: "9b5e182e-95d6-49b6-8acc-b7d2cb607b29"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 29 06:53:30 crc kubenswrapper[5017]: I0129 06:53:30.276123 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b5e182e-95d6-49b6-8acc-b7d2cb607b29-kube-api-access-k4cpf" (OuterVolumeSpecName: "kube-api-access-k4cpf") pod "9b5e182e-95d6-49b6-8acc-b7d2cb607b29" (UID: "9b5e182e-95d6-49b6-8acc-b7d2cb607b29"). InnerVolumeSpecName "kube-api-access-k4cpf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:53:30 crc kubenswrapper[5017]: I0129 06:53:30.282903 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b5e182e-95d6-49b6-8acc-b7d2cb607b29-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9b5e182e-95d6-49b6-8acc-b7d2cb607b29" (UID: "9b5e182e-95d6-49b6-8acc-b7d2cb607b29"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:53:30 crc kubenswrapper[5017]: I0129 06:53:30.310288 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b5e182e-95d6-49b6-8acc-b7d2cb607b29-config-data" (OuterVolumeSpecName: "config-data") pod "9b5e182e-95d6-49b6-8acc-b7d2cb607b29" (UID: "9b5e182e-95d6-49b6-8acc-b7d2cb607b29"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:53:30 crc kubenswrapper[5017]: I0129 06:53:30.319162 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b5e182e-95d6-49b6-8acc-b7d2cb607b29-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "9b5e182e-95d6-49b6-8acc-b7d2cb607b29" (UID: "9b5e182e-95d6-49b6-8acc-b7d2cb607b29"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:53:30 crc kubenswrapper[5017]: I0129 06:53:30.339970 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8dc605ff-16c6-4d30-9f9e-5ce2587b356b" path="/var/lib/kubelet/pods/8dc605ff-16c6-4d30-9f9e-5ce2587b356b/volumes" Jan 29 06:53:30 crc kubenswrapper[5017]: I0129 06:53:30.350697 5017 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9b5e182e-95d6-49b6-8acc-b7d2cb607b29-logs\") on node \"crc\" DevicePath \"\"" Jan 29 06:53:30 crc kubenswrapper[5017]: I0129 06:53:30.350741 5017 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b5e182e-95d6-49b6-8acc-b7d2cb607b29-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 06:53:30 crc kubenswrapper[5017]: I0129 06:53:30.350758 5017 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9b5e182e-95d6-49b6-8acc-b7d2cb607b29-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 29 06:53:30 crc kubenswrapper[5017]: I0129 06:53:30.350769 5017 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b5e182e-95d6-49b6-8acc-b7d2cb607b29-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 06:53:30 crc kubenswrapper[5017]: I0129 06:53:30.350808 5017 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Jan 29 06:53:30 crc kubenswrapper[5017]: I0129 06:53:30.350818 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k4cpf\" (UniqueName: \"kubernetes.io/projected/9b5e182e-95d6-49b6-8acc-b7d2cb607b29-kube-api-access-k4cpf\") on node \"crc\" DevicePath \"\"" Jan 29 06:53:30 crc kubenswrapper[5017]: I0129 06:53:30.350833 5017 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b5e182e-95d6-49b6-8acc-b7d2cb607b29-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 06:53:30 crc kubenswrapper[5017]: I0129 06:53:30.350842 5017 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b5e182e-95d6-49b6-8acc-b7d2cb607b29-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 06:53:30 crc kubenswrapper[5017]: I0129 06:53:30.368388 5017 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Jan 29 06:53:30 crc kubenswrapper[5017]: I0129 06:53:30.453500 5017 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Jan 29 06:53:30 crc kubenswrapper[5017]: I0129 06:53:30.527816 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ad2298ef-344e-4c91-8c8e-83d7254a24d7","Type":"ContainerStarted","Data":"fc8280ea81e0a88c9f2c87e119fcd2e6a76a240e823a27c3183ab30ec73aa521"} Jan 29 06:53:30 crc kubenswrapper[5017]: I0129 06:53:30.528008 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="ad2298ef-344e-4c91-8c8e-83d7254a24d7" containerName="glance-log" containerID="cri-o://ae263717d3e735843abaf06749a5fb6e17d0f605a5b2b17b3dfcb3956d2a0f68" gracePeriod=30 Jan 29 
06:53:30 crc kubenswrapper[5017]: I0129 06:53:30.528116 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="ad2298ef-344e-4c91-8c8e-83d7254a24d7" containerName="glance-httpd" containerID="cri-o://fc8280ea81e0a88c9f2c87e119fcd2e6a76a240e823a27c3183ab30ec73aa521" gracePeriod=30 Jan 29 06:53:30 crc kubenswrapper[5017]: I0129 06:53:30.540057 5017 generic.go:334] "Generic (PLEG): container finished" podID="9b5e182e-95d6-49b6-8acc-b7d2cb607b29" containerID="73b41c7ec0e8ce8af263593d1d1d71a4003a290f0896849f9c2df13b89459a06" exitCode=143 Jan 29 06:53:30 crc kubenswrapper[5017]: I0129 06:53:30.540077 5017 generic.go:334] "Generic (PLEG): container finished" podID="9b5e182e-95d6-49b6-8acc-b7d2cb607b29" containerID="23071a56c89fca6ad0208917e44ff4c66a8c08c0ddd9819e2cf1883b6c9f515a" exitCode=143 Jan 29 06:53:30 crc kubenswrapper[5017]: I0129 06:53:30.540974 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 06:53:30 crc kubenswrapper[5017]: I0129 06:53:30.541412 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9b5e182e-95d6-49b6-8acc-b7d2cb607b29","Type":"ContainerDied","Data":"73b41c7ec0e8ce8af263593d1d1d71a4003a290f0896849f9c2df13b89459a06"} Jan 29 06:53:30 crc kubenswrapper[5017]: I0129 06:53:30.541436 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9b5e182e-95d6-49b6-8acc-b7d2cb607b29","Type":"ContainerDied","Data":"23071a56c89fca6ad0208917e44ff4c66a8c08c0ddd9819e2cf1883b6c9f515a"} Jan 29 06:53:30 crc kubenswrapper[5017]: I0129 06:53:30.541445 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9b5e182e-95d6-49b6-8acc-b7d2cb607b29","Type":"ContainerDied","Data":"89a894893563f38dffc84c5fa3ebb43a228704a056bd57274fa6a885728e5110"} Jan 29 06:53:30 crc kubenswrapper[5017]: I0129 06:53:30.541461 5017 scope.go:117] "RemoveContainer" containerID="73b41c7ec0e8ce8af263593d1d1d71a4003a290f0896849f9c2df13b89459a06" Jan 29 06:53:30 crc kubenswrapper[5017]: I0129 06:53:30.567504 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.567474097 podStartE2EDuration="6.567474097s" podCreationTimestamp="2026-01-29 06:53:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:53:30.554789046 +0000 UTC m=+1096.929236676" watchObservedRunningTime="2026-01-29 06:53:30.567474097 +0000 UTC m=+1096.941921707" Jan 29 06:53:30 crc kubenswrapper[5017]: I0129 06:53:30.588922 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 06:53:30 crc kubenswrapper[5017]: I0129 06:53:30.598989 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 06:53:30 crc kubenswrapper[5017]: I0129 06:53:30.628420 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 06:53:30 crc kubenswrapper[5017]: E0129 06:53:30.628929 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b51f16b0-5c41-45c9-b558-4487b8b05874" containerName="init" Jan 29 06:53:30 crc kubenswrapper[5017]: I0129 06:53:30.628948 5017 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="b51f16b0-5c41-45c9-b558-4487b8b05874" containerName="init" Jan 29 06:53:30 crc kubenswrapper[5017]: E0129 06:53:30.628992 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b5e182e-95d6-49b6-8acc-b7d2cb607b29" containerName="glance-httpd" Jan 29 06:53:30 crc kubenswrapper[5017]: I0129 06:53:30.628999 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b5e182e-95d6-49b6-8acc-b7d2cb607b29" containerName="glance-httpd" Jan 29 06:53:30 crc kubenswrapper[5017]: E0129 06:53:30.629021 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b5e182e-95d6-49b6-8acc-b7d2cb607b29" containerName="glance-log" Jan 29 06:53:30 crc kubenswrapper[5017]: I0129 06:53:30.629028 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b5e182e-95d6-49b6-8acc-b7d2cb607b29" containerName="glance-log" Jan 29 06:53:30 crc kubenswrapper[5017]: E0129 06:53:30.629042 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b51f16b0-5c41-45c9-b558-4487b8b05874" containerName="dnsmasq-dns" Jan 29 06:53:30 crc kubenswrapper[5017]: I0129 06:53:30.629048 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="b51f16b0-5c41-45c9-b558-4487b8b05874" containerName="dnsmasq-dns" Jan 29 06:53:30 crc kubenswrapper[5017]: E0129 06:53:30.629057 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8dc605ff-16c6-4d30-9f9e-5ce2587b356b" containerName="init" Jan 29 06:53:30 crc kubenswrapper[5017]: I0129 06:53:30.629062 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="8dc605ff-16c6-4d30-9f9e-5ce2587b356b" containerName="init" Jan 29 06:53:30 crc kubenswrapper[5017]: I0129 06:53:30.629221 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="8dc605ff-16c6-4d30-9f9e-5ce2587b356b" containerName="init" Jan 29 06:53:30 crc kubenswrapper[5017]: I0129 06:53:30.629237 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b5e182e-95d6-49b6-8acc-b7d2cb607b29" containerName="glance-log" Jan 29 06:53:30 crc kubenswrapper[5017]: I0129 06:53:30.629254 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b5e182e-95d6-49b6-8acc-b7d2cb607b29" containerName="glance-httpd" Jan 29 06:53:30 crc kubenswrapper[5017]: I0129 06:53:30.629262 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="b51f16b0-5c41-45c9-b558-4487b8b05874" containerName="dnsmasq-dns" Jan 29 06:53:30 crc kubenswrapper[5017]: I0129 06:53:30.630404 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 06:53:30 crc kubenswrapper[5017]: I0129 06:53:30.635680 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 29 06:53:30 crc kubenswrapper[5017]: I0129 06:53:30.635989 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 29 06:53:30 crc kubenswrapper[5017]: I0129 06:53:30.655899 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 06:53:30 crc kubenswrapper[5017]: I0129 06:53:30.758423 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/296518b1-ea38-473c-a5c5-9378dde1f3ae-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"296518b1-ea38-473c-a5c5-9378dde1f3ae\") " pod="openstack/glance-default-external-api-0" Jan 29 06:53:30 crc kubenswrapper[5017]: I0129 06:53:30.758481 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"296518b1-ea38-473c-a5c5-9378dde1f3ae\") " pod="openstack/glance-default-external-api-0" Jan 29 06:53:30 crc kubenswrapper[5017]: I0129 06:53:30.758541 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7pll\" (UniqueName: \"kubernetes.io/projected/296518b1-ea38-473c-a5c5-9378dde1f3ae-kube-api-access-q7pll\") pod \"glance-default-external-api-0\" (UID: \"296518b1-ea38-473c-a5c5-9378dde1f3ae\") " pod="openstack/glance-default-external-api-0" Jan 29 06:53:30 crc kubenswrapper[5017]: I0129 06:53:30.758572 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/296518b1-ea38-473c-a5c5-9378dde1f3ae-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"296518b1-ea38-473c-a5c5-9378dde1f3ae\") " pod="openstack/glance-default-external-api-0" Jan 29 06:53:30 crc kubenswrapper[5017]: I0129 06:53:30.758590 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/296518b1-ea38-473c-a5c5-9378dde1f3ae-logs\") pod \"glance-default-external-api-0\" (UID: \"296518b1-ea38-473c-a5c5-9378dde1f3ae\") " pod="openstack/glance-default-external-api-0" Jan 29 06:53:30 crc kubenswrapper[5017]: I0129 06:53:30.758609 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/296518b1-ea38-473c-a5c5-9378dde1f3ae-config-data\") pod \"glance-default-external-api-0\" (UID: \"296518b1-ea38-473c-a5c5-9378dde1f3ae\") " pod="openstack/glance-default-external-api-0" Jan 29 06:53:30 crc kubenswrapper[5017]: I0129 06:53:30.758641 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/296518b1-ea38-473c-a5c5-9378dde1f3ae-scripts\") pod \"glance-default-external-api-0\" (UID: \"296518b1-ea38-473c-a5c5-9378dde1f3ae\") " pod="openstack/glance-default-external-api-0" Jan 29 06:53:30 crc kubenswrapper[5017]: I0129 06:53:30.758710 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/296518b1-ea38-473c-a5c5-9378dde1f3ae-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"296518b1-ea38-473c-a5c5-9378dde1f3ae\") " pod="openstack/glance-default-external-api-0" Jan 29 06:53:30 crc kubenswrapper[5017]: I0129 06:53:30.860829 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7pll\" (UniqueName: \"kubernetes.io/projected/296518b1-ea38-473c-a5c5-9378dde1f3ae-kube-api-access-q7pll\") pod \"glance-default-external-api-0\" (UID: \"296518b1-ea38-473c-a5c5-9378dde1f3ae\") " pod="openstack/glance-default-external-api-0" Jan 29 06:53:30 crc kubenswrapper[5017]: I0129 06:53:30.860889 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/296518b1-ea38-473c-a5c5-9378dde1f3ae-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"296518b1-ea38-473c-a5c5-9378dde1f3ae\") " pod="openstack/glance-default-external-api-0" Jan 29 06:53:30 crc kubenswrapper[5017]: I0129 06:53:30.860912 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/296518b1-ea38-473c-a5c5-9378dde1f3ae-logs\") pod \"glance-default-external-api-0\" (UID: \"296518b1-ea38-473c-a5c5-9378dde1f3ae\") " pod="openstack/glance-default-external-api-0" Jan 29 06:53:30 crc kubenswrapper[5017]: I0129 06:53:30.860940 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/296518b1-ea38-473c-a5c5-9378dde1f3ae-config-data\") pod \"glance-default-external-api-0\" (UID: \"296518b1-ea38-473c-a5c5-9378dde1f3ae\") " pod="openstack/glance-default-external-api-0" Jan 29 06:53:30 crc kubenswrapper[5017]: I0129 06:53:30.861037 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/296518b1-ea38-473c-a5c5-9378dde1f3ae-scripts\") pod \"glance-default-external-api-0\" (UID: \"296518b1-ea38-473c-a5c5-9378dde1f3ae\") " pod="openstack/glance-default-external-api-0" Jan 29 06:53:30 crc kubenswrapper[5017]: I0129 06:53:30.861089 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/296518b1-ea38-473c-a5c5-9378dde1f3ae-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"296518b1-ea38-473c-a5c5-9378dde1f3ae\") " pod="openstack/glance-default-external-api-0" Jan 29 06:53:30 crc kubenswrapper[5017]: I0129 06:53:30.861154 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/296518b1-ea38-473c-a5c5-9378dde1f3ae-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"296518b1-ea38-473c-a5c5-9378dde1f3ae\") " pod="openstack/glance-default-external-api-0" Jan 29 06:53:30 crc kubenswrapper[5017]: I0129 06:53:30.861179 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"296518b1-ea38-473c-a5c5-9378dde1f3ae\") " pod="openstack/glance-default-external-api-0" Jan 29 06:53:30 crc kubenswrapper[5017]: I0129 06:53:30.861486 5017 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"296518b1-ea38-473c-a5c5-9378dde1f3ae\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-external-api-0" Jan 29 06:53:30 crc kubenswrapper[5017]: I0129 06:53:30.862151 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/296518b1-ea38-473c-a5c5-9378dde1f3ae-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"296518b1-ea38-473c-a5c5-9378dde1f3ae\") " pod="openstack/glance-default-external-api-0" Jan 29 06:53:30 crc kubenswrapper[5017]: I0129 06:53:30.862863 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/296518b1-ea38-473c-a5c5-9378dde1f3ae-logs\") pod \"glance-default-external-api-0\" (UID: \"296518b1-ea38-473c-a5c5-9378dde1f3ae\") " pod="openstack/glance-default-external-api-0" Jan 29 06:53:30 crc kubenswrapper[5017]: I0129 06:53:30.874389 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/296518b1-ea38-473c-a5c5-9378dde1f3ae-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"296518b1-ea38-473c-a5c5-9378dde1f3ae\") " pod="openstack/glance-default-external-api-0" Jan 29 06:53:30 crc kubenswrapper[5017]: I0129 06:53:30.879696 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/296518b1-ea38-473c-a5c5-9378dde1f3ae-scripts\") pod \"glance-default-external-api-0\" (UID: \"296518b1-ea38-473c-a5c5-9378dde1f3ae\") " pod="openstack/glance-default-external-api-0" Jan 29 06:53:30 crc kubenswrapper[5017]: I0129 06:53:30.880460 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/296518b1-ea38-473c-a5c5-9378dde1f3ae-config-data\") pod \"glance-default-external-api-0\" (UID: \"296518b1-ea38-473c-a5c5-9378dde1f3ae\") " pod="openstack/glance-default-external-api-0" Jan 29 06:53:30 crc kubenswrapper[5017]: I0129 06:53:30.895785 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/296518b1-ea38-473c-a5c5-9378dde1f3ae-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"296518b1-ea38-473c-a5c5-9378dde1f3ae\") " pod="openstack/glance-default-external-api-0" Jan 29 06:53:30 crc kubenswrapper[5017]: I0129 06:53:30.897170 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7pll\" (UniqueName: \"kubernetes.io/projected/296518b1-ea38-473c-a5c5-9378dde1f3ae-kube-api-access-q7pll\") pod \"glance-default-external-api-0\" (UID: \"296518b1-ea38-473c-a5c5-9378dde1f3ae\") " pod="openstack/glance-default-external-api-0" Jan 29 06:53:30 crc kubenswrapper[5017]: I0129 06:53:30.911331 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"296518b1-ea38-473c-a5c5-9378dde1f3ae\") " pod="openstack/glance-default-external-api-0" Jan 29 06:53:31 crc kubenswrapper[5017]: I0129 06:53:31.015036 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 06:53:31 crc kubenswrapper[5017]: E0129 06:53:31.108326 5017 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6cc0f3ac_f0bb_434c_8d72_d22fdedf2f2d.slice/crio-conmon-ca433794f725d27e083742abb5dc11becd94156c42fefff51806909b06722939.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6cc0f3ac_f0bb_434c_8d72_d22fdedf2f2d.slice/crio-ca433794f725d27e083742abb5dc11becd94156c42fefff51806909b06722939.scope\": RecentStats: unable to find data in memory cache]" Jan 29 06:53:31 crc kubenswrapper[5017]: I0129 06:53:31.554318 5017 generic.go:334] "Generic (PLEG): container finished" podID="ad2298ef-344e-4c91-8c8e-83d7254a24d7" containerID="fc8280ea81e0a88c9f2c87e119fcd2e6a76a240e823a27c3183ab30ec73aa521" exitCode=0 Jan 29 06:53:31 crc kubenswrapper[5017]: I0129 06:53:31.554682 5017 generic.go:334] "Generic (PLEG): container finished" podID="ad2298ef-344e-4c91-8c8e-83d7254a24d7" containerID="ae263717d3e735843abaf06749a5fb6e17d0f605a5b2b17b3dfcb3956d2a0f68" exitCode=143 Jan 29 06:53:31 crc kubenswrapper[5017]: I0129 06:53:31.554533 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ad2298ef-344e-4c91-8c8e-83d7254a24d7","Type":"ContainerDied","Data":"fc8280ea81e0a88c9f2c87e119fcd2e6a76a240e823a27c3183ab30ec73aa521"} Jan 29 06:53:31 crc kubenswrapper[5017]: I0129 06:53:31.554773 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ad2298ef-344e-4c91-8c8e-83d7254a24d7","Type":"ContainerDied","Data":"ae263717d3e735843abaf06749a5fb6e17d0f605a5b2b17b3dfcb3956d2a0f68"} Jan 29 06:53:31 crc kubenswrapper[5017]: I0129 06:53:31.556840 5017 generic.go:334] "Generic (PLEG): container finished" podID="6cc0f3ac-f0bb-434c-8d72-d22fdedf2f2d" containerID="ca433794f725d27e083742abb5dc11becd94156c42fefff51806909b06722939" exitCode=0 Jan 29 06:53:31 crc kubenswrapper[5017]: I0129 06:53:31.556878 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-rksqd" event={"ID":"6cc0f3ac-f0bb-434c-8d72-d22fdedf2f2d","Type":"ContainerDied","Data":"ca433794f725d27e083742abb5dc11becd94156c42fefff51806909b06722939"} Jan 29 06:53:32 crc kubenswrapper[5017]: I0129 06:53:32.338664 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b5e182e-95d6-49b6-8acc-b7d2cb607b29" path="/var/lib/kubelet/pods/9b5e182e-95d6-49b6-8acc-b7d2cb607b29/volumes" Jan 29 06:53:35 crc kubenswrapper[5017]: I0129 06:53:35.412116 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6f6f8cb849-tgzgj" Jan 29 06:53:35 crc kubenswrapper[5017]: I0129 06:53:35.484943 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67fdf7998c-vkl9j"] Jan 29 06:53:35 crc kubenswrapper[5017]: I0129 06:53:35.485302 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-67fdf7998c-vkl9j" podUID="3f100233-315d-4338-815e-8f12beaeaaae" containerName="dnsmasq-dns" containerID="cri-o://fccc9561a7f3897ac77ff53249df9d30469e8ab1cc6f35cc649eef07cd4b5527" gracePeriod=10 Jan 29 06:53:35 crc kubenswrapper[5017]: I0129 06:53:35.966633 5017 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack/dnsmasq-dns-67fdf7998c-vkl9j" podUID="3f100233-315d-4338-815e-8f12beaeaaae" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.119:5353: connect: connection refused" Jan 29 06:53:36 crc kubenswrapper[5017]: I0129 06:53:36.662671 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67fdf7998c-vkl9j" event={"ID":"3f100233-315d-4338-815e-8f12beaeaaae","Type":"ContainerDied","Data":"fccc9561a7f3897ac77ff53249df9d30469e8ab1cc6f35cc649eef07cd4b5527"} Jan 29 06:53:36 crc kubenswrapper[5017]: I0129 06:53:36.665623 5017 generic.go:334] "Generic (PLEG): container finished" podID="3f100233-315d-4338-815e-8f12beaeaaae" containerID="fccc9561a7f3897ac77ff53249df9d30469e8ab1cc6f35cc649eef07cd4b5527" exitCode=0 Jan 29 06:53:38 crc kubenswrapper[5017]: I0129 06:53:38.096567 5017 scope.go:117] "RemoveContainer" containerID="23071a56c89fca6ad0208917e44ff4c66a8c08c0ddd9819e2cf1883b6c9f515a" Jan 29 06:53:38 crc kubenswrapper[5017]: I0129 06:53:38.588125 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-rksqd" Jan 29 06:53:38 crc kubenswrapper[5017]: I0129 06:53:38.596945 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 06:53:38 crc kubenswrapper[5017]: I0129 06:53:38.625349 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad2298ef-344e-4c91-8c8e-83d7254a24d7-config-data\") pod \"ad2298ef-344e-4c91-8c8e-83d7254a24d7\" (UID: \"ad2298ef-344e-4c91-8c8e-83d7254a24d7\") " Jan 29 06:53:38 crc kubenswrapper[5017]: I0129 06:53:38.625411 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad2298ef-344e-4c91-8c8e-83d7254a24d7-internal-tls-certs\") pod \"ad2298ef-344e-4c91-8c8e-83d7254a24d7\" (UID: \"ad2298ef-344e-4c91-8c8e-83d7254a24d7\") " Jan 29 06:53:38 crc kubenswrapper[5017]: I0129 06:53:38.625487 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6cc0f3ac-f0bb-434c-8d72-d22fdedf2f2d-fernet-keys\") pod \"6cc0f3ac-f0bb-434c-8d72-d22fdedf2f2d\" (UID: \"6cc0f3ac-f0bb-434c-8d72-d22fdedf2f2d\") " Jan 29 06:53:38 crc kubenswrapper[5017]: I0129 06:53:38.625573 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6cc0f3ac-f0bb-434c-8d72-d22fdedf2f2d-credential-keys\") pod \"6cc0f3ac-f0bb-434c-8d72-d22fdedf2f2d\" (UID: \"6cc0f3ac-f0bb-434c-8d72-d22fdedf2f2d\") " Jan 29 06:53:38 crc kubenswrapper[5017]: I0129 06:53:38.625645 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad2298ef-344e-4c91-8c8e-83d7254a24d7-scripts\") pod \"ad2298ef-344e-4c91-8c8e-83d7254a24d7\" (UID: \"ad2298ef-344e-4c91-8c8e-83d7254a24d7\") " Jan 29 06:53:38 crc kubenswrapper[5017]: I0129 06:53:38.625695 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6cc0f3ac-f0bb-434c-8d72-d22fdedf2f2d-config-data\") pod \"6cc0f3ac-f0bb-434c-8d72-d22fdedf2f2d\" (UID: \"6cc0f3ac-f0bb-434c-8d72-d22fdedf2f2d\") " Jan 29 06:53:38 crc kubenswrapper[5017]: I0129 06:53:38.625742 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ad2298ef-344e-4c91-8c8e-83d7254a24d7\" (UID: \"ad2298ef-344e-4c91-8c8e-83d7254a24d7\") " Jan 29 06:53:38 crc kubenswrapper[5017]: I0129 06:53:38.625774 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zqhvv\" (UniqueName: \"kubernetes.io/projected/6cc0f3ac-f0bb-434c-8d72-d22fdedf2f2d-kube-api-access-zqhvv\") pod \"6cc0f3ac-f0bb-434c-8d72-d22fdedf2f2d\" (UID: \"6cc0f3ac-f0bb-434c-8d72-d22fdedf2f2d\") " Jan 29 06:53:38 crc kubenswrapper[5017]: I0129 06:53:38.625870 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cc0f3ac-f0bb-434c-8d72-d22fdedf2f2d-combined-ca-bundle\") pod \"6cc0f3ac-f0bb-434c-8d72-d22fdedf2f2d\" (UID: \"6cc0f3ac-f0bb-434c-8d72-d22fdedf2f2d\") " Jan 29 06:53:38 crc kubenswrapper[5017]: I0129 06:53:38.625905 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ad2298ef-344e-4c91-8c8e-83d7254a24d7-logs\") pod \"ad2298ef-344e-4c91-8c8e-83d7254a24d7\" (UID: \"ad2298ef-344e-4c91-8c8e-83d7254a24d7\") " Jan 29 06:53:38 crc kubenswrapper[5017]: I0129 06:53:38.625944 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6cc0f3ac-f0bb-434c-8d72-d22fdedf2f2d-scripts\") pod \"6cc0f3ac-f0bb-434c-8d72-d22fdedf2f2d\" (UID: \"6cc0f3ac-f0bb-434c-8d72-d22fdedf2f2d\") " Jan 29 06:53:38 crc kubenswrapper[5017]: I0129 06:53:38.625989 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad2298ef-344e-4c91-8c8e-83d7254a24d7-combined-ca-bundle\") pod \"ad2298ef-344e-4c91-8c8e-83d7254a24d7\" (UID: \"ad2298ef-344e-4c91-8c8e-83d7254a24d7\") " Jan 29 06:53:38 crc kubenswrapper[5017]: I0129 06:53:38.626014 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ad2298ef-344e-4c91-8c8e-83d7254a24d7-httpd-run\") pod \"ad2298ef-344e-4c91-8c8e-83d7254a24d7\" (UID: \"ad2298ef-344e-4c91-8c8e-83d7254a24d7\") " Jan 29 06:53:38 crc kubenswrapper[5017]: I0129 06:53:38.626036 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hd277\" (UniqueName: \"kubernetes.io/projected/ad2298ef-344e-4c91-8c8e-83d7254a24d7-kube-api-access-hd277\") pod \"ad2298ef-344e-4c91-8c8e-83d7254a24d7\" (UID: \"ad2298ef-344e-4c91-8c8e-83d7254a24d7\") " Jan 29 06:53:38 crc kubenswrapper[5017]: I0129 06:53:38.647618 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad2298ef-344e-4c91-8c8e-83d7254a24d7-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "ad2298ef-344e-4c91-8c8e-83d7254a24d7" (UID: "ad2298ef-344e-4c91-8c8e-83d7254a24d7"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:53:38 crc kubenswrapper[5017]: I0129 06:53:38.648437 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad2298ef-344e-4c91-8c8e-83d7254a24d7-logs" (OuterVolumeSpecName: "logs") pod "ad2298ef-344e-4c91-8c8e-83d7254a24d7" (UID: "ad2298ef-344e-4c91-8c8e-83d7254a24d7"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:53:38 crc kubenswrapper[5017]: I0129 06:53:38.648532 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "ad2298ef-344e-4c91-8c8e-83d7254a24d7" (UID: "ad2298ef-344e-4c91-8c8e-83d7254a24d7"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 29 06:53:38 crc kubenswrapper[5017]: I0129 06:53:38.648647 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad2298ef-344e-4c91-8c8e-83d7254a24d7-kube-api-access-hd277" (OuterVolumeSpecName: "kube-api-access-hd277") pod "ad2298ef-344e-4c91-8c8e-83d7254a24d7" (UID: "ad2298ef-344e-4c91-8c8e-83d7254a24d7"). InnerVolumeSpecName "kube-api-access-hd277". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:53:38 crc kubenswrapper[5017]: I0129 06:53:38.655337 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6cc0f3ac-f0bb-434c-8d72-d22fdedf2f2d-kube-api-access-zqhvv" (OuterVolumeSpecName: "kube-api-access-zqhvv") pod "6cc0f3ac-f0bb-434c-8d72-d22fdedf2f2d" (UID: "6cc0f3ac-f0bb-434c-8d72-d22fdedf2f2d"). InnerVolumeSpecName "kube-api-access-zqhvv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:53:38 crc kubenswrapper[5017]: I0129 06:53:38.660035 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad2298ef-344e-4c91-8c8e-83d7254a24d7-scripts" (OuterVolumeSpecName: "scripts") pod "ad2298ef-344e-4c91-8c8e-83d7254a24d7" (UID: "ad2298ef-344e-4c91-8c8e-83d7254a24d7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:53:38 crc kubenswrapper[5017]: I0129 06:53:38.660553 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cc0f3ac-f0bb-434c-8d72-d22fdedf2f2d-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "6cc0f3ac-f0bb-434c-8d72-d22fdedf2f2d" (UID: "6cc0f3ac-f0bb-434c-8d72-d22fdedf2f2d"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:53:38 crc kubenswrapper[5017]: I0129 06:53:38.662146 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cc0f3ac-f0bb-434c-8d72-d22fdedf2f2d-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "6cc0f3ac-f0bb-434c-8d72-d22fdedf2f2d" (UID: "6cc0f3ac-f0bb-434c-8d72-d22fdedf2f2d"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:53:38 crc kubenswrapper[5017]: I0129 06:53:38.676554 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cc0f3ac-f0bb-434c-8d72-d22fdedf2f2d-scripts" (OuterVolumeSpecName: "scripts") pod "6cc0f3ac-f0bb-434c-8d72-d22fdedf2f2d" (UID: "6cc0f3ac-f0bb-434c-8d72-d22fdedf2f2d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:53:38 crc kubenswrapper[5017]: I0129 06:53:38.707701 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad2298ef-344e-4c91-8c8e-83d7254a24d7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ad2298ef-344e-4c91-8c8e-83d7254a24d7" (UID: "ad2298ef-344e-4c91-8c8e-83d7254a24d7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:53:38 crc kubenswrapper[5017]: I0129 06:53:38.720606 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cc0f3ac-f0bb-434c-8d72-d22fdedf2f2d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6cc0f3ac-f0bb-434c-8d72-d22fdedf2f2d" (UID: "6cc0f3ac-f0bb-434c-8d72-d22fdedf2f2d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:53:38 crc kubenswrapper[5017]: I0129 06:53:38.721495 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ad2298ef-344e-4c91-8c8e-83d7254a24d7","Type":"ContainerDied","Data":"277f670ddf29d756bc4c5d6109462dcddf7dbf596d62e68f605e375f8e2f8d2f"} Jan 29 06:53:38 crc kubenswrapper[5017]: I0129 06:53:38.721603 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 06:53:38 crc kubenswrapper[5017]: I0129 06:53:38.726531 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cc0f3ac-f0bb-434c-8d72-d22fdedf2f2d-config-data" (OuterVolumeSpecName: "config-data") pod "6cc0f3ac-f0bb-434c-8d72-d22fdedf2f2d" (UID: "6cc0f3ac-f0bb-434c-8d72-d22fdedf2f2d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:53:38 crc kubenswrapper[5017]: I0129 06:53:38.728620 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-rksqd" event={"ID":"6cc0f3ac-f0bb-434c-8d72-d22fdedf2f2d","Type":"ContainerDied","Data":"13d3672c38a12267d4ac8a863dd4c6bd47e9db2161a656d291977509363a4f47"} Jan 29 06:53:38 crc kubenswrapper[5017]: I0129 06:53:38.728657 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="13d3672c38a12267d4ac8a863dd4c6bd47e9db2161a656d291977509363a4f47" Jan 29 06:53:38 crc kubenswrapper[5017]: I0129 06:53:38.728762 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-rksqd" Jan 29 06:53:38 crc kubenswrapper[5017]: I0129 06:53:38.736474 5017 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad2298ef-344e-4c91-8c8e-83d7254a24d7-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 06:53:38 crc kubenswrapper[5017]: I0129 06:53:38.736517 5017 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6cc0f3ac-f0bb-434c-8d72-d22fdedf2f2d-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 06:53:38 crc kubenswrapper[5017]: I0129 06:53:38.736548 5017 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Jan 29 06:53:38 crc kubenswrapper[5017]: I0129 06:53:38.736558 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zqhvv\" (UniqueName: \"kubernetes.io/projected/6cc0f3ac-f0bb-434c-8d72-d22fdedf2f2d-kube-api-access-zqhvv\") on node \"crc\" DevicePath \"\"" Jan 29 06:53:38 crc kubenswrapper[5017]: I0129 06:53:38.736570 5017 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cc0f3ac-f0bb-434c-8d72-d22fdedf2f2d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 06:53:38 crc kubenswrapper[5017]: I0129 06:53:38.736583 5017 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ad2298ef-344e-4c91-8c8e-83d7254a24d7-logs\") on node \"crc\" DevicePath \"\"" Jan 29 06:53:38 crc kubenswrapper[5017]: I0129 06:53:38.736591 5017 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6cc0f3ac-f0bb-434c-8d72-d22fdedf2f2d-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 06:53:38 crc kubenswrapper[5017]: I0129 06:53:38.736600 5017 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad2298ef-344e-4c91-8c8e-83d7254a24d7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 06:53:38 crc kubenswrapper[5017]: I0129 06:53:38.736609 5017 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ad2298ef-344e-4c91-8c8e-83d7254a24d7-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 29 06:53:38 crc kubenswrapper[5017]: I0129 06:53:38.736620 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hd277\" (UniqueName: \"kubernetes.io/projected/ad2298ef-344e-4c91-8c8e-83d7254a24d7-kube-api-access-hd277\") on node \"crc\" DevicePath \"\"" Jan 29 06:53:38 crc kubenswrapper[5017]: I0129 06:53:38.736631 5017 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6cc0f3ac-f0bb-434c-8d72-d22fdedf2f2d-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 29 06:53:38 crc kubenswrapper[5017]: I0129 06:53:38.736640 5017 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6cc0f3ac-f0bb-434c-8d72-d22fdedf2f2d-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 29 06:53:38 crc kubenswrapper[5017]: I0129 06:53:38.744236 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad2298ef-344e-4c91-8c8e-83d7254a24d7-config-data" (OuterVolumeSpecName: "config-data") pod "ad2298ef-344e-4c91-8c8e-83d7254a24d7" (UID: 
"ad2298ef-344e-4c91-8c8e-83d7254a24d7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:53:38 crc kubenswrapper[5017]: I0129 06:53:38.749351 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad2298ef-344e-4c91-8c8e-83d7254a24d7-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "ad2298ef-344e-4c91-8c8e-83d7254a24d7" (UID: "ad2298ef-344e-4c91-8c8e-83d7254a24d7"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:53:38 crc kubenswrapper[5017]: I0129 06:53:38.761848 5017 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Jan 29 06:53:38 crc kubenswrapper[5017]: I0129 06:53:38.838898 5017 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Jan 29 06:53:38 crc kubenswrapper[5017]: I0129 06:53:38.838950 5017 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad2298ef-344e-4c91-8c8e-83d7254a24d7-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 06:53:38 crc kubenswrapper[5017]: I0129 06:53:38.839059 5017 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad2298ef-344e-4c91-8c8e-83d7254a24d7-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 06:53:39 crc kubenswrapper[5017]: I0129 06:53:39.087343 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 06:53:39 crc kubenswrapper[5017]: I0129 06:53:39.100480 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 06:53:39 crc kubenswrapper[5017]: I0129 06:53:39.109019 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 06:53:39 crc kubenswrapper[5017]: E0129 06:53:39.109535 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cc0f3ac-f0bb-434c-8d72-d22fdedf2f2d" containerName="keystone-bootstrap" Jan 29 06:53:39 crc kubenswrapper[5017]: I0129 06:53:39.109551 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cc0f3ac-f0bb-434c-8d72-d22fdedf2f2d" containerName="keystone-bootstrap" Jan 29 06:53:39 crc kubenswrapper[5017]: E0129 06:53:39.109571 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad2298ef-344e-4c91-8c8e-83d7254a24d7" containerName="glance-httpd" Jan 29 06:53:39 crc kubenswrapper[5017]: I0129 06:53:39.109578 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad2298ef-344e-4c91-8c8e-83d7254a24d7" containerName="glance-httpd" Jan 29 06:53:39 crc kubenswrapper[5017]: E0129 06:53:39.109603 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad2298ef-344e-4c91-8c8e-83d7254a24d7" containerName="glance-log" Jan 29 06:53:39 crc kubenswrapper[5017]: I0129 06:53:39.109610 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad2298ef-344e-4c91-8c8e-83d7254a24d7" containerName="glance-log" Jan 29 06:53:39 crc kubenswrapper[5017]: I0129 06:53:39.109818 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad2298ef-344e-4c91-8c8e-83d7254a24d7" containerName="glance-log" Jan 29 06:53:39 crc kubenswrapper[5017]: I0129 06:53:39.109836 5017 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="6cc0f3ac-f0bb-434c-8d72-d22fdedf2f2d" containerName="keystone-bootstrap" Jan 29 06:53:39 crc kubenswrapper[5017]: I0129 06:53:39.109850 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad2298ef-344e-4c91-8c8e-83d7254a24d7" containerName="glance-httpd" Jan 29 06:53:39 crc kubenswrapper[5017]: I0129 06:53:39.110921 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 06:53:39 crc kubenswrapper[5017]: I0129 06:53:39.114680 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 29 06:53:39 crc kubenswrapper[5017]: I0129 06:53:39.114868 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 29 06:53:39 crc kubenswrapper[5017]: I0129 06:53:39.144463 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c3408ec-55f8-4e9b-8e76-92e0d6e6dabc-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"1c3408ec-55f8-4e9b-8e76-92e0d6e6dabc\") " pod="openstack/glance-default-internal-api-0" Jan 29 06:53:39 crc kubenswrapper[5017]: I0129 06:53:39.144526 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"1c3408ec-55f8-4e9b-8e76-92e0d6e6dabc\") " pod="openstack/glance-default-internal-api-0" Jan 29 06:53:39 crc kubenswrapper[5017]: I0129 06:53:39.144552 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c3408ec-55f8-4e9b-8e76-92e0d6e6dabc-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"1c3408ec-55f8-4e9b-8e76-92e0d6e6dabc\") " pod="openstack/glance-default-internal-api-0" Jan 29 06:53:39 crc kubenswrapper[5017]: I0129 06:53:39.144578 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1c3408ec-55f8-4e9b-8e76-92e0d6e6dabc-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1c3408ec-55f8-4e9b-8e76-92e0d6e6dabc\") " pod="openstack/glance-default-internal-api-0" Jan 29 06:53:39 crc kubenswrapper[5017]: I0129 06:53:39.144617 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1c3408ec-55f8-4e9b-8e76-92e0d6e6dabc-logs\") pod \"glance-default-internal-api-0\" (UID: \"1c3408ec-55f8-4e9b-8e76-92e0d6e6dabc\") " pod="openstack/glance-default-internal-api-0" Jan 29 06:53:39 crc kubenswrapper[5017]: I0129 06:53:39.144692 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c3408ec-55f8-4e9b-8e76-92e0d6e6dabc-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1c3408ec-55f8-4e9b-8e76-92e0d6e6dabc\") " pod="openstack/glance-default-internal-api-0" Jan 29 06:53:39 crc kubenswrapper[5017]: I0129 06:53:39.144733 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmk85\" (UniqueName: \"kubernetes.io/projected/1c3408ec-55f8-4e9b-8e76-92e0d6e6dabc-kube-api-access-nmk85\") pod 
\"glance-default-internal-api-0\" (UID: \"1c3408ec-55f8-4e9b-8e76-92e0d6e6dabc\") " pod="openstack/glance-default-internal-api-0" Jan 29 06:53:39 crc kubenswrapper[5017]: I0129 06:53:39.144777 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c3408ec-55f8-4e9b-8e76-92e0d6e6dabc-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1c3408ec-55f8-4e9b-8e76-92e0d6e6dabc\") " pod="openstack/glance-default-internal-api-0" Jan 29 06:53:39 crc kubenswrapper[5017]: I0129 06:53:39.150378 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 06:53:39 crc kubenswrapper[5017]: I0129 06:53:39.246769 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c3408ec-55f8-4e9b-8e76-92e0d6e6dabc-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1c3408ec-55f8-4e9b-8e76-92e0d6e6dabc\") " pod="openstack/glance-default-internal-api-0" Jan 29 06:53:39 crc kubenswrapper[5017]: I0129 06:53:39.246881 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c3408ec-55f8-4e9b-8e76-92e0d6e6dabc-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"1c3408ec-55f8-4e9b-8e76-92e0d6e6dabc\") " pod="openstack/glance-default-internal-api-0" Jan 29 06:53:39 crc kubenswrapper[5017]: I0129 06:53:39.246916 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"1c3408ec-55f8-4e9b-8e76-92e0d6e6dabc\") " pod="openstack/glance-default-internal-api-0" Jan 29 06:53:39 crc kubenswrapper[5017]: I0129 06:53:39.246934 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c3408ec-55f8-4e9b-8e76-92e0d6e6dabc-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"1c3408ec-55f8-4e9b-8e76-92e0d6e6dabc\") " pod="openstack/glance-default-internal-api-0" Jan 29 06:53:39 crc kubenswrapper[5017]: I0129 06:53:39.246976 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1c3408ec-55f8-4e9b-8e76-92e0d6e6dabc-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1c3408ec-55f8-4e9b-8e76-92e0d6e6dabc\") " pod="openstack/glance-default-internal-api-0" Jan 29 06:53:39 crc kubenswrapper[5017]: I0129 06:53:39.247009 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1c3408ec-55f8-4e9b-8e76-92e0d6e6dabc-logs\") pod \"glance-default-internal-api-0\" (UID: \"1c3408ec-55f8-4e9b-8e76-92e0d6e6dabc\") " pod="openstack/glance-default-internal-api-0" Jan 29 06:53:39 crc kubenswrapper[5017]: I0129 06:53:39.247060 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c3408ec-55f8-4e9b-8e76-92e0d6e6dabc-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1c3408ec-55f8-4e9b-8e76-92e0d6e6dabc\") " pod="openstack/glance-default-internal-api-0" Jan 29 06:53:39 crc kubenswrapper[5017]: I0129 06:53:39.247089 5017 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"1c3408ec-55f8-4e9b-8e76-92e0d6e6dabc\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-internal-api-0" Jan 29 06:53:39 crc kubenswrapper[5017]: I0129 06:53:39.247099 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmk85\" (UniqueName: \"kubernetes.io/projected/1c3408ec-55f8-4e9b-8e76-92e0d6e6dabc-kube-api-access-nmk85\") pod \"glance-default-internal-api-0\" (UID: \"1c3408ec-55f8-4e9b-8e76-92e0d6e6dabc\") " pod="openstack/glance-default-internal-api-0" Jan 29 06:53:39 crc kubenswrapper[5017]: I0129 06:53:39.247573 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1c3408ec-55f8-4e9b-8e76-92e0d6e6dabc-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1c3408ec-55f8-4e9b-8e76-92e0d6e6dabc\") " pod="openstack/glance-default-internal-api-0" Jan 29 06:53:39 crc kubenswrapper[5017]: I0129 06:53:39.247589 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1c3408ec-55f8-4e9b-8e76-92e0d6e6dabc-logs\") pod \"glance-default-internal-api-0\" (UID: \"1c3408ec-55f8-4e9b-8e76-92e0d6e6dabc\") " pod="openstack/glance-default-internal-api-0" Jan 29 06:53:39 crc kubenswrapper[5017]: I0129 06:53:39.253305 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c3408ec-55f8-4e9b-8e76-92e0d6e6dabc-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"1c3408ec-55f8-4e9b-8e76-92e0d6e6dabc\") " pod="openstack/glance-default-internal-api-0" Jan 29 06:53:39 crc kubenswrapper[5017]: I0129 06:53:39.254135 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c3408ec-55f8-4e9b-8e76-92e0d6e6dabc-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1c3408ec-55f8-4e9b-8e76-92e0d6e6dabc\") " pod="openstack/glance-default-internal-api-0" Jan 29 06:53:39 crc kubenswrapper[5017]: I0129 06:53:39.255570 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c3408ec-55f8-4e9b-8e76-92e0d6e6dabc-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1c3408ec-55f8-4e9b-8e76-92e0d6e6dabc\") " pod="openstack/glance-default-internal-api-0" Jan 29 06:53:39 crc kubenswrapper[5017]: I0129 06:53:39.264346 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c3408ec-55f8-4e9b-8e76-92e0d6e6dabc-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"1c3408ec-55f8-4e9b-8e76-92e0d6e6dabc\") " pod="openstack/glance-default-internal-api-0" Jan 29 06:53:39 crc kubenswrapper[5017]: I0129 06:53:39.266131 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmk85\" (UniqueName: \"kubernetes.io/projected/1c3408ec-55f8-4e9b-8e76-92e0d6e6dabc-kube-api-access-nmk85\") pod \"glance-default-internal-api-0\" (UID: \"1c3408ec-55f8-4e9b-8e76-92e0d6e6dabc\") " pod="openstack/glance-default-internal-api-0" Jan 29 06:53:39 crc kubenswrapper[5017]: I0129 06:53:39.272715 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: 
\"1c3408ec-55f8-4e9b-8e76-92e0d6e6dabc\") " pod="openstack/glance-default-internal-api-0" Jan 29 06:53:39 crc kubenswrapper[5017]: I0129 06:53:39.448123 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 06:53:39 crc kubenswrapper[5017]: I0129 06:53:39.693035 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-rksqd"] Jan 29 06:53:39 crc kubenswrapper[5017]: I0129 06:53:39.701870 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-rksqd"] Jan 29 06:53:39 crc kubenswrapper[5017]: I0129 06:53:39.775844 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-t4552"] Jan 29 06:53:39 crc kubenswrapper[5017]: I0129 06:53:39.777703 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-t4552" Jan 29 06:53:39 crc kubenswrapper[5017]: I0129 06:53:39.781354 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 29 06:53:39 crc kubenswrapper[5017]: I0129 06:53:39.781374 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 29 06:53:39 crc kubenswrapper[5017]: I0129 06:53:39.781421 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 29 06:53:39 crc kubenswrapper[5017]: I0129 06:53:39.781681 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 29 06:53:39 crc kubenswrapper[5017]: I0129 06:53:39.781853 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-rv2nk" Jan 29 06:53:39 crc kubenswrapper[5017]: I0129 06:53:39.787120 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-t4552"] Jan 29 06:53:39 crc kubenswrapper[5017]: I0129 06:53:39.869328 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91ff38c8-c1d9-48bd-a593-829ed49d4c2d-config-data\") pod \"keystone-bootstrap-t4552\" (UID: \"91ff38c8-c1d9-48bd-a593-829ed49d4c2d\") " pod="openstack/keystone-bootstrap-t4552" Jan 29 06:53:39 crc kubenswrapper[5017]: I0129 06:53:39.869390 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91ff38c8-c1d9-48bd-a593-829ed49d4c2d-combined-ca-bundle\") pod \"keystone-bootstrap-t4552\" (UID: \"91ff38c8-c1d9-48bd-a593-829ed49d4c2d\") " pod="openstack/keystone-bootstrap-t4552" Jan 29 06:53:39 crc kubenswrapper[5017]: I0129 06:53:39.869431 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/91ff38c8-c1d9-48bd-a593-829ed49d4c2d-scripts\") pod \"keystone-bootstrap-t4552\" (UID: \"91ff38c8-c1d9-48bd-a593-829ed49d4c2d\") " pod="openstack/keystone-bootstrap-t4552" Jan 29 06:53:39 crc kubenswrapper[5017]: I0129 06:53:39.869463 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/91ff38c8-c1d9-48bd-a593-829ed49d4c2d-fernet-keys\") pod \"keystone-bootstrap-t4552\" (UID: \"91ff38c8-c1d9-48bd-a593-829ed49d4c2d\") " pod="openstack/keystone-bootstrap-t4552" Jan 29 06:53:39 crc kubenswrapper[5017]: I0129 06:53:39.869511 5017 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/91ff38c8-c1d9-48bd-a593-829ed49d4c2d-credential-keys\") pod \"keystone-bootstrap-t4552\" (UID: \"91ff38c8-c1d9-48bd-a593-829ed49d4c2d\") " pod="openstack/keystone-bootstrap-t4552" Jan 29 06:53:39 crc kubenswrapper[5017]: I0129 06:53:39.869558 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5db9x\" (UniqueName: \"kubernetes.io/projected/91ff38c8-c1d9-48bd-a593-829ed49d4c2d-kube-api-access-5db9x\") pod \"keystone-bootstrap-t4552\" (UID: \"91ff38c8-c1d9-48bd-a593-829ed49d4c2d\") " pod="openstack/keystone-bootstrap-t4552" Jan 29 06:53:39 crc kubenswrapper[5017]: I0129 06:53:39.970557 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/91ff38c8-c1d9-48bd-a593-829ed49d4c2d-credential-keys\") pod \"keystone-bootstrap-t4552\" (UID: \"91ff38c8-c1d9-48bd-a593-829ed49d4c2d\") " pod="openstack/keystone-bootstrap-t4552" Jan 29 06:53:39 crc kubenswrapper[5017]: I0129 06:53:39.970675 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5db9x\" (UniqueName: \"kubernetes.io/projected/91ff38c8-c1d9-48bd-a593-829ed49d4c2d-kube-api-access-5db9x\") pod \"keystone-bootstrap-t4552\" (UID: \"91ff38c8-c1d9-48bd-a593-829ed49d4c2d\") " pod="openstack/keystone-bootstrap-t4552" Jan 29 06:53:39 crc kubenswrapper[5017]: I0129 06:53:39.971648 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91ff38c8-c1d9-48bd-a593-829ed49d4c2d-config-data\") pod \"keystone-bootstrap-t4552\" (UID: \"91ff38c8-c1d9-48bd-a593-829ed49d4c2d\") " pod="openstack/keystone-bootstrap-t4552" Jan 29 06:53:39 crc kubenswrapper[5017]: I0129 06:53:39.972149 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91ff38c8-c1d9-48bd-a593-829ed49d4c2d-combined-ca-bundle\") pod \"keystone-bootstrap-t4552\" (UID: \"91ff38c8-c1d9-48bd-a593-829ed49d4c2d\") " pod="openstack/keystone-bootstrap-t4552" Jan 29 06:53:39 crc kubenswrapper[5017]: I0129 06:53:39.972206 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/91ff38c8-c1d9-48bd-a593-829ed49d4c2d-scripts\") pod \"keystone-bootstrap-t4552\" (UID: \"91ff38c8-c1d9-48bd-a593-829ed49d4c2d\") " pod="openstack/keystone-bootstrap-t4552" Jan 29 06:53:39 crc kubenswrapper[5017]: I0129 06:53:39.972244 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/91ff38c8-c1d9-48bd-a593-829ed49d4c2d-fernet-keys\") pod \"keystone-bootstrap-t4552\" (UID: \"91ff38c8-c1d9-48bd-a593-829ed49d4c2d\") " pod="openstack/keystone-bootstrap-t4552" Jan 29 06:53:39 crc kubenswrapper[5017]: I0129 06:53:39.976476 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91ff38c8-c1d9-48bd-a593-829ed49d4c2d-combined-ca-bundle\") pod \"keystone-bootstrap-t4552\" (UID: \"91ff38c8-c1d9-48bd-a593-829ed49d4c2d\") " pod="openstack/keystone-bootstrap-t4552" Jan 29 06:53:39 crc kubenswrapper[5017]: I0129 06:53:39.977283 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/91ff38c8-c1d9-48bd-a593-829ed49d4c2d-scripts\") pod \"keystone-bootstrap-t4552\" (UID: \"91ff38c8-c1d9-48bd-a593-829ed49d4c2d\") " pod="openstack/keystone-bootstrap-t4552" Jan 29 06:53:39 crc kubenswrapper[5017]: I0129 06:53:39.980438 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/91ff38c8-c1d9-48bd-a593-829ed49d4c2d-credential-keys\") pod \"keystone-bootstrap-t4552\" (UID: \"91ff38c8-c1d9-48bd-a593-829ed49d4c2d\") " pod="openstack/keystone-bootstrap-t4552" Jan 29 06:53:39 crc kubenswrapper[5017]: I0129 06:53:39.982649 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/91ff38c8-c1d9-48bd-a593-829ed49d4c2d-fernet-keys\") pod \"keystone-bootstrap-t4552\" (UID: \"91ff38c8-c1d9-48bd-a593-829ed49d4c2d\") " pod="openstack/keystone-bootstrap-t4552" Jan 29 06:53:39 crc kubenswrapper[5017]: I0129 06:53:39.986667 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91ff38c8-c1d9-48bd-a593-829ed49d4c2d-config-data\") pod \"keystone-bootstrap-t4552\" (UID: \"91ff38c8-c1d9-48bd-a593-829ed49d4c2d\") " pod="openstack/keystone-bootstrap-t4552" Jan 29 06:53:39 crc kubenswrapper[5017]: I0129 06:53:39.989275 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5db9x\" (UniqueName: \"kubernetes.io/projected/91ff38c8-c1d9-48bd-a593-829ed49d4c2d-kube-api-access-5db9x\") pod \"keystone-bootstrap-t4552\" (UID: \"91ff38c8-c1d9-48bd-a593-829ed49d4c2d\") " pod="openstack/keystone-bootstrap-t4552" Jan 29 06:53:40 crc kubenswrapper[5017]: I0129 06:53:40.110660 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-t4552" Jan 29 06:53:40 crc kubenswrapper[5017]: I0129 06:53:40.331523 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6cc0f3ac-f0bb-434c-8d72-d22fdedf2f2d" path="/var/lib/kubelet/pods/6cc0f3ac-f0bb-434c-8d72-d22fdedf2f2d/volumes" Jan 29 06:53:40 crc kubenswrapper[5017]: I0129 06:53:40.332660 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad2298ef-344e-4c91-8c8e-83d7254a24d7" path="/var/lib/kubelet/pods/ad2298ef-344e-4c91-8c8e-83d7254a24d7/volumes" Jan 29 06:53:45 crc kubenswrapper[5017]: I0129 06:53:45.968583 5017 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-67fdf7998c-vkl9j" podUID="3f100233-315d-4338-815e-8f12beaeaaae" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.119:5353: i/o timeout" Jan 29 06:53:47 crc kubenswrapper[5017]: I0129 06:53:47.427041 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67fdf7998c-vkl9j" Jan 29 06:53:47 crc kubenswrapper[5017]: I0129 06:53:47.539324 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3f100233-315d-4338-815e-8f12beaeaaae-ovsdbserver-sb\") pod \"3f100233-315d-4338-815e-8f12beaeaaae\" (UID: \"3f100233-315d-4338-815e-8f12beaeaaae\") " Jan 29 06:53:47 crc kubenswrapper[5017]: I0129 06:53:47.540450 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3f100233-315d-4338-815e-8f12beaeaaae-dns-svc\") pod \"3f100233-315d-4338-815e-8f12beaeaaae\" (UID: \"3f100233-315d-4338-815e-8f12beaeaaae\") " Jan 29 06:53:47 crc kubenswrapper[5017]: I0129 06:53:47.540733 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vx7mk\" (UniqueName: \"kubernetes.io/projected/3f100233-315d-4338-815e-8f12beaeaaae-kube-api-access-vx7mk\") pod \"3f100233-315d-4338-815e-8f12beaeaaae\" (UID: \"3f100233-315d-4338-815e-8f12beaeaaae\") " Jan 29 06:53:47 crc kubenswrapper[5017]: I0129 06:53:47.540823 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3f100233-315d-4338-815e-8f12beaeaaae-ovsdbserver-nb\") pod \"3f100233-315d-4338-815e-8f12beaeaaae\" (UID: \"3f100233-315d-4338-815e-8f12beaeaaae\") " Jan 29 06:53:47 crc kubenswrapper[5017]: I0129 06:53:47.540860 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f100233-315d-4338-815e-8f12beaeaaae-config\") pod \"3f100233-315d-4338-815e-8f12beaeaaae\" (UID: \"3f100233-315d-4338-815e-8f12beaeaaae\") " Jan 29 06:53:47 crc kubenswrapper[5017]: I0129 06:53:47.547691 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f100233-315d-4338-815e-8f12beaeaaae-kube-api-access-vx7mk" (OuterVolumeSpecName: "kube-api-access-vx7mk") pod "3f100233-315d-4338-815e-8f12beaeaaae" (UID: "3f100233-315d-4338-815e-8f12beaeaaae"). InnerVolumeSpecName "kube-api-access-vx7mk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:53:47 crc kubenswrapper[5017]: I0129 06:53:47.588874 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f100233-315d-4338-815e-8f12beaeaaae-config" (OuterVolumeSpecName: "config") pod "3f100233-315d-4338-815e-8f12beaeaaae" (UID: "3f100233-315d-4338-815e-8f12beaeaaae"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:53:47 crc kubenswrapper[5017]: I0129 06:53:47.592606 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f100233-315d-4338-815e-8f12beaeaaae-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3f100233-315d-4338-815e-8f12beaeaaae" (UID: "3f100233-315d-4338-815e-8f12beaeaaae"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:53:47 crc kubenswrapper[5017]: I0129 06:53:47.597369 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f100233-315d-4338-815e-8f12beaeaaae-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3f100233-315d-4338-815e-8f12beaeaaae" (UID: "3f100233-315d-4338-815e-8f12beaeaaae"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:53:47 crc kubenswrapper[5017]: I0129 06:53:47.597489 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f100233-315d-4338-815e-8f12beaeaaae-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3f100233-315d-4338-815e-8f12beaeaaae" (UID: "3f100233-315d-4338-815e-8f12beaeaaae"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:53:47 crc kubenswrapper[5017]: I0129 06:53:47.643484 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vx7mk\" (UniqueName: \"kubernetes.io/projected/3f100233-315d-4338-815e-8f12beaeaaae-kube-api-access-vx7mk\") on node \"crc\" DevicePath \"\"" Jan 29 06:53:47 crc kubenswrapper[5017]: I0129 06:53:47.643530 5017 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3f100233-315d-4338-815e-8f12beaeaaae-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 06:53:47 crc kubenswrapper[5017]: I0129 06:53:47.643541 5017 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f100233-315d-4338-815e-8f12beaeaaae-config\") on node \"crc\" DevicePath \"\"" Jan 29 06:53:47 crc kubenswrapper[5017]: I0129 06:53:47.643552 5017 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3f100233-315d-4338-815e-8f12beaeaaae-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 06:53:47 crc kubenswrapper[5017]: I0129 06:53:47.643561 5017 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3f100233-315d-4338-815e-8f12beaeaaae-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 06:53:47 crc kubenswrapper[5017]: I0129 06:53:47.832748 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67fdf7998c-vkl9j" event={"ID":"3f100233-315d-4338-815e-8f12beaeaaae","Type":"ContainerDied","Data":"32e52e6b80bfde4a230f944fe4a3edecee8e079f5de667b479a866bc6a2f6ec9"} Jan 29 06:53:47 crc kubenswrapper[5017]: I0129 06:53:47.832893 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67fdf7998c-vkl9j" Jan 29 06:53:47 crc kubenswrapper[5017]: I0129 06:53:47.874103 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67fdf7998c-vkl9j"] Jan 29 06:53:47 crc kubenswrapper[5017]: I0129 06:53:47.882366 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-67fdf7998c-vkl9j"] Jan 29 06:53:48 crc kubenswrapper[5017]: I0129 06:53:48.336612 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f100233-315d-4338-815e-8f12beaeaaae" path="/var/lib/kubelet/pods/3f100233-315d-4338-815e-8f12beaeaaae/volumes" Jan 29 06:53:48 crc kubenswrapper[5017]: I0129 06:53:48.820075 5017 scope.go:117] "RemoveContainer" containerID="73b41c7ec0e8ce8af263593d1d1d71a4003a290f0896849f9c2df13b89459a06" Jan 29 06:53:48 crc kubenswrapper[5017]: E0129 06:53:48.821454 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73b41c7ec0e8ce8af263593d1d1d71a4003a290f0896849f9c2df13b89459a06\": container with ID starting with 73b41c7ec0e8ce8af263593d1d1d71a4003a290f0896849f9c2df13b89459a06 not found: ID does not exist" containerID="73b41c7ec0e8ce8af263593d1d1d71a4003a290f0896849f9c2df13b89459a06" Jan 29 06:53:48 crc kubenswrapper[5017]: I0129 06:53:48.821510 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73b41c7ec0e8ce8af263593d1d1d71a4003a290f0896849f9c2df13b89459a06"} err="failed to get container status \"73b41c7ec0e8ce8af263593d1d1d71a4003a290f0896849f9c2df13b89459a06\": rpc error: code = NotFound desc = could not find container \"73b41c7ec0e8ce8af263593d1d1d71a4003a290f0896849f9c2df13b89459a06\": container with ID starting with 73b41c7ec0e8ce8af263593d1d1d71a4003a290f0896849f9c2df13b89459a06 not found: ID does not exist" Jan 29 06:53:48 crc kubenswrapper[5017]: I0129 06:53:48.821550 5017 scope.go:117] "RemoveContainer" containerID="23071a56c89fca6ad0208917e44ff4c66a8c08c0ddd9819e2cf1883b6c9f515a" Jan 29 06:53:48 crc kubenswrapper[5017]: E0129 06:53:48.822224 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23071a56c89fca6ad0208917e44ff4c66a8c08c0ddd9819e2cf1883b6c9f515a\": container with ID starting with 23071a56c89fca6ad0208917e44ff4c66a8c08c0ddd9819e2cf1883b6c9f515a not found: ID does not exist" containerID="23071a56c89fca6ad0208917e44ff4c66a8c08c0ddd9819e2cf1883b6c9f515a" Jan 29 06:53:48 crc kubenswrapper[5017]: I0129 06:53:48.822276 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23071a56c89fca6ad0208917e44ff4c66a8c08c0ddd9819e2cf1883b6c9f515a"} err="failed to get container status \"23071a56c89fca6ad0208917e44ff4c66a8c08c0ddd9819e2cf1883b6c9f515a\": rpc error: code = NotFound desc = could not find container \"23071a56c89fca6ad0208917e44ff4c66a8c08c0ddd9819e2cf1883b6c9f515a\": container with ID starting with 23071a56c89fca6ad0208917e44ff4c66a8c08c0ddd9819e2cf1883b6c9f515a not found: ID does not exist" Jan 29 06:53:48 crc kubenswrapper[5017]: I0129 06:53:48.822300 5017 scope.go:117] "RemoveContainer" containerID="73b41c7ec0e8ce8af263593d1d1d71a4003a290f0896849f9c2df13b89459a06" Jan 29 06:53:48 crc kubenswrapper[5017]: I0129 06:53:48.822900 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73b41c7ec0e8ce8af263593d1d1d71a4003a290f0896849f9c2df13b89459a06"} err="failed to get 
container status \"73b41c7ec0e8ce8af263593d1d1d71a4003a290f0896849f9c2df13b89459a06\": rpc error: code = NotFound desc = could not find container \"73b41c7ec0e8ce8af263593d1d1d71a4003a290f0896849f9c2df13b89459a06\": container with ID starting with 73b41c7ec0e8ce8af263593d1d1d71a4003a290f0896849f9c2df13b89459a06 not found: ID does not exist" Jan 29 06:53:48 crc kubenswrapper[5017]: I0129 06:53:48.822936 5017 scope.go:117] "RemoveContainer" containerID="23071a56c89fca6ad0208917e44ff4c66a8c08c0ddd9819e2cf1883b6c9f515a" Jan 29 06:53:48 crc kubenswrapper[5017]: I0129 06:53:48.823452 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23071a56c89fca6ad0208917e44ff4c66a8c08c0ddd9819e2cf1883b6c9f515a"} err="failed to get container status \"23071a56c89fca6ad0208917e44ff4c66a8c08c0ddd9819e2cf1883b6c9f515a\": rpc error: code = NotFound desc = could not find container \"23071a56c89fca6ad0208917e44ff4c66a8c08c0ddd9819e2cf1883b6c9f515a\": container with ID starting with 23071a56c89fca6ad0208917e44ff4c66a8c08c0ddd9819e2cf1883b6c9f515a not found: ID does not exist" Jan 29 06:53:48 crc kubenswrapper[5017]: I0129 06:53:48.823511 5017 scope.go:117] "RemoveContainer" containerID="fc8280ea81e0a88c9f2c87e119fcd2e6a76a240e823a27c3183ab30ec73aa521" Jan 29 06:53:48 crc kubenswrapper[5017]: E0129 06:53:48.857692 5017 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:b59b7445e581cc720038107e421371c86c5765b2967e77d884ef29b1d9fd0f49" Jan 29 06:53:48 crc kubenswrapper[5017]: E0129 06:53:48.857931 5017 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:b59b7445e581cc720038107e421371c86c5765b2967e77d884ef29b1d9fd0f49,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6xdz4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-c99rc_openstack(31e5ea57-0c73-4c76-bbcb-6d3b665b6226): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 06:53:48 crc kubenswrapper[5017]: E0129 06:53:48.860226 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-c99rc" podUID="31e5ea57-0c73-4c76-bbcb-6d3b665b6226" Jan 29 06:53:48 crc kubenswrapper[5017]: I0129 06:53:48.898370 5017 scope.go:117] "RemoveContainer" containerID="ae263717d3e735843abaf06749a5fb6e17d0f605a5b2b17b3dfcb3956d2a0f68" Jan 29 06:53:49 crc kubenswrapper[5017]: I0129 06:53:49.105400 5017 scope.go:117] "RemoveContainer" containerID="fccc9561a7f3897ac77ff53249df9d30469e8ab1cc6f35cc649eef07cd4b5527" Jan 29 06:53:49 crc kubenswrapper[5017]: I0129 06:53:49.163823 5017 scope.go:117] "RemoveContainer" containerID="0fbcb35f5e1442eb859e4c507b0be33fb72bc533f83d4e1a7af7b7a85ab697de" Jan 29 06:53:49 crc kubenswrapper[5017]: W0129 06:53:49.476757 5017 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod91ff38c8_c1d9_48bd_a593_829ed49d4c2d.slice/crio-af2921d3fd7738aee45bbc20ecc68879a53b725f98c855e7eb2074e4c6ae897d WatchSource:0}: Error finding container af2921d3fd7738aee45bbc20ecc68879a53b725f98c855e7eb2074e4c6ae897d: Status 404 returned error can't find the container with id af2921d3fd7738aee45bbc20ecc68879a53b725f98c855e7eb2074e4c6ae897d Jan 29 06:53:49 crc kubenswrapper[5017]: I0129 06:53:49.490362 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-t4552"] Jan 29 06:53:49 crc kubenswrapper[5017]: I0129 06:53:49.614874 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 06:53:49 crc kubenswrapper[5017]: W0129 06:53:49.619485 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1c3408ec_55f8_4e9b_8e76_92e0d6e6dabc.slice/crio-f059363742b4bb92f6c4843544ae4c4c71a363059e06e95e90d294974528a3ed WatchSource:0}: Error finding container f059363742b4bb92f6c4843544ae4c4c71a363059e06e95e90d294974528a3ed: Status 404 returned error can't find the container with id f059363742b4bb92f6c4843544ae4c4c71a363059e06e95e90d294974528a3ed Jan 29 06:53:49 crc kubenswrapper[5017]: I0129 06:53:49.858744 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"68d01bb7-534e-47c7-854c-c96384ad8df4","Type":"ContainerStarted","Data":"3532343237aa5a78f92f88f321a808014a552e93f04b7c2ba2f4c217e74783c7"} Jan 29 06:53:49 crc kubenswrapper[5017]: I0129 06:53:49.865320 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-tmkrp" event={"ID":"3cc3448b-f305-47b7-b2f9-32b61477ac21","Type":"ContainerStarted","Data":"7966f6aff7ce81cffecfcf8cdd883b5dbb79939b4152cdec01a78f8318795a91"} Jan 29 06:53:49 crc kubenswrapper[5017]: I0129 06:53:49.869029 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-t4552" event={"ID":"91ff38c8-c1d9-48bd-a593-829ed49d4c2d","Type":"ContainerStarted","Data":"9bc47a011cbee9c1f0f45b9670ed8044c92807a4765cd8f12b9edc01353442b0"} Jan 29 06:53:49 crc kubenswrapper[5017]: I0129 06:53:49.869083 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-t4552" event={"ID":"91ff38c8-c1d9-48bd-a593-829ed49d4c2d","Type":"ContainerStarted","Data":"af2921d3fd7738aee45bbc20ecc68879a53b725f98c855e7eb2074e4c6ae897d"} Jan 29 06:53:49 crc kubenswrapper[5017]: I0129 06:53:49.878438 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-wjsjf" event={"ID":"af5eea12-20bf-45b6-b989-f77529ea2f04","Type":"ContainerStarted","Data":"10776bfed6b6bed671c469db271afab32367944233580d2f71aac9a20d6c8c5f"} Jan 29 06:53:49 crc kubenswrapper[5017]: I0129 06:53:49.881862 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-tmkrp" podStartSLOduration=3.483588559 podStartE2EDuration="25.881831855s" podCreationTimestamp="2026-01-29 06:53:24 +0000 UTC" firstStartedPulling="2026-01-29 06:53:26.454286994 +0000 UTC m=+1092.828734594" lastFinishedPulling="2026-01-29 06:53:48.85253028 +0000 UTC m=+1115.226977890" observedRunningTime="2026-01-29 06:53:49.879588669 +0000 UTC m=+1116.254036279" watchObservedRunningTime="2026-01-29 06:53:49.881831855 +0000 UTC m=+1116.256279465" Jan 29 06:53:49 crc kubenswrapper[5017]: I0129 06:53:49.889717 5017 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1c3408ec-55f8-4e9b-8e76-92e0d6e6dabc","Type":"ContainerStarted","Data":"f059363742b4bb92f6c4843544ae4c4c71a363059e06e95e90d294974528a3ed"} Jan 29 06:53:49 crc kubenswrapper[5017]: E0129 06:53:49.895308 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:b59b7445e581cc720038107e421371c86c5765b2967e77d884ef29b1d9fd0f49\\\"\"" pod="openstack/cinder-db-sync-c99rc" podUID="31e5ea57-0c73-4c76-bbcb-6d3b665b6226" Jan 29 06:53:49 crc kubenswrapper[5017]: I0129 06:53:49.907298 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-wjsjf" podStartSLOduration=3.407613839 podStartE2EDuration="25.907275333s" podCreationTimestamp="2026-01-29 06:53:24 +0000 UTC" firstStartedPulling="2026-01-29 06:53:26.358136646 +0000 UTC m=+1092.732584256" lastFinishedPulling="2026-01-29 06:53:48.85779814 +0000 UTC m=+1115.232245750" observedRunningTime="2026-01-29 06:53:49.896814754 +0000 UTC m=+1116.271262364" watchObservedRunningTime="2026-01-29 06:53:49.907275333 +0000 UTC m=+1116.281722953" Jan 29 06:53:49 crc kubenswrapper[5017]: I0129 06:53:49.929173 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-t4552" podStartSLOduration=10.929137243 podStartE2EDuration="10.929137243s" podCreationTimestamp="2026-01-29 06:53:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:53:49.923324779 +0000 UTC m=+1116.297772389" watchObservedRunningTime="2026-01-29 06:53:49.929137243 +0000 UTC m=+1116.303584853" Jan 29 06:53:50 crc kubenswrapper[5017]: I0129 06:53:50.456155 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 06:53:50 crc kubenswrapper[5017]: I0129 06:53:50.910710 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"296518b1-ea38-473c-a5c5-9378dde1f3ae","Type":"ContainerStarted","Data":"f600f50cf4491301e62c2ea8558346bc0eb7e94e3b8fd6364c32e7a818543da3"} Jan 29 06:53:50 crc kubenswrapper[5017]: I0129 06:53:50.913520 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1c3408ec-55f8-4e9b-8e76-92e0d6e6dabc","Type":"ContainerStarted","Data":"3cbee6ed26d3997c1f3865c063a98a6ce5c0673472463538e82c2d760f23dc03"} Jan 29 06:53:50 crc kubenswrapper[5017]: I0129 06:53:50.913587 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1c3408ec-55f8-4e9b-8e76-92e0d6e6dabc","Type":"ContainerStarted","Data":"d7944d62d295e9727fd3d5ae293bc08722b103a1d0a82d234d601cc5e7497025"} Jan 29 06:53:50 crc kubenswrapper[5017]: I0129 06:53:50.946088 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=11.946061401 podStartE2EDuration="11.946061401s" podCreationTimestamp="2026-01-29 06:53:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:53:50.94436401 +0000 UTC m=+1117.318811620" watchObservedRunningTime="2026-01-29 06:53:50.946061401 +0000 UTC m=+1117.320509011" Jan 29 06:53:50 crc 
Jan 29 06:53:50 crc kubenswrapper[5017]: I0129 06:53:50.970058 5017 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-67fdf7998c-vkl9j" podUID="3f100233-315d-4338-815e-8f12beaeaaae" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.119:5353: i/o timeout"
Jan 29 06:53:51 crc kubenswrapper[5017]: I0129 06:53:51.939698 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"296518b1-ea38-473c-a5c5-9378dde1f3ae","Type":"ContainerStarted","Data":"2a3e23eeed54c8ed117c78f312d8a254e83795258ad657ac54cd23c444f93ae9"}
Jan 29 06:53:51 crc kubenswrapper[5017]: I0129 06:53:51.940487 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"296518b1-ea38-473c-a5c5-9378dde1f3ae","Type":"ContainerStarted","Data":"e64e5c2ac450851ff57a116131b0a50375c4fb0c2fa931a51e57907788f9e3e3"}
Jan 29 06:53:51 crc kubenswrapper[5017]: I0129 06:53:51.973658 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=21.973629473 podStartE2EDuration="21.973629473s" podCreationTimestamp="2026-01-29 06:53:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:53:51.968580388 +0000 UTC m=+1118.343027998" watchObservedRunningTime="2026-01-29 06:53:51.973629473 +0000 UTC m=+1118.348077093"
Jan 29 06:53:52 crc kubenswrapper[5017]: I0129 06:53:52.949171 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"68d01bb7-534e-47c7-854c-c96384ad8df4","Type":"ContainerStarted","Data":"2a7de7ba906eb493a55aae1e90816a05485d4884963524286fe87604184303b6"}
Jan 29 06:53:52 crc kubenswrapper[5017]: I0129 06:53:52.951522 5017 generic.go:334] "Generic (PLEG): container finished" podID="3cc3448b-f305-47b7-b2f9-32b61477ac21" containerID="7966f6aff7ce81cffecfcf8cdd883b5dbb79939b4152cdec01a78f8318795a91" exitCode=0
Jan 29 06:53:52 crc kubenswrapper[5017]: I0129 06:53:52.951606 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-tmkrp" event={"ID":"3cc3448b-f305-47b7-b2f9-32b61477ac21","Type":"ContainerDied","Data":"7966f6aff7ce81cffecfcf8cdd883b5dbb79939b4152cdec01a78f8318795a91"}
Jan 29 06:53:52 crc kubenswrapper[5017]: I0129 06:53:52.953306 5017 generic.go:334] "Generic (PLEG): container finished" podID="7d79ee9c-086e-405e-a8d6-478823059f00" containerID="d610e404469ca76ec8329edcb79d636e2d0e6b2aa686e2a81c9c8111278f86cd" exitCode=0
Jan 29 06:53:52 crc kubenswrapper[5017]: I0129 06:53:52.953420 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-q5xff" event={"ID":"7d79ee9c-086e-405e-a8d6-478823059f00","Type":"ContainerDied","Data":"d610e404469ca76ec8329edcb79d636e2d0e6b2aa686e2a81c9c8111278f86cd"}
Jan 29 06:53:53 crc kubenswrapper[5017]: I0129 06:53:53.967465 5017 generic.go:334] "Generic (PLEG): container finished" podID="91ff38c8-c1d9-48bd-a593-829ed49d4c2d" containerID="9bc47a011cbee9c1f0f45b9670ed8044c92807a4765cd8f12b9edc01353442b0" exitCode=0
Jan 29 06:53:53 crc kubenswrapper[5017]: I0129 06:53:53.967562 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-t4552" event={"ID":"91ff38c8-c1d9-48bd-a593-829ed49d4c2d","Type":"ContainerDied","Data":"9bc47a011cbee9c1f0f45b9670ed8044c92807a4765cd8f12b9edc01353442b0"}
Jan 29 06:53:53 crc kubenswrapper[5017]: I0129 06:53:53.969567 5017 generic.go:334] "Generic (PLEG): container finished" podID="af5eea12-20bf-45b6-b989-f77529ea2f04" containerID="10776bfed6b6bed671c469db271afab32367944233580d2f71aac9a20d6c8c5f" exitCode=0
Jan 29 06:53:53 crc kubenswrapper[5017]: I0129 06:53:53.969707 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-wjsjf" event={"ID":"af5eea12-20bf-45b6-b989-f77529ea2f04","Type":"ContainerDied","Data":"10776bfed6b6bed671c469db271afab32367944233580d2f71aac9a20d6c8c5f"}
Jan 29 06:53:54 crc kubenswrapper[5017]: I0129 06:53:54.450214 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-q5xff"
Jan 29 06:53:54 crc kubenswrapper[5017]: I0129 06:53:54.456819 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-tmkrp"
Jan 29 06:53:54 crc kubenswrapper[5017]: I0129 06:53:54.509576 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7d79ee9c-086e-405e-a8d6-478823059f00-config\") pod \"7d79ee9c-086e-405e-a8d6-478823059f00\" (UID: \"7d79ee9c-086e-405e-a8d6-478823059f00\") "
Jan 29 06:53:54 crc kubenswrapper[5017]: I0129 06:53:54.509724 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d79ee9c-086e-405e-a8d6-478823059f00-combined-ca-bundle\") pod \"7d79ee9c-086e-405e-a8d6-478823059f00\" (UID: \"7d79ee9c-086e-405e-a8d6-478823059f00\") "
Jan 29 06:53:54 crc kubenswrapper[5017]: I0129 06:53:54.509938 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ps5t9\" (UniqueName: \"kubernetes.io/projected/7d79ee9c-086e-405e-a8d6-478823059f00-kube-api-access-ps5t9\") pod \"7d79ee9c-086e-405e-a8d6-478823059f00\" (UID: \"7d79ee9c-086e-405e-a8d6-478823059f00\") "
Jan 29 06:53:54 crc kubenswrapper[5017]: I0129 06:53:54.529218 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d79ee9c-086e-405e-a8d6-478823059f00-kube-api-access-ps5t9" (OuterVolumeSpecName: "kube-api-access-ps5t9") pod "7d79ee9c-086e-405e-a8d6-478823059f00" (UID: "7d79ee9c-086e-405e-a8d6-478823059f00"). InnerVolumeSpecName "kube-api-access-ps5t9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 06:53:54 crc kubenswrapper[5017]: I0129 06:53:54.536184 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d79ee9c-086e-405e-a8d6-478823059f00-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7d79ee9c-086e-405e-a8d6-478823059f00" (UID: "7d79ee9c-086e-405e-a8d6-478823059f00"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 06:53:54 crc kubenswrapper[5017]: I0129 06:53:54.537391 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d79ee9c-086e-405e-a8d6-478823059f00-config" (OuterVolumeSpecName: "config") pod "7d79ee9c-086e-405e-a8d6-478823059f00" (UID: "7d79ee9c-086e-405e-a8d6-478823059f00"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 06:53:54 crc kubenswrapper[5017]: I0129 06:53:54.611675 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3cc3448b-f305-47b7-b2f9-32b61477ac21-scripts\") pod \"3cc3448b-f305-47b7-b2f9-32b61477ac21\" (UID: \"3cc3448b-f305-47b7-b2f9-32b61477ac21\") "
Jan 29 06:53:54 crc kubenswrapper[5017]: I0129 06:53:54.611806 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3cc3448b-f305-47b7-b2f9-32b61477ac21-logs\") pod \"3cc3448b-f305-47b7-b2f9-32b61477ac21\" (UID: \"3cc3448b-f305-47b7-b2f9-32b61477ac21\") "
Jan 29 06:53:54 crc kubenswrapper[5017]: I0129 06:53:54.611849 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cc3448b-f305-47b7-b2f9-32b61477ac21-combined-ca-bundle\") pod \"3cc3448b-f305-47b7-b2f9-32b61477ac21\" (UID: \"3cc3448b-f305-47b7-b2f9-32b61477ac21\") "
Jan 29 06:53:54 crc kubenswrapper[5017]: I0129 06:53:54.611916 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cc3448b-f305-47b7-b2f9-32b61477ac21-config-data\") pod \"3cc3448b-f305-47b7-b2f9-32b61477ac21\" (UID: \"3cc3448b-f305-47b7-b2f9-32b61477ac21\") "
Jan 29 06:53:54 crc kubenswrapper[5017]: I0129 06:53:54.612090 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5vnqp\" (UniqueName: \"kubernetes.io/projected/3cc3448b-f305-47b7-b2f9-32b61477ac21-kube-api-access-5vnqp\") pod \"3cc3448b-f305-47b7-b2f9-32b61477ac21\" (UID: \"3cc3448b-f305-47b7-b2f9-32b61477ac21\") "
Jan 29 06:53:54 crc kubenswrapper[5017]: I0129 06:53:54.612772 5017 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d79ee9c-086e-405e-a8d6-478823059f00-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 06:53:54 crc kubenswrapper[5017]: I0129 06:53:54.612790 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ps5t9\" (UniqueName: \"kubernetes.io/projected/7d79ee9c-086e-405e-a8d6-478823059f00-kube-api-access-ps5t9\") on node \"crc\" DevicePath \"\""
Jan 29 06:53:54 crc kubenswrapper[5017]: I0129 06:53:54.612806 5017 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/7d79ee9c-086e-405e-a8d6-478823059f00-config\") on node \"crc\" DevicePath \"\""
Jan 29 06:53:54 crc kubenswrapper[5017]: I0129 06:53:54.613139 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3cc3448b-f305-47b7-b2f9-32b61477ac21-logs" (OuterVolumeSpecName: "logs") pod "3cc3448b-f305-47b7-b2f9-32b61477ac21" (UID: "3cc3448b-f305-47b7-b2f9-32b61477ac21"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 06:53:54 crc kubenswrapper[5017]: I0129 06:53:54.620942 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cc3448b-f305-47b7-b2f9-32b61477ac21-kube-api-access-5vnqp" (OuterVolumeSpecName: "kube-api-access-5vnqp") pod "3cc3448b-f305-47b7-b2f9-32b61477ac21" (UID: "3cc3448b-f305-47b7-b2f9-32b61477ac21"). InnerVolumeSpecName "kube-api-access-5vnqp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 06:53:54 crc kubenswrapper[5017]: I0129 06:53:54.621510 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cc3448b-f305-47b7-b2f9-32b61477ac21-scripts" (OuterVolumeSpecName: "scripts") pod "3cc3448b-f305-47b7-b2f9-32b61477ac21" (UID: "3cc3448b-f305-47b7-b2f9-32b61477ac21"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 06:53:54 crc kubenswrapper[5017]: I0129 06:53:54.655204 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cc3448b-f305-47b7-b2f9-32b61477ac21-config-data" (OuterVolumeSpecName: "config-data") pod "3cc3448b-f305-47b7-b2f9-32b61477ac21" (UID: "3cc3448b-f305-47b7-b2f9-32b61477ac21"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 06:53:54 crc kubenswrapper[5017]: I0129 06:53:54.657257 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cc3448b-f305-47b7-b2f9-32b61477ac21-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3cc3448b-f305-47b7-b2f9-32b61477ac21" (UID: "3cc3448b-f305-47b7-b2f9-32b61477ac21"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 06:53:54 crc kubenswrapper[5017]: I0129 06:53:54.714884 5017 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3cc3448b-f305-47b7-b2f9-32b61477ac21-scripts\") on node \"crc\" DevicePath \"\""
Jan 29 06:53:54 crc kubenswrapper[5017]: I0129 06:53:54.714947 5017 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3cc3448b-f305-47b7-b2f9-32b61477ac21-logs\") on node \"crc\" DevicePath \"\""
Jan 29 06:53:54 crc kubenswrapper[5017]: I0129 06:53:54.714979 5017 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cc3448b-f305-47b7-b2f9-32b61477ac21-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 06:53:54 crc kubenswrapper[5017]: I0129 06:53:54.714993 5017 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cc3448b-f305-47b7-b2f9-32b61477ac21-config-data\") on node \"crc\" DevicePath \"\""
Jan 29 06:53:54 crc kubenswrapper[5017]: I0129 06:53:54.715006 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5vnqp\" (UniqueName: \"kubernetes.io/projected/3cc3448b-f305-47b7-b2f9-32b61477ac21-kube-api-access-5vnqp\") on node \"crc\" DevicePath \"\""
Jan 29 06:53:54 crc kubenswrapper[5017]: I0129 06:53:54.981328 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-q5xff" event={"ID":"7d79ee9c-086e-405e-a8d6-478823059f00","Type":"ContainerDied","Data":"02f54322aa96a68a67b48e92e7782eb38f981cbb47df2051b00eaec734db8b96"}
Jan 29 06:53:54 crc kubenswrapper[5017]: I0129 06:53:54.981380 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02f54322aa96a68a67b48e92e7782eb38f981cbb47df2051b00eaec734db8b96"
Jan 29 06:53:54 crc kubenswrapper[5017]: I0129 06:53:54.981456 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-q5xff" Jan 29 06:53:55 crc kubenswrapper[5017]: I0129 06:53:55.002478 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-tmkrp" event={"ID":"3cc3448b-f305-47b7-b2f9-32b61477ac21","Type":"ContainerDied","Data":"7fd2e8f3ad7833e49965f38edb5ba2926bef07ecde903562be671bc910027370"} Jan 29 06:53:55 crc kubenswrapper[5017]: I0129 06:53:55.002553 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7fd2e8f3ad7833e49965f38edb5ba2926bef07ecde903562be671bc910027370" Jan 29 06:53:55 crc kubenswrapper[5017]: I0129 06:53:55.002569 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-tmkrp" Jan 29 06:53:55 crc kubenswrapper[5017]: I0129 06:53:55.101042 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-8548d8d696-gk4rx"] Jan 29 06:53:55 crc kubenswrapper[5017]: E0129 06:53:55.101561 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f100233-315d-4338-815e-8f12beaeaaae" containerName="init" Jan 29 06:53:55 crc kubenswrapper[5017]: I0129 06:53:55.101579 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f100233-315d-4338-815e-8f12beaeaaae" containerName="init" Jan 29 06:53:55 crc kubenswrapper[5017]: E0129 06:53:55.101591 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d79ee9c-086e-405e-a8d6-478823059f00" containerName="neutron-db-sync" Jan 29 06:53:55 crc kubenswrapper[5017]: I0129 06:53:55.101599 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d79ee9c-086e-405e-a8d6-478823059f00" containerName="neutron-db-sync" Jan 29 06:53:55 crc kubenswrapper[5017]: E0129 06:53:55.101606 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f100233-315d-4338-815e-8f12beaeaaae" containerName="dnsmasq-dns" Jan 29 06:53:55 crc kubenswrapper[5017]: I0129 06:53:55.101613 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f100233-315d-4338-815e-8f12beaeaaae" containerName="dnsmasq-dns" Jan 29 06:53:55 crc kubenswrapper[5017]: E0129 06:53:55.101624 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cc3448b-f305-47b7-b2f9-32b61477ac21" containerName="placement-db-sync" Jan 29 06:53:55 crc kubenswrapper[5017]: I0129 06:53:55.101630 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cc3448b-f305-47b7-b2f9-32b61477ac21" containerName="placement-db-sync" Jan 29 06:53:55 crc kubenswrapper[5017]: I0129 06:53:55.101815 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="3cc3448b-f305-47b7-b2f9-32b61477ac21" containerName="placement-db-sync" Jan 29 06:53:55 crc kubenswrapper[5017]: I0129 06:53:55.101833 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f100233-315d-4338-815e-8f12beaeaaae" containerName="dnsmasq-dns" Jan 29 06:53:55 crc kubenswrapper[5017]: I0129 06:53:55.101848 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d79ee9c-086e-405e-a8d6-478823059f00" containerName="neutron-db-sync" Jan 29 06:53:55 crc kubenswrapper[5017]: I0129 06:53:55.106813 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-8548d8d696-gk4rx" Jan 29 06:53:55 crc kubenswrapper[5017]: I0129 06:53:55.118108 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-8548d8d696-gk4rx"] Jan 29 06:53:55 crc kubenswrapper[5017]: I0129 06:53:55.120157 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Jan 29 06:53:55 crc kubenswrapper[5017]: I0129 06:53:55.120437 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-wtn27" Jan 29 06:53:55 crc kubenswrapper[5017]: I0129 06:53:55.120486 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 29 06:53:55 crc kubenswrapper[5017]: I0129 06:53:55.124381 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 29 06:53:55 crc kubenswrapper[5017]: I0129 06:53:55.125976 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Jan 29 06:53:55 crc kubenswrapper[5017]: I0129 06:53:55.227578 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/929c8bb1-1ca7-4593-b8f4-1e74f9702b57-internal-tls-certs\") pod \"placement-8548d8d696-gk4rx\" (UID: \"929c8bb1-1ca7-4593-b8f4-1e74f9702b57\") " pod="openstack/placement-8548d8d696-gk4rx" Jan 29 06:53:55 crc kubenswrapper[5017]: I0129 06:53:55.227627 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vq7k\" (UniqueName: \"kubernetes.io/projected/929c8bb1-1ca7-4593-b8f4-1e74f9702b57-kube-api-access-2vq7k\") pod \"placement-8548d8d696-gk4rx\" (UID: \"929c8bb1-1ca7-4593-b8f4-1e74f9702b57\") " pod="openstack/placement-8548d8d696-gk4rx" Jan 29 06:53:55 crc kubenswrapper[5017]: I0129 06:53:55.227663 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/929c8bb1-1ca7-4593-b8f4-1e74f9702b57-logs\") pod \"placement-8548d8d696-gk4rx\" (UID: \"929c8bb1-1ca7-4593-b8f4-1e74f9702b57\") " pod="openstack/placement-8548d8d696-gk4rx" Jan 29 06:53:55 crc kubenswrapper[5017]: I0129 06:53:55.227684 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/929c8bb1-1ca7-4593-b8f4-1e74f9702b57-combined-ca-bundle\") pod \"placement-8548d8d696-gk4rx\" (UID: \"929c8bb1-1ca7-4593-b8f4-1e74f9702b57\") " pod="openstack/placement-8548d8d696-gk4rx" Jan 29 06:53:55 crc kubenswrapper[5017]: I0129 06:53:55.227707 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/929c8bb1-1ca7-4593-b8f4-1e74f9702b57-config-data\") pod \"placement-8548d8d696-gk4rx\" (UID: \"929c8bb1-1ca7-4593-b8f4-1e74f9702b57\") " pod="openstack/placement-8548d8d696-gk4rx" Jan 29 06:53:55 crc kubenswrapper[5017]: I0129 06:53:55.227737 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/929c8bb1-1ca7-4593-b8f4-1e74f9702b57-public-tls-certs\") pod \"placement-8548d8d696-gk4rx\" (UID: \"929c8bb1-1ca7-4593-b8f4-1e74f9702b57\") " pod="openstack/placement-8548d8d696-gk4rx" Jan 29 06:53:55 crc kubenswrapper[5017]: I0129 
06:53:55.227836 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/929c8bb1-1ca7-4593-b8f4-1e74f9702b57-scripts\") pod \"placement-8548d8d696-gk4rx\" (UID: \"929c8bb1-1ca7-4593-b8f4-1e74f9702b57\") " pod="openstack/placement-8548d8d696-gk4rx" Jan 29 06:53:55 crc kubenswrapper[5017]: I0129 06:53:55.270752 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-685444497c-f78qp"] Jan 29 06:53:55 crc kubenswrapper[5017]: I0129 06:53:55.272782 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-685444497c-f78qp" Jan 29 06:53:55 crc kubenswrapper[5017]: I0129 06:53:55.317256 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-685444497c-f78qp"] Jan 29 06:53:55 crc kubenswrapper[5017]: I0129 06:53:55.329485 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e9db5f91-efe7-4015-ad9b-882eecb4f8dd-ovsdbserver-sb\") pod \"dnsmasq-dns-685444497c-f78qp\" (UID: \"e9db5f91-efe7-4015-ad9b-882eecb4f8dd\") " pod="openstack/dnsmasq-dns-685444497c-f78qp" Jan 29 06:53:55 crc kubenswrapper[5017]: I0129 06:53:55.329786 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/929c8bb1-1ca7-4593-b8f4-1e74f9702b57-internal-tls-certs\") pod \"placement-8548d8d696-gk4rx\" (UID: \"929c8bb1-1ca7-4593-b8f4-1e74f9702b57\") " pod="openstack/placement-8548d8d696-gk4rx" Jan 29 06:53:55 crc kubenswrapper[5017]: I0129 06:53:55.329818 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vq7k\" (UniqueName: \"kubernetes.io/projected/929c8bb1-1ca7-4593-b8f4-1e74f9702b57-kube-api-access-2vq7k\") pod \"placement-8548d8d696-gk4rx\" (UID: \"929c8bb1-1ca7-4593-b8f4-1e74f9702b57\") " pod="openstack/placement-8548d8d696-gk4rx" Jan 29 06:53:55 crc kubenswrapper[5017]: I0129 06:53:55.329849 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e9db5f91-efe7-4015-ad9b-882eecb4f8dd-dns-swift-storage-0\") pod \"dnsmasq-dns-685444497c-f78qp\" (UID: \"e9db5f91-efe7-4015-ad9b-882eecb4f8dd\") " pod="openstack/dnsmasq-dns-685444497c-f78qp" Jan 29 06:53:55 crc kubenswrapper[5017]: I0129 06:53:55.329880 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/929c8bb1-1ca7-4593-b8f4-1e74f9702b57-logs\") pod \"placement-8548d8d696-gk4rx\" (UID: \"929c8bb1-1ca7-4593-b8f4-1e74f9702b57\") " pod="openstack/placement-8548d8d696-gk4rx" Jan 29 06:53:55 crc kubenswrapper[5017]: I0129 06:53:55.329908 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/929c8bb1-1ca7-4593-b8f4-1e74f9702b57-combined-ca-bundle\") pod \"placement-8548d8d696-gk4rx\" (UID: \"929c8bb1-1ca7-4593-b8f4-1e74f9702b57\") " pod="openstack/placement-8548d8d696-gk4rx" Jan 29 06:53:55 crc kubenswrapper[5017]: I0129 06:53:55.329951 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/929c8bb1-1ca7-4593-b8f4-1e74f9702b57-config-data\") pod \"placement-8548d8d696-gk4rx\" (UID: \"929c8bb1-1ca7-4593-b8f4-1e74f9702b57\") " 
pod="openstack/placement-8548d8d696-gk4rx" Jan 29 06:53:55 crc kubenswrapper[5017]: I0129 06:53:55.330000 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/929c8bb1-1ca7-4593-b8f4-1e74f9702b57-public-tls-certs\") pod \"placement-8548d8d696-gk4rx\" (UID: \"929c8bb1-1ca7-4593-b8f4-1e74f9702b57\") " pod="openstack/placement-8548d8d696-gk4rx" Jan 29 06:53:55 crc kubenswrapper[5017]: I0129 06:53:55.330059 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e9db5f91-efe7-4015-ad9b-882eecb4f8dd-dns-svc\") pod \"dnsmasq-dns-685444497c-f78qp\" (UID: \"e9db5f91-efe7-4015-ad9b-882eecb4f8dd\") " pod="openstack/dnsmasq-dns-685444497c-f78qp" Jan 29 06:53:55 crc kubenswrapper[5017]: I0129 06:53:55.330128 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9db5f91-efe7-4015-ad9b-882eecb4f8dd-config\") pod \"dnsmasq-dns-685444497c-f78qp\" (UID: \"e9db5f91-efe7-4015-ad9b-882eecb4f8dd\") " pod="openstack/dnsmasq-dns-685444497c-f78qp" Jan 29 06:53:55 crc kubenswrapper[5017]: I0129 06:53:55.330157 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwjs9\" (UniqueName: \"kubernetes.io/projected/e9db5f91-efe7-4015-ad9b-882eecb4f8dd-kube-api-access-mwjs9\") pod \"dnsmasq-dns-685444497c-f78qp\" (UID: \"e9db5f91-efe7-4015-ad9b-882eecb4f8dd\") " pod="openstack/dnsmasq-dns-685444497c-f78qp" Jan 29 06:53:55 crc kubenswrapper[5017]: I0129 06:53:55.330201 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e9db5f91-efe7-4015-ad9b-882eecb4f8dd-ovsdbserver-nb\") pod \"dnsmasq-dns-685444497c-f78qp\" (UID: \"e9db5f91-efe7-4015-ad9b-882eecb4f8dd\") " pod="openstack/dnsmasq-dns-685444497c-f78qp" Jan 29 06:53:55 crc kubenswrapper[5017]: I0129 06:53:55.330227 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/929c8bb1-1ca7-4593-b8f4-1e74f9702b57-scripts\") pod \"placement-8548d8d696-gk4rx\" (UID: \"929c8bb1-1ca7-4593-b8f4-1e74f9702b57\") " pod="openstack/placement-8548d8d696-gk4rx" Jan 29 06:53:55 crc kubenswrapper[5017]: I0129 06:53:55.336731 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/929c8bb1-1ca7-4593-b8f4-1e74f9702b57-internal-tls-certs\") pod \"placement-8548d8d696-gk4rx\" (UID: \"929c8bb1-1ca7-4593-b8f4-1e74f9702b57\") " pod="openstack/placement-8548d8d696-gk4rx" Jan 29 06:53:55 crc kubenswrapper[5017]: I0129 06:53:55.342701 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/929c8bb1-1ca7-4593-b8f4-1e74f9702b57-logs\") pod \"placement-8548d8d696-gk4rx\" (UID: \"929c8bb1-1ca7-4593-b8f4-1e74f9702b57\") " pod="openstack/placement-8548d8d696-gk4rx" Jan 29 06:53:55 crc kubenswrapper[5017]: I0129 06:53:55.344313 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/929c8bb1-1ca7-4593-b8f4-1e74f9702b57-scripts\") pod \"placement-8548d8d696-gk4rx\" (UID: \"929c8bb1-1ca7-4593-b8f4-1e74f9702b57\") " pod="openstack/placement-8548d8d696-gk4rx" Jan 29 06:53:55 crc kubenswrapper[5017]: 
I0129 06:53:55.372629 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7b944f8dd4-2rk49"] Jan 29 06:53:55 crc kubenswrapper[5017]: I0129 06:53:55.375673 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/929c8bb1-1ca7-4593-b8f4-1e74f9702b57-combined-ca-bundle\") pod \"placement-8548d8d696-gk4rx\" (UID: \"929c8bb1-1ca7-4593-b8f4-1e74f9702b57\") " pod="openstack/placement-8548d8d696-gk4rx" Jan 29 06:53:55 crc kubenswrapper[5017]: I0129 06:53:55.376737 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7b944f8dd4-2rk49" Jan 29 06:53:55 crc kubenswrapper[5017]: I0129 06:53:55.384681 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 29 06:53:55 crc kubenswrapper[5017]: I0129 06:53:55.389710 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 29 06:53:55 crc kubenswrapper[5017]: I0129 06:53:55.390146 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/929c8bb1-1ca7-4593-b8f4-1e74f9702b57-config-data\") pod \"placement-8548d8d696-gk4rx\" (UID: \"929c8bb1-1ca7-4593-b8f4-1e74f9702b57\") " pod="openstack/placement-8548d8d696-gk4rx" Jan 29 06:53:55 crc kubenswrapper[5017]: I0129 06:53:55.411797 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vq7k\" (UniqueName: \"kubernetes.io/projected/929c8bb1-1ca7-4593-b8f4-1e74f9702b57-kube-api-access-2vq7k\") pod \"placement-8548d8d696-gk4rx\" (UID: \"929c8bb1-1ca7-4593-b8f4-1e74f9702b57\") " pod="openstack/placement-8548d8d696-gk4rx" Jan 29 06:53:55 crc kubenswrapper[5017]: I0129 06:53:55.412539 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/929c8bb1-1ca7-4593-b8f4-1e74f9702b57-public-tls-certs\") pod \"placement-8548d8d696-gk4rx\" (UID: \"929c8bb1-1ca7-4593-b8f4-1e74f9702b57\") " pod="openstack/placement-8548d8d696-gk4rx" Jan 29 06:53:55 crc kubenswrapper[5017]: I0129 06:53:55.413765 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-7nc5b" Jan 29 06:53:55 crc kubenswrapper[5017]: I0129 06:53:55.414103 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 29 06:53:55 crc kubenswrapper[5017]: I0129 06:53:55.435423 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a9431961-983b-4257-bbe6-cf1bac1261c0-config\") pod \"neutron-7b944f8dd4-2rk49\" (UID: \"a9431961-983b-4257-bbe6-cf1bac1261c0\") " pod="openstack/neutron-7b944f8dd4-2rk49" Jan 29 06:53:55 crc kubenswrapper[5017]: I0129 06:53:55.435495 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e9db5f91-efe7-4015-ad9b-882eecb4f8dd-dns-svc\") pod \"dnsmasq-dns-685444497c-f78qp\" (UID: \"e9db5f91-efe7-4015-ad9b-882eecb4f8dd\") " pod="openstack/dnsmasq-dns-685444497c-f78qp" Jan 29 06:53:55 crc kubenswrapper[5017]: I0129 06:53:55.435534 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9431961-983b-4257-bbe6-cf1bac1261c0-combined-ca-bundle\") pod \"neutron-7b944f8dd4-2rk49\" (UID: 
\"a9431961-983b-4257-bbe6-cf1bac1261c0\") " pod="openstack/neutron-7b944f8dd4-2rk49" Jan 29 06:53:55 crc kubenswrapper[5017]: I0129 06:53:55.435623 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9db5f91-efe7-4015-ad9b-882eecb4f8dd-config\") pod \"dnsmasq-dns-685444497c-f78qp\" (UID: \"e9db5f91-efe7-4015-ad9b-882eecb4f8dd\") " pod="openstack/dnsmasq-dns-685444497c-f78qp" Jan 29 06:53:55 crc kubenswrapper[5017]: I0129 06:53:55.435645 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwjs9\" (UniqueName: \"kubernetes.io/projected/e9db5f91-efe7-4015-ad9b-882eecb4f8dd-kube-api-access-mwjs9\") pod \"dnsmasq-dns-685444497c-f78qp\" (UID: \"e9db5f91-efe7-4015-ad9b-882eecb4f8dd\") " pod="openstack/dnsmasq-dns-685444497c-f78qp" Jan 29 06:53:55 crc kubenswrapper[5017]: I0129 06:53:55.435705 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e9db5f91-efe7-4015-ad9b-882eecb4f8dd-ovsdbserver-nb\") pod \"dnsmasq-dns-685444497c-f78qp\" (UID: \"e9db5f91-efe7-4015-ad9b-882eecb4f8dd\") " pod="openstack/dnsmasq-dns-685444497c-f78qp" Jan 29 06:53:55 crc kubenswrapper[5017]: I0129 06:53:55.435729 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9431961-983b-4257-bbe6-cf1bac1261c0-ovndb-tls-certs\") pod \"neutron-7b944f8dd4-2rk49\" (UID: \"a9431961-983b-4257-bbe6-cf1bac1261c0\") " pod="openstack/neutron-7b944f8dd4-2rk49" Jan 29 06:53:55 crc kubenswrapper[5017]: I0129 06:53:55.435808 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzjmm\" (UniqueName: \"kubernetes.io/projected/a9431961-983b-4257-bbe6-cf1bac1261c0-kube-api-access-dzjmm\") pod \"neutron-7b944f8dd4-2rk49\" (UID: \"a9431961-983b-4257-bbe6-cf1bac1261c0\") " pod="openstack/neutron-7b944f8dd4-2rk49" Jan 29 06:53:55 crc kubenswrapper[5017]: I0129 06:53:55.435834 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a9431961-983b-4257-bbe6-cf1bac1261c0-httpd-config\") pod \"neutron-7b944f8dd4-2rk49\" (UID: \"a9431961-983b-4257-bbe6-cf1bac1261c0\") " pod="openstack/neutron-7b944f8dd4-2rk49" Jan 29 06:53:55 crc kubenswrapper[5017]: I0129 06:53:55.435859 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e9db5f91-efe7-4015-ad9b-882eecb4f8dd-ovsdbserver-sb\") pod \"dnsmasq-dns-685444497c-f78qp\" (UID: \"e9db5f91-efe7-4015-ad9b-882eecb4f8dd\") " pod="openstack/dnsmasq-dns-685444497c-f78qp" Jan 29 06:53:55 crc kubenswrapper[5017]: I0129 06:53:55.435946 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e9db5f91-efe7-4015-ad9b-882eecb4f8dd-dns-swift-storage-0\") pod \"dnsmasq-dns-685444497c-f78qp\" (UID: \"e9db5f91-efe7-4015-ad9b-882eecb4f8dd\") " pod="openstack/dnsmasq-dns-685444497c-f78qp" Jan 29 06:53:55 crc kubenswrapper[5017]: I0129 06:53:55.443640 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e9db5f91-efe7-4015-ad9b-882eecb4f8dd-dns-swift-storage-0\") pod \"dnsmasq-dns-685444497c-f78qp\" (UID: 
\"e9db5f91-efe7-4015-ad9b-882eecb4f8dd\") " pod="openstack/dnsmasq-dns-685444497c-f78qp" Jan 29 06:53:55 crc kubenswrapper[5017]: I0129 06:53:55.446002 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e9db5f91-efe7-4015-ad9b-882eecb4f8dd-ovsdbserver-nb\") pod \"dnsmasq-dns-685444497c-f78qp\" (UID: \"e9db5f91-efe7-4015-ad9b-882eecb4f8dd\") " pod="openstack/dnsmasq-dns-685444497c-f78qp" Jan 29 06:53:55 crc kubenswrapper[5017]: I0129 06:53:55.447785 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e9db5f91-efe7-4015-ad9b-882eecb4f8dd-dns-svc\") pod \"dnsmasq-dns-685444497c-f78qp\" (UID: \"e9db5f91-efe7-4015-ad9b-882eecb4f8dd\") " pod="openstack/dnsmasq-dns-685444497c-f78qp" Jan 29 06:53:55 crc kubenswrapper[5017]: I0129 06:53:55.462670 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9db5f91-efe7-4015-ad9b-882eecb4f8dd-config\") pod \"dnsmasq-dns-685444497c-f78qp\" (UID: \"e9db5f91-efe7-4015-ad9b-882eecb4f8dd\") " pod="openstack/dnsmasq-dns-685444497c-f78qp" Jan 29 06:53:55 crc kubenswrapper[5017]: I0129 06:53:55.465928 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e9db5f91-efe7-4015-ad9b-882eecb4f8dd-ovsdbserver-sb\") pod \"dnsmasq-dns-685444497c-f78qp\" (UID: \"e9db5f91-efe7-4015-ad9b-882eecb4f8dd\") " pod="openstack/dnsmasq-dns-685444497c-f78qp" Jan 29 06:53:55 crc kubenswrapper[5017]: I0129 06:53:55.476432 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwjs9\" (UniqueName: \"kubernetes.io/projected/e9db5f91-efe7-4015-ad9b-882eecb4f8dd-kube-api-access-mwjs9\") pod \"dnsmasq-dns-685444497c-f78qp\" (UID: \"e9db5f91-efe7-4015-ad9b-882eecb4f8dd\") " pod="openstack/dnsmasq-dns-685444497c-f78qp" Jan 29 06:53:55 crc kubenswrapper[5017]: I0129 06:53:55.478910 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7b944f8dd4-2rk49"] Jan 29 06:53:55 crc kubenswrapper[5017]: I0129 06:53:55.480991 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-685444497c-f78qp" Jan 29 06:53:55 crc kubenswrapper[5017]: I0129 06:53:55.504088 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-8548d8d696-gk4rx" Jan 29 06:53:55 crc kubenswrapper[5017]: I0129 06:53:55.538422 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9431961-983b-4257-bbe6-cf1bac1261c0-ovndb-tls-certs\") pod \"neutron-7b944f8dd4-2rk49\" (UID: \"a9431961-983b-4257-bbe6-cf1bac1261c0\") " pod="openstack/neutron-7b944f8dd4-2rk49" Jan 29 06:53:55 crc kubenswrapper[5017]: I0129 06:53:55.538495 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzjmm\" (UniqueName: \"kubernetes.io/projected/a9431961-983b-4257-bbe6-cf1bac1261c0-kube-api-access-dzjmm\") pod \"neutron-7b944f8dd4-2rk49\" (UID: \"a9431961-983b-4257-bbe6-cf1bac1261c0\") " pod="openstack/neutron-7b944f8dd4-2rk49" Jan 29 06:53:55 crc kubenswrapper[5017]: I0129 06:53:55.538517 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a9431961-983b-4257-bbe6-cf1bac1261c0-httpd-config\") pod \"neutron-7b944f8dd4-2rk49\" (UID: \"a9431961-983b-4257-bbe6-cf1bac1261c0\") " pod="openstack/neutron-7b944f8dd4-2rk49" Jan 29 06:53:55 crc kubenswrapper[5017]: I0129 06:53:55.538612 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a9431961-983b-4257-bbe6-cf1bac1261c0-config\") pod \"neutron-7b944f8dd4-2rk49\" (UID: \"a9431961-983b-4257-bbe6-cf1bac1261c0\") " pod="openstack/neutron-7b944f8dd4-2rk49" Jan 29 06:53:55 crc kubenswrapper[5017]: I0129 06:53:55.538645 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9431961-983b-4257-bbe6-cf1bac1261c0-combined-ca-bundle\") pod \"neutron-7b944f8dd4-2rk49\" (UID: \"a9431961-983b-4257-bbe6-cf1bac1261c0\") " pod="openstack/neutron-7b944f8dd4-2rk49" Jan 29 06:53:55 crc kubenswrapper[5017]: I0129 06:53:55.543942 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9431961-983b-4257-bbe6-cf1bac1261c0-combined-ca-bundle\") pod \"neutron-7b944f8dd4-2rk49\" (UID: \"a9431961-983b-4257-bbe6-cf1bac1261c0\") " pod="openstack/neutron-7b944f8dd4-2rk49" Jan 29 06:53:55 crc kubenswrapper[5017]: I0129 06:53:55.546258 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9431961-983b-4257-bbe6-cf1bac1261c0-ovndb-tls-certs\") pod \"neutron-7b944f8dd4-2rk49\" (UID: \"a9431961-983b-4257-bbe6-cf1bac1261c0\") " pod="openstack/neutron-7b944f8dd4-2rk49" Jan 29 06:53:55 crc kubenswrapper[5017]: I0129 06:53:55.549378 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/a9431961-983b-4257-bbe6-cf1bac1261c0-config\") pod \"neutron-7b944f8dd4-2rk49\" (UID: \"a9431961-983b-4257-bbe6-cf1bac1261c0\") " pod="openstack/neutron-7b944f8dd4-2rk49" Jan 29 06:53:55 crc kubenswrapper[5017]: I0129 06:53:55.550700 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a9431961-983b-4257-bbe6-cf1bac1261c0-httpd-config\") pod \"neutron-7b944f8dd4-2rk49\" (UID: \"a9431961-983b-4257-bbe6-cf1bac1261c0\") " pod="openstack/neutron-7b944f8dd4-2rk49" Jan 29 06:53:55 crc kubenswrapper[5017]: I0129 06:53:55.562766 5017 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-dzjmm\" (UniqueName: \"kubernetes.io/projected/a9431961-983b-4257-bbe6-cf1bac1261c0-kube-api-access-dzjmm\") pod \"neutron-7b944f8dd4-2rk49\" (UID: \"a9431961-983b-4257-bbe6-cf1bac1261c0\") " pod="openstack/neutron-7b944f8dd4-2rk49" Jan 29 06:53:55 crc kubenswrapper[5017]: I0129 06:53:55.596906 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7b944f8dd4-2rk49" Jan 29 06:53:56 crc kubenswrapper[5017]: I0129 06:53:56.539332 5017 patch_prober.go:28] interesting pod/machine-config-daemon-895pl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 06:53:56 crc kubenswrapper[5017]: I0129 06:53:56.541499 5017 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 06:53:56 crc kubenswrapper[5017]: I0129 06:53:56.830232 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-t4552" Jan 29 06:53:56 crc kubenswrapper[5017]: I0129 06:53:56.968344 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5db9x\" (UniqueName: \"kubernetes.io/projected/91ff38c8-c1d9-48bd-a593-829ed49d4c2d-kube-api-access-5db9x\") pod \"91ff38c8-c1d9-48bd-a593-829ed49d4c2d\" (UID: \"91ff38c8-c1d9-48bd-a593-829ed49d4c2d\") " Jan 29 06:53:56 crc kubenswrapper[5017]: I0129 06:53:56.968428 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/91ff38c8-c1d9-48bd-a593-829ed49d4c2d-fernet-keys\") pod \"91ff38c8-c1d9-48bd-a593-829ed49d4c2d\" (UID: \"91ff38c8-c1d9-48bd-a593-829ed49d4c2d\") " Jan 29 06:53:56 crc kubenswrapper[5017]: I0129 06:53:56.968586 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91ff38c8-c1d9-48bd-a593-829ed49d4c2d-combined-ca-bundle\") pod \"91ff38c8-c1d9-48bd-a593-829ed49d4c2d\" (UID: \"91ff38c8-c1d9-48bd-a593-829ed49d4c2d\") " Jan 29 06:53:56 crc kubenswrapper[5017]: I0129 06:53:56.968663 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/91ff38c8-c1d9-48bd-a593-829ed49d4c2d-scripts\") pod \"91ff38c8-c1d9-48bd-a593-829ed49d4c2d\" (UID: \"91ff38c8-c1d9-48bd-a593-829ed49d4c2d\") " Jan 29 06:53:56 crc kubenswrapper[5017]: I0129 06:53:56.968701 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/91ff38c8-c1d9-48bd-a593-829ed49d4c2d-credential-keys\") pod \"91ff38c8-c1d9-48bd-a593-829ed49d4c2d\" (UID: \"91ff38c8-c1d9-48bd-a593-829ed49d4c2d\") " Jan 29 06:53:56 crc kubenswrapper[5017]: I0129 06:53:56.968829 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91ff38c8-c1d9-48bd-a593-829ed49d4c2d-config-data\") pod \"91ff38c8-c1d9-48bd-a593-829ed49d4c2d\" (UID: \"91ff38c8-c1d9-48bd-a593-829ed49d4c2d\") " Jan 29 06:53:56 crc kubenswrapper[5017]: I0129 06:53:56.974406 
5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91ff38c8-c1d9-48bd-a593-829ed49d4c2d-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "91ff38c8-c1d9-48bd-a593-829ed49d4c2d" (UID: "91ff38c8-c1d9-48bd-a593-829ed49d4c2d"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:53:56 crc kubenswrapper[5017]: I0129 06:53:56.976198 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91ff38c8-c1d9-48bd-a593-829ed49d4c2d-scripts" (OuterVolumeSpecName: "scripts") pod "91ff38c8-c1d9-48bd-a593-829ed49d4c2d" (UID: "91ff38c8-c1d9-48bd-a593-829ed49d4c2d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:53:56 crc kubenswrapper[5017]: I0129 06:53:56.977056 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91ff38c8-c1d9-48bd-a593-829ed49d4c2d-kube-api-access-5db9x" (OuterVolumeSpecName: "kube-api-access-5db9x") pod "91ff38c8-c1d9-48bd-a593-829ed49d4c2d" (UID: "91ff38c8-c1d9-48bd-a593-829ed49d4c2d"). InnerVolumeSpecName "kube-api-access-5db9x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:53:56 crc kubenswrapper[5017]: I0129 06:53:56.977863 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91ff38c8-c1d9-48bd-a593-829ed49d4c2d-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "91ff38c8-c1d9-48bd-a593-829ed49d4c2d" (UID: "91ff38c8-c1d9-48bd-a593-829ed49d4c2d"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:53:57 crc kubenswrapper[5017]: I0129 06:53:57.002122 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91ff38c8-c1d9-48bd-a593-829ed49d4c2d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "91ff38c8-c1d9-48bd-a593-829ed49d4c2d" (UID: "91ff38c8-c1d9-48bd-a593-829ed49d4c2d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:53:57 crc kubenswrapper[5017]: I0129 06:53:57.003430 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91ff38c8-c1d9-48bd-a593-829ed49d4c2d-config-data" (OuterVolumeSpecName: "config-data") pod "91ff38c8-c1d9-48bd-a593-829ed49d4c2d" (UID: "91ff38c8-c1d9-48bd-a593-829ed49d4c2d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:53:57 crc kubenswrapper[5017]: I0129 06:53:57.028087 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-t4552" event={"ID":"91ff38c8-c1d9-48bd-a593-829ed49d4c2d","Type":"ContainerDied","Data":"af2921d3fd7738aee45bbc20ecc68879a53b725f98c855e7eb2074e4c6ae897d"} Jan 29 06:53:57 crc kubenswrapper[5017]: I0129 06:53:57.028131 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af2921d3fd7738aee45bbc20ecc68879a53b725f98c855e7eb2074e4c6ae897d" Jan 29 06:53:57 crc kubenswrapper[5017]: I0129 06:53:57.028187 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-t4552" Jan 29 06:53:57 crc kubenswrapper[5017]: I0129 06:53:57.072716 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5db9x\" (UniqueName: \"kubernetes.io/projected/91ff38c8-c1d9-48bd-a593-829ed49d4c2d-kube-api-access-5db9x\") on node \"crc\" DevicePath \"\"" Jan 29 06:53:57 crc kubenswrapper[5017]: I0129 06:53:57.073493 5017 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/91ff38c8-c1d9-48bd-a593-829ed49d4c2d-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 29 06:53:57 crc kubenswrapper[5017]: I0129 06:53:57.073599 5017 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91ff38c8-c1d9-48bd-a593-829ed49d4c2d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 06:53:57 crc kubenswrapper[5017]: I0129 06:53:57.073727 5017 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/91ff38c8-c1d9-48bd-a593-829ed49d4c2d-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 06:53:57 crc kubenswrapper[5017]: I0129 06:53:57.073813 5017 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/91ff38c8-c1d9-48bd-a593-829ed49d4c2d-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 29 06:53:57 crc kubenswrapper[5017]: I0129 06:53:57.073889 5017 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91ff38c8-c1d9-48bd-a593-829ed49d4c2d-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 06:53:57 crc kubenswrapper[5017]: I0129 06:53:57.688462 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-849cfbbc5-ctfjf"] Jan 29 06:53:57 crc kubenswrapper[5017]: E0129 06:53:57.688879 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91ff38c8-c1d9-48bd-a593-829ed49d4c2d" containerName="keystone-bootstrap" Jan 29 06:53:57 crc kubenswrapper[5017]: I0129 06:53:57.688894 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="91ff38c8-c1d9-48bd-a593-829ed49d4c2d" containerName="keystone-bootstrap" Jan 29 06:53:57 crc kubenswrapper[5017]: I0129 06:53:57.689084 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="91ff38c8-c1d9-48bd-a593-829ed49d4c2d" containerName="keystone-bootstrap" Jan 29 06:53:57 crc kubenswrapper[5017]: I0129 06:53:57.690069 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-849cfbbc5-ctfjf" Jan 29 06:53:57 crc kubenswrapper[5017]: I0129 06:53:57.694538 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 29 06:53:57 crc kubenswrapper[5017]: I0129 06:53:57.695545 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 29 06:53:57 crc kubenswrapper[5017]: I0129 06:53:57.709562 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-849cfbbc5-ctfjf"] Jan 29 06:53:57 crc kubenswrapper[5017]: I0129 06:53:57.786379 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c3baa3ab-8aec-489f-a037-c6a3a9ff2f2e-config\") pod \"neutron-849cfbbc5-ctfjf\" (UID: \"c3baa3ab-8aec-489f-a037-c6a3a9ff2f2e\") " pod="openstack/neutron-849cfbbc5-ctfjf" Jan 29 06:53:57 crc kubenswrapper[5017]: I0129 06:53:57.786435 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3baa3ab-8aec-489f-a037-c6a3a9ff2f2e-ovndb-tls-certs\") pod \"neutron-849cfbbc5-ctfjf\" (UID: \"c3baa3ab-8aec-489f-a037-c6a3a9ff2f2e\") " pod="openstack/neutron-849cfbbc5-ctfjf" Jan 29 06:53:57 crc kubenswrapper[5017]: I0129 06:53:57.786469 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3baa3ab-8aec-489f-a037-c6a3a9ff2f2e-combined-ca-bundle\") pod \"neutron-849cfbbc5-ctfjf\" (UID: \"c3baa3ab-8aec-489f-a037-c6a3a9ff2f2e\") " pod="openstack/neutron-849cfbbc5-ctfjf" Jan 29 06:53:57 crc kubenswrapper[5017]: I0129 06:53:57.786500 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3baa3ab-8aec-489f-a037-c6a3a9ff2f2e-internal-tls-certs\") pod \"neutron-849cfbbc5-ctfjf\" (UID: \"c3baa3ab-8aec-489f-a037-c6a3a9ff2f2e\") " pod="openstack/neutron-849cfbbc5-ctfjf" Jan 29 06:53:57 crc kubenswrapper[5017]: I0129 06:53:57.786593 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c3baa3ab-8aec-489f-a037-c6a3a9ff2f2e-httpd-config\") pod \"neutron-849cfbbc5-ctfjf\" (UID: \"c3baa3ab-8aec-489f-a037-c6a3a9ff2f2e\") " pod="openstack/neutron-849cfbbc5-ctfjf" Jan 29 06:53:57 crc kubenswrapper[5017]: I0129 06:53:57.786623 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqfp9\" (UniqueName: \"kubernetes.io/projected/c3baa3ab-8aec-489f-a037-c6a3a9ff2f2e-kube-api-access-hqfp9\") pod \"neutron-849cfbbc5-ctfjf\" (UID: \"c3baa3ab-8aec-489f-a037-c6a3a9ff2f2e\") " pod="openstack/neutron-849cfbbc5-ctfjf" Jan 29 06:53:57 crc kubenswrapper[5017]: I0129 06:53:57.786649 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3baa3ab-8aec-489f-a037-c6a3a9ff2f2e-public-tls-certs\") pod \"neutron-849cfbbc5-ctfjf\" (UID: \"c3baa3ab-8aec-489f-a037-c6a3a9ff2f2e\") " pod="openstack/neutron-849cfbbc5-ctfjf" Jan 29 06:53:57 crc kubenswrapper[5017]: I0129 06:53:57.888464 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/c3baa3ab-8aec-489f-a037-c6a3a9ff2f2e-config\") pod \"neutron-849cfbbc5-ctfjf\" (UID: \"c3baa3ab-8aec-489f-a037-c6a3a9ff2f2e\") " pod="openstack/neutron-849cfbbc5-ctfjf" Jan 29 06:53:57 crc kubenswrapper[5017]: I0129 06:53:57.888524 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3baa3ab-8aec-489f-a037-c6a3a9ff2f2e-ovndb-tls-certs\") pod \"neutron-849cfbbc5-ctfjf\" (UID: \"c3baa3ab-8aec-489f-a037-c6a3a9ff2f2e\") " pod="openstack/neutron-849cfbbc5-ctfjf" Jan 29 06:53:57 crc kubenswrapper[5017]: I0129 06:53:57.888566 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3baa3ab-8aec-489f-a037-c6a3a9ff2f2e-combined-ca-bundle\") pod \"neutron-849cfbbc5-ctfjf\" (UID: \"c3baa3ab-8aec-489f-a037-c6a3a9ff2f2e\") " pod="openstack/neutron-849cfbbc5-ctfjf" Jan 29 06:53:57 crc kubenswrapper[5017]: I0129 06:53:57.888599 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3baa3ab-8aec-489f-a037-c6a3a9ff2f2e-internal-tls-certs\") pod \"neutron-849cfbbc5-ctfjf\" (UID: \"c3baa3ab-8aec-489f-a037-c6a3a9ff2f2e\") " pod="openstack/neutron-849cfbbc5-ctfjf" Jan 29 06:53:57 crc kubenswrapper[5017]: I0129 06:53:57.888646 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c3baa3ab-8aec-489f-a037-c6a3a9ff2f2e-httpd-config\") pod \"neutron-849cfbbc5-ctfjf\" (UID: \"c3baa3ab-8aec-489f-a037-c6a3a9ff2f2e\") " pod="openstack/neutron-849cfbbc5-ctfjf" Jan 29 06:53:57 crc kubenswrapper[5017]: I0129 06:53:57.888671 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqfp9\" (UniqueName: \"kubernetes.io/projected/c3baa3ab-8aec-489f-a037-c6a3a9ff2f2e-kube-api-access-hqfp9\") pod \"neutron-849cfbbc5-ctfjf\" (UID: \"c3baa3ab-8aec-489f-a037-c6a3a9ff2f2e\") " pod="openstack/neutron-849cfbbc5-ctfjf" Jan 29 06:53:57 crc kubenswrapper[5017]: I0129 06:53:57.888705 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3baa3ab-8aec-489f-a037-c6a3a9ff2f2e-public-tls-certs\") pod \"neutron-849cfbbc5-ctfjf\" (UID: \"c3baa3ab-8aec-489f-a037-c6a3a9ff2f2e\") " pod="openstack/neutron-849cfbbc5-ctfjf" Jan 29 06:53:57 crc kubenswrapper[5017]: I0129 06:53:57.894105 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3baa3ab-8aec-489f-a037-c6a3a9ff2f2e-public-tls-certs\") pod \"neutron-849cfbbc5-ctfjf\" (UID: \"c3baa3ab-8aec-489f-a037-c6a3a9ff2f2e\") " pod="openstack/neutron-849cfbbc5-ctfjf" Jan 29 06:53:57 crc kubenswrapper[5017]: I0129 06:53:57.894701 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3baa3ab-8aec-489f-a037-c6a3a9ff2f2e-internal-tls-certs\") pod \"neutron-849cfbbc5-ctfjf\" (UID: \"c3baa3ab-8aec-489f-a037-c6a3a9ff2f2e\") " pod="openstack/neutron-849cfbbc5-ctfjf" Jan 29 06:53:57 crc kubenswrapper[5017]: I0129 06:53:57.897545 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3baa3ab-8aec-489f-a037-c6a3a9ff2f2e-ovndb-tls-certs\") pod \"neutron-849cfbbc5-ctfjf\" (UID: 
\"c3baa3ab-8aec-489f-a037-c6a3a9ff2f2e\") " pod="openstack/neutron-849cfbbc5-ctfjf" Jan 29 06:53:57 crc kubenswrapper[5017]: I0129 06:53:57.898096 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3baa3ab-8aec-489f-a037-c6a3a9ff2f2e-combined-ca-bundle\") pod \"neutron-849cfbbc5-ctfjf\" (UID: \"c3baa3ab-8aec-489f-a037-c6a3a9ff2f2e\") " pod="openstack/neutron-849cfbbc5-ctfjf" Jan 29 06:53:57 crc kubenswrapper[5017]: I0129 06:53:57.899223 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c3baa3ab-8aec-489f-a037-c6a3a9ff2f2e-httpd-config\") pod \"neutron-849cfbbc5-ctfjf\" (UID: \"c3baa3ab-8aec-489f-a037-c6a3a9ff2f2e\") " pod="openstack/neutron-849cfbbc5-ctfjf" Jan 29 06:53:57 crc kubenswrapper[5017]: I0129 06:53:57.914543 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/c3baa3ab-8aec-489f-a037-c6a3a9ff2f2e-config\") pod \"neutron-849cfbbc5-ctfjf\" (UID: \"c3baa3ab-8aec-489f-a037-c6a3a9ff2f2e\") " pod="openstack/neutron-849cfbbc5-ctfjf" Jan 29 06:53:57 crc kubenswrapper[5017]: I0129 06:53:57.920149 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqfp9\" (UniqueName: \"kubernetes.io/projected/c3baa3ab-8aec-489f-a037-c6a3a9ff2f2e-kube-api-access-hqfp9\") pod \"neutron-849cfbbc5-ctfjf\" (UID: \"c3baa3ab-8aec-489f-a037-c6a3a9ff2f2e\") " pod="openstack/neutron-849cfbbc5-ctfjf" Jan 29 06:53:58 crc kubenswrapper[5017]: I0129 06:53:58.007901 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-74d8b8b54b-w68vj"] Jan 29 06:53:58 crc kubenswrapper[5017]: I0129 06:53:58.011008 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-74d8b8b54b-w68vj" Jan 29 06:53:58 crc kubenswrapper[5017]: I0129 06:53:58.014305 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 29 06:53:58 crc kubenswrapper[5017]: I0129 06:53:58.015611 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 29 06:53:58 crc kubenswrapper[5017]: I0129 06:53:58.015802 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-rv2nk" Jan 29 06:53:58 crc kubenswrapper[5017]: I0129 06:53:58.017828 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 29 06:53:58 crc kubenswrapper[5017]: I0129 06:53:58.026125 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 29 06:53:58 crc kubenswrapper[5017]: I0129 06:53:58.028195 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-74d8b8b54b-w68vj"] Jan 29 06:53:58 crc kubenswrapper[5017]: I0129 06:53:58.033442 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 29 06:53:58 crc kubenswrapper[5017]: I0129 06:53:58.055164 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-849cfbbc5-ctfjf"
Jan 29 06:53:58 crc kubenswrapper[5017]: I0129 06:53:58.092849 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55d2d70d-8578-47fc-a3a7-df7694c3f2a3-combined-ca-bundle\") pod \"keystone-74d8b8b54b-w68vj\" (UID: \"55d2d70d-8578-47fc-a3a7-df7694c3f2a3\") " pod="openstack/keystone-74d8b8b54b-w68vj"
Jan 29 06:53:58 crc kubenswrapper[5017]: I0129 06:53:58.092925 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/55d2d70d-8578-47fc-a3a7-df7694c3f2a3-fernet-keys\") pod \"keystone-74d8b8b54b-w68vj\" (UID: \"55d2d70d-8578-47fc-a3a7-df7694c3f2a3\") " pod="openstack/keystone-74d8b8b54b-w68vj"
Jan 29 06:53:58 crc kubenswrapper[5017]: I0129 06:53:58.092948 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/55d2d70d-8578-47fc-a3a7-df7694c3f2a3-scripts\") pod \"keystone-74d8b8b54b-w68vj\" (UID: \"55d2d70d-8578-47fc-a3a7-df7694c3f2a3\") " pod="openstack/keystone-74d8b8b54b-w68vj"
Jan 29 06:53:58 crc kubenswrapper[5017]: I0129 06:53:58.093013 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/55d2d70d-8578-47fc-a3a7-df7694c3f2a3-internal-tls-certs\") pod \"keystone-74d8b8b54b-w68vj\" (UID: \"55d2d70d-8578-47fc-a3a7-df7694c3f2a3\") " pod="openstack/keystone-74d8b8b54b-w68vj"
Jan 29 06:53:58 crc kubenswrapper[5017]: I0129 06:53:58.093043 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55d2d70d-8578-47fc-a3a7-df7694c3f2a3-config-data\") pod \"keystone-74d8b8b54b-w68vj\" (UID: \"55d2d70d-8578-47fc-a3a7-df7694c3f2a3\") " pod="openstack/keystone-74d8b8b54b-w68vj"
Jan 29 06:53:58 crc kubenswrapper[5017]: I0129 06:53:58.093068 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/55d2d70d-8578-47fc-a3a7-df7694c3f2a3-public-tls-certs\") pod \"keystone-74d8b8b54b-w68vj\" (UID: \"55d2d70d-8578-47fc-a3a7-df7694c3f2a3\") " pod="openstack/keystone-74d8b8b54b-w68vj"
Jan 29 06:53:58 crc kubenswrapper[5017]: I0129 06:53:58.093085 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/55d2d70d-8578-47fc-a3a7-df7694c3f2a3-credential-keys\") pod \"keystone-74d8b8b54b-w68vj\" (UID: \"55d2d70d-8578-47fc-a3a7-df7694c3f2a3\") " pod="openstack/keystone-74d8b8b54b-w68vj"
Jan 29 06:53:58 crc kubenswrapper[5017]: I0129 06:53:58.093271 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tqxn\" (UniqueName: \"kubernetes.io/projected/55d2d70d-8578-47fc-a3a7-df7694c3f2a3-kube-api-access-5tqxn\") pod \"keystone-74d8b8b54b-w68vj\" (UID: \"55d2d70d-8578-47fc-a3a7-df7694c3f2a3\") " pod="openstack/keystone-74d8b8b54b-w68vj"
Jan 29 06:53:58 crc kubenswrapper[5017]: I0129 06:53:58.195861 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55d2d70d-8578-47fc-a3a7-df7694c3f2a3-combined-ca-bundle\") pod \"keystone-74d8b8b54b-w68vj\" (UID: \"55d2d70d-8578-47fc-a3a7-df7694c3f2a3\") " pod="openstack/keystone-74d8b8b54b-w68vj"
Jan 29 06:53:58 crc kubenswrapper[5017]: I0129 06:53:58.195945 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/55d2d70d-8578-47fc-a3a7-df7694c3f2a3-fernet-keys\") pod \"keystone-74d8b8b54b-w68vj\" (UID: \"55d2d70d-8578-47fc-a3a7-df7694c3f2a3\") " pod="openstack/keystone-74d8b8b54b-w68vj"
Jan 29 06:53:58 crc kubenswrapper[5017]: I0129 06:53:58.195994 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/55d2d70d-8578-47fc-a3a7-df7694c3f2a3-scripts\") pod \"keystone-74d8b8b54b-w68vj\" (UID: \"55d2d70d-8578-47fc-a3a7-df7694c3f2a3\") " pod="openstack/keystone-74d8b8b54b-w68vj"
Jan 29 06:53:58 crc kubenswrapper[5017]: I0129 06:53:58.196088 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/55d2d70d-8578-47fc-a3a7-df7694c3f2a3-internal-tls-certs\") pod \"keystone-74d8b8b54b-w68vj\" (UID: \"55d2d70d-8578-47fc-a3a7-df7694c3f2a3\") " pod="openstack/keystone-74d8b8b54b-w68vj"
Jan 29 06:53:58 crc kubenswrapper[5017]: I0129 06:53:58.196127 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55d2d70d-8578-47fc-a3a7-df7694c3f2a3-config-data\") pod \"keystone-74d8b8b54b-w68vj\" (UID: \"55d2d70d-8578-47fc-a3a7-df7694c3f2a3\") " pod="openstack/keystone-74d8b8b54b-w68vj"
Jan 29 06:53:58 crc kubenswrapper[5017]: I0129 06:53:58.196165 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/55d2d70d-8578-47fc-a3a7-df7694c3f2a3-public-tls-certs\") pod \"keystone-74d8b8b54b-w68vj\" (UID: \"55d2d70d-8578-47fc-a3a7-df7694c3f2a3\") " pod="openstack/keystone-74d8b8b54b-w68vj"
Jan 29 06:53:58 crc kubenswrapper[5017]: I0129 06:53:58.196187 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/55d2d70d-8578-47fc-a3a7-df7694c3f2a3-credential-keys\") pod \"keystone-74d8b8b54b-w68vj\" (UID: \"55d2d70d-8578-47fc-a3a7-df7694c3f2a3\") " pod="openstack/keystone-74d8b8b54b-w68vj"
Jan 29 06:53:58 crc kubenswrapper[5017]: I0129 06:53:58.196234 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5tqxn\" (UniqueName: \"kubernetes.io/projected/55d2d70d-8578-47fc-a3a7-df7694c3f2a3-kube-api-access-5tqxn\") pod \"keystone-74d8b8b54b-w68vj\" (UID: \"55d2d70d-8578-47fc-a3a7-df7694c3f2a3\") " pod="openstack/keystone-74d8b8b54b-w68vj"
Jan 29 06:53:58 crc kubenswrapper[5017]: I0129 06:53:58.200545 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55d2d70d-8578-47fc-a3a7-df7694c3f2a3-combined-ca-bundle\") pod \"keystone-74d8b8b54b-w68vj\" (UID: \"55d2d70d-8578-47fc-a3a7-df7694c3f2a3\") " pod="openstack/keystone-74d8b8b54b-w68vj"
Jan 29 06:53:58 crc kubenswrapper[5017]: I0129 06:53:58.201199 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/55d2d70d-8578-47fc-a3a7-df7694c3f2a3-public-tls-certs\") pod \"keystone-74d8b8b54b-w68vj\" (UID: \"55d2d70d-8578-47fc-a3a7-df7694c3f2a3\") " pod="openstack/keystone-74d8b8b54b-w68vj"
Jan 29 06:53:58 crc kubenswrapper[5017]: I0129 06:53:58.202171 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55d2d70d-8578-47fc-a3a7-df7694c3f2a3-config-data\") pod \"keystone-74d8b8b54b-w68vj\" (UID: \"55d2d70d-8578-47fc-a3a7-df7694c3f2a3\") " pod="openstack/keystone-74d8b8b54b-w68vj"
Jan 29 06:53:58 crc kubenswrapper[5017]: I0129 06:53:58.202526 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/55d2d70d-8578-47fc-a3a7-df7694c3f2a3-internal-tls-certs\") pod \"keystone-74d8b8b54b-w68vj\" (UID: \"55d2d70d-8578-47fc-a3a7-df7694c3f2a3\") " pod="openstack/keystone-74d8b8b54b-w68vj"
Jan 29 06:53:58 crc kubenswrapper[5017]: I0129 06:53:58.203324 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/55d2d70d-8578-47fc-a3a7-df7694c3f2a3-credential-keys\") pod \"keystone-74d8b8b54b-w68vj\" (UID: \"55d2d70d-8578-47fc-a3a7-df7694c3f2a3\") " pod="openstack/keystone-74d8b8b54b-w68vj"
Jan 29 06:53:58 crc kubenswrapper[5017]: I0129 06:53:58.204296 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/55d2d70d-8578-47fc-a3a7-df7694c3f2a3-scripts\") pod \"keystone-74d8b8b54b-w68vj\" (UID: \"55d2d70d-8578-47fc-a3a7-df7694c3f2a3\") " pod="openstack/keystone-74d8b8b54b-w68vj"
Jan 29 06:53:58 crc kubenswrapper[5017]: I0129 06:53:58.213794 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/55d2d70d-8578-47fc-a3a7-df7694c3f2a3-fernet-keys\") pod \"keystone-74d8b8b54b-w68vj\" (UID: \"55d2d70d-8578-47fc-a3a7-df7694c3f2a3\") " pod="openstack/keystone-74d8b8b54b-w68vj"
Jan 29 06:53:58 crc kubenswrapper[5017]: I0129 06:53:58.214505 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5tqxn\" (UniqueName: \"kubernetes.io/projected/55d2d70d-8578-47fc-a3a7-df7694c3f2a3-kube-api-access-5tqxn\") pod \"keystone-74d8b8b54b-w68vj\" (UID: \"55d2d70d-8578-47fc-a3a7-df7694c3f2a3\") " pod="openstack/keystone-74d8b8b54b-w68vj"
Jan 29 06:53:58 crc kubenswrapper[5017]: I0129 06:53:58.331929 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7b944f8dd4-2rk49"]
Jan 29 06:53:58 crc kubenswrapper[5017]: I0129 06:53:58.358739 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-74d8b8b54b-w68vj"
Jan 29 06:53:58 crc kubenswrapper[5017]: I0129 06:53:58.419129 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5b9cd4b645-x8pg4"]
Jan 29 06:53:58 crc kubenswrapper[5017]: I0129 06:53:58.420813 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5b9cd4b645-x8pg4"
Jan 29 06:53:58 crc kubenswrapper[5017]: I0129 06:53:58.437845 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5b9cd4b645-x8pg4"]
Jan 29 06:53:58 crc kubenswrapper[5017]: I0129 06:53:58.505541 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c4fe6966-2467-4c3b-b907-d3a8e88eb497-config\") pod \"neutron-5b9cd4b645-x8pg4\" (UID: \"c4fe6966-2467-4c3b-b907-d3a8e88eb497\") " pod="openstack/neutron-5b9cd4b645-x8pg4"
Jan 29 06:53:58 crc kubenswrapper[5017]: I0129 06:53:58.505628 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4fe6966-2467-4c3b-b907-d3a8e88eb497-ovndb-tls-certs\") pod \"neutron-5b9cd4b645-x8pg4\" (UID: \"c4fe6966-2467-4c3b-b907-d3a8e88eb497\") " pod="openstack/neutron-5b9cd4b645-x8pg4"
Jan 29 06:53:58 crc kubenswrapper[5017]: I0129 06:53:58.505655 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4fe6966-2467-4c3b-b907-d3a8e88eb497-public-tls-certs\") pod \"neutron-5b9cd4b645-x8pg4\" (UID: \"c4fe6966-2467-4c3b-b907-d3a8e88eb497\") " pod="openstack/neutron-5b9cd4b645-x8pg4"
Jan 29 06:53:58 crc kubenswrapper[5017]: I0129 06:53:58.505694 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4fe6966-2467-4c3b-b907-d3a8e88eb497-internal-tls-certs\") pod \"neutron-5b9cd4b645-x8pg4\" (UID: \"c4fe6966-2467-4c3b-b907-d3a8e88eb497\") " pod="openstack/neutron-5b9cd4b645-x8pg4"
Jan 29 06:53:58 crc kubenswrapper[5017]: I0129 06:53:58.505748 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c4fe6966-2467-4c3b-b907-d3a8e88eb497-httpd-config\") pod \"neutron-5b9cd4b645-x8pg4\" (UID: \"c4fe6966-2467-4c3b-b907-d3a8e88eb497\") " pod="openstack/neutron-5b9cd4b645-x8pg4"
Jan 29 06:53:58 crc kubenswrapper[5017]: I0129 06:53:58.505797 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4t7ld\" (UniqueName: \"kubernetes.io/projected/c4fe6966-2467-4c3b-b907-d3a8e88eb497-kube-api-access-4t7ld\") pod \"neutron-5b9cd4b645-x8pg4\" (UID: \"c4fe6966-2467-4c3b-b907-d3a8e88eb497\") " pod="openstack/neutron-5b9cd4b645-x8pg4"
Jan 29 06:53:58 crc kubenswrapper[5017]: I0129 06:53:58.505822 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4fe6966-2467-4c3b-b907-d3a8e88eb497-combined-ca-bundle\") pod \"neutron-5b9cd4b645-x8pg4\" (UID: \"c4fe6966-2467-4c3b-b907-d3a8e88eb497\") " pod="openstack/neutron-5b9cd4b645-x8pg4"
Jan 29 06:53:58 crc kubenswrapper[5017]: I0129 06:53:58.610285 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4fe6966-2467-4c3b-b907-d3a8e88eb497-ovndb-tls-certs\") pod \"neutron-5b9cd4b645-x8pg4\" (UID: \"c4fe6966-2467-4c3b-b907-d3a8e88eb497\") " pod="openstack/neutron-5b9cd4b645-x8pg4"
Jan 29 06:53:58 crc kubenswrapper[5017]: I0129 06:53:58.610341 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4fe6966-2467-4c3b-b907-d3a8e88eb497-public-tls-certs\") pod \"neutron-5b9cd4b645-x8pg4\" (UID: \"c4fe6966-2467-4c3b-b907-d3a8e88eb497\") " pod="openstack/neutron-5b9cd4b645-x8pg4"
Jan 29 06:53:58 crc kubenswrapper[5017]: I0129 06:53:58.610373 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4fe6966-2467-4c3b-b907-d3a8e88eb497-internal-tls-certs\") pod \"neutron-5b9cd4b645-x8pg4\" (UID: \"c4fe6966-2467-4c3b-b907-d3a8e88eb497\") " pod="openstack/neutron-5b9cd4b645-x8pg4"
Jan 29 06:53:58 crc kubenswrapper[5017]: I0129 06:53:58.610421 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c4fe6966-2467-4c3b-b907-d3a8e88eb497-httpd-config\") pod \"neutron-5b9cd4b645-x8pg4\" (UID: \"c4fe6966-2467-4c3b-b907-d3a8e88eb497\") " pod="openstack/neutron-5b9cd4b645-x8pg4"
Jan 29 06:53:58 crc kubenswrapper[5017]: I0129 06:53:58.610468 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4t7ld\" (UniqueName: \"kubernetes.io/projected/c4fe6966-2467-4c3b-b907-d3a8e88eb497-kube-api-access-4t7ld\") pod \"neutron-5b9cd4b645-x8pg4\" (UID: \"c4fe6966-2467-4c3b-b907-d3a8e88eb497\") " pod="openstack/neutron-5b9cd4b645-x8pg4"
Jan 29 06:53:58 crc kubenswrapper[5017]: I0129 06:53:58.610487 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4fe6966-2467-4c3b-b907-d3a8e88eb497-combined-ca-bundle\") pod \"neutron-5b9cd4b645-x8pg4\" (UID: \"c4fe6966-2467-4c3b-b907-d3a8e88eb497\") " pod="openstack/neutron-5b9cd4b645-x8pg4"
Jan 29 06:53:58 crc kubenswrapper[5017]: I0129 06:53:58.610542 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c4fe6966-2467-4c3b-b907-d3a8e88eb497-config\") pod \"neutron-5b9cd4b645-x8pg4\" (UID: \"c4fe6966-2467-4c3b-b907-d3a8e88eb497\") " pod="openstack/neutron-5b9cd4b645-x8pg4"
Jan 29 06:53:58 crc kubenswrapper[5017]: I0129 06:53:58.615123 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4fe6966-2467-4c3b-b907-d3a8e88eb497-public-tls-certs\") pod \"neutron-5b9cd4b645-x8pg4\" (UID: \"c4fe6966-2467-4c3b-b907-d3a8e88eb497\") " pod="openstack/neutron-5b9cd4b645-x8pg4"
Jan 29 06:53:58 crc kubenswrapper[5017]: I0129 06:53:58.616731 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4fe6966-2467-4c3b-b907-d3a8e88eb497-internal-tls-certs\") pod \"neutron-5b9cd4b645-x8pg4\" (UID: \"c4fe6966-2467-4c3b-b907-d3a8e88eb497\") " pod="openstack/neutron-5b9cd4b645-x8pg4"
Jan 29 06:53:58 crc kubenswrapper[5017]: I0129 06:53:58.617524 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/c4fe6966-2467-4c3b-b907-d3a8e88eb497-config\") pod \"neutron-5b9cd4b645-x8pg4\" (UID: \"c4fe6966-2467-4c3b-b907-d3a8e88eb497\") " pod="openstack/neutron-5b9cd4b645-x8pg4"
Jan 29 06:53:58 crc kubenswrapper[5017]: I0129 06:53:58.619857 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4fe6966-2467-4c3b-b907-d3a8e88eb497-combined-ca-bundle\") pod \"neutron-5b9cd4b645-x8pg4\" (UID: \"c4fe6966-2467-4c3b-b907-d3a8e88eb497\") " pod="openstack/neutron-5b9cd4b645-x8pg4"
Jan 29 06:53:58 crc kubenswrapper[5017]: I0129 06:53:58.620910 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c4fe6966-2467-4c3b-b907-d3a8e88eb497-httpd-config\") pod \"neutron-5b9cd4b645-x8pg4\" (UID: \"c4fe6966-2467-4c3b-b907-d3a8e88eb497\") " pod="openstack/neutron-5b9cd4b645-x8pg4"
Jan 29 06:53:58 crc kubenswrapper[5017]: I0129 06:53:58.622028 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4fe6966-2467-4c3b-b907-d3a8e88eb497-ovndb-tls-certs\") pod \"neutron-5b9cd4b645-x8pg4\" (UID: \"c4fe6966-2467-4c3b-b907-d3a8e88eb497\") " pod="openstack/neutron-5b9cd4b645-x8pg4"
Jan 29 06:53:58 crc kubenswrapper[5017]: I0129 06:53:58.631158 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4t7ld\" (UniqueName: \"kubernetes.io/projected/c4fe6966-2467-4c3b-b907-d3a8e88eb497-kube-api-access-4t7ld\") pod \"neutron-5b9cd4b645-x8pg4\" (UID: \"c4fe6966-2467-4c3b-b907-d3a8e88eb497\") " pod="openstack/neutron-5b9cd4b645-x8pg4"
Jan 29 06:53:58 crc kubenswrapper[5017]: I0129 06:53:58.755316 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5b9cd4b645-x8pg4"
Jan 29 06:53:59 crc kubenswrapper[5017]: I0129 06:53:59.449244 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Jan 29 06:53:59 crc kubenswrapper[5017]: I0129 06:53:59.449298 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Jan 29 06:53:59 crc kubenswrapper[5017]: I0129 06:53:59.486354 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Jan 29 06:53:59 crc kubenswrapper[5017]: I0129 06:53:59.494460 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Jan 29 06:53:59 crc kubenswrapper[5017]: I0129 06:53:59.702737 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-wjsjf"
Jan 29 06:53:59 crc kubenswrapper[5017]: I0129 06:53:59.741919 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af5eea12-20bf-45b6-b989-f77529ea2f04-combined-ca-bundle\") pod \"af5eea12-20bf-45b6-b989-f77529ea2f04\" (UID: \"af5eea12-20bf-45b6-b989-f77529ea2f04\") "
Jan 29 06:53:59 crc kubenswrapper[5017]: I0129 06:53:59.742008 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8w6jt\" (UniqueName: \"kubernetes.io/projected/af5eea12-20bf-45b6-b989-f77529ea2f04-kube-api-access-8w6jt\") pod \"af5eea12-20bf-45b6-b989-f77529ea2f04\" (UID: \"af5eea12-20bf-45b6-b989-f77529ea2f04\") "
Jan 29 06:53:59 crc kubenswrapper[5017]: I0129 06:53:59.742160 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/af5eea12-20bf-45b6-b989-f77529ea2f04-db-sync-config-data\") pod \"af5eea12-20bf-45b6-b989-f77529ea2f04\" (UID: \"af5eea12-20bf-45b6-b989-f77529ea2f04\") "
Jan 29 06:53:59 crc kubenswrapper[5017]: I0129 06:53:59.748100 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af5eea12-20bf-45b6-b989-f77529ea2f04-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "af5eea12-20bf-45b6-b989-f77529ea2f04" (UID: "af5eea12-20bf-45b6-b989-f77529ea2f04"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 06:53:59 crc kubenswrapper[5017]: I0129 06:53:59.758378 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af5eea12-20bf-45b6-b989-f77529ea2f04-kube-api-access-8w6jt" (OuterVolumeSpecName: "kube-api-access-8w6jt") pod "af5eea12-20bf-45b6-b989-f77529ea2f04" (UID: "af5eea12-20bf-45b6-b989-f77529ea2f04"). InnerVolumeSpecName "kube-api-access-8w6jt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 06:53:59 crc kubenswrapper[5017]: I0129 06:53:59.796448 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af5eea12-20bf-45b6-b989-f77529ea2f04-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "af5eea12-20bf-45b6-b989-f77529ea2f04" (UID: "af5eea12-20bf-45b6-b989-f77529ea2f04"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 06:53:59 crc kubenswrapper[5017]: I0129 06:53:59.847114 5017 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af5eea12-20bf-45b6-b989-f77529ea2f04-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 06:53:59 crc kubenswrapper[5017]: I0129 06:53:59.847572 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8w6jt\" (UniqueName: \"kubernetes.io/projected/af5eea12-20bf-45b6-b989-f77529ea2f04-kube-api-access-8w6jt\") on node \"crc\" DevicePath \"\""
Jan 29 06:53:59 crc kubenswrapper[5017]: I0129 06:53:59.847584 5017 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/af5eea12-20bf-45b6-b989-f77529ea2f04-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Jan 29 06:54:00 crc kubenswrapper[5017]: I0129 06:54:00.069293 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-wjsjf"
Jan 29 06:54:00 crc kubenswrapper[5017]: I0129 06:54:00.069680 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-wjsjf" event={"ID":"af5eea12-20bf-45b6-b989-f77529ea2f04","Type":"ContainerDied","Data":"61c28c89cb52f5d58983deca85a9c3b01059418b67e6ce0d6f64bfcc6566cbdf"}
Jan 29 06:54:00 crc kubenswrapper[5017]: I0129 06:54:00.069721 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="61c28c89cb52f5d58983deca85a9c3b01059418b67e6ce0d6f64bfcc6566cbdf"
Jan 29 06:54:00 crc kubenswrapper[5017]: I0129 06:54:00.090071 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"68d01bb7-534e-47c7-854c-c96384ad8df4","Type":"ContainerStarted","Data":"cc2b213cdad66ca060e35d230675b54206effa2d060e5c8b4fe4a82718618f8c"}
Jan 29 06:54:00 crc kubenswrapper[5017]: I0129 06:54:00.090183 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Jan 29 06:54:00 crc kubenswrapper[5017]: I0129 06:54:00.090386 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Jan 29 06:54:00 crc kubenswrapper[5017]: I0129 06:54:00.282787 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-8548d8d696-gk4rx"]
Jan 29 06:54:00 crc kubenswrapper[5017]: W0129 06:54:00.294167 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod929c8bb1_1ca7_4593_b8f4_1e74f9702b57.slice/crio-c8ca0ce3ccac7f5529adbba1de049ad01dddc6450e36ccf4a04a79bb0c68fdd8 WatchSource:0}: Error finding container c8ca0ce3ccac7f5529adbba1de049ad01dddc6450e36ccf4a04a79bb0c68fdd8: Status 404 returned error can't find the container with id c8ca0ce3ccac7f5529adbba1de049ad01dddc6450e36ccf4a04a79bb0c68fdd8
Jan 29 06:54:00 crc kubenswrapper[5017]: I0129 06:54:00.364082 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7b944f8dd4-2rk49"]
Jan 29 06:54:00 crc kubenswrapper[5017]: I0129 06:54:00.418536 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-685444497c-f78qp"]
Jan 29 06:54:00 crc kubenswrapper[5017]: I0129 06:54:00.427674 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-74d8b8b54b-w68vj"]
Jan 29 06:54:00 crc kubenswrapper[5017]: I0129 06:54:00.504566 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5b9cd4b645-x8pg4"]
Jan 29 06:54:00 crc kubenswrapper[5017]: I0129 06:54:00.989474 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-59754c55b6-52c5s"]
Jan 29 06:54:00 crc kubenswrapper[5017]: E0129 06:54:00.990549 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af5eea12-20bf-45b6-b989-f77529ea2f04" containerName="barbican-db-sync"
Jan 29 06:54:00 crc kubenswrapper[5017]: I0129 06:54:00.990563 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="af5eea12-20bf-45b6-b989-f77529ea2f04" containerName="barbican-db-sync"
Jan 29 06:54:00 crc kubenswrapper[5017]: I0129 06:54:00.990792 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="af5eea12-20bf-45b6-b989-f77529ea2f04" containerName="barbican-db-sync"
Jan 29 06:54:00 crc kubenswrapper[5017]: I0129 06:54:00.992030 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-59754c55b6-52c5s"
Jan 29 06:54:01 crc kubenswrapper[5017]: I0129 06:54:01.002064 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-59754c55b6-52c5s"]
Jan 29 06:54:01 crc kubenswrapper[5017]: I0129 06:54:01.005701 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data"
Jan 29 06:54:01 crc kubenswrapper[5017]: I0129 06:54:01.006773 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-8t8cw"
Jan 29 06:54:01 crc kubenswrapper[5017]: I0129 06:54:01.021869 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data"
Jan 29 06:54:01 crc kubenswrapper[5017]: I0129 06:54:01.023061 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Jan 29 06:54:01 crc kubenswrapper[5017]: I0129 06:54:01.023131 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Jan 29 06:54:01 crc kubenswrapper[5017]: I0129 06:54:01.023148 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Jan 29 06:54:01 crc kubenswrapper[5017]: I0129 06:54:01.023174 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Jan 29 06:54:01 crc kubenswrapper[5017]: I0129 06:54:01.050377 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-7dd895bb69-2ngwr"]
Jan 29 06:54:01 crc kubenswrapper[5017]: I0129 06:54:01.075677 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-7dd895bb69-2ngwr"
Jan 29 06:54:01 crc kubenswrapper[5017]: I0129 06:54:01.079662 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data"
Jan 29 06:54:01 crc kubenswrapper[5017]: I0129 06:54:01.103416 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Jan 29 06:54:01 crc kubenswrapper[5017]: I0129 06:54:01.103528 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da406cff-454a-4287-a409-5ad51c535649-combined-ca-bundle\") pod \"barbican-keystone-listener-59754c55b6-52c5s\" (UID: \"da406cff-454a-4287-a409-5ad51c535649\") " pod="openstack/barbican-keystone-listener-59754c55b6-52c5s"
Jan 29 06:54:01 crc kubenswrapper[5017]: I0129 06:54:01.103596 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da406cff-454a-4287-a409-5ad51c535649-config-data\") pod \"barbican-keystone-listener-59754c55b6-52c5s\" (UID: \"da406cff-454a-4287-a409-5ad51c535649\") " pod="openstack/barbican-keystone-listener-59754c55b6-52c5s"
Jan 29 06:54:01 crc kubenswrapper[5017]: I0129 06:54:01.103680 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/da406cff-454a-4287-a409-5ad51c535649-logs\") pod \"barbican-keystone-listener-59754c55b6-52c5s\" (UID: \"da406cff-454a-4287-a409-5ad51c535649\") " pod="openstack/barbican-keystone-listener-59754c55b6-52c5s"
Jan 29 06:54:01 crc kubenswrapper[5017]: I0129 06:54:01.103935 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvgdx\" (UniqueName: \"kubernetes.io/projected/da406cff-454a-4287-a409-5ad51c535649-kube-api-access-mvgdx\") pod \"barbican-keystone-listener-59754c55b6-52c5s\" (UID: \"da406cff-454a-4287-a409-5ad51c535649\") " pod="openstack/barbican-keystone-listener-59754c55b6-52c5s"
Jan 29 06:54:01 crc kubenswrapper[5017]: I0129 06:54:01.103993 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/da406cff-454a-4287-a409-5ad51c535649-config-data-custom\") pod \"barbican-keystone-listener-59754c55b6-52c5s\" (UID: \"da406cff-454a-4287-a409-5ad51c535649\") " pod="openstack/barbican-keystone-listener-59754c55b6-52c5s"
Jan 29 06:54:01 crc kubenswrapper[5017]: I0129 06:54:01.134369 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-7dd895bb69-2ngwr"]
Jan 29 06:54:01 crc kubenswrapper[5017]: I0129 06:54:01.175554 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8548d8d696-gk4rx" event={"ID":"929c8bb1-1ca7-4593-b8f4-1e74f9702b57","Type":"ContainerStarted","Data":"fe46baa8e6053563c1ffb0c393499ed3ca638f7d058c3634aa1ecaff13ec1bef"}
Jan 29 06:54:01 crc kubenswrapper[5017]: I0129 06:54:01.175613 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8548d8d696-gk4rx" event={"ID":"929c8bb1-1ca7-4593-b8f4-1e74f9702b57","Type":"ContainerStarted","Data":"c8ca0ce3ccac7f5529adbba1de049ad01dddc6450e36ccf4a04a79bb0c68fdd8"}
Jan 29 06:54:01 crc kubenswrapper[5017]: I0129 06:54:01.204077 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Jan 29 06:54:01 crc kubenswrapper[5017]: I0129 06:54:01.207432 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-74d8b8b54b-w68vj" event={"ID":"55d2d70d-8578-47fc-a3a7-df7694c3f2a3","Type":"ContainerStarted","Data":"7e081afe27462760e218795e4ff98c49cc0403ff7cf82b6262d050fe3d08e3d5"}
Jan 29 06:54:01 crc kubenswrapper[5017]: I0129 06:54:01.207502 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-74d8b8b54b-w68vj" event={"ID":"55d2d70d-8578-47fc-a3a7-df7694c3f2a3","Type":"ContainerStarted","Data":"7a46e79071a81f4c50cf3ec04293853707a5af6efc286f2b0229471a3f0428bd"}
Jan 29 06:54:01 crc kubenswrapper[5017]: I0129 06:54:01.207702 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtmsh\" (UniqueName: \"kubernetes.io/projected/c118297d-1c5d-4234-930c-9c0e6b5bb29b-kube-api-access-gtmsh\") pod \"barbican-worker-7dd895bb69-2ngwr\" (UID: \"c118297d-1c5d-4234-930c-9c0e6b5bb29b\") " pod="openstack/barbican-worker-7dd895bb69-2ngwr"
Jan 29 06:54:01 crc kubenswrapper[5017]: I0129 06:54:01.234139 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da406cff-454a-4287-a409-5ad51c535649-combined-ca-bundle\") pod \"barbican-keystone-listener-59754c55b6-52c5s\" (UID: \"da406cff-454a-4287-a409-5ad51c535649\") " pod="openstack/barbican-keystone-listener-59754c55b6-52c5s"
Jan 29 06:54:01 crc kubenswrapper[5017]: I0129 06:54:01.234254 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c118297d-1c5d-4234-930c-9c0e6b5bb29b-combined-ca-bundle\") pod \"barbican-worker-7dd895bb69-2ngwr\" (UID: \"c118297d-1c5d-4234-930c-9c0e6b5bb29b\") " pod="openstack/barbican-worker-7dd895bb69-2ngwr"
Jan 29 06:54:01 crc kubenswrapper[5017]: I0129 06:54:01.234311 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c118297d-1c5d-4234-930c-9c0e6b5bb29b-logs\") pod \"barbican-worker-7dd895bb69-2ngwr\" (UID: \"c118297d-1c5d-4234-930c-9c0e6b5bb29b\") " pod="openstack/barbican-worker-7dd895bb69-2ngwr"
Jan 29 06:54:01 crc kubenswrapper[5017]: I0129 06:54:01.234337 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c118297d-1c5d-4234-930c-9c0e6b5bb29b-config-data-custom\") pod \"barbican-worker-7dd895bb69-2ngwr\" (UID: \"c118297d-1c5d-4234-930c-9c0e6b5bb29b\") " pod="openstack/barbican-worker-7dd895bb69-2ngwr"
Jan 29 06:54:01 crc kubenswrapper[5017]: I0129 06:54:01.234414 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da406cff-454a-4287-a409-5ad51c535649-config-data\") pod \"barbican-keystone-listener-59754c55b6-52c5s\" (UID: \"da406cff-454a-4287-a409-5ad51c535649\") " pod="openstack/barbican-keystone-listener-59754c55b6-52c5s"
Jan 29 06:54:01 crc kubenswrapper[5017]: I0129 06:54:01.234684 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c118297d-1c5d-4234-930c-9c0e6b5bb29b-config-data\") pod \"barbican-worker-7dd895bb69-2ngwr\" (UID: \"c118297d-1c5d-4234-930c-9c0e6b5bb29b\") " pod="openstack/barbican-worker-7dd895bb69-2ngwr"
Jan 29 06:54:01 crc kubenswrapper[5017]: I0129 06:54:01.234823 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/da406cff-454a-4287-a409-5ad51c535649-logs\") pod \"barbican-keystone-listener-59754c55b6-52c5s\" (UID: \"da406cff-454a-4287-a409-5ad51c535649\") " pod="openstack/barbican-keystone-listener-59754c55b6-52c5s"
Jan 29 06:54:01 crc kubenswrapper[5017]: I0129 06:54:01.234993 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvgdx\" (UniqueName: \"kubernetes.io/projected/da406cff-454a-4287-a409-5ad51c535649-kube-api-access-mvgdx\") pod \"barbican-keystone-listener-59754c55b6-52c5s\" (UID: \"da406cff-454a-4287-a409-5ad51c535649\") " pod="openstack/barbican-keystone-listener-59754c55b6-52c5s"
Jan 29 06:54:01 crc kubenswrapper[5017]: I0129 06:54:01.209137 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-74d8b8b54b-w68vj"
Jan 29 06:54:01 crc kubenswrapper[5017]: I0129 06:54:01.236007 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5b9cd4b645-x8pg4" event={"ID":"c4fe6966-2467-4c3b-b907-d3a8e88eb497","Type":"ContainerStarted","Data":"f39df6f24b1e86a7986bd4e56e3006d14c28347df43c84bc5c98621eef1720f6"}
Jan 29 06:54:01 crc kubenswrapper[5017]: I0129 06:54:01.236069 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5b9cd4b645-x8pg4" event={"ID":"c4fe6966-2467-4c3b-b907-d3a8e88eb497","Type":"ContainerStarted","Data":"d0cacba71ede2c5bf64f44c7d63369cd1baf86505779cc7d41e57a8234c5da19"}
Jan 29 06:54:01 crc kubenswrapper[5017]: I0129 06:54:01.238108 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/da406cff-454a-4287-a409-5ad51c535649-config-data-custom\") pod \"barbican-keystone-listener-59754c55b6-52c5s\" (UID: \"da406cff-454a-4287-a409-5ad51c535649\") " pod="openstack/barbican-keystone-listener-59754c55b6-52c5s"
Jan 29 06:54:01 crc kubenswrapper[5017]: I0129 06:54:01.239384 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7b944f8dd4-2rk49" event={"ID":"a9431961-983b-4257-bbe6-cf1bac1261c0","Type":"ContainerStarted","Data":"45e090f0d59664b2e8d3d49df5caae9cb89cd1af2235356719d326f7e426ef00"}
Jan 29 06:54:01 crc kubenswrapper[5017]: I0129 06:54:01.239475 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7b944f8dd4-2rk49" event={"ID":"a9431961-983b-4257-bbe6-cf1bac1261c0","Type":"ContainerStarted","Data":"ee89885209f49accb6f3472c677e7e5c9f5dfa4e14311490a83d1cf8fdf548d7"}
Jan 29 06:54:01 crc kubenswrapper[5017]: I0129 06:54:01.244999 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/da406cff-454a-4287-a409-5ad51c535649-logs\") pod \"barbican-keystone-listener-59754c55b6-52c5s\" (UID: \"da406cff-454a-4287-a409-5ad51c535649\") " pod="openstack/barbican-keystone-listener-59754c55b6-52c5s"
Jan 29 06:54:01 crc kubenswrapper[5017]: I0129 06:54:01.264436 5017 generic.go:334] "Generic (PLEG): container finished" podID="e9db5f91-efe7-4015-ad9b-882eecb4f8dd" containerID="40a858ca7296e36ee9740c2c7666efe576fdf6436ac9e368fa039373962f3402" exitCode=0
Jan 29 06:54:01 crc kubenswrapper[5017]: I0129 06:54:01.266297 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-685444497c-f78qp" event={"ID":"e9db5f91-efe7-4015-ad9b-882eecb4f8dd","Type":"ContainerDied","Data":"40a858ca7296e36ee9740c2c7666efe576fdf6436ac9e368fa039373962f3402"}
Jan 29 06:54:01 crc kubenswrapper[5017]: I0129 06:54:01.266333 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-685444497c-f78qp"]
Jan 29 06:54:01 crc kubenswrapper[5017]: I0129 06:54:01.266353 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-685444497c-f78qp" event={"ID":"e9db5f91-efe7-4015-ad9b-882eecb4f8dd","Type":"ContainerStarted","Data":"c86bd7ac9746da5781df0f3ffd4ebbe9de6a5b7fc89e276b83f22c3b46e38589"}
Jan 29 06:54:01 crc kubenswrapper[5017]: I0129 06:54:01.267417 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da406cff-454a-4287-a409-5ad51c535649-combined-ca-bundle\") pod \"barbican-keystone-listener-59754c55b6-52c5s\" (UID: \"da406cff-454a-4287-a409-5ad51c535649\") " pod="openstack/barbican-keystone-listener-59754c55b6-52c5s"
Jan 29 06:54:01 crc kubenswrapper[5017]: I0129 06:54:01.281600 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvgdx\" (UniqueName: \"kubernetes.io/projected/da406cff-454a-4287-a409-5ad51c535649-kube-api-access-mvgdx\") pod \"barbican-keystone-listener-59754c55b6-52c5s\" (UID: \"da406cff-454a-4287-a409-5ad51c535649\") " pod="openstack/barbican-keystone-listener-59754c55b6-52c5s"
Jan 29 06:54:01 crc kubenswrapper[5017]: I0129 06:54:01.284202 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da406cff-454a-4287-a409-5ad51c535649-config-data\") pod \"barbican-keystone-listener-59754c55b6-52c5s\" (UID: \"da406cff-454a-4287-a409-5ad51c535649\") " pod="openstack/barbican-keystone-listener-59754c55b6-52c5s"
Jan 29 06:54:01 crc kubenswrapper[5017]: I0129 06:54:01.344170 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c118297d-1c5d-4234-930c-9c0e6b5bb29b-combined-ca-bundle\") pod \"barbican-worker-7dd895bb69-2ngwr\" (UID: \"c118297d-1c5d-4234-930c-9c0e6b5bb29b\") " pod="openstack/barbican-worker-7dd895bb69-2ngwr"
Jan 29 06:54:01 crc kubenswrapper[5017]: I0129 06:54:01.344225 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c118297d-1c5d-4234-930c-9c0e6b5bb29b-logs\") pod \"barbican-worker-7dd895bb69-2ngwr\" (UID: \"c118297d-1c5d-4234-930c-9c0e6b5bb29b\") " pod="openstack/barbican-worker-7dd895bb69-2ngwr"
Jan 29 06:54:01 crc kubenswrapper[5017]: I0129 06:54:01.344246 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c118297d-1c5d-4234-930c-9c0e6b5bb29b-config-data-custom\") pod \"barbican-worker-7dd895bb69-2ngwr\" (UID: \"c118297d-1c5d-4234-930c-9c0e6b5bb29b\") " pod="openstack/barbican-worker-7dd895bb69-2ngwr"
Jan 29 06:54:01 crc kubenswrapper[5017]: I0129 06:54:01.344380 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c118297d-1c5d-4234-930c-9c0e6b5bb29b-config-data\") pod \"barbican-worker-7dd895bb69-2ngwr\" (UID: \"c118297d-1c5d-4234-930c-9c0e6b5bb29b\") " pod="openstack/barbican-worker-7dd895bb69-2ngwr"
Jan 29 06:54:01 crc kubenswrapper[5017]: I0129 06:54:01.344531 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtmsh\" (UniqueName: \"kubernetes.io/projected/c118297d-1c5d-4234-930c-9c0e6b5bb29b-kube-api-access-gtmsh\") pod \"barbican-worker-7dd895bb69-2ngwr\" (UID: \"c118297d-1c5d-4234-930c-9c0e6b5bb29b\") " pod="openstack/barbican-worker-7dd895bb69-2ngwr"
Jan 29 06:54:01 crc kubenswrapper[5017]: I0129 06:54:01.351947 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/da406cff-454a-4287-a409-5ad51c535649-config-data-custom\") pod \"barbican-keystone-listener-59754c55b6-52c5s\" (UID: \"da406cff-454a-4287-a409-5ad51c535649\") " pod="openstack/barbican-keystone-listener-59754c55b6-52c5s"
Jan 29 06:54:01 crc kubenswrapper[5017]: I0129 06:54:01.354623 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c118297d-1c5d-4234-930c-9c0e6b5bb29b-logs\") pod \"barbican-worker-7dd895bb69-2ngwr\" (UID: \"c118297d-1c5d-4234-930c-9c0e6b5bb29b\") " pod="openstack/barbican-worker-7dd895bb69-2ngwr"
Jan 29 06:54:01 crc kubenswrapper[5017]: I0129 06:54:01.362251 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c118297d-1c5d-4234-930c-9c0e6b5bb29b-config-data-custom\") pod \"barbican-worker-7dd895bb69-2ngwr\" (UID: \"c118297d-1c5d-4234-930c-9c0e6b5bb29b\") " pod="openstack/barbican-worker-7dd895bb69-2ngwr"
Jan 29 06:54:01 crc kubenswrapper[5017]: I0129 06:54:01.362560 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c118297d-1c5d-4234-930c-9c0e6b5bb29b-combined-ca-bundle\") pod \"barbican-worker-7dd895bb69-2ngwr\" (UID: \"c118297d-1c5d-4234-930c-9c0e6b5bb29b\") " pod="openstack/barbican-worker-7dd895bb69-2ngwr"
Jan 29 06:54:01 crc kubenswrapper[5017]: I0129 06:54:01.391041 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-66cdd4b5b5-zns8c"]
Jan 29 06:54:01 crc kubenswrapper[5017]: I0129 06:54:01.392862 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-66cdd4b5b5-zns8c"
Jan 29 06:54:01 crc kubenswrapper[5017]: I0129 06:54:01.404006 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtmsh\" (UniqueName: \"kubernetes.io/projected/c118297d-1c5d-4234-930c-9c0e6b5bb29b-kube-api-access-gtmsh\") pod \"barbican-worker-7dd895bb69-2ngwr\" (UID: \"c118297d-1c5d-4234-930c-9c0e6b5bb29b\") " pod="openstack/barbican-worker-7dd895bb69-2ngwr"
Jan 29 06:54:01 crc kubenswrapper[5017]: E0129 06:54:01.416601 5017 mount_linux.go:282] Mount failed: exit status 32
Jan 29 06:54:01 crc kubenswrapper[5017]: Mounting command: mount
Jan 29 06:54:01 crc kubenswrapper[5017]: Mounting arguments: --no-canonicalize -o bind /proc/5017/fd/28 /var/lib/kubelet/pods/e9db5f91-efe7-4015-ad9b-882eecb4f8dd/volume-subpaths/dns-svc/dnsmasq-dns/1
Jan 29 06:54:01 crc kubenswrapper[5017]: Output: mount: /var/lib/kubelet/pods/e9db5f91-efe7-4015-ad9b-882eecb4f8dd/volume-subpaths/dns-svc/dnsmasq-dns/1: mount(2) system call failed: No such file or directory.
Jan 29 06:54:01 crc kubenswrapper[5017]: I0129 06:54:01.430636 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-66cdd4b5b5-zns8c"]
Jan 29 06:54:01 crc kubenswrapper[5017]: I0129 06:54:01.434781 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c118297d-1c5d-4234-930c-9c0e6b5bb29b-config-data\") pod \"barbican-worker-7dd895bb69-2ngwr\" (UID: \"c118297d-1c5d-4234-930c-9c0e6b5bb29b\") " pod="openstack/barbican-worker-7dd895bb69-2ngwr"
Jan 29 06:54:01 crc kubenswrapper[5017]: E0129 06:54:01.497619 5017 kubelet_pods.go:349] "Failed to prepare subPath for volumeMount of the container" err=<
Jan 29 06:54:01 crc kubenswrapper[5017]: error mounting /var/lib/kubelet/pods/e9db5f91-efe7-4015-ad9b-882eecb4f8dd/volumes/kubernetes.io~configmap/dns-svc/..2026_01_29_06_53_55.2143207188/dns-svc: mount failed: exit status 32
Jan 29 06:54:01 crc kubenswrapper[5017]: Mounting command: mount
Jan 29 06:54:01 crc kubenswrapper[5017]: Mounting arguments: --no-canonicalize -o bind /proc/5017/fd/28 /var/lib/kubelet/pods/e9db5f91-efe7-4015-ad9b-882eecb4f8dd/volume-subpaths/dns-svc/dnsmasq-dns/1
Jan 29 06:54:01 crc kubenswrapper[5017]: Output: mount: /var/lib/kubelet/pods/e9db5f91-efe7-4015-ad9b-882eecb4f8dd/volume-subpaths/dns-svc/dnsmasq-dns/1: mount(2) system call failed: No such file or directory.
Jan 29 06:54:01 crc kubenswrapper[5017]: > containerName="dnsmasq-dns" volumeMountName="dns-svc"
Jan 29 06:54:01 crc kubenswrapper[5017]: I0129 06:54:01.497825 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-74d8b8b54b-w68vj" podStartSLOduration=4.497799975 podStartE2EDuration="4.497799975s" podCreationTimestamp="2026-01-29 06:53:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:54:01.321407108 +0000 UTC m=+1127.695854718" watchObservedRunningTime="2026-01-29 06:54:01.497799975 +0000 UTC m=+1127.872247585"
Jan 29 06:54:01 crc kubenswrapper[5017]: E0129 06:54:01.497830 5017 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n587h5c6h66fhc7hcch59hffh5f4h665h5cdh57h68dh66bh5f9hcbh96h6fh578h56dh656h658h76h65ch68fh655h5bbh4h6fh5c5h668h697hfdq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-swift-storage-0,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-swift-storage-0,SubPath:dns-swift-storage-0,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-nb,SubPath:ovsdbserver-nb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-sb,SubPath:ovsdbserver-sb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mwjs9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-685444497c-f78qp_openstack(e9db5f91-efe7-4015-ad9b-882eecb4f8dd): CreateContainerConfigError: failed to prepare subPath for volumeMount \"dns-svc\" of container \"dnsmasq-dns\"" logger="UnhandledError"
Jan 29 06:54:01 crc kubenswrapper[5017]: E0129 06:54:01.500104 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerConfigError: \"failed to prepare subPath for volumeMount \\\"dns-svc\\\" of container \\\"dnsmasq-dns\\\"\"" pod="openstack/dnsmasq-dns-685444497c-f78qp" podUID="e9db5f91-efe7-4015-ad9b-882eecb4f8dd"
Jan 29 06:54:01 crc kubenswrapper[5017]: I0129 06:54:01.503347 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-59754c55b6-52c5s"
Jan 29 06:54:01 crc kubenswrapper[5017]: I0129 06:54:01.540091 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-7dd895bb69-2ngwr"
Jan 29 06:54:01 crc kubenswrapper[5017]: I0129 06:54:01.564912 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0ffc178e-6c5b-45ce-8810-2e897e83a0a9-dns-swift-storage-0\") pod \"dnsmasq-dns-66cdd4b5b5-zns8c\" (UID: \"0ffc178e-6c5b-45ce-8810-2e897e83a0a9\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-zns8c"
Jan 29 06:54:01 crc kubenswrapper[5017]: I0129 06:54:01.565103 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0ffc178e-6c5b-45ce-8810-2e897e83a0a9-ovsdbserver-sb\") pod \"dnsmasq-dns-66cdd4b5b5-zns8c\" (UID: \"0ffc178e-6c5b-45ce-8810-2e897e83a0a9\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-zns8c"
Jan 29 06:54:01 crc kubenswrapper[5017]: I0129 06:54:01.565191 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0ffc178e-6c5b-45ce-8810-2e897e83a0a9-ovsdbserver-nb\") pod \"dnsmasq-dns-66cdd4b5b5-zns8c\" (UID: \"0ffc178e-6c5b-45ce-8810-2e897e83a0a9\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-zns8c"
Jan 29 06:54:01 crc kubenswrapper[5017]: I0129 06:54:01.565301 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0ffc178e-6c5b-45ce-8810-2e897e83a0a9-dns-svc\") pod \"dnsmasq-dns-66cdd4b5b5-zns8c\" (UID: \"0ffc178e-6c5b-45ce-8810-2e897e83a0a9\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-zns8c"
Jan 29 06:54:01 crc kubenswrapper[5017]: I0129 06:54:01.565415 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ffc178e-6c5b-45ce-8810-2e897e83a0a9-config\") pod \"dnsmasq-dns-66cdd4b5b5-zns8c\" (UID: \"0ffc178e-6c5b-45ce-8810-2e897e83a0a9\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-zns8c"
Jan 29 06:54:01 crc kubenswrapper[5017]: I0129 06:54:01.565523 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqlcc\" (UniqueName: \"kubernetes.io/projected/0ffc178e-6c5b-45ce-8810-2e897e83a0a9-kube-api-access-cqlcc\") pod \"dnsmasq-dns-66cdd4b5b5-zns8c\" (UID: \"0ffc178e-6c5b-45ce-8810-2e897e83a0a9\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-zns8c"
Jan 29 06:54:01 crc kubenswrapper[5017]: I0129 06:54:01.578093 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-74b97c48c4-nf2gp"]
Jan 29 06:54:01 crc kubenswrapper[5017]: I0129 06:54:01.580067 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-74b97c48c4-nf2gp"
Jan 29 06:54:01 crc kubenswrapper[5017]: I0129 06:54:01.587454 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data"
Jan 29 06:54:01 crc kubenswrapper[5017]: I0129 06:54:01.615907 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-74b97c48c4-nf2gp"]
Jan 29 06:54:01 crc kubenswrapper[5017]: I0129 06:54:01.668328 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0ffc178e-6c5b-45ce-8810-2e897e83a0a9-dns-swift-storage-0\") pod \"dnsmasq-dns-66cdd4b5b5-zns8c\" (UID: \"0ffc178e-6c5b-45ce-8810-2e897e83a0a9\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-zns8c"
Jan 29 06:54:01 crc kubenswrapper[5017]: I0129 06:54:01.668409 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0ffc178e-6c5b-45ce-8810-2e897e83a0a9-ovsdbserver-sb\") pod \"dnsmasq-dns-66cdd4b5b5-zns8c\" (UID: \"0ffc178e-6c5b-45ce-8810-2e897e83a0a9\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-zns8c"
Jan 29 06:54:01 crc kubenswrapper[5017]: I0129 06:54:01.668452 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0ffc178e-6c5b-45ce-8810-2e897e83a0a9-ovsdbserver-nb\") pod \"dnsmasq-dns-66cdd4b5b5-zns8c\" (UID: \"0ffc178e-6c5b-45ce-8810-2e897e83a0a9\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-zns8c"
Jan 29 06:54:01 crc kubenswrapper[5017]: I0129 06:54:01.668510 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0ffc178e-6c5b-45ce-8810-2e897e83a0a9-dns-svc\") pod \"dnsmasq-dns-66cdd4b5b5-zns8c\" (UID: \"0ffc178e-6c5b-45ce-8810-2e897e83a0a9\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-zns8c"
Jan 29 06:54:01 crc kubenswrapper[5017]: I0129 06:54:01.668564 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ffc178e-6c5b-45ce-8810-2e897e83a0a9-config\") pod \"dnsmasq-dns-66cdd4b5b5-zns8c\" (UID: \"0ffc178e-6c5b-45ce-8810-2e897e83a0a9\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-zns8c"
Jan 29 06:54:01 crc kubenswrapper[5017]: I0129 06:54:01.668601 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqlcc\" (UniqueName: \"kubernetes.io/projected/0ffc178e-6c5b-45ce-8810-2e897e83a0a9-kube-api-access-cqlcc\") pod \"dnsmasq-dns-66cdd4b5b5-zns8c\" (UID: \"0ffc178e-6c5b-45ce-8810-2e897e83a0a9\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-zns8c"
Jan 29 06:54:01 crc kubenswrapper[5017]: I0129 06:54:01.671137 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0ffc178e-6c5b-45ce-8810-2e897e83a0a9-dns-swift-storage-0\") pod \"dnsmasq-dns-66cdd4b5b5-zns8c\" (UID: \"0ffc178e-6c5b-45ce-8810-2e897e83a0a9\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-zns8c"
Jan 29 06:54:01 crc kubenswrapper[5017]: I0129 06:54:01.672464 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0ffc178e-6c5b-45ce-8810-2e897e83a0a9-ovsdbserver-sb\") pod \"dnsmasq-dns-66cdd4b5b5-zns8c\" (UID: \"0ffc178e-6c5b-45ce-8810-2e897e83a0a9\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-zns8c"
Jan 29 06:54:01 crc kubenswrapper[5017]: I0129 06:54:01.673058 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0ffc178e-6c5b-45ce-8810-2e897e83a0a9-ovsdbserver-nb\") pod \"dnsmasq-dns-66cdd4b5b5-zns8c\" (UID: \"0ffc178e-6c5b-45ce-8810-2e897e83a0a9\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-zns8c"
Jan 29 06:54:01 crc kubenswrapper[5017]: I0129 06:54:01.673627 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ffc178e-6c5b-45ce-8810-2e897e83a0a9-config\") pod \"dnsmasq-dns-66cdd4b5b5-zns8c\" (UID: \"0ffc178e-6c5b-45ce-8810-2e897e83a0a9\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-zns8c"
Jan 29 06:54:01 crc kubenswrapper[5017]: I0129 06:54:01.674223 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0ffc178e-6c5b-45ce-8810-2e897e83a0a9-dns-svc\") pod \"dnsmasq-dns-66cdd4b5b5-zns8c\" (UID: \"0ffc178e-6c5b-45ce-8810-2e897e83a0a9\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-zns8c"
Jan 29 06:54:01 crc kubenswrapper[5017]: I0129 06:54:01.695799 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqlcc\" (UniqueName: \"kubernetes.io/projected/0ffc178e-6c5b-45ce-8810-2e897e83a0a9-kube-api-access-cqlcc\") pod \"dnsmasq-dns-66cdd4b5b5-zns8c\" (UID: \"0ffc178e-6c5b-45ce-8810-2e897e83a0a9\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-zns8c"
Jan 29 06:54:01 crc kubenswrapper[5017]: I0129 06:54:01.700679 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-849cfbbc5-ctfjf"]
Jan 29 06:54:01 crc kubenswrapper[5017]: I0129 06:54:01.707696 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-66cdd4b5b5-zns8c"
Jan 29 06:54:01 crc kubenswrapper[5017]: I0129 06:54:01.776114 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e995b541-6d6d-4dbd-a7d6-e4b607becac7-logs\") pod \"barbican-api-74b97c48c4-nf2gp\" (UID: \"e995b541-6d6d-4dbd-a7d6-e4b607becac7\") " pod="openstack/barbican-api-74b97c48c4-nf2gp"
Jan 29 06:54:01 crc kubenswrapper[5017]: I0129 06:54:01.776205 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e995b541-6d6d-4dbd-a7d6-e4b607becac7-config-data\") pod \"barbican-api-74b97c48c4-nf2gp\" (UID: \"e995b541-6d6d-4dbd-a7d6-e4b607becac7\") " pod="openstack/barbican-api-74b97c48c4-nf2gp"
Jan 29 06:54:01 crc kubenswrapper[5017]: I0129 06:54:01.776240 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e995b541-6d6d-4dbd-a7d6-e4b607becac7-combined-ca-bundle\") pod \"barbican-api-74b97c48c4-nf2gp\" (UID: \"e995b541-6d6d-4dbd-a7d6-e4b607becac7\") " pod="openstack/barbican-api-74b97c48c4-nf2gp"
Jan 29 06:54:01 crc kubenswrapper[5017]: I0129 06:54:01.776368 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqpdc\" (UniqueName: \"kubernetes.io/projected/e995b541-6d6d-4dbd-a7d6-e4b607becac7-kube-api-access-sqpdc\") pod \"barbican-api-74b97c48c4-nf2gp\" (UID: \"e995b541-6d6d-4dbd-a7d6-e4b607becac7\") " pod="openstack/barbican-api-74b97c48c4-nf2gp"
Jan 29 06:54:01 crc kubenswrapper[5017]: I0129 06:54:01.776503 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e995b541-6d6d-4dbd-a7d6-e4b607becac7-config-data-custom\") pod \"barbican-api-74b97c48c4-nf2gp\" (UID: \"e995b541-6d6d-4dbd-a7d6-e4b607becac7\") " pod="openstack/barbican-api-74b97c48c4-nf2gp"
Jan 29 06:54:01 crc kubenswrapper[5017]: I0129 06:54:01.878143 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqpdc\" (UniqueName: \"kubernetes.io/projected/e995b541-6d6d-4dbd-a7d6-e4b607becac7-kube-api-access-sqpdc\") pod \"barbican-api-74b97c48c4-nf2gp\" (UID: \"e995b541-6d6d-4dbd-a7d6-e4b607becac7\") " pod="openstack/barbican-api-74b97c48c4-nf2gp"
Jan 29 06:54:01 crc kubenswrapper[5017]: I0129 06:54:01.878271 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e995b541-6d6d-4dbd-a7d6-e4b607becac7-config-data-custom\") pod \"barbican-api-74b97c48c4-nf2gp\" (UID: \"e995b541-6d6d-4dbd-a7d6-e4b607becac7\") " pod="openstack/barbican-api-74b97c48c4-nf2gp"
Jan 29 06:54:01 crc kubenswrapper[5017]: I0129 06:54:01.879153 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e995b541-6d6d-4dbd-a7d6-e4b607becac7-logs\") pod \"barbican-api-74b97c48c4-nf2gp\" (UID: \"e995b541-6d6d-4dbd-a7d6-e4b607becac7\") " pod="openstack/barbican-api-74b97c48c4-nf2gp"
Jan 29 06:54:01 crc kubenswrapper[5017]: I0129 06:54:01.879475 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e995b541-6d6d-4dbd-a7d6-e4b607becac7-config-data\") pod \"barbican-api-74b97c48c4-nf2gp\" (UID: \"e995b541-6d6d-4dbd-a7d6-e4b607becac7\") " pod="openstack/barbican-api-74b97c48c4-nf2gp"
Jan 29 06:54:01 crc kubenswrapper[5017]: I0129 06:54:01.882490 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e995b541-6d6d-4dbd-a7d6-e4b607becac7-logs\") pod \"barbican-api-74b97c48c4-nf2gp\" (UID: \"e995b541-6d6d-4dbd-a7d6-e4b607becac7\") " pod="openstack/barbican-api-74b97c48c4-nf2gp"
Jan 29 06:54:01 crc kubenswrapper[5017]: I0129 06:54:01.882654 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e995b541-6d6d-4dbd-a7d6-e4b607becac7-combined-ca-bundle\") pod \"barbican-api-74b97c48c4-nf2gp\" (UID: \"e995b541-6d6d-4dbd-a7d6-e4b607becac7\") " pod="openstack/barbican-api-74b97c48c4-nf2gp"
Jan 29 06:54:01 crc kubenswrapper[5017]: I0129 06:54:01.886526 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e995b541-6d6d-4dbd-a7d6-e4b607becac7-combined-ca-bundle\") pod \"barbican-api-74b97c48c4-nf2gp\" (UID: \"e995b541-6d6d-4dbd-a7d6-e4b607becac7\") " pod="openstack/barbican-api-74b97c48c4-nf2gp"
Jan 29 06:54:01 crc kubenswrapper[5017]: I0129 06:54:01.907273 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqpdc\" (UniqueName: \"kubernetes.io/projected/e995b541-6d6d-4dbd-a7d6-e4b607becac7-kube-api-access-sqpdc\") pod \"barbican-api-74b97c48c4-nf2gp\" (UID: \"e995b541-6d6d-4dbd-a7d6-e4b607becac7\") " pod="openstack/barbican-api-74b97c48c4-nf2gp"
Jan 29 06:54:01 crc kubenswrapper[5017]: I0129 06:54:01.914309 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e995b541-6d6d-4dbd-a7d6-e4b607becac7-config-data-custom\") pod \"barbican-api-74b97c48c4-nf2gp\" (UID: \"e995b541-6d6d-4dbd-a7d6-e4b607becac7\") " pod="openstack/barbican-api-74b97c48c4-nf2gp"
Jan 29 06:54:01 crc kubenswrapper[5017]: I0129 06:54:01.924766 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e995b541-6d6d-4dbd-a7d6-e4b607becac7-config-data\") pod \"barbican-api-74b97c48c4-nf2gp\" (UID: \"e995b541-6d6d-4dbd-a7d6-e4b607becac7\") " pod="openstack/barbican-api-74b97c48c4-nf2gp"
Jan 29 06:54:02 crc kubenswrapper[5017]: I0129 06:54:02.038625 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-74b97c48c4-nf2gp"
Jan 29 06:54:02 crc kubenswrapper[5017]: I0129 06:54:02.061182 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-7dd895bb69-2ngwr"]
Jan 29 06:54:02 crc kubenswrapper[5017]: I0129 06:54:02.388035 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7b944f8dd4-2rk49" podUID="a9431961-983b-4257-bbe6-cf1bac1261c0" containerName="neutron-api" containerID="cri-o://45e090f0d59664b2e8d3d49df5caae9cb89cd1af2235356719d326f7e426ef00" gracePeriod=30
Jan 29 06:54:02 crc kubenswrapper[5017]: I0129 06:54:02.388868 5017 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 29 06:54:02 crc kubenswrapper[5017]: I0129 06:54:02.388889 5017 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 29 06:54:02 crc kubenswrapper[5017]: I0129 06:54:02.389157 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7dd895bb69-2ngwr" event={"ID":"c118297d-1c5d-4234-930c-9c0e6b5bb29b","Type":"ContainerStarted","Data":"5719ab7c6b932755168b57ebbb704c2aa3a3d0069b5e7afb8cdc762eb568de23"}
Jan 29 06:54:02 crc kubenswrapper[5017]: I0129 06:54:02.389217 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8548d8d696-gk4rx" event={"ID":"929c8bb1-1ca7-4593-b8f4-1e74f9702b57","Type":"ContainerStarted","Data":"8b94ddb59bba453b2407f45b2832bf1bb4549afffb1634d0691aab252bffafe8"}
Jan 29 06:54:02 crc kubenswrapper[5017]: I0129 06:54:02.389242 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-8548d8d696-gk4rx"
Jan 29 06:54:02 crc kubenswrapper[5017]: I0129 06:54:02.389270 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-8548d8d696-gk4rx"
Jan 29 06:54:02 crc kubenswrapper[5017]: I0129 06:54:02.389282 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5b9cd4b645-x8pg4"
Jan 29 06:54:02 crc kubenswrapper[5017]: I0129 06:54:02.389292 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5b9cd4b645-x8pg4" event={"ID":"c4fe6966-2467-4c3b-b907-d3a8e88eb497","Type":"ContainerStarted","Data":"351bf740841608e8a491520445ac2fda75daae4f0798aac354283b20b04c8d8b"}
Jan 29 06:54:02 crc kubenswrapper[5017]: I0129 06:54:02.389301 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-849cfbbc5-ctfjf" event={"ID":"c3baa3ab-8aec-489f-a037-c6a3a9ff2f2e","Type":"ContainerStarted","Data":"6bab2ff025e0387bec716880abfa7594c3ecb13caf41b8149ca789c498be9024"}
Jan 29 06:54:02 crc kubenswrapper[5017]: I0129 06:54:02.389315 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-849cfbbc5-ctfjf" event={"ID":"c3baa3ab-8aec-489f-a037-c6a3a9ff2f2e","Type":"ContainerStarted","Data":"362770a655341470885508ea66bdb58d7e182ee582f950977cdc04ef72c5ae00"}
Jan 29 06:54:02 crc kubenswrapper[5017]: I0129 06:54:02.389325 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7b944f8dd4-2rk49" event={"ID":"a9431961-983b-4257-bbe6-cf1bac1261c0","Type":"ContainerStarted","Data":"77a23db586c50b66520c229023e26c249b021091965592597f4654cbafd9e9b4"}
Jan 29 06:54:02 crc kubenswrapper[5017]: I0129 06:54:02.389756 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7b944f8dd4-2rk49" podUID="a9431961-983b-4257-bbe6-cf1bac1261c0" containerName="neutron-httpd" containerID="cri-o://77a23db586c50b66520c229023e26c249b021091965592597f4654cbafd9e9b4" gracePeriod=30
Jan 29 06:54:02 crc kubenswrapper[5017]: I0129 06:54:02.390213 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7b944f8dd4-2rk49"
Jan 29 06:54:02 crc kubenswrapper[5017]: I0129 06:54:02.421768 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-8548d8d696-gk4rx" podStartSLOduration=7.4217394070000005 podStartE2EDuration="7.421739407s" podCreationTimestamp="2026-01-29 06:53:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:54:02.40444367 +0000 UTC m=+1128.778891280" watchObservedRunningTime="2026-01-29 06:54:02.421739407 +0000 UTC m=+1128.796187017"
Jan 29 06:54:02 crc kubenswrapper[5017]: I0129 06:54:02.452931 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-59754c55b6-52c5s"]
Jan 29 06:54:02 crc kubenswrapper[5017]: I0129 06:54:02.469776 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5b9cd4b645-x8pg4" podStartSLOduration=4.469749133 podStartE2EDuration="4.469749133s" podCreationTimestamp="2026-01-29 06:53:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:54:02.456537767 +0000 UTC m=+1128.830985387" watchObservedRunningTime="2026-01-29 06:54:02.469749133 +0000 UTC m=+1128.844196743"
Jan 29 06:54:02 crc kubenswrapper[5017]: W0129 06:54:02.480365 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podda406cff_454a_4287_a409_5ad51c535649.slice/crio-08c84617f8fd7d5dedc44bf1a7ce27c779bd9bf599b8449d46fa4ea59ab7127c WatchSource:0}: Error finding container 08c84617f8fd7d5dedc44bf1a7ce27c779bd9bf599b8449d46fa4ea59ab7127c: Status 404 returned error can't find the container with id 08c84617f8fd7d5dedc44bf1a7ce27c779bd9bf599b8449d46fa4ea59ab7127c
Jan 29 06:54:02 crc kubenswrapper[5017]: I0129 06:54:02.490869 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7b944f8dd4-2rk49" podStartSLOduration=7.490851434 podStartE2EDuration="7.490851434s" podCreationTimestamp="2026-01-29 06:53:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:54:02.489437889 +0000 UTC m=+1128.863885489" watchObservedRunningTime="2026-01-29 06:54:02.490851434 +0000 UTC m=+1128.865299044"
Jan 29 06:54:02 crc kubenswrapper[5017]: I0129 06:54:02.747082 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api"
pods=["openstack/dnsmasq-dns-66cdd4b5b5-zns8c"] Jan 29 06:54:02 crc kubenswrapper[5017]: I0129 06:54:02.975718 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-74b97c48c4-nf2gp"] Jan 29 06:54:03 crc kubenswrapper[5017]: I0129 06:54:03.256425 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-685444497c-f78qp" Jan 29 06:54:03 crc kubenswrapper[5017]: I0129 06:54:03.378855 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e9db5f91-efe7-4015-ad9b-882eecb4f8dd-dns-swift-storage-0\") pod \"e9db5f91-efe7-4015-ad9b-882eecb4f8dd\" (UID: \"e9db5f91-efe7-4015-ad9b-882eecb4f8dd\") " Jan 29 06:54:03 crc kubenswrapper[5017]: I0129 06:54:03.378938 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mwjs9\" (UniqueName: \"kubernetes.io/projected/e9db5f91-efe7-4015-ad9b-882eecb4f8dd-kube-api-access-mwjs9\") pod \"e9db5f91-efe7-4015-ad9b-882eecb4f8dd\" (UID: \"e9db5f91-efe7-4015-ad9b-882eecb4f8dd\") " Jan 29 06:54:03 crc kubenswrapper[5017]: I0129 06:54:03.379117 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e9db5f91-efe7-4015-ad9b-882eecb4f8dd-dns-svc\") pod \"e9db5f91-efe7-4015-ad9b-882eecb4f8dd\" (UID: \"e9db5f91-efe7-4015-ad9b-882eecb4f8dd\") " Jan 29 06:54:03 crc kubenswrapper[5017]: I0129 06:54:03.379191 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e9db5f91-efe7-4015-ad9b-882eecb4f8dd-ovsdbserver-sb\") pod \"e9db5f91-efe7-4015-ad9b-882eecb4f8dd\" (UID: \"e9db5f91-efe7-4015-ad9b-882eecb4f8dd\") " Jan 29 06:54:03 crc kubenswrapper[5017]: I0129 06:54:03.379281 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e9db5f91-efe7-4015-ad9b-882eecb4f8dd-ovsdbserver-nb\") pod \"e9db5f91-efe7-4015-ad9b-882eecb4f8dd\" (UID: \"e9db5f91-efe7-4015-ad9b-882eecb4f8dd\") " Jan 29 06:54:03 crc kubenswrapper[5017]: I0129 06:54:03.379527 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9db5f91-efe7-4015-ad9b-882eecb4f8dd-config\") pod \"e9db5f91-efe7-4015-ad9b-882eecb4f8dd\" (UID: \"e9db5f91-efe7-4015-ad9b-882eecb4f8dd\") " Jan 29 06:54:03 crc kubenswrapper[5017]: I0129 06:54:03.396139 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9db5f91-efe7-4015-ad9b-882eecb4f8dd-kube-api-access-mwjs9" (OuterVolumeSpecName: "kube-api-access-mwjs9") pod "e9db5f91-efe7-4015-ad9b-882eecb4f8dd" (UID: "e9db5f91-efe7-4015-ad9b-882eecb4f8dd"). InnerVolumeSpecName "kube-api-access-mwjs9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:54:03 crc kubenswrapper[5017]: I0129 06:54:03.430499 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9db5f91-efe7-4015-ad9b-882eecb4f8dd-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e9db5f91-efe7-4015-ad9b-882eecb4f8dd" (UID: "e9db5f91-efe7-4015-ad9b-882eecb4f8dd"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:54:03 crc kubenswrapper[5017]: I0129 06:54:03.436086 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9db5f91-efe7-4015-ad9b-882eecb4f8dd-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e9db5f91-efe7-4015-ad9b-882eecb4f8dd" (UID: "e9db5f91-efe7-4015-ad9b-882eecb4f8dd"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:54:03 crc kubenswrapper[5017]: I0129 06:54:03.436427 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66cdd4b5b5-zns8c" event={"ID":"0ffc178e-6c5b-45ce-8810-2e897e83a0a9","Type":"ContainerStarted","Data":"fc1afc6a4c9a2400aec371fe46caa6d82b0a07ab5931254d4eb86b29d6e5c6e3"} Jan 29 06:54:03 crc kubenswrapper[5017]: I0129 06:54:03.437491 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9db5f91-efe7-4015-ad9b-882eecb4f8dd-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e9db5f91-efe7-4015-ad9b-882eecb4f8dd" (UID: "e9db5f91-efe7-4015-ad9b-882eecb4f8dd"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:54:03 crc kubenswrapper[5017]: I0129 06:54:03.449388 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-685444497c-f78qp" event={"ID":"e9db5f91-efe7-4015-ad9b-882eecb4f8dd","Type":"ContainerDied","Data":"c86bd7ac9746da5781df0f3ffd4ebbe9de6a5b7fc89e276b83f22c3b46e38589"} Jan 29 06:54:03 crc kubenswrapper[5017]: I0129 06:54:03.449461 5017 scope.go:117] "RemoveContainer" containerID="40a858ca7296e36ee9740c2c7666efe576fdf6436ac9e368fa039373962f3402" Jan 29 06:54:03 crc kubenswrapper[5017]: I0129 06:54:03.450461 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-685444497c-f78qp" Jan 29 06:54:03 crc kubenswrapper[5017]: I0129 06:54:03.456694 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9db5f91-efe7-4015-ad9b-882eecb4f8dd-config" (OuterVolumeSpecName: "config") pod "e9db5f91-efe7-4015-ad9b-882eecb4f8dd" (UID: "e9db5f91-efe7-4015-ad9b-882eecb4f8dd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:54:03 crc kubenswrapper[5017]: I0129 06:54:03.465522 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9db5f91-efe7-4015-ad9b-882eecb4f8dd-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e9db5f91-efe7-4015-ad9b-882eecb4f8dd" (UID: "e9db5f91-efe7-4015-ad9b-882eecb4f8dd"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:54:03 crc kubenswrapper[5017]: I0129 06:54:03.465926 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-74b97c48c4-nf2gp" event={"ID":"e995b541-6d6d-4dbd-a7d6-e4b607becac7","Type":"ContainerStarted","Data":"7ff78da29f8556174fc3edb0e1a1508f3243aa4ef1c542e1cf801a49bd99f426"} Jan 29 06:54:03 crc kubenswrapper[5017]: I0129 06:54:03.477898 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-849cfbbc5-ctfjf" event={"ID":"c3baa3ab-8aec-489f-a037-c6a3a9ff2f2e","Type":"ContainerStarted","Data":"e657144fec69792e667a2f20cc070095a25ad2e62af34f5b2d8af26891296812"} Jan 29 06:54:03 crc kubenswrapper[5017]: I0129 06:54:03.479749 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-849cfbbc5-ctfjf" Jan 29 06:54:03 crc kubenswrapper[5017]: I0129 06:54:03.482613 5017 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e9db5f91-efe7-4015-ad9b-882eecb4f8dd-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 06:54:03 crc kubenswrapper[5017]: I0129 06:54:03.482636 5017 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9db5f91-efe7-4015-ad9b-882eecb4f8dd-config\") on node \"crc\" DevicePath \"\"" Jan 29 06:54:03 crc kubenswrapper[5017]: I0129 06:54:03.482648 5017 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e9db5f91-efe7-4015-ad9b-882eecb4f8dd-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 29 06:54:03 crc kubenswrapper[5017]: I0129 06:54:03.482663 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mwjs9\" (UniqueName: \"kubernetes.io/projected/e9db5f91-efe7-4015-ad9b-882eecb4f8dd-kube-api-access-mwjs9\") on node \"crc\" DevicePath \"\"" Jan 29 06:54:03 crc kubenswrapper[5017]: I0129 06:54:03.482672 5017 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e9db5f91-efe7-4015-ad9b-882eecb4f8dd-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 06:54:03 crc kubenswrapper[5017]: I0129 06:54:03.482682 5017 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e9db5f91-efe7-4015-ad9b-882eecb4f8dd-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 06:54:03 crc kubenswrapper[5017]: I0129 06:54:03.492110 5017 generic.go:334] "Generic (PLEG): container finished" podID="a9431961-983b-4257-bbe6-cf1bac1261c0" containerID="77a23db586c50b66520c229023e26c249b021091965592597f4654cbafd9e9b4" exitCode=0 Jan 29 06:54:03 crc kubenswrapper[5017]: I0129 06:54:03.492264 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7b944f8dd4-2rk49" event={"ID":"a9431961-983b-4257-bbe6-cf1bac1261c0","Type":"ContainerDied","Data":"77a23db586c50b66520c229023e26c249b021091965592597f4654cbafd9e9b4"} Jan 29 06:54:03 crc kubenswrapper[5017]: I0129 06:54:03.511903 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-849cfbbc5-ctfjf" podStartSLOduration=6.511878575 podStartE2EDuration="6.511878575s" podCreationTimestamp="2026-01-29 06:53:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:54:03.504165474 +0000 UTC m=+1129.878613084" watchObservedRunningTime="2026-01-29 
06:54:03.511878575 +0000 UTC m=+1129.886326185" Jan 29 06:54:03 crc kubenswrapper[5017]: I0129 06:54:03.512936 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-59754c55b6-52c5s" event={"ID":"da406cff-454a-4287-a409-5ad51c535649","Type":"ContainerStarted","Data":"08c84617f8fd7d5dedc44bf1a7ce27c779bd9bf599b8449d46fa4ea59ab7127c"} Jan 29 06:54:03 crc kubenswrapper[5017]: I0129 06:54:03.612552 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 29 06:54:03 crc kubenswrapper[5017]: I0129 06:54:03.612673 5017 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 06:54:03 crc kubenswrapper[5017]: I0129 06:54:03.614320 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 29 06:54:03 crc kubenswrapper[5017]: I0129 06:54:03.853032 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-685444497c-f78qp"] Jan 29 06:54:03 crc kubenswrapper[5017]: I0129 06:54:03.875635 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-685444497c-f78qp"] Jan 29 06:54:04 crc kubenswrapper[5017]: I0129 06:54:04.358397 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9db5f91-efe7-4015-ad9b-882eecb4f8dd" path="/var/lib/kubelet/pods/e9db5f91-efe7-4015-ad9b-882eecb4f8dd/volumes" Jan 29 06:54:04 crc kubenswrapper[5017]: I0129 06:54:04.538981 5017 generic.go:334] "Generic (PLEG): container finished" podID="0ffc178e-6c5b-45ce-8810-2e897e83a0a9" containerID="3b0289e026ee92b6b613d1e37a0a763baa3f977fc11e6aff9ad6896d764b144e" exitCode=0 Jan 29 06:54:04 crc kubenswrapper[5017]: I0129 06:54:04.539057 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66cdd4b5b5-zns8c" event={"ID":"0ffc178e-6c5b-45ce-8810-2e897e83a0a9","Type":"ContainerDied","Data":"3b0289e026ee92b6b613d1e37a0a763baa3f977fc11e6aff9ad6896d764b144e"} Jan 29 06:54:04 crc kubenswrapper[5017]: I0129 06:54:04.621405 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-74b97c48c4-nf2gp" event={"ID":"e995b541-6d6d-4dbd-a7d6-e4b607becac7","Type":"ContainerStarted","Data":"370c19a18f0b3853d88d547699e1b86265d2e6c07395d944566841655e9949cc"} Jan 29 06:54:04 crc kubenswrapper[5017]: I0129 06:54:04.622212 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-74b97c48c4-nf2gp" event={"ID":"e995b541-6d6d-4dbd-a7d6-e4b607becac7","Type":"ContainerStarted","Data":"e63bdb1df978a7c6a9b3c40cff3f9a2c1b80fcb601dd962445d25948bf8be211"} Jan 29 06:54:04 crc kubenswrapper[5017]: I0129 06:54:04.623070 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-74b97c48c4-nf2gp" Jan 29 06:54:04 crc kubenswrapper[5017]: I0129 06:54:04.623175 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-74b97c48c4-nf2gp" Jan 29 06:54:04 crc kubenswrapper[5017]: I0129 06:54:04.662637 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-c99rc" event={"ID":"31e5ea57-0c73-4c76-bbcb-6d3b665b6226","Type":"ContainerStarted","Data":"c3626a55e2e2035ab1274eb197cb04be9df1a946c7042639ef0362e5a1e980b2"} Jan 29 06:54:04 crc kubenswrapper[5017]: I0129 06:54:04.728802 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-c99rc" podStartSLOduration=4.49991221 
podStartE2EDuration="40.728778143s" podCreationTimestamp="2026-01-29 06:53:24 +0000 UTC" firstStartedPulling="2026-01-29 06:53:25.911151956 +0000 UTC m=+1092.285599556" lastFinishedPulling="2026-01-29 06:54:02.140017879 +0000 UTC m=+1128.514465489" observedRunningTime="2026-01-29 06:54:04.72459925 +0000 UTC m=+1131.099046860" watchObservedRunningTime="2026-01-29 06:54:04.728778143 +0000 UTC m=+1131.103225743" Jan 29 06:54:04 crc kubenswrapper[5017]: I0129 06:54:04.731152 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-74b97c48c4-nf2gp" podStartSLOduration=3.731145041 podStartE2EDuration="3.731145041s" podCreationTimestamp="2026-01-29 06:54:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:54:04.687273667 +0000 UTC m=+1131.061721277" watchObservedRunningTime="2026-01-29 06:54:04.731145041 +0000 UTC m=+1131.105592651" Jan 29 06:54:05 crc kubenswrapper[5017]: I0129 06:54:05.212869 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-544777f6b8-l4dw8"] Jan 29 06:54:05 crc kubenswrapper[5017]: E0129 06:54:05.213334 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9db5f91-efe7-4015-ad9b-882eecb4f8dd" containerName="init" Jan 29 06:54:05 crc kubenswrapper[5017]: I0129 06:54:05.213360 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9db5f91-efe7-4015-ad9b-882eecb4f8dd" containerName="init" Jan 29 06:54:05 crc kubenswrapper[5017]: I0129 06:54:05.213595 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9db5f91-efe7-4015-ad9b-882eecb4f8dd" containerName="init" Jan 29 06:54:05 crc kubenswrapper[5017]: I0129 06:54:05.214592 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-544777f6b8-l4dw8" Jan 29 06:54:05 crc kubenswrapper[5017]: I0129 06:54:05.218715 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 29 06:54:05 crc kubenswrapper[5017]: I0129 06:54:05.219882 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 29 06:54:05 crc kubenswrapper[5017]: I0129 06:54:05.246065 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/919074d0-f7a7-4d64-8339-744730688c4f-config-data\") pod \"barbican-api-544777f6b8-l4dw8\" (UID: \"919074d0-f7a7-4d64-8339-744730688c4f\") " pod="openstack/barbican-api-544777f6b8-l4dw8" Jan 29 06:54:05 crc kubenswrapper[5017]: I0129 06:54:05.246169 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/919074d0-f7a7-4d64-8339-744730688c4f-config-data-custom\") pod \"barbican-api-544777f6b8-l4dw8\" (UID: \"919074d0-f7a7-4d64-8339-744730688c4f\") " pod="openstack/barbican-api-544777f6b8-l4dw8" Jan 29 06:54:05 crc kubenswrapper[5017]: I0129 06:54:05.246205 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/919074d0-f7a7-4d64-8339-744730688c4f-combined-ca-bundle\") pod \"barbican-api-544777f6b8-l4dw8\" (UID: \"919074d0-f7a7-4d64-8339-744730688c4f\") " pod="openstack/barbican-api-544777f6b8-l4dw8" Jan 29 06:54:05 crc kubenswrapper[5017]: I0129 06:54:05.246248 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/919074d0-f7a7-4d64-8339-744730688c4f-internal-tls-certs\") pod \"barbican-api-544777f6b8-l4dw8\" (UID: \"919074d0-f7a7-4d64-8339-744730688c4f\") " pod="openstack/barbican-api-544777f6b8-l4dw8" Jan 29 06:54:05 crc kubenswrapper[5017]: I0129 06:54:05.246297 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlpp9\" (UniqueName: \"kubernetes.io/projected/919074d0-f7a7-4d64-8339-744730688c4f-kube-api-access-hlpp9\") pod \"barbican-api-544777f6b8-l4dw8\" (UID: \"919074d0-f7a7-4d64-8339-744730688c4f\") " pod="openstack/barbican-api-544777f6b8-l4dw8" Jan 29 06:54:05 crc kubenswrapper[5017]: I0129 06:54:05.246343 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/919074d0-f7a7-4d64-8339-744730688c4f-public-tls-certs\") pod \"barbican-api-544777f6b8-l4dw8\" (UID: \"919074d0-f7a7-4d64-8339-744730688c4f\") " pod="openstack/barbican-api-544777f6b8-l4dw8" Jan 29 06:54:05 crc kubenswrapper[5017]: I0129 06:54:05.246381 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/919074d0-f7a7-4d64-8339-744730688c4f-logs\") pod \"barbican-api-544777f6b8-l4dw8\" (UID: \"919074d0-f7a7-4d64-8339-744730688c4f\") " pod="openstack/barbican-api-544777f6b8-l4dw8" Jan 29 06:54:05 crc kubenswrapper[5017]: I0129 06:54:05.288339 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-544777f6b8-l4dw8"] Jan 29 06:54:05 crc kubenswrapper[5017]: I0129 06:54:05.348587 5017 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/919074d0-f7a7-4d64-8339-744730688c4f-logs\") pod \"barbican-api-544777f6b8-l4dw8\" (UID: \"919074d0-f7a7-4d64-8339-744730688c4f\") " pod="openstack/barbican-api-544777f6b8-l4dw8" Jan 29 06:54:05 crc kubenswrapper[5017]: I0129 06:54:05.348713 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/919074d0-f7a7-4d64-8339-744730688c4f-config-data\") pod \"barbican-api-544777f6b8-l4dw8\" (UID: \"919074d0-f7a7-4d64-8339-744730688c4f\") " pod="openstack/barbican-api-544777f6b8-l4dw8" Jan 29 06:54:05 crc kubenswrapper[5017]: I0129 06:54:05.348791 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/919074d0-f7a7-4d64-8339-744730688c4f-config-data-custom\") pod \"barbican-api-544777f6b8-l4dw8\" (UID: \"919074d0-f7a7-4d64-8339-744730688c4f\") " pod="openstack/barbican-api-544777f6b8-l4dw8" Jan 29 06:54:05 crc kubenswrapper[5017]: I0129 06:54:05.348809 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/919074d0-f7a7-4d64-8339-744730688c4f-combined-ca-bundle\") pod \"barbican-api-544777f6b8-l4dw8\" (UID: \"919074d0-f7a7-4d64-8339-744730688c4f\") " pod="openstack/barbican-api-544777f6b8-l4dw8" Jan 29 06:54:05 crc kubenswrapper[5017]: I0129 06:54:05.348851 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/919074d0-f7a7-4d64-8339-744730688c4f-internal-tls-certs\") pod \"barbican-api-544777f6b8-l4dw8\" (UID: \"919074d0-f7a7-4d64-8339-744730688c4f\") " pod="openstack/barbican-api-544777f6b8-l4dw8" Jan 29 06:54:05 crc kubenswrapper[5017]: I0129 06:54:05.348896 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hlpp9\" (UniqueName: \"kubernetes.io/projected/919074d0-f7a7-4d64-8339-744730688c4f-kube-api-access-hlpp9\") pod \"barbican-api-544777f6b8-l4dw8\" (UID: \"919074d0-f7a7-4d64-8339-744730688c4f\") " pod="openstack/barbican-api-544777f6b8-l4dw8" Jan 29 06:54:05 crc kubenswrapper[5017]: I0129 06:54:05.348928 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/919074d0-f7a7-4d64-8339-744730688c4f-public-tls-certs\") pod \"barbican-api-544777f6b8-l4dw8\" (UID: \"919074d0-f7a7-4d64-8339-744730688c4f\") " pod="openstack/barbican-api-544777f6b8-l4dw8" Jan 29 06:54:05 crc kubenswrapper[5017]: I0129 06:54:05.353266 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/919074d0-f7a7-4d64-8339-744730688c4f-logs\") pod \"barbican-api-544777f6b8-l4dw8\" (UID: \"919074d0-f7a7-4d64-8339-744730688c4f\") " pod="openstack/barbican-api-544777f6b8-l4dw8" Jan 29 06:54:05 crc kubenswrapper[5017]: I0129 06:54:05.357748 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/919074d0-f7a7-4d64-8339-744730688c4f-internal-tls-certs\") pod \"barbican-api-544777f6b8-l4dw8\" (UID: \"919074d0-f7a7-4d64-8339-744730688c4f\") " pod="openstack/barbican-api-544777f6b8-l4dw8" Jan 29 06:54:05 crc kubenswrapper[5017]: I0129 06:54:05.359698 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" 
(UniqueName: \"kubernetes.io/secret/919074d0-f7a7-4d64-8339-744730688c4f-config-data-custom\") pod \"barbican-api-544777f6b8-l4dw8\" (UID: \"919074d0-f7a7-4d64-8339-744730688c4f\") " pod="openstack/barbican-api-544777f6b8-l4dw8" Jan 29 06:54:05 crc kubenswrapper[5017]: I0129 06:54:05.361635 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/919074d0-f7a7-4d64-8339-744730688c4f-combined-ca-bundle\") pod \"barbican-api-544777f6b8-l4dw8\" (UID: \"919074d0-f7a7-4d64-8339-744730688c4f\") " pod="openstack/barbican-api-544777f6b8-l4dw8" Jan 29 06:54:05 crc kubenswrapper[5017]: I0129 06:54:05.362177 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/919074d0-f7a7-4d64-8339-744730688c4f-public-tls-certs\") pod \"barbican-api-544777f6b8-l4dw8\" (UID: \"919074d0-f7a7-4d64-8339-744730688c4f\") " pod="openstack/barbican-api-544777f6b8-l4dw8" Jan 29 06:54:05 crc kubenswrapper[5017]: I0129 06:54:05.373478 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hlpp9\" (UniqueName: \"kubernetes.io/projected/919074d0-f7a7-4d64-8339-744730688c4f-kube-api-access-hlpp9\") pod \"barbican-api-544777f6b8-l4dw8\" (UID: \"919074d0-f7a7-4d64-8339-744730688c4f\") " pod="openstack/barbican-api-544777f6b8-l4dw8" Jan 29 06:54:05 crc kubenswrapper[5017]: I0129 06:54:05.397865 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/919074d0-f7a7-4d64-8339-744730688c4f-config-data\") pod \"barbican-api-544777f6b8-l4dw8\" (UID: \"919074d0-f7a7-4d64-8339-744730688c4f\") " pod="openstack/barbican-api-544777f6b8-l4dw8" Jan 29 06:54:05 crc kubenswrapper[5017]: I0129 06:54:05.547212 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-544777f6b8-l4dw8" Jan 29 06:54:05 crc kubenswrapper[5017]: I0129 06:54:05.595355 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 29 06:54:05 crc kubenswrapper[5017]: I0129 06:54:05.595533 5017 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 06:54:05 crc kubenswrapper[5017]: I0129 06:54:05.650899 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 29 06:54:07 crc kubenswrapper[5017]: I0129 06:54:07.322085 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-544777f6b8-l4dw8"] Jan 29 06:54:07 crc kubenswrapper[5017]: W0129 06:54:07.323801 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod919074d0_f7a7_4d64_8339_744730688c4f.slice/crio-b9d49293b936127c8781c9b57a3b8fd3a124e2e46c18bf13435d389658df75d9 WatchSource:0}: Error finding container b9d49293b936127c8781c9b57a3b8fd3a124e2e46c18bf13435d389658df75d9: Status 404 returned error can't find the container with id b9d49293b936127c8781c9b57a3b8fd3a124e2e46c18bf13435d389658df75d9 Jan 29 06:54:07 crc kubenswrapper[5017]: I0129 06:54:07.726105 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-544777f6b8-l4dw8" event={"ID":"919074d0-f7a7-4d64-8339-744730688c4f","Type":"ContainerStarted","Data":"9a97268bf48d3ec39b862fe626a036c8b8bf71b7f2894c8b7bf80ae0f1be7da0"} Jan 29 06:54:07 crc kubenswrapper[5017]: I0129 06:54:07.726161 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-544777f6b8-l4dw8" event={"ID":"919074d0-f7a7-4d64-8339-744730688c4f","Type":"ContainerStarted","Data":"b9d49293b936127c8781c9b57a3b8fd3a124e2e46c18bf13435d389658df75d9"} Jan 29 06:54:07 crc kubenswrapper[5017]: I0129 06:54:07.729468 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-59754c55b6-52c5s" event={"ID":"da406cff-454a-4287-a409-5ad51c535649","Type":"ContainerStarted","Data":"2a9e2a942d9da55b552383e09be825e15027ade185c5ac022578006979e399fc"} Jan 29 06:54:07 crc kubenswrapper[5017]: I0129 06:54:07.729533 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-59754c55b6-52c5s" event={"ID":"da406cff-454a-4287-a409-5ad51c535649","Type":"ContainerStarted","Data":"ce129ba42a8b3c722fa18ae7dc9c5289605b6656c1a899e4045fa4189ae39e6f"} Jan 29 06:54:07 crc kubenswrapper[5017]: I0129 06:54:07.735366 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66cdd4b5b5-zns8c" event={"ID":"0ffc178e-6c5b-45ce-8810-2e897e83a0a9","Type":"ContainerStarted","Data":"e595707fc1f32789f20572d55b0559c073283ec2c4f419902fbc8d84df70ec7b"} Jan 29 06:54:07 crc kubenswrapper[5017]: I0129 06:54:07.735864 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-66cdd4b5b5-zns8c" Jan 29 06:54:07 crc kubenswrapper[5017]: I0129 06:54:07.739296 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7dd895bb69-2ngwr" event={"ID":"c118297d-1c5d-4234-930c-9c0e6b5bb29b","Type":"ContainerStarted","Data":"d2d5a9a289484f054399954f789424326a15cf9d2c818d7edcc78f993bd9183e"} Jan 29 06:54:07 crc kubenswrapper[5017]: I0129 06:54:07.739339 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7dd895bb69-2ngwr" 
event={"ID":"c118297d-1c5d-4234-930c-9c0e6b5bb29b","Type":"ContainerStarted","Data":"97c40c65ef865923cc204cdc5deb3e46aacb0a5552e0f4c854c437adc61e25b3"} Jan 29 06:54:07 crc kubenswrapper[5017]: I0129 06:54:07.771509 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-59754c55b6-52c5s" podStartSLOduration=3.493317257 podStartE2EDuration="7.771459909s" podCreationTimestamp="2026-01-29 06:54:00 +0000 UTC" firstStartedPulling="2026-01-29 06:54:02.492623279 +0000 UTC m=+1128.867070889" lastFinishedPulling="2026-01-29 06:54:06.770765941 +0000 UTC m=+1133.145213541" observedRunningTime="2026-01-29 06:54:07.76060527 +0000 UTC m=+1134.135052880" watchObservedRunningTime="2026-01-29 06:54:07.771459909 +0000 UTC m=+1134.145907539" Jan 29 06:54:07 crc kubenswrapper[5017]: I0129 06:54:07.794460 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-7dd895bb69-2ngwr" podStartSLOduration=3.18178442 podStartE2EDuration="7.794440306s" podCreationTimestamp="2026-01-29 06:54:00 +0000 UTC" firstStartedPulling="2026-01-29 06:54:02.173154137 +0000 UTC m=+1128.547601747" lastFinishedPulling="2026-01-29 06:54:06.785810023 +0000 UTC m=+1133.160257633" observedRunningTime="2026-01-29 06:54:07.789809962 +0000 UTC m=+1134.164257572" watchObservedRunningTime="2026-01-29 06:54:07.794440306 +0000 UTC m=+1134.168887916" Jan 29 06:54:07 crc kubenswrapper[5017]: I0129 06:54:07.815545 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-66cdd4b5b5-zns8c" podStartSLOduration=6.815524057 podStartE2EDuration="6.815524057s" podCreationTimestamp="2026-01-29 06:54:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:54:07.813411145 +0000 UTC m=+1134.187858755" watchObservedRunningTime="2026-01-29 06:54:07.815524057 +0000 UTC m=+1134.189971667" Jan 29 06:54:08 crc kubenswrapper[5017]: I0129 06:54:08.752558 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-544777f6b8-l4dw8" event={"ID":"919074d0-f7a7-4d64-8339-744730688c4f","Type":"ContainerStarted","Data":"290c7cb878f017a867ee8ae761d80813ae152cf14f4cb08011870623faa5a09c"} Jan 29 06:54:08 crc kubenswrapper[5017]: I0129 06:54:08.786391 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-544777f6b8-l4dw8" podStartSLOduration=3.786345207 podStartE2EDuration="3.786345207s" podCreationTimestamp="2026-01-29 06:54:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:54:08.779265802 +0000 UTC m=+1135.153713432" watchObservedRunningTime="2026-01-29 06:54:08.786345207 +0000 UTC m=+1135.160792817" Jan 29 06:54:09 crc kubenswrapper[5017]: I0129 06:54:09.764046 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-544777f6b8-l4dw8" Jan 29 06:54:09 crc kubenswrapper[5017]: I0129 06:54:09.764396 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-544777f6b8-l4dw8" Jan 29 06:54:10 crc kubenswrapper[5017]: I0129 06:54:10.782206 5017 generic.go:334] "Generic (PLEG): container finished" podID="31e5ea57-0c73-4c76-bbcb-6d3b665b6226" containerID="c3626a55e2e2035ab1274eb197cb04be9df1a946c7042639ef0362e5a1e980b2" exitCode=0 Jan 29 06:54:10 crc kubenswrapper[5017]: I0129 06:54:10.782297 
5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-c99rc" event={"ID":"31e5ea57-0c73-4c76-bbcb-6d3b665b6226","Type":"ContainerDied","Data":"c3626a55e2e2035ab1274eb197cb04be9df1a946c7042639ef0362e5a1e980b2"} Jan 29 06:54:13 crc kubenswrapper[5017]: I0129 06:54:13.543464 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-74b97c48c4-nf2gp" Jan 29 06:54:13 crc kubenswrapper[5017]: I0129 06:54:13.647853 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-74b97c48c4-nf2gp" Jan 29 06:54:14 crc kubenswrapper[5017]: I0129 06:54:14.190987 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-c99rc" Jan 29 06:54:14 crc kubenswrapper[5017]: I0129 06:54:14.272875 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/31e5ea57-0c73-4c76-bbcb-6d3b665b6226-etc-machine-id\") pod \"31e5ea57-0c73-4c76-bbcb-6d3b665b6226\" (UID: \"31e5ea57-0c73-4c76-bbcb-6d3b665b6226\") " Jan 29 06:54:14 crc kubenswrapper[5017]: I0129 06:54:14.273396 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31e5ea57-0c73-4c76-bbcb-6d3b665b6226-config-data\") pod \"31e5ea57-0c73-4c76-bbcb-6d3b665b6226\" (UID: \"31e5ea57-0c73-4c76-bbcb-6d3b665b6226\") " Jan 29 06:54:14 crc kubenswrapper[5017]: I0129 06:54:14.273508 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/31e5ea57-0c73-4c76-bbcb-6d3b665b6226-db-sync-config-data\") pod \"31e5ea57-0c73-4c76-bbcb-6d3b665b6226\" (UID: \"31e5ea57-0c73-4c76-bbcb-6d3b665b6226\") " Jan 29 06:54:14 crc kubenswrapper[5017]: I0129 06:54:14.273622 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31e5ea57-0c73-4c76-bbcb-6d3b665b6226-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "31e5ea57-0c73-4c76-bbcb-6d3b665b6226" (UID: "31e5ea57-0c73-4c76-bbcb-6d3b665b6226"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 06:54:14 crc kubenswrapper[5017]: I0129 06:54:14.273756 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31e5ea57-0c73-4c76-bbcb-6d3b665b6226-combined-ca-bundle\") pod \"31e5ea57-0c73-4c76-bbcb-6d3b665b6226\" (UID: \"31e5ea57-0c73-4c76-bbcb-6d3b665b6226\") " Jan 29 06:54:14 crc kubenswrapper[5017]: I0129 06:54:14.273823 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/31e5ea57-0c73-4c76-bbcb-6d3b665b6226-scripts\") pod \"31e5ea57-0c73-4c76-bbcb-6d3b665b6226\" (UID: \"31e5ea57-0c73-4c76-bbcb-6d3b665b6226\") " Jan 29 06:54:14 crc kubenswrapper[5017]: I0129 06:54:14.273870 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6xdz4\" (UniqueName: \"kubernetes.io/projected/31e5ea57-0c73-4c76-bbcb-6d3b665b6226-kube-api-access-6xdz4\") pod \"31e5ea57-0c73-4c76-bbcb-6d3b665b6226\" (UID: \"31e5ea57-0c73-4c76-bbcb-6d3b665b6226\") " Jan 29 06:54:14 crc kubenswrapper[5017]: I0129 06:54:14.274317 5017 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/31e5ea57-0c73-4c76-bbcb-6d3b665b6226-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 29 06:54:14 crc kubenswrapper[5017]: I0129 06:54:14.284018 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31e5ea57-0c73-4c76-bbcb-6d3b665b6226-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "31e5ea57-0c73-4c76-bbcb-6d3b665b6226" (UID: "31e5ea57-0c73-4c76-bbcb-6d3b665b6226"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:54:14 crc kubenswrapper[5017]: I0129 06:54:14.284324 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31e5ea57-0c73-4c76-bbcb-6d3b665b6226-scripts" (OuterVolumeSpecName: "scripts") pod "31e5ea57-0c73-4c76-bbcb-6d3b665b6226" (UID: "31e5ea57-0c73-4c76-bbcb-6d3b665b6226"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:54:14 crc kubenswrapper[5017]: I0129 06:54:14.291939 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31e5ea57-0c73-4c76-bbcb-6d3b665b6226-kube-api-access-6xdz4" (OuterVolumeSpecName: "kube-api-access-6xdz4") pod "31e5ea57-0c73-4c76-bbcb-6d3b665b6226" (UID: "31e5ea57-0c73-4c76-bbcb-6d3b665b6226"). InnerVolumeSpecName "kube-api-access-6xdz4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:54:14 crc kubenswrapper[5017]: I0129 06:54:14.327158 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31e5ea57-0c73-4c76-bbcb-6d3b665b6226-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "31e5ea57-0c73-4c76-bbcb-6d3b665b6226" (UID: "31e5ea57-0c73-4c76-bbcb-6d3b665b6226"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:54:14 crc kubenswrapper[5017]: I0129 06:54:14.350704 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31e5ea57-0c73-4c76-bbcb-6d3b665b6226-config-data" (OuterVolumeSpecName: "config-data") pod "31e5ea57-0c73-4c76-bbcb-6d3b665b6226" (UID: "31e5ea57-0c73-4c76-bbcb-6d3b665b6226"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:54:14 crc kubenswrapper[5017]: I0129 06:54:14.376723 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6xdz4\" (UniqueName: \"kubernetes.io/projected/31e5ea57-0c73-4c76-bbcb-6d3b665b6226-kube-api-access-6xdz4\") on node \"crc\" DevicePath \"\"" Jan 29 06:54:14 crc kubenswrapper[5017]: I0129 06:54:14.376760 5017 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31e5ea57-0c73-4c76-bbcb-6d3b665b6226-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 06:54:14 crc kubenswrapper[5017]: I0129 06:54:14.376771 5017 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/31e5ea57-0c73-4c76-bbcb-6d3b665b6226-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 06:54:14 crc kubenswrapper[5017]: I0129 06:54:14.376782 5017 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31e5ea57-0c73-4c76-bbcb-6d3b665b6226-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 06:54:14 crc kubenswrapper[5017]: I0129 06:54:14.376791 5017 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/31e5ea57-0c73-4c76-bbcb-6d3b665b6226-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 06:54:14 crc kubenswrapper[5017]: I0129 06:54:14.833619 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"68d01bb7-534e-47c7-854c-c96384ad8df4","Type":"ContainerStarted","Data":"98e7d947d63316f6e727592605d37b6ca781155f08d8240ade6dd1626b1eddf7"} Jan 29 06:54:14 crc kubenswrapper[5017]: I0129 06:54:14.833903 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 29 06:54:14 crc kubenswrapper[5017]: I0129 06:54:14.834199 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="68d01bb7-534e-47c7-854c-c96384ad8df4" containerName="ceilometer-notification-agent" containerID="cri-o://2a7de7ba906eb493a55aae1e90816a05485d4884963524286fe87604184303b6" gracePeriod=30 Jan 29 06:54:14 crc kubenswrapper[5017]: I0129 06:54:14.834217 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="68d01bb7-534e-47c7-854c-c96384ad8df4" containerName="proxy-httpd" containerID="cri-o://98e7d947d63316f6e727592605d37b6ca781155f08d8240ade6dd1626b1eddf7" gracePeriod=30 Jan 29 06:54:14 crc kubenswrapper[5017]: I0129 06:54:14.834192 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="68d01bb7-534e-47c7-854c-c96384ad8df4" containerName="sg-core" containerID="cri-o://cc2b213cdad66ca060e35d230675b54206effa2d060e5c8b4fe4a82718618f8c" gracePeriod=30 Jan 29 06:54:14 crc kubenswrapper[5017]: I0129 06:54:14.834536 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="68d01bb7-534e-47c7-854c-c96384ad8df4" containerName="ceilometer-central-agent" containerID="cri-o://3532343237aa5a78f92f88f321a808014a552e93f04b7c2ba2f4c217e74783c7" gracePeriod=30 Jan 29 06:54:14 crc kubenswrapper[5017]: I0129 06:54:14.838449 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-c99rc" 
event={"ID":"31e5ea57-0c73-4c76-bbcb-6d3b665b6226","Type":"ContainerDied","Data":"ef233d6afa94d0d61415e612f31f562704cb3ee6431670bbd5b96b9258cd885c"} Jan 29 06:54:14 crc kubenswrapper[5017]: I0129 06:54:14.838521 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef233d6afa94d0d61415e612f31f562704cb3ee6431670bbd5b96b9258cd885c" Jan 29 06:54:14 crc kubenswrapper[5017]: I0129 06:54:14.838571 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-c99rc" Jan 29 06:54:15 crc kubenswrapper[5017]: I0129 06:54:15.252126 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.625595322 podStartE2EDuration="51.252107855s" podCreationTimestamp="2026-01-29 06:53:24 +0000 UTC" firstStartedPulling="2026-01-29 06:53:26.425217421 +0000 UTC m=+1092.799665031" lastFinishedPulling="2026-01-29 06:54:14.051729954 +0000 UTC m=+1140.426177564" observedRunningTime="2026-01-29 06:54:14.865260279 +0000 UTC m=+1141.239707889" watchObservedRunningTime="2026-01-29 06:54:15.252107855 +0000 UTC m=+1141.626555465" Jan 29 06:54:15 crc kubenswrapper[5017]: I0129 06:54:15.505431 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 06:54:15 crc kubenswrapper[5017]: E0129 06:54:15.505859 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31e5ea57-0c73-4c76-bbcb-6d3b665b6226" containerName="cinder-db-sync" Jan 29 06:54:15 crc kubenswrapper[5017]: I0129 06:54:15.505881 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="31e5ea57-0c73-4c76-bbcb-6d3b665b6226" containerName="cinder-db-sync" Jan 29 06:54:15 crc kubenswrapper[5017]: I0129 06:54:15.506113 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="31e5ea57-0c73-4c76-bbcb-6d3b665b6226" containerName="cinder-db-sync" Jan 29 06:54:15 crc kubenswrapper[5017]: I0129 06:54:15.507196 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 29 06:54:15 crc kubenswrapper[5017]: I0129 06:54:15.510378 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 29 06:54:15 crc kubenswrapper[5017]: I0129 06:54:15.510459 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 29 06:54:15 crc kubenswrapper[5017]: I0129 06:54:15.510637 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 29 06:54:15 crc kubenswrapper[5017]: I0129 06:54:15.511665 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-xq7sw" Jan 29 06:54:15 crc kubenswrapper[5017]: I0129 06:54:15.561810 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 06:54:15 crc kubenswrapper[5017]: I0129 06:54:15.606026 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbk2g\" (UniqueName: \"kubernetes.io/projected/b8470712-09f7-4366-a6fd-ab5dbc3c3192-kube-api-access-mbk2g\") pod \"cinder-scheduler-0\" (UID: \"b8470712-09f7-4366-a6fd-ab5dbc3c3192\") " pod="openstack/cinder-scheduler-0" Jan 29 06:54:15 crc kubenswrapper[5017]: I0129 06:54:15.606290 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b8470712-09f7-4366-a6fd-ab5dbc3c3192-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"b8470712-09f7-4366-a6fd-ab5dbc3c3192\") " pod="openstack/cinder-scheduler-0" Jan 29 06:54:15 crc kubenswrapper[5017]: I0129 06:54:15.607157 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8470712-09f7-4366-a6fd-ab5dbc3c3192-scripts\") pod \"cinder-scheduler-0\" (UID: \"b8470712-09f7-4366-a6fd-ab5dbc3c3192\") " pod="openstack/cinder-scheduler-0" Jan 29 06:54:15 crc kubenswrapper[5017]: I0129 06:54:15.607221 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8470712-09f7-4366-a6fd-ab5dbc3c3192-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"b8470712-09f7-4366-a6fd-ab5dbc3c3192\") " pod="openstack/cinder-scheduler-0" Jan 29 06:54:15 crc kubenswrapper[5017]: I0129 06:54:15.607312 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b8470712-09f7-4366-a6fd-ab5dbc3c3192-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"b8470712-09f7-4366-a6fd-ab5dbc3c3192\") " pod="openstack/cinder-scheduler-0" Jan 29 06:54:15 crc kubenswrapper[5017]: I0129 06:54:15.607427 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8470712-09f7-4366-a6fd-ab5dbc3c3192-config-data\") pod \"cinder-scheduler-0\" (UID: \"b8470712-09f7-4366-a6fd-ab5dbc3c3192\") " pod="openstack/cinder-scheduler-0" Jan 29 06:54:15 crc kubenswrapper[5017]: I0129 06:54:15.681346 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-66cdd4b5b5-zns8c"] Jan 29 06:54:15 crc kubenswrapper[5017]: I0129 06:54:15.682193 5017 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/dnsmasq-dns-66cdd4b5b5-zns8c" podUID="0ffc178e-6c5b-45ce-8810-2e897e83a0a9" containerName="dnsmasq-dns" containerID="cri-o://e595707fc1f32789f20572d55b0559c073283ec2c4f419902fbc8d84df70ec7b" gracePeriod=10 Jan 29 06:54:15 crc kubenswrapper[5017]: I0129 06:54:15.685185 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-66cdd4b5b5-zns8c" Jan 29 06:54:15 crc kubenswrapper[5017]: I0129 06:54:15.712217 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b8470712-09f7-4366-a6fd-ab5dbc3c3192-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"b8470712-09f7-4366-a6fd-ab5dbc3c3192\") " pod="openstack/cinder-scheduler-0" Jan 29 06:54:15 crc kubenswrapper[5017]: I0129 06:54:15.712324 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8470712-09f7-4366-a6fd-ab5dbc3c3192-scripts\") pod \"cinder-scheduler-0\" (UID: \"b8470712-09f7-4366-a6fd-ab5dbc3c3192\") " pod="openstack/cinder-scheduler-0" Jan 29 06:54:15 crc kubenswrapper[5017]: I0129 06:54:15.712345 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8470712-09f7-4366-a6fd-ab5dbc3c3192-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"b8470712-09f7-4366-a6fd-ab5dbc3c3192\") " pod="openstack/cinder-scheduler-0" Jan 29 06:54:15 crc kubenswrapper[5017]: I0129 06:54:15.712376 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b8470712-09f7-4366-a6fd-ab5dbc3c3192-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"b8470712-09f7-4366-a6fd-ab5dbc3c3192\") " pod="openstack/cinder-scheduler-0" Jan 29 06:54:15 crc kubenswrapper[5017]: I0129 06:54:15.712396 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8470712-09f7-4366-a6fd-ab5dbc3c3192-config-data\") pod \"cinder-scheduler-0\" (UID: \"b8470712-09f7-4366-a6fd-ab5dbc3c3192\") " pod="openstack/cinder-scheduler-0" Jan 29 06:54:15 crc kubenswrapper[5017]: I0129 06:54:15.712438 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbk2g\" (UniqueName: \"kubernetes.io/projected/b8470712-09f7-4366-a6fd-ab5dbc3c3192-kube-api-access-mbk2g\") pod \"cinder-scheduler-0\" (UID: \"b8470712-09f7-4366-a6fd-ab5dbc3c3192\") " pod="openstack/cinder-scheduler-0" Jan 29 06:54:15 crc kubenswrapper[5017]: I0129 06:54:15.712833 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b8470712-09f7-4366-a6fd-ab5dbc3c3192-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"b8470712-09f7-4366-a6fd-ab5dbc3c3192\") " pod="openstack/cinder-scheduler-0" Jan 29 06:54:15 crc kubenswrapper[5017]: I0129 06:54:15.725858 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8470712-09f7-4366-a6fd-ab5dbc3c3192-scripts\") pod \"cinder-scheduler-0\" (UID: \"b8470712-09f7-4366-a6fd-ab5dbc3c3192\") " pod="openstack/cinder-scheduler-0" Jan 29 06:54:15 crc kubenswrapper[5017]: I0129 06:54:15.730728 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b8470712-09f7-4366-a6fd-ab5dbc3c3192-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"b8470712-09f7-4366-a6fd-ab5dbc3c3192\") " pod="openstack/cinder-scheduler-0" Jan 29 06:54:15 crc kubenswrapper[5017]: I0129 06:54:15.735942 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8470712-09f7-4366-a6fd-ab5dbc3c3192-config-data\") pod \"cinder-scheduler-0\" (UID: \"b8470712-09f7-4366-a6fd-ab5dbc3c3192\") " pod="openstack/cinder-scheduler-0" Jan 29 06:54:15 crc kubenswrapper[5017]: I0129 06:54:15.736750 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b8470712-09f7-4366-a6fd-ab5dbc3c3192-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"b8470712-09f7-4366-a6fd-ab5dbc3c3192\") " pod="openstack/cinder-scheduler-0" Jan 29 06:54:15 crc kubenswrapper[5017]: I0129 06:54:15.778850 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbk2g\" (UniqueName: \"kubernetes.io/projected/b8470712-09f7-4366-a6fd-ab5dbc3c3192-kube-api-access-mbk2g\") pod \"cinder-scheduler-0\" (UID: \"b8470712-09f7-4366-a6fd-ab5dbc3c3192\") " pod="openstack/cinder-scheduler-0" Jan 29 06:54:15 crc kubenswrapper[5017]: I0129 06:54:15.779403 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-75dbb546bf-zsnd5"] Jan 29 06:54:15 crc kubenswrapper[5017]: I0129 06:54:15.781490 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75dbb546bf-zsnd5" Jan 29 06:54:15 crc kubenswrapper[5017]: I0129 06:54:15.822048 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75dbb546bf-zsnd5"] Jan 29 06:54:15 crc kubenswrapper[5017]: I0129 06:54:15.824318 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 29 06:54:15 crc kubenswrapper[5017]: I0129 06:54:15.832864 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 29 06:54:15 crc kubenswrapper[5017]: I0129 06:54:15.835317 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 29 06:54:15 crc kubenswrapper[5017]: I0129 06:54:15.840974 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 29 06:54:15 crc kubenswrapper[5017]: I0129 06:54:15.860058 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 29 06:54:15 crc kubenswrapper[5017]: I0129 06:54:15.888662 5017 generic.go:334] "Generic (PLEG): container finished" podID="0ffc178e-6c5b-45ce-8810-2e897e83a0a9" containerID="e595707fc1f32789f20572d55b0559c073283ec2c4f419902fbc8d84df70ec7b" exitCode=0 Jan 29 06:54:15 crc kubenswrapper[5017]: I0129 06:54:15.888759 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66cdd4b5b5-zns8c" event={"ID":"0ffc178e-6c5b-45ce-8810-2e897e83a0a9","Type":"ContainerDied","Data":"e595707fc1f32789f20572d55b0559c073283ec2c4f419902fbc8d84df70ec7b"} Jan 29 06:54:15 crc kubenswrapper[5017]: I0129 06:54:15.915690 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/223272bf-db73-426c-ad7e-78093ad4316a-ovsdbserver-sb\") pod \"dnsmasq-dns-75dbb546bf-zsnd5\" (UID: \"223272bf-db73-426c-ad7e-78093ad4316a\") " pod="openstack/dnsmasq-dns-75dbb546bf-zsnd5" Jan 29 06:54:15 crc kubenswrapper[5017]: I0129 06:54:15.915878 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/223272bf-db73-426c-ad7e-78093ad4316a-config\") pod \"dnsmasq-dns-75dbb546bf-zsnd5\" (UID: \"223272bf-db73-426c-ad7e-78093ad4316a\") " pod="openstack/dnsmasq-dns-75dbb546bf-zsnd5" Jan 29 06:54:15 crc kubenswrapper[5017]: I0129 06:54:15.916946 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b1646d23-d933-405f-a4d1-02f53682ae8f-config-data-custom\") pod \"cinder-api-0\" (UID: \"b1646d23-d933-405f-a4d1-02f53682ae8f\") " pod="openstack/cinder-api-0" Jan 29 06:54:15 crc kubenswrapper[5017]: I0129 06:54:15.917178 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b1646d23-d933-405f-a4d1-02f53682ae8f-logs\") pod \"cinder-api-0\" (UID: \"b1646d23-d933-405f-a4d1-02f53682ae8f\") " pod="openstack/cinder-api-0" Jan 29 06:54:15 crc kubenswrapper[5017]: I0129 06:54:15.917308 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/223272bf-db73-426c-ad7e-78093ad4316a-ovsdbserver-nb\") pod \"dnsmasq-dns-75dbb546bf-zsnd5\" (UID: \"223272bf-db73-426c-ad7e-78093ad4316a\") " pod="openstack/dnsmasq-dns-75dbb546bf-zsnd5" Jan 29 06:54:15 crc kubenswrapper[5017]: I0129 06:54:15.917520 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/223272bf-db73-426c-ad7e-78093ad4316a-dns-svc\") pod \"dnsmasq-dns-75dbb546bf-zsnd5\" (UID: \"223272bf-db73-426c-ad7e-78093ad4316a\") " pod="openstack/dnsmasq-dns-75dbb546bf-zsnd5" Jan 29 06:54:15 crc kubenswrapper[5017]: I0129 06:54:15.917648 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4gps\" (UniqueName: 
\"kubernetes.io/projected/223272bf-db73-426c-ad7e-78093ad4316a-kube-api-access-n4gps\") pod \"dnsmasq-dns-75dbb546bf-zsnd5\" (UID: \"223272bf-db73-426c-ad7e-78093ad4316a\") " pod="openstack/dnsmasq-dns-75dbb546bf-zsnd5" Jan 29 06:54:15 crc kubenswrapper[5017]: I0129 06:54:15.917752 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1646d23-d933-405f-a4d1-02f53682ae8f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"b1646d23-d933-405f-a4d1-02f53682ae8f\") " pod="openstack/cinder-api-0" Jan 29 06:54:15 crc kubenswrapper[5017]: I0129 06:54:15.917832 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b1646d23-d933-405f-a4d1-02f53682ae8f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"b1646d23-d933-405f-a4d1-02f53682ae8f\") " pod="openstack/cinder-api-0" Jan 29 06:54:15 crc kubenswrapper[5017]: I0129 06:54:15.917886 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1646d23-d933-405f-a4d1-02f53682ae8f-config-data\") pod \"cinder-api-0\" (UID: \"b1646d23-d933-405f-a4d1-02f53682ae8f\") " pod="openstack/cinder-api-0" Jan 29 06:54:15 crc kubenswrapper[5017]: I0129 06:54:15.918002 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kh2sv\" (UniqueName: \"kubernetes.io/projected/b1646d23-d933-405f-a4d1-02f53682ae8f-kube-api-access-kh2sv\") pod \"cinder-api-0\" (UID: \"b1646d23-d933-405f-a4d1-02f53682ae8f\") " pod="openstack/cinder-api-0" Jan 29 06:54:15 crc kubenswrapper[5017]: I0129 06:54:15.918206 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/223272bf-db73-426c-ad7e-78093ad4316a-dns-swift-storage-0\") pod \"dnsmasq-dns-75dbb546bf-zsnd5\" (UID: \"223272bf-db73-426c-ad7e-78093ad4316a\") " pod="openstack/dnsmasq-dns-75dbb546bf-zsnd5" Jan 29 06:54:15 crc kubenswrapper[5017]: I0129 06:54:15.918317 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b1646d23-d933-405f-a4d1-02f53682ae8f-scripts\") pod \"cinder-api-0\" (UID: \"b1646d23-d933-405f-a4d1-02f53682ae8f\") " pod="openstack/cinder-api-0" Jan 29 06:54:15 crc kubenswrapper[5017]: I0129 06:54:15.924576 5017 generic.go:334] "Generic (PLEG): container finished" podID="68d01bb7-534e-47c7-854c-c96384ad8df4" containerID="98e7d947d63316f6e727592605d37b6ca781155f08d8240ade6dd1626b1eddf7" exitCode=0 Jan 29 06:54:15 crc kubenswrapper[5017]: I0129 06:54:15.924611 5017 generic.go:334] "Generic (PLEG): container finished" podID="68d01bb7-534e-47c7-854c-c96384ad8df4" containerID="cc2b213cdad66ca060e35d230675b54206effa2d060e5c8b4fe4a82718618f8c" exitCode=2 Jan 29 06:54:15 crc kubenswrapper[5017]: I0129 06:54:15.924619 5017 generic.go:334] "Generic (PLEG): container finished" podID="68d01bb7-534e-47c7-854c-c96384ad8df4" containerID="3532343237aa5a78f92f88f321a808014a552e93f04b7c2ba2f4c217e74783c7" exitCode=0 Jan 29 06:54:15 crc kubenswrapper[5017]: I0129 06:54:15.924643 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"68d01bb7-534e-47c7-854c-c96384ad8df4","Type":"ContainerDied","Data":"98e7d947d63316f6e727592605d37b6ca781155f08d8240ade6dd1626b1eddf7"} Jan 29 06:54:15 crc kubenswrapper[5017]: I0129 06:54:15.924675 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"68d01bb7-534e-47c7-854c-c96384ad8df4","Type":"ContainerDied","Data":"cc2b213cdad66ca060e35d230675b54206effa2d060e5c8b4fe4a82718618f8c"} Jan 29 06:54:15 crc kubenswrapper[5017]: I0129 06:54:15.924688 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"68d01bb7-534e-47c7-854c-c96384ad8df4","Type":"ContainerDied","Data":"3532343237aa5a78f92f88f321a808014a552e93f04b7c2ba2f4c217e74783c7"} Jan 29 06:54:16 crc kubenswrapper[5017]: I0129 06:54:16.021124 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b1646d23-d933-405f-a4d1-02f53682ae8f-config-data-custom\") pod \"cinder-api-0\" (UID: \"b1646d23-d933-405f-a4d1-02f53682ae8f\") " pod="openstack/cinder-api-0" Jan 29 06:54:16 crc kubenswrapper[5017]: I0129 06:54:16.021849 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b1646d23-d933-405f-a4d1-02f53682ae8f-logs\") pod \"cinder-api-0\" (UID: \"b1646d23-d933-405f-a4d1-02f53682ae8f\") " pod="openstack/cinder-api-0" Jan 29 06:54:16 crc kubenswrapper[5017]: I0129 06:54:16.021884 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/223272bf-db73-426c-ad7e-78093ad4316a-ovsdbserver-nb\") pod \"dnsmasq-dns-75dbb546bf-zsnd5\" (UID: \"223272bf-db73-426c-ad7e-78093ad4316a\") " pod="openstack/dnsmasq-dns-75dbb546bf-zsnd5" Jan 29 06:54:16 crc kubenswrapper[5017]: I0129 06:54:16.024911 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/223272bf-db73-426c-ad7e-78093ad4316a-dns-svc\") pod \"dnsmasq-dns-75dbb546bf-zsnd5\" (UID: \"223272bf-db73-426c-ad7e-78093ad4316a\") " pod="openstack/dnsmasq-dns-75dbb546bf-zsnd5" Jan 29 06:54:16 crc kubenswrapper[5017]: I0129 06:54:16.024945 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4gps\" (UniqueName: \"kubernetes.io/projected/223272bf-db73-426c-ad7e-78093ad4316a-kube-api-access-n4gps\") pod \"dnsmasq-dns-75dbb546bf-zsnd5\" (UID: \"223272bf-db73-426c-ad7e-78093ad4316a\") " pod="openstack/dnsmasq-dns-75dbb546bf-zsnd5" Jan 29 06:54:16 crc kubenswrapper[5017]: I0129 06:54:16.025013 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1646d23-d933-405f-a4d1-02f53682ae8f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"b1646d23-d933-405f-a4d1-02f53682ae8f\") " pod="openstack/cinder-api-0" Jan 29 06:54:16 crc kubenswrapper[5017]: I0129 06:54:16.025032 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b1646d23-d933-405f-a4d1-02f53682ae8f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"b1646d23-d933-405f-a4d1-02f53682ae8f\") " pod="openstack/cinder-api-0" Jan 29 06:54:16 crc kubenswrapper[5017]: I0129 06:54:16.025052 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/b1646d23-d933-405f-a4d1-02f53682ae8f-config-data\") pod \"cinder-api-0\" (UID: \"b1646d23-d933-405f-a4d1-02f53682ae8f\") " pod="openstack/cinder-api-0" Jan 29 06:54:16 crc kubenswrapper[5017]: I0129 06:54:16.025156 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kh2sv\" (UniqueName: \"kubernetes.io/projected/b1646d23-d933-405f-a4d1-02f53682ae8f-kube-api-access-kh2sv\") pod \"cinder-api-0\" (UID: \"b1646d23-d933-405f-a4d1-02f53682ae8f\") " pod="openstack/cinder-api-0" Jan 29 06:54:16 crc kubenswrapper[5017]: I0129 06:54:16.025235 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/223272bf-db73-426c-ad7e-78093ad4316a-dns-swift-storage-0\") pod \"dnsmasq-dns-75dbb546bf-zsnd5\" (UID: \"223272bf-db73-426c-ad7e-78093ad4316a\") " pod="openstack/dnsmasq-dns-75dbb546bf-zsnd5" Jan 29 06:54:16 crc kubenswrapper[5017]: I0129 06:54:16.025257 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b1646d23-d933-405f-a4d1-02f53682ae8f-scripts\") pod \"cinder-api-0\" (UID: \"b1646d23-d933-405f-a4d1-02f53682ae8f\") " pod="openstack/cinder-api-0" Jan 29 06:54:16 crc kubenswrapper[5017]: I0129 06:54:16.025339 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/223272bf-db73-426c-ad7e-78093ad4316a-ovsdbserver-sb\") pod \"dnsmasq-dns-75dbb546bf-zsnd5\" (UID: \"223272bf-db73-426c-ad7e-78093ad4316a\") " pod="openstack/dnsmasq-dns-75dbb546bf-zsnd5" Jan 29 06:54:16 crc kubenswrapper[5017]: I0129 06:54:16.025364 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/223272bf-db73-426c-ad7e-78093ad4316a-config\") pod \"dnsmasq-dns-75dbb546bf-zsnd5\" (UID: \"223272bf-db73-426c-ad7e-78093ad4316a\") " pod="openstack/dnsmasq-dns-75dbb546bf-zsnd5" Jan 29 06:54:16 crc kubenswrapper[5017]: I0129 06:54:16.026840 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b1646d23-d933-405f-a4d1-02f53682ae8f-logs\") pod \"cinder-api-0\" (UID: \"b1646d23-d933-405f-a4d1-02f53682ae8f\") " pod="openstack/cinder-api-0" Jan 29 06:54:16 crc kubenswrapper[5017]: I0129 06:54:16.027050 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b1646d23-d933-405f-a4d1-02f53682ae8f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"b1646d23-d933-405f-a4d1-02f53682ae8f\") " pod="openstack/cinder-api-0" Jan 29 06:54:16 crc kubenswrapper[5017]: I0129 06:54:16.030561 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/223272bf-db73-426c-ad7e-78093ad4316a-ovsdbserver-nb\") pod \"dnsmasq-dns-75dbb546bf-zsnd5\" (UID: \"223272bf-db73-426c-ad7e-78093ad4316a\") " pod="openstack/dnsmasq-dns-75dbb546bf-zsnd5" Jan 29 06:54:16 crc kubenswrapper[5017]: I0129 06:54:16.036970 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/223272bf-db73-426c-ad7e-78093ad4316a-config\") pod \"dnsmasq-dns-75dbb546bf-zsnd5\" (UID: \"223272bf-db73-426c-ad7e-78093ad4316a\") " pod="openstack/dnsmasq-dns-75dbb546bf-zsnd5" Jan 29 06:54:16 crc kubenswrapper[5017]: I0129 06:54:16.037684 5017 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b1646d23-d933-405f-a4d1-02f53682ae8f-config-data-custom\") pod \"cinder-api-0\" (UID: \"b1646d23-d933-405f-a4d1-02f53682ae8f\") " pod="openstack/cinder-api-0" Jan 29 06:54:16 crc kubenswrapper[5017]: I0129 06:54:16.041064 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/223272bf-db73-426c-ad7e-78093ad4316a-ovsdbserver-sb\") pod \"dnsmasq-dns-75dbb546bf-zsnd5\" (UID: \"223272bf-db73-426c-ad7e-78093ad4316a\") " pod="openstack/dnsmasq-dns-75dbb546bf-zsnd5" Jan 29 06:54:16 crc kubenswrapper[5017]: I0129 06:54:16.041178 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/223272bf-db73-426c-ad7e-78093ad4316a-dns-swift-storage-0\") pod \"dnsmasq-dns-75dbb546bf-zsnd5\" (UID: \"223272bf-db73-426c-ad7e-78093ad4316a\") " pod="openstack/dnsmasq-dns-75dbb546bf-zsnd5" Jan 29 06:54:16 crc kubenswrapper[5017]: I0129 06:54:16.042941 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b1646d23-d933-405f-a4d1-02f53682ae8f-scripts\") pod \"cinder-api-0\" (UID: \"b1646d23-d933-405f-a4d1-02f53682ae8f\") " pod="openstack/cinder-api-0" Jan 29 06:54:16 crc kubenswrapper[5017]: I0129 06:54:16.047933 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1646d23-d933-405f-a4d1-02f53682ae8f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"b1646d23-d933-405f-a4d1-02f53682ae8f\") " pod="openstack/cinder-api-0" Jan 29 06:54:16 crc kubenswrapper[5017]: I0129 06:54:16.054029 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/223272bf-db73-426c-ad7e-78093ad4316a-dns-svc\") pod \"dnsmasq-dns-75dbb546bf-zsnd5\" (UID: \"223272bf-db73-426c-ad7e-78093ad4316a\") " pod="openstack/dnsmasq-dns-75dbb546bf-zsnd5" Jan 29 06:54:16 crc kubenswrapper[5017]: I0129 06:54:16.055895 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4gps\" (UniqueName: \"kubernetes.io/projected/223272bf-db73-426c-ad7e-78093ad4316a-kube-api-access-n4gps\") pod \"dnsmasq-dns-75dbb546bf-zsnd5\" (UID: \"223272bf-db73-426c-ad7e-78093ad4316a\") " pod="openstack/dnsmasq-dns-75dbb546bf-zsnd5" Jan 29 06:54:16 crc kubenswrapper[5017]: I0129 06:54:16.056142 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1646d23-d933-405f-a4d1-02f53682ae8f-config-data\") pod \"cinder-api-0\" (UID: \"b1646d23-d933-405f-a4d1-02f53682ae8f\") " pod="openstack/cinder-api-0" Jan 29 06:54:16 crc kubenswrapper[5017]: I0129 06:54:16.062018 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kh2sv\" (UniqueName: \"kubernetes.io/projected/b1646d23-d933-405f-a4d1-02f53682ae8f-kube-api-access-kh2sv\") pod \"cinder-api-0\" (UID: \"b1646d23-d933-405f-a4d1-02f53682ae8f\") " pod="openstack/cinder-api-0" Jan 29 06:54:16 crc kubenswrapper[5017]: I0129 06:54:16.204637 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75dbb546bf-zsnd5" Jan 29 06:54:16 crc kubenswrapper[5017]: I0129 06:54:16.233419 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 29 06:54:16 crc kubenswrapper[5017]: I0129 06:54:16.367270 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-66cdd4b5b5-zns8c" Jan 29 06:54:16 crc kubenswrapper[5017]: I0129 06:54:16.462168 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0ffc178e-6c5b-45ce-8810-2e897e83a0a9-ovsdbserver-sb\") pod \"0ffc178e-6c5b-45ce-8810-2e897e83a0a9\" (UID: \"0ffc178e-6c5b-45ce-8810-2e897e83a0a9\") " Jan 29 06:54:16 crc kubenswrapper[5017]: I0129 06:54:16.462845 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cqlcc\" (UniqueName: \"kubernetes.io/projected/0ffc178e-6c5b-45ce-8810-2e897e83a0a9-kube-api-access-cqlcc\") pod \"0ffc178e-6c5b-45ce-8810-2e897e83a0a9\" (UID: \"0ffc178e-6c5b-45ce-8810-2e897e83a0a9\") " Jan 29 06:54:16 crc kubenswrapper[5017]: I0129 06:54:16.462892 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ffc178e-6c5b-45ce-8810-2e897e83a0a9-config\") pod \"0ffc178e-6c5b-45ce-8810-2e897e83a0a9\" (UID: \"0ffc178e-6c5b-45ce-8810-2e897e83a0a9\") " Jan 29 06:54:16 crc kubenswrapper[5017]: I0129 06:54:16.463017 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0ffc178e-6c5b-45ce-8810-2e897e83a0a9-dns-svc\") pod \"0ffc178e-6c5b-45ce-8810-2e897e83a0a9\" (UID: \"0ffc178e-6c5b-45ce-8810-2e897e83a0a9\") " Jan 29 06:54:16 crc kubenswrapper[5017]: I0129 06:54:16.463117 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0ffc178e-6c5b-45ce-8810-2e897e83a0a9-ovsdbserver-nb\") pod \"0ffc178e-6c5b-45ce-8810-2e897e83a0a9\" (UID: \"0ffc178e-6c5b-45ce-8810-2e897e83a0a9\") " Jan 29 06:54:16 crc kubenswrapper[5017]: I0129 06:54:16.463256 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0ffc178e-6c5b-45ce-8810-2e897e83a0a9-dns-swift-storage-0\") pod \"0ffc178e-6c5b-45ce-8810-2e897e83a0a9\" (UID: \"0ffc178e-6c5b-45ce-8810-2e897e83a0a9\") " Jan 29 06:54:16 crc kubenswrapper[5017]: I0129 06:54:16.525959 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ffc178e-6c5b-45ce-8810-2e897e83a0a9-kube-api-access-cqlcc" (OuterVolumeSpecName: "kube-api-access-cqlcc") pod "0ffc178e-6c5b-45ce-8810-2e897e83a0a9" (UID: "0ffc178e-6c5b-45ce-8810-2e897e83a0a9"). InnerVolumeSpecName "kube-api-access-cqlcc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:54:16 crc kubenswrapper[5017]: I0129 06:54:16.613396 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cqlcc\" (UniqueName: \"kubernetes.io/projected/0ffc178e-6c5b-45ce-8810-2e897e83a0a9-kube-api-access-cqlcc\") on node \"crc\" DevicePath \"\"" Jan 29 06:54:16 crc kubenswrapper[5017]: I0129 06:54:16.668757 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 06:54:16 crc kubenswrapper[5017]: I0129 06:54:16.711365 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ffc178e-6c5b-45ce-8810-2e897e83a0a9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0ffc178e-6c5b-45ce-8810-2e897e83a0a9" (UID: "0ffc178e-6c5b-45ce-8810-2e897e83a0a9"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:54:16 crc kubenswrapper[5017]: I0129 06:54:16.715488 5017 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0ffc178e-6c5b-45ce-8810-2e897e83a0a9-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 06:54:16 crc kubenswrapper[5017]: I0129 06:54:16.716722 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ffc178e-6c5b-45ce-8810-2e897e83a0a9-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0ffc178e-6c5b-45ce-8810-2e897e83a0a9" (UID: "0ffc178e-6c5b-45ce-8810-2e897e83a0a9"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:54:16 crc kubenswrapper[5017]: I0129 06:54:16.731074 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ffc178e-6c5b-45ce-8810-2e897e83a0a9-config" (OuterVolumeSpecName: "config") pod "0ffc178e-6c5b-45ce-8810-2e897e83a0a9" (UID: "0ffc178e-6c5b-45ce-8810-2e897e83a0a9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:54:16 crc kubenswrapper[5017]: I0129 06:54:16.775884 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ffc178e-6c5b-45ce-8810-2e897e83a0a9-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0ffc178e-6c5b-45ce-8810-2e897e83a0a9" (UID: "0ffc178e-6c5b-45ce-8810-2e897e83a0a9"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:54:16 crc kubenswrapper[5017]: I0129 06:54:16.776476 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ffc178e-6c5b-45ce-8810-2e897e83a0a9-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "0ffc178e-6c5b-45ce-8810-2e897e83a0a9" (UID: "0ffc178e-6c5b-45ce-8810-2e897e83a0a9"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:54:16 crc kubenswrapper[5017]: I0129 06:54:16.826292 5017 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ffc178e-6c5b-45ce-8810-2e897e83a0a9-config\") on node \"crc\" DevicePath \"\"" Jan 29 06:54:16 crc kubenswrapper[5017]: I0129 06:54:16.826757 5017 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0ffc178e-6c5b-45ce-8810-2e897e83a0a9-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 06:54:16 crc kubenswrapper[5017]: I0129 06:54:16.826769 5017 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0ffc178e-6c5b-45ce-8810-2e897e83a0a9-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 29 06:54:16 crc kubenswrapper[5017]: I0129 06:54:16.826779 5017 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0ffc178e-6c5b-45ce-8810-2e897e83a0a9-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 06:54:16 crc kubenswrapper[5017]: I0129 06:54:16.942486 5017 generic.go:334] "Generic (PLEG): container finished" podID="68d01bb7-534e-47c7-854c-c96384ad8df4" containerID="2a7de7ba906eb493a55aae1e90816a05485d4884963524286fe87604184303b6" exitCode=0 Jan 29 06:54:16 crc kubenswrapper[5017]: I0129 06:54:16.942569 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"68d01bb7-534e-47c7-854c-c96384ad8df4","Type":"ContainerDied","Data":"2a7de7ba906eb493a55aae1e90816a05485d4884963524286fe87604184303b6"} Jan 29 06:54:16 crc kubenswrapper[5017]: I0129 06:54:16.945693 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b8470712-09f7-4366-a6fd-ab5dbc3c3192","Type":"ContainerStarted","Data":"fbb069925f9d0b9a0f77ce849be2a9b24c9d3abfb2b2a1e5fd8e3f23d16d6569"} Jan 29 06:54:16 crc kubenswrapper[5017]: I0129 06:54:16.948815 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66cdd4b5b5-zns8c" event={"ID":"0ffc178e-6c5b-45ce-8810-2e897e83a0a9","Type":"ContainerDied","Data":"fc1afc6a4c9a2400aec371fe46caa6d82b0a07ab5931254d4eb86b29d6e5c6e3"} Jan 29 06:54:16 crc kubenswrapper[5017]: I0129 06:54:16.948867 5017 scope.go:117] "RemoveContainer" containerID="e595707fc1f32789f20572d55b0559c073283ec2c4f419902fbc8d84df70ec7b" Jan 29 06:54:16 crc kubenswrapper[5017]: I0129 06:54:16.949197 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-66cdd4b5b5-zns8c" Jan 29 06:54:17 crc kubenswrapper[5017]: I0129 06:54:17.064357 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-66cdd4b5b5-zns8c"] Jan 29 06:54:17 crc kubenswrapper[5017]: I0129 06:54:17.075870 5017 scope.go:117] "RemoveContainer" containerID="3b0289e026ee92b6b613d1e37a0a763baa3f977fc11e6aff9ad6896d764b144e" Jan 29 06:54:17 crc kubenswrapper[5017]: I0129 06:54:17.096862 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-66cdd4b5b5-zns8c"] Jan 29 06:54:17 crc kubenswrapper[5017]: I0129 06:54:17.112759 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75dbb546bf-zsnd5"] Jan 29 06:54:17 crc kubenswrapper[5017]: W0129 06:54:17.134669 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod223272bf_db73_426c_ad7e_78093ad4316a.slice/crio-5ec1f2da8d084bbf63c84f935a93a8d43f513fd39374bcb78564c645994c5e00 WatchSource:0}: Error finding container 5ec1f2da8d084bbf63c84f935a93a8d43f513fd39374bcb78564c645994c5e00: Status 404 returned error can't find the container with id 5ec1f2da8d084bbf63c84f935a93a8d43f513fd39374bcb78564c645994c5e00 Jan 29 06:54:17 crc kubenswrapper[5017]: I0129 06:54:17.269287 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 29 06:54:17 crc kubenswrapper[5017]: W0129 06:54:17.287411 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb1646d23_d933_405f_a4d1_02f53682ae8f.slice/crio-7a390f7078ba26e4c5a797fff3fc5cd253b44abb14f020c3440ff4ee9ea33489 WatchSource:0}: Error finding container 7a390f7078ba26e4c5a797fff3fc5cd253b44abb14f020c3440ff4ee9ea33489: Status 404 returned error can't find the container with id 7a390f7078ba26e4c5a797fff3fc5cd253b44abb14f020c3440ff4ee9ea33489 Jan 29 06:54:17 crc kubenswrapper[5017]: I0129 06:54:17.312182 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 06:54:17 crc kubenswrapper[5017]: I0129 06:54:17.440005 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68d01bb7-534e-47c7-854c-c96384ad8df4-combined-ca-bundle\") pod \"68d01bb7-534e-47c7-854c-c96384ad8df4\" (UID: \"68d01bb7-534e-47c7-854c-c96384ad8df4\") " Jan 29 06:54:17 crc kubenswrapper[5017]: I0129 06:54:17.440099 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/68d01bb7-534e-47c7-854c-c96384ad8df4-log-httpd\") pod \"68d01bb7-534e-47c7-854c-c96384ad8df4\" (UID: \"68d01bb7-534e-47c7-854c-c96384ad8df4\") " Jan 29 06:54:17 crc kubenswrapper[5017]: I0129 06:54:17.440235 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rfrn8\" (UniqueName: \"kubernetes.io/projected/68d01bb7-534e-47c7-854c-c96384ad8df4-kube-api-access-rfrn8\") pod \"68d01bb7-534e-47c7-854c-c96384ad8df4\" (UID: \"68d01bb7-534e-47c7-854c-c96384ad8df4\") " Jan 29 06:54:17 crc kubenswrapper[5017]: I0129 06:54:17.440292 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68d01bb7-534e-47c7-854c-c96384ad8df4-config-data\") pod \"68d01bb7-534e-47c7-854c-c96384ad8df4\" (UID: \"68d01bb7-534e-47c7-854c-c96384ad8df4\") " Jan 29 06:54:17 crc kubenswrapper[5017]: I0129 06:54:17.440351 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/68d01bb7-534e-47c7-854c-c96384ad8df4-sg-core-conf-yaml\") pod \"68d01bb7-534e-47c7-854c-c96384ad8df4\" (UID: \"68d01bb7-534e-47c7-854c-c96384ad8df4\") " Jan 29 06:54:17 crc kubenswrapper[5017]: I0129 06:54:17.440436 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/68d01bb7-534e-47c7-854c-c96384ad8df4-run-httpd\") pod \"68d01bb7-534e-47c7-854c-c96384ad8df4\" (UID: \"68d01bb7-534e-47c7-854c-c96384ad8df4\") " Jan 29 06:54:17 crc kubenswrapper[5017]: I0129 06:54:17.440494 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68d01bb7-534e-47c7-854c-c96384ad8df4-scripts\") pod \"68d01bb7-534e-47c7-854c-c96384ad8df4\" (UID: \"68d01bb7-534e-47c7-854c-c96384ad8df4\") " Jan 29 06:54:17 crc kubenswrapper[5017]: I0129 06:54:17.442327 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68d01bb7-534e-47c7-854c-c96384ad8df4-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "68d01bb7-534e-47c7-854c-c96384ad8df4" (UID: "68d01bb7-534e-47c7-854c-c96384ad8df4"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:54:17 crc kubenswrapper[5017]: I0129 06:54:17.442383 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68d01bb7-534e-47c7-854c-c96384ad8df4-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "68d01bb7-534e-47c7-854c-c96384ad8df4" (UID: "68d01bb7-534e-47c7-854c-c96384ad8df4"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:54:17 crc kubenswrapper[5017]: I0129 06:54:17.447237 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68d01bb7-534e-47c7-854c-c96384ad8df4-kube-api-access-rfrn8" (OuterVolumeSpecName: "kube-api-access-rfrn8") pod "68d01bb7-534e-47c7-854c-c96384ad8df4" (UID: "68d01bb7-534e-47c7-854c-c96384ad8df4"). InnerVolumeSpecName "kube-api-access-rfrn8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:54:17 crc kubenswrapper[5017]: I0129 06:54:17.447409 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68d01bb7-534e-47c7-854c-c96384ad8df4-scripts" (OuterVolumeSpecName: "scripts") pod "68d01bb7-534e-47c7-854c-c96384ad8df4" (UID: "68d01bb7-534e-47c7-854c-c96384ad8df4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:54:17 crc kubenswrapper[5017]: I0129 06:54:17.483643 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68d01bb7-534e-47c7-854c-c96384ad8df4-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "68d01bb7-534e-47c7-854c-c96384ad8df4" (UID: "68d01bb7-534e-47c7-854c-c96384ad8df4"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:54:17 crc kubenswrapper[5017]: I0129 06:54:17.542908 5017 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/68d01bb7-534e-47c7-854c-c96384ad8df4-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 29 06:54:17 crc kubenswrapper[5017]: I0129 06:54:17.542941 5017 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/68d01bb7-534e-47c7-854c-c96384ad8df4-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 06:54:17 crc kubenswrapper[5017]: I0129 06:54:17.542951 5017 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68d01bb7-534e-47c7-854c-c96384ad8df4-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 06:54:17 crc kubenswrapper[5017]: I0129 06:54:17.542965 5017 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/68d01bb7-534e-47c7-854c-c96384ad8df4-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 06:54:17 crc kubenswrapper[5017]: I0129 06:54:17.542987 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rfrn8\" (UniqueName: \"kubernetes.io/projected/68d01bb7-534e-47c7-854c-c96384ad8df4-kube-api-access-rfrn8\") on node \"crc\" DevicePath \"\"" Jan 29 06:54:17 crc kubenswrapper[5017]: I0129 06:54:17.546359 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68d01bb7-534e-47c7-854c-c96384ad8df4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "68d01bb7-534e-47c7-854c-c96384ad8df4" (UID: "68d01bb7-534e-47c7-854c-c96384ad8df4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:54:17 crc kubenswrapper[5017]: I0129 06:54:17.560736 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68d01bb7-534e-47c7-854c-c96384ad8df4-config-data" (OuterVolumeSpecName: "config-data") pod "68d01bb7-534e-47c7-854c-c96384ad8df4" (UID: "68d01bb7-534e-47c7-854c-c96384ad8df4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:54:17 crc kubenswrapper[5017]: I0129 06:54:17.645431 5017 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68d01bb7-534e-47c7-854c-c96384ad8df4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 06:54:17 crc kubenswrapper[5017]: I0129 06:54:17.645483 5017 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68d01bb7-534e-47c7-854c-c96384ad8df4-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 06:54:17 crc kubenswrapper[5017]: I0129 06:54:17.876757 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-544777f6b8-l4dw8" Jan 29 06:54:17 crc kubenswrapper[5017]: I0129 06:54:17.965510 5017 generic.go:334] "Generic (PLEG): container finished" podID="223272bf-db73-426c-ad7e-78093ad4316a" containerID="765e5c7055966bd5123ab29138201b56f855c7c00e3b585bfed023db37b82943" exitCode=0 Jan 29 06:54:17 crc kubenswrapper[5017]: I0129 06:54:17.965827 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75dbb546bf-zsnd5" event={"ID":"223272bf-db73-426c-ad7e-78093ad4316a","Type":"ContainerDied","Data":"765e5c7055966bd5123ab29138201b56f855c7c00e3b585bfed023db37b82943"} Jan 29 06:54:17 crc kubenswrapper[5017]: I0129 06:54:17.966178 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75dbb546bf-zsnd5" event={"ID":"223272bf-db73-426c-ad7e-78093ad4316a","Type":"ContainerStarted","Data":"5ec1f2da8d084bbf63c84f935a93a8d43f513fd39374bcb78564c645994c5e00"} Jan 29 06:54:17 crc kubenswrapper[5017]: I0129 06:54:17.978246 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"68d01bb7-534e-47c7-854c-c96384ad8df4","Type":"ContainerDied","Data":"c016d1e0810fbefe9cd1515d777f4eaaf6962031d79318f13af8ca6b88098fdb"} Jan 29 06:54:17 crc kubenswrapper[5017]: I0129 06:54:17.978336 5017 scope.go:117] "RemoveContainer" containerID="98e7d947d63316f6e727592605d37b6ca781155f08d8240ade6dd1626b1eddf7" Jan 29 06:54:17 crc kubenswrapper[5017]: I0129 06:54:17.978334 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 06:54:17 crc kubenswrapper[5017]: I0129 06:54:17.981739 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b1646d23-d933-405f-a4d1-02f53682ae8f","Type":"ContainerStarted","Data":"7a390f7078ba26e4c5a797fff3fc5cd253b44abb14f020c3440ff4ee9ea33489"} Jan 29 06:54:18 crc kubenswrapper[5017]: I0129 06:54:18.133665 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-544777f6b8-l4dw8" Jan 29 06:54:18 crc kubenswrapper[5017]: I0129 06:54:18.234014 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-74b97c48c4-nf2gp"] Jan 29 06:54:18 crc kubenswrapper[5017]: I0129 06:54:18.234372 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-74b97c48c4-nf2gp" podUID="e995b541-6d6d-4dbd-a7d6-e4b607becac7" containerName="barbican-api-log" containerID="cri-o://e63bdb1df978a7c6a9b3c40cff3f9a2c1b80fcb601dd962445d25948bf8be211" gracePeriod=30 Jan 29 06:54:18 crc kubenswrapper[5017]: I0129 06:54:18.234586 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-74b97c48c4-nf2gp" podUID="e995b541-6d6d-4dbd-a7d6-e4b607becac7" containerName="barbican-api" containerID="cri-o://370c19a18f0b3853d88d547699e1b86265d2e6c07395d944566841655e9949cc" gracePeriod=30 Jan 29 06:54:18 crc kubenswrapper[5017]: I0129 06:54:18.281748 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 06:54:18 crc kubenswrapper[5017]: I0129 06:54:18.315398 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 29 06:54:18 crc kubenswrapper[5017]: I0129 06:54:18.323865 5017 scope.go:117] "RemoveContainer" containerID="cc2b213cdad66ca060e35d230675b54206effa2d060e5c8b4fe4a82718618f8c" Jan 29 06:54:18 crc kubenswrapper[5017]: I0129 06:54:18.375687 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ffc178e-6c5b-45ce-8810-2e897e83a0a9" path="/var/lib/kubelet/pods/0ffc178e-6c5b-45ce-8810-2e897e83a0a9/volumes" Jan 29 06:54:18 crc kubenswrapper[5017]: I0129 06:54:18.376534 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68d01bb7-534e-47c7-854c-c96384ad8df4" path="/var/lib/kubelet/pods/68d01bb7-534e-47c7-854c-c96384ad8df4/volumes" Jan 29 06:54:18 crc kubenswrapper[5017]: I0129 06:54:18.377894 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 29 06:54:18 crc kubenswrapper[5017]: E0129 06:54:18.378308 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68d01bb7-534e-47c7-854c-c96384ad8df4" containerName="sg-core" Jan 29 06:54:18 crc kubenswrapper[5017]: I0129 06:54:18.378334 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="68d01bb7-534e-47c7-854c-c96384ad8df4" containerName="sg-core" Jan 29 06:54:18 crc kubenswrapper[5017]: E0129 06:54:18.378361 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68d01bb7-534e-47c7-854c-c96384ad8df4" containerName="ceilometer-central-agent" Jan 29 06:54:18 crc kubenswrapper[5017]: I0129 06:54:18.378369 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="68d01bb7-534e-47c7-854c-c96384ad8df4" containerName="ceilometer-central-agent" Jan 29 06:54:18 crc kubenswrapper[5017]: E0129 06:54:18.378383 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ffc178e-6c5b-45ce-8810-2e897e83a0a9" containerName="dnsmasq-dns" Jan 29 
06:54:18 crc kubenswrapper[5017]: I0129 06:54:18.378389 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ffc178e-6c5b-45ce-8810-2e897e83a0a9" containerName="dnsmasq-dns" Jan 29 06:54:18 crc kubenswrapper[5017]: E0129 06:54:18.378410 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68d01bb7-534e-47c7-854c-c96384ad8df4" containerName="proxy-httpd" Jan 29 06:54:18 crc kubenswrapper[5017]: I0129 06:54:18.378417 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="68d01bb7-534e-47c7-854c-c96384ad8df4" containerName="proxy-httpd" Jan 29 06:54:18 crc kubenswrapper[5017]: E0129 06:54:18.378427 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68d01bb7-534e-47c7-854c-c96384ad8df4" containerName="ceilometer-notification-agent" Jan 29 06:54:18 crc kubenswrapper[5017]: I0129 06:54:18.378434 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="68d01bb7-534e-47c7-854c-c96384ad8df4" containerName="ceilometer-notification-agent" Jan 29 06:54:18 crc kubenswrapper[5017]: E0129 06:54:18.378442 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ffc178e-6c5b-45ce-8810-2e897e83a0a9" containerName="init" Jan 29 06:54:18 crc kubenswrapper[5017]: I0129 06:54:18.378447 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ffc178e-6c5b-45ce-8810-2e897e83a0a9" containerName="init" Jan 29 06:54:18 crc kubenswrapper[5017]: I0129 06:54:18.378644 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="68d01bb7-534e-47c7-854c-c96384ad8df4" containerName="sg-core" Jan 29 06:54:18 crc kubenswrapper[5017]: I0129 06:54:18.378676 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="68d01bb7-534e-47c7-854c-c96384ad8df4" containerName="ceilometer-central-agent" Jan 29 06:54:18 crc kubenswrapper[5017]: I0129 06:54:18.378688 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="68d01bb7-534e-47c7-854c-c96384ad8df4" containerName="proxy-httpd" Jan 29 06:54:18 crc kubenswrapper[5017]: I0129 06:54:18.378704 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ffc178e-6c5b-45ce-8810-2e897e83a0a9" containerName="dnsmasq-dns" Jan 29 06:54:18 crc kubenswrapper[5017]: I0129 06:54:18.378715 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="68d01bb7-534e-47c7-854c-c96384ad8df4" containerName="ceilometer-notification-agent" Jan 29 06:54:18 crc kubenswrapper[5017]: I0129 06:54:18.395364 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 06:54:18 crc kubenswrapper[5017]: I0129 06:54:18.395500 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 06:54:18 crc kubenswrapper[5017]: I0129 06:54:18.405404 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 29 06:54:18 crc kubenswrapper[5017]: I0129 06:54:18.412614 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 29 06:54:18 crc kubenswrapper[5017]: I0129 06:54:18.426513 5017 scope.go:117] "RemoveContainer" containerID="2a7de7ba906eb493a55aae1e90816a05485d4884963524286fe87604184303b6" Jan 29 06:54:18 crc kubenswrapper[5017]: I0129 06:54:18.462070 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a438aaf2-d0d9-49b0-b3c4-347c3f9f8453-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a438aaf2-d0d9-49b0-b3c4-347c3f9f8453\") " pod="openstack/ceilometer-0" Jan 29 06:54:18 crc kubenswrapper[5017]: I0129 06:54:18.462546 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a438aaf2-d0d9-49b0-b3c4-347c3f9f8453-log-httpd\") pod \"ceilometer-0\" (UID: \"a438aaf2-d0d9-49b0-b3c4-347c3f9f8453\") " pod="openstack/ceilometer-0" Jan 29 06:54:18 crc kubenswrapper[5017]: I0129 06:54:18.462622 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a438aaf2-d0d9-49b0-b3c4-347c3f9f8453-run-httpd\") pod \"ceilometer-0\" (UID: \"a438aaf2-d0d9-49b0-b3c4-347c3f9f8453\") " pod="openstack/ceilometer-0" Jan 29 06:54:18 crc kubenswrapper[5017]: I0129 06:54:18.462673 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a438aaf2-d0d9-49b0-b3c4-347c3f9f8453-config-data\") pod \"ceilometer-0\" (UID: \"a438aaf2-d0d9-49b0-b3c4-347c3f9f8453\") " pod="openstack/ceilometer-0" Jan 29 06:54:18 crc kubenswrapper[5017]: I0129 06:54:18.462749 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a438aaf2-d0d9-49b0-b3c4-347c3f9f8453-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a438aaf2-d0d9-49b0-b3c4-347c3f9f8453\") " pod="openstack/ceilometer-0" Jan 29 06:54:18 crc kubenswrapper[5017]: I0129 06:54:18.462781 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxt7g\" (UniqueName: \"kubernetes.io/projected/a438aaf2-d0d9-49b0-b3c4-347c3f9f8453-kube-api-access-lxt7g\") pod \"ceilometer-0\" (UID: \"a438aaf2-d0d9-49b0-b3c4-347c3f9f8453\") " pod="openstack/ceilometer-0" Jan 29 06:54:18 crc kubenswrapper[5017]: I0129 06:54:18.462809 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a438aaf2-d0d9-49b0-b3c4-347c3f9f8453-scripts\") pod \"ceilometer-0\" (UID: \"a438aaf2-d0d9-49b0-b3c4-347c3f9f8453\") " pod="openstack/ceilometer-0" Jan 29 06:54:18 crc kubenswrapper[5017]: I0129 06:54:18.518379 5017 scope.go:117] "RemoveContainer" containerID="3532343237aa5a78f92f88f321a808014a552e93f04b7c2ba2f4c217e74783c7" Jan 29 06:54:18 crc kubenswrapper[5017]: I0129 06:54:18.565201 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a438aaf2-d0d9-49b0-b3c4-347c3f9f8453-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a438aaf2-d0d9-49b0-b3c4-347c3f9f8453\") " pod="openstack/ceilometer-0" Jan 29 06:54:18 crc kubenswrapper[5017]: I0129 06:54:18.565286 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a438aaf2-d0d9-49b0-b3c4-347c3f9f8453-log-httpd\") pod \"ceilometer-0\" (UID: \"a438aaf2-d0d9-49b0-b3c4-347c3f9f8453\") " pod="openstack/ceilometer-0" Jan 29 06:54:18 crc kubenswrapper[5017]: I0129 06:54:18.565374 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a438aaf2-d0d9-49b0-b3c4-347c3f9f8453-run-httpd\") pod \"ceilometer-0\" (UID: \"a438aaf2-d0d9-49b0-b3c4-347c3f9f8453\") " pod="openstack/ceilometer-0" Jan 29 06:54:18 crc kubenswrapper[5017]: I0129 06:54:18.565442 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a438aaf2-d0d9-49b0-b3c4-347c3f9f8453-config-data\") pod \"ceilometer-0\" (UID: \"a438aaf2-d0d9-49b0-b3c4-347c3f9f8453\") " pod="openstack/ceilometer-0" Jan 29 06:54:18 crc kubenswrapper[5017]: I0129 06:54:18.565526 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a438aaf2-d0d9-49b0-b3c4-347c3f9f8453-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a438aaf2-d0d9-49b0-b3c4-347c3f9f8453\") " pod="openstack/ceilometer-0" Jan 29 06:54:18 crc kubenswrapper[5017]: I0129 06:54:18.565564 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxt7g\" (UniqueName: \"kubernetes.io/projected/a438aaf2-d0d9-49b0-b3c4-347c3f9f8453-kube-api-access-lxt7g\") pod \"ceilometer-0\" (UID: \"a438aaf2-d0d9-49b0-b3c4-347c3f9f8453\") " pod="openstack/ceilometer-0" Jan 29 06:54:18 crc kubenswrapper[5017]: I0129 06:54:18.565600 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a438aaf2-d0d9-49b0-b3c4-347c3f9f8453-scripts\") pod \"ceilometer-0\" (UID: \"a438aaf2-d0d9-49b0-b3c4-347c3f9f8453\") " pod="openstack/ceilometer-0" Jan 29 06:54:18 crc kubenswrapper[5017]: I0129 06:54:18.566497 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a438aaf2-d0d9-49b0-b3c4-347c3f9f8453-log-httpd\") pod \"ceilometer-0\" (UID: \"a438aaf2-d0d9-49b0-b3c4-347c3f9f8453\") " pod="openstack/ceilometer-0" Jan 29 06:54:18 crc kubenswrapper[5017]: I0129 06:54:18.567493 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a438aaf2-d0d9-49b0-b3c4-347c3f9f8453-run-httpd\") pod \"ceilometer-0\" (UID: \"a438aaf2-d0d9-49b0-b3c4-347c3f9f8453\") " pod="openstack/ceilometer-0" Jan 29 06:54:18 crc kubenswrapper[5017]: I0129 06:54:18.571639 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a438aaf2-d0d9-49b0-b3c4-347c3f9f8453-scripts\") pod \"ceilometer-0\" (UID: \"a438aaf2-d0d9-49b0-b3c4-347c3f9f8453\") " pod="openstack/ceilometer-0" Jan 29 06:54:18 crc kubenswrapper[5017]: I0129 06:54:18.576917 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a438aaf2-d0d9-49b0-b3c4-347c3f9f8453-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a438aaf2-d0d9-49b0-b3c4-347c3f9f8453\") " pod="openstack/ceilometer-0" Jan 29 06:54:18 crc kubenswrapper[5017]: I0129 06:54:18.578647 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a438aaf2-d0d9-49b0-b3c4-347c3f9f8453-config-data\") pod \"ceilometer-0\" (UID: \"a438aaf2-d0d9-49b0-b3c4-347c3f9f8453\") " pod="openstack/ceilometer-0" Jan 29 06:54:18 crc kubenswrapper[5017]: I0129 06:54:18.585447 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a438aaf2-d0d9-49b0-b3c4-347c3f9f8453-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a438aaf2-d0d9-49b0-b3c4-347c3f9f8453\") " pod="openstack/ceilometer-0" Jan 29 06:54:18 crc kubenswrapper[5017]: I0129 06:54:18.599740 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxt7g\" (UniqueName: \"kubernetes.io/projected/a438aaf2-d0d9-49b0-b3c4-347c3f9f8453-kube-api-access-lxt7g\") pod \"ceilometer-0\" (UID: \"a438aaf2-d0d9-49b0-b3c4-347c3f9f8453\") " pod="openstack/ceilometer-0" Jan 29 06:54:18 crc kubenswrapper[5017]: I0129 06:54:18.813897 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 06:54:19 crc kubenswrapper[5017]: I0129 06:54:19.047319 5017 generic.go:334] "Generic (PLEG): container finished" podID="e995b541-6d6d-4dbd-a7d6-e4b607becac7" containerID="e63bdb1df978a7c6a9b3c40cff3f9a2c1b80fcb601dd962445d25948bf8be211" exitCode=143 Jan 29 06:54:19 crc kubenswrapper[5017]: I0129 06:54:19.047487 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-74b97c48c4-nf2gp" event={"ID":"e995b541-6d6d-4dbd-a7d6-e4b607becac7","Type":"ContainerDied","Data":"e63bdb1df978a7c6a9b3c40cff3f9a2c1b80fcb601dd962445d25948bf8be211"} Jan 29 06:54:19 crc kubenswrapper[5017]: I0129 06:54:19.072376 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b8470712-09f7-4366-a6fd-ab5dbc3c3192","Type":"ContainerStarted","Data":"442e5abb015fed4f8c4c9b13038640d3c19653d0278b5712bce3dd46a40c4b6d"} Jan 29 06:54:19 crc kubenswrapper[5017]: I0129 06:54:19.077879 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 29 06:54:19 crc kubenswrapper[5017]: I0129 06:54:19.081248 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b1646d23-d933-405f-a4d1-02f53682ae8f","Type":"ContainerStarted","Data":"b27ac45c4d82a6936dcf86653eb9218706084b8e20aff2ea61c0bb24c4f2dffa"} Jan 29 06:54:19 crc kubenswrapper[5017]: I0129 06:54:19.090086 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75dbb546bf-zsnd5" event={"ID":"223272bf-db73-426c-ad7e-78093ad4316a","Type":"ContainerStarted","Data":"b1e4243c0e5113057f81a19968de08adad0ab0c168be64b85e394c209e09192c"} Jan 29 06:54:19 crc kubenswrapper[5017]: I0129 06:54:19.090365 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-75dbb546bf-zsnd5" Jan 29 06:54:19 crc kubenswrapper[5017]: I0129 06:54:19.147239 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-75dbb546bf-zsnd5" podStartSLOduration=4.147210266 podStartE2EDuration="4.147210266s" podCreationTimestamp="2026-01-29 06:54:15 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:54:19.131659071 +0000 UTC m=+1145.506106671" watchObservedRunningTime="2026-01-29 06:54:19.147210266 +0000 UTC m=+1145.521657886" Jan 29 06:54:19 crc kubenswrapper[5017]: I0129 06:54:19.532707 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 06:54:20 crc kubenswrapper[5017]: I0129 06:54:20.104205 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a438aaf2-d0d9-49b0-b3c4-347c3f9f8453","Type":"ContainerStarted","Data":"bddfc11dccc706d7fdb30e3acee6e1d2b0305d795b14d4c50780355fa616ae11"} Jan 29 06:54:20 crc kubenswrapper[5017]: I0129 06:54:20.107059 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b8470712-09f7-4366-a6fd-ab5dbc3c3192","Type":"ContainerStarted","Data":"6cca86191df9f6f4e268e89b2a32499fc588d1bcd097392c370bdb142ec5eb90"} Jan 29 06:54:20 crc kubenswrapper[5017]: I0129 06:54:20.110268 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b1646d23-d933-405f-a4d1-02f53682ae8f","Type":"ContainerStarted","Data":"47bc0716fa4fc67fb2503a178c3198aa5b8f35b0669ebad226a6d9a34e9eb250"} Jan 29 06:54:20 crc kubenswrapper[5017]: I0129 06:54:20.110519 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="b1646d23-d933-405f-a4d1-02f53682ae8f" containerName="cinder-api-log" containerID="cri-o://b27ac45c4d82a6936dcf86653eb9218706084b8e20aff2ea61c0bb24c4f2dffa" gracePeriod=30 Jan 29 06:54:20 crc kubenswrapper[5017]: I0129 06:54:20.110603 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="b1646d23-d933-405f-a4d1-02f53682ae8f" containerName="cinder-api" containerID="cri-o://47bc0716fa4fc67fb2503a178c3198aa5b8f35b0669ebad226a6d9a34e9eb250" gracePeriod=30 Jan 29 06:54:20 crc kubenswrapper[5017]: I0129 06:54:20.133479 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.276192212 podStartE2EDuration="5.133417276s" podCreationTimestamp="2026-01-29 06:54:15 +0000 UTC" firstStartedPulling="2026-01-29 06:54:16.592494493 +0000 UTC m=+1142.966942103" lastFinishedPulling="2026-01-29 06:54:17.449719557 +0000 UTC m=+1143.824167167" observedRunningTime="2026-01-29 06:54:20.128580086 +0000 UTC m=+1146.503027696" watchObservedRunningTime="2026-01-29 06:54:20.133417276 +0000 UTC m=+1146.507864886" Jan 29 06:54:20 crc kubenswrapper[5017]: I0129 06:54:20.160743 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.160713339 podStartE2EDuration="5.160713339s" podCreationTimestamp="2026-01-29 06:54:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:54:20.149795061 +0000 UTC m=+1146.524242671" watchObservedRunningTime="2026-01-29 06:54:20.160713339 +0000 UTC m=+1146.535160949" Jan 29 06:54:20 crc kubenswrapper[5017]: I0129 06:54:20.795809 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 29 06:54:20 crc kubenswrapper[5017]: I0129 06:54:20.825311 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 29 06:54:20 crc kubenswrapper[5017]: I0129 06:54:20.825783 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kh2sv\" (UniqueName: \"kubernetes.io/projected/b1646d23-d933-405f-a4d1-02f53682ae8f-kube-api-access-kh2sv\") pod \"b1646d23-d933-405f-a4d1-02f53682ae8f\" (UID: \"b1646d23-d933-405f-a4d1-02f53682ae8f\") " Jan 29 06:54:20 crc kubenswrapper[5017]: I0129 06:54:20.825937 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b1646d23-d933-405f-a4d1-02f53682ae8f-config-data-custom\") pod \"b1646d23-d933-405f-a4d1-02f53682ae8f\" (UID: \"b1646d23-d933-405f-a4d1-02f53682ae8f\") " Jan 29 06:54:20 crc kubenswrapper[5017]: I0129 06:54:20.825998 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1646d23-d933-405f-a4d1-02f53682ae8f-combined-ca-bundle\") pod \"b1646d23-d933-405f-a4d1-02f53682ae8f\" (UID: \"b1646d23-d933-405f-a4d1-02f53682ae8f\") " Jan 29 06:54:20 crc kubenswrapper[5017]: I0129 06:54:20.826052 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b1646d23-d933-405f-a4d1-02f53682ae8f-scripts\") pod \"b1646d23-d933-405f-a4d1-02f53682ae8f\" (UID: \"b1646d23-d933-405f-a4d1-02f53682ae8f\") " Jan 29 06:54:20 crc kubenswrapper[5017]: I0129 06:54:20.826169 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b1646d23-d933-405f-a4d1-02f53682ae8f-etc-machine-id\") pod \"b1646d23-d933-405f-a4d1-02f53682ae8f\" (UID: \"b1646d23-d933-405f-a4d1-02f53682ae8f\") " Jan 29 06:54:20 crc kubenswrapper[5017]: I0129 06:54:20.826273 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b1646d23-d933-405f-a4d1-02f53682ae8f-logs\") pod \"b1646d23-d933-405f-a4d1-02f53682ae8f\" (UID: \"b1646d23-d933-405f-a4d1-02f53682ae8f\") " Jan 29 06:54:20 crc kubenswrapper[5017]: I0129 06:54:20.826366 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1646d23-d933-405f-a4d1-02f53682ae8f-config-data\") pod \"b1646d23-d933-405f-a4d1-02f53682ae8f\" (UID: \"b1646d23-d933-405f-a4d1-02f53682ae8f\") " Jan 29 06:54:20 crc kubenswrapper[5017]: I0129 06:54:20.832034 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1646d23-d933-405f-a4d1-02f53682ae8f-kube-api-access-kh2sv" (OuterVolumeSpecName: "kube-api-access-kh2sv") pod "b1646d23-d933-405f-a4d1-02f53682ae8f" (UID: "b1646d23-d933-405f-a4d1-02f53682ae8f"). InnerVolumeSpecName "kube-api-access-kh2sv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:54:20 crc kubenswrapper[5017]: I0129 06:54:20.834394 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1646d23-d933-405f-a4d1-02f53682ae8f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "b1646d23-d933-405f-a4d1-02f53682ae8f" (UID: "b1646d23-d933-405f-a4d1-02f53682ae8f"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:54:20 crc kubenswrapper[5017]: I0129 06:54:20.834466 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b1646d23-d933-405f-a4d1-02f53682ae8f-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "b1646d23-d933-405f-a4d1-02f53682ae8f" (UID: "b1646d23-d933-405f-a4d1-02f53682ae8f"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 06:54:20 crc kubenswrapper[5017]: I0129 06:54:20.834722 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1646d23-d933-405f-a4d1-02f53682ae8f-logs" (OuterVolumeSpecName: "logs") pod "b1646d23-d933-405f-a4d1-02f53682ae8f" (UID: "b1646d23-d933-405f-a4d1-02f53682ae8f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:54:20 crc kubenswrapper[5017]: I0129 06:54:20.845781 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1646d23-d933-405f-a4d1-02f53682ae8f-scripts" (OuterVolumeSpecName: "scripts") pod "b1646d23-d933-405f-a4d1-02f53682ae8f" (UID: "b1646d23-d933-405f-a4d1-02f53682ae8f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:54:20 crc kubenswrapper[5017]: I0129 06:54:20.890014 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1646d23-d933-405f-a4d1-02f53682ae8f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b1646d23-d933-405f-a4d1-02f53682ae8f" (UID: "b1646d23-d933-405f-a4d1-02f53682ae8f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:54:20 crc kubenswrapper[5017]: I0129 06:54:20.928469 5017 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b1646d23-d933-405f-a4d1-02f53682ae8f-logs\") on node \"crc\" DevicePath \"\"" Jan 29 06:54:20 crc kubenswrapper[5017]: I0129 06:54:20.928509 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kh2sv\" (UniqueName: \"kubernetes.io/projected/b1646d23-d933-405f-a4d1-02f53682ae8f-kube-api-access-kh2sv\") on node \"crc\" DevicePath \"\"" Jan 29 06:54:20 crc kubenswrapper[5017]: I0129 06:54:20.928521 5017 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b1646d23-d933-405f-a4d1-02f53682ae8f-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 29 06:54:20 crc kubenswrapper[5017]: I0129 06:54:20.928532 5017 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1646d23-d933-405f-a4d1-02f53682ae8f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 06:54:20 crc kubenswrapper[5017]: I0129 06:54:20.928543 5017 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b1646d23-d933-405f-a4d1-02f53682ae8f-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 06:54:20 crc kubenswrapper[5017]: I0129 06:54:20.928552 5017 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b1646d23-d933-405f-a4d1-02f53682ae8f-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 29 06:54:20 crc kubenswrapper[5017]: I0129 06:54:20.947055 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/b1646d23-d933-405f-a4d1-02f53682ae8f-config-data" (OuterVolumeSpecName: "config-data") pod "b1646d23-d933-405f-a4d1-02f53682ae8f" (UID: "b1646d23-d933-405f-a4d1-02f53682ae8f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:54:21 crc kubenswrapper[5017]: I0129 06:54:21.030769 5017 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1646d23-d933-405f-a4d1-02f53682ae8f-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 06:54:21 crc kubenswrapper[5017]: I0129 06:54:21.120796 5017 generic.go:334] "Generic (PLEG): container finished" podID="b1646d23-d933-405f-a4d1-02f53682ae8f" containerID="47bc0716fa4fc67fb2503a178c3198aa5b8f35b0669ebad226a6d9a34e9eb250" exitCode=0 Jan 29 06:54:21 crc kubenswrapper[5017]: I0129 06:54:21.120838 5017 generic.go:334] "Generic (PLEG): container finished" podID="b1646d23-d933-405f-a4d1-02f53682ae8f" containerID="b27ac45c4d82a6936dcf86653eb9218706084b8e20aff2ea61c0bb24c4f2dffa" exitCode=143 Jan 29 06:54:21 crc kubenswrapper[5017]: I0129 06:54:21.120877 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 29 06:54:21 crc kubenswrapper[5017]: I0129 06:54:21.120865 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b1646d23-d933-405f-a4d1-02f53682ae8f","Type":"ContainerDied","Data":"47bc0716fa4fc67fb2503a178c3198aa5b8f35b0669ebad226a6d9a34e9eb250"} Jan 29 06:54:21 crc kubenswrapper[5017]: I0129 06:54:21.121097 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b1646d23-d933-405f-a4d1-02f53682ae8f","Type":"ContainerDied","Data":"b27ac45c4d82a6936dcf86653eb9218706084b8e20aff2ea61c0bb24c4f2dffa"} Jan 29 06:54:21 crc kubenswrapper[5017]: I0129 06:54:21.121134 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b1646d23-d933-405f-a4d1-02f53682ae8f","Type":"ContainerDied","Data":"7a390f7078ba26e4c5a797fff3fc5cd253b44abb14f020c3440ff4ee9ea33489"} Jan 29 06:54:21 crc kubenswrapper[5017]: I0129 06:54:21.121163 5017 scope.go:117] "RemoveContainer" containerID="47bc0716fa4fc67fb2503a178c3198aa5b8f35b0669ebad226a6d9a34e9eb250" Jan 29 06:54:21 crc kubenswrapper[5017]: I0129 06:54:21.124406 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a438aaf2-d0d9-49b0-b3c4-347c3f9f8453","Type":"ContainerStarted","Data":"ece2265dfef822da54e8aac0ea5ceaab11b0c43aac0c371eb513b2a751baca21"} Jan 29 06:54:21 crc kubenswrapper[5017]: I0129 06:54:21.124445 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a438aaf2-d0d9-49b0-b3c4-347c3f9f8453","Type":"ContainerStarted","Data":"11f8f45d009b860ad648464664ec41c1022a0eb39a7e7977eae5581b5bd4b5c1"} Jan 29 06:54:21 crc kubenswrapper[5017]: I0129 06:54:21.145478 5017 scope.go:117] "RemoveContainer" containerID="b27ac45c4d82a6936dcf86653eb9218706084b8e20aff2ea61c0bb24c4f2dffa" Jan 29 06:54:21 crc kubenswrapper[5017]: I0129 06:54:21.157922 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 29 06:54:21 crc kubenswrapper[5017]: I0129 06:54:21.171445 5017 scope.go:117] "RemoveContainer" containerID="47bc0716fa4fc67fb2503a178c3198aa5b8f35b0669ebad226a6d9a34e9eb250" Jan 29 06:54:21 crc kubenswrapper[5017]: I0129 06:54:21.171971 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] 
Jan 29 06:54:21 crc kubenswrapper[5017]: E0129 06:54:21.175079 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47bc0716fa4fc67fb2503a178c3198aa5b8f35b0669ebad226a6d9a34e9eb250\": container with ID starting with 47bc0716fa4fc67fb2503a178c3198aa5b8f35b0669ebad226a6d9a34e9eb250 not found: ID does not exist" containerID="47bc0716fa4fc67fb2503a178c3198aa5b8f35b0669ebad226a6d9a34e9eb250" Jan 29 06:54:21 crc kubenswrapper[5017]: I0129 06:54:21.175120 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47bc0716fa4fc67fb2503a178c3198aa5b8f35b0669ebad226a6d9a34e9eb250"} err="failed to get container status \"47bc0716fa4fc67fb2503a178c3198aa5b8f35b0669ebad226a6d9a34e9eb250\": rpc error: code = NotFound desc = could not find container \"47bc0716fa4fc67fb2503a178c3198aa5b8f35b0669ebad226a6d9a34e9eb250\": container with ID starting with 47bc0716fa4fc67fb2503a178c3198aa5b8f35b0669ebad226a6d9a34e9eb250 not found: ID does not exist" Jan 29 06:54:21 crc kubenswrapper[5017]: I0129 06:54:21.175156 5017 scope.go:117] "RemoveContainer" containerID="b27ac45c4d82a6936dcf86653eb9218706084b8e20aff2ea61c0bb24c4f2dffa" Jan 29 06:54:21 crc kubenswrapper[5017]: E0129 06:54:21.175717 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b27ac45c4d82a6936dcf86653eb9218706084b8e20aff2ea61c0bb24c4f2dffa\": container with ID starting with b27ac45c4d82a6936dcf86653eb9218706084b8e20aff2ea61c0bb24c4f2dffa not found: ID does not exist" containerID="b27ac45c4d82a6936dcf86653eb9218706084b8e20aff2ea61c0bb24c4f2dffa" Jan 29 06:54:21 crc kubenswrapper[5017]: I0129 06:54:21.175771 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b27ac45c4d82a6936dcf86653eb9218706084b8e20aff2ea61c0bb24c4f2dffa"} err="failed to get container status \"b27ac45c4d82a6936dcf86653eb9218706084b8e20aff2ea61c0bb24c4f2dffa\": rpc error: code = NotFound desc = could not find container \"b27ac45c4d82a6936dcf86653eb9218706084b8e20aff2ea61c0bb24c4f2dffa\": container with ID starting with b27ac45c4d82a6936dcf86653eb9218706084b8e20aff2ea61c0bb24c4f2dffa not found: ID does not exist" Jan 29 06:54:21 crc kubenswrapper[5017]: I0129 06:54:21.175802 5017 scope.go:117] "RemoveContainer" containerID="47bc0716fa4fc67fb2503a178c3198aa5b8f35b0669ebad226a6d9a34e9eb250" Jan 29 06:54:21 crc kubenswrapper[5017]: I0129 06:54:21.176468 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47bc0716fa4fc67fb2503a178c3198aa5b8f35b0669ebad226a6d9a34e9eb250"} err="failed to get container status \"47bc0716fa4fc67fb2503a178c3198aa5b8f35b0669ebad226a6d9a34e9eb250\": rpc error: code = NotFound desc = could not find container \"47bc0716fa4fc67fb2503a178c3198aa5b8f35b0669ebad226a6d9a34e9eb250\": container with ID starting with 47bc0716fa4fc67fb2503a178c3198aa5b8f35b0669ebad226a6d9a34e9eb250 not found: ID does not exist" Jan 29 06:54:21 crc kubenswrapper[5017]: I0129 06:54:21.176552 5017 scope.go:117] "RemoveContainer" containerID="b27ac45c4d82a6936dcf86653eb9218706084b8e20aff2ea61c0bb24c4f2dffa" Jan 29 06:54:21 crc kubenswrapper[5017]: I0129 06:54:21.176926 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b27ac45c4d82a6936dcf86653eb9218706084b8e20aff2ea61c0bb24c4f2dffa"} err="failed to get container status 
\"b27ac45c4d82a6936dcf86653eb9218706084b8e20aff2ea61c0bb24c4f2dffa\": rpc error: code = NotFound desc = could not find container \"b27ac45c4d82a6936dcf86653eb9218706084b8e20aff2ea61c0bb24c4f2dffa\": container with ID starting with b27ac45c4d82a6936dcf86653eb9218706084b8e20aff2ea61c0bb24c4f2dffa not found: ID does not exist" Jan 29 06:54:21 crc kubenswrapper[5017]: I0129 06:54:21.195752 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 29 06:54:21 crc kubenswrapper[5017]: E0129 06:54:21.196632 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1646d23-d933-405f-a4d1-02f53682ae8f" containerName="cinder-api" Jan 29 06:54:21 crc kubenswrapper[5017]: I0129 06:54:21.196740 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1646d23-d933-405f-a4d1-02f53682ae8f" containerName="cinder-api" Jan 29 06:54:21 crc kubenswrapper[5017]: E0129 06:54:21.196816 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1646d23-d933-405f-a4d1-02f53682ae8f" containerName="cinder-api-log" Jan 29 06:54:21 crc kubenswrapper[5017]: I0129 06:54:21.196880 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1646d23-d933-405f-a4d1-02f53682ae8f" containerName="cinder-api-log" Jan 29 06:54:21 crc kubenswrapper[5017]: I0129 06:54:21.197192 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1646d23-d933-405f-a4d1-02f53682ae8f" containerName="cinder-api-log" Jan 29 06:54:21 crc kubenswrapper[5017]: I0129 06:54:21.197303 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1646d23-d933-405f-a4d1-02f53682ae8f" containerName="cinder-api" Jan 29 06:54:21 crc kubenswrapper[5017]: I0129 06:54:21.198828 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 29 06:54:21 crc kubenswrapper[5017]: I0129 06:54:21.205647 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 29 06:54:21 crc kubenswrapper[5017]: I0129 06:54:21.205695 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 29 06:54:21 crc kubenswrapper[5017]: I0129 06:54:21.207070 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 29 06:54:21 crc kubenswrapper[5017]: I0129 06:54:21.224482 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 29 06:54:21 crc kubenswrapper[5017]: I0129 06:54:21.239363 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/71f6aede-754b-476f-8082-78f0e50b6a39-etc-machine-id\") pod \"cinder-api-0\" (UID: \"71f6aede-754b-476f-8082-78f0e50b6a39\") " pod="openstack/cinder-api-0" Jan 29 06:54:21 crc kubenswrapper[5017]: I0129 06:54:21.239421 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71f6aede-754b-476f-8082-78f0e50b6a39-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"71f6aede-754b-476f-8082-78f0e50b6a39\") " pod="openstack/cinder-api-0" Jan 29 06:54:21 crc kubenswrapper[5017]: I0129 06:54:21.239460 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/71f6aede-754b-476f-8082-78f0e50b6a39-config-data-custom\") pod \"cinder-api-0\" (UID: 
\"71f6aede-754b-476f-8082-78f0e50b6a39\") " pod="openstack/cinder-api-0" Jan 29 06:54:21 crc kubenswrapper[5017]: I0129 06:54:21.239488 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/71f6aede-754b-476f-8082-78f0e50b6a39-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"71f6aede-754b-476f-8082-78f0e50b6a39\") " pod="openstack/cinder-api-0" Jan 29 06:54:21 crc kubenswrapper[5017]: I0129 06:54:21.239574 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/71f6aede-754b-476f-8082-78f0e50b6a39-public-tls-certs\") pod \"cinder-api-0\" (UID: \"71f6aede-754b-476f-8082-78f0e50b6a39\") " pod="openstack/cinder-api-0" Jan 29 06:54:21 crc kubenswrapper[5017]: I0129 06:54:21.239599 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbzbp\" (UniqueName: \"kubernetes.io/projected/71f6aede-754b-476f-8082-78f0e50b6a39-kube-api-access-mbzbp\") pod \"cinder-api-0\" (UID: \"71f6aede-754b-476f-8082-78f0e50b6a39\") " pod="openstack/cinder-api-0" Jan 29 06:54:21 crc kubenswrapper[5017]: I0129 06:54:21.239622 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71f6aede-754b-476f-8082-78f0e50b6a39-config-data\") pod \"cinder-api-0\" (UID: \"71f6aede-754b-476f-8082-78f0e50b6a39\") " pod="openstack/cinder-api-0" Jan 29 06:54:21 crc kubenswrapper[5017]: I0129 06:54:21.239672 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71f6aede-754b-476f-8082-78f0e50b6a39-logs\") pod \"cinder-api-0\" (UID: \"71f6aede-754b-476f-8082-78f0e50b6a39\") " pod="openstack/cinder-api-0" Jan 29 06:54:21 crc kubenswrapper[5017]: I0129 06:54:21.239687 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71f6aede-754b-476f-8082-78f0e50b6a39-scripts\") pod \"cinder-api-0\" (UID: \"71f6aede-754b-476f-8082-78f0e50b6a39\") " pod="openstack/cinder-api-0" Jan 29 06:54:21 crc kubenswrapper[5017]: I0129 06:54:21.342928 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/71f6aede-754b-476f-8082-78f0e50b6a39-public-tls-certs\") pod \"cinder-api-0\" (UID: \"71f6aede-754b-476f-8082-78f0e50b6a39\") " pod="openstack/cinder-api-0" Jan 29 06:54:21 crc kubenswrapper[5017]: I0129 06:54:21.343634 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbzbp\" (UniqueName: \"kubernetes.io/projected/71f6aede-754b-476f-8082-78f0e50b6a39-kube-api-access-mbzbp\") pod \"cinder-api-0\" (UID: \"71f6aede-754b-476f-8082-78f0e50b6a39\") " pod="openstack/cinder-api-0" Jan 29 06:54:21 crc kubenswrapper[5017]: I0129 06:54:21.343794 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71f6aede-754b-476f-8082-78f0e50b6a39-config-data\") pod \"cinder-api-0\" (UID: \"71f6aede-754b-476f-8082-78f0e50b6a39\") " pod="openstack/cinder-api-0" Jan 29 06:54:21 crc kubenswrapper[5017]: I0129 06:54:21.343928 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/71f6aede-754b-476f-8082-78f0e50b6a39-logs\") pod \"cinder-api-0\" (UID: \"71f6aede-754b-476f-8082-78f0e50b6a39\") " pod="openstack/cinder-api-0" Jan 29 06:54:21 crc kubenswrapper[5017]: I0129 06:54:21.344059 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71f6aede-754b-476f-8082-78f0e50b6a39-scripts\") pod \"cinder-api-0\" (UID: \"71f6aede-754b-476f-8082-78f0e50b6a39\") " pod="openstack/cinder-api-0" Jan 29 06:54:21 crc kubenswrapper[5017]: I0129 06:54:21.344257 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/71f6aede-754b-476f-8082-78f0e50b6a39-etc-machine-id\") pod \"cinder-api-0\" (UID: \"71f6aede-754b-476f-8082-78f0e50b6a39\") " pod="openstack/cinder-api-0" Jan 29 06:54:21 crc kubenswrapper[5017]: I0129 06:54:21.344441 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71f6aede-754b-476f-8082-78f0e50b6a39-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"71f6aede-754b-476f-8082-78f0e50b6a39\") " pod="openstack/cinder-api-0" Jan 29 06:54:21 crc kubenswrapper[5017]: I0129 06:54:21.344567 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/71f6aede-754b-476f-8082-78f0e50b6a39-config-data-custom\") pod \"cinder-api-0\" (UID: \"71f6aede-754b-476f-8082-78f0e50b6a39\") " pod="openstack/cinder-api-0" Jan 29 06:54:21 crc kubenswrapper[5017]: I0129 06:54:21.344681 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/71f6aede-754b-476f-8082-78f0e50b6a39-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"71f6aede-754b-476f-8082-78f0e50b6a39\") " pod="openstack/cinder-api-0" Jan 29 06:54:21 crc kubenswrapper[5017]: I0129 06:54:21.346898 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/71f6aede-754b-476f-8082-78f0e50b6a39-etc-machine-id\") pod \"cinder-api-0\" (UID: \"71f6aede-754b-476f-8082-78f0e50b6a39\") " pod="openstack/cinder-api-0" Jan 29 06:54:21 crc kubenswrapper[5017]: I0129 06:54:21.347203 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71f6aede-754b-476f-8082-78f0e50b6a39-logs\") pod \"cinder-api-0\" (UID: \"71f6aede-754b-476f-8082-78f0e50b6a39\") " pod="openstack/cinder-api-0" Jan 29 06:54:21 crc kubenswrapper[5017]: I0129 06:54:21.352056 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/71f6aede-754b-476f-8082-78f0e50b6a39-public-tls-certs\") pod \"cinder-api-0\" (UID: \"71f6aede-754b-476f-8082-78f0e50b6a39\") " pod="openstack/cinder-api-0" Jan 29 06:54:21 crc kubenswrapper[5017]: I0129 06:54:21.354112 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71f6aede-754b-476f-8082-78f0e50b6a39-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"71f6aede-754b-476f-8082-78f0e50b6a39\") " pod="openstack/cinder-api-0" Jan 29 06:54:21 crc kubenswrapper[5017]: I0129 06:54:21.354667 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/71f6aede-754b-476f-8082-78f0e50b6a39-scripts\") pod \"cinder-api-0\" (UID: \"71f6aede-754b-476f-8082-78f0e50b6a39\") " pod="openstack/cinder-api-0" Jan 29 06:54:21 crc kubenswrapper[5017]: I0129 06:54:21.354893 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71f6aede-754b-476f-8082-78f0e50b6a39-config-data\") pod \"cinder-api-0\" (UID: \"71f6aede-754b-476f-8082-78f0e50b6a39\") " pod="openstack/cinder-api-0" Jan 29 06:54:21 crc kubenswrapper[5017]: I0129 06:54:21.361831 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/71f6aede-754b-476f-8082-78f0e50b6a39-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"71f6aede-754b-476f-8082-78f0e50b6a39\") " pod="openstack/cinder-api-0" Jan 29 06:54:21 crc kubenswrapper[5017]: I0129 06:54:21.363344 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbzbp\" (UniqueName: \"kubernetes.io/projected/71f6aede-754b-476f-8082-78f0e50b6a39-kube-api-access-mbzbp\") pod \"cinder-api-0\" (UID: \"71f6aede-754b-476f-8082-78f0e50b6a39\") " pod="openstack/cinder-api-0" Jan 29 06:54:21 crc kubenswrapper[5017]: I0129 06:54:21.365743 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/71f6aede-754b-476f-8082-78f0e50b6a39-config-data-custom\") pod \"cinder-api-0\" (UID: \"71f6aede-754b-476f-8082-78f0e50b6a39\") " pod="openstack/cinder-api-0" Jan 29 06:54:21 crc kubenswrapper[5017]: I0129 06:54:21.518350 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 29 06:54:21 crc kubenswrapper[5017]: I0129 06:54:21.857239 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-74b97c48c4-nf2gp" Jan 29 06:54:21 crc kubenswrapper[5017]: I0129 06:54:21.959713 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e995b541-6d6d-4dbd-a7d6-e4b607becac7-config-data\") pod \"e995b541-6d6d-4dbd-a7d6-e4b607becac7\" (UID: \"e995b541-6d6d-4dbd-a7d6-e4b607becac7\") " Jan 29 06:54:21 crc kubenswrapper[5017]: I0129 06:54:21.959778 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sqpdc\" (UniqueName: \"kubernetes.io/projected/e995b541-6d6d-4dbd-a7d6-e4b607becac7-kube-api-access-sqpdc\") pod \"e995b541-6d6d-4dbd-a7d6-e4b607becac7\" (UID: \"e995b541-6d6d-4dbd-a7d6-e4b607becac7\") " Jan 29 06:54:21 crc kubenswrapper[5017]: I0129 06:54:21.959931 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e995b541-6d6d-4dbd-a7d6-e4b607becac7-logs\") pod \"e995b541-6d6d-4dbd-a7d6-e4b607becac7\" (UID: \"e995b541-6d6d-4dbd-a7d6-e4b607becac7\") " Jan 29 06:54:21 crc kubenswrapper[5017]: I0129 06:54:21.960097 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e995b541-6d6d-4dbd-a7d6-e4b607becac7-combined-ca-bundle\") pod \"e995b541-6d6d-4dbd-a7d6-e4b607becac7\" (UID: \"e995b541-6d6d-4dbd-a7d6-e4b607becac7\") " Jan 29 06:54:21 crc kubenswrapper[5017]: I0129 06:54:21.960143 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e995b541-6d6d-4dbd-a7d6-e4b607becac7-config-data-custom\") pod \"e995b541-6d6d-4dbd-a7d6-e4b607becac7\" (UID: \"e995b541-6d6d-4dbd-a7d6-e4b607becac7\") " Jan 29 06:54:21 crc kubenswrapper[5017]: I0129 06:54:21.962456 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e995b541-6d6d-4dbd-a7d6-e4b607becac7-logs" (OuterVolumeSpecName: "logs") pod "e995b541-6d6d-4dbd-a7d6-e4b607becac7" (UID: "e995b541-6d6d-4dbd-a7d6-e4b607becac7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:54:21 crc kubenswrapper[5017]: I0129 06:54:21.968542 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e995b541-6d6d-4dbd-a7d6-e4b607becac7-kube-api-access-sqpdc" (OuterVolumeSpecName: "kube-api-access-sqpdc") pod "e995b541-6d6d-4dbd-a7d6-e4b607becac7" (UID: "e995b541-6d6d-4dbd-a7d6-e4b607becac7"). InnerVolumeSpecName "kube-api-access-sqpdc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:54:21 crc kubenswrapper[5017]: I0129 06:54:21.968553 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e995b541-6d6d-4dbd-a7d6-e4b607becac7-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "e995b541-6d6d-4dbd-a7d6-e4b607becac7" (UID: "e995b541-6d6d-4dbd-a7d6-e4b607becac7"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:54:22 crc kubenswrapper[5017]: I0129 06:54:22.005495 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e995b541-6d6d-4dbd-a7d6-e4b607becac7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e995b541-6d6d-4dbd-a7d6-e4b607becac7" (UID: "e995b541-6d6d-4dbd-a7d6-e4b607becac7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:54:22 crc kubenswrapper[5017]: I0129 06:54:22.021518 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e995b541-6d6d-4dbd-a7d6-e4b607becac7-config-data" (OuterVolumeSpecName: "config-data") pod "e995b541-6d6d-4dbd-a7d6-e4b607becac7" (UID: "e995b541-6d6d-4dbd-a7d6-e4b607becac7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:54:22 crc kubenswrapper[5017]: I0129 06:54:22.063690 5017 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e995b541-6d6d-4dbd-a7d6-e4b607becac7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 06:54:22 crc kubenswrapper[5017]: I0129 06:54:22.063733 5017 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e995b541-6d6d-4dbd-a7d6-e4b607becac7-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 29 06:54:22 crc kubenswrapper[5017]: I0129 06:54:22.063744 5017 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e995b541-6d6d-4dbd-a7d6-e4b607becac7-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 06:54:22 crc kubenswrapper[5017]: I0129 06:54:22.063756 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sqpdc\" (UniqueName: \"kubernetes.io/projected/e995b541-6d6d-4dbd-a7d6-e4b607becac7-kube-api-access-sqpdc\") on node \"crc\" DevicePath \"\"" Jan 29 06:54:22 crc kubenswrapper[5017]: I0129 06:54:22.063769 5017 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e995b541-6d6d-4dbd-a7d6-e4b607becac7-logs\") on node \"crc\" DevicePath \"\"" Jan 29 06:54:22 crc kubenswrapper[5017]: W0129 06:54:22.081446 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod71f6aede_754b_476f_8082_78f0e50b6a39.slice/crio-7266fb2e3fc5ecae384df96f6ce26886429801b6b37338710a04f93a864dbe88 WatchSource:0}: Error finding container 7266fb2e3fc5ecae384df96f6ce26886429801b6b37338710a04f93a864dbe88: Status 404 returned error can't find the container with id 7266fb2e3fc5ecae384df96f6ce26886429801b6b37338710a04f93a864dbe88 Jan 29 06:54:22 crc kubenswrapper[5017]: I0129 06:54:22.087541 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 29 06:54:22 crc kubenswrapper[5017]: I0129 06:54:22.139992 5017 generic.go:334] "Generic (PLEG): container finished" podID="e995b541-6d6d-4dbd-a7d6-e4b607becac7" containerID="370c19a18f0b3853d88d547699e1b86265d2e6c07395d944566841655e9949cc" exitCode=0 Jan 29 06:54:22 crc kubenswrapper[5017]: I0129 06:54:22.140114 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-74b97c48c4-nf2gp" Jan 29 06:54:22 crc kubenswrapper[5017]: I0129 06:54:22.140138 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-74b97c48c4-nf2gp" event={"ID":"e995b541-6d6d-4dbd-a7d6-e4b607becac7","Type":"ContainerDied","Data":"370c19a18f0b3853d88d547699e1b86265d2e6c07395d944566841655e9949cc"} Jan 29 06:54:22 crc kubenswrapper[5017]: I0129 06:54:22.140208 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-74b97c48c4-nf2gp" event={"ID":"e995b541-6d6d-4dbd-a7d6-e4b607becac7","Type":"ContainerDied","Data":"7ff78da29f8556174fc3edb0e1a1508f3243aa4ef1c542e1cf801a49bd99f426"} Jan 29 06:54:22 crc kubenswrapper[5017]: I0129 06:54:22.140228 5017 scope.go:117] "RemoveContainer" containerID="370c19a18f0b3853d88d547699e1b86265d2e6c07395d944566841655e9949cc" Jan 29 06:54:22 crc kubenswrapper[5017]: I0129 06:54:22.148777 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"71f6aede-754b-476f-8082-78f0e50b6a39","Type":"ContainerStarted","Data":"7266fb2e3fc5ecae384df96f6ce26886429801b6b37338710a04f93a864dbe88"} Jan 29 06:54:22 crc kubenswrapper[5017]: I0129 06:54:22.151957 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a438aaf2-d0d9-49b0-b3c4-347c3f9f8453","Type":"ContainerStarted","Data":"fd92981114f789e39e1e2ee3c22ebcf65a92291722e32a006127093b113f7f96"} Jan 29 06:54:22 crc kubenswrapper[5017]: I0129 06:54:22.243356 5017 scope.go:117] "RemoveContainer" containerID="e63bdb1df978a7c6a9b3c40cff3f9a2c1b80fcb601dd962445d25948bf8be211" Jan 29 06:54:22 crc kubenswrapper[5017]: I0129 06:54:22.248728 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-74b97c48c4-nf2gp"] Jan 29 06:54:22 crc kubenswrapper[5017]: I0129 06:54:22.257995 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-74b97c48c4-nf2gp"] Jan 29 06:54:22 crc kubenswrapper[5017]: I0129 06:54:22.264901 5017 scope.go:117] "RemoveContainer" containerID="370c19a18f0b3853d88d547699e1b86265d2e6c07395d944566841655e9949cc" Jan 29 06:54:22 crc kubenswrapper[5017]: E0129 06:54:22.265521 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"370c19a18f0b3853d88d547699e1b86265d2e6c07395d944566841655e9949cc\": container with ID starting with 370c19a18f0b3853d88d547699e1b86265d2e6c07395d944566841655e9949cc not found: ID does not exist" containerID="370c19a18f0b3853d88d547699e1b86265d2e6c07395d944566841655e9949cc" Jan 29 06:54:22 crc kubenswrapper[5017]: I0129 06:54:22.265575 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"370c19a18f0b3853d88d547699e1b86265d2e6c07395d944566841655e9949cc"} err="failed to get container status \"370c19a18f0b3853d88d547699e1b86265d2e6c07395d944566841655e9949cc\": rpc error: code = NotFound desc = could not find container \"370c19a18f0b3853d88d547699e1b86265d2e6c07395d944566841655e9949cc\": container with ID starting with 370c19a18f0b3853d88d547699e1b86265d2e6c07395d944566841655e9949cc not found: ID does not exist" Jan 29 06:54:22 crc kubenswrapper[5017]: I0129 06:54:22.265607 5017 scope.go:117] "RemoveContainer" containerID="e63bdb1df978a7c6a9b3c40cff3f9a2c1b80fcb601dd962445d25948bf8be211" Jan 29 06:54:22 crc kubenswrapper[5017]: E0129 06:54:22.266111 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"e63bdb1df978a7c6a9b3c40cff3f9a2c1b80fcb601dd962445d25948bf8be211\": container with ID starting with e63bdb1df978a7c6a9b3c40cff3f9a2c1b80fcb601dd962445d25948bf8be211 not found: ID does not exist" containerID="e63bdb1df978a7c6a9b3c40cff3f9a2c1b80fcb601dd962445d25948bf8be211" Jan 29 06:54:22 crc kubenswrapper[5017]: I0129 06:54:22.266158 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e63bdb1df978a7c6a9b3c40cff3f9a2c1b80fcb601dd962445d25948bf8be211"} err="failed to get container status \"e63bdb1df978a7c6a9b3c40cff3f9a2c1b80fcb601dd962445d25948bf8be211\": rpc error: code = NotFound desc = could not find container \"e63bdb1df978a7c6a9b3c40cff3f9a2c1b80fcb601dd962445d25948bf8be211\": container with ID starting with e63bdb1df978a7c6a9b3c40cff3f9a2c1b80fcb601dd962445d25948bf8be211 not found: ID does not exist" Jan 29 06:54:22 crc kubenswrapper[5017]: I0129 06:54:22.346385 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1646d23-d933-405f-a4d1-02f53682ae8f" path="/var/lib/kubelet/pods/b1646d23-d933-405f-a4d1-02f53682ae8f/volumes" Jan 29 06:54:22 crc kubenswrapper[5017]: I0129 06:54:22.347354 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e995b541-6d6d-4dbd-a7d6-e4b607becac7" path="/var/lib/kubelet/pods/e995b541-6d6d-4dbd-a7d6-e4b607becac7/volumes" Jan 29 06:54:23 crc kubenswrapper[5017]: I0129 06:54:23.182141 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"71f6aede-754b-476f-8082-78f0e50b6a39","Type":"ContainerStarted","Data":"9244b1fe8ffaffe1ec210b4f9fe46f1fb5d4f2443ca7e7e703e6ca10fd8766d0"} Jan 29 06:54:24 crc kubenswrapper[5017]: I0129 06:54:24.197845 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"71f6aede-754b-476f-8082-78f0e50b6a39","Type":"ContainerStarted","Data":"1b90244a79764c9e3b9a5b69c14c39546fc533d032150453b20e901a8805b3fe"} Jan 29 06:54:24 crc kubenswrapper[5017]: I0129 06:54:24.198667 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 29 06:54:24 crc kubenswrapper[5017]: I0129 06:54:24.202547 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a438aaf2-d0d9-49b0-b3c4-347c3f9f8453","Type":"ContainerStarted","Data":"6ae36bd319619d1ed798645f600ff4653b390a5baecd6fe8b754440823362f33"} Jan 29 06:54:24 crc kubenswrapper[5017]: I0129 06:54:24.202831 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 29 06:54:24 crc kubenswrapper[5017]: I0129 06:54:24.232674 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.232645779 podStartE2EDuration="3.232645779s" podCreationTimestamp="2026-01-29 06:54:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:54:24.222818687 +0000 UTC m=+1150.597266297" watchObservedRunningTime="2026-01-29 06:54:24.232645779 +0000 UTC m=+1150.607093389" Jan 29 06:54:24 crc kubenswrapper[5017]: I0129 06:54:24.335825 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.716256712 podStartE2EDuration="6.335795676s" podCreationTimestamp="2026-01-29 06:54:18 +0000 UTC" firstStartedPulling="2026-01-29 06:54:19.578055408 +0000 UTC m=+1145.952503018" 
lastFinishedPulling="2026-01-29 06:54:23.197594372 +0000 UTC m=+1149.572041982" observedRunningTime="2026-01-29 06:54:24.32012295 +0000 UTC m=+1150.694570560" watchObservedRunningTime="2026-01-29 06:54:24.335795676 +0000 UTC m=+1150.710243286" Jan 29 06:54:25 crc kubenswrapper[5017]: I0129 06:54:25.602490 5017 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-7b944f8dd4-2rk49" podUID="a9431961-983b-4257-bbe6-cf1bac1261c0" containerName="neutron-httpd" probeResult="failure" output="Get \"http://10.217.0.154:9696/\": dial tcp 10.217.0.154:9696: connect: connection refused" Jan 29 06:54:26 crc kubenswrapper[5017]: I0129 06:54:26.075557 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 29 06:54:26 crc kubenswrapper[5017]: I0129 06:54:26.156392 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 06:54:26 crc kubenswrapper[5017]: I0129 06:54:26.206237 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-75dbb546bf-zsnd5" Jan 29 06:54:26 crc kubenswrapper[5017]: I0129 06:54:26.222281 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="b8470712-09f7-4366-a6fd-ab5dbc3c3192" containerName="cinder-scheduler" containerID="cri-o://442e5abb015fed4f8c4c9b13038640d3c19653d0278b5712bce3dd46a40c4b6d" gracePeriod=30 Jan 29 06:54:26 crc kubenswrapper[5017]: I0129 06:54:26.222370 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="b8470712-09f7-4366-a6fd-ab5dbc3c3192" containerName="probe" containerID="cri-o://6cca86191df9f6f4e268e89b2a32499fc588d1bcd097392c370bdb142ec5eb90" gracePeriod=30 Jan 29 06:54:26 crc kubenswrapper[5017]: I0129 06:54:26.296024 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f6f8cb849-tgzgj"] Jan 29 06:54:26 crc kubenswrapper[5017]: I0129 06:54:26.296336 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6f6f8cb849-tgzgj" podUID="3dfc8258-7043-4cfd-ac09-d9e481f12e9d" containerName="dnsmasq-dns" containerID="cri-o://8409cac67da09953edb1b7c242528a3e53c74e0361aa1204be66d67e4ac620c2" gracePeriod=10 Jan 29 06:54:26 crc kubenswrapper[5017]: I0129 06:54:26.539455 5017 patch_prober.go:28] interesting pod/machine-config-daemon-895pl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 06:54:26 crc kubenswrapper[5017]: I0129 06:54:26.539527 5017 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 06:54:26 crc kubenswrapper[5017]: I0129 06:54:26.539578 5017 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-895pl" Jan 29 06:54:26 crc kubenswrapper[5017]: I0129 06:54:26.540659 5017 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"70dfeea0251012308950e213d0ab72466a324bea818357cd6a2957c1747ca4d2"} pod="openshift-machine-config-operator/machine-config-daemon-895pl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 06:54:26 crc kubenswrapper[5017]: I0129 06:54:26.540719 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" containerID="cri-o://70dfeea0251012308950e213d0ab72466a324bea818357cd6a2957c1747ca4d2" gracePeriod=600 Jan 29 06:54:26 crc kubenswrapper[5017]: I0129 06:54:26.886892 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-8548d8d696-gk4rx" Jan 29 06:54:26 crc kubenswrapper[5017]: I0129 06:54:26.899686 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f6f8cb849-tgzgj" Jan 29 06:54:26 crc kubenswrapper[5017]: I0129 06:54:26.975129 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3dfc8258-7043-4cfd-ac09-d9e481f12e9d-ovsdbserver-sb\") pod \"3dfc8258-7043-4cfd-ac09-d9e481f12e9d\" (UID: \"3dfc8258-7043-4cfd-ac09-d9e481f12e9d\") " Jan 29 06:54:26 crc kubenswrapper[5017]: I0129 06:54:26.975227 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3dfc8258-7043-4cfd-ac09-d9e481f12e9d-config\") pod \"3dfc8258-7043-4cfd-ac09-d9e481f12e9d\" (UID: \"3dfc8258-7043-4cfd-ac09-d9e481f12e9d\") " Jan 29 06:54:26 crc kubenswrapper[5017]: I0129 06:54:26.975278 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3dfc8258-7043-4cfd-ac09-d9e481f12e9d-ovsdbserver-nb\") pod \"3dfc8258-7043-4cfd-ac09-d9e481f12e9d\" (UID: \"3dfc8258-7043-4cfd-ac09-d9e481f12e9d\") " Jan 29 06:54:26 crc kubenswrapper[5017]: I0129 06:54:26.975346 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jpzrn\" (UniqueName: \"kubernetes.io/projected/3dfc8258-7043-4cfd-ac09-d9e481f12e9d-kube-api-access-jpzrn\") pod \"3dfc8258-7043-4cfd-ac09-d9e481f12e9d\" (UID: \"3dfc8258-7043-4cfd-ac09-d9e481f12e9d\") " Jan 29 06:54:26 crc kubenswrapper[5017]: I0129 06:54:26.975393 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3dfc8258-7043-4cfd-ac09-d9e481f12e9d-dns-swift-storage-0\") pod \"3dfc8258-7043-4cfd-ac09-d9e481f12e9d\" (UID: \"3dfc8258-7043-4cfd-ac09-d9e481f12e9d\") " Jan 29 06:54:26 crc kubenswrapper[5017]: I0129 06:54:26.975774 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3dfc8258-7043-4cfd-ac09-d9e481f12e9d-dns-svc\") pod \"3dfc8258-7043-4cfd-ac09-d9e481f12e9d\" (UID: \"3dfc8258-7043-4cfd-ac09-d9e481f12e9d\") " Jan 29 06:54:26 crc kubenswrapper[5017]: I0129 06:54:26.992657 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3dfc8258-7043-4cfd-ac09-d9e481f12e9d-kube-api-access-jpzrn" (OuterVolumeSpecName: "kube-api-access-jpzrn") pod "3dfc8258-7043-4cfd-ac09-d9e481f12e9d" (UID: "3dfc8258-7043-4cfd-ac09-d9e481f12e9d"). InnerVolumeSpecName "kube-api-access-jpzrn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:54:27 crc kubenswrapper[5017]: I0129 06:54:27.061227 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-8548d8d696-gk4rx" Jan 29 06:54:27 crc kubenswrapper[5017]: I0129 06:54:27.080255 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jpzrn\" (UniqueName: \"kubernetes.io/projected/3dfc8258-7043-4cfd-ac09-d9e481f12e9d-kube-api-access-jpzrn\") on node \"crc\" DevicePath \"\"" Jan 29 06:54:27 crc kubenswrapper[5017]: I0129 06:54:27.159196 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3dfc8258-7043-4cfd-ac09-d9e481f12e9d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3dfc8258-7043-4cfd-ac09-d9e481f12e9d" (UID: "3dfc8258-7043-4cfd-ac09-d9e481f12e9d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:54:27 crc kubenswrapper[5017]: I0129 06:54:27.162698 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3dfc8258-7043-4cfd-ac09-d9e481f12e9d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3dfc8258-7043-4cfd-ac09-d9e481f12e9d" (UID: "3dfc8258-7043-4cfd-ac09-d9e481f12e9d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:54:27 crc kubenswrapper[5017]: I0129 06:54:27.182709 5017 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3dfc8258-7043-4cfd-ac09-d9e481f12e9d-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 06:54:27 crc kubenswrapper[5017]: I0129 06:54:27.182953 5017 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3dfc8258-7043-4cfd-ac09-d9e481f12e9d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 06:54:27 crc kubenswrapper[5017]: I0129 06:54:27.190343 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3dfc8258-7043-4cfd-ac09-d9e481f12e9d-config" (OuterVolumeSpecName: "config") pod "3dfc8258-7043-4cfd-ac09-d9e481f12e9d" (UID: "3dfc8258-7043-4cfd-ac09-d9e481f12e9d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:54:27 crc kubenswrapper[5017]: I0129 06:54:27.201508 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3dfc8258-7043-4cfd-ac09-d9e481f12e9d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "3dfc8258-7043-4cfd-ac09-d9e481f12e9d" (UID: "3dfc8258-7043-4cfd-ac09-d9e481f12e9d"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:54:27 crc kubenswrapper[5017]: I0129 06:54:27.211464 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3dfc8258-7043-4cfd-ac09-d9e481f12e9d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3dfc8258-7043-4cfd-ac09-d9e481f12e9d" (UID: "3dfc8258-7043-4cfd-ac09-d9e481f12e9d"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:54:27 crc kubenswrapper[5017]: I0129 06:54:27.270645 5017 generic.go:334] "Generic (PLEG): container finished" podID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerID="70dfeea0251012308950e213d0ab72466a324bea818357cd6a2957c1747ca4d2" exitCode=0 Jan 29 06:54:27 crc kubenswrapper[5017]: I0129 06:54:27.270724 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-895pl" event={"ID":"2672ef63-7861-4c3d-a1b4-03cc9d18f8e2","Type":"ContainerDied","Data":"70dfeea0251012308950e213d0ab72466a324bea818357cd6a2957c1747ca4d2"} Jan 29 06:54:27 crc kubenswrapper[5017]: I0129 06:54:27.270758 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-895pl" event={"ID":"2672ef63-7861-4c3d-a1b4-03cc9d18f8e2","Type":"ContainerStarted","Data":"f0fa6e2a79db70c1d184fc860ee5a35d194bde9485b97eabb60341d87278b250"} Jan 29 06:54:27 crc kubenswrapper[5017]: I0129 06:54:27.270776 5017 scope.go:117] "RemoveContainer" containerID="8f7da3626486d0c22b65bfd4936f285f08c55d6461ba11e5bddb44e28f11086f" Jan 29 06:54:27 crc kubenswrapper[5017]: I0129 06:54:27.286830 5017 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3dfc8258-7043-4cfd-ac09-d9e481f12e9d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 29 06:54:27 crc kubenswrapper[5017]: I0129 06:54:27.286887 5017 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3dfc8258-7043-4cfd-ac09-d9e481f12e9d-config\") on node \"crc\" DevicePath \"\"" Jan 29 06:54:27 crc kubenswrapper[5017]: I0129 06:54:27.286900 5017 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3dfc8258-7043-4cfd-ac09-d9e481f12e9d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 06:54:27 crc kubenswrapper[5017]: I0129 06:54:27.292032 5017 generic.go:334] "Generic (PLEG): container finished" podID="3dfc8258-7043-4cfd-ac09-d9e481f12e9d" containerID="8409cac67da09953edb1b7c242528a3e53c74e0361aa1204be66d67e4ac620c2" exitCode=0 Jan 29 06:54:27 crc kubenswrapper[5017]: I0129 06:54:27.292128 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6f8cb849-tgzgj" event={"ID":"3dfc8258-7043-4cfd-ac09-d9e481f12e9d","Type":"ContainerDied","Data":"8409cac67da09953edb1b7c242528a3e53c74e0361aa1204be66d67e4ac620c2"} Jan 29 06:54:27 crc kubenswrapper[5017]: I0129 06:54:27.292150 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6f6f8cb849-tgzgj" Jan 29 06:54:27 crc kubenswrapper[5017]: I0129 06:54:27.292189 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6f8cb849-tgzgj" event={"ID":"3dfc8258-7043-4cfd-ac09-d9e481f12e9d","Type":"ContainerDied","Data":"8e6098ea02dca90bb48f50fc8fd64698beb47dd1ce451e017e358aec3da250e7"} Jan 29 06:54:27 crc kubenswrapper[5017]: I0129 06:54:27.335542 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-6d6c4c6dc8-5jbvh"] Jan 29 06:54:27 crc kubenswrapper[5017]: E0129 06:54:27.340865 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3dfc8258-7043-4cfd-ac09-d9e481f12e9d" containerName="dnsmasq-dns" Jan 29 06:54:27 crc kubenswrapper[5017]: I0129 06:54:27.340906 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="3dfc8258-7043-4cfd-ac09-d9e481f12e9d" containerName="dnsmasq-dns" Jan 29 06:54:27 crc kubenswrapper[5017]: E0129 06:54:27.340927 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e995b541-6d6d-4dbd-a7d6-e4b607becac7" containerName="barbican-api" Jan 29 06:54:27 crc kubenswrapper[5017]: I0129 06:54:27.340934 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="e995b541-6d6d-4dbd-a7d6-e4b607becac7" containerName="barbican-api" Jan 29 06:54:27 crc kubenswrapper[5017]: E0129 06:54:27.340996 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3dfc8258-7043-4cfd-ac09-d9e481f12e9d" containerName="init" Jan 29 06:54:27 crc kubenswrapper[5017]: I0129 06:54:27.341007 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="3dfc8258-7043-4cfd-ac09-d9e481f12e9d" containerName="init" Jan 29 06:54:27 crc kubenswrapper[5017]: E0129 06:54:27.341038 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e995b541-6d6d-4dbd-a7d6-e4b607becac7" containerName="barbican-api-log" Jan 29 06:54:27 crc kubenswrapper[5017]: I0129 06:54:27.341045 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="e995b541-6d6d-4dbd-a7d6-e4b607becac7" containerName="barbican-api-log" Jan 29 06:54:27 crc kubenswrapper[5017]: I0129 06:54:27.341732 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="3dfc8258-7043-4cfd-ac09-d9e481f12e9d" containerName="dnsmasq-dns" Jan 29 06:54:27 crc kubenswrapper[5017]: I0129 06:54:27.341772 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="e995b541-6d6d-4dbd-a7d6-e4b607becac7" containerName="barbican-api" Jan 29 06:54:27 crc kubenswrapper[5017]: I0129 06:54:27.341795 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="e995b541-6d6d-4dbd-a7d6-e4b607becac7" containerName="barbican-api-log" Jan 29 06:54:27 crc kubenswrapper[5017]: I0129 06:54:27.347269 5017 scope.go:117] "RemoveContainer" containerID="8409cac67da09953edb1b7c242528a3e53c74e0361aa1204be66d67e4ac620c2" Jan 29 06:54:27 crc kubenswrapper[5017]: I0129 06:54:27.348923 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-6d6c4c6dc8-5jbvh" Jan 29 06:54:27 crc kubenswrapper[5017]: I0129 06:54:27.413697 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6d6c4c6dc8-5jbvh"] Jan 29 06:54:27 crc kubenswrapper[5017]: I0129 06:54:27.443193 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f6f8cb849-tgzgj"] Jan 29 06:54:27 crc kubenswrapper[5017]: I0129 06:54:27.448783 5017 scope.go:117] "RemoveContainer" containerID="2845b77e0f3271d0d2a2567d78b6fb737b425004ac8039b7a726a4ed76dce87d" Jan 29 06:54:27 crc kubenswrapper[5017]: I0129 06:54:27.453736 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6f6f8cb849-tgzgj"] Jan 29 06:54:27 crc kubenswrapper[5017]: I0129 06:54:27.522557 5017 scope.go:117] "RemoveContainer" containerID="8409cac67da09953edb1b7c242528a3e53c74e0361aa1204be66d67e4ac620c2" Jan 29 06:54:27 crc kubenswrapper[5017]: E0129 06:54:27.532516 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8409cac67da09953edb1b7c242528a3e53c74e0361aa1204be66d67e4ac620c2\": container with ID starting with 8409cac67da09953edb1b7c242528a3e53c74e0361aa1204be66d67e4ac620c2 not found: ID does not exist" containerID="8409cac67da09953edb1b7c242528a3e53c74e0361aa1204be66d67e4ac620c2" Jan 29 06:54:27 crc kubenswrapper[5017]: I0129 06:54:27.532592 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8409cac67da09953edb1b7c242528a3e53c74e0361aa1204be66d67e4ac620c2"} err="failed to get container status \"8409cac67da09953edb1b7c242528a3e53c74e0361aa1204be66d67e4ac620c2\": rpc error: code = NotFound desc = could not find container \"8409cac67da09953edb1b7c242528a3e53c74e0361aa1204be66d67e4ac620c2\": container with ID starting with 8409cac67da09953edb1b7c242528a3e53c74e0361aa1204be66d67e4ac620c2 not found: ID does not exist" Jan 29 06:54:27 crc kubenswrapper[5017]: I0129 06:54:27.532628 5017 scope.go:117] "RemoveContainer" containerID="2845b77e0f3271d0d2a2567d78b6fb737b425004ac8039b7a726a4ed76dce87d" Jan 29 06:54:27 crc kubenswrapper[5017]: E0129 06:54:27.533480 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2845b77e0f3271d0d2a2567d78b6fb737b425004ac8039b7a726a4ed76dce87d\": container with ID starting with 2845b77e0f3271d0d2a2567d78b6fb737b425004ac8039b7a726a4ed76dce87d not found: ID does not exist" containerID="2845b77e0f3271d0d2a2567d78b6fb737b425004ac8039b7a726a4ed76dce87d" Jan 29 06:54:27 crc kubenswrapper[5017]: I0129 06:54:27.533660 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2845b77e0f3271d0d2a2567d78b6fb737b425004ac8039b7a726a4ed76dce87d"} err="failed to get container status \"2845b77e0f3271d0d2a2567d78b6fb737b425004ac8039b7a726a4ed76dce87d\": rpc error: code = NotFound desc = could not find container \"2845b77e0f3271d0d2a2567d78b6fb737b425004ac8039b7a726a4ed76dce87d\": container with ID starting with 2845b77e0f3271d0d2a2567d78b6fb737b425004ac8039b7a726a4ed76dce87d not found: ID does not exist" Jan 29 06:54:27 crc kubenswrapper[5017]: I0129 06:54:27.579814 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc01ff67-baeb-47d1-90f5-9cff65c9dffa-logs\") pod \"placement-6d6c4c6dc8-5jbvh\" (UID: \"dc01ff67-baeb-47d1-90f5-9cff65c9dffa\") " 
pod="openstack/placement-6d6c4c6dc8-5jbvh" Jan 29 06:54:27 crc kubenswrapper[5017]: I0129 06:54:27.580572 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc01ff67-baeb-47d1-90f5-9cff65c9dffa-combined-ca-bundle\") pod \"placement-6d6c4c6dc8-5jbvh\" (UID: \"dc01ff67-baeb-47d1-90f5-9cff65c9dffa\") " pod="openstack/placement-6d6c4c6dc8-5jbvh" Jan 29 06:54:27 crc kubenswrapper[5017]: I0129 06:54:27.580745 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc01ff67-baeb-47d1-90f5-9cff65c9dffa-internal-tls-certs\") pod \"placement-6d6c4c6dc8-5jbvh\" (UID: \"dc01ff67-baeb-47d1-90f5-9cff65c9dffa\") " pod="openstack/placement-6d6c4c6dc8-5jbvh" Jan 29 06:54:27 crc kubenswrapper[5017]: I0129 06:54:27.580797 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc01ff67-baeb-47d1-90f5-9cff65c9dffa-scripts\") pod \"placement-6d6c4c6dc8-5jbvh\" (UID: \"dc01ff67-baeb-47d1-90f5-9cff65c9dffa\") " pod="openstack/placement-6d6c4c6dc8-5jbvh" Jan 29 06:54:27 crc kubenswrapper[5017]: I0129 06:54:27.581069 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhkzg\" (UniqueName: \"kubernetes.io/projected/dc01ff67-baeb-47d1-90f5-9cff65c9dffa-kube-api-access-vhkzg\") pod \"placement-6d6c4c6dc8-5jbvh\" (UID: \"dc01ff67-baeb-47d1-90f5-9cff65c9dffa\") " pod="openstack/placement-6d6c4c6dc8-5jbvh" Jan 29 06:54:27 crc kubenswrapper[5017]: I0129 06:54:27.581135 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc01ff67-baeb-47d1-90f5-9cff65c9dffa-public-tls-certs\") pod \"placement-6d6c4c6dc8-5jbvh\" (UID: \"dc01ff67-baeb-47d1-90f5-9cff65c9dffa\") " pod="openstack/placement-6d6c4c6dc8-5jbvh" Jan 29 06:54:27 crc kubenswrapper[5017]: I0129 06:54:27.581326 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc01ff67-baeb-47d1-90f5-9cff65c9dffa-config-data\") pod \"placement-6d6c4c6dc8-5jbvh\" (UID: \"dc01ff67-baeb-47d1-90f5-9cff65c9dffa\") " pod="openstack/placement-6d6c4c6dc8-5jbvh" Jan 29 06:54:27 crc kubenswrapper[5017]: I0129 06:54:27.683395 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc01ff67-baeb-47d1-90f5-9cff65c9dffa-config-data\") pod \"placement-6d6c4c6dc8-5jbvh\" (UID: \"dc01ff67-baeb-47d1-90f5-9cff65c9dffa\") " pod="openstack/placement-6d6c4c6dc8-5jbvh" Jan 29 06:54:27 crc kubenswrapper[5017]: I0129 06:54:27.683778 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc01ff67-baeb-47d1-90f5-9cff65c9dffa-logs\") pod \"placement-6d6c4c6dc8-5jbvh\" (UID: \"dc01ff67-baeb-47d1-90f5-9cff65c9dffa\") " pod="openstack/placement-6d6c4c6dc8-5jbvh" Jan 29 06:54:27 crc kubenswrapper[5017]: I0129 06:54:27.683817 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc01ff67-baeb-47d1-90f5-9cff65c9dffa-combined-ca-bundle\") pod \"placement-6d6c4c6dc8-5jbvh\" (UID: \"dc01ff67-baeb-47d1-90f5-9cff65c9dffa\") " 
pod="openstack/placement-6d6c4c6dc8-5jbvh" Jan 29 06:54:27 crc kubenswrapper[5017]: I0129 06:54:27.683918 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc01ff67-baeb-47d1-90f5-9cff65c9dffa-internal-tls-certs\") pod \"placement-6d6c4c6dc8-5jbvh\" (UID: \"dc01ff67-baeb-47d1-90f5-9cff65c9dffa\") " pod="openstack/placement-6d6c4c6dc8-5jbvh" Jan 29 06:54:27 crc kubenswrapper[5017]: I0129 06:54:27.684722 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc01ff67-baeb-47d1-90f5-9cff65c9dffa-scripts\") pod \"placement-6d6c4c6dc8-5jbvh\" (UID: \"dc01ff67-baeb-47d1-90f5-9cff65c9dffa\") " pod="openstack/placement-6d6c4c6dc8-5jbvh" Jan 29 06:54:27 crc kubenswrapper[5017]: I0129 06:54:27.684776 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhkzg\" (UniqueName: \"kubernetes.io/projected/dc01ff67-baeb-47d1-90f5-9cff65c9dffa-kube-api-access-vhkzg\") pod \"placement-6d6c4c6dc8-5jbvh\" (UID: \"dc01ff67-baeb-47d1-90f5-9cff65c9dffa\") " pod="openstack/placement-6d6c4c6dc8-5jbvh" Jan 29 06:54:27 crc kubenswrapper[5017]: I0129 06:54:27.684818 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc01ff67-baeb-47d1-90f5-9cff65c9dffa-public-tls-certs\") pod \"placement-6d6c4c6dc8-5jbvh\" (UID: \"dc01ff67-baeb-47d1-90f5-9cff65c9dffa\") " pod="openstack/placement-6d6c4c6dc8-5jbvh" Jan 29 06:54:27 crc kubenswrapper[5017]: I0129 06:54:27.685359 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc01ff67-baeb-47d1-90f5-9cff65c9dffa-logs\") pod \"placement-6d6c4c6dc8-5jbvh\" (UID: \"dc01ff67-baeb-47d1-90f5-9cff65c9dffa\") " pod="openstack/placement-6d6c4c6dc8-5jbvh" Jan 29 06:54:27 crc kubenswrapper[5017]: I0129 06:54:27.698885 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc01ff67-baeb-47d1-90f5-9cff65c9dffa-internal-tls-certs\") pod \"placement-6d6c4c6dc8-5jbvh\" (UID: \"dc01ff67-baeb-47d1-90f5-9cff65c9dffa\") " pod="openstack/placement-6d6c4c6dc8-5jbvh" Jan 29 06:54:27 crc kubenswrapper[5017]: I0129 06:54:27.699216 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc01ff67-baeb-47d1-90f5-9cff65c9dffa-public-tls-certs\") pod \"placement-6d6c4c6dc8-5jbvh\" (UID: \"dc01ff67-baeb-47d1-90f5-9cff65c9dffa\") " pod="openstack/placement-6d6c4c6dc8-5jbvh" Jan 29 06:54:27 crc kubenswrapper[5017]: I0129 06:54:27.699560 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc01ff67-baeb-47d1-90f5-9cff65c9dffa-config-data\") pod \"placement-6d6c4c6dc8-5jbvh\" (UID: \"dc01ff67-baeb-47d1-90f5-9cff65c9dffa\") " pod="openstack/placement-6d6c4c6dc8-5jbvh" Jan 29 06:54:27 crc kubenswrapper[5017]: I0129 06:54:27.699821 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc01ff67-baeb-47d1-90f5-9cff65c9dffa-scripts\") pod \"placement-6d6c4c6dc8-5jbvh\" (UID: \"dc01ff67-baeb-47d1-90f5-9cff65c9dffa\") " pod="openstack/placement-6d6c4c6dc8-5jbvh" Jan 29 06:54:27 crc kubenswrapper[5017]: I0129 06:54:27.704413 5017 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc01ff67-baeb-47d1-90f5-9cff65c9dffa-combined-ca-bundle\") pod \"placement-6d6c4c6dc8-5jbvh\" (UID: \"dc01ff67-baeb-47d1-90f5-9cff65c9dffa\") " pod="openstack/placement-6d6c4c6dc8-5jbvh" Jan 29 06:54:27 crc kubenswrapper[5017]: I0129 06:54:27.704836 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhkzg\" (UniqueName: \"kubernetes.io/projected/dc01ff67-baeb-47d1-90f5-9cff65c9dffa-kube-api-access-vhkzg\") pod \"placement-6d6c4c6dc8-5jbvh\" (UID: \"dc01ff67-baeb-47d1-90f5-9cff65c9dffa\") " pod="openstack/placement-6d6c4c6dc8-5jbvh" Jan 29 06:54:27 crc kubenswrapper[5017]: I0129 06:54:27.983496 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6d6c4c6dc8-5jbvh" Jan 29 06:54:28 crc kubenswrapper[5017]: I0129 06:54:28.078821 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-849cfbbc5-ctfjf" Jan 29 06:54:28 crc kubenswrapper[5017]: I0129 06:54:28.318371 5017 generic.go:334] "Generic (PLEG): container finished" podID="b8470712-09f7-4366-a6fd-ab5dbc3c3192" containerID="6cca86191df9f6f4e268e89b2a32499fc588d1bcd097392c370bdb142ec5eb90" exitCode=0 Jan 29 06:54:28 crc kubenswrapper[5017]: I0129 06:54:28.332813 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3dfc8258-7043-4cfd-ac09-d9e481f12e9d" path="/var/lib/kubelet/pods/3dfc8258-7043-4cfd-ac09-d9e481f12e9d/volumes" Jan 29 06:54:28 crc kubenswrapper[5017]: I0129 06:54:28.333679 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b8470712-09f7-4366-a6fd-ab5dbc3c3192","Type":"ContainerDied","Data":"6cca86191df9f6f4e268e89b2a32499fc588d1bcd097392c370bdb142ec5eb90"} Jan 29 06:54:28 crc kubenswrapper[5017]: I0129 06:54:28.485814 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6d6c4c6dc8-5jbvh"] Jan 29 06:54:28 crc kubenswrapper[5017]: I0129 06:54:28.771948 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-5b9cd4b645-x8pg4" Jan 29 06:54:28 crc kubenswrapper[5017]: I0129 06:54:28.868863 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-849cfbbc5-ctfjf"] Jan 29 06:54:28 crc kubenswrapper[5017]: I0129 06:54:28.869202 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-849cfbbc5-ctfjf" podUID="c3baa3ab-8aec-489f-a037-c6a3a9ff2f2e" containerName="neutron-api" containerID="cri-o://6bab2ff025e0387bec716880abfa7594c3ecb13caf41b8149ca789c498be9024" gracePeriod=30 Jan 29 06:54:28 crc kubenswrapper[5017]: I0129 06:54:28.869874 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-849cfbbc5-ctfjf" podUID="c3baa3ab-8aec-489f-a037-c6a3a9ff2f2e" containerName="neutron-httpd" containerID="cri-o://e657144fec69792e667a2f20cc070095a25ad2e62af34f5b2d8af26891296812" gracePeriod=30 Jan 29 06:54:29 crc kubenswrapper[5017]: I0129 06:54:29.336941 5017 generic.go:334] "Generic (PLEG): container finished" podID="c3baa3ab-8aec-489f-a037-c6a3a9ff2f2e" containerID="e657144fec69792e667a2f20cc070095a25ad2e62af34f5b2d8af26891296812" exitCode=0 Jan 29 06:54:29 crc kubenswrapper[5017]: I0129 06:54:29.337308 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-849cfbbc5-ctfjf" 
event={"ID":"c3baa3ab-8aec-489f-a037-c6a3a9ff2f2e","Type":"ContainerDied","Data":"e657144fec69792e667a2f20cc070095a25ad2e62af34f5b2d8af26891296812"} Jan 29 06:54:29 crc kubenswrapper[5017]: I0129 06:54:29.341618 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6d6c4c6dc8-5jbvh" event={"ID":"dc01ff67-baeb-47d1-90f5-9cff65c9dffa","Type":"ContainerStarted","Data":"443395d71d852c3ec070ffebcf4c6e95bc2745cdc77bf998d3b62968c01056ef"} Jan 29 06:54:29 crc kubenswrapper[5017]: I0129 06:54:29.341683 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6d6c4c6dc8-5jbvh" event={"ID":"dc01ff67-baeb-47d1-90f5-9cff65c9dffa","Type":"ContainerStarted","Data":"2eee62f312708ba7438eddc1dabc0de687bb3ae24d41ed266160597a9d245df9"} Jan 29 06:54:29 crc kubenswrapper[5017]: I0129 06:54:29.341704 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6d6c4c6dc8-5jbvh" event={"ID":"dc01ff67-baeb-47d1-90f5-9cff65c9dffa","Type":"ContainerStarted","Data":"a9796bd011e07eae89e477a2e8422d623a06c70465616de74892fac21cb060b7"} Jan 29 06:54:29 crc kubenswrapper[5017]: I0129 06:54:29.341835 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6d6c4c6dc8-5jbvh" Jan 29 06:54:29 crc kubenswrapper[5017]: I0129 06:54:29.365312 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-6d6c4c6dc8-5jbvh" podStartSLOduration=2.365283147 podStartE2EDuration="2.365283147s" podCreationTimestamp="2026-01-29 06:54:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:54:29.362088109 +0000 UTC m=+1155.736535719" watchObservedRunningTime="2026-01-29 06:54:29.365283147 +0000 UTC m=+1155.739730757" Jan 29 06:54:30 crc kubenswrapper[5017]: I0129 06:54:30.353226 5017 generic.go:334] "Generic (PLEG): container finished" podID="b8470712-09f7-4366-a6fd-ab5dbc3c3192" containerID="442e5abb015fed4f8c4c9b13038640d3c19653d0278b5712bce3dd46a40c4b6d" exitCode=0 Jan 29 06:54:30 crc kubenswrapper[5017]: I0129 06:54:30.353315 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b8470712-09f7-4366-a6fd-ab5dbc3c3192","Type":"ContainerDied","Data":"442e5abb015fed4f8c4c9b13038640d3c19653d0278b5712bce3dd46a40c4b6d"} Jan 29 06:54:30 crc kubenswrapper[5017]: I0129 06:54:30.353828 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b8470712-09f7-4366-a6fd-ab5dbc3c3192","Type":"ContainerDied","Data":"fbb069925f9d0b9a0f77ce849be2a9b24c9d3abfb2b2a1e5fd8e3f23d16d6569"} Jan 29 06:54:30 crc kubenswrapper[5017]: I0129 06:54:30.353848 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fbb069925f9d0b9a0f77ce849be2a9b24c9d3abfb2b2a1e5fd8e3f23d16d6569" Jan 29 06:54:30 crc kubenswrapper[5017]: I0129 06:54:30.354188 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6d6c4c6dc8-5jbvh" Jan 29 06:54:30 crc kubenswrapper[5017]: I0129 06:54:30.368900 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-74d8b8b54b-w68vj" Jan 29 06:54:30 crc kubenswrapper[5017]: I0129 06:54:30.393063 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 29 06:54:30 crc kubenswrapper[5017]: I0129 06:54:30.561000 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8470712-09f7-4366-a6fd-ab5dbc3c3192-scripts\") pod \"b8470712-09f7-4366-a6fd-ab5dbc3c3192\" (UID: \"b8470712-09f7-4366-a6fd-ab5dbc3c3192\") " Jan 29 06:54:30 crc kubenswrapper[5017]: I0129 06:54:30.561117 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b8470712-09f7-4366-a6fd-ab5dbc3c3192-etc-machine-id\") pod \"b8470712-09f7-4366-a6fd-ab5dbc3c3192\" (UID: \"b8470712-09f7-4366-a6fd-ab5dbc3c3192\") " Jan 29 06:54:30 crc kubenswrapper[5017]: I0129 06:54:30.561274 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b8470712-09f7-4366-a6fd-ab5dbc3c3192-config-data-custom\") pod \"b8470712-09f7-4366-a6fd-ab5dbc3c3192\" (UID: \"b8470712-09f7-4366-a6fd-ab5dbc3c3192\") " Jan 29 06:54:30 crc kubenswrapper[5017]: I0129 06:54:30.561304 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8470712-09f7-4366-a6fd-ab5dbc3c3192-config-data\") pod \"b8470712-09f7-4366-a6fd-ab5dbc3c3192\" (UID: \"b8470712-09f7-4366-a6fd-ab5dbc3c3192\") " Jan 29 06:54:30 crc kubenswrapper[5017]: I0129 06:54:30.561353 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8470712-09f7-4366-a6fd-ab5dbc3c3192-combined-ca-bundle\") pod \"b8470712-09f7-4366-a6fd-ab5dbc3c3192\" (UID: \"b8470712-09f7-4366-a6fd-ab5dbc3c3192\") " Jan 29 06:54:30 crc kubenswrapper[5017]: I0129 06:54:30.561399 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mbk2g\" (UniqueName: \"kubernetes.io/projected/b8470712-09f7-4366-a6fd-ab5dbc3c3192-kube-api-access-mbk2g\") pod \"b8470712-09f7-4366-a6fd-ab5dbc3c3192\" (UID: \"b8470712-09f7-4366-a6fd-ab5dbc3c3192\") " Jan 29 06:54:30 crc kubenswrapper[5017]: I0129 06:54:30.562949 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b8470712-09f7-4366-a6fd-ab5dbc3c3192-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "b8470712-09f7-4366-a6fd-ab5dbc3c3192" (UID: "b8470712-09f7-4366-a6fd-ab5dbc3c3192"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 06:54:30 crc kubenswrapper[5017]: I0129 06:54:30.585824 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8470712-09f7-4366-a6fd-ab5dbc3c3192-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "b8470712-09f7-4366-a6fd-ab5dbc3c3192" (UID: "b8470712-09f7-4366-a6fd-ab5dbc3c3192"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:54:30 crc kubenswrapper[5017]: I0129 06:54:30.593088 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8470712-09f7-4366-a6fd-ab5dbc3c3192-scripts" (OuterVolumeSpecName: "scripts") pod "b8470712-09f7-4366-a6fd-ab5dbc3c3192" (UID: "b8470712-09f7-4366-a6fd-ab5dbc3c3192"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:54:30 crc kubenswrapper[5017]: I0129 06:54:30.601296 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8470712-09f7-4366-a6fd-ab5dbc3c3192-kube-api-access-mbk2g" (OuterVolumeSpecName: "kube-api-access-mbk2g") pod "b8470712-09f7-4366-a6fd-ab5dbc3c3192" (UID: "b8470712-09f7-4366-a6fd-ab5dbc3c3192"). InnerVolumeSpecName "kube-api-access-mbk2g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:54:30 crc kubenswrapper[5017]: I0129 06:54:30.666588 5017 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8470712-09f7-4366-a6fd-ab5dbc3c3192-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 06:54:30 crc kubenswrapper[5017]: I0129 06:54:30.666626 5017 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b8470712-09f7-4366-a6fd-ab5dbc3c3192-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 29 06:54:30 crc kubenswrapper[5017]: I0129 06:54:30.666636 5017 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b8470712-09f7-4366-a6fd-ab5dbc3c3192-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 29 06:54:30 crc kubenswrapper[5017]: I0129 06:54:30.666653 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mbk2g\" (UniqueName: \"kubernetes.io/projected/b8470712-09f7-4366-a6fd-ab5dbc3c3192-kube-api-access-mbk2g\") on node \"crc\" DevicePath \"\"" Jan 29 06:54:30 crc kubenswrapper[5017]: I0129 06:54:30.687762 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8470712-09f7-4366-a6fd-ab5dbc3c3192-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b8470712-09f7-4366-a6fd-ab5dbc3c3192" (UID: "b8470712-09f7-4366-a6fd-ab5dbc3c3192"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:54:30 crc kubenswrapper[5017]: I0129 06:54:30.762160 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8470712-09f7-4366-a6fd-ab5dbc3c3192-config-data" (OuterVolumeSpecName: "config-data") pod "b8470712-09f7-4366-a6fd-ab5dbc3c3192" (UID: "b8470712-09f7-4366-a6fd-ab5dbc3c3192"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:54:30 crc kubenswrapper[5017]: I0129 06:54:30.770136 5017 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8470712-09f7-4366-a6fd-ab5dbc3c3192-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 06:54:30 crc kubenswrapper[5017]: I0129 06:54:30.770176 5017 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8470712-09f7-4366-a6fd-ab5dbc3c3192-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 06:54:31 crc kubenswrapper[5017]: I0129 06:54:31.362968 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 29 06:54:31 crc kubenswrapper[5017]: I0129 06:54:31.397810 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 06:54:31 crc kubenswrapper[5017]: I0129 06:54:31.408244 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 06:54:31 crc kubenswrapper[5017]: I0129 06:54:31.427115 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 06:54:31 crc kubenswrapper[5017]: E0129 06:54:31.427601 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8470712-09f7-4366-a6fd-ab5dbc3c3192" containerName="cinder-scheduler" Jan 29 06:54:31 crc kubenswrapper[5017]: I0129 06:54:31.427630 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8470712-09f7-4366-a6fd-ab5dbc3c3192" containerName="cinder-scheduler" Jan 29 06:54:31 crc kubenswrapper[5017]: E0129 06:54:31.427650 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8470712-09f7-4366-a6fd-ab5dbc3c3192" containerName="probe" Jan 29 06:54:31 crc kubenswrapper[5017]: I0129 06:54:31.427658 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8470712-09f7-4366-a6fd-ab5dbc3c3192" containerName="probe" Jan 29 06:54:31 crc kubenswrapper[5017]: I0129 06:54:31.427846 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8470712-09f7-4366-a6fd-ab5dbc3c3192" containerName="probe" Jan 29 06:54:31 crc kubenswrapper[5017]: I0129 06:54:31.427867 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8470712-09f7-4366-a6fd-ab5dbc3c3192" containerName="cinder-scheduler" Jan 29 06:54:31 crc kubenswrapper[5017]: I0129 06:54:31.428948 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 29 06:54:31 crc kubenswrapper[5017]: I0129 06:54:31.432270 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 29 06:54:31 crc kubenswrapper[5017]: I0129 06:54:31.446729 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 06:54:31 crc kubenswrapper[5017]: I0129 06:54:31.587070 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c69fc6f-43e9-4fe5-b964-8db89e6ab354-scripts\") pod \"cinder-scheduler-0\" (UID: \"9c69fc6f-43e9-4fe5-b964-8db89e6ab354\") " pod="openstack/cinder-scheduler-0" Jan 29 06:54:31 crc kubenswrapper[5017]: I0129 06:54:31.587522 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxz9k\" (UniqueName: \"kubernetes.io/projected/9c69fc6f-43e9-4fe5-b964-8db89e6ab354-kube-api-access-dxz9k\") pod \"cinder-scheduler-0\" (UID: \"9c69fc6f-43e9-4fe5-b964-8db89e6ab354\") " pod="openstack/cinder-scheduler-0" Jan 29 06:54:31 crc kubenswrapper[5017]: I0129 06:54:31.587900 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9c69fc6f-43e9-4fe5-b964-8db89e6ab354-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"9c69fc6f-43e9-4fe5-b964-8db89e6ab354\") " pod="openstack/cinder-scheduler-0" Jan 29 06:54:31 crc kubenswrapper[5017]: I0129 06:54:31.588343 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c69fc6f-43e9-4fe5-b964-8db89e6ab354-config-data\") pod \"cinder-scheduler-0\" (UID: \"9c69fc6f-43e9-4fe5-b964-8db89e6ab354\") " pod="openstack/cinder-scheduler-0" Jan 29 06:54:31 crc kubenswrapper[5017]: I0129 06:54:31.588428 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9c69fc6f-43e9-4fe5-b964-8db89e6ab354-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"9c69fc6f-43e9-4fe5-b964-8db89e6ab354\") " pod="openstack/cinder-scheduler-0" Jan 29 06:54:31 crc kubenswrapper[5017]: I0129 06:54:31.588702 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c69fc6f-43e9-4fe5-b964-8db89e6ab354-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"9c69fc6f-43e9-4fe5-b964-8db89e6ab354\") " pod="openstack/cinder-scheduler-0" Jan 29 06:54:31 crc kubenswrapper[5017]: I0129 06:54:31.690572 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c69fc6f-43e9-4fe5-b964-8db89e6ab354-scripts\") pod \"cinder-scheduler-0\" (UID: \"9c69fc6f-43e9-4fe5-b964-8db89e6ab354\") " pod="openstack/cinder-scheduler-0" Jan 29 06:54:31 crc kubenswrapper[5017]: I0129 06:54:31.691045 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxz9k\" (UniqueName: \"kubernetes.io/projected/9c69fc6f-43e9-4fe5-b964-8db89e6ab354-kube-api-access-dxz9k\") pod \"cinder-scheduler-0\" (UID: \"9c69fc6f-43e9-4fe5-b964-8db89e6ab354\") " pod="openstack/cinder-scheduler-0" Jan 29 06:54:31 crc kubenswrapper[5017]: I0129 06:54:31.691097 5017 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9c69fc6f-43e9-4fe5-b964-8db89e6ab354-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"9c69fc6f-43e9-4fe5-b964-8db89e6ab354\") " pod="openstack/cinder-scheduler-0" Jan 29 06:54:31 crc kubenswrapper[5017]: I0129 06:54:31.691121 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c69fc6f-43e9-4fe5-b964-8db89e6ab354-config-data\") pod \"cinder-scheduler-0\" (UID: \"9c69fc6f-43e9-4fe5-b964-8db89e6ab354\") " pod="openstack/cinder-scheduler-0" Jan 29 06:54:31 crc kubenswrapper[5017]: I0129 06:54:31.691137 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9c69fc6f-43e9-4fe5-b964-8db89e6ab354-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"9c69fc6f-43e9-4fe5-b964-8db89e6ab354\") " pod="openstack/cinder-scheduler-0" Jan 29 06:54:31 crc kubenswrapper[5017]: I0129 06:54:31.691192 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c69fc6f-43e9-4fe5-b964-8db89e6ab354-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"9c69fc6f-43e9-4fe5-b964-8db89e6ab354\") " pod="openstack/cinder-scheduler-0" Jan 29 06:54:31 crc kubenswrapper[5017]: I0129 06:54:31.692252 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9c69fc6f-43e9-4fe5-b964-8db89e6ab354-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"9c69fc6f-43e9-4fe5-b964-8db89e6ab354\") " pod="openstack/cinder-scheduler-0" Jan 29 06:54:31 crc kubenswrapper[5017]: I0129 06:54:31.698729 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c69fc6f-43e9-4fe5-b964-8db89e6ab354-scripts\") pod \"cinder-scheduler-0\" (UID: \"9c69fc6f-43e9-4fe5-b964-8db89e6ab354\") " pod="openstack/cinder-scheduler-0" Jan 29 06:54:31 crc kubenswrapper[5017]: I0129 06:54:31.699673 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c69fc6f-43e9-4fe5-b964-8db89e6ab354-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"9c69fc6f-43e9-4fe5-b964-8db89e6ab354\") " pod="openstack/cinder-scheduler-0" Jan 29 06:54:31 crc kubenswrapper[5017]: I0129 06:54:31.708512 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9c69fc6f-43e9-4fe5-b964-8db89e6ab354-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"9c69fc6f-43e9-4fe5-b964-8db89e6ab354\") " pod="openstack/cinder-scheduler-0" Jan 29 06:54:31 crc kubenswrapper[5017]: I0129 06:54:31.709799 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c69fc6f-43e9-4fe5-b964-8db89e6ab354-config-data\") pod \"cinder-scheduler-0\" (UID: \"9c69fc6f-43e9-4fe5-b964-8db89e6ab354\") " pod="openstack/cinder-scheduler-0" Jan 29 06:54:31 crc kubenswrapper[5017]: I0129 06:54:31.713609 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxz9k\" (UniqueName: \"kubernetes.io/projected/9c69fc6f-43e9-4fe5-b964-8db89e6ab354-kube-api-access-dxz9k\") pod \"cinder-scheduler-0\" (UID: \"9c69fc6f-43e9-4fe5-b964-8db89e6ab354\") " pod="openstack/cinder-scheduler-0" Jan 29 
06:54:31 crc kubenswrapper[5017]: I0129 06:54:31.746911 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 29 06:54:32 crc kubenswrapper[5017]: I0129 06:54:32.301115 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 06:54:32 crc kubenswrapper[5017]: W0129 06:54:32.316130 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9c69fc6f_43e9_4fe5_b964_8db89e6ab354.slice/crio-09bc6537d1f0bc72902e43e56274c6427b49d249e85072b33a90c4d85051d987 WatchSource:0}: Error finding container 09bc6537d1f0bc72902e43e56274c6427b49d249e85072b33a90c4d85051d987: Status 404 returned error can't find the container with id 09bc6537d1f0bc72902e43e56274c6427b49d249e85072b33a90c4d85051d987 Jan 29 06:54:32 crc kubenswrapper[5017]: I0129 06:54:32.334169 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8470712-09f7-4366-a6fd-ab5dbc3c3192" path="/var/lib/kubelet/pods/b8470712-09f7-4366-a6fd-ab5dbc3c3192/volumes" Jan 29 06:54:32 crc kubenswrapper[5017]: I0129 06:54:32.362501 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-849cfbbc5-ctfjf" Jan 29 06:54:32 crc kubenswrapper[5017]: I0129 06:54:32.375840 5017 generic.go:334] "Generic (PLEG): container finished" podID="c3baa3ab-8aec-489f-a037-c6a3a9ff2f2e" containerID="6bab2ff025e0387bec716880abfa7594c3ecb13caf41b8149ca789c498be9024" exitCode=0 Jan 29 06:54:32 crc kubenswrapper[5017]: I0129 06:54:32.375899 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-849cfbbc5-ctfjf" event={"ID":"c3baa3ab-8aec-489f-a037-c6a3a9ff2f2e","Type":"ContainerDied","Data":"6bab2ff025e0387bec716880abfa7594c3ecb13caf41b8149ca789c498be9024"} Jan 29 06:54:32 crc kubenswrapper[5017]: I0129 06:54:32.375917 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-849cfbbc5-ctfjf" Jan 29 06:54:32 crc kubenswrapper[5017]: I0129 06:54:32.375926 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-849cfbbc5-ctfjf" event={"ID":"c3baa3ab-8aec-489f-a037-c6a3a9ff2f2e","Type":"ContainerDied","Data":"362770a655341470885508ea66bdb58d7e182ee582f950977cdc04ef72c5ae00"} Jan 29 06:54:32 crc kubenswrapper[5017]: I0129 06:54:32.375946 5017 scope.go:117] "RemoveContainer" containerID="e657144fec69792e667a2f20cc070095a25ad2e62af34f5b2d8af26891296812" Jan 29 06:54:32 crc kubenswrapper[5017]: I0129 06:54:32.389224 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"9c69fc6f-43e9-4fe5-b964-8db89e6ab354","Type":"ContainerStarted","Data":"09bc6537d1f0bc72902e43e56274c6427b49d249e85072b33a90c4d85051d987"} Jan 29 06:54:32 crc kubenswrapper[5017]: I0129 06:54:32.428310 5017 scope.go:117] "RemoveContainer" containerID="6bab2ff025e0387bec716880abfa7594c3ecb13caf41b8149ca789c498be9024" Jan 29 06:54:32 crc kubenswrapper[5017]: I0129 06:54:32.469489 5017 scope.go:117] "RemoveContainer" containerID="e657144fec69792e667a2f20cc070095a25ad2e62af34f5b2d8af26891296812" Jan 29 06:54:32 crc kubenswrapper[5017]: E0129 06:54:32.476096 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e657144fec69792e667a2f20cc070095a25ad2e62af34f5b2d8af26891296812\": container with ID starting with e657144fec69792e667a2f20cc070095a25ad2e62af34f5b2d8af26891296812 not found: ID does not exist" containerID="e657144fec69792e667a2f20cc070095a25ad2e62af34f5b2d8af26891296812" Jan 29 06:54:32 crc kubenswrapper[5017]: I0129 06:54:32.476173 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e657144fec69792e667a2f20cc070095a25ad2e62af34f5b2d8af26891296812"} err="failed to get container status \"e657144fec69792e667a2f20cc070095a25ad2e62af34f5b2d8af26891296812\": rpc error: code = NotFound desc = could not find container \"e657144fec69792e667a2f20cc070095a25ad2e62af34f5b2d8af26891296812\": container with ID starting with e657144fec69792e667a2f20cc070095a25ad2e62af34f5b2d8af26891296812 not found: ID does not exist" Jan 29 06:54:32 crc kubenswrapper[5017]: I0129 06:54:32.476224 5017 scope.go:117] "RemoveContainer" containerID="6bab2ff025e0387bec716880abfa7594c3ecb13caf41b8149ca789c498be9024" Jan 29 06:54:32 crc kubenswrapper[5017]: E0129 06:54:32.477016 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6bab2ff025e0387bec716880abfa7594c3ecb13caf41b8149ca789c498be9024\": container with ID starting with 6bab2ff025e0387bec716880abfa7594c3ecb13caf41b8149ca789c498be9024 not found: ID does not exist" containerID="6bab2ff025e0387bec716880abfa7594c3ecb13caf41b8149ca789c498be9024" Jan 29 06:54:32 crc kubenswrapper[5017]: I0129 06:54:32.477103 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6bab2ff025e0387bec716880abfa7594c3ecb13caf41b8149ca789c498be9024"} err="failed to get container status \"6bab2ff025e0387bec716880abfa7594c3ecb13caf41b8149ca789c498be9024\": rpc error: code = NotFound desc = could not find container \"6bab2ff025e0387bec716880abfa7594c3ecb13caf41b8149ca789c498be9024\": container with ID starting with 6bab2ff025e0387bec716880abfa7594c3ecb13caf41b8149ca789c498be9024 not found: ID does not exist" Jan 29 06:54:32 crc kubenswrapper[5017]: 
I0129 06:54:32.530027 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3baa3ab-8aec-489f-a037-c6a3a9ff2f2e-combined-ca-bundle\") pod \"c3baa3ab-8aec-489f-a037-c6a3a9ff2f2e\" (UID: \"c3baa3ab-8aec-489f-a037-c6a3a9ff2f2e\") " Jan 29 06:54:32 crc kubenswrapper[5017]: I0129 06:54:32.530072 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3baa3ab-8aec-489f-a037-c6a3a9ff2f2e-internal-tls-certs\") pod \"c3baa3ab-8aec-489f-a037-c6a3a9ff2f2e\" (UID: \"c3baa3ab-8aec-489f-a037-c6a3a9ff2f2e\") " Jan 29 06:54:32 crc kubenswrapper[5017]: I0129 06:54:32.530212 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c3baa3ab-8aec-489f-a037-c6a3a9ff2f2e-config\") pod \"c3baa3ab-8aec-489f-a037-c6a3a9ff2f2e\" (UID: \"c3baa3ab-8aec-489f-a037-c6a3a9ff2f2e\") " Jan 29 06:54:32 crc kubenswrapper[5017]: I0129 06:54:32.530263 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hqfp9\" (UniqueName: \"kubernetes.io/projected/c3baa3ab-8aec-489f-a037-c6a3a9ff2f2e-kube-api-access-hqfp9\") pod \"c3baa3ab-8aec-489f-a037-c6a3a9ff2f2e\" (UID: \"c3baa3ab-8aec-489f-a037-c6a3a9ff2f2e\") " Jan 29 06:54:32 crc kubenswrapper[5017]: I0129 06:54:32.530357 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3baa3ab-8aec-489f-a037-c6a3a9ff2f2e-ovndb-tls-certs\") pod \"c3baa3ab-8aec-489f-a037-c6a3a9ff2f2e\" (UID: \"c3baa3ab-8aec-489f-a037-c6a3a9ff2f2e\") " Jan 29 06:54:32 crc kubenswrapper[5017]: I0129 06:54:32.530399 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c3baa3ab-8aec-489f-a037-c6a3a9ff2f2e-httpd-config\") pod \"c3baa3ab-8aec-489f-a037-c6a3a9ff2f2e\" (UID: \"c3baa3ab-8aec-489f-a037-c6a3a9ff2f2e\") " Jan 29 06:54:32 crc kubenswrapper[5017]: I0129 06:54:32.530469 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3baa3ab-8aec-489f-a037-c6a3a9ff2f2e-public-tls-certs\") pod \"c3baa3ab-8aec-489f-a037-c6a3a9ff2f2e\" (UID: \"c3baa3ab-8aec-489f-a037-c6a3a9ff2f2e\") " Jan 29 06:54:32 crc kubenswrapper[5017]: I0129 06:54:32.538544 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3baa3ab-8aec-489f-a037-c6a3a9ff2f2e-kube-api-access-hqfp9" (OuterVolumeSpecName: "kube-api-access-hqfp9") pod "c3baa3ab-8aec-489f-a037-c6a3a9ff2f2e" (UID: "c3baa3ab-8aec-489f-a037-c6a3a9ff2f2e"). InnerVolumeSpecName "kube-api-access-hqfp9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:54:32 crc kubenswrapper[5017]: I0129 06:54:32.541263 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3baa3ab-8aec-489f-a037-c6a3a9ff2f2e-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "c3baa3ab-8aec-489f-a037-c6a3a9ff2f2e" (UID: "c3baa3ab-8aec-489f-a037-c6a3a9ff2f2e"). InnerVolumeSpecName "httpd-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:54:32 crc kubenswrapper[5017]: I0129 06:54:32.602699 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3baa3ab-8aec-489f-a037-c6a3a9ff2f2e-config" (OuterVolumeSpecName: "config") pod "c3baa3ab-8aec-489f-a037-c6a3a9ff2f2e" (UID: "c3baa3ab-8aec-489f-a037-c6a3a9ff2f2e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:54:32 crc kubenswrapper[5017]: I0129 06:54:32.611349 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3baa3ab-8aec-489f-a037-c6a3a9ff2f2e-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "c3baa3ab-8aec-489f-a037-c6a3a9ff2f2e" (UID: "c3baa3ab-8aec-489f-a037-c6a3a9ff2f2e"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:54:32 crc kubenswrapper[5017]: I0129 06:54:32.614480 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3baa3ab-8aec-489f-a037-c6a3a9ff2f2e-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "c3baa3ab-8aec-489f-a037-c6a3a9ff2f2e" (UID: "c3baa3ab-8aec-489f-a037-c6a3a9ff2f2e"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:54:32 crc kubenswrapper[5017]: I0129 06:54:32.615104 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3baa3ab-8aec-489f-a037-c6a3a9ff2f2e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c3baa3ab-8aec-489f-a037-c6a3a9ff2f2e" (UID: "c3baa3ab-8aec-489f-a037-c6a3a9ff2f2e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:54:32 crc kubenswrapper[5017]: I0129 06:54:32.634182 5017 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3baa3ab-8aec-489f-a037-c6a3a9ff2f2e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 06:54:32 crc kubenswrapper[5017]: I0129 06:54:32.634220 5017 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3baa3ab-8aec-489f-a037-c6a3a9ff2f2e-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 06:54:32 crc kubenswrapper[5017]: I0129 06:54:32.634233 5017 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/c3baa3ab-8aec-489f-a037-c6a3a9ff2f2e-config\") on node \"crc\" DevicePath \"\"" Jan 29 06:54:32 crc kubenswrapper[5017]: I0129 06:54:32.634246 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hqfp9\" (UniqueName: \"kubernetes.io/projected/c3baa3ab-8aec-489f-a037-c6a3a9ff2f2e-kube-api-access-hqfp9\") on node \"crc\" DevicePath \"\"" Jan 29 06:54:32 crc kubenswrapper[5017]: I0129 06:54:32.634259 5017 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c3baa3ab-8aec-489f-a037-c6a3a9ff2f2e-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 29 06:54:32 crc kubenswrapper[5017]: I0129 06:54:32.634270 5017 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3baa3ab-8aec-489f-a037-c6a3a9ff2f2e-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 06:54:32 crc kubenswrapper[5017]: I0129 06:54:32.636188 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/c3baa3ab-8aec-489f-a037-c6a3a9ff2f2e-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "c3baa3ab-8aec-489f-a037-c6a3a9ff2f2e" (UID: "c3baa3ab-8aec-489f-a037-c6a3a9ff2f2e"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:54:32 crc kubenswrapper[5017]: I0129 06:54:32.717741 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-849cfbbc5-ctfjf"] Jan 29 06:54:32 crc kubenswrapper[5017]: I0129 06:54:32.736243 5017 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3baa3ab-8aec-489f-a037-c6a3a9ff2f2e-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 06:54:32 crc kubenswrapper[5017]: I0129 06:54:32.747718 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-849cfbbc5-ctfjf"] Jan 29 06:54:33 crc kubenswrapper[5017]: I0129 06:54:33.244501 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7b944f8dd4-2rk49_a9431961-983b-4257-bbe6-cf1bac1261c0/neutron-api/0.log" Jan 29 06:54:33 crc kubenswrapper[5017]: I0129 06:54:33.245050 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7b944f8dd4-2rk49" Jan 29 06:54:33 crc kubenswrapper[5017]: I0129 06:54:33.359972 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a9431961-983b-4257-bbe6-cf1bac1261c0-config\") pod \"a9431961-983b-4257-bbe6-cf1bac1261c0\" (UID: \"a9431961-983b-4257-bbe6-cf1bac1261c0\") " Jan 29 06:54:33 crc kubenswrapper[5017]: I0129 06:54:33.360502 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dzjmm\" (UniqueName: \"kubernetes.io/projected/a9431961-983b-4257-bbe6-cf1bac1261c0-kube-api-access-dzjmm\") pod \"a9431961-983b-4257-bbe6-cf1bac1261c0\" (UID: \"a9431961-983b-4257-bbe6-cf1bac1261c0\") " Jan 29 06:54:33 crc kubenswrapper[5017]: I0129 06:54:33.360541 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9431961-983b-4257-bbe6-cf1bac1261c0-combined-ca-bundle\") pod \"a9431961-983b-4257-bbe6-cf1bac1261c0\" (UID: \"a9431961-983b-4257-bbe6-cf1bac1261c0\") " Jan 29 06:54:33 crc kubenswrapper[5017]: I0129 06:54:33.360634 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9431961-983b-4257-bbe6-cf1bac1261c0-ovndb-tls-certs\") pod \"a9431961-983b-4257-bbe6-cf1bac1261c0\" (UID: \"a9431961-983b-4257-bbe6-cf1bac1261c0\") " Jan 29 06:54:33 crc kubenswrapper[5017]: I0129 06:54:33.360751 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a9431961-983b-4257-bbe6-cf1bac1261c0-httpd-config\") pod \"a9431961-983b-4257-bbe6-cf1bac1261c0\" (UID: \"a9431961-983b-4257-bbe6-cf1bac1261c0\") " Jan 29 06:54:33 crc kubenswrapper[5017]: I0129 06:54:33.365768 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9431961-983b-4257-bbe6-cf1bac1261c0-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "a9431961-983b-4257-bbe6-cf1bac1261c0" (UID: "a9431961-983b-4257-bbe6-cf1bac1261c0"). InnerVolumeSpecName "httpd-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:54:33 crc kubenswrapper[5017]: I0129 06:54:33.369088 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9431961-983b-4257-bbe6-cf1bac1261c0-kube-api-access-dzjmm" (OuterVolumeSpecName: "kube-api-access-dzjmm") pod "a9431961-983b-4257-bbe6-cf1bac1261c0" (UID: "a9431961-983b-4257-bbe6-cf1bac1261c0"). InnerVolumeSpecName "kube-api-access-dzjmm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:54:33 crc kubenswrapper[5017]: I0129 06:54:33.412416 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"9c69fc6f-43e9-4fe5-b964-8db89e6ab354","Type":"ContainerStarted","Data":"27c14ee221c2ba154a659bac681ce15f66d62c55c0cd0468426e11a64f23d5e9"} Jan 29 06:54:33 crc kubenswrapper[5017]: I0129 06:54:33.412997 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9431961-983b-4257-bbe6-cf1bac1261c0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a9431961-983b-4257-bbe6-cf1bac1261c0" (UID: "a9431961-983b-4257-bbe6-cf1bac1261c0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:54:33 crc kubenswrapper[5017]: I0129 06:54:33.417579 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7b944f8dd4-2rk49_a9431961-983b-4257-bbe6-cf1bac1261c0/neutron-api/0.log" Jan 29 06:54:33 crc kubenswrapper[5017]: I0129 06:54:33.417722 5017 generic.go:334] "Generic (PLEG): container finished" podID="a9431961-983b-4257-bbe6-cf1bac1261c0" containerID="45e090f0d59664b2e8d3d49df5caae9cb89cd1af2235356719d326f7e426ef00" exitCode=137 Jan 29 06:54:33 crc kubenswrapper[5017]: I0129 06:54:33.418007 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7b944f8dd4-2rk49" Jan 29 06:54:33 crc kubenswrapper[5017]: I0129 06:54:33.418072 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7b944f8dd4-2rk49" event={"ID":"a9431961-983b-4257-bbe6-cf1bac1261c0","Type":"ContainerDied","Data":"45e090f0d59664b2e8d3d49df5caae9cb89cd1af2235356719d326f7e426ef00"} Jan 29 06:54:33 crc kubenswrapper[5017]: I0129 06:54:33.419138 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7b944f8dd4-2rk49" event={"ID":"a9431961-983b-4257-bbe6-cf1bac1261c0","Type":"ContainerDied","Data":"ee89885209f49accb6f3472c677e7e5c9f5dfa4e14311490a83d1cf8fdf548d7"} Jan 29 06:54:33 crc kubenswrapper[5017]: I0129 06:54:33.419167 5017 scope.go:117] "RemoveContainer" containerID="77a23db586c50b66520c229023e26c249b021091965592597f4654cbafd9e9b4" Jan 29 06:54:33 crc kubenswrapper[5017]: I0129 06:54:33.441474 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9431961-983b-4257-bbe6-cf1bac1261c0-config" (OuterVolumeSpecName: "config") pod "a9431961-983b-4257-bbe6-cf1bac1261c0" (UID: "a9431961-983b-4257-bbe6-cf1bac1261c0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:54:33 crc kubenswrapper[5017]: I0129 06:54:33.441665 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9431961-983b-4257-bbe6-cf1bac1261c0-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "a9431961-983b-4257-bbe6-cf1bac1261c0" (UID: "a9431961-983b-4257-bbe6-cf1bac1261c0"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:54:33 crc kubenswrapper[5017]: I0129 06:54:33.465460 5017 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a9431961-983b-4257-bbe6-cf1bac1261c0-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 29 06:54:33 crc kubenswrapper[5017]: I0129 06:54:33.465493 5017 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/a9431961-983b-4257-bbe6-cf1bac1261c0-config\") on node \"crc\" DevicePath \"\"" Jan 29 06:54:33 crc kubenswrapper[5017]: I0129 06:54:33.465505 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dzjmm\" (UniqueName: \"kubernetes.io/projected/a9431961-983b-4257-bbe6-cf1bac1261c0-kube-api-access-dzjmm\") on node \"crc\" DevicePath \"\"" Jan 29 06:54:33 crc kubenswrapper[5017]: I0129 06:54:33.465517 5017 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9431961-983b-4257-bbe6-cf1bac1261c0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 06:54:33 crc kubenswrapper[5017]: I0129 06:54:33.465527 5017 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9431961-983b-4257-bbe6-cf1bac1261c0-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 06:54:33 crc kubenswrapper[5017]: I0129 06:54:33.470665 5017 scope.go:117] "RemoveContainer" containerID="45e090f0d59664b2e8d3d49df5caae9cb89cd1af2235356719d326f7e426ef00" Jan 29 06:54:33 crc kubenswrapper[5017]: I0129 06:54:33.494266 5017 scope.go:117] "RemoveContainer" containerID="77a23db586c50b66520c229023e26c249b021091965592597f4654cbafd9e9b4" Jan 29 06:54:33 crc kubenswrapper[5017]: E0129 06:54:33.494937 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77a23db586c50b66520c229023e26c249b021091965592597f4654cbafd9e9b4\": container with ID starting with 77a23db586c50b66520c229023e26c249b021091965592597f4654cbafd9e9b4 not found: ID does not exist" containerID="77a23db586c50b66520c229023e26c249b021091965592597f4654cbafd9e9b4" Jan 29 06:54:33 crc kubenswrapper[5017]: I0129 06:54:33.495001 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77a23db586c50b66520c229023e26c249b021091965592597f4654cbafd9e9b4"} err="failed to get container status \"77a23db586c50b66520c229023e26c249b021091965592597f4654cbafd9e9b4\": rpc error: code = NotFound desc = could not find container \"77a23db586c50b66520c229023e26c249b021091965592597f4654cbafd9e9b4\": container with ID starting with 77a23db586c50b66520c229023e26c249b021091965592597f4654cbafd9e9b4 not found: ID does not exist" Jan 29 06:54:33 crc kubenswrapper[5017]: I0129 06:54:33.495033 5017 scope.go:117] "RemoveContainer" containerID="45e090f0d59664b2e8d3d49df5caae9cb89cd1af2235356719d326f7e426ef00" Jan 29 06:54:33 crc kubenswrapper[5017]: E0129 06:54:33.495701 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45e090f0d59664b2e8d3d49df5caae9cb89cd1af2235356719d326f7e426ef00\": container with ID starting with 45e090f0d59664b2e8d3d49df5caae9cb89cd1af2235356719d326f7e426ef00 not found: ID does not exist" containerID="45e090f0d59664b2e8d3d49df5caae9cb89cd1af2235356719d326f7e426ef00" Jan 29 06:54:33 crc kubenswrapper[5017]: I0129 06:54:33.495719 5017 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45e090f0d59664b2e8d3d49df5caae9cb89cd1af2235356719d326f7e426ef00"} err="failed to get container status \"45e090f0d59664b2e8d3d49df5caae9cb89cd1af2235356719d326f7e426ef00\": rpc error: code = NotFound desc = could not find container \"45e090f0d59664b2e8d3d49df5caae9cb89cd1af2235356719d326f7e426ef00\": container with ID starting with 45e090f0d59664b2e8d3d49df5caae9cb89cd1af2235356719d326f7e426ef00 not found: ID does not exist" Jan 29 06:54:33 crc kubenswrapper[5017]: I0129 06:54:33.793750 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7b944f8dd4-2rk49"] Jan 29 06:54:33 crc kubenswrapper[5017]: I0129 06:54:33.804549 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-7b944f8dd4-2rk49"] Jan 29 06:54:33 crc kubenswrapper[5017]: I0129 06:54:33.944243 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 29 06:54:34 crc kubenswrapper[5017]: I0129 06:54:34.327898 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9431961-983b-4257-bbe6-cf1bac1261c0" path="/var/lib/kubelet/pods/a9431961-983b-4257-bbe6-cf1bac1261c0/volumes" Jan 29 06:54:34 crc kubenswrapper[5017]: I0129 06:54:34.328919 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3baa3ab-8aec-489f-a037-c6a3a9ff2f2e" path="/var/lib/kubelet/pods/c3baa3ab-8aec-489f-a037-c6a3a9ff2f2e/volumes" Jan 29 06:54:34 crc kubenswrapper[5017]: I0129 06:54:34.431263 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"9c69fc6f-43e9-4fe5-b964-8db89e6ab354","Type":"ContainerStarted","Data":"999837a7b08863c4bad372817a69589db1cc60b86b13c6914e11866e71643157"} Jan 29 06:54:34 crc kubenswrapper[5017]: I0129 06:54:34.458466 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.4584253609999998 podStartE2EDuration="3.458425361s" podCreationTimestamp="2026-01-29 06:54:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:54:34.457524279 +0000 UTC m=+1160.831971889" watchObservedRunningTime="2026-01-29 06:54:34.458425361 +0000 UTC m=+1160.832872971" Jan 29 06:54:34 crc kubenswrapper[5017]: I0129 06:54:34.655644 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-bc7969485-9cbzw"] Jan 29 06:54:34 crc kubenswrapper[5017]: E0129 06:54:34.656125 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9431961-983b-4257-bbe6-cf1bac1261c0" containerName="neutron-api" Jan 29 06:54:34 crc kubenswrapper[5017]: I0129 06:54:34.656144 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9431961-983b-4257-bbe6-cf1bac1261c0" containerName="neutron-api" Jan 29 06:54:34 crc kubenswrapper[5017]: E0129 06:54:34.656159 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3baa3ab-8aec-489f-a037-c6a3a9ff2f2e" containerName="neutron-httpd" Jan 29 06:54:34 crc kubenswrapper[5017]: I0129 06:54:34.656166 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3baa3ab-8aec-489f-a037-c6a3a9ff2f2e" containerName="neutron-httpd" Jan 29 06:54:34 crc kubenswrapper[5017]: E0129 06:54:34.656204 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9431961-983b-4257-bbe6-cf1bac1261c0" containerName="neutron-httpd" Jan 29 06:54:34 crc kubenswrapper[5017]: I0129 
06:54:34.656212 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9431961-983b-4257-bbe6-cf1bac1261c0" containerName="neutron-httpd" Jan 29 06:54:34 crc kubenswrapper[5017]: E0129 06:54:34.656225 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3baa3ab-8aec-489f-a037-c6a3a9ff2f2e" containerName="neutron-api" Jan 29 06:54:34 crc kubenswrapper[5017]: I0129 06:54:34.656231 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3baa3ab-8aec-489f-a037-c6a3a9ff2f2e" containerName="neutron-api" Jan 29 06:54:34 crc kubenswrapper[5017]: I0129 06:54:34.656433 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9431961-983b-4257-bbe6-cf1bac1261c0" containerName="neutron-api" Jan 29 06:54:34 crc kubenswrapper[5017]: I0129 06:54:34.656463 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9431961-983b-4257-bbe6-cf1bac1261c0" containerName="neutron-httpd" Jan 29 06:54:34 crc kubenswrapper[5017]: I0129 06:54:34.656473 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3baa3ab-8aec-489f-a037-c6a3a9ff2f2e" containerName="neutron-api" Jan 29 06:54:34 crc kubenswrapper[5017]: I0129 06:54:34.656483 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3baa3ab-8aec-489f-a037-c6a3a9ff2f2e" containerName="neutron-httpd" Jan 29 06:54:34 crc kubenswrapper[5017]: I0129 06:54:34.657534 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-bc7969485-9cbzw" Jan 29 06:54:34 crc kubenswrapper[5017]: I0129 06:54:34.662323 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 29 06:54:34 crc kubenswrapper[5017]: I0129 06:54:34.662326 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Jan 29 06:54:34 crc kubenswrapper[5017]: I0129 06:54:34.662470 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Jan 29 06:54:34 crc kubenswrapper[5017]: I0129 06:54:34.688924 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-bc7969485-9cbzw"] Jan 29 06:54:34 crc kubenswrapper[5017]: I0129 06:54:34.750031 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 29 06:54:34 crc kubenswrapper[5017]: I0129 06:54:34.753551 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 29 06:54:34 crc kubenswrapper[5017]: I0129 06:54:34.756844 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-xmmnf" Jan 29 06:54:34 crc kubenswrapper[5017]: I0129 06:54:34.765581 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 29 06:54:34 crc kubenswrapper[5017]: I0129 06:54:34.766005 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 29 06:54:34 crc kubenswrapper[5017]: I0129 06:54:34.808058 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 29 06:54:34 crc kubenswrapper[5017]: I0129 06:54:34.811361 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0801349-3235-495b-9747-8ce025aad149-config-data\") pod \"swift-proxy-bc7969485-9cbzw\" (UID: \"a0801349-3235-495b-9747-8ce025aad149\") " pod="openstack/swift-proxy-bc7969485-9cbzw" Jan 29 06:54:34 crc kubenswrapper[5017]: I0129 06:54:34.811466 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qq2t6\" (UniqueName: \"kubernetes.io/projected/a0801349-3235-495b-9747-8ce025aad149-kube-api-access-qq2t6\") pod \"swift-proxy-bc7969485-9cbzw\" (UID: \"a0801349-3235-495b-9747-8ce025aad149\") " pod="openstack/swift-proxy-bc7969485-9cbzw" Jan 29 06:54:34 crc kubenswrapper[5017]: I0129 06:54:34.811522 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a0801349-3235-495b-9747-8ce025aad149-etc-swift\") pod \"swift-proxy-bc7969485-9cbzw\" (UID: \"a0801349-3235-495b-9747-8ce025aad149\") " pod="openstack/swift-proxy-bc7969485-9cbzw" Jan 29 06:54:34 crc kubenswrapper[5017]: I0129 06:54:34.811621 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0801349-3235-495b-9747-8ce025aad149-combined-ca-bundle\") pod \"swift-proxy-bc7969485-9cbzw\" (UID: \"a0801349-3235-495b-9747-8ce025aad149\") " pod="openstack/swift-proxy-bc7969485-9cbzw" Jan 29 06:54:34 crc kubenswrapper[5017]: I0129 06:54:34.811651 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0801349-3235-495b-9747-8ce025aad149-internal-tls-certs\") pod \"swift-proxy-bc7969485-9cbzw\" (UID: \"a0801349-3235-495b-9747-8ce025aad149\") " pod="openstack/swift-proxy-bc7969485-9cbzw" Jan 29 06:54:34 crc kubenswrapper[5017]: I0129 06:54:34.811675 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a0801349-3235-495b-9747-8ce025aad149-run-httpd\") pod \"swift-proxy-bc7969485-9cbzw\" (UID: \"a0801349-3235-495b-9747-8ce025aad149\") " pod="openstack/swift-proxy-bc7969485-9cbzw" Jan 29 06:54:34 crc kubenswrapper[5017]: I0129 06:54:34.811782 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a0801349-3235-495b-9747-8ce025aad149-log-httpd\") pod \"swift-proxy-bc7969485-9cbzw\" (UID: \"a0801349-3235-495b-9747-8ce025aad149\") " 
pod="openstack/swift-proxy-bc7969485-9cbzw" Jan 29 06:54:34 crc kubenswrapper[5017]: I0129 06:54:34.811840 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0801349-3235-495b-9747-8ce025aad149-public-tls-certs\") pod \"swift-proxy-bc7969485-9cbzw\" (UID: \"a0801349-3235-495b-9747-8ce025aad149\") " pod="openstack/swift-proxy-bc7969485-9cbzw" Jan 29 06:54:34 crc kubenswrapper[5017]: I0129 06:54:34.914193 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qq2t6\" (UniqueName: \"kubernetes.io/projected/a0801349-3235-495b-9747-8ce025aad149-kube-api-access-qq2t6\") pod \"swift-proxy-bc7969485-9cbzw\" (UID: \"a0801349-3235-495b-9747-8ce025aad149\") " pod="openstack/swift-proxy-bc7969485-9cbzw" Jan 29 06:54:34 crc kubenswrapper[5017]: I0129 06:54:34.914299 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a0801349-3235-495b-9747-8ce025aad149-etc-swift\") pod \"swift-proxy-bc7969485-9cbzw\" (UID: \"a0801349-3235-495b-9747-8ce025aad149\") " pod="openstack/swift-proxy-bc7969485-9cbzw" Jan 29 06:54:34 crc kubenswrapper[5017]: I0129 06:54:34.914399 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0801349-3235-495b-9747-8ce025aad149-combined-ca-bundle\") pod \"swift-proxy-bc7969485-9cbzw\" (UID: \"a0801349-3235-495b-9747-8ce025aad149\") " pod="openstack/swift-proxy-bc7969485-9cbzw" Jan 29 06:54:34 crc kubenswrapper[5017]: I0129 06:54:34.914431 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0801349-3235-495b-9747-8ce025aad149-internal-tls-certs\") pod \"swift-proxy-bc7969485-9cbzw\" (UID: \"a0801349-3235-495b-9747-8ce025aad149\") " pod="openstack/swift-proxy-bc7969485-9cbzw" Jan 29 06:54:34 crc kubenswrapper[5017]: I0129 06:54:34.914473 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qp29b\" (UniqueName: \"kubernetes.io/projected/abd151c3-f255-4647-a923-3176a7dae25a-kube-api-access-qp29b\") pod \"openstackclient\" (UID: \"abd151c3-f255-4647-a923-3176a7dae25a\") " pod="openstack/openstackclient" Jan 29 06:54:34 crc kubenswrapper[5017]: I0129 06:54:34.914514 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a0801349-3235-495b-9747-8ce025aad149-run-httpd\") pod \"swift-proxy-bc7969485-9cbzw\" (UID: \"a0801349-3235-495b-9747-8ce025aad149\") " pod="openstack/swift-proxy-bc7969485-9cbzw" Jan 29 06:54:34 crc kubenswrapper[5017]: I0129 06:54:34.914634 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a0801349-3235-495b-9747-8ce025aad149-log-httpd\") pod \"swift-proxy-bc7969485-9cbzw\" (UID: \"a0801349-3235-495b-9747-8ce025aad149\") " pod="openstack/swift-proxy-bc7969485-9cbzw" Jan 29 06:54:34 crc kubenswrapper[5017]: I0129 06:54:34.914669 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/abd151c3-f255-4647-a923-3176a7dae25a-openstack-config-secret\") pod \"openstackclient\" (UID: \"abd151c3-f255-4647-a923-3176a7dae25a\") " 
pod="openstack/openstackclient" Jan 29 06:54:34 crc kubenswrapper[5017]: I0129 06:54:34.914728 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0801349-3235-495b-9747-8ce025aad149-public-tls-certs\") pod \"swift-proxy-bc7969485-9cbzw\" (UID: \"a0801349-3235-495b-9747-8ce025aad149\") " pod="openstack/swift-proxy-bc7969485-9cbzw" Jan 29 06:54:34 crc kubenswrapper[5017]: I0129 06:54:34.914785 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abd151c3-f255-4647-a923-3176a7dae25a-combined-ca-bundle\") pod \"openstackclient\" (UID: \"abd151c3-f255-4647-a923-3176a7dae25a\") " pod="openstack/openstackclient" Jan 29 06:54:34 crc kubenswrapper[5017]: I0129 06:54:34.914842 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/abd151c3-f255-4647-a923-3176a7dae25a-openstack-config\") pod \"openstackclient\" (UID: \"abd151c3-f255-4647-a923-3176a7dae25a\") " pod="openstack/openstackclient" Jan 29 06:54:34 crc kubenswrapper[5017]: I0129 06:54:34.914927 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0801349-3235-495b-9747-8ce025aad149-config-data\") pod \"swift-proxy-bc7969485-9cbzw\" (UID: \"a0801349-3235-495b-9747-8ce025aad149\") " pod="openstack/swift-proxy-bc7969485-9cbzw" Jan 29 06:54:34 crc kubenswrapper[5017]: I0129 06:54:34.927642 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0801349-3235-495b-9747-8ce025aad149-config-data\") pod \"swift-proxy-bc7969485-9cbzw\" (UID: \"a0801349-3235-495b-9747-8ce025aad149\") " pod="openstack/swift-proxy-bc7969485-9cbzw" Jan 29 06:54:34 crc kubenswrapper[5017]: I0129 06:54:34.931783 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a0801349-3235-495b-9747-8ce025aad149-log-httpd\") pod \"swift-proxy-bc7969485-9cbzw\" (UID: \"a0801349-3235-495b-9747-8ce025aad149\") " pod="openstack/swift-proxy-bc7969485-9cbzw" Jan 29 06:54:34 crc kubenswrapper[5017]: I0129 06:54:34.932249 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a0801349-3235-495b-9747-8ce025aad149-run-httpd\") pod \"swift-proxy-bc7969485-9cbzw\" (UID: \"a0801349-3235-495b-9747-8ce025aad149\") " pod="openstack/swift-proxy-bc7969485-9cbzw" Jan 29 06:54:34 crc kubenswrapper[5017]: I0129 06:54:34.933445 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a0801349-3235-495b-9747-8ce025aad149-etc-swift\") pod \"swift-proxy-bc7969485-9cbzw\" (UID: \"a0801349-3235-495b-9747-8ce025aad149\") " pod="openstack/swift-proxy-bc7969485-9cbzw" Jan 29 06:54:34 crc kubenswrapper[5017]: I0129 06:54:34.935186 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0801349-3235-495b-9747-8ce025aad149-internal-tls-certs\") pod \"swift-proxy-bc7969485-9cbzw\" (UID: \"a0801349-3235-495b-9747-8ce025aad149\") " pod="openstack/swift-proxy-bc7969485-9cbzw" Jan 29 06:54:34 crc kubenswrapper[5017]: I0129 06:54:34.935831 5017 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0801349-3235-495b-9747-8ce025aad149-combined-ca-bundle\") pod \"swift-proxy-bc7969485-9cbzw\" (UID: \"a0801349-3235-495b-9747-8ce025aad149\") " pod="openstack/swift-proxy-bc7969485-9cbzw" Jan 29 06:54:34 crc kubenswrapper[5017]: I0129 06:54:34.935977 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0801349-3235-495b-9747-8ce025aad149-public-tls-certs\") pod \"swift-proxy-bc7969485-9cbzw\" (UID: \"a0801349-3235-495b-9747-8ce025aad149\") " pod="openstack/swift-proxy-bc7969485-9cbzw" Jan 29 06:54:34 crc kubenswrapper[5017]: I0129 06:54:34.956830 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qq2t6\" (UniqueName: \"kubernetes.io/projected/a0801349-3235-495b-9747-8ce025aad149-kube-api-access-qq2t6\") pod \"swift-proxy-bc7969485-9cbzw\" (UID: \"a0801349-3235-495b-9747-8ce025aad149\") " pod="openstack/swift-proxy-bc7969485-9cbzw" Jan 29 06:54:34 crc kubenswrapper[5017]: I0129 06:54:34.978416 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-bc7969485-9cbzw" Jan 29 06:54:35 crc kubenswrapper[5017]: I0129 06:54:35.017587 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abd151c3-f255-4647-a923-3176a7dae25a-combined-ca-bundle\") pod \"openstackclient\" (UID: \"abd151c3-f255-4647-a923-3176a7dae25a\") " pod="openstack/openstackclient" Jan 29 06:54:35 crc kubenswrapper[5017]: I0129 06:54:35.017681 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/abd151c3-f255-4647-a923-3176a7dae25a-openstack-config\") pod \"openstackclient\" (UID: \"abd151c3-f255-4647-a923-3176a7dae25a\") " pod="openstack/openstackclient" Jan 29 06:54:35 crc kubenswrapper[5017]: I0129 06:54:35.017886 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qp29b\" (UniqueName: \"kubernetes.io/projected/abd151c3-f255-4647-a923-3176a7dae25a-kube-api-access-qp29b\") pod \"openstackclient\" (UID: \"abd151c3-f255-4647-a923-3176a7dae25a\") " pod="openstack/openstackclient" Jan 29 06:54:35 crc kubenswrapper[5017]: I0129 06:54:35.017993 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/abd151c3-f255-4647-a923-3176a7dae25a-openstack-config-secret\") pod \"openstackclient\" (UID: \"abd151c3-f255-4647-a923-3176a7dae25a\") " pod="openstack/openstackclient" Jan 29 06:54:35 crc kubenswrapper[5017]: I0129 06:54:35.020463 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/abd151c3-f255-4647-a923-3176a7dae25a-openstack-config\") pod \"openstackclient\" (UID: \"abd151c3-f255-4647-a923-3176a7dae25a\") " pod="openstack/openstackclient" Jan 29 06:54:35 crc kubenswrapper[5017]: I0129 06:54:35.023668 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abd151c3-f255-4647-a923-3176a7dae25a-combined-ca-bundle\") pod \"openstackclient\" (UID: \"abd151c3-f255-4647-a923-3176a7dae25a\") " pod="openstack/openstackclient" Jan 29 06:54:35 crc kubenswrapper[5017]: I0129 06:54:35.024188 5017 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/abd151c3-f255-4647-a923-3176a7dae25a-openstack-config-secret\") pod \"openstackclient\" (UID: \"abd151c3-f255-4647-a923-3176a7dae25a\") " pod="openstack/openstackclient" Jan 29 06:54:35 crc kubenswrapper[5017]: I0129 06:54:35.036809 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qp29b\" (UniqueName: \"kubernetes.io/projected/abd151c3-f255-4647-a923-3176a7dae25a-kube-api-access-qp29b\") pod \"openstackclient\" (UID: \"abd151c3-f255-4647-a923-3176a7dae25a\") " pod="openstack/openstackclient" Jan 29 06:54:35 crc kubenswrapper[5017]: I0129 06:54:35.076372 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 29 06:54:35 crc kubenswrapper[5017]: I0129 06:54:35.660759 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-bc7969485-9cbzw"] Jan 29 06:54:35 crc kubenswrapper[5017]: W0129 06:54:35.663054 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda0801349_3235_495b_9747_8ce025aad149.slice/crio-ada61d0fac333958afe435024c69de8af387384a657773148545d3de12159485 WatchSource:0}: Error finding container ada61d0fac333958afe435024c69de8af387384a657773148545d3de12159485: Status 404 returned error can't find the container with id ada61d0fac333958afe435024c69de8af387384a657773148545d3de12159485 Jan 29 06:54:35 crc kubenswrapper[5017]: I0129 06:54:35.702588 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 29 06:54:35 crc kubenswrapper[5017]: W0129 06:54:35.713571 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podabd151c3_f255_4647_a923_3176a7dae25a.slice/crio-32616f36d7ab6a5e5b4f9dc5ba8a7ca16a80a041b8bb153567b428e0ee924ce0 WatchSource:0}: Error finding container 32616f36d7ab6a5e5b4f9dc5ba8a7ca16a80a041b8bb153567b428e0ee924ce0: Status 404 returned error can't find the container with id 32616f36d7ab6a5e5b4f9dc5ba8a7ca16a80a041b8bb153567b428e0ee924ce0 Jan 29 06:54:36 crc kubenswrapper[5017]: I0129 06:54:36.458295 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-bc7969485-9cbzw" event={"ID":"a0801349-3235-495b-9747-8ce025aad149","Type":"ContainerStarted","Data":"9d3b90e2a526d03bc001996d30c041b59b2169975c9838eadd4d27870c43ad13"} Jan 29 06:54:36 crc kubenswrapper[5017]: I0129 06:54:36.458669 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-bc7969485-9cbzw" event={"ID":"a0801349-3235-495b-9747-8ce025aad149","Type":"ContainerStarted","Data":"dc162fbc000aa89fe180adad58f6d207b657135b6fb5dd62ecbf93d7c4e1bbe0"} Jan 29 06:54:36 crc kubenswrapper[5017]: I0129 06:54:36.458688 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-bc7969485-9cbzw" event={"ID":"a0801349-3235-495b-9747-8ce025aad149","Type":"ContainerStarted","Data":"ada61d0fac333958afe435024c69de8af387384a657773148545d3de12159485"} Jan 29 06:54:36 crc kubenswrapper[5017]: I0129 06:54:36.458726 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-bc7969485-9cbzw" Jan 29 06:54:36 crc kubenswrapper[5017]: I0129 06:54:36.458750 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-bc7969485-9cbzw" Jan 29 06:54:36 crc kubenswrapper[5017]: I0129 06:54:36.460218 5017 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"abd151c3-f255-4647-a923-3176a7dae25a","Type":"ContainerStarted","Data":"32616f36d7ab6a5e5b4f9dc5ba8a7ca16a80a041b8bb153567b428e0ee924ce0"} Jan 29 06:54:36 crc kubenswrapper[5017]: I0129 06:54:36.656576 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-bc7969485-9cbzw" podStartSLOduration=2.656544876 podStartE2EDuration="2.656544876s" podCreationTimestamp="2026-01-29 06:54:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:54:36.490614257 +0000 UTC m=+1162.865061867" watchObservedRunningTime="2026-01-29 06:54:36.656544876 +0000 UTC m=+1163.030992566" Jan 29 06:54:36 crc kubenswrapper[5017]: I0129 06:54:36.661833 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 06:54:36 crc kubenswrapper[5017]: I0129 06:54:36.662129 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a438aaf2-d0d9-49b0-b3c4-347c3f9f8453" containerName="ceilometer-central-agent" containerID="cri-o://11f8f45d009b860ad648464664ec41c1022a0eb39a7e7977eae5581b5bd4b5c1" gracePeriod=30 Jan 29 06:54:36 crc kubenswrapper[5017]: I0129 06:54:36.663002 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a438aaf2-d0d9-49b0-b3c4-347c3f9f8453" containerName="proxy-httpd" containerID="cri-o://6ae36bd319619d1ed798645f600ff4653b390a5baecd6fe8b754440823362f33" gracePeriod=30 Jan 29 06:54:36 crc kubenswrapper[5017]: I0129 06:54:36.663167 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a438aaf2-d0d9-49b0-b3c4-347c3f9f8453" containerName="sg-core" containerID="cri-o://fd92981114f789e39e1e2ee3c22ebcf65a92291722e32a006127093b113f7f96" gracePeriod=30 Jan 29 06:54:36 crc kubenswrapper[5017]: I0129 06:54:36.663230 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a438aaf2-d0d9-49b0-b3c4-347c3f9f8453" containerName="ceilometer-notification-agent" containerID="cri-o://ece2265dfef822da54e8aac0ea5ceaab11b0c43aac0c371eb513b2a751baca21" gracePeriod=30 Jan 29 06:54:36 crc kubenswrapper[5017]: I0129 06:54:36.671225 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 29 06:54:36 crc kubenswrapper[5017]: I0129 06:54:36.747729 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 29 06:54:37 crc kubenswrapper[5017]: I0129 06:54:37.479976 5017 generic.go:334] "Generic (PLEG): container finished" podID="a438aaf2-d0d9-49b0-b3c4-347c3f9f8453" containerID="6ae36bd319619d1ed798645f600ff4653b390a5baecd6fe8b754440823362f33" exitCode=0 Jan 29 06:54:37 crc kubenswrapper[5017]: I0129 06:54:37.480385 5017 generic.go:334] "Generic (PLEG): container finished" podID="a438aaf2-d0d9-49b0-b3c4-347c3f9f8453" containerID="fd92981114f789e39e1e2ee3c22ebcf65a92291722e32a006127093b113f7f96" exitCode=2 Jan 29 06:54:37 crc kubenswrapper[5017]: I0129 06:54:37.480395 5017 generic.go:334] "Generic (PLEG): container finished" podID="a438aaf2-d0d9-49b0-b3c4-347c3f9f8453" containerID="11f8f45d009b860ad648464664ec41c1022a0eb39a7e7977eae5581b5bd4b5c1" exitCode=0 Jan 29 06:54:37 crc kubenswrapper[5017]: I0129 06:54:37.480145 5017 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a438aaf2-d0d9-49b0-b3c4-347c3f9f8453","Type":"ContainerDied","Data":"6ae36bd319619d1ed798645f600ff4653b390a5baecd6fe8b754440823362f33"} Jan 29 06:54:37 crc kubenswrapper[5017]: I0129 06:54:37.481121 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a438aaf2-d0d9-49b0-b3c4-347c3f9f8453","Type":"ContainerDied","Data":"fd92981114f789e39e1e2ee3c22ebcf65a92291722e32a006127093b113f7f96"} Jan 29 06:54:37 crc kubenswrapper[5017]: I0129 06:54:37.481135 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a438aaf2-d0d9-49b0-b3c4-347c3f9f8453","Type":"ContainerDied","Data":"11f8f45d009b860ad648464664ec41c1022a0eb39a7e7977eae5581b5bd4b5c1"} Jan 29 06:54:38 crc kubenswrapper[5017]: I0129 06:54:38.539645 5017 generic.go:334] "Generic (PLEG): container finished" podID="a438aaf2-d0d9-49b0-b3c4-347c3f9f8453" containerID="ece2265dfef822da54e8aac0ea5ceaab11b0c43aac0c371eb513b2a751baca21" exitCode=0 Jan 29 06:54:38 crc kubenswrapper[5017]: I0129 06:54:38.539860 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a438aaf2-d0d9-49b0-b3c4-347c3f9f8453","Type":"ContainerDied","Data":"ece2265dfef822da54e8aac0ea5ceaab11b0c43aac0c371eb513b2a751baca21"} Jan 29 06:54:38 crc kubenswrapper[5017]: I0129 06:54:38.680175 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 06:54:38 crc kubenswrapper[5017]: I0129 06:54:38.801880 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a438aaf2-d0d9-49b0-b3c4-347c3f9f8453-scripts\") pod \"a438aaf2-d0d9-49b0-b3c4-347c3f9f8453\" (UID: \"a438aaf2-d0d9-49b0-b3c4-347c3f9f8453\") " Jan 29 06:54:38 crc kubenswrapper[5017]: I0129 06:54:38.802106 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a438aaf2-d0d9-49b0-b3c4-347c3f9f8453-config-data\") pod \"a438aaf2-d0d9-49b0-b3c4-347c3f9f8453\" (UID: \"a438aaf2-d0d9-49b0-b3c4-347c3f9f8453\") " Jan 29 06:54:38 crc kubenswrapper[5017]: I0129 06:54:38.802194 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a438aaf2-d0d9-49b0-b3c4-347c3f9f8453-log-httpd\") pod \"a438aaf2-d0d9-49b0-b3c4-347c3f9f8453\" (UID: \"a438aaf2-d0d9-49b0-b3c4-347c3f9f8453\") " Jan 29 06:54:38 crc kubenswrapper[5017]: I0129 06:54:38.802252 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lxt7g\" (UniqueName: \"kubernetes.io/projected/a438aaf2-d0d9-49b0-b3c4-347c3f9f8453-kube-api-access-lxt7g\") pod \"a438aaf2-d0d9-49b0-b3c4-347c3f9f8453\" (UID: \"a438aaf2-d0d9-49b0-b3c4-347c3f9f8453\") " Jan 29 06:54:38 crc kubenswrapper[5017]: I0129 06:54:38.802328 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a438aaf2-d0d9-49b0-b3c4-347c3f9f8453-combined-ca-bundle\") pod \"a438aaf2-d0d9-49b0-b3c4-347c3f9f8453\" (UID: \"a438aaf2-d0d9-49b0-b3c4-347c3f9f8453\") " Jan 29 06:54:38 crc kubenswrapper[5017]: I0129 06:54:38.802376 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a438aaf2-d0d9-49b0-b3c4-347c3f9f8453-sg-core-conf-yaml\") pod 
\"a438aaf2-d0d9-49b0-b3c4-347c3f9f8453\" (UID: \"a438aaf2-d0d9-49b0-b3c4-347c3f9f8453\") " Jan 29 06:54:38 crc kubenswrapper[5017]: I0129 06:54:38.802414 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a438aaf2-d0d9-49b0-b3c4-347c3f9f8453-run-httpd\") pod \"a438aaf2-d0d9-49b0-b3c4-347c3f9f8453\" (UID: \"a438aaf2-d0d9-49b0-b3c4-347c3f9f8453\") " Jan 29 06:54:38 crc kubenswrapper[5017]: I0129 06:54:38.803280 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a438aaf2-d0d9-49b0-b3c4-347c3f9f8453-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a438aaf2-d0d9-49b0-b3c4-347c3f9f8453" (UID: "a438aaf2-d0d9-49b0-b3c4-347c3f9f8453"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:54:38 crc kubenswrapper[5017]: I0129 06:54:38.803732 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a438aaf2-d0d9-49b0-b3c4-347c3f9f8453-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a438aaf2-d0d9-49b0-b3c4-347c3f9f8453" (UID: "a438aaf2-d0d9-49b0-b3c4-347c3f9f8453"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:54:38 crc kubenswrapper[5017]: I0129 06:54:38.813204 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a438aaf2-d0d9-49b0-b3c4-347c3f9f8453-scripts" (OuterVolumeSpecName: "scripts") pod "a438aaf2-d0d9-49b0-b3c4-347c3f9f8453" (UID: "a438aaf2-d0d9-49b0-b3c4-347c3f9f8453"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:54:38 crc kubenswrapper[5017]: I0129 06:54:38.818351 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a438aaf2-d0d9-49b0-b3c4-347c3f9f8453-kube-api-access-lxt7g" (OuterVolumeSpecName: "kube-api-access-lxt7g") pod "a438aaf2-d0d9-49b0-b3c4-347c3f9f8453" (UID: "a438aaf2-d0d9-49b0-b3c4-347c3f9f8453"). InnerVolumeSpecName "kube-api-access-lxt7g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:54:38 crc kubenswrapper[5017]: I0129 06:54:38.861870 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a438aaf2-d0d9-49b0-b3c4-347c3f9f8453-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "a438aaf2-d0d9-49b0-b3c4-347c3f9f8453" (UID: "a438aaf2-d0d9-49b0-b3c4-347c3f9f8453"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:54:38 crc kubenswrapper[5017]: I0129 06:54:38.895056 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a438aaf2-d0d9-49b0-b3c4-347c3f9f8453-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a438aaf2-d0d9-49b0-b3c4-347c3f9f8453" (UID: "a438aaf2-d0d9-49b0-b3c4-347c3f9f8453"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:54:38 crc kubenswrapper[5017]: I0129 06:54:38.905662 5017 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a438aaf2-d0d9-49b0-b3c4-347c3f9f8453-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 06:54:38 crc kubenswrapper[5017]: I0129 06:54:38.905699 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lxt7g\" (UniqueName: \"kubernetes.io/projected/a438aaf2-d0d9-49b0-b3c4-347c3f9f8453-kube-api-access-lxt7g\") on node \"crc\" DevicePath \"\"" Jan 29 06:54:38 crc kubenswrapper[5017]: I0129 06:54:38.905713 5017 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a438aaf2-d0d9-49b0-b3c4-347c3f9f8453-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 06:54:38 crc kubenswrapper[5017]: I0129 06:54:38.905723 5017 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a438aaf2-d0d9-49b0-b3c4-347c3f9f8453-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 29 06:54:38 crc kubenswrapper[5017]: I0129 06:54:38.905731 5017 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a438aaf2-d0d9-49b0-b3c4-347c3f9f8453-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 06:54:38 crc kubenswrapper[5017]: I0129 06:54:38.905739 5017 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a438aaf2-d0d9-49b0-b3c4-347c3f9f8453-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 06:54:38 crc kubenswrapper[5017]: I0129 06:54:38.927031 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a438aaf2-d0d9-49b0-b3c4-347c3f9f8453-config-data" (OuterVolumeSpecName: "config-data") pod "a438aaf2-d0d9-49b0-b3c4-347c3f9f8453" (UID: "a438aaf2-d0d9-49b0-b3c4-347c3f9f8453"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:54:39 crc kubenswrapper[5017]: I0129 06:54:39.007800 5017 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a438aaf2-d0d9-49b0-b3c4-347c3f9f8453-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 06:54:39 crc kubenswrapper[5017]: I0129 06:54:39.561285 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a438aaf2-d0d9-49b0-b3c4-347c3f9f8453","Type":"ContainerDied","Data":"bddfc11dccc706d7fdb30e3acee6e1d2b0305d795b14d4c50780355fa616ae11"} Jan 29 06:54:39 crc kubenswrapper[5017]: I0129 06:54:39.561361 5017 scope.go:117] "RemoveContainer" containerID="6ae36bd319619d1ed798645f600ff4653b390a5baecd6fe8b754440823362f33" Jan 29 06:54:39 crc kubenswrapper[5017]: I0129 06:54:39.561391 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 06:54:39 crc kubenswrapper[5017]: I0129 06:54:39.593523 5017 scope.go:117] "RemoveContainer" containerID="fd92981114f789e39e1e2ee3c22ebcf65a92291722e32a006127093b113f7f96" Jan 29 06:54:39 crc kubenswrapper[5017]: I0129 06:54:39.613477 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 06:54:39 crc kubenswrapper[5017]: I0129 06:54:39.624367 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 29 06:54:39 crc kubenswrapper[5017]: I0129 06:54:39.626786 5017 scope.go:117] "RemoveContainer" containerID="ece2265dfef822da54e8aac0ea5ceaab11b0c43aac0c371eb513b2a751baca21" Jan 29 06:54:39 crc kubenswrapper[5017]: I0129 06:54:39.646339 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 29 06:54:39 crc kubenswrapper[5017]: E0129 06:54:39.647430 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a438aaf2-d0d9-49b0-b3c4-347c3f9f8453" containerName="sg-core" Jan 29 06:54:39 crc kubenswrapper[5017]: I0129 06:54:39.647457 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="a438aaf2-d0d9-49b0-b3c4-347c3f9f8453" containerName="sg-core" Jan 29 06:54:39 crc kubenswrapper[5017]: E0129 06:54:39.647477 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a438aaf2-d0d9-49b0-b3c4-347c3f9f8453" containerName="ceilometer-central-agent" Jan 29 06:54:39 crc kubenswrapper[5017]: I0129 06:54:39.647485 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="a438aaf2-d0d9-49b0-b3c4-347c3f9f8453" containerName="ceilometer-central-agent" Jan 29 06:54:39 crc kubenswrapper[5017]: E0129 06:54:39.647500 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a438aaf2-d0d9-49b0-b3c4-347c3f9f8453" containerName="ceilometer-notification-agent" Jan 29 06:54:39 crc kubenswrapper[5017]: I0129 06:54:39.647506 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="a438aaf2-d0d9-49b0-b3c4-347c3f9f8453" containerName="ceilometer-notification-agent" Jan 29 06:54:39 crc kubenswrapper[5017]: E0129 06:54:39.647520 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a438aaf2-d0d9-49b0-b3c4-347c3f9f8453" containerName="proxy-httpd" Jan 29 06:54:39 crc kubenswrapper[5017]: I0129 06:54:39.647527 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="a438aaf2-d0d9-49b0-b3c4-347c3f9f8453" containerName="proxy-httpd" Jan 29 06:54:39 crc kubenswrapper[5017]: I0129 06:54:39.647711 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="a438aaf2-d0d9-49b0-b3c4-347c3f9f8453" containerName="sg-core" Jan 29 06:54:39 crc kubenswrapper[5017]: I0129 06:54:39.647728 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="a438aaf2-d0d9-49b0-b3c4-347c3f9f8453" containerName="ceilometer-central-agent" Jan 29 06:54:39 crc kubenswrapper[5017]: I0129 06:54:39.647736 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="a438aaf2-d0d9-49b0-b3c4-347c3f9f8453" containerName="ceilometer-notification-agent" Jan 29 06:54:39 crc kubenswrapper[5017]: I0129 06:54:39.647751 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="a438aaf2-d0d9-49b0-b3c4-347c3f9f8453" containerName="proxy-httpd" Jan 29 06:54:39 crc kubenswrapper[5017]: I0129 06:54:39.649616 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 06:54:39 crc kubenswrapper[5017]: I0129 06:54:39.653451 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 29 06:54:39 crc kubenswrapper[5017]: I0129 06:54:39.654243 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 29 06:54:39 crc kubenswrapper[5017]: I0129 06:54:39.655415 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 06:54:39 crc kubenswrapper[5017]: I0129 06:54:39.670276 5017 scope.go:117] "RemoveContainer" containerID="11f8f45d009b860ad648464664ec41c1022a0eb39a7e7977eae5581b5bd4b5c1" Jan 29 06:54:39 crc kubenswrapper[5017]: I0129 06:54:39.723380 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/443e2637-af52-4409-aca6-c14cdc5c9767-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"443e2637-af52-4409-aca6-c14cdc5c9767\") " pod="openstack/ceilometer-0" Jan 29 06:54:39 crc kubenswrapper[5017]: I0129 06:54:39.723748 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/443e2637-af52-4409-aca6-c14cdc5c9767-scripts\") pod \"ceilometer-0\" (UID: \"443e2637-af52-4409-aca6-c14cdc5c9767\") " pod="openstack/ceilometer-0" Jan 29 06:54:39 crc kubenswrapper[5017]: I0129 06:54:39.724056 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cv7dl\" (UniqueName: \"kubernetes.io/projected/443e2637-af52-4409-aca6-c14cdc5c9767-kube-api-access-cv7dl\") pod \"ceilometer-0\" (UID: \"443e2637-af52-4409-aca6-c14cdc5c9767\") " pod="openstack/ceilometer-0" Jan 29 06:54:39 crc kubenswrapper[5017]: I0129 06:54:39.724236 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/443e2637-af52-4409-aca6-c14cdc5c9767-log-httpd\") pod \"ceilometer-0\" (UID: \"443e2637-af52-4409-aca6-c14cdc5c9767\") " pod="openstack/ceilometer-0" Jan 29 06:54:39 crc kubenswrapper[5017]: I0129 06:54:39.724331 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/443e2637-af52-4409-aca6-c14cdc5c9767-config-data\") pod \"ceilometer-0\" (UID: \"443e2637-af52-4409-aca6-c14cdc5c9767\") " pod="openstack/ceilometer-0" Jan 29 06:54:39 crc kubenswrapper[5017]: I0129 06:54:39.724509 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/443e2637-af52-4409-aca6-c14cdc5c9767-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"443e2637-af52-4409-aca6-c14cdc5c9767\") " pod="openstack/ceilometer-0" Jan 29 06:54:39 crc kubenswrapper[5017]: I0129 06:54:39.724568 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/443e2637-af52-4409-aca6-c14cdc5c9767-run-httpd\") pod \"ceilometer-0\" (UID: \"443e2637-af52-4409-aca6-c14cdc5c9767\") " pod="openstack/ceilometer-0" Jan 29 06:54:39 crc kubenswrapper[5017]: I0129 06:54:39.826597 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cv7dl\" (UniqueName: 
\"kubernetes.io/projected/443e2637-af52-4409-aca6-c14cdc5c9767-kube-api-access-cv7dl\") pod \"ceilometer-0\" (UID: \"443e2637-af52-4409-aca6-c14cdc5c9767\") " pod="openstack/ceilometer-0" Jan 29 06:54:39 crc kubenswrapper[5017]: I0129 06:54:39.826720 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/443e2637-af52-4409-aca6-c14cdc5c9767-log-httpd\") pod \"ceilometer-0\" (UID: \"443e2637-af52-4409-aca6-c14cdc5c9767\") " pod="openstack/ceilometer-0" Jan 29 06:54:39 crc kubenswrapper[5017]: I0129 06:54:39.826764 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/443e2637-af52-4409-aca6-c14cdc5c9767-config-data\") pod \"ceilometer-0\" (UID: \"443e2637-af52-4409-aca6-c14cdc5c9767\") " pod="openstack/ceilometer-0" Jan 29 06:54:39 crc kubenswrapper[5017]: I0129 06:54:39.826823 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/443e2637-af52-4409-aca6-c14cdc5c9767-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"443e2637-af52-4409-aca6-c14cdc5c9767\") " pod="openstack/ceilometer-0" Jan 29 06:54:39 crc kubenswrapper[5017]: I0129 06:54:39.826845 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/443e2637-af52-4409-aca6-c14cdc5c9767-run-httpd\") pod \"ceilometer-0\" (UID: \"443e2637-af52-4409-aca6-c14cdc5c9767\") " pod="openstack/ceilometer-0" Jan 29 06:54:39 crc kubenswrapper[5017]: I0129 06:54:39.826878 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/443e2637-af52-4409-aca6-c14cdc5c9767-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"443e2637-af52-4409-aca6-c14cdc5c9767\") " pod="openstack/ceilometer-0" Jan 29 06:54:39 crc kubenswrapper[5017]: I0129 06:54:39.826979 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/443e2637-af52-4409-aca6-c14cdc5c9767-scripts\") pod \"ceilometer-0\" (UID: \"443e2637-af52-4409-aca6-c14cdc5c9767\") " pod="openstack/ceilometer-0" Jan 29 06:54:39 crc kubenswrapper[5017]: I0129 06:54:39.827581 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/443e2637-af52-4409-aca6-c14cdc5c9767-run-httpd\") pod \"ceilometer-0\" (UID: \"443e2637-af52-4409-aca6-c14cdc5c9767\") " pod="openstack/ceilometer-0" Jan 29 06:54:39 crc kubenswrapper[5017]: I0129 06:54:39.828171 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/443e2637-af52-4409-aca6-c14cdc5c9767-log-httpd\") pod \"ceilometer-0\" (UID: \"443e2637-af52-4409-aca6-c14cdc5c9767\") " pod="openstack/ceilometer-0" Jan 29 06:54:39 crc kubenswrapper[5017]: I0129 06:54:39.832104 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/443e2637-af52-4409-aca6-c14cdc5c9767-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"443e2637-af52-4409-aca6-c14cdc5c9767\") " pod="openstack/ceilometer-0" Jan 29 06:54:39 crc kubenswrapper[5017]: I0129 06:54:39.832258 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/443e2637-af52-4409-aca6-c14cdc5c9767-config-data\") pod \"ceilometer-0\" (UID: \"443e2637-af52-4409-aca6-c14cdc5c9767\") " pod="openstack/ceilometer-0" Jan 29 06:54:39 crc kubenswrapper[5017]: I0129 06:54:39.832304 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/443e2637-af52-4409-aca6-c14cdc5c9767-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"443e2637-af52-4409-aca6-c14cdc5c9767\") " pod="openstack/ceilometer-0" Jan 29 06:54:39 crc kubenswrapper[5017]: I0129 06:54:39.835585 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/443e2637-af52-4409-aca6-c14cdc5c9767-scripts\") pod \"ceilometer-0\" (UID: \"443e2637-af52-4409-aca6-c14cdc5c9767\") " pod="openstack/ceilometer-0" Jan 29 06:54:39 crc kubenswrapper[5017]: I0129 06:54:39.849939 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cv7dl\" (UniqueName: \"kubernetes.io/projected/443e2637-af52-4409-aca6-c14cdc5c9767-kube-api-access-cv7dl\") pod \"ceilometer-0\" (UID: \"443e2637-af52-4409-aca6-c14cdc5c9767\") " pod="openstack/ceilometer-0" Jan 29 06:54:40 crc kubenswrapper[5017]: I0129 06:54:39.999861 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 06:54:40 crc kubenswrapper[5017]: I0129 06:54:40.333920 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a438aaf2-d0d9-49b0-b3c4-347c3f9f8453" path="/var/lib/kubelet/pods/a438aaf2-d0d9-49b0-b3c4-347c3f9f8453/volumes" Jan 29 06:54:40 crc kubenswrapper[5017]: I0129 06:54:40.495584 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 06:54:42 crc kubenswrapper[5017]: I0129 06:54:42.020558 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 29 06:54:44 crc kubenswrapper[5017]: I0129 06:54:44.533157 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 06:54:44 crc kubenswrapper[5017]: I0129 06:54:44.984488 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-bc7969485-9cbzw" Jan 29 06:54:44 crc kubenswrapper[5017]: I0129 06:54:44.987770 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-bc7969485-9cbzw" Jan 29 06:54:46 crc kubenswrapper[5017]: W0129 06:54:46.602237 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod443e2637_af52_4409_aca6_c14cdc5c9767.slice/crio-7d5fda3479249bd2ac66628fd2516b12d06a80d8cbf345b329dea8d05ad83b67 WatchSource:0}: Error finding container 7d5fda3479249bd2ac66628fd2516b12d06a80d8cbf345b329dea8d05ad83b67: Status 404 returned error can't find the container with id 7d5fda3479249bd2ac66628fd2516b12d06a80d8cbf345b329dea8d05ad83b67 Jan 29 06:54:46 crc kubenswrapper[5017]: I0129 06:54:46.646635 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"443e2637-af52-4409-aca6-c14cdc5c9767","Type":"ContainerStarted","Data":"7d5fda3479249bd2ac66628fd2516b12d06a80d8cbf345b329dea8d05ad83b67"} Jan 29 06:54:47 crc kubenswrapper[5017]: I0129 06:54:47.663509 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"443e2637-af52-4409-aca6-c14cdc5c9767","Type":"ContainerStarted","Data":"26e6cf425c1aa8cfd28b7351b9b74063dd93c99d0092fc678b9a6803b4dc138e"} Jan 29 06:54:47 crc kubenswrapper[5017]: I0129 06:54:47.668174 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"abd151c3-f255-4647-a923-3176a7dae25a","Type":"ContainerStarted","Data":"937202b40c1e1fb6b564b0a310d47bb37664160e11586ea04719f1fc9662dc75"} Jan 29 06:54:47 crc kubenswrapper[5017]: I0129 06:54:47.701870 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.762302628 podStartE2EDuration="13.701839042s" podCreationTimestamp="2026-01-29 06:54:34 +0000 UTC" firstStartedPulling="2026-01-29 06:54:35.71909018 +0000 UTC m=+1162.093537790" lastFinishedPulling="2026-01-29 06:54:46.658626594 +0000 UTC m=+1173.033074204" observedRunningTime="2026-01-29 06:54:47.697551296 +0000 UTC m=+1174.071998906" watchObservedRunningTime="2026-01-29 06:54:47.701839042 +0000 UTC m=+1174.076286662" Jan 29 06:54:48 crc kubenswrapper[5017]: I0129 06:54:48.173376 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 06:54:48 crc kubenswrapper[5017]: I0129 06:54:48.174137 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="296518b1-ea38-473c-a5c5-9378dde1f3ae" containerName="glance-log" containerID="cri-o://e64e5c2ac450851ff57a116131b0a50375c4fb0c2fa931a51e57907788f9e3e3" gracePeriod=30 Jan 29 06:54:48 crc kubenswrapper[5017]: I0129 06:54:48.174235 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="296518b1-ea38-473c-a5c5-9378dde1f3ae" containerName="glance-httpd" containerID="cri-o://2a3e23eeed54c8ed117c78f312d8a254e83795258ad657ac54cd23c444f93ae9" gracePeriod=30 Jan 29 06:54:48 crc kubenswrapper[5017]: I0129 06:54:48.690725 5017 generic.go:334] "Generic (PLEG): container finished" podID="296518b1-ea38-473c-a5c5-9378dde1f3ae" containerID="e64e5c2ac450851ff57a116131b0a50375c4fb0c2fa931a51e57907788f9e3e3" exitCode=143 Jan 29 06:54:48 crc kubenswrapper[5017]: I0129 06:54:48.690816 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"296518b1-ea38-473c-a5c5-9378dde1f3ae","Type":"ContainerDied","Data":"e64e5c2ac450851ff57a116131b0a50375c4fb0c2fa931a51e57907788f9e3e3"} Jan 29 06:54:48 crc kubenswrapper[5017]: I0129 06:54:48.693800 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"443e2637-af52-4409-aca6-c14cdc5c9767","Type":"ContainerStarted","Data":"be6476c27a25cf7f477b071f34e4a56900cbf355f24889fbcccb05be565369f2"} Jan 29 06:54:48 crc kubenswrapper[5017]: I0129 06:54:48.693843 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"443e2637-af52-4409-aca6-c14cdc5c9767","Type":"ContainerStarted","Data":"b947ff5f2557edc6394698a1efada2d0dae38828cb6af859765ddd75dfb89ace"} Jan 29 06:54:49 crc kubenswrapper[5017]: I0129 06:54:49.002850 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 06:54:49 crc kubenswrapper[5017]: I0129 06:54:49.003159 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="1c3408ec-55f8-4e9b-8e76-92e0d6e6dabc" containerName="glance-log" 
containerID="cri-o://d7944d62d295e9727fd3d5ae293bc08722b103a1d0a82d234d601cc5e7497025" gracePeriod=30 Jan 29 06:54:49 crc kubenswrapper[5017]: I0129 06:54:49.003664 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="1c3408ec-55f8-4e9b-8e76-92e0d6e6dabc" containerName="glance-httpd" containerID="cri-o://3cbee6ed26d3997c1f3865c063a98a6ce5c0673472463538e82c2d760f23dc03" gracePeriod=30 Jan 29 06:54:49 crc kubenswrapper[5017]: I0129 06:54:49.707049 5017 generic.go:334] "Generic (PLEG): container finished" podID="1c3408ec-55f8-4e9b-8e76-92e0d6e6dabc" containerID="d7944d62d295e9727fd3d5ae293bc08722b103a1d0a82d234d601cc5e7497025" exitCode=143 Jan 29 06:54:49 crc kubenswrapper[5017]: I0129 06:54:49.707139 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1c3408ec-55f8-4e9b-8e76-92e0d6e6dabc","Type":"ContainerDied","Data":"d7944d62d295e9727fd3d5ae293bc08722b103a1d0a82d234d601cc5e7497025"} Jan 29 06:54:50 crc kubenswrapper[5017]: I0129 06:54:50.722654 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"443e2637-af52-4409-aca6-c14cdc5c9767","Type":"ContainerStarted","Data":"4feef902af1ac52f9167a6d6cd5b6466c6997066a7a51013aa9155f607b92109"} Jan 29 06:54:50 crc kubenswrapper[5017]: I0129 06:54:50.723061 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="443e2637-af52-4409-aca6-c14cdc5c9767" containerName="ceilometer-notification-agent" containerID="cri-o://b947ff5f2557edc6394698a1efada2d0dae38828cb6af859765ddd75dfb89ace" gracePeriod=30 Jan 29 06:54:50 crc kubenswrapper[5017]: I0129 06:54:50.723104 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 29 06:54:50 crc kubenswrapper[5017]: I0129 06:54:50.722920 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="443e2637-af52-4409-aca6-c14cdc5c9767" containerName="ceilometer-central-agent" containerID="cri-o://26e6cf425c1aa8cfd28b7351b9b74063dd93c99d0092fc678b9a6803b4dc138e" gracePeriod=30 Jan 29 06:54:50 crc kubenswrapper[5017]: I0129 06:54:50.723094 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="443e2637-af52-4409-aca6-c14cdc5c9767" containerName="proxy-httpd" containerID="cri-o://4feef902af1ac52f9167a6d6cd5b6466c6997066a7a51013aa9155f607b92109" gracePeriod=30 Jan 29 06:54:50 crc kubenswrapper[5017]: I0129 06:54:50.723006 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="443e2637-af52-4409-aca6-c14cdc5c9767" containerName="sg-core" containerID="cri-o://be6476c27a25cf7f477b071f34e4a56900cbf355f24889fbcccb05be565369f2" gracePeriod=30 Jan 29 06:54:50 crc kubenswrapper[5017]: I0129 06:54:50.783546 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=8.150937464 podStartE2EDuration="11.783400728s" podCreationTimestamp="2026-01-29 06:54:39 +0000 UTC" firstStartedPulling="2026-01-29 06:54:46.607607263 +0000 UTC m=+1172.982054873" lastFinishedPulling="2026-01-29 06:54:50.240070527 +0000 UTC m=+1176.614518137" observedRunningTime="2026-01-29 06:54:50.759945209 +0000 UTC m=+1177.134392839" watchObservedRunningTime="2026-01-29 06:54:50.783400728 +0000 UTC m=+1177.157848368" Jan 29 06:54:51 crc kubenswrapper[5017]: I0129 
06:54:51.749572 5017 generic.go:334] "Generic (PLEG): container finished" podID="296518b1-ea38-473c-a5c5-9378dde1f3ae" containerID="2a3e23eeed54c8ed117c78f312d8a254e83795258ad657ac54cd23c444f93ae9" exitCode=0 Jan 29 06:54:51 crc kubenswrapper[5017]: I0129 06:54:51.750099 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"296518b1-ea38-473c-a5c5-9378dde1f3ae","Type":"ContainerDied","Data":"2a3e23eeed54c8ed117c78f312d8a254e83795258ad657ac54cd23c444f93ae9"} Jan 29 06:54:51 crc kubenswrapper[5017]: I0129 06:54:51.753886 5017 generic.go:334] "Generic (PLEG): container finished" podID="443e2637-af52-4409-aca6-c14cdc5c9767" containerID="4feef902af1ac52f9167a6d6cd5b6466c6997066a7a51013aa9155f607b92109" exitCode=0 Jan 29 06:54:51 crc kubenswrapper[5017]: I0129 06:54:51.753913 5017 generic.go:334] "Generic (PLEG): container finished" podID="443e2637-af52-4409-aca6-c14cdc5c9767" containerID="be6476c27a25cf7f477b071f34e4a56900cbf355f24889fbcccb05be565369f2" exitCode=2 Jan 29 06:54:51 crc kubenswrapper[5017]: I0129 06:54:51.754018 5017 generic.go:334] "Generic (PLEG): container finished" podID="443e2637-af52-4409-aca6-c14cdc5c9767" containerID="b947ff5f2557edc6394698a1efada2d0dae38828cb6af859765ddd75dfb89ace" exitCode=0 Jan 29 06:54:51 crc kubenswrapper[5017]: I0129 06:54:51.753942 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"443e2637-af52-4409-aca6-c14cdc5c9767","Type":"ContainerDied","Data":"4feef902af1ac52f9167a6d6cd5b6466c6997066a7a51013aa9155f607b92109"} Jan 29 06:54:51 crc kubenswrapper[5017]: I0129 06:54:51.754090 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"443e2637-af52-4409-aca6-c14cdc5c9767","Type":"ContainerDied","Data":"be6476c27a25cf7f477b071f34e4a56900cbf355f24889fbcccb05be565369f2"} Jan 29 06:54:51 crc kubenswrapper[5017]: I0129 06:54:51.754106 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"443e2637-af52-4409-aca6-c14cdc5c9767","Type":"ContainerDied","Data":"b947ff5f2557edc6394698a1efada2d0dae38828cb6af859765ddd75dfb89ace"} Jan 29 06:54:52 crc kubenswrapper[5017]: I0129 06:54:52.048383 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 06:54:52 crc kubenswrapper[5017]: I0129 06:54:52.133010 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q7pll\" (UniqueName: \"kubernetes.io/projected/296518b1-ea38-473c-a5c5-9378dde1f3ae-kube-api-access-q7pll\") pod \"296518b1-ea38-473c-a5c5-9378dde1f3ae\" (UID: \"296518b1-ea38-473c-a5c5-9378dde1f3ae\") " Jan 29 06:54:52 crc kubenswrapper[5017]: I0129 06:54:52.133191 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/296518b1-ea38-473c-a5c5-9378dde1f3ae-scripts\") pod \"296518b1-ea38-473c-a5c5-9378dde1f3ae\" (UID: \"296518b1-ea38-473c-a5c5-9378dde1f3ae\") " Jan 29 06:54:52 crc kubenswrapper[5017]: I0129 06:54:52.133233 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/296518b1-ea38-473c-a5c5-9378dde1f3ae-combined-ca-bundle\") pod \"296518b1-ea38-473c-a5c5-9378dde1f3ae\" (UID: \"296518b1-ea38-473c-a5c5-9378dde1f3ae\") " Jan 29 06:54:52 crc kubenswrapper[5017]: I0129 06:54:52.133260 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"296518b1-ea38-473c-a5c5-9378dde1f3ae\" (UID: \"296518b1-ea38-473c-a5c5-9378dde1f3ae\") " Jan 29 06:54:52 crc kubenswrapper[5017]: I0129 06:54:52.134670 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/296518b1-ea38-473c-a5c5-9378dde1f3ae-httpd-run\") pod \"296518b1-ea38-473c-a5c5-9378dde1f3ae\" (UID: \"296518b1-ea38-473c-a5c5-9378dde1f3ae\") " Jan 29 06:54:52 crc kubenswrapper[5017]: I0129 06:54:52.134764 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/296518b1-ea38-473c-a5c5-9378dde1f3ae-logs\") pod \"296518b1-ea38-473c-a5c5-9378dde1f3ae\" (UID: \"296518b1-ea38-473c-a5c5-9378dde1f3ae\") " Jan 29 06:54:52 crc kubenswrapper[5017]: I0129 06:54:52.134831 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/296518b1-ea38-473c-a5c5-9378dde1f3ae-config-data\") pod \"296518b1-ea38-473c-a5c5-9378dde1f3ae\" (UID: \"296518b1-ea38-473c-a5c5-9378dde1f3ae\") " Jan 29 06:54:52 crc kubenswrapper[5017]: I0129 06:54:52.134923 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/296518b1-ea38-473c-a5c5-9378dde1f3ae-public-tls-certs\") pod \"296518b1-ea38-473c-a5c5-9378dde1f3ae\" (UID: \"296518b1-ea38-473c-a5c5-9378dde1f3ae\") " Jan 29 06:54:52 crc kubenswrapper[5017]: I0129 06:54:52.134959 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/296518b1-ea38-473c-a5c5-9378dde1f3ae-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "296518b1-ea38-473c-a5c5-9378dde1f3ae" (UID: "296518b1-ea38-473c-a5c5-9378dde1f3ae"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:54:52 crc kubenswrapper[5017]: I0129 06:54:52.135371 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/296518b1-ea38-473c-a5c5-9378dde1f3ae-logs" (OuterVolumeSpecName: "logs") pod "296518b1-ea38-473c-a5c5-9378dde1f3ae" (UID: "296518b1-ea38-473c-a5c5-9378dde1f3ae"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:54:52 crc kubenswrapper[5017]: I0129 06:54:52.135603 5017 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/296518b1-ea38-473c-a5c5-9378dde1f3ae-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 29 06:54:52 crc kubenswrapper[5017]: I0129 06:54:52.135649 5017 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/296518b1-ea38-473c-a5c5-9378dde1f3ae-logs\") on node \"crc\" DevicePath \"\"" Jan 29 06:54:52 crc kubenswrapper[5017]: I0129 06:54:52.144782 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/296518b1-ea38-473c-a5c5-9378dde1f3ae-scripts" (OuterVolumeSpecName: "scripts") pod "296518b1-ea38-473c-a5c5-9378dde1f3ae" (UID: "296518b1-ea38-473c-a5c5-9378dde1f3ae"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:54:52 crc kubenswrapper[5017]: I0129 06:54:52.147137 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "glance") pod "296518b1-ea38-473c-a5c5-9378dde1f3ae" (UID: "296518b1-ea38-473c-a5c5-9378dde1f3ae"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 29 06:54:52 crc kubenswrapper[5017]: I0129 06:54:52.161589 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/296518b1-ea38-473c-a5c5-9378dde1f3ae-kube-api-access-q7pll" (OuterVolumeSpecName: "kube-api-access-q7pll") pod "296518b1-ea38-473c-a5c5-9378dde1f3ae" (UID: "296518b1-ea38-473c-a5c5-9378dde1f3ae"). InnerVolumeSpecName "kube-api-access-q7pll". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:54:52 crc kubenswrapper[5017]: I0129 06:54:52.177072 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/296518b1-ea38-473c-a5c5-9378dde1f3ae-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "296518b1-ea38-473c-a5c5-9378dde1f3ae" (UID: "296518b1-ea38-473c-a5c5-9378dde1f3ae"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:54:52 crc kubenswrapper[5017]: I0129 06:54:52.214691 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/296518b1-ea38-473c-a5c5-9378dde1f3ae-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "296518b1-ea38-473c-a5c5-9378dde1f3ae" (UID: "296518b1-ea38-473c-a5c5-9378dde1f3ae"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:54:52 crc kubenswrapper[5017]: I0129 06:54:52.233047 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/296518b1-ea38-473c-a5c5-9378dde1f3ae-config-data" (OuterVolumeSpecName: "config-data") pod "296518b1-ea38-473c-a5c5-9378dde1f3ae" (UID: "296518b1-ea38-473c-a5c5-9378dde1f3ae"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:54:52 crc kubenswrapper[5017]: I0129 06:54:52.237421 5017 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/296518b1-ea38-473c-a5c5-9378dde1f3ae-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 06:54:52 crc kubenswrapper[5017]: I0129 06:54:52.237474 5017 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/296518b1-ea38-473c-a5c5-9378dde1f3ae-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 06:54:52 crc kubenswrapper[5017]: I0129 06:54:52.237501 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q7pll\" (UniqueName: \"kubernetes.io/projected/296518b1-ea38-473c-a5c5-9378dde1f3ae-kube-api-access-q7pll\") on node \"crc\" DevicePath \"\"" Jan 29 06:54:52 crc kubenswrapper[5017]: I0129 06:54:52.237513 5017 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/296518b1-ea38-473c-a5c5-9378dde1f3ae-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 06:54:52 crc kubenswrapper[5017]: I0129 06:54:52.237526 5017 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/296518b1-ea38-473c-a5c5-9378dde1f3ae-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 06:54:52 crc kubenswrapper[5017]: I0129 06:54:52.237573 5017 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Jan 29 06:54:52 crc kubenswrapper[5017]: I0129 06:54:52.259462 5017 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Jan 29 06:54:52 crc kubenswrapper[5017]: I0129 06:54:52.340193 5017 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Jan 29 06:54:52 crc kubenswrapper[5017]: I0129 06:54:52.795690 5017 generic.go:334] "Generic (PLEG): container finished" podID="1c3408ec-55f8-4e9b-8e76-92e0d6e6dabc" containerID="3cbee6ed26d3997c1f3865c063a98a6ce5c0673472463538e82c2d760f23dc03" exitCode=0 Jan 29 06:54:52 crc kubenswrapper[5017]: I0129 06:54:52.796239 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1c3408ec-55f8-4e9b-8e76-92e0d6e6dabc","Type":"ContainerDied","Data":"3cbee6ed26d3997c1f3865c063a98a6ce5c0673472463538e82c2d760f23dc03"} Jan 29 06:54:52 crc kubenswrapper[5017]: I0129 06:54:52.810865 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"296518b1-ea38-473c-a5c5-9378dde1f3ae","Type":"ContainerDied","Data":"f600f50cf4491301e62c2ea8558346bc0eb7e94e3b8fd6364c32e7a818543da3"} Jan 29 06:54:52 crc kubenswrapper[5017]: I0129 06:54:52.810944 5017 scope.go:117] "RemoveContainer" containerID="2a3e23eeed54c8ed117c78f312d8a254e83795258ad657ac54cd23c444f93ae9" Jan 29 06:54:52 crc kubenswrapper[5017]: I0129 06:54:52.811163 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 06:54:52 crc kubenswrapper[5017]: I0129 06:54:52.938138 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 06:54:52 crc kubenswrapper[5017]: I0129 06:54:52.957753 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 06:54:52 crc kubenswrapper[5017]: I0129 06:54:52.977566 5017 scope.go:117] "RemoveContainer" containerID="e64e5c2ac450851ff57a116131b0a50375c4fb0c2fa931a51e57907788f9e3e3" Jan 29 06:54:52 crc kubenswrapper[5017]: I0129 06:54:52.983418 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 06:54:53 crc kubenswrapper[5017]: I0129 06:54:53.015318 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 06:54:53 crc kubenswrapper[5017]: E0129 06:54:53.015970 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="296518b1-ea38-473c-a5c5-9378dde1f3ae" containerName="glance-log" Jan 29 06:54:53 crc kubenswrapper[5017]: I0129 06:54:53.015988 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="296518b1-ea38-473c-a5c5-9378dde1f3ae" containerName="glance-log" Jan 29 06:54:53 crc kubenswrapper[5017]: E0129 06:54:53.016004 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="296518b1-ea38-473c-a5c5-9378dde1f3ae" containerName="glance-httpd" Jan 29 06:54:53 crc kubenswrapper[5017]: I0129 06:54:53.016010 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="296518b1-ea38-473c-a5c5-9378dde1f3ae" containerName="glance-httpd" Jan 29 06:54:53 crc kubenswrapper[5017]: E0129 06:54:53.016028 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c3408ec-55f8-4e9b-8e76-92e0d6e6dabc" containerName="glance-log" Jan 29 06:54:53 crc kubenswrapper[5017]: I0129 06:54:53.016036 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c3408ec-55f8-4e9b-8e76-92e0d6e6dabc" containerName="glance-log" Jan 29 06:54:53 crc kubenswrapper[5017]: E0129 06:54:53.016064 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c3408ec-55f8-4e9b-8e76-92e0d6e6dabc" containerName="glance-httpd" Jan 29 06:54:53 crc kubenswrapper[5017]: I0129 06:54:53.016072 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c3408ec-55f8-4e9b-8e76-92e0d6e6dabc" containerName="glance-httpd" Jan 29 06:54:53 crc kubenswrapper[5017]: I0129 06:54:53.016286 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="296518b1-ea38-473c-a5c5-9378dde1f3ae" containerName="glance-log" Jan 29 06:54:53 crc kubenswrapper[5017]: I0129 06:54:53.016300 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c3408ec-55f8-4e9b-8e76-92e0d6e6dabc" containerName="glance-log" Jan 29 06:54:53 crc kubenswrapper[5017]: I0129 06:54:53.016324 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c3408ec-55f8-4e9b-8e76-92e0d6e6dabc" containerName="glance-httpd" Jan 29 06:54:53 crc kubenswrapper[5017]: I0129 06:54:53.016334 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="296518b1-ea38-473c-a5c5-9378dde1f3ae" containerName="glance-httpd" Jan 29 06:54:53 crc kubenswrapper[5017]: I0129 06:54:53.017735 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 06:54:53 crc kubenswrapper[5017]: I0129 06:54:53.026253 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 06:54:53 crc kubenswrapper[5017]: I0129 06:54:53.027335 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 29 06:54:53 crc kubenswrapper[5017]: I0129 06:54:53.038521 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 29 06:54:53 crc kubenswrapper[5017]: I0129 06:54:53.056351 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c3408ec-55f8-4e9b-8e76-92e0d6e6dabc-config-data\") pod \"1c3408ec-55f8-4e9b-8e76-92e0d6e6dabc\" (UID: \"1c3408ec-55f8-4e9b-8e76-92e0d6e6dabc\") " Jan 29 06:54:53 crc kubenswrapper[5017]: I0129 06:54:53.056806 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c3408ec-55f8-4e9b-8e76-92e0d6e6dabc-internal-tls-certs\") pod \"1c3408ec-55f8-4e9b-8e76-92e0d6e6dabc\" (UID: \"1c3408ec-55f8-4e9b-8e76-92e0d6e6dabc\") " Jan 29 06:54:53 crc kubenswrapper[5017]: I0129 06:54:53.057180 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c3408ec-55f8-4e9b-8e76-92e0d6e6dabc-combined-ca-bundle\") pod \"1c3408ec-55f8-4e9b-8e76-92e0d6e6dabc\" (UID: \"1c3408ec-55f8-4e9b-8e76-92e0d6e6dabc\") " Jan 29 06:54:53 crc kubenswrapper[5017]: I0129 06:54:53.057313 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"1c3408ec-55f8-4e9b-8e76-92e0d6e6dabc\" (UID: \"1c3408ec-55f8-4e9b-8e76-92e0d6e6dabc\") " Jan 29 06:54:53 crc kubenswrapper[5017]: I0129 06:54:53.057420 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c3408ec-55f8-4e9b-8e76-92e0d6e6dabc-scripts\") pod \"1c3408ec-55f8-4e9b-8e76-92e0d6e6dabc\" (UID: \"1c3408ec-55f8-4e9b-8e76-92e0d6e6dabc\") " Jan 29 06:54:53 crc kubenswrapper[5017]: I0129 06:54:53.057514 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1c3408ec-55f8-4e9b-8e76-92e0d6e6dabc-logs\") pod \"1c3408ec-55f8-4e9b-8e76-92e0d6e6dabc\" (UID: \"1c3408ec-55f8-4e9b-8e76-92e0d6e6dabc\") " Jan 29 06:54:53 crc kubenswrapper[5017]: I0129 06:54:53.057699 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1c3408ec-55f8-4e9b-8e76-92e0d6e6dabc-httpd-run\") pod \"1c3408ec-55f8-4e9b-8e76-92e0d6e6dabc\" (UID: \"1c3408ec-55f8-4e9b-8e76-92e0d6e6dabc\") " Jan 29 06:54:53 crc kubenswrapper[5017]: I0129 06:54:53.057848 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmk85\" (UniqueName: \"kubernetes.io/projected/1c3408ec-55f8-4e9b-8e76-92e0d6e6dabc-kube-api-access-nmk85\") pod \"1c3408ec-55f8-4e9b-8e76-92e0d6e6dabc\" (UID: \"1c3408ec-55f8-4e9b-8e76-92e0d6e6dabc\") " Jan 29 06:54:53 crc kubenswrapper[5017]: I0129 06:54:53.067332 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/1c3408ec-55f8-4e9b-8e76-92e0d6e6dabc-logs" (OuterVolumeSpecName: "logs") pod "1c3408ec-55f8-4e9b-8e76-92e0d6e6dabc" (UID: "1c3408ec-55f8-4e9b-8e76-92e0d6e6dabc"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:54:53 crc kubenswrapper[5017]: I0129 06:54:53.070175 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c3408ec-55f8-4e9b-8e76-92e0d6e6dabc-scripts" (OuterVolumeSpecName: "scripts") pod "1c3408ec-55f8-4e9b-8e76-92e0d6e6dabc" (UID: "1c3408ec-55f8-4e9b-8e76-92e0d6e6dabc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:54:53 crc kubenswrapper[5017]: I0129 06:54:53.070191 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "1c3408ec-55f8-4e9b-8e76-92e0d6e6dabc" (UID: "1c3408ec-55f8-4e9b-8e76-92e0d6e6dabc"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 29 06:54:53 crc kubenswrapper[5017]: I0129 06:54:53.074132 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c3408ec-55f8-4e9b-8e76-92e0d6e6dabc-kube-api-access-nmk85" (OuterVolumeSpecName: "kube-api-access-nmk85") pod "1c3408ec-55f8-4e9b-8e76-92e0d6e6dabc" (UID: "1c3408ec-55f8-4e9b-8e76-92e0d6e6dabc"). InnerVolumeSpecName "kube-api-access-nmk85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:54:53 crc kubenswrapper[5017]: I0129 06:54:53.074371 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c3408ec-55f8-4e9b-8e76-92e0d6e6dabc-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "1c3408ec-55f8-4e9b-8e76-92e0d6e6dabc" (UID: "1c3408ec-55f8-4e9b-8e76-92e0d6e6dabc"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:54:53 crc kubenswrapper[5017]: I0129 06:54:53.135051 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c3408ec-55f8-4e9b-8e76-92e0d6e6dabc-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "1c3408ec-55f8-4e9b-8e76-92e0d6e6dabc" (UID: "1c3408ec-55f8-4e9b-8e76-92e0d6e6dabc"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:54:53 crc kubenswrapper[5017]: I0129 06:54:53.139118 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c3408ec-55f8-4e9b-8e76-92e0d6e6dabc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1c3408ec-55f8-4e9b-8e76-92e0d6e6dabc" (UID: "1c3408ec-55f8-4e9b-8e76-92e0d6e6dabc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:54:53 crc kubenswrapper[5017]: I0129 06:54:53.140142 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c3408ec-55f8-4e9b-8e76-92e0d6e6dabc-config-data" (OuterVolumeSpecName: "config-data") pod "1c3408ec-55f8-4e9b-8e76-92e0d6e6dabc" (UID: "1c3408ec-55f8-4e9b-8e76-92e0d6e6dabc"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:54:53 crc kubenswrapper[5017]: I0129 06:54:53.159944 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9df7814f-338e-40fb-95aa-f93dfa8307d6-config-data\") pod \"glance-default-external-api-0\" (UID: \"9df7814f-338e-40fb-95aa-f93dfa8307d6\") " pod="openstack/glance-default-external-api-0" Jan 29 06:54:53 crc kubenswrapper[5017]: I0129 06:54:53.160260 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"9df7814f-338e-40fb-95aa-f93dfa8307d6\") " pod="openstack/glance-default-external-api-0" Jan 29 06:54:53 crc kubenswrapper[5017]: I0129 06:54:53.160512 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9df7814f-338e-40fb-95aa-f93dfa8307d6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"9df7814f-338e-40fb-95aa-f93dfa8307d6\") " pod="openstack/glance-default-external-api-0" Jan 29 06:54:53 crc kubenswrapper[5017]: I0129 06:54:53.160680 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9df7814f-338e-40fb-95aa-f93dfa8307d6-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"9df7814f-338e-40fb-95aa-f93dfa8307d6\") " pod="openstack/glance-default-external-api-0" Jan 29 06:54:53 crc kubenswrapper[5017]: I0129 06:54:53.160772 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9df7814f-338e-40fb-95aa-f93dfa8307d6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"9df7814f-338e-40fb-95aa-f93dfa8307d6\") " pod="openstack/glance-default-external-api-0" Jan 29 06:54:53 crc kubenswrapper[5017]: I0129 06:54:53.161628 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9df7814f-338e-40fb-95aa-f93dfa8307d6-scripts\") pod \"glance-default-external-api-0\" (UID: \"9df7814f-338e-40fb-95aa-f93dfa8307d6\") " pod="openstack/glance-default-external-api-0" Jan 29 06:54:53 crc kubenswrapper[5017]: I0129 06:54:53.161783 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9df7814f-338e-40fb-95aa-f93dfa8307d6-logs\") pod \"glance-default-external-api-0\" (UID: \"9df7814f-338e-40fb-95aa-f93dfa8307d6\") " pod="openstack/glance-default-external-api-0" Jan 29 06:54:53 crc kubenswrapper[5017]: I0129 06:54:53.161998 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wjdr\" (UniqueName: \"kubernetes.io/projected/9df7814f-338e-40fb-95aa-f93dfa8307d6-kube-api-access-9wjdr\") pod \"glance-default-external-api-0\" (UID: \"9df7814f-338e-40fb-95aa-f93dfa8307d6\") " pod="openstack/glance-default-external-api-0" Jan 29 06:54:53 crc kubenswrapper[5017]: I0129 06:54:53.162218 5017 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c3408ec-55f8-4e9b-8e76-92e0d6e6dabc-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 
06:54:53 crc kubenswrapper[5017]: I0129 06:54:53.162232 5017 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c3408ec-55f8-4e9b-8e76-92e0d6e6dabc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 06:54:53 crc kubenswrapper[5017]: I0129 06:54:53.162263 5017 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Jan 29 06:54:53 crc kubenswrapper[5017]: I0129 06:54:53.162273 5017 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c3408ec-55f8-4e9b-8e76-92e0d6e6dabc-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 06:54:53 crc kubenswrapper[5017]: I0129 06:54:53.162283 5017 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1c3408ec-55f8-4e9b-8e76-92e0d6e6dabc-logs\") on node \"crc\" DevicePath \"\"" Jan 29 06:54:53 crc kubenswrapper[5017]: I0129 06:54:53.162292 5017 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1c3408ec-55f8-4e9b-8e76-92e0d6e6dabc-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 29 06:54:53 crc kubenswrapper[5017]: I0129 06:54:53.162301 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nmk85\" (UniqueName: \"kubernetes.io/projected/1c3408ec-55f8-4e9b-8e76-92e0d6e6dabc-kube-api-access-nmk85\") on node \"crc\" DevicePath \"\"" Jan 29 06:54:53 crc kubenswrapper[5017]: I0129 06:54:53.162312 5017 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c3408ec-55f8-4e9b-8e76-92e0d6e6dabc-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 06:54:53 crc kubenswrapper[5017]: I0129 06:54:53.188191 5017 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Jan 29 06:54:53 crc kubenswrapper[5017]: I0129 06:54:53.264622 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"9df7814f-338e-40fb-95aa-f93dfa8307d6\") " pod="openstack/glance-default-external-api-0" Jan 29 06:54:53 crc kubenswrapper[5017]: I0129 06:54:53.265033 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9df7814f-338e-40fb-95aa-f93dfa8307d6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"9df7814f-338e-40fb-95aa-f93dfa8307d6\") " pod="openstack/glance-default-external-api-0" Jan 29 06:54:53 crc kubenswrapper[5017]: I0129 06:54:53.265076 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9df7814f-338e-40fb-95aa-f93dfa8307d6-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"9df7814f-338e-40fb-95aa-f93dfa8307d6\") " pod="openstack/glance-default-external-api-0" Jan 29 06:54:53 crc kubenswrapper[5017]: I0129 06:54:53.265103 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9df7814f-338e-40fb-95aa-f93dfa8307d6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"9df7814f-338e-40fb-95aa-f93dfa8307d6\") " 
pod="openstack/glance-default-external-api-0" Jan 29 06:54:53 crc kubenswrapper[5017]: I0129 06:54:53.265159 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9df7814f-338e-40fb-95aa-f93dfa8307d6-scripts\") pod \"glance-default-external-api-0\" (UID: \"9df7814f-338e-40fb-95aa-f93dfa8307d6\") " pod="openstack/glance-default-external-api-0" Jan 29 06:54:53 crc kubenswrapper[5017]: I0129 06:54:53.265190 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9df7814f-338e-40fb-95aa-f93dfa8307d6-logs\") pod \"glance-default-external-api-0\" (UID: \"9df7814f-338e-40fb-95aa-f93dfa8307d6\") " pod="openstack/glance-default-external-api-0" Jan 29 06:54:53 crc kubenswrapper[5017]: I0129 06:54:53.265234 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wjdr\" (UniqueName: \"kubernetes.io/projected/9df7814f-338e-40fb-95aa-f93dfa8307d6-kube-api-access-9wjdr\") pod \"glance-default-external-api-0\" (UID: \"9df7814f-338e-40fb-95aa-f93dfa8307d6\") " pod="openstack/glance-default-external-api-0" Jan 29 06:54:53 crc kubenswrapper[5017]: I0129 06:54:53.265276 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9df7814f-338e-40fb-95aa-f93dfa8307d6-config-data\") pod \"glance-default-external-api-0\" (UID: \"9df7814f-338e-40fb-95aa-f93dfa8307d6\") " pod="openstack/glance-default-external-api-0" Jan 29 06:54:53 crc kubenswrapper[5017]: I0129 06:54:53.265401 5017 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"9df7814f-338e-40fb-95aa-f93dfa8307d6\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-external-api-0" Jan 29 06:54:53 crc kubenswrapper[5017]: I0129 06:54:53.266619 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9df7814f-338e-40fb-95aa-f93dfa8307d6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"9df7814f-338e-40fb-95aa-f93dfa8307d6\") " pod="openstack/glance-default-external-api-0" Jan 29 06:54:53 crc kubenswrapper[5017]: I0129 06:54:53.266871 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9df7814f-338e-40fb-95aa-f93dfa8307d6-logs\") pod \"glance-default-external-api-0\" (UID: \"9df7814f-338e-40fb-95aa-f93dfa8307d6\") " pod="openstack/glance-default-external-api-0" Jan 29 06:54:53 crc kubenswrapper[5017]: I0129 06:54:53.267037 5017 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Jan 29 06:54:53 crc kubenswrapper[5017]: I0129 06:54:53.273506 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9df7814f-338e-40fb-95aa-f93dfa8307d6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"9df7814f-338e-40fb-95aa-f93dfa8307d6\") " pod="openstack/glance-default-external-api-0" Jan 29 06:54:53 crc kubenswrapper[5017]: I0129 06:54:53.273609 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/9df7814f-338e-40fb-95aa-f93dfa8307d6-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"9df7814f-338e-40fb-95aa-f93dfa8307d6\") " pod="openstack/glance-default-external-api-0" Jan 29 06:54:53 crc kubenswrapper[5017]: I0129 06:54:53.274050 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9df7814f-338e-40fb-95aa-f93dfa8307d6-config-data\") pod \"glance-default-external-api-0\" (UID: \"9df7814f-338e-40fb-95aa-f93dfa8307d6\") " pod="openstack/glance-default-external-api-0" Jan 29 06:54:53 crc kubenswrapper[5017]: I0129 06:54:53.275018 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9df7814f-338e-40fb-95aa-f93dfa8307d6-scripts\") pod \"glance-default-external-api-0\" (UID: \"9df7814f-338e-40fb-95aa-f93dfa8307d6\") " pod="openstack/glance-default-external-api-0" Jan 29 06:54:53 crc kubenswrapper[5017]: I0129 06:54:53.292772 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wjdr\" (UniqueName: \"kubernetes.io/projected/9df7814f-338e-40fb-95aa-f93dfa8307d6-kube-api-access-9wjdr\") pod \"glance-default-external-api-0\" (UID: \"9df7814f-338e-40fb-95aa-f93dfa8307d6\") " pod="openstack/glance-default-external-api-0" Jan 29 06:54:53 crc kubenswrapper[5017]: I0129 06:54:53.309655 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"9df7814f-338e-40fb-95aa-f93dfa8307d6\") " pod="openstack/glance-default-external-api-0" Jan 29 06:54:53 crc kubenswrapper[5017]: I0129 06:54:53.341475 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 06:54:53 crc kubenswrapper[5017]: I0129 06:54:53.824882 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1c3408ec-55f8-4e9b-8e76-92e0d6e6dabc","Type":"ContainerDied","Data":"f059363742b4bb92f6c4843544ae4c4c71a363059e06e95e90d294974528a3ed"} Jan 29 06:54:53 crc kubenswrapper[5017]: I0129 06:54:53.824921 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 06:54:53 crc kubenswrapper[5017]: I0129 06:54:53.825427 5017 scope.go:117] "RemoveContainer" containerID="3cbee6ed26d3997c1f3865c063a98a6ce5c0673472463538e82c2d760f23dc03" Jan 29 06:54:53 crc kubenswrapper[5017]: I0129 06:54:53.869734 5017 scope.go:117] "RemoveContainer" containerID="d7944d62d295e9727fd3d5ae293bc08722b103a1d0a82d234d601cc5e7497025" Jan 29 06:54:53 crc kubenswrapper[5017]: I0129 06:54:53.871193 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 06:54:53 crc kubenswrapper[5017]: I0129 06:54:53.881031 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 06:54:53 crc kubenswrapper[5017]: I0129 06:54:53.915685 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 06:54:53 crc kubenswrapper[5017]: I0129 06:54:53.917430 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 06:54:53 crc kubenswrapper[5017]: I0129 06:54:53.920579 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 29 06:54:53 crc kubenswrapper[5017]: I0129 06:54:53.920860 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 29 06:54:54 crc kubenswrapper[5017]: I0129 06:54:54.004402 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 06:54:54 crc kubenswrapper[5017]: I0129 06:54:54.086742 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e41c27f8-0c27-4e3d-83b1-62a61abb4faf-logs\") pod \"glance-default-internal-api-0\" (UID: \"e41c27f8-0c27-4e3d-83b1-62a61abb4faf\") " pod="openstack/glance-default-internal-api-0" Jan 29 06:54:54 crc kubenswrapper[5017]: I0129 06:54:54.086792 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"e41c27f8-0c27-4e3d-83b1-62a61abb4faf\") " pod="openstack/glance-default-internal-api-0" Jan 29 06:54:54 crc kubenswrapper[5017]: I0129 06:54:54.086826 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e41c27f8-0c27-4e3d-83b1-62a61abb4faf-config-data\") pod \"glance-default-internal-api-0\" (UID: \"e41c27f8-0c27-4e3d-83b1-62a61abb4faf\") " pod="openstack/glance-default-internal-api-0" Jan 29 06:54:54 crc kubenswrapper[5017]: I0129 06:54:54.086893 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e41c27f8-0c27-4e3d-83b1-62a61abb4faf-scripts\") pod \"glance-default-internal-api-0\" (UID: \"e41c27f8-0c27-4e3d-83b1-62a61abb4faf\") " pod="openstack/glance-default-internal-api-0" Jan 29 06:54:54 crc kubenswrapper[5017]: I0129 06:54:54.086921 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e41c27f8-0c27-4e3d-83b1-62a61abb4faf-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"e41c27f8-0c27-4e3d-83b1-62a61abb4faf\") " pod="openstack/glance-default-internal-api-0" Jan 29 06:54:54 crc kubenswrapper[5017]: I0129 06:54:54.086944 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e41c27f8-0c27-4e3d-83b1-62a61abb4faf-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"e41c27f8-0c27-4e3d-83b1-62a61abb4faf\") " pod="openstack/glance-default-internal-api-0" Jan 29 06:54:54 crc kubenswrapper[5017]: I0129 06:54:54.087022 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6g5c\" (UniqueName: \"kubernetes.io/projected/e41c27f8-0c27-4e3d-83b1-62a61abb4faf-kube-api-access-l6g5c\") pod \"glance-default-internal-api-0\" (UID: \"e41c27f8-0c27-4e3d-83b1-62a61abb4faf\") " pod="openstack/glance-default-internal-api-0" Jan 29 06:54:54 crc kubenswrapper[5017]: I0129 06:54:54.087046 5017 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e41c27f8-0c27-4e3d-83b1-62a61abb4faf-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"e41c27f8-0c27-4e3d-83b1-62a61abb4faf\") " pod="openstack/glance-default-internal-api-0" Jan 29 06:54:54 crc kubenswrapper[5017]: I0129 06:54:54.098038 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 06:54:54 crc kubenswrapper[5017]: I0129 06:54:54.188504 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e41c27f8-0c27-4e3d-83b1-62a61abb4faf-logs\") pod \"glance-default-internal-api-0\" (UID: \"e41c27f8-0c27-4e3d-83b1-62a61abb4faf\") " pod="openstack/glance-default-internal-api-0" Jan 29 06:54:54 crc kubenswrapper[5017]: I0129 06:54:54.189157 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e41c27f8-0c27-4e3d-83b1-62a61abb4faf-logs\") pod \"glance-default-internal-api-0\" (UID: \"e41c27f8-0c27-4e3d-83b1-62a61abb4faf\") " pod="openstack/glance-default-internal-api-0" Jan 29 06:54:54 crc kubenswrapper[5017]: I0129 06:54:54.189176 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"e41c27f8-0c27-4e3d-83b1-62a61abb4faf\") " pod="openstack/glance-default-internal-api-0" Jan 29 06:54:54 crc kubenswrapper[5017]: I0129 06:54:54.189254 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e41c27f8-0c27-4e3d-83b1-62a61abb4faf-config-data\") pod \"glance-default-internal-api-0\" (UID: \"e41c27f8-0c27-4e3d-83b1-62a61abb4faf\") " pod="openstack/glance-default-internal-api-0" Jan 29 06:54:54 crc kubenswrapper[5017]: I0129 06:54:54.189494 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e41c27f8-0c27-4e3d-83b1-62a61abb4faf-scripts\") pod \"glance-default-internal-api-0\" (UID: \"e41c27f8-0c27-4e3d-83b1-62a61abb4faf\") " pod="openstack/glance-default-internal-api-0" Jan 29 06:54:54 crc kubenswrapper[5017]: I0129 06:54:54.189560 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e41c27f8-0c27-4e3d-83b1-62a61abb4faf-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"e41c27f8-0c27-4e3d-83b1-62a61abb4faf\") " pod="openstack/glance-default-internal-api-0" Jan 29 06:54:54 crc kubenswrapper[5017]: I0129 06:54:54.189615 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e41c27f8-0c27-4e3d-83b1-62a61abb4faf-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"e41c27f8-0c27-4e3d-83b1-62a61abb4faf\") " pod="openstack/glance-default-internal-api-0" Jan 29 06:54:54 crc kubenswrapper[5017]: I0129 06:54:54.189756 5017 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"e41c27f8-0c27-4e3d-83b1-62a61abb4faf\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-internal-api-0" Jan 29 06:54:54 crc 
kubenswrapper[5017]: I0129 06:54:54.189772 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6g5c\" (UniqueName: \"kubernetes.io/projected/e41c27f8-0c27-4e3d-83b1-62a61abb4faf-kube-api-access-l6g5c\") pod \"glance-default-internal-api-0\" (UID: \"e41c27f8-0c27-4e3d-83b1-62a61abb4faf\") " pod="openstack/glance-default-internal-api-0" Jan 29 06:54:54 crc kubenswrapper[5017]: I0129 06:54:54.189818 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e41c27f8-0c27-4e3d-83b1-62a61abb4faf-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"e41c27f8-0c27-4e3d-83b1-62a61abb4faf\") " pod="openstack/glance-default-internal-api-0" Jan 29 06:54:54 crc kubenswrapper[5017]: I0129 06:54:54.190319 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e41c27f8-0c27-4e3d-83b1-62a61abb4faf-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"e41c27f8-0c27-4e3d-83b1-62a61abb4faf\") " pod="openstack/glance-default-internal-api-0" Jan 29 06:54:54 crc kubenswrapper[5017]: I0129 06:54:54.198588 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e41c27f8-0c27-4e3d-83b1-62a61abb4faf-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"e41c27f8-0c27-4e3d-83b1-62a61abb4faf\") " pod="openstack/glance-default-internal-api-0" Jan 29 06:54:54 crc kubenswrapper[5017]: I0129 06:54:54.198725 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e41c27f8-0c27-4e3d-83b1-62a61abb4faf-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"e41c27f8-0c27-4e3d-83b1-62a61abb4faf\") " pod="openstack/glance-default-internal-api-0" Jan 29 06:54:54 crc kubenswrapper[5017]: I0129 06:54:54.199482 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e41c27f8-0c27-4e3d-83b1-62a61abb4faf-config-data\") pod \"glance-default-internal-api-0\" (UID: \"e41c27f8-0c27-4e3d-83b1-62a61abb4faf\") " pod="openstack/glance-default-internal-api-0" Jan 29 06:54:54 crc kubenswrapper[5017]: I0129 06:54:54.200667 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e41c27f8-0c27-4e3d-83b1-62a61abb4faf-scripts\") pod \"glance-default-internal-api-0\" (UID: \"e41c27f8-0c27-4e3d-83b1-62a61abb4faf\") " pod="openstack/glance-default-internal-api-0" Jan 29 06:54:54 crc kubenswrapper[5017]: I0129 06:54:54.210844 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6g5c\" (UniqueName: \"kubernetes.io/projected/e41c27f8-0c27-4e3d-83b1-62a61abb4faf-kube-api-access-l6g5c\") pod \"glance-default-internal-api-0\" (UID: \"e41c27f8-0c27-4e3d-83b1-62a61abb4faf\") " pod="openstack/glance-default-internal-api-0" Jan 29 06:54:54 crc kubenswrapper[5017]: I0129 06:54:54.231763 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"e41c27f8-0c27-4e3d-83b1-62a61abb4faf\") " pod="openstack/glance-default-internal-api-0" Jan 29 06:54:54 crc kubenswrapper[5017]: I0129 06:54:54.296931 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 06:54:54 crc kubenswrapper[5017]: I0129 06:54:54.331668 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c3408ec-55f8-4e9b-8e76-92e0d6e6dabc" path="/var/lib/kubelet/pods/1c3408ec-55f8-4e9b-8e76-92e0d6e6dabc/volumes" Jan 29 06:54:54 crc kubenswrapper[5017]: I0129 06:54:54.332386 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="296518b1-ea38-473c-a5c5-9378dde1f3ae" path="/var/lib/kubelet/pods/296518b1-ea38-473c-a5c5-9378dde1f3ae/volumes" Jan 29 06:54:54 crc kubenswrapper[5017]: I0129 06:54:54.855680 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9df7814f-338e-40fb-95aa-f93dfa8307d6","Type":"ContainerStarted","Data":"4cb2e309cb9d136517ca8e41e3becfde536e7b8da16a8490be5ddf8950013877"} Jan 29 06:54:55 crc kubenswrapper[5017]: I0129 06:54:55.011519 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 06:54:55 crc kubenswrapper[5017]: W0129 06:54:55.021650 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode41c27f8_0c27_4e3d_83b1_62a61abb4faf.slice/crio-8ad921e5528d4c71a7008240c3da3ab58d2f022a1e3bdfbf68d52189fee9984b WatchSource:0}: Error finding container 8ad921e5528d4c71a7008240c3da3ab58d2f022a1e3bdfbf68d52189fee9984b: Status 404 returned error can't find the container with id 8ad921e5528d4c71a7008240c3da3ab58d2f022a1e3bdfbf68d52189fee9984b Jan 29 06:54:55 crc kubenswrapper[5017]: I0129 06:54:55.869156 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e41c27f8-0c27-4e3d-83b1-62a61abb4faf","Type":"ContainerStarted","Data":"8ad921e5528d4c71a7008240c3da3ab58d2f022a1e3bdfbf68d52189fee9984b"} Jan 29 06:54:55 crc kubenswrapper[5017]: I0129 06:54:55.872509 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9df7814f-338e-40fb-95aa-f93dfa8307d6","Type":"ContainerStarted","Data":"9faf0d173c0b59296fed359030686d4a096d13b44a22d7aecdca4136e3a15a7a"} Jan 29 06:54:55 crc kubenswrapper[5017]: I0129 06:54:55.872545 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9df7814f-338e-40fb-95aa-f93dfa8307d6","Type":"ContainerStarted","Data":"0c316a09cd764cd0d7717d88d306f363aee8a406a7aab49e4974af5176af2934"} Jan 29 06:54:55 crc kubenswrapper[5017]: I0129 06:54:55.910717 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.910696564 podStartE2EDuration="3.910696564s" podCreationTimestamp="2026-01-29 06:54:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:54:55.909038204 +0000 UTC m=+1182.283485814" watchObservedRunningTime="2026-01-29 06:54:55.910696564 +0000 UTC m=+1182.285144174" Jan 29 06:54:56 crc kubenswrapper[5017]: I0129 06:54:56.886222 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e41c27f8-0c27-4e3d-83b1-62a61abb4faf","Type":"ContainerStarted","Data":"dbfb5839d0de6937d94e1b06808176fec9fce89e3d52e262a3d51db47ee776af"} Jan 29 06:54:56 crc kubenswrapper[5017]: I0129 06:54:56.886988 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-default-internal-api-0" event={"ID":"e41c27f8-0c27-4e3d-83b1-62a61abb4faf","Type":"ContainerStarted","Data":"e97d4813efcf24ccacbdbd3a38be06d62a0d548850293f33dfd1414e7cf3dbe7"} Jan 29 06:54:56 crc kubenswrapper[5017]: I0129 06:54:56.928238 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.928206958 podStartE2EDuration="3.928206958s" podCreationTimestamp="2026-01-29 06:54:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:54:56.918009356 +0000 UTC m=+1183.292456966" watchObservedRunningTime="2026-01-29 06:54:56.928206958 +0000 UTC m=+1183.302654578" Jan 29 06:54:57 crc kubenswrapper[5017]: I0129 06:54:57.051420 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-fgmwl"] Jan 29 06:54:57 crc kubenswrapper[5017]: I0129 06:54:57.053193 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-fgmwl" Jan 29 06:54:57 crc kubenswrapper[5017]: I0129 06:54:57.057872 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcgn2\" (UniqueName: \"kubernetes.io/projected/23be4105-cd73-4c7f-b967-8cac7cf8451d-kube-api-access-zcgn2\") pod \"nova-api-db-create-fgmwl\" (UID: \"23be4105-cd73-4c7f-b967-8cac7cf8451d\") " pod="openstack/nova-api-db-create-fgmwl" Jan 29 06:54:57 crc kubenswrapper[5017]: I0129 06:54:57.058031 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/23be4105-cd73-4c7f-b967-8cac7cf8451d-operator-scripts\") pod \"nova-api-db-create-fgmwl\" (UID: \"23be4105-cd73-4c7f-b967-8cac7cf8451d\") " pod="openstack/nova-api-db-create-fgmwl" Jan 29 06:54:57 crc kubenswrapper[5017]: I0129 06:54:57.077309 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-fgmwl"] Jan 29 06:54:57 crc kubenswrapper[5017]: I0129 06:54:57.144104 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-9p92z"] Jan 29 06:54:57 crc kubenswrapper[5017]: I0129 06:54:57.145758 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-9p92z" Jan 29 06:54:57 crc kubenswrapper[5017]: I0129 06:54:57.157806 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-9p92z"] Jan 29 06:54:57 crc kubenswrapper[5017]: I0129 06:54:57.160622 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/23be4105-cd73-4c7f-b967-8cac7cf8451d-operator-scripts\") pod \"nova-api-db-create-fgmwl\" (UID: \"23be4105-cd73-4c7f-b967-8cac7cf8451d\") " pod="openstack/nova-api-db-create-fgmwl" Jan 29 06:54:57 crc kubenswrapper[5017]: I0129 06:54:57.160718 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcgn2\" (UniqueName: \"kubernetes.io/projected/23be4105-cd73-4c7f-b967-8cac7cf8451d-kube-api-access-zcgn2\") pod \"nova-api-db-create-fgmwl\" (UID: \"23be4105-cd73-4c7f-b967-8cac7cf8451d\") " pod="openstack/nova-api-db-create-fgmwl" Jan 29 06:54:57 crc kubenswrapper[5017]: I0129 06:54:57.162345 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/23be4105-cd73-4c7f-b967-8cac7cf8451d-operator-scripts\") pod \"nova-api-db-create-fgmwl\" (UID: \"23be4105-cd73-4c7f-b967-8cac7cf8451d\") " pod="openstack/nova-api-db-create-fgmwl" Jan 29 06:54:57 crc kubenswrapper[5017]: I0129 06:54:57.195629 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zcgn2\" (UniqueName: \"kubernetes.io/projected/23be4105-cd73-4c7f-b967-8cac7cf8451d-kube-api-access-zcgn2\") pod \"nova-api-db-create-fgmwl\" (UID: \"23be4105-cd73-4c7f-b967-8cac7cf8451d\") " pod="openstack/nova-api-db-create-fgmwl" Jan 29 06:54:57 crc kubenswrapper[5017]: I0129 06:54:57.262815 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a0b48b8f-8c59-4c86-a4fd-b1b408d6dbae-operator-scripts\") pod \"nova-cell0-db-create-9p92z\" (UID: \"a0b48b8f-8c59-4c86-a4fd-b1b408d6dbae\") " pod="openstack/nova-cell0-db-create-9p92z" Jan 29 06:54:57 crc kubenswrapper[5017]: I0129 06:54:57.263280 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xlf6\" (UniqueName: \"kubernetes.io/projected/a0b48b8f-8c59-4c86-a4fd-b1b408d6dbae-kube-api-access-9xlf6\") pod \"nova-cell0-db-create-9p92z\" (UID: \"a0b48b8f-8c59-4c86-a4fd-b1b408d6dbae\") " pod="openstack/nova-cell0-db-create-9p92z" Jan 29 06:54:57 crc kubenswrapper[5017]: I0129 06:54:57.270463 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0fca-account-create-update-pgcmp"] Jan 29 06:54:57 crc kubenswrapper[5017]: I0129 06:54:57.272420 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0fca-account-create-update-pgcmp" Jan 29 06:54:57 crc kubenswrapper[5017]: I0129 06:54:57.278081 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 29 06:54:57 crc kubenswrapper[5017]: I0129 06:54:57.284077 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0fca-account-create-update-pgcmp"] Jan 29 06:54:57 crc kubenswrapper[5017]: I0129 06:54:57.362265 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-6lvps"] Jan 29 06:54:57 crc kubenswrapper[5017]: I0129 06:54:57.364871 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xlf6\" (UniqueName: \"kubernetes.io/projected/a0b48b8f-8c59-4c86-a4fd-b1b408d6dbae-kube-api-access-9xlf6\") pod \"nova-cell0-db-create-9p92z\" (UID: \"a0b48b8f-8c59-4c86-a4fd-b1b408d6dbae\") " pod="openstack/nova-cell0-db-create-9p92z" Jan 29 06:54:57 crc kubenswrapper[5017]: I0129 06:54:57.365008 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a0b48b8f-8c59-4c86-a4fd-b1b408d6dbae-operator-scripts\") pod \"nova-cell0-db-create-9p92z\" (UID: \"a0b48b8f-8c59-4c86-a4fd-b1b408d6dbae\") " pod="openstack/nova-cell0-db-create-9p92z" Jan 29 06:54:57 crc kubenswrapper[5017]: I0129 06:54:57.366050 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a0b48b8f-8c59-4c86-a4fd-b1b408d6dbae-operator-scripts\") pod \"nova-cell0-db-create-9p92z\" (UID: \"a0b48b8f-8c59-4c86-a4fd-b1b408d6dbae\") " pod="openstack/nova-cell0-db-create-9p92z" Jan 29 06:54:57 crc kubenswrapper[5017]: I0129 06:54:57.369058 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-6lvps" Jan 29 06:54:57 crc kubenswrapper[5017]: I0129 06:54:57.381381 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-6lvps"] Jan 29 06:54:57 crc kubenswrapper[5017]: I0129 06:54:57.384574 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-fgmwl" Jan 29 06:54:57 crc kubenswrapper[5017]: I0129 06:54:57.385631 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xlf6\" (UniqueName: \"kubernetes.io/projected/a0b48b8f-8c59-4c86-a4fd-b1b408d6dbae-kube-api-access-9xlf6\") pod \"nova-cell0-db-create-9p92z\" (UID: \"a0b48b8f-8c59-4c86-a4fd-b1b408d6dbae\") " pod="openstack/nova-cell0-db-create-9p92z" Jan 29 06:54:57 crc kubenswrapper[5017]: I0129 06:54:57.467505 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ecc785a3-2b30-4a73-b98a-1f6d405efa60-operator-scripts\") pod \"nova-api-0fca-account-create-update-pgcmp\" (UID: \"ecc785a3-2b30-4a73-b98a-1f6d405efa60\") " pod="openstack/nova-api-0fca-account-create-update-pgcmp" Jan 29 06:54:57 crc kubenswrapper[5017]: I0129 06:54:57.467623 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c44df6b3-8b1f-4004-9629-46412a17cbf7-operator-scripts\") pod \"nova-cell1-db-create-6lvps\" (UID: \"c44df6b3-8b1f-4004-9629-46412a17cbf7\") " pod="openstack/nova-cell1-db-create-6lvps" Jan 29 06:54:57 crc kubenswrapper[5017]: I0129 06:54:57.467741 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jmrm\" (UniqueName: \"kubernetes.io/projected/ecc785a3-2b30-4a73-b98a-1f6d405efa60-kube-api-access-4jmrm\") pod \"nova-api-0fca-account-create-update-pgcmp\" (UID: \"ecc785a3-2b30-4a73-b98a-1f6d405efa60\") " pod="openstack/nova-api-0fca-account-create-update-pgcmp" Jan 29 06:54:57 crc kubenswrapper[5017]: I0129 06:54:57.467771 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9m64x\" (UniqueName: \"kubernetes.io/projected/c44df6b3-8b1f-4004-9629-46412a17cbf7-kube-api-access-9m64x\") pod \"nova-cell1-db-create-6lvps\" (UID: \"c44df6b3-8b1f-4004-9629-46412a17cbf7\") " pod="openstack/nova-cell1-db-create-6lvps" Jan 29 06:54:57 crc kubenswrapper[5017]: I0129 06:54:57.474553 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-9727-account-create-update-khlxg"] Jan 29 06:54:57 crc kubenswrapper[5017]: I0129 06:54:57.476093 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-9727-account-create-update-khlxg" Jan 29 06:54:57 crc kubenswrapper[5017]: I0129 06:54:57.477874 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-9p92z" Jan 29 06:54:57 crc kubenswrapper[5017]: I0129 06:54:57.481384 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 29 06:54:57 crc kubenswrapper[5017]: I0129 06:54:57.505801 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-9727-account-create-update-khlxg"] Jan 29 06:54:57 crc kubenswrapper[5017]: I0129 06:54:57.569635 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92wx6\" (UniqueName: \"kubernetes.io/projected/15d4a207-f6d2-48ce-9065-b3438a37b46d-kube-api-access-92wx6\") pod \"nova-cell0-9727-account-create-update-khlxg\" (UID: \"15d4a207-f6d2-48ce-9065-b3438a37b46d\") " pod="openstack/nova-cell0-9727-account-create-update-khlxg" Jan 29 06:54:57 crc kubenswrapper[5017]: I0129 06:54:57.573389 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jmrm\" (UniqueName: \"kubernetes.io/projected/ecc785a3-2b30-4a73-b98a-1f6d405efa60-kube-api-access-4jmrm\") pod \"nova-api-0fca-account-create-update-pgcmp\" (UID: \"ecc785a3-2b30-4a73-b98a-1f6d405efa60\") " pod="openstack/nova-api-0fca-account-create-update-pgcmp" Jan 29 06:54:57 crc kubenswrapper[5017]: I0129 06:54:57.573460 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9m64x\" (UniqueName: \"kubernetes.io/projected/c44df6b3-8b1f-4004-9629-46412a17cbf7-kube-api-access-9m64x\") pod \"nova-cell1-db-create-6lvps\" (UID: \"c44df6b3-8b1f-4004-9629-46412a17cbf7\") " pod="openstack/nova-cell1-db-create-6lvps" Jan 29 06:54:57 crc kubenswrapper[5017]: I0129 06:54:57.573520 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15d4a207-f6d2-48ce-9065-b3438a37b46d-operator-scripts\") pod \"nova-cell0-9727-account-create-update-khlxg\" (UID: \"15d4a207-f6d2-48ce-9065-b3438a37b46d\") " pod="openstack/nova-cell0-9727-account-create-update-khlxg" Jan 29 06:54:57 crc kubenswrapper[5017]: I0129 06:54:57.573707 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ecc785a3-2b30-4a73-b98a-1f6d405efa60-operator-scripts\") pod \"nova-api-0fca-account-create-update-pgcmp\" (UID: \"ecc785a3-2b30-4a73-b98a-1f6d405efa60\") " pod="openstack/nova-api-0fca-account-create-update-pgcmp" Jan 29 06:54:57 crc kubenswrapper[5017]: I0129 06:54:57.573875 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c44df6b3-8b1f-4004-9629-46412a17cbf7-operator-scripts\") pod \"nova-cell1-db-create-6lvps\" (UID: \"c44df6b3-8b1f-4004-9629-46412a17cbf7\") " pod="openstack/nova-cell1-db-create-6lvps" Jan 29 06:54:57 crc kubenswrapper[5017]: I0129 06:54:57.575133 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c44df6b3-8b1f-4004-9629-46412a17cbf7-operator-scripts\") pod \"nova-cell1-db-create-6lvps\" (UID: \"c44df6b3-8b1f-4004-9629-46412a17cbf7\") " pod="openstack/nova-cell1-db-create-6lvps" Jan 29 06:54:57 crc kubenswrapper[5017]: I0129 06:54:57.575236 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/ecc785a3-2b30-4a73-b98a-1f6d405efa60-operator-scripts\") pod \"nova-api-0fca-account-create-update-pgcmp\" (UID: \"ecc785a3-2b30-4a73-b98a-1f6d405efa60\") " pod="openstack/nova-api-0fca-account-create-update-pgcmp" Jan 29 06:54:57 crc kubenswrapper[5017]: I0129 06:54:57.601863 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9m64x\" (UniqueName: \"kubernetes.io/projected/c44df6b3-8b1f-4004-9629-46412a17cbf7-kube-api-access-9m64x\") pod \"nova-cell1-db-create-6lvps\" (UID: \"c44df6b3-8b1f-4004-9629-46412a17cbf7\") " pod="openstack/nova-cell1-db-create-6lvps" Jan 29 06:54:57 crc kubenswrapper[5017]: I0129 06:54:57.602328 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jmrm\" (UniqueName: \"kubernetes.io/projected/ecc785a3-2b30-4a73-b98a-1f6d405efa60-kube-api-access-4jmrm\") pod \"nova-api-0fca-account-create-update-pgcmp\" (UID: \"ecc785a3-2b30-4a73-b98a-1f6d405efa60\") " pod="openstack/nova-api-0fca-account-create-update-pgcmp" Jan 29 06:54:57 crc kubenswrapper[5017]: I0129 06:54:57.674009 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-fd15-account-create-update-cx6sh"] Jan 29 06:54:57 crc kubenswrapper[5017]: I0129 06:54:57.675911 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-fd15-account-create-update-cx6sh" Jan 29 06:54:57 crc kubenswrapper[5017]: I0129 06:54:57.678679 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92wx6\" (UniqueName: \"kubernetes.io/projected/15d4a207-f6d2-48ce-9065-b3438a37b46d-kube-api-access-92wx6\") pod \"nova-cell0-9727-account-create-update-khlxg\" (UID: \"15d4a207-f6d2-48ce-9065-b3438a37b46d\") " pod="openstack/nova-cell0-9727-account-create-update-khlxg" Jan 29 06:54:57 crc kubenswrapper[5017]: I0129 06:54:57.678746 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15d4a207-f6d2-48ce-9065-b3438a37b46d-operator-scripts\") pod \"nova-cell0-9727-account-create-update-khlxg\" (UID: \"15d4a207-f6d2-48ce-9065-b3438a37b46d\") " pod="openstack/nova-cell0-9727-account-create-update-khlxg" Jan 29 06:54:57 crc kubenswrapper[5017]: I0129 06:54:57.679527 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 29 06:54:57 crc kubenswrapper[5017]: I0129 06:54:57.681550 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15d4a207-f6d2-48ce-9065-b3438a37b46d-operator-scripts\") pod \"nova-cell0-9727-account-create-update-khlxg\" (UID: \"15d4a207-f6d2-48ce-9065-b3438a37b46d\") " pod="openstack/nova-cell0-9727-account-create-update-khlxg" Jan 29 06:54:57 crc kubenswrapper[5017]: I0129 06:54:57.684265 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-fd15-account-create-update-cx6sh"] Jan 29 06:54:57 crc kubenswrapper[5017]: I0129 06:54:57.692001 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-6lvps" Jan 29 06:54:57 crc kubenswrapper[5017]: I0129 06:54:57.714644 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92wx6\" (UniqueName: \"kubernetes.io/projected/15d4a207-f6d2-48ce-9065-b3438a37b46d-kube-api-access-92wx6\") pod \"nova-cell0-9727-account-create-update-khlxg\" (UID: \"15d4a207-f6d2-48ce-9065-b3438a37b46d\") " pod="openstack/nova-cell0-9727-account-create-update-khlxg" Jan 29 06:54:57 crc kubenswrapper[5017]: I0129 06:54:57.782360 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b67befda-4537-4dc6-bf3d-c7f971a7b825-operator-scripts\") pod \"nova-cell1-fd15-account-create-update-cx6sh\" (UID: \"b67befda-4537-4dc6-bf3d-c7f971a7b825\") " pod="openstack/nova-cell1-fd15-account-create-update-cx6sh" Jan 29 06:54:57 crc kubenswrapper[5017]: I0129 06:54:57.782453 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqv2f\" (UniqueName: \"kubernetes.io/projected/b67befda-4537-4dc6-bf3d-c7f971a7b825-kube-api-access-pqv2f\") pod \"nova-cell1-fd15-account-create-update-cx6sh\" (UID: \"b67befda-4537-4dc6-bf3d-c7f971a7b825\") " pod="openstack/nova-cell1-fd15-account-create-update-cx6sh" Jan 29 06:54:57 crc kubenswrapper[5017]: I0129 06:54:57.801132 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-9727-account-create-update-khlxg" Jan 29 06:54:57 crc kubenswrapper[5017]: I0129 06:54:57.884049 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b67befda-4537-4dc6-bf3d-c7f971a7b825-operator-scripts\") pod \"nova-cell1-fd15-account-create-update-cx6sh\" (UID: \"b67befda-4537-4dc6-bf3d-c7f971a7b825\") " pod="openstack/nova-cell1-fd15-account-create-update-cx6sh" Jan 29 06:54:57 crc kubenswrapper[5017]: I0129 06:54:57.884127 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqv2f\" (UniqueName: \"kubernetes.io/projected/b67befda-4537-4dc6-bf3d-c7f971a7b825-kube-api-access-pqv2f\") pod \"nova-cell1-fd15-account-create-update-cx6sh\" (UID: \"b67befda-4537-4dc6-bf3d-c7f971a7b825\") " pod="openstack/nova-cell1-fd15-account-create-update-cx6sh" Jan 29 06:54:57 crc kubenswrapper[5017]: I0129 06:54:57.885137 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b67befda-4537-4dc6-bf3d-c7f971a7b825-operator-scripts\") pod \"nova-cell1-fd15-account-create-update-cx6sh\" (UID: \"b67befda-4537-4dc6-bf3d-c7f971a7b825\") " pod="openstack/nova-cell1-fd15-account-create-update-cx6sh" Jan 29 06:54:57 crc kubenswrapper[5017]: I0129 06:54:57.899833 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0fca-account-create-update-pgcmp" Jan 29 06:54:57 crc kubenswrapper[5017]: I0129 06:54:57.913213 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqv2f\" (UniqueName: \"kubernetes.io/projected/b67befda-4537-4dc6-bf3d-c7f971a7b825-kube-api-access-pqv2f\") pod \"nova-cell1-fd15-account-create-update-cx6sh\" (UID: \"b67befda-4537-4dc6-bf3d-c7f971a7b825\") " pod="openstack/nova-cell1-fd15-account-create-update-cx6sh" Jan 29 06:54:58 crc kubenswrapper[5017]: I0129 06:54:58.026614 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-fd15-account-create-update-cx6sh" Jan 29 06:54:58 crc kubenswrapper[5017]: I0129 06:54:58.056750 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-fgmwl"] Jan 29 06:54:58 crc kubenswrapper[5017]: I0129 06:54:58.090141 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-9p92z"] Jan 29 06:54:58 crc kubenswrapper[5017]: W0129 06:54:58.095341 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda0b48b8f_8c59_4c86_a4fd_b1b408d6dbae.slice/crio-e94673989230b5876eff68759dda4f5279eb10f95d3c5752c7b8f4c587a837ff WatchSource:0}: Error finding container e94673989230b5876eff68759dda4f5279eb10f95d3c5752c7b8f4c587a837ff: Status 404 returned error can't find the container with id e94673989230b5876eff68759dda4f5279eb10f95d3c5752c7b8f4c587a837ff Jan 29 06:54:58 crc kubenswrapper[5017]: I0129 06:54:58.218681 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-6lvps"] Jan 29 06:54:58 crc kubenswrapper[5017]: I0129 06:54:58.405871 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0fca-account-create-update-pgcmp"] Jan 29 06:54:58 crc kubenswrapper[5017]: I0129 06:54:58.559978 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-9727-account-create-update-khlxg"] Jan 29 06:54:58 crc kubenswrapper[5017]: I0129 06:54:58.845289 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-fd15-account-create-update-cx6sh"] Jan 29 06:54:58 crc kubenswrapper[5017]: I0129 06:54:58.929983 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-9p92z" event={"ID":"a0b48b8f-8c59-4c86-a4fd-b1b408d6dbae","Type":"ContainerStarted","Data":"e94673989230b5876eff68759dda4f5279eb10f95d3c5752c7b8f4c587a837ff"} Jan 29 06:54:58 crc kubenswrapper[5017]: W0129 06:54:58.932644 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb67befda_4537_4dc6_bf3d_c7f971a7b825.slice/crio-a380ee156bc511128bbb12c77d4ebfc34e78eb5a5c966ea47298180d445b47b9 WatchSource:0}: Error finding container a380ee156bc511128bbb12c77d4ebfc34e78eb5a5c966ea47298180d445b47b9: Status 404 returned error can't find the container with id a380ee156bc511128bbb12c77d4ebfc34e78eb5a5c966ea47298180d445b47b9 Jan 29 06:54:58 crc kubenswrapper[5017]: I0129 06:54:58.932766 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-6lvps" event={"ID":"c44df6b3-8b1f-4004-9629-46412a17cbf7","Type":"ContainerStarted","Data":"e4bf07c894ebbd9e8a5617d0612efddff781ca113ac5b4251afb8a61e4c5209b"} Jan 29 06:54:58 crc kubenswrapper[5017]: I0129 06:54:58.939656 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell0-9727-account-create-update-khlxg" event={"ID":"15d4a207-f6d2-48ce-9065-b3438a37b46d","Type":"ContainerStarted","Data":"97d6a9faf74e465fce9ec1a032aa740de09b27ad0dc8405f6b9111c7aaeac3e6"} Jan 29 06:54:58 crc kubenswrapper[5017]: I0129 06:54:58.940801 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0fca-account-create-update-pgcmp" event={"ID":"ecc785a3-2b30-4a73-b98a-1f6d405efa60","Type":"ContainerStarted","Data":"a0e1635187bae42930c7ff3b46d0d99b5ffd89cc8c4692dbb2392120a6137be8"} Jan 29 06:54:58 crc kubenswrapper[5017]: I0129 06:54:58.942189 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-fgmwl" event={"ID":"23be4105-cd73-4c7f-b967-8cac7cf8451d","Type":"ContainerStarted","Data":"a0e68008651026709d252df4dca5b7a4d831689cd191ed66705bbbfb6bbe81ed"} Jan 29 06:54:59 crc kubenswrapper[5017]: I0129 06:54:59.532194 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 06:54:59 crc kubenswrapper[5017]: I0129 06:54:59.679103 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/443e2637-af52-4409-aca6-c14cdc5c9767-scripts\") pod \"443e2637-af52-4409-aca6-c14cdc5c9767\" (UID: \"443e2637-af52-4409-aca6-c14cdc5c9767\") " Jan 29 06:54:59 crc kubenswrapper[5017]: I0129 06:54:59.679241 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/443e2637-af52-4409-aca6-c14cdc5c9767-config-data\") pod \"443e2637-af52-4409-aca6-c14cdc5c9767\" (UID: \"443e2637-af52-4409-aca6-c14cdc5c9767\") " Jan 29 06:54:59 crc kubenswrapper[5017]: I0129 06:54:59.679299 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/443e2637-af52-4409-aca6-c14cdc5c9767-combined-ca-bundle\") pod \"443e2637-af52-4409-aca6-c14cdc5c9767\" (UID: \"443e2637-af52-4409-aca6-c14cdc5c9767\") " Jan 29 06:54:59 crc kubenswrapper[5017]: I0129 06:54:59.679350 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/443e2637-af52-4409-aca6-c14cdc5c9767-sg-core-conf-yaml\") pod \"443e2637-af52-4409-aca6-c14cdc5c9767\" (UID: \"443e2637-af52-4409-aca6-c14cdc5c9767\") " Jan 29 06:54:59 crc kubenswrapper[5017]: I0129 06:54:59.679378 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/443e2637-af52-4409-aca6-c14cdc5c9767-log-httpd\") pod \"443e2637-af52-4409-aca6-c14cdc5c9767\" (UID: \"443e2637-af52-4409-aca6-c14cdc5c9767\") " Jan 29 06:54:59 crc kubenswrapper[5017]: I0129 06:54:59.679425 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/443e2637-af52-4409-aca6-c14cdc5c9767-run-httpd\") pod \"443e2637-af52-4409-aca6-c14cdc5c9767\" (UID: \"443e2637-af52-4409-aca6-c14cdc5c9767\") " Jan 29 06:54:59 crc kubenswrapper[5017]: I0129 06:54:59.679525 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cv7dl\" (UniqueName: \"kubernetes.io/projected/443e2637-af52-4409-aca6-c14cdc5c9767-kube-api-access-cv7dl\") pod \"443e2637-af52-4409-aca6-c14cdc5c9767\" (UID: \"443e2637-af52-4409-aca6-c14cdc5c9767\") " Jan 29 06:54:59 crc kubenswrapper[5017]: I0129 06:54:59.681877 5017 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/443e2637-af52-4409-aca6-c14cdc5c9767-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "443e2637-af52-4409-aca6-c14cdc5c9767" (UID: "443e2637-af52-4409-aca6-c14cdc5c9767"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:54:59 crc kubenswrapper[5017]: I0129 06:54:59.682530 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/443e2637-af52-4409-aca6-c14cdc5c9767-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "443e2637-af52-4409-aca6-c14cdc5c9767" (UID: "443e2637-af52-4409-aca6-c14cdc5c9767"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:55:00 crc kubenswrapper[5017]: I0129 06:55:00.027100 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/443e2637-af52-4409-aca6-c14cdc5c9767-scripts" (OuterVolumeSpecName: "scripts") pod "443e2637-af52-4409-aca6-c14cdc5c9767" (UID: "443e2637-af52-4409-aca6-c14cdc5c9767"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:55:00 crc kubenswrapper[5017]: I0129 06:55:00.028880 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/443e2637-af52-4409-aca6-c14cdc5c9767-kube-api-access-cv7dl" (OuterVolumeSpecName: "kube-api-access-cv7dl") pod "443e2637-af52-4409-aca6-c14cdc5c9767" (UID: "443e2637-af52-4409-aca6-c14cdc5c9767"). InnerVolumeSpecName "kube-api-access-cv7dl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:55:00 crc kubenswrapper[5017]: I0129 06:55:00.033411 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cv7dl\" (UniqueName: \"kubernetes.io/projected/443e2637-af52-4409-aca6-c14cdc5c9767-kube-api-access-cv7dl\") on node \"crc\" DevicePath \"\"" Jan 29 06:55:00 crc kubenswrapper[5017]: I0129 06:55:00.033437 5017 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/443e2637-af52-4409-aca6-c14cdc5c9767-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 06:55:00 crc kubenswrapper[5017]: I0129 06:55:00.033447 5017 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/443e2637-af52-4409-aca6-c14cdc5c9767-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 06:55:00 crc kubenswrapper[5017]: I0129 06:55:00.033455 5017 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/443e2637-af52-4409-aca6-c14cdc5c9767-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 06:55:00 crc kubenswrapper[5017]: I0129 06:55:00.061326 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-fgmwl" event={"ID":"23be4105-cd73-4c7f-b967-8cac7cf8451d","Type":"ContainerStarted","Data":"3dfcd47ab3cac7b764900d044b1e708577dae1b232deab2ffb3281399ab171ba"} Jan 29 06:55:00 crc kubenswrapper[5017]: I0129 06:55:00.061482 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/443e2637-af52-4409-aca6-c14cdc5c9767-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "443e2637-af52-4409-aca6-c14cdc5c9767" (UID: "443e2637-af52-4409-aca6-c14cdc5c9767"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:55:00 crc kubenswrapper[5017]: I0129 06:55:00.091492 5017 generic.go:334] "Generic (PLEG): container finished" podID="443e2637-af52-4409-aca6-c14cdc5c9767" containerID="26e6cf425c1aa8cfd28b7351b9b74063dd93c99d0092fc678b9a6803b4dc138e" exitCode=0 Jan 29 06:55:00 crc kubenswrapper[5017]: I0129 06:55:00.091599 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"443e2637-af52-4409-aca6-c14cdc5c9767","Type":"ContainerDied","Data":"26e6cf425c1aa8cfd28b7351b9b74063dd93c99d0092fc678b9a6803b4dc138e"} Jan 29 06:55:00 crc kubenswrapper[5017]: I0129 06:55:00.091641 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"443e2637-af52-4409-aca6-c14cdc5c9767","Type":"ContainerDied","Data":"7d5fda3479249bd2ac66628fd2516b12d06a80d8cbf345b329dea8d05ad83b67"} Jan 29 06:55:00 crc kubenswrapper[5017]: I0129 06:55:00.092508 5017 scope.go:117] "RemoveContainer" containerID="4feef902af1ac52f9167a6d6cd5b6466c6997066a7a51013aa9155f607b92109" Jan 29 06:55:00 crc kubenswrapper[5017]: I0129 06:55:00.093691 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 06:55:00 crc kubenswrapper[5017]: I0129 06:55:00.113540 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-fgmwl" podStartSLOduration=3.113507956 podStartE2EDuration="3.113507956s" podCreationTimestamp="2026-01-29 06:54:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:55:00.098580568 +0000 UTC m=+1186.473028198" watchObservedRunningTime="2026-01-29 06:55:00.113507956 +0000 UTC m=+1186.487955576" Jan 29 06:55:00 crc kubenswrapper[5017]: I0129 06:55:00.114278 5017 generic.go:334] "Generic (PLEG): container finished" podID="a0b48b8f-8c59-4c86-a4fd-b1b408d6dbae" containerID="a87ac4d6fcd2f35bc4f4ce7965fee1e0793f77b9c950d2afd37c8a0be776737e" exitCode=0 Jan 29 06:55:00 crc kubenswrapper[5017]: I0129 06:55:00.114423 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-9p92z" event={"ID":"a0b48b8f-8c59-4c86-a4fd-b1b408d6dbae","Type":"ContainerDied","Data":"a87ac4d6fcd2f35bc4f4ce7965fee1e0793f77b9c950d2afd37c8a0be776737e"} Jan 29 06:55:00 crc kubenswrapper[5017]: I0129 06:55:00.121121 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/443e2637-af52-4409-aca6-c14cdc5c9767-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "443e2637-af52-4409-aca6-c14cdc5c9767" (UID: "443e2637-af52-4409-aca6-c14cdc5c9767"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:55:00 crc kubenswrapper[5017]: I0129 06:55:00.137830 5017 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/443e2637-af52-4409-aca6-c14cdc5c9767-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 06:55:00 crc kubenswrapper[5017]: I0129 06:55:00.137894 5017 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/443e2637-af52-4409-aca6-c14cdc5c9767-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 29 06:55:00 crc kubenswrapper[5017]: I0129 06:55:00.140414 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-9727-account-create-update-khlxg" event={"ID":"15d4a207-f6d2-48ce-9065-b3438a37b46d","Type":"ContainerStarted","Data":"b76b348c9d2423b65dbe5c304b35ec3d820a382c48b70c033fa945d81791a518"} Jan 29 06:55:00 crc kubenswrapper[5017]: I0129 06:55:00.148341 5017 generic.go:334] "Generic (PLEG): container finished" podID="c44df6b3-8b1f-4004-9629-46412a17cbf7" containerID="0fb96d66273e80c15d3617b63111d378c6fd6702c701289bf872838499797ba3" exitCode=0 Jan 29 06:55:00 crc kubenswrapper[5017]: I0129 06:55:00.148454 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-6lvps" event={"ID":"c44df6b3-8b1f-4004-9629-46412a17cbf7","Type":"ContainerDied","Data":"0fb96d66273e80c15d3617b63111d378c6fd6702c701289bf872838499797ba3"} Jan 29 06:55:00 crc kubenswrapper[5017]: I0129 06:55:00.151179 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0fca-account-create-update-pgcmp" event={"ID":"ecc785a3-2b30-4a73-b98a-1f6d405efa60","Type":"ContainerStarted","Data":"53c11f2f216b2c0277625e17b5979d133bf92078884d2553967dc21bb5b5ee56"} Jan 29 06:55:00 crc kubenswrapper[5017]: I0129 06:55:00.153846 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-fd15-account-create-update-cx6sh" event={"ID":"b67befda-4537-4dc6-bf3d-c7f971a7b825","Type":"ContainerStarted","Data":"d58e4b20a14bff37480389b7c1c562703d0a87748863f2677e04997a3a47605b"} Jan 29 06:55:00 crc kubenswrapper[5017]: I0129 06:55:00.153887 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-fd15-account-create-update-cx6sh" event={"ID":"b67befda-4537-4dc6-bf3d-c7f971a7b825","Type":"ContainerStarted","Data":"a380ee156bc511128bbb12c77d4ebfc34e78eb5a5c966ea47298180d445b47b9"} Jan 29 06:55:00 crc kubenswrapper[5017]: I0129 06:55:00.171432 5017 scope.go:117] "RemoveContainer" containerID="be6476c27a25cf7f477b071f34e4a56900cbf355f24889fbcccb05be565369f2" Jan 29 06:55:00 crc kubenswrapper[5017]: I0129 06:55:00.183142 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-9727-account-create-update-khlxg" podStartSLOduration=3.183106456 podStartE2EDuration="3.183106456s" podCreationTimestamp="2026-01-29 06:54:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:55:00.158156739 +0000 UTC m=+1186.532604369" watchObservedRunningTime="2026-01-29 06:55:00.183106456 +0000 UTC m=+1186.557554066" Jan 29 06:55:00 crc kubenswrapper[5017]: I0129 06:55:00.205232 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-fd15-account-create-update-cx6sh" podStartSLOduration=3.205177751 podStartE2EDuration="3.205177751s" podCreationTimestamp="2026-01-29 06:54:57 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:55:00.177230311 +0000 UTC m=+1186.551677921" watchObservedRunningTime="2026-01-29 06:55:00.205177751 +0000 UTC m=+1186.579625381" Jan 29 06:55:00 crc kubenswrapper[5017]: I0129 06:55:00.225022 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6d6c4c6dc8-5jbvh" Jan 29 06:55:00 crc kubenswrapper[5017]: I0129 06:55:00.234060 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0fca-account-create-update-pgcmp" podStartSLOduration=3.234034483 podStartE2EDuration="3.234034483s" podCreationTimestamp="2026-01-29 06:54:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:55:00.212641625 +0000 UTC m=+1186.587089235" watchObservedRunningTime="2026-01-29 06:55:00.234034483 +0000 UTC m=+1186.608482093" Jan 29 06:55:00 crc kubenswrapper[5017]: I0129 06:55:00.234409 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/443e2637-af52-4409-aca6-c14cdc5c9767-config-data" (OuterVolumeSpecName: "config-data") pod "443e2637-af52-4409-aca6-c14cdc5c9767" (UID: "443e2637-af52-4409-aca6-c14cdc5c9767"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:55:00 crc kubenswrapper[5017]: I0129 06:55:00.239827 5017 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/443e2637-af52-4409-aca6-c14cdc5c9767-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 06:55:00 crc kubenswrapper[5017]: I0129 06:55:00.256719 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6d6c4c6dc8-5jbvh" Jan 29 06:55:00 crc kubenswrapper[5017]: I0129 06:55:00.357743 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-8548d8d696-gk4rx"] Jan 29 06:55:00 crc kubenswrapper[5017]: I0129 06:55:00.364320 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-8548d8d696-gk4rx" podUID="929c8bb1-1ca7-4593-b8f4-1e74f9702b57" containerName="placement-log" containerID="cri-o://fe46baa8e6053563c1ffb0c393499ed3ca638f7d058c3634aa1ecaff13ec1bef" gracePeriod=30 Jan 29 06:55:00 crc kubenswrapper[5017]: I0129 06:55:00.364897 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-8548d8d696-gk4rx" podUID="929c8bb1-1ca7-4593-b8f4-1e74f9702b57" containerName="placement-api" containerID="cri-o://8b94ddb59bba453b2407f45b2832bf1bb4549afffb1634d0691aab252bffafe8" gracePeriod=30 Jan 29 06:55:00 crc kubenswrapper[5017]: I0129 06:55:00.439671 5017 scope.go:117] "RemoveContainer" containerID="b947ff5f2557edc6394698a1efada2d0dae38828cb6af859765ddd75dfb89ace" Jan 29 06:55:00 crc kubenswrapper[5017]: I0129 06:55:00.491829 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 06:55:00 crc kubenswrapper[5017]: I0129 06:55:00.523701 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 29 06:55:00 crc kubenswrapper[5017]: I0129 06:55:00.542635 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 29 06:55:00 crc kubenswrapper[5017]: E0129 06:55:00.543750 5017 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="443e2637-af52-4409-aca6-c14cdc5c9767" containerName="proxy-httpd" Jan 29 06:55:00 crc kubenswrapper[5017]: I0129 06:55:00.543774 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="443e2637-af52-4409-aca6-c14cdc5c9767" containerName="proxy-httpd" Jan 29 06:55:00 crc kubenswrapper[5017]: E0129 06:55:00.543821 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="443e2637-af52-4409-aca6-c14cdc5c9767" containerName="ceilometer-central-agent" Jan 29 06:55:00 crc kubenswrapper[5017]: I0129 06:55:00.543831 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="443e2637-af52-4409-aca6-c14cdc5c9767" containerName="ceilometer-central-agent" Jan 29 06:55:00 crc kubenswrapper[5017]: E0129 06:55:00.543871 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="443e2637-af52-4409-aca6-c14cdc5c9767" containerName="ceilometer-notification-agent" Jan 29 06:55:00 crc kubenswrapper[5017]: I0129 06:55:00.543882 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="443e2637-af52-4409-aca6-c14cdc5c9767" containerName="ceilometer-notification-agent" Jan 29 06:55:00 crc kubenswrapper[5017]: E0129 06:55:00.543906 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="443e2637-af52-4409-aca6-c14cdc5c9767" containerName="sg-core" Jan 29 06:55:00 crc kubenswrapper[5017]: I0129 06:55:00.543912 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="443e2637-af52-4409-aca6-c14cdc5c9767" containerName="sg-core" Jan 29 06:55:00 crc kubenswrapper[5017]: I0129 06:55:00.544366 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="443e2637-af52-4409-aca6-c14cdc5c9767" containerName="proxy-httpd" Jan 29 06:55:00 crc kubenswrapper[5017]: I0129 06:55:00.544409 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="443e2637-af52-4409-aca6-c14cdc5c9767" containerName="ceilometer-notification-agent" Jan 29 06:55:00 crc kubenswrapper[5017]: I0129 06:55:00.544421 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="443e2637-af52-4409-aca6-c14cdc5c9767" containerName="ceilometer-central-agent" Jan 29 06:55:00 crc kubenswrapper[5017]: I0129 06:55:00.544443 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="443e2637-af52-4409-aca6-c14cdc5c9767" containerName="sg-core" Jan 29 06:55:00 crc kubenswrapper[5017]: I0129 06:55:00.548558 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 06:55:00 crc kubenswrapper[5017]: I0129 06:55:00.561641 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 29 06:55:00 crc kubenswrapper[5017]: I0129 06:55:00.561932 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 29 06:55:00 crc kubenswrapper[5017]: I0129 06:55:00.569019 5017 scope.go:117] "RemoveContainer" containerID="26e6cf425c1aa8cfd28b7351b9b74063dd93c99d0092fc678b9a6803b4dc138e" Jan 29 06:55:00 crc kubenswrapper[5017]: I0129 06:55:00.585366 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 06:55:00 crc kubenswrapper[5017]: I0129 06:55:00.612005 5017 scope.go:117] "RemoveContainer" containerID="4feef902af1ac52f9167a6d6cd5b6466c6997066a7a51013aa9155f607b92109" Jan 29 06:55:00 crc kubenswrapper[5017]: E0129 06:55:00.612666 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4feef902af1ac52f9167a6d6cd5b6466c6997066a7a51013aa9155f607b92109\": container with ID starting with 4feef902af1ac52f9167a6d6cd5b6466c6997066a7a51013aa9155f607b92109 not found: ID does not exist" containerID="4feef902af1ac52f9167a6d6cd5b6466c6997066a7a51013aa9155f607b92109" Jan 29 06:55:00 crc kubenswrapper[5017]: I0129 06:55:00.612781 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4feef902af1ac52f9167a6d6cd5b6466c6997066a7a51013aa9155f607b92109"} err="failed to get container status \"4feef902af1ac52f9167a6d6cd5b6466c6997066a7a51013aa9155f607b92109\": rpc error: code = NotFound desc = could not find container \"4feef902af1ac52f9167a6d6cd5b6466c6997066a7a51013aa9155f607b92109\": container with ID starting with 4feef902af1ac52f9167a6d6cd5b6466c6997066a7a51013aa9155f607b92109 not found: ID does not exist" Jan 29 06:55:00 crc kubenswrapper[5017]: I0129 06:55:00.612832 5017 scope.go:117] "RemoveContainer" containerID="be6476c27a25cf7f477b071f34e4a56900cbf355f24889fbcccb05be565369f2" Jan 29 06:55:00 crc kubenswrapper[5017]: E0129 06:55:00.613752 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be6476c27a25cf7f477b071f34e4a56900cbf355f24889fbcccb05be565369f2\": container with ID starting with be6476c27a25cf7f477b071f34e4a56900cbf355f24889fbcccb05be565369f2 not found: ID does not exist" containerID="be6476c27a25cf7f477b071f34e4a56900cbf355f24889fbcccb05be565369f2" Jan 29 06:55:00 crc kubenswrapper[5017]: I0129 06:55:00.613793 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be6476c27a25cf7f477b071f34e4a56900cbf355f24889fbcccb05be565369f2"} err="failed to get container status \"be6476c27a25cf7f477b071f34e4a56900cbf355f24889fbcccb05be565369f2\": rpc error: code = NotFound desc = could not find container \"be6476c27a25cf7f477b071f34e4a56900cbf355f24889fbcccb05be565369f2\": container with ID starting with be6476c27a25cf7f477b071f34e4a56900cbf355f24889fbcccb05be565369f2 not found: ID does not exist" Jan 29 06:55:00 crc kubenswrapper[5017]: I0129 06:55:00.613809 5017 scope.go:117] "RemoveContainer" containerID="b947ff5f2557edc6394698a1efada2d0dae38828cb6af859765ddd75dfb89ace" Jan 29 06:55:00 crc kubenswrapper[5017]: E0129 06:55:00.614174 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"b947ff5f2557edc6394698a1efada2d0dae38828cb6af859765ddd75dfb89ace\": container with ID starting with b947ff5f2557edc6394698a1efada2d0dae38828cb6af859765ddd75dfb89ace not found: ID does not exist" containerID="b947ff5f2557edc6394698a1efada2d0dae38828cb6af859765ddd75dfb89ace" Jan 29 06:55:00 crc kubenswrapper[5017]: I0129 06:55:00.614202 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b947ff5f2557edc6394698a1efada2d0dae38828cb6af859765ddd75dfb89ace"} err="failed to get container status \"b947ff5f2557edc6394698a1efada2d0dae38828cb6af859765ddd75dfb89ace\": rpc error: code = NotFound desc = could not find container \"b947ff5f2557edc6394698a1efada2d0dae38828cb6af859765ddd75dfb89ace\": container with ID starting with b947ff5f2557edc6394698a1efada2d0dae38828cb6af859765ddd75dfb89ace not found: ID does not exist" Jan 29 06:55:00 crc kubenswrapper[5017]: I0129 06:55:00.614219 5017 scope.go:117] "RemoveContainer" containerID="26e6cf425c1aa8cfd28b7351b9b74063dd93c99d0092fc678b9a6803b4dc138e" Jan 29 06:55:00 crc kubenswrapper[5017]: E0129 06:55:00.614717 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"26e6cf425c1aa8cfd28b7351b9b74063dd93c99d0092fc678b9a6803b4dc138e\": container with ID starting with 26e6cf425c1aa8cfd28b7351b9b74063dd93c99d0092fc678b9a6803b4dc138e not found: ID does not exist" containerID="26e6cf425c1aa8cfd28b7351b9b74063dd93c99d0092fc678b9a6803b4dc138e" Jan 29 06:55:00 crc kubenswrapper[5017]: I0129 06:55:00.614750 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26e6cf425c1aa8cfd28b7351b9b74063dd93c99d0092fc678b9a6803b4dc138e"} err="failed to get container status \"26e6cf425c1aa8cfd28b7351b9b74063dd93c99d0092fc678b9a6803b4dc138e\": rpc error: code = NotFound desc = could not find container \"26e6cf425c1aa8cfd28b7351b9b74063dd93c99d0092fc678b9a6803b4dc138e\": container with ID starting with 26e6cf425c1aa8cfd28b7351b9b74063dd93c99d0092fc678b9a6803b4dc138e not found: ID does not exist" Jan 29 06:55:00 crc kubenswrapper[5017]: I0129 06:55:00.668967 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/900bcf84-a22f-4c90-8037-e3e9deaad6ec-scripts\") pod \"ceilometer-0\" (UID: \"900bcf84-a22f-4c90-8037-e3e9deaad6ec\") " pod="openstack/ceilometer-0" Jan 29 06:55:00 crc kubenswrapper[5017]: I0129 06:55:00.669096 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/900bcf84-a22f-4c90-8037-e3e9deaad6ec-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"900bcf84-a22f-4c90-8037-e3e9deaad6ec\") " pod="openstack/ceilometer-0" Jan 29 06:55:00 crc kubenswrapper[5017]: I0129 06:55:00.669124 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztqjv\" (UniqueName: \"kubernetes.io/projected/900bcf84-a22f-4c90-8037-e3e9deaad6ec-kube-api-access-ztqjv\") pod \"ceilometer-0\" (UID: \"900bcf84-a22f-4c90-8037-e3e9deaad6ec\") " pod="openstack/ceilometer-0" Jan 29 06:55:00 crc kubenswrapper[5017]: I0129 06:55:00.669156 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/900bcf84-a22f-4c90-8037-e3e9deaad6ec-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"900bcf84-a22f-4c90-8037-e3e9deaad6ec\") " pod="openstack/ceilometer-0" Jan 29 06:55:00 crc kubenswrapper[5017]: I0129 06:55:00.669187 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/900bcf84-a22f-4c90-8037-e3e9deaad6ec-run-httpd\") pod \"ceilometer-0\" (UID: \"900bcf84-a22f-4c90-8037-e3e9deaad6ec\") " pod="openstack/ceilometer-0" Jan 29 06:55:00 crc kubenswrapper[5017]: I0129 06:55:00.669224 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/900bcf84-a22f-4c90-8037-e3e9deaad6ec-config-data\") pod \"ceilometer-0\" (UID: \"900bcf84-a22f-4c90-8037-e3e9deaad6ec\") " pod="openstack/ceilometer-0" Jan 29 06:55:00 crc kubenswrapper[5017]: I0129 06:55:00.669265 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/900bcf84-a22f-4c90-8037-e3e9deaad6ec-log-httpd\") pod \"ceilometer-0\" (UID: \"900bcf84-a22f-4c90-8037-e3e9deaad6ec\") " pod="openstack/ceilometer-0" Jan 29 06:55:00 crc kubenswrapper[5017]: I0129 06:55:00.770986 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/900bcf84-a22f-4c90-8037-e3e9deaad6ec-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"900bcf84-a22f-4c90-8037-e3e9deaad6ec\") " pod="openstack/ceilometer-0" Jan 29 06:55:00 crc kubenswrapper[5017]: I0129 06:55:00.771043 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ztqjv\" (UniqueName: \"kubernetes.io/projected/900bcf84-a22f-4c90-8037-e3e9deaad6ec-kube-api-access-ztqjv\") pod \"ceilometer-0\" (UID: \"900bcf84-a22f-4c90-8037-e3e9deaad6ec\") " pod="openstack/ceilometer-0" Jan 29 06:55:00 crc kubenswrapper[5017]: I0129 06:55:00.771076 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/900bcf84-a22f-4c90-8037-e3e9deaad6ec-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"900bcf84-a22f-4c90-8037-e3e9deaad6ec\") " pod="openstack/ceilometer-0" Jan 29 06:55:00 crc kubenswrapper[5017]: I0129 06:55:00.771104 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/900bcf84-a22f-4c90-8037-e3e9deaad6ec-run-httpd\") pod \"ceilometer-0\" (UID: \"900bcf84-a22f-4c90-8037-e3e9deaad6ec\") " pod="openstack/ceilometer-0" Jan 29 06:55:00 crc kubenswrapper[5017]: I0129 06:55:00.771141 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/900bcf84-a22f-4c90-8037-e3e9deaad6ec-config-data\") pod \"ceilometer-0\" (UID: \"900bcf84-a22f-4c90-8037-e3e9deaad6ec\") " pod="openstack/ceilometer-0" Jan 29 06:55:00 crc kubenswrapper[5017]: I0129 06:55:00.771166 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/900bcf84-a22f-4c90-8037-e3e9deaad6ec-log-httpd\") pod \"ceilometer-0\" (UID: \"900bcf84-a22f-4c90-8037-e3e9deaad6ec\") " pod="openstack/ceilometer-0" Jan 29 06:55:00 crc kubenswrapper[5017]: I0129 06:55:00.771223 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/900bcf84-a22f-4c90-8037-e3e9deaad6ec-scripts\") pod 
\"ceilometer-0\" (UID: \"900bcf84-a22f-4c90-8037-e3e9deaad6ec\") " pod="openstack/ceilometer-0" Jan 29 06:55:00 crc kubenswrapper[5017]: I0129 06:55:00.772219 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/900bcf84-a22f-4c90-8037-e3e9deaad6ec-run-httpd\") pod \"ceilometer-0\" (UID: \"900bcf84-a22f-4c90-8037-e3e9deaad6ec\") " pod="openstack/ceilometer-0" Jan 29 06:55:00 crc kubenswrapper[5017]: I0129 06:55:00.773344 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/900bcf84-a22f-4c90-8037-e3e9deaad6ec-log-httpd\") pod \"ceilometer-0\" (UID: \"900bcf84-a22f-4c90-8037-e3e9deaad6ec\") " pod="openstack/ceilometer-0" Jan 29 06:55:00 crc kubenswrapper[5017]: I0129 06:55:00.779492 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/900bcf84-a22f-4c90-8037-e3e9deaad6ec-config-data\") pod \"ceilometer-0\" (UID: \"900bcf84-a22f-4c90-8037-e3e9deaad6ec\") " pod="openstack/ceilometer-0" Jan 29 06:55:00 crc kubenswrapper[5017]: I0129 06:55:00.780252 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/900bcf84-a22f-4c90-8037-e3e9deaad6ec-scripts\") pod \"ceilometer-0\" (UID: \"900bcf84-a22f-4c90-8037-e3e9deaad6ec\") " pod="openstack/ceilometer-0" Jan 29 06:55:00 crc kubenswrapper[5017]: I0129 06:55:00.780450 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/900bcf84-a22f-4c90-8037-e3e9deaad6ec-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"900bcf84-a22f-4c90-8037-e3e9deaad6ec\") " pod="openstack/ceilometer-0" Jan 29 06:55:00 crc kubenswrapper[5017]: I0129 06:55:00.791420 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztqjv\" (UniqueName: \"kubernetes.io/projected/900bcf84-a22f-4c90-8037-e3e9deaad6ec-kube-api-access-ztqjv\") pod \"ceilometer-0\" (UID: \"900bcf84-a22f-4c90-8037-e3e9deaad6ec\") " pod="openstack/ceilometer-0" Jan 29 06:55:00 crc kubenswrapper[5017]: I0129 06:55:00.793541 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/900bcf84-a22f-4c90-8037-e3e9deaad6ec-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"900bcf84-a22f-4c90-8037-e3e9deaad6ec\") " pod="openstack/ceilometer-0" Jan 29 06:55:00 crc kubenswrapper[5017]: I0129 06:55:00.908556 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 06:55:01 crc kubenswrapper[5017]: I0129 06:55:01.170497 5017 generic.go:334] "Generic (PLEG): container finished" podID="23be4105-cd73-4c7f-b967-8cac7cf8451d" containerID="3dfcd47ab3cac7b764900d044b1e708577dae1b232deab2ffb3281399ab171ba" exitCode=0 Jan 29 06:55:01 crc kubenswrapper[5017]: I0129 06:55:01.170766 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-fgmwl" event={"ID":"23be4105-cd73-4c7f-b967-8cac7cf8451d","Type":"ContainerDied","Data":"3dfcd47ab3cac7b764900d044b1e708577dae1b232deab2ffb3281399ab171ba"} Jan 29 06:55:01 crc kubenswrapper[5017]: I0129 06:55:01.176159 5017 generic.go:334] "Generic (PLEG): container finished" podID="15d4a207-f6d2-48ce-9065-b3438a37b46d" containerID="b76b348c9d2423b65dbe5c304b35ec3d820a382c48b70c033fa945d81791a518" exitCode=0 Jan 29 06:55:01 crc kubenswrapper[5017]: I0129 06:55:01.176240 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-9727-account-create-update-khlxg" event={"ID":"15d4a207-f6d2-48ce-9065-b3438a37b46d","Type":"ContainerDied","Data":"b76b348c9d2423b65dbe5c304b35ec3d820a382c48b70c033fa945d81791a518"} Jan 29 06:55:01 crc kubenswrapper[5017]: I0129 06:55:01.177974 5017 generic.go:334] "Generic (PLEG): container finished" podID="ecc785a3-2b30-4a73-b98a-1f6d405efa60" containerID="53c11f2f216b2c0277625e17b5979d133bf92078884d2553967dc21bb5b5ee56" exitCode=0 Jan 29 06:55:01 crc kubenswrapper[5017]: I0129 06:55:01.178026 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0fca-account-create-update-pgcmp" event={"ID":"ecc785a3-2b30-4a73-b98a-1f6d405efa60","Type":"ContainerDied","Data":"53c11f2f216b2c0277625e17b5979d133bf92078884d2553967dc21bb5b5ee56"} Jan 29 06:55:01 crc kubenswrapper[5017]: I0129 06:55:01.181978 5017 generic.go:334] "Generic (PLEG): container finished" podID="b67befda-4537-4dc6-bf3d-c7f971a7b825" containerID="d58e4b20a14bff37480389b7c1c562703d0a87748863f2677e04997a3a47605b" exitCode=0 Jan 29 06:55:01 crc kubenswrapper[5017]: I0129 06:55:01.182055 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-fd15-account-create-update-cx6sh" event={"ID":"b67befda-4537-4dc6-bf3d-c7f971a7b825","Type":"ContainerDied","Data":"d58e4b20a14bff37480389b7c1c562703d0a87748863f2677e04997a3a47605b"} Jan 29 06:55:01 crc kubenswrapper[5017]: I0129 06:55:01.194323 5017 generic.go:334] "Generic (PLEG): container finished" podID="929c8bb1-1ca7-4593-b8f4-1e74f9702b57" containerID="fe46baa8e6053563c1ffb0c393499ed3ca638f7d058c3634aa1ecaff13ec1bef" exitCode=143 Jan 29 06:55:01 crc kubenswrapper[5017]: I0129 06:55:01.194595 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8548d8d696-gk4rx" event={"ID":"929c8bb1-1ca7-4593-b8f4-1e74f9702b57","Type":"ContainerDied","Data":"fe46baa8e6053563c1ffb0c393499ed3ca638f7d058c3634aa1ecaff13ec1bef"} Jan 29 06:55:01 crc kubenswrapper[5017]: I0129 06:55:01.429943 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 06:55:01 crc kubenswrapper[5017]: I0129 06:55:01.705101 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-6lvps" Jan 29 06:55:01 crc kubenswrapper[5017]: I0129 06:55:01.709126 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-9p92z" Jan 29 06:55:01 crc kubenswrapper[5017]: I0129 06:55:01.800700 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9m64x\" (UniqueName: \"kubernetes.io/projected/c44df6b3-8b1f-4004-9629-46412a17cbf7-kube-api-access-9m64x\") pod \"c44df6b3-8b1f-4004-9629-46412a17cbf7\" (UID: \"c44df6b3-8b1f-4004-9629-46412a17cbf7\") " Jan 29 06:55:01 crc kubenswrapper[5017]: I0129 06:55:01.802939 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c44df6b3-8b1f-4004-9629-46412a17cbf7-operator-scripts\") pod \"c44df6b3-8b1f-4004-9629-46412a17cbf7\" (UID: \"c44df6b3-8b1f-4004-9629-46412a17cbf7\") " Jan 29 06:55:01 crc kubenswrapper[5017]: I0129 06:55:01.803180 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xlf6\" (UniqueName: \"kubernetes.io/projected/a0b48b8f-8c59-4c86-a4fd-b1b408d6dbae-kube-api-access-9xlf6\") pod \"a0b48b8f-8c59-4c86-a4fd-b1b408d6dbae\" (UID: \"a0b48b8f-8c59-4c86-a4fd-b1b408d6dbae\") " Jan 29 06:55:01 crc kubenswrapper[5017]: I0129 06:55:01.803545 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a0b48b8f-8c59-4c86-a4fd-b1b408d6dbae-operator-scripts\") pod \"a0b48b8f-8c59-4c86-a4fd-b1b408d6dbae\" (UID: \"a0b48b8f-8c59-4c86-a4fd-b1b408d6dbae\") " Jan 29 06:55:01 crc kubenswrapper[5017]: I0129 06:55:01.803616 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c44df6b3-8b1f-4004-9629-46412a17cbf7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c44df6b3-8b1f-4004-9629-46412a17cbf7" (UID: "c44df6b3-8b1f-4004-9629-46412a17cbf7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:55:01 crc kubenswrapper[5017]: I0129 06:55:01.804046 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0b48b8f-8c59-4c86-a4fd-b1b408d6dbae-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a0b48b8f-8c59-4c86-a4fd-b1b408d6dbae" (UID: "a0b48b8f-8c59-4c86-a4fd-b1b408d6dbae"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:55:01 crc kubenswrapper[5017]: I0129 06:55:01.804787 5017 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a0b48b8f-8c59-4c86-a4fd-b1b408d6dbae-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 06:55:01 crc kubenswrapper[5017]: I0129 06:55:01.804894 5017 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c44df6b3-8b1f-4004-9629-46412a17cbf7-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 06:55:01 crc kubenswrapper[5017]: I0129 06:55:01.811335 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c44df6b3-8b1f-4004-9629-46412a17cbf7-kube-api-access-9m64x" (OuterVolumeSpecName: "kube-api-access-9m64x") pod "c44df6b3-8b1f-4004-9629-46412a17cbf7" (UID: "c44df6b3-8b1f-4004-9629-46412a17cbf7"). InnerVolumeSpecName "kube-api-access-9m64x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:55:01 crc kubenswrapper[5017]: I0129 06:55:01.811424 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0b48b8f-8c59-4c86-a4fd-b1b408d6dbae-kube-api-access-9xlf6" (OuterVolumeSpecName: "kube-api-access-9xlf6") pod "a0b48b8f-8c59-4c86-a4fd-b1b408d6dbae" (UID: "a0b48b8f-8c59-4c86-a4fd-b1b408d6dbae"). InnerVolumeSpecName "kube-api-access-9xlf6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:55:01 crc kubenswrapper[5017]: I0129 06:55:01.907102 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9m64x\" (UniqueName: \"kubernetes.io/projected/c44df6b3-8b1f-4004-9629-46412a17cbf7-kube-api-access-9m64x\") on node \"crc\" DevicePath \"\"" Jan 29 06:55:01 crc kubenswrapper[5017]: I0129 06:55:01.907153 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xlf6\" (UniqueName: \"kubernetes.io/projected/a0b48b8f-8c59-4c86-a4fd-b1b408d6dbae-kube-api-access-9xlf6\") on node \"crc\" DevicePath \"\"" Jan 29 06:55:02 crc kubenswrapper[5017]: I0129 06:55:02.205774 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-9p92z" event={"ID":"a0b48b8f-8c59-4c86-a4fd-b1b408d6dbae","Type":"ContainerDied","Data":"e94673989230b5876eff68759dda4f5279eb10f95d3c5752c7b8f4c587a837ff"} Jan 29 06:55:02 crc kubenswrapper[5017]: I0129 06:55:02.205805 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-9p92z" Jan 29 06:55:02 crc kubenswrapper[5017]: I0129 06:55:02.205822 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e94673989230b5876eff68759dda4f5279eb10f95d3c5752c7b8f4c587a837ff" Jan 29 06:55:02 crc kubenswrapper[5017]: I0129 06:55:02.208204 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-6lvps" Jan 29 06:55:02 crc kubenswrapper[5017]: I0129 06:55:02.208195 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-6lvps" event={"ID":"c44df6b3-8b1f-4004-9629-46412a17cbf7","Type":"ContainerDied","Data":"e4bf07c894ebbd9e8a5617d0612efddff781ca113ac5b4251afb8a61e4c5209b"} Jan 29 06:55:02 crc kubenswrapper[5017]: I0129 06:55:02.208522 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e4bf07c894ebbd9e8a5617d0612efddff781ca113ac5b4251afb8a61e4c5209b" Jan 29 06:55:02 crc kubenswrapper[5017]: I0129 06:55:02.209773 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"900bcf84-a22f-4c90-8037-e3e9deaad6ec","Type":"ContainerStarted","Data":"ddd9431312b2a62e76dd1a8aac3c0d4e69f2dd01ad798b11ecf9cd3977b7ddb1"} Jan 29 06:55:02 crc kubenswrapper[5017]: I0129 06:55:02.337763 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="443e2637-af52-4409-aca6-c14cdc5c9767" path="/var/lib/kubelet/pods/443e2637-af52-4409-aca6-c14cdc5c9767/volumes" Jan 29 06:55:02 crc kubenswrapper[5017]: I0129 06:55:02.711399 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0fca-account-create-update-pgcmp" Jan 29 06:55:02 crc kubenswrapper[5017]: I0129 06:55:02.728214 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4jmrm\" (UniqueName: \"kubernetes.io/projected/ecc785a3-2b30-4a73-b98a-1f6d405efa60-kube-api-access-4jmrm\") pod \"ecc785a3-2b30-4a73-b98a-1f6d405efa60\" (UID: \"ecc785a3-2b30-4a73-b98a-1f6d405efa60\") " Jan 29 06:55:02 crc kubenswrapper[5017]: I0129 06:55:02.728336 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ecc785a3-2b30-4a73-b98a-1f6d405efa60-operator-scripts\") pod \"ecc785a3-2b30-4a73-b98a-1f6d405efa60\" (UID: \"ecc785a3-2b30-4a73-b98a-1f6d405efa60\") " Jan 29 06:55:02 crc kubenswrapper[5017]: I0129 06:55:02.728899 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ecc785a3-2b30-4a73-b98a-1f6d405efa60-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ecc785a3-2b30-4a73-b98a-1f6d405efa60" (UID: "ecc785a3-2b30-4a73-b98a-1f6d405efa60"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:55:02 crc kubenswrapper[5017]: I0129 06:55:02.729575 5017 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ecc785a3-2b30-4a73-b98a-1f6d405efa60-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 06:55:02 crc kubenswrapper[5017]: I0129 06:55:02.749183 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ecc785a3-2b30-4a73-b98a-1f6d405efa60-kube-api-access-4jmrm" (OuterVolumeSpecName: "kube-api-access-4jmrm") pod "ecc785a3-2b30-4a73-b98a-1f6d405efa60" (UID: "ecc785a3-2b30-4a73-b98a-1f6d405efa60"). InnerVolumeSpecName "kube-api-access-4jmrm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:55:02 crc kubenswrapper[5017]: I0129 06:55:02.831252 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4jmrm\" (UniqueName: \"kubernetes.io/projected/ecc785a3-2b30-4a73-b98a-1f6d405efa60-kube-api-access-4jmrm\") on node \"crc\" DevicePath \"\"" Jan 29 06:55:02 crc kubenswrapper[5017]: I0129 06:55:02.870898 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-fd15-account-create-update-cx6sh" Jan 29 06:55:02 crc kubenswrapper[5017]: I0129 06:55:02.892077 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-fgmwl" Jan 29 06:55:02 crc kubenswrapper[5017]: I0129 06:55:02.893295 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-9727-account-create-update-khlxg" Jan 29 06:55:02 crc kubenswrapper[5017]: I0129 06:55:02.932743 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/23be4105-cd73-4c7f-b967-8cac7cf8451d-operator-scripts\") pod \"23be4105-cd73-4c7f-b967-8cac7cf8451d\" (UID: \"23be4105-cd73-4c7f-b967-8cac7cf8451d\") " Jan 29 06:55:02 crc kubenswrapper[5017]: I0129 06:55:02.932834 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zcgn2\" (UniqueName: \"kubernetes.io/projected/23be4105-cd73-4c7f-b967-8cac7cf8451d-kube-api-access-zcgn2\") pod \"23be4105-cd73-4c7f-b967-8cac7cf8451d\" (UID: \"23be4105-cd73-4c7f-b967-8cac7cf8451d\") " Jan 29 06:55:02 crc kubenswrapper[5017]: I0129 06:55:02.932912 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-92wx6\" (UniqueName: \"kubernetes.io/projected/15d4a207-f6d2-48ce-9065-b3438a37b46d-kube-api-access-92wx6\") pod \"15d4a207-f6d2-48ce-9065-b3438a37b46d\" (UID: \"15d4a207-f6d2-48ce-9065-b3438a37b46d\") " Jan 29 06:55:02 crc kubenswrapper[5017]: I0129 06:55:02.933120 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pqv2f\" (UniqueName: \"kubernetes.io/projected/b67befda-4537-4dc6-bf3d-c7f971a7b825-kube-api-access-pqv2f\") pod \"b67befda-4537-4dc6-bf3d-c7f971a7b825\" (UID: \"b67befda-4537-4dc6-bf3d-c7f971a7b825\") " Jan 29 06:55:02 crc kubenswrapper[5017]: I0129 06:55:02.933152 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b67befda-4537-4dc6-bf3d-c7f971a7b825-operator-scripts\") pod \"b67befda-4537-4dc6-bf3d-c7f971a7b825\" (UID: \"b67befda-4537-4dc6-bf3d-c7f971a7b825\") " Jan 29 06:55:02 crc kubenswrapper[5017]: I0129 06:55:02.933186 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15d4a207-f6d2-48ce-9065-b3438a37b46d-operator-scripts\") pod \"15d4a207-f6d2-48ce-9065-b3438a37b46d\" (UID: \"15d4a207-f6d2-48ce-9065-b3438a37b46d\") " Jan 29 06:55:02 crc kubenswrapper[5017]: I0129 06:55:02.939070 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15d4a207-f6d2-48ce-9065-b3438a37b46d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "15d4a207-f6d2-48ce-9065-b3438a37b46d" (UID: "15d4a207-f6d2-48ce-9065-b3438a37b46d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:55:02 crc kubenswrapper[5017]: I0129 06:55:02.939482 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b67befda-4537-4dc6-bf3d-c7f971a7b825-kube-api-access-pqv2f" (OuterVolumeSpecName: "kube-api-access-pqv2f") pod "b67befda-4537-4dc6-bf3d-c7f971a7b825" (UID: "b67befda-4537-4dc6-bf3d-c7f971a7b825"). InnerVolumeSpecName "kube-api-access-pqv2f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:55:02 crc kubenswrapper[5017]: I0129 06:55:02.939619 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b67befda-4537-4dc6-bf3d-c7f971a7b825-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b67befda-4537-4dc6-bf3d-c7f971a7b825" (UID: "b67befda-4537-4dc6-bf3d-c7f971a7b825"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:55:02 crc kubenswrapper[5017]: I0129 06:55:02.939812 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15d4a207-f6d2-48ce-9065-b3438a37b46d-kube-api-access-92wx6" (OuterVolumeSpecName: "kube-api-access-92wx6") pod "15d4a207-f6d2-48ce-9065-b3438a37b46d" (UID: "15d4a207-f6d2-48ce-9065-b3438a37b46d"). InnerVolumeSpecName "kube-api-access-92wx6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:55:02 crc kubenswrapper[5017]: I0129 06:55:02.940735 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23be4105-cd73-4c7f-b967-8cac7cf8451d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "23be4105-cd73-4c7f-b967-8cac7cf8451d" (UID: "23be4105-cd73-4c7f-b967-8cac7cf8451d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:55:02 crc kubenswrapper[5017]: I0129 06:55:02.946560 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23be4105-cd73-4c7f-b967-8cac7cf8451d-kube-api-access-zcgn2" (OuterVolumeSpecName: "kube-api-access-zcgn2") pod "23be4105-cd73-4c7f-b967-8cac7cf8451d" (UID: "23be4105-cd73-4c7f-b967-8cac7cf8451d"). InnerVolumeSpecName "kube-api-access-zcgn2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:55:03 crc kubenswrapper[5017]: I0129 06:55:03.035869 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pqv2f\" (UniqueName: \"kubernetes.io/projected/b67befda-4537-4dc6-bf3d-c7f971a7b825-kube-api-access-pqv2f\") on node \"crc\" DevicePath \"\"" Jan 29 06:55:03 crc kubenswrapper[5017]: I0129 06:55:03.035906 5017 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b67befda-4537-4dc6-bf3d-c7f971a7b825-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 06:55:03 crc kubenswrapper[5017]: I0129 06:55:03.035915 5017 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15d4a207-f6d2-48ce-9065-b3438a37b46d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 06:55:03 crc kubenswrapper[5017]: I0129 06:55:03.035928 5017 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/23be4105-cd73-4c7f-b967-8cac7cf8451d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 06:55:03 crc kubenswrapper[5017]: I0129 06:55:03.035937 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zcgn2\" (UniqueName: \"kubernetes.io/projected/23be4105-cd73-4c7f-b967-8cac7cf8451d-kube-api-access-zcgn2\") on node \"crc\" DevicePath \"\"" Jan 29 06:55:03 crc kubenswrapper[5017]: I0129 06:55:03.035946 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-92wx6\" (UniqueName: \"kubernetes.io/projected/15d4a207-f6d2-48ce-9065-b3438a37b46d-kube-api-access-92wx6\") on node \"crc\" DevicePath \"\"" Jan 29 06:55:03 crc kubenswrapper[5017]: I0129 06:55:03.232592 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-fd15-account-create-update-cx6sh" event={"ID":"b67befda-4537-4dc6-bf3d-c7f971a7b825","Type":"ContainerDied","Data":"a380ee156bc511128bbb12c77d4ebfc34e78eb5a5c966ea47298180d445b47b9"} Jan 29 06:55:03 crc kubenswrapper[5017]: I0129 06:55:03.232642 5017 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a380ee156bc511128bbb12c77d4ebfc34e78eb5a5c966ea47298180d445b47b9" Jan 29 06:55:03 crc kubenswrapper[5017]: I0129 06:55:03.232738 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-fd15-account-create-update-cx6sh" Jan 29 06:55:03 crc kubenswrapper[5017]: I0129 06:55:03.236169 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-fgmwl" Jan 29 06:55:03 crc kubenswrapper[5017]: I0129 06:55:03.241808 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-9727-account-create-update-khlxg" Jan 29 06:55:03 crc kubenswrapper[5017]: I0129 06:55:03.243971 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0fca-account-create-update-pgcmp" Jan 29 06:55:03 crc kubenswrapper[5017]: I0129 06:55:03.236034 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-fgmwl" event={"ID":"23be4105-cd73-4c7f-b967-8cac7cf8451d","Type":"ContainerDied","Data":"a0e68008651026709d252df4dca5b7a4d831689cd191ed66705bbbfb6bbe81ed"} Jan 29 06:55:03 crc kubenswrapper[5017]: I0129 06:55:03.245743 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a0e68008651026709d252df4dca5b7a4d831689cd191ed66705bbbfb6bbe81ed" Jan 29 06:55:03 crc kubenswrapper[5017]: I0129 06:55:03.245767 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"900bcf84-a22f-4c90-8037-e3e9deaad6ec","Type":"ContainerStarted","Data":"ae2bc77107a5035da262cf4eadafc16c3049f80622a656702be5b736d2c8be91"} Jan 29 06:55:03 crc kubenswrapper[5017]: I0129 06:55:03.245789 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"900bcf84-a22f-4c90-8037-e3e9deaad6ec","Type":"ContainerStarted","Data":"b5f466fc8e3756f7ec6b3045630f55deb681918dbfc6655e03c3edd97674d1da"} Jan 29 06:55:03 crc kubenswrapper[5017]: I0129 06:55:03.245802 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-9727-account-create-update-khlxg" event={"ID":"15d4a207-f6d2-48ce-9065-b3438a37b46d","Type":"ContainerDied","Data":"97d6a9faf74e465fce9ec1a032aa740de09b27ad0dc8405f6b9111c7aaeac3e6"} Jan 29 06:55:03 crc kubenswrapper[5017]: I0129 06:55:03.245815 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="97d6a9faf74e465fce9ec1a032aa740de09b27ad0dc8405f6b9111c7aaeac3e6" Jan 29 06:55:03 crc kubenswrapper[5017]: I0129 06:55:03.245824 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0fca-account-create-update-pgcmp" event={"ID":"ecc785a3-2b30-4a73-b98a-1f6d405efa60","Type":"ContainerDied","Data":"a0e1635187bae42930c7ff3b46d0d99b5ffd89cc8c4692dbb2392120a6137be8"} Jan 29 06:55:03 crc kubenswrapper[5017]: I0129 06:55:03.245835 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a0e1635187bae42930c7ff3b46d0d99b5ffd89cc8c4692dbb2392120a6137be8" Jan 29 06:55:03 crc kubenswrapper[5017]: I0129 06:55:03.342405 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 29 06:55:03 crc kubenswrapper[5017]: I0129 06:55:03.342467 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 29 06:55:03 crc 
kubenswrapper[5017]: I0129 06:55:03.374573 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 29 06:55:03 crc kubenswrapper[5017]: I0129 06:55:03.404050 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 29 06:55:03 crc kubenswrapper[5017]: E0129 06:55:03.896833 5017 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod929c8bb1_1ca7_4593_b8f4_1e74f9702b57.slice/crio-8b94ddb59bba453b2407f45b2832bf1bb4549afffb1634d0691aab252bffafe8.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod929c8bb1_1ca7_4593_b8f4_1e74f9702b57.slice/crio-conmon-8b94ddb59bba453b2407f45b2832bf1bb4549afffb1634d0691aab252bffafe8.scope\": RecentStats: unable to find data in memory cache]" Jan 29 06:55:03 crc kubenswrapper[5017]: I0129 06:55:03.937830 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-8548d8d696-gk4rx" Jan 29 06:55:03 crc kubenswrapper[5017]: I0129 06:55:03.954811 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/929c8bb1-1ca7-4593-b8f4-1e74f9702b57-config-data\") pod \"929c8bb1-1ca7-4593-b8f4-1e74f9702b57\" (UID: \"929c8bb1-1ca7-4593-b8f4-1e74f9702b57\") " Jan 29 06:55:03 crc kubenswrapper[5017]: I0129 06:55:03.955038 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/929c8bb1-1ca7-4593-b8f4-1e74f9702b57-logs\") pod \"929c8bb1-1ca7-4593-b8f4-1e74f9702b57\" (UID: \"929c8bb1-1ca7-4593-b8f4-1e74f9702b57\") " Jan 29 06:55:03 crc kubenswrapper[5017]: I0129 06:55:03.955101 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/929c8bb1-1ca7-4593-b8f4-1e74f9702b57-combined-ca-bundle\") pod \"929c8bb1-1ca7-4593-b8f4-1e74f9702b57\" (UID: \"929c8bb1-1ca7-4593-b8f4-1e74f9702b57\") " Jan 29 06:55:03 crc kubenswrapper[5017]: I0129 06:55:03.955162 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/929c8bb1-1ca7-4593-b8f4-1e74f9702b57-public-tls-certs\") pod \"929c8bb1-1ca7-4593-b8f4-1e74f9702b57\" (UID: \"929c8bb1-1ca7-4593-b8f4-1e74f9702b57\") " Jan 29 06:55:03 crc kubenswrapper[5017]: I0129 06:55:03.955215 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/929c8bb1-1ca7-4593-b8f4-1e74f9702b57-internal-tls-certs\") pod \"929c8bb1-1ca7-4593-b8f4-1e74f9702b57\" (UID: \"929c8bb1-1ca7-4593-b8f4-1e74f9702b57\") " Jan 29 06:55:03 crc kubenswrapper[5017]: I0129 06:55:03.955258 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2vq7k\" (UniqueName: \"kubernetes.io/projected/929c8bb1-1ca7-4593-b8f4-1e74f9702b57-kube-api-access-2vq7k\") pod \"929c8bb1-1ca7-4593-b8f4-1e74f9702b57\" (UID: \"929c8bb1-1ca7-4593-b8f4-1e74f9702b57\") " Jan 29 06:55:03 crc kubenswrapper[5017]: I0129 06:55:03.955286 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/929c8bb1-1ca7-4593-b8f4-1e74f9702b57-scripts\") pod \"929c8bb1-1ca7-4593-b8f4-1e74f9702b57\" (UID: \"929c8bb1-1ca7-4593-b8f4-1e74f9702b57\") " Jan 29 06:55:03 crc kubenswrapper[5017]: I0129 06:55:03.957194 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/929c8bb1-1ca7-4593-b8f4-1e74f9702b57-logs" (OuterVolumeSpecName: "logs") pod "929c8bb1-1ca7-4593-b8f4-1e74f9702b57" (UID: "929c8bb1-1ca7-4593-b8f4-1e74f9702b57"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:55:03 crc kubenswrapper[5017]: I0129 06:55:03.973456 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/929c8bb1-1ca7-4593-b8f4-1e74f9702b57-scripts" (OuterVolumeSpecName: "scripts") pod "929c8bb1-1ca7-4593-b8f4-1e74f9702b57" (UID: "929c8bb1-1ca7-4593-b8f4-1e74f9702b57"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:55:03 crc kubenswrapper[5017]: I0129 06:55:03.976832 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/929c8bb1-1ca7-4593-b8f4-1e74f9702b57-kube-api-access-2vq7k" (OuterVolumeSpecName: "kube-api-access-2vq7k") pod "929c8bb1-1ca7-4593-b8f4-1e74f9702b57" (UID: "929c8bb1-1ca7-4593-b8f4-1e74f9702b57"). InnerVolumeSpecName "kube-api-access-2vq7k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:55:04 crc kubenswrapper[5017]: I0129 06:55:04.018835 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/929c8bb1-1ca7-4593-b8f4-1e74f9702b57-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "929c8bb1-1ca7-4593-b8f4-1e74f9702b57" (UID: "929c8bb1-1ca7-4593-b8f4-1e74f9702b57"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:55:04 crc kubenswrapper[5017]: I0129 06:55:04.054820 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/929c8bb1-1ca7-4593-b8f4-1e74f9702b57-config-data" (OuterVolumeSpecName: "config-data") pod "929c8bb1-1ca7-4593-b8f4-1e74f9702b57" (UID: "929c8bb1-1ca7-4593-b8f4-1e74f9702b57"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:55:04 crc kubenswrapper[5017]: I0129 06:55:04.058217 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2vq7k\" (UniqueName: \"kubernetes.io/projected/929c8bb1-1ca7-4593-b8f4-1e74f9702b57-kube-api-access-2vq7k\") on node \"crc\" DevicePath \"\"" Jan 29 06:55:04 crc kubenswrapper[5017]: I0129 06:55:04.058248 5017 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/929c8bb1-1ca7-4593-b8f4-1e74f9702b57-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 06:55:04 crc kubenswrapper[5017]: I0129 06:55:04.058258 5017 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/929c8bb1-1ca7-4593-b8f4-1e74f9702b57-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 06:55:04 crc kubenswrapper[5017]: I0129 06:55:04.058268 5017 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/929c8bb1-1ca7-4593-b8f4-1e74f9702b57-logs\") on node \"crc\" DevicePath \"\"" Jan 29 06:55:04 crc kubenswrapper[5017]: I0129 06:55:04.058277 5017 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/929c8bb1-1ca7-4593-b8f4-1e74f9702b57-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 06:55:04 crc kubenswrapper[5017]: I0129 06:55:04.066594 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/929c8bb1-1ca7-4593-b8f4-1e74f9702b57-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "929c8bb1-1ca7-4593-b8f4-1e74f9702b57" (UID: "929c8bb1-1ca7-4593-b8f4-1e74f9702b57"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:55:04 crc kubenswrapper[5017]: I0129 06:55:04.089089 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/929c8bb1-1ca7-4593-b8f4-1e74f9702b57-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "929c8bb1-1ca7-4593-b8f4-1e74f9702b57" (UID: "929c8bb1-1ca7-4593-b8f4-1e74f9702b57"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:55:04 crc kubenswrapper[5017]: I0129 06:55:04.160695 5017 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/929c8bb1-1ca7-4593-b8f4-1e74f9702b57-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 06:55:04 crc kubenswrapper[5017]: I0129 06:55:04.160749 5017 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/929c8bb1-1ca7-4593-b8f4-1e74f9702b57-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 06:55:04 crc kubenswrapper[5017]: I0129 06:55:04.261684 5017 generic.go:334] "Generic (PLEG): container finished" podID="929c8bb1-1ca7-4593-b8f4-1e74f9702b57" containerID="8b94ddb59bba453b2407f45b2832bf1bb4549afffb1634d0691aab252bffafe8" exitCode=0 Jan 29 06:55:04 crc kubenswrapper[5017]: I0129 06:55:04.261784 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8548d8d696-gk4rx" event={"ID":"929c8bb1-1ca7-4593-b8f4-1e74f9702b57","Type":"ContainerDied","Data":"8b94ddb59bba453b2407f45b2832bf1bb4549afffb1634d0691aab252bffafe8"} Jan 29 06:55:04 crc kubenswrapper[5017]: I0129 06:55:04.263579 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8548d8d696-gk4rx" event={"ID":"929c8bb1-1ca7-4593-b8f4-1e74f9702b57","Type":"ContainerDied","Data":"c8ca0ce3ccac7f5529adbba1de049ad01dddc6450e36ccf4a04a79bb0c68fdd8"} Jan 29 06:55:04 crc kubenswrapper[5017]: I0129 06:55:04.263664 5017 scope.go:117] "RemoveContainer" containerID="8b94ddb59bba453b2407f45b2832bf1bb4549afffb1634d0691aab252bffafe8" Jan 29 06:55:04 crc kubenswrapper[5017]: I0129 06:55:04.261800 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-8548d8d696-gk4rx" Jan 29 06:55:04 crc kubenswrapper[5017]: I0129 06:55:04.267124 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"900bcf84-a22f-4c90-8037-e3e9deaad6ec","Type":"ContainerStarted","Data":"db3c0ae921866f4f767f56fe83177ed343e9edef6bfbe98fd0d0638f39d83126"} Jan 29 06:55:04 crc kubenswrapper[5017]: I0129 06:55:04.267749 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 29 06:55:04 crc kubenswrapper[5017]: I0129 06:55:04.267789 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 29 06:55:04 crc kubenswrapper[5017]: I0129 06:55:04.299091 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 29 06:55:04 crc kubenswrapper[5017]: I0129 06:55:04.299148 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 29 06:55:04 crc kubenswrapper[5017]: I0129 06:55:04.355238 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-8548d8d696-gk4rx"] Jan 29 06:55:04 crc kubenswrapper[5017]: I0129 06:55:04.355976 5017 scope.go:117] "RemoveContainer" containerID="fe46baa8e6053563c1ffb0c393499ed3ca638f7d058c3634aa1ecaff13ec1bef" Jan 29 06:55:04 crc kubenswrapper[5017]: I0129 06:55:04.357697 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-8548d8d696-gk4rx"] Jan 29 06:55:04 crc kubenswrapper[5017]: I0129 06:55:04.359527 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 
29 06:55:04 crc kubenswrapper[5017]: I0129 06:55:04.360637 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 29 06:55:04 crc kubenswrapper[5017]: I0129 06:55:04.394526 5017 scope.go:117] "RemoveContainer" containerID="8b94ddb59bba453b2407f45b2832bf1bb4549afffb1634d0691aab252bffafe8" Jan 29 06:55:04 crc kubenswrapper[5017]: E0129 06:55:04.399621 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b94ddb59bba453b2407f45b2832bf1bb4549afffb1634d0691aab252bffafe8\": container with ID starting with 8b94ddb59bba453b2407f45b2832bf1bb4549afffb1634d0691aab252bffafe8 not found: ID does not exist" containerID="8b94ddb59bba453b2407f45b2832bf1bb4549afffb1634d0691aab252bffafe8" Jan 29 06:55:04 crc kubenswrapper[5017]: I0129 06:55:04.399701 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b94ddb59bba453b2407f45b2832bf1bb4549afffb1634d0691aab252bffafe8"} err="failed to get container status \"8b94ddb59bba453b2407f45b2832bf1bb4549afffb1634d0691aab252bffafe8\": rpc error: code = NotFound desc = could not find container \"8b94ddb59bba453b2407f45b2832bf1bb4549afffb1634d0691aab252bffafe8\": container with ID starting with 8b94ddb59bba453b2407f45b2832bf1bb4549afffb1634d0691aab252bffafe8 not found: ID does not exist" Jan 29 06:55:04 crc kubenswrapper[5017]: I0129 06:55:04.399737 5017 scope.go:117] "RemoveContainer" containerID="fe46baa8e6053563c1ffb0c393499ed3ca638f7d058c3634aa1ecaff13ec1bef" Jan 29 06:55:04 crc kubenswrapper[5017]: E0129 06:55:04.400320 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe46baa8e6053563c1ffb0c393499ed3ca638f7d058c3634aa1ecaff13ec1bef\": container with ID starting with fe46baa8e6053563c1ffb0c393499ed3ca638f7d058c3634aa1ecaff13ec1bef not found: ID does not exist" containerID="fe46baa8e6053563c1ffb0c393499ed3ca638f7d058c3634aa1ecaff13ec1bef" Jan 29 06:55:04 crc kubenswrapper[5017]: I0129 06:55:04.400342 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe46baa8e6053563c1ffb0c393499ed3ca638f7d058c3634aa1ecaff13ec1bef"} err="failed to get container status \"fe46baa8e6053563c1ffb0c393499ed3ca638f7d058c3634aa1ecaff13ec1bef\": rpc error: code = NotFound desc = could not find container \"fe46baa8e6053563c1ffb0c393499ed3ca638f7d058c3634aa1ecaff13ec1bef\": container with ID starting with fe46baa8e6053563c1ffb0c393499ed3ca638f7d058c3634aa1ecaff13ec1bef not found: ID does not exist" Jan 29 06:55:05 crc kubenswrapper[5017]: I0129 06:55:05.292667 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 29 06:55:05 crc kubenswrapper[5017]: I0129 06:55:05.292714 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 29 06:55:06 crc kubenswrapper[5017]: I0129 06:55:06.309154 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"900bcf84-a22f-4c90-8037-e3e9deaad6ec","Type":"ContainerStarted","Data":"b725d72e17f482b1a3a5bfe561a3570706c7df023022cbe4f8473a35ca875bd7"} Jan 29 06:55:06 crc kubenswrapper[5017]: I0129 06:55:06.309813 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 29 06:55:06 crc kubenswrapper[5017]: I0129 06:55:06.336760 5017 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="929c8bb1-1ca7-4593-b8f4-1e74f9702b57" path="/var/lib/kubelet/pods/929c8bb1-1ca7-4593-b8f4-1e74f9702b57/volumes" Jan 29 06:55:06 crc kubenswrapper[5017]: I0129 06:55:06.354485 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.051811154 podStartE2EDuration="6.354463792s" podCreationTimestamp="2026-01-29 06:55:00 +0000 UTC" firstStartedPulling="2026-01-29 06:55:01.46138668 +0000 UTC m=+1187.835834300" lastFinishedPulling="2026-01-29 06:55:05.764039328 +0000 UTC m=+1192.138486938" observedRunningTime="2026-01-29 06:55:06.333583326 +0000 UTC m=+1192.708030946" watchObservedRunningTime="2026-01-29 06:55:06.354463792 +0000 UTC m=+1192.728911402" Jan 29 06:55:06 crc kubenswrapper[5017]: I0129 06:55:06.434905 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 29 06:55:06 crc kubenswrapper[5017]: I0129 06:55:06.435249 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 29 06:55:07 crc kubenswrapper[5017]: I0129 06:55:07.391238 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 29 06:55:07 crc kubenswrapper[5017]: I0129 06:55:07.391725 5017 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 06:55:07 crc kubenswrapper[5017]: I0129 06:55:07.393237 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 29 06:55:07 crc kubenswrapper[5017]: I0129 06:55:07.969655 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-8dn6j"] Jan 29 06:55:07 crc kubenswrapper[5017]: E0129 06:55:07.970131 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0b48b8f-8c59-4c86-a4fd-b1b408d6dbae" containerName="mariadb-database-create" Jan 29 06:55:07 crc kubenswrapper[5017]: I0129 06:55:07.970151 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0b48b8f-8c59-4c86-a4fd-b1b408d6dbae" containerName="mariadb-database-create" Jan 29 06:55:07 crc kubenswrapper[5017]: E0129 06:55:07.970171 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="929c8bb1-1ca7-4593-b8f4-1e74f9702b57" containerName="placement-api" Jan 29 06:55:07 crc kubenswrapper[5017]: I0129 06:55:07.970179 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="929c8bb1-1ca7-4593-b8f4-1e74f9702b57" containerName="placement-api" Jan 29 06:55:07 crc kubenswrapper[5017]: E0129 06:55:07.970189 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c44df6b3-8b1f-4004-9629-46412a17cbf7" containerName="mariadb-database-create" Jan 29 06:55:07 crc kubenswrapper[5017]: I0129 06:55:07.970195 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="c44df6b3-8b1f-4004-9629-46412a17cbf7" containerName="mariadb-database-create" Jan 29 06:55:07 crc kubenswrapper[5017]: E0129 06:55:07.970221 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecc785a3-2b30-4a73-b98a-1f6d405efa60" containerName="mariadb-account-create-update" Jan 29 06:55:07 crc kubenswrapper[5017]: I0129 06:55:07.970227 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecc785a3-2b30-4a73-b98a-1f6d405efa60" containerName="mariadb-account-create-update" Jan 29 06:55:07 crc kubenswrapper[5017]: E0129 06:55:07.970240 5017 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="929c8bb1-1ca7-4593-b8f4-1e74f9702b57" containerName="placement-log" Jan 29 06:55:07 crc kubenswrapper[5017]: I0129 06:55:07.970247 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="929c8bb1-1ca7-4593-b8f4-1e74f9702b57" containerName="placement-log" Jan 29 06:55:07 crc kubenswrapper[5017]: E0129 06:55:07.970257 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b67befda-4537-4dc6-bf3d-c7f971a7b825" containerName="mariadb-account-create-update" Jan 29 06:55:07 crc kubenswrapper[5017]: I0129 06:55:07.970265 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="b67befda-4537-4dc6-bf3d-c7f971a7b825" containerName="mariadb-account-create-update" Jan 29 06:55:07 crc kubenswrapper[5017]: E0129 06:55:07.970279 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15d4a207-f6d2-48ce-9065-b3438a37b46d" containerName="mariadb-account-create-update" Jan 29 06:55:07 crc kubenswrapper[5017]: I0129 06:55:07.970285 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="15d4a207-f6d2-48ce-9065-b3438a37b46d" containerName="mariadb-account-create-update" Jan 29 06:55:07 crc kubenswrapper[5017]: E0129 06:55:07.970302 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23be4105-cd73-4c7f-b967-8cac7cf8451d" containerName="mariadb-database-create" Jan 29 06:55:07 crc kubenswrapper[5017]: I0129 06:55:07.970308 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="23be4105-cd73-4c7f-b967-8cac7cf8451d" containerName="mariadb-database-create" Jan 29 06:55:07 crc kubenswrapper[5017]: I0129 06:55:07.970492 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="15d4a207-f6d2-48ce-9065-b3438a37b46d" containerName="mariadb-account-create-update" Jan 29 06:55:07 crc kubenswrapper[5017]: I0129 06:55:07.970507 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="929c8bb1-1ca7-4593-b8f4-1e74f9702b57" containerName="placement-log" Jan 29 06:55:07 crc kubenswrapper[5017]: I0129 06:55:07.970520 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="b67befda-4537-4dc6-bf3d-c7f971a7b825" containerName="mariadb-account-create-update" Jan 29 06:55:07 crc kubenswrapper[5017]: I0129 06:55:07.970530 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="929c8bb1-1ca7-4593-b8f4-1e74f9702b57" containerName="placement-api" Jan 29 06:55:07 crc kubenswrapper[5017]: I0129 06:55:07.970539 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="23be4105-cd73-4c7f-b967-8cac7cf8451d" containerName="mariadb-database-create" Jan 29 06:55:07 crc kubenswrapper[5017]: I0129 06:55:07.970551 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0b48b8f-8c59-4c86-a4fd-b1b408d6dbae" containerName="mariadb-database-create" Jan 29 06:55:07 crc kubenswrapper[5017]: I0129 06:55:07.970557 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecc785a3-2b30-4a73-b98a-1f6d405efa60" containerName="mariadb-account-create-update" Jan 29 06:55:07 crc kubenswrapper[5017]: I0129 06:55:07.970566 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="c44df6b3-8b1f-4004-9629-46412a17cbf7" containerName="mariadb-database-create" Jan 29 06:55:07 crc kubenswrapper[5017]: I0129 06:55:07.971326 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-8dn6j" Jan 29 06:55:07 crc kubenswrapper[5017]: I0129 06:55:07.980115 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-8dn6j"] Jan 29 06:55:08 crc kubenswrapper[5017]: I0129 06:55:08.015377 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 29 06:55:08 crc kubenswrapper[5017]: I0129 06:55:08.015676 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 29 06:55:08 crc kubenswrapper[5017]: I0129 06:55:08.015988 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-4jmqf" Jan 29 06:55:08 crc kubenswrapper[5017]: I0129 06:55:08.052488 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pp6pk\" (UniqueName: \"kubernetes.io/projected/f541766b-7fae-4f87-8c2b-97e269d15c84-kube-api-access-pp6pk\") pod \"nova-cell0-conductor-db-sync-8dn6j\" (UID: \"f541766b-7fae-4f87-8c2b-97e269d15c84\") " pod="openstack/nova-cell0-conductor-db-sync-8dn6j" Jan 29 06:55:08 crc kubenswrapper[5017]: I0129 06:55:08.052551 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f541766b-7fae-4f87-8c2b-97e269d15c84-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-8dn6j\" (UID: \"f541766b-7fae-4f87-8c2b-97e269d15c84\") " pod="openstack/nova-cell0-conductor-db-sync-8dn6j" Jan 29 06:55:08 crc kubenswrapper[5017]: I0129 06:55:08.052607 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f541766b-7fae-4f87-8c2b-97e269d15c84-scripts\") pod \"nova-cell0-conductor-db-sync-8dn6j\" (UID: \"f541766b-7fae-4f87-8c2b-97e269d15c84\") " pod="openstack/nova-cell0-conductor-db-sync-8dn6j" Jan 29 06:55:08 crc kubenswrapper[5017]: I0129 06:55:08.052643 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f541766b-7fae-4f87-8c2b-97e269d15c84-config-data\") pod \"nova-cell0-conductor-db-sync-8dn6j\" (UID: \"f541766b-7fae-4f87-8c2b-97e269d15c84\") " pod="openstack/nova-cell0-conductor-db-sync-8dn6j" Jan 29 06:55:08 crc kubenswrapper[5017]: I0129 06:55:08.155113 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pp6pk\" (UniqueName: \"kubernetes.io/projected/f541766b-7fae-4f87-8c2b-97e269d15c84-kube-api-access-pp6pk\") pod \"nova-cell0-conductor-db-sync-8dn6j\" (UID: \"f541766b-7fae-4f87-8c2b-97e269d15c84\") " pod="openstack/nova-cell0-conductor-db-sync-8dn6j" Jan 29 06:55:08 crc kubenswrapper[5017]: I0129 06:55:08.155173 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f541766b-7fae-4f87-8c2b-97e269d15c84-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-8dn6j\" (UID: \"f541766b-7fae-4f87-8c2b-97e269d15c84\") " pod="openstack/nova-cell0-conductor-db-sync-8dn6j" Jan 29 06:55:08 crc kubenswrapper[5017]: I0129 06:55:08.155234 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f541766b-7fae-4f87-8c2b-97e269d15c84-scripts\") pod \"nova-cell0-conductor-db-sync-8dn6j\" (UID: 
\"f541766b-7fae-4f87-8c2b-97e269d15c84\") " pod="openstack/nova-cell0-conductor-db-sync-8dn6j" Jan 29 06:55:08 crc kubenswrapper[5017]: I0129 06:55:08.155268 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f541766b-7fae-4f87-8c2b-97e269d15c84-config-data\") pod \"nova-cell0-conductor-db-sync-8dn6j\" (UID: \"f541766b-7fae-4f87-8c2b-97e269d15c84\") " pod="openstack/nova-cell0-conductor-db-sync-8dn6j" Jan 29 06:55:08 crc kubenswrapper[5017]: I0129 06:55:08.164627 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f541766b-7fae-4f87-8c2b-97e269d15c84-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-8dn6j\" (UID: \"f541766b-7fae-4f87-8c2b-97e269d15c84\") " pod="openstack/nova-cell0-conductor-db-sync-8dn6j" Jan 29 06:55:08 crc kubenswrapper[5017]: I0129 06:55:08.166414 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f541766b-7fae-4f87-8c2b-97e269d15c84-scripts\") pod \"nova-cell0-conductor-db-sync-8dn6j\" (UID: \"f541766b-7fae-4f87-8c2b-97e269d15c84\") " pod="openstack/nova-cell0-conductor-db-sync-8dn6j" Jan 29 06:55:08 crc kubenswrapper[5017]: I0129 06:55:08.169694 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f541766b-7fae-4f87-8c2b-97e269d15c84-config-data\") pod \"nova-cell0-conductor-db-sync-8dn6j\" (UID: \"f541766b-7fae-4f87-8c2b-97e269d15c84\") " pod="openstack/nova-cell0-conductor-db-sync-8dn6j" Jan 29 06:55:08 crc kubenswrapper[5017]: I0129 06:55:08.176225 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pp6pk\" (UniqueName: \"kubernetes.io/projected/f541766b-7fae-4f87-8c2b-97e269d15c84-kube-api-access-pp6pk\") pod \"nova-cell0-conductor-db-sync-8dn6j\" (UID: \"f541766b-7fae-4f87-8c2b-97e269d15c84\") " pod="openstack/nova-cell0-conductor-db-sync-8dn6j" Jan 29 06:55:08 crc kubenswrapper[5017]: I0129 06:55:08.336512 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-8dn6j" Jan 29 06:55:08 crc kubenswrapper[5017]: I0129 06:55:08.848819 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-8dn6j"] Jan 29 06:55:09 crc kubenswrapper[5017]: I0129 06:55:09.344571 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-8dn6j" event={"ID":"f541766b-7fae-4f87-8c2b-97e269d15c84","Type":"ContainerStarted","Data":"cc123f7f4fa8bde30bcf529442c3c8b60ef55b85234644fce38f61b81488c345"} Jan 29 06:55:13 crc kubenswrapper[5017]: I0129 06:55:13.711841 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 06:55:13 crc kubenswrapper[5017]: I0129 06:55:13.712678 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="900bcf84-a22f-4c90-8037-e3e9deaad6ec" containerName="ceilometer-central-agent" containerID="cri-o://b5f466fc8e3756f7ec6b3045630f55deb681918dbfc6655e03c3edd97674d1da" gracePeriod=30 Jan 29 06:55:13 crc kubenswrapper[5017]: I0129 06:55:13.713278 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="900bcf84-a22f-4c90-8037-e3e9deaad6ec" containerName="proxy-httpd" containerID="cri-o://b725d72e17f482b1a3a5bfe561a3570706c7df023022cbe4f8473a35ca875bd7" gracePeriod=30 Jan 29 06:55:13 crc kubenswrapper[5017]: I0129 06:55:13.713336 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="900bcf84-a22f-4c90-8037-e3e9deaad6ec" containerName="sg-core" containerID="cri-o://db3c0ae921866f4f767f56fe83177ed343e9edef6bfbe98fd0d0638f39d83126" gracePeriod=30 Jan 29 06:55:13 crc kubenswrapper[5017]: I0129 06:55:13.713375 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="900bcf84-a22f-4c90-8037-e3e9deaad6ec" containerName="ceilometer-notification-agent" containerID="cri-o://ae2bc77107a5035da262cf4eadafc16c3049f80622a656702be5b736d2c8be91" gracePeriod=30 Jan 29 06:55:14 crc kubenswrapper[5017]: I0129 06:55:14.444514 5017 generic.go:334] "Generic (PLEG): container finished" podID="900bcf84-a22f-4c90-8037-e3e9deaad6ec" containerID="b725d72e17f482b1a3a5bfe561a3570706c7df023022cbe4f8473a35ca875bd7" exitCode=0 Jan 29 06:55:14 crc kubenswrapper[5017]: I0129 06:55:14.445545 5017 generic.go:334] "Generic (PLEG): container finished" podID="900bcf84-a22f-4c90-8037-e3e9deaad6ec" containerID="db3c0ae921866f4f767f56fe83177ed343e9edef6bfbe98fd0d0638f39d83126" exitCode=2 Jan 29 06:55:14 crc kubenswrapper[5017]: I0129 06:55:14.445560 5017 generic.go:334] "Generic (PLEG): container finished" podID="900bcf84-a22f-4c90-8037-e3e9deaad6ec" containerID="b5f466fc8e3756f7ec6b3045630f55deb681918dbfc6655e03c3edd97674d1da" exitCode=0 Jan 29 06:55:14 crc kubenswrapper[5017]: I0129 06:55:14.444593 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"900bcf84-a22f-4c90-8037-e3e9deaad6ec","Type":"ContainerDied","Data":"b725d72e17f482b1a3a5bfe561a3570706c7df023022cbe4f8473a35ca875bd7"} Jan 29 06:55:14 crc kubenswrapper[5017]: I0129 06:55:14.445622 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"900bcf84-a22f-4c90-8037-e3e9deaad6ec","Type":"ContainerDied","Data":"db3c0ae921866f4f767f56fe83177ed343e9edef6bfbe98fd0d0638f39d83126"} Jan 29 06:55:14 crc kubenswrapper[5017]: I0129 06:55:14.445647 5017 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"900bcf84-a22f-4c90-8037-e3e9deaad6ec","Type":"ContainerDied","Data":"b5f466fc8e3756f7ec6b3045630f55deb681918dbfc6655e03c3edd97674d1da"} Jan 29 06:55:15 crc kubenswrapper[5017]: I0129 06:55:15.462248 5017 generic.go:334] "Generic (PLEG): container finished" podID="900bcf84-a22f-4c90-8037-e3e9deaad6ec" containerID="ae2bc77107a5035da262cf4eadafc16c3049f80622a656702be5b736d2c8be91" exitCode=0 Jan 29 06:55:15 crc kubenswrapper[5017]: I0129 06:55:15.462316 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"900bcf84-a22f-4c90-8037-e3e9deaad6ec","Type":"ContainerDied","Data":"ae2bc77107a5035da262cf4eadafc16c3049f80622a656702be5b736d2c8be91"} Jan 29 06:55:16 crc kubenswrapper[5017]: I0129 06:55:16.469929 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 06:55:16 crc kubenswrapper[5017]: I0129 06:55:16.478402 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-8dn6j" event={"ID":"f541766b-7fae-4f87-8c2b-97e269d15c84","Type":"ContainerStarted","Data":"55b4fe46b945125911fbc023ac6464f7526116f37457a6150cc275fbf8d698ab"} Jan 29 06:55:16 crc kubenswrapper[5017]: I0129 06:55:16.482227 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"900bcf84-a22f-4c90-8037-e3e9deaad6ec","Type":"ContainerDied","Data":"ddd9431312b2a62e76dd1a8aac3c0d4e69f2dd01ad798b11ecf9cd3977b7ddb1"} Jan 29 06:55:16 crc kubenswrapper[5017]: I0129 06:55:16.482286 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 06:55:16 crc kubenswrapper[5017]: I0129 06:55:16.482305 5017 scope.go:117] "RemoveContainer" containerID="b725d72e17f482b1a3a5bfe561a3570706c7df023022cbe4f8473a35ca875bd7" Jan 29 06:55:16 crc kubenswrapper[5017]: I0129 06:55:16.527917 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-8dn6j" podStartSLOduration=2.189357605 podStartE2EDuration="9.527892651s" podCreationTimestamp="2026-01-29 06:55:07 +0000 UTC" firstStartedPulling="2026-01-29 06:55:08.864031099 +0000 UTC m=+1195.238478719" lastFinishedPulling="2026-01-29 06:55:16.202566155 +0000 UTC m=+1202.577013765" observedRunningTime="2026-01-29 06:55:16.52543853 +0000 UTC m=+1202.899886160" watchObservedRunningTime="2026-01-29 06:55:16.527892651 +0000 UTC m=+1202.902340271" Jan 29 06:55:16 crc kubenswrapper[5017]: I0129 06:55:16.538857 5017 scope.go:117] "RemoveContainer" containerID="db3c0ae921866f4f767f56fe83177ed343e9edef6bfbe98fd0d0638f39d83126" Jan 29 06:55:16 crc kubenswrapper[5017]: I0129 06:55:16.556743 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/900bcf84-a22f-4c90-8037-e3e9deaad6ec-sg-core-conf-yaml\") pod \"900bcf84-a22f-4c90-8037-e3e9deaad6ec\" (UID: \"900bcf84-a22f-4c90-8037-e3e9deaad6ec\") " Jan 29 06:55:16 crc kubenswrapper[5017]: I0129 06:55:16.556909 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/900bcf84-a22f-4c90-8037-e3e9deaad6ec-log-httpd\") pod \"900bcf84-a22f-4c90-8037-e3e9deaad6ec\" (UID: \"900bcf84-a22f-4c90-8037-e3e9deaad6ec\") " Jan 29 06:55:16 crc kubenswrapper[5017]: I0129 06:55:16.556989 5017 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/900bcf84-a22f-4c90-8037-e3e9deaad6ec-config-data\") pod \"900bcf84-a22f-4c90-8037-e3e9deaad6ec\" (UID: \"900bcf84-a22f-4c90-8037-e3e9deaad6ec\") " Jan 29 06:55:16 crc kubenswrapper[5017]: I0129 06:55:16.557153 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/900bcf84-a22f-4c90-8037-e3e9deaad6ec-scripts\") pod \"900bcf84-a22f-4c90-8037-e3e9deaad6ec\" (UID: \"900bcf84-a22f-4c90-8037-e3e9deaad6ec\") " Jan 29 06:55:16 crc kubenswrapper[5017]: I0129 06:55:16.557240 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/900bcf84-a22f-4c90-8037-e3e9deaad6ec-combined-ca-bundle\") pod \"900bcf84-a22f-4c90-8037-e3e9deaad6ec\" (UID: \"900bcf84-a22f-4c90-8037-e3e9deaad6ec\") " Jan 29 06:55:16 crc kubenswrapper[5017]: I0129 06:55:16.557303 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/900bcf84-a22f-4c90-8037-e3e9deaad6ec-run-httpd\") pod \"900bcf84-a22f-4c90-8037-e3e9deaad6ec\" (UID: \"900bcf84-a22f-4c90-8037-e3e9deaad6ec\") " Jan 29 06:55:16 crc kubenswrapper[5017]: I0129 06:55:16.557353 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ztqjv\" (UniqueName: \"kubernetes.io/projected/900bcf84-a22f-4c90-8037-e3e9deaad6ec-kube-api-access-ztqjv\") pod \"900bcf84-a22f-4c90-8037-e3e9deaad6ec\" (UID: \"900bcf84-a22f-4c90-8037-e3e9deaad6ec\") " Jan 29 06:55:16 crc kubenswrapper[5017]: I0129 06:55:16.558172 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/900bcf84-a22f-4c90-8037-e3e9deaad6ec-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "900bcf84-a22f-4c90-8037-e3e9deaad6ec" (UID: "900bcf84-a22f-4c90-8037-e3e9deaad6ec"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:55:16 crc kubenswrapper[5017]: I0129 06:55:16.558764 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/900bcf84-a22f-4c90-8037-e3e9deaad6ec-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "900bcf84-a22f-4c90-8037-e3e9deaad6ec" (UID: "900bcf84-a22f-4c90-8037-e3e9deaad6ec"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:55:16 crc kubenswrapper[5017]: I0129 06:55:16.562464 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/900bcf84-a22f-4c90-8037-e3e9deaad6ec-scripts" (OuterVolumeSpecName: "scripts") pod "900bcf84-a22f-4c90-8037-e3e9deaad6ec" (UID: "900bcf84-a22f-4c90-8037-e3e9deaad6ec"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:55:16 crc kubenswrapper[5017]: I0129 06:55:16.563126 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/900bcf84-a22f-4c90-8037-e3e9deaad6ec-kube-api-access-ztqjv" (OuterVolumeSpecName: "kube-api-access-ztqjv") pod "900bcf84-a22f-4c90-8037-e3e9deaad6ec" (UID: "900bcf84-a22f-4c90-8037-e3e9deaad6ec"). InnerVolumeSpecName "kube-api-access-ztqjv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:55:16 crc kubenswrapper[5017]: I0129 06:55:16.564029 5017 scope.go:117] "RemoveContainer" containerID="ae2bc77107a5035da262cf4eadafc16c3049f80622a656702be5b736d2c8be91" Jan 29 06:55:16 crc kubenswrapper[5017]: I0129 06:55:16.596678 5017 scope.go:117] "RemoveContainer" containerID="b5f466fc8e3756f7ec6b3045630f55deb681918dbfc6655e03c3edd97674d1da" Jan 29 06:55:16 crc kubenswrapper[5017]: I0129 06:55:16.606155 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/900bcf84-a22f-4c90-8037-e3e9deaad6ec-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "900bcf84-a22f-4c90-8037-e3e9deaad6ec" (UID: "900bcf84-a22f-4c90-8037-e3e9deaad6ec"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:55:16 crc kubenswrapper[5017]: I0129 06:55:16.655816 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/900bcf84-a22f-4c90-8037-e3e9deaad6ec-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "900bcf84-a22f-4c90-8037-e3e9deaad6ec" (UID: "900bcf84-a22f-4c90-8037-e3e9deaad6ec"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:55:16 crc kubenswrapper[5017]: I0129 06:55:16.659703 5017 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/900bcf84-a22f-4c90-8037-e3e9deaad6ec-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 29 06:55:16 crc kubenswrapper[5017]: I0129 06:55:16.659734 5017 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/900bcf84-a22f-4c90-8037-e3e9deaad6ec-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 06:55:16 crc kubenswrapper[5017]: I0129 06:55:16.659745 5017 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/900bcf84-a22f-4c90-8037-e3e9deaad6ec-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 06:55:16 crc kubenswrapper[5017]: I0129 06:55:16.659756 5017 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/900bcf84-a22f-4c90-8037-e3e9deaad6ec-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 06:55:16 crc kubenswrapper[5017]: I0129 06:55:16.659765 5017 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/900bcf84-a22f-4c90-8037-e3e9deaad6ec-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 06:55:16 crc kubenswrapper[5017]: I0129 06:55:16.659775 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ztqjv\" (UniqueName: \"kubernetes.io/projected/900bcf84-a22f-4c90-8037-e3e9deaad6ec-kube-api-access-ztqjv\") on node \"crc\" DevicePath \"\"" Jan 29 06:55:16 crc kubenswrapper[5017]: I0129 06:55:16.679551 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/900bcf84-a22f-4c90-8037-e3e9deaad6ec-config-data" (OuterVolumeSpecName: "config-data") pod "900bcf84-a22f-4c90-8037-e3e9deaad6ec" (UID: "900bcf84-a22f-4c90-8037-e3e9deaad6ec"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:55:16 crc kubenswrapper[5017]: I0129 06:55:16.761412 5017 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/900bcf84-a22f-4c90-8037-e3e9deaad6ec-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 06:55:16 crc kubenswrapper[5017]: I0129 06:55:16.880790 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 06:55:16 crc kubenswrapper[5017]: I0129 06:55:16.892087 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 29 06:55:16 crc kubenswrapper[5017]: I0129 06:55:16.937859 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 29 06:55:16 crc kubenswrapper[5017]: E0129 06:55:16.940265 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="900bcf84-a22f-4c90-8037-e3e9deaad6ec" containerName="ceilometer-central-agent" Jan 29 06:55:16 crc kubenswrapper[5017]: I0129 06:55:16.940293 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="900bcf84-a22f-4c90-8037-e3e9deaad6ec" containerName="ceilometer-central-agent" Jan 29 06:55:16 crc kubenswrapper[5017]: E0129 06:55:16.940340 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="900bcf84-a22f-4c90-8037-e3e9deaad6ec" containerName="proxy-httpd" Jan 29 06:55:16 crc kubenswrapper[5017]: I0129 06:55:16.940351 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="900bcf84-a22f-4c90-8037-e3e9deaad6ec" containerName="proxy-httpd" Jan 29 06:55:16 crc kubenswrapper[5017]: E0129 06:55:16.942129 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="900bcf84-a22f-4c90-8037-e3e9deaad6ec" containerName="sg-core" Jan 29 06:55:16 crc kubenswrapper[5017]: I0129 06:55:16.942152 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="900bcf84-a22f-4c90-8037-e3e9deaad6ec" containerName="sg-core" Jan 29 06:55:16 crc kubenswrapper[5017]: E0129 06:55:16.942282 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="900bcf84-a22f-4c90-8037-e3e9deaad6ec" containerName="ceilometer-notification-agent" Jan 29 06:55:16 crc kubenswrapper[5017]: I0129 06:55:16.942295 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="900bcf84-a22f-4c90-8037-e3e9deaad6ec" containerName="ceilometer-notification-agent" Jan 29 06:55:16 crc kubenswrapper[5017]: I0129 06:55:16.945480 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="900bcf84-a22f-4c90-8037-e3e9deaad6ec" containerName="ceilometer-notification-agent" Jan 29 06:55:16 crc kubenswrapper[5017]: I0129 06:55:16.945916 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="900bcf84-a22f-4c90-8037-e3e9deaad6ec" containerName="sg-core" Jan 29 06:55:16 crc kubenswrapper[5017]: I0129 06:55:16.946013 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="900bcf84-a22f-4c90-8037-e3e9deaad6ec" containerName="proxy-httpd" Jan 29 06:55:16 crc kubenswrapper[5017]: I0129 06:55:16.946401 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="900bcf84-a22f-4c90-8037-e3e9deaad6ec" containerName="ceilometer-central-agent" Jan 29 06:55:16 crc kubenswrapper[5017]: I0129 06:55:16.952295 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 06:55:16 crc kubenswrapper[5017]: I0129 06:55:16.957075 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 29 06:55:16 crc kubenswrapper[5017]: I0129 06:55:16.957505 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 29 06:55:16 crc kubenswrapper[5017]: I0129 06:55:16.965619 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95cfbfdc-d831-48a6-9de9-5511bd7587d9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"95cfbfdc-d831-48a6-9de9-5511bd7587d9\") " pod="openstack/ceilometer-0" Jan 29 06:55:16 crc kubenswrapper[5017]: I0129 06:55:16.965711 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95cfbfdc-d831-48a6-9de9-5511bd7587d9-scripts\") pod \"ceilometer-0\" (UID: \"95cfbfdc-d831-48a6-9de9-5511bd7587d9\") " pod="openstack/ceilometer-0" Jan 29 06:55:16 crc kubenswrapper[5017]: I0129 06:55:16.965758 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzw5s\" (UniqueName: \"kubernetes.io/projected/95cfbfdc-d831-48a6-9de9-5511bd7587d9-kube-api-access-nzw5s\") pod \"ceilometer-0\" (UID: \"95cfbfdc-d831-48a6-9de9-5511bd7587d9\") " pod="openstack/ceilometer-0" Jan 29 06:55:16 crc kubenswrapper[5017]: I0129 06:55:16.965847 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/95cfbfdc-d831-48a6-9de9-5511bd7587d9-run-httpd\") pod \"ceilometer-0\" (UID: \"95cfbfdc-d831-48a6-9de9-5511bd7587d9\") " pod="openstack/ceilometer-0" Jan 29 06:55:16 crc kubenswrapper[5017]: I0129 06:55:16.966124 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/95cfbfdc-d831-48a6-9de9-5511bd7587d9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"95cfbfdc-d831-48a6-9de9-5511bd7587d9\") " pod="openstack/ceilometer-0" Jan 29 06:55:16 crc kubenswrapper[5017]: I0129 06:55:16.966198 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/95cfbfdc-d831-48a6-9de9-5511bd7587d9-log-httpd\") pod \"ceilometer-0\" (UID: \"95cfbfdc-d831-48a6-9de9-5511bd7587d9\") " pod="openstack/ceilometer-0" Jan 29 06:55:16 crc kubenswrapper[5017]: I0129 06:55:16.966275 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95cfbfdc-d831-48a6-9de9-5511bd7587d9-config-data\") pod \"ceilometer-0\" (UID: \"95cfbfdc-d831-48a6-9de9-5511bd7587d9\") " pod="openstack/ceilometer-0" Jan 29 06:55:16 crc kubenswrapper[5017]: I0129 06:55:16.967199 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 06:55:17 crc kubenswrapper[5017]: I0129 06:55:17.068679 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95cfbfdc-d831-48a6-9de9-5511bd7587d9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"95cfbfdc-d831-48a6-9de9-5511bd7587d9\") " pod="openstack/ceilometer-0" Jan 29 06:55:17 crc kubenswrapper[5017]: I0129 
06:55:17.068805 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95cfbfdc-d831-48a6-9de9-5511bd7587d9-scripts\") pod \"ceilometer-0\" (UID: \"95cfbfdc-d831-48a6-9de9-5511bd7587d9\") " pod="openstack/ceilometer-0" Jan 29 06:55:17 crc kubenswrapper[5017]: I0129 06:55:17.068856 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzw5s\" (UniqueName: \"kubernetes.io/projected/95cfbfdc-d831-48a6-9de9-5511bd7587d9-kube-api-access-nzw5s\") pod \"ceilometer-0\" (UID: \"95cfbfdc-d831-48a6-9de9-5511bd7587d9\") " pod="openstack/ceilometer-0" Jan 29 06:55:17 crc kubenswrapper[5017]: I0129 06:55:17.068884 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/95cfbfdc-d831-48a6-9de9-5511bd7587d9-run-httpd\") pod \"ceilometer-0\" (UID: \"95cfbfdc-d831-48a6-9de9-5511bd7587d9\") " pod="openstack/ceilometer-0" Jan 29 06:55:17 crc kubenswrapper[5017]: I0129 06:55:17.068958 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/95cfbfdc-d831-48a6-9de9-5511bd7587d9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"95cfbfdc-d831-48a6-9de9-5511bd7587d9\") " pod="openstack/ceilometer-0" Jan 29 06:55:17 crc kubenswrapper[5017]: I0129 06:55:17.068996 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/95cfbfdc-d831-48a6-9de9-5511bd7587d9-log-httpd\") pod \"ceilometer-0\" (UID: \"95cfbfdc-d831-48a6-9de9-5511bd7587d9\") " pod="openstack/ceilometer-0" Jan 29 06:55:17 crc kubenswrapper[5017]: I0129 06:55:17.069046 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95cfbfdc-d831-48a6-9de9-5511bd7587d9-config-data\") pod \"ceilometer-0\" (UID: \"95cfbfdc-d831-48a6-9de9-5511bd7587d9\") " pod="openstack/ceilometer-0" Jan 29 06:55:17 crc kubenswrapper[5017]: I0129 06:55:17.070716 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/95cfbfdc-d831-48a6-9de9-5511bd7587d9-log-httpd\") pod \"ceilometer-0\" (UID: \"95cfbfdc-d831-48a6-9de9-5511bd7587d9\") " pod="openstack/ceilometer-0" Jan 29 06:55:17 crc kubenswrapper[5017]: I0129 06:55:17.070789 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/95cfbfdc-d831-48a6-9de9-5511bd7587d9-run-httpd\") pod \"ceilometer-0\" (UID: \"95cfbfdc-d831-48a6-9de9-5511bd7587d9\") " pod="openstack/ceilometer-0" Jan 29 06:55:17 crc kubenswrapper[5017]: I0129 06:55:17.076171 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/95cfbfdc-d831-48a6-9de9-5511bd7587d9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"95cfbfdc-d831-48a6-9de9-5511bd7587d9\") " pod="openstack/ceilometer-0" Jan 29 06:55:17 crc kubenswrapper[5017]: I0129 06:55:17.076361 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95cfbfdc-d831-48a6-9de9-5511bd7587d9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"95cfbfdc-d831-48a6-9de9-5511bd7587d9\") " pod="openstack/ceilometer-0" Jan 29 06:55:17 crc kubenswrapper[5017]: I0129 06:55:17.076680 5017 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95cfbfdc-d831-48a6-9de9-5511bd7587d9-config-data\") pod \"ceilometer-0\" (UID: \"95cfbfdc-d831-48a6-9de9-5511bd7587d9\") " pod="openstack/ceilometer-0" Jan 29 06:55:17 crc kubenswrapper[5017]: I0129 06:55:17.088052 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95cfbfdc-d831-48a6-9de9-5511bd7587d9-scripts\") pod \"ceilometer-0\" (UID: \"95cfbfdc-d831-48a6-9de9-5511bd7587d9\") " pod="openstack/ceilometer-0" Jan 29 06:55:17 crc kubenswrapper[5017]: I0129 06:55:17.089227 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzw5s\" (UniqueName: \"kubernetes.io/projected/95cfbfdc-d831-48a6-9de9-5511bd7587d9-kube-api-access-nzw5s\") pod \"ceilometer-0\" (UID: \"95cfbfdc-d831-48a6-9de9-5511bd7587d9\") " pod="openstack/ceilometer-0" Jan 29 06:55:17 crc kubenswrapper[5017]: I0129 06:55:17.275181 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 06:55:17 crc kubenswrapper[5017]: I0129 06:55:17.807792 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 06:55:17 crc kubenswrapper[5017]: W0129 06:55:17.826338 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod95cfbfdc_d831_48a6_9de9_5511bd7587d9.slice/crio-f87c1308c09739af8816abd0e91a4011153bbbc62a743ca8ef9db06f26446dbc WatchSource:0}: Error finding container f87c1308c09739af8816abd0e91a4011153bbbc62a743ca8ef9db06f26446dbc: Status 404 returned error can't find the container with id f87c1308c09739af8816abd0e91a4011153bbbc62a743ca8ef9db06f26446dbc Jan 29 06:55:18 crc kubenswrapper[5017]: I0129 06:55:18.333336 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="900bcf84-a22f-4c90-8037-e3e9deaad6ec" path="/var/lib/kubelet/pods/900bcf84-a22f-4c90-8037-e3e9deaad6ec/volumes" Jan 29 06:55:18 crc kubenswrapper[5017]: I0129 06:55:18.514730 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"95cfbfdc-d831-48a6-9de9-5511bd7587d9","Type":"ContainerStarted","Data":"5a5553bca56453bc7dd383b571e644904570a6ff37646732ab46bfc63d4356bb"} Jan 29 06:55:18 crc kubenswrapper[5017]: I0129 06:55:18.514782 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"95cfbfdc-d831-48a6-9de9-5511bd7587d9","Type":"ContainerStarted","Data":"f87c1308c09739af8816abd0e91a4011153bbbc62a743ca8ef9db06f26446dbc"} Jan 29 06:55:19 crc kubenswrapper[5017]: I0129 06:55:19.524967 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"95cfbfdc-d831-48a6-9de9-5511bd7587d9","Type":"ContainerStarted","Data":"f77b5ae493cc98d9d20046895a4f54e4268e388e3d9a9a199fa78ef8cda58123"} Jan 29 06:55:20 crc kubenswrapper[5017]: I0129 06:55:20.538275 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"95cfbfdc-d831-48a6-9de9-5511bd7587d9","Type":"ContainerStarted","Data":"6196da4d03910bc572acc5cc95df2d7dcae2ef492670dad5d9b79065497f7322"} Jan 29 06:55:22 crc kubenswrapper[5017]: I0129 06:55:22.563446 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"95cfbfdc-d831-48a6-9de9-5511bd7587d9","Type":"ContainerStarted","Data":"d6f73092504fdd82e85c07a381c7b43721cdbd10af54bf12d28c31365e79fc0b"} Jan 29 
06:55:22 crc kubenswrapper[5017]: I0129 06:55:22.564251 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 29 06:55:22 crc kubenswrapper[5017]: I0129 06:55:22.602725 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.872463482 podStartE2EDuration="6.602697152s" podCreationTimestamp="2026-01-29 06:55:16 +0000 UTC" firstStartedPulling="2026-01-29 06:55:17.830887505 +0000 UTC m=+1204.205335115" lastFinishedPulling="2026-01-29 06:55:21.561121175 +0000 UTC m=+1207.935568785" observedRunningTime="2026-01-29 06:55:22.582494984 +0000 UTC m=+1208.956942624" watchObservedRunningTime="2026-01-29 06:55:22.602697152 +0000 UTC m=+1208.977144762" Jan 29 06:55:28 crc kubenswrapper[5017]: I0129 06:55:28.641294 5017 generic.go:334] "Generic (PLEG): container finished" podID="f541766b-7fae-4f87-8c2b-97e269d15c84" containerID="55b4fe46b945125911fbc023ac6464f7526116f37457a6150cc275fbf8d698ab" exitCode=0 Jan 29 06:55:28 crc kubenswrapper[5017]: I0129 06:55:28.641419 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-8dn6j" event={"ID":"f541766b-7fae-4f87-8c2b-97e269d15c84","Type":"ContainerDied","Data":"55b4fe46b945125911fbc023ac6464f7526116f37457a6150cc275fbf8d698ab"} Jan 29 06:55:30 crc kubenswrapper[5017]: I0129 06:55:30.042412 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-8dn6j" Jan 29 06:55:30 crc kubenswrapper[5017]: I0129 06:55:30.207400 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f541766b-7fae-4f87-8c2b-97e269d15c84-combined-ca-bundle\") pod \"f541766b-7fae-4f87-8c2b-97e269d15c84\" (UID: \"f541766b-7fae-4f87-8c2b-97e269d15c84\") " Jan 29 06:55:30 crc kubenswrapper[5017]: I0129 06:55:30.207531 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pp6pk\" (UniqueName: \"kubernetes.io/projected/f541766b-7fae-4f87-8c2b-97e269d15c84-kube-api-access-pp6pk\") pod \"f541766b-7fae-4f87-8c2b-97e269d15c84\" (UID: \"f541766b-7fae-4f87-8c2b-97e269d15c84\") " Jan 29 06:55:30 crc kubenswrapper[5017]: I0129 06:55:30.207567 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f541766b-7fae-4f87-8c2b-97e269d15c84-scripts\") pod \"f541766b-7fae-4f87-8c2b-97e269d15c84\" (UID: \"f541766b-7fae-4f87-8c2b-97e269d15c84\") " Jan 29 06:55:30 crc kubenswrapper[5017]: I0129 06:55:30.207710 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f541766b-7fae-4f87-8c2b-97e269d15c84-config-data\") pod \"f541766b-7fae-4f87-8c2b-97e269d15c84\" (UID: \"f541766b-7fae-4f87-8c2b-97e269d15c84\") " Jan 29 06:55:30 crc kubenswrapper[5017]: I0129 06:55:30.215602 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f541766b-7fae-4f87-8c2b-97e269d15c84-scripts" (OuterVolumeSpecName: "scripts") pod "f541766b-7fae-4f87-8c2b-97e269d15c84" (UID: "f541766b-7fae-4f87-8c2b-97e269d15c84"). InnerVolumeSpecName "scripts". 
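[Editor's note] The pod_startup_latency_tracker record above carries enough to reconstruct its own arithmetic: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration additionally subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling), i.e. 6.602697152s - 3.730233670s = 2.872463482s for ceilometer-0. A small check using the exact timestamps from the record (the field semantics are inferred from how the numbers line up):

```go
package main

import (
	"fmt"
	"strings"
	"time"
)

// Layout matching kubelet's printed time.Time values.
const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func mustParse(s string) time.Time {
	// Drop the monotonic-clock suffix ("m=+...") that kubelet appends.
	if i := strings.Index(s, " m=+"); i >= 0 {
		s = s[:i]
	}
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2026-01-29 06:55:16 +0000 UTC")
	firstPull := mustParse("2026-01-29 06:55:17.830887505 +0000 UTC m=+1204.205335115")
	lastPull := mustParse("2026-01-29 06:55:21.561121175 +0000 UTC m=+1207.935568785")
	observed := mustParse("2026-01-29 06:55:22.602697152 +0000 UTC m=+1208.977144762")

	e2e := observed.Sub(created)             // total time from creation to observed running
	slo := e2e - lastPull.Sub(firstPull)     // same, excluding the image-pull window
	fmt.Println("podStartE2EDuration:", e2e) // 6.602697152s
	fmt.Println("podStartSLOduration:", slo) // 2.872463482s
}
```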
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:55:30 crc kubenswrapper[5017]: I0129 06:55:30.222594 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f541766b-7fae-4f87-8c2b-97e269d15c84-kube-api-access-pp6pk" (OuterVolumeSpecName: "kube-api-access-pp6pk") pod "f541766b-7fae-4f87-8c2b-97e269d15c84" (UID: "f541766b-7fae-4f87-8c2b-97e269d15c84"). InnerVolumeSpecName "kube-api-access-pp6pk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:55:30 crc kubenswrapper[5017]: I0129 06:55:30.241290 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f541766b-7fae-4f87-8c2b-97e269d15c84-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f541766b-7fae-4f87-8c2b-97e269d15c84" (UID: "f541766b-7fae-4f87-8c2b-97e269d15c84"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:55:30 crc kubenswrapper[5017]: I0129 06:55:30.249191 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f541766b-7fae-4f87-8c2b-97e269d15c84-config-data" (OuterVolumeSpecName: "config-data") pod "f541766b-7fae-4f87-8c2b-97e269d15c84" (UID: "f541766b-7fae-4f87-8c2b-97e269d15c84"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:55:30 crc kubenswrapper[5017]: I0129 06:55:30.311028 5017 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f541766b-7fae-4f87-8c2b-97e269d15c84-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 06:55:30 crc kubenswrapper[5017]: I0129 06:55:30.311092 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pp6pk\" (UniqueName: \"kubernetes.io/projected/f541766b-7fae-4f87-8c2b-97e269d15c84-kube-api-access-pp6pk\") on node \"crc\" DevicePath \"\"" Jan 29 06:55:30 crc kubenswrapper[5017]: I0129 06:55:30.311112 5017 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f541766b-7fae-4f87-8c2b-97e269d15c84-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 06:55:30 crc kubenswrapper[5017]: I0129 06:55:30.311124 5017 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f541766b-7fae-4f87-8c2b-97e269d15c84-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 06:55:30 crc kubenswrapper[5017]: I0129 06:55:30.670846 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-8dn6j" event={"ID":"f541766b-7fae-4f87-8c2b-97e269d15c84","Type":"ContainerDied","Data":"cc123f7f4fa8bde30bcf529442c3c8b60ef55b85234644fce38f61b81488c345"} Jan 29 06:55:30 crc kubenswrapper[5017]: I0129 06:55:30.670904 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc123f7f4fa8bde30bcf529442c3c8b60ef55b85234644fce38f61b81488c345" Jan 29 06:55:30 crc kubenswrapper[5017]: I0129 06:55:30.671024 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-8dn6j" Jan 29 06:55:30 crc kubenswrapper[5017]: I0129 06:55:30.775694 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 29 06:55:30 crc kubenswrapper[5017]: E0129 06:55:30.776177 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f541766b-7fae-4f87-8c2b-97e269d15c84" containerName="nova-cell0-conductor-db-sync" Jan 29 06:55:30 crc kubenswrapper[5017]: I0129 06:55:30.776198 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="f541766b-7fae-4f87-8c2b-97e269d15c84" containerName="nova-cell0-conductor-db-sync" Jan 29 06:55:30 crc kubenswrapper[5017]: I0129 06:55:30.776388 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="f541766b-7fae-4f87-8c2b-97e269d15c84" containerName="nova-cell0-conductor-db-sync" Jan 29 06:55:30 crc kubenswrapper[5017]: I0129 06:55:30.777184 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 29 06:55:30 crc kubenswrapper[5017]: I0129 06:55:30.781493 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-4jmqf" Jan 29 06:55:30 crc kubenswrapper[5017]: I0129 06:55:30.781706 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 29 06:55:30 crc kubenswrapper[5017]: I0129 06:55:30.800246 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 29 06:55:30 crc kubenswrapper[5017]: I0129 06:55:30.822686 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aaa7337b-7686-4fd2-9c52-6b76f9f3a3b1-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"aaa7337b-7686-4fd2-9c52-6b76f9f3a3b1\") " pod="openstack/nova-cell0-conductor-0" Jan 29 06:55:30 crc kubenswrapper[5017]: I0129 06:55:30.822864 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aaa7337b-7686-4fd2-9c52-6b76f9f3a3b1-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"aaa7337b-7686-4fd2-9c52-6b76f9f3a3b1\") " pod="openstack/nova-cell0-conductor-0" Jan 29 06:55:30 crc kubenswrapper[5017]: I0129 06:55:30.822911 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7b62t\" (UniqueName: \"kubernetes.io/projected/aaa7337b-7686-4fd2-9c52-6b76f9f3a3b1-kube-api-access-7b62t\") pod \"nova-cell0-conductor-0\" (UID: \"aaa7337b-7686-4fd2-9c52-6b76f9f3a3b1\") " pod="openstack/nova-cell0-conductor-0" Jan 29 06:55:30 crc kubenswrapper[5017]: I0129 06:55:30.925239 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aaa7337b-7686-4fd2-9c52-6b76f9f3a3b1-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"aaa7337b-7686-4fd2-9c52-6b76f9f3a3b1\") " pod="openstack/nova-cell0-conductor-0" Jan 29 06:55:30 crc kubenswrapper[5017]: I0129 06:55:30.925337 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7b62t\" (UniqueName: \"kubernetes.io/projected/aaa7337b-7686-4fd2-9c52-6b76f9f3a3b1-kube-api-access-7b62t\") pod \"nova-cell0-conductor-0\" (UID: \"aaa7337b-7686-4fd2-9c52-6b76f9f3a3b1\") " pod="openstack/nova-cell0-conductor-0" Jan 29 06:55:30 crc kubenswrapper[5017]: 
I0129 06:55:30.925543 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aaa7337b-7686-4fd2-9c52-6b76f9f3a3b1-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"aaa7337b-7686-4fd2-9c52-6b76f9f3a3b1\") " pod="openstack/nova-cell0-conductor-0" Jan 29 06:55:30 crc kubenswrapper[5017]: I0129 06:55:30.932688 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aaa7337b-7686-4fd2-9c52-6b76f9f3a3b1-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"aaa7337b-7686-4fd2-9c52-6b76f9f3a3b1\") " pod="openstack/nova-cell0-conductor-0" Jan 29 06:55:30 crc kubenswrapper[5017]: I0129 06:55:30.934214 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aaa7337b-7686-4fd2-9c52-6b76f9f3a3b1-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"aaa7337b-7686-4fd2-9c52-6b76f9f3a3b1\") " pod="openstack/nova-cell0-conductor-0" Jan 29 06:55:30 crc kubenswrapper[5017]: I0129 06:55:30.946847 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7b62t\" (UniqueName: \"kubernetes.io/projected/aaa7337b-7686-4fd2-9c52-6b76f9f3a3b1-kube-api-access-7b62t\") pod \"nova-cell0-conductor-0\" (UID: \"aaa7337b-7686-4fd2-9c52-6b76f9f3a3b1\") " pod="openstack/nova-cell0-conductor-0" Jan 29 06:55:31 crc kubenswrapper[5017]: I0129 06:55:31.106928 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 29 06:55:31 crc kubenswrapper[5017]: I0129 06:55:31.450799 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 29 06:55:31 crc kubenswrapper[5017]: W0129 06:55:31.457203 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaaa7337b_7686_4fd2_9c52_6b76f9f3a3b1.slice/crio-7bef3d53b020be2910231ae36677fcb80cae47c1e8a1f27f1cc054053fbdac74 WatchSource:0}: Error finding container 7bef3d53b020be2910231ae36677fcb80cae47c1e8a1f27f1cc054053fbdac74: Status 404 returned error can't find the container with id 7bef3d53b020be2910231ae36677fcb80cae47c1e8a1f27f1cc054053fbdac74 Jan 29 06:55:31 crc kubenswrapper[5017]: I0129 06:55:31.682056 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"aaa7337b-7686-4fd2-9c52-6b76f9f3a3b1","Type":"ContainerStarted","Data":"7bef3d53b020be2910231ae36677fcb80cae47c1e8a1f27f1cc054053fbdac74"} Jan 29 06:55:32 crc kubenswrapper[5017]: I0129 06:55:32.707569 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"aaa7337b-7686-4fd2-9c52-6b76f9f3a3b1","Type":"ContainerStarted","Data":"86c4c472b3c3903633b1b443d03a5262fb47cb7d8c1f6bdbf87f276659c9e547"} Jan 29 06:55:32 crc kubenswrapper[5017]: I0129 06:55:32.708588 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 29 06:55:32 crc kubenswrapper[5017]: I0129 06:55:32.738245 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.738219286 podStartE2EDuration="2.738219286s" podCreationTimestamp="2026-01-29 06:55:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 
06:55:32.727110262 +0000 UTC m=+1219.101557912" watchObservedRunningTime="2026-01-29 06:55:32.738219286 +0000 UTC m=+1219.112666896" Jan 29 06:55:36 crc kubenswrapper[5017]: I0129 06:55:36.135186 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 29 06:55:36 crc kubenswrapper[5017]: I0129 06:55:36.621039 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-wxns2"] Jan 29 06:55:36 crc kubenswrapper[5017]: I0129 06:55:36.622670 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-wxns2" Jan 29 06:55:36 crc kubenswrapper[5017]: I0129 06:55:36.634321 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 29 06:55:36 crc kubenswrapper[5017]: I0129 06:55:36.634528 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 29 06:55:36 crc kubenswrapper[5017]: I0129 06:55:36.650592 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-wxns2"] Jan 29 06:55:36 crc kubenswrapper[5017]: I0129 06:55:36.674272 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e2f340e-2e9f-4711-a13b-1618bd1fbec4-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-wxns2\" (UID: \"8e2f340e-2e9f-4711-a13b-1618bd1fbec4\") " pod="openstack/nova-cell0-cell-mapping-wxns2" Jan 29 06:55:36 crc kubenswrapper[5017]: I0129 06:55:36.674705 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e2f340e-2e9f-4711-a13b-1618bd1fbec4-scripts\") pod \"nova-cell0-cell-mapping-wxns2\" (UID: \"8e2f340e-2e9f-4711-a13b-1618bd1fbec4\") " pod="openstack/nova-cell0-cell-mapping-wxns2" Jan 29 06:55:36 crc kubenswrapper[5017]: I0129 06:55:36.674826 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sm2d8\" (UniqueName: \"kubernetes.io/projected/8e2f340e-2e9f-4711-a13b-1618bd1fbec4-kube-api-access-sm2d8\") pod \"nova-cell0-cell-mapping-wxns2\" (UID: \"8e2f340e-2e9f-4711-a13b-1618bd1fbec4\") " pod="openstack/nova-cell0-cell-mapping-wxns2" Jan 29 06:55:36 crc kubenswrapper[5017]: I0129 06:55:36.674974 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e2f340e-2e9f-4711-a13b-1618bd1fbec4-config-data\") pod \"nova-cell0-cell-mapping-wxns2\" (UID: \"8e2f340e-2e9f-4711-a13b-1618bd1fbec4\") " pod="openstack/nova-cell0-cell-mapping-wxns2" Jan 29 06:55:36 crc kubenswrapper[5017]: I0129 06:55:36.778833 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e2f340e-2e9f-4711-a13b-1618bd1fbec4-scripts\") pod \"nova-cell0-cell-mapping-wxns2\" (UID: \"8e2f340e-2e9f-4711-a13b-1618bd1fbec4\") " pod="openstack/nova-cell0-cell-mapping-wxns2" Jan 29 06:55:36 crc kubenswrapper[5017]: I0129 06:55:36.779206 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sm2d8\" (UniqueName: \"kubernetes.io/projected/8e2f340e-2e9f-4711-a13b-1618bd1fbec4-kube-api-access-sm2d8\") pod \"nova-cell0-cell-mapping-wxns2\" (UID: \"8e2f340e-2e9f-4711-a13b-1618bd1fbec4\") " 
pod="openstack/nova-cell0-cell-mapping-wxns2" Jan 29 06:55:36 crc kubenswrapper[5017]: I0129 06:55:36.779348 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e2f340e-2e9f-4711-a13b-1618bd1fbec4-config-data\") pod \"nova-cell0-cell-mapping-wxns2\" (UID: \"8e2f340e-2e9f-4711-a13b-1618bd1fbec4\") " pod="openstack/nova-cell0-cell-mapping-wxns2" Jan 29 06:55:36 crc kubenswrapper[5017]: I0129 06:55:36.779467 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e2f340e-2e9f-4711-a13b-1618bd1fbec4-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-wxns2\" (UID: \"8e2f340e-2e9f-4711-a13b-1618bd1fbec4\") " pod="openstack/nova-cell0-cell-mapping-wxns2" Jan 29 06:55:36 crc kubenswrapper[5017]: I0129 06:55:36.784460 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 06:55:36 crc kubenswrapper[5017]: I0129 06:55:36.791006 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 29 06:55:36 crc kubenswrapper[5017]: I0129 06:55:36.797364 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 29 06:55:36 crc kubenswrapper[5017]: I0129 06:55:36.805769 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e2f340e-2e9f-4711-a13b-1618bd1fbec4-config-data\") pod \"nova-cell0-cell-mapping-wxns2\" (UID: \"8e2f340e-2e9f-4711-a13b-1618bd1fbec4\") " pod="openstack/nova-cell0-cell-mapping-wxns2" Jan 29 06:55:36 crc kubenswrapper[5017]: I0129 06:55:36.829893 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e2f340e-2e9f-4711-a13b-1618bd1fbec4-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-wxns2\" (UID: \"8e2f340e-2e9f-4711-a13b-1618bd1fbec4\") " pod="openstack/nova-cell0-cell-mapping-wxns2" Jan 29 06:55:36 crc kubenswrapper[5017]: I0129 06:55:36.854435 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 06:55:36 crc kubenswrapper[5017]: I0129 06:55:36.860850 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e2f340e-2e9f-4711-a13b-1618bd1fbec4-scripts\") pod \"nova-cell0-cell-mapping-wxns2\" (UID: \"8e2f340e-2e9f-4711-a13b-1618bd1fbec4\") " pod="openstack/nova-cell0-cell-mapping-wxns2" Jan 29 06:55:36 crc kubenswrapper[5017]: I0129 06:55:36.861369 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sm2d8\" (UniqueName: \"kubernetes.io/projected/8e2f340e-2e9f-4711-a13b-1618bd1fbec4-kube-api-access-sm2d8\") pod \"nova-cell0-cell-mapping-wxns2\" (UID: \"8e2f340e-2e9f-4711-a13b-1618bd1fbec4\") " pod="openstack/nova-cell0-cell-mapping-wxns2" Jan 29 06:55:36 crc kubenswrapper[5017]: I0129 06:55:36.881621 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a35e9ed1-19c4-4412-b252-ead2940ab008-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"a35e9ed1-19c4-4412-b252-ead2940ab008\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 06:55:36 crc kubenswrapper[5017]: I0129 06:55:36.881732 5017 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwgzb\" (UniqueName: \"kubernetes.io/projected/a35e9ed1-19c4-4412-b252-ead2940ab008-kube-api-access-gwgzb\") pod \"nova-cell1-novncproxy-0\" (UID: \"a35e9ed1-19c4-4412-b252-ead2940ab008\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 06:55:36 crc kubenswrapper[5017]: I0129 06:55:36.881756 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a35e9ed1-19c4-4412-b252-ead2940ab008-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"a35e9ed1-19c4-4412-b252-ead2940ab008\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 06:55:36 crc kubenswrapper[5017]: I0129 06:55:36.934050 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 06:55:36 crc kubenswrapper[5017]: I0129 06:55:36.935941 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 06:55:36 crc kubenswrapper[5017]: I0129 06:55:36.939433 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 29 06:55:36 crc kubenswrapper[5017]: I0129 06:55:36.949645 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 29 06:55:36 crc kubenswrapper[5017]: I0129 06:55:36.951450 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 06:55:36 crc kubenswrapper[5017]: I0129 06:55:36.960440 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 29 06:55:36 crc kubenswrapper[5017]: I0129 06:55:36.962043 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-wxns2" Jan 29 06:55:36 crc kubenswrapper[5017]: I0129 06:55:36.987050 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/76a68bb7-8566-4ef7-9749-5578ae481f94-logs\") pod \"nova-metadata-0\" (UID: \"76a68bb7-8566-4ef7-9749-5578ae481f94\") " pod="openstack/nova-metadata-0" Jan 29 06:55:36 crc kubenswrapper[5017]: I0129 06:55:36.987121 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwgzb\" (UniqueName: \"kubernetes.io/projected/a35e9ed1-19c4-4412-b252-ead2940ab008-kube-api-access-gwgzb\") pod \"nova-cell1-novncproxy-0\" (UID: \"a35e9ed1-19c4-4412-b252-ead2940ab008\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 06:55:36 crc kubenswrapper[5017]: I0129 06:55:36.987148 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a35e9ed1-19c4-4412-b252-ead2940ab008-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"a35e9ed1-19c4-4412-b252-ead2940ab008\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 06:55:36 crc kubenswrapper[5017]: I0129 06:55:36.987175 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76a68bb7-8566-4ef7-9749-5578ae481f94-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"76a68bb7-8566-4ef7-9749-5578ae481f94\") " pod="openstack/nova-metadata-0" Jan 29 06:55:36 crc kubenswrapper[5017]: I0129 06:55:36.987201 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x68wm\" (UniqueName: \"kubernetes.io/projected/76a68bb7-8566-4ef7-9749-5578ae481f94-kube-api-access-x68wm\") pod \"nova-metadata-0\" (UID: \"76a68bb7-8566-4ef7-9749-5578ae481f94\") " pod="openstack/nova-metadata-0" Jan 29 06:55:36 crc kubenswrapper[5017]: I0129 06:55:36.987274 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db399f6d-aabe-4423-840e-56d457b5dcb4-config-data\") pod \"nova-scheduler-0\" (UID: \"db399f6d-aabe-4423-840e-56d457b5dcb4\") " pod="openstack/nova-scheduler-0" Jan 29 06:55:36 crc kubenswrapper[5017]: I0129 06:55:36.987359 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76a68bb7-8566-4ef7-9749-5578ae481f94-config-data\") pod \"nova-metadata-0\" (UID: \"76a68bb7-8566-4ef7-9749-5578ae481f94\") " pod="openstack/nova-metadata-0" Jan 29 06:55:36 crc kubenswrapper[5017]: I0129 06:55:36.987390 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a35e9ed1-19c4-4412-b252-ead2940ab008-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"a35e9ed1-19c4-4412-b252-ead2940ab008\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 06:55:36 crc kubenswrapper[5017]: I0129 06:55:36.987437 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db399f6d-aabe-4423-840e-56d457b5dcb4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"db399f6d-aabe-4423-840e-56d457b5dcb4\") " pod="openstack/nova-scheduler-0" Jan 29 06:55:36 crc kubenswrapper[5017]: I0129 
06:55:36.987462 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vm542\" (UniqueName: \"kubernetes.io/projected/db399f6d-aabe-4423-840e-56d457b5dcb4-kube-api-access-vm542\") pod \"nova-scheduler-0\" (UID: \"db399f6d-aabe-4423-840e-56d457b5dcb4\") " pod="openstack/nova-scheduler-0" Jan 29 06:55:37 crc kubenswrapper[5017]: I0129 06:55:37.011692 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a35e9ed1-19c4-4412-b252-ead2940ab008-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"a35e9ed1-19c4-4412-b252-ead2940ab008\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 06:55:37 crc kubenswrapper[5017]: I0129 06:55:37.011773 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 06:55:37 crc kubenswrapper[5017]: I0129 06:55:37.043295 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a35e9ed1-19c4-4412-b252-ead2940ab008-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"a35e9ed1-19c4-4412-b252-ead2940ab008\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 06:55:37 crc kubenswrapper[5017]: I0129 06:55:37.048667 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwgzb\" (UniqueName: \"kubernetes.io/projected/a35e9ed1-19c4-4412-b252-ead2940ab008-kube-api-access-gwgzb\") pod \"nova-cell1-novncproxy-0\" (UID: \"a35e9ed1-19c4-4412-b252-ead2940ab008\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 06:55:37 crc kubenswrapper[5017]: I0129 06:55:37.057025 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 06:55:37 crc kubenswrapper[5017]: I0129 06:55:37.099753 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76a68bb7-8566-4ef7-9749-5578ae481f94-config-data\") pod \"nova-metadata-0\" (UID: \"76a68bb7-8566-4ef7-9749-5578ae481f94\") " pod="openstack/nova-metadata-0" Jan 29 06:55:37 crc kubenswrapper[5017]: I0129 06:55:37.099858 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db399f6d-aabe-4423-840e-56d457b5dcb4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"db399f6d-aabe-4423-840e-56d457b5dcb4\") " pod="openstack/nova-scheduler-0" Jan 29 06:55:37 crc kubenswrapper[5017]: I0129 06:55:37.099888 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vm542\" (UniqueName: \"kubernetes.io/projected/db399f6d-aabe-4423-840e-56d457b5dcb4-kube-api-access-vm542\") pod \"nova-scheduler-0\" (UID: \"db399f6d-aabe-4423-840e-56d457b5dcb4\") " pod="openstack/nova-scheduler-0" Jan 29 06:55:37 crc kubenswrapper[5017]: I0129 06:55:37.100017 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/76a68bb7-8566-4ef7-9749-5578ae481f94-logs\") pod \"nova-metadata-0\" (UID: \"76a68bb7-8566-4ef7-9749-5578ae481f94\") " pod="openstack/nova-metadata-0" Jan 29 06:55:37 crc kubenswrapper[5017]: I0129 06:55:37.100057 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76a68bb7-8566-4ef7-9749-5578ae481f94-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: 
\"76a68bb7-8566-4ef7-9749-5578ae481f94\") " pod="openstack/nova-metadata-0" Jan 29 06:55:37 crc kubenswrapper[5017]: I0129 06:55:37.100079 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x68wm\" (UniqueName: \"kubernetes.io/projected/76a68bb7-8566-4ef7-9749-5578ae481f94-kube-api-access-x68wm\") pod \"nova-metadata-0\" (UID: \"76a68bb7-8566-4ef7-9749-5578ae481f94\") " pod="openstack/nova-metadata-0" Jan 29 06:55:37 crc kubenswrapper[5017]: I0129 06:55:37.100202 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db399f6d-aabe-4423-840e-56d457b5dcb4-config-data\") pod \"nova-scheduler-0\" (UID: \"db399f6d-aabe-4423-840e-56d457b5dcb4\") " pod="openstack/nova-scheduler-0" Jan 29 06:55:37 crc kubenswrapper[5017]: I0129 06:55:37.107339 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/76a68bb7-8566-4ef7-9749-5578ae481f94-logs\") pod \"nova-metadata-0\" (UID: \"76a68bb7-8566-4ef7-9749-5578ae481f94\") " pod="openstack/nova-metadata-0" Jan 29 06:55:37 crc kubenswrapper[5017]: I0129 06:55:37.126842 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db399f6d-aabe-4423-840e-56d457b5dcb4-config-data\") pod \"nova-scheduler-0\" (UID: \"db399f6d-aabe-4423-840e-56d457b5dcb4\") " pod="openstack/nova-scheduler-0" Jan 29 06:55:37 crc kubenswrapper[5017]: I0129 06:55:37.152652 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76a68bb7-8566-4ef7-9749-5578ae481f94-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"76a68bb7-8566-4ef7-9749-5578ae481f94\") " pod="openstack/nova-metadata-0" Jan 29 06:55:37 crc kubenswrapper[5017]: I0129 06:55:37.170037 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vm542\" (UniqueName: \"kubernetes.io/projected/db399f6d-aabe-4423-840e-56d457b5dcb4-kube-api-access-vm542\") pod \"nova-scheduler-0\" (UID: \"db399f6d-aabe-4423-840e-56d457b5dcb4\") " pod="openstack/nova-scheduler-0" Jan 29 06:55:37 crc kubenswrapper[5017]: I0129 06:55:37.170545 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db399f6d-aabe-4423-840e-56d457b5dcb4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"db399f6d-aabe-4423-840e-56d457b5dcb4\") " pod="openstack/nova-scheduler-0" Jan 29 06:55:37 crc kubenswrapper[5017]: I0129 06:55:37.172762 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76a68bb7-8566-4ef7-9749-5578ae481f94-config-data\") pod \"nova-metadata-0\" (UID: \"76a68bb7-8566-4ef7-9749-5578ae481f94\") " pod="openstack/nova-metadata-0" Jan 29 06:55:37 crc kubenswrapper[5017]: I0129 06:55:37.180694 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x68wm\" (UniqueName: \"kubernetes.io/projected/76a68bb7-8566-4ef7-9749-5578ae481f94-kube-api-access-x68wm\") pod \"nova-metadata-0\" (UID: \"76a68bb7-8566-4ef7-9749-5578ae481f94\") " pod="openstack/nova-metadata-0" Jan 29 06:55:37 crc kubenswrapper[5017]: I0129 06:55:37.190009 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-647df7b8c5-mxw5x"] Jan 29 06:55:37 crc kubenswrapper[5017]: I0129 06:55:37.191859 5017 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-647df7b8c5-mxw5x" Jan 29 06:55:37 crc kubenswrapper[5017]: I0129 06:55:37.205411 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 06:55:37 crc kubenswrapper[5017]: I0129 06:55:37.206124 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 29 06:55:37 crc kubenswrapper[5017]: I0129 06:55:37.216732 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 29 06:55:37 crc kubenswrapper[5017]: I0129 06:55:37.231509 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 06:55:37 crc kubenswrapper[5017]: I0129 06:55:37.233908 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 29 06:55:37 crc kubenswrapper[5017]: I0129 06:55:37.257169 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-647df7b8c5-mxw5x"] Jan 29 06:55:37 crc kubenswrapper[5017]: I0129 06:55:37.299257 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 29 06:55:37 crc kubenswrapper[5017]: I0129 06:55:37.304148 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ae1ed6fd-f44a-4c4a-a573-9ee16b9181f5-ovsdbserver-sb\") pod \"dnsmasq-dns-647df7b8c5-mxw5x\" (UID: \"ae1ed6fd-f44a-4c4a-a573-9ee16b9181f5\") " pod="openstack/dnsmasq-dns-647df7b8c5-mxw5x" Jan 29 06:55:37 crc kubenswrapper[5017]: I0129 06:55:37.304204 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdf3cdff-4a01-4e98-b5d2-e15fcbef271a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"fdf3cdff-4a01-4e98-b5d2-e15fcbef271a\") " pod="openstack/nova-api-0" Jan 29 06:55:37 crc kubenswrapper[5017]: I0129 06:55:37.304242 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdf3cdff-4a01-4e98-b5d2-e15fcbef271a-config-data\") pod \"nova-api-0\" (UID: \"fdf3cdff-4a01-4e98-b5d2-e15fcbef271a\") " pod="openstack/nova-api-0" Jan 29 06:55:37 crc kubenswrapper[5017]: I0129 06:55:37.304289 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ae1ed6fd-f44a-4c4a-a573-9ee16b9181f5-dns-swift-storage-0\") pod \"dnsmasq-dns-647df7b8c5-mxw5x\" (UID: \"ae1ed6fd-f44a-4c4a-a573-9ee16b9181f5\") " pod="openstack/dnsmasq-dns-647df7b8c5-mxw5x" Jan 29 06:55:37 crc kubenswrapper[5017]: I0129 06:55:37.304312 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcggq\" (UniqueName: \"kubernetes.io/projected/ae1ed6fd-f44a-4c4a-a573-9ee16b9181f5-kube-api-access-gcggq\") pod \"dnsmasq-dns-647df7b8c5-mxw5x\" (UID: \"ae1ed6fd-f44a-4c4a-a573-9ee16b9181f5\") " pod="openstack/dnsmasq-dns-647df7b8c5-mxw5x" Jan 29 06:55:37 crc kubenswrapper[5017]: I0129 06:55:37.304403 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvx7r\" (UniqueName: \"kubernetes.io/projected/fdf3cdff-4a01-4e98-b5d2-e15fcbef271a-kube-api-access-kvx7r\") pod \"nova-api-0\" (UID: 
\"fdf3cdff-4a01-4e98-b5d2-e15fcbef271a\") " pod="openstack/nova-api-0" Jan 29 06:55:37 crc kubenswrapper[5017]: I0129 06:55:37.304451 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae1ed6fd-f44a-4c4a-a573-9ee16b9181f5-config\") pod \"dnsmasq-dns-647df7b8c5-mxw5x\" (UID: \"ae1ed6fd-f44a-4c4a-a573-9ee16b9181f5\") " pod="openstack/dnsmasq-dns-647df7b8c5-mxw5x" Jan 29 06:55:37 crc kubenswrapper[5017]: I0129 06:55:37.304499 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fdf3cdff-4a01-4e98-b5d2-e15fcbef271a-logs\") pod \"nova-api-0\" (UID: \"fdf3cdff-4a01-4e98-b5d2-e15fcbef271a\") " pod="openstack/nova-api-0" Jan 29 06:55:37 crc kubenswrapper[5017]: I0129 06:55:37.304531 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ae1ed6fd-f44a-4c4a-a573-9ee16b9181f5-dns-svc\") pod \"dnsmasq-dns-647df7b8c5-mxw5x\" (UID: \"ae1ed6fd-f44a-4c4a-a573-9ee16b9181f5\") " pod="openstack/dnsmasq-dns-647df7b8c5-mxw5x" Jan 29 06:55:37 crc kubenswrapper[5017]: I0129 06:55:37.304585 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ae1ed6fd-f44a-4c4a-a573-9ee16b9181f5-ovsdbserver-nb\") pod \"dnsmasq-dns-647df7b8c5-mxw5x\" (UID: \"ae1ed6fd-f44a-4c4a-a573-9ee16b9181f5\") " pod="openstack/dnsmasq-dns-647df7b8c5-mxw5x" Jan 29 06:55:37 crc kubenswrapper[5017]: I0129 06:55:37.416277 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fdf3cdff-4a01-4e98-b5d2-e15fcbef271a-logs\") pod \"nova-api-0\" (UID: \"fdf3cdff-4a01-4e98-b5d2-e15fcbef271a\") " pod="openstack/nova-api-0" Jan 29 06:55:37 crc kubenswrapper[5017]: I0129 06:55:37.423246 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ae1ed6fd-f44a-4c4a-a573-9ee16b9181f5-dns-svc\") pod \"dnsmasq-dns-647df7b8c5-mxw5x\" (UID: \"ae1ed6fd-f44a-4c4a-a573-9ee16b9181f5\") " pod="openstack/dnsmasq-dns-647df7b8c5-mxw5x" Jan 29 06:55:37 crc kubenswrapper[5017]: I0129 06:55:37.423382 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ae1ed6fd-f44a-4c4a-a573-9ee16b9181f5-ovsdbserver-nb\") pod \"dnsmasq-dns-647df7b8c5-mxw5x\" (UID: \"ae1ed6fd-f44a-4c4a-a573-9ee16b9181f5\") " pod="openstack/dnsmasq-dns-647df7b8c5-mxw5x" Jan 29 06:55:37 crc kubenswrapper[5017]: I0129 06:55:37.423517 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ae1ed6fd-f44a-4c4a-a573-9ee16b9181f5-ovsdbserver-sb\") pod \"dnsmasq-dns-647df7b8c5-mxw5x\" (UID: \"ae1ed6fd-f44a-4c4a-a573-9ee16b9181f5\") " pod="openstack/dnsmasq-dns-647df7b8c5-mxw5x" Jan 29 06:55:37 crc kubenswrapper[5017]: I0129 06:55:37.423634 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdf3cdff-4a01-4e98-b5d2-e15fcbef271a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"fdf3cdff-4a01-4e98-b5d2-e15fcbef271a\") " pod="openstack/nova-api-0" Jan 29 06:55:37 crc kubenswrapper[5017]: I0129 06:55:37.423730 5017 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdf3cdff-4a01-4e98-b5d2-e15fcbef271a-config-data\") pod \"nova-api-0\" (UID: \"fdf3cdff-4a01-4e98-b5d2-e15fcbef271a\") " pod="openstack/nova-api-0" Jan 29 06:55:37 crc kubenswrapper[5017]: I0129 06:55:37.423879 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ae1ed6fd-f44a-4c4a-a573-9ee16b9181f5-dns-swift-storage-0\") pod \"dnsmasq-dns-647df7b8c5-mxw5x\" (UID: \"ae1ed6fd-f44a-4c4a-a573-9ee16b9181f5\") " pod="openstack/dnsmasq-dns-647df7b8c5-mxw5x" Jan 29 06:55:37 crc kubenswrapper[5017]: I0129 06:55:37.423984 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gcggq\" (UniqueName: \"kubernetes.io/projected/ae1ed6fd-f44a-4c4a-a573-9ee16b9181f5-kube-api-access-gcggq\") pod \"dnsmasq-dns-647df7b8c5-mxw5x\" (UID: \"ae1ed6fd-f44a-4c4a-a573-9ee16b9181f5\") " pod="openstack/dnsmasq-dns-647df7b8c5-mxw5x" Jan 29 06:55:37 crc kubenswrapper[5017]: I0129 06:55:37.424175 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvx7r\" (UniqueName: \"kubernetes.io/projected/fdf3cdff-4a01-4e98-b5d2-e15fcbef271a-kube-api-access-kvx7r\") pod \"nova-api-0\" (UID: \"fdf3cdff-4a01-4e98-b5d2-e15fcbef271a\") " pod="openstack/nova-api-0" Jan 29 06:55:37 crc kubenswrapper[5017]: I0129 06:55:37.424299 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae1ed6fd-f44a-4c4a-a573-9ee16b9181f5-config\") pod \"dnsmasq-dns-647df7b8c5-mxw5x\" (UID: \"ae1ed6fd-f44a-4c4a-a573-9ee16b9181f5\") " pod="openstack/dnsmasq-dns-647df7b8c5-mxw5x" Jan 29 06:55:37 crc kubenswrapper[5017]: I0129 06:55:37.417548 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fdf3cdff-4a01-4e98-b5d2-e15fcbef271a-logs\") pod \"nova-api-0\" (UID: \"fdf3cdff-4a01-4e98-b5d2-e15fcbef271a\") " pod="openstack/nova-api-0" Jan 29 06:55:37 crc kubenswrapper[5017]: I0129 06:55:37.445529 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ae1ed6fd-f44a-4c4a-a573-9ee16b9181f5-dns-svc\") pod \"dnsmasq-dns-647df7b8c5-mxw5x\" (UID: \"ae1ed6fd-f44a-4c4a-a573-9ee16b9181f5\") " pod="openstack/dnsmasq-dns-647df7b8c5-mxw5x" Jan 29 06:55:37 crc kubenswrapper[5017]: I0129 06:55:37.450778 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ae1ed6fd-f44a-4c4a-a573-9ee16b9181f5-dns-swift-storage-0\") pod \"dnsmasq-dns-647df7b8c5-mxw5x\" (UID: \"ae1ed6fd-f44a-4c4a-a573-9ee16b9181f5\") " pod="openstack/dnsmasq-dns-647df7b8c5-mxw5x" Jan 29 06:55:37 crc kubenswrapper[5017]: I0129 06:55:37.450920 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdf3cdff-4a01-4e98-b5d2-e15fcbef271a-config-data\") pod \"nova-api-0\" (UID: \"fdf3cdff-4a01-4e98-b5d2-e15fcbef271a\") " pod="openstack/nova-api-0" Jan 29 06:55:37 crc kubenswrapper[5017]: I0129 06:55:37.451771 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 06:55:37 crc kubenswrapper[5017]: I0129 06:55:37.451907 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae1ed6fd-f44a-4c4a-a573-9ee16b9181f5-config\") pod \"dnsmasq-dns-647df7b8c5-mxw5x\" (UID: \"ae1ed6fd-f44a-4c4a-a573-9ee16b9181f5\") " pod="openstack/dnsmasq-dns-647df7b8c5-mxw5x" Jan 29 06:55:37 crc kubenswrapper[5017]: I0129 06:55:37.460126 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdf3cdff-4a01-4e98-b5d2-e15fcbef271a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"fdf3cdff-4a01-4e98-b5d2-e15fcbef271a\") " pod="openstack/nova-api-0" Jan 29 06:55:37 crc kubenswrapper[5017]: I0129 06:55:37.464829 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ae1ed6fd-f44a-4c4a-a573-9ee16b9181f5-ovsdbserver-nb\") pod \"dnsmasq-dns-647df7b8c5-mxw5x\" (UID: \"ae1ed6fd-f44a-4c4a-a573-9ee16b9181f5\") " pod="openstack/dnsmasq-dns-647df7b8c5-mxw5x" Jan 29 06:55:37 crc kubenswrapper[5017]: I0129 06:55:37.464829 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ae1ed6fd-f44a-4c4a-a573-9ee16b9181f5-ovsdbserver-sb\") pod \"dnsmasq-dns-647df7b8c5-mxw5x\" (UID: \"ae1ed6fd-f44a-4c4a-a573-9ee16b9181f5\") " pod="openstack/dnsmasq-dns-647df7b8c5-mxw5x" Jan 29 06:55:37 crc kubenswrapper[5017]: I0129 06:55:37.473292 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gcggq\" (UniqueName: \"kubernetes.io/projected/ae1ed6fd-f44a-4c4a-a573-9ee16b9181f5-kube-api-access-gcggq\") pod \"dnsmasq-dns-647df7b8c5-mxw5x\" (UID: \"ae1ed6fd-f44a-4c4a-a573-9ee16b9181f5\") " pod="openstack/dnsmasq-dns-647df7b8c5-mxw5x" Jan 29 06:55:37 crc kubenswrapper[5017]: I0129 06:55:37.484748 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvx7r\" (UniqueName: \"kubernetes.io/projected/fdf3cdff-4a01-4e98-b5d2-e15fcbef271a-kube-api-access-kvx7r\") pod \"nova-api-0\" (UID: \"fdf3cdff-4a01-4e98-b5d2-e15fcbef271a\") " pod="openstack/nova-api-0" Jan 29 06:55:37 crc kubenswrapper[5017]: I0129 06:55:37.582747 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-647df7b8c5-mxw5x" Jan 29 06:55:37 crc kubenswrapper[5017]: I0129 06:55:37.640582 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 06:55:37 crc kubenswrapper[5017]: I0129 06:55:37.926582 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-wxns2"] Jan 29 06:55:38 crc kubenswrapper[5017]: I0129 06:55:38.123612 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-x7tq6"] Jan 29 06:55:38 crc kubenswrapper[5017]: I0129 06:55:38.125415 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-x7tq6" Jan 29 06:55:38 crc kubenswrapper[5017]: I0129 06:55:38.132714 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 29 06:55:38 crc kubenswrapper[5017]: I0129 06:55:38.134565 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 29 06:55:38 crc kubenswrapper[5017]: I0129 06:55:38.137490 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-x7tq6"] Jan 29 06:55:38 crc kubenswrapper[5017]: I0129 06:55:38.197264 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 06:55:38 crc kubenswrapper[5017]: I0129 06:55:38.207040 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 06:55:38 crc kubenswrapper[5017]: I0129 06:55:38.241576 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8swhj\" (UniqueName: \"kubernetes.io/projected/afac06a5-272c-48d3-8916-775a5bd3eb54-kube-api-access-8swhj\") pod \"nova-cell1-conductor-db-sync-x7tq6\" (UID: \"afac06a5-272c-48d3-8916-775a5bd3eb54\") " pod="openstack/nova-cell1-conductor-db-sync-x7tq6" Jan 29 06:55:38 crc kubenswrapper[5017]: I0129 06:55:38.244529 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afac06a5-272c-48d3-8916-775a5bd3eb54-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-x7tq6\" (UID: \"afac06a5-272c-48d3-8916-775a5bd3eb54\") " pod="openstack/nova-cell1-conductor-db-sync-x7tq6" Jan 29 06:55:38 crc kubenswrapper[5017]: I0129 06:55:38.244777 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/afac06a5-272c-48d3-8916-775a5bd3eb54-scripts\") pod \"nova-cell1-conductor-db-sync-x7tq6\" (UID: \"afac06a5-272c-48d3-8916-775a5bd3eb54\") " pod="openstack/nova-cell1-conductor-db-sync-x7tq6" Jan 29 06:55:38 crc kubenswrapper[5017]: I0129 06:55:38.245031 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afac06a5-272c-48d3-8916-775a5bd3eb54-config-data\") pod \"nova-cell1-conductor-db-sync-x7tq6\" (UID: \"afac06a5-272c-48d3-8916-775a5bd3eb54\") " pod="openstack/nova-cell1-conductor-db-sync-x7tq6" Jan 29 06:55:38 crc kubenswrapper[5017]: I0129 06:55:38.347678 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/afac06a5-272c-48d3-8916-775a5bd3eb54-scripts\") pod \"nova-cell1-conductor-db-sync-x7tq6\" (UID: \"afac06a5-272c-48d3-8916-775a5bd3eb54\") " pod="openstack/nova-cell1-conductor-db-sync-x7tq6" Jan 29 06:55:38 crc kubenswrapper[5017]: I0129 06:55:38.348112 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afac06a5-272c-48d3-8916-775a5bd3eb54-config-data\") pod \"nova-cell1-conductor-db-sync-x7tq6\" (UID: \"afac06a5-272c-48d3-8916-775a5bd3eb54\") " pod="openstack/nova-cell1-conductor-db-sync-x7tq6" Jan 29 06:55:38 crc kubenswrapper[5017]: I0129 06:55:38.348269 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8swhj\" (UniqueName: 
\"kubernetes.io/projected/afac06a5-272c-48d3-8916-775a5bd3eb54-kube-api-access-8swhj\") pod \"nova-cell1-conductor-db-sync-x7tq6\" (UID: \"afac06a5-272c-48d3-8916-775a5bd3eb54\") " pod="openstack/nova-cell1-conductor-db-sync-x7tq6" Jan 29 06:55:38 crc kubenswrapper[5017]: I0129 06:55:38.348428 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afac06a5-272c-48d3-8916-775a5bd3eb54-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-x7tq6\" (UID: \"afac06a5-272c-48d3-8916-775a5bd3eb54\") " pod="openstack/nova-cell1-conductor-db-sync-x7tq6" Jan 29 06:55:38 crc kubenswrapper[5017]: I0129 06:55:38.355097 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afac06a5-272c-48d3-8916-775a5bd3eb54-config-data\") pod \"nova-cell1-conductor-db-sync-x7tq6\" (UID: \"afac06a5-272c-48d3-8916-775a5bd3eb54\") " pod="openstack/nova-cell1-conductor-db-sync-x7tq6" Jan 29 06:55:38 crc kubenswrapper[5017]: I0129 06:55:38.355499 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afac06a5-272c-48d3-8916-775a5bd3eb54-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-x7tq6\" (UID: \"afac06a5-272c-48d3-8916-775a5bd3eb54\") " pod="openstack/nova-cell1-conductor-db-sync-x7tq6" Jan 29 06:55:38 crc kubenswrapper[5017]: I0129 06:55:38.360052 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 06:55:38 crc kubenswrapper[5017]: I0129 06:55:38.360505 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/afac06a5-272c-48d3-8916-775a5bd3eb54-scripts\") pod \"nova-cell1-conductor-db-sync-x7tq6\" (UID: \"afac06a5-272c-48d3-8916-775a5bd3eb54\") " pod="openstack/nova-cell1-conductor-db-sync-x7tq6" Jan 29 06:55:38 crc kubenswrapper[5017]: I0129 06:55:38.366799 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8swhj\" (UniqueName: \"kubernetes.io/projected/afac06a5-272c-48d3-8916-775a5bd3eb54-kube-api-access-8swhj\") pod \"nova-cell1-conductor-db-sync-x7tq6\" (UID: \"afac06a5-272c-48d3-8916-775a5bd3eb54\") " pod="openstack/nova-cell1-conductor-db-sync-x7tq6" Jan 29 06:55:38 crc kubenswrapper[5017]: I0129 06:55:38.369843 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 29 06:55:38 crc kubenswrapper[5017]: I0129 06:55:38.454121 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-x7tq6" Jan 29 06:55:38 crc kubenswrapper[5017]: I0129 06:55:38.498380 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-647df7b8c5-mxw5x"] Jan 29 06:55:38 crc kubenswrapper[5017]: I0129 06:55:38.781994 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"a35e9ed1-19c4-4412-b252-ead2940ab008","Type":"ContainerStarted","Data":"a97be1554d63dbd52ca3430d9ee0807ee7a4bcb524f66bf42f0c993486424c4a"} Jan 29 06:55:38 crc kubenswrapper[5017]: I0129 06:55:38.785346 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"db399f6d-aabe-4423-840e-56d457b5dcb4","Type":"ContainerStarted","Data":"58e1246502f7dcea9e36114c6dc9980692a65aa809ee119b94a902e79a194801"} Jan 29 06:55:38 crc kubenswrapper[5017]: I0129 06:55:38.788985 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-wxns2" event={"ID":"8e2f340e-2e9f-4711-a13b-1618bd1fbec4","Type":"ContainerStarted","Data":"ef3451697bd4ac7b679400900f02f4c1ce12227edc277ad5364540b647c35d9d"} Jan 29 06:55:38 crc kubenswrapper[5017]: I0129 06:55:38.789020 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-wxns2" event={"ID":"8e2f340e-2e9f-4711-a13b-1618bd1fbec4","Type":"ContainerStarted","Data":"6a6dba17997958e324e0509b5e2b007bda2ae931fc8bcd4f6117ad2ea587c1c9"} Jan 29 06:55:38 crc kubenswrapper[5017]: I0129 06:55:38.797044 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-647df7b8c5-mxw5x" event={"ID":"ae1ed6fd-f44a-4c4a-a573-9ee16b9181f5","Type":"ContainerStarted","Data":"084edd2428805a9da0c644dd23701ca3f851eced4bc2784001abfe44ccb9b0d9"} Jan 29 06:55:38 crc kubenswrapper[5017]: I0129 06:55:38.803887 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"76a68bb7-8566-4ef7-9749-5578ae481f94","Type":"ContainerStarted","Data":"eb5cf11e2f7d0815952f1d8cb0e75615eca74ba7017abe4411a65d4751c0926f"} Jan 29 06:55:38 crc kubenswrapper[5017]: I0129 06:55:38.808664 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fdf3cdff-4a01-4e98-b5d2-e15fcbef271a","Type":"ContainerStarted","Data":"c60e4f02dd103d8c29bab0512011f26e366ca9da403f29745bcedf2d5af98c22"} Jan 29 06:55:38 crc kubenswrapper[5017]: I0129 06:55:38.823338 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-wxns2" podStartSLOduration=2.823312841 podStartE2EDuration="2.823312841s" podCreationTimestamp="2026-01-29 06:55:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:55:38.820589634 +0000 UTC m=+1225.195037254" watchObservedRunningTime="2026-01-29 06:55:38.823312841 +0000 UTC m=+1225.197760451" Jan 29 06:55:38 crc kubenswrapper[5017]: I0129 06:55:38.992187 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-x7tq6"] Jan 29 06:55:39 crc kubenswrapper[5017]: I0129 06:55:39.830328 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-x7tq6" event={"ID":"afac06a5-272c-48d3-8916-775a5bd3eb54","Type":"ContainerStarted","Data":"7126adb7a43e0f5a3fc8b5d5e778cd421a3c22b31f278ea3a1f03d7a07ee2372"} Jan 29 06:55:39 crc kubenswrapper[5017]: I0129 06:55:39.830799 5017 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-x7tq6" event={"ID":"afac06a5-272c-48d3-8916-775a5bd3eb54","Type":"ContainerStarted","Data":"f88075c44b989a898041c59c1a1a8fabc4f64806af7ff5ad65b66ad2ef41162b"} Jan 29 06:55:39 crc kubenswrapper[5017]: I0129 06:55:39.841127 5017 generic.go:334] "Generic (PLEG): container finished" podID="ae1ed6fd-f44a-4c4a-a573-9ee16b9181f5" containerID="c788ef46bab35f53e33c6baaa48d0bd6f3b7c657b75f112f88d7a6c1905f839d" exitCode=0 Jan 29 06:55:39 crc kubenswrapper[5017]: I0129 06:55:39.841312 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-647df7b8c5-mxw5x" event={"ID":"ae1ed6fd-f44a-4c4a-a573-9ee16b9181f5","Type":"ContainerDied","Data":"c788ef46bab35f53e33c6baaa48d0bd6f3b7c657b75f112f88d7a6c1905f839d"} Jan 29 06:55:39 crc kubenswrapper[5017]: I0129 06:55:39.856413 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-x7tq6" podStartSLOduration=1.856388129 podStartE2EDuration="1.856388129s" podCreationTimestamp="2026-01-29 06:55:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:55:39.850875873 +0000 UTC m=+1226.225323513" watchObservedRunningTime="2026-01-29 06:55:39.856388129 +0000 UTC m=+1226.230835739" Jan 29 06:55:41 crc kubenswrapper[5017]: I0129 06:55:41.358548 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 06:55:41 crc kubenswrapper[5017]: I0129 06:55:41.380169 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 06:55:41 crc kubenswrapper[5017]: I0129 06:55:41.876800 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-647df7b8c5-mxw5x" event={"ID":"ae1ed6fd-f44a-4c4a-a573-9ee16b9181f5","Type":"ContainerStarted","Data":"f85c4459d8f828738428873cf981d75378869b406a3bc14f9884b979c0dfee7a"} Jan 29 06:55:41 crc kubenswrapper[5017]: I0129 06:55:41.879925 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-647df7b8c5-mxw5x" Jan 29 06:55:41 crc kubenswrapper[5017]: I0129 06:55:41.888298 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"76a68bb7-8566-4ef7-9749-5578ae481f94","Type":"ContainerStarted","Data":"53889c52f8902c7591aeb65ab36d036024c17c376d0954748e57004acdd087af"} Jan 29 06:55:41 crc kubenswrapper[5017]: I0129 06:55:41.897381 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fdf3cdff-4a01-4e98-b5d2-e15fcbef271a","Type":"ContainerStarted","Data":"fedc226e17785fa3152550c81a29a15532cff7a761ade51ead01b070ead12c18"} Jan 29 06:55:41 crc kubenswrapper[5017]: I0129 06:55:41.900338 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"db399f6d-aabe-4423-840e-56d457b5dcb4","Type":"ContainerStarted","Data":"9d0c65e51b85ba05d12159dce7ee82a418b6b4e72fa2ec46dc9f2c9a4c905fd4"} Jan 29 06:55:41 crc kubenswrapper[5017]: I0129 06:55:41.909431 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"a35e9ed1-19c4-4412-b252-ead2940ab008","Type":"ContainerStarted","Data":"e826d215df7f61bd097804cd8ae657139975b1390a49984f2c777ac95ac4eb97"} Jan 29 06:55:41 crc kubenswrapper[5017]: I0129 06:55:41.909623 5017 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/nova-cell1-novncproxy-0" podUID="a35e9ed1-19c4-4412-b252-ead2940ab008" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://e826d215df7f61bd097804cd8ae657139975b1390a49984f2c777ac95ac4eb97" gracePeriod=30 Jan 29 06:55:41 crc kubenswrapper[5017]: I0129 06:55:41.911225 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-647df7b8c5-mxw5x" podStartSLOduration=4.911198034 podStartE2EDuration="4.911198034s" podCreationTimestamp="2026-01-29 06:55:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:55:41.900699145 +0000 UTC m=+1228.275146755" watchObservedRunningTime="2026-01-29 06:55:41.911198034 +0000 UTC m=+1228.285645644" Jan 29 06:55:41 crc kubenswrapper[5017]: I0129 06:55:41.987519 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.954042871 podStartE2EDuration="5.987484568s" podCreationTimestamp="2026-01-29 06:55:36 +0000 UTC" firstStartedPulling="2026-01-29 06:55:38.243993313 +0000 UTC m=+1224.618440923" lastFinishedPulling="2026-01-29 06:55:41.27743501 +0000 UTC m=+1227.651882620" observedRunningTime="2026-01-29 06:55:41.971651707 +0000 UTC m=+1228.346099337" watchObservedRunningTime="2026-01-29 06:55:41.987484568 +0000 UTC m=+1228.361932178" Jan 29 06:55:41 crc kubenswrapper[5017]: I0129 06:55:41.989579 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.079844768 podStartE2EDuration="5.98956541s" podCreationTimestamp="2026-01-29 06:55:36 +0000 UTC" firstStartedPulling="2026-01-29 06:55:38.371199774 +0000 UTC m=+1224.745647384" lastFinishedPulling="2026-01-29 06:55:41.280920416 +0000 UTC m=+1227.655368026" observedRunningTime="2026-01-29 06:55:41.946368663 +0000 UTC m=+1228.320816273" watchObservedRunningTime="2026-01-29 06:55:41.98956541 +0000 UTC m=+1228.364013020" Jan 29 06:55:42 crc kubenswrapper[5017]: I0129 06:55:42.206737 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 29 06:55:42 crc kubenswrapper[5017]: I0129 06:55:42.453121 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 29 06:55:42 crc kubenswrapper[5017]: I0129 06:55:42.921221 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"76a68bb7-8566-4ef7-9749-5578ae481f94","Type":"ContainerStarted","Data":"9ac25b2050f18ac311238d9b864d5a8a1fcca28b5c5572f41cecb394004b50d6"} Jan 29 06:55:42 crc kubenswrapper[5017]: I0129 06:55:42.921717 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="76a68bb7-8566-4ef7-9749-5578ae481f94" containerName="nova-metadata-log" containerID="cri-o://53889c52f8902c7591aeb65ab36d036024c17c376d0954748e57004acdd087af" gracePeriod=30 Jan 29 06:55:42 crc kubenswrapper[5017]: I0129 06:55:42.922422 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="76a68bb7-8566-4ef7-9749-5578ae481f94" containerName="nova-metadata-metadata" containerID="cri-o://9ac25b2050f18ac311238d9b864d5a8a1fcca28b5c5572f41cecb394004b50d6" gracePeriod=30 Jan 29 06:55:42 crc kubenswrapper[5017]: I0129 06:55:42.930010 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"fdf3cdff-4a01-4e98-b5d2-e15fcbef271a","Type":"ContainerStarted","Data":"12d4c58fa9c1467737f72b212fda9159cdce2af2219150ba1fd91d1371d488bd"} Jan 29 06:55:42 crc kubenswrapper[5017]: I0129 06:55:42.947626 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.902653463 podStartE2EDuration="6.947603914s" podCreationTimestamp="2026-01-29 06:55:36 +0000 UTC" firstStartedPulling="2026-01-29 06:55:38.232870048 +0000 UTC m=+1224.607317658" lastFinishedPulling="2026-01-29 06:55:41.277820499 +0000 UTC m=+1227.652268109" observedRunningTime="2026-01-29 06:55:42.945223805 +0000 UTC m=+1229.319671415" watchObservedRunningTime="2026-01-29 06:55:42.947603914 +0000 UTC m=+1229.322051524" Jan 29 06:55:43 crc kubenswrapper[5017]: I0129 06:55:43.534517 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 06:55:43 crc kubenswrapper[5017]: I0129 06:55:43.558496 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.643389088 podStartE2EDuration="6.558466682s" podCreationTimestamp="2026-01-29 06:55:37 +0000 UTC" firstStartedPulling="2026-01-29 06:55:38.369328588 +0000 UTC m=+1224.743776198" lastFinishedPulling="2026-01-29 06:55:41.284406182 +0000 UTC m=+1227.658853792" observedRunningTime="2026-01-29 06:55:42.97498817 +0000 UTC m=+1229.349435790" watchObservedRunningTime="2026-01-29 06:55:43.558466682 +0000 UTC m=+1229.932914292" Jan 29 06:55:43 crc kubenswrapper[5017]: I0129 06:55:43.584670 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76a68bb7-8566-4ef7-9749-5578ae481f94-combined-ca-bundle\") pod \"76a68bb7-8566-4ef7-9749-5578ae481f94\" (UID: \"76a68bb7-8566-4ef7-9749-5578ae481f94\") " Jan 29 06:55:43 crc kubenswrapper[5017]: I0129 06:55:43.584746 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x68wm\" (UniqueName: \"kubernetes.io/projected/76a68bb7-8566-4ef7-9749-5578ae481f94-kube-api-access-x68wm\") pod \"76a68bb7-8566-4ef7-9749-5578ae481f94\" (UID: \"76a68bb7-8566-4ef7-9749-5578ae481f94\") " Jan 29 06:55:43 crc kubenswrapper[5017]: I0129 06:55:43.584808 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/76a68bb7-8566-4ef7-9749-5578ae481f94-logs\") pod \"76a68bb7-8566-4ef7-9749-5578ae481f94\" (UID: \"76a68bb7-8566-4ef7-9749-5578ae481f94\") " Jan 29 06:55:43 crc kubenswrapper[5017]: I0129 06:55:43.585035 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76a68bb7-8566-4ef7-9749-5578ae481f94-config-data\") pod \"76a68bb7-8566-4ef7-9749-5578ae481f94\" (UID: \"76a68bb7-8566-4ef7-9749-5578ae481f94\") " Jan 29 06:55:43 crc kubenswrapper[5017]: I0129 06:55:43.585283 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76a68bb7-8566-4ef7-9749-5578ae481f94-logs" (OuterVolumeSpecName: "logs") pod "76a68bb7-8566-4ef7-9749-5578ae481f94" (UID: "76a68bb7-8566-4ef7-9749-5578ae481f94"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:55:43 crc kubenswrapper[5017]: I0129 06:55:43.585585 5017 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/76a68bb7-8566-4ef7-9749-5578ae481f94-logs\") on node \"crc\" DevicePath \"\"" Jan 29 06:55:43 crc kubenswrapper[5017]: I0129 06:55:43.592877 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76a68bb7-8566-4ef7-9749-5578ae481f94-kube-api-access-x68wm" (OuterVolumeSpecName: "kube-api-access-x68wm") pod "76a68bb7-8566-4ef7-9749-5578ae481f94" (UID: "76a68bb7-8566-4ef7-9749-5578ae481f94"). InnerVolumeSpecName "kube-api-access-x68wm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:55:43 crc kubenswrapper[5017]: I0129 06:55:43.624304 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76a68bb7-8566-4ef7-9749-5578ae481f94-config-data" (OuterVolumeSpecName: "config-data") pod "76a68bb7-8566-4ef7-9749-5578ae481f94" (UID: "76a68bb7-8566-4ef7-9749-5578ae481f94"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:55:43 crc kubenswrapper[5017]: I0129 06:55:43.639543 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76a68bb7-8566-4ef7-9749-5578ae481f94-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "76a68bb7-8566-4ef7-9749-5578ae481f94" (UID: "76a68bb7-8566-4ef7-9749-5578ae481f94"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:55:43 crc kubenswrapper[5017]: I0129 06:55:43.687514 5017 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76a68bb7-8566-4ef7-9749-5578ae481f94-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 06:55:43 crc kubenswrapper[5017]: I0129 06:55:43.687568 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x68wm\" (UniqueName: \"kubernetes.io/projected/76a68bb7-8566-4ef7-9749-5578ae481f94-kube-api-access-x68wm\") on node \"crc\" DevicePath \"\"" Jan 29 06:55:43 crc kubenswrapper[5017]: I0129 06:55:43.687587 5017 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76a68bb7-8566-4ef7-9749-5578ae481f94-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 06:55:43 crc kubenswrapper[5017]: I0129 06:55:43.942249 5017 generic.go:334] "Generic (PLEG): container finished" podID="76a68bb7-8566-4ef7-9749-5578ae481f94" containerID="9ac25b2050f18ac311238d9b864d5a8a1fcca28b5c5572f41cecb394004b50d6" exitCode=0 Jan 29 06:55:43 crc kubenswrapper[5017]: I0129 06:55:43.942296 5017 generic.go:334] "Generic (PLEG): container finished" podID="76a68bb7-8566-4ef7-9749-5578ae481f94" containerID="53889c52f8902c7591aeb65ab36d036024c17c376d0954748e57004acdd087af" exitCode=143 Jan 29 06:55:43 crc kubenswrapper[5017]: I0129 06:55:43.942330 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 06:55:43 crc kubenswrapper[5017]: I0129 06:55:43.942416 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"76a68bb7-8566-4ef7-9749-5578ae481f94","Type":"ContainerDied","Data":"9ac25b2050f18ac311238d9b864d5a8a1fcca28b5c5572f41cecb394004b50d6"} Jan 29 06:55:43 crc kubenswrapper[5017]: I0129 06:55:43.942491 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"76a68bb7-8566-4ef7-9749-5578ae481f94","Type":"ContainerDied","Data":"53889c52f8902c7591aeb65ab36d036024c17c376d0954748e57004acdd087af"} Jan 29 06:55:43 crc kubenswrapper[5017]: I0129 06:55:43.942512 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"76a68bb7-8566-4ef7-9749-5578ae481f94","Type":"ContainerDied","Data":"eb5cf11e2f7d0815952f1d8cb0e75615eca74ba7017abe4411a65d4751c0926f"} Jan 29 06:55:43 crc kubenswrapper[5017]: I0129 06:55:43.942537 5017 scope.go:117] "RemoveContainer" containerID="9ac25b2050f18ac311238d9b864d5a8a1fcca28b5c5572f41cecb394004b50d6" Jan 29 06:55:43 crc kubenswrapper[5017]: I0129 06:55:43.971503 5017 scope.go:117] "RemoveContainer" containerID="53889c52f8902c7591aeb65ab36d036024c17c376d0954748e57004acdd087af" Jan 29 06:55:43 crc kubenswrapper[5017]: I0129 06:55:43.997642 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 06:55:44 crc kubenswrapper[5017]: I0129 06:55:44.010621 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 06:55:44 crc kubenswrapper[5017]: I0129 06:55:44.021876 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 29 06:55:44 crc kubenswrapper[5017]: E0129 06:55:44.022625 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76a68bb7-8566-4ef7-9749-5578ae481f94" containerName="nova-metadata-metadata" Jan 29 06:55:44 crc kubenswrapper[5017]: I0129 06:55:44.022650 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="76a68bb7-8566-4ef7-9749-5578ae481f94" containerName="nova-metadata-metadata" Jan 29 06:55:44 crc kubenswrapper[5017]: E0129 06:55:44.022666 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76a68bb7-8566-4ef7-9749-5578ae481f94" containerName="nova-metadata-log" Jan 29 06:55:44 crc kubenswrapper[5017]: I0129 06:55:44.022673 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="76a68bb7-8566-4ef7-9749-5578ae481f94" containerName="nova-metadata-log" Jan 29 06:55:44 crc kubenswrapper[5017]: I0129 06:55:44.022916 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="76a68bb7-8566-4ef7-9749-5578ae481f94" containerName="nova-metadata-log" Jan 29 06:55:44 crc kubenswrapper[5017]: I0129 06:55:44.022941 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="76a68bb7-8566-4ef7-9749-5578ae481f94" containerName="nova-metadata-metadata" Jan 29 06:55:44 crc kubenswrapper[5017]: I0129 06:55:44.024050 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 06:55:44 crc kubenswrapper[5017]: I0129 06:55:44.025332 5017 scope.go:117] "RemoveContainer" containerID="9ac25b2050f18ac311238d9b864d5a8a1fcca28b5c5572f41cecb394004b50d6" Jan 29 06:55:44 crc kubenswrapper[5017]: E0129 06:55:44.026166 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ac25b2050f18ac311238d9b864d5a8a1fcca28b5c5572f41cecb394004b50d6\": container with ID starting with 9ac25b2050f18ac311238d9b864d5a8a1fcca28b5c5572f41cecb394004b50d6 not found: ID does not exist" containerID="9ac25b2050f18ac311238d9b864d5a8a1fcca28b5c5572f41cecb394004b50d6" Jan 29 06:55:44 crc kubenswrapper[5017]: I0129 06:55:44.026229 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ac25b2050f18ac311238d9b864d5a8a1fcca28b5c5572f41cecb394004b50d6"} err="failed to get container status \"9ac25b2050f18ac311238d9b864d5a8a1fcca28b5c5572f41cecb394004b50d6\": rpc error: code = NotFound desc = could not find container \"9ac25b2050f18ac311238d9b864d5a8a1fcca28b5c5572f41cecb394004b50d6\": container with ID starting with 9ac25b2050f18ac311238d9b864d5a8a1fcca28b5c5572f41cecb394004b50d6 not found: ID does not exist" Jan 29 06:55:44 crc kubenswrapper[5017]: I0129 06:55:44.026274 5017 scope.go:117] "RemoveContainer" containerID="53889c52f8902c7591aeb65ab36d036024c17c376d0954748e57004acdd087af" Jan 29 06:55:44 crc kubenswrapper[5017]: E0129 06:55:44.026723 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53889c52f8902c7591aeb65ab36d036024c17c376d0954748e57004acdd087af\": container with ID starting with 53889c52f8902c7591aeb65ab36d036024c17c376d0954748e57004acdd087af not found: ID does not exist" containerID="53889c52f8902c7591aeb65ab36d036024c17c376d0954748e57004acdd087af" Jan 29 06:55:44 crc kubenswrapper[5017]: I0129 06:55:44.026786 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53889c52f8902c7591aeb65ab36d036024c17c376d0954748e57004acdd087af"} err="failed to get container status \"53889c52f8902c7591aeb65ab36d036024c17c376d0954748e57004acdd087af\": rpc error: code = NotFound desc = could not find container \"53889c52f8902c7591aeb65ab36d036024c17c376d0954748e57004acdd087af\": container with ID starting with 53889c52f8902c7591aeb65ab36d036024c17c376d0954748e57004acdd087af not found: ID does not exist" Jan 29 06:55:44 crc kubenswrapper[5017]: I0129 06:55:44.026816 5017 scope.go:117] "RemoveContainer" containerID="9ac25b2050f18ac311238d9b864d5a8a1fcca28b5c5572f41cecb394004b50d6" Jan 29 06:55:44 crc kubenswrapper[5017]: I0129 06:55:44.028260 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 29 06:55:44 crc kubenswrapper[5017]: I0129 06:55:44.032168 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ac25b2050f18ac311238d9b864d5a8a1fcca28b5c5572f41cecb394004b50d6"} err="failed to get container status \"9ac25b2050f18ac311238d9b864d5a8a1fcca28b5c5572f41cecb394004b50d6\": rpc error: code = NotFound desc = could not find container \"9ac25b2050f18ac311238d9b864d5a8a1fcca28b5c5572f41cecb394004b50d6\": container with ID starting with 9ac25b2050f18ac311238d9b864d5a8a1fcca28b5c5572f41cecb394004b50d6 not found: ID does not exist" Jan 29 06:55:44 crc kubenswrapper[5017]: I0129 06:55:44.032233 5017 scope.go:117] 
"RemoveContainer" containerID="53889c52f8902c7591aeb65ab36d036024c17c376d0954748e57004acdd087af" Jan 29 06:55:44 crc kubenswrapper[5017]: I0129 06:55:44.033026 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 29 06:55:44 crc kubenswrapper[5017]: I0129 06:55:44.035159 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53889c52f8902c7591aeb65ab36d036024c17c376d0954748e57004acdd087af"} err="failed to get container status \"53889c52f8902c7591aeb65ab36d036024c17c376d0954748e57004acdd087af\": rpc error: code = NotFound desc = could not find container \"53889c52f8902c7591aeb65ab36d036024c17c376d0954748e57004acdd087af\": container with ID starting with 53889c52f8902c7591aeb65ab36d036024c17c376d0954748e57004acdd087af not found: ID does not exist" Jan 29 06:55:44 crc kubenswrapper[5017]: I0129 06:55:44.041201 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 06:55:44 crc kubenswrapper[5017]: I0129 06:55:44.095545 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43780e1e-04f5-4c49-939d-98d67873960e-config-data\") pod \"nova-metadata-0\" (UID: \"43780e1e-04f5-4c49-939d-98d67873960e\") " pod="openstack/nova-metadata-0" Jan 29 06:55:44 crc kubenswrapper[5017]: I0129 06:55:44.095650 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/43780e1e-04f5-4c49-939d-98d67873960e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"43780e1e-04f5-4c49-939d-98d67873960e\") " pod="openstack/nova-metadata-0" Jan 29 06:55:44 crc kubenswrapper[5017]: I0129 06:55:44.095685 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/43780e1e-04f5-4c49-939d-98d67873960e-logs\") pod \"nova-metadata-0\" (UID: \"43780e1e-04f5-4c49-939d-98d67873960e\") " pod="openstack/nova-metadata-0" Jan 29 06:55:44 crc kubenswrapper[5017]: I0129 06:55:44.096072 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4w4vk\" (UniqueName: \"kubernetes.io/projected/43780e1e-04f5-4c49-939d-98d67873960e-kube-api-access-4w4vk\") pod \"nova-metadata-0\" (UID: \"43780e1e-04f5-4c49-939d-98d67873960e\") " pod="openstack/nova-metadata-0" Jan 29 06:55:44 crc kubenswrapper[5017]: I0129 06:55:44.096205 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43780e1e-04f5-4c49-939d-98d67873960e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"43780e1e-04f5-4c49-939d-98d67873960e\") " pod="openstack/nova-metadata-0" Jan 29 06:55:44 crc kubenswrapper[5017]: I0129 06:55:44.197099 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43780e1e-04f5-4c49-939d-98d67873960e-config-data\") pod \"nova-metadata-0\" (UID: \"43780e1e-04f5-4c49-939d-98d67873960e\") " pod="openstack/nova-metadata-0" Jan 29 06:55:44 crc kubenswrapper[5017]: I0129 06:55:44.197175 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/43780e1e-04f5-4c49-939d-98d67873960e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"43780e1e-04f5-4c49-939d-98d67873960e\") " pod="openstack/nova-metadata-0" Jan 29 06:55:44 crc kubenswrapper[5017]: I0129 06:55:44.197196 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/43780e1e-04f5-4c49-939d-98d67873960e-logs\") pod \"nova-metadata-0\" (UID: \"43780e1e-04f5-4c49-939d-98d67873960e\") " pod="openstack/nova-metadata-0" Jan 29 06:55:44 crc kubenswrapper[5017]: I0129 06:55:44.197269 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4w4vk\" (UniqueName: \"kubernetes.io/projected/43780e1e-04f5-4c49-939d-98d67873960e-kube-api-access-4w4vk\") pod \"nova-metadata-0\" (UID: \"43780e1e-04f5-4c49-939d-98d67873960e\") " pod="openstack/nova-metadata-0" Jan 29 06:55:44 crc kubenswrapper[5017]: I0129 06:55:44.197303 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43780e1e-04f5-4c49-939d-98d67873960e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"43780e1e-04f5-4c49-939d-98d67873960e\") " pod="openstack/nova-metadata-0" Jan 29 06:55:44 crc kubenswrapper[5017]: I0129 06:55:44.197874 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/43780e1e-04f5-4c49-939d-98d67873960e-logs\") pod \"nova-metadata-0\" (UID: \"43780e1e-04f5-4c49-939d-98d67873960e\") " pod="openstack/nova-metadata-0" Jan 29 06:55:44 crc kubenswrapper[5017]: I0129 06:55:44.203094 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/43780e1e-04f5-4c49-939d-98d67873960e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"43780e1e-04f5-4c49-939d-98d67873960e\") " pod="openstack/nova-metadata-0" Jan 29 06:55:44 crc kubenswrapper[5017]: I0129 06:55:44.204283 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43780e1e-04f5-4c49-939d-98d67873960e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"43780e1e-04f5-4c49-939d-98d67873960e\") " pod="openstack/nova-metadata-0" Jan 29 06:55:44 crc kubenswrapper[5017]: I0129 06:55:44.206493 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43780e1e-04f5-4c49-939d-98d67873960e-config-data\") pod \"nova-metadata-0\" (UID: \"43780e1e-04f5-4c49-939d-98d67873960e\") " pod="openstack/nova-metadata-0" Jan 29 06:55:44 crc kubenswrapper[5017]: I0129 06:55:44.217497 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4w4vk\" (UniqueName: \"kubernetes.io/projected/43780e1e-04f5-4c49-939d-98d67873960e-kube-api-access-4w4vk\") pod \"nova-metadata-0\" (UID: \"43780e1e-04f5-4c49-939d-98d67873960e\") " pod="openstack/nova-metadata-0" Jan 29 06:55:44 crc kubenswrapper[5017]: I0129 06:55:44.328770 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76a68bb7-8566-4ef7-9749-5578ae481f94" path="/var/lib/kubelet/pods/76a68bb7-8566-4ef7-9749-5578ae481f94/volumes" Jan 29 06:55:44 crc kubenswrapper[5017]: I0129 06:55:44.346252 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 06:55:44 crc kubenswrapper[5017]: I0129 06:55:44.856055 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 06:55:44 crc kubenswrapper[5017]: I0129 06:55:44.957574 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"43780e1e-04f5-4c49-939d-98d67873960e","Type":"ContainerStarted","Data":"a019664525639f4ec62b8982ce9ca8287057a2f5821aac304cb449f63455492a"} Jan 29 06:55:45 crc kubenswrapper[5017]: I0129 06:55:45.973474 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"43780e1e-04f5-4c49-939d-98d67873960e","Type":"ContainerStarted","Data":"83e5c6842c2ae4c90dd1467a268fb1c1c4d3f39298700fd9e606fd4c1e14c9a5"} Jan 29 06:55:45 crc kubenswrapper[5017]: I0129 06:55:45.974148 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"43780e1e-04f5-4c49-939d-98d67873960e","Type":"ContainerStarted","Data":"5e6927377f55852c53bcd9d42bdc989c99f22f1d933df0a1b5efe37258571d15"} Jan 29 06:55:46 crc kubenswrapper[5017]: I0129 06:55:46.015542 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.015514046 podStartE2EDuration="3.015514046s" podCreationTimestamp="2026-01-29 06:55:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:55:46.001835757 +0000 UTC m=+1232.376283407" watchObservedRunningTime="2026-01-29 06:55:46.015514046 +0000 UTC m=+1232.389961646" Jan 29 06:55:46 crc kubenswrapper[5017]: I0129 06:55:46.985187 5017 generic.go:334] "Generic (PLEG): container finished" podID="8e2f340e-2e9f-4711-a13b-1618bd1fbec4" containerID="ef3451697bd4ac7b679400900f02f4c1ce12227edc277ad5364540b647c35d9d" exitCode=0 Jan 29 06:55:46 crc kubenswrapper[5017]: I0129 06:55:46.985613 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-wxns2" event={"ID":"8e2f340e-2e9f-4711-a13b-1618bd1fbec4","Type":"ContainerDied","Data":"ef3451697bd4ac7b679400900f02f4c1ce12227edc277ad5364540b647c35d9d"} Jan 29 06:55:46 crc kubenswrapper[5017]: I0129 06:55:46.989946 5017 generic.go:334] "Generic (PLEG): container finished" podID="afac06a5-272c-48d3-8916-775a5bd3eb54" containerID="7126adb7a43e0f5a3fc8b5d5e778cd421a3c22b31f278ea3a1f03d7a07ee2372" exitCode=0 Jan 29 06:55:46 crc kubenswrapper[5017]: I0129 06:55:46.990055 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-x7tq6" event={"ID":"afac06a5-272c-48d3-8916-775a5bd3eb54","Type":"ContainerDied","Data":"7126adb7a43e0f5a3fc8b5d5e778cd421a3c22b31f278ea3a1f03d7a07ee2372"} Jan 29 06:55:47 crc kubenswrapper[5017]: I0129 06:55:47.281933 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 29 06:55:47 crc kubenswrapper[5017]: I0129 06:55:47.452997 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 29 06:55:47 crc kubenswrapper[5017]: I0129 06:55:47.496574 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 29 06:55:47 crc kubenswrapper[5017]: I0129 06:55:47.584244 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-647df7b8c5-mxw5x" Jan 29 06:55:47 crc kubenswrapper[5017]: I0129 
06:55:47.643042 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 29 06:55:47 crc kubenswrapper[5017]: I0129 06:55:47.643110 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 29 06:55:47 crc kubenswrapper[5017]: I0129 06:55:47.680025 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75dbb546bf-zsnd5"] Jan 29 06:55:47 crc kubenswrapper[5017]: I0129 06:55:47.680326 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-75dbb546bf-zsnd5" podUID="223272bf-db73-426c-ad7e-78093ad4316a" containerName="dnsmasq-dns" containerID="cri-o://b1e4243c0e5113057f81a19968de08adad0ab0c168be64b85e394c209e09192c" gracePeriod=10 Jan 29 06:55:48 crc kubenswrapper[5017]: I0129 06:55:48.003487 5017 generic.go:334] "Generic (PLEG): container finished" podID="223272bf-db73-426c-ad7e-78093ad4316a" containerID="b1e4243c0e5113057f81a19968de08adad0ab0c168be64b85e394c209e09192c" exitCode=0 Jan 29 06:55:48 crc kubenswrapper[5017]: I0129 06:55:48.003733 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75dbb546bf-zsnd5" event={"ID":"223272bf-db73-426c-ad7e-78093ad4316a","Type":"ContainerDied","Data":"b1e4243c0e5113057f81a19968de08adad0ab0c168be64b85e394c209e09192c"} Jan 29 06:55:48 crc kubenswrapper[5017]: I0129 06:55:48.076244 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 29 06:55:48 crc kubenswrapper[5017]: I0129 06:55:48.368237 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75dbb546bf-zsnd5" Jan 29 06:55:48 crc kubenswrapper[5017]: I0129 06:55:48.499901 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/223272bf-db73-426c-ad7e-78093ad4316a-ovsdbserver-sb\") pod \"223272bf-db73-426c-ad7e-78093ad4316a\" (UID: \"223272bf-db73-426c-ad7e-78093ad4316a\") " Jan 29 06:55:48 crc kubenswrapper[5017]: I0129 06:55:48.500088 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/223272bf-db73-426c-ad7e-78093ad4316a-config\") pod \"223272bf-db73-426c-ad7e-78093ad4316a\" (UID: \"223272bf-db73-426c-ad7e-78093ad4316a\") " Jan 29 06:55:48 crc kubenswrapper[5017]: I0129 06:55:48.500175 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/223272bf-db73-426c-ad7e-78093ad4316a-dns-swift-storage-0\") pod \"223272bf-db73-426c-ad7e-78093ad4316a\" (UID: \"223272bf-db73-426c-ad7e-78093ad4316a\") " Jan 29 06:55:48 crc kubenswrapper[5017]: I0129 06:55:48.500347 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n4gps\" (UniqueName: \"kubernetes.io/projected/223272bf-db73-426c-ad7e-78093ad4316a-kube-api-access-n4gps\") pod \"223272bf-db73-426c-ad7e-78093ad4316a\" (UID: \"223272bf-db73-426c-ad7e-78093ad4316a\") " Jan 29 06:55:48 crc kubenswrapper[5017]: I0129 06:55:48.500402 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/223272bf-db73-426c-ad7e-78093ad4316a-dns-svc\") pod \"223272bf-db73-426c-ad7e-78093ad4316a\" (UID: \"223272bf-db73-426c-ad7e-78093ad4316a\") " Jan 29 06:55:48 crc kubenswrapper[5017]: I0129 
06:55:48.500564 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/223272bf-db73-426c-ad7e-78093ad4316a-ovsdbserver-nb\") pod \"223272bf-db73-426c-ad7e-78093ad4316a\" (UID: \"223272bf-db73-426c-ad7e-78093ad4316a\") " Jan 29 06:55:48 crc kubenswrapper[5017]: I0129 06:55:48.523643 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/223272bf-db73-426c-ad7e-78093ad4316a-kube-api-access-n4gps" (OuterVolumeSpecName: "kube-api-access-n4gps") pod "223272bf-db73-426c-ad7e-78093ad4316a" (UID: "223272bf-db73-426c-ad7e-78093ad4316a"). InnerVolumeSpecName "kube-api-access-n4gps". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:55:48 crc kubenswrapper[5017]: I0129 06:55:48.613837 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n4gps\" (UniqueName: \"kubernetes.io/projected/223272bf-db73-426c-ad7e-78093ad4316a-kube-api-access-n4gps\") on node \"crc\" DevicePath \"\"" Jan 29 06:55:48 crc kubenswrapper[5017]: I0129 06:55:48.710815 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/223272bf-db73-426c-ad7e-78093ad4316a-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "223272bf-db73-426c-ad7e-78093ad4316a" (UID: "223272bf-db73-426c-ad7e-78093ad4316a"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:55:48 crc kubenswrapper[5017]: I0129 06:55:48.715644 5017 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/223272bf-db73-426c-ad7e-78093ad4316a-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 29 06:55:48 crc kubenswrapper[5017]: I0129 06:55:48.728600 5017 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="fdf3cdff-4a01-4e98-b5d2-e15fcbef271a" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.190:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 06:55:48 crc kubenswrapper[5017]: I0129 06:55:48.728623 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/223272bf-db73-426c-ad7e-78093ad4316a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "223272bf-db73-426c-ad7e-78093ad4316a" (UID: "223272bf-db73-426c-ad7e-78093ad4316a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:55:48 crc kubenswrapper[5017]: I0129 06:55:48.728828 5017 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="fdf3cdff-4a01-4e98-b5d2-e15fcbef271a" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.190:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 06:55:48 crc kubenswrapper[5017]: I0129 06:55:48.739616 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/223272bf-db73-426c-ad7e-78093ad4316a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "223272bf-db73-426c-ad7e-78093ad4316a" (UID: "223272bf-db73-426c-ad7e-78093ad4316a"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:55:48 crc kubenswrapper[5017]: I0129 06:55:48.787401 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/223272bf-db73-426c-ad7e-78093ad4316a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "223272bf-db73-426c-ad7e-78093ad4316a" (UID: "223272bf-db73-426c-ad7e-78093ad4316a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:55:48 crc kubenswrapper[5017]: I0129 06:55:48.791583 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/223272bf-db73-426c-ad7e-78093ad4316a-config" (OuterVolumeSpecName: "config") pod "223272bf-db73-426c-ad7e-78093ad4316a" (UID: "223272bf-db73-426c-ad7e-78093ad4316a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:55:48 crc kubenswrapper[5017]: I0129 06:55:48.797389 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-x7tq6" Jan 29 06:55:48 crc kubenswrapper[5017]: I0129 06:55:48.817367 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-wxns2" Jan 29 06:55:48 crc kubenswrapper[5017]: I0129 06:55:48.825056 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/afac06a5-272c-48d3-8916-775a5bd3eb54-scripts\") pod \"afac06a5-272c-48d3-8916-775a5bd3eb54\" (UID: \"afac06a5-272c-48d3-8916-775a5bd3eb54\") " Jan 29 06:55:48 crc kubenswrapper[5017]: I0129 06:55:48.825184 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8swhj\" (UniqueName: \"kubernetes.io/projected/afac06a5-272c-48d3-8916-775a5bd3eb54-kube-api-access-8swhj\") pod \"afac06a5-272c-48d3-8916-775a5bd3eb54\" (UID: \"afac06a5-272c-48d3-8916-775a5bd3eb54\") " Jan 29 06:55:48 crc kubenswrapper[5017]: I0129 06:55:48.825270 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afac06a5-272c-48d3-8916-775a5bd3eb54-config-data\") pod \"afac06a5-272c-48d3-8916-775a5bd3eb54\" (UID: \"afac06a5-272c-48d3-8916-775a5bd3eb54\") " Jan 29 06:55:48 crc kubenswrapper[5017]: I0129 06:55:48.825554 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afac06a5-272c-48d3-8916-775a5bd3eb54-combined-ca-bundle\") pod \"afac06a5-272c-48d3-8916-775a5bd3eb54\" (UID: \"afac06a5-272c-48d3-8916-775a5bd3eb54\") " Jan 29 06:55:48 crc kubenswrapper[5017]: I0129 06:55:48.826179 5017 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/223272bf-db73-426c-ad7e-78093ad4316a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 06:55:48 crc kubenswrapper[5017]: I0129 06:55:48.826199 5017 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/223272bf-db73-426c-ad7e-78093ad4316a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 06:55:48 crc kubenswrapper[5017]: I0129 06:55:48.826210 5017 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/223272bf-db73-426c-ad7e-78093ad4316a-config\") on node \"crc\" DevicePath \"\"" Jan 29 06:55:48 crc kubenswrapper[5017]: I0129 
06:55:48.826221 5017 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/223272bf-db73-426c-ad7e-78093ad4316a-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 06:55:48 crc kubenswrapper[5017]: I0129 06:55:48.837457 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afac06a5-272c-48d3-8916-775a5bd3eb54-kube-api-access-8swhj" (OuterVolumeSpecName: "kube-api-access-8swhj") pod "afac06a5-272c-48d3-8916-775a5bd3eb54" (UID: "afac06a5-272c-48d3-8916-775a5bd3eb54"). InnerVolumeSpecName "kube-api-access-8swhj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:55:48 crc kubenswrapper[5017]: I0129 06:55:48.847213 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afac06a5-272c-48d3-8916-775a5bd3eb54-scripts" (OuterVolumeSpecName: "scripts") pod "afac06a5-272c-48d3-8916-775a5bd3eb54" (UID: "afac06a5-272c-48d3-8916-775a5bd3eb54"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:55:48 crc kubenswrapper[5017]: I0129 06:55:48.948631 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afac06a5-272c-48d3-8916-775a5bd3eb54-config-data" (OuterVolumeSpecName: "config-data") pod "afac06a5-272c-48d3-8916-775a5bd3eb54" (UID: "afac06a5-272c-48d3-8916-775a5bd3eb54"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:55:48 crc kubenswrapper[5017]: I0129 06:55:48.948744 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afac06a5-272c-48d3-8916-775a5bd3eb54-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "afac06a5-272c-48d3-8916-775a5bd3eb54" (UID: "afac06a5-272c-48d3-8916-775a5bd3eb54"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:55:48 crc kubenswrapper[5017]: I0129 06:55:48.955072 5017 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/afac06a5-272c-48d3-8916-775a5bd3eb54-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 06:55:48 crc kubenswrapper[5017]: I0129 06:55:48.955177 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8swhj\" (UniqueName: \"kubernetes.io/projected/afac06a5-272c-48d3-8916-775a5bd3eb54-kube-api-access-8swhj\") on node \"crc\" DevicePath \"\"" Jan 29 06:55:49 crc kubenswrapper[5017]: I0129 06:55:49.037281 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75dbb546bf-zsnd5" event={"ID":"223272bf-db73-426c-ad7e-78093ad4316a","Type":"ContainerDied","Data":"5ec1f2da8d084bbf63c84f935a93a8d43f513fd39374bcb78564c645994c5e00"} Jan 29 06:55:49 crc kubenswrapper[5017]: I0129 06:55:49.037416 5017 scope.go:117] "RemoveContainer" containerID="b1e4243c0e5113057f81a19968de08adad0ab0c168be64b85e394c209e09192c" Jan 29 06:55:49 crc kubenswrapper[5017]: I0129 06:55:49.046222 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75dbb546bf-zsnd5" Jan 29 06:55:49 crc kubenswrapper[5017]: I0129 06:55:49.056789 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e2f340e-2e9f-4711-a13b-1618bd1fbec4-scripts\") pod \"8e2f340e-2e9f-4711-a13b-1618bd1fbec4\" (UID: \"8e2f340e-2e9f-4711-a13b-1618bd1fbec4\") " Jan 29 06:55:49 crc kubenswrapper[5017]: I0129 06:55:49.056840 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e2f340e-2e9f-4711-a13b-1618bd1fbec4-combined-ca-bundle\") pod \"8e2f340e-2e9f-4711-a13b-1618bd1fbec4\" (UID: \"8e2f340e-2e9f-4711-a13b-1618bd1fbec4\") " Jan 29 06:55:49 crc kubenswrapper[5017]: I0129 06:55:49.056988 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sm2d8\" (UniqueName: \"kubernetes.io/projected/8e2f340e-2e9f-4711-a13b-1618bd1fbec4-kube-api-access-sm2d8\") pod \"8e2f340e-2e9f-4711-a13b-1618bd1fbec4\" (UID: \"8e2f340e-2e9f-4711-a13b-1618bd1fbec4\") " Jan 29 06:55:49 crc kubenswrapper[5017]: I0129 06:55:49.057210 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e2f340e-2e9f-4711-a13b-1618bd1fbec4-config-data\") pod \"8e2f340e-2e9f-4711-a13b-1618bd1fbec4\" (UID: \"8e2f340e-2e9f-4711-a13b-1618bd1fbec4\") " Jan 29 06:55:49 crc kubenswrapper[5017]: I0129 06:55:49.060170 5017 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afac06a5-272c-48d3-8916-775a5bd3eb54-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 06:55:49 crc kubenswrapper[5017]: I0129 06:55:49.060228 5017 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afac06a5-272c-48d3-8916-775a5bd3eb54-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 06:55:49 crc kubenswrapper[5017]: I0129 06:55:49.070429 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e2f340e-2e9f-4711-a13b-1618bd1fbec4-scripts" (OuterVolumeSpecName: "scripts") pod "8e2f340e-2e9f-4711-a13b-1618bd1fbec4" (UID: "8e2f340e-2e9f-4711-a13b-1618bd1fbec4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:55:49 crc kubenswrapper[5017]: I0129 06:55:49.075220 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e2f340e-2e9f-4711-a13b-1618bd1fbec4-kube-api-access-sm2d8" (OuterVolumeSpecName: "kube-api-access-sm2d8") pod "8e2f340e-2e9f-4711-a13b-1618bd1fbec4" (UID: "8e2f340e-2e9f-4711-a13b-1618bd1fbec4"). InnerVolumeSpecName "kube-api-access-sm2d8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:55:49 crc kubenswrapper[5017]: I0129 06:55:49.077066 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-wxns2" event={"ID":"8e2f340e-2e9f-4711-a13b-1618bd1fbec4","Type":"ContainerDied","Data":"6a6dba17997958e324e0509b5e2b007bda2ae931fc8bcd4f6117ad2ea587c1c9"} Jan 29 06:55:49 crc kubenswrapper[5017]: I0129 06:55:49.077110 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a6dba17997958e324e0509b5e2b007bda2ae931fc8bcd4f6117ad2ea587c1c9" Jan 29 06:55:49 crc kubenswrapper[5017]: I0129 06:55:49.077120 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-wxns2" Jan 29 06:55:49 crc kubenswrapper[5017]: I0129 06:55:49.081655 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-x7tq6" Jan 29 06:55:49 crc kubenswrapper[5017]: I0129 06:55:49.082104 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-x7tq6" event={"ID":"afac06a5-272c-48d3-8916-775a5bd3eb54","Type":"ContainerDied","Data":"f88075c44b989a898041c59c1a1a8fabc4f64806af7ff5ad65b66ad2ef41162b"} Jan 29 06:55:49 crc kubenswrapper[5017]: I0129 06:55:49.082164 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f88075c44b989a898041c59c1a1a8fabc4f64806af7ff5ad65b66ad2ef41162b" Jan 29 06:55:49 crc kubenswrapper[5017]: I0129 06:55:49.109581 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e2f340e-2e9f-4711-a13b-1618bd1fbec4-config-data" (OuterVolumeSpecName: "config-data") pod "8e2f340e-2e9f-4711-a13b-1618bd1fbec4" (UID: "8e2f340e-2e9f-4711-a13b-1618bd1fbec4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:55:49 crc kubenswrapper[5017]: I0129 06:55:49.130108 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e2f340e-2e9f-4711-a13b-1618bd1fbec4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8e2f340e-2e9f-4711-a13b-1618bd1fbec4" (UID: "8e2f340e-2e9f-4711-a13b-1618bd1fbec4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:55:49 crc kubenswrapper[5017]: I0129 06:55:49.164488 5017 scope.go:117] "RemoveContainer" containerID="765e5c7055966bd5123ab29138201b56f855c7c00e3b585bfed023db37b82943" Jan 29 06:55:49 crc kubenswrapper[5017]: I0129 06:55:49.168923 5017 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e2f340e-2e9f-4711-a13b-1618bd1fbec4-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 06:55:49 crc kubenswrapper[5017]: I0129 06:55:49.169032 5017 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e2f340e-2e9f-4711-a13b-1618bd1fbec4-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 06:55:49 crc kubenswrapper[5017]: I0129 06:55:49.169066 5017 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e2f340e-2e9f-4711-a13b-1618bd1fbec4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 06:55:49 crc kubenswrapper[5017]: I0129 06:55:49.169081 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sm2d8\" (UniqueName: \"kubernetes.io/projected/8e2f340e-2e9f-4711-a13b-1618bd1fbec4-kube-api-access-sm2d8\") on node \"crc\" DevicePath \"\"" Jan 29 06:55:49 crc kubenswrapper[5017]: I0129 06:55:49.179403 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 29 06:55:49 crc kubenswrapper[5017]: E0129 06:55:49.181341 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="223272bf-db73-426c-ad7e-78093ad4316a" containerName="dnsmasq-dns" Jan 29 06:55:49 crc kubenswrapper[5017]: I0129 06:55:49.181447 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="223272bf-db73-426c-ad7e-78093ad4316a" containerName="dnsmasq-dns" Jan 29 06:55:49 crc kubenswrapper[5017]: E0129 06:55:49.181530 5017 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="afac06a5-272c-48d3-8916-775a5bd3eb54" containerName="nova-cell1-conductor-db-sync" Jan 29 06:55:49 crc kubenswrapper[5017]: I0129 06:55:49.181583 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="afac06a5-272c-48d3-8916-775a5bd3eb54" containerName="nova-cell1-conductor-db-sync" Jan 29 06:55:49 crc kubenswrapper[5017]: E0129 06:55:49.181638 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e2f340e-2e9f-4711-a13b-1618bd1fbec4" containerName="nova-manage" Jan 29 06:55:49 crc kubenswrapper[5017]: I0129 06:55:49.181687 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e2f340e-2e9f-4711-a13b-1618bd1fbec4" containerName="nova-manage" Jan 29 06:55:49 crc kubenswrapper[5017]: E0129 06:55:49.181756 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="223272bf-db73-426c-ad7e-78093ad4316a" containerName="init" Jan 29 06:55:49 crc kubenswrapper[5017]: I0129 06:55:49.182207 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="223272bf-db73-426c-ad7e-78093ad4316a" containerName="init" Jan 29 06:55:49 crc kubenswrapper[5017]: I0129 06:55:49.182497 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="223272bf-db73-426c-ad7e-78093ad4316a" containerName="dnsmasq-dns" Jan 29 06:55:49 crc kubenswrapper[5017]: I0129 06:55:49.182567 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="afac06a5-272c-48d3-8916-775a5bd3eb54" containerName="nova-cell1-conductor-db-sync" Jan 29 06:55:49 crc kubenswrapper[5017]: I0129 06:55:49.182634 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e2f340e-2e9f-4711-a13b-1618bd1fbec4" containerName="nova-manage" Jan 29 06:55:49 crc kubenswrapper[5017]: I0129 06:55:49.183470 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 29 06:55:49 crc kubenswrapper[5017]: I0129 06:55:49.188648 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 29 06:55:49 crc kubenswrapper[5017]: I0129 06:55:49.216036 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 29 06:55:49 crc kubenswrapper[5017]: I0129 06:55:49.242632 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75dbb546bf-zsnd5"] Jan 29 06:55:49 crc kubenswrapper[5017]: I0129 06:55:49.264596 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-75dbb546bf-zsnd5"] Jan 29 06:55:49 crc kubenswrapper[5017]: I0129 06:55:49.285777 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 29 06:55:49 crc kubenswrapper[5017]: I0129 06:55:49.286455 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="fdf3cdff-4a01-4e98-b5d2-e15fcbef271a" containerName="nova-api-log" containerID="cri-o://fedc226e17785fa3152550c81a29a15532cff7a761ade51ead01b070ead12c18" gracePeriod=30 Jan 29 06:55:49 crc kubenswrapper[5017]: I0129 06:55:49.287085 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="fdf3cdff-4a01-4e98-b5d2-e15fcbef271a" containerName="nova-api-api" containerID="cri-o://12d4c58fa9c1467737f72b212fda9159cdce2af2219150ba1fd91d1371d488bd" gracePeriod=30 Jan 29 06:55:49 crc kubenswrapper[5017]: I0129 06:55:49.332114 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 06:55:49 crc kubenswrapper[5017]: I0129 06:55:49.332896 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="43780e1e-04f5-4c49-939d-98d67873960e" containerName="nova-metadata-log" containerID="cri-o://5e6927377f55852c53bcd9d42bdc989c99f22f1d933df0a1b5efe37258571d15" gracePeriod=30 Jan 29 06:55:49 crc kubenswrapper[5017]: I0129 06:55:49.333086 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="43780e1e-04f5-4c49-939d-98d67873960e" containerName="nova-metadata-metadata" containerID="cri-o://83e5c6842c2ae4c90dd1467a268fb1c1c4d3f39298700fd9e606fd4c1e14c9a5" gracePeriod=30 Jan 29 06:55:49 crc kubenswrapper[5017]: I0129 06:55:49.347746 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 29 06:55:49 crc kubenswrapper[5017]: I0129 06:55:49.347828 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 29 06:55:49 crc kubenswrapper[5017]: I0129 06:55:49.372852 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbwpd\" (UniqueName: \"kubernetes.io/projected/18edd5b3-27eb-43f3-8d6b-03490c243c78-kube-api-access-nbwpd\") pod \"nova-cell1-conductor-0\" (UID: \"18edd5b3-27eb-43f3-8d6b-03490c243c78\") " pod="openstack/nova-cell1-conductor-0" Jan 29 06:55:49 crc kubenswrapper[5017]: I0129 06:55:49.373250 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18edd5b3-27eb-43f3-8d6b-03490c243c78-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"18edd5b3-27eb-43f3-8d6b-03490c243c78\") " 
pod="openstack/nova-cell1-conductor-0" Jan 29 06:55:49 crc kubenswrapper[5017]: I0129 06:55:49.373311 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18edd5b3-27eb-43f3-8d6b-03490c243c78-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"18edd5b3-27eb-43f3-8d6b-03490c243c78\") " pod="openstack/nova-cell1-conductor-0" Jan 29 06:55:49 crc kubenswrapper[5017]: I0129 06:55:49.445601 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 06:55:49 crc kubenswrapper[5017]: I0129 06:55:49.476477 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18edd5b3-27eb-43f3-8d6b-03490c243c78-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"18edd5b3-27eb-43f3-8d6b-03490c243c78\") " pod="openstack/nova-cell1-conductor-0" Jan 29 06:55:49 crc kubenswrapper[5017]: I0129 06:55:49.476535 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18edd5b3-27eb-43f3-8d6b-03490c243c78-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"18edd5b3-27eb-43f3-8d6b-03490c243c78\") " pod="openstack/nova-cell1-conductor-0" Jan 29 06:55:49 crc kubenswrapper[5017]: I0129 06:55:49.476635 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbwpd\" (UniqueName: \"kubernetes.io/projected/18edd5b3-27eb-43f3-8d6b-03490c243c78-kube-api-access-nbwpd\") pod \"nova-cell1-conductor-0\" (UID: \"18edd5b3-27eb-43f3-8d6b-03490c243c78\") " pod="openstack/nova-cell1-conductor-0" Jan 29 06:55:49 crc kubenswrapper[5017]: I0129 06:55:49.480630 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18edd5b3-27eb-43f3-8d6b-03490c243c78-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"18edd5b3-27eb-43f3-8d6b-03490c243c78\") " pod="openstack/nova-cell1-conductor-0" Jan 29 06:55:49 crc kubenswrapper[5017]: I0129 06:55:49.481534 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18edd5b3-27eb-43f3-8d6b-03490c243c78-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"18edd5b3-27eb-43f3-8d6b-03490c243c78\") " pod="openstack/nova-cell1-conductor-0" Jan 29 06:55:49 crc kubenswrapper[5017]: I0129 06:55:49.500720 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbwpd\" (UniqueName: \"kubernetes.io/projected/18edd5b3-27eb-43f3-8d6b-03490c243c78-kube-api-access-nbwpd\") pod \"nova-cell1-conductor-0\" (UID: \"18edd5b3-27eb-43f3-8d6b-03490c243c78\") " pod="openstack/nova-cell1-conductor-0" Jan 29 06:55:49 crc kubenswrapper[5017]: I0129 06:55:49.518936 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 29 06:55:50 crc kubenswrapper[5017]: I0129 06:55:50.045586 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 06:55:50 crc kubenswrapper[5017]: I0129 06:55:50.094906 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/43780e1e-04f5-4c49-939d-98d67873960e-logs\") pod \"43780e1e-04f5-4c49-939d-98d67873960e\" (UID: \"43780e1e-04f5-4c49-939d-98d67873960e\") " Jan 29 06:55:50 crc kubenswrapper[5017]: I0129 06:55:50.095018 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/43780e1e-04f5-4c49-939d-98d67873960e-nova-metadata-tls-certs\") pod \"43780e1e-04f5-4c49-939d-98d67873960e\" (UID: \"43780e1e-04f5-4c49-939d-98d67873960e\") " Jan 29 06:55:50 crc kubenswrapper[5017]: I0129 06:55:50.095055 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43780e1e-04f5-4c49-939d-98d67873960e-combined-ca-bundle\") pod \"43780e1e-04f5-4c49-939d-98d67873960e\" (UID: \"43780e1e-04f5-4c49-939d-98d67873960e\") " Jan 29 06:55:50 crc kubenswrapper[5017]: I0129 06:55:50.095100 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4w4vk\" (UniqueName: \"kubernetes.io/projected/43780e1e-04f5-4c49-939d-98d67873960e-kube-api-access-4w4vk\") pod \"43780e1e-04f5-4c49-939d-98d67873960e\" (UID: \"43780e1e-04f5-4c49-939d-98d67873960e\") " Jan 29 06:55:50 crc kubenswrapper[5017]: I0129 06:55:50.095135 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43780e1e-04f5-4c49-939d-98d67873960e-config-data\") pod \"43780e1e-04f5-4c49-939d-98d67873960e\" (UID: \"43780e1e-04f5-4c49-939d-98d67873960e\") " Jan 29 06:55:50 crc kubenswrapper[5017]: I0129 06:55:50.095842 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/43780e1e-04f5-4c49-939d-98d67873960e-logs" (OuterVolumeSpecName: "logs") pod "43780e1e-04f5-4c49-939d-98d67873960e" (UID: "43780e1e-04f5-4c49-939d-98d67873960e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:55:50 crc kubenswrapper[5017]: I0129 06:55:50.124614 5017 generic.go:334] "Generic (PLEG): container finished" podID="fdf3cdff-4a01-4e98-b5d2-e15fcbef271a" containerID="fedc226e17785fa3152550c81a29a15532cff7a761ade51ead01b070ead12c18" exitCode=143 Jan 29 06:55:50 crc kubenswrapper[5017]: I0129 06:55:50.124823 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fdf3cdff-4a01-4e98-b5d2-e15fcbef271a","Type":"ContainerDied","Data":"fedc226e17785fa3152550c81a29a15532cff7a761ade51ead01b070ead12c18"} Jan 29 06:55:50 crc kubenswrapper[5017]: I0129 06:55:50.132429 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43780e1e-04f5-4c49-939d-98d67873960e-kube-api-access-4w4vk" (OuterVolumeSpecName: "kube-api-access-4w4vk") pod "43780e1e-04f5-4c49-939d-98d67873960e" (UID: "43780e1e-04f5-4c49-939d-98d67873960e"). InnerVolumeSpecName "kube-api-access-4w4vk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:55:50 crc kubenswrapper[5017]: I0129 06:55:50.159331 5017 generic.go:334] "Generic (PLEG): container finished" podID="43780e1e-04f5-4c49-939d-98d67873960e" containerID="83e5c6842c2ae4c90dd1467a268fb1c1c4d3f39298700fd9e606fd4c1e14c9a5" exitCode=0 Jan 29 06:55:50 crc kubenswrapper[5017]: I0129 06:55:50.159387 5017 generic.go:334] "Generic (PLEG): container finished" podID="43780e1e-04f5-4c49-939d-98d67873960e" containerID="5e6927377f55852c53bcd9d42bdc989c99f22f1d933df0a1b5efe37258571d15" exitCode=143 Jan 29 06:55:50 crc kubenswrapper[5017]: I0129 06:55:50.159510 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"43780e1e-04f5-4c49-939d-98d67873960e","Type":"ContainerDied","Data":"83e5c6842c2ae4c90dd1467a268fb1c1c4d3f39298700fd9e606fd4c1e14c9a5"} Jan 29 06:55:50 crc kubenswrapper[5017]: I0129 06:55:50.159564 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"43780e1e-04f5-4c49-939d-98d67873960e","Type":"ContainerDied","Data":"5e6927377f55852c53bcd9d42bdc989c99f22f1d933df0a1b5efe37258571d15"} Jan 29 06:55:50 crc kubenswrapper[5017]: I0129 06:55:50.159581 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"43780e1e-04f5-4c49-939d-98d67873960e","Type":"ContainerDied","Data":"a019664525639f4ec62b8982ce9ca8287057a2f5821aac304cb449f63455492a"} Jan 29 06:55:50 crc kubenswrapper[5017]: I0129 06:55:50.159605 5017 scope.go:117] "RemoveContainer" containerID="83e5c6842c2ae4c90dd1467a268fb1c1c4d3f39298700fd9e606fd4c1e14c9a5" Jan 29 06:55:50 crc kubenswrapper[5017]: I0129 06:55:50.159855 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 06:55:50 crc kubenswrapper[5017]: I0129 06:55:50.170709 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43780e1e-04f5-4c49-939d-98d67873960e-config-data" (OuterVolumeSpecName: "config-data") pod "43780e1e-04f5-4c49-939d-98d67873960e" (UID: "43780e1e-04f5-4c49-939d-98d67873960e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:55:50 crc kubenswrapper[5017]: I0129 06:55:50.173374 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="db399f6d-aabe-4423-840e-56d457b5dcb4" containerName="nova-scheduler-scheduler" containerID="cri-o://9d0c65e51b85ba05d12159dce7ee82a418b6b4e72fa2ec46dc9f2c9a4c905fd4" gracePeriod=30 Jan 29 06:55:50 crc kubenswrapper[5017]: I0129 06:55:50.180067 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43780e1e-04f5-4c49-939d-98d67873960e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "43780e1e-04f5-4c49-939d-98d67873960e" (UID: "43780e1e-04f5-4c49-939d-98d67873960e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:55:50 crc kubenswrapper[5017]: I0129 06:55:50.196523 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 29 06:55:50 crc kubenswrapper[5017]: I0129 06:55:50.198594 5017 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/43780e1e-04f5-4c49-939d-98d67873960e-logs\") on node \"crc\" DevicePath \"\"" Jan 29 06:55:50 crc kubenswrapper[5017]: I0129 06:55:50.198625 5017 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43780e1e-04f5-4c49-939d-98d67873960e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 06:55:50 crc kubenswrapper[5017]: I0129 06:55:50.198636 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4w4vk\" (UniqueName: \"kubernetes.io/projected/43780e1e-04f5-4c49-939d-98d67873960e-kube-api-access-4w4vk\") on node \"crc\" DevicePath \"\"" Jan 29 06:55:50 crc kubenswrapper[5017]: I0129 06:55:50.198647 5017 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43780e1e-04f5-4c49-939d-98d67873960e-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 06:55:50 crc kubenswrapper[5017]: I0129 06:55:50.211910 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43780e1e-04f5-4c49-939d-98d67873960e-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "43780e1e-04f5-4c49-939d-98d67873960e" (UID: "43780e1e-04f5-4c49-939d-98d67873960e"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:55:50 crc kubenswrapper[5017]: I0129 06:55:50.260089 5017 scope.go:117] "RemoveContainer" containerID="5e6927377f55852c53bcd9d42bdc989c99f22f1d933df0a1b5efe37258571d15" Jan 29 06:55:50 crc kubenswrapper[5017]: I0129 06:55:50.301735 5017 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/43780e1e-04f5-4c49-939d-98d67873960e-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 06:55:50 crc kubenswrapper[5017]: I0129 06:55:50.311723 5017 scope.go:117] "RemoveContainer" containerID="83e5c6842c2ae4c90dd1467a268fb1c1c4d3f39298700fd9e606fd4c1e14c9a5" Jan 29 06:55:50 crc kubenswrapper[5017]: E0129 06:55:50.314130 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83e5c6842c2ae4c90dd1467a268fb1c1c4d3f39298700fd9e606fd4c1e14c9a5\": container with ID starting with 83e5c6842c2ae4c90dd1467a268fb1c1c4d3f39298700fd9e606fd4c1e14c9a5 not found: ID does not exist" containerID="83e5c6842c2ae4c90dd1467a268fb1c1c4d3f39298700fd9e606fd4c1e14c9a5" Jan 29 06:55:50 crc kubenswrapper[5017]: I0129 06:55:50.314193 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83e5c6842c2ae4c90dd1467a268fb1c1c4d3f39298700fd9e606fd4c1e14c9a5"} err="failed to get container status \"83e5c6842c2ae4c90dd1467a268fb1c1c4d3f39298700fd9e606fd4c1e14c9a5\": rpc error: code = NotFound desc = could not find container \"83e5c6842c2ae4c90dd1467a268fb1c1c4d3f39298700fd9e606fd4c1e14c9a5\": container with ID starting with 83e5c6842c2ae4c90dd1467a268fb1c1c4d3f39298700fd9e606fd4c1e14c9a5 not found: ID does not exist" Jan 29 06:55:50 crc kubenswrapper[5017]: I0129 06:55:50.314227 5017 scope.go:117] "RemoveContainer" 
containerID="5e6927377f55852c53bcd9d42bdc989c99f22f1d933df0a1b5efe37258571d15" Jan 29 06:55:50 crc kubenswrapper[5017]: E0129 06:55:50.319769 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e6927377f55852c53bcd9d42bdc989c99f22f1d933df0a1b5efe37258571d15\": container with ID starting with 5e6927377f55852c53bcd9d42bdc989c99f22f1d933df0a1b5efe37258571d15 not found: ID does not exist" containerID="5e6927377f55852c53bcd9d42bdc989c99f22f1d933df0a1b5efe37258571d15" Jan 29 06:55:50 crc kubenswrapper[5017]: I0129 06:55:50.321051 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e6927377f55852c53bcd9d42bdc989c99f22f1d933df0a1b5efe37258571d15"} err="failed to get container status \"5e6927377f55852c53bcd9d42bdc989c99f22f1d933df0a1b5efe37258571d15\": rpc error: code = NotFound desc = could not find container \"5e6927377f55852c53bcd9d42bdc989c99f22f1d933df0a1b5efe37258571d15\": container with ID starting with 5e6927377f55852c53bcd9d42bdc989c99f22f1d933df0a1b5efe37258571d15 not found: ID does not exist" Jan 29 06:55:50 crc kubenswrapper[5017]: I0129 06:55:50.321192 5017 scope.go:117] "RemoveContainer" containerID="83e5c6842c2ae4c90dd1467a268fb1c1c4d3f39298700fd9e606fd4c1e14c9a5" Jan 29 06:55:50 crc kubenswrapper[5017]: I0129 06:55:50.321895 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83e5c6842c2ae4c90dd1467a268fb1c1c4d3f39298700fd9e606fd4c1e14c9a5"} err="failed to get container status \"83e5c6842c2ae4c90dd1467a268fb1c1c4d3f39298700fd9e606fd4c1e14c9a5\": rpc error: code = NotFound desc = could not find container \"83e5c6842c2ae4c90dd1467a268fb1c1c4d3f39298700fd9e606fd4c1e14c9a5\": container with ID starting with 83e5c6842c2ae4c90dd1467a268fb1c1c4d3f39298700fd9e606fd4c1e14c9a5 not found: ID does not exist" Jan 29 06:55:50 crc kubenswrapper[5017]: I0129 06:55:50.321947 5017 scope.go:117] "RemoveContainer" containerID="5e6927377f55852c53bcd9d42bdc989c99f22f1d933df0a1b5efe37258571d15" Jan 29 06:55:50 crc kubenswrapper[5017]: I0129 06:55:50.326211 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e6927377f55852c53bcd9d42bdc989c99f22f1d933df0a1b5efe37258571d15"} err="failed to get container status \"5e6927377f55852c53bcd9d42bdc989c99f22f1d933df0a1b5efe37258571d15\": rpc error: code = NotFound desc = could not find container \"5e6927377f55852c53bcd9d42bdc989c99f22f1d933df0a1b5efe37258571d15\": container with ID starting with 5e6927377f55852c53bcd9d42bdc989c99f22f1d933df0a1b5efe37258571d15 not found: ID does not exist" Jan 29 06:55:50 crc kubenswrapper[5017]: I0129 06:55:50.343497 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="223272bf-db73-426c-ad7e-78093ad4316a" path="/var/lib/kubelet/pods/223272bf-db73-426c-ad7e-78093ad4316a/volumes" Jan 29 06:55:50 crc kubenswrapper[5017]: I0129 06:55:50.493931 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 06:55:50 crc kubenswrapper[5017]: I0129 06:55:50.528805 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 06:55:50 crc kubenswrapper[5017]: I0129 06:55:50.547695 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 29 06:55:50 crc kubenswrapper[5017]: E0129 06:55:50.548336 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43780e1e-04f5-4c49-939d-98d67873960e" 
containerName="nova-metadata-metadata" Jan 29 06:55:50 crc kubenswrapper[5017]: I0129 06:55:50.548360 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="43780e1e-04f5-4c49-939d-98d67873960e" containerName="nova-metadata-metadata" Jan 29 06:55:50 crc kubenswrapper[5017]: E0129 06:55:50.548400 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43780e1e-04f5-4c49-939d-98d67873960e" containerName="nova-metadata-log" Jan 29 06:55:50 crc kubenswrapper[5017]: I0129 06:55:50.548408 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="43780e1e-04f5-4c49-939d-98d67873960e" containerName="nova-metadata-log" Jan 29 06:55:50 crc kubenswrapper[5017]: I0129 06:55:50.548613 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="43780e1e-04f5-4c49-939d-98d67873960e" containerName="nova-metadata-log" Jan 29 06:55:50 crc kubenswrapper[5017]: I0129 06:55:50.548641 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="43780e1e-04f5-4c49-939d-98d67873960e" containerName="nova-metadata-metadata" Jan 29 06:55:50 crc kubenswrapper[5017]: I0129 06:55:50.549870 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 06:55:50 crc kubenswrapper[5017]: I0129 06:55:50.552510 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 29 06:55:50 crc kubenswrapper[5017]: I0129 06:55:50.558430 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 29 06:55:50 crc kubenswrapper[5017]: I0129 06:55:50.563508 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 06:55:50 crc kubenswrapper[5017]: I0129 06:55:50.710114 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4f35fce-e9ad-4cf2-8944-638bb6d8df3d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b4f35fce-e9ad-4cf2-8944-638bb6d8df3d\") " pod="openstack/nova-metadata-0" Jan 29 06:55:50 crc kubenswrapper[5017]: I0129 06:55:50.710190 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4f35fce-e9ad-4cf2-8944-638bb6d8df3d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b4f35fce-e9ad-4cf2-8944-638bb6d8df3d\") " pod="openstack/nova-metadata-0" Jan 29 06:55:50 crc kubenswrapper[5017]: I0129 06:55:50.710231 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4f35fce-e9ad-4cf2-8944-638bb6d8df3d-config-data\") pod \"nova-metadata-0\" (UID: \"b4f35fce-e9ad-4cf2-8944-638bb6d8df3d\") " pod="openstack/nova-metadata-0" Jan 29 06:55:50 crc kubenswrapper[5017]: I0129 06:55:50.710288 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b4f35fce-e9ad-4cf2-8944-638bb6d8df3d-logs\") pod \"nova-metadata-0\" (UID: \"b4f35fce-e9ad-4cf2-8944-638bb6d8df3d\") " pod="openstack/nova-metadata-0" Jan 29 06:55:50 crc kubenswrapper[5017]: I0129 06:55:50.710388 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4g8t\" (UniqueName: \"kubernetes.io/projected/b4f35fce-e9ad-4cf2-8944-638bb6d8df3d-kube-api-access-v4g8t\") pod \"nova-metadata-0\" 
(UID: \"b4f35fce-e9ad-4cf2-8944-638bb6d8df3d\") " pod="openstack/nova-metadata-0" Jan 29 06:55:50 crc kubenswrapper[5017]: I0129 06:55:50.812378 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4f35fce-e9ad-4cf2-8944-638bb6d8df3d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b4f35fce-e9ad-4cf2-8944-638bb6d8df3d\") " pod="openstack/nova-metadata-0" Jan 29 06:55:50 crc kubenswrapper[5017]: I0129 06:55:50.812442 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4f35fce-e9ad-4cf2-8944-638bb6d8df3d-config-data\") pod \"nova-metadata-0\" (UID: \"b4f35fce-e9ad-4cf2-8944-638bb6d8df3d\") " pod="openstack/nova-metadata-0" Jan 29 06:55:50 crc kubenswrapper[5017]: I0129 06:55:50.812507 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b4f35fce-e9ad-4cf2-8944-638bb6d8df3d-logs\") pod \"nova-metadata-0\" (UID: \"b4f35fce-e9ad-4cf2-8944-638bb6d8df3d\") " pod="openstack/nova-metadata-0" Jan 29 06:55:50 crc kubenswrapper[5017]: I0129 06:55:50.812551 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4g8t\" (UniqueName: \"kubernetes.io/projected/b4f35fce-e9ad-4cf2-8944-638bb6d8df3d-kube-api-access-v4g8t\") pod \"nova-metadata-0\" (UID: \"b4f35fce-e9ad-4cf2-8944-638bb6d8df3d\") " pod="openstack/nova-metadata-0" Jan 29 06:55:50 crc kubenswrapper[5017]: I0129 06:55:50.813095 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b4f35fce-e9ad-4cf2-8944-638bb6d8df3d-logs\") pod \"nova-metadata-0\" (UID: \"b4f35fce-e9ad-4cf2-8944-638bb6d8df3d\") " pod="openstack/nova-metadata-0" Jan 29 06:55:50 crc kubenswrapper[5017]: I0129 06:55:50.814359 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4f35fce-e9ad-4cf2-8944-638bb6d8df3d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b4f35fce-e9ad-4cf2-8944-638bb6d8df3d\") " pod="openstack/nova-metadata-0" Jan 29 06:55:50 crc kubenswrapper[5017]: I0129 06:55:50.819896 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4f35fce-e9ad-4cf2-8944-638bb6d8df3d-config-data\") pod \"nova-metadata-0\" (UID: \"b4f35fce-e9ad-4cf2-8944-638bb6d8df3d\") " pod="openstack/nova-metadata-0" Jan 29 06:55:50 crc kubenswrapper[5017]: I0129 06:55:50.820539 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4f35fce-e9ad-4cf2-8944-638bb6d8df3d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b4f35fce-e9ad-4cf2-8944-638bb6d8df3d\") " pod="openstack/nova-metadata-0" Jan 29 06:55:50 crc kubenswrapper[5017]: I0129 06:55:50.822828 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4f35fce-e9ad-4cf2-8944-638bb6d8df3d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b4f35fce-e9ad-4cf2-8944-638bb6d8df3d\") " pod="openstack/nova-metadata-0" Jan 29 06:55:50 crc kubenswrapper[5017]: I0129 06:55:50.833347 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4g8t\" (UniqueName: 
\"kubernetes.io/projected/b4f35fce-e9ad-4cf2-8944-638bb6d8df3d-kube-api-access-v4g8t\") pod \"nova-metadata-0\" (UID: \"b4f35fce-e9ad-4cf2-8944-638bb6d8df3d\") " pod="openstack/nova-metadata-0" Jan 29 06:55:50 crc kubenswrapper[5017]: I0129 06:55:50.885932 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 06:55:51 crc kubenswrapper[5017]: I0129 06:55:51.194894 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"18edd5b3-27eb-43f3-8d6b-03490c243c78","Type":"ContainerStarted","Data":"cd271b62ca4015e030ca07e0ae6b52baec3e519d53550f369d0cfbcc931e68fd"} Jan 29 06:55:51 crc kubenswrapper[5017]: I0129 06:55:51.194985 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"18edd5b3-27eb-43f3-8d6b-03490c243c78","Type":"ContainerStarted","Data":"1ee7b87b7b7508d57233e55982afd114b99a55ec9e113941fb10b8d1395132ac"} Jan 29 06:55:51 crc kubenswrapper[5017]: I0129 06:55:51.197454 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 29 06:55:51 crc kubenswrapper[5017]: I0129 06:55:51.234347 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.234325859 podStartE2EDuration="2.234325859s" podCreationTimestamp="2026-01-29 06:55:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:55:51.214241432 +0000 UTC m=+1237.588689042" watchObservedRunningTime="2026-01-29 06:55:51.234325859 +0000 UTC m=+1237.608773469" Jan 29 06:55:51 crc kubenswrapper[5017]: I0129 06:55:51.382516 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 06:55:52 crc kubenswrapper[5017]: I0129 06:55:52.220416 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b4f35fce-e9ad-4cf2-8944-638bb6d8df3d","Type":"ContainerStarted","Data":"379a4847043afedd4b8d7f3e7bdabc31ac5b6a7cf19f98ab70dbf37c72b2bdaf"} Jan 29 06:55:52 crc kubenswrapper[5017]: I0129 06:55:52.220876 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b4f35fce-e9ad-4cf2-8944-638bb6d8df3d","Type":"ContainerStarted","Data":"0267b992263d01175b34d7a4258615df11682c59c9d38bed4ca8eafd58cb489d"} Jan 29 06:55:52 crc kubenswrapper[5017]: I0129 06:55:52.220895 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b4f35fce-e9ad-4cf2-8944-638bb6d8df3d","Type":"ContainerStarted","Data":"d0f385ddf1c2ee94db2b4863735a4f9a4514582de971fab42f498d62d8dda2d7"} Jan 29 06:55:52 crc kubenswrapper[5017]: I0129 06:55:52.250805 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.250772617 podStartE2EDuration="2.250772617s" podCreationTimestamp="2026-01-29 06:55:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:55:52.246577223 +0000 UTC m=+1238.621024843" watchObservedRunningTime="2026-01-29 06:55:52.250772617 +0000 UTC m=+1238.625220227" Jan 29 06:55:52 crc kubenswrapper[5017]: I0129 06:55:52.328765 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43780e1e-04f5-4c49-939d-98d67873960e" 
path="/var/lib/kubelet/pods/43780e1e-04f5-4c49-939d-98d67873960e/volumes" Jan 29 06:55:52 crc kubenswrapper[5017]: E0129 06:55:52.454777 5017 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9d0c65e51b85ba05d12159dce7ee82a418b6b4e72fa2ec46dc9f2c9a4c905fd4" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 29 06:55:52 crc kubenswrapper[5017]: E0129 06:55:52.457119 5017 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9d0c65e51b85ba05d12159dce7ee82a418b6b4e72fa2ec46dc9f2c9a4c905fd4" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 29 06:55:52 crc kubenswrapper[5017]: E0129 06:55:52.458914 5017 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9d0c65e51b85ba05d12159dce7ee82a418b6b4e72fa2ec46dc9f2c9a4c905fd4" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 29 06:55:52 crc kubenswrapper[5017]: E0129 06:55:52.459092 5017 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="db399f6d-aabe-4423-840e-56d457b5dcb4" containerName="nova-scheduler-scheduler" Jan 29 06:55:52 crc kubenswrapper[5017]: I0129 06:55:52.773267 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 06:55:52 crc kubenswrapper[5017]: I0129 06:55:52.773537 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="e0b13e83-038a-4d46-8a03-48f09dc18e43" containerName="kube-state-metrics" containerID="cri-o://f82af6768cb039827e81b07eb73693e5eba73d98bc49a2556e0d429cec64be8c" gracePeriod=30 Jan 29 06:55:53 crc kubenswrapper[5017]: I0129 06:55:53.235169 5017 generic.go:334] "Generic (PLEG): container finished" podID="e0b13e83-038a-4d46-8a03-48f09dc18e43" containerID="f82af6768cb039827e81b07eb73693e5eba73d98bc49a2556e0d429cec64be8c" exitCode=2 Jan 29 06:55:53 crc kubenswrapper[5017]: I0129 06:55:53.235262 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"e0b13e83-038a-4d46-8a03-48f09dc18e43","Type":"ContainerDied","Data":"f82af6768cb039827e81b07eb73693e5eba73d98bc49a2556e0d429cec64be8c"} Jan 29 06:55:53 crc kubenswrapper[5017]: I0129 06:55:53.327494 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 29 06:55:53 crc kubenswrapper[5017]: I0129 06:55:53.489006 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2cqn2\" (UniqueName: \"kubernetes.io/projected/e0b13e83-038a-4d46-8a03-48f09dc18e43-kube-api-access-2cqn2\") pod \"e0b13e83-038a-4d46-8a03-48f09dc18e43\" (UID: \"e0b13e83-038a-4d46-8a03-48f09dc18e43\") " Jan 29 06:55:53 crc kubenswrapper[5017]: I0129 06:55:53.496226 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0b13e83-038a-4d46-8a03-48f09dc18e43-kube-api-access-2cqn2" (OuterVolumeSpecName: "kube-api-access-2cqn2") pod "e0b13e83-038a-4d46-8a03-48f09dc18e43" (UID: "e0b13e83-038a-4d46-8a03-48f09dc18e43"). InnerVolumeSpecName "kube-api-access-2cqn2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:55:53 crc kubenswrapper[5017]: I0129 06:55:53.591374 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2cqn2\" (UniqueName: \"kubernetes.io/projected/e0b13e83-038a-4d46-8a03-48f09dc18e43-kube-api-access-2cqn2\") on node \"crc\" DevicePath \"\"" Jan 29 06:55:54 crc kubenswrapper[5017]: I0129 06:55:54.006639 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 06:55:54 crc kubenswrapper[5017]: I0129 06:55:54.102044 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db399f6d-aabe-4423-840e-56d457b5dcb4-config-data\") pod \"db399f6d-aabe-4423-840e-56d457b5dcb4\" (UID: \"db399f6d-aabe-4423-840e-56d457b5dcb4\") " Jan 29 06:55:54 crc kubenswrapper[5017]: I0129 06:55:54.102141 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db399f6d-aabe-4423-840e-56d457b5dcb4-combined-ca-bundle\") pod \"db399f6d-aabe-4423-840e-56d457b5dcb4\" (UID: \"db399f6d-aabe-4423-840e-56d457b5dcb4\") " Jan 29 06:55:54 crc kubenswrapper[5017]: I0129 06:55:54.102228 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vm542\" (UniqueName: \"kubernetes.io/projected/db399f6d-aabe-4423-840e-56d457b5dcb4-kube-api-access-vm542\") pod \"db399f6d-aabe-4423-840e-56d457b5dcb4\" (UID: \"db399f6d-aabe-4423-840e-56d457b5dcb4\") " Jan 29 06:55:54 crc kubenswrapper[5017]: I0129 06:55:54.108464 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db399f6d-aabe-4423-840e-56d457b5dcb4-kube-api-access-vm542" (OuterVolumeSpecName: "kube-api-access-vm542") pod "db399f6d-aabe-4423-840e-56d457b5dcb4" (UID: "db399f6d-aabe-4423-840e-56d457b5dcb4"). InnerVolumeSpecName "kube-api-access-vm542". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:55:54 crc kubenswrapper[5017]: I0129 06:55:54.129424 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db399f6d-aabe-4423-840e-56d457b5dcb4-config-data" (OuterVolumeSpecName: "config-data") pod "db399f6d-aabe-4423-840e-56d457b5dcb4" (UID: "db399f6d-aabe-4423-840e-56d457b5dcb4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:55:54 crc kubenswrapper[5017]: I0129 06:55:54.135486 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db399f6d-aabe-4423-840e-56d457b5dcb4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "db399f6d-aabe-4423-840e-56d457b5dcb4" (UID: "db399f6d-aabe-4423-840e-56d457b5dcb4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:55:54 crc kubenswrapper[5017]: I0129 06:55:54.204565 5017 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db399f6d-aabe-4423-840e-56d457b5dcb4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 06:55:54 crc kubenswrapper[5017]: I0129 06:55:54.204626 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vm542\" (UniqueName: \"kubernetes.io/projected/db399f6d-aabe-4423-840e-56d457b5dcb4-kube-api-access-vm542\") on node \"crc\" DevicePath \"\"" Jan 29 06:55:54 crc kubenswrapper[5017]: I0129 06:55:54.204643 5017 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db399f6d-aabe-4423-840e-56d457b5dcb4-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 06:55:54 crc kubenswrapper[5017]: I0129 06:55:54.247070 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"e0b13e83-038a-4d46-8a03-48f09dc18e43","Type":"ContainerDied","Data":"813927801d9778f7647be6eb82d23767b4830daf4233271ec53c80c3c357bf13"} Jan 29 06:55:54 crc kubenswrapper[5017]: I0129 06:55:54.247834 5017 scope.go:117] "RemoveContainer" containerID="f82af6768cb039827e81b07eb73693e5eba73d98bc49a2556e0d429cec64be8c" Jan 29 06:55:54 crc kubenswrapper[5017]: I0129 06:55:54.247154 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 29 06:55:54 crc kubenswrapper[5017]: I0129 06:55:54.255252 5017 generic.go:334] "Generic (PLEG): container finished" podID="db399f6d-aabe-4423-840e-56d457b5dcb4" containerID="9d0c65e51b85ba05d12159dce7ee82a418b6b4e72fa2ec46dc9f2c9a4c905fd4" exitCode=0 Jan 29 06:55:54 crc kubenswrapper[5017]: I0129 06:55:54.255313 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"db399f6d-aabe-4423-840e-56d457b5dcb4","Type":"ContainerDied","Data":"9d0c65e51b85ba05d12159dce7ee82a418b6b4e72fa2ec46dc9f2c9a4c905fd4"} Jan 29 06:55:54 crc kubenswrapper[5017]: I0129 06:55:54.255319 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 06:55:54 crc kubenswrapper[5017]: I0129 06:55:54.255343 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"db399f6d-aabe-4423-840e-56d457b5dcb4","Type":"ContainerDied","Data":"58e1246502f7dcea9e36114c6dc9980692a65aa809ee119b94a902e79a194801"} Jan 29 06:55:54 crc kubenswrapper[5017]: I0129 06:55:54.291517 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 06:55:54 crc kubenswrapper[5017]: I0129 06:55:54.294241 5017 scope.go:117] "RemoveContainer" containerID="9d0c65e51b85ba05d12159dce7ee82a418b6b4e72fa2ec46dc9f2c9a4c905fd4" Jan 29 06:55:54 crc kubenswrapper[5017]: I0129 06:55:54.337307 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 06:55:54 crc kubenswrapper[5017]: I0129 06:55:54.350267 5017 scope.go:117] "RemoveContainer" containerID="9d0c65e51b85ba05d12159dce7ee82a418b6b4e72fa2ec46dc9f2c9a4c905fd4" Jan 29 06:55:54 crc kubenswrapper[5017]: I0129 06:55:54.351029 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 06:55:54 crc kubenswrapper[5017]: E0129 06:55:54.352416 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d0c65e51b85ba05d12159dce7ee82a418b6b4e72fa2ec46dc9f2c9a4c905fd4\": container with ID starting with 9d0c65e51b85ba05d12159dce7ee82a418b6b4e72fa2ec46dc9f2c9a4c905fd4 not found: ID does not exist" containerID="9d0c65e51b85ba05d12159dce7ee82a418b6b4e72fa2ec46dc9f2c9a4c905fd4" Jan 29 06:55:54 crc kubenswrapper[5017]: I0129 06:55:54.352485 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d0c65e51b85ba05d12159dce7ee82a418b6b4e72fa2ec46dc9f2c9a4c905fd4"} err="failed to get container status \"9d0c65e51b85ba05d12159dce7ee82a418b6b4e72fa2ec46dc9f2c9a4c905fd4\": rpc error: code = NotFound desc = could not find container \"9d0c65e51b85ba05d12159dce7ee82a418b6b4e72fa2ec46dc9f2c9a4c905fd4\": container with ID starting with 9d0c65e51b85ba05d12159dce7ee82a418b6b4e72fa2ec46dc9f2c9a4c905fd4 not found: ID does not exist" Jan 29 06:55:54 crc kubenswrapper[5017]: I0129 06:55:54.365011 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 06:55:54 crc kubenswrapper[5017]: E0129 06:55:54.365717 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db399f6d-aabe-4423-840e-56d457b5dcb4" containerName="nova-scheduler-scheduler" Jan 29 06:55:54 crc kubenswrapper[5017]: I0129 06:55:54.365810 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="db399f6d-aabe-4423-840e-56d457b5dcb4" containerName="nova-scheduler-scheduler" Jan 29 06:55:54 crc kubenswrapper[5017]: E0129 06:55:54.365943 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0b13e83-038a-4d46-8a03-48f09dc18e43" containerName="kube-state-metrics" Jan 29 06:55:54 crc kubenswrapper[5017]: I0129 06:55:54.366066 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0b13e83-038a-4d46-8a03-48f09dc18e43" containerName="kube-state-metrics" Jan 29 06:55:54 crc kubenswrapper[5017]: I0129 06:55:54.366340 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0b13e83-038a-4d46-8a03-48f09dc18e43" containerName="kube-state-metrics" Jan 29 06:55:54 crc kubenswrapper[5017]: I0129 06:55:54.366409 5017 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="db399f6d-aabe-4423-840e-56d457b5dcb4" containerName="nova-scheduler-scheduler" Jan 29 06:55:54 crc kubenswrapper[5017]: I0129 06:55:54.367285 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 29 06:55:54 crc kubenswrapper[5017]: I0129 06:55:54.372451 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 29 06:55:54 crc kubenswrapper[5017]: I0129 06:55:54.372690 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 29 06:55:54 crc kubenswrapper[5017]: I0129 06:55:54.375174 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 06:55:54 crc kubenswrapper[5017]: I0129 06:55:54.386091 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 06:55:54 crc kubenswrapper[5017]: I0129 06:55:54.397036 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 06:55:54 crc kubenswrapper[5017]: I0129 06:55:54.398677 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 06:55:54 crc kubenswrapper[5017]: I0129 06:55:54.405713 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 06:55:54 crc kubenswrapper[5017]: I0129 06:55:54.414487 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 29 06:55:54 crc kubenswrapper[5017]: I0129 06:55:54.512625 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6ec780f-f6cc-4d8d-be76-f517dff0673c-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"a6ec780f-f6cc-4d8d-be76-f517dff0673c\") " pod="openstack/kube-state-metrics-0" Jan 29 06:55:54 crc kubenswrapper[5017]: I0129 06:55:54.513093 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/a6ec780f-f6cc-4d8d-be76-f517dff0673c-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"a6ec780f-f6cc-4d8d-be76-f517dff0673c\") " pod="openstack/kube-state-metrics-0" Jan 29 06:55:54 crc kubenswrapper[5017]: I0129 06:55:54.513215 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dn4q\" (UniqueName: \"kubernetes.io/projected/89fe4210-0650-40a1-a2f9-8d0a9ca9a640-kube-api-access-9dn4q\") pod \"nova-scheduler-0\" (UID: \"89fe4210-0650-40a1-a2f9-8d0a9ca9a640\") " pod="openstack/nova-scheduler-0" Jan 29 06:55:54 crc kubenswrapper[5017]: I0129 06:55:54.513329 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89fe4210-0650-40a1-a2f9-8d0a9ca9a640-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"89fe4210-0650-40a1-a2f9-8d0a9ca9a640\") " pod="openstack/nova-scheduler-0" Jan 29 06:55:54 crc kubenswrapper[5017]: I0129 06:55:54.513469 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxcld\" (UniqueName: \"kubernetes.io/projected/a6ec780f-f6cc-4d8d-be76-f517dff0673c-kube-api-access-cxcld\") pod \"kube-state-metrics-0\" (UID: \"a6ec780f-f6cc-4d8d-be76-f517dff0673c\") " 
pod="openstack/kube-state-metrics-0" Jan 29 06:55:54 crc kubenswrapper[5017]: I0129 06:55:54.513757 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6ec780f-f6cc-4d8d-be76-f517dff0673c-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"a6ec780f-f6cc-4d8d-be76-f517dff0673c\") " pod="openstack/kube-state-metrics-0" Jan 29 06:55:54 crc kubenswrapper[5017]: I0129 06:55:54.513873 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89fe4210-0650-40a1-a2f9-8d0a9ca9a640-config-data\") pod \"nova-scheduler-0\" (UID: \"89fe4210-0650-40a1-a2f9-8d0a9ca9a640\") " pod="openstack/nova-scheduler-0" Jan 29 06:55:54 crc kubenswrapper[5017]: I0129 06:55:54.616599 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6ec780f-f6cc-4d8d-be76-f517dff0673c-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"a6ec780f-f6cc-4d8d-be76-f517dff0673c\") " pod="openstack/kube-state-metrics-0" Jan 29 06:55:54 crc kubenswrapper[5017]: I0129 06:55:54.616709 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/a6ec780f-f6cc-4d8d-be76-f517dff0673c-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"a6ec780f-f6cc-4d8d-be76-f517dff0673c\") " pod="openstack/kube-state-metrics-0" Jan 29 06:55:54 crc kubenswrapper[5017]: I0129 06:55:54.616764 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dn4q\" (UniqueName: \"kubernetes.io/projected/89fe4210-0650-40a1-a2f9-8d0a9ca9a640-kube-api-access-9dn4q\") pod \"nova-scheduler-0\" (UID: \"89fe4210-0650-40a1-a2f9-8d0a9ca9a640\") " pod="openstack/nova-scheduler-0" Jan 29 06:55:54 crc kubenswrapper[5017]: I0129 06:55:54.617011 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89fe4210-0650-40a1-a2f9-8d0a9ca9a640-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"89fe4210-0650-40a1-a2f9-8d0a9ca9a640\") " pod="openstack/nova-scheduler-0" Jan 29 06:55:54 crc kubenswrapper[5017]: I0129 06:55:54.617071 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxcld\" (UniqueName: \"kubernetes.io/projected/a6ec780f-f6cc-4d8d-be76-f517dff0673c-kube-api-access-cxcld\") pod \"kube-state-metrics-0\" (UID: \"a6ec780f-f6cc-4d8d-be76-f517dff0673c\") " pod="openstack/kube-state-metrics-0" Jan 29 06:55:54 crc kubenswrapper[5017]: I0129 06:55:54.617175 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6ec780f-f6cc-4d8d-be76-f517dff0673c-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"a6ec780f-f6cc-4d8d-be76-f517dff0673c\") " pod="openstack/kube-state-metrics-0" Jan 29 06:55:54 crc kubenswrapper[5017]: I0129 06:55:54.617226 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89fe4210-0650-40a1-a2f9-8d0a9ca9a640-config-data\") pod \"nova-scheduler-0\" (UID: \"89fe4210-0650-40a1-a2f9-8d0a9ca9a640\") " pod="openstack/nova-scheduler-0" Jan 29 06:55:54 crc kubenswrapper[5017]: I0129 
06:55:54.637539 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89fe4210-0650-40a1-a2f9-8d0a9ca9a640-config-data\") pod \"nova-scheduler-0\" (UID: \"89fe4210-0650-40a1-a2f9-8d0a9ca9a640\") " pod="openstack/nova-scheduler-0" Jan 29 06:55:54 crc kubenswrapper[5017]: I0129 06:55:54.638013 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6ec780f-f6cc-4d8d-be76-f517dff0673c-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"a6ec780f-f6cc-4d8d-be76-f517dff0673c\") " pod="openstack/kube-state-metrics-0" Jan 29 06:55:54 crc kubenswrapper[5017]: I0129 06:55:54.639098 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6ec780f-f6cc-4d8d-be76-f517dff0673c-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"a6ec780f-f6cc-4d8d-be76-f517dff0673c\") " pod="openstack/kube-state-metrics-0" Jan 29 06:55:54 crc kubenswrapper[5017]: I0129 06:55:54.640991 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/a6ec780f-f6cc-4d8d-be76-f517dff0673c-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"a6ec780f-f6cc-4d8d-be76-f517dff0673c\") " pod="openstack/kube-state-metrics-0" Jan 29 06:55:54 crc kubenswrapper[5017]: I0129 06:55:54.641375 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89fe4210-0650-40a1-a2f9-8d0a9ca9a640-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"89fe4210-0650-40a1-a2f9-8d0a9ca9a640\") " pod="openstack/nova-scheduler-0" Jan 29 06:55:54 crc kubenswrapper[5017]: I0129 06:55:54.653856 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxcld\" (UniqueName: \"kubernetes.io/projected/a6ec780f-f6cc-4d8d-be76-f517dff0673c-kube-api-access-cxcld\") pod \"kube-state-metrics-0\" (UID: \"a6ec780f-f6cc-4d8d-be76-f517dff0673c\") " pod="openstack/kube-state-metrics-0" Jan 29 06:55:54 crc kubenswrapper[5017]: I0129 06:55:54.661622 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dn4q\" (UniqueName: \"kubernetes.io/projected/89fe4210-0650-40a1-a2f9-8d0a9ca9a640-kube-api-access-9dn4q\") pod \"nova-scheduler-0\" (UID: \"89fe4210-0650-40a1-a2f9-8d0a9ca9a640\") " pod="openstack/nova-scheduler-0" Jan 29 06:55:54 crc kubenswrapper[5017]: I0129 06:55:54.695118 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 29 06:55:54 crc kubenswrapper[5017]: I0129 06:55:54.726798 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 06:55:55 crc kubenswrapper[5017]: I0129 06:55:55.252993 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 06:55:55 crc kubenswrapper[5017]: I0129 06:55:55.254083 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="95cfbfdc-d831-48a6-9de9-5511bd7587d9" containerName="ceilometer-central-agent" containerID="cri-o://5a5553bca56453bc7dd383b571e644904570a6ff37646732ab46bfc63d4356bb" gracePeriod=30 Jan 29 06:55:55 crc kubenswrapper[5017]: I0129 06:55:55.254922 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="95cfbfdc-d831-48a6-9de9-5511bd7587d9" containerName="ceilometer-notification-agent" containerID="cri-o://f77b5ae493cc98d9d20046895a4f54e4268e388e3d9a9a199fa78ef8cda58123" gracePeriod=30 Jan 29 06:55:55 crc kubenswrapper[5017]: I0129 06:55:55.259550 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="95cfbfdc-d831-48a6-9de9-5511bd7587d9" containerName="proxy-httpd" containerID="cri-o://d6f73092504fdd82e85c07a381c7b43721cdbd10af54bf12d28c31365e79fc0b" gracePeriod=30 Jan 29 06:55:55 crc kubenswrapper[5017]: I0129 06:55:55.254926 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="95cfbfdc-d831-48a6-9de9-5511bd7587d9" containerName="sg-core" containerID="cri-o://6196da4d03910bc572acc5cc95df2d7dcae2ef492670dad5d9b79065497f7322" gracePeriod=30 Jan 29 06:55:55 crc kubenswrapper[5017]: I0129 06:55:55.275201 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 06:55:55 crc kubenswrapper[5017]: I0129 06:55:55.330909 5017 generic.go:334] "Generic (PLEG): container finished" podID="fdf3cdff-4a01-4e98-b5d2-e15fcbef271a" containerID="12d4c58fa9c1467737f72b212fda9159cdce2af2219150ba1fd91d1371d488bd" exitCode=0 Jan 29 06:55:55 crc kubenswrapper[5017]: I0129 06:55:55.330982 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fdf3cdff-4a01-4e98-b5d2-e15fcbef271a","Type":"ContainerDied","Data":"12d4c58fa9c1467737f72b212fda9159cdce2af2219150ba1fd91d1371d488bd"} Jan 29 06:55:55 crc kubenswrapper[5017]: I0129 06:55:55.331015 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fdf3cdff-4a01-4e98-b5d2-e15fcbef271a","Type":"ContainerDied","Data":"c60e4f02dd103d8c29bab0512011f26e366ca9da403f29745bcedf2d5af98c22"} Jan 29 06:55:55 crc kubenswrapper[5017]: I0129 06:55:55.331036 5017 scope.go:117] "RemoveContainer" containerID="12d4c58fa9c1467737f72b212fda9159cdce2af2219150ba1fd91d1371d488bd" Jan 29 06:55:55 crc kubenswrapper[5017]: I0129 06:55:55.331192 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 29 06:55:55 crc kubenswrapper[5017]: I0129 06:55:55.368665 5017 scope.go:117] "RemoveContainer" containerID="fedc226e17785fa3152550c81a29a15532cff7a761ade51ead01b070ead12c18" Jan 29 06:55:55 crc kubenswrapper[5017]: I0129 06:55:55.401864 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 06:55:55 crc kubenswrapper[5017]: I0129 06:55:55.418100 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 06:55:55 crc kubenswrapper[5017]: I0129 06:55:55.428328 5017 scope.go:117] "RemoveContainer" containerID="12d4c58fa9c1467737f72b212fda9159cdce2af2219150ba1fd91d1371d488bd" Jan 29 06:55:55 crc kubenswrapper[5017]: E0129 06:55:55.429582 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"12d4c58fa9c1467737f72b212fda9159cdce2af2219150ba1fd91d1371d488bd\": container with ID starting with 12d4c58fa9c1467737f72b212fda9159cdce2af2219150ba1fd91d1371d488bd not found: ID does not exist" containerID="12d4c58fa9c1467737f72b212fda9159cdce2af2219150ba1fd91d1371d488bd" Jan 29 06:55:55 crc kubenswrapper[5017]: I0129 06:55:55.429636 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12d4c58fa9c1467737f72b212fda9159cdce2af2219150ba1fd91d1371d488bd"} err="failed to get container status \"12d4c58fa9c1467737f72b212fda9159cdce2af2219150ba1fd91d1371d488bd\": rpc error: code = NotFound desc = could not find container \"12d4c58fa9c1467737f72b212fda9159cdce2af2219150ba1fd91d1371d488bd\": container with ID starting with 12d4c58fa9c1467737f72b212fda9159cdce2af2219150ba1fd91d1371d488bd not found: ID does not exist" Jan 29 06:55:55 crc kubenswrapper[5017]: I0129 06:55:55.429673 5017 scope.go:117] "RemoveContainer" containerID="fedc226e17785fa3152550c81a29a15532cff7a761ade51ead01b070ead12c18" Jan 29 06:55:55 crc kubenswrapper[5017]: E0129 06:55:55.432288 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fedc226e17785fa3152550c81a29a15532cff7a761ade51ead01b070ead12c18\": container with ID starting with fedc226e17785fa3152550c81a29a15532cff7a761ade51ead01b070ead12c18 not found: ID does not exist" containerID="fedc226e17785fa3152550c81a29a15532cff7a761ade51ead01b070ead12c18" Jan 29 06:55:55 crc kubenswrapper[5017]: I0129 06:55:55.432318 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fedc226e17785fa3152550c81a29a15532cff7a761ade51ead01b070ead12c18"} err="failed to get container status \"fedc226e17785fa3152550c81a29a15532cff7a761ade51ead01b070ead12c18\": rpc error: code = NotFound desc = could not find container \"fedc226e17785fa3152550c81a29a15532cff7a761ade51ead01b070ead12c18\": container with ID starting with fedc226e17785fa3152550c81a29a15532cff7a761ade51ead01b070ead12c18 not found: ID does not exist" Jan 29 06:55:55 crc kubenswrapper[5017]: I0129 06:55:55.435639 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdf3cdff-4a01-4e98-b5d2-e15fcbef271a-config-data\") pod \"fdf3cdff-4a01-4e98-b5d2-e15fcbef271a\" (UID: \"fdf3cdff-4a01-4e98-b5d2-e15fcbef271a\") " Jan 29 06:55:55 crc kubenswrapper[5017]: I0129 06:55:55.435851 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kvx7r\" (UniqueName: 
\"kubernetes.io/projected/fdf3cdff-4a01-4e98-b5d2-e15fcbef271a-kube-api-access-kvx7r\") pod \"fdf3cdff-4a01-4e98-b5d2-e15fcbef271a\" (UID: \"fdf3cdff-4a01-4e98-b5d2-e15fcbef271a\") " Jan 29 06:55:55 crc kubenswrapper[5017]: I0129 06:55:55.435924 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fdf3cdff-4a01-4e98-b5d2-e15fcbef271a-logs\") pod \"fdf3cdff-4a01-4e98-b5d2-e15fcbef271a\" (UID: \"fdf3cdff-4a01-4e98-b5d2-e15fcbef271a\") " Jan 29 06:55:55 crc kubenswrapper[5017]: I0129 06:55:55.436075 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdf3cdff-4a01-4e98-b5d2-e15fcbef271a-combined-ca-bundle\") pod \"fdf3cdff-4a01-4e98-b5d2-e15fcbef271a\" (UID: \"fdf3cdff-4a01-4e98-b5d2-e15fcbef271a\") " Jan 29 06:55:55 crc kubenswrapper[5017]: I0129 06:55:55.437757 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fdf3cdff-4a01-4e98-b5d2-e15fcbef271a-logs" (OuterVolumeSpecName: "logs") pod "fdf3cdff-4a01-4e98-b5d2-e15fcbef271a" (UID: "fdf3cdff-4a01-4e98-b5d2-e15fcbef271a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:55:55 crc kubenswrapper[5017]: I0129 06:55:55.442132 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdf3cdff-4a01-4e98-b5d2-e15fcbef271a-kube-api-access-kvx7r" (OuterVolumeSpecName: "kube-api-access-kvx7r") pod "fdf3cdff-4a01-4e98-b5d2-e15fcbef271a" (UID: "fdf3cdff-4a01-4e98-b5d2-e15fcbef271a"). InnerVolumeSpecName "kube-api-access-kvx7r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:55:55 crc kubenswrapper[5017]: I0129 06:55:55.465678 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdf3cdff-4a01-4e98-b5d2-e15fcbef271a-config-data" (OuterVolumeSpecName: "config-data") pod "fdf3cdff-4a01-4e98-b5d2-e15fcbef271a" (UID: "fdf3cdff-4a01-4e98-b5d2-e15fcbef271a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:55:55 crc kubenswrapper[5017]: I0129 06:55:55.472186 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdf3cdff-4a01-4e98-b5d2-e15fcbef271a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fdf3cdff-4a01-4e98-b5d2-e15fcbef271a" (UID: "fdf3cdff-4a01-4e98-b5d2-e15fcbef271a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:55:55 crc kubenswrapper[5017]: I0129 06:55:55.539509 5017 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdf3cdff-4a01-4e98-b5d2-e15fcbef271a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 06:55:55 crc kubenswrapper[5017]: I0129 06:55:55.540019 5017 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdf3cdff-4a01-4e98-b5d2-e15fcbef271a-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 06:55:55 crc kubenswrapper[5017]: I0129 06:55:55.540038 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kvx7r\" (UniqueName: \"kubernetes.io/projected/fdf3cdff-4a01-4e98-b5d2-e15fcbef271a-kube-api-access-kvx7r\") on node \"crc\" DevicePath \"\"" Jan 29 06:55:55 crc kubenswrapper[5017]: I0129 06:55:55.540056 5017 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fdf3cdff-4a01-4e98-b5d2-e15fcbef271a-logs\") on node \"crc\" DevicePath \"\"" Jan 29 06:55:55 crc kubenswrapper[5017]: I0129 06:55:55.684757 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 29 06:55:55 crc kubenswrapper[5017]: I0129 06:55:55.704372 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 29 06:55:55 crc kubenswrapper[5017]: I0129 06:55:55.731730 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 29 06:55:55 crc kubenswrapper[5017]: E0129 06:55:55.732309 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdf3cdff-4a01-4e98-b5d2-e15fcbef271a" containerName="nova-api-log" Jan 29 06:55:55 crc kubenswrapper[5017]: I0129 06:55:55.732334 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdf3cdff-4a01-4e98-b5d2-e15fcbef271a" containerName="nova-api-log" Jan 29 06:55:55 crc kubenswrapper[5017]: E0129 06:55:55.732354 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdf3cdff-4a01-4e98-b5d2-e15fcbef271a" containerName="nova-api-api" Jan 29 06:55:55 crc kubenswrapper[5017]: I0129 06:55:55.732364 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdf3cdff-4a01-4e98-b5d2-e15fcbef271a" containerName="nova-api-api" Jan 29 06:55:55 crc kubenswrapper[5017]: I0129 06:55:55.732586 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdf3cdff-4a01-4e98-b5d2-e15fcbef271a" containerName="nova-api-api" Jan 29 06:55:55 crc kubenswrapper[5017]: I0129 06:55:55.732612 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdf3cdff-4a01-4e98-b5d2-e15fcbef271a" containerName="nova-api-log" Jan 29 06:55:55 crc kubenswrapper[5017]: I0129 06:55:55.733756 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 29 06:55:55 crc kubenswrapper[5017]: I0129 06:55:55.740468 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 29 06:55:55 crc kubenswrapper[5017]: I0129 06:55:55.745363 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 29 06:55:55 crc kubenswrapper[5017]: I0129 06:55:55.845871 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dca84366-8a11-423d-be6c-cf6bc5f5571a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"dca84366-8a11-423d-be6c-cf6bc5f5571a\") " pod="openstack/nova-api-0" Jan 29 06:55:55 crc kubenswrapper[5017]: I0129 06:55:55.846137 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpk9m\" (UniqueName: \"kubernetes.io/projected/dca84366-8a11-423d-be6c-cf6bc5f5571a-kube-api-access-wpk9m\") pod \"nova-api-0\" (UID: \"dca84366-8a11-423d-be6c-cf6bc5f5571a\") " pod="openstack/nova-api-0" Jan 29 06:55:55 crc kubenswrapper[5017]: I0129 06:55:55.846792 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dca84366-8a11-423d-be6c-cf6bc5f5571a-config-data\") pod \"nova-api-0\" (UID: \"dca84366-8a11-423d-be6c-cf6bc5f5571a\") " pod="openstack/nova-api-0" Jan 29 06:55:55 crc kubenswrapper[5017]: I0129 06:55:55.846979 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dca84366-8a11-423d-be6c-cf6bc5f5571a-logs\") pod \"nova-api-0\" (UID: \"dca84366-8a11-423d-be6c-cf6bc5f5571a\") " pod="openstack/nova-api-0" Jan 29 06:55:55 crc kubenswrapper[5017]: I0129 06:55:55.887619 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 29 06:55:55 crc kubenswrapper[5017]: I0129 06:55:55.887830 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 29 06:55:55 crc kubenswrapper[5017]: I0129 06:55:55.949261 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dca84366-8a11-423d-be6c-cf6bc5f5571a-logs\") pod \"nova-api-0\" (UID: \"dca84366-8a11-423d-be6c-cf6bc5f5571a\") " pod="openstack/nova-api-0" Jan 29 06:55:55 crc kubenswrapper[5017]: I0129 06:55:55.949354 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dca84366-8a11-423d-be6c-cf6bc5f5571a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"dca84366-8a11-423d-be6c-cf6bc5f5571a\") " pod="openstack/nova-api-0" Jan 29 06:55:55 crc kubenswrapper[5017]: I0129 06:55:55.949388 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wpk9m\" (UniqueName: \"kubernetes.io/projected/dca84366-8a11-423d-be6c-cf6bc5f5571a-kube-api-access-wpk9m\") pod \"nova-api-0\" (UID: \"dca84366-8a11-423d-be6c-cf6bc5f5571a\") " pod="openstack/nova-api-0" Jan 29 06:55:55 crc kubenswrapper[5017]: I0129 06:55:55.949474 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dca84366-8a11-423d-be6c-cf6bc5f5571a-config-data\") pod \"nova-api-0\" (UID: \"dca84366-8a11-423d-be6c-cf6bc5f5571a\") " 
pod="openstack/nova-api-0" Jan 29 06:55:55 crc kubenswrapper[5017]: I0129 06:55:55.950627 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dca84366-8a11-423d-be6c-cf6bc5f5571a-logs\") pod \"nova-api-0\" (UID: \"dca84366-8a11-423d-be6c-cf6bc5f5571a\") " pod="openstack/nova-api-0" Jan 29 06:55:55 crc kubenswrapper[5017]: I0129 06:55:55.954639 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dca84366-8a11-423d-be6c-cf6bc5f5571a-config-data\") pod \"nova-api-0\" (UID: \"dca84366-8a11-423d-be6c-cf6bc5f5571a\") " pod="openstack/nova-api-0" Jan 29 06:55:55 crc kubenswrapper[5017]: I0129 06:55:55.955113 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dca84366-8a11-423d-be6c-cf6bc5f5571a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"dca84366-8a11-423d-be6c-cf6bc5f5571a\") " pod="openstack/nova-api-0" Jan 29 06:55:55 crc kubenswrapper[5017]: I0129 06:55:55.976866 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wpk9m\" (UniqueName: \"kubernetes.io/projected/dca84366-8a11-423d-be6c-cf6bc5f5571a-kube-api-access-wpk9m\") pod \"nova-api-0\" (UID: \"dca84366-8a11-423d-be6c-cf6bc5f5571a\") " pod="openstack/nova-api-0" Jan 29 06:55:56 crc kubenswrapper[5017]: I0129 06:55:56.063223 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 06:55:56 crc kubenswrapper[5017]: I0129 06:55:56.330494 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db399f6d-aabe-4423-840e-56d457b5dcb4" path="/var/lib/kubelet/pods/db399f6d-aabe-4423-840e-56d457b5dcb4/volumes" Jan 29 06:55:56 crc kubenswrapper[5017]: I0129 06:55:56.331731 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0b13e83-038a-4d46-8a03-48f09dc18e43" path="/var/lib/kubelet/pods/e0b13e83-038a-4d46-8a03-48f09dc18e43/volumes" Jan 29 06:55:56 crc kubenswrapper[5017]: I0129 06:55:56.332329 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fdf3cdff-4a01-4e98-b5d2-e15fcbef271a" path="/var/lib/kubelet/pods/fdf3cdff-4a01-4e98-b5d2-e15fcbef271a/volumes" Jan 29 06:55:56 crc kubenswrapper[5017]: I0129 06:55:56.356893 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"89fe4210-0650-40a1-a2f9-8d0a9ca9a640","Type":"ContainerStarted","Data":"67a1b04ae715a85f2b8b8d57eb51d423fc9ee3444f2930d0a95e2ac8b8505303"} Jan 29 06:55:56 crc kubenswrapper[5017]: I0129 06:55:56.356971 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"89fe4210-0650-40a1-a2f9-8d0a9ca9a640","Type":"ContainerStarted","Data":"189a9ab1da3fccb8852185d4a52490c745dfcf1e243a9c346c3c79c2e2aa4c9a"} Jan 29 06:55:56 crc kubenswrapper[5017]: I0129 06:55:56.387059 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.387036045 podStartE2EDuration="2.387036045s" podCreationTimestamp="2026-01-29 06:55:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:55:56.382052971 +0000 UTC m=+1242.756500581" watchObservedRunningTime="2026-01-29 06:55:56.387036045 +0000 UTC m=+1242.761483655" Jan 29 06:55:56 crc kubenswrapper[5017]: I0129 06:55:56.392883 5017 
generic.go:334] "Generic (PLEG): container finished" podID="95cfbfdc-d831-48a6-9de9-5511bd7587d9" containerID="d6f73092504fdd82e85c07a381c7b43721cdbd10af54bf12d28c31365e79fc0b" exitCode=0 Jan 29 06:55:56 crc kubenswrapper[5017]: I0129 06:55:56.392932 5017 generic.go:334] "Generic (PLEG): container finished" podID="95cfbfdc-d831-48a6-9de9-5511bd7587d9" containerID="6196da4d03910bc572acc5cc95df2d7dcae2ef492670dad5d9b79065497f7322" exitCode=2 Jan 29 06:55:56 crc kubenswrapper[5017]: I0129 06:55:56.392943 5017 generic.go:334] "Generic (PLEG): container finished" podID="95cfbfdc-d831-48a6-9de9-5511bd7587d9" containerID="f77b5ae493cc98d9d20046895a4f54e4268e388e3d9a9a199fa78ef8cda58123" exitCode=0 Jan 29 06:55:56 crc kubenswrapper[5017]: I0129 06:55:56.393009 5017 generic.go:334] "Generic (PLEG): container finished" podID="95cfbfdc-d831-48a6-9de9-5511bd7587d9" containerID="5a5553bca56453bc7dd383b571e644904570a6ff37646732ab46bfc63d4356bb" exitCode=0 Jan 29 06:55:56 crc kubenswrapper[5017]: I0129 06:55:56.393067 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"95cfbfdc-d831-48a6-9de9-5511bd7587d9","Type":"ContainerDied","Data":"d6f73092504fdd82e85c07a381c7b43721cdbd10af54bf12d28c31365e79fc0b"} Jan 29 06:55:56 crc kubenswrapper[5017]: I0129 06:55:56.393105 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"95cfbfdc-d831-48a6-9de9-5511bd7587d9","Type":"ContainerDied","Data":"6196da4d03910bc572acc5cc95df2d7dcae2ef492670dad5d9b79065497f7322"} Jan 29 06:55:56 crc kubenswrapper[5017]: I0129 06:55:56.393116 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"95cfbfdc-d831-48a6-9de9-5511bd7587d9","Type":"ContainerDied","Data":"f77b5ae493cc98d9d20046895a4f54e4268e388e3d9a9a199fa78ef8cda58123"} Jan 29 06:55:56 crc kubenswrapper[5017]: I0129 06:55:56.393125 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"95cfbfdc-d831-48a6-9de9-5511bd7587d9","Type":"ContainerDied","Data":"5a5553bca56453bc7dd383b571e644904570a6ff37646732ab46bfc63d4356bb"} Jan 29 06:55:56 crc kubenswrapper[5017]: I0129 06:55:56.393137 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"95cfbfdc-d831-48a6-9de9-5511bd7587d9","Type":"ContainerDied","Data":"f87c1308c09739af8816abd0e91a4011153bbbc62a743ca8ef9db06f26446dbc"} Jan 29 06:55:56 crc kubenswrapper[5017]: I0129 06:55:56.393149 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f87c1308c09739af8816abd0e91a4011153bbbc62a743ca8ef9db06f26446dbc" Jan 29 06:55:56 crc kubenswrapper[5017]: I0129 06:55:56.407209 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"a6ec780f-f6cc-4d8d-be76-f517dff0673c","Type":"ContainerStarted","Data":"1ab253ce158826a8eb8853f59892c8ebab6a5018fa1efc6e8ae6b7c7d6f3c586"} Jan 29 06:55:56 crc kubenswrapper[5017]: I0129 06:55:56.407298 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"a6ec780f-f6cc-4d8d-be76-f517dff0673c","Type":"ContainerStarted","Data":"9145350c14d6cd1b56f6bf2fb8df3b0a7d3a34b6258188c515ba10244a89aa8c"} Jan 29 06:55:56 crc kubenswrapper[5017]: I0129 06:55:56.407356 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 29 06:55:56 crc kubenswrapper[5017]: I0129 06:55:56.408113 5017 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 06:55:56 crc kubenswrapper[5017]: I0129 06:55:56.427693 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.042293086 podStartE2EDuration="2.427670152s" podCreationTimestamp="2026-01-29 06:55:54 +0000 UTC" firstStartedPulling="2026-01-29 06:55:55.428940452 +0000 UTC m=+1241.803388062" lastFinishedPulling="2026-01-29 06:55:55.814317528 +0000 UTC m=+1242.188765128" observedRunningTime="2026-01-29 06:55:56.426314158 +0000 UTC m=+1242.800761768" watchObservedRunningTime="2026-01-29 06:55:56.427670152 +0000 UTC m=+1242.802117762" Jan 29 06:55:56 crc kubenswrapper[5017]: I0129 06:55:56.560587 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzw5s\" (UniqueName: \"kubernetes.io/projected/95cfbfdc-d831-48a6-9de9-5511bd7587d9-kube-api-access-nzw5s\") pod \"95cfbfdc-d831-48a6-9de9-5511bd7587d9\" (UID: \"95cfbfdc-d831-48a6-9de9-5511bd7587d9\") " Jan 29 06:55:56 crc kubenswrapper[5017]: I0129 06:55:56.560723 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/95cfbfdc-d831-48a6-9de9-5511bd7587d9-log-httpd\") pod \"95cfbfdc-d831-48a6-9de9-5511bd7587d9\" (UID: \"95cfbfdc-d831-48a6-9de9-5511bd7587d9\") " Jan 29 06:55:56 crc kubenswrapper[5017]: I0129 06:55:56.560763 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95cfbfdc-d831-48a6-9de9-5511bd7587d9-scripts\") pod \"95cfbfdc-d831-48a6-9de9-5511bd7587d9\" (UID: \"95cfbfdc-d831-48a6-9de9-5511bd7587d9\") " Jan 29 06:55:56 crc kubenswrapper[5017]: I0129 06:55:56.560803 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/95cfbfdc-d831-48a6-9de9-5511bd7587d9-run-httpd\") pod \"95cfbfdc-d831-48a6-9de9-5511bd7587d9\" (UID: \"95cfbfdc-d831-48a6-9de9-5511bd7587d9\") " Jan 29 06:55:56 crc kubenswrapper[5017]: I0129 06:55:56.560858 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95cfbfdc-d831-48a6-9de9-5511bd7587d9-config-data\") pod \"95cfbfdc-d831-48a6-9de9-5511bd7587d9\" (UID: \"95cfbfdc-d831-48a6-9de9-5511bd7587d9\") " Jan 29 06:55:56 crc kubenswrapper[5017]: I0129 06:55:56.561085 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95cfbfdc-d831-48a6-9de9-5511bd7587d9-combined-ca-bundle\") pod \"95cfbfdc-d831-48a6-9de9-5511bd7587d9\" (UID: \"95cfbfdc-d831-48a6-9de9-5511bd7587d9\") " Jan 29 06:55:56 crc kubenswrapper[5017]: I0129 06:55:56.561193 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/95cfbfdc-d831-48a6-9de9-5511bd7587d9-sg-core-conf-yaml\") pod \"95cfbfdc-d831-48a6-9de9-5511bd7587d9\" (UID: \"95cfbfdc-d831-48a6-9de9-5511bd7587d9\") " Jan 29 06:55:56 crc kubenswrapper[5017]: I0129 06:55:56.561343 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95cfbfdc-d831-48a6-9de9-5511bd7587d9-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "95cfbfdc-d831-48a6-9de9-5511bd7587d9" (UID: "95cfbfdc-d831-48a6-9de9-5511bd7587d9"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:55:56 crc kubenswrapper[5017]: I0129 06:55:56.561384 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95cfbfdc-d831-48a6-9de9-5511bd7587d9-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "95cfbfdc-d831-48a6-9de9-5511bd7587d9" (UID: "95cfbfdc-d831-48a6-9de9-5511bd7587d9"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:55:56 crc kubenswrapper[5017]: I0129 06:55:56.561744 5017 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/95cfbfdc-d831-48a6-9de9-5511bd7587d9-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 06:55:56 crc kubenswrapper[5017]: I0129 06:55:56.561805 5017 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/95cfbfdc-d831-48a6-9de9-5511bd7587d9-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 06:55:56 crc kubenswrapper[5017]: I0129 06:55:56.567808 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95cfbfdc-d831-48a6-9de9-5511bd7587d9-scripts" (OuterVolumeSpecName: "scripts") pod "95cfbfdc-d831-48a6-9de9-5511bd7587d9" (UID: "95cfbfdc-d831-48a6-9de9-5511bd7587d9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:55:56 crc kubenswrapper[5017]: I0129 06:55:56.568193 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95cfbfdc-d831-48a6-9de9-5511bd7587d9-kube-api-access-nzw5s" (OuterVolumeSpecName: "kube-api-access-nzw5s") pod "95cfbfdc-d831-48a6-9de9-5511bd7587d9" (UID: "95cfbfdc-d831-48a6-9de9-5511bd7587d9"). InnerVolumeSpecName "kube-api-access-nzw5s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:55:56 crc kubenswrapper[5017]: I0129 06:55:56.593901 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95cfbfdc-d831-48a6-9de9-5511bd7587d9-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "95cfbfdc-d831-48a6-9de9-5511bd7587d9" (UID: "95cfbfdc-d831-48a6-9de9-5511bd7587d9"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:55:56 crc kubenswrapper[5017]: I0129 06:55:56.643426 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 29 06:55:56 crc kubenswrapper[5017]: I0129 06:55:56.663211 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzw5s\" (UniqueName: \"kubernetes.io/projected/95cfbfdc-d831-48a6-9de9-5511bd7587d9-kube-api-access-nzw5s\") on node \"crc\" DevicePath \"\"" Jan 29 06:55:56 crc kubenswrapper[5017]: I0129 06:55:56.663576 5017 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95cfbfdc-d831-48a6-9de9-5511bd7587d9-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 06:55:56 crc kubenswrapper[5017]: I0129 06:55:56.663591 5017 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/95cfbfdc-d831-48a6-9de9-5511bd7587d9-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 29 06:55:56 crc kubenswrapper[5017]: I0129 06:55:56.676048 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95cfbfdc-d831-48a6-9de9-5511bd7587d9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "95cfbfdc-d831-48a6-9de9-5511bd7587d9" (UID: "95cfbfdc-d831-48a6-9de9-5511bd7587d9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:55:56 crc kubenswrapper[5017]: I0129 06:55:56.714145 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95cfbfdc-d831-48a6-9de9-5511bd7587d9-config-data" (OuterVolumeSpecName: "config-data") pod "95cfbfdc-d831-48a6-9de9-5511bd7587d9" (UID: "95cfbfdc-d831-48a6-9de9-5511bd7587d9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:55:56 crc kubenswrapper[5017]: I0129 06:55:56.769594 5017 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95cfbfdc-d831-48a6-9de9-5511bd7587d9-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 06:55:56 crc kubenswrapper[5017]: I0129 06:55:56.769640 5017 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95cfbfdc-d831-48a6-9de9-5511bd7587d9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 06:55:57 crc kubenswrapper[5017]: I0129 06:55:57.420564 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"dca84366-8a11-423d-be6c-cf6bc5f5571a","Type":"ContainerStarted","Data":"053fbc3194f0c06033de2afa851811a4a8eb68b697a24714956a72f8da23468b"} Jan 29 06:55:57 crc kubenswrapper[5017]: I0129 06:55:57.420655 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"dca84366-8a11-423d-be6c-cf6bc5f5571a","Type":"ContainerStarted","Data":"c8de210068c78e26e4e9a016d3de2f0bc2fc2ea45d8c2f92ec62acf8c9b594f8"} Jan 29 06:55:57 crc kubenswrapper[5017]: I0129 06:55:57.420675 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"dca84366-8a11-423d-be6c-cf6bc5f5571a","Type":"ContainerStarted","Data":"c6243ef1ab891074d23f34fa1ae4f1ca8f14734527521a5b7a6d1f8e6a1f1f8e"} Jan 29 06:55:57 crc kubenswrapper[5017]: I0129 06:55:57.421654 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 06:55:57 crc kubenswrapper[5017]: I0129 06:55:57.448133 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.448100998 podStartE2EDuration="2.448100998s" podCreationTimestamp="2026-01-29 06:55:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:55:57.439449143 +0000 UTC m=+1243.813896753" watchObservedRunningTime="2026-01-29 06:55:57.448100998 +0000 UTC m=+1243.822548608" Jan 29 06:55:57 crc kubenswrapper[5017]: I0129 06:55:57.528519 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 06:55:57 crc kubenswrapper[5017]: I0129 06:55:57.540511 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 29 06:55:57 crc kubenswrapper[5017]: I0129 06:55:57.556935 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 29 06:55:57 crc kubenswrapper[5017]: E0129 06:55:57.557686 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95cfbfdc-d831-48a6-9de9-5511bd7587d9" containerName="ceilometer-notification-agent" Jan 29 06:55:57 crc kubenswrapper[5017]: I0129 06:55:57.557718 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="95cfbfdc-d831-48a6-9de9-5511bd7587d9" containerName="ceilometer-notification-agent" Jan 29 06:55:57 crc kubenswrapper[5017]: E0129 06:55:57.557755 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95cfbfdc-d831-48a6-9de9-5511bd7587d9" containerName="proxy-httpd" Jan 29 06:55:57 crc kubenswrapper[5017]: I0129 06:55:57.557766 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="95cfbfdc-d831-48a6-9de9-5511bd7587d9" containerName="proxy-httpd" Jan 29 06:55:57 crc kubenswrapper[5017]: E0129 06:55:57.557796 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95cfbfdc-d831-48a6-9de9-5511bd7587d9" containerName="ceilometer-central-agent" Jan 29 06:55:57 crc kubenswrapper[5017]: I0129 06:55:57.557808 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="95cfbfdc-d831-48a6-9de9-5511bd7587d9" containerName="ceilometer-central-agent" Jan 29 06:55:57 crc kubenswrapper[5017]: E0129 06:55:57.557819 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95cfbfdc-d831-48a6-9de9-5511bd7587d9" containerName="sg-core" Jan 29 06:55:57 crc kubenswrapper[5017]: I0129 06:55:57.557826 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="95cfbfdc-d831-48a6-9de9-5511bd7587d9" containerName="sg-core" Jan 29 06:55:57 crc kubenswrapper[5017]: I0129 06:55:57.558038 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="95cfbfdc-d831-48a6-9de9-5511bd7587d9" containerName="ceilometer-central-agent" Jan 29 06:55:57 crc kubenswrapper[5017]: I0129 06:55:57.558055 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="95cfbfdc-d831-48a6-9de9-5511bd7587d9" containerName="proxy-httpd" Jan 29 06:55:57 crc kubenswrapper[5017]: I0129 06:55:57.558072 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="95cfbfdc-d831-48a6-9de9-5511bd7587d9" containerName="ceilometer-notification-agent" Jan 29 06:55:57 crc kubenswrapper[5017]: I0129 06:55:57.558082 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="95cfbfdc-d831-48a6-9de9-5511bd7587d9" containerName="sg-core" Jan 29 06:55:57 crc kubenswrapper[5017]: I0129 06:55:57.560307 5017 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 06:55:57 crc kubenswrapper[5017]: I0129 06:55:57.560428 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 06:55:57 crc kubenswrapper[5017]: I0129 06:55:57.564619 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 29 06:55:57 crc kubenswrapper[5017]: I0129 06:55:57.565701 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 29 06:55:57 crc kubenswrapper[5017]: I0129 06:55:57.566011 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 29 06:55:57 crc kubenswrapper[5017]: I0129 06:55:57.719451 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee1155cc-775a-4661-8660-d0a93d65d310-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ee1155cc-775a-4661-8660-d0a93d65d310\") " pod="openstack/ceilometer-0" Jan 29 06:55:57 crc kubenswrapper[5017]: I0129 06:55:57.719533 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee1155cc-775a-4661-8660-d0a93d65d310-config-data\") pod \"ceilometer-0\" (UID: \"ee1155cc-775a-4661-8660-d0a93d65d310\") " pod="openstack/ceilometer-0" Jan 29 06:55:57 crc kubenswrapper[5017]: I0129 06:55:57.719691 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee1155cc-775a-4661-8660-d0a93d65d310-scripts\") pod \"ceilometer-0\" (UID: \"ee1155cc-775a-4661-8660-d0a93d65d310\") " pod="openstack/ceilometer-0" Jan 29 06:55:57 crc kubenswrapper[5017]: I0129 06:55:57.719839 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ee1155cc-775a-4661-8660-d0a93d65d310-run-httpd\") pod \"ceilometer-0\" (UID: \"ee1155cc-775a-4661-8660-d0a93d65d310\") " pod="openstack/ceilometer-0" Jan 29 06:55:57 crc kubenswrapper[5017]: I0129 06:55:57.720017 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jd5fz\" (UniqueName: \"kubernetes.io/projected/ee1155cc-775a-4661-8660-d0a93d65d310-kube-api-access-jd5fz\") pod \"ceilometer-0\" (UID: \"ee1155cc-775a-4661-8660-d0a93d65d310\") " pod="openstack/ceilometer-0" Jan 29 06:55:57 crc kubenswrapper[5017]: I0129 06:55:57.720084 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ee1155cc-775a-4661-8660-d0a93d65d310-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ee1155cc-775a-4661-8660-d0a93d65d310\") " pod="openstack/ceilometer-0" Jan 29 06:55:57 crc kubenswrapper[5017]: I0129 06:55:57.720188 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ee1155cc-775a-4661-8660-d0a93d65d310-log-httpd\") pod \"ceilometer-0\" (UID: \"ee1155cc-775a-4661-8660-d0a93d65d310\") " pod="openstack/ceilometer-0" Jan 29 06:55:57 crc kubenswrapper[5017]: I0129 06:55:57.720210 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/ee1155cc-775a-4661-8660-d0a93d65d310-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ee1155cc-775a-4661-8660-d0a93d65d310\") " pod="openstack/ceilometer-0" Jan 29 06:55:57 crc kubenswrapper[5017]: I0129 06:55:57.823017 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee1155cc-775a-4661-8660-d0a93d65d310-scripts\") pod \"ceilometer-0\" (UID: \"ee1155cc-775a-4661-8660-d0a93d65d310\") " pod="openstack/ceilometer-0" Jan 29 06:55:57 crc kubenswrapper[5017]: I0129 06:55:57.823129 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ee1155cc-775a-4661-8660-d0a93d65d310-run-httpd\") pod \"ceilometer-0\" (UID: \"ee1155cc-775a-4661-8660-d0a93d65d310\") " pod="openstack/ceilometer-0" Jan 29 06:55:57 crc kubenswrapper[5017]: I0129 06:55:57.823197 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jd5fz\" (UniqueName: \"kubernetes.io/projected/ee1155cc-775a-4661-8660-d0a93d65d310-kube-api-access-jd5fz\") pod \"ceilometer-0\" (UID: \"ee1155cc-775a-4661-8660-d0a93d65d310\") " pod="openstack/ceilometer-0" Jan 29 06:55:57 crc kubenswrapper[5017]: I0129 06:55:57.823413 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ee1155cc-775a-4661-8660-d0a93d65d310-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ee1155cc-775a-4661-8660-d0a93d65d310\") " pod="openstack/ceilometer-0" Jan 29 06:55:57 crc kubenswrapper[5017]: I0129 06:55:57.823642 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ee1155cc-775a-4661-8660-d0a93d65d310-log-httpd\") pod \"ceilometer-0\" (UID: \"ee1155cc-775a-4661-8660-d0a93d65d310\") " pod="openstack/ceilometer-0" Jan 29 06:55:57 crc kubenswrapper[5017]: I0129 06:55:57.823740 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee1155cc-775a-4661-8660-d0a93d65d310-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ee1155cc-775a-4661-8660-d0a93d65d310\") " pod="openstack/ceilometer-0" Jan 29 06:55:57 crc kubenswrapper[5017]: I0129 06:55:57.823888 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee1155cc-775a-4661-8660-d0a93d65d310-config-data\") pod \"ceilometer-0\" (UID: \"ee1155cc-775a-4661-8660-d0a93d65d310\") " pod="openstack/ceilometer-0" Jan 29 06:55:57 crc kubenswrapper[5017]: I0129 06:55:57.823924 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee1155cc-775a-4661-8660-d0a93d65d310-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ee1155cc-775a-4661-8660-d0a93d65d310\") " pod="openstack/ceilometer-0" Jan 29 06:55:57 crc kubenswrapper[5017]: I0129 06:55:57.824263 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ee1155cc-775a-4661-8660-d0a93d65d310-run-httpd\") pod \"ceilometer-0\" (UID: \"ee1155cc-775a-4661-8660-d0a93d65d310\") " pod="openstack/ceilometer-0" Jan 29 06:55:57 crc kubenswrapper[5017]: I0129 06:55:57.824354 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/ee1155cc-775a-4661-8660-d0a93d65d310-log-httpd\") pod \"ceilometer-0\" (UID: \"ee1155cc-775a-4661-8660-d0a93d65d310\") " pod="openstack/ceilometer-0" Jan 29 06:55:57 crc kubenswrapper[5017]: I0129 06:55:57.829325 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee1155cc-775a-4661-8660-d0a93d65d310-scripts\") pod \"ceilometer-0\" (UID: \"ee1155cc-775a-4661-8660-d0a93d65d310\") " pod="openstack/ceilometer-0" Jan 29 06:55:57 crc kubenswrapper[5017]: I0129 06:55:57.830078 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee1155cc-775a-4661-8660-d0a93d65d310-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ee1155cc-775a-4661-8660-d0a93d65d310\") " pod="openstack/ceilometer-0" Jan 29 06:55:57 crc kubenswrapper[5017]: I0129 06:55:57.831638 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ee1155cc-775a-4661-8660-d0a93d65d310-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ee1155cc-775a-4661-8660-d0a93d65d310\") " pod="openstack/ceilometer-0" Jan 29 06:55:57 crc kubenswrapper[5017]: I0129 06:55:57.833052 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee1155cc-775a-4661-8660-d0a93d65d310-config-data\") pod \"ceilometer-0\" (UID: \"ee1155cc-775a-4661-8660-d0a93d65d310\") " pod="openstack/ceilometer-0" Jan 29 06:55:57 crc kubenswrapper[5017]: I0129 06:55:57.844115 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee1155cc-775a-4661-8660-d0a93d65d310-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ee1155cc-775a-4661-8660-d0a93d65d310\") " pod="openstack/ceilometer-0" Jan 29 06:55:57 crc kubenswrapper[5017]: I0129 06:55:57.845660 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jd5fz\" (UniqueName: \"kubernetes.io/projected/ee1155cc-775a-4661-8660-d0a93d65d310-kube-api-access-jd5fz\") pod \"ceilometer-0\" (UID: \"ee1155cc-775a-4661-8660-d0a93d65d310\") " pod="openstack/ceilometer-0" Jan 29 06:55:57 crc kubenswrapper[5017]: I0129 06:55:57.903518 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 06:55:58 crc kubenswrapper[5017]: I0129 06:55:58.332399 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95cfbfdc-d831-48a6-9de9-5511bd7587d9" path="/var/lib/kubelet/pods/95cfbfdc-d831-48a6-9de9-5511bd7587d9/volumes" Jan 29 06:55:58 crc kubenswrapper[5017]: I0129 06:55:58.465196 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 06:55:59 crc kubenswrapper[5017]: I0129 06:55:59.448856 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ee1155cc-775a-4661-8660-d0a93d65d310","Type":"ContainerStarted","Data":"f5df0ad95a2bc715fcdfcf1da07cf60409fe8d296d26985c0ebab85e18369e31"} Jan 29 06:55:59 crc kubenswrapper[5017]: I0129 06:55:59.449308 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ee1155cc-775a-4661-8660-d0a93d65d310","Type":"ContainerStarted","Data":"9725c402a2a03d7cf7e2c7510b6f4de75bc9ece2e3ac85990068aa1461fd57c1"} Jan 29 06:55:59 crc kubenswrapper[5017]: I0129 06:55:59.559270 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 29 06:55:59 crc kubenswrapper[5017]: I0129 06:55:59.727361 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 29 06:56:00 crc kubenswrapper[5017]: I0129 06:56:00.462293 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ee1155cc-775a-4661-8660-d0a93d65d310","Type":"ContainerStarted","Data":"b3183cdac2bcb26bc7af396e0158ec0453a6c7087662e9e255e887d2b575f79b"} Jan 29 06:56:00 crc kubenswrapper[5017]: I0129 06:56:00.887861 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 29 06:56:00 crc kubenswrapper[5017]: I0129 06:56:00.888074 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 29 06:56:01 crc kubenswrapper[5017]: I0129 06:56:01.480632 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ee1155cc-775a-4661-8660-d0a93d65d310","Type":"ContainerStarted","Data":"a2e761db49a0ed3cc28797a2b81b08f9e810df10919444824175a515b667ef7c"} Jan 29 06:56:01 crc kubenswrapper[5017]: I0129 06:56:01.902201 5017 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="b4f35fce-e9ad-4cf2-8944-638bb6d8df3d" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.194:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 06:56:01 crc kubenswrapper[5017]: I0129 06:56:01.902787 5017 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="b4f35fce-e9ad-4cf2-8944-638bb6d8df3d" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.194:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 06:56:03 crc kubenswrapper[5017]: I0129 06:56:03.505627 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ee1155cc-775a-4661-8660-d0a93d65d310","Type":"ContainerStarted","Data":"d0d35d5ef69c176eb70e493c17b550641d74cc1bb2d06322c03c0e14dd8d17f0"} Jan 29 06:56:03 crc kubenswrapper[5017]: I0129 06:56:03.507431 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 29 
06:56:03 crc kubenswrapper[5017]: I0129 06:56:03.532188 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.667226577 podStartE2EDuration="6.532164832s" podCreationTimestamp="2026-01-29 06:55:57 +0000 UTC" firstStartedPulling="2026-01-29 06:55:58.45823032 +0000 UTC m=+1244.832677950" lastFinishedPulling="2026-01-29 06:56:02.323168595 +0000 UTC m=+1248.697616205" observedRunningTime="2026-01-29 06:56:03.530348497 +0000 UTC m=+1249.904796107" watchObservedRunningTime="2026-01-29 06:56:03.532164832 +0000 UTC m=+1249.906612442" Jan 29 06:56:04 crc kubenswrapper[5017]: I0129 06:56:04.709171 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 29 06:56:04 crc kubenswrapper[5017]: I0129 06:56:04.727757 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 29 06:56:04 crc kubenswrapper[5017]: I0129 06:56:04.789235 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 29 06:56:05 crc kubenswrapper[5017]: I0129 06:56:05.564382 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 29 06:56:06 crc kubenswrapper[5017]: I0129 06:56:06.065326 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 29 06:56:06 crc kubenswrapper[5017]: I0129 06:56:06.065787 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 29 06:56:07 crc kubenswrapper[5017]: I0129 06:56:07.147189 5017 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="dca84366-8a11-423d-be6c-cf6bc5f5571a" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.197:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 06:56:07 crc kubenswrapper[5017]: I0129 06:56:07.147592 5017 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="dca84366-8a11-423d-be6c-cf6bc5f5571a" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.197:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 06:56:10 crc kubenswrapper[5017]: I0129 06:56:10.893792 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 29 06:56:10 crc kubenswrapper[5017]: I0129 06:56:10.894371 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 29 06:56:10 crc kubenswrapper[5017]: I0129 06:56:10.913755 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 29 06:56:10 crc kubenswrapper[5017]: I0129 06:56:10.919047 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 29 06:56:12 crc kubenswrapper[5017]: I0129 06:56:12.355688 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 29 06:56:12 crc kubenswrapper[5017]: I0129 06:56:12.477917 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gwgzb\" (UniqueName: \"kubernetes.io/projected/a35e9ed1-19c4-4412-b252-ead2940ab008-kube-api-access-gwgzb\") pod \"a35e9ed1-19c4-4412-b252-ead2940ab008\" (UID: \"a35e9ed1-19c4-4412-b252-ead2940ab008\") " Jan 29 06:56:12 crc kubenswrapper[5017]: I0129 06:56:12.478208 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a35e9ed1-19c4-4412-b252-ead2940ab008-config-data\") pod \"a35e9ed1-19c4-4412-b252-ead2940ab008\" (UID: \"a35e9ed1-19c4-4412-b252-ead2940ab008\") " Jan 29 06:56:12 crc kubenswrapper[5017]: I0129 06:56:12.478419 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a35e9ed1-19c4-4412-b252-ead2940ab008-combined-ca-bundle\") pod \"a35e9ed1-19c4-4412-b252-ead2940ab008\" (UID: \"a35e9ed1-19c4-4412-b252-ead2940ab008\") " Jan 29 06:56:12 crc kubenswrapper[5017]: I0129 06:56:12.509238 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a35e9ed1-19c4-4412-b252-ead2940ab008-kube-api-access-gwgzb" (OuterVolumeSpecName: "kube-api-access-gwgzb") pod "a35e9ed1-19c4-4412-b252-ead2940ab008" (UID: "a35e9ed1-19c4-4412-b252-ead2940ab008"). InnerVolumeSpecName "kube-api-access-gwgzb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:56:12 crc kubenswrapper[5017]: I0129 06:56:12.532018 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a35e9ed1-19c4-4412-b252-ead2940ab008-config-data" (OuterVolumeSpecName: "config-data") pod "a35e9ed1-19c4-4412-b252-ead2940ab008" (UID: "a35e9ed1-19c4-4412-b252-ead2940ab008"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:56:12 crc kubenswrapper[5017]: I0129 06:56:12.549204 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a35e9ed1-19c4-4412-b252-ead2940ab008-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a35e9ed1-19c4-4412-b252-ead2940ab008" (UID: "a35e9ed1-19c4-4412-b252-ead2940ab008"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:56:12 crc kubenswrapper[5017]: I0129 06:56:12.581129 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gwgzb\" (UniqueName: \"kubernetes.io/projected/a35e9ed1-19c4-4412-b252-ead2940ab008-kube-api-access-gwgzb\") on node \"crc\" DevicePath \"\"" Jan 29 06:56:12 crc kubenswrapper[5017]: I0129 06:56:12.581171 5017 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a35e9ed1-19c4-4412-b252-ead2940ab008-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 06:56:12 crc kubenswrapper[5017]: I0129 06:56:12.581185 5017 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a35e9ed1-19c4-4412-b252-ead2940ab008-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 06:56:12 crc kubenswrapper[5017]: I0129 06:56:12.626328 5017 generic.go:334] "Generic (PLEG): container finished" podID="a35e9ed1-19c4-4412-b252-ead2940ab008" containerID="e826d215df7f61bd097804cd8ae657139975b1390a49984f2c777ac95ac4eb97" exitCode=137 Jan 29 06:56:12 crc kubenswrapper[5017]: I0129 06:56:12.626482 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"a35e9ed1-19c4-4412-b252-ead2940ab008","Type":"ContainerDied","Data":"e826d215df7f61bd097804cd8ae657139975b1390a49984f2c777ac95ac4eb97"} Jan 29 06:56:12 crc kubenswrapper[5017]: I0129 06:56:12.626569 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"a35e9ed1-19c4-4412-b252-ead2940ab008","Type":"ContainerDied","Data":"a97be1554d63dbd52ca3430d9ee0807ee7a4bcb524f66bf42f0c993486424c4a"} Jan 29 06:56:12 crc kubenswrapper[5017]: I0129 06:56:12.626598 5017 scope.go:117] "RemoveContainer" containerID="e826d215df7f61bd097804cd8ae657139975b1390a49984f2c777ac95ac4eb97" Jan 29 06:56:12 crc kubenswrapper[5017]: I0129 06:56:12.626506 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 29 06:56:12 crc kubenswrapper[5017]: I0129 06:56:12.671690 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 06:56:12 crc kubenswrapper[5017]: I0129 06:56:12.684531 5017 scope.go:117] "RemoveContainer" containerID="e826d215df7f61bd097804cd8ae657139975b1390a49984f2c777ac95ac4eb97" Jan 29 06:56:12 crc kubenswrapper[5017]: E0129 06:56:12.698018 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e826d215df7f61bd097804cd8ae657139975b1390a49984f2c777ac95ac4eb97\": container with ID starting with e826d215df7f61bd097804cd8ae657139975b1390a49984f2c777ac95ac4eb97 not found: ID does not exist" containerID="e826d215df7f61bd097804cd8ae657139975b1390a49984f2c777ac95ac4eb97" Jan 29 06:56:12 crc kubenswrapper[5017]: I0129 06:56:12.698080 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e826d215df7f61bd097804cd8ae657139975b1390a49984f2c777ac95ac4eb97"} err="failed to get container status \"e826d215df7f61bd097804cd8ae657139975b1390a49984f2c777ac95ac4eb97\": rpc error: code = NotFound desc = could not find container \"e826d215df7f61bd097804cd8ae657139975b1390a49984f2c777ac95ac4eb97\": container with ID starting with e826d215df7f61bd097804cd8ae657139975b1390a49984f2c777ac95ac4eb97 not found: ID does not exist" Jan 29 06:56:12 crc kubenswrapper[5017]: I0129 06:56:12.698140 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 06:56:12 crc kubenswrapper[5017]: I0129 06:56:12.706016 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 06:56:12 crc kubenswrapper[5017]: E0129 06:56:12.706450 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a35e9ed1-19c4-4412-b252-ead2940ab008" containerName="nova-cell1-novncproxy-novncproxy" Jan 29 06:56:12 crc kubenswrapper[5017]: I0129 06:56:12.706469 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="a35e9ed1-19c4-4412-b252-ead2940ab008" containerName="nova-cell1-novncproxy-novncproxy" Jan 29 06:56:12 crc kubenswrapper[5017]: I0129 06:56:12.706704 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="a35e9ed1-19c4-4412-b252-ead2940ab008" containerName="nova-cell1-novncproxy-novncproxy" Jan 29 06:56:12 crc kubenswrapper[5017]: I0129 06:56:12.710519 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 29 06:56:12 crc kubenswrapper[5017]: I0129 06:56:12.715517 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 29 06:56:12 crc kubenswrapper[5017]: I0129 06:56:12.715695 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 29 06:56:12 crc kubenswrapper[5017]: I0129 06:56:12.715844 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 29 06:56:12 crc kubenswrapper[5017]: I0129 06:56:12.724697 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 06:56:12 crc kubenswrapper[5017]: I0129 06:56:12.785131 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73788076-4208-4f0f-8c66-95ef1bfb28b6-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"73788076-4208-4f0f-8c66-95ef1bfb28b6\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 06:56:12 crc kubenswrapper[5017]: I0129 06:56:12.785208 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/73788076-4208-4f0f-8c66-95ef1bfb28b6-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"73788076-4208-4f0f-8c66-95ef1bfb28b6\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 06:56:12 crc kubenswrapper[5017]: I0129 06:56:12.785257 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/73788076-4208-4f0f-8c66-95ef1bfb28b6-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"73788076-4208-4f0f-8c66-95ef1bfb28b6\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 06:56:12 crc kubenswrapper[5017]: I0129 06:56:12.785336 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfmjf\" (UniqueName: \"kubernetes.io/projected/73788076-4208-4f0f-8c66-95ef1bfb28b6-kube-api-access-vfmjf\") pod \"nova-cell1-novncproxy-0\" (UID: \"73788076-4208-4f0f-8c66-95ef1bfb28b6\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 06:56:12 crc kubenswrapper[5017]: I0129 06:56:12.785384 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73788076-4208-4f0f-8c66-95ef1bfb28b6-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"73788076-4208-4f0f-8c66-95ef1bfb28b6\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 06:56:12 crc kubenswrapper[5017]: I0129 06:56:12.887448 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/73788076-4208-4f0f-8c66-95ef1bfb28b6-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"73788076-4208-4f0f-8c66-95ef1bfb28b6\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 06:56:12 crc kubenswrapper[5017]: I0129 06:56:12.887630 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfmjf\" (UniqueName: \"kubernetes.io/projected/73788076-4208-4f0f-8c66-95ef1bfb28b6-kube-api-access-vfmjf\") pod \"nova-cell1-novncproxy-0\" (UID: \"73788076-4208-4f0f-8c66-95ef1bfb28b6\") " 
pod="openstack/nova-cell1-novncproxy-0" Jan 29 06:56:12 crc kubenswrapper[5017]: I0129 06:56:12.887712 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73788076-4208-4f0f-8c66-95ef1bfb28b6-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"73788076-4208-4f0f-8c66-95ef1bfb28b6\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 06:56:12 crc kubenswrapper[5017]: I0129 06:56:12.887794 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73788076-4208-4f0f-8c66-95ef1bfb28b6-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"73788076-4208-4f0f-8c66-95ef1bfb28b6\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 06:56:12 crc kubenswrapper[5017]: I0129 06:56:12.887930 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/73788076-4208-4f0f-8c66-95ef1bfb28b6-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"73788076-4208-4f0f-8c66-95ef1bfb28b6\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 06:56:12 crc kubenswrapper[5017]: I0129 06:56:12.892190 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73788076-4208-4f0f-8c66-95ef1bfb28b6-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"73788076-4208-4f0f-8c66-95ef1bfb28b6\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 06:56:12 crc kubenswrapper[5017]: I0129 06:56:12.892328 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/73788076-4208-4f0f-8c66-95ef1bfb28b6-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"73788076-4208-4f0f-8c66-95ef1bfb28b6\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 06:56:12 crc kubenswrapper[5017]: I0129 06:56:12.892368 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/73788076-4208-4f0f-8c66-95ef1bfb28b6-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"73788076-4208-4f0f-8c66-95ef1bfb28b6\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 06:56:12 crc kubenswrapper[5017]: I0129 06:56:12.893579 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73788076-4208-4f0f-8c66-95ef1bfb28b6-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"73788076-4208-4f0f-8c66-95ef1bfb28b6\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 06:56:12 crc kubenswrapper[5017]: I0129 06:56:12.905819 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfmjf\" (UniqueName: \"kubernetes.io/projected/73788076-4208-4f0f-8c66-95ef1bfb28b6-kube-api-access-vfmjf\") pod \"nova-cell1-novncproxy-0\" (UID: \"73788076-4208-4f0f-8c66-95ef1bfb28b6\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 06:56:13 crc kubenswrapper[5017]: I0129 06:56:13.068338 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 29 06:56:13 crc kubenswrapper[5017]: I0129 06:56:13.547092 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 06:56:13 crc kubenswrapper[5017]: I0129 06:56:13.642532 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"73788076-4208-4f0f-8c66-95ef1bfb28b6","Type":"ContainerStarted","Data":"57b80fd0e54fbbda0581b0e548a2d554f0ef7e7c85cb96f4a5eb0b9e73292dc9"} Jan 29 06:56:14 crc kubenswrapper[5017]: I0129 06:56:14.346923 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a35e9ed1-19c4-4412-b252-ead2940ab008" path="/var/lib/kubelet/pods/a35e9ed1-19c4-4412-b252-ead2940ab008/volumes" Jan 29 06:56:14 crc kubenswrapper[5017]: I0129 06:56:14.658301 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"73788076-4208-4f0f-8c66-95ef1bfb28b6","Type":"ContainerStarted","Data":"2210020dfa4716854368968b348fbd02597baa1b8afac742e90bc9b808a72ae6"} Jan 29 06:56:14 crc kubenswrapper[5017]: I0129 06:56:14.692485 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.692465399 podStartE2EDuration="2.692465399s" podCreationTimestamp="2026-01-29 06:56:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:56:14.689765922 +0000 UTC m=+1261.064213572" watchObservedRunningTime="2026-01-29 06:56:14.692465399 +0000 UTC m=+1261.066913009" Jan 29 06:56:16 crc kubenswrapper[5017]: I0129 06:56:16.069449 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 29 06:56:16 crc kubenswrapper[5017]: I0129 06:56:16.071830 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 29 06:56:16 crc kubenswrapper[5017]: I0129 06:56:16.072125 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 29 06:56:16 crc kubenswrapper[5017]: I0129 06:56:16.074546 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 29 06:56:16 crc kubenswrapper[5017]: I0129 06:56:16.680458 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 29 06:56:16 crc kubenswrapper[5017]: I0129 06:56:16.687484 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 29 06:56:16 crc kubenswrapper[5017]: I0129 06:56:16.928419 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-fcd6f8f8f-cxtdk"] Jan 29 06:56:16 crc kubenswrapper[5017]: I0129 06:56:16.930289 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-fcd6f8f8f-cxtdk" Jan 29 06:56:16 crc kubenswrapper[5017]: I0129 06:56:16.963486 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-fcd6f8f8f-cxtdk"] Jan 29 06:56:16 crc kubenswrapper[5017]: I0129 06:56:16.996802 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c653a323-c9c2-42f2-a2af-125828234475-dns-svc\") pod \"dnsmasq-dns-fcd6f8f8f-cxtdk\" (UID: \"c653a323-c9c2-42f2-a2af-125828234475\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-cxtdk" Jan 29 06:56:16 crc kubenswrapper[5017]: I0129 06:56:16.996897 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncf76\" (UniqueName: \"kubernetes.io/projected/c653a323-c9c2-42f2-a2af-125828234475-kube-api-access-ncf76\") pod \"dnsmasq-dns-fcd6f8f8f-cxtdk\" (UID: \"c653a323-c9c2-42f2-a2af-125828234475\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-cxtdk" Jan 29 06:56:16 crc kubenswrapper[5017]: I0129 06:56:16.996945 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c653a323-c9c2-42f2-a2af-125828234475-dns-swift-storage-0\") pod \"dnsmasq-dns-fcd6f8f8f-cxtdk\" (UID: \"c653a323-c9c2-42f2-a2af-125828234475\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-cxtdk" Jan 29 06:56:16 crc kubenswrapper[5017]: I0129 06:56:16.997340 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c653a323-c9c2-42f2-a2af-125828234475-config\") pod \"dnsmasq-dns-fcd6f8f8f-cxtdk\" (UID: \"c653a323-c9c2-42f2-a2af-125828234475\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-cxtdk" Jan 29 06:56:16 crc kubenswrapper[5017]: I0129 06:56:16.997410 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c653a323-c9c2-42f2-a2af-125828234475-ovsdbserver-sb\") pod \"dnsmasq-dns-fcd6f8f8f-cxtdk\" (UID: \"c653a323-c9c2-42f2-a2af-125828234475\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-cxtdk" Jan 29 06:56:16 crc kubenswrapper[5017]: I0129 06:56:16.997536 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c653a323-c9c2-42f2-a2af-125828234475-ovsdbserver-nb\") pod \"dnsmasq-dns-fcd6f8f8f-cxtdk\" (UID: \"c653a323-c9c2-42f2-a2af-125828234475\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-cxtdk" Jan 29 06:56:17 crc kubenswrapper[5017]: I0129 06:56:17.099818 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c653a323-c9c2-42f2-a2af-125828234475-dns-svc\") pod \"dnsmasq-dns-fcd6f8f8f-cxtdk\" (UID: \"c653a323-c9c2-42f2-a2af-125828234475\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-cxtdk" Jan 29 06:56:17 crc kubenswrapper[5017]: I0129 06:56:17.099904 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncf76\" (UniqueName: \"kubernetes.io/projected/c653a323-c9c2-42f2-a2af-125828234475-kube-api-access-ncf76\") pod \"dnsmasq-dns-fcd6f8f8f-cxtdk\" (UID: \"c653a323-c9c2-42f2-a2af-125828234475\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-cxtdk" Jan 29 06:56:17 crc kubenswrapper[5017]: I0129 06:56:17.099946 5017 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c653a323-c9c2-42f2-a2af-125828234475-dns-swift-storage-0\") pod \"dnsmasq-dns-fcd6f8f8f-cxtdk\" (UID: \"c653a323-c9c2-42f2-a2af-125828234475\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-cxtdk" Jan 29 06:56:17 crc kubenswrapper[5017]: I0129 06:56:17.100042 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c653a323-c9c2-42f2-a2af-125828234475-config\") pod \"dnsmasq-dns-fcd6f8f8f-cxtdk\" (UID: \"c653a323-c9c2-42f2-a2af-125828234475\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-cxtdk" Jan 29 06:56:17 crc kubenswrapper[5017]: I0129 06:56:17.100065 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c653a323-c9c2-42f2-a2af-125828234475-ovsdbserver-sb\") pod \"dnsmasq-dns-fcd6f8f8f-cxtdk\" (UID: \"c653a323-c9c2-42f2-a2af-125828234475\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-cxtdk" Jan 29 06:56:17 crc kubenswrapper[5017]: I0129 06:56:17.100641 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c653a323-c9c2-42f2-a2af-125828234475-ovsdbserver-nb\") pod \"dnsmasq-dns-fcd6f8f8f-cxtdk\" (UID: \"c653a323-c9c2-42f2-a2af-125828234475\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-cxtdk" Jan 29 06:56:17 crc kubenswrapper[5017]: I0129 06:56:17.101278 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c653a323-c9c2-42f2-a2af-125828234475-ovsdbserver-sb\") pod \"dnsmasq-dns-fcd6f8f8f-cxtdk\" (UID: \"c653a323-c9c2-42f2-a2af-125828234475\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-cxtdk" Jan 29 06:56:17 crc kubenswrapper[5017]: I0129 06:56:17.101304 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c653a323-c9c2-42f2-a2af-125828234475-config\") pod \"dnsmasq-dns-fcd6f8f8f-cxtdk\" (UID: \"c653a323-c9c2-42f2-a2af-125828234475\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-cxtdk" Jan 29 06:56:17 crc kubenswrapper[5017]: I0129 06:56:17.101788 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c653a323-c9c2-42f2-a2af-125828234475-dns-swift-storage-0\") pod \"dnsmasq-dns-fcd6f8f8f-cxtdk\" (UID: \"c653a323-c9c2-42f2-a2af-125828234475\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-cxtdk" Jan 29 06:56:17 crc kubenswrapper[5017]: I0129 06:56:17.101811 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c653a323-c9c2-42f2-a2af-125828234475-ovsdbserver-nb\") pod \"dnsmasq-dns-fcd6f8f8f-cxtdk\" (UID: \"c653a323-c9c2-42f2-a2af-125828234475\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-cxtdk" Jan 29 06:56:17 crc kubenswrapper[5017]: I0129 06:56:17.102067 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c653a323-c9c2-42f2-a2af-125828234475-dns-svc\") pod \"dnsmasq-dns-fcd6f8f8f-cxtdk\" (UID: \"c653a323-c9c2-42f2-a2af-125828234475\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-cxtdk" Jan 29 06:56:17 crc kubenswrapper[5017]: I0129 06:56:17.125401 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ncf76\" (UniqueName: 
\"kubernetes.io/projected/c653a323-c9c2-42f2-a2af-125828234475-kube-api-access-ncf76\") pod \"dnsmasq-dns-fcd6f8f8f-cxtdk\" (UID: \"c653a323-c9c2-42f2-a2af-125828234475\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-cxtdk" Jan 29 06:56:17 crc kubenswrapper[5017]: I0129 06:56:17.264263 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fcd6f8f8f-cxtdk" Jan 29 06:56:17 crc kubenswrapper[5017]: I0129 06:56:17.764664 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-fcd6f8f8f-cxtdk"] Jan 29 06:56:18 crc kubenswrapper[5017]: I0129 06:56:18.069424 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 29 06:56:18 crc kubenswrapper[5017]: I0129 06:56:18.736696 5017 generic.go:334] "Generic (PLEG): container finished" podID="c653a323-c9c2-42f2-a2af-125828234475" containerID="51ea0b6a63df696a6c8299801190aa0bb86841398e2ad16680fb32723e8af016" exitCode=0 Jan 29 06:56:18 crc kubenswrapper[5017]: I0129 06:56:18.736896 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcd6f8f8f-cxtdk" event={"ID":"c653a323-c9c2-42f2-a2af-125828234475","Type":"ContainerDied","Data":"51ea0b6a63df696a6c8299801190aa0bb86841398e2ad16680fb32723e8af016"} Jan 29 06:56:18 crc kubenswrapper[5017]: I0129 06:56:18.737289 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcd6f8f8f-cxtdk" event={"ID":"c653a323-c9c2-42f2-a2af-125828234475","Type":"ContainerStarted","Data":"05c50afbe8bcc22f3be698ed5b03d80360ec311ac61f2d7ac2210a0ba7051538"} Jan 29 06:56:19 crc kubenswrapper[5017]: I0129 06:56:19.302087 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 06:56:19 crc kubenswrapper[5017]: I0129 06:56:19.302889 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ee1155cc-775a-4661-8660-d0a93d65d310" containerName="ceilometer-central-agent" containerID="cri-o://f5df0ad95a2bc715fcdfcf1da07cf60409fe8d296d26985c0ebab85e18369e31" gracePeriod=30 Jan 29 06:56:19 crc kubenswrapper[5017]: I0129 06:56:19.303224 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ee1155cc-775a-4661-8660-d0a93d65d310" containerName="proxy-httpd" containerID="cri-o://d0d35d5ef69c176eb70e493c17b550641d74cc1bb2d06322c03c0e14dd8d17f0" gracePeriod=30 Jan 29 06:56:19 crc kubenswrapper[5017]: I0129 06:56:19.303230 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ee1155cc-775a-4661-8660-d0a93d65d310" containerName="sg-core" containerID="cri-o://a2e761db49a0ed3cc28797a2b81b08f9e810df10919444824175a515b667ef7c" gracePeriod=30 Jan 29 06:56:19 crc kubenswrapper[5017]: I0129 06:56:19.303205 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ee1155cc-775a-4661-8660-d0a93d65d310" containerName="ceilometer-notification-agent" containerID="cri-o://b3183cdac2bcb26bc7af396e0158ec0453a6c7087662e9e255e887d2b575f79b" gracePeriod=30 Jan 29 06:56:19 crc kubenswrapper[5017]: I0129 06:56:19.406547 5017 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="ee1155cc-775a-4661-8660-d0a93d65d310" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.198:3000/\": read tcp 10.217.0.2:45150->10.217.0.198:3000: read: connection reset by peer" Jan 29 06:56:19 crc 
kubenswrapper[5017]: I0129 06:56:19.494653 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 29 06:56:19 crc kubenswrapper[5017]: I0129 06:56:19.750552 5017 generic.go:334] "Generic (PLEG): container finished" podID="ee1155cc-775a-4661-8660-d0a93d65d310" containerID="d0d35d5ef69c176eb70e493c17b550641d74cc1bb2d06322c03c0e14dd8d17f0" exitCode=0 Jan 29 06:56:19 crc kubenswrapper[5017]: I0129 06:56:19.751045 5017 generic.go:334] "Generic (PLEG): container finished" podID="ee1155cc-775a-4661-8660-d0a93d65d310" containerID="a2e761db49a0ed3cc28797a2b81b08f9e810df10919444824175a515b667ef7c" exitCode=2 Jan 29 06:56:19 crc kubenswrapper[5017]: I0129 06:56:19.750637 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ee1155cc-775a-4661-8660-d0a93d65d310","Type":"ContainerDied","Data":"d0d35d5ef69c176eb70e493c17b550641d74cc1bb2d06322c03c0e14dd8d17f0"} Jan 29 06:56:19 crc kubenswrapper[5017]: I0129 06:56:19.751110 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ee1155cc-775a-4661-8660-d0a93d65d310","Type":"ContainerDied","Data":"a2e761db49a0ed3cc28797a2b81b08f9e810df10919444824175a515b667ef7c"} Jan 29 06:56:19 crc kubenswrapper[5017]: I0129 06:56:19.754454 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcd6f8f8f-cxtdk" event={"ID":"c653a323-c9c2-42f2-a2af-125828234475","Type":"ContainerStarted","Data":"fa179775b1ca9fee80f8f1451a4e94f0855c85741c538bb2b667dddb2502f32c"} Jan 29 06:56:19 crc kubenswrapper[5017]: I0129 06:56:19.754833 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-fcd6f8f8f-cxtdk" Jan 29 06:56:19 crc kubenswrapper[5017]: I0129 06:56:19.755584 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="dca84366-8a11-423d-be6c-cf6bc5f5571a" containerName="nova-api-api" containerID="cri-o://053fbc3194f0c06033de2afa851811a4a8eb68b697a24714956a72f8da23468b" gracePeriod=30 Jan 29 06:56:19 crc kubenswrapper[5017]: I0129 06:56:19.755705 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="dca84366-8a11-423d-be6c-cf6bc5f5571a" containerName="nova-api-log" containerID="cri-o://c8de210068c78e26e4e9a016d3de2f0bc2fc2ea45d8c2f92ec62acf8c9b594f8" gracePeriod=30 Jan 29 06:56:20 crc kubenswrapper[5017]: I0129 06:56:20.770005 5017 generic.go:334] "Generic (PLEG): container finished" podID="dca84366-8a11-423d-be6c-cf6bc5f5571a" containerID="c8de210068c78e26e4e9a016d3de2f0bc2fc2ea45d8c2f92ec62acf8c9b594f8" exitCode=143 Jan 29 06:56:20 crc kubenswrapper[5017]: I0129 06:56:20.770084 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"dca84366-8a11-423d-be6c-cf6bc5f5571a","Type":"ContainerDied","Data":"c8de210068c78e26e4e9a016d3de2f0bc2fc2ea45d8c2f92ec62acf8c9b594f8"} Jan 29 06:56:20 crc kubenswrapper[5017]: I0129 06:56:20.772862 5017 generic.go:334] "Generic (PLEG): container finished" podID="ee1155cc-775a-4661-8660-d0a93d65d310" containerID="f5df0ad95a2bc715fcdfcf1da07cf60409fe8d296d26985c0ebab85e18369e31" exitCode=0 Jan 29 06:56:20 crc kubenswrapper[5017]: I0129 06:56:20.774071 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ee1155cc-775a-4661-8660-d0a93d65d310","Type":"ContainerDied","Data":"f5df0ad95a2bc715fcdfcf1da07cf60409fe8d296d26985c0ebab85e18369e31"} Jan 29 06:56:21 crc kubenswrapper[5017]: I0129 
06:56:21.789155 5017 generic.go:334] "Generic (PLEG): container finished" podID="ee1155cc-775a-4661-8660-d0a93d65d310" containerID="b3183cdac2bcb26bc7af396e0158ec0453a6c7087662e9e255e887d2b575f79b" exitCode=0 Jan 29 06:56:21 crc kubenswrapper[5017]: I0129 06:56:21.789738 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ee1155cc-775a-4661-8660-d0a93d65d310","Type":"ContainerDied","Data":"b3183cdac2bcb26bc7af396e0158ec0453a6c7087662e9e255e887d2b575f79b"} Jan 29 06:56:21 crc kubenswrapper[5017]: I0129 06:56:21.943915 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 06:56:21 crc kubenswrapper[5017]: I0129 06:56:21.975583 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-fcd6f8f8f-cxtdk" podStartSLOduration=5.975547244 podStartE2EDuration="5.975547244s" podCreationTimestamp="2026-01-29 06:56:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:56:19.782091772 +0000 UTC m=+1266.156539382" watchObservedRunningTime="2026-01-29 06:56:21.975547244 +0000 UTC m=+1268.349994864" Jan 29 06:56:22 crc kubenswrapper[5017]: I0129 06:56:22.068074 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jd5fz\" (UniqueName: \"kubernetes.io/projected/ee1155cc-775a-4661-8660-d0a93d65d310-kube-api-access-jd5fz\") pod \"ee1155cc-775a-4661-8660-d0a93d65d310\" (UID: \"ee1155cc-775a-4661-8660-d0a93d65d310\") " Jan 29 06:56:22 crc kubenswrapper[5017]: I0129 06:56:22.068617 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ee1155cc-775a-4661-8660-d0a93d65d310-sg-core-conf-yaml\") pod \"ee1155cc-775a-4661-8660-d0a93d65d310\" (UID: \"ee1155cc-775a-4661-8660-d0a93d65d310\") " Jan 29 06:56:22 crc kubenswrapper[5017]: I0129 06:56:22.068650 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ee1155cc-775a-4661-8660-d0a93d65d310-log-httpd\") pod \"ee1155cc-775a-4661-8660-d0a93d65d310\" (UID: \"ee1155cc-775a-4661-8660-d0a93d65d310\") " Jan 29 06:56:22 crc kubenswrapper[5017]: I0129 06:56:22.068672 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee1155cc-775a-4661-8660-d0a93d65d310-combined-ca-bundle\") pod \"ee1155cc-775a-4661-8660-d0a93d65d310\" (UID: \"ee1155cc-775a-4661-8660-d0a93d65d310\") " Jan 29 06:56:22 crc kubenswrapper[5017]: I0129 06:56:22.068722 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ee1155cc-775a-4661-8660-d0a93d65d310-run-httpd\") pod \"ee1155cc-775a-4661-8660-d0a93d65d310\" (UID: \"ee1155cc-775a-4661-8660-d0a93d65d310\") " Jan 29 06:56:22 crc kubenswrapper[5017]: I0129 06:56:22.068840 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee1155cc-775a-4661-8660-d0a93d65d310-scripts\") pod \"ee1155cc-775a-4661-8660-d0a93d65d310\" (UID: \"ee1155cc-775a-4661-8660-d0a93d65d310\") " Jan 29 06:56:22 crc kubenswrapper[5017]: I0129 06:56:22.068900 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/ee1155cc-775a-4661-8660-d0a93d65d310-ceilometer-tls-certs\") pod \"ee1155cc-775a-4661-8660-d0a93d65d310\" (UID: \"ee1155cc-775a-4661-8660-d0a93d65d310\") " Jan 29 06:56:22 crc kubenswrapper[5017]: I0129 06:56:22.068935 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee1155cc-775a-4661-8660-d0a93d65d310-config-data\") pod \"ee1155cc-775a-4661-8660-d0a93d65d310\" (UID: \"ee1155cc-775a-4661-8660-d0a93d65d310\") " Jan 29 06:56:22 crc kubenswrapper[5017]: I0129 06:56:22.069466 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee1155cc-775a-4661-8660-d0a93d65d310-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ee1155cc-775a-4661-8660-d0a93d65d310" (UID: "ee1155cc-775a-4661-8660-d0a93d65d310"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:56:22 crc kubenswrapper[5017]: I0129 06:56:22.069781 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee1155cc-775a-4661-8660-d0a93d65d310-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ee1155cc-775a-4661-8660-d0a93d65d310" (UID: "ee1155cc-775a-4661-8660-d0a93d65d310"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:56:22 crc kubenswrapper[5017]: I0129 06:56:22.076718 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee1155cc-775a-4661-8660-d0a93d65d310-kube-api-access-jd5fz" (OuterVolumeSpecName: "kube-api-access-jd5fz") pod "ee1155cc-775a-4661-8660-d0a93d65d310" (UID: "ee1155cc-775a-4661-8660-d0a93d65d310"). InnerVolumeSpecName "kube-api-access-jd5fz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:56:22 crc kubenswrapper[5017]: I0129 06:56:22.078324 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee1155cc-775a-4661-8660-d0a93d65d310-scripts" (OuterVolumeSpecName: "scripts") pod "ee1155cc-775a-4661-8660-d0a93d65d310" (UID: "ee1155cc-775a-4661-8660-d0a93d65d310"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:56:22 crc kubenswrapper[5017]: I0129 06:56:22.115474 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee1155cc-775a-4661-8660-d0a93d65d310-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ee1155cc-775a-4661-8660-d0a93d65d310" (UID: "ee1155cc-775a-4661-8660-d0a93d65d310"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:56:22 crc kubenswrapper[5017]: I0129 06:56:22.135145 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee1155cc-775a-4661-8660-d0a93d65d310-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "ee1155cc-775a-4661-8660-d0a93d65d310" (UID: "ee1155cc-775a-4661-8660-d0a93d65d310"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:56:22 crc kubenswrapper[5017]: I0129 06:56:22.172384 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee1155cc-775a-4661-8660-d0a93d65d310-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ee1155cc-775a-4661-8660-d0a93d65d310" (UID: "ee1155cc-775a-4661-8660-d0a93d65d310"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:56:22 crc kubenswrapper[5017]: I0129 06:56:22.172464 5017 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee1155cc-775a-4661-8660-d0a93d65d310-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 06:56:22 crc kubenswrapper[5017]: I0129 06:56:22.172523 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jd5fz\" (UniqueName: \"kubernetes.io/projected/ee1155cc-775a-4661-8660-d0a93d65d310-kube-api-access-jd5fz\") on node \"crc\" DevicePath \"\"" Jan 29 06:56:22 crc kubenswrapper[5017]: I0129 06:56:22.172560 5017 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ee1155cc-775a-4661-8660-d0a93d65d310-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 29 06:56:22 crc kubenswrapper[5017]: I0129 06:56:22.172590 5017 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ee1155cc-775a-4661-8660-d0a93d65d310-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 06:56:22 crc kubenswrapper[5017]: I0129 06:56:22.172615 5017 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ee1155cc-775a-4661-8660-d0a93d65d310-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 06:56:22 crc kubenswrapper[5017]: I0129 06:56:22.172641 5017 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee1155cc-775a-4661-8660-d0a93d65d310-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 06:56:22 crc kubenswrapper[5017]: I0129 06:56:22.194171 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee1155cc-775a-4661-8660-d0a93d65d310-config-data" (OuterVolumeSpecName: "config-data") pod "ee1155cc-775a-4661-8660-d0a93d65d310" (UID: "ee1155cc-775a-4661-8660-d0a93d65d310"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:56:22 crc kubenswrapper[5017]: I0129 06:56:22.274907 5017 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee1155cc-775a-4661-8660-d0a93d65d310-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 06:56:22 crc kubenswrapper[5017]: I0129 06:56:22.274947 5017 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee1155cc-775a-4661-8660-d0a93d65d310-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 06:56:22 crc kubenswrapper[5017]: I0129 06:56:22.809327 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ee1155cc-775a-4661-8660-d0a93d65d310","Type":"ContainerDied","Data":"9725c402a2a03d7cf7e2c7510b6f4de75bc9ece2e3ac85990068aa1461fd57c1"} Jan 29 06:56:22 crc kubenswrapper[5017]: I0129 06:56:22.809411 5017 scope.go:117] "RemoveContainer" containerID="d0d35d5ef69c176eb70e493c17b550641d74cc1bb2d06322c03c0e14dd8d17f0" Jan 29 06:56:22 crc kubenswrapper[5017]: I0129 06:56:22.809489 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 06:56:22 crc kubenswrapper[5017]: I0129 06:56:22.843243 5017 scope.go:117] "RemoveContainer" containerID="a2e761db49a0ed3cc28797a2b81b08f9e810df10919444824175a515b667ef7c" Jan 29 06:56:22 crc kubenswrapper[5017]: I0129 06:56:22.843467 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 06:56:22 crc kubenswrapper[5017]: I0129 06:56:22.895326 5017 scope.go:117] "RemoveContainer" containerID="b3183cdac2bcb26bc7af396e0158ec0453a6c7087662e9e255e887d2b575f79b" Jan 29 06:56:22 crc kubenswrapper[5017]: I0129 06:56:22.895604 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 29 06:56:22 crc kubenswrapper[5017]: I0129 06:56:22.931874 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 29 06:56:22 crc kubenswrapper[5017]: E0129 06:56:22.932447 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee1155cc-775a-4661-8660-d0a93d65d310" containerName="ceilometer-notification-agent" Jan 29 06:56:22 crc kubenswrapper[5017]: I0129 06:56:22.932471 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee1155cc-775a-4661-8660-d0a93d65d310" containerName="ceilometer-notification-agent" Jan 29 06:56:22 crc kubenswrapper[5017]: E0129 06:56:22.932495 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee1155cc-775a-4661-8660-d0a93d65d310" containerName="sg-core" Jan 29 06:56:22 crc kubenswrapper[5017]: I0129 06:56:22.932504 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee1155cc-775a-4661-8660-d0a93d65d310" containerName="sg-core" Jan 29 06:56:22 crc kubenswrapper[5017]: E0129 06:56:22.932524 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee1155cc-775a-4661-8660-d0a93d65d310" containerName="proxy-httpd" Jan 29 06:56:22 crc kubenswrapper[5017]: I0129 06:56:22.932531 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee1155cc-775a-4661-8660-d0a93d65d310" containerName="proxy-httpd" Jan 29 06:56:22 crc kubenswrapper[5017]: E0129 06:56:22.932546 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee1155cc-775a-4661-8660-d0a93d65d310" containerName="ceilometer-central-agent" Jan 29 06:56:22 crc kubenswrapper[5017]: I0129 06:56:22.932552 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee1155cc-775a-4661-8660-d0a93d65d310" containerName="ceilometer-central-agent" Jan 29 06:56:22 crc kubenswrapper[5017]: I0129 06:56:22.932864 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee1155cc-775a-4661-8660-d0a93d65d310" containerName="ceilometer-notification-agent" Jan 29 06:56:22 crc kubenswrapper[5017]: I0129 06:56:22.932901 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee1155cc-775a-4661-8660-d0a93d65d310" containerName="proxy-httpd" Jan 29 06:56:22 crc kubenswrapper[5017]: I0129 06:56:22.932883 5017 scope.go:117] "RemoveContainer" containerID="f5df0ad95a2bc715fcdfcf1da07cf60409fe8d296d26985c0ebab85e18369e31" Jan 29 06:56:22 crc kubenswrapper[5017]: I0129 06:56:22.932911 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee1155cc-775a-4661-8660-d0a93d65d310" containerName="ceilometer-central-agent" Jan 29 06:56:22 crc kubenswrapper[5017]: I0129 06:56:22.933124 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee1155cc-775a-4661-8660-d0a93d65d310" containerName="sg-core" Jan 29 06:56:22 crc kubenswrapper[5017]: I0129 06:56:22.935706 5017 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 06:56:22 crc kubenswrapper[5017]: I0129 06:56:22.938243 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 29 06:56:22 crc kubenswrapper[5017]: I0129 06:56:22.938343 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 29 06:56:22 crc kubenswrapper[5017]: I0129 06:56:22.940578 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 29 06:56:22 crc kubenswrapper[5017]: I0129 06:56:22.944042 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 06:56:23 crc kubenswrapper[5017]: I0129 06:56:23.069716 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 29 06:56:23 crc kubenswrapper[5017]: I0129 06:56:23.088873 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 29 06:56:23 crc kubenswrapper[5017]: I0129 06:56:23.104297 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3131ebc7-0955-4d4d-8444-057df1cc52f1-scripts\") pod \"ceilometer-0\" (UID: \"3131ebc7-0955-4d4d-8444-057df1cc52f1\") " pod="openstack/ceilometer-0" Jan 29 06:56:23 crc kubenswrapper[5017]: I0129 06:56:23.104372 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3131ebc7-0955-4d4d-8444-057df1cc52f1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3131ebc7-0955-4d4d-8444-057df1cc52f1\") " pod="openstack/ceilometer-0" Jan 29 06:56:23 crc kubenswrapper[5017]: I0129 06:56:23.104400 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3131ebc7-0955-4d4d-8444-057df1cc52f1-log-httpd\") pod \"ceilometer-0\" (UID: \"3131ebc7-0955-4d4d-8444-057df1cc52f1\") " pod="openstack/ceilometer-0" Jan 29 06:56:23 crc kubenswrapper[5017]: I0129 06:56:23.104428 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3131ebc7-0955-4d4d-8444-057df1cc52f1-run-httpd\") pod \"ceilometer-0\" (UID: \"3131ebc7-0955-4d4d-8444-057df1cc52f1\") " pod="openstack/ceilometer-0" Jan 29 06:56:23 crc kubenswrapper[5017]: I0129 06:56:23.104478 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rr74p\" (UniqueName: \"kubernetes.io/projected/3131ebc7-0955-4d4d-8444-057df1cc52f1-kube-api-access-rr74p\") pod \"ceilometer-0\" (UID: \"3131ebc7-0955-4d4d-8444-057df1cc52f1\") " pod="openstack/ceilometer-0" Jan 29 06:56:23 crc kubenswrapper[5017]: I0129 06:56:23.104517 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3131ebc7-0955-4d4d-8444-057df1cc52f1-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"3131ebc7-0955-4d4d-8444-057df1cc52f1\") " pod="openstack/ceilometer-0" Jan 29 06:56:23 crc kubenswrapper[5017]: I0129 06:56:23.104573 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/3131ebc7-0955-4d4d-8444-057df1cc52f1-config-data\") pod \"ceilometer-0\" (UID: \"3131ebc7-0955-4d4d-8444-057df1cc52f1\") " pod="openstack/ceilometer-0" Jan 29 06:56:23 crc kubenswrapper[5017]: I0129 06:56:23.104603 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3131ebc7-0955-4d4d-8444-057df1cc52f1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3131ebc7-0955-4d4d-8444-057df1cc52f1\") " pod="openstack/ceilometer-0" Jan 29 06:56:23 crc kubenswrapper[5017]: I0129 06:56:23.207003 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3131ebc7-0955-4d4d-8444-057df1cc52f1-run-httpd\") pod \"ceilometer-0\" (UID: \"3131ebc7-0955-4d4d-8444-057df1cc52f1\") " pod="openstack/ceilometer-0" Jan 29 06:56:23 crc kubenswrapper[5017]: I0129 06:56:23.207119 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rr74p\" (UniqueName: \"kubernetes.io/projected/3131ebc7-0955-4d4d-8444-057df1cc52f1-kube-api-access-rr74p\") pod \"ceilometer-0\" (UID: \"3131ebc7-0955-4d4d-8444-057df1cc52f1\") " pod="openstack/ceilometer-0" Jan 29 06:56:23 crc kubenswrapper[5017]: I0129 06:56:23.207186 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3131ebc7-0955-4d4d-8444-057df1cc52f1-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"3131ebc7-0955-4d4d-8444-057df1cc52f1\") " pod="openstack/ceilometer-0" Jan 29 06:56:23 crc kubenswrapper[5017]: I0129 06:56:23.207271 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3131ebc7-0955-4d4d-8444-057df1cc52f1-config-data\") pod \"ceilometer-0\" (UID: \"3131ebc7-0955-4d4d-8444-057df1cc52f1\") " pod="openstack/ceilometer-0" Jan 29 06:56:23 crc kubenswrapper[5017]: I0129 06:56:23.207319 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3131ebc7-0955-4d4d-8444-057df1cc52f1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3131ebc7-0955-4d4d-8444-057df1cc52f1\") " pod="openstack/ceilometer-0" Jan 29 06:56:23 crc kubenswrapper[5017]: I0129 06:56:23.207374 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3131ebc7-0955-4d4d-8444-057df1cc52f1-scripts\") pod \"ceilometer-0\" (UID: \"3131ebc7-0955-4d4d-8444-057df1cc52f1\") " pod="openstack/ceilometer-0" Jan 29 06:56:23 crc kubenswrapper[5017]: I0129 06:56:23.207426 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3131ebc7-0955-4d4d-8444-057df1cc52f1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3131ebc7-0955-4d4d-8444-057df1cc52f1\") " pod="openstack/ceilometer-0" Jan 29 06:56:23 crc kubenswrapper[5017]: I0129 06:56:23.207455 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3131ebc7-0955-4d4d-8444-057df1cc52f1-log-httpd\") pod \"ceilometer-0\" (UID: \"3131ebc7-0955-4d4d-8444-057df1cc52f1\") " pod="openstack/ceilometer-0" Jan 29 06:56:23 crc kubenswrapper[5017]: I0129 06:56:23.208924 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/3131ebc7-0955-4d4d-8444-057df1cc52f1-log-httpd\") pod \"ceilometer-0\" (UID: \"3131ebc7-0955-4d4d-8444-057df1cc52f1\") " pod="openstack/ceilometer-0" Jan 29 06:56:23 crc kubenswrapper[5017]: I0129 06:56:23.209258 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3131ebc7-0955-4d4d-8444-057df1cc52f1-run-httpd\") pod \"ceilometer-0\" (UID: \"3131ebc7-0955-4d4d-8444-057df1cc52f1\") " pod="openstack/ceilometer-0" Jan 29 06:56:23 crc kubenswrapper[5017]: I0129 06:56:23.214664 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3131ebc7-0955-4d4d-8444-057df1cc52f1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3131ebc7-0955-4d4d-8444-057df1cc52f1\") " pod="openstack/ceilometer-0" Jan 29 06:56:23 crc kubenswrapper[5017]: I0129 06:56:23.215218 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3131ebc7-0955-4d4d-8444-057df1cc52f1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3131ebc7-0955-4d4d-8444-057df1cc52f1\") " pod="openstack/ceilometer-0" Jan 29 06:56:23 crc kubenswrapper[5017]: I0129 06:56:23.216153 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3131ebc7-0955-4d4d-8444-057df1cc52f1-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"3131ebc7-0955-4d4d-8444-057df1cc52f1\") " pod="openstack/ceilometer-0" Jan 29 06:56:23 crc kubenswrapper[5017]: I0129 06:56:23.218304 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3131ebc7-0955-4d4d-8444-057df1cc52f1-config-data\") pod \"ceilometer-0\" (UID: \"3131ebc7-0955-4d4d-8444-057df1cc52f1\") " pod="openstack/ceilometer-0" Jan 29 06:56:23 crc kubenswrapper[5017]: I0129 06:56:23.223300 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3131ebc7-0955-4d4d-8444-057df1cc52f1-scripts\") pod \"ceilometer-0\" (UID: \"3131ebc7-0955-4d4d-8444-057df1cc52f1\") " pod="openstack/ceilometer-0" Jan 29 06:56:23 crc kubenswrapper[5017]: I0129 06:56:23.226489 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rr74p\" (UniqueName: \"kubernetes.io/projected/3131ebc7-0955-4d4d-8444-057df1cc52f1-kube-api-access-rr74p\") pod \"ceilometer-0\" (UID: \"3131ebc7-0955-4d4d-8444-057df1cc52f1\") " pod="openstack/ceilometer-0" Jan 29 06:56:23 crc kubenswrapper[5017]: I0129 06:56:23.353910 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 06:56:23 crc kubenswrapper[5017]: I0129 06:56:23.374118 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 06:56:23 crc kubenswrapper[5017]: I0129 06:56:23.516332 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wpk9m\" (UniqueName: \"kubernetes.io/projected/dca84366-8a11-423d-be6c-cf6bc5f5571a-kube-api-access-wpk9m\") pod \"dca84366-8a11-423d-be6c-cf6bc5f5571a\" (UID: \"dca84366-8a11-423d-be6c-cf6bc5f5571a\") " Jan 29 06:56:23 crc kubenswrapper[5017]: I0129 06:56:23.516864 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dca84366-8a11-423d-be6c-cf6bc5f5571a-logs\") pod \"dca84366-8a11-423d-be6c-cf6bc5f5571a\" (UID: \"dca84366-8a11-423d-be6c-cf6bc5f5571a\") " Jan 29 06:56:23 crc kubenswrapper[5017]: I0129 06:56:23.516982 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dca84366-8a11-423d-be6c-cf6bc5f5571a-combined-ca-bundle\") pod \"dca84366-8a11-423d-be6c-cf6bc5f5571a\" (UID: \"dca84366-8a11-423d-be6c-cf6bc5f5571a\") " Jan 29 06:56:23 crc kubenswrapper[5017]: I0129 06:56:23.517090 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dca84366-8a11-423d-be6c-cf6bc5f5571a-config-data\") pod \"dca84366-8a11-423d-be6c-cf6bc5f5571a\" (UID: \"dca84366-8a11-423d-be6c-cf6bc5f5571a\") " Jan 29 06:56:23 crc kubenswrapper[5017]: I0129 06:56:23.517854 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dca84366-8a11-423d-be6c-cf6bc5f5571a-logs" (OuterVolumeSpecName: "logs") pod "dca84366-8a11-423d-be6c-cf6bc5f5571a" (UID: "dca84366-8a11-423d-be6c-cf6bc5f5571a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:56:23 crc kubenswrapper[5017]: I0129 06:56:23.521945 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dca84366-8a11-423d-be6c-cf6bc5f5571a-kube-api-access-wpk9m" (OuterVolumeSpecName: "kube-api-access-wpk9m") pod "dca84366-8a11-423d-be6c-cf6bc5f5571a" (UID: "dca84366-8a11-423d-be6c-cf6bc5f5571a"). InnerVolumeSpecName "kube-api-access-wpk9m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:56:23 crc kubenswrapper[5017]: I0129 06:56:23.566123 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dca84366-8a11-423d-be6c-cf6bc5f5571a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dca84366-8a11-423d-be6c-cf6bc5f5571a" (UID: "dca84366-8a11-423d-be6c-cf6bc5f5571a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:56:23 crc kubenswrapper[5017]: I0129 06:56:23.566606 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dca84366-8a11-423d-be6c-cf6bc5f5571a-config-data" (OuterVolumeSpecName: "config-data") pod "dca84366-8a11-423d-be6c-cf6bc5f5571a" (UID: "dca84366-8a11-423d-be6c-cf6bc5f5571a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:56:23 crc kubenswrapper[5017]: I0129 06:56:23.621923 5017 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dca84366-8a11-423d-be6c-cf6bc5f5571a-logs\") on node \"crc\" DevicePath \"\"" Jan 29 06:56:23 crc kubenswrapper[5017]: I0129 06:56:23.622040 5017 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dca84366-8a11-423d-be6c-cf6bc5f5571a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 06:56:23 crc kubenswrapper[5017]: I0129 06:56:23.622061 5017 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dca84366-8a11-423d-be6c-cf6bc5f5571a-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 06:56:23 crc kubenswrapper[5017]: I0129 06:56:23.622073 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wpk9m\" (UniqueName: \"kubernetes.io/projected/dca84366-8a11-423d-be6c-cf6bc5f5571a-kube-api-access-wpk9m\") on node \"crc\" DevicePath \"\"" Jan 29 06:56:23 crc kubenswrapper[5017]: I0129 06:56:23.823028 5017 generic.go:334] "Generic (PLEG): container finished" podID="dca84366-8a11-423d-be6c-cf6bc5f5571a" containerID="053fbc3194f0c06033de2afa851811a4a8eb68b697a24714956a72f8da23468b" exitCode=0 Jan 29 06:56:23 crc kubenswrapper[5017]: I0129 06:56:23.823115 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 06:56:23 crc kubenswrapper[5017]: I0129 06:56:23.823165 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"dca84366-8a11-423d-be6c-cf6bc5f5571a","Type":"ContainerDied","Data":"053fbc3194f0c06033de2afa851811a4a8eb68b697a24714956a72f8da23468b"} Jan 29 06:56:23 crc kubenswrapper[5017]: I0129 06:56:23.823376 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"dca84366-8a11-423d-be6c-cf6bc5f5571a","Type":"ContainerDied","Data":"c6243ef1ab891074d23f34fa1ae4f1ca8f14734527521a5b7a6d1f8e6a1f1f8e"} Jan 29 06:56:23 crc kubenswrapper[5017]: I0129 06:56:23.823406 5017 scope.go:117] "RemoveContainer" containerID="053fbc3194f0c06033de2afa851811a4a8eb68b697a24714956a72f8da23468b" Jan 29 06:56:23 crc kubenswrapper[5017]: I0129 06:56:23.859008 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 29 06:56:23 crc kubenswrapper[5017]: I0129 06:56:23.877950 5017 scope.go:117] "RemoveContainer" containerID="c8de210068c78e26e4e9a016d3de2f0bc2fc2ea45d8c2f92ec62acf8c9b594f8" Jan 29 06:56:23 crc kubenswrapper[5017]: I0129 06:56:23.889041 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 29 06:56:23 crc kubenswrapper[5017]: I0129 06:56:23.901401 5017 scope.go:117] "RemoveContainer" containerID="053fbc3194f0c06033de2afa851811a4a8eb68b697a24714956a72f8da23468b" Jan 29 06:56:23 crc kubenswrapper[5017]: E0129 06:56:23.903922 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"053fbc3194f0c06033de2afa851811a4a8eb68b697a24714956a72f8da23468b\": container with ID starting with 053fbc3194f0c06033de2afa851811a4a8eb68b697a24714956a72f8da23468b not found: ID does not exist" containerID="053fbc3194f0c06033de2afa851811a4a8eb68b697a24714956a72f8da23468b" Jan 29 06:56:23 crc kubenswrapper[5017]: I0129 06:56:23.904069 5017 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"053fbc3194f0c06033de2afa851811a4a8eb68b697a24714956a72f8da23468b"} err="failed to get container status \"053fbc3194f0c06033de2afa851811a4a8eb68b697a24714956a72f8da23468b\": rpc error: code = NotFound desc = could not find container \"053fbc3194f0c06033de2afa851811a4a8eb68b697a24714956a72f8da23468b\": container with ID starting with 053fbc3194f0c06033de2afa851811a4a8eb68b697a24714956a72f8da23468b not found: ID does not exist" Jan 29 06:56:23 crc kubenswrapper[5017]: I0129 06:56:23.904171 5017 scope.go:117] "RemoveContainer" containerID="c8de210068c78e26e4e9a016d3de2f0bc2fc2ea45d8c2f92ec62acf8c9b594f8" Jan 29 06:56:23 crc kubenswrapper[5017]: E0129 06:56:23.904668 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8de210068c78e26e4e9a016d3de2f0bc2fc2ea45d8c2f92ec62acf8c9b594f8\": container with ID starting with c8de210068c78e26e4e9a016d3de2f0bc2fc2ea45d8c2f92ec62acf8c9b594f8 not found: ID does not exist" containerID="c8de210068c78e26e4e9a016d3de2f0bc2fc2ea45d8c2f92ec62acf8c9b594f8" Jan 29 06:56:23 crc kubenswrapper[5017]: I0129 06:56:23.904716 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8de210068c78e26e4e9a016d3de2f0bc2fc2ea45d8c2f92ec62acf8c9b594f8"} err="failed to get container status \"c8de210068c78e26e4e9a016d3de2f0bc2fc2ea45d8c2f92ec62acf8c9b594f8\": rpc error: code = NotFound desc = could not find container \"c8de210068c78e26e4e9a016d3de2f0bc2fc2ea45d8c2f92ec62acf8c9b594f8\": container with ID starting with c8de210068c78e26e4e9a016d3de2f0bc2fc2ea45d8c2f92ec62acf8c9b594f8 not found: ID does not exist" Jan 29 06:56:23 crc kubenswrapper[5017]: I0129 06:56:23.913104 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 29 06:56:23 crc kubenswrapper[5017]: I0129 06:56:23.928272 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 29 06:56:23 crc kubenswrapper[5017]: E0129 06:56:23.928779 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dca84366-8a11-423d-be6c-cf6bc5f5571a" containerName="nova-api-log" Jan 29 06:56:23 crc kubenswrapper[5017]: I0129 06:56:23.928797 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="dca84366-8a11-423d-be6c-cf6bc5f5571a" containerName="nova-api-log" Jan 29 06:56:23 crc kubenswrapper[5017]: E0129 06:56:23.928829 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dca84366-8a11-423d-be6c-cf6bc5f5571a" containerName="nova-api-api" Jan 29 06:56:23 crc kubenswrapper[5017]: I0129 06:56:23.928835 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="dca84366-8a11-423d-be6c-cf6bc5f5571a" containerName="nova-api-api" Jan 29 06:56:23 crc kubenswrapper[5017]: I0129 06:56:23.929043 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="dca84366-8a11-423d-be6c-cf6bc5f5571a" containerName="nova-api-log" Jan 29 06:56:23 crc kubenswrapper[5017]: I0129 06:56:23.929059 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="dca84366-8a11-423d-be6c-cf6bc5f5571a" containerName="nova-api-api" Jan 29 06:56:23 crc kubenswrapper[5017]: I0129 06:56:23.930229 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 29 06:56:23 crc kubenswrapper[5017]: I0129 06:56:23.940756 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 29 06:56:23 crc kubenswrapper[5017]: I0129 06:56:23.941207 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 29 06:56:23 crc kubenswrapper[5017]: I0129 06:56:23.941379 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 29 06:56:23 crc kubenswrapper[5017]: I0129 06:56:23.966375 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 29 06:56:23 crc kubenswrapper[5017]: I0129 06:56:23.977384 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 06:56:24 crc kubenswrapper[5017]: I0129 06:56:24.030063 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f983cf5e-8dae-462e-a0ba-719cf9ae229a-logs\") pod \"nova-api-0\" (UID: \"f983cf5e-8dae-462e-a0ba-719cf9ae229a\") " pod="openstack/nova-api-0" Jan 29 06:56:24 crc kubenswrapper[5017]: I0129 06:56:24.030119 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f983cf5e-8dae-462e-a0ba-719cf9ae229a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f983cf5e-8dae-462e-a0ba-719cf9ae229a\") " pod="openstack/nova-api-0" Jan 29 06:56:24 crc kubenswrapper[5017]: I0129 06:56:24.030198 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f983cf5e-8dae-462e-a0ba-719cf9ae229a-public-tls-certs\") pod \"nova-api-0\" (UID: \"f983cf5e-8dae-462e-a0ba-719cf9ae229a\") " pod="openstack/nova-api-0" Jan 29 06:56:24 crc kubenswrapper[5017]: I0129 06:56:24.030313 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxsfv\" (UniqueName: \"kubernetes.io/projected/f983cf5e-8dae-462e-a0ba-719cf9ae229a-kube-api-access-zxsfv\") pod \"nova-api-0\" (UID: \"f983cf5e-8dae-462e-a0ba-719cf9ae229a\") " pod="openstack/nova-api-0" Jan 29 06:56:24 crc kubenswrapper[5017]: I0129 06:56:24.030342 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f983cf5e-8dae-462e-a0ba-719cf9ae229a-internal-tls-certs\") pod \"nova-api-0\" (UID: \"f983cf5e-8dae-462e-a0ba-719cf9ae229a\") " pod="openstack/nova-api-0" Jan 29 06:56:24 crc kubenswrapper[5017]: I0129 06:56:24.030369 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f983cf5e-8dae-462e-a0ba-719cf9ae229a-config-data\") pod \"nova-api-0\" (UID: \"f983cf5e-8dae-462e-a0ba-719cf9ae229a\") " pod="openstack/nova-api-0" Jan 29 06:56:24 crc kubenswrapper[5017]: I0129 06:56:24.131991 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxsfv\" (UniqueName: \"kubernetes.io/projected/f983cf5e-8dae-462e-a0ba-719cf9ae229a-kube-api-access-zxsfv\") pod \"nova-api-0\" (UID: \"f983cf5e-8dae-462e-a0ba-719cf9ae229a\") " pod="openstack/nova-api-0" Jan 29 06:56:24 crc kubenswrapper[5017]: I0129 06:56:24.132041 5017 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f983cf5e-8dae-462e-a0ba-719cf9ae229a-internal-tls-certs\") pod \"nova-api-0\" (UID: \"f983cf5e-8dae-462e-a0ba-719cf9ae229a\") " pod="openstack/nova-api-0" Jan 29 06:56:24 crc kubenswrapper[5017]: I0129 06:56:24.132067 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f983cf5e-8dae-462e-a0ba-719cf9ae229a-config-data\") pod \"nova-api-0\" (UID: \"f983cf5e-8dae-462e-a0ba-719cf9ae229a\") " pod="openstack/nova-api-0" Jan 29 06:56:24 crc kubenswrapper[5017]: I0129 06:56:24.132115 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f983cf5e-8dae-462e-a0ba-719cf9ae229a-logs\") pod \"nova-api-0\" (UID: \"f983cf5e-8dae-462e-a0ba-719cf9ae229a\") " pod="openstack/nova-api-0" Jan 29 06:56:24 crc kubenswrapper[5017]: I0129 06:56:24.132141 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f983cf5e-8dae-462e-a0ba-719cf9ae229a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f983cf5e-8dae-462e-a0ba-719cf9ae229a\") " pod="openstack/nova-api-0" Jan 29 06:56:24 crc kubenswrapper[5017]: I0129 06:56:24.132223 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f983cf5e-8dae-462e-a0ba-719cf9ae229a-public-tls-certs\") pod \"nova-api-0\" (UID: \"f983cf5e-8dae-462e-a0ba-719cf9ae229a\") " pod="openstack/nova-api-0" Jan 29 06:56:24 crc kubenswrapper[5017]: I0129 06:56:24.133470 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f983cf5e-8dae-462e-a0ba-719cf9ae229a-logs\") pod \"nova-api-0\" (UID: \"f983cf5e-8dae-462e-a0ba-719cf9ae229a\") " pod="openstack/nova-api-0" Jan 29 06:56:24 crc kubenswrapper[5017]: I0129 06:56:24.140468 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f983cf5e-8dae-462e-a0ba-719cf9ae229a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f983cf5e-8dae-462e-a0ba-719cf9ae229a\") " pod="openstack/nova-api-0" Jan 29 06:56:24 crc kubenswrapper[5017]: I0129 06:56:24.141924 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f983cf5e-8dae-462e-a0ba-719cf9ae229a-config-data\") pod \"nova-api-0\" (UID: \"f983cf5e-8dae-462e-a0ba-719cf9ae229a\") " pod="openstack/nova-api-0" Jan 29 06:56:24 crc kubenswrapper[5017]: I0129 06:56:24.142763 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f983cf5e-8dae-462e-a0ba-719cf9ae229a-internal-tls-certs\") pod \"nova-api-0\" (UID: \"f983cf5e-8dae-462e-a0ba-719cf9ae229a\") " pod="openstack/nova-api-0" Jan 29 06:56:24 crc kubenswrapper[5017]: I0129 06:56:24.150020 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-s5ffm"] Jan 29 06:56:24 crc kubenswrapper[5017]: I0129 06:56:24.151990 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-s5ffm" Jan 29 06:56:24 crc kubenswrapper[5017]: I0129 06:56:24.155690 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 29 06:56:24 crc kubenswrapper[5017]: I0129 06:56:24.155872 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 29 06:56:24 crc kubenswrapper[5017]: I0129 06:56:24.156089 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f983cf5e-8dae-462e-a0ba-719cf9ae229a-public-tls-certs\") pod \"nova-api-0\" (UID: \"f983cf5e-8dae-462e-a0ba-719cf9ae229a\") " pod="openstack/nova-api-0" Jan 29 06:56:24 crc kubenswrapper[5017]: I0129 06:56:24.156693 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxsfv\" (UniqueName: \"kubernetes.io/projected/f983cf5e-8dae-462e-a0ba-719cf9ae229a-kube-api-access-zxsfv\") pod \"nova-api-0\" (UID: \"f983cf5e-8dae-462e-a0ba-719cf9ae229a\") " pod="openstack/nova-api-0" Jan 29 06:56:24 crc kubenswrapper[5017]: I0129 06:56:24.179111 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-s5ffm"] Jan 29 06:56:24 crc kubenswrapper[5017]: I0129 06:56:24.294430 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 06:56:24 crc kubenswrapper[5017]: I0129 06:56:24.334920 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dca84366-8a11-423d-be6c-cf6bc5f5571a" path="/var/lib/kubelet/pods/dca84366-8a11-423d-be6c-cf6bc5f5571a/volumes" Jan 29 06:56:24 crc kubenswrapper[5017]: I0129 06:56:24.335680 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee1155cc-775a-4661-8660-d0a93d65d310" path="/var/lib/kubelet/pods/ee1155cc-775a-4661-8660-d0a93d65d310/volumes" Jan 29 06:56:24 crc kubenswrapper[5017]: I0129 06:56:24.338228 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cccdb9f-b531-4534-8adc-6a64d16dd3fe-config-data\") pod \"nova-cell1-cell-mapping-s5ffm\" (UID: \"8cccdb9f-b531-4534-8adc-6a64d16dd3fe\") " pod="openstack/nova-cell1-cell-mapping-s5ffm" Jan 29 06:56:24 crc kubenswrapper[5017]: I0129 06:56:24.338604 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cccdb9f-b531-4534-8adc-6a64d16dd3fe-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-s5ffm\" (UID: \"8cccdb9f-b531-4534-8adc-6a64d16dd3fe\") " pod="openstack/nova-cell1-cell-mapping-s5ffm" Jan 29 06:56:24 crc kubenswrapper[5017]: I0129 06:56:24.338662 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8cccdb9f-b531-4534-8adc-6a64d16dd3fe-scripts\") pod \"nova-cell1-cell-mapping-s5ffm\" (UID: \"8cccdb9f-b531-4534-8adc-6a64d16dd3fe\") " pod="openstack/nova-cell1-cell-mapping-s5ffm" Jan 29 06:56:24 crc kubenswrapper[5017]: I0129 06:56:24.338686 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwdsd\" (UniqueName: \"kubernetes.io/projected/8cccdb9f-b531-4534-8adc-6a64d16dd3fe-kube-api-access-lwdsd\") pod \"nova-cell1-cell-mapping-s5ffm\" (UID: \"8cccdb9f-b531-4534-8adc-6a64d16dd3fe\") " 
pod="openstack/nova-cell1-cell-mapping-s5ffm" Jan 29 06:56:24 crc kubenswrapper[5017]: I0129 06:56:24.441479 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cccdb9f-b531-4534-8adc-6a64d16dd3fe-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-s5ffm\" (UID: \"8cccdb9f-b531-4534-8adc-6a64d16dd3fe\") " pod="openstack/nova-cell1-cell-mapping-s5ffm" Jan 29 06:56:24 crc kubenswrapper[5017]: I0129 06:56:24.442163 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8cccdb9f-b531-4534-8adc-6a64d16dd3fe-scripts\") pod \"nova-cell1-cell-mapping-s5ffm\" (UID: \"8cccdb9f-b531-4534-8adc-6a64d16dd3fe\") " pod="openstack/nova-cell1-cell-mapping-s5ffm" Jan 29 06:56:24 crc kubenswrapper[5017]: I0129 06:56:24.442218 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwdsd\" (UniqueName: \"kubernetes.io/projected/8cccdb9f-b531-4534-8adc-6a64d16dd3fe-kube-api-access-lwdsd\") pod \"nova-cell1-cell-mapping-s5ffm\" (UID: \"8cccdb9f-b531-4534-8adc-6a64d16dd3fe\") " pod="openstack/nova-cell1-cell-mapping-s5ffm" Jan 29 06:56:24 crc kubenswrapper[5017]: I0129 06:56:24.446613 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cccdb9f-b531-4534-8adc-6a64d16dd3fe-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-s5ffm\" (UID: \"8cccdb9f-b531-4534-8adc-6a64d16dd3fe\") " pod="openstack/nova-cell1-cell-mapping-s5ffm" Jan 29 06:56:24 crc kubenswrapper[5017]: I0129 06:56:24.447752 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cccdb9f-b531-4534-8adc-6a64d16dd3fe-config-data\") pod \"nova-cell1-cell-mapping-s5ffm\" (UID: \"8cccdb9f-b531-4534-8adc-6a64d16dd3fe\") " pod="openstack/nova-cell1-cell-mapping-s5ffm" Jan 29 06:56:24 crc kubenswrapper[5017]: I0129 06:56:24.450327 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8cccdb9f-b531-4534-8adc-6a64d16dd3fe-scripts\") pod \"nova-cell1-cell-mapping-s5ffm\" (UID: \"8cccdb9f-b531-4534-8adc-6a64d16dd3fe\") " pod="openstack/nova-cell1-cell-mapping-s5ffm" Jan 29 06:56:24 crc kubenswrapper[5017]: I0129 06:56:24.454009 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cccdb9f-b531-4534-8adc-6a64d16dd3fe-config-data\") pod \"nova-cell1-cell-mapping-s5ffm\" (UID: \"8cccdb9f-b531-4534-8adc-6a64d16dd3fe\") " pod="openstack/nova-cell1-cell-mapping-s5ffm" Jan 29 06:56:24 crc kubenswrapper[5017]: I0129 06:56:24.469734 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwdsd\" (UniqueName: \"kubernetes.io/projected/8cccdb9f-b531-4534-8adc-6a64d16dd3fe-kube-api-access-lwdsd\") pod \"nova-cell1-cell-mapping-s5ffm\" (UID: \"8cccdb9f-b531-4534-8adc-6a64d16dd3fe\") " pod="openstack/nova-cell1-cell-mapping-s5ffm" Jan 29 06:56:24 crc kubenswrapper[5017]: I0129 06:56:24.573693 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-s5ffm" Jan 29 06:56:24 crc kubenswrapper[5017]: I0129 06:56:24.842062 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 29 06:56:24 crc kubenswrapper[5017]: I0129 06:56:24.846164 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3131ebc7-0955-4d4d-8444-057df1cc52f1","Type":"ContainerStarted","Data":"527c5a9750ed60502bff4d26ac1e68687da3387f7cd2976f75db5bc9ef3b8d4f"} Jan 29 06:56:24 crc kubenswrapper[5017]: I0129 06:56:24.846225 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3131ebc7-0955-4d4d-8444-057df1cc52f1","Type":"ContainerStarted","Data":"ead5beca6ebc7ac46fa200b641bfd0dcaf21d1f5f74978a5b1d66aa82190d1b9"} Jan 29 06:56:24 crc kubenswrapper[5017]: W0129 06:56:24.851843 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf983cf5e_8dae_462e_a0ba_719cf9ae229a.slice/crio-baa58309ecead7c4d6fae1c0c6795b98356242572f128485e656b756d078f730 WatchSource:0}: Error finding container baa58309ecead7c4d6fae1c0c6795b98356242572f128485e656b756d078f730: Status 404 returned error can't find the container with id baa58309ecead7c4d6fae1c0c6795b98356242572f128485e656b756d078f730 Jan 29 06:56:25 crc kubenswrapper[5017]: I0129 06:56:25.107381 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-s5ffm"] Jan 29 06:56:25 crc kubenswrapper[5017]: W0129 06:56:25.123765 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8cccdb9f_b531_4534_8adc_6a64d16dd3fe.slice/crio-f12b3c8156045ff9e56c3fbefd7090b795cd6b5eaf8a095bbb8363e07b980f95 WatchSource:0}: Error finding container f12b3c8156045ff9e56c3fbefd7090b795cd6b5eaf8a095bbb8363e07b980f95: Status 404 returned error can't find the container with id f12b3c8156045ff9e56c3fbefd7090b795cd6b5eaf8a095bbb8363e07b980f95 Jan 29 06:56:25 crc kubenswrapper[5017]: I0129 06:56:25.861814 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f983cf5e-8dae-462e-a0ba-719cf9ae229a","Type":"ContainerStarted","Data":"55e8e4d128f915860c96e11349c0680492ba3bdf216b37fc306c922540141700"} Jan 29 06:56:25 crc kubenswrapper[5017]: I0129 06:56:25.862217 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f983cf5e-8dae-462e-a0ba-719cf9ae229a","Type":"ContainerStarted","Data":"bd9480fc01f00485f82bb9b58fd8e9b5559c5ceaa2b5270be08f486ecaf54b3c"} Jan 29 06:56:25 crc kubenswrapper[5017]: I0129 06:56:25.862233 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f983cf5e-8dae-462e-a0ba-719cf9ae229a","Type":"ContainerStarted","Data":"baa58309ecead7c4d6fae1c0c6795b98356242572f128485e656b756d078f730"} Jan 29 06:56:25 crc kubenswrapper[5017]: I0129 06:56:25.864426 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-s5ffm" event={"ID":"8cccdb9f-b531-4534-8adc-6a64d16dd3fe","Type":"ContainerStarted","Data":"dbd1a320619a408df31957da8f6a7e77196f94d06e34b7ba9e869e93506c7d1f"} Jan 29 06:56:25 crc kubenswrapper[5017]: I0129 06:56:25.864534 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-s5ffm" 
event={"ID":"8cccdb9f-b531-4534-8adc-6a64d16dd3fe","Type":"ContainerStarted","Data":"f12b3c8156045ff9e56c3fbefd7090b795cd6b5eaf8a095bbb8363e07b980f95"} Jan 29 06:56:25 crc kubenswrapper[5017]: I0129 06:56:25.920865 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.920841342 podStartE2EDuration="2.920841342s" podCreationTimestamp="2026-01-29 06:56:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:56:25.898656472 +0000 UTC m=+1272.273104092" watchObservedRunningTime="2026-01-29 06:56:25.920841342 +0000 UTC m=+1272.295288952" Jan 29 06:56:25 crc kubenswrapper[5017]: I0129 06:56:25.929439 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-s5ffm" podStartSLOduration=1.929399364 podStartE2EDuration="1.929399364s" podCreationTimestamp="2026-01-29 06:56:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:56:25.91754714 +0000 UTC m=+1272.291994750" watchObservedRunningTime="2026-01-29 06:56:25.929399364 +0000 UTC m=+1272.303846974" Jan 29 06:56:26 crc kubenswrapper[5017]: I0129 06:56:26.884212 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3131ebc7-0955-4d4d-8444-057df1cc52f1","Type":"ContainerStarted","Data":"75cec19a1205e625b426f312c7dda16e9246d7c67ccd89220fd9ecea834f58f2"} Jan 29 06:56:27 crc kubenswrapper[5017]: I0129 06:56:27.266264 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-fcd6f8f8f-cxtdk" Jan 29 06:56:27 crc kubenswrapper[5017]: I0129 06:56:27.365647 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-647df7b8c5-mxw5x"] Jan 29 06:56:27 crc kubenswrapper[5017]: I0129 06:56:27.369536 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-647df7b8c5-mxw5x" podUID="ae1ed6fd-f44a-4c4a-a573-9ee16b9181f5" containerName="dnsmasq-dns" containerID="cri-o://f85c4459d8f828738428873cf981d75378869b406a3bc14f9884b979c0dfee7a" gracePeriod=10 Jan 29 06:56:27 crc kubenswrapper[5017]: I0129 06:56:27.584205 5017 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-647df7b8c5-mxw5x" podUID="ae1ed6fd-f44a-4c4a-a573-9ee16b9181f5" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.189:5353: connect: connection refused" Jan 29 06:56:27 crc kubenswrapper[5017]: I0129 06:56:27.933284 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3131ebc7-0955-4d4d-8444-057df1cc52f1","Type":"ContainerStarted","Data":"be24094753ade4791793ba42b3d49712012104ef86a88d92d47df03f0450fcb4"} Jan 29 06:56:27 crc kubenswrapper[5017]: I0129 06:56:27.936176 5017 generic.go:334] "Generic (PLEG): container finished" podID="ae1ed6fd-f44a-4c4a-a573-9ee16b9181f5" containerID="f85c4459d8f828738428873cf981d75378869b406a3bc14f9884b979c0dfee7a" exitCode=0 Jan 29 06:56:27 crc kubenswrapper[5017]: I0129 06:56:27.936238 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-647df7b8c5-mxw5x" event={"ID":"ae1ed6fd-f44a-4c4a-a573-9ee16b9181f5","Type":"ContainerDied","Data":"f85c4459d8f828738428873cf981d75378869b406a3bc14f9884b979c0dfee7a"} Jan 29 06:56:27 crc kubenswrapper[5017]: I0129 06:56:27.936276 5017 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-647df7b8c5-mxw5x" event={"ID":"ae1ed6fd-f44a-4c4a-a573-9ee16b9181f5","Type":"ContainerDied","Data":"084edd2428805a9da0c644dd23701ca3f851eced4bc2784001abfe44ccb9b0d9"} Jan 29 06:56:27 crc kubenswrapper[5017]: I0129 06:56:27.936293 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="084edd2428805a9da0c644dd23701ca3f851eced4bc2784001abfe44ccb9b0d9" Jan 29 06:56:27 crc kubenswrapper[5017]: I0129 06:56:27.975833 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-647df7b8c5-mxw5x" Jan 29 06:56:28 crc kubenswrapper[5017]: I0129 06:56:28.139498 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae1ed6fd-f44a-4c4a-a573-9ee16b9181f5-config\") pod \"ae1ed6fd-f44a-4c4a-a573-9ee16b9181f5\" (UID: \"ae1ed6fd-f44a-4c4a-a573-9ee16b9181f5\") " Jan 29 06:56:28 crc kubenswrapper[5017]: I0129 06:56:28.141228 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ae1ed6fd-f44a-4c4a-a573-9ee16b9181f5-ovsdbserver-nb\") pod \"ae1ed6fd-f44a-4c4a-a573-9ee16b9181f5\" (UID: \"ae1ed6fd-f44a-4c4a-a573-9ee16b9181f5\") " Jan 29 06:56:28 crc kubenswrapper[5017]: I0129 06:56:28.141408 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ae1ed6fd-f44a-4c4a-a573-9ee16b9181f5-ovsdbserver-sb\") pod \"ae1ed6fd-f44a-4c4a-a573-9ee16b9181f5\" (UID: \"ae1ed6fd-f44a-4c4a-a573-9ee16b9181f5\") " Jan 29 06:56:28 crc kubenswrapper[5017]: I0129 06:56:28.141447 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ae1ed6fd-f44a-4c4a-a573-9ee16b9181f5-dns-svc\") pod \"ae1ed6fd-f44a-4c4a-a573-9ee16b9181f5\" (UID: \"ae1ed6fd-f44a-4c4a-a573-9ee16b9181f5\") " Jan 29 06:56:28 crc kubenswrapper[5017]: I0129 06:56:28.141507 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ae1ed6fd-f44a-4c4a-a573-9ee16b9181f5-dns-swift-storage-0\") pod \"ae1ed6fd-f44a-4c4a-a573-9ee16b9181f5\" (UID: \"ae1ed6fd-f44a-4c4a-a573-9ee16b9181f5\") " Jan 29 06:56:28 crc kubenswrapper[5017]: I0129 06:56:28.141636 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gcggq\" (UniqueName: \"kubernetes.io/projected/ae1ed6fd-f44a-4c4a-a573-9ee16b9181f5-kube-api-access-gcggq\") pod \"ae1ed6fd-f44a-4c4a-a573-9ee16b9181f5\" (UID: \"ae1ed6fd-f44a-4c4a-a573-9ee16b9181f5\") " Jan 29 06:56:28 crc kubenswrapper[5017]: I0129 06:56:28.148540 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae1ed6fd-f44a-4c4a-a573-9ee16b9181f5-kube-api-access-gcggq" (OuterVolumeSpecName: "kube-api-access-gcggq") pod "ae1ed6fd-f44a-4c4a-a573-9ee16b9181f5" (UID: "ae1ed6fd-f44a-4c4a-a573-9ee16b9181f5"). InnerVolumeSpecName "kube-api-access-gcggq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:56:28 crc kubenswrapper[5017]: I0129 06:56:28.198044 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae1ed6fd-f44a-4c4a-a573-9ee16b9181f5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ae1ed6fd-f44a-4c4a-a573-9ee16b9181f5" (UID: "ae1ed6fd-f44a-4c4a-a573-9ee16b9181f5"). 
InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:56:28 crc kubenswrapper[5017]: I0129 06:56:28.204742 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae1ed6fd-f44a-4c4a-a573-9ee16b9181f5-config" (OuterVolumeSpecName: "config") pod "ae1ed6fd-f44a-4c4a-a573-9ee16b9181f5" (UID: "ae1ed6fd-f44a-4c4a-a573-9ee16b9181f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:56:28 crc kubenswrapper[5017]: I0129 06:56:28.205004 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae1ed6fd-f44a-4c4a-a573-9ee16b9181f5-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ae1ed6fd-f44a-4c4a-a573-9ee16b9181f5" (UID: "ae1ed6fd-f44a-4c4a-a573-9ee16b9181f5"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:56:28 crc kubenswrapper[5017]: I0129 06:56:28.205207 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae1ed6fd-f44a-4c4a-a573-9ee16b9181f5-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ae1ed6fd-f44a-4c4a-a573-9ee16b9181f5" (UID: "ae1ed6fd-f44a-4c4a-a573-9ee16b9181f5"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:56:28 crc kubenswrapper[5017]: I0129 06:56:28.236826 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae1ed6fd-f44a-4c4a-a573-9ee16b9181f5-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ae1ed6fd-f44a-4c4a-a573-9ee16b9181f5" (UID: "ae1ed6fd-f44a-4c4a-a573-9ee16b9181f5"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:56:28 crc kubenswrapper[5017]: I0129 06:56:28.244356 5017 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae1ed6fd-f44a-4c4a-a573-9ee16b9181f5-config\") on node \"crc\" DevicePath \"\"" Jan 29 06:56:28 crc kubenswrapper[5017]: I0129 06:56:28.244394 5017 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ae1ed6fd-f44a-4c4a-a573-9ee16b9181f5-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 06:56:28 crc kubenswrapper[5017]: I0129 06:56:28.244407 5017 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ae1ed6fd-f44a-4c4a-a573-9ee16b9181f5-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 06:56:28 crc kubenswrapper[5017]: I0129 06:56:28.244415 5017 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ae1ed6fd-f44a-4c4a-a573-9ee16b9181f5-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 06:56:28 crc kubenswrapper[5017]: I0129 06:56:28.244424 5017 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ae1ed6fd-f44a-4c4a-a573-9ee16b9181f5-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 29 06:56:28 crc kubenswrapper[5017]: I0129 06:56:28.244434 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gcggq\" (UniqueName: \"kubernetes.io/projected/ae1ed6fd-f44a-4c4a-a573-9ee16b9181f5-kube-api-access-gcggq\") on node \"crc\" DevicePath \"\"" Jan 29 06:56:28 crc kubenswrapper[5017]: I0129 06:56:28.950456 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3131ebc7-0955-4d4d-8444-057df1cc52f1","Type":"ContainerStarted","Data":"61f4dffdd2cc2b445230967b2177b548f4408eb3732a47610acca18c5946b0d3"} Jan 29 06:56:28 crc kubenswrapper[5017]: I0129 06:56:28.951000 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 29 06:56:28 crc kubenswrapper[5017]: I0129 06:56:28.950548 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-647df7b8c5-mxw5x" Jan 29 06:56:28 crc kubenswrapper[5017]: I0129 06:56:28.998616 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.383783297 podStartE2EDuration="6.998587029s" podCreationTimestamp="2026-01-29 06:56:22 +0000 UTC" firstStartedPulling="2026-01-29 06:56:23.931000852 +0000 UTC m=+1270.305448462" lastFinishedPulling="2026-01-29 06:56:28.545804554 +0000 UTC m=+1274.920252194" observedRunningTime="2026-01-29 06:56:28.974674677 +0000 UTC m=+1275.349122297" watchObservedRunningTime="2026-01-29 06:56:28.998587029 +0000 UTC m=+1275.373034629" Jan 29 06:56:29 crc kubenswrapper[5017]: I0129 06:56:29.010519 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-647df7b8c5-mxw5x"] Jan 29 06:56:29 crc kubenswrapper[5017]: I0129 06:56:29.024274 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-647df7b8c5-mxw5x"] Jan 29 06:56:30 crc kubenswrapper[5017]: I0129 06:56:30.330075 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae1ed6fd-f44a-4c4a-a573-9ee16b9181f5" path="/var/lib/kubelet/pods/ae1ed6fd-f44a-4c4a-a573-9ee16b9181f5/volumes" Jan 29 06:56:30 crc kubenswrapper[5017]: I0129 06:56:30.979734 5017 generic.go:334] "Generic (PLEG): container finished" podID="8cccdb9f-b531-4534-8adc-6a64d16dd3fe" containerID="dbd1a320619a408df31957da8f6a7e77196f94d06e34b7ba9e869e93506c7d1f" exitCode=0 Jan 29 06:56:30 crc kubenswrapper[5017]: I0129 06:56:30.979801 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-s5ffm" event={"ID":"8cccdb9f-b531-4534-8adc-6a64d16dd3fe","Type":"ContainerDied","Data":"dbd1a320619a408df31957da8f6a7e77196f94d06e34b7ba9e869e93506c7d1f"} Jan 29 06:56:32 crc kubenswrapper[5017]: I0129 06:56:32.524130 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-s5ffm" Jan 29 06:56:32 crc kubenswrapper[5017]: I0129 06:56:32.713133 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cccdb9f-b531-4534-8adc-6a64d16dd3fe-config-data\") pod \"8cccdb9f-b531-4534-8adc-6a64d16dd3fe\" (UID: \"8cccdb9f-b531-4534-8adc-6a64d16dd3fe\") " Jan 29 06:56:32 crc kubenswrapper[5017]: I0129 06:56:32.713673 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8cccdb9f-b531-4534-8adc-6a64d16dd3fe-scripts\") pod \"8cccdb9f-b531-4534-8adc-6a64d16dd3fe\" (UID: \"8cccdb9f-b531-4534-8adc-6a64d16dd3fe\") " Jan 29 06:56:32 crc kubenswrapper[5017]: I0129 06:56:32.713746 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lwdsd\" (UniqueName: \"kubernetes.io/projected/8cccdb9f-b531-4534-8adc-6a64d16dd3fe-kube-api-access-lwdsd\") pod \"8cccdb9f-b531-4534-8adc-6a64d16dd3fe\" (UID: \"8cccdb9f-b531-4534-8adc-6a64d16dd3fe\") " Jan 29 06:56:32 crc kubenswrapper[5017]: I0129 06:56:32.714050 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cccdb9f-b531-4534-8adc-6a64d16dd3fe-combined-ca-bundle\") pod \"8cccdb9f-b531-4534-8adc-6a64d16dd3fe\" (UID: \"8cccdb9f-b531-4534-8adc-6a64d16dd3fe\") " Jan 29 06:56:32 crc kubenswrapper[5017]: I0129 06:56:32.721038 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cccdb9f-b531-4534-8adc-6a64d16dd3fe-scripts" (OuterVolumeSpecName: "scripts") pod "8cccdb9f-b531-4534-8adc-6a64d16dd3fe" (UID: "8cccdb9f-b531-4534-8adc-6a64d16dd3fe"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:56:32 crc kubenswrapper[5017]: I0129 06:56:32.721883 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cccdb9f-b531-4534-8adc-6a64d16dd3fe-kube-api-access-lwdsd" (OuterVolumeSpecName: "kube-api-access-lwdsd") pod "8cccdb9f-b531-4534-8adc-6a64d16dd3fe" (UID: "8cccdb9f-b531-4534-8adc-6a64d16dd3fe"). InnerVolumeSpecName "kube-api-access-lwdsd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:56:32 crc kubenswrapper[5017]: I0129 06:56:32.750304 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cccdb9f-b531-4534-8adc-6a64d16dd3fe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8cccdb9f-b531-4534-8adc-6a64d16dd3fe" (UID: "8cccdb9f-b531-4534-8adc-6a64d16dd3fe"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:56:32 crc kubenswrapper[5017]: I0129 06:56:32.751313 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cccdb9f-b531-4534-8adc-6a64d16dd3fe-config-data" (OuterVolumeSpecName: "config-data") pod "8cccdb9f-b531-4534-8adc-6a64d16dd3fe" (UID: "8cccdb9f-b531-4534-8adc-6a64d16dd3fe"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:56:32 crc kubenswrapper[5017]: I0129 06:56:32.815944 5017 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8cccdb9f-b531-4534-8adc-6a64d16dd3fe-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 06:56:32 crc kubenswrapper[5017]: I0129 06:56:32.815996 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lwdsd\" (UniqueName: \"kubernetes.io/projected/8cccdb9f-b531-4534-8adc-6a64d16dd3fe-kube-api-access-lwdsd\") on node \"crc\" DevicePath \"\"" Jan 29 06:56:32 crc kubenswrapper[5017]: I0129 06:56:32.816015 5017 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cccdb9f-b531-4534-8adc-6a64d16dd3fe-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 06:56:32 crc kubenswrapper[5017]: I0129 06:56:32.816025 5017 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cccdb9f-b531-4534-8adc-6a64d16dd3fe-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 06:56:33 crc kubenswrapper[5017]: I0129 06:56:33.005839 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-s5ffm" event={"ID":"8cccdb9f-b531-4534-8adc-6a64d16dd3fe","Type":"ContainerDied","Data":"f12b3c8156045ff9e56c3fbefd7090b795cd6b5eaf8a095bbb8363e07b980f95"} Jan 29 06:56:33 crc kubenswrapper[5017]: I0129 06:56:33.005900 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f12b3c8156045ff9e56c3fbefd7090b795cd6b5eaf8a095bbb8363e07b980f95" Jan 29 06:56:33 crc kubenswrapper[5017]: I0129 06:56:33.006131 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-s5ffm" Jan 29 06:56:33 crc kubenswrapper[5017]: I0129 06:56:33.224113 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 29 06:56:33 crc kubenswrapper[5017]: I0129 06:56:33.224419 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f983cf5e-8dae-462e-a0ba-719cf9ae229a" containerName="nova-api-log" containerID="cri-o://bd9480fc01f00485f82bb9b58fd8e9b5559c5ceaa2b5270be08f486ecaf54b3c" gracePeriod=30 Jan 29 06:56:33 crc kubenswrapper[5017]: I0129 06:56:33.225007 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f983cf5e-8dae-462e-a0ba-719cf9ae229a" containerName="nova-api-api" containerID="cri-o://55e8e4d128f915860c96e11349c0680492ba3bdf216b37fc306c922540141700" gracePeriod=30 Jan 29 06:56:33 crc kubenswrapper[5017]: I0129 06:56:33.254991 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 06:56:33 crc kubenswrapper[5017]: I0129 06:56:33.255376 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="89fe4210-0650-40a1-a2f9-8d0a9ca9a640" containerName="nova-scheduler-scheduler" containerID="cri-o://67a1b04ae715a85f2b8b8d57eb51d423fc9ee3444f2930d0a95e2ac8b8505303" gracePeriod=30 Jan 29 06:56:33 crc kubenswrapper[5017]: I0129 06:56:33.271202 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 06:56:33 crc kubenswrapper[5017]: I0129 06:56:33.271497 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="b4f35fce-e9ad-4cf2-8944-638bb6d8df3d" 
containerName="nova-metadata-log" containerID="cri-o://0267b992263d01175b34d7a4258615df11682c59c9d38bed4ca8eafd58cb489d" gracePeriod=30 Jan 29 06:56:33 crc kubenswrapper[5017]: I0129 06:56:33.272188 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="b4f35fce-e9ad-4cf2-8944-638bb6d8df3d" containerName="nova-metadata-metadata" containerID="cri-o://379a4847043afedd4b8d7f3e7bdabc31ac5b6a7cf19f98ab70dbf37c72b2bdaf" gracePeriod=30 Jan 29 06:56:33 crc kubenswrapper[5017]: I0129 06:56:33.853567 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 06:56:34 crc kubenswrapper[5017]: I0129 06:56:34.017623 5017 generic.go:334] "Generic (PLEG): container finished" podID="b4f35fce-e9ad-4cf2-8944-638bb6d8df3d" containerID="0267b992263d01175b34d7a4258615df11682c59c9d38bed4ca8eafd58cb489d" exitCode=143 Jan 29 06:56:34 crc kubenswrapper[5017]: I0129 06:56:34.017697 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b4f35fce-e9ad-4cf2-8944-638bb6d8df3d","Type":"ContainerDied","Data":"0267b992263d01175b34d7a4258615df11682c59c9d38bed4ca8eafd58cb489d"} Jan 29 06:56:34 crc kubenswrapper[5017]: I0129 06:56:34.020187 5017 generic.go:334] "Generic (PLEG): container finished" podID="f983cf5e-8dae-462e-a0ba-719cf9ae229a" containerID="55e8e4d128f915860c96e11349c0680492ba3bdf216b37fc306c922540141700" exitCode=0 Jan 29 06:56:34 crc kubenswrapper[5017]: I0129 06:56:34.020225 5017 generic.go:334] "Generic (PLEG): container finished" podID="f983cf5e-8dae-462e-a0ba-719cf9ae229a" containerID="bd9480fc01f00485f82bb9b58fd8e9b5559c5ceaa2b5270be08f486ecaf54b3c" exitCode=143 Jan 29 06:56:34 crc kubenswrapper[5017]: I0129 06:56:34.020252 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f983cf5e-8dae-462e-a0ba-719cf9ae229a","Type":"ContainerDied","Data":"55e8e4d128f915860c96e11349c0680492ba3bdf216b37fc306c922540141700"} Jan 29 06:56:34 crc kubenswrapper[5017]: I0129 06:56:34.020268 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 29 06:56:34 crc kubenswrapper[5017]: I0129 06:56:34.020287 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f983cf5e-8dae-462e-a0ba-719cf9ae229a","Type":"ContainerDied","Data":"bd9480fc01f00485f82bb9b58fd8e9b5559c5ceaa2b5270be08f486ecaf54b3c"} Jan 29 06:56:34 crc kubenswrapper[5017]: I0129 06:56:34.020303 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f983cf5e-8dae-462e-a0ba-719cf9ae229a","Type":"ContainerDied","Data":"baa58309ecead7c4d6fae1c0c6795b98356242572f128485e656b756d078f730"} Jan 29 06:56:34 crc kubenswrapper[5017]: I0129 06:56:34.020325 5017 scope.go:117] "RemoveContainer" containerID="55e8e4d128f915860c96e11349c0680492ba3bdf216b37fc306c922540141700" Jan 29 06:56:34 crc kubenswrapper[5017]: I0129 06:56:34.042499 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f983cf5e-8dae-462e-a0ba-719cf9ae229a-public-tls-certs\") pod \"f983cf5e-8dae-462e-a0ba-719cf9ae229a\" (UID: \"f983cf5e-8dae-462e-a0ba-719cf9ae229a\") " Jan 29 06:56:34 crc kubenswrapper[5017]: I0129 06:56:34.042661 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zxsfv\" (UniqueName: \"kubernetes.io/projected/f983cf5e-8dae-462e-a0ba-719cf9ae229a-kube-api-access-zxsfv\") pod \"f983cf5e-8dae-462e-a0ba-719cf9ae229a\" (UID: \"f983cf5e-8dae-462e-a0ba-719cf9ae229a\") " Jan 29 06:56:34 crc kubenswrapper[5017]: I0129 06:56:34.042698 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f983cf5e-8dae-462e-a0ba-719cf9ae229a-internal-tls-certs\") pod \"f983cf5e-8dae-462e-a0ba-719cf9ae229a\" (UID: \"f983cf5e-8dae-462e-a0ba-719cf9ae229a\") " Jan 29 06:56:34 crc kubenswrapper[5017]: I0129 06:56:34.042759 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f983cf5e-8dae-462e-a0ba-719cf9ae229a-logs\") pod \"f983cf5e-8dae-462e-a0ba-719cf9ae229a\" (UID: \"f983cf5e-8dae-462e-a0ba-719cf9ae229a\") " Jan 29 06:56:34 crc kubenswrapper[5017]: I0129 06:56:34.042791 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f983cf5e-8dae-462e-a0ba-719cf9ae229a-config-data\") pod \"f983cf5e-8dae-462e-a0ba-719cf9ae229a\" (UID: \"f983cf5e-8dae-462e-a0ba-719cf9ae229a\") " Jan 29 06:56:34 crc kubenswrapper[5017]: I0129 06:56:34.043428 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f983cf5e-8dae-462e-a0ba-719cf9ae229a-logs" (OuterVolumeSpecName: "logs") pod "f983cf5e-8dae-462e-a0ba-719cf9ae229a" (UID: "f983cf5e-8dae-462e-a0ba-719cf9ae229a"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:56:34 crc kubenswrapper[5017]: I0129 06:56:34.042850 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f983cf5e-8dae-462e-a0ba-719cf9ae229a-combined-ca-bundle\") pod \"f983cf5e-8dae-462e-a0ba-719cf9ae229a\" (UID: \"f983cf5e-8dae-462e-a0ba-719cf9ae229a\") " Jan 29 06:56:34 crc kubenswrapper[5017]: I0129 06:56:34.049118 5017 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f983cf5e-8dae-462e-a0ba-719cf9ae229a-logs\") on node \"crc\" DevicePath \"\"" Jan 29 06:56:34 crc kubenswrapper[5017]: I0129 06:56:34.050136 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f983cf5e-8dae-462e-a0ba-719cf9ae229a-kube-api-access-zxsfv" (OuterVolumeSpecName: "kube-api-access-zxsfv") pod "f983cf5e-8dae-462e-a0ba-719cf9ae229a" (UID: "f983cf5e-8dae-462e-a0ba-719cf9ae229a"). InnerVolumeSpecName "kube-api-access-zxsfv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:56:34 crc kubenswrapper[5017]: I0129 06:56:34.070871 5017 scope.go:117] "RemoveContainer" containerID="bd9480fc01f00485f82bb9b58fd8e9b5559c5ceaa2b5270be08f486ecaf54b3c" Jan 29 06:56:34 crc kubenswrapper[5017]: I0129 06:56:34.096149 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f983cf5e-8dae-462e-a0ba-719cf9ae229a-config-data" (OuterVolumeSpecName: "config-data") pod "f983cf5e-8dae-462e-a0ba-719cf9ae229a" (UID: "f983cf5e-8dae-462e-a0ba-719cf9ae229a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:56:34 crc kubenswrapper[5017]: I0129 06:56:34.104062 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f983cf5e-8dae-462e-a0ba-719cf9ae229a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f983cf5e-8dae-462e-a0ba-719cf9ae229a" (UID: "f983cf5e-8dae-462e-a0ba-719cf9ae229a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:56:34 crc kubenswrapper[5017]: I0129 06:56:34.113029 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f983cf5e-8dae-462e-a0ba-719cf9ae229a-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "f983cf5e-8dae-462e-a0ba-719cf9ae229a" (UID: "f983cf5e-8dae-462e-a0ba-719cf9ae229a"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:56:34 crc kubenswrapper[5017]: I0129 06:56:34.137931 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f983cf5e-8dae-462e-a0ba-719cf9ae229a-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "f983cf5e-8dae-462e-a0ba-719cf9ae229a" (UID: "f983cf5e-8dae-462e-a0ba-719cf9ae229a"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:56:34 crc kubenswrapper[5017]: I0129 06:56:34.151127 5017 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f983cf5e-8dae-462e-a0ba-719cf9ae229a-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 06:56:34 crc kubenswrapper[5017]: I0129 06:56:34.151171 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zxsfv\" (UniqueName: \"kubernetes.io/projected/f983cf5e-8dae-462e-a0ba-719cf9ae229a-kube-api-access-zxsfv\") on node \"crc\" DevicePath \"\"" Jan 29 06:56:34 crc kubenswrapper[5017]: I0129 06:56:34.151189 5017 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f983cf5e-8dae-462e-a0ba-719cf9ae229a-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 06:56:34 crc kubenswrapper[5017]: I0129 06:56:34.151201 5017 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f983cf5e-8dae-462e-a0ba-719cf9ae229a-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 06:56:34 crc kubenswrapper[5017]: I0129 06:56:34.151219 5017 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f983cf5e-8dae-462e-a0ba-719cf9ae229a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 06:56:34 crc kubenswrapper[5017]: I0129 06:56:34.183267 5017 scope.go:117] "RemoveContainer" containerID="55e8e4d128f915860c96e11349c0680492ba3bdf216b37fc306c922540141700" Jan 29 06:56:34 crc kubenswrapper[5017]: E0129 06:56:34.184216 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55e8e4d128f915860c96e11349c0680492ba3bdf216b37fc306c922540141700\": container with ID starting with 55e8e4d128f915860c96e11349c0680492ba3bdf216b37fc306c922540141700 not found: ID does not exist" containerID="55e8e4d128f915860c96e11349c0680492ba3bdf216b37fc306c922540141700" Jan 29 06:56:34 crc kubenswrapper[5017]: I0129 06:56:34.184283 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55e8e4d128f915860c96e11349c0680492ba3bdf216b37fc306c922540141700"} err="failed to get container status \"55e8e4d128f915860c96e11349c0680492ba3bdf216b37fc306c922540141700\": rpc error: code = NotFound desc = could not find container \"55e8e4d128f915860c96e11349c0680492ba3bdf216b37fc306c922540141700\": container with ID starting with 55e8e4d128f915860c96e11349c0680492ba3bdf216b37fc306c922540141700 not found: ID does not exist" Jan 29 06:56:34 crc kubenswrapper[5017]: I0129 06:56:34.184320 5017 scope.go:117] "RemoveContainer" containerID="bd9480fc01f00485f82bb9b58fd8e9b5559c5ceaa2b5270be08f486ecaf54b3c" Jan 29 06:56:34 crc kubenswrapper[5017]: E0129 06:56:34.184668 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd9480fc01f00485f82bb9b58fd8e9b5559c5ceaa2b5270be08f486ecaf54b3c\": container with ID starting with bd9480fc01f00485f82bb9b58fd8e9b5559c5ceaa2b5270be08f486ecaf54b3c not found: ID does not exist" containerID="bd9480fc01f00485f82bb9b58fd8e9b5559c5ceaa2b5270be08f486ecaf54b3c" Jan 29 06:56:34 crc kubenswrapper[5017]: I0129 06:56:34.184701 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd9480fc01f00485f82bb9b58fd8e9b5559c5ceaa2b5270be08f486ecaf54b3c"} err="failed to get container 
status \"bd9480fc01f00485f82bb9b58fd8e9b5559c5ceaa2b5270be08f486ecaf54b3c\": rpc error: code = NotFound desc = could not find container \"bd9480fc01f00485f82bb9b58fd8e9b5559c5ceaa2b5270be08f486ecaf54b3c\": container with ID starting with bd9480fc01f00485f82bb9b58fd8e9b5559c5ceaa2b5270be08f486ecaf54b3c not found: ID does not exist" Jan 29 06:56:34 crc kubenswrapper[5017]: I0129 06:56:34.184721 5017 scope.go:117] "RemoveContainer" containerID="55e8e4d128f915860c96e11349c0680492ba3bdf216b37fc306c922540141700" Jan 29 06:56:34 crc kubenswrapper[5017]: I0129 06:56:34.185161 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55e8e4d128f915860c96e11349c0680492ba3bdf216b37fc306c922540141700"} err="failed to get container status \"55e8e4d128f915860c96e11349c0680492ba3bdf216b37fc306c922540141700\": rpc error: code = NotFound desc = could not find container \"55e8e4d128f915860c96e11349c0680492ba3bdf216b37fc306c922540141700\": container with ID starting with 55e8e4d128f915860c96e11349c0680492ba3bdf216b37fc306c922540141700 not found: ID does not exist" Jan 29 06:56:34 crc kubenswrapper[5017]: I0129 06:56:34.185190 5017 scope.go:117] "RemoveContainer" containerID="bd9480fc01f00485f82bb9b58fd8e9b5559c5ceaa2b5270be08f486ecaf54b3c" Jan 29 06:56:34 crc kubenswrapper[5017]: I0129 06:56:34.185792 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd9480fc01f00485f82bb9b58fd8e9b5559c5ceaa2b5270be08f486ecaf54b3c"} err="failed to get container status \"bd9480fc01f00485f82bb9b58fd8e9b5559c5ceaa2b5270be08f486ecaf54b3c\": rpc error: code = NotFound desc = could not find container \"bd9480fc01f00485f82bb9b58fd8e9b5559c5ceaa2b5270be08f486ecaf54b3c\": container with ID starting with bd9480fc01f00485f82bb9b58fd8e9b5559c5ceaa2b5270be08f486ecaf54b3c not found: ID does not exist" Jan 29 06:56:34 crc kubenswrapper[5017]: I0129 06:56:34.403302 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 29 06:56:34 crc kubenswrapper[5017]: I0129 06:56:34.429837 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 29 06:56:34 crc kubenswrapper[5017]: I0129 06:56:34.450872 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 29 06:56:34 crc kubenswrapper[5017]: E0129 06:56:34.451502 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae1ed6fd-f44a-4c4a-a573-9ee16b9181f5" containerName="dnsmasq-dns" Jan 29 06:56:34 crc kubenswrapper[5017]: I0129 06:56:34.451518 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae1ed6fd-f44a-4c4a-a573-9ee16b9181f5" containerName="dnsmasq-dns" Jan 29 06:56:34 crc kubenswrapper[5017]: E0129 06:56:34.451529 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f983cf5e-8dae-462e-a0ba-719cf9ae229a" containerName="nova-api-log" Jan 29 06:56:34 crc kubenswrapper[5017]: I0129 06:56:34.451537 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="f983cf5e-8dae-462e-a0ba-719cf9ae229a" containerName="nova-api-log" Jan 29 06:56:34 crc kubenswrapper[5017]: E0129 06:56:34.451555 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f983cf5e-8dae-462e-a0ba-719cf9ae229a" containerName="nova-api-api" Jan 29 06:56:34 crc kubenswrapper[5017]: I0129 06:56:34.451561 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="f983cf5e-8dae-462e-a0ba-719cf9ae229a" containerName="nova-api-api" Jan 29 06:56:34 crc kubenswrapper[5017]: E0129 06:56:34.451567 5017 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cccdb9f-b531-4534-8adc-6a64d16dd3fe" containerName="nova-manage" Jan 29 06:56:34 crc kubenswrapper[5017]: I0129 06:56:34.451575 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cccdb9f-b531-4534-8adc-6a64d16dd3fe" containerName="nova-manage" Jan 29 06:56:34 crc kubenswrapper[5017]: E0129 06:56:34.451588 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae1ed6fd-f44a-4c4a-a573-9ee16b9181f5" containerName="init" Jan 29 06:56:34 crc kubenswrapper[5017]: I0129 06:56:34.451594 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae1ed6fd-f44a-4c4a-a573-9ee16b9181f5" containerName="init" Jan 29 06:56:34 crc kubenswrapper[5017]: I0129 06:56:34.451782 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="f983cf5e-8dae-462e-a0ba-719cf9ae229a" containerName="nova-api-log" Jan 29 06:56:34 crc kubenswrapper[5017]: I0129 06:56:34.451793 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="f983cf5e-8dae-462e-a0ba-719cf9ae229a" containerName="nova-api-api" Jan 29 06:56:34 crc kubenswrapper[5017]: I0129 06:56:34.451803 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae1ed6fd-f44a-4c4a-a573-9ee16b9181f5" containerName="dnsmasq-dns" Jan 29 06:56:34 crc kubenswrapper[5017]: I0129 06:56:34.451814 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="8cccdb9f-b531-4534-8adc-6a64d16dd3fe" containerName="nova-manage" Jan 29 06:56:34 crc kubenswrapper[5017]: I0129 06:56:34.452779 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 29 06:56:34 crc kubenswrapper[5017]: I0129 06:56:34.452873 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 06:56:34 crc kubenswrapper[5017]: I0129 06:56:34.472701 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 29 06:56:34 crc kubenswrapper[5017]: I0129 06:56:34.472921 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 29 06:56:34 crc kubenswrapper[5017]: I0129 06:56:34.473047 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 29 06:56:34 crc kubenswrapper[5017]: I0129 06:56:34.577492 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pm2zt\" (UniqueName: \"kubernetes.io/projected/7d3e4a4d-ee9a-4345-b8e5-a40416771caf-kube-api-access-pm2zt\") pod \"nova-api-0\" (UID: \"7d3e4a4d-ee9a-4345-b8e5-a40416771caf\") " pod="openstack/nova-api-0" Jan 29 06:56:34 crc kubenswrapper[5017]: I0129 06:56:34.577551 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d3e4a4d-ee9a-4345-b8e5-a40416771caf-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7d3e4a4d-ee9a-4345-b8e5-a40416771caf\") " pod="openstack/nova-api-0" Jan 29 06:56:34 crc kubenswrapper[5017]: I0129 06:56:34.577633 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d3e4a4d-ee9a-4345-b8e5-a40416771caf-public-tls-certs\") pod \"nova-api-0\" (UID: \"7d3e4a4d-ee9a-4345-b8e5-a40416771caf\") " pod="openstack/nova-api-0" Jan 29 06:56:34 crc kubenswrapper[5017]: I0129 06:56:34.577767 5017 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d3e4a4d-ee9a-4345-b8e5-a40416771caf-internal-tls-certs\") pod \"nova-api-0\" (UID: \"7d3e4a4d-ee9a-4345-b8e5-a40416771caf\") " pod="openstack/nova-api-0" Jan 29 06:56:34 crc kubenswrapper[5017]: I0129 06:56:34.578094 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d3e4a4d-ee9a-4345-b8e5-a40416771caf-config-data\") pod \"nova-api-0\" (UID: \"7d3e4a4d-ee9a-4345-b8e5-a40416771caf\") " pod="openstack/nova-api-0" Jan 29 06:56:34 crc kubenswrapper[5017]: I0129 06:56:34.578136 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d3e4a4d-ee9a-4345-b8e5-a40416771caf-logs\") pod \"nova-api-0\" (UID: \"7d3e4a4d-ee9a-4345-b8e5-a40416771caf\") " pod="openstack/nova-api-0" Jan 29 06:56:34 crc kubenswrapper[5017]: I0129 06:56:34.680822 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d3e4a4d-ee9a-4345-b8e5-a40416771caf-public-tls-certs\") pod \"nova-api-0\" (UID: \"7d3e4a4d-ee9a-4345-b8e5-a40416771caf\") " pod="openstack/nova-api-0" Jan 29 06:56:34 crc kubenswrapper[5017]: I0129 06:56:34.680916 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d3e4a4d-ee9a-4345-b8e5-a40416771caf-internal-tls-certs\") pod \"nova-api-0\" (UID: \"7d3e4a4d-ee9a-4345-b8e5-a40416771caf\") " pod="openstack/nova-api-0" Jan 29 06:56:34 crc kubenswrapper[5017]: I0129 06:56:34.681022 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d3e4a4d-ee9a-4345-b8e5-a40416771caf-config-data\") pod \"nova-api-0\" (UID: \"7d3e4a4d-ee9a-4345-b8e5-a40416771caf\") " pod="openstack/nova-api-0" Jan 29 06:56:34 crc kubenswrapper[5017]: I0129 06:56:34.681053 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d3e4a4d-ee9a-4345-b8e5-a40416771caf-logs\") pod \"nova-api-0\" (UID: \"7d3e4a4d-ee9a-4345-b8e5-a40416771caf\") " pod="openstack/nova-api-0" Jan 29 06:56:34 crc kubenswrapper[5017]: I0129 06:56:34.681132 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pm2zt\" (UniqueName: \"kubernetes.io/projected/7d3e4a4d-ee9a-4345-b8e5-a40416771caf-kube-api-access-pm2zt\") pod \"nova-api-0\" (UID: \"7d3e4a4d-ee9a-4345-b8e5-a40416771caf\") " pod="openstack/nova-api-0" Jan 29 06:56:34 crc kubenswrapper[5017]: I0129 06:56:34.681166 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d3e4a4d-ee9a-4345-b8e5-a40416771caf-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7d3e4a4d-ee9a-4345-b8e5-a40416771caf\") " pod="openstack/nova-api-0" Jan 29 06:56:34 crc kubenswrapper[5017]: I0129 06:56:34.684257 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d3e4a4d-ee9a-4345-b8e5-a40416771caf-logs\") pod \"nova-api-0\" (UID: \"7d3e4a4d-ee9a-4345-b8e5-a40416771caf\") " pod="openstack/nova-api-0" Jan 29 06:56:34 crc kubenswrapper[5017]: I0129 06:56:34.688897 5017 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d3e4a4d-ee9a-4345-b8e5-a40416771caf-config-data\") pod \"nova-api-0\" (UID: \"7d3e4a4d-ee9a-4345-b8e5-a40416771caf\") " pod="openstack/nova-api-0" Jan 29 06:56:34 crc kubenswrapper[5017]: I0129 06:56:34.689708 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d3e4a4d-ee9a-4345-b8e5-a40416771caf-internal-tls-certs\") pod \"nova-api-0\" (UID: \"7d3e4a4d-ee9a-4345-b8e5-a40416771caf\") " pod="openstack/nova-api-0" Jan 29 06:56:34 crc kubenswrapper[5017]: I0129 06:56:34.690611 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d3e4a4d-ee9a-4345-b8e5-a40416771caf-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7d3e4a4d-ee9a-4345-b8e5-a40416771caf\") " pod="openstack/nova-api-0" Jan 29 06:56:34 crc kubenswrapper[5017]: I0129 06:56:34.705701 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d3e4a4d-ee9a-4345-b8e5-a40416771caf-public-tls-certs\") pod \"nova-api-0\" (UID: \"7d3e4a4d-ee9a-4345-b8e5-a40416771caf\") " pod="openstack/nova-api-0" Jan 29 06:56:34 crc kubenswrapper[5017]: I0129 06:56:34.712215 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pm2zt\" (UniqueName: \"kubernetes.io/projected/7d3e4a4d-ee9a-4345-b8e5-a40416771caf-kube-api-access-pm2zt\") pod \"nova-api-0\" (UID: \"7d3e4a4d-ee9a-4345-b8e5-a40416771caf\") " pod="openstack/nova-api-0" Jan 29 06:56:34 crc kubenswrapper[5017]: E0129 06:56:34.732912 5017 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 67a1b04ae715a85f2b8b8d57eb51d423fc9ee3444f2930d0a95e2ac8b8505303 is running failed: container process not found" containerID="67a1b04ae715a85f2b8b8d57eb51d423fc9ee3444f2930d0a95e2ac8b8505303" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 29 06:56:34 crc kubenswrapper[5017]: E0129 06:56:34.733394 5017 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 67a1b04ae715a85f2b8b8d57eb51d423fc9ee3444f2930d0a95e2ac8b8505303 is running failed: container process not found" containerID="67a1b04ae715a85f2b8b8d57eb51d423fc9ee3444f2930d0a95e2ac8b8505303" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 29 06:56:34 crc kubenswrapper[5017]: E0129 06:56:34.734001 5017 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 67a1b04ae715a85f2b8b8d57eb51d423fc9ee3444f2930d0a95e2ac8b8505303 is running failed: container process not found" containerID="67a1b04ae715a85f2b8b8d57eb51d423fc9ee3444f2930d0a95e2ac8b8505303" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 29 06:56:34 crc kubenswrapper[5017]: E0129 06:56:34.734111 5017 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 67a1b04ae715a85f2b8b8d57eb51d423fc9ee3444f2930d0a95e2ac8b8505303 is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="89fe4210-0650-40a1-a2f9-8d0a9ca9a640" containerName="nova-scheduler-scheduler" Jan 29 06:56:34 crc kubenswrapper[5017]: I0129 06:56:34.831488 5017 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 06:56:34 crc kubenswrapper[5017]: I0129 06:56:34.896949 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 06:56:34 crc kubenswrapper[5017]: I0129 06:56:34.987521 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89fe4210-0650-40a1-a2f9-8d0a9ca9a640-config-data\") pod \"89fe4210-0650-40a1-a2f9-8d0a9ca9a640\" (UID: \"89fe4210-0650-40a1-a2f9-8d0a9ca9a640\") " Jan 29 06:56:34 crc kubenswrapper[5017]: I0129 06:56:34.987598 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9dn4q\" (UniqueName: \"kubernetes.io/projected/89fe4210-0650-40a1-a2f9-8d0a9ca9a640-kube-api-access-9dn4q\") pod \"89fe4210-0650-40a1-a2f9-8d0a9ca9a640\" (UID: \"89fe4210-0650-40a1-a2f9-8d0a9ca9a640\") " Jan 29 06:56:34 crc kubenswrapper[5017]: I0129 06:56:34.987629 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89fe4210-0650-40a1-a2f9-8d0a9ca9a640-combined-ca-bundle\") pod \"89fe4210-0650-40a1-a2f9-8d0a9ca9a640\" (UID: \"89fe4210-0650-40a1-a2f9-8d0a9ca9a640\") " Jan 29 06:56:34 crc kubenswrapper[5017]: I0129 06:56:34.994482 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89fe4210-0650-40a1-a2f9-8d0a9ca9a640-kube-api-access-9dn4q" (OuterVolumeSpecName: "kube-api-access-9dn4q") pod "89fe4210-0650-40a1-a2f9-8d0a9ca9a640" (UID: "89fe4210-0650-40a1-a2f9-8d0a9ca9a640"). InnerVolumeSpecName "kube-api-access-9dn4q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:56:35 crc kubenswrapper[5017]: I0129 06:56:35.020292 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89fe4210-0650-40a1-a2f9-8d0a9ca9a640-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "89fe4210-0650-40a1-a2f9-8d0a9ca9a640" (UID: "89fe4210-0650-40a1-a2f9-8d0a9ca9a640"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:56:35 crc kubenswrapper[5017]: I0129 06:56:35.037842 5017 generic.go:334] "Generic (PLEG): container finished" podID="89fe4210-0650-40a1-a2f9-8d0a9ca9a640" containerID="67a1b04ae715a85f2b8b8d57eb51d423fc9ee3444f2930d0a95e2ac8b8505303" exitCode=0 Jan 29 06:56:35 crc kubenswrapper[5017]: I0129 06:56:35.037975 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"89fe4210-0650-40a1-a2f9-8d0a9ca9a640","Type":"ContainerDied","Data":"67a1b04ae715a85f2b8b8d57eb51d423fc9ee3444f2930d0a95e2ac8b8505303"} Jan 29 06:56:35 crc kubenswrapper[5017]: I0129 06:56:35.038022 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"89fe4210-0650-40a1-a2f9-8d0a9ca9a640","Type":"ContainerDied","Data":"189a9ab1da3fccb8852185d4a52490c745dfcf1e243a9c346c3c79c2e2aa4c9a"} Jan 29 06:56:35 crc kubenswrapper[5017]: I0129 06:56:35.038015 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 06:56:35 crc kubenswrapper[5017]: I0129 06:56:35.038155 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89fe4210-0650-40a1-a2f9-8d0a9ca9a640-config-data" (OuterVolumeSpecName: "config-data") pod "89fe4210-0650-40a1-a2f9-8d0a9ca9a640" (UID: "89fe4210-0650-40a1-a2f9-8d0a9ca9a640"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:56:35 crc kubenswrapper[5017]: I0129 06:56:35.038272 5017 scope.go:117] "RemoveContainer" containerID="67a1b04ae715a85f2b8b8d57eb51d423fc9ee3444f2930d0a95e2ac8b8505303" Jan 29 06:56:35 crc kubenswrapper[5017]: I0129 06:56:35.080130 5017 scope.go:117] "RemoveContainer" containerID="67a1b04ae715a85f2b8b8d57eb51d423fc9ee3444f2930d0a95e2ac8b8505303" Jan 29 06:56:35 crc kubenswrapper[5017]: E0129 06:56:35.080610 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67a1b04ae715a85f2b8b8d57eb51d423fc9ee3444f2930d0a95e2ac8b8505303\": container with ID starting with 67a1b04ae715a85f2b8b8d57eb51d423fc9ee3444f2930d0a95e2ac8b8505303 not found: ID does not exist" containerID="67a1b04ae715a85f2b8b8d57eb51d423fc9ee3444f2930d0a95e2ac8b8505303" Jan 29 06:56:35 crc kubenswrapper[5017]: I0129 06:56:35.080644 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67a1b04ae715a85f2b8b8d57eb51d423fc9ee3444f2930d0a95e2ac8b8505303"} err="failed to get container status \"67a1b04ae715a85f2b8b8d57eb51d423fc9ee3444f2930d0a95e2ac8b8505303\": rpc error: code = NotFound desc = could not find container \"67a1b04ae715a85f2b8b8d57eb51d423fc9ee3444f2930d0a95e2ac8b8505303\": container with ID starting with 67a1b04ae715a85f2b8b8d57eb51d423fc9ee3444f2930d0a95e2ac8b8505303 not found: ID does not exist" Jan 29 06:56:35 crc kubenswrapper[5017]: I0129 06:56:35.090657 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9dn4q\" (UniqueName: \"kubernetes.io/projected/89fe4210-0650-40a1-a2f9-8d0a9ca9a640-kube-api-access-9dn4q\") on node \"crc\" DevicePath \"\"" Jan 29 06:56:35 crc kubenswrapper[5017]: I0129 06:56:35.090699 5017 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89fe4210-0650-40a1-a2f9-8d0a9ca9a640-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 06:56:35 crc kubenswrapper[5017]: I0129 06:56:35.090717 5017 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89fe4210-0650-40a1-a2f9-8d0a9ca9a640-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 06:56:35 crc kubenswrapper[5017]: W0129 06:56:35.353270 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7d3e4a4d_ee9a_4345_b8e5_a40416771caf.slice/crio-5abd7c85607df012d69d67cfc19e316c675dcdebf6951845245b25e10b7e6c24 WatchSource:0}: Error finding container 5abd7c85607df012d69d67cfc19e316c675dcdebf6951845245b25e10b7e6c24: Status 404 returned error can't find the container with id 5abd7c85607df012d69d67cfc19e316c675dcdebf6951845245b25e10b7e6c24 Jan 29 06:56:35 crc kubenswrapper[5017]: I0129 06:56:35.361018 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 29 06:56:35 crc kubenswrapper[5017]: I0129 06:56:35.386749 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] 
Jan 29 06:56:35 crc kubenswrapper[5017]: I0129 06:56:35.397933 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 06:56:35 crc kubenswrapper[5017]: I0129 06:56:35.445902 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 06:56:35 crc kubenswrapper[5017]: E0129 06:56:35.446528 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89fe4210-0650-40a1-a2f9-8d0a9ca9a640" containerName="nova-scheduler-scheduler" Jan 29 06:56:35 crc kubenswrapper[5017]: I0129 06:56:35.446552 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="89fe4210-0650-40a1-a2f9-8d0a9ca9a640" containerName="nova-scheduler-scheduler" Jan 29 06:56:35 crc kubenswrapper[5017]: I0129 06:56:35.446830 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="89fe4210-0650-40a1-a2f9-8d0a9ca9a640" containerName="nova-scheduler-scheduler" Jan 29 06:56:35 crc kubenswrapper[5017]: I0129 06:56:35.447641 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 06:56:35 crc kubenswrapper[5017]: I0129 06:56:35.451424 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 29 06:56:35 crc kubenswrapper[5017]: I0129 06:56:35.477400 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 06:56:35 crc kubenswrapper[5017]: I0129 06:56:35.504928 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjktv\" (UniqueName: \"kubernetes.io/projected/a909c2d3-90a4-41a0-af8e-ddb69ed4f41b-kube-api-access-pjktv\") pod \"nova-scheduler-0\" (UID: \"a909c2d3-90a4-41a0-af8e-ddb69ed4f41b\") " pod="openstack/nova-scheduler-0" Jan 29 06:56:35 crc kubenswrapper[5017]: I0129 06:56:35.505095 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a909c2d3-90a4-41a0-af8e-ddb69ed4f41b-config-data\") pod \"nova-scheduler-0\" (UID: \"a909c2d3-90a4-41a0-af8e-ddb69ed4f41b\") " pod="openstack/nova-scheduler-0" Jan 29 06:56:35 crc kubenswrapper[5017]: I0129 06:56:35.505218 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a909c2d3-90a4-41a0-af8e-ddb69ed4f41b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a909c2d3-90a4-41a0-af8e-ddb69ed4f41b\") " pod="openstack/nova-scheduler-0" Jan 29 06:56:35 crc kubenswrapper[5017]: I0129 06:56:35.621502 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a909c2d3-90a4-41a0-af8e-ddb69ed4f41b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a909c2d3-90a4-41a0-af8e-ddb69ed4f41b\") " pod="openstack/nova-scheduler-0" Jan 29 06:56:35 crc kubenswrapper[5017]: I0129 06:56:35.621674 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjktv\" (UniqueName: \"kubernetes.io/projected/a909c2d3-90a4-41a0-af8e-ddb69ed4f41b-kube-api-access-pjktv\") pod \"nova-scheduler-0\" (UID: \"a909c2d3-90a4-41a0-af8e-ddb69ed4f41b\") " pod="openstack/nova-scheduler-0" Jan 29 06:56:35 crc kubenswrapper[5017]: I0129 06:56:35.621746 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/a909c2d3-90a4-41a0-af8e-ddb69ed4f41b-config-data\") pod \"nova-scheduler-0\" (UID: \"a909c2d3-90a4-41a0-af8e-ddb69ed4f41b\") " pod="openstack/nova-scheduler-0" Jan 29 06:56:35 crc kubenswrapper[5017]: I0129 06:56:35.655762 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjktv\" (UniqueName: \"kubernetes.io/projected/a909c2d3-90a4-41a0-af8e-ddb69ed4f41b-kube-api-access-pjktv\") pod \"nova-scheduler-0\" (UID: \"a909c2d3-90a4-41a0-af8e-ddb69ed4f41b\") " pod="openstack/nova-scheduler-0" Jan 29 06:56:35 crc kubenswrapper[5017]: I0129 06:56:35.657602 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a909c2d3-90a4-41a0-af8e-ddb69ed4f41b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a909c2d3-90a4-41a0-af8e-ddb69ed4f41b\") " pod="openstack/nova-scheduler-0" Jan 29 06:56:35 crc kubenswrapper[5017]: I0129 06:56:35.661547 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a909c2d3-90a4-41a0-af8e-ddb69ed4f41b-config-data\") pod \"nova-scheduler-0\" (UID: \"a909c2d3-90a4-41a0-af8e-ddb69ed4f41b\") " pod="openstack/nova-scheduler-0" Jan 29 06:56:35 crc kubenswrapper[5017]: I0129 06:56:35.769692 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 06:56:36 crc kubenswrapper[5017]: I0129 06:56:36.055142 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7d3e4a4d-ee9a-4345-b8e5-a40416771caf","Type":"ContainerStarted","Data":"e12411a150cd5f0172fa3185d950a4267453e35e9b6242870bba015ffef1b16b"} Jan 29 06:56:36 crc kubenswrapper[5017]: I0129 06:56:36.056099 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7d3e4a4d-ee9a-4345-b8e5-a40416771caf","Type":"ContainerStarted","Data":"5abd7c85607df012d69d67cfc19e316c675dcdebf6951845245b25e10b7e6c24"} Jan 29 06:56:36 crc kubenswrapper[5017]: I0129 06:56:36.243913 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 06:56:36 crc kubenswrapper[5017]: W0129 06:56:36.249384 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda909c2d3_90a4_41a0_af8e_ddb69ed4f41b.slice/crio-7e9d4e28a02dcd146028c0f354e250f5df4d372a93c4db9c7289dff15a10dc46 WatchSource:0}: Error finding container 7e9d4e28a02dcd146028c0f354e250f5df4d372a93c4db9c7289dff15a10dc46: Status 404 returned error can't find the container with id 7e9d4e28a02dcd146028c0f354e250f5df4d372a93c4db9c7289dff15a10dc46 Jan 29 06:56:36 crc kubenswrapper[5017]: I0129 06:56:36.391627 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89fe4210-0650-40a1-a2f9-8d0a9ca9a640" path="/var/lib/kubelet/pods/89fe4210-0650-40a1-a2f9-8d0a9ca9a640/volumes" Jan 29 06:56:36 crc kubenswrapper[5017]: I0129 06:56:36.393172 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f983cf5e-8dae-462e-a0ba-719cf9ae229a" path="/var/lib/kubelet/pods/f983cf5e-8dae-462e-a0ba-719cf9ae229a/volumes" Jan 29 06:56:36 crc kubenswrapper[5017]: I0129 06:56:36.404591 5017 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="b4f35fce-e9ad-4cf2-8944-638bb6d8df3d" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.194:8775/\": read tcp 
10.217.0.2:37130->10.217.0.194:8775: read: connection reset by peer" Jan 29 06:56:36 crc kubenswrapper[5017]: I0129 06:56:36.404626 5017 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="b4f35fce-e9ad-4cf2-8944-638bb6d8df3d" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.194:8775/\": read tcp 10.217.0.2:37126->10.217.0.194:8775: read: connection reset by peer" Jan 29 06:56:36 crc kubenswrapper[5017]: I0129 06:56:36.976100 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 06:56:37 crc kubenswrapper[5017]: I0129 06:56:37.065527 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4f35fce-e9ad-4cf2-8944-638bb6d8df3d-config-data\") pod \"b4f35fce-e9ad-4cf2-8944-638bb6d8df3d\" (UID: \"b4f35fce-e9ad-4cf2-8944-638bb6d8df3d\") " Jan 29 06:56:37 crc kubenswrapper[5017]: I0129 06:56:37.065659 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4f35fce-e9ad-4cf2-8944-638bb6d8df3d-combined-ca-bundle\") pod \"b4f35fce-e9ad-4cf2-8944-638bb6d8df3d\" (UID: \"b4f35fce-e9ad-4cf2-8944-638bb6d8df3d\") " Jan 29 06:56:37 crc kubenswrapper[5017]: I0129 06:56:37.065766 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4f35fce-e9ad-4cf2-8944-638bb6d8df3d-nova-metadata-tls-certs\") pod \"b4f35fce-e9ad-4cf2-8944-638bb6d8df3d\" (UID: \"b4f35fce-e9ad-4cf2-8944-638bb6d8df3d\") " Jan 29 06:56:37 crc kubenswrapper[5017]: I0129 06:56:37.065801 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b4f35fce-e9ad-4cf2-8944-638bb6d8df3d-logs\") pod \"b4f35fce-e9ad-4cf2-8944-638bb6d8df3d\" (UID: \"b4f35fce-e9ad-4cf2-8944-638bb6d8df3d\") " Jan 29 06:56:37 crc kubenswrapper[5017]: I0129 06:56:37.065902 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v4g8t\" (UniqueName: \"kubernetes.io/projected/b4f35fce-e9ad-4cf2-8944-638bb6d8df3d-kube-api-access-v4g8t\") pod \"b4f35fce-e9ad-4cf2-8944-638bb6d8df3d\" (UID: \"b4f35fce-e9ad-4cf2-8944-638bb6d8df3d\") " Jan 29 06:56:37 crc kubenswrapper[5017]: I0129 06:56:37.066596 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4f35fce-e9ad-4cf2-8944-638bb6d8df3d-logs" (OuterVolumeSpecName: "logs") pod "b4f35fce-e9ad-4cf2-8944-638bb6d8df3d" (UID: "b4f35fce-e9ad-4cf2-8944-638bb6d8df3d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:56:37 crc kubenswrapper[5017]: I0129 06:56:37.074951 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4f35fce-e9ad-4cf2-8944-638bb6d8df3d-kube-api-access-v4g8t" (OuterVolumeSpecName: "kube-api-access-v4g8t") pod "b4f35fce-e9ad-4cf2-8944-638bb6d8df3d" (UID: "b4f35fce-e9ad-4cf2-8944-638bb6d8df3d"). InnerVolumeSpecName "kube-api-access-v4g8t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:56:37 crc kubenswrapper[5017]: I0129 06:56:37.076260 5017 generic.go:334] "Generic (PLEG): container finished" podID="b4f35fce-e9ad-4cf2-8944-638bb6d8df3d" containerID="379a4847043afedd4b8d7f3e7bdabc31ac5b6a7cf19f98ab70dbf37c72b2bdaf" exitCode=0 Jan 29 06:56:37 crc kubenswrapper[5017]: I0129 06:56:37.076344 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b4f35fce-e9ad-4cf2-8944-638bb6d8df3d","Type":"ContainerDied","Data":"379a4847043afedd4b8d7f3e7bdabc31ac5b6a7cf19f98ab70dbf37c72b2bdaf"} Jan 29 06:56:37 crc kubenswrapper[5017]: I0129 06:56:37.076385 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b4f35fce-e9ad-4cf2-8944-638bb6d8df3d","Type":"ContainerDied","Data":"d0f385ddf1c2ee94db2b4863735a4f9a4514582de971fab42f498d62d8dda2d7"} Jan 29 06:56:37 crc kubenswrapper[5017]: I0129 06:56:37.076406 5017 scope.go:117] "RemoveContainer" containerID="379a4847043afedd4b8d7f3e7bdabc31ac5b6a7cf19f98ab70dbf37c72b2bdaf" Jan 29 06:56:37 crc kubenswrapper[5017]: I0129 06:56:37.076558 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 06:56:37 crc kubenswrapper[5017]: I0129 06:56:37.094659 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7d3e4a4d-ee9a-4345-b8e5-a40416771caf","Type":"ContainerStarted","Data":"2065ba1d7b19abf8c6153b5871f31a1505c3bfdacc36878ce272c58a274b2b1b"} Jan 29 06:56:37 crc kubenswrapper[5017]: I0129 06:56:37.104628 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a909c2d3-90a4-41a0-af8e-ddb69ed4f41b","Type":"ContainerStarted","Data":"baa50265eb2c67dc53b4314533ab16055fd8bdaab7aec5147ecfb367c70320fc"} Jan 29 06:56:37 crc kubenswrapper[5017]: I0129 06:56:37.104689 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a909c2d3-90a4-41a0-af8e-ddb69ed4f41b","Type":"ContainerStarted","Data":"7e9d4e28a02dcd146028c0f354e250f5df4d372a93c4db9c7289dff15a10dc46"} Jan 29 06:56:37 crc kubenswrapper[5017]: I0129 06:56:37.106847 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4f35fce-e9ad-4cf2-8944-638bb6d8df3d-config-data" (OuterVolumeSpecName: "config-data") pod "b4f35fce-e9ad-4cf2-8944-638bb6d8df3d" (UID: "b4f35fce-e9ad-4cf2-8944-638bb6d8df3d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:56:37 crc kubenswrapper[5017]: I0129 06:56:37.107055 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4f35fce-e9ad-4cf2-8944-638bb6d8df3d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b4f35fce-e9ad-4cf2-8944-638bb6d8df3d" (UID: "b4f35fce-e9ad-4cf2-8944-638bb6d8df3d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:56:37 crc kubenswrapper[5017]: I0129 06:56:37.162316 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.162285099 podStartE2EDuration="3.162285099s" podCreationTimestamp="2026-01-29 06:56:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:56:37.139491993 +0000 UTC m=+1283.513939603" watchObservedRunningTime="2026-01-29 06:56:37.162285099 +0000 UTC m=+1283.536732709" Jan 29 06:56:37 crc kubenswrapper[5017]: I0129 06:56:37.165574 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.165561669 podStartE2EDuration="2.165561669s" podCreationTimestamp="2026-01-29 06:56:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:56:37.163374706 +0000 UTC m=+1283.537822316" watchObservedRunningTime="2026-01-29 06:56:37.165561669 +0000 UTC m=+1283.540009269" Jan 29 06:56:37 crc kubenswrapper[5017]: I0129 06:56:37.169888 5017 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4f35fce-e9ad-4cf2-8944-638bb6d8df3d-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 06:56:37 crc kubenswrapper[5017]: I0129 06:56:37.169936 5017 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4f35fce-e9ad-4cf2-8944-638bb6d8df3d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 06:56:37 crc kubenswrapper[5017]: I0129 06:56:37.169950 5017 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b4f35fce-e9ad-4cf2-8944-638bb6d8df3d-logs\") on node \"crc\" DevicePath \"\"" Jan 29 06:56:37 crc kubenswrapper[5017]: I0129 06:56:37.169983 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v4g8t\" (UniqueName: \"kubernetes.io/projected/b4f35fce-e9ad-4cf2-8944-638bb6d8df3d-kube-api-access-v4g8t\") on node \"crc\" DevicePath \"\"" Jan 29 06:56:37 crc kubenswrapper[5017]: I0129 06:56:37.175411 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4f35fce-e9ad-4cf2-8944-638bb6d8df3d-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "b4f35fce-e9ad-4cf2-8944-638bb6d8df3d" (UID: "b4f35fce-e9ad-4cf2-8944-638bb6d8df3d"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:56:37 crc kubenswrapper[5017]: I0129 06:56:37.189095 5017 scope.go:117] "RemoveContainer" containerID="0267b992263d01175b34d7a4258615df11682c59c9d38bed4ca8eafd58cb489d" Jan 29 06:56:37 crc kubenswrapper[5017]: I0129 06:56:37.214823 5017 scope.go:117] "RemoveContainer" containerID="379a4847043afedd4b8d7f3e7bdabc31ac5b6a7cf19f98ab70dbf37c72b2bdaf" Jan 29 06:56:37 crc kubenswrapper[5017]: E0129 06:56:37.216629 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"379a4847043afedd4b8d7f3e7bdabc31ac5b6a7cf19f98ab70dbf37c72b2bdaf\": container with ID starting with 379a4847043afedd4b8d7f3e7bdabc31ac5b6a7cf19f98ab70dbf37c72b2bdaf not found: ID does not exist" containerID="379a4847043afedd4b8d7f3e7bdabc31ac5b6a7cf19f98ab70dbf37c72b2bdaf" Jan 29 06:56:37 crc kubenswrapper[5017]: I0129 06:56:37.216687 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"379a4847043afedd4b8d7f3e7bdabc31ac5b6a7cf19f98ab70dbf37c72b2bdaf"} err="failed to get container status \"379a4847043afedd4b8d7f3e7bdabc31ac5b6a7cf19f98ab70dbf37c72b2bdaf\": rpc error: code = NotFound desc = could not find container \"379a4847043afedd4b8d7f3e7bdabc31ac5b6a7cf19f98ab70dbf37c72b2bdaf\": container with ID starting with 379a4847043afedd4b8d7f3e7bdabc31ac5b6a7cf19f98ab70dbf37c72b2bdaf not found: ID does not exist" Jan 29 06:56:37 crc kubenswrapper[5017]: I0129 06:56:37.216727 5017 scope.go:117] "RemoveContainer" containerID="0267b992263d01175b34d7a4258615df11682c59c9d38bed4ca8eafd58cb489d" Jan 29 06:56:37 crc kubenswrapper[5017]: E0129 06:56:37.217326 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0267b992263d01175b34d7a4258615df11682c59c9d38bed4ca8eafd58cb489d\": container with ID starting with 0267b992263d01175b34d7a4258615df11682c59c9d38bed4ca8eafd58cb489d not found: ID does not exist" containerID="0267b992263d01175b34d7a4258615df11682c59c9d38bed4ca8eafd58cb489d" Jan 29 06:56:37 crc kubenswrapper[5017]: I0129 06:56:37.217376 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0267b992263d01175b34d7a4258615df11682c59c9d38bed4ca8eafd58cb489d"} err="failed to get container status \"0267b992263d01175b34d7a4258615df11682c59c9d38bed4ca8eafd58cb489d\": rpc error: code = NotFound desc = could not find container \"0267b992263d01175b34d7a4258615df11682c59c9d38bed4ca8eafd58cb489d\": container with ID starting with 0267b992263d01175b34d7a4258615df11682c59c9d38bed4ca8eafd58cb489d not found: ID does not exist" Jan 29 06:56:37 crc kubenswrapper[5017]: I0129 06:56:37.272608 5017 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4f35fce-e9ad-4cf2-8944-638bb6d8df3d-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 06:56:37 crc kubenswrapper[5017]: I0129 06:56:37.424164 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 06:56:37 crc kubenswrapper[5017]: I0129 06:56:37.435863 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 06:56:37 crc kubenswrapper[5017]: I0129 06:56:37.451466 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 29 06:56:37 crc kubenswrapper[5017]: E0129 06:56:37.452065 5017 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="b4f35fce-e9ad-4cf2-8944-638bb6d8df3d" containerName="nova-metadata-metadata" Jan 29 06:56:37 crc kubenswrapper[5017]: I0129 06:56:37.452087 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4f35fce-e9ad-4cf2-8944-638bb6d8df3d" containerName="nova-metadata-metadata" Jan 29 06:56:37 crc kubenswrapper[5017]: E0129 06:56:37.452132 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4f35fce-e9ad-4cf2-8944-638bb6d8df3d" containerName="nova-metadata-log" Jan 29 06:56:37 crc kubenswrapper[5017]: I0129 06:56:37.452140 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4f35fce-e9ad-4cf2-8944-638bb6d8df3d" containerName="nova-metadata-log" Jan 29 06:56:37 crc kubenswrapper[5017]: I0129 06:56:37.452343 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4f35fce-e9ad-4cf2-8944-638bb6d8df3d" containerName="nova-metadata-log" Jan 29 06:56:37 crc kubenswrapper[5017]: I0129 06:56:37.452368 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4f35fce-e9ad-4cf2-8944-638bb6d8df3d" containerName="nova-metadata-metadata" Jan 29 06:56:37 crc kubenswrapper[5017]: I0129 06:56:37.453542 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 06:56:37 crc kubenswrapper[5017]: I0129 06:56:37.457108 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 29 06:56:37 crc kubenswrapper[5017]: I0129 06:56:37.457353 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 29 06:56:37 crc kubenswrapper[5017]: I0129 06:56:37.468852 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 06:56:37 crc kubenswrapper[5017]: I0129 06:56:37.584551 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d94b8e3-f4a6-4fc2-af59-57b33254cd74-config-data\") pod \"nova-metadata-0\" (UID: \"8d94b8e3-f4a6-4fc2-af59-57b33254cd74\") " pod="openstack/nova-metadata-0" Jan 29 06:56:37 crc kubenswrapper[5017]: I0129 06:56:37.584616 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d94b8e3-f4a6-4fc2-af59-57b33254cd74-logs\") pod \"nova-metadata-0\" (UID: \"8d94b8e3-f4a6-4fc2-af59-57b33254cd74\") " pod="openstack/nova-metadata-0" Jan 29 06:56:37 crc kubenswrapper[5017]: I0129 06:56:37.584647 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d94b8e3-f4a6-4fc2-af59-57b33254cd74-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8d94b8e3-f4a6-4fc2-af59-57b33254cd74\") " pod="openstack/nova-metadata-0" Jan 29 06:56:37 crc kubenswrapper[5017]: I0129 06:56:37.584673 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d94b8e3-f4a6-4fc2-af59-57b33254cd74-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"8d94b8e3-f4a6-4fc2-af59-57b33254cd74\") " pod="openstack/nova-metadata-0" Jan 29 06:56:37 crc kubenswrapper[5017]: I0129 06:56:37.584701 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkzdk\" (UniqueName: 
\"kubernetes.io/projected/8d94b8e3-f4a6-4fc2-af59-57b33254cd74-kube-api-access-gkzdk\") pod \"nova-metadata-0\" (UID: \"8d94b8e3-f4a6-4fc2-af59-57b33254cd74\") " pod="openstack/nova-metadata-0" Jan 29 06:56:37 crc kubenswrapper[5017]: I0129 06:56:37.686698 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d94b8e3-f4a6-4fc2-af59-57b33254cd74-config-data\") pod \"nova-metadata-0\" (UID: \"8d94b8e3-f4a6-4fc2-af59-57b33254cd74\") " pod="openstack/nova-metadata-0" Jan 29 06:56:37 crc kubenswrapper[5017]: I0129 06:56:37.686784 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d94b8e3-f4a6-4fc2-af59-57b33254cd74-logs\") pod \"nova-metadata-0\" (UID: \"8d94b8e3-f4a6-4fc2-af59-57b33254cd74\") " pod="openstack/nova-metadata-0" Jan 29 06:56:37 crc kubenswrapper[5017]: I0129 06:56:37.686814 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d94b8e3-f4a6-4fc2-af59-57b33254cd74-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8d94b8e3-f4a6-4fc2-af59-57b33254cd74\") " pod="openstack/nova-metadata-0" Jan 29 06:56:37 crc kubenswrapper[5017]: I0129 06:56:37.686842 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d94b8e3-f4a6-4fc2-af59-57b33254cd74-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"8d94b8e3-f4a6-4fc2-af59-57b33254cd74\") " pod="openstack/nova-metadata-0" Jan 29 06:56:37 crc kubenswrapper[5017]: I0129 06:56:37.686871 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkzdk\" (UniqueName: \"kubernetes.io/projected/8d94b8e3-f4a6-4fc2-af59-57b33254cd74-kube-api-access-gkzdk\") pod \"nova-metadata-0\" (UID: \"8d94b8e3-f4a6-4fc2-af59-57b33254cd74\") " pod="openstack/nova-metadata-0" Jan 29 06:56:37 crc kubenswrapper[5017]: I0129 06:56:37.687418 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d94b8e3-f4a6-4fc2-af59-57b33254cd74-logs\") pod \"nova-metadata-0\" (UID: \"8d94b8e3-f4a6-4fc2-af59-57b33254cd74\") " pod="openstack/nova-metadata-0" Jan 29 06:56:37 crc kubenswrapper[5017]: I0129 06:56:37.692348 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d94b8e3-f4a6-4fc2-af59-57b33254cd74-config-data\") pod \"nova-metadata-0\" (UID: \"8d94b8e3-f4a6-4fc2-af59-57b33254cd74\") " pod="openstack/nova-metadata-0" Jan 29 06:56:37 crc kubenswrapper[5017]: I0129 06:56:37.693800 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d94b8e3-f4a6-4fc2-af59-57b33254cd74-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8d94b8e3-f4a6-4fc2-af59-57b33254cd74\") " pod="openstack/nova-metadata-0" Jan 29 06:56:37 crc kubenswrapper[5017]: I0129 06:56:37.699404 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d94b8e3-f4a6-4fc2-af59-57b33254cd74-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"8d94b8e3-f4a6-4fc2-af59-57b33254cd74\") " pod="openstack/nova-metadata-0" Jan 29 06:56:37 crc kubenswrapper[5017]: I0129 06:56:37.714470 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-gkzdk\" (UniqueName: \"kubernetes.io/projected/8d94b8e3-f4a6-4fc2-af59-57b33254cd74-kube-api-access-gkzdk\") pod \"nova-metadata-0\" (UID: \"8d94b8e3-f4a6-4fc2-af59-57b33254cd74\") " pod="openstack/nova-metadata-0" Jan 29 06:56:37 crc kubenswrapper[5017]: I0129 06:56:37.776392 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 06:56:38 crc kubenswrapper[5017]: I0129 06:56:38.281024 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 06:56:38 crc kubenswrapper[5017]: W0129 06:56:38.285060 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8d94b8e3_f4a6_4fc2_af59_57b33254cd74.slice/crio-0c12aa5033bdeaa062e7b70d4a0c30a48c67fef5b076423507f8c68b0d684b32 WatchSource:0}: Error finding container 0c12aa5033bdeaa062e7b70d4a0c30a48c67fef5b076423507f8c68b0d684b32: Status 404 returned error can't find the container with id 0c12aa5033bdeaa062e7b70d4a0c30a48c67fef5b076423507f8c68b0d684b32 Jan 29 06:56:38 crc kubenswrapper[5017]: I0129 06:56:38.332290 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4f35fce-e9ad-4cf2-8944-638bb6d8df3d" path="/var/lib/kubelet/pods/b4f35fce-e9ad-4cf2-8944-638bb6d8df3d/volumes" Jan 29 06:56:39 crc kubenswrapper[5017]: I0129 06:56:39.134703 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8d94b8e3-f4a6-4fc2-af59-57b33254cd74","Type":"ContainerStarted","Data":"5c19de805cb364596b5009993949de148a0ae873176b17c0e742ef83f5bf9bd2"} Jan 29 06:56:39 crc kubenswrapper[5017]: I0129 06:56:39.135313 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8d94b8e3-f4a6-4fc2-af59-57b33254cd74","Type":"ContainerStarted","Data":"c43149c8d6f94b05d7adff740855d48a649c2f7ea9f1f958ed0221ac67602ca1"} Jan 29 06:56:39 crc kubenswrapper[5017]: I0129 06:56:39.135328 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8d94b8e3-f4a6-4fc2-af59-57b33254cd74","Type":"ContainerStarted","Data":"0c12aa5033bdeaa062e7b70d4a0c30a48c67fef5b076423507f8c68b0d684b32"} Jan 29 06:56:39 crc kubenswrapper[5017]: I0129 06:56:39.167447 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.167421557 podStartE2EDuration="2.167421557s" podCreationTimestamp="2026-01-29 06:56:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:56:39.15904718 +0000 UTC m=+1285.533494790" watchObservedRunningTime="2026-01-29 06:56:39.167421557 +0000 UTC m=+1285.541869167" Jan 29 06:56:40 crc kubenswrapper[5017]: I0129 06:56:40.770707 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 29 06:56:42 crc kubenswrapper[5017]: I0129 06:56:42.776859 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 29 06:56:42 crc kubenswrapper[5017]: I0129 06:56:42.789271 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 29 06:56:44 crc kubenswrapper[5017]: I0129 06:56:44.832162 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 29 06:56:44 crc kubenswrapper[5017]: I0129 06:56:44.833401 5017 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 29 06:56:45 crc kubenswrapper[5017]: I0129 06:56:45.770601 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 29 06:56:45 crc kubenswrapper[5017]: I0129 06:56:45.803889 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 29 06:56:45 crc kubenswrapper[5017]: I0129 06:56:45.850172 5017 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="7d3e4a4d-ee9a-4345-b8e5-a40416771caf" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.204:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 06:56:45 crc kubenswrapper[5017]: I0129 06:56:45.850162 5017 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="7d3e4a4d-ee9a-4345-b8e5-a40416771caf" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.204:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 06:56:46 crc kubenswrapper[5017]: I0129 06:56:46.250900 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 29 06:56:47 crc kubenswrapper[5017]: I0129 06:56:47.777065 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 29 06:56:47 crc kubenswrapper[5017]: I0129 06:56:47.777491 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 29 06:56:48 crc kubenswrapper[5017]: I0129 06:56:48.796129 5017 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="8d94b8e3-f4a6-4fc2-af59-57b33254cd74" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.206:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 06:56:48 crc kubenswrapper[5017]: I0129 06:56:48.796129 5017 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="8d94b8e3-f4a6-4fc2-af59-57b33254cd74" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.206:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 06:56:53 crc kubenswrapper[5017]: I0129 06:56:53.386403 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 29 06:56:54 crc kubenswrapper[5017]: I0129 06:56:54.839449 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 29 06:56:54 crc kubenswrapper[5017]: I0129 06:56:54.840532 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 29 06:56:54 crc kubenswrapper[5017]: I0129 06:56:54.841293 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 29 06:56:54 crc kubenswrapper[5017]: I0129 06:56:54.848682 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 29 06:56:55 crc kubenswrapper[5017]: I0129 06:56:55.313657 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 29 06:56:55 crc kubenswrapper[5017]: I0129 06:56:55.326387 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/nova-api-0" Jan 29 06:56:56 crc kubenswrapper[5017]: I0129 06:56:56.539210 5017 patch_prober.go:28] interesting pod/machine-config-daemon-895pl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 06:56:56 crc kubenswrapper[5017]: I0129 06:56:56.539305 5017 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 06:56:57 crc kubenswrapper[5017]: I0129 06:56:57.783974 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 29 06:56:57 crc kubenswrapper[5017]: I0129 06:56:57.785486 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 29 06:56:57 crc kubenswrapper[5017]: I0129 06:56:57.790521 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 29 06:56:58 crc kubenswrapper[5017]: I0129 06:56:58.368404 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 29 06:57:16 crc kubenswrapper[5017]: I0129 06:57:16.271835 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Jan 29 06:57:16 crc kubenswrapper[5017]: I0129 06:57:16.273938 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="abd151c3-f255-4647-a923-3176a7dae25a" containerName="openstackclient" containerID="cri-o://937202b40c1e1fb6b564b0a310d47bb37664160e11586ea04719f1fc9662dc75" gracePeriod=2 Jan 29 06:57:16 crc kubenswrapper[5017]: I0129 06:57:16.315264 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Jan 29 06:57:16 crc kubenswrapper[5017]: I0129 06:57:16.471035 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-381c-account-create-update-ngzmx"] Jan 29 06:57:16 crc kubenswrapper[5017]: E0129 06:57:16.471559 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abd151c3-f255-4647-a923-3176a7dae25a" containerName="openstackclient" Jan 29 06:57:16 crc kubenswrapper[5017]: I0129 06:57:16.471586 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="abd151c3-f255-4647-a923-3176a7dae25a" containerName="openstackclient" Jan 29 06:57:16 crc kubenswrapper[5017]: I0129 06:57:16.471783 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="abd151c3-f255-4647-a923-3176a7dae25a" containerName="openstackclient" Jan 29 06:57:16 crc kubenswrapper[5017]: I0129 06:57:16.472565 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-381c-account-create-update-ngzmx" Jan 29 06:57:16 crc kubenswrapper[5017]: I0129 06:57:16.476898 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 29 06:57:16 crc kubenswrapper[5017]: I0129 06:57:16.494057 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-01d7-account-create-update-n9thm"] Jan 29 06:57:16 crc kubenswrapper[5017]: I0129 06:57:16.495518 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-01d7-account-create-update-n9thm" Jan 29 06:57:16 crc kubenswrapper[5017]: I0129 06:57:16.499106 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 29 06:57:16 crc kubenswrapper[5017]: I0129 06:57:16.503044 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 29 06:57:16 crc kubenswrapper[5017]: I0129 06:57:16.513644 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-01d7-account-create-update-n9thm"] Jan 29 06:57:16 crc kubenswrapper[5017]: I0129 06:57:16.533241 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-381c-account-create-update-ngzmx"] Jan 29 06:57:16 crc kubenswrapper[5017]: I0129 06:57:16.596032 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-4qvq2"] Jan 29 06:57:16 crc kubenswrapper[5017]: I0129 06:57:16.597553 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-4qvq2" Jan 29 06:57:16 crc kubenswrapper[5017]: I0129 06:57:16.603950 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 29 06:57:16 crc kubenswrapper[5017]: I0129 06:57:16.625719 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-01d7-account-create-update-bpmlf"] Jan 29 06:57:16 crc kubenswrapper[5017]: I0129 06:57:16.634523 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlgtf\" (UniqueName: \"kubernetes.io/projected/615b2757-5eab-4454-95da-663755846932-kube-api-access-jlgtf\") pod \"cinder-01d7-account-create-update-n9thm\" (UID: \"615b2757-5eab-4454-95da-663755846932\") " pod="openstack/cinder-01d7-account-create-update-n9thm" Jan 29 06:57:16 crc kubenswrapper[5017]: I0129 06:57:16.634733 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66475828-326a-4b57-baea-e209e519d639-operator-scripts\") pod \"barbican-381c-account-create-update-ngzmx\" (UID: \"66475828-326a-4b57-baea-e209e519d639\") " pod="openstack/barbican-381c-account-create-update-ngzmx" Jan 29 06:57:16 crc kubenswrapper[5017]: I0129 06:57:16.634885 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8db4\" (UniqueName: \"kubernetes.io/projected/66475828-326a-4b57-baea-e209e519d639-kube-api-access-g8db4\") pod \"barbican-381c-account-create-update-ngzmx\" (UID: \"66475828-326a-4b57-baea-e209e519d639\") " pod="openstack/barbican-381c-account-create-update-ngzmx" Jan 29 06:57:16 crc kubenswrapper[5017]: I0129 06:57:16.635175 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/615b2757-5eab-4454-95da-663755846932-operator-scripts\") pod \"cinder-01d7-account-create-update-n9thm\" (UID: \"615b2757-5eab-4454-95da-663755846932\") " pod="openstack/cinder-01d7-account-create-update-n9thm" Jan 29 06:57:16 crc kubenswrapper[5017]: E0129 06:57:16.636042 5017 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 29 06:57:16 crc kubenswrapper[5017]: E0129 06:57:16.636099 5017 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/d30b013f-453f-4282-8b22-2a5270027828-config-data podName:d30b013f-453f-4282-8b22-2a5270027828 nodeName:}" failed. No retries permitted until 2026-01-29 06:57:17.136079764 +0000 UTC m=+1323.510527374 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/d30b013f-453f-4282-8b22-2a5270027828-config-data") pod "rabbitmq-cell1-server-0" (UID: "d30b013f-453f-4282-8b22-2a5270027828") : configmap "rabbitmq-cell1-config-data" not found Jan 29 06:57:16 crc kubenswrapper[5017]: I0129 06:57:16.656419 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-01d7-account-create-update-bpmlf"] Jan 29 06:57:16 crc kubenswrapper[5017]: I0129 06:57:16.664662 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-381c-account-create-update-d4zjm"] Jan 29 06:57:16 crc kubenswrapper[5017]: I0129 06:57:16.682341 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-4qvq2"] Jan 29 06:57:16 crc kubenswrapper[5017]: I0129 06:57:16.698433 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-381c-account-create-update-d4zjm"] Jan 29 06:57:16 crc kubenswrapper[5017]: I0129 06:57:16.714925 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-2d72-account-create-update-hj9h9"] Jan 29 06:57:16 crc kubenswrapper[5017]: I0129 06:57:16.716677 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-2d72-account-create-update-hj9h9" Jan 29 06:57:16 crc kubenswrapper[5017]: I0129 06:57:16.749523 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/615b2757-5eab-4454-95da-663755846932-operator-scripts\") pod \"cinder-01d7-account-create-update-n9thm\" (UID: \"615b2757-5eab-4454-95da-663755846932\") " pod="openstack/cinder-01d7-account-create-update-n9thm" Jan 29 06:57:16 crc kubenswrapper[5017]: I0129 06:57:16.749610 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24xhj\" (UniqueName: \"kubernetes.io/projected/7133c436-5656-4d57-aca3-64e9542ef299-kube-api-access-24xhj\") pod \"neutron-2d72-account-create-update-hj9h9\" (UID: \"7133c436-5656-4d57-aca3-64e9542ef299\") " pod="openstack/neutron-2d72-account-create-update-hj9h9" Jan 29 06:57:16 crc kubenswrapper[5017]: I0129 06:57:16.749661 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlgtf\" (UniqueName: \"kubernetes.io/projected/615b2757-5eab-4454-95da-663755846932-kube-api-access-jlgtf\") pod \"cinder-01d7-account-create-update-n9thm\" (UID: \"615b2757-5eab-4454-95da-663755846932\") " pod="openstack/cinder-01d7-account-create-update-n9thm" Jan 29 06:57:16 crc kubenswrapper[5017]: I0129 06:57:16.749724 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7133c436-5656-4d57-aca3-64e9542ef299-operator-scripts\") pod \"neutron-2d72-account-create-update-hj9h9\" (UID: \"7133c436-5656-4d57-aca3-64e9542ef299\") " pod="openstack/neutron-2d72-account-create-update-hj9h9" Jan 29 06:57:16 crc kubenswrapper[5017]: I0129 06:57:16.749759 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/66475828-326a-4b57-baea-e209e519d639-operator-scripts\") pod \"barbican-381c-account-create-update-ngzmx\" (UID: \"66475828-326a-4b57-baea-e209e519d639\") " pod="openstack/barbican-381c-account-create-update-ngzmx" Jan 29 06:57:16 crc kubenswrapper[5017]: I0129 06:57:16.749780 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3bf19cd1-b93c-449d-ba04-7fecd2ab65e2-operator-scripts\") pod \"root-account-create-update-4qvq2\" (UID: \"3bf19cd1-b93c-449d-ba04-7fecd2ab65e2\") " pod="openstack/root-account-create-update-4qvq2" Jan 29 06:57:16 crc kubenswrapper[5017]: I0129 06:57:16.749833 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8db4\" (UniqueName: \"kubernetes.io/projected/66475828-326a-4b57-baea-e209e519d639-kube-api-access-g8db4\") pod \"barbican-381c-account-create-update-ngzmx\" (UID: \"66475828-326a-4b57-baea-e209e519d639\") " pod="openstack/barbican-381c-account-create-update-ngzmx" Jan 29 06:57:16 crc kubenswrapper[5017]: I0129 06:57:16.749854 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlbp4\" (UniqueName: \"kubernetes.io/projected/3bf19cd1-b93c-449d-ba04-7fecd2ab65e2-kube-api-access-dlbp4\") pod \"root-account-create-update-4qvq2\" (UID: \"3bf19cd1-b93c-449d-ba04-7fecd2ab65e2\") " pod="openstack/root-account-create-update-4qvq2" Jan 29 06:57:16 crc kubenswrapper[5017]: I0129 06:57:16.750564 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/615b2757-5eab-4454-95da-663755846932-operator-scripts\") pod \"cinder-01d7-account-create-update-n9thm\" (UID: \"615b2757-5eab-4454-95da-663755846932\") " pod="openstack/cinder-01d7-account-create-update-n9thm" Jan 29 06:57:16 crc kubenswrapper[5017]: I0129 06:57:16.751360 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66475828-326a-4b57-baea-e209e519d639-operator-scripts\") pod \"barbican-381c-account-create-update-ngzmx\" (UID: \"66475828-326a-4b57-baea-e209e519d639\") " pod="openstack/barbican-381c-account-create-update-ngzmx" Jan 29 06:57:16 crc kubenswrapper[5017]: I0129 06:57:16.752298 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 29 06:57:16 crc kubenswrapper[5017]: I0129 06:57:16.761128 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-btm9w"] Jan 29 06:57:16 crc kubenswrapper[5017]: I0129 06:57:16.776172 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-btm9w"] Jan 29 06:57:16 crc kubenswrapper[5017]: I0129 06:57:16.805120 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8db4\" (UniqueName: \"kubernetes.io/projected/66475828-326a-4b57-baea-e209e519d639-kube-api-access-g8db4\") pod \"barbican-381c-account-create-update-ngzmx\" (UID: \"66475828-326a-4b57-baea-e209e519d639\") " pod="openstack/barbican-381c-account-create-update-ngzmx" Jan 29 06:57:16 crc kubenswrapper[5017]: I0129 06:57:16.810608 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"] Jan 29 06:57:16 crc kubenswrapper[5017]: I0129 06:57:16.811072 5017 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/ovn-northd-0" podUID="02965a93-9a4c-4118-a030-0271f53a61a1" containerName="ovn-northd" containerID="cri-o://5856b1139cb9ac964f5c7a59c84bd77817a7664b0ef60849ed25c5857c226d37" gracePeriod=30 Jan 29 06:57:16 crc kubenswrapper[5017]: I0129 06:57:16.811256 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="02965a93-9a4c-4118-a030-0271f53a61a1" containerName="openstack-network-exporter" containerID="cri-o://2d5d0c8760d913b9ab3eeaa636e31dd474ed2dad3b92862aa8e197299b972bee" gracePeriod=30 Jan 29 06:57:16 crc kubenswrapper[5017]: I0129 06:57:16.831075 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlgtf\" (UniqueName: \"kubernetes.io/projected/615b2757-5eab-4454-95da-663755846932-kube-api-access-jlgtf\") pod \"cinder-01d7-account-create-update-n9thm\" (UID: \"615b2757-5eab-4454-95da-663755846932\") " pod="openstack/cinder-01d7-account-create-update-n9thm" Jan 29 06:57:16 crc kubenswrapper[5017]: I0129 06:57:16.831576 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-381c-account-create-update-ngzmx" Jan 29 06:57:16 crc kubenswrapper[5017]: I0129 06:57:16.851246 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7133c436-5656-4d57-aca3-64e9542ef299-operator-scripts\") pod \"neutron-2d72-account-create-update-hj9h9\" (UID: \"7133c436-5656-4d57-aca3-64e9542ef299\") " pod="openstack/neutron-2d72-account-create-update-hj9h9" Jan 29 06:57:16 crc kubenswrapper[5017]: I0129 06:57:16.851733 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3bf19cd1-b93c-449d-ba04-7fecd2ab65e2-operator-scripts\") pod \"root-account-create-update-4qvq2\" (UID: \"3bf19cd1-b93c-449d-ba04-7fecd2ab65e2\") " pod="openstack/root-account-create-update-4qvq2" Jan 29 06:57:16 crc kubenswrapper[5017]: I0129 06:57:16.851809 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlbp4\" (UniqueName: \"kubernetes.io/projected/3bf19cd1-b93c-449d-ba04-7fecd2ab65e2-kube-api-access-dlbp4\") pod \"root-account-create-update-4qvq2\" (UID: \"3bf19cd1-b93c-449d-ba04-7fecd2ab65e2\") " pod="openstack/root-account-create-update-4qvq2" Jan 29 06:57:16 crc kubenswrapper[5017]: I0129 06:57:16.851877 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24xhj\" (UniqueName: \"kubernetes.io/projected/7133c436-5656-4d57-aca3-64e9542ef299-kube-api-access-24xhj\") pod \"neutron-2d72-account-create-update-hj9h9\" (UID: \"7133c436-5656-4d57-aca3-64e9542ef299\") " pod="openstack/neutron-2d72-account-create-update-hj9h9" Jan 29 06:57:16 crc kubenswrapper[5017]: I0129 06:57:16.853006 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7133c436-5656-4d57-aca3-64e9542ef299-operator-scripts\") pod \"neutron-2d72-account-create-update-hj9h9\" (UID: \"7133c436-5656-4d57-aca3-64e9542ef299\") " pod="openstack/neutron-2d72-account-create-update-hj9h9" Jan 29 06:57:16 crc kubenswrapper[5017]: I0129 06:57:16.853591 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3bf19cd1-b93c-449d-ba04-7fecd2ab65e2-operator-scripts\") pod \"root-account-create-update-4qvq2\" (UID: 
\"3bf19cd1-b93c-449d-ba04-7fecd2ab65e2\") " pod="openstack/root-account-create-update-4qvq2" Jan 29 06:57:16 crc kubenswrapper[5017]: I0129 06:57:16.859712 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-01d7-account-create-update-n9thm" Jan 29 06:57:16 crc kubenswrapper[5017]: I0129 06:57:16.914323 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-2d72-account-create-update-hj9h9"] Jan 29 06:57:16 crc kubenswrapper[5017]: I0129 06:57:16.926829 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24xhj\" (UniqueName: \"kubernetes.io/projected/7133c436-5656-4d57-aca3-64e9542ef299-kube-api-access-24xhj\") pod \"neutron-2d72-account-create-update-hj9h9\" (UID: \"7133c436-5656-4d57-aca3-64e9542ef299\") " pod="openstack/neutron-2d72-account-create-update-hj9h9" Jan 29 06:57:16 crc kubenswrapper[5017]: E0129 06:57:16.963672 5017 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5856b1139cb9ac964f5c7a59c84bd77817a7664b0ef60849ed25c5857c226d37" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Jan 29 06:57:16 crc kubenswrapper[5017]: I0129 06:57:16.965257 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-2d72-account-create-update-cdsn8"] Jan 29 06:57:16 crc kubenswrapper[5017]: I0129 06:57:16.983416 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlbp4\" (UniqueName: \"kubernetes.io/projected/3bf19cd1-b93c-449d-ba04-7fecd2ab65e2-kube-api-access-dlbp4\") pod \"root-account-create-update-4qvq2\" (UID: \"3bf19cd1-b93c-449d-ba04-7fecd2ab65e2\") " pod="openstack/root-account-create-update-4qvq2" Jan 29 06:57:16 crc kubenswrapper[5017]: I0129 06:57:16.996225 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-2d72-account-create-update-cdsn8"] Jan 29 06:57:17 crc kubenswrapper[5017]: I0129 06:57:17.009318 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 29 06:57:17 crc kubenswrapper[5017]: I0129 06:57:17.030611 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-tmkrp"] Jan 29 06:57:17 crc kubenswrapper[5017]: E0129 06:57:17.040531 5017 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5856b1139cb9ac964f5c7a59c84bd77817a7664b0ef60849ed25c5857c226d37" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Jan 29 06:57:17 crc kubenswrapper[5017]: E0129 06:57:17.060679 5017 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5856b1139cb9ac964f5c7a59c84bd77817a7664b0ef60849ed25c5857c226d37" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Jan 29 06:57:17 crc kubenswrapper[5017]: E0129 06:57:17.060766 5017 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-northd-0" podUID="02965a93-9a4c-4118-a030-0271f53a61a1" containerName="ovn-northd" Jan 29 06:57:17 crc kubenswrapper[5017]: I0129 
06:57:17.077035 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 29 06:57:17 crc kubenswrapper[5017]: I0129 06:57:17.077746 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="58a02d03-f3a8-4193-ba1d-623ecaa62fe9" containerName="openstack-network-exporter" containerID="cri-o://dd1494bb8f06d376a772d0890f42484f96047c14209cf64f5a4fb14363143583" gracePeriod=300 Jan 29 06:57:17 crc kubenswrapper[5017]: I0129 06:57:17.089475 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-2d72-account-create-update-hj9h9" Jan 29 06:57:17 crc kubenswrapper[5017]: E0129 06:57:17.092101 5017 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 29 06:57:17 crc kubenswrapper[5017]: E0129 06:57:17.092171 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a-config-data podName:5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a nodeName:}" failed. No retries permitted until 2026-01-29 06:57:17.592150291 +0000 UTC m=+1323.966597901 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a-config-data") pod "rabbitmq-server-0" (UID: "5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a") : configmap "rabbitmq-config-data" not found Jan 29 06:57:17 crc kubenswrapper[5017]: I0129 06:57:17.121697 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-tmkrp"] Jan 29 06:57:17 crc kubenswrapper[5017]: I0129 06:57:17.147474 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-a25d-account-create-update-fzx89"] Jan 29 06:57:17 crc kubenswrapper[5017]: I0129 06:57:17.178000 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-c99rc"] Jan 29 06:57:17 crc kubenswrapper[5017]: E0129 06:57:17.194372 5017 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 29 06:57:17 crc kubenswrapper[5017]: E0129 06:57:17.194470 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d30b013f-453f-4282-8b22-2a5270027828-config-data podName:d30b013f-453f-4282-8b22-2a5270027828 nodeName:}" failed. No retries permitted until 2026-01-29 06:57:18.194452395 +0000 UTC m=+1324.568900005 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/d30b013f-453f-4282-8b22-2a5270027828-config-data") pod "rabbitmq-cell1-server-0" (UID: "d30b013f-453f-4282-8b22-2a5270027828") : configmap "rabbitmq-cell1-config-data" not found Jan 29 06:57:18 crc kubenswrapper[5017]: I0129 06:57:17.267306 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-4qvq2" Jan 29 06:57:18 crc kubenswrapper[5017]: I0129 06:57:17.347817 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-c99rc"] Jan 29 06:57:18 crc kubenswrapper[5017]: I0129 06:57:17.438109 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-a25d-account-create-update-fzx89"] Jan 29 06:57:18 crc kubenswrapper[5017]: I0129 06:57:17.468783 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 29 06:57:18 crc kubenswrapper[5017]: I0129 06:57:17.507393 5017 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="openstack/ovsdbserver-sb-0" secret="" err="secret \"ovncluster-ovndbcluster-sb-dockercfg-hngbq\" not found" Jan 29 06:57:18 crc kubenswrapper[5017]: I0129 06:57:17.516735 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-2qnw2"] Jan 29 06:57:18 crc kubenswrapper[5017]: I0129 06:57:17.534115 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-2qnw2"] Jan 29 06:57:18 crc kubenswrapper[5017]: I0129 06:57:17.589051 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-mrhnf"] Jan 29 06:57:18 crc kubenswrapper[5017]: I0129 06:57:17.658892 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-bnw77"] Jan 29 06:57:18 crc kubenswrapper[5017]: I0129 06:57:17.659129 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-metrics-bnw77" podUID="572c6985-85a2-4a6d-8581-75b8c6b87322" containerName="openstack-network-exporter" containerID="cri-o://5d0c3be2b0978eb8f6117644b27656fb862fbe247178603731b87946cc5367a6" gracePeriod=30 Jan 29 06:57:18 crc kubenswrapper[5017]: E0129 06:57:17.706366 5017 configmap.go:193] Couldn't get configMap openstack/ovndbcluster-sb-scripts: configmap "ovndbcluster-sb-scripts" not found Jan 29 06:57:18 crc kubenswrapper[5017]: E0129 06:57:17.706466 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/08c15cf8-f386-428a-a94a-c33598b182a9-scripts podName:08c15cf8-f386-428a-a94a-c33598b182a9 nodeName:}" failed. No retries permitted until 2026-01-29 06:57:18.206448468 +0000 UTC m=+1324.580896078 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "scripts" (UniqueName: "kubernetes.io/configmap/08c15cf8-f386-428a-a94a-c33598b182a9-scripts") pod "ovsdbserver-sb-0" (UID: "08c15cf8-f386-428a-a94a-c33598b182a9") : configmap "ovndbcluster-sb-scripts" not found Jan 29 06:57:18 crc kubenswrapper[5017]: E0129 06:57:17.706869 5017 configmap.go:193] Couldn't get configMap openstack/ovndbcluster-sb-config: configmap "ovndbcluster-sb-config" not found Jan 29 06:57:18 crc kubenswrapper[5017]: E0129 06:57:17.706976 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/08c15cf8-f386-428a-a94a-c33598b182a9-config podName:08c15cf8-f386-428a-a94a-c33598b182a9 nodeName:}" failed. No retries permitted until 2026-01-29 06:57:18.20693245 +0000 UTC m=+1324.581380060 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/08c15cf8-f386-428a-a94a-c33598b182a9-config") pod "ovsdbserver-sb-0" (UID: "08c15cf8-f386-428a-a94a-c33598b182a9") : configmap "ovndbcluster-sb-config" not found Jan 29 06:57:18 crc kubenswrapper[5017]: E0129 06:57:17.707609 5017 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 29 06:57:18 crc kubenswrapper[5017]: E0129 06:57:17.707637 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a-config-data podName:5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a nodeName:}" failed. No retries permitted until 2026-01-29 06:57:18.707629178 +0000 UTC m=+1325.082076788 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a-config-data") pod "rabbitmq-server-0" (UID: "5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a") : configmap "rabbitmq-config-data" not found Jan 29 06:57:18 crc kubenswrapper[5017]: I0129 06:57:17.730085 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-rtkrb"] Jan 29 06:57:18 crc kubenswrapper[5017]: I0129 06:57:17.740315 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_02965a93-9a4c-4118-a030-0271f53a61a1/ovn-northd/0.log" Jan 29 06:57:18 crc kubenswrapper[5017]: I0129 06:57:17.740354 5017 generic.go:334] "Generic (PLEG): container finished" podID="02965a93-9a4c-4118-a030-0271f53a61a1" containerID="2d5d0c8760d913b9ab3eeaa636e31dd474ed2dad3b92862aa8e197299b972bee" exitCode=2 Jan 29 06:57:18 crc kubenswrapper[5017]: I0129 06:57:17.740375 5017 generic.go:334] "Generic (PLEG): container finished" podID="02965a93-9a4c-4118-a030-0271f53a61a1" containerID="5856b1139cb9ac964f5c7a59c84bd77817a7664b0ef60849ed25c5857c226d37" exitCode=143 Jan 29 06:57:18 crc kubenswrapper[5017]: I0129 06:57:17.740433 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"02965a93-9a4c-4118-a030-0271f53a61a1","Type":"ContainerDied","Data":"2d5d0c8760d913b9ab3eeaa636e31dd474ed2dad3b92862aa8e197299b972bee"} Jan 29 06:57:18 crc kubenswrapper[5017]: I0129 06:57:17.740460 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"02965a93-9a4c-4118-a030-0271f53a61a1","Type":"ContainerDied","Data":"5856b1139cb9ac964f5c7a59c84bd77817a7664b0ef60849ed25c5857c226d37"} Jan 29 06:57:18 crc kubenswrapper[5017]: I0129 06:57:17.752425 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0fca-account-create-update-7k6jq"] Jan 29 06:57:18 crc kubenswrapper[5017]: I0129 06:57:17.754266 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0fca-account-create-update-7k6jq" Jan 29 06:57:18 crc kubenswrapper[5017]: I0129 06:57:17.761019 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-fd15-account-create-update-q6477"] Jan 29 06:57:18 crc kubenswrapper[5017]: I0129 06:57:17.762385 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-fd15-account-create-update-q6477" Jan 29 06:57:18 crc kubenswrapper[5017]: I0129 06:57:17.773316 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0fca-account-create-update-7k6jq"] Jan 29 06:57:18 crc kubenswrapper[5017]: I0129 06:57:17.774750 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 29 06:57:18 crc kubenswrapper[5017]: I0129 06:57:17.783036 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 29 06:57:18 crc kubenswrapper[5017]: I0129 06:57:17.790390 5017 generic.go:334] "Generic (PLEG): container finished" podID="58a02d03-f3a8-4193-ba1d-623ecaa62fe9" containerID="dd1494bb8f06d376a772d0890f42484f96047c14209cf64f5a4fb14363143583" exitCode=2 Jan 29 06:57:18 crc kubenswrapper[5017]: I0129 06:57:17.790652 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"58a02d03-f3a8-4193-ba1d-623ecaa62fe9","Type":"ContainerDied","Data":"dd1494bb8f06d376a772d0890f42484f96047c14209cf64f5a4fb14363143583"} Jan 29 06:57:18 crc kubenswrapper[5017]: I0129 06:57:17.790750 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="08c15cf8-f386-428a-a94a-c33598b182a9" containerName="openstack-network-exporter" containerID="cri-o://970018d4bd2130b0277d105c14cb252f0b9fe0e15de053637b5663a6a7609e01" gracePeriod=300 Jan 29 06:57:18 crc kubenswrapper[5017]: I0129 06:57:17.813007 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-fd15-account-create-update-q6477"] Jan 29 06:57:18 crc kubenswrapper[5017]: I0129 06:57:17.845035 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-q5xff"] Jan 29 06:57:18 crc kubenswrapper[5017]: I0129 06:57:17.908013 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-q5xff"] Jan 29 06:57:18 crc kubenswrapper[5017]: I0129 06:57:17.922354 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9v8g\" (UniqueName: \"kubernetes.io/projected/0adabddf-74aa-416a-afef-b24b39897f9c-kube-api-access-p9v8g\") pod \"nova-cell1-fd15-account-create-update-q6477\" (UID: \"0adabddf-74aa-416a-afef-b24b39897f9c\") " pod="openstack/nova-cell1-fd15-account-create-update-q6477" Jan 29 06:57:18 crc kubenswrapper[5017]: I0129 06:57:17.922426 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkrn9\" (UniqueName: \"kubernetes.io/projected/c55eb047-3255-4a79-8e32-dfb786de8794-kube-api-access-pkrn9\") pod \"nova-api-0fca-account-create-update-7k6jq\" (UID: \"c55eb047-3255-4a79-8e32-dfb786de8794\") " pod="openstack/nova-api-0fca-account-create-update-7k6jq" Jan 29 06:57:18 crc kubenswrapper[5017]: I0129 06:57:17.922467 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c55eb047-3255-4a79-8e32-dfb786de8794-operator-scripts\") pod \"nova-api-0fca-account-create-update-7k6jq\" (UID: \"c55eb047-3255-4a79-8e32-dfb786de8794\") " pod="openstack/nova-api-0fca-account-create-update-7k6jq" Jan 29 06:57:18 crc kubenswrapper[5017]: I0129 06:57:17.922515 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/0adabddf-74aa-416a-afef-b24b39897f9c-operator-scripts\") pod \"nova-cell1-fd15-account-create-update-q6477\" (UID: \"0adabddf-74aa-416a-afef-b24b39897f9c\") " pod="openstack/nova-cell1-fd15-account-create-update-q6477" Jan 29 06:57:18 crc kubenswrapper[5017]: I0129 06:57:17.958225 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-wjsjf"] Jan 29 06:57:18 crc kubenswrapper[5017]: I0129 06:57:18.010035 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-6d6c4c6dc8-5jbvh"] Jan 29 06:57:18 crc kubenswrapper[5017]: I0129 06:57:18.010391 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-6d6c4c6dc8-5jbvh" podUID="dc01ff67-baeb-47d1-90f5-9cff65c9dffa" containerName="placement-log" containerID="cri-o://2eee62f312708ba7438eddc1dabc0de687bb3ae24d41ed266160597a9d245df9" gracePeriod=30 Jan 29 06:57:18 crc kubenswrapper[5017]: I0129 06:57:18.010992 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-6d6c4c6dc8-5jbvh" podUID="dc01ff67-baeb-47d1-90f5-9cff65c9dffa" containerName="placement-api" containerID="cri-o://443395d71d852c3ec070ffebcf4c6e95bc2745cdc77bf998d3b62968c01056ef" gracePeriod=30 Jan 29 06:57:18 crc kubenswrapper[5017]: I0129 06:57:18.024416 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9v8g\" (UniqueName: \"kubernetes.io/projected/0adabddf-74aa-416a-afef-b24b39897f9c-kube-api-access-p9v8g\") pod \"nova-cell1-fd15-account-create-update-q6477\" (UID: \"0adabddf-74aa-416a-afef-b24b39897f9c\") " pod="openstack/nova-cell1-fd15-account-create-update-q6477" Jan 29 06:57:18 crc kubenswrapper[5017]: I0129 06:57:18.024505 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkrn9\" (UniqueName: \"kubernetes.io/projected/c55eb047-3255-4a79-8e32-dfb786de8794-kube-api-access-pkrn9\") pod \"nova-api-0fca-account-create-update-7k6jq\" (UID: \"c55eb047-3255-4a79-8e32-dfb786de8794\") " pod="openstack/nova-api-0fca-account-create-update-7k6jq" Jan 29 06:57:18 crc kubenswrapper[5017]: I0129 06:57:18.024597 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c55eb047-3255-4a79-8e32-dfb786de8794-operator-scripts\") pod \"nova-api-0fca-account-create-update-7k6jq\" (UID: \"c55eb047-3255-4a79-8e32-dfb786de8794\") " pod="openstack/nova-api-0fca-account-create-update-7k6jq" Jan 29 06:57:18 crc kubenswrapper[5017]: I0129 06:57:18.024663 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0adabddf-74aa-416a-afef-b24b39897f9c-operator-scripts\") pod \"nova-cell1-fd15-account-create-update-q6477\" (UID: \"0adabddf-74aa-416a-afef-b24b39897f9c\") " pod="openstack/nova-cell1-fd15-account-create-update-q6477" Jan 29 06:57:18 crc kubenswrapper[5017]: I0129 06:57:18.025623 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0adabddf-74aa-416a-afef-b24b39897f9c-operator-scripts\") pod \"nova-cell1-fd15-account-create-update-q6477\" (UID: \"0adabddf-74aa-416a-afef-b24b39897f9c\") " pod="openstack/nova-cell1-fd15-account-create-update-q6477" Jan 29 06:57:18 crc kubenswrapper[5017]: I0129 06:57:18.026758 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c55eb047-3255-4a79-8e32-dfb786de8794-operator-scripts\") pod \"nova-api-0fca-account-create-update-7k6jq\" (UID: \"c55eb047-3255-4a79-8e32-dfb786de8794\") " pod="openstack/nova-api-0fca-account-create-update-7k6jq" Jan 29 06:57:18 crc kubenswrapper[5017]: I0129 06:57:18.068055 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-wjsjf"] Jan 29 06:57:18 crc kubenswrapper[5017]: I0129 06:57:18.071857 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkrn9\" (UniqueName: \"kubernetes.io/projected/c55eb047-3255-4a79-8e32-dfb786de8794-kube-api-access-pkrn9\") pod \"nova-api-0fca-account-create-update-7k6jq\" (UID: \"c55eb047-3255-4a79-8e32-dfb786de8794\") " pod="openstack/nova-api-0fca-account-create-update-7k6jq" Jan 29 06:57:18 crc kubenswrapper[5017]: I0129 06:57:18.072779 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9v8g\" (UniqueName: \"kubernetes.io/projected/0adabddf-74aa-416a-afef-b24b39897f9c-kube-api-access-p9v8g\") pod \"nova-cell1-fd15-account-create-update-q6477\" (UID: \"0adabddf-74aa-416a-afef-b24b39897f9c\") " pod="openstack/nova-cell1-fd15-account-create-update-q6477" Jan 29 06:57:18 crc kubenswrapper[5017]: I0129 06:57:18.093839 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-fcd6f8f8f-cxtdk"] Jan 29 06:57:18 crc kubenswrapper[5017]: I0129 06:57:18.094215 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-fcd6f8f8f-cxtdk" podUID="c653a323-c9c2-42f2-a2af-125828234475" containerName="dnsmasq-dns" containerID="cri-o://fa179775b1ca9fee80f8f1451a4e94f0855c85741c538bb2b667dddb2502f32c" gracePeriod=10 Jan 29 06:57:18 crc kubenswrapper[5017]: I0129 06:57:18.139237 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 06:57:18 crc kubenswrapper[5017]: I0129 06:57:18.141562 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="9c69fc6f-43e9-4fe5-b964-8db89e6ab354" containerName="cinder-scheduler" containerID="cri-o://27c14ee221c2ba154a659bac681ce15f66d62c55c0cd0468426e11a64f23d5e9" gracePeriod=30 Jan 29 06:57:18 crc kubenswrapper[5017]: I0129 06:57:18.142004 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="9c69fc6f-43e9-4fe5-b964-8db89e6ab354" containerName="probe" containerID="cri-o://999837a7b08863c4bad372817a69589db1cc60b86b13c6914e11866e71643157" gracePeriod=30 Jan 29 06:57:18 crc kubenswrapper[5017]: I0129 06:57:18.229720 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 29 06:57:18 crc kubenswrapper[5017]: I0129 06:57:18.230086 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="71f6aede-754b-476f-8082-78f0e50b6a39" containerName="cinder-api-log" containerID="cri-o://9244b1fe8ffaffe1ec210b4f9fe46f1fb5d4f2443ca7e7e703e6ca10fd8766d0" gracePeriod=30 Jan 29 06:57:18 crc kubenswrapper[5017]: I0129 06:57:18.230646 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="71f6aede-754b-476f-8082-78f0e50b6a39" containerName="cinder-api" containerID="cri-o://1b90244a79764c9e3b9a5b69c14c39546fc533d032150453b20e901a8805b3fe" gracePeriod=30 Jan 29 06:57:18 crc kubenswrapper[5017]: E0129 06:57:18.233883 5017 
configmap.go:193] Couldn't get configMap openstack/ovndbcluster-sb-config: configmap "ovndbcluster-sb-config" not found Jan 29 06:57:18 crc kubenswrapper[5017]: E0129 06:57:18.233982 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/08c15cf8-f386-428a-a94a-c33598b182a9-config podName:08c15cf8-f386-428a-a94a-c33598b182a9 nodeName:}" failed. No retries permitted until 2026-01-29 06:57:19.233946075 +0000 UTC m=+1325.608393685 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/08c15cf8-f386-428a-a94a-c33598b182a9-config") pod "ovsdbserver-sb-0" (UID: "08c15cf8-f386-428a-a94a-c33598b182a9") : configmap "ovndbcluster-sb-config" not found Jan 29 06:57:18 crc kubenswrapper[5017]: E0129 06:57:18.234023 5017 configmap.go:193] Couldn't get configMap openstack/ovndbcluster-sb-scripts: configmap "ovndbcluster-sb-scripts" not found Jan 29 06:57:18 crc kubenswrapper[5017]: E0129 06:57:18.234043 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/08c15cf8-f386-428a-a94a-c33598b182a9-scripts podName:08c15cf8-f386-428a-a94a-c33598b182a9 nodeName:}" failed. No retries permitted until 2026-01-29 06:57:19.234036377 +0000 UTC m=+1325.608483987 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "scripts" (UniqueName: "kubernetes.io/configmap/08c15cf8-f386-428a-a94a-c33598b182a9-scripts") pod "ovsdbserver-sb-0" (UID: "08c15cf8-f386-428a-a94a-c33598b182a9") : configmap "ovndbcluster-sb-scripts" not found Jan 29 06:57:18 crc kubenswrapper[5017]: E0129 06:57:18.234070 5017 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 29 06:57:18 crc kubenswrapper[5017]: E0129 06:57:18.234094 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d30b013f-453f-4282-8b22-2a5270027828-config-data podName:d30b013f-453f-4282-8b22-2a5270027828 nodeName:}" failed. No retries permitted until 2026-01-29 06:57:20.234082288 +0000 UTC m=+1326.608529898 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/d30b013f-453f-4282-8b22-2a5270027828-config-data") pod "rabbitmq-cell1-server-0" (UID: "d30b013f-453f-4282-8b22-2a5270027828") : configmap "rabbitmq-cell1-config-data" not found Jan 29 06:57:18 crc kubenswrapper[5017]: I0129 06:57:18.251415 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0fca-account-create-update-pgcmp"] Jan 29 06:57:18 crc kubenswrapper[5017]: I0129 06:57:18.266285 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0fca-account-create-update-pgcmp"] Jan 29 06:57:18 crc kubenswrapper[5017]: I0129 06:57:18.306020 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-fd15-account-create-update-cx6sh"] Jan 29 06:57:18 crc kubenswrapper[5017]: I0129 06:57:18.310560 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-fd15-account-create-update-cx6sh"] Jan 29 06:57:18 crc kubenswrapper[5017]: I0129 06:57:18.481692 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="58a02d03-f3a8-4193-ba1d-623ecaa62fe9" containerName="ovsdbserver-nb" containerID="cri-o://8b0e655613ff88e4336cd67c78361a1d00ccdd95612e304ec5f2445083d8f735" gracePeriod=299 Jan 29 06:57:18 crc kubenswrapper[5017]: I0129 06:57:18.547666 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="08c15cf8-f386-428a-a94a-c33598b182a9" containerName="ovsdbserver-sb" containerID="cri-o://96a4605d01e3b489afb2e1c96ec4c37e5d1cf81a538304c82d2cf924de1a26ac" gracePeriod=300 Jan 29 06:57:18 crc kubenswrapper[5017]: E0129 06:57:18.627385 5017 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 96a4605d01e3b489afb2e1c96ec4c37e5d1cf81a538304c82d2cf924de1a26ac is running failed: container process not found" containerID="96a4605d01e3b489afb2e1c96ec4c37e5d1cf81a538304c82d2cf924de1a26ac" cmd=["/usr/bin/pidof","ovsdb-server"] Jan 29 06:57:18 crc kubenswrapper[5017]: E0129 06:57:18.665676 5017 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 96a4605d01e3b489afb2e1c96ec4c37e5d1cf81a538304c82d2cf924de1a26ac is running failed: container process not found" containerID="96a4605d01e3b489afb2e1c96ec4c37e5d1cf81a538304c82d2cf924de1a26ac" cmd=["/usr/bin/pidof","ovsdb-server"] Jan 29 06:57:18 crc kubenswrapper[5017]: E0129 06:57:18.673756 5017 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 96a4605d01e3b489afb2e1c96ec4c37e5d1cf81a538304c82d2cf924de1a26ac is running failed: container process not found" containerID="96a4605d01e3b489afb2e1c96ec4c37e5d1cf81a538304c82d2cf924de1a26ac" cmd=["/usr/bin/pidof","ovsdb-server"] Jan 29 06:57:18 crc kubenswrapper[5017]: E0129 06:57:18.673832 5017 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 96a4605d01e3b489afb2e1c96ec4c37e5d1cf81a538304c82d2cf924de1a26ac is running failed: container process not found" probeType="Readiness" pod="openstack/ovsdbserver-sb-0" podUID="08c15cf8-f386-428a-a94a-c33598b182a9" containerName="ovsdbserver-sb" Jan 29 06:57:18 crc kubenswrapper[5017]: I0129 06:57:18.701288 5017 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2179f134-a047-4000-b58b-4755df9f56b7" path="/var/lib/kubelet/pods/2179f134-a047-4000-b58b-4755df9f56b7/volumes" Jan 29 06:57:18 crc kubenswrapper[5017]: I0129 06:57:18.713412 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31e5ea57-0c73-4c76-bbcb-6d3b665b6226" path="/var/lib/kubelet/pods/31e5ea57-0c73-4c76-bbcb-6d3b665b6226/volumes" Jan 29 06:57:18 crc kubenswrapper[5017]: I0129 06:57:18.770912 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cc3448b-f305-47b7-b2f9-32b61477ac21" path="/var/lib/kubelet/pods/3cc3448b-f305-47b7-b2f9-32b61477ac21/volumes" Jan 29 06:57:18 crc kubenswrapper[5017]: E0129 06:57:18.772069 5017 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 29 06:57:18 crc kubenswrapper[5017]: I0129 06:57:18.772151 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d79ee9c-086e-405e-a8d6-478823059f00" path="/var/lib/kubelet/pods/7d79ee9c-086e-405e-a8d6-478823059f00/volumes" Jan 29 06:57:18 crc kubenswrapper[5017]: E0129 06:57:18.772160 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a-config-data podName:5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a nodeName:}" failed. No retries permitted until 2026-01-29 06:57:20.772137036 +0000 UTC m=+1327.146584646 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a-config-data") pod "rabbitmq-server-0" (UID: "5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a") : configmap "rabbitmq-config-data" not found Jan 29 06:57:18 crc kubenswrapper[5017]: I0129 06:57:18.772740 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93ce45d0-5d2b-42bf-8601-da8dbce0d3da" path="/var/lib/kubelet/pods/93ce45d0-5d2b-42bf-8601-da8dbce0d3da/volumes" Jan 29 06:57:18 crc kubenswrapper[5017]: I0129 06:57:18.776627 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af5eea12-20bf-45b6-b989-f77529ea2f04" path="/var/lib/kubelet/pods/af5eea12-20bf-45b6-b989-f77529ea2f04/volumes" Jan 29 06:57:18 crc kubenswrapper[5017]: I0129 06:57:18.777382 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b67befda-4537-4dc6-bf3d-c7f971a7b825" path="/var/lib/kubelet/pods/b67befda-4537-4dc6-bf3d-c7f971a7b825/volumes" Jan 29 06:57:18 crc kubenswrapper[5017]: I0129 06:57:18.778064 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c06828d9-6c4d-4228-adc6-3788f22ae732" path="/var/lib/kubelet/pods/c06828d9-6c4d-4228-adc6-3788f22ae732/volumes" Jan 29 06:57:18 crc kubenswrapper[5017]: I0129 06:57:18.782455 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d90f5e4e-4114-4f2d-9dc8-ff0b0167cdd1" path="/var/lib/kubelet/pods/d90f5e4e-4114-4f2d-9dc8-ff0b0167cdd1/volumes" Jan 29 06:57:18 crc kubenswrapper[5017]: I0129 06:57:18.784435 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6c059fe-689f-478d-8e75-83d893147d85" path="/var/lib/kubelet/pods/e6c059fe-689f-478d-8e75-83d893147d85/volumes" Jan 29 06:57:18 crc kubenswrapper[5017]: I0129 06:57:18.785189 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea1b3f91-cc75-43b7-838d-837b273b3509" path="/var/lib/kubelet/pods/ea1b3f91-cc75-43b7-838d-837b273b3509/volumes" Jan 29 06:57:18 crc kubenswrapper[5017]: I0129 
06:57:18.785850 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ecc785a3-2b30-4a73-b98a-1f6d405efa60" path="/var/lib/kubelet/pods/ecc785a3-2b30-4a73-b98a-1f6d405efa60/volumes" Jan 29 06:57:18 crc kubenswrapper[5017]: I0129 06:57:18.791794 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-wdd2j"] Jan 29 06:57:18 crc kubenswrapper[5017]: I0129 06:57:18.791827 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-9727-account-create-update-khlxg"] Jan 29 06:57:18 crc kubenswrapper[5017]: I0129 06:57:18.791844 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-wdd2j"] Jan 29 06:57:18 crc kubenswrapper[5017]: I0129 06:57:18.791865 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-9727-account-create-update-khlxg"] Jan 29 06:57:18 crc kubenswrapper[5017]: I0129 06:57:18.791882 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-fc5n9"] Jan 29 06:57:18 crc kubenswrapper[5017]: I0129 06:57:18.791895 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-fc5n9"] Jan 29 06:57:18 crc kubenswrapper[5017]: I0129 06:57:18.791908 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-s5ffm"] Jan 29 06:57:18 crc kubenswrapper[5017]: I0129 06:57:18.791920 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-s5ffm"] Jan 29 06:57:18 crc kubenswrapper[5017]: I0129 06:57:18.791933 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 06:57:18 crc kubenswrapper[5017]: I0129 06:57:18.792268 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="e41c27f8-0c27-4e3d-83b1-62a61abb4faf" containerName="glance-log" containerID="cri-o://e97d4813efcf24ccacbdbd3a38be06d62a0d548850293f33dfd1414e7cf3dbe7" gracePeriod=30 Jan 29 06:57:18 crc kubenswrapper[5017]: I0129 06:57:18.792456 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="e41c27f8-0c27-4e3d-83b1-62a61abb4faf" containerName="glance-httpd" containerID="cri-o://dbfb5839d0de6937d94e1b06808176fec9fce89e3d52e262a3d51db47ee776af" gracePeriod=30 Jan 29 06:57:18 crc kubenswrapper[5017]: I0129 06:57:18.798360 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-4038-account-create-update-sv5xr"] Jan 29 06:57:18 crc kubenswrapper[5017]: I0129 06:57:18.837237 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-4038-account-create-update-sv5xr"] Jan 29 06:57:18 crc kubenswrapper[5017]: I0129 06:57:18.856032 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5b9cd4b645-x8pg4"] Jan 29 06:57:18 crc kubenswrapper[5017]: I0129 06:57:18.856426 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5b9cd4b645-x8pg4" podUID="c4fe6966-2467-4c3b-b907-d3a8e88eb497" containerName="neutron-api" containerID="cri-o://f39df6f24b1e86a7986bd4e56e3006d14c28347df43c84bc5c98621eef1720f6" gracePeriod=30 Jan 29 06:57:18 crc kubenswrapper[5017]: I0129 06:57:18.857049 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5b9cd4b645-x8pg4" podUID="c4fe6966-2467-4c3b-b907-d3a8e88eb497" containerName="neutron-httpd" 
containerID="cri-o://351bf740841608e8a491520445ac2fda75daae4f0798aac354283b20b04c8d8b" gracePeriod=30 Jan 29 06:57:18 crc kubenswrapper[5017]: I0129 06:57:18.875782 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-wxns2"] Jan 29 06:57:18 crc kubenswrapper[5017]: I0129 06:57:18.886984 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-wxns2"] Jan 29 06:57:18 crc kubenswrapper[5017]: I0129 06:57:18.903727 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-bnw77_572c6985-85a2-4a6d-8581-75b8c6b87322/openstack-network-exporter/0.log" Jan 29 06:57:18 crc kubenswrapper[5017]: I0129 06:57:18.903786 5017 generic.go:334] "Generic (PLEG): container finished" podID="572c6985-85a2-4a6d-8581-75b8c6b87322" containerID="5d0c3be2b0978eb8f6117644b27656fb862fbe247178603731b87946cc5367a6" exitCode=2 Jan 29 06:57:18 crc kubenswrapper[5017]: I0129 06:57:18.903892 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-bnw77" event={"ID":"572c6985-85a2-4a6d-8581-75b8c6b87322","Type":"ContainerDied","Data":"5d0c3be2b0978eb8f6117644b27656fb862fbe247178603731b87946cc5367a6"} Jan 29 06:57:18 crc kubenswrapper[5017]: I0129 06:57:18.941385 5017 generic.go:334] "Generic (PLEG): container finished" podID="abd151c3-f255-4647-a923-3176a7dae25a" containerID="937202b40c1e1fb6b564b0a310d47bb37664160e11586ea04719f1fc9662dc75" exitCode=137 Jan 29 06:57:18 crc kubenswrapper[5017]: I0129 06:57:18.950421 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 29 06:57:18 crc kubenswrapper[5017]: I0129 06:57:18.971778 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_08c15cf8-f386-428a-a94a-c33598b182a9/ovsdbserver-sb/0.log" Jan 29 06:57:18 crc kubenswrapper[5017]: I0129 06:57:18.972569 5017 generic.go:334] "Generic (PLEG): container finished" podID="08c15cf8-f386-428a-a94a-c33598b182a9" containerID="970018d4bd2130b0277d105c14cb252f0b9fe0e15de053637b5663a6a7609e01" exitCode=2 Jan 29 06:57:18 crc kubenswrapper[5017]: I0129 06:57:18.972653 5017 generic.go:334] "Generic (PLEG): container finished" podID="08c15cf8-f386-428a-a94a-c33598b182a9" containerID="96a4605d01e3b489afb2e1c96ec4c37e5d1cf81a538304c82d2cf924de1a26ac" exitCode=143 Jan 29 06:57:18 crc kubenswrapper[5017]: I0129 06:57:18.972843 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"08c15cf8-f386-428a-a94a-c33598b182a9","Type":"ContainerDied","Data":"970018d4bd2130b0277d105c14cb252f0b9fe0e15de053637b5663a6a7609e01"} Jan 29 06:57:18 crc kubenswrapper[5017]: I0129 06:57:18.973001 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"08c15cf8-f386-428a-a94a-c33598b182a9","Type":"ContainerDied","Data":"96a4605d01e3b489afb2e1c96ec4c37e5d1cf81a538304c82d2cf924de1a26ac"} Jan 29 06:57:18 crc kubenswrapper[5017]: I0129 06:57:18.980907 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"] Jan 29 06:57:18 crc kubenswrapper[5017]: I0129 06:57:18.981458 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="6d082326-495c-4078-974e-714379243884" containerName="account-server" containerID="cri-o://78c56e210bdc11cf82c13cdc564f8b588d7f490c0c73661ab691a65e75fe9237" gracePeriod=30 Jan 29 06:57:18 crc kubenswrapper[5017]: I0129 06:57:18.981944 5017 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="6d082326-495c-4078-974e-714379243884" containerName="swift-recon-cron" containerID="cri-o://a2aa1b791b58fdf107ef7908d8702f09965cf473d76f85f3f451236315223345" gracePeriod=30 Jan 29 06:57:18 crc kubenswrapper[5017]: I0129 06:57:18.982009 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="6d082326-495c-4078-974e-714379243884" containerName="rsync" containerID="cri-o://eea4192399327817cb3d1432e2e02524e5b714a0b135c1bb086ec0b7a261cb45" gracePeriod=30 Jan 29 06:57:18 crc kubenswrapper[5017]: I0129 06:57:18.982042 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="6d082326-495c-4078-974e-714379243884" containerName="object-expirer" containerID="cri-o://bde85bd88a643b945fa8485a5e4e6b486a320b6804c92a8b2d9a9aa7bebed684" gracePeriod=30 Jan 29 06:57:18 crc kubenswrapper[5017]: I0129 06:57:18.982073 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="6d082326-495c-4078-974e-714379243884" containerName="object-updater" containerID="cri-o://1a932253de89bf41d237a07b2d680b87b0b80d0fe5212e5b251a5b4e577d88f0" gracePeriod=30 Jan 29 06:57:18 crc kubenswrapper[5017]: I0129 06:57:18.982111 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="6d082326-495c-4078-974e-714379243884" containerName="object-auditor" containerID="cri-o://54137e09fc7407811fd958b7fb19f8b5d07304823df63ae17c627a6330f09db1" gracePeriod=30 Jan 29 06:57:18 crc kubenswrapper[5017]: I0129 06:57:18.982146 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="6d082326-495c-4078-974e-714379243884" containerName="object-replicator" containerID="cri-o://b9a4e6b5332e31b3b4226cbbd2d49f4c6e57429f3879656243b9be909902ea39" gracePeriod=30 Jan 29 06:57:18 crc kubenswrapper[5017]: I0129 06:57:18.982180 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="6d082326-495c-4078-974e-714379243884" containerName="object-server" containerID="cri-o://ffd302baed269f819a892350ed39e7c2a2d614097fd561de04df743407c07d40" gracePeriod=30 Jan 29 06:57:18 crc kubenswrapper[5017]: I0129 06:57:18.982239 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="6d082326-495c-4078-974e-714379243884" containerName="container-updater" containerID="cri-o://3f1da0c6b7bd9861d571d1a8d0e937d175d1e5c1f0e245d87e1db566ec6c9a8c" gracePeriod=30 Jan 29 06:57:18 crc kubenswrapper[5017]: I0129 06:57:18.982289 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="6d082326-495c-4078-974e-714379243884" containerName="container-auditor" containerID="cri-o://fd46ff39c6745d2a3794eec673948c7c169964d18cc7c9ac4b9f4a6f60045a35" gracePeriod=30 Jan 29 06:57:18 crc kubenswrapper[5017]: I0129 06:57:18.982321 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="6d082326-495c-4078-974e-714379243884" containerName="container-replicator" containerID="cri-o://c652a43e8305c1a9910a6bd5b5ee06ee2fb25e0723a0ebf769650ee3997e0aeb" gracePeriod=30 Jan 29 06:57:18 crc kubenswrapper[5017]: I0129 06:57:18.982353 5017 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="openstack/swift-storage-0" podUID="6d082326-495c-4078-974e-714379243884" containerName="container-server" containerID="cri-o://18d811ebf9ab9b398503916cdeff50dc63ec6ab67d8237d525beefff9dcb167e" gracePeriod=30 Jan 29 06:57:18 crc kubenswrapper[5017]: I0129 06:57:18.982382 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="6d082326-495c-4078-974e-714379243884" containerName="account-reaper" containerID="cri-o://06cfdaa53a75f84fd7f6330f5db9c5c157e9d48c62c75510196e0abae551fc02" gracePeriod=30 Jan 29 06:57:18 crc kubenswrapper[5017]: I0129 06:57:18.982413 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="6d082326-495c-4078-974e-714379243884" containerName="account-auditor" containerID="cri-o://999b2815ecbfda6809422021c729d05da84256a8ca21f081cadf4bf9e49655ea" gracePeriod=30 Jan 29 06:57:18 crc kubenswrapper[5017]: I0129 06:57:18.982446 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="6d082326-495c-4078-974e-714379243884" containerName="account-replicator" containerID="cri-o://51ab0b4a34c76b32e8b560b2df9c2b6bedc67ec5c4d63d10fdd36cb7f8ef1edc" gracePeriod=30 Jan 29 06:57:19 crc kubenswrapper[5017]: I0129 06:57:19.011275 5017 generic.go:334] "Generic (PLEG): container finished" podID="c653a323-c9c2-42f2-a2af-125828234475" containerID="fa179775b1ca9fee80f8f1451a4e94f0855c85741c538bb2b667dddb2502f32c" exitCode=0 Jan 29 06:57:19 crc kubenswrapper[5017]: I0129 06:57:19.011366 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcd6f8f8f-cxtdk" event={"ID":"c653a323-c9c2-42f2-a2af-125828234475","Type":"ContainerDied","Data":"fa179775b1ca9fee80f8f1451a4e94f0855c85741c538bb2b667dddb2502f32c"} Jan 29 06:57:19 crc kubenswrapper[5017]: I0129 06:57:19.059544 5017 generic.go:334] "Generic (PLEG): container finished" podID="58a02d03-f3a8-4193-ba1d-623ecaa62fe9" containerID="8b0e655613ff88e4336cd67c78361a1d00ccdd95612e304ec5f2445083d8f735" exitCode=0 Jan 29 06:57:19 crc kubenswrapper[5017]: I0129 06:57:19.060316 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"58a02d03-f3a8-4193-ba1d-623ecaa62fe9","Type":"ContainerDied","Data":"8b0e655613ff88e4336cd67c78361a1d00ccdd95612e304ec5f2445083d8f735"} Jan 29 06:57:19 crc kubenswrapper[5017]: I0129 06:57:19.077245 5017 generic.go:334] "Generic (PLEG): container finished" podID="dc01ff67-baeb-47d1-90f5-9cff65c9dffa" containerID="2eee62f312708ba7438eddc1dabc0de687bb3ae24d41ed266160597a9d245df9" exitCode=143 Jan 29 06:57:19 crc kubenswrapper[5017]: I0129 06:57:19.077375 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 06:57:19 crc kubenswrapper[5017]: I0129 06:57:19.077411 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6d6c4c6dc8-5jbvh" event={"ID":"dc01ff67-baeb-47d1-90f5-9cff65c9dffa","Type":"ContainerDied","Data":"2eee62f312708ba7438eddc1dabc0de687bb3ae24d41ed266160597a9d245df9"} Jan 29 06:57:19 crc kubenswrapper[5017]: I0129 06:57:19.077668 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="9df7814f-338e-40fb-95aa-f93dfa8307d6" containerName="glance-log" containerID="cri-o://9faf0d173c0b59296fed359030686d4a096d13b44a22d7aecdca4136e3a15a7a" gracePeriod=30 Jan 29 06:57:19 crc kubenswrapper[5017]: I0129 
06:57:19.082239 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="9df7814f-338e-40fb-95aa-f93dfa8307d6" containerName="glance-httpd" containerID="cri-o://0c316a09cd764cd0d7717d88d306f363aee8a406a7aab49e4974af5176af2934" gracePeriod=30 Jan 29 06:57:19 crc kubenswrapper[5017]: I0129 06:57:19.100939 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-5997q"] Jan 29 06:57:19 crc kubenswrapper[5017]: I0129 06:57:19.112262 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-5997q"] Jan 29 06:57:19 crc kubenswrapper[5017]: I0129 06:57:19.121137 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-hbwpv"] Jan 29 06:57:19 crc kubenswrapper[5017]: I0129 06:57:19.145767 5017 generic.go:334] "Generic (PLEG): container finished" podID="71f6aede-754b-476f-8082-78f0e50b6a39" containerID="9244b1fe8ffaffe1ec210b4f9fe46f1fb5d4f2443ca7e7e703e6ca10fd8766d0" exitCode=143 Jan 29 06:57:19 crc kubenswrapper[5017]: I0129 06:57:19.145861 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"71f6aede-754b-476f-8082-78f0e50b6a39","Type":"ContainerDied","Data":"9244b1fe8ffaffe1ec210b4f9fe46f1fb5d4f2443ca7e7e703e6ca10fd8766d0"} Jan 29 06:57:19 crc kubenswrapper[5017]: I0129 06:57:19.150013 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-hbwpv"] Jan 29 06:57:19 crc kubenswrapper[5017]: I0129 06:57:19.158583 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="d30b013f-453f-4282-8b22-2a5270027828" containerName="rabbitmq" containerID="cri-o://023038cdbf9696dd93c47d604450e36f0a426b06c4dd797fb3a475e6afec3f3d" gracePeriod=604800 Jan 29 06:57:19 crc kubenswrapper[5017]: E0129 06:57:19.203839 5017 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod58a02d03_f3a8_4193_ba1d_623ecaa62fe9.slice/crio-conmon-8b0e655613ff88e4336cd67c78361a1d00ccdd95612e304ec5f2445083d8f735.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddc01ff67_baeb_47d1_90f5_9cff65c9dffa.slice/crio-conmon-2eee62f312708ba7438eddc1dabc0de687bb3ae24d41ed266160597a9d245df9.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod08c15cf8_f386_428a_a94a_c33598b182a9.slice/crio-96a4605d01e3b489afb2e1c96ec4c37e5d1cf81a538304c82d2cf924de1a26ac.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod71f6aede_754b_476f_8082_78f0e50b6a39.slice/crio-conmon-9244b1fe8ffaffe1ec210b4f9fe46f1fb5d4f2443ca7e7e703e6ca10fd8766d0.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod58a02d03_f3a8_4193_ba1d_623ecaa62fe9.slice/crio-conmon-dd1494bb8f06d376a772d0890f42484f96047c14209cf64f5a4fb14363143583.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod58a02d03_f3a8_4193_ba1d_623ecaa62fe9.slice/crio-8b0e655613ff88e4336cd67c78361a1d00ccdd95612e304ec5f2445083d8f735.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podabd151c3_f255_4647_a923_3176a7dae25a.slice/crio-937202b40c1e1fb6b564b0a310d47bb37664160e11586ea04719f1fc9662dc75.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod572c6985_85a2_4a6d_8581_75b8c6b87322.slice/crio-5d0c3be2b0978eb8f6117644b27656fb862fbe247178603731b87946cc5367a6.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode41c27f8_0c27_4e3d_83b1_62a61abb4faf.slice/crio-conmon-e97d4813efcf24ccacbdbd3a38be06d62a0d548850293f33dfd1414e7cf3dbe7.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc653a323_c9c2_42f2_a2af_125828234475.slice/crio-conmon-fa179775b1ca9fee80f8f1451a4e94f0855c85741c538bb2b667dddb2502f32c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podabd151c3_f255_4647_a923_3176a7dae25a.slice/crio-conmon-937202b40c1e1fb6b564b0a310d47bb37664160e11586ea04719f1fc9662dc75.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc653a323_c9c2_42f2_a2af_125828234475.slice/crio-fa179775b1ca9fee80f8f1451a4e94f0855c85741c538bb2b667dddb2502f32c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod08c15cf8_f386_428a_a94a_c33598b182a9.slice/crio-conmon-96a4605d01e3b489afb2e1c96ec4c37e5d1cf81a538304c82d2cf924de1a26ac.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod08c15cf8_f386_428a_a94a_c33598b182a9.slice/crio-970018d4bd2130b0277d105c14cb252f0b9fe0e15de053637b5663a6a7609e01.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc4fe6966_2467_4c3b_b907_d3a8e88eb497.slice/crio-351bf740841608e8a491520445ac2fda75daae4f0798aac354283b20b04c8d8b.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod02965a93_9a4c_4118_a030_0271f53a61a1.slice/crio-conmon-5856b1139cb9ac964f5c7a59c84bd77817a7664b0ef60849ed25c5857c226d37.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod08c15cf8_f386_428a_a94a_c33598b182a9.slice/crio-conmon-970018d4bd2130b0277d105c14cb252f0b9fe0e15de053637b5663a6a7609e01.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode41c27f8_0c27_4e3d_83b1_62a61abb4faf.slice/crio-e97d4813efcf24ccacbdbd3a38be06d62a0d548850293f33dfd1414e7cf3dbe7.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod572c6985_85a2_4a6d_8581_75b8c6b87322.slice/crio-conmon-5d0c3be2b0978eb8f6117644b27656fb862fbe247178603731b87946cc5367a6.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6d082326_495c_4078_974e_714379243884.slice/crio-conmon-b9a4e6b5332e31b3b4226cbbd2d49f4c6e57429f3879656243b9be909902ea39.scope\": RecentStats: unable to find data in memory cache]" Jan 29 06:57:19 crc kubenswrapper[5017]: I0129 06:57:19.218153 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/cinder-01d7-account-create-update-n9thm"] Jan 29 06:57:19 crc kubenswrapper[5017]: I0129 06:57:19.243432 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-bc7969485-9cbzw"] Jan 29 06:57:19 crc kubenswrapper[5017]: I0129 06:57:19.244529 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-bc7969485-9cbzw" podUID="a0801349-3235-495b-9747-8ce025aad149" containerName="proxy-httpd" containerID="cri-o://dc162fbc000aa89fe180adad58f6d207b657135b6fb5dd62ecbf93d7c4e1bbe0" gracePeriod=30 Jan 29 06:57:19 crc kubenswrapper[5017]: I0129 06:57:19.245534 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-bc7969485-9cbzw" podUID="a0801349-3235-495b-9747-8ce025aad149" containerName="proxy-server" containerID="cri-o://9d3b90e2a526d03bc001996d30c041b59b2169975c9838eadd4d27870c43ad13" gracePeriod=30 Jan 29 06:57:19 crc kubenswrapper[5017]: I0129 06:57:19.246383 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0fca-account-create-update-7k6jq" Jan 29 06:57:19 crc kubenswrapper[5017]: I0129 06:57:19.262530 5017 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="openstack/nova-cell1-fd15-account-create-update-q6477" secret="" err="secret \"galera-openstack-cell1-dockercfg-wjt4v\" not found" Jan 29 06:57:19 crc kubenswrapper[5017]: I0129 06:57:19.262610 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-fd15-account-create-update-q6477" Jan 29 06:57:19 crc kubenswrapper[5017]: I0129 06:57:19.264767 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-gmq8s"] Jan 29 06:57:19 crc kubenswrapper[5017]: I0129 06:57:19.280181 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-gmq8s"] Jan 29 06:57:19 crc kubenswrapper[5017]: I0129 06:57:19.290105 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_02965a93-9a4c-4118-a030-0271f53a61a1/ovn-northd/0.log" Jan 29 06:57:19 crc kubenswrapper[5017]: I0129 06:57:19.290272 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 29 06:57:19 crc kubenswrapper[5017]: I0129 06:57:19.306522 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-2d72-account-create-update-hj9h9"] Jan 29 06:57:19 crc kubenswrapper[5017]: I0129 06:57:19.310286 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 29 06:57:19 crc kubenswrapper[5017]: I0129 06:57:19.310782 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-fcd6f8f8f-cxtdk" Jan 29 06:57:19 crc kubenswrapper[5017]: I0129 06:57:19.330006 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c653a323-c9c2-42f2-a2af-125828234475-dns-swift-storage-0\") pod \"c653a323-c9c2-42f2-a2af-125828234475\" (UID: \"c653a323-c9c2-42f2-a2af-125828234475\") " Jan 29 06:57:19 crc kubenswrapper[5017]: I0129 06:57:19.330477 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/abd151c3-f255-4647-a923-3176a7dae25a-openstack-config-secret\") pod \"abd151c3-f255-4647-a923-3176a7dae25a\" (UID: \"abd151c3-f255-4647-a923-3176a7dae25a\") " Jan 29 06:57:19 crc kubenswrapper[5017]: I0129 06:57:19.330552 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ncf76\" (UniqueName: \"kubernetes.io/projected/c653a323-c9c2-42f2-a2af-125828234475-kube-api-access-ncf76\") pod \"c653a323-c9c2-42f2-a2af-125828234475\" (UID: \"c653a323-c9c2-42f2-a2af-125828234475\") " Jan 29 06:57:19 crc kubenswrapper[5017]: I0129 06:57:19.330584 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02965a93-9a4c-4118-a030-0271f53a61a1-config\") pod \"02965a93-9a4c-4118-a030-0271f53a61a1\" (UID: \"02965a93-9a4c-4118-a030-0271f53a61a1\") " Jan 29 06:57:19 crc kubenswrapper[5017]: I0129 06:57:19.330638 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/abd151c3-f255-4647-a923-3176a7dae25a-openstack-config\") pod \"abd151c3-f255-4647-a923-3176a7dae25a\" (UID: \"abd151c3-f255-4647-a923-3176a7dae25a\") " Jan 29 06:57:19 crc kubenswrapper[5017]: I0129 06:57:19.330683 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/02965a93-9a4c-4118-a030-0271f53a61a1-scripts\") pod \"02965a93-9a4c-4118-a030-0271f53a61a1\" (UID: \"02965a93-9a4c-4118-a030-0271f53a61a1\") " Jan 29 06:57:19 crc kubenswrapper[5017]: I0129 06:57:19.330725 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/02965a93-9a4c-4118-a030-0271f53a61a1-ovn-rundir\") pod \"02965a93-9a4c-4118-a030-0271f53a61a1\" (UID: \"02965a93-9a4c-4118-a030-0271f53a61a1\") " Jan 29 06:57:19 crc kubenswrapper[5017]: I0129 06:57:19.330754 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c653a323-c9c2-42f2-a2af-125828234475-ovsdbserver-nb\") pod \"c653a323-c9c2-42f2-a2af-125828234475\" (UID: \"c653a323-c9c2-42f2-a2af-125828234475\") " Jan 29 06:57:19 crc kubenswrapper[5017]: I0129 06:57:19.330794 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/02965a93-9a4c-4118-a030-0271f53a61a1-ovn-northd-tls-certs\") pod \"02965a93-9a4c-4118-a030-0271f53a61a1\" (UID: \"02965a93-9a4c-4118-a030-0271f53a61a1\") " Jan 29 06:57:19 crc kubenswrapper[5017]: I0129 06:57:19.330834 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/02965a93-9a4c-4118-a030-0271f53a61a1-metrics-certs-tls-certs\") pod 
\"02965a93-9a4c-4118-a030-0271f53a61a1\" (UID: \"02965a93-9a4c-4118-a030-0271f53a61a1\") " Jan 29 06:57:19 crc kubenswrapper[5017]: I0129 06:57:19.330881 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abd151c3-f255-4647-a923-3176a7dae25a-combined-ca-bundle\") pod \"abd151c3-f255-4647-a923-3176a7dae25a\" (UID: \"abd151c3-f255-4647-a923-3176a7dae25a\") " Jan 29 06:57:19 crc kubenswrapper[5017]: I0129 06:57:19.330911 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02965a93-9a4c-4118-a030-0271f53a61a1-combined-ca-bundle\") pod \"02965a93-9a4c-4118-a030-0271f53a61a1\" (UID: \"02965a93-9a4c-4118-a030-0271f53a61a1\") " Jan 29 06:57:19 crc kubenswrapper[5017]: I0129 06:57:19.330992 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c653a323-c9c2-42f2-a2af-125828234475-ovsdbserver-sb\") pod \"c653a323-c9c2-42f2-a2af-125828234475\" (UID: \"c653a323-c9c2-42f2-a2af-125828234475\") " Jan 29 06:57:19 crc kubenswrapper[5017]: I0129 06:57:19.331043 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qp29b\" (UniqueName: \"kubernetes.io/projected/abd151c3-f255-4647-a923-3176a7dae25a-kube-api-access-qp29b\") pod \"abd151c3-f255-4647-a923-3176a7dae25a\" (UID: \"abd151c3-f255-4647-a923-3176a7dae25a\") " Jan 29 06:57:19 crc kubenswrapper[5017]: I0129 06:57:19.331084 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c653a323-c9c2-42f2-a2af-125828234475-config\") pod \"c653a323-c9c2-42f2-a2af-125828234475\" (UID: \"c653a323-c9c2-42f2-a2af-125828234475\") " Jan 29 06:57:19 crc kubenswrapper[5017]: I0129 06:57:19.331143 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c653a323-c9c2-42f2-a2af-125828234475-dns-svc\") pod \"c653a323-c9c2-42f2-a2af-125828234475\" (UID: \"c653a323-c9c2-42f2-a2af-125828234475\") " Jan 29 06:57:19 crc kubenswrapper[5017]: I0129 06:57:19.331187 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k5qh2\" (UniqueName: \"kubernetes.io/projected/02965a93-9a4c-4118-a030-0271f53a61a1-kube-api-access-k5qh2\") pod \"02965a93-9a4c-4118-a030-0271f53a61a1\" (UID: \"02965a93-9a4c-4118-a030-0271f53a61a1\") " Jan 29 06:57:19 crc kubenswrapper[5017]: I0129 06:57:19.345114 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-t6sx7"] Jan 29 06:57:19 crc kubenswrapper[5017]: E0129 06:57:19.352577 5017 configmap.go:193] Couldn't get configMap openstack/ovndbcluster-sb-scripts: configmap "ovndbcluster-sb-scripts" not found Jan 29 06:57:19 crc kubenswrapper[5017]: E0129 06:57:19.352660 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/08c15cf8-f386-428a-a94a-c33598b182a9-scripts podName:08c15cf8-f386-428a-a94a-c33598b182a9 nodeName:}" failed. No retries permitted until 2026-01-29 06:57:21.352640225 +0000 UTC m=+1327.727087835 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "scripts" (UniqueName: "kubernetes.io/configmap/08c15cf8-f386-428a-a94a-c33598b182a9-scripts") pod "ovsdbserver-sb-0" (UID: "08c15cf8-f386-428a-a94a-c33598b182a9") : configmap "ovndbcluster-sb-scripts" not found Jan 29 06:57:19 crc kubenswrapper[5017]: E0129 06:57:19.357122 5017 configmap.go:193] Couldn't get configMap openstack/openstack-cell1-scripts: configmap "openstack-cell1-scripts" not found Jan 29 06:57:19 crc kubenswrapper[5017]: E0129 06:57:19.357214 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0adabddf-74aa-416a-afef-b24b39897f9c-operator-scripts podName:0adabddf-74aa-416a-afef-b24b39897f9c nodeName:}" failed. No retries permitted until 2026-01-29 06:57:19.857191738 +0000 UTC m=+1326.231639348 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/0adabddf-74aa-416a-afef-b24b39897f9c-operator-scripts") pod "nova-cell1-fd15-account-create-update-q6477" (UID: "0adabddf-74aa-416a-afef-b24b39897f9c") : configmap "openstack-cell1-scripts" not found Jan 29 06:57:19 crc kubenswrapper[5017]: E0129 06:57:19.357259 5017 configmap.go:193] Couldn't get configMap openstack/ovndbcluster-sb-config: configmap "ovndbcluster-sb-config" not found Jan 29 06:57:19 crc kubenswrapper[5017]: E0129 06:57:19.357281 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/08c15cf8-f386-428a-a94a-c33598b182a9-config podName:08c15cf8-f386-428a-a94a-c33598b182a9 nodeName:}" failed. No retries permitted until 2026-01-29 06:57:21.35727396 +0000 UTC m=+1327.731721570 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/08c15cf8-f386-428a-a94a-c33598b182a9-config") pod "ovsdbserver-sb-0" (UID: "08c15cf8-f386-428a-a94a-c33598b182a9") : configmap "ovndbcluster-sb-config" not found Jan 29 06:57:19 crc kubenswrapper[5017]: I0129 06:57:19.357990 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02965a93-9a4c-4118-a030-0271f53a61a1-kube-api-access-k5qh2" (OuterVolumeSpecName: "kube-api-access-k5qh2") pod "02965a93-9a4c-4118-a030-0271f53a61a1" (UID: "02965a93-9a4c-4118-a030-0271f53a61a1"). InnerVolumeSpecName "kube-api-access-k5qh2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:57:19 crc kubenswrapper[5017]: I0129 06:57:19.364220 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02965a93-9a4c-4118-a030-0271f53a61a1-config" (OuterVolumeSpecName: "config") pod "02965a93-9a4c-4118-a030-0271f53a61a1" (UID: "02965a93-9a4c-4118-a030-0271f53a61a1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:57:19 crc kubenswrapper[5017]: I0129 06:57:19.365945 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02965a93-9a4c-4118-a030-0271f53a61a1-scripts" (OuterVolumeSpecName: "scripts") pod "02965a93-9a4c-4118-a030-0271f53a61a1" (UID: "02965a93-9a4c-4118-a030-0271f53a61a1"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:57:19 crc kubenswrapper[5017]: I0129 06:57:19.366498 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02965a93-9a4c-4118-a030-0271f53a61a1-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "02965a93-9a4c-4118-a030-0271f53a61a1" (UID: "02965a93-9a4c-4118-a030-0271f53a61a1"). InnerVolumeSpecName "ovn-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:19.394589 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-t6sx7"] Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:19.395116 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abd151c3-f255-4647-a923-3176a7dae25a-kube-api-access-qp29b" (OuterVolumeSpecName: "kube-api-access-qp29b") pod "abd151c3-f255-4647-a923-3176a7dae25a" (UID: "abd151c3-f255-4647-a923-3176a7dae25a"). InnerVolumeSpecName "kube-api-access-qp29b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:19.406927 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-mrhnf" podUID="131b99f7-3558-4aec-a0bf-7c1ef0f35a2b" containerName="ovs-vswitchd" containerID="cri-o://f3ad8c4f20bf955fbc05781f196725d983e26995a8d00c19f11477369f5a08b1" gracePeriod=29 Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:19.407335 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c653a323-c9c2-42f2-a2af-125828234475-kube-api-access-ncf76" (OuterVolumeSpecName: "kube-api-access-ncf76") pod "c653a323-c9c2-42f2-a2af-125828234475" (UID: "c653a323-c9c2-42f2-a2af-125828234475"). InnerVolumeSpecName "kube-api-access-ncf76". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:19.439186 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abd151c3-f255-4647-a923-3176a7dae25a-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "abd151c3-f255-4647-a923-3176a7dae25a" (UID: "abd151c3-f255-4647-a923-3176a7dae25a"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:19.441324 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k5qh2\" (UniqueName: \"kubernetes.io/projected/02965a93-9a4c-4118-a030-0271f53a61a1-kube-api-access-k5qh2\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:19.442492 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ncf76\" (UniqueName: \"kubernetes.io/projected/c653a323-c9c2-42f2-a2af-125828234475-kube-api-access-ncf76\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:19.442511 5017 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02965a93-9a4c-4118-a030-0271f53a61a1-config\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:19.442522 5017 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/abd151c3-f255-4647-a923-3176a7dae25a-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:19.442533 5017 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/02965a93-9a4c-4118-a030-0271f53a61a1-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:19.442542 5017 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/02965a93-9a4c-4118-a030-0271f53a61a1-ovn-rundir\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:19.442552 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qp29b\" (UniqueName: \"kubernetes.io/projected/abd151c3-f255-4647-a923-3176a7dae25a-kube-api-access-qp29b\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:19.442563 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abd151c3-f255-4647-a923-3176a7dae25a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "abd151c3-f255-4647-a923-3176a7dae25a" (UID: "abd151c3-f255-4647-a923-3176a7dae25a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:19.488993 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-381c-account-create-update-ngzmx"] Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:19.507203 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 29 06:57:20 crc kubenswrapper[5017]: E0129 06:57:19.509419 5017 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8b0e655613ff88e4336cd67c78361a1d00ccdd95612e304ec5f2445083d8f735 is running failed: container process not found" containerID="8b0e655613ff88e4336cd67c78361a1d00ccdd95612e304ec5f2445083d8f735" cmd=["/usr/bin/pidof","ovsdb-server"] Jan 29 06:57:20 crc kubenswrapper[5017]: E0129 06:57:19.516183 5017 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8b0e655613ff88e4336cd67c78361a1d00ccdd95612e304ec5f2445083d8f735 is running failed: container process not found" containerID="8b0e655613ff88e4336cd67c78361a1d00ccdd95612e304ec5f2445083d8f735" cmd=["/usr/bin/pidof","ovsdb-server"] Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:19.516344 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-9p92z"] Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:19.520236 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-9p92z"] Jan 29 06:57:20 crc kubenswrapper[5017]: E0129 06:57:19.528475 5017 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8b0e655613ff88e4336cd67c78361a1d00ccdd95612e304ec5f2445083d8f735 is running failed: container process not found" containerID="8b0e655613ff88e4336cd67c78361a1d00ccdd95612e304ec5f2445083d8f735" cmd=["/usr/bin/pidof","ovsdb-server"] Jan 29 06:57:20 crc kubenswrapper[5017]: E0129 06:57:19.528564 5017 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8b0e655613ff88e4336cd67c78361a1d00ccdd95612e304ec5f2445083d8f735 is running failed: container process not found" probeType="Readiness" pod="openstack/ovsdbserver-nb-0" podUID="58a02d03-f3a8-4193-ba1d-623ecaa62fe9" containerName="ovsdbserver-nb" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:19.536709 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:19.544682 5017 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abd151c3-f255-4647-a923-3176a7dae25a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:19.569322 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:19.569733 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="8d94b8e3-f4a6-4fc2-af59-57b33254cd74" containerName="nova-metadata-log" containerID="cri-o://c43149c8d6f94b05d7adff740855d48a649c2f7ea9f1f958ed0221ac67602ca1" gracePeriod=30 Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:19.570510 5017 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openstack/nova-metadata-0" podUID="8d94b8e3-f4a6-4fc2-af59-57b33254cd74" containerName="nova-metadata-metadata" containerID="cri-o://5c19de805cb364596b5009993949de148a0ae873176b17c0e742ef83f5bf9bd2" gracePeriod=30 Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:19.652459 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-fgmwl"] Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:19.664015 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02965a93-9a4c-4118-a030-0271f53a61a1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "02965a93-9a4c-4118-a030-0271f53a61a1" (UID: "02965a93-9a4c-4118-a030-0271f53a61a1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:19.700894 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-fgmwl"] Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:19.716492 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-6lvps"] Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:19.735003 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0fca-account-create-update-7k6jq"] Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:19.742585 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c653a323-c9c2-42f2-a2af-125828234475-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c653a323-c9c2-42f2-a2af-125828234475" (UID: "c653a323-c9c2-42f2-a2af-125828234475"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:19.755870 5017 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c653a323-c9c2-42f2-a2af-125828234475-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:19.755908 5017 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02965a93-9a4c-4118-a030-0271f53a61a1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:19.759477 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c653a323-c9c2-42f2-a2af-125828234475-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c653a323-c9c2-42f2-a2af-125828234475" (UID: "c653a323-c9c2-42f2-a2af-125828234475"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:19.762000 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c653a323-c9c2-42f2-a2af-125828234475-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c653a323-c9c2-42f2-a2af-125828234475" (UID: "c653a323-c9c2-42f2-a2af-125828234475"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:19.781061 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-6lvps"] Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:19.787854 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-59754c55b6-52c5s"] Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:19.788193 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-59754c55b6-52c5s" podUID="da406cff-454a-4287-a409-5ad51c535649" containerName="barbican-keystone-listener-log" containerID="cri-o://ce129ba42a8b3c722fa18ae7dc9c5289605b6656c1a899e4045fa4189ae39e6f" gracePeriod=30 Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:19.788634 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-59754c55b6-52c5s" podUID="da406cff-454a-4287-a409-5ad51c535649" containerName="barbican-keystone-listener" containerID="cri-o://2a9e2a942d9da55b552383e09be825e15027ade185c5ac022578006979e399fc" gracePeriod=30 Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:19.792784 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c653a323-c9c2-42f2-a2af-125828234475-config" (OuterVolumeSpecName: "config") pod "c653a323-c9c2-42f2-a2af-125828234475" (UID: "c653a323-c9c2-42f2-a2af-125828234475"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:19.795928 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-7dd895bb69-2ngwr"] Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:19.796208 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-7dd895bb69-2ngwr" podUID="c118297d-1c5d-4234-930c-9c0e6b5bb29b" containerName="barbican-worker-log" containerID="cri-o://97c40c65ef865923cc204cdc5deb3e46aacb0a5552e0f4c854c437adc61e25b3" gracePeriod=30 Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:19.796658 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-7dd895bb69-2ngwr" podUID="c118297d-1c5d-4234-930c-9c0e6b5bb29b" containerName="barbican-worker" containerID="cri-o://d2d5a9a289484f054399954f789424326a15cf9d2c818d7edcc78f993bd9183e" gracePeriod=30 Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:19.796668 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c653a323-c9c2-42f2-a2af-125828234475-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c653a323-c9c2-42f2-a2af-125828234475" (UID: "c653a323-c9c2-42f2-a2af-125828234475"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:19.807359 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-fd15-account-create-update-q6477"] Jan 29 06:57:20 crc kubenswrapper[5017]: E0129 06:57:19.816688 5017 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err=< Jan 29 06:57:20 crc kubenswrapper[5017]: command '/usr/local/bin/container-scripts/stop-ovsdb-server.sh' exited with 137: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Jan 29 06:57:20 crc kubenswrapper[5017]: + source /usr/local/bin/container-scripts/functions Jan 29 06:57:20 crc kubenswrapper[5017]: ++ OVNBridge=br-int Jan 29 06:57:20 crc kubenswrapper[5017]: ++ OVNRemote=tcp:localhost:6642 Jan 29 06:57:20 crc kubenswrapper[5017]: ++ OVNEncapType=geneve Jan 29 06:57:20 crc kubenswrapper[5017]: ++ OVNAvailabilityZones= Jan 29 06:57:20 crc kubenswrapper[5017]: ++ EnableChassisAsGateway=true Jan 29 06:57:20 crc kubenswrapper[5017]: ++ PhysicalNetworks= Jan 29 06:57:20 crc kubenswrapper[5017]: ++ OVNHostName= Jan 29 06:57:20 crc kubenswrapper[5017]: ++ DB_FILE=/etc/openvswitch/conf.db Jan 29 06:57:20 crc kubenswrapper[5017]: ++ ovs_dir=/var/lib/openvswitch Jan 29 06:57:20 crc kubenswrapper[5017]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Jan 29 06:57:20 crc kubenswrapper[5017]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Jan 29 06:57:20 crc kubenswrapper[5017]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 29 06:57:20 crc kubenswrapper[5017]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 29 06:57:20 crc kubenswrapper[5017]: + sleep 0.5 Jan 29 06:57:20 crc kubenswrapper[5017]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 29 06:57:20 crc kubenswrapper[5017]: + sleep 0.5 Jan 29 06:57:20 crc kubenswrapper[5017]: + '[' '!' 
-f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 29 06:57:20 crc kubenswrapper[5017]: + cleanup_ovsdb_server_semaphore Jan 29 06:57:20 crc kubenswrapper[5017]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 29 06:57:20 crc kubenswrapper[5017]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Jan 29 06:57:20 crc kubenswrapper[5017]: > execCommand=["/usr/local/bin/container-scripts/stop-ovsdb-server.sh"] containerName="ovsdb-server" pod="openstack/ovn-controller-ovs-mrhnf" message=< Jan 29 06:57:20 crc kubenswrapper[5017]: Exiting ovsdb-server (5) [ OK ] Jan 29 06:57:20 crc kubenswrapper[5017]: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Jan 29 06:57:20 crc kubenswrapper[5017]: + source /usr/local/bin/container-scripts/functions Jan 29 06:57:20 crc kubenswrapper[5017]: ++ OVNBridge=br-int Jan 29 06:57:20 crc kubenswrapper[5017]: ++ OVNRemote=tcp:localhost:6642 Jan 29 06:57:20 crc kubenswrapper[5017]: ++ OVNEncapType=geneve Jan 29 06:57:20 crc kubenswrapper[5017]: ++ OVNAvailabilityZones= Jan 29 06:57:20 crc kubenswrapper[5017]: ++ EnableChassisAsGateway=true Jan 29 06:57:20 crc kubenswrapper[5017]: ++ PhysicalNetworks= Jan 29 06:57:20 crc kubenswrapper[5017]: ++ OVNHostName= Jan 29 06:57:20 crc kubenswrapper[5017]: ++ DB_FILE=/etc/openvswitch/conf.db Jan 29 06:57:20 crc kubenswrapper[5017]: ++ ovs_dir=/var/lib/openvswitch Jan 29 06:57:20 crc kubenswrapper[5017]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Jan 29 06:57:20 crc kubenswrapper[5017]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Jan 29 06:57:20 crc kubenswrapper[5017]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 29 06:57:20 crc kubenswrapper[5017]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 29 06:57:20 crc kubenswrapper[5017]: + sleep 0.5 Jan 29 06:57:20 crc kubenswrapper[5017]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 29 06:57:20 crc kubenswrapper[5017]: + sleep 0.5 Jan 29 06:57:20 crc kubenswrapper[5017]: + '[' '!' 
-f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 29 06:57:20 crc kubenswrapper[5017]: + cleanup_ovsdb_server_semaphore Jan 29 06:57:20 crc kubenswrapper[5017]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 29 06:57:20 crc kubenswrapper[5017]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Jan 29 06:57:20 crc kubenswrapper[5017]: > Jan 29 06:57:20 crc kubenswrapper[5017]: E0129 06:57:19.816750 5017 kuberuntime_container.go:691] "PreStop hook failed" err=< Jan 29 06:57:20 crc kubenswrapper[5017]: command '/usr/local/bin/container-scripts/stop-ovsdb-server.sh' exited with 137: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Jan 29 06:57:20 crc kubenswrapper[5017]: + source /usr/local/bin/container-scripts/functions Jan 29 06:57:20 crc kubenswrapper[5017]: ++ OVNBridge=br-int Jan 29 06:57:20 crc kubenswrapper[5017]: ++ OVNRemote=tcp:localhost:6642 Jan 29 06:57:20 crc kubenswrapper[5017]: ++ OVNEncapType=geneve Jan 29 06:57:20 crc kubenswrapper[5017]: ++ OVNAvailabilityZones= Jan 29 06:57:20 crc kubenswrapper[5017]: ++ EnableChassisAsGateway=true Jan 29 06:57:20 crc kubenswrapper[5017]: ++ PhysicalNetworks= Jan 29 06:57:20 crc kubenswrapper[5017]: ++ OVNHostName= Jan 29 06:57:20 crc kubenswrapper[5017]: ++ DB_FILE=/etc/openvswitch/conf.db Jan 29 06:57:20 crc kubenswrapper[5017]: ++ ovs_dir=/var/lib/openvswitch Jan 29 06:57:20 crc kubenswrapper[5017]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Jan 29 06:57:20 crc kubenswrapper[5017]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Jan 29 06:57:20 crc kubenswrapper[5017]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 29 06:57:20 crc kubenswrapper[5017]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 29 06:57:20 crc kubenswrapper[5017]: + sleep 0.5 Jan 29 06:57:20 crc kubenswrapper[5017]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 29 06:57:20 crc kubenswrapper[5017]: + sleep 0.5 Jan 29 06:57:20 crc kubenswrapper[5017]: + '[' '!' 
-f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 29 06:57:20 crc kubenswrapper[5017]: + cleanup_ovsdb_server_semaphore Jan 29 06:57:20 crc kubenswrapper[5017]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 29 06:57:20 crc kubenswrapper[5017]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Jan 29 06:57:20 crc kubenswrapper[5017]: > pod="openstack/ovn-controller-ovs-mrhnf" podUID="131b99f7-3558-4aec-a0bf-7c1ef0f35a2b" containerName="ovsdb-server" containerID="cri-o://1e3d13255a0abc9a24e35788246958c76121b82d0582fabb929e39b38e4591aa" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:19.816811 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-mrhnf" podUID="131b99f7-3558-4aec-a0bf-7c1ef0f35a2b" containerName="ovsdb-server" containerID="cri-o://1e3d13255a0abc9a24e35788246958c76121b82d0582fabb929e39b38e4591aa" gracePeriod=28 Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:19.819234 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:19.819610 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="7d3e4a4d-ee9a-4345-b8e5-a40416771caf" containerName="nova-api-log" containerID="cri-o://e12411a150cd5f0172fa3185d950a4267453e35e9b6242870bba015ffef1b16b" gracePeriod=30 Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:19.820331 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="7d3e4a4d-ee9a-4345-b8e5-a40416771caf" containerName="nova-api-api" containerID="cri-o://2065ba1d7b19abf8c6153b5871f31a1505c3bfdacc36878ce272c58a274b2b1b" gracePeriod=30 Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:19.828983 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a" containerName="rabbitmq" containerID="cri-o://31ad6d38c7cac48292e7dddc2454aef26b5d737c3c99ba9b9446defaee858a9a" gracePeriod=604800 Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:19.844343 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-544777f6b8-l4dw8"] Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:19.844702 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-544777f6b8-l4dw8" podUID="919074d0-f7a7-4d64-8339-744730688c4f" containerName="barbican-api-log" containerID="cri-o://9a97268bf48d3ec39b862fe626a036c8b8bf71b7f2894c8b7bf80ae0f1be7da0" gracePeriod=30 Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:19.845156 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-544777f6b8-l4dw8" podUID="919074d0-f7a7-4d64-8339-744730688c4f" containerName="barbican-api" containerID="cri-o://290c7cb878f017a867ee8ae761d80813ae152cf14f4cb08011870623faa5a09c" gracePeriod=30 Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:19.849255 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:19.849421 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="73788076-4208-4f0f-8c66-95ef1bfb28b6" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://2210020dfa4716854368968b348fbd02597baa1b8afac742e90bc9b808a72ae6" gracePeriod=30 Jan 29 
06:57:20 crc kubenswrapper[5017]: I0129 06:57:19.857974 5017 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c653a323-c9c2-42f2-a2af-125828234475-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:19.858010 5017 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c653a323-c9c2-42f2-a2af-125828234475-config\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:19.858021 5017 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c653a323-c9c2-42f2-a2af-125828234475-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:19.858029 5017 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c653a323-c9c2-42f2-a2af-125828234475-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:20 crc kubenswrapper[5017]: E0129 06:57:19.858108 5017 configmap.go:193] Couldn't get configMap openstack/openstack-cell1-scripts: configmap "openstack-cell1-scripts" not found Jan 29 06:57:20 crc kubenswrapper[5017]: E0129 06:57:19.858172 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0adabddf-74aa-416a-afef-b24b39897f9c-operator-scripts podName:0adabddf-74aa-416a-afef-b24b39897f9c nodeName:}" failed. No retries permitted until 2026-01-29 06:57:20.858154927 +0000 UTC m=+1327.232602537 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/0adabddf-74aa-416a-afef-b24b39897f9c-operator-scripts") pod "nova-cell1-fd15-account-create-update-q6477" (UID: "0adabddf-74aa-416a-afef-b24b39897f9c") : configmap "openstack-cell1-scripts" not found Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:19.879086 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abd151c3-f255-4647-a923-3176a7dae25a-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "abd151c3-f255-4647-a923-3176a7dae25a" (UID: "abd151c3-f255-4647-a923-3176a7dae25a"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:19.895817 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02965a93-9a4c-4118-a030-0271f53a61a1-ovn-northd-tls-certs" (OuterVolumeSpecName: "ovn-northd-tls-certs") pod "02965a93-9a4c-4118-a030-0271f53a61a1" (UID: "02965a93-9a4c-4118-a030-0271f53a61a1"). InnerVolumeSpecName "ovn-northd-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:19.961760 5017 reconciler_common.go:293] "Volume detached for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/02965a93-9a4c-4118-a030-0271f53a61a1-ovn-northd-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:19.962150 5017 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/abd151c3-f255-4647-a923-3176a7dae25a-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:19.983790 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-x7tq6"] Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:19.985675 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02965a93-9a4c-4118-a030-0271f53a61a1-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "02965a93-9a4c-4118-a030-0271f53a61a1" (UID: "02965a93-9a4c-4118-a030-0271f53a61a1"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:19.985779 5017 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-bc7969485-9cbzw" podUID="a0801349-3235-495b-9747-8ce025aad149" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.170:8080/healthcheck\": dial tcp 10.217.0.170:8080: connect: connection refused" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:19.985847 5017 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-bc7969485-9cbzw" podUID="a0801349-3235-495b-9747-8ce025aad149" containerName="proxy-server" probeResult="failure" output="Get \"https://10.217.0.170:8080/healthcheck\": dial tcp 10.217.0.170:8080: connect: connection refused" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.041499 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.041763 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-conductor-0" podUID="18edd5b3-27eb-43f3-8d6b-03490c243c78" containerName="nova-cell1-conductor-conductor" containerID="cri-o://cd271b62ca4015e030ca07e0ae6b52baec3e519d53550f369d0cfbcc931e68fd" gracePeriod=30 Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.047436 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-bnw77_572c6985-85a2-4a6d-8581-75b8c6b87322/openstack-network-exporter/0.log" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.047516 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-bnw77" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.061500 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_08c15cf8-f386-428a-a94a-c33598b182a9/ovsdbserver-sb/0.log" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.061580 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.064348 5017 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/02965a93-9a4c-4118-a030-0271f53a61a1-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.066399 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-x7tq6"] Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.074845 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.087092 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.087386 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="aaa7337b-7686-4fd2-9c52-6b76f9f3a3b1" containerName="nova-cell0-conductor-conductor" containerID="cri-o://86c4c472b3c3903633b1b443d03a5262fb47cb7d8c1f6bdbf87f276659c9e547" gracePeriod=30 Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.097074 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-8dn6j"] Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.127833 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-8dn6j"] Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.133833 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.134140 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="a909c2d3-90a4-41a0-af8e-ddb69ed4f41b" containerName="nova-scheduler-scheduler" containerID="cri-o://baa50265eb2c67dc53b4314533ab16055fd8bdaab7aec5147ecfb367c70320fc" gracePeriod=30 Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.166113 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-cell1-galera-0" podUID="ec5c09bc-f98c-4587-b4e3-ec9269c04a71" containerName="galera" containerID="cri-o://0762eb515121f428ad670c99dbbc9df148f038b1dfcf835e5b82100fdb4c0a75" gracePeriod=30 Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.166456 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/08c15cf8-f386-428a-a94a-c33598b182a9-ovsdb-rundir\") pod \"08c15cf8-f386-428a-a94a-c33598b182a9\" (UID: \"08c15cf8-f386-428a-a94a-c33598b182a9\") " Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.166534 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/58a02d03-f3a8-4193-ba1d-623ecaa62fe9-metrics-certs-tls-certs\") pod \"58a02d03-f3a8-4193-ba1d-623ecaa62fe9\" (UID: \"58a02d03-f3a8-4193-ba1d-623ecaa62fe9\") " Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.166588 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/572c6985-85a2-4a6d-8581-75b8c6b87322-ovs-rundir\") pod \"572c6985-85a2-4a6d-8581-75b8c6b87322\" (UID: \"572c6985-85a2-4a6d-8581-75b8c6b87322\") " Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.166622 5017 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/572c6985-85a2-4a6d-8581-75b8c6b87322-metrics-certs-tls-certs\") pod \"572c6985-85a2-4a6d-8581-75b8c6b87322\" (UID: \"572c6985-85a2-4a6d-8581-75b8c6b87322\") " Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.166651 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d7hl\" (UniqueName: \"kubernetes.io/projected/572c6985-85a2-4a6d-8581-75b8c6b87322-kube-api-access-4d7hl\") pod \"572c6985-85a2-4a6d-8581-75b8c6b87322\" (UID: \"572c6985-85a2-4a6d-8581-75b8c6b87322\") " Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.166689 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/572c6985-85a2-4a6d-8581-75b8c6b87322-combined-ca-bundle\") pod \"572c6985-85a2-4a6d-8581-75b8c6b87322\" (UID: \"572c6985-85a2-4a6d-8581-75b8c6b87322\") " Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.166751 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/08c15cf8-f386-428a-a94a-c33598b182a9-ovsdbserver-sb-tls-certs\") pod \"08c15cf8-f386-428a-a94a-c33598b182a9\" (UID: \"08c15cf8-f386-428a-a94a-c33598b182a9\") " Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.166790 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08c15cf8-f386-428a-a94a-c33598b182a9-combined-ca-bundle\") pod \"08c15cf8-f386-428a-a94a-c33598b182a9\" (UID: \"08c15cf8-f386-428a-a94a-c33598b182a9\") " Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.166844 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/572c6985-85a2-4a6d-8581-75b8c6b87322-ovs-rundir" (OuterVolumeSpecName: "ovs-rundir") pod "572c6985-85a2-4a6d-8581-75b8c6b87322" (UID: "572c6985-85a2-4a6d-8581-75b8c6b87322"). InnerVolumeSpecName "ovs-rundir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.166869 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/08c15cf8-f386-428a-a94a-c33598b182a9-metrics-certs-tls-certs\") pod \"08c15cf8-f386-428a-a94a-c33598b182a9\" (UID: \"08c15cf8-f386-428a-a94a-c33598b182a9\") " Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.166915 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08c15cf8-f386-428a-a94a-c33598b182a9-config\") pod \"08c15cf8-f386-428a-a94a-c33598b182a9\" (UID: \"08c15cf8-f386-428a-a94a-c33598b182a9\") " Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.166940 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/572c6985-85a2-4a6d-8581-75b8c6b87322-config\") pod \"572c6985-85a2-4a6d-8581-75b8c6b87322\" (UID: \"572c6985-85a2-4a6d-8581-75b8c6b87322\") " Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.166998 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-nb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"58a02d03-f3a8-4193-ba1d-623ecaa62fe9\" (UID: \"58a02d03-f3a8-4193-ba1d-623ecaa62fe9\") " Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.167027 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58a02d03-f3a8-4193-ba1d-623ecaa62fe9-combined-ca-bundle\") pod \"58a02d03-f3a8-4193-ba1d-623ecaa62fe9\" (UID: \"58a02d03-f3a8-4193-ba1d-623ecaa62fe9\") " Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.167078 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/572c6985-85a2-4a6d-8581-75b8c6b87322-ovn-rundir\") pod \"572c6985-85a2-4a6d-8581-75b8c6b87322\" (UID: \"572c6985-85a2-4a6d-8581-75b8c6b87322\") " Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.167113 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/58a02d03-f3a8-4193-ba1d-623ecaa62fe9-ovsdbserver-nb-tls-certs\") pod \"58a02d03-f3a8-4193-ba1d-623ecaa62fe9\" (UID: \"58a02d03-f3a8-4193-ba1d-623ecaa62fe9\") " Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.167134 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/08c15cf8-f386-428a-a94a-c33598b182a9-scripts\") pod \"08c15cf8-f386-428a-a94a-c33598b182a9\" (UID: \"08c15cf8-f386-428a-a94a-c33598b182a9\") " Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.167172 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-sb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"08c15cf8-f386-428a-a94a-c33598b182a9\" (UID: \"08c15cf8-f386-428a-a94a-c33598b182a9\") " Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.167197 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/58a02d03-f3a8-4193-ba1d-623ecaa62fe9-ovsdb-rundir\") pod \"58a02d03-f3a8-4193-ba1d-623ecaa62fe9\" (UID: \"58a02d03-f3a8-4193-ba1d-623ecaa62fe9\") " Jan 29 06:57:20 crc 
kubenswrapper[5017]: I0129 06:57:20.167242 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/58a02d03-f3a8-4193-ba1d-623ecaa62fe9-scripts\") pod \"58a02d03-f3a8-4193-ba1d-623ecaa62fe9\" (UID: \"58a02d03-f3a8-4193-ba1d-623ecaa62fe9\") " Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.167267 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w2pk9\" (UniqueName: \"kubernetes.io/projected/58a02d03-f3a8-4193-ba1d-623ecaa62fe9-kube-api-access-w2pk9\") pod \"58a02d03-f3a8-4193-ba1d-623ecaa62fe9\" (UID: \"58a02d03-f3a8-4193-ba1d-623ecaa62fe9\") " Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.167288 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58a02d03-f3a8-4193-ba1d-623ecaa62fe9-config\") pod \"58a02d03-f3a8-4193-ba1d-623ecaa62fe9\" (UID: \"58a02d03-f3a8-4193-ba1d-623ecaa62fe9\") " Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.167311 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x9v65\" (UniqueName: \"kubernetes.io/projected/08c15cf8-f386-428a-a94a-c33598b182a9-kube-api-access-x9v65\") pod \"08c15cf8-f386-428a-a94a-c33598b182a9\" (UID: \"08c15cf8-f386-428a-a94a-c33598b182a9\") " Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.167633 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08c15cf8-f386-428a-a94a-c33598b182a9-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "08c15cf8-f386-428a-a94a-c33598b182a9" (UID: "08c15cf8-f386-428a-a94a-c33598b182a9"). InnerVolumeSpecName "ovsdb-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.168371 5017 reconciler_common.go:293] "Volume detached for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/572c6985-85a2-4a6d-8581-75b8c6b87322-ovs-rundir\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.168425 5017 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/08c15cf8-f386-428a-a94a-c33598b182a9-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.168842 5017 generic.go:334] "Generic (PLEG): container finished" podID="919074d0-f7a7-4d64-8339-744730688c4f" containerID="9a97268bf48d3ec39b862fe626a036c8b8bf71b7f2894c8b7bf80ae0f1be7da0" exitCode=143 Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.168897 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-544777f6b8-l4dw8" event={"ID":"919074d0-f7a7-4d64-8339-744730688c4f","Type":"ContainerDied","Data":"9a97268bf48d3ec39b862fe626a036c8b8bf71b7f2894c8b7bf80ae0f1be7da0"} Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.172276 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08c15cf8-f386-428a-a94a-c33598b182a9-kube-api-access-x9v65" (OuterVolumeSpecName: "kube-api-access-x9v65") pod "08c15cf8-f386-428a-a94a-c33598b182a9" (UID: "08c15cf8-f386-428a-a94a-c33598b182a9"). InnerVolumeSpecName "kube-api-access-x9v65". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.172671 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58a02d03-f3a8-4193-ba1d-623ecaa62fe9-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "58a02d03-f3a8-4193-ba1d-623ecaa62fe9" (UID: "58a02d03-f3a8-4193-ba1d-623ecaa62fe9"). InnerVolumeSpecName "ovsdb-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.173047 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/572c6985-85a2-4a6d-8581-75b8c6b87322-kube-api-access-4d7hl" (OuterVolumeSpecName: "kube-api-access-4d7hl") pod "572c6985-85a2-4a6d-8581-75b8c6b87322" (UID: "572c6985-85a2-4a6d-8581-75b8c6b87322"). InnerVolumeSpecName "kube-api-access-4d7hl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.173301 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58a02d03-f3a8-4193-ba1d-623ecaa62fe9-scripts" (OuterVolumeSpecName: "scripts") pod "58a02d03-f3a8-4193-ba1d-623ecaa62fe9" (UID: "58a02d03-f3a8-4193-ba1d-623ecaa62fe9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.173783 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58a02d03-f3a8-4193-ba1d-623ecaa62fe9-config" (OuterVolumeSpecName: "config") pod "58a02d03-f3a8-4193-ba1d-623ecaa62fe9" (UID: "58a02d03-f3a8-4193-ba1d-623ecaa62fe9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.174327 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/572c6985-85a2-4a6d-8581-75b8c6b87322-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "572c6985-85a2-4a6d-8581-75b8c6b87322" (UID: "572c6985-85a2-4a6d-8581-75b8c6b87322"). InnerVolumeSpecName "ovn-rundir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.176094 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08c15cf8-f386-428a-a94a-c33598b182a9-scripts" (OuterVolumeSpecName: "scripts") pod "08c15cf8-f386-428a-a94a-c33598b182a9" (UID: "08c15cf8-f386-428a-a94a-c33598b182a9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.176175 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "ovndbcluster-sb-etc-ovn") pod "08c15cf8-f386-428a-a94a-c33598b182a9" (UID: "08c15cf8-f386-428a-a94a-c33598b182a9"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.176205 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58a02d03-f3a8-4193-ba1d-623ecaa62fe9-kube-api-access-w2pk9" (OuterVolumeSpecName: "kube-api-access-w2pk9") pod "58a02d03-f3a8-4193-ba1d-623ecaa62fe9" (UID: "58a02d03-f3a8-4193-ba1d-623ecaa62fe9"). InnerVolumeSpecName "kube-api-access-w2pk9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.176657 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08c15cf8-f386-428a-a94a-c33598b182a9-config" (OuterVolumeSpecName: "config") pod "08c15cf8-f386-428a-a94a-c33598b182a9" (UID: "08c15cf8-f386-428a-a94a-c33598b182a9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.176834 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/572c6985-85a2-4a6d-8581-75b8c6b87322-config" (OuterVolumeSpecName: "config") pod "572c6985-85a2-4a6d-8581-75b8c6b87322" (UID: "572c6985-85a2-4a6d-8581-75b8c6b87322"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.186284 5017 generic.go:334] "Generic (PLEG): container finished" podID="7d3e4a4d-ee9a-4345-b8e5-a40416771caf" containerID="e12411a150cd5f0172fa3185d950a4267453e35e9b6242870bba015ffef1b16b" exitCode=143 Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.186399 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7d3e4a4d-ee9a-4345-b8e5-a40416771caf","Type":"ContainerDied","Data":"e12411a150cd5f0172fa3185d950a4267453e35e9b6242870bba015ffef1b16b"} Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.189687 5017 generic.go:334] "Generic (PLEG): container finished" podID="c4fe6966-2467-4c3b-b907-d3a8e88eb497" containerID="351bf740841608e8a491520445ac2fda75daae4f0798aac354283b20b04c8d8b" exitCode=0 Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.189733 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5b9cd4b645-x8pg4" event={"ID":"c4fe6966-2467-4c3b-b907-d3a8e88eb497","Type":"ContainerDied","Data":"351bf740841608e8a491520445ac2fda75daae4f0798aac354283b20b04c8d8b"} Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.194132 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "ovndbcluster-nb-etc-ovn") pod "58a02d03-f3a8-4193-ba1d-623ecaa62fe9" (UID: "58a02d03-f3a8-4193-ba1d-623ecaa62fe9"). InnerVolumeSpecName "local-storage12-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.202699 5017 generic.go:334] "Generic (PLEG): container finished" podID="c118297d-1c5d-4234-930c-9c0e6b5bb29b" containerID="97c40c65ef865923cc204cdc5deb3e46aacb0a5552e0f4c854c437adc61e25b3" exitCode=143 Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.202771 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7dd895bb69-2ngwr" event={"ID":"c118297d-1c5d-4234-930c-9c0e6b5bb29b","Type":"ContainerDied","Data":"97c40c65ef865923cc204cdc5deb3e46aacb0a5552e0f4c854c437adc61e25b3"} Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.206217 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_08c15cf8-f386-428a-a94a-c33598b182a9/ovsdbserver-sb/0.log" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.206323 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"08c15cf8-f386-428a-a94a-c33598b182a9","Type":"ContainerDied","Data":"b218be6dd477f68645c3bee8616fad21795a1b638eaff7f69a46aa2776a00322"} Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.206375 5017 scope.go:117] "RemoveContainer" containerID="970018d4bd2130b0277d105c14cb252f0b9fe0e15de053637b5663a6a7609e01" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.206400 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.209780 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_02965a93-9a4c-4118-a030-0271f53a61a1/ovn-northd/0.log" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.209842 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"02965a93-9a4c-4118-a030-0271f53a61a1","Type":"ContainerDied","Data":"533ac51d2177f27c0e78ae10a85fb56411098e1ca3096e2098cdf5b6a1f1ab71"} Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.209945 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.217456 5017 generic.go:334] "Generic (PLEG): container finished" podID="da406cff-454a-4287-a409-5ad51c535649" containerID="ce129ba42a8b3c722fa18ae7dc9c5289605b6656c1a899e4045fa4189ae39e6f" exitCode=143 Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.217508 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-59754c55b6-52c5s" event={"ID":"da406cff-454a-4287-a409-5ad51c535649","Type":"ContainerDied","Data":"ce129ba42a8b3c722fa18ae7dc9c5289605b6656c1a899e4045fa4189ae39e6f"} Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.219853 5017 generic.go:334] "Generic (PLEG): container finished" podID="a0801349-3235-495b-9747-8ce025aad149" containerID="9d3b90e2a526d03bc001996d30c041b59b2169975c9838eadd4d27870c43ad13" exitCode=0 Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.219868 5017 generic.go:334] "Generic (PLEG): container finished" podID="a0801349-3235-495b-9747-8ce025aad149" containerID="dc162fbc000aa89fe180adad58f6d207b657135b6fb5dd62ecbf93d7c4e1bbe0" exitCode=0 Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.219904 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-bc7969485-9cbzw" event={"ID":"a0801349-3235-495b-9747-8ce025aad149","Type":"ContainerDied","Data":"9d3b90e2a526d03bc001996d30c041b59b2169975c9838eadd4d27870c43ad13"} Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.219920 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-bc7969485-9cbzw" event={"ID":"a0801349-3235-495b-9747-8ce025aad149","Type":"ContainerDied","Data":"dc162fbc000aa89fe180adad58f6d207b657135b6fb5dd62ecbf93d7c4e1bbe0"} Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.225219 5017 generic.go:334] "Generic (PLEG): container finished" podID="8d94b8e3-f4a6-4fc2-af59-57b33254cd74" containerID="c43149c8d6f94b05d7adff740855d48a649c2f7ea9f1f958ed0221ac67602ca1" exitCode=143 Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.225280 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8d94b8e3-f4a6-4fc2-af59-57b33254cd74","Type":"ContainerDied","Data":"c43149c8d6f94b05d7adff740855d48a649c2f7ea9f1f958ed0221ac67602ca1"} Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.236008 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e41c27f8-0c27-4e3d-83b1-62a61abb4faf","Type":"ContainerDied","Data":"e97d4813efcf24ccacbdbd3a38be06d62a0d548850293f33dfd1414e7cf3dbe7"} Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.235927 5017 generic.go:334] "Generic (PLEG): container finished" podID="e41c27f8-0c27-4e3d-83b1-62a61abb4faf" containerID="e97d4813efcf24ccacbdbd3a38be06d62a0d548850293f33dfd1414e7cf3dbe7" exitCode=143 Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.248698 5017 generic.go:334] "Generic (PLEG): container finished" podID="9c69fc6f-43e9-4fe5-b964-8db89e6ab354" containerID="999837a7b08863c4bad372817a69589db1cc60b86b13c6914e11866e71643157" exitCode=0 Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.249151 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"9c69fc6f-43e9-4fe5-b964-8db89e6ab354","Type":"ContainerDied","Data":"999837a7b08863c4bad372817a69589db1cc60b86b13c6914e11866e71643157"} Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.251216 5017 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-bnw77_572c6985-85a2-4a6d-8581-75b8c6b87322/openstack-network-exporter/0.log" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.251295 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-bnw77" event={"ID":"572c6985-85a2-4a6d-8581-75b8c6b87322","Type":"ContainerDied","Data":"6d0b0faeb9116bb5d02f252746b7a3998741e5b939a484e8c9e14bcb4d4f7969"} Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.251394 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-bnw77" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.253946 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.255510 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"58a02d03-f3a8-4193-ba1d-623ecaa62fe9","Type":"ContainerDied","Data":"257c03e8b266b9980b73d4a25d25ca786ed3703bb6b78721aadea28a4e400741"} Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.255568 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.258397 5017 generic.go:334] "Generic (PLEG): container finished" podID="9df7814f-338e-40fb-95aa-f93dfa8307d6" containerID="9faf0d173c0b59296fed359030686d4a096d13b44a22d7aecdca4136e3a15a7a" exitCode=143 Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.258451 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9df7814f-338e-40fb-95aa-f93dfa8307d6","Type":"ContainerDied","Data":"9faf0d173c0b59296fed359030686d4a096d13b44a22d7aecdca4136e3a15a7a"} Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.262915 5017 scope.go:117] "RemoveContainer" containerID="96a4605d01e3b489afb2e1c96ec4c37e5d1cf81a538304c82d2cf924de1a26ac" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.266133 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"] Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.271608 5017 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/08c15cf8-f386-428a-a94a-c33598b182a9-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.271641 5017 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/58a02d03-f3a8-4193-ba1d-623ecaa62fe9-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.271671 5017 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.271686 5017 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/58a02d03-f3a8-4193-ba1d-623ecaa62fe9-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.271699 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w2pk9\" (UniqueName: \"kubernetes.io/projected/58a02d03-f3a8-4193-ba1d-623ecaa62fe9-kube-api-access-w2pk9\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:20 crc kubenswrapper[5017]: 
I0129 06:57:20.271710 5017 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58a02d03-f3a8-4193-ba1d-623ecaa62fe9-config\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.271722 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x9v65\" (UniqueName: \"kubernetes.io/projected/08c15cf8-f386-428a-a94a-c33598b182a9-kube-api-access-x9v65\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.271731 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d7hl\" (UniqueName: \"kubernetes.io/projected/572c6985-85a2-4a6d-8581-75b8c6b87322-kube-api-access-4d7hl\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.271742 5017 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08c15cf8-f386-428a-a94a-c33598b182a9-config\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.271750 5017 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/572c6985-85a2-4a6d-8581-75b8c6b87322-config\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.271767 5017 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.271779 5017 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/572c6985-85a2-4a6d-8581-75b8c6b87322-ovn-rundir\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:20 crc kubenswrapper[5017]: E0129 06:57:20.273176 5017 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 29 06:57:20 crc kubenswrapper[5017]: E0129 06:57:20.273318 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d30b013f-453f-4282-8b22-2a5270027828-config-data podName:d30b013f-453f-4282-8b22-2a5270027828 nodeName:}" failed. No retries permitted until 2026-01-29 06:57:24.27329584 +0000 UTC m=+1330.647743450 (durationBeforeRetry 4s). 
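The "Generic (PLEG): container finished" records above carry two kinds of exit code: 0 for a clean exit, and 143 for the barbican-keystone-listener, nova-metadata and glance containers. Exit codes above 128 encode death by signal (code - 128), so 143 is SIGTERM (15), the normal outcome of a graceful pod shutdown. A minimal bash sketch of that decoding rule:

    # Exit codes > 128 mean "killed by signal (code - 128)".
    decode_exit() {
      if [ "$1" -gt 128 ]; then
        echo "killed by signal $(($1 - 128)) ($(kill -l $(($1 - 128))))"
      else
        echo "normal exit, status $1"
      fi
    }
    decode_exit 143   # -> killed by signal 15 (TERM), i.e. graceful termination
    decode_exit 0     # -> normal exit, status 0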
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/d30b013f-453f-4282-8b22-2a5270027828-config-data") pod "rabbitmq-cell1-server-0" (UID: "d30b013f-453f-4282-8b22-2a5270027828") : configmap "rabbitmq-cell1-config-data" not found Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.282773 5017 generic.go:334] "Generic (PLEG): container finished" podID="6d082326-495c-4078-974e-714379243884" containerID="eea4192399327817cb3d1432e2e02524e5b714a0b135c1bb086ec0b7a261cb45" exitCode=0 Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.282828 5017 generic.go:334] "Generic (PLEG): container finished" podID="6d082326-495c-4078-974e-714379243884" containerID="bde85bd88a643b945fa8485a5e4e6b486a320b6804c92a8b2d9a9aa7bebed684" exitCode=0 Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.282839 5017 generic.go:334] "Generic (PLEG): container finished" podID="6d082326-495c-4078-974e-714379243884" containerID="1a932253de89bf41d237a07b2d680b87b0b80d0fe5212e5b251a5b4e577d88f0" exitCode=0 Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.282847 5017 generic.go:334] "Generic (PLEG): container finished" podID="6d082326-495c-4078-974e-714379243884" containerID="54137e09fc7407811fd958b7fb19f8b5d07304823df63ae17c627a6330f09db1" exitCode=0 Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.282855 5017 generic.go:334] "Generic (PLEG): container finished" podID="6d082326-495c-4078-974e-714379243884" containerID="b9a4e6b5332e31b3b4226cbbd2d49f4c6e57429f3879656243b9be909902ea39" exitCode=0 Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.282863 5017 generic.go:334] "Generic (PLEG): container finished" podID="6d082326-495c-4078-974e-714379243884" containerID="ffd302baed269f819a892350ed39e7c2a2d614097fd561de04df743407c07d40" exitCode=0 Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.282872 5017 generic.go:334] "Generic (PLEG): container finished" podID="6d082326-495c-4078-974e-714379243884" containerID="3f1da0c6b7bd9861d571d1a8d0e937d175d1e5c1f0e245d87e1db566ec6c9a8c" exitCode=0 Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.282880 5017 generic.go:334] "Generic (PLEG): container finished" podID="6d082326-495c-4078-974e-714379243884" containerID="fd46ff39c6745d2a3794eec673948c7c169964d18cc7c9ac4b9f4a6f60045a35" exitCode=0 Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.282888 5017 generic.go:334] "Generic (PLEG): container finished" podID="6d082326-495c-4078-974e-714379243884" containerID="c652a43e8305c1a9910a6bd5b5ee06ee2fb25e0723a0ebf769650ee3997e0aeb" exitCode=0 Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.282895 5017 generic.go:334] "Generic (PLEG): container finished" podID="6d082326-495c-4078-974e-714379243884" containerID="18d811ebf9ab9b398503916cdeff50dc63ec6ab67d8237d525beefff9dcb167e" exitCode=0 Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.282905 5017 generic.go:334] "Generic (PLEG): container finished" podID="6d082326-495c-4078-974e-714379243884" containerID="06cfdaa53a75f84fd7f6330f5db9c5c157e9d48c62c75510196e0abae551fc02" exitCode=0 Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.282912 5017 generic.go:334] "Generic (PLEG): container finished" podID="6d082326-495c-4078-974e-714379243884" containerID="999b2815ecbfda6809422021c729d05da84256a8ca21f081cadf4bf9e49655ea" exitCode=0 Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.282920 5017 generic.go:334] "Generic (PLEG): container finished" podID="6d082326-495c-4078-974e-714379243884" 
containerID="51ab0b4a34c76b32e8b560b2df9c2b6bedc67ec5c4d63d10fdd36cb7f8ef1edc" exitCode=0 Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.282928 5017 generic.go:334] "Generic (PLEG): container finished" podID="6d082326-495c-4078-974e-714379243884" containerID="78c56e210bdc11cf82c13cdc564f8b588d7f490c0c73661ab691a65e75fe9237" exitCode=0 Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.282806 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"6d082326-495c-4078-974e-714379243884","Type":"ContainerDied","Data":"eea4192399327817cb3d1432e2e02524e5b714a0b135c1bb086ec0b7a261cb45"} Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.283034 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"6d082326-495c-4078-974e-714379243884","Type":"ContainerDied","Data":"bde85bd88a643b945fa8485a5e4e6b486a320b6804c92a8b2d9a9aa7bebed684"} Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.283053 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"6d082326-495c-4078-974e-714379243884","Type":"ContainerDied","Data":"1a932253de89bf41d237a07b2d680b87b0b80d0fe5212e5b251a5b4e577d88f0"} Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.283063 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"6d082326-495c-4078-974e-714379243884","Type":"ContainerDied","Data":"54137e09fc7407811fd958b7fb19f8b5d07304823df63ae17c627a6330f09db1"} Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.283075 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"6d082326-495c-4078-974e-714379243884","Type":"ContainerDied","Data":"b9a4e6b5332e31b3b4226cbbd2d49f4c6e57429f3879656243b9be909902ea39"} Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.283085 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"6d082326-495c-4078-974e-714379243884","Type":"ContainerDied","Data":"ffd302baed269f819a892350ed39e7c2a2d614097fd561de04df743407c07d40"} Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.283095 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"6d082326-495c-4078-974e-714379243884","Type":"ContainerDied","Data":"3f1da0c6b7bd9861d571d1a8d0e937d175d1e5c1f0e245d87e1db566ec6c9a8c"} Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.283106 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"6d082326-495c-4078-974e-714379243884","Type":"ContainerDied","Data":"fd46ff39c6745d2a3794eec673948c7c169964d18cc7c9ac4b9f4a6f60045a35"} Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.283116 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"6d082326-495c-4078-974e-714379243884","Type":"ContainerDied","Data":"c652a43e8305c1a9910a6bd5b5ee06ee2fb25e0723a0ebf769650ee3997e0aeb"} Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.283126 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"6d082326-495c-4078-974e-714379243884","Type":"ContainerDied","Data":"18d811ebf9ab9b398503916cdeff50dc63ec6ab67d8237d525beefff9dcb167e"} Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.283137 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"6d082326-495c-4078-974e-714379243884","Type":"ContainerDied","Data":"06cfdaa53a75f84fd7f6330f5db9c5c157e9d48c62c75510196e0abae551fc02"} Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.283147 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"6d082326-495c-4078-974e-714379243884","Type":"ContainerDied","Data":"999b2815ecbfda6809422021c729d05da84256a8ca21f081cadf4bf9e49655ea"} Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.283156 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"6d082326-495c-4078-974e-714379243884","Type":"ContainerDied","Data":"51ab0b4a34c76b32e8b560b2df9c2b6bedc67ec5c4d63d10fdd36cb7f8ef1edc"} Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.283166 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"6d082326-495c-4078-974e-714379243884","Type":"ContainerDied","Data":"78c56e210bdc11cf82c13cdc564f8b588d7f490c0c73661ab691a65e75fe9237"} Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.284195 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58a02d03-f3a8-4193-ba1d-623ecaa62fe9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "58a02d03-f3a8-4193-ba1d-623ecaa62fe9" (UID: "58a02d03-f3a8-4193-ba1d-623ecaa62fe9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.287231 5017 generic.go:334] "Generic (PLEG): container finished" podID="131b99f7-3558-4aec-a0bf-7c1ef0f35a2b" containerID="1e3d13255a0abc9a24e35788246958c76121b82d0582fabb929e39b38e4591aa" exitCode=0 Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.287314 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-mrhnf" event={"ID":"131b99f7-3558-4aec-a0bf-7c1ef0f35a2b","Type":"ContainerDied","Data":"1e3d13255a0abc9a24e35788246958c76121b82d0582fabb929e39b38e4591aa"} Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.291013 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcd6f8f8f-cxtdk" event={"ID":"c653a323-c9c2-42f2-a2af-125828234475","Type":"ContainerDied","Data":"05c50afbe8bcc22f3be698ed5b03d80360ec311ac61f2d7ac2210a0ba7051538"} Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.291117 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fcd6f8f8f-cxtdk" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.331178 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/572c6985-85a2-4a6d-8581-75b8c6b87322-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "572c6985-85a2-4a6d-8581-75b8c6b87322" (UID: "572c6985-85a2-4a6d-8581-75b8c6b87322"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.362129 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15d4a207-f6d2-48ce-9065-b3438a37b46d" path="/var/lib/kubelet/pods/15d4a207-f6d2-48ce-9065-b3438a37b46d/volumes" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.362674 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23be4105-cd73-4c7f-b967-8cac7cf8451d" path="/var/lib/kubelet/pods/23be4105-cd73-4c7f-b967-8cac7cf8451d/volumes" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.363760 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3505bbb1-d190-470a-84e7-18e9b3330a2f" path="/var/lib/kubelet/pods/3505bbb1-d190-470a-84e7-18e9b3330a2f/volumes" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.366028 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4991fdbc-2d83-45dd-91a3-b312347ff317" path="/var/lib/kubelet/pods/4991fdbc-2d83-45dd-91a3-b312347ff317/volumes" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.366776 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66c78bbb-a5a9-45e4-8825-a4d05dfa23b3" path="/var/lib/kubelet/pods/66c78bbb-a5a9-45e4-8825-a4d05dfa23b3/volumes" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.367747 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8600c4aa-101a-4803-b8c8-7313e2742c6c" path="/var/lib/kubelet/pods/8600c4aa-101a-4803-b8c8-7313e2742c6c/volumes" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.368904 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08c15cf8-f386-428a-a94a-c33598b182a9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "08c15cf8-f386-428a-a94a-c33598b182a9" (UID: "08c15cf8-f386-428a-a94a-c33598b182a9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.370165 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cccdb9f-b531-4534-8adc-6a64d16dd3fe" path="/var/lib/kubelet/pods/8cccdb9f-b531-4534-8adc-6a64d16dd3fe/volumes" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.371259 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e2f340e-2e9f-4711-a13b-1618bd1fbec4" path="/var/lib/kubelet/pods/8e2f340e-2e9f-4711-a13b-1618bd1fbec4/volumes" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.372095 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0b48b8f-8c59-4c86-a4fd-b1b408d6dbae" path="/var/lib/kubelet/pods/a0b48b8f-8c59-4c86-a4fd-b1b408d6dbae/volumes" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.374394 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="abd151c3-f255-4647-a923-3176a7dae25a" path="/var/lib/kubelet/pods/abd151c3-f255-4647-a923-3176a7dae25a/volumes" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.376372 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="afac06a5-272c-48d3-8916-775a5bd3eb54" path="/var/lib/kubelet/pods/afac06a5-272c-48d3-8916-775a5bd3eb54/volumes" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.378903 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b192039c-4ffa-451a-8149-e15c107ac8f2" path="/var/lib/kubelet/pods/b192039c-4ffa-451a-8149-e15c107ac8f2/volumes" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.379928 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c44df6b3-8b1f-4004-9629-46412a17cbf7" path="/var/lib/kubelet/pods/c44df6b3-8b1f-4004-9629-46412a17cbf7/volumes" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.382629 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c77a9d60-239f-4a28-b30a-6a9c4bdecb2b" path="/var/lib/kubelet/pods/c77a9d60-239f-4a28-b30a-6a9c4bdecb2b/volumes" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.383233 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e892cce7-8414-428d-a4f2-7aaff4b6bdd9" path="/var/lib/kubelet/pods/e892cce7-8414-428d-a4f2-7aaff4b6bdd9/volumes" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.383448 5017 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/572c6985-85a2-4a6d-8581-75b8c6b87322-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.383489 5017 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08c15cf8-f386-428a-a94a-c33598b182a9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.383500 5017 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58a02d03-f3a8-4193-ba1d-623ecaa62fe9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.386291 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f541766b-7fae-4f87-8c2b-97e269d15c84" path="/var/lib/kubelet/pods/f541766b-7fae-4f87-8c2b-97e269d15c84/volumes" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.387889 5017 scope.go:117] "RemoveContainer" containerID="2d5d0c8760d913b9ab3eeaa636e31dd474ed2dad3b92862aa8e197299b972bee" Jan 
29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.404469 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-northd-0"] Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.413378 5017 scope.go:117] "RemoveContainer" containerID="5856b1139cb9ac964f5c7a59c84bd77817a7664b0ef60849ed25c5857c226d37" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.420223 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/572c6985-85a2-4a6d-8581-75b8c6b87322-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "572c6985-85a2-4a6d-8581-75b8c6b87322" (UID: "572c6985-85a2-4a6d-8581-75b8c6b87322"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.437494 5017 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.445732 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58a02d03-f3a8-4193-ba1d-623ecaa62fe9-ovsdbserver-nb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-nb-tls-certs") pod "58a02d03-f3a8-4193-ba1d-623ecaa62fe9" (UID: "58a02d03-f3a8-4193-ba1d-623ecaa62fe9"). InnerVolumeSpecName "ovsdbserver-nb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.447314 5017 scope.go:117] "RemoveContainer" containerID="5d0c3be2b0978eb8f6117644b27656fb862fbe247178603731b87946cc5367a6" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.456289 5017 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.494104 5017 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/572c6985-85a2-4a6d-8581-75b8c6b87322-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.494161 5017 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.494183 5017 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/58a02d03-f3a8-4193-ba1d-623ecaa62fe9-ovsdbserver-nb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.494194 5017 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.506674 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08c15cf8-f386-428a-a94a-c33598b182a9-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "08c15cf8-f386-428a-a94a-c33598b182a9" (UID: "08c15cf8-f386-428a-a94a-c33598b182a9"). InnerVolumeSpecName "metrics-certs-tls-certs". 
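"UnmountDevice succeeded" for local-storage09-crc and local-storage12-crc is the device-level half of releasing a local PersistentVolume: TearDown removes the per-pod bind mount, UnmountDevice the shared global mount, after which the reconciler reports the volume detached. A quick node-side check that nothing is left mounted (PV names taken from the records above):

    # Look for lingering mounts of the two local PVs released here.
    mount | grep -E 'local-storage(09|12)-crc' \
      || echo "no lingering local-PV mounts"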
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.511082 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58a02d03-f3a8-4193-ba1d-623ecaa62fe9-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "58a02d03-f3a8-4193-ba1d-623ecaa62fe9" (UID: "58a02d03-f3a8-4193-ba1d-623ecaa62fe9"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.531939 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08c15cf8-f386-428a-a94a-c33598b182a9-ovsdbserver-sb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-sb-tls-certs") pod "08c15cf8-f386-428a-a94a-c33598b182a9" (UID: "08c15cf8-f386-428a-a94a-c33598b182a9"). InnerVolumeSpecName "ovsdbserver-sb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.596445 5017 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/58a02d03-f3a8-4193-ba1d-623ecaa62fe9-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.596481 5017 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/08c15cf8-f386-428a-a94a-c33598b182a9-ovsdbserver-sb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.596493 5017 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/08c15cf8-f386-428a-a94a-c33598b182a9-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.605599 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-bc7969485-9cbzw" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.606895 5017 scope.go:117] "RemoveContainer" containerID="937202b40c1e1fb6b564b0a310d47bb37664160e11586ea04719f1fc9662dc75" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.633797 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-bnw77"] Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.637072 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-metrics-bnw77"] Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.645668 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.645938 5017 scope.go:117] "RemoveContainer" containerID="dd1494bb8f06d376a772d0890f42484f96047c14209cf64f5a4fb14363143583" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.654572 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.698163 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a0801349-3235-495b-9747-8ce025aad149-run-httpd\") pod \"a0801349-3235-495b-9747-8ce025aad149\" (UID: \"a0801349-3235-495b-9747-8ce025aad149\") " Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.698327 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0801349-3235-495b-9747-8ce025aad149-combined-ca-bundle\") pod \"a0801349-3235-495b-9747-8ce025aad149\" (UID: \"a0801349-3235-495b-9747-8ce025aad149\") " Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.698396 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0801349-3235-495b-9747-8ce025aad149-public-tls-certs\") pod \"a0801349-3235-495b-9747-8ce025aad149\" (UID: \"a0801349-3235-495b-9747-8ce025aad149\") " Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.698416 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0801349-3235-495b-9747-8ce025aad149-internal-tls-certs\") pod \"a0801349-3235-495b-9747-8ce025aad149\" (UID: \"a0801349-3235-495b-9747-8ce025aad149\") " Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.698436 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qq2t6\" (UniqueName: \"kubernetes.io/projected/a0801349-3235-495b-9747-8ce025aad149-kube-api-access-qq2t6\") pod \"a0801349-3235-495b-9747-8ce025aad149\" (UID: \"a0801349-3235-495b-9747-8ce025aad149\") " Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.698674 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a0801349-3235-495b-9747-8ce025aad149-etc-swift\") pod \"a0801349-3235-495b-9747-8ce025aad149\" (UID: \"a0801349-3235-495b-9747-8ce025aad149\") " Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.698723 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0801349-3235-495b-9747-8ce025aad149-config-data\") pod \"a0801349-3235-495b-9747-8ce025aad149\" (UID: \"a0801349-3235-495b-9747-8ce025aad149\") " Jan 29 06:57:20 crc 
kubenswrapper[5017]: I0129 06:57:20.698763 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a0801349-3235-495b-9747-8ce025aad149-log-httpd\") pod \"a0801349-3235-495b-9747-8ce025aad149\" (UID: \"a0801349-3235-495b-9747-8ce025aad149\") " Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.699653 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0801349-3235-495b-9747-8ce025aad149-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a0801349-3235-495b-9747-8ce025aad149" (UID: "a0801349-3235-495b-9747-8ce025aad149"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.699840 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0801349-3235-495b-9747-8ce025aad149-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a0801349-3235-495b-9747-8ce025aad149" (UID: "a0801349-3235-495b-9747-8ce025aad149"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.705474 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0801349-3235-495b-9747-8ce025aad149-kube-api-access-qq2t6" (OuterVolumeSpecName: "kube-api-access-qq2t6") pod "a0801349-3235-495b-9747-8ce025aad149" (UID: "a0801349-3235-495b-9747-8ce025aad149"). InnerVolumeSpecName "kube-api-access-qq2t6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.706238 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0801349-3235-495b-9747-8ce025aad149-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "a0801349-3235-495b-9747-8ce025aad149" (UID: "a0801349-3235-495b-9747-8ce025aad149"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.757692 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0801349-3235-495b-9747-8ce025aad149-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a0801349-3235-495b-9747-8ce025aad149" (UID: "a0801349-3235-495b-9747-8ce025aad149"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.758435 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0801349-3235-495b-9747-8ce025aad149-config-data" (OuterVolumeSpecName: "config-data") pod "a0801349-3235-495b-9747-8ce025aad149" (UID: "a0801349-3235-495b-9747-8ce025aad149"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.758488 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0801349-3235-495b-9747-8ce025aad149-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "a0801349-3235-495b-9747-8ce025aad149" (UID: "a0801349-3235-495b-9747-8ce025aad149"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:57:20 crc kubenswrapper[5017]: E0129 06:57:20.773832 5017 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="baa50265eb2c67dc53b4314533ab16055fd8bdaab7aec5147ecfb367c70320fc" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 29 06:57:20 crc kubenswrapper[5017]: E0129 06:57:20.779215 5017 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="baa50265eb2c67dc53b4314533ab16055fd8bdaab7aec5147ecfb367c70320fc" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 29 06:57:20 crc kubenswrapper[5017]: E0129 06:57:20.782704 5017 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="baa50265eb2c67dc53b4314533ab16055fd8bdaab7aec5147ecfb367c70320fc" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 29 06:57:20 crc kubenswrapper[5017]: E0129 06:57:20.782783 5017 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="a909c2d3-90a4-41a0-af8e-ddb69ed4f41b" containerName="nova-scheduler-scheduler" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.789266 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0801349-3235-495b-9747-8ce025aad149-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "a0801349-3235-495b-9747-8ce025aad149" (UID: "a0801349-3235-495b-9747-8ce025aad149"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.804267 5017 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0801349-3235-495b-9747-8ce025aad149-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.804300 5017 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a0801349-3235-495b-9747-8ce025aad149-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.804310 5017 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a0801349-3235-495b-9747-8ce025aad149-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.804338 5017 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0801349-3235-495b-9747-8ce025aad149-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.804352 5017 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0801349-3235-495b-9747-8ce025aad149-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.804363 5017 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0801349-3235-495b-9747-8ce025aad149-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.804375 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qq2t6\" (UniqueName: \"kubernetes.io/projected/a0801349-3235-495b-9747-8ce025aad149-kube-api-access-qq2t6\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.804384 5017 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a0801349-3235-495b-9747-8ce025aad149-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:20 crc kubenswrapper[5017]: E0129 06:57:20.805260 5017 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 29 06:57:20 crc kubenswrapper[5017]: E0129 06:57:20.805519 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a-config-data podName:5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a nodeName:}" failed. No retries permitted until 2026-01-29 06:57:24.805489813 +0000 UTC m=+1331.179937463 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a-config-data") pod "rabbitmq-server-0" (UID: "5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a") : configmap "rabbitmq-config-data" not found Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.837950 5017 scope.go:117] "RemoveContainer" containerID="8b0e655613ff88e4336cd67c78361a1d00ccdd95612e304ec5f2445083d8f735" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.840484 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-381c-account-create-update-ngzmx"] Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.855934 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.865377 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.885981 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-01d7-account-create-update-n9thm"] Jan 29 06:57:20 crc kubenswrapper[5017]: E0129 06:57:20.887133 5017 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 29 06:57:20 crc kubenswrapper[5017]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[/bin/sh -c #!/bin/bash Jan 29 06:57:20 crc kubenswrapper[5017]: Jan 29 06:57:20 crc kubenswrapper[5017]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 29 06:57:20 crc kubenswrapper[5017]: Jan 29 06:57:20 crc kubenswrapper[5017]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 29 06:57:20 crc kubenswrapper[5017]: Jan 29 06:57:20 crc kubenswrapper[5017]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 29 06:57:20 crc kubenswrapper[5017]: Jan 29 06:57:20 crc kubenswrapper[5017]: if [ -n "barbican" ]; then Jan 29 06:57:20 crc kubenswrapper[5017]: GRANT_DATABASE="barbican" Jan 29 06:57:20 crc kubenswrapper[5017]: else Jan 29 06:57:20 crc kubenswrapper[5017]: GRANT_DATABASE="*" Jan 29 06:57:20 crc kubenswrapper[5017]: fi Jan 29 06:57:20 crc kubenswrapper[5017]: Jan 29 06:57:20 crc kubenswrapper[5017]: # going for maximum compatibility here: Jan 29 06:57:20 crc kubenswrapper[5017]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 29 06:57:20 crc kubenswrapper[5017]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 29 06:57:20 crc kubenswrapper[5017]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Jan 29 06:57:20 crc kubenswrapper[5017]: # support updates Jan 29 06:57:20 crc kubenswrapper[5017]: Jan 29 06:57:20 crc kubenswrapper[5017]: $MYSQL_CMD < logger="UnhandledError" Jan 29 06:57:20 crc kubenswrapper[5017]: E0129 06:57:20.888403 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"barbican-db-secret\\\" not found\"" pod="openstack/barbican-381c-account-create-update-ngzmx" podUID="66475828-326a-4b57-baea-e209e519d639" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.893656 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-2d72-account-create-update-hj9h9"] Jan 29 06:57:20 crc kubenswrapper[5017]: E0129 06:57:20.906710 5017 configmap.go:193] Couldn't get configMap openstack/openstack-cell1-scripts: configmap "openstack-cell1-scripts" not found Jan 29 06:57:20 crc kubenswrapper[5017]: E0129 06:57:20.906812 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0adabddf-74aa-416a-afef-b24b39897f9c-operator-scripts podName:0adabddf-74aa-416a-afef-b24b39897f9c nodeName:}" failed. No retries permitted until 2026-01-29 06:57:22.906788493 +0000 UTC m=+1329.281236103 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/0adabddf-74aa-416a-afef-b24b39897f9c-operator-scripts") pod "nova-cell1-fd15-account-create-update-q6477" (UID: "0adabddf-74aa-416a-afef-b24b39897f9c") : configmap "openstack-cell1-scripts" not found Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.915726 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-4qvq2"] Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.921654 5017 scope.go:117] "RemoveContainer" containerID="fa179775b1ca9fee80f8f1451a4e94f0855c85741c538bb2b667dddb2502f32c" Jan 29 06:57:20 crc kubenswrapper[5017]: E0129 06:57:20.927781 5017 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 29 06:57:20 crc kubenswrapper[5017]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[/bin/sh -c #!/bin/bash Jan 29 06:57:20 crc kubenswrapper[5017]: Jan 29 06:57:20 crc kubenswrapper[5017]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 29 06:57:20 crc kubenswrapper[5017]: Jan 29 06:57:20 crc kubenswrapper[5017]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 29 06:57:20 crc kubenswrapper[5017]: Jan 29 06:57:20 crc kubenswrapper[5017]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 29 06:57:20 crc kubenswrapper[5017]: Jan 29 06:57:20 crc kubenswrapper[5017]: if [ -n "cinder" ]; then Jan 29 06:57:20 crc kubenswrapper[5017]: GRANT_DATABASE="cinder" Jan 29 06:57:20 crc kubenswrapper[5017]: else Jan 29 06:57:20 crc kubenswrapper[5017]: GRANT_DATABASE="*" Jan 29 06:57:20 crc kubenswrapper[5017]: fi Jan 29 06:57:20 crc kubenswrapper[5017]: Jan 29 06:57:20 crc kubenswrapper[5017]: # going for maximum compatibility here: Jan 29 06:57:20 crc kubenswrapper[5017]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 29 06:57:20 crc kubenswrapper[5017]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 29 06:57:20 crc kubenswrapper[5017]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Jan 29 06:57:20 crc kubenswrapper[5017]: # support updates Jan 29 06:57:20 crc kubenswrapper[5017]: Jan 29 06:57:20 crc kubenswrapper[5017]: $MYSQL_CMD < logger="UnhandledError" Jan 29 06:57:20 crc kubenswrapper[5017]: E0129 06:57:20.929754 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"cinder-db-secret\\\" not found\"" pod="openstack/cinder-01d7-account-create-update-n9thm" podUID="615b2757-5eab-4454-95da-663755846932" Jan 29 06:57:20 crc kubenswrapper[5017]: E0129 06:57:20.947452 5017 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 29 06:57:20 crc kubenswrapper[5017]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[/bin/sh -c #!/bin/bash Jan 29 06:57:20 crc kubenswrapper[5017]: Jan 29 06:57:20 crc kubenswrapper[5017]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 29 06:57:20 crc kubenswrapper[5017]: Jan 29 06:57:20 crc kubenswrapper[5017]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 29 06:57:20 crc kubenswrapper[5017]: Jan 29 06:57:20 crc kubenswrapper[5017]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 29 06:57:20 crc kubenswrapper[5017]: Jan 29 06:57:20 crc kubenswrapper[5017]: if [ -n "neutron" ]; then Jan 29 06:57:20 crc kubenswrapper[5017]: GRANT_DATABASE="neutron" Jan 29 06:57:20 crc kubenswrapper[5017]: else Jan 29 06:57:20 crc kubenswrapper[5017]: GRANT_DATABASE="*" Jan 29 06:57:20 crc kubenswrapper[5017]: fi Jan 29 06:57:20 crc kubenswrapper[5017]: Jan 29 06:57:20 crc kubenswrapper[5017]: # going for maximum compatibility here: Jan 29 06:57:20 crc kubenswrapper[5017]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 29 06:57:20 crc kubenswrapper[5017]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 29 06:57:20 crc kubenswrapper[5017]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Jan 29 06:57:20 crc kubenswrapper[5017]: # support updates Jan 29 06:57:20 crc kubenswrapper[5017]: Jan 29 06:57:20 crc kubenswrapper[5017]: $MYSQL_CMD < logger="UnhandledError" Jan 29 06:57:20 crc kubenswrapper[5017]: E0129 06:57:20.949571 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"neutron-db-secret\\\" not found\"" pod="openstack/neutron-2d72-account-create-update-hj9h9" podUID="7133c436-5656-4d57-aca3-64e9542ef299" Jan 29 06:57:20 crc kubenswrapper[5017]: I0129 06:57:20.986895 5017 scope.go:117] "RemoveContainer" containerID="51ea0b6a63df696a6c8299801190aa0bb86841398e2ad16680fb32723e8af016" Jan 29 06:57:21 crc kubenswrapper[5017]: I0129 06:57:21.077748 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0fca-account-create-update-7k6jq"] Jan 29 06:57:21 crc kubenswrapper[5017]: I0129 06:57:21.103650 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-fd15-account-create-update-q6477"] Jan 29 06:57:21 crc kubenswrapper[5017]: E0129 06:57:21.108791 5017 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 29 06:57:21 crc kubenswrapper[5017]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[/bin/sh -c #!/bin/bash Jan 29 06:57:21 crc kubenswrapper[5017]: Jan 29 06:57:21 crc kubenswrapper[5017]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 29 06:57:21 crc kubenswrapper[5017]: Jan 29 06:57:21 crc kubenswrapper[5017]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 29 06:57:21 crc kubenswrapper[5017]: Jan 29 06:57:21 crc kubenswrapper[5017]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 29 06:57:21 crc kubenswrapper[5017]: Jan 29 06:57:21 crc kubenswrapper[5017]: if [ -n "nova_api" ]; then Jan 29 06:57:21 crc kubenswrapper[5017]: GRANT_DATABASE="nova_api" Jan 29 06:57:21 crc kubenswrapper[5017]: else Jan 29 06:57:21 crc kubenswrapper[5017]: GRANT_DATABASE="*" Jan 29 06:57:21 crc kubenswrapper[5017]: fi Jan 29 06:57:21 crc kubenswrapper[5017]: Jan 29 06:57:21 crc kubenswrapper[5017]: # going for maximum compatibility here: Jan 29 06:57:21 crc kubenswrapper[5017]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 29 06:57:21 crc kubenswrapper[5017]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 29 06:57:21 crc kubenswrapper[5017]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Jan 29 06:57:21 crc kubenswrapper[5017]: # support updates Jan 29 06:57:21 crc kubenswrapper[5017]: Jan 29 06:57:21 crc kubenswrapper[5017]: $MYSQL_CMD < logger="UnhandledError" Jan 29 06:57:21 crc kubenswrapper[5017]: E0129 06:57:21.110783 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"nova-api-db-secret\\\" not found\"" pod="openstack/nova-api-0fca-account-create-update-7k6jq" podUID="c55eb047-3255-4a79-8e32-dfb786de8794" Jan 29 06:57:21 crc kubenswrapper[5017]: E0129 06:57:21.115979 5017 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="86c4c472b3c3903633b1b443d03a5262fb47cb7d8c1f6bdbf87f276659c9e547" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 29 06:57:21 crc kubenswrapper[5017]: E0129 06:57:21.117873 5017 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="86c4c472b3c3903633b1b443d03a5262fb47cb7d8c1f6bdbf87f276659c9e547" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 29 06:57:21 crc kubenswrapper[5017]: W0129 06:57:21.120919 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0adabddf_74aa_416a_afef_b24b39897f9c.slice/crio-16f4897fd47b24991b802f5e64920e0add3f24ce843a349bb27b3395ef835a27 WatchSource:0}: Error finding container 16f4897fd47b24991b802f5e64920e0add3f24ce843a349bb27b3395ef835a27: Status 404 returned error can't find the container with id 16f4897fd47b24991b802f5e64920e0add3f24ce843a349bb27b3395ef835a27 Jan 29 06:57:21 crc kubenswrapper[5017]: E0129 06:57:21.123031 5017 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="86c4c472b3c3903633b1b443d03a5262fb47cb7d8c1f6bdbf87f276659c9e547" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 29 06:57:21 crc kubenswrapper[5017]: E0129 06:57:21.123179 5017 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="aaa7337b-7686-4fd2-9c52-6b76f9f3a3b1" containerName="nova-cell0-conductor-conductor" Jan 29 06:57:21 crc kubenswrapper[5017]: E0129 06:57:21.124906 5017 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 29 06:57:21 crc kubenswrapper[5017]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[/bin/sh -c #!/bin/bash Jan 29 06:57:21 crc kubenswrapper[5017]: Jan 29 06:57:21 crc kubenswrapper[5017]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 29 06:57:21 crc kubenswrapper[5017]: Jan 29 06:57:21 crc kubenswrapper[5017]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 29 06:57:21 crc kubenswrapper[5017]: Jan 29 06:57:21 crc kubenswrapper[5017]: MYSQL_CMD="mysql -h -u root 
-P 3306" Jan 29 06:57:21 crc kubenswrapper[5017]: Jan 29 06:57:21 crc kubenswrapper[5017]: if [ -n "nova_cell1" ]; then Jan 29 06:57:21 crc kubenswrapper[5017]: GRANT_DATABASE="nova_cell1" Jan 29 06:57:21 crc kubenswrapper[5017]: else Jan 29 06:57:21 crc kubenswrapper[5017]: GRANT_DATABASE="*" Jan 29 06:57:21 crc kubenswrapper[5017]: fi Jan 29 06:57:21 crc kubenswrapper[5017]: Jan 29 06:57:21 crc kubenswrapper[5017]: # going for maximum compatibility here: Jan 29 06:57:21 crc kubenswrapper[5017]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 29 06:57:21 crc kubenswrapper[5017]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 29 06:57:21 crc kubenswrapper[5017]: # 3. create user with CREATE but then do all password and TLS with ALTER to Jan 29 06:57:21 crc kubenswrapper[5017]: # support updates Jan 29 06:57:21 crc kubenswrapper[5017]: Jan 29 06:57:21 crc kubenswrapper[5017]: $MYSQL_CMD < logger="UnhandledError" Jan 29 06:57:21 crc kubenswrapper[5017]: E0129 06:57:21.128570 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"nova-cell1-db-secret\\\" not found\"" pod="openstack/nova-cell1-fd15-account-create-update-q6477" podUID="0adabddf-74aa-416a-afef-b24b39897f9c" Jan 29 06:57:21 crc kubenswrapper[5017]: I0129 06:57:21.134746 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 29 06:57:21 crc kubenswrapper[5017]: I0129 06:57:21.211869 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/73788076-4208-4f0f-8c66-95ef1bfb28b6-vencrypt-tls-certs\") pod \"73788076-4208-4f0f-8c66-95ef1bfb28b6\" (UID: \"73788076-4208-4f0f-8c66-95ef1bfb28b6\") " Jan 29 06:57:21 crc kubenswrapper[5017]: I0129 06:57:21.212004 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/73788076-4208-4f0f-8c66-95ef1bfb28b6-nova-novncproxy-tls-certs\") pod \"73788076-4208-4f0f-8c66-95ef1bfb28b6\" (UID: \"73788076-4208-4f0f-8c66-95ef1bfb28b6\") " Jan 29 06:57:21 crc kubenswrapper[5017]: I0129 06:57:21.212093 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73788076-4208-4f0f-8c66-95ef1bfb28b6-combined-ca-bundle\") pod \"73788076-4208-4f0f-8c66-95ef1bfb28b6\" (UID: \"73788076-4208-4f0f-8c66-95ef1bfb28b6\") " Jan 29 06:57:21 crc kubenswrapper[5017]: I0129 06:57:21.212153 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73788076-4208-4f0f-8c66-95ef1bfb28b6-config-data\") pod \"73788076-4208-4f0f-8c66-95ef1bfb28b6\" (UID: \"73788076-4208-4f0f-8c66-95ef1bfb28b6\") " Jan 29 06:57:21 crc kubenswrapper[5017]: I0129 06:57:21.212190 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vfmjf\" (UniqueName: \"kubernetes.io/projected/73788076-4208-4f0f-8c66-95ef1bfb28b6-kube-api-access-vfmjf\") pod \"73788076-4208-4f0f-8c66-95ef1bfb28b6\" (UID: \"73788076-4208-4f0f-8c66-95ef1bfb28b6\") " Jan 29 06:57:21 crc kubenswrapper[5017]: I0129 06:57:21.220118 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/73788076-4208-4f0f-8c66-95ef1bfb28b6-kube-api-access-vfmjf" (OuterVolumeSpecName: "kube-api-access-vfmjf") pod "73788076-4208-4f0f-8c66-95ef1bfb28b6" (UID: "73788076-4208-4f0f-8c66-95ef1bfb28b6"). InnerVolumeSpecName "kube-api-access-vfmjf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:57:21 crc kubenswrapper[5017]: I0129 06:57:21.247491 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73788076-4208-4f0f-8c66-95ef1bfb28b6-config-data" (OuterVolumeSpecName: "config-data") pod "73788076-4208-4f0f-8c66-95ef1bfb28b6" (UID: "73788076-4208-4f0f-8c66-95ef1bfb28b6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:57:21 crc kubenswrapper[5017]: I0129 06:57:21.272476 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73788076-4208-4f0f-8c66-95ef1bfb28b6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "73788076-4208-4f0f-8c66-95ef1bfb28b6" (UID: "73788076-4208-4f0f-8c66-95ef1bfb28b6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:57:21 crc kubenswrapper[5017]: I0129 06:57:21.282952 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73788076-4208-4f0f-8c66-95ef1bfb28b6-vencrypt-tls-certs" (OuterVolumeSpecName: "vencrypt-tls-certs") pod "73788076-4208-4f0f-8c66-95ef1bfb28b6" (UID: "73788076-4208-4f0f-8c66-95ef1bfb28b6"). InnerVolumeSpecName "vencrypt-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:57:21 crc kubenswrapper[5017]: I0129 06:57:21.288156 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73788076-4208-4f0f-8c66-95ef1bfb28b6-nova-novncproxy-tls-certs" (OuterVolumeSpecName: "nova-novncproxy-tls-certs") pod "73788076-4208-4f0f-8c66-95ef1bfb28b6" (UID: "73788076-4208-4f0f-8c66-95ef1bfb28b6"). InnerVolumeSpecName "nova-novncproxy-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:57:21 crc kubenswrapper[5017]: I0129 06:57:21.306231 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-01d7-account-create-update-n9thm" event={"ID":"615b2757-5eab-4454-95da-663755846932","Type":"ContainerStarted","Data":"e13ee67e40b9fceb0a185a9928bd88ba54910f2652e08c3811cc313f02441060"} Jan 29 06:57:21 crc kubenswrapper[5017]: I0129 06:57:21.311197 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0fca-account-create-update-7k6jq" event={"ID":"c55eb047-3255-4a79-8e32-dfb786de8794","Type":"ContainerStarted","Data":"33349de7a8b737bd350506c887b12f23cd1a75b09ffd41e1b5258dfed92aa034"} Jan 29 06:57:21 crc kubenswrapper[5017]: I0129 06:57:21.314948 5017 reconciler_common.go:293] "Volume detached for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/73788076-4208-4f0f-8c66-95ef1bfb28b6-vencrypt-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:21 crc kubenswrapper[5017]: I0129 06:57:21.315088 5017 reconciler_common.go:293] "Volume detached for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/73788076-4208-4f0f-8c66-95ef1bfb28b6-nova-novncproxy-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:21 crc kubenswrapper[5017]: I0129 06:57:21.315101 5017 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73788076-4208-4f0f-8c66-95ef1bfb28b6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:21 crc kubenswrapper[5017]: I0129 06:57:21.315111 5017 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73788076-4208-4f0f-8c66-95ef1bfb28b6-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:21 crc kubenswrapper[5017]: I0129 06:57:21.315140 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vfmjf\" (UniqueName: \"kubernetes.io/projected/73788076-4208-4f0f-8c66-95ef1bfb28b6-kube-api-access-vfmjf\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:21 crc kubenswrapper[5017]: I0129 06:57:21.360126 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-bc7969485-9cbzw" Jan 29 06:57:21 crc kubenswrapper[5017]: I0129 06:57:21.360445 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-bc7969485-9cbzw" event={"ID":"a0801349-3235-495b-9747-8ce025aad149","Type":"ContainerDied","Data":"ada61d0fac333958afe435024c69de8af387384a657773148545d3de12159485"} Jan 29 06:57:21 crc kubenswrapper[5017]: I0129 06:57:21.360528 5017 scope.go:117] "RemoveContainer" containerID="9d3b90e2a526d03bc001996d30c041b59b2169975c9838eadd4d27870c43ad13" Jan 29 06:57:21 crc kubenswrapper[5017]: I0129 06:57:21.422744 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-fd15-account-create-update-q6477" event={"ID":"0adabddf-74aa-416a-afef-b24b39897f9c","Type":"ContainerStarted","Data":"16f4897fd47b24991b802f5e64920e0add3f24ce843a349bb27b3395ef835a27"} Jan 29 06:57:21 crc kubenswrapper[5017]: I0129 06:57:21.425091 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-bc7969485-9cbzw"] Jan 29 06:57:21 crc kubenswrapper[5017]: I0129 06:57:21.433142 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-proxy-bc7969485-9cbzw"] Jan 29 06:57:21 crc kubenswrapper[5017]: I0129 06:57:21.434704 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 29 06:57:21 crc kubenswrapper[5017]: I0129 06:57:21.439378 5017 generic.go:334] "Generic (PLEG): container finished" podID="73788076-4208-4f0f-8c66-95ef1bfb28b6" containerID="2210020dfa4716854368968b348fbd02597baa1b8afac742e90bc9b808a72ae6" exitCode=0 Jan 29 06:57:21 crc kubenswrapper[5017]: I0129 06:57:21.439559 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 29 06:57:21 crc kubenswrapper[5017]: I0129 06:57:21.441093 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"73788076-4208-4f0f-8c66-95ef1bfb28b6","Type":"ContainerDied","Data":"2210020dfa4716854368968b348fbd02597baa1b8afac742e90bc9b808a72ae6"} Jan 29 06:57:21 crc kubenswrapper[5017]: I0129 06:57:21.441149 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"73788076-4208-4f0f-8c66-95ef1bfb28b6","Type":"ContainerDied","Data":"57b80fd0e54fbbda0581b0e548a2d554f0ef7e7c85cb96f4a5eb0b9e73292dc9"} Jan 29 06:57:21 crc kubenswrapper[5017]: I0129 06:57:21.450789 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-2d72-account-create-update-hj9h9" event={"ID":"7133c436-5656-4d57-aca3-64e9542ef299","Type":"ContainerStarted","Data":"83121bc47d884a3cc20a011c66bb0cad495ad035cdfccd1db485e9a2ec5eefdc"} Jan 29 06:57:21 crc kubenswrapper[5017]: I0129 06:57:21.459833 5017 scope.go:117] "RemoveContainer" containerID="dc162fbc000aa89fe180adad58f6d207b657135b6fb5dd62ecbf93d7c4e1bbe0" Jan 29 06:57:21 crc kubenswrapper[5017]: I0129 06:57:21.474029 5017 generic.go:334] "Generic (PLEG): container finished" podID="ec5c09bc-f98c-4587-b4e3-ec9269c04a71" containerID="0762eb515121f428ad670c99dbbc9df148f038b1dfcf835e5b82100fdb4c0a75" exitCode=0 Jan 29 06:57:21 crc kubenswrapper[5017]: I0129 06:57:21.474102 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"ec5c09bc-f98c-4587-b4e3-ec9269c04a71","Type":"ContainerDied","Data":"0762eb515121f428ad670c99dbbc9df148f038b1dfcf835e5b82100fdb4c0a75"} Jan 29 06:57:21 crc kubenswrapper[5017]: I0129 06:57:21.474240 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 29 06:57:21 crc kubenswrapper[5017]: I0129 06:57:21.504048 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-381c-account-create-update-ngzmx" event={"ID":"66475828-326a-4b57-baea-e209e519d639","Type":"ContainerStarted","Data":"c9d887916964ab38e1fa5ca2691cae658c46f119a5e395d629a48044f1949cca"} Jan 29 06:57:21 crc kubenswrapper[5017]: I0129 06:57:21.517591 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ec5c09bc-f98c-4587-b4e3-ec9269c04a71-config-data-default\") pod \"ec5c09bc-f98c-4587-b4e3-ec9269c04a71\" (UID: \"ec5c09bc-f98c-4587-b4e3-ec9269c04a71\") " Jan 29 06:57:21 crc kubenswrapper[5017]: I0129 06:57:21.517718 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec5c09bc-f98c-4587-b4e3-ec9269c04a71-galera-tls-certs\") pod \"ec5c09bc-f98c-4587-b4e3-ec9269c04a71\" (UID: \"ec5c09bc-f98c-4587-b4e3-ec9269c04a71\") " Jan 29 06:57:21 crc kubenswrapper[5017]: I0129 06:57:21.518000 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ec5c09bc-f98c-4587-b4e3-ec9269c04a71-kolla-config\") pod \"ec5c09bc-f98c-4587-b4e3-ec9269c04a71\" (UID: \"ec5c09bc-f98c-4587-b4e3-ec9269c04a71\") " Jan 29 06:57:21 crc kubenswrapper[5017]: I0129 06:57:21.518032 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ec5c09bc-f98c-4587-b4e3-ec9269c04a71\" (UID: \"ec5c09bc-f98c-4587-b4e3-ec9269c04a71\") " Jan 29 06:57:21 crc kubenswrapper[5017]: I0129 06:57:21.518213 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec5c09bc-f98c-4587-b4e3-ec9269c04a71-combined-ca-bundle\") pod \"ec5c09bc-f98c-4587-b4e3-ec9269c04a71\" (UID: \"ec5c09bc-f98c-4587-b4e3-ec9269c04a71\") " Jan 29 06:57:21 crc kubenswrapper[5017]: I0129 06:57:21.518295 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ec5c09bc-f98c-4587-b4e3-ec9269c04a71-config-data-generated\") pod \"ec5c09bc-f98c-4587-b4e3-ec9269c04a71\" (UID: \"ec5c09bc-f98c-4587-b4e3-ec9269c04a71\") " Jan 29 06:57:21 crc kubenswrapper[5017]: I0129 06:57:21.518326 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mqk5q\" (UniqueName: \"kubernetes.io/projected/ec5c09bc-f98c-4587-b4e3-ec9269c04a71-kube-api-access-mqk5q\") pod \"ec5c09bc-f98c-4587-b4e3-ec9269c04a71\" (UID: \"ec5c09bc-f98c-4587-b4e3-ec9269c04a71\") " Jan 29 06:57:21 crc kubenswrapper[5017]: I0129 06:57:21.518410 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec5c09bc-f98c-4587-b4e3-ec9269c04a71-operator-scripts\") pod \"ec5c09bc-f98c-4587-b4e3-ec9269c04a71\" (UID: \"ec5c09bc-f98c-4587-b4e3-ec9269c04a71\") " Jan 29 06:57:21 crc kubenswrapper[5017]: I0129 06:57:21.520151 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec5c09bc-f98c-4587-b4e3-ec9269c04a71-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ec5c09bc-f98c-4587-b4e3-ec9269c04a71" (UID: 
"ec5c09bc-f98c-4587-b4e3-ec9269c04a71"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:57:21 crc kubenswrapper[5017]: I0129 06:57:21.520712 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec5c09bc-f98c-4587-b4e3-ec9269c04a71-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "ec5c09bc-f98c-4587-b4e3-ec9269c04a71" (UID: "ec5c09bc-f98c-4587-b4e3-ec9269c04a71"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:57:21 crc kubenswrapper[5017]: I0129 06:57:21.522084 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec5c09bc-f98c-4587-b4e3-ec9269c04a71-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "ec5c09bc-f98c-4587-b4e3-ec9269c04a71" (UID: "ec5c09bc-f98c-4587-b4e3-ec9269c04a71"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:57:21 crc kubenswrapper[5017]: I0129 06:57:21.522168 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec5c09bc-f98c-4587-b4e3-ec9269c04a71-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "ec5c09bc-f98c-4587-b4e3-ec9269c04a71" (UID: "ec5c09bc-f98c-4587-b4e3-ec9269c04a71"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:57:21 crc kubenswrapper[5017]: I0129 06:57:21.535626 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec5c09bc-f98c-4587-b4e3-ec9269c04a71-kube-api-access-mqk5q" (OuterVolumeSpecName: "kube-api-access-mqk5q") pod "ec5c09bc-f98c-4587-b4e3-ec9269c04a71" (UID: "ec5c09bc-f98c-4587-b4e3-ec9269c04a71"). InnerVolumeSpecName "kube-api-access-mqk5q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:57:21 crc kubenswrapper[5017]: I0129 06:57:21.546016 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-4qvq2" event={"ID":"3bf19cd1-b93c-449d-ba04-7fecd2ab65e2","Type":"ContainerStarted","Data":"5d57b0f86290389fbf186595a63f56257a3df02104345fbb791e8a5ce9bceb7a"} Jan 29 06:57:21 crc kubenswrapper[5017]: I0129 06:57:21.546142 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-4qvq2" event={"ID":"3bf19cd1-b93c-449d-ba04-7fecd2ab65e2","Type":"ContainerStarted","Data":"6276c128869f701d011c860f78e0e60b2462638f473965139bcb3824b0dd8b96"} Jan 29 06:57:21 crc kubenswrapper[5017]: I0129 06:57:21.546862 5017 scope.go:117] "RemoveContainer" containerID="5d57b0f86290389fbf186595a63f56257a3df02104345fbb791e8a5ce9bceb7a" Jan 29 06:57:21 crc kubenswrapper[5017]: I0129 06:57:21.576406 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "mysql-db") pod "ec5c09bc-f98c-4587-b4e3-ec9269c04a71" (UID: "ec5c09bc-f98c-4587-b4e3-ec9269c04a71"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 29 06:57:21 crc kubenswrapper[5017]: I0129 06:57:21.578187 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec5c09bc-f98c-4587-b4e3-ec9269c04a71-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ec5c09bc-f98c-4587-b4e3-ec9269c04a71" (UID: "ec5c09bc-f98c-4587-b4e3-ec9269c04a71"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:57:21 crc kubenswrapper[5017]: I0129 06:57:21.624332 5017 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ec5c09bc-f98c-4587-b4e3-ec9269c04a71-kolla-config\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:21 crc kubenswrapper[5017]: I0129 06:57:21.624392 5017 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Jan 29 06:57:21 crc kubenswrapper[5017]: I0129 06:57:21.624404 5017 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec5c09bc-f98c-4587-b4e3-ec9269c04a71-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:21 crc kubenswrapper[5017]: I0129 06:57:21.624414 5017 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ec5c09bc-f98c-4587-b4e3-ec9269c04a71-config-data-generated\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:21 crc kubenswrapper[5017]: I0129 06:57:21.624429 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mqk5q\" (UniqueName: \"kubernetes.io/projected/ec5c09bc-f98c-4587-b4e3-ec9269c04a71-kube-api-access-mqk5q\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:21 crc kubenswrapper[5017]: I0129 06:57:21.624440 5017 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec5c09bc-f98c-4587-b4e3-ec9269c04a71-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:21 crc kubenswrapper[5017]: I0129 06:57:21.624449 5017 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ec5c09bc-f98c-4587-b4e3-ec9269c04a71-config-data-default\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:21 crc kubenswrapper[5017]: I0129 06:57:21.627330 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec5c09bc-f98c-4587-b4e3-ec9269c04a71-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "ec5c09bc-f98c-4587-b4e3-ec9269c04a71" (UID: "ec5c09bc-f98c-4587-b4e3-ec9269c04a71"). InnerVolumeSpecName "galera-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:57:21 crc kubenswrapper[5017]: I0129 06:57:21.681469 5017 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Jan 29 06:57:21 crc kubenswrapper[5017]: I0129 06:57:21.755828 5017 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec5c09bc-f98c-4587-b4e3-ec9269c04a71-galera-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:21 crc kubenswrapper[5017]: I0129 06:57:21.755890 5017 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:21 crc kubenswrapper[5017]: I0129 06:57:21.767672 5017 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="71f6aede-754b-476f-8082-78f0e50b6a39" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.167:8776/healthcheck\": read tcp 10.217.0.2:38570->10.217.0.167:8776: read: connection reset by peer" Jan 29 06:57:21 crc kubenswrapper[5017]: I0129 06:57:21.940487 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 06:57:21 crc kubenswrapper[5017]: I0129 06:57:21.941140 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3131ebc7-0955-4d4d-8444-057df1cc52f1" containerName="ceilometer-central-agent" containerID="cri-o://527c5a9750ed60502bff4d26ac1e68687da3387f7cd2976f75db5bc9ef3b8d4f" gracePeriod=30 Jan 29 06:57:21 crc kubenswrapper[5017]: I0129 06:57:21.941502 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3131ebc7-0955-4d4d-8444-057df1cc52f1" containerName="proxy-httpd" containerID="cri-o://61f4dffdd2cc2b445230967b2177b548f4408eb3732a47610acca18c5946b0d3" gracePeriod=30 Jan 29 06:57:21 crc kubenswrapper[5017]: I0129 06:57:21.941658 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3131ebc7-0955-4d4d-8444-057df1cc52f1" containerName="sg-core" containerID="cri-o://be24094753ade4791793ba42b3d49712012104ef86a88d92d47df03f0450fcb4" gracePeriod=30 Jan 29 06:57:21 crc kubenswrapper[5017]: I0129 06:57:21.941718 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3131ebc7-0955-4d4d-8444-057df1cc52f1" containerName="ceilometer-notification-agent" containerID="cri-o://75cec19a1205e625b426f312c7dda16e9246d7c67ccd89220fd9ecea834f58f2" gracePeriod=30 Jan 29 06:57:21 crc kubenswrapper[5017]: I0129 06:57:21.965468 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 06:57:21 crc kubenswrapper[5017]: I0129 06:57:21.965766 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="a6ec780f-f6cc-4d8d-be76-f517dff0673c" containerName="kube-state-metrics" containerID="cri-o://1ab253ce158826a8eb8853f59892c8ebab6a5018fa1efc6e8ae6b7c7d6f3c586" gracePeriod=30 Jan 29 06:57:21 crc kubenswrapper[5017]: I0129 06:57:21.997186 5017 scope.go:117] "RemoveContainer" containerID="2210020dfa4716854368968b348fbd02597baa1b8afac742e90bc9b808a72ae6" Jan 29 06:57:22 crc kubenswrapper[5017]: I0129 06:57:22.009543 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 06:57:22 crc kubenswrapper[5017]: I0129 06:57:22.028037 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 06:57:22 crc kubenswrapper[5017]: I0129 06:57:22.033242 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-01d7-account-create-update-n9thm" Jan 29 06:57:22 crc kubenswrapper[5017]: I0129 06:57:22.052184 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 29 06:57:22 crc kubenswrapper[5017]: I0129 06:57:22.073525 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 29 06:57:22 crc kubenswrapper[5017]: I0129 06:57:22.073771 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/615b2757-5eab-4454-95da-663755846932-operator-scripts\") pod \"615b2757-5eab-4454-95da-663755846932\" (UID: \"615b2757-5eab-4454-95da-663755846932\") " Jan 29 06:57:22 crc kubenswrapper[5017]: I0129 06:57:22.073928 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jlgtf\" (UniqueName: \"kubernetes.io/projected/615b2757-5eab-4454-95da-663755846932-kube-api-access-jlgtf\") pod \"615b2757-5eab-4454-95da-663755846932\" (UID: \"615b2757-5eab-4454-95da-663755846932\") " Jan 29 06:57:22 crc kubenswrapper[5017]: I0129 06:57:22.074681 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/615b2757-5eab-4454-95da-663755846932-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "615b2757-5eab-4454-95da-663755846932" (UID: "615b2757-5eab-4454-95da-663755846932"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:57:22 crc kubenswrapper[5017]: I0129 06:57:22.075937 5017 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/615b2757-5eab-4454-95da-663755846932-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:22 crc kubenswrapper[5017]: I0129 06:57:22.098205 5017 scope.go:117] "RemoveContainer" containerID="2210020dfa4716854368968b348fbd02597baa1b8afac742e90bc9b808a72ae6" Jan 29 06:57:22 crc kubenswrapper[5017]: E0129 06:57:22.104386 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2210020dfa4716854368968b348fbd02597baa1b8afac742e90bc9b808a72ae6\": container with ID starting with 2210020dfa4716854368968b348fbd02597baa1b8afac742e90bc9b808a72ae6 not found: ID does not exist" containerID="2210020dfa4716854368968b348fbd02597baa1b8afac742e90bc9b808a72ae6" Jan 29 06:57:22 crc kubenswrapper[5017]: I0129 06:57:22.104453 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2210020dfa4716854368968b348fbd02597baa1b8afac742e90bc9b808a72ae6"} err="failed to get container status \"2210020dfa4716854368968b348fbd02597baa1b8afac742e90bc9b808a72ae6\": rpc error: code = NotFound desc = could not find container \"2210020dfa4716854368968b348fbd02597baa1b8afac742e90bc9b808a72ae6\": container with ID starting with 2210020dfa4716854368968b348fbd02597baa1b8afac742e90bc9b808a72ae6 not found: ID does not exist" Jan 29 06:57:22 crc kubenswrapper[5017]: I0129 06:57:22.104500 5017 scope.go:117] "RemoveContainer" containerID="0762eb515121f428ad670c99dbbc9df148f038b1dfcf835e5b82100fdb4c0a75" Jan 29 06:57:22 crc kubenswrapper[5017]: I0129 06:57:22.120627 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-7be5-account-create-update-4lc27"] Jan 29 06:57:22 crc kubenswrapper[5017]: I0129 06:57:22.143760 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"] Jan 29 06:57:22 crc kubenswrapper[5017]: I0129 06:57:22.144827 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/memcached-0" podUID="cc46a149-0256-4061-9e32-936b2ec12588" containerName="memcached" containerID="cri-o://7c791894b1734b0ef6f635f2bdcbd5ede8f7115df23c5068ed2fb5212e72b15f" gracePeriod=30 Jan 29 06:57:22 crc kubenswrapper[5017]: I0129 06:57:22.164050 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-7be5-account-create-update-4lc27"] Jan 29 06:57:22 crc kubenswrapper[5017]: I0129 06:57:22.167707 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/615b2757-5eab-4454-95da-663755846932-kube-api-access-jlgtf" (OuterVolumeSpecName: "kube-api-access-jlgtf") pod "615b2757-5eab-4454-95da-663755846932" (UID: "615b2757-5eab-4454-95da-663755846932"). InnerVolumeSpecName "kube-api-access-jlgtf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:57:22 crc kubenswrapper[5017]: I0129 06:57:22.180880 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jlgtf\" (UniqueName: \"kubernetes.io/projected/615b2757-5eab-4454-95da-663755846932-kube-api-access-jlgtf\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:22 crc kubenswrapper[5017]: I0129 06:57:22.182522 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-7be5-account-create-update-qzv25"] Jan 29 06:57:22 crc kubenswrapper[5017]: E0129 06:57:22.183185 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec5c09bc-f98c-4587-b4e3-ec9269c04a71" containerName="mysql-bootstrap" Jan 29 06:57:22 crc kubenswrapper[5017]: I0129 06:57:22.183258 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec5c09bc-f98c-4587-b4e3-ec9269c04a71" containerName="mysql-bootstrap" Jan 29 06:57:22 crc kubenswrapper[5017]: E0129 06:57:22.183344 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08c15cf8-f386-428a-a94a-c33598b182a9" containerName="openstack-network-exporter" Jan 29 06:57:22 crc kubenswrapper[5017]: I0129 06:57:22.183392 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="08c15cf8-f386-428a-a94a-c33598b182a9" containerName="openstack-network-exporter" Jan 29 06:57:22 crc kubenswrapper[5017]: E0129 06:57:22.183451 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec5c09bc-f98c-4587-b4e3-ec9269c04a71" containerName="galera" Jan 29 06:57:22 crc kubenswrapper[5017]: I0129 06:57:22.183505 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec5c09bc-f98c-4587-b4e3-ec9269c04a71" containerName="galera" Jan 29 06:57:22 crc kubenswrapper[5017]: E0129 06:57:22.183557 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="572c6985-85a2-4a6d-8581-75b8c6b87322" containerName="openstack-network-exporter" Jan 29 06:57:22 crc kubenswrapper[5017]: I0129 06:57:22.183602 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="572c6985-85a2-4a6d-8581-75b8c6b87322" containerName="openstack-network-exporter" Jan 29 06:57:22 crc kubenswrapper[5017]: E0129 06:57:22.183658 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58a02d03-f3a8-4193-ba1d-623ecaa62fe9" containerName="ovsdbserver-nb" Jan 29 06:57:22 crc kubenswrapper[5017]: I0129 06:57:22.183708 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="58a02d03-f3a8-4193-ba1d-623ecaa62fe9" containerName="ovsdbserver-nb" Jan 29 06:57:22 crc kubenswrapper[5017]: E0129 06:57:22.183757 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73788076-4208-4f0f-8c66-95ef1bfb28b6" containerName="nova-cell1-novncproxy-novncproxy" Jan 29 06:57:22 crc kubenswrapper[5017]: I0129 06:57:22.183803 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="73788076-4208-4f0f-8c66-95ef1bfb28b6" containerName="nova-cell1-novncproxy-novncproxy" Jan 29 06:57:22 crc kubenswrapper[5017]: E0129 06:57:22.183854 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58a02d03-f3a8-4193-ba1d-623ecaa62fe9" containerName="openstack-network-exporter" Jan 29 06:57:22 crc kubenswrapper[5017]: I0129 06:57:22.183900 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="58a02d03-f3a8-4193-ba1d-623ecaa62fe9" containerName="openstack-network-exporter" Jan 29 06:57:22 crc kubenswrapper[5017]: E0129 06:57:22.183967 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08c15cf8-f386-428a-a94a-c33598b182a9" 
containerName="ovsdbserver-sb" Jan 29 06:57:22 crc kubenswrapper[5017]: I0129 06:57:22.184028 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="08c15cf8-f386-428a-a94a-c33598b182a9" containerName="ovsdbserver-sb" Jan 29 06:57:22 crc kubenswrapper[5017]: E0129 06:57:22.184082 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02965a93-9a4c-4118-a030-0271f53a61a1" containerName="ovn-northd" Jan 29 06:57:22 crc kubenswrapper[5017]: I0129 06:57:22.184126 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="02965a93-9a4c-4118-a030-0271f53a61a1" containerName="ovn-northd" Jan 29 06:57:22 crc kubenswrapper[5017]: E0129 06:57:22.184195 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c653a323-c9c2-42f2-a2af-125828234475" containerName="init" Jan 29 06:57:22 crc kubenswrapper[5017]: I0129 06:57:22.184250 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="c653a323-c9c2-42f2-a2af-125828234475" containerName="init" Jan 29 06:57:22 crc kubenswrapper[5017]: E0129 06:57:22.184302 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c653a323-c9c2-42f2-a2af-125828234475" containerName="dnsmasq-dns" Jan 29 06:57:22 crc kubenswrapper[5017]: I0129 06:57:22.184353 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="c653a323-c9c2-42f2-a2af-125828234475" containerName="dnsmasq-dns" Jan 29 06:57:22 crc kubenswrapper[5017]: E0129 06:57:22.184410 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0801349-3235-495b-9747-8ce025aad149" containerName="proxy-server" Jan 29 06:57:22 crc kubenswrapper[5017]: I0129 06:57:22.184457 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0801349-3235-495b-9747-8ce025aad149" containerName="proxy-server" Jan 29 06:57:22 crc kubenswrapper[5017]: E0129 06:57:22.184509 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0801349-3235-495b-9747-8ce025aad149" containerName="proxy-httpd" Jan 29 06:57:22 crc kubenswrapper[5017]: I0129 06:57:22.184554 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0801349-3235-495b-9747-8ce025aad149" containerName="proxy-httpd" Jan 29 06:57:22 crc kubenswrapper[5017]: E0129 06:57:22.184600 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02965a93-9a4c-4118-a030-0271f53a61a1" containerName="openstack-network-exporter" Jan 29 06:57:22 crc kubenswrapper[5017]: I0129 06:57:22.184650 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="02965a93-9a4c-4118-a030-0271f53a61a1" containerName="openstack-network-exporter" Jan 29 06:57:22 crc kubenswrapper[5017]: I0129 06:57:22.184874 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="c653a323-c9c2-42f2-a2af-125828234475" containerName="dnsmasq-dns" Jan 29 06:57:22 crc kubenswrapper[5017]: I0129 06:57:22.184938 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="02965a93-9a4c-4118-a030-0271f53a61a1" containerName="ovn-northd" Jan 29 06:57:22 crc kubenswrapper[5017]: I0129 06:57:22.185017 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="572c6985-85a2-4a6d-8581-75b8c6b87322" containerName="openstack-network-exporter" Jan 29 06:57:22 crc kubenswrapper[5017]: I0129 06:57:22.185068 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec5c09bc-f98c-4587-b4e3-ec9269c04a71" containerName="galera" Jan 29 06:57:22 crc kubenswrapper[5017]: I0129 06:57:22.185125 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="02965a93-9a4c-4118-a030-0271f53a61a1" 
containerName="openstack-network-exporter" Jan 29 06:57:22 crc kubenswrapper[5017]: I0129 06:57:22.185178 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="73788076-4208-4f0f-8c66-95ef1bfb28b6" containerName="nova-cell1-novncproxy-novncproxy" Jan 29 06:57:22 crc kubenswrapper[5017]: I0129 06:57:22.185240 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="58a02d03-f3a8-4193-ba1d-623ecaa62fe9" containerName="ovsdbserver-nb" Jan 29 06:57:22 crc kubenswrapper[5017]: I0129 06:57:22.185304 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0801349-3235-495b-9747-8ce025aad149" containerName="proxy-server" Jan 29 06:57:22 crc kubenswrapper[5017]: I0129 06:57:22.185356 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0801349-3235-495b-9747-8ce025aad149" containerName="proxy-httpd" Jan 29 06:57:22 crc kubenswrapper[5017]: I0129 06:57:22.185405 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="08c15cf8-f386-428a-a94a-c33598b182a9" containerName="openstack-network-exporter" Jan 29 06:57:22 crc kubenswrapper[5017]: I0129 06:57:22.185451 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="58a02d03-f3a8-4193-ba1d-623ecaa62fe9" containerName="openstack-network-exporter" Jan 29 06:57:22 crc kubenswrapper[5017]: I0129 06:57:22.185501 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="08c15cf8-f386-428a-a94a-c33598b182a9" containerName="ovsdbserver-sb" Jan 29 06:57:22 crc kubenswrapper[5017]: I0129 06:57:22.186538 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7be5-account-create-update-qzv25" Jan 29 06:57:22 crc kubenswrapper[5017]: I0129 06:57:22.190722 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 29 06:57:22 crc kubenswrapper[5017]: I0129 06:57:22.192776 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-t4552"] Jan 29 06:57:22 crc kubenswrapper[5017]: I0129 06:57:22.203732 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7be5-account-create-update-qzv25"] Jan 29 06:57:22 crc kubenswrapper[5017]: I0129 06:57:22.230634 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-np6m4"] Jan 29 06:57:22 crc kubenswrapper[5017]: I0129 06:57:22.232402 5017 scope.go:117] "RemoveContainer" containerID="541eef9ed8a601fb50010147d0b92594e48943cf7dfddc1493041868a74ebb85" Jan 29 06:57:22 crc kubenswrapper[5017]: I0129 06:57:22.284358 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-np6m4"] Jan 29 06:57:22 crc kubenswrapper[5017]: I0129 06:57:22.313195 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-74d8b8b54b-w68vj"] Jan 29 06:57:22 crc kubenswrapper[5017]: I0129 06:57:22.313497 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/keystone-74d8b8b54b-w68vj" podUID="55d2d70d-8578-47fc-a3a7-df7694c3f2a3" containerName="keystone-api" containerID="cri-o://7e081afe27462760e218795e4ff98c49cc0403ff7cf82b6262d050fe3d08e3d5" gracePeriod=30 Jan 29 06:57:22 crc kubenswrapper[5017]: I0129 06:57:22.373098 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02965a93-9a4c-4118-a030-0271f53a61a1" path="/var/lib/kubelet/pods/02965a93-9a4c-4118-a030-0271f53a61a1/volumes" Jan 29 06:57:22 crc kubenswrapper[5017]: I0129 06:57:22.374826 5017 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="08c15cf8-f386-428a-a94a-c33598b182a9" path="/var/lib/kubelet/pods/08c15cf8-f386-428a-a94a-c33598b182a9/volumes" Jan 29 06:57:22 crc kubenswrapper[5017]: I0129 06:57:22.375494 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="572c6985-85a2-4a6d-8581-75b8c6b87322" path="/var/lib/kubelet/pods/572c6985-85a2-4a6d-8581-75b8c6b87322/volumes" Jan 29 06:57:22 crc kubenswrapper[5017]: I0129 06:57:22.380059 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58a02d03-f3a8-4193-ba1d-623ecaa62fe9" path="/var/lib/kubelet/pods/58a02d03-f3a8-4193-ba1d-623ecaa62fe9/volumes" Jan 29 06:57:22 crc kubenswrapper[5017]: I0129 06:57:22.380667 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73788076-4208-4f0f-8c66-95ef1bfb28b6" path="/var/lib/kubelet/pods/73788076-4208-4f0f-8c66-95ef1bfb28b6/volumes" Jan 29 06:57:22 crc kubenswrapper[5017]: I0129 06:57:22.384978 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fcb9744-2ad4-4c60-a132-0b9769b6b97a" path="/var/lib/kubelet/pods/7fcb9744-2ad4-4c60-a132-0b9769b6b97a/volumes" Jan 29 06:57:22 crc kubenswrapper[5017]: I0129 06:57:22.385652 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0801349-3235-495b-9747-8ce025aad149" path="/var/lib/kubelet/pods/a0801349-3235-495b-9747-8ce025aad149/volumes" Jan 29 06:57:22 crc kubenswrapper[5017]: I0129 06:57:22.386172 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkqtq\" (UniqueName: \"kubernetes.io/projected/a341d6ae-870f-4453-a804-1c0b4b43ce6f-kube-api-access-nkqtq\") pod \"keystone-7be5-account-create-update-qzv25\" (UID: \"a341d6ae-870f-4453-a804-1c0b4b43ce6f\") " pod="openstack/keystone-7be5-account-create-update-qzv25" Jan 29 06:57:22 crc kubenswrapper[5017]: I0129 06:57:22.386343 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a341d6ae-870f-4453-a804-1c0b4b43ce6f-operator-scripts\") pod \"keystone-7be5-account-create-update-qzv25\" (UID: \"a341d6ae-870f-4453-a804-1c0b4b43ce6f\") " pod="openstack/keystone-7be5-account-create-update-qzv25" Jan 29 06:57:22 crc kubenswrapper[5017]: I0129 06:57:22.390059 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1e28709-36ce-4df4-8ec9-2ac9458b87da" path="/var/lib/kubelet/pods/b1e28709-36ce-4df4-8ec9-2ac9458b87da/volumes" Jan 29 06:57:22 crc kubenswrapper[5017]: I0129 06:57:22.390847 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec5c09bc-f98c-4587-b4e3-ec9269c04a71" path="/var/lib/kubelet/pods/ec5c09bc-f98c-4587-b4e3-ec9269c04a71/volumes" Jan 29 06:57:22 crc kubenswrapper[5017]: I0129 06:57:22.393577 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-t4552"] Jan 29 06:57:22 crc kubenswrapper[5017]: I0129 06:57:22.393614 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-7be5-account-create-update-qzv25"] Jan 29 06:57:22 crc kubenswrapper[5017]: I0129 06:57:22.393641 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-89zdg"] Jan 29 06:57:22 crc kubenswrapper[5017]: E0129 06:57:22.394047 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-nkqtq operator-scripts], unattached volumes=[], failed to process volumes=[]: context canceled" 
pod="openstack/keystone-7be5-account-create-update-qzv25" podUID="a341d6ae-870f-4453-a804-1c0b4b43ce6f" Jan 29 06:57:22 crc kubenswrapper[5017]: I0129 06:57:22.412009 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"] Jan 29 06:57:22 crc kubenswrapper[5017]: I0129 06:57:22.465041 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-89zdg"] Jan 29 06:57:22 crc kubenswrapper[5017]: I0129 06:57:22.472647 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-4qvq2"] Jan 29 06:57:22 crc kubenswrapper[5017]: I0129 06:57:22.488190 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a341d6ae-870f-4453-a804-1c0b4b43ce6f-operator-scripts\") pod \"keystone-7be5-account-create-update-qzv25\" (UID: \"a341d6ae-870f-4453-a804-1c0b4b43ce6f\") " pod="openstack/keystone-7be5-account-create-update-qzv25" Jan 29 06:57:22 crc kubenswrapper[5017]: I0129 06:57:22.488707 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nkqtq\" (UniqueName: \"kubernetes.io/projected/a341d6ae-870f-4453-a804-1c0b4b43ce6f-kube-api-access-nkqtq\") pod \"keystone-7be5-account-create-update-qzv25\" (UID: \"a341d6ae-870f-4453-a804-1c0b4b43ce6f\") " pod="openstack/keystone-7be5-account-create-update-qzv25" Jan 29 06:57:22 crc kubenswrapper[5017]: E0129 06:57:22.495503 5017 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Jan 29 06:57:22 crc kubenswrapper[5017]: E0129 06:57:22.507167 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a341d6ae-870f-4453-a804-1c0b4b43ce6f-operator-scripts podName:a341d6ae-870f-4453-a804-1c0b4b43ce6f nodeName:}" failed. No retries permitted until 2026-01-29 06:57:23.007028892 +0000 UTC m=+1329.381476502 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/a341d6ae-870f-4453-a804-1c0b4b43ce6f-operator-scripts") pod "keystone-7be5-account-create-update-qzv25" (UID: "a341d6ae-870f-4453-a804-1c0b4b43ce6f") : configmap "openstack-scripts" not found Jan 29 06:57:22 crc kubenswrapper[5017]: E0129 06:57:22.523763 5017 projected.go:194] Error preparing data for projected volume kube-api-access-nkqtq for pod openstack/keystone-7be5-account-create-update-qzv25: failed to fetch token: serviceaccounts "galera-openstack" not found Jan 29 06:57:22 crc kubenswrapper[5017]: E0129 06:57:22.523850 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a341d6ae-870f-4453-a804-1c0b4b43ce6f-kube-api-access-nkqtq podName:a341d6ae-870f-4453-a804-1c0b4b43ce6f nodeName:}" failed. No retries permitted until 2026-01-29 06:57:23.023831228 +0000 UTC m=+1329.398278838 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-nkqtq" (UniqueName: "kubernetes.io/projected/a341d6ae-870f-4453-a804-1c0b4b43ce6f-kube-api-access-nkqtq") pod "keystone-7be5-account-create-update-qzv25" (UID: "a341d6ae-870f-4453-a804-1c0b4b43ce6f") : failed to fetch token: serviceaccounts "galera-openstack" not found Jan 29 06:57:22 crc kubenswrapper[5017]: I0129 06:57:22.587228 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-fd15-account-create-update-q6477" event={"ID":"0adabddf-74aa-416a-afef-b24b39897f9c","Type":"ContainerDied","Data":"16f4897fd47b24991b802f5e64920e0add3f24ce843a349bb27b3395ef835a27"} Jan 29 06:57:22 crc kubenswrapper[5017]: I0129 06:57:22.587694 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16f4897fd47b24991b802f5e64920e0add3f24ce843a349bb27b3395ef835a27" Jan 29 06:57:22 crc kubenswrapper[5017]: I0129 06:57:22.589678 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0fca-account-create-update-7k6jq" event={"ID":"c55eb047-3255-4a79-8e32-dfb786de8794","Type":"ContainerDied","Data":"33349de7a8b737bd350506c887b12f23cd1a75b09ffd41e1b5258dfed92aa034"} Jan 29 06:57:22 crc kubenswrapper[5017]: I0129 06:57:22.589711 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="33349de7a8b737bd350506c887b12f23cd1a75b09ffd41e1b5258dfed92aa034" Jan 29 06:57:22 crc kubenswrapper[5017]: I0129 06:57:22.598566 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-381c-account-create-update-ngzmx" event={"ID":"66475828-326a-4b57-baea-e209e519d639","Type":"ContainerDied","Data":"c9d887916964ab38e1fa5ca2691cae658c46f119a5e395d629a48044f1949cca"} Jan 29 06:57:22 crc kubenswrapper[5017]: I0129 06:57:22.598649 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c9d887916964ab38e1fa5ca2691cae658c46f119a5e395d629a48044f1949cca" Jan 29 06:57:22 crc kubenswrapper[5017]: I0129 06:57:22.604043 5017 generic.go:334] "Generic (PLEG): container finished" podID="71f6aede-754b-476f-8082-78f0e50b6a39" containerID="1b90244a79764c9e3b9a5b69c14c39546fc533d032150453b20e901a8805b3fe" exitCode=0 Jan 29 06:57:22 crc kubenswrapper[5017]: I0129 06:57:22.604120 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"71f6aede-754b-476f-8082-78f0e50b6a39","Type":"ContainerDied","Data":"1b90244a79764c9e3b9a5b69c14c39546fc533d032150453b20e901a8805b3fe"} Jan 29 06:57:22 crc kubenswrapper[5017]: I0129 06:57:22.610411 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-01d7-account-create-update-n9thm" event={"ID":"615b2757-5eab-4454-95da-663755846932","Type":"ContainerDied","Data":"e13ee67e40b9fceb0a185a9928bd88ba54910f2652e08c3811cc313f02441060"} Jan 29 06:57:22 crc kubenswrapper[5017]: I0129 06:57:22.610523 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-01d7-account-create-update-n9thm" Jan 29 06:57:22 crc kubenswrapper[5017]: I0129 06:57:22.617527 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-2d72-account-create-update-hj9h9" event={"ID":"7133c436-5656-4d57-aca3-64e9542ef299","Type":"ContainerDied","Data":"83121bc47d884a3cc20a011c66bb0cad495ad035cdfccd1db485e9a2ec5eefdc"} Jan 29 06:57:22 crc kubenswrapper[5017]: I0129 06:57:22.617587 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="83121bc47d884a3cc20a011c66bb0cad495ad035cdfccd1db485e9a2ec5eefdc" Jan 29 06:57:22 crc kubenswrapper[5017]: I0129 06:57:22.635632 5017 generic.go:334] "Generic (PLEG): container finished" podID="9c69fc6f-43e9-4fe5-b964-8db89e6ab354" containerID="27c14ee221c2ba154a659bac681ce15f66d62c55c0cd0468426e11a64f23d5e9" exitCode=0 Jan 29 06:57:22 crc kubenswrapper[5017]: I0129 06:57:22.635723 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"9c69fc6f-43e9-4fe5-b964-8db89e6ab354","Type":"ContainerDied","Data":"27c14ee221c2ba154a659bac681ce15f66d62c55c0cd0468426e11a64f23d5e9"} Jan 29 06:57:22 crc kubenswrapper[5017]: I0129 06:57:22.645739 5017 generic.go:334] "Generic (PLEG): container finished" podID="3bf19cd1-b93c-449d-ba04-7fecd2ab65e2" containerID="5d57b0f86290389fbf186595a63f56257a3df02104345fbb791e8a5ce9bceb7a" exitCode=1 Jan 29 06:57:22 crc kubenswrapper[5017]: I0129 06:57:22.645776 5017 generic.go:334] "Generic (PLEG): container finished" podID="3bf19cd1-b93c-449d-ba04-7fecd2ab65e2" containerID="c26ced751a74c883105fc37909d9ee54edeff58bd72680f5e7e5b84c045baf04" exitCode=1 Jan 29 06:57:22 crc kubenswrapper[5017]: I0129 06:57:22.645834 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-4qvq2" event={"ID":"3bf19cd1-b93c-449d-ba04-7fecd2ab65e2","Type":"ContainerDied","Data":"5d57b0f86290389fbf186595a63f56257a3df02104345fbb791e8a5ce9bceb7a"} Jan 29 06:57:22 crc kubenswrapper[5017]: I0129 06:57:22.645871 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-4qvq2" event={"ID":"3bf19cd1-b93c-449d-ba04-7fecd2ab65e2","Type":"ContainerDied","Data":"c26ced751a74c883105fc37909d9ee54edeff58bd72680f5e7e5b84c045baf04"} Jan 29 06:57:22 crc kubenswrapper[5017]: I0129 06:57:22.645895 5017 scope.go:117] "RemoveContainer" containerID="5d57b0f86290389fbf186595a63f56257a3df02104345fbb791e8a5ce9bceb7a" Jan 29 06:57:22 crc kubenswrapper[5017]: I0129 06:57:22.646589 5017 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." 
pod="openstack/root-account-create-update-4qvq2" secret="" err="secret \"galera-openstack-dockercfg-nkr49\" not found" Jan 29 06:57:22 crc kubenswrapper[5017]: I0129 06:57:22.646627 5017 scope.go:117] "RemoveContainer" containerID="c26ced751a74c883105fc37909d9ee54edeff58bd72680f5e7e5b84c045baf04" Jan 29 06:57:22 crc kubenswrapper[5017]: E0129 06:57:22.647013 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mariadb-account-create-update pod=root-account-create-update-4qvq2_openstack(3bf19cd1-b93c-449d-ba04-7fecd2ab65e2)\"" pod="openstack/root-account-create-update-4qvq2" podUID="3bf19cd1-b93c-449d-ba04-7fecd2ab65e2" Jan 29 06:57:22 crc kubenswrapper[5017]: I0129 06:57:22.659456 5017 generic.go:334] "Generic (PLEG): container finished" podID="a6ec780f-f6cc-4d8d-be76-f517dff0673c" containerID="1ab253ce158826a8eb8853f59892c8ebab6a5018fa1efc6e8ae6b7c7d6f3c586" exitCode=2 Jan 29 06:57:22 crc kubenswrapper[5017]: I0129 06:57:22.659556 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"a6ec780f-f6cc-4d8d-be76-f517dff0673c","Type":"ContainerDied","Data":"1ab253ce158826a8eb8853f59892c8ebab6a5018fa1efc6e8ae6b7c7d6f3c586"} Jan 29 06:57:22 crc kubenswrapper[5017]: I0129 06:57:22.664588 5017 generic.go:334] "Generic (PLEG): container finished" podID="9df7814f-338e-40fb-95aa-f93dfa8307d6" containerID="0c316a09cd764cd0d7717d88d306f363aee8a406a7aab49e4974af5176af2934" exitCode=0 Jan 29 06:57:22 crc kubenswrapper[5017]: I0129 06:57:22.664714 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9df7814f-338e-40fb-95aa-f93dfa8307d6","Type":"ContainerDied","Data":"0c316a09cd764cd0d7717d88d306f363aee8a406a7aab49e4974af5176af2934"} Jan 29 06:57:22 crc kubenswrapper[5017]: I0129 06:57:22.683866 5017 generic.go:334] "Generic (PLEG): container finished" podID="e41c27f8-0c27-4e3d-83b1-62a61abb4faf" containerID="dbfb5839d0de6937d94e1b06808176fec9fce89e3d52e262a3d51db47ee776af" exitCode=0 Jan 29 06:57:22 crc kubenswrapper[5017]: I0129 06:57:22.683966 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e41c27f8-0c27-4e3d-83b1-62a61abb4faf","Type":"ContainerDied","Data":"dbfb5839d0de6937d94e1b06808176fec9fce89e3d52e262a3d51db47ee776af"} Jan 29 06:57:22 crc kubenswrapper[5017]: I0129 06:57:22.703321 5017 generic.go:334] "Generic (PLEG): container finished" podID="dc01ff67-baeb-47d1-90f5-9cff65c9dffa" containerID="443395d71d852c3ec070ffebcf4c6e95bc2745cdc77bf998d3b62968c01056ef" exitCode=0 Jan 29 06:57:22 crc kubenswrapper[5017]: I0129 06:57:22.703479 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6d6c4c6dc8-5jbvh" event={"ID":"dc01ff67-baeb-47d1-90f5-9cff65c9dffa","Type":"ContainerDied","Data":"443395d71d852c3ec070ffebcf4c6e95bc2745cdc77bf998d3b62968c01056ef"} Jan 29 06:57:22 crc kubenswrapper[5017]: E0129 06:57:22.708012 5017 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Jan 29 06:57:22 crc kubenswrapper[5017]: E0129 06:57:22.708103 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3bf19cd1-b93c-449d-ba04-7fecd2ab65e2-operator-scripts podName:3bf19cd1-b93c-449d-ba04-7fecd2ab65e2 nodeName:}" failed. 
No retries permitted until 2026-01-29 06:57:23.208085992 +0000 UTC m=+1329.582533602 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/3bf19cd1-b93c-449d-ba04-7fecd2ab65e2-operator-scripts") pod "root-account-create-update-4qvq2" (UID: "3bf19cd1-b93c-449d-ba04-7fecd2ab65e2") : configmap "openstack-scripts" not found Jan 29 06:57:22 crc kubenswrapper[5017]: I0129 06:57:22.729196 5017 generic.go:334] "Generic (PLEG): container finished" podID="3131ebc7-0955-4d4d-8444-057df1cc52f1" containerID="be24094753ade4791793ba42b3d49712012104ef86a88d92d47df03f0450fcb4" exitCode=2 Jan 29 06:57:22 crc kubenswrapper[5017]: I0129 06:57:22.729288 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7be5-account-create-update-qzv25" Jan 29 06:57:22 crc kubenswrapper[5017]: I0129 06:57:22.729804 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3131ebc7-0955-4d4d-8444-057df1cc52f1","Type":"ContainerDied","Data":"be24094753ade4791793ba42b3d49712012104ef86a88d92d47df03f0450fcb4"} Jan 29 06:57:22 crc kubenswrapper[5017]: I0129 06:57:22.767276 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-galera-0" podUID="9af88cca-e43b-483d-beae-d6a56940aff7" containerName="galera" containerID="cri-o://ec537ea4d90113835bbfd7b41bd980ad15b540d9718e92b22b059584ee668478" gracePeriod=30 Jan 29 06:57:22 crc kubenswrapper[5017]: I0129 06:57:22.778160 5017 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="8d94b8e3-f4a6-4fc2-af59-57b33254cd74" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.206:8775/\": dial tcp 10.217.0.206:8775: connect: connection refused" Jan 29 06:57:22 crc kubenswrapper[5017]: I0129 06:57:22.778391 5017 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="8d94b8e3-f4a6-4fc2-af59-57b33254cd74" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.206:8775/\": dial tcp 10.217.0.206:8775: connect: connection refused" Jan 29 06:57:22 crc kubenswrapper[5017]: E0129 06:57:22.916378 5017 configmap.go:193] Couldn't get configMap openstack/openstack-cell1-scripts: configmap "openstack-cell1-scripts" not found Jan 29 06:57:22 crc kubenswrapper[5017]: E0129 06:57:22.916470 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0adabddf-74aa-416a-afef-b24b39897f9c-operator-scripts podName:0adabddf-74aa-416a-afef-b24b39897f9c nodeName:}" failed. No retries permitted until 2026-01-29 06:57:26.916454543 +0000 UTC m=+1333.290902143 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/0adabddf-74aa-416a-afef-b24b39897f9c-operator-scripts") pod "nova-cell1-fd15-account-create-update-q6477" (UID: "0adabddf-74aa-416a-afef-b24b39897f9c") : configmap "openstack-cell1-scripts" not found Jan 29 06:57:22 crc kubenswrapper[5017]: I0129 06:57:22.980222 5017 util.go:48] "No ready sandbox for pod can be found. 
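
Note: the prober.go "Probe failed" entries above (cinder-api: "connection reset by peer"; nova-metadata: "connection refused") are readiness probes hitting endpoints whose backends are mid-shutdown, so the failures are expected during this teardown. Functionally such a probe is a bounded HTTP GET where any transport error or a status outside 200-399 marks the container unready; a sketch of that check, with the URL taken from the cinder-api entry above:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // One readiness-style check: bounded GET; any dial/read error or a
    // status outside 200-399 counts as a probe failure.
    func probe(url string) error {
        client := &http.Client{
            Timeout: 1 * time.Second,
            // Kubelet HTTPS probes do not verify the serving certificate.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get(url)
        if err != nil {
            return err // e.g. "connection refused" / "connection reset by peer"
        }
        defer resp.Body.Close()
        if resp.StatusCode < 200 || resp.StatusCode >= 400 {
            return fmt.Errorf("unexpected status %d", resp.StatusCode)
        }
        return nil
    }

    func main() {
        if err := probe("https://10.217.0.167:8776/healthcheck"); err != nil {
            fmt.Println("probe failed:", err)
        }
    }
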
Need to start a new one" pod="openstack/neutron-2d72-account-create-update-hj9h9" Jan 29 06:57:22 crc kubenswrapper[5017]: I0129 06:57:22.981921 5017 scope.go:117] "RemoveContainer" containerID="5d57b0f86290389fbf186595a63f56257a3df02104345fbb791e8a5ce9bceb7a" Jan 29 06:57:22 crc kubenswrapper[5017]: E0129 06:57:22.982395 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d57b0f86290389fbf186595a63f56257a3df02104345fbb791e8a5ce9bceb7a\": container with ID starting with 5d57b0f86290389fbf186595a63f56257a3df02104345fbb791e8a5ce9bceb7a not found: ID does not exist" containerID="5d57b0f86290389fbf186595a63f56257a3df02104345fbb791e8a5ce9bceb7a" Jan 29 06:57:22 crc kubenswrapper[5017]: I0129 06:57:22.982440 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d57b0f86290389fbf186595a63f56257a3df02104345fbb791e8a5ce9bceb7a"} err="failed to get container status \"5d57b0f86290389fbf186595a63f56257a3df02104345fbb791e8a5ce9bceb7a\": rpc error: code = NotFound desc = could not find container \"5d57b0f86290389fbf186595a63f56257a3df02104345fbb791e8a5ce9bceb7a\": container with ID starting with 5d57b0f86290389fbf186595a63f56257a3df02104345fbb791e8a5ce9bceb7a not found: ID does not exist" Jan 29 06:57:22 crc kubenswrapper[5017]: I0129 06:57:22.985649 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0fca-account-create-update-7k6jq" Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.009216 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-fd15-account-create-update-q6477" Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.009438 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-381c-account-create-update-ngzmx" Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.017372 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66475828-326a-4b57-baea-e209e519d639-operator-scripts\") pod \"66475828-326a-4b57-baea-e209e519d639\" (UID: \"66475828-326a-4b57-baea-e209e519d639\") " Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.017422 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-24xhj\" (UniqueName: \"kubernetes.io/projected/7133c436-5656-4d57-aca3-64e9542ef299-kube-api-access-24xhj\") pod \"7133c436-5656-4d57-aca3-64e9542ef299\" (UID: \"7133c436-5656-4d57-aca3-64e9542ef299\") " Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.017483 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pkrn9\" (UniqueName: \"kubernetes.io/projected/c55eb047-3255-4a79-8e32-dfb786de8794-kube-api-access-pkrn9\") pod \"c55eb047-3255-4a79-8e32-dfb786de8794\" (UID: \"c55eb047-3255-4a79-8e32-dfb786de8794\") " Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.017534 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g8db4\" (UniqueName: \"kubernetes.io/projected/66475828-326a-4b57-baea-e209e519d639-kube-api-access-g8db4\") pod \"66475828-326a-4b57-baea-e209e519d639\" (UID: \"66475828-326a-4b57-baea-e209e519d639\") " Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.017625 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p9v8g\" (UniqueName: \"kubernetes.io/projected/0adabddf-74aa-416a-afef-b24b39897f9c-kube-api-access-p9v8g\") pod \"0adabddf-74aa-416a-afef-b24b39897f9c\" (UID: \"0adabddf-74aa-416a-afef-b24b39897f9c\") " Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.017815 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0adabddf-74aa-416a-afef-b24b39897f9c-operator-scripts\") pod \"0adabddf-74aa-416a-afef-b24b39897f9c\" (UID: \"0adabddf-74aa-416a-afef-b24b39897f9c\") " Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.017897 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7133c436-5656-4d57-aca3-64e9542ef299-operator-scripts\") pod \"7133c436-5656-4d57-aca3-64e9542ef299\" (UID: \"7133c436-5656-4d57-aca3-64e9542ef299\") " Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.019328 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c55eb047-3255-4a79-8e32-dfb786de8794-operator-scripts\") pod \"c55eb047-3255-4a79-8e32-dfb786de8794\" (UID: \"c55eb047-3255-4a79-8e32-dfb786de8794\") " Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.019749 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a341d6ae-870f-4453-a804-1c0b4b43ce6f-operator-scripts\") pod \"keystone-7be5-account-create-update-qzv25\" (UID: \"a341d6ae-870f-4453-a804-1c0b4b43ce6f\") " pod="openstack/keystone-7be5-account-create-update-qzv25" Jan 29 06:57:23 crc kubenswrapper[5017]: E0129 06:57:23.020018 5017 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap 
"openstack-scripts" not found Jan 29 06:57:23 crc kubenswrapper[5017]: E0129 06:57:23.020079 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a341d6ae-870f-4453-a804-1c0b4b43ce6f-operator-scripts podName:a341d6ae-870f-4453-a804-1c0b4b43ce6f nodeName:}" failed. No retries permitted until 2026-01-29 06:57:24.02005914 +0000 UTC m=+1330.394506750 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/a341d6ae-870f-4453-a804-1c0b4b43ce6f-operator-scripts") pod "keystone-7be5-account-create-update-qzv25" (UID: "a341d6ae-870f-4453-a804-1c0b4b43ce6f") : configmap "openstack-scripts" not found Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.021063 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/66475828-326a-4b57-baea-e209e519d639-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "66475828-326a-4b57-baea-e209e519d639" (UID: "66475828-326a-4b57-baea-e209e519d639"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.023941 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-01d7-account-create-update-n9thm"] Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.024434 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c55eb047-3255-4a79-8e32-dfb786de8794-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c55eb047-3255-4a79-8e32-dfb786de8794" (UID: "c55eb047-3255-4a79-8e32-dfb786de8794"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.026008 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0adabddf-74aa-416a-afef-b24b39897f9c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0adabddf-74aa-416a-afef-b24b39897f9c" (UID: "0adabddf-74aa-416a-afef-b24b39897f9c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.026458 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7133c436-5656-4d57-aca3-64e9542ef299-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7133c436-5656-4d57-aca3-64e9542ef299" (UID: "7133c436-5656-4d57-aca3-64e9542ef299"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.031613 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6d6c4c6dc8-5jbvh" Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.032454 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-01d7-account-create-update-n9thm"] Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.036285 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c55eb047-3255-4a79-8e32-dfb786de8794-kube-api-access-pkrn9" (OuterVolumeSpecName: "kube-api-access-pkrn9") pod "c55eb047-3255-4a79-8e32-dfb786de8794" (UID: "c55eb047-3255-4a79-8e32-dfb786de8794"). InnerVolumeSpecName "kube-api-access-pkrn9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.036517 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66475828-326a-4b57-baea-e209e519d639-kube-api-access-g8db4" (OuterVolumeSpecName: "kube-api-access-g8db4") pod "66475828-326a-4b57-baea-e209e519d639" (UID: "66475828-326a-4b57-baea-e209e519d639"). InnerVolumeSpecName "kube-api-access-g8db4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.036605 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0adabddf-74aa-416a-afef-b24b39897f9c-kube-api-access-p9v8g" (OuterVolumeSpecName: "kube-api-access-p9v8g") pod "0adabddf-74aa-416a-afef-b24b39897f9c" (UID: "0adabddf-74aa-416a-afef-b24b39897f9c"). InnerVolumeSpecName "kube-api-access-p9v8g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.036297 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7be5-account-create-update-qzv25" Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.038081 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7133c436-5656-4d57-aca3-64e9542ef299-kube-api-access-24xhj" (OuterVolumeSpecName: "kube-api-access-24xhj") pod "7133c436-5656-4d57-aca3-64e9542ef299" (UID: "7133c436-5656-4d57-aca3-64e9542ef299"). InnerVolumeSpecName "kube-api-access-24xhj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.068286 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.071909 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.121917 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxz9k\" (UniqueName: \"kubernetes.io/projected/9c69fc6f-43e9-4fe5-b964-8db89e6ab354-kube-api-access-dxz9k\") pod \"9c69fc6f-43e9-4fe5-b964-8db89e6ab354\" (UID: \"9c69fc6f-43e9-4fe5-b964-8db89e6ab354\") " Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.122050 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc01ff67-baeb-47d1-90f5-9cff65c9dffa-internal-tls-certs\") pod \"dc01ff67-baeb-47d1-90f5-9cff65c9dffa\" (UID: \"dc01ff67-baeb-47d1-90f5-9cff65c9dffa\") " Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.122091 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c69fc6f-43e9-4fe5-b964-8db89e6ab354-scripts\") pod \"9c69fc6f-43e9-4fe5-b964-8db89e6ab354\" (UID: \"9c69fc6f-43e9-4fe5-b964-8db89e6ab354\") " Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.122139 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9c69fc6f-43e9-4fe5-b964-8db89e6ab354-config-data-custom\") pod \"9c69fc6f-43e9-4fe5-b964-8db89e6ab354\" (UID: \"9c69fc6f-43e9-4fe5-b964-8db89e6ab354\") " Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.122180 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c69fc6f-43e9-4fe5-b964-8db89e6ab354-config-data\") pod \"9c69fc6f-43e9-4fe5-b964-8db89e6ab354\" (UID: \"9c69fc6f-43e9-4fe5-b964-8db89e6ab354\") " Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.122263 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71f6aede-754b-476f-8082-78f0e50b6a39-scripts\") pod \"71f6aede-754b-476f-8082-78f0e50b6a39\" (UID: \"71f6aede-754b-476f-8082-78f0e50b6a39\") " Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.122291 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71f6aede-754b-476f-8082-78f0e50b6a39-config-data\") pod \"71f6aede-754b-476f-8082-78f0e50b6a39\" (UID: \"71f6aede-754b-476f-8082-78f0e50b6a39\") " Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.122314 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc01ff67-baeb-47d1-90f5-9cff65c9dffa-scripts\") pod \"dc01ff67-baeb-47d1-90f5-9cff65c9dffa\" (UID: \"dc01ff67-baeb-47d1-90f5-9cff65c9dffa\") " Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.122380 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/71f6aede-754b-476f-8082-78f0e50b6a39-internal-tls-certs\") pod \"71f6aede-754b-476f-8082-78f0e50b6a39\" (UID: \"71f6aede-754b-476f-8082-78f0e50b6a39\") " Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.122429 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71f6aede-754b-476f-8082-78f0e50b6a39-logs\") pod \"71f6aede-754b-476f-8082-78f0e50b6a39\" (UID: \"71f6aede-754b-476f-8082-78f0e50b6a39\") " Jan 29 
06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.122454 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc01ff67-baeb-47d1-90f5-9cff65c9dffa-combined-ca-bundle\") pod \"dc01ff67-baeb-47d1-90f5-9cff65c9dffa\" (UID: \"dc01ff67-baeb-47d1-90f5-9cff65c9dffa\") " Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.122494 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/71f6aede-754b-476f-8082-78f0e50b6a39-public-tls-certs\") pod \"71f6aede-754b-476f-8082-78f0e50b6a39\" (UID: \"71f6aede-754b-476f-8082-78f0e50b6a39\") " Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.122541 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vhkzg\" (UniqueName: \"kubernetes.io/projected/dc01ff67-baeb-47d1-90f5-9cff65c9dffa-kube-api-access-vhkzg\") pod \"dc01ff67-baeb-47d1-90f5-9cff65c9dffa\" (UID: \"dc01ff67-baeb-47d1-90f5-9cff65c9dffa\") " Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.122569 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71f6aede-754b-476f-8082-78f0e50b6a39-combined-ca-bundle\") pod \"71f6aede-754b-476f-8082-78f0e50b6a39\" (UID: \"71f6aede-754b-476f-8082-78f0e50b6a39\") " Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.122644 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mbzbp\" (UniqueName: \"kubernetes.io/projected/71f6aede-754b-476f-8082-78f0e50b6a39-kube-api-access-mbzbp\") pod \"71f6aede-754b-476f-8082-78f0e50b6a39\" (UID: \"71f6aede-754b-476f-8082-78f0e50b6a39\") " Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.122671 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/71f6aede-754b-476f-8082-78f0e50b6a39-etc-machine-id\") pod \"71f6aede-754b-476f-8082-78f0e50b6a39\" (UID: \"71f6aede-754b-476f-8082-78f0e50b6a39\") " Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.122703 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc01ff67-baeb-47d1-90f5-9cff65c9dffa-logs\") pod \"dc01ff67-baeb-47d1-90f5-9cff65c9dffa\" (UID: \"dc01ff67-baeb-47d1-90f5-9cff65c9dffa\") " Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.122752 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/71f6aede-754b-476f-8082-78f0e50b6a39-config-data-custom\") pod \"71f6aede-754b-476f-8082-78f0e50b6a39\" (UID: \"71f6aede-754b-476f-8082-78f0e50b6a39\") " Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.122784 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc01ff67-baeb-47d1-90f5-9cff65c9dffa-public-tls-certs\") pod \"dc01ff67-baeb-47d1-90f5-9cff65c9dffa\" (UID: \"dc01ff67-baeb-47d1-90f5-9cff65c9dffa\") " Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.122826 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9c69fc6f-43e9-4fe5-b964-8db89e6ab354-etc-machine-id\") pod \"9c69fc6f-43e9-4fe5-b964-8db89e6ab354\" (UID: 
\"9c69fc6f-43e9-4fe5-b964-8db89e6ab354\") " Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.122865 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc01ff67-baeb-47d1-90f5-9cff65c9dffa-config-data\") pod \"dc01ff67-baeb-47d1-90f5-9cff65c9dffa\" (UID: \"dc01ff67-baeb-47d1-90f5-9cff65c9dffa\") " Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.122914 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c69fc6f-43e9-4fe5-b964-8db89e6ab354-combined-ca-bundle\") pod \"9c69fc6f-43e9-4fe5-b964-8db89e6ab354\" (UID: \"9c69fc6f-43e9-4fe5-b964-8db89e6ab354\") " Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.126247 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nkqtq\" (UniqueName: \"kubernetes.io/projected/a341d6ae-870f-4453-a804-1c0b4b43ce6f-kube-api-access-nkqtq\") pod \"keystone-7be5-account-create-update-qzv25\" (UID: \"a341d6ae-870f-4453-a804-1c0b4b43ce6f\") " pod="openstack/keystone-7be5-account-create-update-qzv25" Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.126400 5017 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7133c436-5656-4d57-aca3-64e9542ef299-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.126424 5017 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c55eb047-3255-4a79-8e32-dfb786de8794-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.126437 5017 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66475828-326a-4b57-baea-e209e519d639-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.126453 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-24xhj\" (UniqueName: \"kubernetes.io/projected/7133c436-5656-4d57-aca3-64e9542ef299-kube-api-access-24xhj\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.126473 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pkrn9\" (UniqueName: \"kubernetes.io/projected/c55eb047-3255-4a79-8e32-dfb786de8794-kube-api-access-pkrn9\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.126490 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g8db4\" (UniqueName: \"kubernetes.io/projected/66475828-326a-4b57-baea-e209e519d639-kube-api-access-g8db4\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.126534 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p9v8g\" (UniqueName: \"kubernetes.io/projected/0adabddf-74aa-416a-afef-b24b39897f9c-kube-api-access-p9v8g\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.126546 5017 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0adabddf-74aa-416a-afef-b24b39897f9c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.128032 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/71f6aede-754b-476f-8082-78f0e50b6a39-logs" (OuterVolumeSpecName: "logs") pod "71f6aede-754b-476f-8082-78f0e50b6a39" (UID: "71f6aede-754b-476f-8082-78f0e50b6a39"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.128259 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc01ff67-baeb-47d1-90f5-9cff65c9dffa-logs" (OuterVolumeSpecName: "logs") pod "dc01ff67-baeb-47d1-90f5-9cff65c9dffa" (UID: "dc01ff67-baeb-47d1-90f5-9cff65c9dffa"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.130052 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c69fc6f-43e9-4fe5-b964-8db89e6ab354-kube-api-access-dxz9k" (OuterVolumeSpecName: "kube-api-access-dxz9k") pod "9c69fc6f-43e9-4fe5-b964-8db89e6ab354" (UID: "9c69fc6f-43e9-4fe5-b964-8db89e6ab354"). InnerVolumeSpecName "kube-api-access-dxz9k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.130471 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c69fc6f-43e9-4fe5-b964-8db89e6ab354-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "9c69fc6f-43e9-4fe5-b964-8db89e6ab354" (UID: "9c69fc6f-43e9-4fe5-b964-8db89e6ab354"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 06:57:23 crc kubenswrapper[5017]: E0129 06:57:23.136821 5017 projected.go:194] Error preparing data for projected volume kube-api-access-nkqtq for pod openstack/keystone-7be5-account-create-update-qzv25: failed to fetch token: serviceaccounts "galera-openstack" not found Jan 29 06:57:23 crc kubenswrapper[5017]: E0129 06:57:23.136900 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a341d6ae-870f-4453-a804-1c0b4b43ce6f-kube-api-access-nkqtq podName:a341d6ae-870f-4453-a804-1c0b4b43ce6f nodeName:}" failed. No retries permitted until 2026-01-29 06:57:24.136876923 +0000 UTC m=+1330.511324723 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-nkqtq" (UniqueName: "kubernetes.io/projected/a341d6ae-870f-4453-a804-1c0b4b43ce6f-kube-api-access-nkqtq") pod "keystone-7be5-account-create-update-qzv25" (UID: "a341d6ae-870f-4453-a804-1c0b4b43ce6f") : failed to fetch token: serviceaccounts "galera-openstack" not found Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.144784 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71f6aede-754b-476f-8082-78f0e50b6a39-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "71f6aede-754b-476f-8082-78f0e50b6a39" (UID: "71f6aede-754b-476f-8082-78f0e50b6a39"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.145839 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/71f6aede-754b-476f-8082-78f0e50b6a39-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "71f6aede-754b-476f-8082-78f0e50b6a39" (UID: "71f6aede-754b-476f-8082-78f0e50b6a39"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.151342 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c69fc6f-43e9-4fe5-b964-8db89e6ab354-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "9c69fc6f-43e9-4fe5-b964-8db89e6ab354" (UID: "9c69fc6f-43e9-4fe5-b964-8db89e6ab354"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.151371 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71f6aede-754b-476f-8082-78f0e50b6a39-scripts" (OuterVolumeSpecName: "scripts") pod "71f6aede-754b-476f-8082-78f0e50b6a39" (UID: "71f6aede-754b-476f-8082-78f0e50b6a39"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.165673 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc01ff67-baeb-47d1-90f5-9cff65c9dffa-scripts" (OuterVolumeSpecName: "scripts") pod "dc01ff67-baeb-47d1-90f5-9cff65c9dffa" (UID: "dc01ff67-baeb-47d1-90f5-9cff65c9dffa"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.170224 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c69fc6f-43e9-4fe5-b964-8db89e6ab354-scripts" (OuterVolumeSpecName: "scripts") pod "9c69fc6f-43e9-4fe5-b964-8db89e6ab354" (UID: "9c69fc6f-43e9-4fe5-b964-8db89e6ab354"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.170513 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.189326 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc01ff67-baeb-47d1-90f5-9cff65c9dffa-kube-api-access-vhkzg" (OuterVolumeSpecName: "kube-api-access-vhkzg") pod "dc01ff67-baeb-47d1-90f5-9cff65c9dffa" (UID: "dc01ff67-baeb-47d1-90f5-9cff65c9dffa"). InnerVolumeSpecName "kube-api-access-vhkzg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.198634 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71f6aede-754b-476f-8082-78f0e50b6a39-kube-api-access-mbzbp" (OuterVolumeSpecName: "kube-api-access-mbzbp") pod "71f6aede-754b-476f-8082-78f0e50b6a39" (UID: "71f6aede-754b-476f-8082-78f0e50b6a39"). InnerVolumeSpecName "kube-api-access-mbzbp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.229517 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"e41c27f8-0c27-4e3d-83b1-62a61abb4faf\" (UID: \"e41c27f8-0c27-4e3d-83b1-62a61abb4faf\") " Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.229614 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e41c27f8-0c27-4e3d-83b1-62a61abb4faf-scripts\") pod \"e41c27f8-0c27-4e3d-83b1-62a61abb4faf\" (UID: \"e41c27f8-0c27-4e3d-83b1-62a61abb4faf\") " Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.229712 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l6g5c\" (UniqueName: \"kubernetes.io/projected/e41c27f8-0c27-4e3d-83b1-62a61abb4faf-kube-api-access-l6g5c\") pod \"e41c27f8-0c27-4e3d-83b1-62a61abb4faf\" (UID: \"e41c27f8-0c27-4e3d-83b1-62a61abb4faf\") " Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.229847 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e41c27f8-0c27-4e3d-83b1-62a61abb4faf-config-data\") pod \"e41c27f8-0c27-4e3d-83b1-62a61abb4faf\" (UID: \"e41c27f8-0c27-4e3d-83b1-62a61abb4faf\") " Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.229940 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e41c27f8-0c27-4e3d-83b1-62a61abb4faf-httpd-run\") pod \"e41c27f8-0c27-4e3d-83b1-62a61abb4faf\" (UID: \"e41c27f8-0c27-4e3d-83b1-62a61abb4faf\") " Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.230079 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e41c27f8-0c27-4e3d-83b1-62a61abb4faf-logs\") pod \"e41c27f8-0c27-4e3d-83b1-62a61abb4faf\" (UID: \"e41c27f8-0c27-4e3d-83b1-62a61abb4faf\") " Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.230120 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e41c27f8-0c27-4e3d-83b1-62a61abb4faf-internal-tls-certs\") pod \"e41c27f8-0c27-4e3d-83b1-62a61abb4faf\" (UID: \"e41c27f8-0c27-4e3d-83b1-62a61abb4faf\") " Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.230182 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e41c27f8-0c27-4e3d-83b1-62a61abb4faf-combined-ca-bundle\") pod \"e41c27f8-0c27-4e3d-83b1-62a61abb4faf\" (UID: \"e41c27f8-0c27-4e3d-83b1-62a61abb4faf\") " Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.232518 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e41c27f8-0c27-4e3d-83b1-62a61abb4faf-logs" (OuterVolumeSpecName: "logs") pod "e41c27f8-0c27-4e3d-83b1-62a61abb4faf" (UID: "e41c27f8-0c27-4e3d-83b1-62a61abb4faf"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.232858 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e41c27f8-0c27-4e3d-83b1-62a61abb4faf-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "e41c27f8-0c27-4e3d-83b1-62a61abb4faf" (UID: "e41c27f8-0c27-4e3d-83b1-62a61abb4faf"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:57:23 crc kubenswrapper[5017]: E0129 06:57:23.239305 5017 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Jan 29 06:57:23 crc kubenswrapper[5017]: E0129 06:57:23.239401 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3bf19cd1-b93c-449d-ba04-7fecd2ab65e2-operator-scripts podName:3bf19cd1-b93c-449d-ba04-7fecd2ab65e2 nodeName:}" failed. No retries permitted until 2026-01-29 06:57:24.239374292 +0000 UTC m=+1330.613821962 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/3bf19cd1-b93c-449d-ba04-7fecd2ab65e2-operator-scripts") pod "root-account-create-update-4qvq2" (UID: "3bf19cd1-b93c-449d-ba04-7fecd2ab65e2") : configmap "openstack-scripts" not found Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.239768 5017 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e41c27f8-0c27-4e3d-83b1-62a61abb4faf-logs\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.239791 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dxz9k\" (UniqueName: \"kubernetes.io/projected/9c69fc6f-43e9-4fe5-b964-8db89e6ab354-kube-api-access-dxz9k\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.239806 5017 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c69fc6f-43e9-4fe5-b964-8db89e6ab354-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.239819 5017 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9c69fc6f-43e9-4fe5-b964-8db89e6ab354-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.239829 5017 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71f6aede-754b-476f-8082-78f0e50b6a39-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.239837 5017 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc01ff67-baeb-47d1-90f5-9cff65c9dffa-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.239849 5017 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71f6aede-754b-476f-8082-78f0e50b6a39-logs\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.239858 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vhkzg\" (UniqueName: \"kubernetes.io/projected/dc01ff67-baeb-47d1-90f5-9cff65c9dffa-kube-api-access-vhkzg\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.239871 5017 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-mbzbp\" (UniqueName: \"kubernetes.io/projected/71f6aede-754b-476f-8082-78f0e50b6a39-kube-api-access-mbzbp\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.239881 5017 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/71f6aede-754b-476f-8082-78f0e50b6a39-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.239890 5017 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc01ff67-baeb-47d1-90f5-9cff65c9dffa-logs\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.239899 5017 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/71f6aede-754b-476f-8082-78f0e50b6a39-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.239908 5017 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9c69fc6f-43e9-4fe5-b964-8db89e6ab354-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.239917 5017 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e41c27f8-0c27-4e3d-83b1-62a61abb4faf-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.244235 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "e41c27f8-0c27-4e3d-83b1-62a61abb4faf" (UID: "e41c27f8-0c27-4e3d-83b1-62a61abb4faf"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.244270 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e41c27f8-0c27-4e3d-83b1-62a61abb4faf-kube-api-access-l6g5c" (OuterVolumeSpecName: "kube-api-access-l6g5c") pod "e41c27f8-0c27-4e3d-83b1-62a61abb4faf" (UID: "e41c27f8-0c27-4e3d-83b1-62a61abb4faf"). InnerVolumeSpecName "kube-api-access-l6g5c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.252581 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71f6aede-754b-476f-8082-78f0e50b6a39-config-data" (OuterVolumeSpecName: "config-data") pod "71f6aede-754b-476f-8082-78f0e50b6a39" (UID: "71f6aede-754b-476f-8082-78f0e50b6a39"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.252650 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71f6aede-754b-476f-8082-78f0e50b6a39-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "71f6aede-754b-476f-8082-78f0e50b6a39" (UID: "71f6aede-754b-476f-8082-78f0e50b6a39"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.254878 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c69fc6f-43e9-4fe5-b964-8db89e6ab354-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9c69fc6f-43e9-4fe5-b964-8db89e6ab354" (UID: "9c69fc6f-43e9-4fe5-b964-8db89e6ab354"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.260705 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e41c27f8-0c27-4e3d-83b1-62a61abb4faf-scripts" (OuterVolumeSpecName: "scripts") pod "e41c27f8-0c27-4e3d-83b1-62a61abb4faf" (UID: "e41c27f8-0c27-4e3d-83b1-62a61abb4faf"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.265350 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc01ff67-baeb-47d1-90f5-9cff65c9dffa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dc01ff67-baeb-47d1-90f5-9cff65c9dffa" (UID: "dc01ff67-baeb-47d1-90f5-9cff65c9dffa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.281494 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e41c27f8-0c27-4e3d-83b1-62a61abb4faf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e41c27f8-0c27-4e3d-83b1-62a61abb4faf" (UID: "e41c27f8-0c27-4e3d-83b1-62a61abb4faf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.283755 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71f6aede-754b-476f-8082-78f0e50b6a39-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "71f6aede-754b-476f-8082-78f0e50b6a39" (UID: "71f6aede-754b-476f-8082-78f0e50b6a39"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.321186 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc01ff67-baeb-47d1-90f5-9cff65c9dffa-config-data" (OuterVolumeSpecName: "config-data") pod "dc01ff67-baeb-47d1-90f5-9cff65c9dffa" (UID: "dc01ff67-baeb-47d1-90f5-9cff65c9dffa"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.350129 5017 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e41c27f8-0c27-4e3d-83b1-62a61abb4faf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.350203 5017 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.350219 5017 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71f6aede-754b-476f-8082-78f0e50b6a39-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.350234 5017 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e41c27f8-0c27-4e3d-83b1-62a61abb4faf-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.350252 5017 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/71f6aede-754b-476f-8082-78f0e50b6a39-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.350265 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l6g5c\" (UniqueName: \"kubernetes.io/projected/e41c27f8-0c27-4e3d-83b1-62a61abb4faf-kube-api-access-l6g5c\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.350279 5017 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc01ff67-baeb-47d1-90f5-9cff65c9dffa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.350290 5017 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/71f6aede-754b-476f-8082-78f0e50b6a39-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.350305 5017 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc01ff67-baeb-47d1-90f5-9cff65c9dffa-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.350319 5017 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c69fc6f-43e9-4fe5-b964-8db89e6ab354-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.362367 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e41c27f8-0c27-4e3d-83b1-62a61abb4faf-config-data" (OuterVolumeSpecName: "config-data") pod "e41c27f8-0c27-4e3d-83b1-62a61abb4faf" (UID: "e41c27f8-0c27-4e3d-83b1-62a61abb4faf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.368516 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c69fc6f-43e9-4fe5-b964-8db89e6ab354-config-data" (OuterVolumeSpecName: "config-data") pod "9c69fc6f-43e9-4fe5-b964-8db89e6ab354" (UID: "9c69fc6f-43e9-4fe5-b964-8db89e6ab354"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.378595 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71f6aede-754b-476f-8082-78f0e50b6a39-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "71f6aede-754b-476f-8082-78f0e50b6a39" (UID: "71f6aede-754b-476f-8082-78f0e50b6a39"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.389628 5017 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.391819 5017 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="3131ebc7-0955-4d4d-8444-057df1cc52f1" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.201:3000/\": dial tcp 10.217.0.201:3000: connect: connection refused" Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.401785 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc01ff67-baeb-47d1-90f5-9cff65c9dffa-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "dc01ff67-baeb-47d1-90f5-9cff65c9dffa" (UID: "dc01ff67-baeb-47d1-90f5-9cff65c9dffa"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.403565 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e41c27f8-0c27-4e3d-83b1-62a61abb4faf-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "e41c27f8-0c27-4e3d-83b1-62a61abb4faf" (UID: "e41c27f8-0c27-4e3d-83b1-62a61abb4faf"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.438346 5017 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-544777f6b8-l4dw8" podUID="919074d0-f7a7-4d64-8339-744730688c4f" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.162:9311/healthcheck\": read tcp 10.217.0.2:38620->10.217.0.162:9311: read: connection reset by peer" Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.438670 5017 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-544777f6b8-l4dw8" podUID="919074d0-f7a7-4d64-8339-744730688c4f" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.162:9311/healthcheck\": read tcp 10.217.0.2:38634->10.217.0.162:9311: read: connection reset by peer" Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.447254 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.451862 5017 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e41c27f8-0c27-4e3d-83b1-62a61abb4faf-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.451895 5017 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c69fc6f-43e9-4fe5-b964-8db89e6ab354-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.451905 5017 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.451919 5017 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71f6aede-754b-476f-8082-78f0e50b6a39-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.451930 5017 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e41c27f8-0c27-4e3d-83b1-62a61abb4faf-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.451943 5017 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc01ff67-baeb-47d1-90f5-9cff65c9dffa-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.452565 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc01ff67-baeb-47d1-90f5-9cff65c9dffa-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "dc01ff67-baeb-47d1-90f5-9cff65c9dffa" (UID: "dc01ff67-baeb-47d1-90f5-9cff65c9dffa"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.553389 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6ec780f-f6cc-4d8d-be76-f517dff0673c-combined-ca-bundle\") pod \"a6ec780f-f6cc-4d8d-be76-f517dff0673c\" (UID: \"a6ec780f-f6cc-4d8d-be76-f517dff0673c\") " Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.553531 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/a6ec780f-f6cc-4d8d-be76-f517dff0673c-kube-state-metrics-tls-config\") pod \"a6ec780f-f6cc-4d8d-be76-f517dff0673c\" (UID: \"a6ec780f-f6cc-4d8d-be76-f517dff0673c\") " Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.553575 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cxcld\" (UniqueName: \"kubernetes.io/projected/a6ec780f-f6cc-4d8d-be76-f517dff0673c-kube-api-access-cxcld\") pod \"a6ec780f-f6cc-4d8d-be76-f517dff0673c\" (UID: \"a6ec780f-f6cc-4d8d-be76-f517dff0673c\") " Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.553722 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6ec780f-f6cc-4d8d-be76-f517dff0673c-kube-state-metrics-tls-certs\") pod \"a6ec780f-f6cc-4d8d-be76-f517dff0673c\" (UID: \"a6ec780f-f6cc-4d8d-be76-f517dff0673c\") " Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.554506 5017 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc01ff67-baeb-47d1-90f5-9cff65c9dffa-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.559085 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6ec780f-f6cc-4d8d-be76-f517dff0673c-kube-api-access-cxcld" (OuterVolumeSpecName: "kube-api-access-cxcld") pod "a6ec780f-f6cc-4d8d-be76-f517dff0673c" (UID: "a6ec780f-f6cc-4d8d-be76-f517dff0673c"). InnerVolumeSpecName "kube-api-access-cxcld". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.592162 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6ec780f-f6cc-4d8d-be76-f517dff0673c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a6ec780f-f6cc-4d8d-be76-f517dff0673c" (UID: "a6ec780f-f6cc-4d8d-be76-f517dff0673c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:57:23 crc kubenswrapper[5017]: E0129 06:57:23.623845 5017 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1e3d13255a0abc9a24e35788246958c76121b82d0582fabb929e39b38e4591aa is running failed: container process not found" containerID="1e3d13255a0abc9a24e35788246958c76121b82d0582fabb929e39b38e4591aa" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 29 06:57:23 crc kubenswrapper[5017]: E0129 06:57:23.626574 5017 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1e3d13255a0abc9a24e35788246958c76121b82d0582fabb929e39b38e4591aa is running failed: container process not found" containerID="1e3d13255a0abc9a24e35788246958c76121b82d0582fabb929e39b38e4591aa" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 29 06:57:23 crc kubenswrapper[5017]: E0129 06:57:23.628778 5017 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1e3d13255a0abc9a24e35788246958c76121b82d0582fabb929e39b38e4591aa is running failed: container process not found" containerID="1e3d13255a0abc9a24e35788246958c76121b82d0582fabb929e39b38e4591aa" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 29 06:57:23 crc kubenswrapper[5017]: E0129 06:57:23.628818 5017 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1e3d13255a0abc9a24e35788246958c76121b82d0582fabb929e39b38e4591aa is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-mrhnf" podUID="131b99f7-3558-4aec-a0bf-7c1ef0f35a2b" containerName="ovsdb-server" Jan 29 06:57:23 crc kubenswrapper[5017]: E0129 06:57:23.631245 5017 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f3ad8c4f20bf955fbc05781f196725d983e26995a8d00c19f11477369f5a08b1" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 29 06:57:23 crc kubenswrapper[5017]: E0129 06:57:23.633444 5017 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f3ad8c4f20bf955fbc05781f196725d983e26995a8d00c19f11477369f5a08b1" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 29 06:57:23 crc kubenswrapper[5017]: E0129 06:57:23.635636 5017 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f3ad8c4f20bf955fbc05781f196725d983e26995a8d00c19f11477369f5a08b1" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 29 06:57:23 crc kubenswrapper[5017]: E0129 06:57:23.635700 5017 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-mrhnf" podUID="131b99f7-3558-4aec-a0bf-7c1ef0f35a2b" containerName="ovs-vswitchd" Jan 29 06:57:23 crc kubenswrapper[5017]: 
I0129 06:57:23.645992 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6ec780f-f6cc-4d8d-be76-f517dff0673c-kube-state-metrics-tls-certs" (OuterVolumeSpecName: "kube-state-metrics-tls-certs") pod "a6ec780f-f6cc-4d8d-be76-f517dff0673c" (UID: "a6ec780f-f6cc-4d8d-be76-f517dff0673c"). InnerVolumeSpecName "kube-state-metrics-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.653456 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6ec780f-f6cc-4d8d-be76-f517dff0673c-kube-state-metrics-tls-config" (OuterVolumeSpecName: "kube-state-metrics-tls-config") pod "a6ec780f-f6cc-4d8d-be76-f517dff0673c" (UID: "a6ec780f-f6cc-4d8d-be76-f517dff0673c"). InnerVolumeSpecName "kube-state-metrics-tls-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.657200 5017 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6ec780f-f6cc-4d8d-be76-f517dff0673c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.657230 5017 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/a6ec780f-f6cc-4d8d-be76-f517dff0673c-kube-state-metrics-tls-config\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.657249 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cxcld\" (UniqueName: \"kubernetes.io/projected/a6ec780f-f6cc-4d8d-be76-f517dff0673c-kube-api-access-cxcld\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.657268 5017 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6ec780f-f6cc-4d8d-be76-f517dff0673c-kube-state-metrics-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.766448 5017 generic.go:334] "Generic (PLEG): container finished" podID="3131ebc7-0955-4d4d-8444-057df1cc52f1" containerID="61f4dffdd2cc2b445230967b2177b548f4408eb3732a47610acca18c5946b0d3" exitCode=0 Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.767128 5017 generic.go:334] "Generic (PLEG): container finished" podID="3131ebc7-0955-4d4d-8444-057df1cc52f1" containerID="527c5a9750ed60502bff4d26ac1e68687da3387f7cd2976f75db5bc9ef3b8d4f" exitCode=0 Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.767059 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3131ebc7-0955-4d4d-8444-057df1cc52f1","Type":"ContainerDied","Data":"61f4dffdd2cc2b445230967b2177b548f4408eb3732a47610acca18c5946b0d3"} Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.767221 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3131ebc7-0955-4d4d-8444-057df1cc52f1","Type":"ContainerDied","Data":"527c5a9750ed60502bff4d26ac1e68687da3387f7cd2976f75db5bc9ef3b8d4f"} Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.773574 5017 generic.go:334] "Generic (PLEG): container finished" podID="8d94b8e3-f4a6-4fc2-af59-57b33254cd74" containerID="5c19de805cb364596b5009993949de148a0ae873176b17c0e742ef83f5bf9bd2" exitCode=0 Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.773664 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-metadata-0" event={"ID":"8d94b8e3-f4a6-4fc2-af59-57b33254cd74","Type":"ContainerDied","Data":"5c19de805cb364596b5009993949de148a0ae873176b17c0e742ef83f5bf9bd2"} Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.773694 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8d94b8e3-f4a6-4fc2-af59-57b33254cd74","Type":"ContainerDied","Data":"0c12aa5033bdeaa062e7b70d4a0c30a48c67fef5b076423507f8c68b0d684b32"} Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.773711 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0c12aa5033bdeaa062e7b70d4a0c30a48c67fef5b076423507f8c68b0d684b32" Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.783651 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e41c27f8-0c27-4e3d-83b1-62a61abb4faf","Type":"ContainerDied","Data":"8ad921e5528d4c71a7008240c3da3ab58d2f022a1e3bdfbf68d52189fee9984b"} Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.783715 5017 scope.go:117] "RemoveContainer" containerID="dbfb5839d0de6937d94e1b06808176fec9fce89e3d52e262a3d51db47ee776af" Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.783858 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.792774 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9df7814f-338e-40fb-95aa-f93dfa8307d6","Type":"ContainerDied","Data":"4cb2e309cb9d136517ca8e41e3becfde536e7b8da16a8490be5ddf8950013877"} Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.792845 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4cb2e309cb9d136517ca8e41e3becfde536e7b8da16a8490be5ddf8950013877" Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.799978 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"9c69fc6f-43e9-4fe5-b964-8db89e6ab354","Type":"ContainerDied","Data":"09bc6537d1f0bc72902e43e56274c6427b49d249e85072b33a90c4d85051d987"} Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.800125 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.800464 5017 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-rtkrb" podUID="4c57c864-37e8-46b9-b30d-1762f3858984" containerName="ovn-controller" probeResult="failure" output="command timed out" Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.819229 5017 generic.go:334] "Generic (PLEG): container finished" podID="cc46a149-0256-4061-9e32-936b2ec12588" containerID="7c791894b1734b0ef6f635f2bdcbd5ede8f7115df23c5068ed2fb5212e72b15f" exitCode=0 Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.819335 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"cc46a149-0256-4061-9e32-936b2ec12588","Type":"ContainerDied","Data":"7c791894b1734b0ef6f635f2bdcbd5ede8f7115df23c5068ed2fb5212e72b15f"} Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.819442 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"cc46a149-0256-4061-9e32-936b2ec12588","Type":"ContainerDied","Data":"6cfd9eabfdf0d74b868de543d9950402606b69f655914f3f12b54ce3ced9f61f"} Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.819456 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6cfd9eabfdf0d74b868de543d9950402606b69f655914f3f12b54ce3ced9f61f" Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.834097 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.834378 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"a6ec780f-f6cc-4d8d-be76-f517dff0673c","Type":"ContainerDied","Data":"9145350c14d6cd1b56f6bf2fb8df3b0a7d3a34b6258188c515ba10244a89aa8c"} Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.834439 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.858706 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.875092 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.875299 5017 generic.go:334] "Generic (PLEG): container finished" podID="919074d0-f7a7-4d64-8339-744730688c4f" containerID="290c7cb878f017a867ee8ae761d80813ae152cf14f4cb08011870623faa5a09c" exitCode=0 Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.875345 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-544777f6b8-l4dw8" event={"ID":"919074d0-f7a7-4d64-8339-744730688c4f","Type":"ContainerDied","Data":"290c7cb878f017a867ee8ae761d80813ae152cf14f4cb08011870623faa5a09c"} Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.875508 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.877185 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.877416 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-544777f6b8-l4dw8" Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.879602 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6d6c4c6dc8-5jbvh" event={"ID":"dc01ff67-baeb-47d1-90f5-9cff65c9dffa","Type":"ContainerDied","Data":"a9796bd011e07eae89e477a2e8422d623a06c70465616de74892fac21cb060b7"} Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.879700 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6d6c4c6dc8-5jbvh" Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.900474 5017 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-rtkrb" podUID="4c57c864-37e8-46b9-b30d-1762f3858984" containerName="ovn-controller" probeResult="failure" output=< Jan 29 06:57:23 crc kubenswrapper[5017]: ERROR - Failed to get connection status from ovn-controller, ovn-appctl exit status: 0 Jan 29 06:57:23 crc kubenswrapper[5017]: > Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.929935 5017 scope.go:117] "RemoveContainer" containerID="e97d4813efcf24ccacbdbd3a38be06d62a0d548850293f33dfd1414e7cf3dbe7" Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.943039 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.945709 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-381c-account-create-update-ngzmx" Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.960494 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-fd15-account-create-update-q6477" Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.961269 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7be5-account-create-update-qzv25" Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.970581 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.971101 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0fca-account-create-update-7k6jq" Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.971614 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"71f6aede-754b-476f-8082-78f0e50b6a39","Type":"ContainerDied","Data":"7266fb2e3fc5ecae384df96f6ce26886429801b6b37338710a04f93a864dbe88"} Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.971666 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.971739 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-2d72-account-create-update-hj9h9" Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.986382 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"9df7814f-338e-40fb-95aa-f93dfa8307d6\" (UID: \"9df7814f-338e-40fb-95aa-f93dfa8307d6\") " Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.986646 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d94b8e3-f4a6-4fc2-af59-57b33254cd74-logs\") pod \"8d94b8e3-f4a6-4fc2-af59-57b33254cd74\" (UID: \"8d94b8e3-f4a6-4fc2-af59-57b33254cd74\") " Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.986751 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d94b8e3-f4a6-4fc2-af59-57b33254cd74-combined-ca-bundle\") pod \"8d94b8e3-f4a6-4fc2-af59-57b33254cd74\" (UID: \"8d94b8e3-f4a6-4fc2-af59-57b33254cd74\") " Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.986840 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9wjdr\" (UniqueName: \"kubernetes.io/projected/9df7814f-338e-40fb-95aa-f93dfa8307d6-kube-api-access-9wjdr\") pod \"9df7814f-338e-40fb-95aa-f93dfa8307d6\" (UID: \"9df7814f-338e-40fb-95aa-f93dfa8307d6\") " Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.987086 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gkzdk\" (UniqueName: \"kubernetes.io/projected/8d94b8e3-f4a6-4fc2-af59-57b33254cd74-kube-api-access-gkzdk\") pod \"8d94b8e3-f4a6-4fc2-af59-57b33254cd74\" (UID: \"8d94b8e3-f4a6-4fc2-af59-57b33254cd74\") " Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.987372 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/919074d0-f7a7-4d64-8339-744730688c4f-config-data\") pod \"919074d0-f7a7-4d64-8339-744730688c4f\" (UID: \"919074d0-f7a7-4d64-8339-744730688c4f\") " Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.987743 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/919074d0-f7a7-4d64-8339-744730688c4f-internal-tls-certs\") pod \"919074d0-f7a7-4d64-8339-744730688c4f\" (UID: \"919074d0-f7a7-4d64-8339-744730688c4f\") " Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.992178 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d94b8e3-f4a6-4fc2-af59-57b33254cd74-config-data\") pod \"8d94b8e3-f4a6-4fc2-af59-57b33254cd74\" (UID: \"8d94b8e3-f4a6-4fc2-af59-57b33254cd74\") " Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.992310 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/cc46a149-0256-4061-9e32-936b2ec12588-kolla-config\") pod \"cc46a149-0256-4061-9e32-936b2ec12588\" (UID: \"cc46a149-0256-4061-9e32-936b2ec12588\") " Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.992428 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9df7814f-338e-40fb-95aa-f93dfa8307d6-public-tls-certs\") pod \"9df7814f-338e-40fb-95aa-f93dfa8307d6\" (UID: 
\"9df7814f-338e-40fb-95aa-f93dfa8307d6\") " Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.992519 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/919074d0-f7a7-4d64-8339-744730688c4f-logs\") pod \"919074d0-f7a7-4d64-8339-744730688c4f\" (UID: \"919074d0-f7a7-4d64-8339-744730688c4f\") " Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.992673 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc46a149-0256-4061-9e32-936b2ec12588-memcached-tls-certs\") pod \"cc46a149-0256-4061-9e32-936b2ec12588\" (UID: \"cc46a149-0256-4061-9e32-936b2ec12588\") " Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.995662 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/919074d0-f7a7-4d64-8339-744730688c4f-combined-ca-bundle\") pod \"919074d0-f7a7-4d64-8339-744730688c4f\" (UID: \"919074d0-f7a7-4d64-8339-744730688c4f\") " Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.996468 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9df7814f-338e-40fb-95aa-f93dfa8307d6-scripts\") pod \"9df7814f-338e-40fb-95aa-f93dfa8307d6\" (UID: \"9df7814f-338e-40fb-95aa-f93dfa8307d6\") " Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.996644 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hlpp9\" (UniqueName: \"kubernetes.io/projected/919074d0-f7a7-4d64-8339-744730688c4f-kube-api-access-hlpp9\") pod \"919074d0-f7a7-4d64-8339-744730688c4f\" (UID: \"919074d0-f7a7-4d64-8339-744730688c4f\") " Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.996763 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc46a149-0256-4061-9e32-936b2ec12588-combined-ca-bundle\") pod \"cc46a149-0256-4061-9e32-936b2ec12588\" (UID: \"cc46a149-0256-4061-9e32-936b2ec12588\") " Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.996852 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9df7814f-338e-40fb-95aa-f93dfa8307d6-httpd-run\") pod \"9df7814f-338e-40fb-95aa-f93dfa8307d6\" (UID: \"9df7814f-338e-40fb-95aa-f93dfa8307d6\") " Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.997053 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d94b8e3-f4a6-4fc2-af59-57b33254cd74-nova-metadata-tls-certs\") pod \"8d94b8e3-f4a6-4fc2-af59-57b33254cd74\" (UID: \"8d94b8e3-f4a6-4fc2-af59-57b33254cd74\") " Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.997869 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/919074d0-f7a7-4d64-8339-744730688c4f-public-tls-certs\") pod \"919074d0-f7a7-4d64-8339-744730688c4f\" (UID: \"919074d0-f7a7-4d64-8339-744730688c4f\") " Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.998395 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9df7814f-338e-40fb-95aa-f93dfa8307d6-logs\") pod \"9df7814f-338e-40fb-95aa-f93dfa8307d6\" (UID: 
\"9df7814f-338e-40fb-95aa-f93dfa8307d6\") " Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.998828 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cc46a149-0256-4061-9e32-936b2ec12588-config-data\") pod \"cc46a149-0256-4061-9e32-936b2ec12588\" (UID: \"cc46a149-0256-4061-9e32-936b2ec12588\") " Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.999265 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9df7814f-338e-40fb-95aa-f93dfa8307d6-config-data\") pod \"9df7814f-338e-40fb-95aa-f93dfa8307d6\" (UID: \"9df7814f-338e-40fb-95aa-f93dfa8307d6\") " Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.999388 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9df7814f-338e-40fb-95aa-f93dfa8307d6-combined-ca-bundle\") pod \"9df7814f-338e-40fb-95aa-f93dfa8307d6\" (UID: \"9df7814f-338e-40fb-95aa-f93dfa8307d6\") " Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.999581 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/919074d0-f7a7-4d64-8339-744730688c4f-config-data-custom\") pod \"919074d0-f7a7-4d64-8339-744730688c4f\" (UID: \"919074d0-f7a7-4d64-8339-744730688c4f\") " Jan 29 06:57:23 crc kubenswrapper[5017]: I0129 06:57:23.999687 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tbh96\" (UniqueName: \"kubernetes.io/projected/cc46a149-0256-4061-9e32-936b2ec12588-kube-api-access-tbh96\") pod \"cc46a149-0256-4061-9e32-936b2ec12588\" (UID: \"cc46a149-0256-4061-9e32-936b2ec12588\") " Jan 29 06:57:24 crc kubenswrapper[5017]: I0129 06:57:23.992070 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d94b8e3-f4a6-4fc2-af59-57b33254cd74-logs" (OuterVolumeSpecName: "logs") pod "8d94b8e3-f4a6-4fc2-af59-57b33254cd74" (UID: "8d94b8e3-f4a6-4fc2-af59-57b33254cd74"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:57:24 crc kubenswrapper[5017]: I0129 06:57:24.001932 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9df7814f-338e-40fb-95aa-f93dfa8307d6-logs" (OuterVolumeSpecName: "logs") pod "9df7814f-338e-40fb-95aa-f93dfa8307d6" (UID: "9df7814f-338e-40fb-95aa-f93dfa8307d6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:57:24 crc kubenswrapper[5017]: I0129 06:57:24.013878 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9df7814f-338e-40fb-95aa-f93dfa8307d6-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "9df7814f-338e-40fb-95aa-f93dfa8307d6" (UID: "9df7814f-338e-40fb-95aa-f93dfa8307d6"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:57:24 crc kubenswrapper[5017]: I0129 06:57:24.014364 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "glance") pod "9df7814f-338e-40fb-95aa-f93dfa8307d6" (UID: "9df7814f-338e-40fb-95aa-f93dfa8307d6"). InnerVolumeSpecName "local-storage07-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 29 06:57:24 crc kubenswrapper[5017]: I0129 06:57:24.014622 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/919074d0-f7a7-4d64-8339-744730688c4f-logs" (OuterVolumeSpecName: "logs") pod "919074d0-f7a7-4d64-8339-744730688c4f" (UID: "919074d0-f7a7-4d64-8339-744730688c4f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:57:24 crc kubenswrapper[5017]: I0129 06:57:24.018709 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc46a149-0256-4061-9e32-936b2ec12588-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "cc46a149-0256-4061-9e32-936b2ec12588" (UID: "cc46a149-0256-4061-9e32-936b2ec12588"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:57:24 crc kubenswrapper[5017]: I0129 06:57:24.018993 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9df7814f-338e-40fb-95aa-f93dfa8307d6-kube-api-access-9wjdr" (OuterVolumeSpecName: "kube-api-access-9wjdr") pod "9df7814f-338e-40fb-95aa-f93dfa8307d6" (UID: "9df7814f-338e-40fb-95aa-f93dfa8307d6"). InnerVolumeSpecName "kube-api-access-9wjdr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:57:24 crc kubenswrapper[5017]: I0129 06:57:24.021245 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc46a149-0256-4061-9e32-936b2ec12588-config-data" (OuterVolumeSpecName: "config-data") pod "cc46a149-0256-4061-9e32-936b2ec12588" (UID: "cc46a149-0256-4061-9e32-936b2ec12588"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:57:24 crc kubenswrapper[5017]: I0129 06:57:23.989804 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 06:57:24 crc kubenswrapper[5017]: I0129 06:57:24.021326 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 06:57:24 crc kubenswrapper[5017]: I0129 06:57:24.021379 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-6d6c4c6dc8-5jbvh"] Jan 29 06:57:24 crc kubenswrapper[5017]: I0129 06:57:24.021393 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-6d6c4c6dc8-5jbvh"] Jan 29 06:57:24 crc kubenswrapper[5017]: I0129 06:57:24.031077 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d94b8e3-f4a6-4fc2-af59-57b33254cd74-kube-api-access-gkzdk" (OuterVolumeSpecName: "kube-api-access-gkzdk") pod "8d94b8e3-f4a6-4fc2-af59-57b33254cd74" (UID: "8d94b8e3-f4a6-4fc2-af59-57b33254cd74"). InnerVolumeSpecName "kube-api-access-gkzdk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:57:24 crc kubenswrapper[5017]: I0129 06:57:24.044352 5017 scope.go:117] "RemoveContainer" containerID="999837a7b08863c4bad372817a69589db1cc60b86b13c6914e11866e71643157" Jan 29 06:57:24 crc kubenswrapper[5017]: I0129 06:57:24.067242 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc46a149-0256-4061-9e32-936b2ec12588-kube-api-access-tbh96" (OuterVolumeSpecName: "kube-api-access-tbh96") pod "cc46a149-0256-4061-9e32-936b2ec12588" (UID: "cc46a149-0256-4061-9e32-936b2ec12588"). InnerVolumeSpecName "kube-api-access-tbh96". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:57:24 crc kubenswrapper[5017]: I0129 06:57:24.100661 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/919074d0-f7a7-4d64-8339-744730688c4f-kube-api-access-hlpp9" (OuterVolumeSpecName: "kube-api-access-hlpp9") pod "919074d0-f7a7-4d64-8339-744730688c4f" (UID: "919074d0-f7a7-4d64-8339-744730688c4f"). InnerVolumeSpecName "kube-api-access-hlpp9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:57:24 crc kubenswrapper[5017]: I0129 06:57:24.103736 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a341d6ae-870f-4453-a804-1c0b4b43ce6f-operator-scripts\") pod \"keystone-7be5-account-create-update-qzv25\" (UID: \"a341d6ae-870f-4453-a804-1c0b4b43ce6f\") " pod="openstack/keystone-7be5-account-create-update-qzv25" Jan 29 06:57:24 crc kubenswrapper[5017]: I0129 06:57:24.103942 5017 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/919074d0-f7a7-4d64-8339-744730688c4f-logs\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:24 crc kubenswrapper[5017]: I0129 06:57:24.103973 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hlpp9\" (UniqueName: \"kubernetes.io/projected/919074d0-f7a7-4d64-8339-744730688c4f-kube-api-access-hlpp9\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:24 crc kubenswrapper[5017]: I0129 06:57:24.103983 5017 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9df7814f-338e-40fb-95aa-f93dfa8307d6-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:24 crc kubenswrapper[5017]: I0129 06:57:24.103992 5017 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9df7814f-338e-40fb-95aa-f93dfa8307d6-logs\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:24 crc kubenswrapper[5017]: I0129 06:57:24.104001 5017 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cc46a149-0256-4061-9e32-936b2ec12588-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:24 crc kubenswrapper[5017]: I0129 06:57:24.104010 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tbh96\" (UniqueName: \"kubernetes.io/projected/cc46a149-0256-4061-9e32-936b2ec12588-kube-api-access-tbh96\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:24 crc kubenswrapper[5017]: I0129 06:57:24.104035 5017 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Jan 29 06:57:24 crc kubenswrapper[5017]: I0129 06:57:24.104043 5017 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d94b8e3-f4a6-4fc2-af59-57b33254cd74-logs\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:24 crc kubenswrapper[5017]: I0129 06:57:24.104052 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9wjdr\" (UniqueName: \"kubernetes.io/projected/9df7814f-338e-40fb-95aa-f93dfa8307d6-kube-api-access-9wjdr\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:24 crc kubenswrapper[5017]: I0129 06:57:24.104064 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gkzdk\" (UniqueName: \"kubernetes.io/projected/8d94b8e3-f4a6-4fc2-af59-57b33254cd74-kube-api-access-gkzdk\") on 
node \"crc\" DevicePath \"\"" Jan 29 06:57:24 crc kubenswrapper[5017]: I0129 06:57:24.104072 5017 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/cc46a149-0256-4061-9e32-936b2ec12588-kolla-config\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:24 crc kubenswrapper[5017]: E0129 06:57:24.106100 5017 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Jan 29 06:57:24 crc kubenswrapper[5017]: E0129 06:57:24.106239 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a341d6ae-870f-4453-a804-1c0b4b43ce6f-operator-scripts podName:a341d6ae-870f-4453-a804-1c0b4b43ce6f nodeName:}" failed. No retries permitted until 2026-01-29 06:57:26.106214344 +0000 UTC m=+1332.480661954 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/a341d6ae-870f-4453-a804-1c0b4b43ce6f-operator-scripts") pod "keystone-7be5-account-create-update-qzv25" (UID: "a341d6ae-870f-4453-a804-1c0b4b43ce6f") : configmap "openstack-scripts" not found Jan 29 06:57:24 crc kubenswrapper[5017]: I0129 06:57:24.124233 5017 scope.go:117] "RemoveContainer" containerID="27c14ee221c2ba154a659bac681ce15f66d62c55c0cd0468426e11a64f23d5e9" Jan 29 06:57:24 crc kubenswrapper[5017]: I0129 06:57:24.125311 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/919074d0-f7a7-4d64-8339-744730688c4f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "919074d0-f7a7-4d64-8339-744730688c4f" (UID: "919074d0-f7a7-4d64-8339-744730688c4f"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:57:24 crc kubenswrapper[5017]: I0129 06:57:24.129626 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 29 06:57:24 crc kubenswrapper[5017]: I0129 06:57:24.148033 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 29 06:57:24 crc kubenswrapper[5017]: I0129 06:57:24.149545 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9df7814f-338e-40fb-95aa-f93dfa8307d6-scripts" (OuterVolumeSpecName: "scripts") pod "9df7814f-338e-40fb-95aa-f93dfa8307d6" (UID: "9df7814f-338e-40fb-95aa-f93dfa8307d6"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:57:24 crc kubenswrapper[5017]: I0129 06:57:24.185711 5017 scope.go:117] "RemoveContainer" containerID="1ab253ce158826a8eb8853f59892c8ebab6a5018fa1efc6e8ae6b7c7d6f3c586" Jan 29 06:57:24 crc kubenswrapper[5017]: I0129 06:57:24.209136 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nkqtq\" (UniqueName: \"kubernetes.io/projected/a341d6ae-870f-4453-a804-1c0b4b43ce6f-kube-api-access-nkqtq\") pod \"keystone-7be5-account-create-update-qzv25\" (UID: \"a341d6ae-870f-4453-a804-1c0b4b43ce6f\") " pod="openstack/keystone-7be5-account-create-update-qzv25" Jan 29 06:57:24 crc kubenswrapper[5017]: I0129 06:57:24.209554 5017 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/919074d0-f7a7-4d64-8339-744730688c4f-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:24 crc kubenswrapper[5017]: I0129 06:57:24.209596 5017 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9df7814f-338e-40fb-95aa-f93dfa8307d6-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:24 crc kubenswrapper[5017]: E0129 06:57:24.213558 5017 projected.go:194] Error preparing data for projected volume kube-api-access-nkqtq for pod openstack/keystone-7be5-account-create-update-qzv25: failed to fetch token: serviceaccounts "galera-openstack" not found Jan 29 06:57:24 crc kubenswrapper[5017]: E0129 06:57:24.213630 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a341d6ae-870f-4453-a804-1c0b4b43ce6f-kube-api-access-nkqtq podName:a341d6ae-870f-4453-a804-1c0b4b43ce6f nodeName:}" failed. No retries permitted until 2026-01-29 06:57:26.213613334 +0000 UTC m=+1332.588060944 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-nkqtq" (UniqueName: "kubernetes.io/projected/a341d6ae-870f-4453-a804-1c0b4b43ce6f-kube-api-access-nkqtq") pod "keystone-7be5-account-create-update-qzv25" (UID: "a341d6ae-870f-4453-a804-1c0b4b43ce6f") : failed to fetch token: serviceaccounts "galera-openstack" not found Jan 29 06:57:24 crc kubenswrapper[5017]: I0129 06:57:24.230913 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-2d72-account-create-update-hj9h9"] Jan 29 06:57:24 crc kubenswrapper[5017]: I0129 06:57:24.253424 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9df7814f-338e-40fb-95aa-f93dfa8307d6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9df7814f-338e-40fb-95aa-f93dfa8307d6" (UID: "9df7814f-338e-40fb-95aa-f93dfa8307d6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:57:24 crc kubenswrapper[5017]: I0129 06:57:24.257193 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-2d72-account-create-update-hj9h9"] Jan 29 06:57:24 crc kubenswrapper[5017]: I0129 06:57:24.261564 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc46a149-0256-4061-9e32-936b2ec12588-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cc46a149-0256-4061-9e32-936b2ec12588" (UID: "cc46a149-0256-4061-9e32-936b2ec12588"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:57:24 crc kubenswrapper[5017]: I0129 06:57:24.285396 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d94b8e3-f4a6-4fc2-af59-57b33254cd74-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8d94b8e3-f4a6-4fc2-af59-57b33254cd74" (UID: "8d94b8e3-f4a6-4fc2-af59-57b33254cd74"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:57:24 crc kubenswrapper[5017]: I0129 06:57:24.285613 5017 scope.go:117] "RemoveContainer" containerID="443395d71d852c3ec070ffebcf4c6e95bc2745cdc77bf998d3b62968c01056ef" Jan 29 06:57:24 crc kubenswrapper[5017]: I0129 06:57:24.310236 5017 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Jan 29 06:57:24 crc kubenswrapper[5017]: I0129 06:57:24.312088 5017 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d94b8e3-f4a6-4fc2-af59-57b33254cd74-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:24 crc kubenswrapper[5017]: I0129 06:57:24.312110 5017 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc46a149-0256-4061-9e32-936b2ec12588-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:24 crc kubenswrapper[5017]: I0129 06:57:24.312122 5017 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9df7814f-338e-40fb-95aa-f93dfa8307d6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:24 crc kubenswrapper[5017]: I0129 06:57:24.312131 5017 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:24 crc kubenswrapper[5017]: E0129 06:57:24.312200 5017 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 29 06:57:24 crc kubenswrapper[5017]: E0129 06:57:24.312466 5017 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Jan 29 06:57:24 crc kubenswrapper[5017]: E0129 06:57:24.312931 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d30b013f-453f-4282-8b22-2a5270027828-config-data podName:d30b013f-453f-4282-8b22-2a5270027828 nodeName:}" failed. No retries permitted until 2026-01-29 06:57:32.312238377 +0000 UTC m=+1338.686685987 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/d30b013f-453f-4282-8b22-2a5270027828-config-data") pod "rabbitmq-cell1-server-0" (UID: "d30b013f-453f-4282-8b22-2a5270027828") : configmap "rabbitmq-cell1-config-data" not found Jan 29 06:57:24 crc kubenswrapper[5017]: E0129 06:57:24.312950 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3bf19cd1-b93c-449d-ba04-7fecd2ab65e2-operator-scripts podName:3bf19cd1-b93c-449d-ba04-7fecd2ab65e2 nodeName:}" failed. No retries permitted until 2026-01-29 06:57:26.312942534 +0000 UTC m=+1332.687390144 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/3bf19cd1-b93c-449d-ba04-7fecd2ab65e2-operator-scripts") pod "root-account-create-update-4qvq2" (UID: "3bf19cd1-b93c-449d-ba04-7fecd2ab65e2") : configmap "openstack-scripts" not found Jan 29 06:57:24 crc kubenswrapper[5017]: I0129 06:57:24.322964 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d94b8e3-f4a6-4fc2-af59-57b33254cd74-config-data" (OuterVolumeSpecName: "config-data") pod "8d94b8e3-f4a6-4fc2-af59-57b33254cd74" (UID: "8d94b8e3-f4a6-4fc2-af59-57b33254cd74"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:57:24 crc kubenswrapper[5017]: I0129 06:57:24.328768 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-4qvq2" Jan 29 06:57:24 crc kubenswrapper[5017]: I0129 06:57:24.349539 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9df7814f-338e-40fb-95aa-f93dfa8307d6-config-data" (OuterVolumeSpecName: "config-data") pod "9df7814f-338e-40fb-95aa-f93dfa8307d6" (UID: "9df7814f-338e-40fb-95aa-f93dfa8307d6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:57:24 crc kubenswrapper[5017]: I0129 06:57:24.350887 5017 scope.go:117] "RemoveContainer" containerID="2eee62f312708ba7438eddc1dabc0de687bb3ae24d41ed266160597a9d245df9" Jan 29 06:57:24 crc kubenswrapper[5017]: I0129 06:57:24.351177 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/919074d0-f7a7-4d64-8339-744730688c4f-config-data" (OuterVolumeSpecName: "config-data") pod "919074d0-f7a7-4d64-8339-744730688c4f" (UID: "919074d0-f7a7-4d64-8339-744730688c4f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:57:24 crc kubenswrapper[5017]: I0129 06:57:24.354415 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="615b2757-5eab-4454-95da-663755846932" path="/var/lib/kubelet/pods/615b2757-5eab-4454-95da-663755846932/volumes" Jan 29 06:57:24 crc kubenswrapper[5017]: I0129 06:57:24.356701 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/919074d0-f7a7-4d64-8339-744730688c4f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "919074d0-f7a7-4d64-8339-744730688c4f" (UID: "919074d0-f7a7-4d64-8339-744730688c4f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:57:24 crc kubenswrapper[5017]: I0129 06:57:24.358789 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/919074d0-f7a7-4d64-8339-744730688c4f-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "919074d0-f7a7-4d64-8339-744730688c4f" (UID: "919074d0-f7a7-4d64-8339-744730688c4f"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:57:24 crc kubenswrapper[5017]: I0129 06:57:24.359688 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7133c436-5656-4d57-aca3-64e9542ef299" path="/var/lib/kubelet/pods/7133c436-5656-4d57-aca3-64e9542ef299/volumes" Jan 29 06:57:24 crc kubenswrapper[5017]: I0129 06:57:24.360394 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71f6aede-754b-476f-8082-78f0e50b6a39" path="/var/lib/kubelet/pods/71f6aede-754b-476f-8082-78f0e50b6a39/volumes" Jan 29 06:57:24 crc kubenswrapper[5017]: I0129 06:57:24.367327 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91ff38c8-c1d9-48bd-a593-829ed49d4c2d" path="/var/lib/kubelet/pods/91ff38c8-c1d9-48bd-a593-829ed49d4c2d/volumes" Jan 29 06:57:24 crc kubenswrapper[5017]: I0129 06:57:24.404999 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c69fc6f-43e9-4fe5-b964-8db89e6ab354" path="/var/lib/kubelet/pods/9c69fc6f-43e9-4fe5-b964-8db89e6ab354/volumes" Jan 29 06:57:24 crc kubenswrapper[5017]: I0129 06:57:24.405733 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/919074d0-f7a7-4d64-8339-744730688c4f-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "919074d0-f7a7-4d64-8339-744730688c4f" (UID: "919074d0-f7a7-4d64-8339-744730688c4f"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:57:24 crc kubenswrapper[5017]: I0129 06:57:24.407651 5017 scope.go:117] "RemoveContainer" containerID="1b90244a79764c9e3b9a5b69c14c39546fc533d032150453b20e901a8805b3fe" Jan 29 06:57:24 crc kubenswrapper[5017]: I0129 06:57:24.408620 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6ec780f-f6cc-4d8d-be76-f517dff0673c" path="/var/lib/kubelet/pods/a6ec780f-f6cc-4d8d-be76-f517dff0673c/volumes" Jan 29 06:57:24 crc kubenswrapper[5017]: I0129 06:57:24.410180 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba8a2800-04ff-44da-866f-1c4cabfe809f" path="/var/lib/kubelet/pods/ba8a2800-04ff-44da-866f-1c4cabfe809f/volumes" Jan 29 06:57:24 crc kubenswrapper[5017]: I0129 06:57:24.413096 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc01ff67-baeb-47d1-90f5-9cff65c9dffa" path="/var/lib/kubelet/pods/dc01ff67-baeb-47d1-90f5-9cff65c9dffa/volumes" Jan 29 06:57:24 crc kubenswrapper[5017]: I0129 06:57:24.413111 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dlbp4\" (UniqueName: \"kubernetes.io/projected/3bf19cd1-b93c-449d-ba04-7fecd2ab65e2-kube-api-access-dlbp4\") pod \"3bf19cd1-b93c-449d-ba04-7fecd2ab65e2\" (UID: \"3bf19cd1-b93c-449d-ba04-7fecd2ab65e2\") " Jan 29 06:57:24 crc kubenswrapper[5017]: I0129 06:57:24.413537 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3bf19cd1-b93c-449d-ba04-7fecd2ab65e2-operator-scripts\") pod \"3bf19cd1-b93c-449d-ba04-7fecd2ab65e2\" (UID: \"3bf19cd1-b93c-449d-ba04-7fecd2ab65e2\") " Jan 29 06:57:24 crc kubenswrapper[5017]: I0129 06:57:24.414451 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9df7814f-338e-40fb-95aa-f93dfa8307d6-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "9df7814f-338e-40fb-95aa-f93dfa8307d6" (UID: "9df7814f-338e-40fb-95aa-f93dfa8307d6"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:57:24 crc kubenswrapper[5017]: I0129 06:57:24.414795 5017 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9df7814f-338e-40fb-95aa-f93dfa8307d6-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:24 crc kubenswrapper[5017]: I0129 06:57:24.414825 5017 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/919074d0-f7a7-4d64-8339-744730688c4f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:24 crc kubenswrapper[5017]: I0129 06:57:24.414839 5017 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/919074d0-f7a7-4d64-8339-744730688c4f-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:24 crc kubenswrapper[5017]: I0129 06:57:24.414850 5017 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9df7814f-338e-40fb-95aa-f93dfa8307d6-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:24 crc kubenswrapper[5017]: I0129 06:57:24.414861 5017 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/919074d0-f7a7-4d64-8339-744730688c4f-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:24 crc kubenswrapper[5017]: I0129 06:57:24.414874 5017 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/919074d0-f7a7-4d64-8339-744730688c4f-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:24 crc kubenswrapper[5017]: I0129 06:57:24.414887 5017 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d94b8e3-f4a6-4fc2-af59-57b33254cd74-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:24 crc kubenswrapper[5017]: I0129 06:57:24.417544 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e41c27f8-0c27-4e3d-83b1-62a61abb4faf" path="/var/lib/kubelet/pods/e41c27f8-0c27-4e3d-83b1-62a61abb4faf/volumes" Jan 29 06:57:24 crc kubenswrapper[5017]: I0129 06:57:24.426054 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3bf19cd1-b93c-449d-ba04-7fecd2ab65e2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3bf19cd1-b93c-449d-ba04-7fecd2ab65e2" (UID: "3bf19cd1-b93c-449d-ba04-7fecd2ab65e2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:57:24 crc kubenswrapper[5017]: I0129 06:57:24.432030 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d94b8e3-f4a6-4fc2-af59-57b33254cd74-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "8d94b8e3-f4a6-4fc2-af59-57b33254cd74" (UID: "8d94b8e3-f4a6-4fc2-af59-57b33254cd74"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:57:24 crc kubenswrapper[5017]: I0129 06:57:24.438949 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bf19cd1-b93c-449d-ba04-7fecd2ab65e2-kube-api-access-dlbp4" (OuterVolumeSpecName: "kube-api-access-dlbp4") pod "3bf19cd1-b93c-449d-ba04-7fecd2ab65e2" (UID: "3bf19cd1-b93c-449d-ba04-7fecd2ab65e2"). InnerVolumeSpecName "kube-api-access-dlbp4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:57:24 crc kubenswrapper[5017]: I0129 06:57:24.466921 5017 scope.go:117] "RemoveContainer" containerID="9244b1fe8ffaffe1ec210b4f9fe46f1fb5d4f2443ca7e7e703e6ca10fd8766d0" Jan 29 06:57:24 crc kubenswrapper[5017]: I0129 06:57:24.468139 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-7be5-account-create-update-qzv25"] Jan 29 06:57:24 crc kubenswrapper[5017]: I0129 06:57:24.468187 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-7be5-account-create-update-qzv25"] Jan 29 06:57:24 crc kubenswrapper[5017]: I0129 06:57:24.468216 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0fca-account-create-update-7k6jq"] Jan 29 06:57:24 crc kubenswrapper[5017]: I0129 06:57:24.468226 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0fca-account-create-update-7k6jq"] Jan 29 06:57:24 crc kubenswrapper[5017]: I0129 06:57:24.468244 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-fd15-account-create-update-q6477"] Jan 29 06:57:24 crc kubenswrapper[5017]: I0129 06:57:24.474254 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc46a149-0256-4061-9e32-936b2ec12588-memcached-tls-certs" (OuterVolumeSpecName: "memcached-tls-certs") pod "cc46a149-0256-4061-9e32-936b2ec12588" (UID: "cc46a149-0256-4061-9e32-936b2ec12588"). InnerVolumeSpecName "memcached-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:57:24 crc kubenswrapper[5017]: I0129 06:57:24.477153 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-fd15-account-create-update-q6477"] Jan 29 06:57:24 crc kubenswrapper[5017]: I0129 06:57:24.507039 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-381c-account-create-update-ngzmx"] Jan 29 06:57:24 crc kubenswrapper[5017]: I0129 06:57:24.515468 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-381c-account-create-update-ngzmx"] Jan 29 06:57:24 crc kubenswrapper[5017]: I0129 06:57:24.516974 5017 reconciler_common.go:293] "Volume detached for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc46a149-0256-4061-9e32-936b2ec12588-memcached-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:24 crc kubenswrapper[5017]: I0129 06:57:24.516990 5017 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d94b8e3-f4a6-4fc2-af59-57b33254cd74-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:24 crc kubenswrapper[5017]: I0129 06:57:24.517001 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dlbp4\" (UniqueName: \"kubernetes.io/projected/3bf19cd1-b93c-449d-ba04-7fecd2ab65e2-kube-api-access-dlbp4\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:24 crc kubenswrapper[5017]: I0129 06:57:24.517011 5017 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3bf19cd1-b93c-449d-ba04-7fecd2ab65e2-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:24 crc kubenswrapper[5017]: E0129 06:57:24.521520 5017 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="cd271b62ca4015e030ca07e0ae6b52baec3e519d53550f369d0cfbcc931e68fd" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 29 06:57:24 crc kubenswrapper[5017]: E0129 06:57:24.527315 5017 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="cd271b62ca4015e030ca07e0ae6b52baec3e519d53550f369d0cfbcc931e68fd" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 29 06:57:24 crc kubenswrapper[5017]: E0129 06:57:24.529417 5017 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="cd271b62ca4015e030ca07e0ae6b52baec3e519d53550f369d0cfbcc931e68fd" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 29 06:57:24 crc kubenswrapper[5017]: E0129 06:57:24.529514 5017 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell1-conductor-0" podUID="18edd5b3-27eb-43f3-8d6b-03490c243c78" containerName="nova-cell1-conductor-conductor" Jan 29 06:57:24 crc kubenswrapper[5017]: I0129 06:57:24.619530 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nkqtq\" (UniqueName: \"kubernetes.io/projected/a341d6ae-870f-4453-a804-1c0b4b43ce6f-kube-api-access-nkqtq\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:24 crc kubenswrapper[5017]: I0129 06:57:24.619878 5017 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a341d6ae-870f-4453-a804-1c0b4b43ce6f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:24 crc kubenswrapper[5017]: I0129 06:57:24.697915 5017 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="d30b013f-453f-4282-8b22-2a5270027828" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.101:5671: connect: connection refused" Jan 29 06:57:24 crc kubenswrapper[5017]: E0129 06:57:24.830123 5017 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 29 06:57:24 crc kubenswrapper[5017]: E0129 06:57:24.830226 5017 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a-config-data podName:5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a nodeName:}" failed. No retries permitted until 2026-01-29 06:57:32.830204748 +0000 UTC m=+1339.204652358 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a-config-data") pod "rabbitmq-server-0" (UID: "5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a") : configmap "rabbitmq-config-data" not found Jan 29 06:57:24 crc kubenswrapper[5017]: I0129 06:57:24.986479 5017 generic.go:334] "Generic (PLEG): container finished" podID="9af88cca-e43b-483d-beae-d6a56940aff7" containerID="ec537ea4d90113835bbfd7b41bd980ad15b540d9718e92b22b059584ee668478" exitCode=0 Jan 29 06:57:24 crc kubenswrapper[5017]: I0129 06:57:24.987050 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"9af88cca-e43b-483d-beae-d6a56940aff7","Type":"ContainerDied","Data":"ec537ea4d90113835bbfd7b41bd980ad15b540d9718e92b22b059584ee668478"} Jan 29 06:57:25 crc kubenswrapper[5017]: I0129 06:57:25.008390 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-4qvq2" event={"ID":"3bf19cd1-b93c-449d-ba04-7fecd2ab65e2","Type":"ContainerDied","Data":"6276c128869f701d011c860f78e0e60b2462638f473965139bcb3824b0dd8b96"} Jan 29 06:57:25 crc kubenswrapper[5017]: I0129 06:57:25.008450 5017 scope.go:117] "RemoveContainer" containerID="c26ced751a74c883105fc37909d9ee54edeff58bd72680f5e7e5b84c045baf04" Jan 29 06:57:25 crc kubenswrapper[5017]: I0129 06:57:25.008530 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-4qvq2" Jan 29 06:57:25 crc kubenswrapper[5017]: I0129 06:57:25.033501 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-544777f6b8-l4dw8" event={"ID":"919074d0-f7a7-4d64-8339-744730688c4f","Type":"ContainerDied","Data":"b9d49293b936127c8781c9b57a3b8fd3a124e2e46c18bf13435d389658df75d9"} Jan 29 06:57:25 crc kubenswrapper[5017]: I0129 06:57:25.033579 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 29 06:57:25 crc kubenswrapper[5017]: I0129 06:57:25.033626 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 06:57:25 crc kubenswrapper[5017]: I0129 06:57:25.033643 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 06:57:25 crc kubenswrapper[5017]: I0129 06:57:25.033667 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-544777f6b8-l4dw8" Jan 29 06:57:25 crc kubenswrapper[5017]: I0129 06:57:25.114715 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 29 06:57:25 crc kubenswrapper[5017]: I0129 06:57:25.131584 5017 scope.go:117] "RemoveContainer" containerID="290c7cb878f017a867ee8ae761d80813ae152cf14f4cb08011870623faa5a09c" Jan 29 06:57:25 crc kubenswrapper[5017]: I0129 06:57:25.134577 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 06:57:25 crc kubenswrapper[5017]: I0129 06:57:25.157738 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 06:57:25 crc kubenswrapper[5017]: I0129 06:57:25.175298 5017 scope.go:117] "RemoveContainer" containerID="9a97268bf48d3ec39b862fe626a036c8b8bf71b7f2894c8b7bf80ae0f1be7da0" Jan 29 06:57:25 crc kubenswrapper[5017]: I0129 06:57:25.187647 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-544777f6b8-l4dw8"] Jan 29 06:57:25 crc kubenswrapper[5017]: I0129 06:57:25.206856 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-544777f6b8-l4dw8"] Jan 29 06:57:25 crc kubenswrapper[5017]: I0129 06:57:25.239651 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9af88cca-e43b-483d-beae-d6a56940aff7-galera-tls-certs\") pod \"9af88cca-e43b-483d-beae-d6a56940aff7\" (UID: \"9af88cca-e43b-483d-beae-d6a56940aff7\") " Jan 29 06:57:25 crc kubenswrapper[5017]: I0129 06:57:25.239755 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"9af88cca-e43b-483d-beae-d6a56940aff7\" (UID: \"9af88cca-e43b-483d-beae-d6a56940aff7\") " Jan 29 06:57:25 crc kubenswrapper[5017]: I0129 06:57:25.239850 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9af88cca-e43b-483d-beae-d6a56940aff7-operator-scripts\") pod \"9af88cca-e43b-483d-beae-d6a56940aff7\" (UID: \"9af88cca-e43b-483d-beae-d6a56940aff7\") " Jan 29 06:57:25 crc kubenswrapper[5017]: I0129 06:57:25.239908 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9af88cca-e43b-483d-beae-d6a56940aff7-config-data-generated\") pod \"9af88cca-e43b-483d-beae-d6a56940aff7\" (UID: \"9af88cca-e43b-483d-beae-d6a56940aff7\") " Jan 29 06:57:25 crc kubenswrapper[5017]: I0129 06:57:25.239990 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9af88cca-e43b-483d-beae-d6a56940aff7-combined-ca-bundle\") pod \"9af88cca-e43b-483d-beae-d6a56940aff7\" (UID: \"9af88cca-e43b-483d-beae-d6a56940aff7\") " Jan 29 06:57:25 crc kubenswrapper[5017]: I0129 06:57:25.240026 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-brhn7\" (UniqueName: \"kubernetes.io/projected/9af88cca-e43b-483d-beae-d6a56940aff7-kube-api-access-brhn7\") pod \"9af88cca-e43b-483d-beae-d6a56940aff7\" (UID: \"9af88cca-e43b-483d-beae-d6a56940aff7\") " Jan 29 06:57:25 crc kubenswrapper[5017]: I0129 06:57:25.240065 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9af88cca-e43b-483d-beae-d6a56940aff7-kolla-config\") pod \"9af88cca-e43b-483d-beae-d6a56940aff7\" (UID: \"9af88cca-e43b-483d-beae-d6a56940aff7\") " Jan 29 06:57:25 crc 
kubenswrapper[5017]: I0129 06:57:25.240105 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9af88cca-e43b-483d-beae-d6a56940aff7-config-data-default\") pod \"9af88cca-e43b-483d-beae-d6a56940aff7\" (UID: \"9af88cca-e43b-483d-beae-d6a56940aff7\") " Jan 29 06:57:25 crc kubenswrapper[5017]: I0129 06:57:25.241375 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9af88cca-e43b-483d-beae-d6a56940aff7-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "9af88cca-e43b-483d-beae-d6a56940aff7" (UID: "9af88cca-e43b-483d-beae-d6a56940aff7"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:57:25 crc kubenswrapper[5017]: I0129 06:57:25.241890 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9af88cca-e43b-483d-beae-d6a56940aff7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9af88cca-e43b-483d-beae-d6a56940aff7" (UID: "9af88cca-e43b-483d-beae-d6a56940aff7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:57:25 crc kubenswrapper[5017]: I0129 06:57:25.242250 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9af88cca-e43b-483d-beae-d6a56940aff7-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "9af88cca-e43b-483d-beae-d6a56940aff7" (UID: "9af88cca-e43b-483d-beae-d6a56940aff7"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:57:25 crc kubenswrapper[5017]: I0129 06:57:25.242532 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9af88cca-e43b-483d-beae-d6a56940aff7-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "9af88cca-e43b-483d-beae-d6a56940aff7" (UID: "9af88cca-e43b-483d-beae-d6a56940aff7"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:57:25 crc kubenswrapper[5017]: I0129 06:57:25.245945 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-4qvq2"] Jan 29 06:57:25 crc kubenswrapper[5017]: I0129 06:57:25.254006 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-4qvq2"] Jan 29 06:57:25 crc kubenswrapper[5017]: I0129 06:57:25.254292 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9af88cca-e43b-483d-beae-d6a56940aff7-kube-api-access-brhn7" (OuterVolumeSpecName: "kube-api-access-brhn7") pod "9af88cca-e43b-483d-beae-d6a56940aff7" (UID: "9af88cca-e43b-483d-beae-d6a56940aff7"). InnerVolumeSpecName "kube-api-access-brhn7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:57:25 crc kubenswrapper[5017]: I0129 06:57:25.261013 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"] Jan 29 06:57:25 crc kubenswrapper[5017]: I0129 06:57:25.267299 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/memcached-0"] Jan 29 06:57:25 crc kubenswrapper[5017]: I0129 06:57:25.274462 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 06:57:25 crc kubenswrapper[5017]: I0129 06:57:25.281021 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 06:57:25 crc kubenswrapper[5017]: I0129 06:57:25.283899 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "mysql-db") pod "9af88cca-e43b-483d-beae-d6a56940aff7" (UID: "9af88cca-e43b-483d-beae-d6a56940aff7"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 29 06:57:25 crc kubenswrapper[5017]: I0129 06:57:25.311673 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9af88cca-e43b-483d-beae-d6a56940aff7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9af88cca-e43b-483d-beae-d6a56940aff7" (UID: "9af88cca-e43b-483d-beae-d6a56940aff7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:57:25 crc kubenswrapper[5017]: I0129 06:57:25.335323 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9af88cca-e43b-483d-beae-d6a56940aff7-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "9af88cca-e43b-483d-beae-d6a56940aff7" (UID: "9af88cca-e43b-483d-beae-d6a56940aff7"). InnerVolumeSpecName "galera-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:57:25 crc kubenswrapper[5017]: I0129 06:57:25.344351 5017 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9af88cca-e43b-483d-beae-d6a56940aff7-galera-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:25 crc kubenswrapper[5017]: I0129 06:57:25.344417 5017 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Jan 29 06:57:25 crc kubenswrapper[5017]: I0129 06:57:25.344434 5017 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9af88cca-e43b-483d-beae-d6a56940aff7-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:25 crc kubenswrapper[5017]: I0129 06:57:25.344446 5017 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9af88cca-e43b-483d-beae-d6a56940aff7-config-data-generated\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:25 crc kubenswrapper[5017]: I0129 06:57:25.344455 5017 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9af88cca-e43b-483d-beae-d6a56940aff7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:25 crc kubenswrapper[5017]: I0129 06:57:25.344465 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-brhn7\" (UniqueName: \"kubernetes.io/projected/9af88cca-e43b-483d-beae-d6a56940aff7-kube-api-access-brhn7\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:25 crc kubenswrapper[5017]: I0129 06:57:25.344476 5017 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9af88cca-e43b-483d-beae-d6a56940aff7-kolla-config\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:25 crc kubenswrapper[5017]: I0129 06:57:25.344487 5017 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9af88cca-e43b-483d-beae-d6a56940aff7-config-data-default\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:25 crc kubenswrapper[5017]: I0129 06:57:25.367934 5017 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Jan 29 06:57:25 crc kubenswrapper[5017]: I0129 06:57:25.377063 5017 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.102:5671: connect: connection refused" Jan 29 06:57:25 crc kubenswrapper[5017]: I0129 06:57:25.446626 5017 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:25 crc kubenswrapper[5017]: I0129 06:57:25.773290 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 29 06:57:25 crc kubenswrapper[5017]: E0129 06:57:25.776403 5017 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="baa50265eb2c67dc53b4314533ab16055fd8bdaab7aec5147ecfb367c70320fc" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 29 06:57:25 crc kubenswrapper[5017]: E0129 06:57:25.779044 5017 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="baa50265eb2c67dc53b4314533ab16055fd8bdaab7aec5147ecfb367c70320fc" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 29 06:57:25 crc kubenswrapper[5017]: E0129 06:57:25.780484 5017 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="baa50265eb2c67dc53b4314533ab16055fd8bdaab7aec5147ecfb367c70320fc" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 29 06:57:25 crc kubenswrapper[5017]: E0129 06:57:25.780524 5017 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="a909c2d3-90a4-41a0-af8e-ddb69ed4f41b" containerName="nova-scheduler-scheduler" Jan 29 06:57:25 crc kubenswrapper[5017]: I0129 06:57:25.854202 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d30b013f-453f-4282-8b22-2a5270027828-pod-info\") pod \"d30b013f-453f-4282-8b22-2a5270027828\" (UID: \"d30b013f-453f-4282-8b22-2a5270027828\") " Jan 29 06:57:25 crc kubenswrapper[5017]: I0129 06:57:25.854288 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"d30b013f-453f-4282-8b22-2a5270027828\" (UID: \"d30b013f-453f-4282-8b22-2a5270027828\") " Jan 29 06:57:25 crc kubenswrapper[5017]: I0129 06:57:25.854347 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d30b013f-453f-4282-8b22-2a5270027828-rabbitmq-plugins\") pod \"d30b013f-453f-4282-8b22-2a5270027828\" (UID: \"d30b013f-453f-4282-8b22-2a5270027828\") " Jan 29 06:57:25 crc kubenswrapper[5017]: I0129 06:57:25.854374 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d30b013f-453f-4282-8b22-2a5270027828-server-conf\") pod \"d30b013f-453f-4282-8b22-2a5270027828\" (UID: \"d30b013f-453f-4282-8b22-2a5270027828\") " Jan 29 06:57:25 crc kubenswrapper[5017]: I0129 06:57:25.854446 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d30b013f-453f-4282-8b22-2a5270027828-config-data\") pod \"d30b013f-453f-4282-8b22-2a5270027828\" (UID: \"d30b013f-453f-4282-8b22-2a5270027828\") " Jan 29 06:57:25 crc kubenswrapper[5017]: I0129 06:57:25.855177 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/d30b013f-453f-4282-8b22-2a5270027828-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "d30b013f-453f-4282-8b22-2a5270027828" (UID: "d30b013f-453f-4282-8b22-2a5270027828"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:57:25 crc kubenswrapper[5017]: I0129 06:57:25.855280 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d30b013f-453f-4282-8b22-2a5270027828-rabbitmq-confd\") pod \"d30b013f-453f-4282-8b22-2a5270027828\" (UID: \"d30b013f-453f-4282-8b22-2a5270027828\") " Jan 29 06:57:25 crc kubenswrapper[5017]: I0129 06:57:25.855322 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d30b013f-453f-4282-8b22-2a5270027828-rabbitmq-tls\") pod \"d30b013f-453f-4282-8b22-2a5270027828\" (UID: \"d30b013f-453f-4282-8b22-2a5270027828\") " Jan 29 06:57:25 crc kubenswrapper[5017]: I0129 06:57:25.855368 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2h9jd\" (UniqueName: \"kubernetes.io/projected/d30b013f-453f-4282-8b22-2a5270027828-kube-api-access-2h9jd\") pod \"d30b013f-453f-4282-8b22-2a5270027828\" (UID: \"d30b013f-453f-4282-8b22-2a5270027828\") " Jan 29 06:57:25 crc kubenswrapper[5017]: I0129 06:57:25.855447 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d30b013f-453f-4282-8b22-2a5270027828-erlang-cookie-secret\") pod \"d30b013f-453f-4282-8b22-2a5270027828\" (UID: \"d30b013f-453f-4282-8b22-2a5270027828\") " Jan 29 06:57:25 crc kubenswrapper[5017]: I0129 06:57:25.855492 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d30b013f-453f-4282-8b22-2a5270027828-rabbitmq-erlang-cookie\") pod \"d30b013f-453f-4282-8b22-2a5270027828\" (UID: \"d30b013f-453f-4282-8b22-2a5270027828\") " Jan 29 06:57:25 crc kubenswrapper[5017]: I0129 06:57:25.855559 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d30b013f-453f-4282-8b22-2a5270027828-plugins-conf\") pod \"d30b013f-453f-4282-8b22-2a5270027828\" (UID: \"d30b013f-453f-4282-8b22-2a5270027828\") " Jan 29 06:57:25 crc kubenswrapper[5017]: I0129 06:57:25.855911 5017 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d30b013f-453f-4282-8b22-2a5270027828-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:25 crc kubenswrapper[5017]: I0129 06:57:25.856137 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d30b013f-453f-4282-8b22-2a5270027828-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "d30b013f-453f-4282-8b22-2a5270027828" (UID: "d30b013f-453f-4282-8b22-2a5270027828"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:57:25 crc kubenswrapper[5017]: I0129 06:57:25.868534 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d30b013f-453f-4282-8b22-2a5270027828-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "d30b013f-453f-4282-8b22-2a5270027828" (UID: "d30b013f-453f-4282-8b22-2a5270027828"). 
InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:57:25 crc kubenswrapper[5017]: I0129 06:57:25.881296 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d30b013f-453f-4282-8b22-2a5270027828-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "d30b013f-453f-4282-8b22-2a5270027828" (UID: "d30b013f-453f-4282-8b22-2a5270027828"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:57:25 crc kubenswrapper[5017]: I0129 06:57:25.885319 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "persistence") pod "d30b013f-453f-4282-8b22-2a5270027828" (UID: "d30b013f-453f-4282-8b22-2a5270027828"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 29 06:57:25 crc kubenswrapper[5017]: I0129 06:57:25.885430 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d30b013f-453f-4282-8b22-2a5270027828-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "d30b013f-453f-4282-8b22-2a5270027828" (UID: "d30b013f-453f-4282-8b22-2a5270027828"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:57:25 crc kubenswrapper[5017]: I0129 06:57:25.902799 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/d30b013f-453f-4282-8b22-2a5270027828-pod-info" (OuterVolumeSpecName: "pod-info") pod "d30b013f-453f-4282-8b22-2a5270027828" (UID: "d30b013f-453f-4282-8b22-2a5270027828"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 29 06:57:25 crc kubenswrapper[5017]: I0129 06:57:25.907548 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d30b013f-453f-4282-8b22-2a5270027828-kube-api-access-2h9jd" (OuterVolumeSpecName: "kube-api-access-2h9jd") pod "d30b013f-453f-4282-8b22-2a5270027828" (UID: "d30b013f-453f-4282-8b22-2a5270027828"). InnerVolumeSpecName "kube-api-access-2h9jd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:57:25 crc kubenswrapper[5017]: I0129 06:57:25.921599 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d30b013f-453f-4282-8b22-2a5270027828-config-data" (OuterVolumeSpecName: "config-data") pod "d30b013f-453f-4282-8b22-2a5270027828" (UID: "d30b013f-453f-4282-8b22-2a5270027828"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:57:25 crc kubenswrapper[5017]: I0129 06:57:25.946655 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 29 06:57:25 crc kubenswrapper[5017]: I0129 06:57:25.957636 5017 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d30b013f-453f-4282-8b22-2a5270027828-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:25 crc kubenswrapper[5017]: I0129 06:57:25.957672 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2h9jd\" (UniqueName: \"kubernetes.io/projected/d30b013f-453f-4282-8b22-2a5270027828-kube-api-access-2h9jd\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:25 crc kubenswrapper[5017]: I0129 06:57:25.957684 5017 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d30b013f-453f-4282-8b22-2a5270027828-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:25 crc kubenswrapper[5017]: I0129 06:57:25.957694 5017 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d30b013f-453f-4282-8b22-2a5270027828-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:25 crc kubenswrapper[5017]: I0129 06:57:25.957725 5017 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d30b013f-453f-4282-8b22-2a5270027828-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:25 crc kubenswrapper[5017]: I0129 06:57:25.957735 5017 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d30b013f-453f-4282-8b22-2a5270027828-pod-info\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:25 crc kubenswrapper[5017]: I0129 06:57:25.957760 5017 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Jan 29 06:57:25 crc kubenswrapper[5017]: I0129 06:57:25.957770 5017 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d30b013f-453f-4282-8b22-2a5270027828-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:25 crc kubenswrapper[5017]: I0129 06:57:25.971166 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d30b013f-453f-4282-8b22-2a5270027828-server-conf" (OuterVolumeSpecName: "server-conf") pod "d30b013f-453f-4282-8b22-2a5270027828" (UID: "d30b013f-453f-4282-8b22-2a5270027828"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.013749 5017 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.046787 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d30b013f-453f-4282-8b22-2a5270027828-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "d30b013f-453f-4282-8b22-2a5270027828" (UID: "d30b013f-453f-4282-8b22-2a5270027828"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.056777 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-74d8b8b54b-w68vj" Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.058434 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pm2zt\" (UniqueName: \"kubernetes.io/projected/7d3e4a4d-ee9a-4345-b8e5-a40416771caf-kube-api-access-pm2zt\") pod \"7d3e4a4d-ee9a-4345-b8e5-a40416771caf\" (UID: \"7d3e4a4d-ee9a-4345-b8e5-a40416771caf\") " Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.058473 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d3e4a4d-ee9a-4345-b8e5-a40416771caf-internal-tls-certs\") pod \"7d3e4a4d-ee9a-4345-b8e5-a40416771caf\" (UID: \"7d3e4a4d-ee9a-4345-b8e5-a40416771caf\") " Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.058550 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d3e4a4d-ee9a-4345-b8e5-a40416771caf-logs\") pod \"7d3e4a4d-ee9a-4345-b8e5-a40416771caf\" (UID: \"7d3e4a4d-ee9a-4345-b8e5-a40416771caf\") " Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.058628 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d3e4a4d-ee9a-4345-b8e5-a40416771caf-combined-ca-bundle\") pod \"7d3e4a4d-ee9a-4345-b8e5-a40416771caf\" (UID: \"7d3e4a4d-ee9a-4345-b8e5-a40416771caf\") " Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.058686 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d3e4a4d-ee9a-4345-b8e5-a40416771caf-public-tls-certs\") pod \"7d3e4a4d-ee9a-4345-b8e5-a40416771caf\" (UID: \"7d3e4a4d-ee9a-4345-b8e5-a40416771caf\") " Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.058717 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d3e4a4d-ee9a-4345-b8e5-a40416771caf-config-data\") pod \"7d3e4a4d-ee9a-4345-b8e5-a40416771caf\" (UID: \"7d3e4a4d-ee9a-4345-b8e5-a40416771caf\") " Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.059068 5017 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.059087 5017 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d30b013f-453f-4282-8b22-2a5270027828-server-conf\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.059098 5017 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d30b013f-453f-4282-8b22-2a5270027828-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.059264 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d3e4a4d-ee9a-4345-b8e5-a40416771caf-logs" (OuterVolumeSpecName: "logs") pod "7d3e4a4d-ee9a-4345-b8e5-a40416771caf" (UID: "7d3e4a4d-ee9a-4345-b8e5-a40416771caf"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.064704 5017 generic.go:334] "Generic (PLEG): container finished" podID="d30b013f-453f-4282-8b22-2a5270027828" containerID="023038cdbf9696dd93c47d604450e36f0a426b06c4dd797fb3a475e6afec3f3d" exitCode=0 Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.064833 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.064847 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d30b013f-453f-4282-8b22-2a5270027828","Type":"ContainerDied","Data":"023038cdbf9696dd93c47d604450e36f0a426b06c4dd797fb3a475e6afec3f3d"} Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.064925 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d30b013f-453f-4282-8b22-2a5270027828","Type":"ContainerDied","Data":"73bca61a4de82d0b94453a14f55affd2ad9d19c9af4fc95e53d4f3a04ce53200"} Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.064971 5017 scope.go:117] "RemoveContainer" containerID="023038cdbf9696dd93c47d604450e36f0a426b06c4dd797fb3a475e6afec3f3d" Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.072699 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d3e4a4d-ee9a-4345-b8e5-a40416771caf-kube-api-access-pm2zt" (OuterVolumeSpecName: "kube-api-access-pm2zt") pod "7d3e4a4d-ee9a-4345-b8e5-a40416771caf" (UID: "7d3e4a4d-ee9a-4345-b8e5-a40416771caf"). InnerVolumeSpecName "kube-api-access-pm2zt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.074765 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"9af88cca-e43b-483d-beae-d6a56940aff7","Type":"ContainerDied","Data":"0fec5135c279637e92e3b0c3d2b0bf33aa48e0bf317089e9a5a7e6486f6514c8"} Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.074890 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.084220 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.084227 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7d3e4a4d-ee9a-4345-b8e5-a40416771caf","Type":"ContainerDied","Data":"2065ba1d7b19abf8c6153b5871f31a1505c3bfdacc36878ce272c58a274b2b1b"} Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.083990 5017 generic.go:334] "Generic (PLEG): container finished" podID="7d3e4a4d-ee9a-4345-b8e5-a40416771caf" containerID="2065ba1d7b19abf8c6153b5871f31a1505c3bfdacc36878ce272c58a274b2b1b" exitCode=0 Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.084972 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7d3e4a4d-ee9a-4345-b8e5-a40416771caf","Type":"ContainerDied","Data":"5abd7c85607df012d69d67cfc19e316c675dcdebf6951845245b25e10b7e6c24"} Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.088258 5017 generic.go:334] "Generic (PLEG): container finished" podID="55d2d70d-8578-47fc-a3a7-df7694c3f2a3" containerID="7e081afe27462760e218795e4ff98c49cc0403ff7cf82b6262d050fe3d08e3d5" exitCode=0 Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.088495 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-74d8b8b54b-w68vj" event={"ID":"55d2d70d-8578-47fc-a3a7-df7694c3f2a3","Type":"ContainerDied","Data":"7e081afe27462760e218795e4ff98c49cc0403ff7cf82b6262d050fe3d08e3d5"} Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.088658 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-74d8b8b54b-w68vj" event={"ID":"55d2d70d-8578-47fc-a3a7-df7694c3f2a3","Type":"ContainerDied","Data":"7a46e79071a81f4c50cf3ec04293853707a5af6efc286f2b0229471a3f0428bd"} Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.088783 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-74d8b8b54b-w68vj" Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.101581 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d3e4a4d-ee9a-4345-b8e5-a40416771caf-config-data" (OuterVolumeSpecName: "config-data") pod "7d3e4a4d-ee9a-4345-b8e5-a40416771caf" (UID: "7d3e4a4d-ee9a-4345-b8e5-a40416771caf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.108559 5017 scope.go:117] "RemoveContainer" containerID="afda212927d2b2e8cc6edc8253fe7720c2d3d32a79a9247dbc74ea8bdcd22913" Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.112196 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d3e4a4d-ee9a-4345-b8e5-a40416771caf-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "7d3e4a4d-ee9a-4345-b8e5-a40416771caf" (UID: "7d3e4a4d-ee9a-4345-b8e5-a40416771caf"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:57:26 crc kubenswrapper[5017]: E0129 06:57:26.114477 5017 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="86c4c472b3c3903633b1b443d03a5262fb47cb7d8c1f6bdbf87f276659c9e547" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 29 06:57:26 crc kubenswrapper[5017]: E0129 06:57:26.117452 5017 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="86c4c472b3c3903633b1b443d03a5262fb47cb7d8c1f6bdbf87f276659c9e547" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 29 06:57:26 crc kubenswrapper[5017]: E0129 06:57:26.119989 5017 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="86c4c472b3c3903633b1b443d03a5262fb47cb7d8c1f6bdbf87f276659c9e547" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 29 06:57:26 crc kubenswrapper[5017]: E0129 06:57:26.120053 5017 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="aaa7337b-7686-4fd2-9c52-6b76f9f3a3b1" containerName="nova-cell0-conductor-conductor" Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.139633 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d3e4a4d-ee9a-4345-b8e5-a40416771caf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7d3e4a4d-ee9a-4345-b8e5-a40416771caf" (UID: "7d3e4a4d-ee9a-4345-b8e5-a40416771caf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.160995 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d3e4a4d-ee9a-4345-b8e5-a40416771caf-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "7d3e4a4d-ee9a-4345-b8e5-a40416771caf" (UID: "7d3e4a4d-ee9a-4345-b8e5-a40416771caf"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.161258 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/55d2d70d-8578-47fc-a3a7-df7694c3f2a3-scripts\") pod \"55d2d70d-8578-47fc-a3a7-df7694c3f2a3\" (UID: \"55d2d70d-8578-47fc-a3a7-df7694c3f2a3\") " Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.161371 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/55d2d70d-8578-47fc-a3a7-df7694c3f2a3-fernet-keys\") pod \"55d2d70d-8578-47fc-a3a7-df7694c3f2a3\" (UID: \"55d2d70d-8578-47fc-a3a7-df7694c3f2a3\") " Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.161400 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5tqxn\" (UniqueName: \"kubernetes.io/projected/55d2d70d-8578-47fc-a3a7-df7694c3f2a3-kube-api-access-5tqxn\") pod \"55d2d70d-8578-47fc-a3a7-df7694c3f2a3\" (UID: \"55d2d70d-8578-47fc-a3a7-df7694c3f2a3\") " Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.161449 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/55d2d70d-8578-47fc-a3a7-df7694c3f2a3-credential-keys\") pod \"55d2d70d-8578-47fc-a3a7-df7694c3f2a3\" (UID: \"55d2d70d-8578-47fc-a3a7-df7694c3f2a3\") " Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.161477 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55d2d70d-8578-47fc-a3a7-df7694c3f2a3-combined-ca-bundle\") pod \"55d2d70d-8578-47fc-a3a7-df7694c3f2a3\" (UID: \"55d2d70d-8578-47fc-a3a7-df7694c3f2a3\") " Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.161572 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55d2d70d-8578-47fc-a3a7-df7694c3f2a3-config-data\") pod \"55d2d70d-8578-47fc-a3a7-df7694c3f2a3\" (UID: \"55d2d70d-8578-47fc-a3a7-df7694c3f2a3\") " Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.161610 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/55d2d70d-8578-47fc-a3a7-df7694c3f2a3-internal-tls-certs\") pod \"55d2d70d-8578-47fc-a3a7-df7694c3f2a3\" (UID: \"55d2d70d-8578-47fc-a3a7-df7694c3f2a3\") " Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.161686 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/55d2d70d-8578-47fc-a3a7-df7694c3f2a3-public-tls-certs\") pod \"55d2d70d-8578-47fc-a3a7-df7694c3f2a3\" (UID: \"55d2d70d-8578-47fc-a3a7-df7694c3f2a3\") " Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.163919 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pm2zt\" (UniqueName: \"kubernetes.io/projected/7d3e4a4d-ee9a-4345-b8e5-a40416771caf-kube-api-access-pm2zt\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.164077 5017 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d3e4a4d-ee9a-4345-b8e5-a40416771caf-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.164092 5017 reconciler_common.go:293] "Volume detached for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d3e4a4d-ee9a-4345-b8e5-a40416771caf-logs\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.164105 5017 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d3e4a4d-ee9a-4345-b8e5-a40416771caf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.164116 5017 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d3e4a4d-ee9a-4345-b8e5-a40416771caf-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.164127 5017 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d3e4a4d-ee9a-4345-b8e5-a40416771caf-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.165055 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55d2d70d-8578-47fc-a3a7-df7694c3f2a3-kube-api-access-5tqxn" (OuterVolumeSpecName: "kube-api-access-5tqxn") pod "55d2d70d-8578-47fc-a3a7-df7694c3f2a3" (UID: "55d2d70d-8578-47fc-a3a7-df7694c3f2a3"). InnerVolumeSpecName "kube-api-access-5tqxn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.169213 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55d2d70d-8578-47fc-a3a7-df7694c3f2a3-scripts" (OuterVolumeSpecName: "scripts") pod "55d2d70d-8578-47fc-a3a7-df7694c3f2a3" (UID: "55d2d70d-8578-47fc-a3a7-df7694c3f2a3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.172506 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55d2d70d-8578-47fc-a3a7-df7694c3f2a3-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "55d2d70d-8578-47fc-a3a7-df7694c3f2a3" (UID: "55d2d70d-8578-47fc-a3a7-df7694c3f2a3"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.175468 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55d2d70d-8578-47fc-a3a7-df7694c3f2a3-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "55d2d70d-8578-47fc-a3a7-df7694c3f2a3" (UID: "55d2d70d-8578-47fc-a3a7-df7694c3f2a3"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.200763 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.213622 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.223540 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"] Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.227109 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55d2d70d-8578-47fc-a3a7-df7694c3f2a3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "55d2d70d-8578-47fc-a3a7-df7694c3f2a3" (UID: "55d2d70d-8578-47fc-a3a7-df7694c3f2a3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.227820 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55d2d70d-8578-47fc-a3a7-df7694c3f2a3-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "55d2d70d-8578-47fc-a3a7-df7694c3f2a3" (UID: "55d2d70d-8578-47fc-a3a7-df7694c3f2a3"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.228725 5017 scope.go:117] "RemoveContainer" containerID="023038cdbf9696dd93c47d604450e36f0a426b06c4dd797fb3a475e6afec3f3d" Jan 29 06:57:26 crc kubenswrapper[5017]: E0129 06:57:26.230664 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"023038cdbf9696dd93c47d604450e36f0a426b06c4dd797fb3a475e6afec3f3d\": container with ID starting with 023038cdbf9696dd93c47d604450e36f0a426b06c4dd797fb3a475e6afec3f3d not found: ID does not exist" containerID="023038cdbf9696dd93c47d604450e36f0a426b06c4dd797fb3a475e6afec3f3d" Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.230718 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"023038cdbf9696dd93c47d604450e36f0a426b06c4dd797fb3a475e6afec3f3d"} err="failed to get container status \"023038cdbf9696dd93c47d604450e36f0a426b06c4dd797fb3a475e6afec3f3d\": rpc error: code = NotFound desc = could not find container \"023038cdbf9696dd93c47d604450e36f0a426b06c4dd797fb3a475e6afec3f3d\": container with ID starting with 023038cdbf9696dd93c47d604450e36f0a426b06c4dd797fb3a475e6afec3f3d not found: ID does not exist" Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.230754 5017 scope.go:117] "RemoveContainer" containerID="afda212927d2b2e8cc6edc8253fe7720c2d3d32a79a9247dbc74ea8bdcd22913" Jan 29 06:57:26 crc kubenswrapper[5017]: E0129 06:57:26.232913 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"afda212927d2b2e8cc6edc8253fe7720c2d3d32a79a9247dbc74ea8bdcd22913\": container with ID starting with afda212927d2b2e8cc6edc8253fe7720c2d3d32a79a9247dbc74ea8bdcd22913 not found: ID does not exist" containerID="afda212927d2b2e8cc6edc8253fe7720c2d3d32a79a9247dbc74ea8bdcd22913" Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.232993 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afda212927d2b2e8cc6edc8253fe7720c2d3d32a79a9247dbc74ea8bdcd22913"} err="failed to get container status \"afda212927d2b2e8cc6edc8253fe7720c2d3d32a79a9247dbc74ea8bdcd22913\": rpc error: code = NotFound desc = could not find container \"afda212927d2b2e8cc6edc8253fe7720c2d3d32a79a9247dbc74ea8bdcd22913\": container with ID starting with afda212927d2b2e8cc6edc8253fe7720c2d3d32a79a9247dbc74ea8bdcd22913 not found: ID does not exist" Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.233031 5017 scope.go:117] "RemoveContainer" containerID="ec537ea4d90113835bbfd7b41bd980ad15b540d9718e92b22b059584ee668478" Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.236903 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55d2d70d-8578-47fc-a3a7-df7694c3f2a3-config-data" (OuterVolumeSpecName: "config-data") pod "55d2d70d-8578-47fc-a3a7-df7694c3f2a3" (UID: "55d2d70d-8578-47fc-a3a7-df7694c3f2a3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.238534 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-galera-0"] Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.244757 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55d2d70d-8578-47fc-a3a7-df7694c3f2a3-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "55d2d70d-8578-47fc-a3a7-df7694c3f2a3" (UID: "55d2d70d-8578-47fc-a3a7-df7694c3f2a3"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.267391 5017 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/55d2d70d-8578-47fc-a3a7-df7694c3f2a3-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.267429 5017 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/55d2d70d-8578-47fc-a3a7-df7694c3f2a3-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.267447 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5tqxn\" (UniqueName: \"kubernetes.io/projected/55d2d70d-8578-47fc-a3a7-df7694c3f2a3-kube-api-access-5tqxn\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.267458 5017 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/55d2d70d-8578-47fc-a3a7-df7694c3f2a3-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.267468 5017 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55d2d70d-8578-47fc-a3a7-df7694c3f2a3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.267476 5017 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55d2d70d-8578-47fc-a3a7-df7694c3f2a3-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.267486 5017 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/55d2d70d-8578-47fc-a3a7-df7694c3f2a3-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.267496 5017 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/55d2d70d-8578-47fc-a3a7-df7694c3f2a3-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.304664 5017 scope.go:117] "RemoveContainer" containerID="3bbbc1fc8a72c66dc23676db10ea62a991b138c8e95fdb7d35a472153c5b43f7" Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.332024 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0adabddf-74aa-416a-afef-b24b39897f9c" path="/var/lib/kubelet/pods/0adabddf-74aa-416a-afef-b24b39897f9c/volumes" Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.332433 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3bf19cd1-b93c-449d-ba04-7fecd2ab65e2" path="/var/lib/kubelet/pods/3bf19cd1-b93c-449d-ba04-7fecd2ab65e2/volumes" Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.333102 5017 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66475828-326a-4b57-baea-e209e519d639" path="/var/lib/kubelet/pods/66475828-326a-4b57-baea-e209e519d639/volumes" Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.333571 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d94b8e3-f4a6-4fc2-af59-57b33254cd74" path="/var/lib/kubelet/pods/8d94b8e3-f4a6-4fc2-af59-57b33254cd74/volumes" Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.334821 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="919074d0-f7a7-4d64-8339-744730688c4f" path="/var/lib/kubelet/pods/919074d0-f7a7-4d64-8339-744730688c4f/volumes" Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.335486 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9af88cca-e43b-483d-beae-d6a56940aff7" path="/var/lib/kubelet/pods/9af88cca-e43b-483d-beae-d6a56940aff7/volumes" Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.336702 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9df7814f-338e-40fb-95aa-f93dfa8307d6" path="/var/lib/kubelet/pods/9df7814f-338e-40fb-95aa-f93dfa8307d6/volumes" Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.337606 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a341d6ae-870f-4453-a804-1c0b4b43ce6f" path="/var/lib/kubelet/pods/a341d6ae-870f-4453-a804-1c0b4b43ce6f/volumes" Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.337913 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c55eb047-3255-4a79-8e32-dfb786de8794" path="/var/lib/kubelet/pods/c55eb047-3255-4a79-8e32-dfb786de8794/volumes" Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.338478 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc46a149-0256-4061-9e32-936b2ec12588" path="/var/lib/kubelet/pods/cc46a149-0256-4061-9e32-936b2ec12588/volumes" Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.339835 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d30b013f-453f-4282-8b22-2a5270027828" path="/var/lib/kubelet/pods/d30b013f-453f-4282-8b22-2a5270027828/volumes" Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.351444 5017 scope.go:117] "RemoveContainer" containerID="2065ba1d7b19abf8c6153b5871f31a1505c3bfdacc36878ce272c58a274b2b1b" Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.391039 5017 scope.go:117] "RemoveContainer" containerID="e12411a150cd5f0172fa3185d950a4267453e35e9b6242870bba015ffef1b16b" Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.440694 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.451736 5017 scope.go:117] "RemoveContainer" containerID="2065ba1d7b19abf8c6153b5871f31a1505c3bfdacc36878ce272c58a274b2b1b" Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.451920 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 29 06:57:26 crc kubenswrapper[5017]: E0129 06:57:26.454697 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2065ba1d7b19abf8c6153b5871f31a1505c3bfdacc36878ce272c58a274b2b1b\": container with ID starting with 2065ba1d7b19abf8c6153b5871f31a1505c3bfdacc36878ce272c58a274b2b1b not found: ID does not exist" containerID="2065ba1d7b19abf8c6153b5871f31a1505c3bfdacc36878ce272c58a274b2b1b" Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.454734 5017 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2065ba1d7b19abf8c6153b5871f31a1505c3bfdacc36878ce272c58a274b2b1b"} err="failed to get container status \"2065ba1d7b19abf8c6153b5871f31a1505c3bfdacc36878ce272c58a274b2b1b\": rpc error: code = NotFound desc = could not find container \"2065ba1d7b19abf8c6153b5871f31a1505c3bfdacc36878ce272c58a274b2b1b\": container with ID starting with 2065ba1d7b19abf8c6153b5871f31a1505c3bfdacc36878ce272c58a274b2b1b not found: ID does not exist" Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.454769 5017 scope.go:117] "RemoveContainer" containerID="e12411a150cd5f0172fa3185d950a4267453e35e9b6242870bba015ffef1b16b" Jan 29 06:57:26 crc kubenswrapper[5017]: E0129 06:57:26.455161 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e12411a150cd5f0172fa3185d950a4267453e35e9b6242870bba015ffef1b16b\": container with ID starting with e12411a150cd5f0172fa3185d950a4267453e35e9b6242870bba015ffef1b16b not found: ID does not exist" containerID="e12411a150cd5f0172fa3185d950a4267453e35e9b6242870bba015ffef1b16b" Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.455186 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e12411a150cd5f0172fa3185d950a4267453e35e9b6242870bba015ffef1b16b"} err="failed to get container status \"e12411a150cd5f0172fa3185d950a4267453e35e9b6242870bba015ffef1b16b\": rpc error: code = NotFound desc = could not find container \"e12411a150cd5f0172fa3185d950a4267453e35e9b6242870bba015ffef1b16b\": container with ID starting with e12411a150cd5f0172fa3185d950a4267453e35e9b6242870bba015ffef1b16b not found: ID does not exist" Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.455202 5017 scope.go:117] "RemoveContainer" containerID="7e081afe27462760e218795e4ff98c49cc0403ff7cf82b6262d050fe3d08e3d5" Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.456372 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-74d8b8b54b-w68vj"] Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.460809 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-74d8b8b54b-w68vj"] Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.538909 5017 patch_prober.go:28] interesting pod/machine-config-daemon-895pl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.538994 5017 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.574677 5017 util.go:48] "No ready sandbox for pod can be found. 
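Alongside the exec-probe races, plain socket and HTTP probes fail in the same window: the rabbitmq-server-0 readiness probe gets connection refused on port 5671, and the machine-config-daemon liveness probe gets connection refused on 127.0.0.1:8798. A sketch that tallies prober.go "Probe failed" entries by probe type and pod, under the same stdin assumption as the earlier snippets:

    import collections
    import re
    import sys

    # Tally prober.go "Probe failed" entries by (probeType, pod).
    pat = re.compile(r'"Probe failed" probeType="(\w+)" pod="([^"]+)"')
    tally = collections.Counter()
    for line in sys.stdin:
        m = pat.search(line)
        if m:
            tally[m.groups()] += 1
    for (ptype, pod), n in tally.most_common():
        print(f"{n:3d}  {ptype:9}  {pod}")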
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.579901 5017 scope.go:117] "RemoveContainer" containerID="7e081afe27462760e218795e4ff98c49cc0403ff7cf82b6262d050fe3d08e3d5" Jan 29 06:57:26 crc kubenswrapper[5017]: E0129 06:57:26.583046 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e081afe27462760e218795e4ff98c49cc0403ff7cf82b6262d050fe3d08e3d5\": container with ID starting with 7e081afe27462760e218795e4ff98c49cc0403ff7cf82b6262d050fe3d08e3d5 not found: ID does not exist" containerID="7e081afe27462760e218795e4ff98c49cc0403ff7cf82b6262d050fe3d08e3d5" Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.583084 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e081afe27462760e218795e4ff98c49cc0403ff7cf82b6262d050fe3d08e3d5"} err="failed to get container status \"7e081afe27462760e218795e4ff98c49cc0403ff7cf82b6262d050fe3d08e3d5\": rpc error: code = NotFound desc = could not find container \"7e081afe27462760e218795e4ff98c49cc0403ff7cf82b6262d050fe3d08e3d5\": container with ID starting with 7e081afe27462760e218795e4ff98c49cc0403ff7cf82b6262d050fe3d08e3d5 not found: ID does not exist" Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.676786 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a-rabbitmq-confd\") pod \"5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a\" (UID: \"5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a\") " Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.676868 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a-rabbitmq-erlang-cookie\") pod \"5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a\" (UID: \"5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a\") " Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.676919 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a-plugins-conf\") pod \"5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a\" (UID: \"5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a\") " Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.677035 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a-rabbitmq-tls\") pod \"5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a\" (UID: \"5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a\") " Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.677105 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5m7bw\" (UniqueName: \"kubernetes.io/projected/5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a-kube-api-access-5m7bw\") pod \"5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a\" (UID: \"5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a\") " Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.677133 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a-erlang-cookie-secret\") pod \"5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a\" (UID: \"5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a\") " Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.677154 5017 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a\" (UID: \"5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a\") " Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.677192 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a-server-conf\") pod \"5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a\" (UID: \"5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a\") " Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.677266 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a-pod-info\") pod \"5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a\" (UID: \"5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a\") " Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.677352 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a-rabbitmq-plugins\") pod \"5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a\" (UID: \"5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a\") " Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.677402 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a-config-data\") pod \"5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a\" (UID: \"5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a\") " Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.681743 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a" (UID: "5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.683015 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a-kube-api-access-5m7bw" (OuterVolumeSpecName: "kube-api-access-5m7bw") pod "5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a" (UID: "5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a"). InnerVolumeSpecName "kube-api-access-5m7bw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.685015 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a" (UID: "5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.685620 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a" (UID: "5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.685896 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a" (UID: "5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.689219 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a-pod-info" (OuterVolumeSpecName: "pod-info") pod "5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a" (UID: "5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.696515 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a" (UID: "5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.696559 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "persistence") pod "5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a" (UID: "5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.714124 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a-config-data" (OuterVolumeSpecName: "config-data") pod "5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a" (UID: "5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.727651 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a-server-conf" (OuterVolumeSpecName: "server-conf") pod "5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a" (UID: "5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:57:26 crc kubenswrapper[5017]: E0129 06:57:26.766815 5017 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err=< Jan 29 06:57:26 crc kubenswrapper[5017]: command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: 2026-01-29T06:57:19Z|00001|fatal_signal|WARN|terminating with signal 14 (Alarm clock) Jan 29 06:57:26 crc kubenswrapper[5017]: /etc/init.d/functions: line 589: 407 Alarm clock "$@" Jan 29 06:57:26 crc kubenswrapper[5017]: > execCommand=["/usr/share/ovn/scripts/ovn-ctl","stop_controller"] containerName="ovn-controller" pod="openstack/ovn-controller-rtkrb" message=< Jan 29 06:57:26 crc kubenswrapper[5017]: Exiting ovn-controller (1) [FAILED] Jan 29 06:57:26 crc kubenswrapper[5017]: Killing ovn-controller (1) [ OK ] Jan 29 06:57:26 crc kubenswrapper[5017]: Killing ovn-controller (1) with SIGKILL [ OK ] Jan 29 06:57:26 crc kubenswrapper[5017]: 2026-01-29T06:57:19Z|00001|fatal_signal|WARN|terminating with signal 14 (Alarm clock) Jan 29 06:57:26 crc kubenswrapper[5017]: /etc/init.d/functions: line 589: 407 Alarm clock "$@" Jan 29 06:57:26 crc kubenswrapper[5017]: > Jan 29 06:57:26 crc kubenswrapper[5017]: E0129 06:57:26.766875 5017 kuberuntime_container.go:691] "PreStop hook failed" err=< Jan 29 06:57:26 crc kubenswrapper[5017]: command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: 2026-01-29T06:57:19Z|00001|fatal_signal|WARN|terminating with signal 14 (Alarm clock) Jan 29 06:57:26 crc kubenswrapper[5017]: /etc/init.d/functions: line 589: 407 Alarm clock "$@" Jan 29 06:57:26 crc kubenswrapper[5017]: > pod="openstack/ovn-controller-rtkrb" podUID="4c57c864-37e8-46b9-b30d-1762f3858984" containerName="ovn-controller" containerID="cri-o://050af2fa573ee2d4aef1b31ce94271336586d6f6093ebcaf8cd51c675652f905" Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.767685 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-rtkrb" podUID="4c57c864-37e8-46b9-b30d-1762f3858984" containerName="ovn-controller" containerID="cri-o://050af2fa573ee2d4aef1b31ce94271336586d6f6093ebcaf8cd51c675652f905" gracePeriod=21 Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.779793 5017 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.779830 5017 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.779845 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5m7bw\" (UniqueName: \"kubernetes.io/projected/5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a-kube-api-access-5m7bw\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.779861 5017 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.779981 5017 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Jan 29 
06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.779999 5017 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a-server-conf\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.780013 5017 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a-pod-info\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.780027 5017 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.780040 5017 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.780053 5017 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.790539 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a" (UID: "5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.798453 5017 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.890102 5017 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:26 crc kubenswrapper[5017]: I0129 06:57:26.890585 5017 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:27 crc kubenswrapper[5017]: I0129 06:57:27.119622 5017 generic.go:334] "Generic (PLEG): container finished" podID="18edd5b3-27eb-43f3-8d6b-03490c243c78" containerID="cd271b62ca4015e030ca07e0ae6b52baec3e519d53550f369d0cfbcc931e68fd" exitCode=0 Jan 29 06:57:27 crc kubenswrapper[5017]: I0129 06:57:27.119743 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"18edd5b3-27eb-43f3-8d6b-03490c243c78","Type":"ContainerDied","Data":"cd271b62ca4015e030ca07e0ae6b52baec3e519d53550f369d0cfbcc931e68fd"} Jan 29 06:57:27 crc kubenswrapper[5017]: I0129 06:57:27.120204 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-rtkrb_4c57c864-37e8-46b9-b30d-1762f3858984/ovn-controller/0.log" Jan 29 06:57:27 crc kubenswrapper[5017]: I0129 06:57:27.120278 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-rtkrb" Jan 29 06:57:27 crc kubenswrapper[5017]: I0129 06:57:27.132292 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-rtkrb_4c57c864-37e8-46b9-b30d-1762f3858984/ovn-controller/0.log" Jan 29 06:57:27 crc kubenswrapper[5017]: I0129 06:57:27.132346 5017 generic.go:334] "Generic (PLEG): container finished" podID="4c57c864-37e8-46b9-b30d-1762f3858984" containerID="050af2fa573ee2d4aef1b31ce94271336586d6f6093ebcaf8cd51c675652f905" exitCode=137 Jan 29 06:57:27 crc kubenswrapper[5017]: I0129 06:57:27.132414 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-rtkrb" event={"ID":"4c57c864-37e8-46b9-b30d-1762f3858984","Type":"ContainerDied","Data":"050af2fa573ee2d4aef1b31ce94271336586d6f6093ebcaf8cd51c675652f905"} Jan 29 06:57:27 crc kubenswrapper[5017]: I0129 06:57:27.132471 5017 scope.go:117] "RemoveContainer" containerID="050af2fa573ee2d4aef1b31ce94271336586d6f6093ebcaf8cd51c675652f905" Jan 29 06:57:27 crc kubenswrapper[5017]: I0129 06:57:27.133094 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-rtkrb" Jan 29 06:57:27 crc kubenswrapper[5017]: I0129 06:57:27.145313 5017 generic.go:334] "Generic (PLEG): container finished" podID="5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a" containerID="31ad6d38c7cac48292e7dddc2454aef26b5d737c3c99ba9b9446defaee858a9a" exitCode=0 Jan 29 06:57:27 crc kubenswrapper[5017]: I0129 06:57:27.145380 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a","Type":"ContainerDied","Data":"31ad6d38c7cac48292e7dddc2454aef26b5d737c3c99ba9b9446defaee858a9a"} Jan 29 06:57:27 crc kubenswrapper[5017]: I0129 06:57:27.145412 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a","Type":"ContainerDied","Data":"efc0ef0c88363f230aa57bc0b472deaabef0bff3fed1c77fa3da2720414d7710"} Jan 29 06:57:27 crc kubenswrapper[5017]: I0129 06:57:27.145495 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 29 06:57:27 crc kubenswrapper[5017]: I0129 06:57:27.197087 5017 scope.go:117] "RemoveContainer" containerID="31ad6d38c7cac48292e7dddc2454aef26b5d737c3c99ba9b9446defaee858a9a" Jan 29 06:57:27 crc kubenswrapper[5017]: I0129 06:57:27.197921 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c57c864-37e8-46b9-b30d-1762f3858984-combined-ca-bundle\") pod \"4c57c864-37e8-46b9-b30d-1762f3858984\" (UID: \"4c57c864-37e8-46b9-b30d-1762f3858984\") " Jan 29 06:57:27 crc kubenswrapper[5017]: I0129 06:57:27.198074 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c57c864-37e8-46b9-b30d-1762f3858984-ovn-controller-tls-certs\") pod \"4c57c864-37e8-46b9-b30d-1762f3858984\" (UID: \"4c57c864-37e8-46b9-b30d-1762f3858984\") " Jan 29 06:57:27 crc kubenswrapper[5017]: I0129 06:57:27.198117 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4c57c864-37e8-46b9-b30d-1762f3858984-var-run-ovn\") pod \"4c57c864-37e8-46b9-b30d-1762f3858984\" (UID: \"4c57c864-37e8-46b9-b30d-1762f3858984\") " Jan 29 06:57:27 crc kubenswrapper[5017]: I0129 06:57:27.198196 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4c57c864-37e8-46b9-b30d-1762f3858984-scripts\") pod \"4c57c864-37e8-46b9-b30d-1762f3858984\" (UID: \"4c57c864-37e8-46b9-b30d-1762f3858984\") " Jan 29 06:57:27 crc kubenswrapper[5017]: I0129 06:57:27.198254 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4c57c864-37e8-46b9-b30d-1762f3858984-var-run\") pod \"4c57c864-37e8-46b9-b30d-1762f3858984\" (UID: \"4c57c864-37e8-46b9-b30d-1762f3858984\") " Jan 29 06:57:27 crc kubenswrapper[5017]: I0129 06:57:27.198298 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c65nd\" (UniqueName: \"kubernetes.io/projected/4c57c864-37e8-46b9-b30d-1762f3858984-kube-api-access-c65nd\") pod \"4c57c864-37e8-46b9-b30d-1762f3858984\" (UID: \"4c57c864-37e8-46b9-b30d-1762f3858984\") " Jan 29 06:57:27 crc kubenswrapper[5017]: I0129 06:57:27.198320 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4c57c864-37e8-46b9-b30d-1762f3858984-var-log-ovn\") pod \"4c57c864-37e8-46b9-b30d-1762f3858984\" (UID: \"4c57c864-37e8-46b9-b30d-1762f3858984\") " Jan 29 06:57:27 crc kubenswrapper[5017]: I0129 06:57:27.198475 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c57c864-37e8-46b9-b30d-1762f3858984-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "4c57c864-37e8-46b9-b30d-1762f3858984" (UID: "4c57c864-37e8-46b9-b30d-1762f3858984"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 06:57:27 crc kubenswrapper[5017]: I0129 06:57:27.198550 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c57c864-37e8-46b9-b30d-1762f3858984-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "4c57c864-37e8-46b9-b30d-1762f3858984" (UID: "4c57c864-37e8-46b9-b30d-1762f3858984"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 06:57:27 crc kubenswrapper[5017]: I0129 06:57:27.198884 5017 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4c57c864-37e8-46b9-b30d-1762f3858984-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:27 crc kubenswrapper[5017]: I0129 06:57:27.198900 5017 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4c57c864-37e8-46b9-b30d-1762f3858984-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:27 crc kubenswrapper[5017]: I0129 06:57:27.199840 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c57c864-37e8-46b9-b30d-1762f3858984-scripts" (OuterVolumeSpecName: "scripts") pod "4c57c864-37e8-46b9-b30d-1762f3858984" (UID: "4c57c864-37e8-46b9-b30d-1762f3858984"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:57:27 crc kubenswrapper[5017]: I0129 06:57:27.199886 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c57c864-37e8-46b9-b30d-1762f3858984-var-run" (OuterVolumeSpecName: "var-run") pod "4c57c864-37e8-46b9-b30d-1762f3858984" (UID: "4c57c864-37e8-46b9-b30d-1762f3858984"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 06:57:27 crc kubenswrapper[5017]: I0129 06:57:27.227784 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c57c864-37e8-46b9-b30d-1762f3858984-kube-api-access-c65nd" (OuterVolumeSpecName: "kube-api-access-c65nd") pod "4c57c864-37e8-46b9-b30d-1762f3858984" (UID: "4c57c864-37e8-46b9-b30d-1762f3858984"). InnerVolumeSpecName "kube-api-access-c65nd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:57:27 crc kubenswrapper[5017]: I0129 06:57:27.238831 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 29 06:57:27 crc kubenswrapper[5017]: I0129 06:57:27.241380 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 29 06:57:27 crc kubenswrapper[5017]: I0129 06:57:27.246290 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c57c864-37e8-46b9-b30d-1762f3858984-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4c57c864-37e8-46b9-b30d-1762f3858984" (UID: "4c57c864-37e8-46b9-b30d-1762f3858984"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:57:27 crc kubenswrapper[5017]: I0129 06:57:27.264242 5017 scope.go:117] "RemoveContainer" containerID="5620e40e37943966428bb68f9e7fc93c03515596e012174ed760e46fcfaeefcb" Jan 29 06:57:27 crc kubenswrapper[5017]: I0129 06:57:27.289354 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c57c864-37e8-46b9-b30d-1762f3858984-ovn-controller-tls-certs" (OuterVolumeSpecName: "ovn-controller-tls-certs") pod "4c57c864-37e8-46b9-b30d-1762f3858984" (UID: "4c57c864-37e8-46b9-b30d-1762f3858984"). InnerVolumeSpecName "ovn-controller-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:57:27 crc kubenswrapper[5017]: I0129 06:57:27.292683 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 29 06:57:27 crc kubenswrapper[5017]: I0129 06:57:27.300059 5017 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4c57c864-37e8-46b9-b30d-1762f3858984-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:27 crc kubenswrapper[5017]: I0129 06:57:27.300105 5017 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4c57c864-37e8-46b9-b30d-1762f3858984-var-run\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:27 crc kubenswrapper[5017]: I0129 06:57:27.300123 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c65nd\" (UniqueName: \"kubernetes.io/projected/4c57c864-37e8-46b9-b30d-1762f3858984-kube-api-access-c65nd\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:27 crc kubenswrapper[5017]: I0129 06:57:27.300136 5017 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c57c864-37e8-46b9-b30d-1762f3858984-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:27 crc kubenswrapper[5017]: I0129 06:57:27.300146 5017 reconciler_common.go:293] "Volume detached for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c57c864-37e8-46b9-b30d-1762f3858984-ovn-controller-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:27 crc kubenswrapper[5017]: I0129 06:57:27.304163 5017 scope.go:117] "RemoveContainer" containerID="31ad6d38c7cac48292e7dddc2454aef26b5d737c3c99ba9b9446defaee858a9a" Jan 29 06:57:27 crc kubenswrapper[5017]: E0129 06:57:27.306248 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31ad6d38c7cac48292e7dddc2454aef26b5d737c3c99ba9b9446defaee858a9a\": container with ID starting with 31ad6d38c7cac48292e7dddc2454aef26b5d737c3c99ba9b9446defaee858a9a not found: ID does not exist" containerID="31ad6d38c7cac48292e7dddc2454aef26b5d737c3c99ba9b9446defaee858a9a" Jan 29 06:57:27 crc kubenswrapper[5017]: I0129 06:57:27.306278 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31ad6d38c7cac48292e7dddc2454aef26b5d737c3c99ba9b9446defaee858a9a"} err="failed to get container status \"31ad6d38c7cac48292e7dddc2454aef26b5d737c3c99ba9b9446defaee858a9a\": rpc error: code = NotFound desc = could not find container \"31ad6d38c7cac48292e7dddc2454aef26b5d737c3c99ba9b9446defaee858a9a\": container with ID starting with 31ad6d38c7cac48292e7dddc2454aef26b5d737c3c99ba9b9446defaee858a9a not found: ID does not exist" Jan 29 06:57:27 crc kubenswrapper[5017]: I0129 06:57:27.306300 5017 scope.go:117] "RemoveContainer" containerID="5620e40e37943966428bb68f9e7fc93c03515596e012174ed760e46fcfaeefcb" Jan 29 06:57:27 crc kubenswrapper[5017]: E0129 06:57:27.310043 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5620e40e37943966428bb68f9e7fc93c03515596e012174ed760e46fcfaeefcb\": container with ID starting with 5620e40e37943966428bb68f9e7fc93c03515596e012174ed760e46fcfaeefcb not found: ID does not exist" containerID="5620e40e37943966428bb68f9e7fc93c03515596e012174ed760e46fcfaeefcb" Jan 29 06:57:27 crc kubenswrapper[5017]: I0129 06:57:27.310064 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5620e40e37943966428bb68f9e7fc93c03515596e012174ed760e46fcfaeefcb"} err="failed to get container 
status \"5620e40e37943966428bb68f9e7fc93c03515596e012174ed760e46fcfaeefcb\": rpc error: code = NotFound desc = could not find container \"5620e40e37943966428bb68f9e7fc93c03515596e012174ed760e46fcfaeefcb\": container with ID starting with 5620e40e37943966428bb68f9e7fc93c03515596e012174ed760e46fcfaeefcb not found: ID does not exist" Jan 29 06:57:27 crc kubenswrapper[5017]: I0129 06:57:27.401905 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18edd5b3-27eb-43f3-8d6b-03490c243c78-combined-ca-bundle\") pod \"18edd5b3-27eb-43f3-8d6b-03490c243c78\" (UID: \"18edd5b3-27eb-43f3-8d6b-03490c243c78\") " Jan 29 06:57:27 crc kubenswrapper[5017]: I0129 06:57:27.402164 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nbwpd\" (UniqueName: \"kubernetes.io/projected/18edd5b3-27eb-43f3-8d6b-03490c243c78-kube-api-access-nbwpd\") pod \"18edd5b3-27eb-43f3-8d6b-03490c243c78\" (UID: \"18edd5b3-27eb-43f3-8d6b-03490c243c78\") " Jan 29 06:57:27 crc kubenswrapper[5017]: I0129 06:57:27.402234 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18edd5b3-27eb-43f3-8d6b-03490c243c78-config-data\") pod \"18edd5b3-27eb-43f3-8d6b-03490c243c78\" (UID: \"18edd5b3-27eb-43f3-8d6b-03490c243c78\") " Jan 29 06:57:27 crc kubenswrapper[5017]: I0129 06:57:27.405942 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18edd5b3-27eb-43f3-8d6b-03490c243c78-kube-api-access-nbwpd" (OuterVolumeSpecName: "kube-api-access-nbwpd") pod "18edd5b3-27eb-43f3-8d6b-03490c243c78" (UID: "18edd5b3-27eb-43f3-8d6b-03490c243c78"). InnerVolumeSpecName "kube-api-access-nbwpd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:57:27 crc kubenswrapper[5017]: I0129 06:57:27.422058 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18edd5b3-27eb-43f3-8d6b-03490c243c78-config-data" (OuterVolumeSpecName: "config-data") pod "18edd5b3-27eb-43f3-8d6b-03490c243c78" (UID: "18edd5b3-27eb-43f3-8d6b-03490c243c78"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:57:27 crc kubenswrapper[5017]: I0129 06:57:27.442187 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18edd5b3-27eb-43f3-8d6b-03490c243c78-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "18edd5b3-27eb-43f3-8d6b-03490c243c78" (UID: "18edd5b3-27eb-43f3-8d6b-03490c243c78"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:57:27 crc kubenswrapper[5017]: I0129 06:57:27.504313 5017 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18edd5b3-27eb-43f3-8d6b-03490c243c78-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:27 crc kubenswrapper[5017]: I0129 06:57:27.504369 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nbwpd\" (UniqueName: \"kubernetes.io/projected/18edd5b3-27eb-43f3-8d6b-03490c243c78-kube-api-access-nbwpd\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:27 crc kubenswrapper[5017]: I0129 06:57:27.504390 5017 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18edd5b3-27eb-43f3-8d6b-03490c243c78-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:27 crc kubenswrapper[5017]: I0129 06:57:27.809172 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-59754c55b6-52c5s" Jan 29 06:57:27 crc kubenswrapper[5017]: I0129 06:57:27.816083 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-rtkrb"] Jan 29 06:57:27 crc kubenswrapper[5017]: I0129 06:57:27.824628 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-rtkrb"] Jan 29 06:57:27 crc kubenswrapper[5017]: I0129 06:57:27.911929 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/da406cff-454a-4287-a409-5ad51c535649-config-data-custom\") pod \"da406cff-454a-4287-a409-5ad51c535649\" (UID: \"da406cff-454a-4287-a409-5ad51c535649\") " Jan 29 06:57:27 crc kubenswrapper[5017]: I0129 06:57:27.912026 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mvgdx\" (UniqueName: \"kubernetes.io/projected/da406cff-454a-4287-a409-5ad51c535649-kube-api-access-mvgdx\") pod \"da406cff-454a-4287-a409-5ad51c535649\" (UID: \"da406cff-454a-4287-a409-5ad51c535649\") " Jan 29 06:57:27 crc kubenswrapper[5017]: I0129 06:57:27.912160 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/da406cff-454a-4287-a409-5ad51c535649-logs\") pod \"da406cff-454a-4287-a409-5ad51c535649\" (UID: \"da406cff-454a-4287-a409-5ad51c535649\") " Jan 29 06:57:27 crc kubenswrapper[5017]: I0129 06:57:27.912402 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da406cff-454a-4287-a409-5ad51c535649-combined-ca-bundle\") pod \"da406cff-454a-4287-a409-5ad51c535649\" (UID: \"da406cff-454a-4287-a409-5ad51c535649\") " Jan 29 06:57:27 crc kubenswrapper[5017]: I0129 06:57:27.912451 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da406cff-454a-4287-a409-5ad51c535649-config-data\") pod \"da406cff-454a-4287-a409-5ad51c535649\" (UID: \"da406cff-454a-4287-a409-5ad51c535649\") " Jan 29 06:57:27 crc kubenswrapper[5017]: I0129 06:57:27.914359 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da406cff-454a-4287-a409-5ad51c535649-logs" (OuterVolumeSpecName: "logs") pod "da406cff-454a-4287-a409-5ad51c535649" (UID: "da406cff-454a-4287-a409-5ad51c535649"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:57:27 crc kubenswrapper[5017]: I0129 06:57:27.934036 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da406cff-454a-4287-a409-5ad51c535649-kube-api-access-mvgdx" (OuterVolumeSpecName: "kube-api-access-mvgdx") pod "da406cff-454a-4287-a409-5ad51c535649" (UID: "da406cff-454a-4287-a409-5ad51c535649"). InnerVolumeSpecName "kube-api-access-mvgdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:57:27 crc kubenswrapper[5017]: I0129 06:57:27.934399 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da406cff-454a-4287-a409-5ad51c535649-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "da406cff-454a-4287-a409-5ad51c535649" (UID: "da406cff-454a-4287-a409-5ad51c535649"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:57:27 crc kubenswrapper[5017]: I0129 06:57:27.941215 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da406cff-454a-4287-a409-5ad51c535649-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "da406cff-454a-4287-a409-5ad51c535649" (UID: "da406cff-454a-4287-a409-5ad51c535649"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:57:27 crc kubenswrapper[5017]: I0129 06:57:27.980108 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da406cff-454a-4287-a409-5ad51c535649-config-data" (OuterVolumeSpecName: "config-data") pod "da406cff-454a-4287-a409-5ad51c535649" (UID: "da406cff-454a-4287-a409-5ad51c535649"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:57:27 crc kubenswrapper[5017]: I0129 06:57:27.994361 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 06:57:28 crc kubenswrapper[5017]: I0129 06:57:28.018453 5017 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/da406cff-454a-4287-a409-5ad51c535649-logs\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:28 crc kubenswrapper[5017]: I0129 06:57:28.018492 5017 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da406cff-454a-4287-a409-5ad51c535649-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:28 crc kubenswrapper[5017]: I0129 06:57:28.018506 5017 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da406cff-454a-4287-a409-5ad51c535649-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:28 crc kubenswrapper[5017]: I0129 06:57:28.018516 5017 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/da406cff-454a-4287-a409-5ad51c535649-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:28 crc kubenswrapper[5017]: I0129 06:57:28.018527 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mvgdx\" (UniqueName: \"kubernetes.io/projected/da406cff-454a-4287-a409-5ad51c535649-kube-api-access-mvgdx\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:28 crc kubenswrapper[5017]: I0129 06:57:28.116520 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-7dd895bb69-2ngwr" Jan 29 06:57:28 crc kubenswrapper[5017]: I0129 06:57:28.119072 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3131ebc7-0955-4d4d-8444-057df1cc52f1-run-httpd\") pod \"3131ebc7-0955-4d4d-8444-057df1cc52f1\" (UID: \"3131ebc7-0955-4d4d-8444-057df1cc52f1\") " Jan 29 06:57:28 crc kubenswrapper[5017]: I0129 06:57:28.119130 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3131ebc7-0955-4d4d-8444-057df1cc52f1-config-data\") pod \"3131ebc7-0955-4d4d-8444-057df1cc52f1\" (UID: \"3131ebc7-0955-4d4d-8444-057df1cc52f1\") " Jan 29 06:57:28 crc kubenswrapper[5017]: I0129 06:57:28.119259 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3131ebc7-0955-4d4d-8444-057df1cc52f1-ceilometer-tls-certs\") pod \"3131ebc7-0955-4d4d-8444-057df1cc52f1\" (UID: \"3131ebc7-0955-4d4d-8444-057df1cc52f1\") " Jan 29 06:57:28 crc kubenswrapper[5017]: I0129 06:57:28.119289 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3131ebc7-0955-4d4d-8444-057df1cc52f1-combined-ca-bundle\") pod \"3131ebc7-0955-4d4d-8444-057df1cc52f1\" (UID: \"3131ebc7-0955-4d4d-8444-057df1cc52f1\") " Jan 29 06:57:28 crc kubenswrapper[5017]: I0129 06:57:28.119566 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3131ebc7-0955-4d4d-8444-057df1cc52f1-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "3131ebc7-0955-4d4d-8444-057df1cc52f1" (UID: "3131ebc7-0955-4d4d-8444-057df1cc52f1"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:57:28 crc kubenswrapper[5017]: I0129 06:57:28.136335 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rr74p\" (UniqueName: \"kubernetes.io/projected/3131ebc7-0955-4d4d-8444-057df1cc52f1-kube-api-access-rr74p\") pod \"3131ebc7-0955-4d4d-8444-057df1cc52f1\" (UID: \"3131ebc7-0955-4d4d-8444-057df1cc52f1\") " Jan 29 06:57:28 crc kubenswrapper[5017]: I0129 06:57:28.136412 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3131ebc7-0955-4d4d-8444-057df1cc52f1-scripts\") pod \"3131ebc7-0955-4d4d-8444-057df1cc52f1\" (UID: \"3131ebc7-0955-4d4d-8444-057df1cc52f1\") " Jan 29 06:57:28 crc kubenswrapper[5017]: I0129 06:57:28.136741 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3131ebc7-0955-4d4d-8444-057df1cc52f1-log-httpd\") pod \"3131ebc7-0955-4d4d-8444-057df1cc52f1\" (UID: \"3131ebc7-0955-4d4d-8444-057df1cc52f1\") " Jan 29 06:57:28 crc kubenswrapper[5017]: I0129 06:57:28.136862 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3131ebc7-0955-4d4d-8444-057df1cc52f1-sg-core-conf-yaml\") pod \"3131ebc7-0955-4d4d-8444-057df1cc52f1\" (UID: \"3131ebc7-0955-4d4d-8444-057df1cc52f1\") " Jan 29 06:57:28 crc kubenswrapper[5017]: I0129 06:57:28.137721 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3131ebc7-0955-4d4d-8444-057df1cc52f1-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "3131ebc7-0955-4d4d-8444-057df1cc52f1" (UID: "3131ebc7-0955-4d4d-8444-057df1cc52f1"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:57:28 crc kubenswrapper[5017]: I0129 06:57:28.138020 5017 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3131ebc7-0955-4d4d-8444-057df1cc52f1-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:28 crc kubenswrapper[5017]: I0129 06:57:28.138051 5017 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3131ebc7-0955-4d4d-8444-057df1cc52f1-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:28 crc kubenswrapper[5017]: I0129 06:57:28.140882 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3131ebc7-0955-4d4d-8444-057df1cc52f1-scripts" (OuterVolumeSpecName: "scripts") pod "3131ebc7-0955-4d4d-8444-057df1cc52f1" (UID: "3131ebc7-0955-4d4d-8444-057df1cc52f1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:57:28 crc kubenswrapper[5017]: I0129 06:57:28.151648 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3131ebc7-0955-4d4d-8444-057df1cc52f1-kube-api-access-rr74p" (OuterVolumeSpecName: "kube-api-access-rr74p") pod "3131ebc7-0955-4d4d-8444-057df1cc52f1" (UID: "3131ebc7-0955-4d4d-8444-057df1cc52f1"). InnerVolumeSpecName "kube-api-access-rr74p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:57:28 crc kubenswrapper[5017]: I0129 06:57:28.188601 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"18edd5b3-27eb-43f3-8d6b-03490c243c78","Type":"ContainerDied","Data":"1ee7b87b7b7508d57233e55982afd114b99a55ec9e113941fb10b8d1395132ac"} Jan 29 06:57:28 crc kubenswrapper[5017]: I0129 06:57:28.188684 5017 scope.go:117] "RemoveContainer" containerID="cd271b62ca4015e030ca07e0ae6b52baec3e519d53550f369d0cfbcc931e68fd" Jan 29 06:57:28 crc kubenswrapper[5017]: I0129 06:57:28.189028 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 29 06:57:28 crc kubenswrapper[5017]: I0129 06:57:28.203054 5017 generic.go:334] "Generic (PLEG): container finished" podID="3131ebc7-0955-4d4d-8444-057df1cc52f1" containerID="75cec19a1205e625b426f312c7dda16e9246d7c67ccd89220fd9ecea834f58f2" exitCode=0 Jan 29 06:57:28 crc kubenswrapper[5017]: I0129 06:57:28.203190 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 06:57:28 crc kubenswrapper[5017]: I0129 06:57:28.203298 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3131ebc7-0955-4d4d-8444-057df1cc52f1","Type":"ContainerDied","Data":"75cec19a1205e625b426f312c7dda16e9246d7c67ccd89220fd9ecea834f58f2"} Jan 29 06:57:28 crc kubenswrapper[5017]: I0129 06:57:28.203368 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3131ebc7-0955-4d4d-8444-057df1cc52f1","Type":"ContainerDied","Data":"ead5beca6ebc7ac46fa200b641bfd0dcaf21d1f5f74978a5b1d66aa82190d1b9"} Jan 29 06:57:28 crc kubenswrapper[5017]: I0129 06:57:28.204843 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 06:57:28 crc kubenswrapper[5017]: I0129 06:57:28.210034 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3131ebc7-0955-4d4d-8444-057df1cc52f1-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "3131ebc7-0955-4d4d-8444-057df1cc52f1" (UID: "3131ebc7-0955-4d4d-8444-057df1cc52f1"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:57:28 crc kubenswrapper[5017]: I0129 06:57:28.226888 5017 generic.go:334] "Generic (PLEG): container finished" podID="da406cff-454a-4287-a409-5ad51c535649" containerID="2a9e2a942d9da55b552383e09be825e15027ade185c5ac022578006979e399fc" exitCode=0 Jan 29 06:57:28 crc kubenswrapper[5017]: I0129 06:57:28.227081 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-59754c55b6-52c5s" event={"ID":"da406cff-454a-4287-a409-5ad51c535649","Type":"ContainerDied","Data":"2a9e2a942d9da55b552383e09be825e15027ade185c5ac022578006979e399fc"} Jan 29 06:57:28 crc kubenswrapper[5017]: I0129 06:57:28.227119 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-59754c55b6-52c5s" event={"ID":"da406cff-454a-4287-a409-5ad51c535649","Type":"ContainerDied","Data":"08c84617f8fd7d5dedc44bf1a7ce27c779bd9bf599b8449d46fa4ea59ab7127c"} Jan 29 06:57:28 crc kubenswrapper[5017]: I0129 06:57:28.227185 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-59754c55b6-52c5s" Jan 29 06:57:28 crc kubenswrapper[5017]: I0129 06:57:28.227745 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3131ebc7-0955-4d4d-8444-057df1cc52f1-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "3131ebc7-0955-4d4d-8444-057df1cc52f1" (UID: "3131ebc7-0955-4d4d-8444-057df1cc52f1"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:57:28 crc kubenswrapper[5017]: I0129 06:57:28.235655 5017 scope.go:117] "RemoveContainer" containerID="61f4dffdd2cc2b445230967b2177b548f4408eb3732a47610acca18c5946b0d3" Jan 29 06:57:28 crc kubenswrapper[5017]: I0129 06:57:28.235870 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7dd895bb69-2ngwr" event={"ID":"c118297d-1c5d-4234-930c-9c0e6b5bb29b","Type":"ContainerDied","Data":"d2d5a9a289484f054399954f789424326a15cf9d2c818d7edcc78f993bd9183e"} Jan 29 06:57:28 crc kubenswrapper[5017]: I0129 06:57:28.235870 5017 generic.go:334] "Generic (PLEG): container finished" podID="c118297d-1c5d-4234-930c-9c0e6b5bb29b" containerID="d2d5a9a289484f054399954f789424326a15cf9d2c818d7edcc78f993bd9183e" exitCode=0 Jan 29 06:57:28 crc kubenswrapper[5017]: I0129 06:57:28.236014 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-7dd895bb69-2ngwr" Jan 29 06:57:28 crc kubenswrapper[5017]: I0129 06:57:28.236145 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7dd895bb69-2ngwr" event={"ID":"c118297d-1c5d-4234-930c-9c0e6b5bb29b","Type":"ContainerDied","Data":"5719ab7c6b932755168b57ebbb704c2aa3a3d0069b5e7afb8cdc762eb568de23"} Jan 29 06:57:28 crc kubenswrapper[5017]: I0129 06:57:28.242243 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c118297d-1c5d-4234-930c-9c0e6b5bb29b-combined-ca-bundle\") pod \"c118297d-1c5d-4234-930c-9c0e6b5bb29b\" (UID: \"c118297d-1c5d-4234-930c-9c0e6b5bb29b\") " Jan 29 06:57:28 crc kubenswrapper[5017]: I0129 06:57:28.242511 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c118297d-1c5d-4234-930c-9c0e6b5bb29b-config-data\") pod \"c118297d-1c5d-4234-930c-9c0e6b5bb29b\" (UID: \"c118297d-1c5d-4234-930c-9c0e6b5bb29b\") " Jan 29 06:57:28 crc kubenswrapper[5017]: I0129 06:57:28.242564 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c118297d-1c5d-4234-930c-9c0e6b5bb29b-logs\") pod \"c118297d-1c5d-4234-930c-9c0e6b5bb29b\" (UID: \"c118297d-1c5d-4234-930c-9c0e6b5bb29b\") " Jan 29 06:57:28 crc kubenswrapper[5017]: I0129 06:57:28.242604 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gtmsh\" (UniqueName: \"kubernetes.io/projected/c118297d-1c5d-4234-930c-9c0e6b5bb29b-kube-api-access-gtmsh\") pod \"c118297d-1c5d-4234-930c-9c0e6b5bb29b\" (UID: \"c118297d-1c5d-4234-930c-9c0e6b5bb29b\") " Jan 29 06:57:28 crc kubenswrapper[5017]: I0129 06:57:28.242705 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c118297d-1c5d-4234-930c-9c0e6b5bb29b-config-data-custom\") pod \"c118297d-1c5d-4234-930c-9c0e6b5bb29b\" (UID: 
\"c118297d-1c5d-4234-930c-9c0e6b5bb29b\") " Jan 29 06:57:28 crc kubenswrapper[5017]: I0129 06:57:28.248138 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c118297d-1c5d-4234-930c-9c0e6b5bb29b-logs" (OuterVolumeSpecName: "logs") pod "c118297d-1c5d-4234-930c-9c0e6b5bb29b" (UID: "c118297d-1c5d-4234-930c-9c0e6b5bb29b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:57:28 crc kubenswrapper[5017]: I0129 06:57:28.250099 5017 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3131ebc7-0955-4d4d-8444-057df1cc52f1-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:28 crc kubenswrapper[5017]: I0129 06:57:28.250146 5017 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c118297d-1c5d-4234-930c-9c0e6b5bb29b-logs\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:28 crc kubenswrapper[5017]: I0129 06:57:28.250161 5017 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3131ebc7-0955-4d4d-8444-057df1cc52f1-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:28 crc kubenswrapper[5017]: I0129 06:57:28.250177 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rr74p\" (UniqueName: \"kubernetes.io/projected/3131ebc7-0955-4d4d-8444-057df1cc52f1-kube-api-access-rr74p\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:28 crc kubenswrapper[5017]: I0129 06:57:28.250191 5017 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3131ebc7-0955-4d4d-8444-057df1cc52f1-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:28 crc kubenswrapper[5017]: I0129 06:57:28.261950 5017 generic.go:334] "Generic (PLEG): container finished" podID="a909c2d3-90a4-41a0-af8e-ddb69ed4f41b" containerID="baa50265eb2c67dc53b4314533ab16055fd8bdaab7aec5147ecfb367c70320fc" exitCode=0 Jan 29 06:57:28 crc kubenswrapper[5017]: I0129 06:57:28.262069 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a909c2d3-90a4-41a0-af8e-ddb69ed4f41b","Type":"ContainerDied","Data":"baa50265eb2c67dc53b4314533ab16055fd8bdaab7aec5147ecfb367c70320fc"} Jan 29 06:57:28 crc kubenswrapper[5017]: I0129 06:57:28.262172 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 06:57:28 crc kubenswrapper[5017]: I0129 06:57:28.267288 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c118297d-1c5d-4234-930c-9c0e6b5bb29b-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "c118297d-1c5d-4234-930c-9c0e6b5bb29b" (UID: "c118297d-1c5d-4234-930c-9c0e6b5bb29b"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:57:28 crc kubenswrapper[5017]: I0129 06:57:28.277372 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c118297d-1c5d-4234-930c-9c0e6b5bb29b-kube-api-access-gtmsh" (OuterVolumeSpecName: "kube-api-access-gtmsh") pod "c118297d-1c5d-4234-930c-9c0e6b5bb29b" (UID: "c118297d-1c5d-4234-930c-9c0e6b5bb29b"). InnerVolumeSpecName "kube-api-access-gtmsh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:57:28 crc kubenswrapper[5017]: I0129 06:57:28.278476 5017 generic.go:334] "Generic (PLEG): container finished" podID="aaa7337b-7686-4fd2-9c52-6b76f9f3a3b1" containerID="86c4c472b3c3903633b1b443d03a5262fb47cb7d8c1f6bdbf87f276659c9e547" exitCode=0 Jan 29 06:57:28 crc kubenswrapper[5017]: I0129 06:57:28.278541 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"aaa7337b-7686-4fd2-9c52-6b76f9f3a3b1","Type":"ContainerDied","Data":"86c4c472b3c3903633b1b443d03a5262fb47cb7d8c1f6bdbf87f276659c9e547"} Jan 29 06:57:28 crc kubenswrapper[5017]: I0129 06:57:28.278593 5017 scope.go:117] "RemoveContainer" containerID="be24094753ade4791793ba42b3d49712012104ef86a88d92d47df03f0450fcb4" Jan 29 06:57:28 crc kubenswrapper[5017]: I0129 06:57:28.283814 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 29 06:57:28 crc kubenswrapper[5017]: I0129 06:57:28.286855 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c118297d-1c5d-4234-930c-9c0e6b5bb29b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c118297d-1c5d-4234-930c-9c0e6b5bb29b" (UID: "c118297d-1c5d-4234-930c-9c0e6b5bb29b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:57:28 crc kubenswrapper[5017]: I0129 06:57:28.312066 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c118297d-1c5d-4234-930c-9c0e6b5bb29b-config-data" (OuterVolumeSpecName: "config-data") pod "c118297d-1c5d-4234-930c-9c0e6b5bb29b" (UID: "c118297d-1c5d-4234-930c-9c0e6b5bb29b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:57:28 crc kubenswrapper[5017]: I0129 06:57:28.314713 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3131ebc7-0955-4d4d-8444-057df1cc52f1-config-data" (OuterVolumeSpecName: "config-data") pod "3131ebc7-0955-4d4d-8444-057df1cc52f1" (UID: "3131ebc7-0955-4d4d-8444-057df1cc52f1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:57:28 crc kubenswrapper[5017]: I0129 06:57:28.317416 5017 scope.go:117] "RemoveContainer" containerID="75cec19a1205e625b426f312c7dda16e9246d7c67ccd89220fd9ecea834f58f2" Jan 29 06:57:28 crc kubenswrapper[5017]: I0129 06:57:28.326637 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3131ebc7-0955-4d4d-8444-057df1cc52f1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3131ebc7-0955-4d4d-8444-057df1cc52f1" (UID: "3131ebc7-0955-4d4d-8444-057df1cc52f1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:57:28 crc kubenswrapper[5017]: I0129 06:57:28.330425 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c57c864-37e8-46b9-b30d-1762f3858984" path="/var/lib/kubelet/pods/4c57c864-37e8-46b9-b30d-1762f3858984/volumes" Jan 29 06:57:28 crc kubenswrapper[5017]: I0129 06:57:28.331018 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55d2d70d-8578-47fc-a3a7-df7694c3f2a3" path="/var/lib/kubelet/pods/55d2d70d-8578-47fc-a3a7-df7694c3f2a3/volumes" Jan 29 06:57:28 crc kubenswrapper[5017]: I0129 06:57:28.331748 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a" path="/var/lib/kubelet/pods/5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a/volumes" Jan 29 06:57:28 crc kubenswrapper[5017]: I0129 06:57:28.332895 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d3e4a4d-ee9a-4345-b8e5-a40416771caf" path="/var/lib/kubelet/pods/7d3e4a4d-ee9a-4345-b8e5-a40416771caf/volumes" Jan 29 06:57:28 crc kubenswrapper[5017]: I0129 06:57:28.336159 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 29 06:57:28 crc kubenswrapper[5017]: I0129 06:57:28.336223 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-59754c55b6-52c5s"] Jan 29 06:57:28 crc kubenswrapper[5017]: I0129 06:57:28.337702 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-59754c55b6-52c5s"] Jan 29 06:57:28 crc kubenswrapper[5017]: I0129 06:57:28.341704 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 29 06:57:28 crc kubenswrapper[5017]: I0129 06:57:28.350942 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjktv\" (UniqueName: \"kubernetes.io/projected/a909c2d3-90a4-41a0-af8e-ddb69ed4f41b-kube-api-access-pjktv\") pod \"a909c2d3-90a4-41a0-af8e-ddb69ed4f41b\" (UID: \"a909c2d3-90a4-41a0-af8e-ddb69ed4f41b\") " Jan 29 06:57:28 crc kubenswrapper[5017]: I0129 06:57:28.351195 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a909c2d3-90a4-41a0-af8e-ddb69ed4f41b-combined-ca-bundle\") pod \"a909c2d3-90a4-41a0-af8e-ddb69ed4f41b\" (UID: \"a909c2d3-90a4-41a0-af8e-ddb69ed4f41b\") " Jan 29 06:57:28 crc kubenswrapper[5017]: I0129 06:57:28.351270 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a909c2d3-90a4-41a0-af8e-ddb69ed4f41b-config-data\") pod \"a909c2d3-90a4-41a0-af8e-ddb69ed4f41b\" (UID: \"a909c2d3-90a4-41a0-af8e-ddb69ed4f41b\") " Jan 29 06:57:28 crc kubenswrapper[5017]: I0129 06:57:28.355607 5017 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c118297d-1c5d-4234-930c-9c0e6b5bb29b-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:28 crc kubenswrapper[5017]: I0129 06:57:28.355670 5017 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c118297d-1c5d-4234-930c-9c0e6b5bb29b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:28 crc kubenswrapper[5017]: I0129 06:57:28.355682 5017 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/3131ebc7-0955-4d4d-8444-057df1cc52f1-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:28 crc kubenswrapper[5017]: I0129 06:57:28.355693 5017 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c118297d-1c5d-4234-930c-9c0e6b5bb29b-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:28 crc kubenswrapper[5017]: I0129 06:57:28.355704 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gtmsh\" (UniqueName: \"kubernetes.io/projected/c118297d-1c5d-4234-930c-9c0e6b5bb29b-kube-api-access-gtmsh\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:28 crc kubenswrapper[5017]: I0129 06:57:28.355716 5017 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3131ebc7-0955-4d4d-8444-057df1cc52f1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:28 crc kubenswrapper[5017]: I0129 06:57:28.355751 5017 scope.go:117] "RemoveContainer" containerID="527c5a9750ed60502bff4d26ac1e68687da3387f7cd2976f75db5bc9ef3b8d4f" Jan 29 06:57:28 crc kubenswrapper[5017]: I0129 06:57:28.361031 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a909c2d3-90a4-41a0-af8e-ddb69ed4f41b-kube-api-access-pjktv" (OuterVolumeSpecName: "kube-api-access-pjktv") pod "a909c2d3-90a4-41a0-af8e-ddb69ed4f41b" (UID: "a909c2d3-90a4-41a0-af8e-ddb69ed4f41b"). InnerVolumeSpecName "kube-api-access-pjktv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:57:28 crc kubenswrapper[5017]: I0129 06:57:28.382206 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a909c2d3-90a4-41a0-af8e-ddb69ed4f41b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a909c2d3-90a4-41a0-af8e-ddb69ed4f41b" (UID: "a909c2d3-90a4-41a0-af8e-ddb69ed4f41b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:57:28 crc kubenswrapper[5017]: I0129 06:57:28.382635 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a909c2d3-90a4-41a0-af8e-ddb69ed4f41b-config-data" (OuterVolumeSpecName: "config-data") pod "a909c2d3-90a4-41a0-af8e-ddb69ed4f41b" (UID: "a909c2d3-90a4-41a0-af8e-ddb69ed4f41b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:57:28 crc kubenswrapper[5017]: I0129 06:57:28.383674 5017 scope.go:117] "RemoveContainer" containerID="61f4dffdd2cc2b445230967b2177b548f4408eb3732a47610acca18c5946b0d3" Jan 29 06:57:28 crc kubenswrapper[5017]: E0129 06:57:28.384676 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61f4dffdd2cc2b445230967b2177b548f4408eb3732a47610acca18c5946b0d3\": container with ID starting with 61f4dffdd2cc2b445230967b2177b548f4408eb3732a47610acca18c5946b0d3 not found: ID does not exist" containerID="61f4dffdd2cc2b445230967b2177b548f4408eb3732a47610acca18c5946b0d3" Jan 29 06:57:28 crc kubenswrapper[5017]: I0129 06:57:28.384747 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61f4dffdd2cc2b445230967b2177b548f4408eb3732a47610acca18c5946b0d3"} err="failed to get container status \"61f4dffdd2cc2b445230967b2177b548f4408eb3732a47610acca18c5946b0d3\": rpc error: code = NotFound desc = could not find container \"61f4dffdd2cc2b445230967b2177b548f4408eb3732a47610acca18c5946b0d3\": container with ID starting with 61f4dffdd2cc2b445230967b2177b548f4408eb3732a47610acca18c5946b0d3 not found: ID does not exist" Jan 29 06:57:28 crc kubenswrapper[5017]: I0129 06:57:28.384779 5017 scope.go:117] "RemoveContainer" containerID="be24094753ade4791793ba42b3d49712012104ef86a88d92d47df03f0450fcb4" Jan 29 06:57:28 crc kubenswrapper[5017]: E0129 06:57:28.385140 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be24094753ade4791793ba42b3d49712012104ef86a88d92d47df03f0450fcb4\": container with ID starting with be24094753ade4791793ba42b3d49712012104ef86a88d92d47df03f0450fcb4 not found: ID does not exist" containerID="be24094753ade4791793ba42b3d49712012104ef86a88d92d47df03f0450fcb4" Jan 29 06:57:28 crc kubenswrapper[5017]: I0129 06:57:28.385172 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be24094753ade4791793ba42b3d49712012104ef86a88d92d47df03f0450fcb4"} err="failed to get container status \"be24094753ade4791793ba42b3d49712012104ef86a88d92d47df03f0450fcb4\": rpc error: code = NotFound desc = could not find container \"be24094753ade4791793ba42b3d49712012104ef86a88d92d47df03f0450fcb4\": container with ID starting with be24094753ade4791793ba42b3d49712012104ef86a88d92d47df03f0450fcb4 not found: ID does not exist" Jan 29 06:57:28 crc kubenswrapper[5017]: I0129 06:57:28.385236 5017 scope.go:117] "RemoveContainer" containerID="75cec19a1205e625b426f312c7dda16e9246d7c67ccd89220fd9ecea834f58f2" Jan 29 06:57:28 crc kubenswrapper[5017]: E0129 06:57:28.385554 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75cec19a1205e625b426f312c7dda16e9246d7c67ccd89220fd9ecea834f58f2\": container with ID starting with 75cec19a1205e625b426f312c7dda16e9246d7c67ccd89220fd9ecea834f58f2 not found: ID does not exist" containerID="75cec19a1205e625b426f312c7dda16e9246d7c67ccd89220fd9ecea834f58f2" Jan 29 06:57:28 crc kubenswrapper[5017]: I0129 06:57:28.385584 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75cec19a1205e625b426f312c7dda16e9246d7c67ccd89220fd9ecea834f58f2"} err="failed to get container status \"75cec19a1205e625b426f312c7dda16e9246d7c67ccd89220fd9ecea834f58f2\": rpc error: code = NotFound desc = could not 
find container \"75cec19a1205e625b426f312c7dda16e9246d7c67ccd89220fd9ecea834f58f2\": container with ID starting with 75cec19a1205e625b426f312c7dda16e9246d7c67ccd89220fd9ecea834f58f2 not found: ID does not exist" Jan 29 06:57:28 crc kubenswrapper[5017]: I0129 06:57:28.385618 5017 scope.go:117] "RemoveContainer" containerID="527c5a9750ed60502bff4d26ac1e68687da3387f7cd2976f75db5bc9ef3b8d4f" Jan 29 06:57:28 crc kubenswrapper[5017]: E0129 06:57:28.385893 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"527c5a9750ed60502bff4d26ac1e68687da3387f7cd2976f75db5bc9ef3b8d4f\": container with ID starting with 527c5a9750ed60502bff4d26ac1e68687da3387f7cd2976f75db5bc9ef3b8d4f not found: ID does not exist" containerID="527c5a9750ed60502bff4d26ac1e68687da3387f7cd2976f75db5bc9ef3b8d4f" Jan 29 06:57:28 crc kubenswrapper[5017]: I0129 06:57:28.385926 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"527c5a9750ed60502bff4d26ac1e68687da3387f7cd2976f75db5bc9ef3b8d4f"} err="failed to get container status \"527c5a9750ed60502bff4d26ac1e68687da3387f7cd2976f75db5bc9ef3b8d4f\": rpc error: code = NotFound desc = could not find container \"527c5a9750ed60502bff4d26ac1e68687da3387f7cd2976f75db5bc9ef3b8d4f\": container with ID starting with 527c5a9750ed60502bff4d26ac1e68687da3387f7cd2976f75db5bc9ef3b8d4f not found: ID does not exist" Jan 29 06:57:28 crc kubenswrapper[5017]: I0129 06:57:28.386044 5017 scope.go:117] "RemoveContainer" containerID="2a9e2a942d9da55b552383e09be825e15027ade185c5ac022578006979e399fc" Jan 29 06:57:28 crc kubenswrapper[5017]: I0129 06:57:28.423577 5017 scope.go:117] "RemoveContainer" containerID="ce129ba42a8b3c722fa18ae7dc9c5289605b6656c1a899e4045fa4189ae39e6f" Jan 29 06:57:28 crc kubenswrapper[5017]: I0129 06:57:28.451347 5017 scope.go:117] "RemoveContainer" containerID="2a9e2a942d9da55b552383e09be825e15027ade185c5ac022578006979e399fc" Jan 29 06:57:28 crc kubenswrapper[5017]: E0129 06:57:28.453186 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a9e2a942d9da55b552383e09be825e15027ade185c5ac022578006979e399fc\": container with ID starting with 2a9e2a942d9da55b552383e09be825e15027ade185c5ac022578006979e399fc not found: ID does not exist" containerID="2a9e2a942d9da55b552383e09be825e15027ade185c5ac022578006979e399fc" Jan 29 06:57:28 crc kubenswrapper[5017]: I0129 06:57:28.453311 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a9e2a942d9da55b552383e09be825e15027ade185c5ac022578006979e399fc"} err="failed to get container status \"2a9e2a942d9da55b552383e09be825e15027ade185c5ac022578006979e399fc\": rpc error: code = NotFound desc = could not find container \"2a9e2a942d9da55b552383e09be825e15027ade185c5ac022578006979e399fc\": container with ID starting with 2a9e2a942d9da55b552383e09be825e15027ade185c5ac022578006979e399fc not found: ID does not exist" Jan 29 06:57:28 crc kubenswrapper[5017]: I0129 06:57:28.453358 5017 scope.go:117] "RemoveContainer" containerID="ce129ba42a8b3c722fa18ae7dc9c5289605b6656c1a899e4045fa4189ae39e6f" Jan 29 06:57:28 crc kubenswrapper[5017]: E0129 06:57:28.454305 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce129ba42a8b3c722fa18ae7dc9c5289605b6656c1a899e4045fa4189ae39e6f\": container with ID starting with 
ce129ba42a8b3c722fa18ae7dc9c5289605b6656c1a899e4045fa4189ae39e6f not found: ID does not exist" containerID="ce129ba42a8b3c722fa18ae7dc9c5289605b6656c1a899e4045fa4189ae39e6f" Jan 29 06:57:28 crc kubenswrapper[5017]: I0129 06:57:28.454336 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce129ba42a8b3c722fa18ae7dc9c5289605b6656c1a899e4045fa4189ae39e6f"} err="failed to get container status \"ce129ba42a8b3c722fa18ae7dc9c5289605b6656c1a899e4045fa4189ae39e6f\": rpc error: code = NotFound desc = could not find container \"ce129ba42a8b3c722fa18ae7dc9c5289605b6656c1a899e4045fa4189ae39e6f\": container with ID starting with ce129ba42a8b3c722fa18ae7dc9c5289605b6656c1a899e4045fa4189ae39e6f not found: ID does not exist" Jan 29 06:57:28 crc kubenswrapper[5017]: I0129 06:57:28.454350 5017 scope.go:117] "RemoveContainer" containerID="d2d5a9a289484f054399954f789424326a15cf9d2c818d7edcc78f993bd9183e" Jan 29 06:57:28 crc kubenswrapper[5017]: I0129 06:57:28.456661 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7b62t\" (UniqueName: \"kubernetes.io/projected/aaa7337b-7686-4fd2-9c52-6b76f9f3a3b1-kube-api-access-7b62t\") pod \"aaa7337b-7686-4fd2-9c52-6b76f9f3a3b1\" (UID: \"aaa7337b-7686-4fd2-9c52-6b76f9f3a3b1\") " Jan 29 06:57:28 crc kubenswrapper[5017]: I0129 06:57:28.459628 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aaa7337b-7686-4fd2-9c52-6b76f9f3a3b1-combined-ca-bundle\") pod \"aaa7337b-7686-4fd2-9c52-6b76f9f3a3b1\" (UID: \"aaa7337b-7686-4fd2-9c52-6b76f9f3a3b1\") " Jan 29 06:57:28 crc kubenswrapper[5017]: I0129 06:57:28.459763 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aaa7337b-7686-4fd2-9c52-6b76f9f3a3b1-config-data\") pod \"aaa7337b-7686-4fd2-9c52-6b76f9f3a3b1\" (UID: \"aaa7337b-7686-4fd2-9c52-6b76f9f3a3b1\") " Jan 29 06:57:28 crc kubenswrapper[5017]: I0129 06:57:28.460300 5017 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a909c2d3-90a4-41a0-af8e-ddb69ed4f41b-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:28 crc kubenswrapper[5017]: I0129 06:57:28.460322 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjktv\" (UniqueName: \"kubernetes.io/projected/a909c2d3-90a4-41a0-af8e-ddb69ed4f41b-kube-api-access-pjktv\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:28 crc kubenswrapper[5017]: I0129 06:57:28.460335 5017 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a909c2d3-90a4-41a0-af8e-ddb69ed4f41b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:28 crc kubenswrapper[5017]: I0129 06:57:28.467847 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aaa7337b-7686-4fd2-9c52-6b76f9f3a3b1-kube-api-access-7b62t" (OuterVolumeSpecName: "kube-api-access-7b62t") pod "aaa7337b-7686-4fd2-9c52-6b76f9f3a3b1" (UID: "aaa7337b-7686-4fd2-9c52-6b76f9f3a3b1"). InnerVolumeSpecName "kube-api-access-7b62t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:57:28 crc kubenswrapper[5017]: I0129 06:57:28.482125 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aaa7337b-7686-4fd2-9c52-6b76f9f3a3b1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aaa7337b-7686-4fd2-9c52-6b76f9f3a3b1" (UID: "aaa7337b-7686-4fd2-9c52-6b76f9f3a3b1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:57:28 crc kubenswrapper[5017]: I0129 06:57:28.484118 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aaa7337b-7686-4fd2-9c52-6b76f9f3a3b1-config-data" (OuterVolumeSpecName: "config-data") pod "aaa7337b-7686-4fd2-9c52-6b76f9f3a3b1" (UID: "aaa7337b-7686-4fd2-9c52-6b76f9f3a3b1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:57:28 crc kubenswrapper[5017]: I0129 06:57:28.561564 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7b62t\" (UniqueName: \"kubernetes.io/projected/aaa7337b-7686-4fd2-9c52-6b76f9f3a3b1-kube-api-access-7b62t\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:28 crc kubenswrapper[5017]: I0129 06:57:28.561933 5017 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aaa7337b-7686-4fd2-9c52-6b76f9f3a3b1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:28 crc kubenswrapper[5017]: I0129 06:57:28.561946 5017 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aaa7337b-7686-4fd2-9c52-6b76f9f3a3b1-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:28 crc kubenswrapper[5017]: I0129 06:57:28.585401 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 06:57:28 crc kubenswrapper[5017]: I0129 06:57:28.597949 5017 scope.go:117] "RemoveContainer" containerID="97c40c65ef865923cc204cdc5deb3e46aacb0a5552e0f4c854c437adc61e25b3" Jan 29 06:57:28 crc kubenswrapper[5017]: I0129 06:57:28.606708 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 29 06:57:28 crc kubenswrapper[5017]: I0129 06:57:28.617468 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-7dd895bb69-2ngwr"] Jan 29 06:57:28 crc kubenswrapper[5017]: E0129 06:57:28.623931 5017 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1e3d13255a0abc9a24e35788246958c76121b82d0582fabb929e39b38e4591aa is running failed: container process not found" containerID="1e3d13255a0abc9a24e35788246958c76121b82d0582fabb929e39b38e4591aa" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 29 06:57:28 crc kubenswrapper[5017]: E0129 06:57:28.624558 5017 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1e3d13255a0abc9a24e35788246958c76121b82d0582fabb929e39b38e4591aa is running failed: container process not found" containerID="1e3d13255a0abc9a24e35788246958c76121b82d0582fabb929e39b38e4591aa" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 29 06:57:28 crc kubenswrapper[5017]: E0129 06:57:28.624896 5017 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID 
of 1e3d13255a0abc9a24e35788246958c76121b82d0582fabb929e39b38e4591aa is running failed: container process not found" containerID="1e3d13255a0abc9a24e35788246958c76121b82d0582fabb929e39b38e4591aa" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 29 06:57:28 crc kubenswrapper[5017]: E0129 06:57:28.624924 5017 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1e3d13255a0abc9a24e35788246958c76121b82d0582fabb929e39b38e4591aa is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-mrhnf" podUID="131b99f7-3558-4aec-a0bf-7c1ef0f35a2b" containerName="ovsdb-server" Jan 29 06:57:28 crc kubenswrapper[5017]: I0129 06:57:28.626589 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-7dd895bb69-2ngwr"] Jan 29 06:57:28 crc kubenswrapper[5017]: I0129 06:57:28.631719 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 06:57:28 crc kubenswrapper[5017]: I0129 06:57:28.633242 5017 scope.go:117] "RemoveContainer" containerID="d2d5a9a289484f054399954f789424326a15cf9d2c818d7edcc78f993bd9183e" Jan 29 06:57:28 crc kubenswrapper[5017]: E0129 06:57:28.633284 5017 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f3ad8c4f20bf955fbc05781f196725d983e26995a8d00c19f11477369f5a08b1" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 29 06:57:28 crc kubenswrapper[5017]: E0129 06:57:28.637054 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d2d5a9a289484f054399954f789424326a15cf9d2c818d7edcc78f993bd9183e\": container with ID starting with d2d5a9a289484f054399954f789424326a15cf9d2c818d7edcc78f993bd9183e not found: ID does not exist" containerID="d2d5a9a289484f054399954f789424326a15cf9d2c818d7edcc78f993bd9183e" Jan 29 06:57:28 crc kubenswrapper[5017]: I0129 06:57:28.637080 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2d5a9a289484f054399954f789424326a15cf9d2c818d7edcc78f993bd9183e"} err="failed to get container status \"d2d5a9a289484f054399954f789424326a15cf9d2c818d7edcc78f993bd9183e\": rpc error: code = NotFound desc = could not find container \"d2d5a9a289484f054399954f789424326a15cf9d2c818d7edcc78f993bd9183e\": container with ID starting with d2d5a9a289484f054399954f789424326a15cf9d2c818d7edcc78f993bd9183e not found: ID does not exist" Jan 29 06:57:28 crc kubenswrapper[5017]: I0129 06:57:28.637102 5017 scope.go:117] "RemoveContainer" containerID="97c40c65ef865923cc204cdc5deb3e46aacb0a5552e0f4c854c437adc61e25b3" Jan 29 06:57:28 crc kubenswrapper[5017]: E0129 06:57:28.638589 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"97c40c65ef865923cc204cdc5deb3e46aacb0a5552e0f4c854c437adc61e25b3\": container with ID starting with 97c40c65ef865923cc204cdc5deb3e46aacb0a5552e0f4c854c437adc61e25b3 not found: ID does not exist" containerID="97c40c65ef865923cc204cdc5deb3e46aacb0a5552e0f4c854c437adc61e25b3" Jan 29 06:57:28 crc kubenswrapper[5017]: I0129 06:57:28.638615 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97c40c65ef865923cc204cdc5deb3e46aacb0a5552e0f4c854c437adc61e25b3"} err="failed 
to get container status \"97c40c65ef865923cc204cdc5deb3e46aacb0a5552e0f4c854c437adc61e25b3\": rpc error: code = NotFound desc = could not find container \"97c40c65ef865923cc204cdc5deb3e46aacb0a5552e0f4c854c437adc61e25b3\": container with ID starting with 97c40c65ef865923cc204cdc5deb3e46aacb0a5552e0f4c854c437adc61e25b3 not found: ID does not exist" Jan 29 06:57:28 crc kubenswrapper[5017]: I0129 06:57:28.638631 5017 scope.go:117] "RemoveContainer" containerID="baa50265eb2c67dc53b4314533ab16055fd8bdaab7aec5147ecfb367c70320fc" Jan 29 06:57:28 crc kubenswrapper[5017]: I0129 06:57:28.639678 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 06:57:28 crc kubenswrapper[5017]: E0129 06:57:28.642049 5017 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f3ad8c4f20bf955fbc05781f196725d983e26995a8d00c19f11477369f5a08b1" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 29 06:57:28 crc kubenswrapper[5017]: E0129 06:57:28.643897 5017 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f3ad8c4f20bf955fbc05781f196725d983e26995a8d00c19f11477369f5a08b1" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 29 06:57:28 crc kubenswrapper[5017]: E0129 06:57:28.644020 5017 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-mrhnf" podUID="131b99f7-3558-4aec-a0bf-7c1ef0f35a2b" containerName="ovs-vswitchd" Jan 29 06:57:28 crc kubenswrapper[5017]: I0129 06:57:28.715397 5017 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/memcached-0" podUID="cc46a149-0256-4061-9e32-936b2ec12588" containerName="memcached" probeResult="failure" output="dial tcp 10.217.0.105:11211: i/o timeout" Jan 29 06:57:28 crc kubenswrapper[5017]: I0129 06:57:28.758693 5017 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-5b9cd4b645-x8pg4" podUID="c4fe6966-2467-4c3b-b907-d3a8e88eb497" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.157:9696/\": dial tcp 10.217.0.157:9696: connect: connection refused" Jan 29 06:57:29 crc kubenswrapper[5017]: I0129 06:57:29.307524 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"aaa7337b-7686-4fd2-9c52-6b76f9f3a3b1","Type":"ContainerDied","Data":"7bef3d53b020be2910231ae36677fcb80cae47c1e8a1f27f1cc054053fbdac74"} Jan 29 06:57:29 crc kubenswrapper[5017]: I0129 06:57:29.307601 5017 scope.go:117] "RemoveContainer" containerID="86c4c472b3c3903633b1b443d03a5262fb47cb7d8c1f6bdbf87f276659c9e547" Jan 29 06:57:29 crc kubenswrapper[5017]: I0129 06:57:29.308088 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 29 06:57:29 crc kubenswrapper[5017]: I0129 06:57:29.366181 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 29 06:57:29 crc kubenswrapper[5017]: I0129 06:57:29.375645 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 29 06:57:29 crc kubenswrapper[5017]: E0129 06:57:29.403278 5017 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaaa7337b_7686_4fd2_9c52_6b76f9f3a3b1.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaaa7337b_7686_4fd2_9c52_6b76f9f3a3b1.slice/crio-7bef3d53b020be2910231ae36677fcb80cae47c1e8a1f27f1cc054053fbdac74\": RecentStats: unable to find data in memory cache]" Jan 29 06:57:30 crc kubenswrapper[5017]: I0129 06:57:30.327232 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18edd5b3-27eb-43f3-8d6b-03490c243c78" path="/var/lib/kubelet/pods/18edd5b3-27eb-43f3-8d6b-03490c243c78/volumes" Jan 29 06:57:30 crc kubenswrapper[5017]: I0129 06:57:30.328103 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3131ebc7-0955-4d4d-8444-057df1cc52f1" path="/var/lib/kubelet/pods/3131ebc7-0955-4d4d-8444-057df1cc52f1/volumes" Jan 29 06:57:30 crc kubenswrapper[5017]: I0129 06:57:30.328773 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a909c2d3-90a4-41a0-af8e-ddb69ed4f41b" path="/var/lib/kubelet/pods/a909c2d3-90a4-41a0-af8e-ddb69ed4f41b/volumes" Jan 29 06:57:30 crc kubenswrapper[5017]: I0129 06:57:30.329803 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aaa7337b-7686-4fd2-9c52-6b76f9f3a3b1" path="/var/lib/kubelet/pods/aaa7337b-7686-4fd2-9c52-6b76f9f3a3b1/volumes" Jan 29 06:57:30 crc kubenswrapper[5017]: I0129 06:57:30.330851 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c118297d-1c5d-4234-930c-9c0e6b5bb29b" path="/var/lib/kubelet/pods/c118297d-1c5d-4234-930c-9c0e6b5bb29b/volumes" Jan 29 06:57:30 crc kubenswrapper[5017]: I0129 06:57:30.331444 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da406cff-454a-4287-a409-5ad51c535649" path="/var/lib/kubelet/pods/da406cff-454a-4287-a409-5ad51c535649/volumes" Jan 29 06:57:33 crc kubenswrapper[5017]: E0129 06:57:33.624382 5017 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1e3d13255a0abc9a24e35788246958c76121b82d0582fabb929e39b38e4591aa is running failed: container process not found" containerID="1e3d13255a0abc9a24e35788246958c76121b82d0582fabb929e39b38e4591aa" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 29 06:57:33 crc kubenswrapper[5017]: E0129 06:57:33.625380 5017 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1e3d13255a0abc9a24e35788246958c76121b82d0582fabb929e39b38e4591aa is running failed: container process not found" containerID="1e3d13255a0abc9a24e35788246958c76121b82d0582fabb929e39b38e4591aa" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 29 06:57:33 crc kubenswrapper[5017]: E0129 06:57:33.625793 5017 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = 
NotFound desc = container is not created or running: checking if PID of 1e3d13255a0abc9a24e35788246958c76121b82d0582fabb929e39b38e4591aa is running failed: container process not found" containerID="1e3d13255a0abc9a24e35788246958c76121b82d0582fabb929e39b38e4591aa" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 29 06:57:33 crc kubenswrapper[5017]: E0129 06:57:33.625832 5017 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1e3d13255a0abc9a24e35788246958c76121b82d0582fabb929e39b38e4591aa is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-mrhnf" podUID="131b99f7-3558-4aec-a0bf-7c1ef0f35a2b" containerName="ovsdb-server" Jan 29 06:57:33 crc kubenswrapper[5017]: E0129 06:57:33.626733 5017 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f3ad8c4f20bf955fbc05781f196725d983e26995a8d00c19f11477369f5a08b1" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 29 06:57:33 crc kubenswrapper[5017]: E0129 06:57:33.628988 5017 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f3ad8c4f20bf955fbc05781f196725d983e26995a8d00c19f11477369f5a08b1" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 29 06:57:33 crc kubenswrapper[5017]: E0129 06:57:33.634487 5017 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f3ad8c4f20bf955fbc05781f196725d983e26995a8d00c19f11477369f5a08b1" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 29 06:57:33 crc kubenswrapper[5017]: E0129 06:57:33.634612 5017 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-mrhnf" podUID="131b99f7-3558-4aec-a0bf-7c1ef0f35a2b" containerName="ovs-vswitchd" Jan 29 06:57:34 crc kubenswrapper[5017]: I0129 06:57:34.977645 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5b9cd4b645-x8pg4" Jan 29 06:57:35 crc kubenswrapper[5017]: I0129 06:57:35.113091 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c4fe6966-2467-4c3b-b907-d3a8e88eb497-config\") pod \"c4fe6966-2467-4c3b-b907-d3a8e88eb497\" (UID: \"c4fe6966-2467-4c3b-b907-d3a8e88eb497\") " Jan 29 06:57:35 crc kubenswrapper[5017]: I0129 06:57:35.113236 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4t7ld\" (UniqueName: \"kubernetes.io/projected/c4fe6966-2467-4c3b-b907-d3a8e88eb497-kube-api-access-4t7ld\") pod \"c4fe6966-2467-4c3b-b907-d3a8e88eb497\" (UID: \"c4fe6966-2467-4c3b-b907-d3a8e88eb497\") " Jan 29 06:57:35 crc kubenswrapper[5017]: I0129 06:57:35.113536 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4fe6966-2467-4c3b-b907-d3a8e88eb497-public-tls-certs\") pod \"c4fe6966-2467-4c3b-b907-d3a8e88eb497\" (UID: \"c4fe6966-2467-4c3b-b907-d3a8e88eb497\") " Jan 29 06:57:35 crc kubenswrapper[5017]: I0129 06:57:35.113927 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4fe6966-2467-4c3b-b907-d3a8e88eb497-ovndb-tls-certs\") pod \"c4fe6966-2467-4c3b-b907-d3a8e88eb497\" (UID: \"c4fe6966-2467-4c3b-b907-d3a8e88eb497\") " Jan 29 06:57:35 crc kubenswrapper[5017]: I0129 06:57:35.114084 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4fe6966-2467-4c3b-b907-d3a8e88eb497-combined-ca-bundle\") pod \"c4fe6966-2467-4c3b-b907-d3a8e88eb497\" (UID: \"c4fe6966-2467-4c3b-b907-d3a8e88eb497\") " Jan 29 06:57:35 crc kubenswrapper[5017]: I0129 06:57:35.114890 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4fe6966-2467-4c3b-b907-d3a8e88eb497-internal-tls-certs\") pod \"c4fe6966-2467-4c3b-b907-d3a8e88eb497\" (UID: \"c4fe6966-2467-4c3b-b907-d3a8e88eb497\") " Jan 29 06:57:35 crc kubenswrapper[5017]: I0129 06:57:35.115065 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c4fe6966-2467-4c3b-b907-d3a8e88eb497-httpd-config\") pod \"c4fe6966-2467-4c3b-b907-d3a8e88eb497\" (UID: \"c4fe6966-2467-4c3b-b907-d3a8e88eb497\") " Jan 29 06:57:35 crc kubenswrapper[5017]: I0129 06:57:35.126632 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4fe6966-2467-4c3b-b907-d3a8e88eb497-kube-api-access-4t7ld" (OuterVolumeSpecName: "kube-api-access-4t7ld") pod "c4fe6966-2467-4c3b-b907-d3a8e88eb497" (UID: "c4fe6966-2467-4c3b-b907-d3a8e88eb497"). InnerVolumeSpecName "kube-api-access-4t7ld". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:57:35 crc kubenswrapper[5017]: I0129 06:57:35.127053 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4fe6966-2467-4c3b-b907-d3a8e88eb497-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "c4fe6966-2467-4c3b-b907-d3a8e88eb497" (UID: "c4fe6966-2467-4c3b-b907-d3a8e88eb497"). InnerVolumeSpecName "httpd-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:57:35 crc kubenswrapper[5017]: I0129 06:57:35.166601 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4fe6966-2467-4c3b-b907-d3a8e88eb497-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "c4fe6966-2467-4c3b-b907-d3a8e88eb497" (UID: "c4fe6966-2467-4c3b-b907-d3a8e88eb497"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:57:35 crc kubenswrapper[5017]: I0129 06:57:35.172508 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4fe6966-2467-4c3b-b907-d3a8e88eb497-config" (OuterVolumeSpecName: "config") pod "c4fe6966-2467-4c3b-b907-d3a8e88eb497" (UID: "c4fe6966-2467-4c3b-b907-d3a8e88eb497"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:57:35 crc kubenswrapper[5017]: I0129 06:57:35.189072 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4fe6966-2467-4c3b-b907-d3a8e88eb497-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c4fe6966-2467-4c3b-b907-d3a8e88eb497" (UID: "c4fe6966-2467-4c3b-b907-d3a8e88eb497"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:57:35 crc kubenswrapper[5017]: I0129 06:57:35.190534 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4fe6966-2467-4c3b-b907-d3a8e88eb497-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "c4fe6966-2467-4c3b-b907-d3a8e88eb497" (UID: "c4fe6966-2467-4c3b-b907-d3a8e88eb497"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:57:35 crc kubenswrapper[5017]: I0129 06:57:35.200164 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4fe6966-2467-4c3b-b907-d3a8e88eb497-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "c4fe6966-2467-4c3b-b907-d3a8e88eb497" (UID: "c4fe6966-2467-4c3b-b907-d3a8e88eb497"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:57:35 crc kubenswrapper[5017]: I0129 06:57:35.217156 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4t7ld\" (UniqueName: \"kubernetes.io/projected/c4fe6966-2467-4c3b-b907-d3a8e88eb497-kube-api-access-4t7ld\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:35 crc kubenswrapper[5017]: I0129 06:57:35.217216 5017 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4fe6966-2467-4c3b-b907-d3a8e88eb497-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:35 crc kubenswrapper[5017]: I0129 06:57:35.217227 5017 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4fe6966-2467-4c3b-b907-d3a8e88eb497-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:35 crc kubenswrapper[5017]: I0129 06:57:35.217237 5017 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4fe6966-2467-4c3b-b907-d3a8e88eb497-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:35 crc kubenswrapper[5017]: I0129 06:57:35.217248 5017 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4fe6966-2467-4c3b-b907-d3a8e88eb497-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:35 crc kubenswrapper[5017]: I0129 06:57:35.217258 5017 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c4fe6966-2467-4c3b-b907-d3a8e88eb497-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:35 crc kubenswrapper[5017]: I0129 06:57:35.217268 5017 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/c4fe6966-2467-4c3b-b907-d3a8e88eb497-config\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:35 crc kubenswrapper[5017]: I0129 06:57:35.384210 5017 generic.go:334] "Generic (PLEG): container finished" podID="c4fe6966-2467-4c3b-b907-d3a8e88eb497" containerID="f39df6f24b1e86a7986bd4e56e3006d14c28347df43c84bc5c98621eef1720f6" exitCode=0 Jan 29 06:57:35 crc kubenswrapper[5017]: I0129 06:57:35.384262 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5b9cd4b645-x8pg4" event={"ID":"c4fe6966-2467-4c3b-b907-d3a8e88eb497","Type":"ContainerDied","Data":"f39df6f24b1e86a7986bd4e56e3006d14c28347df43c84bc5c98621eef1720f6"} Jan 29 06:57:35 crc kubenswrapper[5017]: I0129 06:57:35.384299 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5b9cd4b645-x8pg4" event={"ID":"c4fe6966-2467-4c3b-b907-d3a8e88eb497","Type":"ContainerDied","Data":"d0cacba71ede2c5bf64f44c7d63369cd1baf86505779cc7d41e57a8234c5da19"} Jan 29 06:57:35 crc kubenswrapper[5017]: I0129 06:57:35.384321 5017 scope.go:117] "RemoveContainer" containerID="351bf740841608e8a491520445ac2fda75daae4f0798aac354283b20b04c8d8b" Jan 29 06:57:35 crc kubenswrapper[5017]: I0129 06:57:35.384462 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5b9cd4b645-x8pg4" Jan 29 06:57:35 crc kubenswrapper[5017]: I0129 06:57:35.428183 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5b9cd4b645-x8pg4"] Jan 29 06:57:35 crc kubenswrapper[5017]: I0129 06:57:35.432788 5017 scope.go:117] "RemoveContainer" containerID="f39df6f24b1e86a7986bd4e56e3006d14c28347df43c84bc5c98621eef1720f6" Jan 29 06:57:35 crc kubenswrapper[5017]: I0129 06:57:35.441004 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-5b9cd4b645-x8pg4"] Jan 29 06:57:35 crc kubenswrapper[5017]: I0129 06:57:35.464450 5017 scope.go:117] "RemoveContainer" containerID="351bf740841608e8a491520445ac2fda75daae4f0798aac354283b20b04c8d8b" Jan 29 06:57:35 crc kubenswrapper[5017]: E0129 06:57:35.465406 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"351bf740841608e8a491520445ac2fda75daae4f0798aac354283b20b04c8d8b\": container with ID starting with 351bf740841608e8a491520445ac2fda75daae4f0798aac354283b20b04c8d8b not found: ID does not exist" containerID="351bf740841608e8a491520445ac2fda75daae4f0798aac354283b20b04c8d8b" Jan 29 06:57:35 crc kubenswrapper[5017]: I0129 06:57:35.465469 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"351bf740841608e8a491520445ac2fda75daae4f0798aac354283b20b04c8d8b"} err="failed to get container status \"351bf740841608e8a491520445ac2fda75daae4f0798aac354283b20b04c8d8b\": rpc error: code = NotFound desc = could not find container \"351bf740841608e8a491520445ac2fda75daae4f0798aac354283b20b04c8d8b\": container with ID starting with 351bf740841608e8a491520445ac2fda75daae4f0798aac354283b20b04c8d8b not found: ID does not exist" Jan 29 06:57:35 crc kubenswrapper[5017]: I0129 06:57:35.465506 5017 scope.go:117] "RemoveContainer" containerID="f39df6f24b1e86a7986bd4e56e3006d14c28347df43c84bc5c98621eef1720f6" Jan 29 06:57:35 crc kubenswrapper[5017]: E0129 06:57:35.466117 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f39df6f24b1e86a7986bd4e56e3006d14c28347df43c84bc5c98621eef1720f6\": container with ID starting with f39df6f24b1e86a7986bd4e56e3006d14c28347df43c84bc5c98621eef1720f6 not found: ID does not exist" containerID="f39df6f24b1e86a7986bd4e56e3006d14c28347df43c84bc5c98621eef1720f6" Jan 29 06:57:35 crc kubenswrapper[5017]: I0129 06:57:35.466150 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f39df6f24b1e86a7986bd4e56e3006d14c28347df43c84bc5c98621eef1720f6"} err="failed to get container status \"f39df6f24b1e86a7986bd4e56e3006d14c28347df43c84bc5c98621eef1720f6\": rpc error: code = NotFound desc = could not find container \"f39df6f24b1e86a7986bd4e56e3006d14c28347df43c84bc5c98621eef1720f6\": container with ID starting with f39df6f24b1e86a7986bd4e56e3006d14c28347df43c84bc5c98621eef1720f6 not found: ID does not exist" Jan 29 06:57:36 crc kubenswrapper[5017]: I0129 06:57:36.329812 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4fe6966-2467-4c3b-b907-d3a8e88eb497" path="/var/lib/kubelet/pods/c4fe6966-2467-4c3b-b907-d3a8e88eb497/volumes" Jan 29 06:57:38 crc kubenswrapper[5017]: E0129 06:57:38.624747 5017 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
1e3d13255a0abc9a24e35788246958c76121b82d0582fabb929e39b38e4591aa is running failed: container process not found" containerID="1e3d13255a0abc9a24e35788246958c76121b82d0582fabb929e39b38e4591aa" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 29 06:57:38 crc kubenswrapper[5017]: E0129 06:57:38.626773 5017 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f3ad8c4f20bf955fbc05781f196725d983e26995a8d00c19f11477369f5a08b1" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 29 06:57:38 crc kubenswrapper[5017]: E0129 06:57:38.626804 5017 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1e3d13255a0abc9a24e35788246958c76121b82d0582fabb929e39b38e4591aa is running failed: container process not found" containerID="1e3d13255a0abc9a24e35788246958c76121b82d0582fabb929e39b38e4591aa" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 29 06:57:38 crc kubenswrapper[5017]: E0129 06:57:38.628146 5017 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1e3d13255a0abc9a24e35788246958c76121b82d0582fabb929e39b38e4591aa is running failed: container process not found" containerID="1e3d13255a0abc9a24e35788246958c76121b82d0582fabb929e39b38e4591aa" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 29 06:57:38 crc kubenswrapper[5017]: E0129 06:57:38.628194 5017 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1e3d13255a0abc9a24e35788246958c76121b82d0582fabb929e39b38e4591aa is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-mrhnf" podUID="131b99f7-3558-4aec-a0bf-7c1ef0f35a2b" containerName="ovsdb-server" Jan 29 06:57:38 crc kubenswrapper[5017]: E0129 06:57:38.630527 5017 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f3ad8c4f20bf955fbc05781f196725d983e26995a8d00c19f11477369f5a08b1" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 29 06:57:38 crc kubenswrapper[5017]: E0129 06:57:38.632758 5017 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f3ad8c4f20bf955fbc05781f196725d983e26995a8d00c19f11477369f5a08b1" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 29 06:57:38 crc kubenswrapper[5017]: E0129 06:57:38.632857 5017 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-mrhnf" podUID="131b99f7-3558-4aec-a0bf-7c1ef0f35a2b" containerName="ovs-vswitchd" Jan 29 06:57:43 crc kubenswrapper[5017]: E0129 06:57:43.625187 5017 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1e3d13255a0abc9a24e35788246958c76121b82d0582fabb929e39b38e4591aa is running failed: container 
process not found" containerID="1e3d13255a0abc9a24e35788246958c76121b82d0582fabb929e39b38e4591aa" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 29 06:57:43 crc kubenswrapper[5017]: E0129 06:57:43.626647 5017 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1e3d13255a0abc9a24e35788246958c76121b82d0582fabb929e39b38e4591aa is running failed: container process not found" containerID="1e3d13255a0abc9a24e35788246958c76121b82d0582fabb929e39b38e4591aa" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 29 06:57:43 crc kubenswrapper[5017]: E0129 06:57:43.627387 5017 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1e3d13255a0abc9a24e35788246958c76121b82d0582fabb929e39b38e4591aa is running failed: container process not found" containerID="1e3d13255a0abc9a24e35788246958c76121b82d0582fabb929e39b38e4591aa" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 29 06:57:43 crc kubenswrapper[5017]: E0129 06:57:43.627480 5017 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1e3d13255a0abc9a24e35788246958c76121b82d0582fabb929e39b38e4591aa is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-mrhnf" podUID="131b99f7-3558-4aec-a0bf-7c1ef0f35a2b" containerName="ovsdb-server" Jan 29 06:57:43 crc kubenswrapper[5017]: E0129 06:57:43.627900 5017 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f3ad8c4f20bf955fbc05781f196725d983e26995a8d00c19f11477369f5a08b1" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 29 06:57:43 crc kubenswrapper[5017]: E0129 06:57:43.629528 5017 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f3ad8c4f20bf955fbc05781f196725d983e26995a8d00c19f11477369f5a08b1" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 29 06:57:43 crc kubenswrapper[5017]: E0129 06:57:43.631925 5017 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f3ad8c4f20bf955fbc05781f196725d983e26995a8d00c19f11477369f5a08b1" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 29 06:57:43 crc kubenswrapper[5017]: E0129 06:57:43.631995 5017 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-mrhnf" podUID="131b99f7-3558-4aec-a0bf-7c1ef0f35a2b" containerName="ovs-vswitchd" Jan 29 06:57:48 crc kubenswrapper[5017]: I0129 06:57:48.553316 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-mrhnf_131b99f7-3558-4aec-a0bf-7c1ef0f35a2b/ovs-vswitchd/0.log" Jan 29 06:57:48 crc kubenswrapper[5017]: I0129 06:57:48.556745 5017 generic.go:334] "Generic (PLEG): container finished" podID="131b99f7-3558-4aec-a0bf-7c1ef0f35a2b" 
containerID="f3ad8c4f20bf955fbc05781f196725d983e26995a8d00c19f11477369f5a08b1" exitCode=137 Jan 29 06:57:48 crc kubenswrapper[5017]: I0129 06:57:48.556789 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-mrhnf" event={"ID":"131b99f7-3558-4aec-a0bf-7c1ef0f35a2b","Type":"ContainerDied","Data":"f3ad8c4f20bf955fbc05781f196725d983e26995a8d00c19f11477369f5a08b1"} Jan 29 06:57:48 crc kubenswrapper[5017]: E0129 06:57:48.624808 5017 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1e3d13255a0abc9a24e35788246958c76121b82d0582fabb929e39b38e4591aa is running failed: container process not found" containerID="1e3d13255a0abc9a24e35788246958c76121b82d0582fabb929e39b38e4591aa" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 29 06:57:48 crc kubenswrapper[5017]: E0129 06:57:48.624810 5017 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f3ad8c4f20bf955fbc05781f196725d983e26995a8d00c19f11477369f5a08b1 is running failed: container process not found" containerID="f3ad8c4f20bf955fbc05781f196725d983e26995a8d00c19f11477369f5a08b1" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 29 06:57:48 crc kubenswrapper[5017]: E0129 06:57:48.625180 5017 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f3ad8c4f20bf955fbc05781f196725d983e26995a8d00c19f11477369f5a08b1 is running failed: container process not found" containerID="f3ad8c4f20bf955fbc05781f196725d983e26995a8d00c19f11477369f5a08b1" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 29 06:57:48 crc kubenswrapper[5017]: E0129 06:57:48.625357 5017 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1e3d13255a0abc9a24e35788246958c76121b82d0582fabb929e39b38e4591aa is running failed: container process not found" containerID="1e3d13255a0abc9a24e35788246958c76121b82d0582fabb929e39b38e4591aa" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 29 06:57:48 crc kubenswrapper[5017]: E0129 06:57:48.625446 5017 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f3ad8c4f20bf955fbc05781f196725d983e26995a8d00c19f11477369f5a08b1 is running failed: container process not found" containerID="f3ad8c4f20bf955fbc05781f196725d983e26995a8d00c19f11477369f5a08b1" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 29 06:57:48 crc kubenswrapper[5017]: E0129 06:57:48.625470 5017 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f3ad8c4f20bf955fbc05781f196725d983e26995a8d00c19f11477369f5a08b1 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-mrhnf" podUID="131b99f7-3558-4aec-a0bf-7c1ef0f35a2b" containerName="ovs-vswitchd" Jan 29 06:57:48 crc kubenswrapper[5017]: E0129 06:57:48.626069 5017 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1e3d13255a0abc9a24e35788246958c76121b82d0582fabb929e39b38e4591aa is running failed: container process not found" 
containerID="1e3d13255a0abc9a24e35788246958c76121b82d0582fabb929e39b38e4591aa" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 29 06:57:48 crc kubenswrapper[5017]: E0129 06:57:48.626141 5017 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1e3d13255a0abc9a24e35788246958c76121b82d0582fabb929e39b38e4591aa is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-mrhnf" podUID="131b99f7-3558-4aec-a0bf-7c1ef0f35a2b" containerName="ovsdb-server" Jan 29 06:57:48 crc kubenswrapper[5017]: I0129 06:57:48.919236 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-mrhnf_131b99f7-3558-4aec-a0bf-7c1ef0f35a2b/ovs-vswitchd/0.log" Jan 29 06:57:48 crc kubenswrapper[5017]: I0129 06:57:48.921346 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-mrhnf" Jan 29 06:57:48 crc kubenswrapper[5017]: I0129 06:57:48.994648 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-89q2r\" (UniqueName: \"kubernetes.io/projected/131b99f7-3558-4aec-a0bf-7c1ef0f35a2b-kube-api-access-89q2r\") pod \"131b99f7-3558-4aec-a0bf-7c1ef0f35a2b\" (UID: \"131b99f7-3558-4aec-a0bf-7c1ef0f35a2b\") " Jan 29 06:57:48 crc kubenswrapper[5017]: I0129 06:57:48.994766 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/131b99f7-3558-4aec-a0bf-7c1ef0f35a2b-scripts\") pod \"131b99f7-3558-4aec-a0bf-7c1ef0f35a2b\" (UID: \"131b99f7-3558-4aec-a0bf-7c1ef0f35a2b\") " Jan 29 06:57:48 crc kubenswrapper[5017]: I0129 06:57:48.994795 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/131b99f7-3558-4aec-a0bf-7c1ef0f35a2b-var-lib\") pod \"131b99f7-3558-4aec-a0bf-7c1ef0f35a2b\" (UID: \"131b99f7-3558-4aec-a0bf-7c1ef0f35a2b\") " Jan 29 06:57:48 crc kubenswrapper[5017]: I0129 06:57:48.994879 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/131b99f7-3558-4aec-a0bf-7c1ef0f35a2b-var-log\") pod \"131b99f7-3558-4aec-a0bf-7c1ef0f35a2b\" (UID: \"131b99f7-3558-4aec-a0bf-7c1ef0f35a2b\") " Jan 29 06:57:48 crc kubenswrapper[5017]: I0129 06:57:48.994924 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/131b99f7-3558-4aec-a0bf-7c1ef0f35a2b-var-run\") pod \"131b99f7-3558-4aec-a0bf-7c1ef0f35a2b\" (UID: \"131b99f7-3558-4aec-a0bf-7c1ef0f35a2b\") " Jan 29 06:57:48 crc kubenswrapper[5017]: I0129 06:57:48.994992 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/131b99f7-3558-4aec-a0bf-7c1ef0f35a2b-etc-ovs\") pod \"131b99f7-3558-4aec-a0bf-7c1ef0f35a2b\" (UID: \"131b99f7-3558-4aec-a0bf-7c1ef0f35a2b\") " Jan 29 06:57:48 crc kubenswrapper[5017]: I0129 06:57:48.995337 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/131b99f7-3558-4aec-a0bf-7c1ef0f35a2b-etc-ovs" (OuterVolumeSpecName: "etc-ovs") pod "131b99f7-3558-4aec-a0bf-7c1ef0f35a2b" (UID: "131b99f7-3558-4aec-a0bf-7c1ef0f35a2b"). InnerVolumeSpecName "etc-ovs". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 06:57:48 crc kubenswrapper[5017]: I0129 06:57:48.995921 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/131b99f7-3558-4aec-a0bf-7c1ef0f35a2b-var-log" (OuterVolumeSpecName: "var-log") pod "131b99f7-3558-4aec-a0bf-7c1ef0f35a2b" (UID: "131b99f7-3558-4aec-a0bf-7c1ef0f35a2b"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 06:57:48 crc kubenswrapper[5017]: I0129 06:57:48.996029 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/131b99f7-3558-4aec-a0bf-7c1ef0f35a2b-var-lib" (OuterVolumeSpecName: "var-lib") pod "131b99f7-3558-4aec-a0bf-7c1ef0f35a2b" (UID: "131b99f7-3558-4aec-a0bf-7c1ef0f35a2b"). InnerVolumeSpecName "var-lib". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 06:57:48 crc kubenswrapper[5017]: I0129 06:57:48.996056 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/131b99f7-3558-4aec-a0bf-7c1ef0f35a2b-var-run" (OuterVolumeSpecName: "var-run") pod "131b99f7-3558-4aec-a0bf-7c1ef0f35a2b" (UID: "131b99f7-3558-4aec-a0bf-7c1ef0f35a2b"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 06:57:48 crc kubenswrapper[5017]: I0129 06:57:48.996543 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/131b99f7-3558-4aec-a0bf-7c1ef0f35a2b-scripts" (OuterVolumeSpecName: "scripts") pod "131b99f7-3558-4aec-a0bf-7c1ef0f35a2b" (UID: "131b99f7-3558-4aec-a0bf-7c1ef0f35a2b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:57:49 crc kubenswrapper[5017]: I0129 06:57:49.006149 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/131b99f7-3558-4aec-a0bf-7c1ef0f35a2b-kube-api-access-89q2r" (OuterVolumeSpecName: "kube-api-access-89q2r") pod "131b99f7-3558-4aec-a0bf-7c1ef0f35a2b" (UID: "131b99f7-3558-4aec-a0bf-7c1ef0f35a2b"). InnerVolumeSpecName "kube-api-access-89q2r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:57:49 crc kubenswrapper[5017]: I0129 06:57:49.097271 5017 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/131b99f7-3558-4aec-a0bf-7c1ef0f35a2b-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:49 crc kubenswrapper[5017]: I0129 06:57:49.097327 5017 reconciler_common.go:293] "Volume detached for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/131b99f7-3558-4aec-a0bf-7c1ef0f35a2b-var-lib\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:49 crc kubenswrapper[5017]: I0129 06:57:49.097340 5017 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/131b99f7-3558-4aec-a0bf-7c1ef0f35a2b-var-log\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:49 crc kubenswrapper[5017]: I0129 06:57:49.097352 5017 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/131b99f7-3558-4aec-a0bf-7c1ef0f35a2b-var-run\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:49 crc kubenswrapper[5017]: I0129 06:57:49.097364 5017 reconciler_common.go:293] "Volume detached for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/131b99f7-3558-4aec-a0bf-7c1ef0f35a2b-etc-ovs\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:49 crc kubenswrapper[5017]: I0129 06:57:49.097377 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-89q2r\" (UniqueName: \"kubernetes.io/projected/131b99f7-3558-4aec-a0bf-7c1ef0f35a2b-kube-api-access-89q2r\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:49 crc kubenswrapper[5017]: I0129 06:57:49.414535 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 29 06:57:49 crc kubenswrapper[5017]: I0129 06:57:49.504652 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/6d082326-495c-4078-974e-714379243884-cache\") pod \"6d082326-495c-4078-974e-714379243884\" (UID: \"6d082326-495c-4078-974e-714379243884\") " Jan 29 06:57:49 crc kubenswrapper[5017]: I0129 06:57:49.504917 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/6d082326-495c-4078-974e-714379243884-etc-swift\") pod \"6d082326-495c-4078-974e-714379243884\" (UID: \"6d082326-495c-4078-974e-714379243884\") " Jan 29 06:57:49 crc kubenswrapper[5017]: I0129 06:57:49.505029 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swift\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"6d082326-495c-4078-974e-714379243884\" (UID: \"6d082326-495c-4078-974e-714379243884\") " Jan 29 06:57:49 crc kubenswrapper[5017]: I0129 06:57:49.505111 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/6d082326-495c-4078-974e-714379243884-lock\") pod \"6d082326-495c-4078-974e-714379243884\" (UID: \"6d082326-495c-4078-974e-714379243884\") " Jan 29 06:57:49 crc kubenswrapper[5017]: I0129 06:57:49.505177 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nqwrd\" (UniqueName: \"kubernetes.io/projected/6d082326-495c-4078-974e-714379243884-kube-api-access-nqwrd\") pod \"6d082326-495c-4078-974e-714379243884\" (UID: \"6d082326-495c-4078-974e-714379243884\") " Jan 29 06:57:49 crc kubenswrapper[5017]: I0129 06:57:49.505240 5017 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d082326-495c-4078-974e-714379243884-combined-ca-bundle\") pod \"6d082326-495c-4078-974e-714379243884\" (UID: \"6d082326-495c-4078-974e-714379243884\") " Jan 29 06:57:49 crc kubenswrapper[5017]: I0129 06:57:49.505822 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d082326-495c-4078-974e-714379243884-cache" (OuterVolumeSpecName: "cache") pod "6d082326-495c-4078-974e-714379243884" (UID: "6d082326-495c-4078-974e-714379243884"). InnerVolumeSpecName "cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:57:49 crc kubenswrapper[5017]: I0129 06:57:49.506401 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d082326-495c-4078-974e-714379243884-lock" (OuterVolumeSpecName: "lock") pod "6d082326-495c-4078-974e-714379243884" (UID: "6d082326-495c-4078-974e-714379243884"). InnerVolumeSpecName "lock". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:57:49 crc kubenswrapper[5017]: I0129 06:57:49.510225 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d082326-495c-4078-974e-714379243884-kube-api-access-nqwrd" (OuterVolumeSpecName: "kube-api-access-nqwrd") pod "6d082326-495c-4078-974e-714379243884" (UID: "6d082326-495c-4078-974e-714379243884"). InnerVolumeSpecName "kube-api-access-nqwrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:57:49 crc kubenswrapper[5017]: I0129 06:57:49.512443 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "swift") pod "6d082326-495c-4078-974e-714379243884" (UID: "6d082326-495c-4078-974e-714379243884"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 29 06:57:49 crc kubenswrapper[5017]: I0129 06:57:49.515193 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d082326-495c-4078-974e-714379243884-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "6d082326-495c-4078-974e-714379243884" (UID: "6d082326-495c-4078-974e-714379243884"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:57:49 crc kubenswrapper[5017]: I0129 06:57:49.577666 5017 generic.go:334] "Generic (PLEG): container finished" podID="6d082326-495c-4078-974e-714379243884" containerID="a2aa1b791b58fdf107ef7908d8702f09965cf473d76f85f3f451236315223345" exitCode=137 Jan 29 06:57:49 crc kubenswrapper[5017]: I0129 06:57:49.577774 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"6d082326-495c-4078-974e-714379243884","Type":"ContainerDied","Data":"a2aa1b791b58fdf107ef7908d8702f09965cf473d76f85f3f451236315223345"} Jan 29 06:57:49 crc kubenswrapper[5017]: I0129 06:57:49.578053 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"6d082326-495c-4078-974e-714379243884","Type":"ContainerDied","Data":"77205f1116f100fd849b9e7c1d3100fdca7c6527b145fa74621d2a15a78e0d5d"} Jan 29 06:57:49 crc kubenswrapper[5017]: I0129 06:57:49.578091 5017 scope.go:117] "RemoveContainer" containerID="a2aa1b791b58fdf107ef7908d8702f09965cf473d76f85f3f451236315223345" Jan 29 06:57:49 crc kubenswrapper[5017]: I0129 06:57:49.578377 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 29 06:57:49 crc kubenswrapper[5017]: I0129 06:57:49.586567 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-mrhnf_131b99f7-3558-4aec-a0bf-7c1ef0f35a2b/ovs-vswitchd/0.log" Jan 29 06:57:49 crc kubenswrapper[5017]: I0129 06:57:49.587699 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-mrhnf" event={"ID":"131b99f7-3558-4aec-a0bf-7c1ef0f35a2b","Type":"ContainerDied","Data":"adb8db4c014230916e54c16802b5b3256625e42c4eae64360e39d804ad5a1d3c"} Jan 29 06:57:49 crc kubenswrapper[5017]: I0129 06:57:49.587835 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-mrhnf" Jan 29 06:57:49 crc kubenswrapper[5017]: I0129 06:57:49.607760 5017 reconciler_common.go:293] "Volume detached for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/6d082326-495c-4078-974e-714379243884-lock\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:49 crc kubenswrapper[5017]: I0129 06:57:49.607797 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nqwrd\" (UniqueName: \"kubernetes.io/projected/6d082326-495c-4078-974e-714379243884-kube-api-access-nqwrd\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:49 crc kubenswrapper[5017]: I0129 06:57:49.607809 5017 reconciler_common.go:293] "Volume detached for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/6d082326-495c-4078-974e-714379243884-cache\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:49 crc kubenswrapper[5017]: I0129 06:57:49.607820 5017 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/6d082326-495c-4078-974e-714379243884-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:49 crc kubenswrapper[5017]: I0129 06:57:49.607857 5017 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Jan 29 06:57:49 crc kubenswrapper[5017]: I0129 06:57:49.626089 5017 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Jan 29 06:57:49 crc kubenswrapper[5017]: I0129 06:57:49.660270 5017 scope.go:117] "RemoveContainer" containerID="eea4192399327817cb3d1432e2e02524e5b714a0b135c1bb086ec0b7a261cb45" Jan 29 06:57:49 crc kubenswrapper[5017]: I0129 06:57:49.676787 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-mrhnf"] Jan 29 06:57:49 crc kubenswrapper[5017]: I0129 06:57:49.685932 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-ovs-mrhnf"] Jan 29 06:57:49 crc kubenswrapper[5017]: I0129 06:57:49.689635 5017 scope.go:117] "RemoveContainer" containerID="bde85bd88a643b945fa8485a5e4e6b486a320b6804c92a8b2d9a9aa7bebed684" Jan 29 06:57:49 crc kubenswrapper[5017]: I0129 06:57:49.710448 5017 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:49 crc kubenswrapper[5017]: I0129 06:57:49.711564 5017 scope.go:117] "RemoveContainer" containerID="1a932253de89bf41d237a07b2d680b87b0b80d0fe5212e5b251a5b4e577d88f0" Jan 29 06:57:49 crc kubenswrapper[5017]: I0129 06:57:49.732483 5017 scope.go:117] "RemoveContainer" containerID="54137e09fc7407811fd958b7fb19f8b5d07304823df63ae17c627a6330f09db1" Jan 29 06:57:49 crc kubenswrapper[5017]: I0129 06:57:49.757276 5017 scope.go:117] "RemoveContainer" containerID="b9a4e6b5332e31b3b4226cbbd2d49f4c6e57429f3879656243b9be909902ea39" Jan 29 06:57:49 crc kubenswrapper[5017]: I0129 06:57:49.779306 5017 scope.go:117] "RemoveContainer" containerID="ffd302baed269f819a892350ed39e7c2a2d614097fd561de04df743407c07d40" Jan 29 06:57:49 crc kubenswrapper[5017]: I0129 06:57:49.804463 5017 scope.go:117] "RemoveContainer" containerID="3f1da0c6b7bd9861d571d1a8d0e937d175d1e5c1f0e245d87e1db566ec6c9a8c" Jan 29 06:57:49 crc kubenswrapper[5017]: I0129 06:57:49.828913 5017 scope.go:117] "RemoveContainer" 
containerID="fd46ff39c6745d2a3794eec673948c7c169964d18cc7c9ac4b9f4a6f60045a35" Jan 29 06:57:49 crc kubenswrapper[5017]: I0129 06:57:49.850248 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d082326-495c-4078-974e-714379243884-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6d082326-495c-4078-974e-714379243884" (UID: "6d082326-495c-4078-974e-714379243884"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:57:49 crc kubenswrapper[5017]: I0129 06:57:49.858101 5017 scope.go:117] "RemoveContainer" containerID="c652a43e8305c1a9910a6bd5b5ee06ee2fb25e0723a0ebf769650ee3997e0aeb" Jan 29 06:57:49 crc kubenswrapper[5017]: I0129 06:57:49.886620 5017 scope.go:117] "RemoveContainer" containerID="18d811ebf9ab9b398503916cdeff50dc63ec6ab67d8237d525beefff9dcb167e" Jan 29 06:57:49 crc kubenswrapper[5017]: I0129 06:57:49.912540 5017 scope.go:117] "RemoveContainer" containerID="06cfdaa53a75f84fd7f6330f5db9c5c157e9d48c62c75510196e0abae551fc02" Jan 29 06:57:49 crc kubenswrapper[5017]: I0129 06:57:49.913884 5017 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d082326-495c-4078-974e-714379243884-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 06:57:49 crc kubenswrapper[5017]: I0129 06:57:49.920385 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"] Jan 29 06:57:49 crc kubenswrapper[5017]: I0129 06:57:49.926236 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-storage-0"] Jan 29 06:57:49 crc kubenswrapper[5017]: I0129 06:57:49.938215 5017 scope.go:117] "RemoveContainer" containerID="999b2815ecbfda6809422021c729d05da84256a8ca21f081cadf4bf9e49655ea" Jan 29 06:57:49 crc kubenswrapper[5017]: I0129 06:57:49.961668 5017 scope.go:117] "RemoveContainer" containerID="51ab0b4a34c76b32e8b560b2df9c2b6bedc67ec5c4d63d10fdd36cb7f8ef1edc" Jan 29 06:57:49 crc kubenswrapper[5017]: I0129 06:57:49.981701 5017 scope.go:117] "RemoveContainer" containerID="78c56e210bdc11cf82c13cdc564f8b588d7f490c0c73661ab691a65e75fe9237" Jan 29 06:57:50 crc kubenswrapper[5017]: I0129 06:57:50.000508 5017 scope.go:117] "RemoveContainer" containerID="a2aa1b791b58fdf107ef7908d8702f09965cf473d76f85f3f451236315223345" Jan 29 06:57:50 crc kubenswrapper[5017]: E0129 06:57:50.001227 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2aa1b791b58fdf107ef7908d8702f09965cf473d76f85f3f451236315223345\": container with ID starting with a2aa1b791b58fdf107ef7908d8702f09965cf473d76f85f3f451236315223345 not found: ID does not exist" containerID="a2aa1b791b58fdf107ef7908d8702f09965cf473d76f85f3f451236315223345" Jan 29 06:57:50 crc kubenswrapper[5017]: I0129 06:57:50.001275 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2aa1b791b58fdf107ef7908d8702f09965cf473d76f85f3f451236315223345"} err="failed to get container status \"a2aa1b791b58fdf107ef7908d8702f09965cf473d76f85f3f451236315223345\": rpc error: code = NotFound desc = could not find container \"a2aa1b791b58fdf107ef7908d8702f09965cf473d76f85f3f451236315223345\": container with ID starting with a2aa1b791b58fdf107ef7908d8702f09965cf473d76f85f3f451236315223345 not found: ID does not exist" Jan 29 06:57:50 crc kubenswrapper[5017]: I0129 06:57:50.001305 5017 scope.go:117] "RemoveContainer" 
containerID="eea4192399327817cb3d1432e2e02524e5b714a0b135c1bb086ec0b7a261cb45" Jan 29 06:57:50 crc kubenswrapper[5017]: E0129 06:57:50.001699 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eea4192399327817cb3d1432e2e02524e5b714a0b135c1bb086ec0b7a261cb45\": container with ID starting with eea4192399327817cb3d1432e2e02524e5b714a0b135c1bb086ec0b7a261cb45 not found: ID does not exist" containerID="eea4192399327817cb3d1432e2e02524e5b714a0b135c1bb086ec0b7a261cb45" Jan 29 06:57:50 crc kubenswrapper[5017]: I0129 06:57:50.001762 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eea4192399327817cb3d1432e2e02524e5b714a0b135c1bb086ec0b7a261cb45"} err="failed to get container status \"eea4192399327817cb3d1432e2e02524e5b714a0b135c1bb086ec0b7a261cb45\": rpc error: code = NotFound desc = could not find container \"eea4192399327817cb3d1432e2e02524e5b714a0b135c1bb086ec0b7a261cb45\": container with ID starting with eea4192399327817cb3d1432e2e02524e5b714a0b135c1bb086ec0b7a261cb45 not found: ID does not exist" Jan 29 06:57:50 crc kubenswrapper[5017]: I0129 06:57:50.001801 5017 scope.go:117] "RemoveContainer" containerID="bde85bd88a643b945fa8485a5e4e6b486a320b6804c92a8b2d9a9aa7bebed684" Jan 29 06:57:50 crc kubenswrapper[5017]: E0129 06:57:50.002439 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bde85bd88a643b945fa8485a5e4e6b486a320b6804c92a8b2d9a9aa7bebed684\": container with ID starting with bde85bd88a643b945fa8485a5e4e6b486a320b6804c92a8b2d9a9aa7bebed684 not found: ID does not exist" containerID="bde85bd88a643b945fa8485a5e4e6b486a320b6804c92a8b2d9a9aa7bebed684" Jan 29 06:57:50 crc kubenswrapper[5017]: I0129 06:57:50.002474 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bde85bd88a643b945fa8485a5e4e6b486a320b6804c92a8b2d9a9aa7bebed684"} err="failed to get container status \"bde85bd88a643b945fa8485a5e4e6b486a320b6804c92a8b2d9a9aa7bebed684\": rpc error: code = NotFound desc = could not find container \"bde85bd88a643b945fa8485a5e4e6b486a320b6804c92a8b2d9a9aa7bebed684\": container with ID starting with bde85bd88a643b945fa8485a5e4e6b486a320b6804c92a8b2d9a9aa7bebed684 not found: ID does not exist" Jan 29 06:57:50 crc kubenswrapper[5017]: I0129 06:57:50.002492 5017 scope.go:117] "RemoveContainer" containerID="1a932253de89bf41d237a07b2d680b87b0b80d0fe5212e5b251a5b4e577d88f0" Jan 29 06:57:50 crc kubenswrapper[5017]: E0129 06:57:50.002788 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a932253de89bf41d237a07b2d680b87b0b80d0fe5212e5b251a5b4e577d88f0\": container with ID starting with 1a932253de89bf41d237a07b2d680b87b0b80d0fe5212e5b251a5b4e577d88f0 not found: ID does not exist" containerID="1a932253de89bf41d237a07b2d680b87b0b80d0fe5212e5b251a5b4e577d88f0" Jan 29 06:57:50 crc kubenswrapper[5017]: I0129 06:57:50.002824 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a932253de89bf41d237a07b2d680b87b0b80d0fe5212e5b251a5b4e577d88f0"} err="failed to get container status \"1a932253de89bf41d237a07b2d680b87b0b80d0fe5212e5b251a5b4e577d88f0\": rpc error: code = NotFound desc = could not find container \"1a932253de89bf41d237a07b2d680b87b0b80d0fe5212e5b251a5b4e577d88f0\": container with ID starting with 
1a932253de89bf41d237a07b2d680b87b0b80d0fe5212e5b251a5b4e577d88f0 not found: ID does not exist" Jan 29 06:57:50 crc kubenswrapper[5017]: I0129 06:57:50.002845 5017 scope.go:117] "RemoveContainer" containerID="54137e09fc7407811fd958b7fb19f8b5d07304823df63ae17c627a6330f09db1" Jan 29 06:57:50 crc kubenswrapper[5017]: E0129 06:57:50.003129 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54137e09fc7407811fd958b7fb19f8b5d07304823df63ae17c627a6330f09db1\": container with ID starting with 54137e09fc7407811fd958b7fb19f8b5d07304823df63ae17c627a6330f09db1 not found: ID does not exist" containerID="54137e09fc7407811fd958b7fb19f8b5d07304823df63ae17c627a6330f09db1" Jan 29 06:57:50 crc kubenswrapper[5017]: I0129 06:57:50.003152 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54137e09fc7407811fd958b7fb19f8b5d07304823df63ae17c627a6330f09db1"} err="failed to get container status \"54137e09fc7407811fd958b7fb19f8b5d07304823df63ae17c627a6330f09db1\": rpc error: code = NotFound desc = could not find container \"54137e09fc7407811fd958b7fb19f8b5d07304823df63ae17c627a6330f09db1\": container with ID starting with 54137e09fc7407811fd958b7fb19f8b5d07304823df63ae17c627a6330f09db1 not found: ID does not exist" Jan 29 06:57:50 crc kubenswrapper[5017]: I0129 06:57:50.003165 5017 scope.go:117] "RemoveContainer" containerID="b9a4e6b5332e31b3b4226cbbd2d49f4c6e57429f3879656243b9be909902ea39" Jan 29 06:57:50 crc kubenswrapper[5017]: E0129 06:57:50.003615 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9a4e6b5332e31b3b4226cbbd2d49f4c6e57429f3879656243b9be909902ea39\": container with ID starting with b9a4e6b5332e31b3b4226cbbd2d49f4c6e57429f3879656243b9be909902ea39 not found: ID does not exist" containerID="b9a4e6b5332e31b3b4226cbbd2d49f4c6e57429f3879656243b9be909902ea39" Jan 29 06:57:50 crc kubenswrapper[5017]: I0129 06:57:50.003672 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9a4e6b5332e31b3b4226cbbd2d49f4c6e57429f3879656243b9be909902ea39"} err="failed to get container status \"b9a4e6b5332e31b3b4226cbbd2d49f4c6e57429f3879656243b9be909902ea39\": rpc error: code = NotFound desc = could not find container \"b9a4e6b5332e31b3b4226cbbd2d49f4c6e57429f3879656243b9be909902ea39\": container with ID starting with b9a4e6b5332e31b3b4226cbbd2d49f4c6e57429f3879656243b9be909902ea39 not found: ID does not exist" Jan 29 06:57:50 crc kubenswrapper[5017]: I0129 06:57:50.003688 5017 scope.go:117] "RemoveContainer" containerID="ffd302baed269f819a892350ed39e7c2a2d614097fd561de04df743407c07d40" Jan 29 06:57:50 crc kubenswrapper[5017]: E0129 06:57:50.004005 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ffd302baed269f819a892350ed39e7c2a2d614097fd561de04df743407c07d40\": container with ID starting with ffd302baed269f819a892350ed39e7c2a2d614097fd561de04df743407c07d40 not found: ID does not exist" containerID="ffd302baed269f819a892350ed39e7c2a2d614097fd561de04df743407c07d40" Jan 29 06:57:50 crc kubenswrapper[5017]: I0129 06:57:50.004034 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ffd302baed269f819a892350ed39e7c2a2d614097fd561de04df743407c07d40"} err="failed to get container status \"ffd302baed269f819a892350ed39e7c2a2d614097fd561de04df743407c07d40\": rpc 
error: code = NotFound desc = could not find container \"ffd302baed269f819a892350ed39e7c2a2d614097fd561de04df743407c07d40\": container with ID starting with ffd302baed269f819a892350ed39e7c2a2d614097fd561de04df743407c07d40 not found: ID does not exist" Jan 29 06:57:50 crc kubenswrapper[5017]: I0129 06:57:50.004053 5017 scope.go:117] "RemoveContainer" containerID="3f1da0c6b7bd9861d571d1a8d0e937d175d1e5c1f0e245d87e1db566ec6c9a8c" Jan 29 06:57:50 crc kubenswrapper[5017]: E0129 06:57:50.004317 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f1da0c6b7bd9861d571d1a8d0e937d175d1e5c1f0e245d87e1db566ec6c9a8c\": container with ID starting with 3f1da0c6b7bd9861d571d1a8d0e937d175d1e5c1f0e245d87e1db566ec6c9a8c not found: ID does not exist" containerID="3f1da0c6b7bd9861d571d1a8d0e937d175d1e5c1f0e245d87e1db566ec6c9a8c" Jan 29 06:57:50 crc kubenswrapper[5017]: I0129 06:57:50.004342 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f1da0c6b7bd9861d571d1a8d0e937d175d1e5c1f0e245d87e1db566ec6c9a8c"} err="failed to get container status \"3f1da0c6b7bd9861d571d1a8d0e937d175d1e5c1f0e245d87e1db566ec6c9a8c\": rpc error: code = NotFound desc = could not find container \"3f1da0c6b7bd9861d571d1a8d0e937d175d1e5c1f0e245d87e1db566ec6c9a8c\": container with ID starting with 3f1da0c6b7bd9861d571d1a8d0e937d175d1e5c1f0e245d87e1db566ec6c9a8c not found: ID does not exist" Jan 29 06:57:50 crc kubenswrapper[5017]: I0129 06:57:50.004357 5017 scope.go:117] "RemoveContainer" containerID="fd46ff39c6745d2a3794eec673948c7c169964d18cc7c9ac4b9f4a6f60045a35" Jan 29 06:57:50 crc kubenswrapper[5017]: E0129 06:57:50.004605 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd46ff39c6745d2a3794eec673948c7c169964d18cc7c9ac4b9f4a6f60045a35\": container with ID starting with fd46ff39c6745d2a3794eec673948c7c169964d18cc7c9ac4b9f4a6f60045a35 not found: ID does not exist" containerID="fd46ff39c6745d2a3794eec673948c7c169964d18cc7c9ac4b9f4a6f60045a35" Jan 29 06:57:50 crc kubenswrapper[5017]: I0129 06:57:50.004630 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd46ff39c6745d2a3794eec673948c7c169964d18cc7c9ac4b9f4a6f60045a35"} err="failed to get container status \"fd46ff39c6745d2a3794eec673948c7c169964d18cc7c9ac4b9f4a6f60045a35\": rpc error: code = NotFound desc = could not find container \"fd46ff39c6745d2a3794eec673948c7c169964d18cc7c9ac4b9f4a6f60045a35\": container with ID starting with fd46ff39c6745d2a3794eec673948c7c169964d18cc7c9ac4b9f4a6f60045a35 not found: ID does not exist" Jan 29 06:57:50 crc kubenswrapper[5017]: I0129 06:57:50.004648 5017 scope.go:117] "RemoveContainer" containerID="c652a43e8305c1a9910a6bd5b5ee06ee2fb25e0723a0ebf769650ee3997e0aeb" Jan 29 06:57:50 crc kubenswrapper[5017]: E0129 06:57:50.004897 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c652a43e8305c1a9910a6bd5b5ee06ee2fb25e0723a0ebf769650ee3997e0aeb\": container with ID starting with c652a43e8305c1a9910a6bd5b5ee06ee2fb25e0723a0ebf769650ee3997e0aeb not found: ID does not exist" containerID="c652a43e8305c1a9910a6bd5b5ee06ee2fb25e0723a0ebf769650ee3997e0aeb" Jan 29 06:57:50 crc kubenswrapper[5017]: I0129 06:57:50.004945 5017 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"c652a43e8305c1a9910a6bd5b5ee06ee2fb25e0723a0ebf769650ee3997e0aeb"} err="failed to get container status \"c652a43e8305c1a9910a6bd5b5ee06ee2fb25e0723a0ebf769650ee3997e0aeb\": rpc error: code = NotFound desc = could not find container \"c652a43e8305c1a9910a6bd5b5ee06ee2fb25e0723a0ebf769650ee3997e0aeb\": container with ID starting with c652a43e8305c1a9910a6bd5b5ee06ee2fb25e0723a0ebf769650ee3997e0aeb not found: ID does not exist" Jan 29 06:57:50 crc kubenswrapper[5017]: I0129 06:57:50.004975 5017 scope.go:117] "RemoveContainer" containerID="18d811ebf9ab9b398503916cdeff50dc63ec6ab67d8237d525beefff9dcb167e" Jan 29 06:57:50 crc kubenswrapper[5017]: E0129 06:57:50.005203 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18d811ebf9ab9b398503916cdeff50dc63ec6ab67d8237d525beefff9dcb167e\": container with ID starting with 18d811ebf9ab9b398503916cdeff50dc63ec6ab67d8237d525beefff9dcb167e not found: ID does not exist" containerID="18d811ebf9ab9b398503916cdeff50dc63ec6ab67d8237d525beefff9dcb167e" Jan 29 06:57:50 crc kubenswrapper[5017]: I0129 06:57:50.005228 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18d811ebf9ab9b398503916cdeff50dc63ec6ab67d8237d525beefff9dcb167e"} err="failed to get container status \"18d811ebf9ab9b398503916cdeff50dc63ec6ab67d8237d525beefff9dcb167e\": rpc error: code = NotFound desc = could not find container \"18d811ebf9ab9b398503916cdeff50dc63ec6ab67d8237d525beefff9dcb167e\": container with ID starting with 18d811ebf9ab9b398503916cdeff50dc63ec6ab67d8237d525beefff9dcb167e not found: ID does not exist" Jan 29 06:57:50 crc kubenswrapper[5017]: I0129 06:57:50.005244 5017 scope.go:117] "RemoveContainer" containerID="06cfdaa53a75f84fd7f6330f5db9c5c157e9d48c62c75510196e0abae551fc02" Jan 29 06:57:50 crc kubenswrapper[5017]: E0129 06:57:50.005579 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06cfdaa53a75f84fd7f6330f5db9c5c157e9d48c62c75510196e0abae551fc02\": container with ID starting with 06cfdaa53a75f84fd7f6330f5db9c5c157e9d48c62c75510196e0abae551fc02 not found: ID does not exist" containerID="06cfdaa53a75f84fd7f6330f5db9c5c157e9d48c62c75510196e0abae551fc02" Jan 29 06:57:50 crc kubenswrapper[5017]: I0129 06:57:50.005602 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06cfdaa53a75f84fd7f6330f5db9c5c157e9d48c62c75510196e0abae551fc02"} err="failed to get container status \"06cfdaa53a75f84fd7f6330f5db9c5c157e9d48c62c75510196e0abae551fc02\": rpc error: code = NotFound desc = could not find container \"06cfdaa53a75f84fd7f6330f5db9c5c157e9d48c62c75510196e0abae551fc02\": container with ID starting with 06cfdaa53a75f84fd7f6330f5db9c5c157e9d48c62c75510196e0abae551fc02 not found: ID does not exist" Jan 29 06:57:50 crc kubenswrapper[5017]: I0129 06:57:50.005615 5017 scope.go:117] "RemoveContainer" containerID="999b2815ecbfda6809422021c729d05da84256a8ca21f081cadf4bf9e49655ea" Jan 29 06:57:50 crc kubenswrapper[5017]: E0129 06:57:50.005834 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"999b2815ecbfda6809422021c729d05da84256a8ca21f081cadf4bf9e49655ea\": container with ID starting with 999b2815ecbfda6809422021c729d05da84256a8ca21f081cadf4bf9e49655ea not found: ID does not exist" 
containerID="999b2815ecbfda6809422021c729d05da84256a8ca21f081cadf4bf9e49655ea" Jan 29 06:57:50 crc kubenswrapper[5017]: I0129 06:57:50.005863 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"999b2815ecbfda6809422021c729d05da84256a8ca21f081cadf4bf9e49655ea"} err="failed to get container status \"999b2815ecbfda6809422021c729d05da84256a8ca21f081cadf4bf9e49655ea\": rpc error: code = NotFound desc = could not find container \"999b2815ecbfda6809422021c729d05da84256a8ca21f081cadf4bf9e49655ea\": container with ID starting with 999b2815ecbfda6809422021c729d05da84256a8ca21f081cadf4bf9e49655ea not found: ID does not exist" Jan 29 06:57:50 crc kubenswrapper[5017]: I0129 06:57:50.005882 5017 scope.go:117] "RemoveContainer" containerID="51ab0b4a34c76b32e8b560b2df9c2b6bedc67ec5c4d63d10fdd36cb7f8ef1edc" Jan 29 06:57:50 crc kubenswrapper[5017]: E0129 06:57:50.006209 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51ab0b4a34c76b32e8b560b2df9c2b6bedc67ec5c4d63d10fdd36cb7f8ef1edc\": container with ID starting with 51ab0b4a34c76b32e8b560b2df9c2b6bedc67ec5c4d63d10fdd36cb7f8ef1edc not found: ID does not exist" containerID="51ab0b4a34c76b32e8b560b2df9c2b6bedc67ec5c4d63d10fdd36cb7f8ef1edc" Jan 29 06:57:50 crc kubenswrapper[5017]: I0129 06:57:50.006246 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51ab0b4a34c76b32e8b560b2df9c2b6bedc67ec5c4d63d10fdd36cb7f8ef1edc"} err="failed to get container status \"51ab0b4a34c76b32e8b560b2df9c2b6bedc67ec5c4d63d10fdd36cb7f8ef1edc\": rpc error: code = NotFound desc = could not find container \"51ab0b4a34c76b32e8b560b2df9c2b6bedc67ec5c4d63d10fdd36cb7f8ef1edc\": container with ID starting with 51ab0b4a34c76b32e8b560b2df9c2b6bedc67ec5c4d63d10fdd36cb7f8ef1edc not found: ID does not exist" Jan 29 06:57:50 crc kubenswrapper[5017]: I0129 06:57:50.006267 5017 scope.go:117] "RemoveContainer" containerID="78c56e210bdc11cf82c13cdc564f8b588d7f490c0c73661ab691a65e75fe9237" Jan 29 06:57:50 crc kubenswrapper[5017]: E0129 06:57:50.007206 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78c56e210bdc11cf82c13cdc564f8b588d7f490c0c73661ab691a65e75fe9237\": container with ID starting with 78c56e210bdc11cf82c13cdc564f8b588d7f490c0c73661ab691a65e75fe9237 not found: ID does not exist" containerID="78c56e210bdc11cf82c13cdc564f8b588d7f490c0c73661ab691a65e75fe9237" Jan 29 06:57:50 crc kubenswrapper[5017]: I0129 06:57:50.007237 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78c56e210bdc11cf82c13cdc564f8b588d7f490c0c73661ab691a65e75fe9237"} err="failed to get container status \"78c56e210bdc11cf82c13cdc564f8b588d7f490c0c73661ab691a65e75fe9237\": rpc error: code = NotFound desc = could not find container \"78c56e210bdc11cf82c13cdc564f8b588d7f490c0c73661ab691a65e75fe9237\": container with ID starting with 78c56e210bdc11cf82c13cdc564f8b588d7f490c0c73661ab691a65e75fe9237 not found: ID does not exist" Jan 29 06:57:50 crc kubenswrapper[5017]: I0129 06:57:50.007254 5017 scope.go:117] "RemoveContainer" containerID="f3ad8c4f20bf955fbc05781f196725d983e26995a8d00c19f11477369f5a08b1" Jan 29 06:57:50 crc kubenswrapper[5017]: I0129 06:57:50.027120 5017 scope.go:117] "RemoveContainer" containerID="1e3d13255a0abc9a24e35788246958c76121b82d0582fabb929e39b38e4591aa" Jan 29 06:57:50 crc kubenswrapper[5017]: I0129 
06:57:50.046311 5017 scope.go:117] "RemoveContainer" containerID="4ab1a2f79367075930dce611800526649c246682eb93e1f1f2b26e57e36b8756" Jan 29 06:57:50 crc kubenswrapper[5017]: I0129 06:57:50.327706 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="131b99f7-3558-4aec-a0bf-7c1ef0f35a2b" path="/var/lib/kubelet/pods/131b99f7-3558-4aec-a0bf-7c1ef0f35a2b/volumes" Jan 29 06:57:50 crc kubenswrapper[5017]: I0129 06:57:50.328756 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d082326-495c-4078-974e-714379243884" path="/var/lib/kubelet/pods/6d082326-495c-4078-974e-714379243884/volumes" Jan 29 06:57:50 crc kubenswrapper[5017]: I0129 06:57:50.585626 5017 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","podc653a323-c9c2-42f2-a2af-125828234475"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort podc653a323-c9c2-42f2-a2af-125828234475] : Timed out while waiting for systemd to remove kubepods-besteffort-podc653a323_c9c2_42f2_a2af_125828234475.slice" Jan 29 06:57:50 crc kubenswrapper[5017]: E0129 06:57:50.585700 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort podc653a323-c9c2-42f2-a2af-125828234475] : unable to destroy cgroup paths for cgroup [kubepods besteffort podc653a323-c9c2-42f2-a2af-125828234475] : Timed out while waiting for systemd to remove kubepods-besteffort-podc653a323_c9c2_42f2_a2af_125828234475.slice" pod="openstack/dnsmasq-dns-fcd6f8f8f-cxtdk" podUID="c653a323-c9c2-42f2-a2af-125828234475" Jan 29 06:57:50 crc kubenswrapper[5017]: I0129 06:57:50.605506 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fcd6f8f8f-cxtdk" Jan 29 06:57:50 crc kubenswrapper[5017]: I0129 06:57:50.627295 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-fcd6f8f8f-cxtdk"] Jan 29 06:57:50 crc kubenswrapper[5017]: I0129 06:57:50.631865 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-fcd6f8f8f-cxtdk"] Jan 29 06:57:52 crc kubenswrapper[5017]: I0129 06:57:52.328826 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c653a323-c9c2-42f2-a2af-125828234475" path="/var/lib/kubelet/pods/c653a323-c9c2-42f2-a2af-125828234475/volumes" Jan 29 06:57:52 crc kubenswrapper[5017]: I0129 06:57:52.944381 5017 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod615b2757-5eab-4454-95da-663755846932"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod615b2757-5eab-4454-95da-663755846932] : Timed out while waiting for systemd to remove kubepods-besteffort-pod615b2757_5eab_4454_95da_663755846932.slice" Jan 29 06:57:53 crc kubenswrapper[5017]: I0129 06:57:53.343473 5017 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="9df7814f-338e-40fb-95aa-f93dfa8307d6" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.173:9292/healthcheck\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 29 06:57:53 crc kubenswrapper[5017]: I0129 06:57:53.343503 5017 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="9df7814f-338e-40fb-95aa-f93dfa8307d6" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.173:9292/healthcheck\": 
net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 29 06:57:56 crc kubenswrapper[5017]: I0129 06:57:56.539436 5017 patch_prober.go:28] interesting pod/machine-config-daemon-895pl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 06:57:56 crc kubenswrapper[5017]: I0129 06:57:56.540073 5017 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 06:57:56 crc kubenswrapper[5017]: I0129 06:57:56.540147 5017 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-895pl" Jan 29 06:57:56 crc kubenswrapper[5017]: I0129 06:57:56.541057 5017 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f0fa6e2a79db70c1d184fc860ee5a35d194bde9485b97eabb60341d87278b250"} pod="openshift-machine-config-operator/machine-config-daemon-895pl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 06:57:56 crc kubenswrapper[5017]: I0129 06:57:56.541129 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" containerID="cri-o://f0fa6e2a79db70c1d184fc860ee5a35d194bde9485b97eabb60341d87278b250" gracePeriod=600 Jan 29 06:57:57 crc kubenswrapper[5017]: I0129 06:57:57.680621 5017 generic.go:334] "Generic (PLEG): container finished" podID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerID="f0fa6e2a79db70c1d184fc860ee5a35d194bde9485b97eabb60341d87278b250" exitCode=0 Jan 29 06:57:57 crc kubenswrapper[5017]: I0129 06:57:57.680650 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-895pl" event={"ID":"2672ef63-7861-4c3d-a1b4-03cc9d18f8e2","Type":"ContainerDied","Data":"f0fa6e2a79db70c1d184fc860ee5a35d194bde9485b97eabb60341d87278b250"} Jan 29 06:57:57 crc kubenswrapper[5017]: I0129 06:57:57.681628 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-895pl" event={"ID":"2672ef63-7861-4c3d-a1b4-03cc9d18f8e2","Type":"ContainerStarted","Data":"8db4bfbb76a49639afbf231ff068e7d8be3ddf559761058f493b985b4831c9ab"} Jan 29 06:57:57 crc kubenswrapper[5017]: I0129 06:57:57.681666 5017 scope.go:117] "RemoveContainer" containerID="70dfeea0251012308950e213d0ab72466a324bea818357cd6a2957c1747ca4d2" Jan 29 06:58:18 crc kubenswrapper[5017]: I0129 06:58:18.366455 5017 scope.go:117] "RemoveContainer" containerID="7c791894b1734b0ef6f635f2bdcbd5ede8f7115df23c5068ed2fb5212e72b15f" Jan 29 06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.487973 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-75gzx"] Jan 29 06:58:20 crc kubenswrapper[5017]: E0129 06:58:20.493329 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55d2d70d-8578-47fc-a3a7-df7694c3f2a3" containerName="keystone-api" Jan 29 
06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.493365 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="55d2d70d-8578-47fc-a3a7-df7694c3f2a3" containerName="keystone-api" Jan 29 06:58:20 crc kubenswrapper[5017]: E0129 06:58:20.493403 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="919074d0-f7a7-4d64-8339-744730688c4f" containerName="barbican-api" Jan 29 06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.493427 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="919074d0-f7a7-4d64-8339-744730688c4f" containerName="barbican-api" Jan 29 06:58:20 crc kubenswrapper[5017]: E0129 06:58:20.493532 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d082326-495c-4078-974e-714379243884" containerName="swift-recon-cron" Jan 29 06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.493556 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d082326-495c-4078-974e-714379243884" containerName="swift-recon-cron" Jan 29 06:58:20 crc kubenswrapper[5017]: E0129 06:58:20.493588 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3131ebc7-0955-4d4d-8444-057df1cc52f1" containerName="proxy-httpd" Jan 29 06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.493599 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="3131ebc7-0955-4d4d-8444-057df1cc52f1" containerName="proxy-httpd" Jan 29 06:58:20 crc kubenswrapper[5017]: E0129 06:58:20.493621 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c118297d-1c5d-4234-930c-9c0e6b5bb29b" containerName="barbican-worker" Jan 29 06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.493852 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="c118297d-1c5d-4234-930c-9c0e6b5bb29b" containerName="barbican-worker" Jan 29 06:58:20 crc kubenswrapper[5017]: E0129 06:58:20.493877 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc01ff67-baeb-47d1-90f5-9cff65c9dffa" containerName="placement-log" Jan 29 06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.494118 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc01ff67-baeb-47d1-90f5-9cff65c9dffa" containerName="placement-log" Jan 29 06:58:20 crc kubenswrapper[5017]: E0129 06:58:20.494141 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d30b013f-453f-4282-8b22-2a5270027828" containerName="setup-container" Jan 29 06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.494154 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="d30b013f-453f-4282-8b22-2a5270027828" containerName="setup-container" Jan 29 06:58:20 crc kubenswrapper[5017]: E0129 06:58:20.494170 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="919074d0-f7a7-4d64-8339-744730688c4f" containerName="barbican-api-log" Jan 29 06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.494181 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="919074d0-f7a7-4d64-8339-744730688c4f" containerName="barbican-api-log" Jan 29 06:58:20 crc kubenswrapper[5017]: E0129 06:58:20.494200 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="131b99f7-3558-4aec-a0bf-7c1ef0f35a2b" containerName="ovs-vswitchd" Jan 29 06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.494211 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="131b99f7-3558-4aec-a0bf-7c1ef0f35a2b" containerName="ovs-vswitchd" Jan 29 06:58:20 crc kubenswrapper[5017]: E0129 06:58:20.494225 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d082326-495c-4078-974e-714379243884" containerName="object-updater" Jan 29 
06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.494236 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d082326-495c-4078-974e-714379243884" containerName="object-updater" Jan 29 06:58:20 crc kubenswrapper[5017]: E0129 06:58:20.494253 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d082326-495c-4078-974e-714379243884" containerName="rsync" Jan 29 06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.494268 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d082326-495c-4078-974e-714379243884" containerName="rsync" Jan 29 06:58:20 crc kubenswrapper[5017]: E0129 06:58:20.494284 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d3e4a4d-ee9a-4345-b8e5-a40416771caf" containerName="nova-api-log" Jan 29 06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.494299 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d3e4a4d-ee9a-4345-b8e5-a40416771caf" containerName="nova-api-log" Jan 29 06:58:20 crc kubenswrapper[5017]: E0129 06:58:20.494318 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e41c27f8-0c27-4e3d-83b1-62a61abb4faf" containerName="glance-httpd" Jan 29 06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.494328 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="e41c27f8-0c27-4e3d-83b1-62a61abb4faf" containerName="glance-httpd" Jan 29 06:58:20 crc kubenswrapper[5017]: E0129 06:58:20.494343 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bf19cd1-b93c-449d-ba04-7fecd2ab65e2" containerName="mariadb-account-create-update" Jan 29 06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.494352 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bf19cd1-b93c-449d-ba04-7fecd2ab65e2" containerName="mariadb-account-create-update" Jan 29 06:58:20 crc kubenswrapper[5017]: E0129 06:58:20.494369 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4fe6966-2467-4c3b-b907-d3a8e88eb497" containerName="neutron-api" Jan 29 06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.494380 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4fe6966-2467-4c3b-b907-d3a8e88eb497" containerName="neutron-api" Jan 29 06:58:20 crc kubenswrapper[5017]: E0129 06:58:20.494397 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d082326-495c-4078-974e-714379243884" containerName="account-server" Jan 29 06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.494409 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d082326-495c-4078-974e-714379243884" containerName="account-server" Jan 29 06:58:20 crc kubenswrapper[5017]: E0129 06:58:20.494426 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d082326-495c-4078-974e-714379243884" containerName="object-expirer" Jan 29 06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.494475 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d082326-495c-4078-974e-714379243884" containerName="object-expirer" Jan 29 06:58:20 crc kubenswrapper[5017]: E0129 06:58:20.494507 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a" containerName="setup-container" Jan 29 06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.494519 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a" containerName="setup-container" Jan 29 06:58:20 crc kubenswrapper[5017]: E0129 06:58:20.494586 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d30b013f-453f-4282-8b22-2a5270027828" containerName="rabbitmq" Jan 29 
06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.494615 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="d30b013f-453f-4282-8b22-2a5270027828" containerName="rabbitmq" Jan 29 06:58:20 crc kubenswrapper[5017]: E0129 06:58:20.494629 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc46a149-0256-4061-9e32-936b2ec12588" containerName="memcached" Jan 29 06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.494640 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc46a149-0256-4061-9e32-936b2ec12588" containerName="memcached" Jan 29 06:58:20 crc kubenswrapper[5017]: E0129 06:58:20.494658 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e41c27f8-0c27-4e3d-83b1-62a61abb4faf" containerName="glance-log" Jan 29 06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.494686 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="e41c27f8-0c27-4e3d-83b1-62a61abb4faf" containerName="glance-log" Jan 29 06:58:20 crc kubenswrapper[5017]: E0129 06:58:20.494721 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18edd5b3-27eb-43f3-8d6b-03490c243c78" containerName="nova-cell1-conductor-conductor" Jan 29 06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.494747 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="18edd5b3-27eb-43f3-8d6b-03490c243c78" containerName="nova-cell1-conductor-conductor" Jan 29 06:58:20 crc kubenswrapper[5017]: E0129 06:58:20.494773 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d94b8e3-f4a6-4fc2-af59-57b33254cd74" containerName="nova-metadata-log" Jan 29 06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.494796 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d94b8e3-f4a6-4fc2-af59-57b33254cd74" containerName="nova-metadata-log" Jan 29 06:58:20 crc kubenswrapper[5017]: E0129 06:58:20.494826 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d082326-495c-4078-974e-714379243884" containerName="account-reaper" Jan 29 06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.494834 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d082326-495c-4078-974e-714379243884" containerName="account-reaper" Jan 29 06:58:20 crc kubenswrapper[5017]: E0129 06:58:20.494869 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc01ff67-baeb-47d1-90f5-9cff65c9dffa" containerName="placement-api" Jan 29 06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.494878 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc01ff67-baeb-47d1-90f5-9cff65c9dffa" containerName="placement-api" Jan 29 06:58:20 crc kubenswrapper[5017]: E0129 06:58:20.494892 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9af88cca-e43b-483d-beae-d6a56940aff7" containerName="galera" Jan 29 06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.494899 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="9af88cca-e43b-483d-beae-d6a56940aff7" containerName="galera" Jan 29 06:58:20 crc kubenswrapper[5017]: E0129 06:58:20.494915 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c118297d-1c5d-4234-930c-9c0e6b5bb29b" containerName="barbican-worker-log" Jan 29 06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.494923 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="c118297d-1c5d-4234-930c-9c0e6b5bb29b" containerName="barbican-worker-log" Jan 29 06:58:20 crc kubenswrapper[5017]: E0129 06:58:20.494948 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d082326-495c-4078-974e-714379243884" containerName="account-auditor" 
Jan 29 06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.494985 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d082326-495c-4078-974e-714379243884" containerName="account-auditor" Jan 29 06:58:20 crc kubenswrapper[5017]: E0129 06:58:20.494998 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9af88cca-e43b-483d-beae-d6a56940aff7" containerName="mysql-bootstrap" Jan 29 06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.495019 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="9af88cca-e43b-483d-beae-d6a56940aff7" containerName="mysql-bootstrap" Jan 29 06:58:20 crc kubenswrapper[5017]: E0129 06:58:20.495035 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d082326-495c-4078-974e-714379243884" containerName="object-server" Jan 29 06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.495076 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d082326-495c-4078-974e-714379243884" containerName="object-server" Jan 29 06:58:20 crc kubenswrapper[5017]: E0129 06:58:20.495108 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="131b99f7-3558-4aec-a0bf-7c1ef0f35a2b" containerName="ovsdb-server" Jan 29 06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.495134 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="131b99f7-3558-4aec-a0bf-7c1ef0f35a2b" containerName="ovsdb-server" Jan 29 06:58:20 crc kubenswrapper[5017]: E0129 06:58:20.495147 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d082326-495c-4078-974e-714379243884" containerName="container-replicator" Jan 29 06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.495244 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d082326-495c-4078-974e-714379243884" containerName="container-replicator" Jan 29 06:58:20 crc kubenswrapper[5017]: E0129 06:58:20.495331 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c57c864-37e8-46b9-b30d-1762f3858984" containerName="ovn-controller" Jan 29 06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.495345 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c57c864-37e8-46b9-b30d-1762f3858984" containerName="ovn-controller" Jan 29 06:58:20 crc kubenswrapper[5017]: E0129 06:58:20.495429 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d94b8e3-f4a6-4fc2-af59-57b33254cd74" containerName="nova-metadata-metadata" Jan 29 06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.495526 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d94b8e3-f4a6-4fc2-af59-57b33254cd74" containerName="nova-metadata-metadata" Jan 29 06:58:20 crc kubenswrapper[5017]: E0129 06:58:20.495617 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d082326-495c-4078-974e-714379243884" containerName="container-updater" Jan 29 06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.495687 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d082326-495c-4078-974e-714379243884" containerName="container-updater" Jan 29 06:58:20 crc kubenswrapper[5017]: E0129 06:58:20.495794 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="131b99f7-3558-4aec-a0bf-7c1ef0f35a2b" containerName="ovsdb-server-init" Jan 29 06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.495830 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="131b99f7-3558-4aec-a0bf-7c1ef0f35a2b" containerName="ovsdb-server-init" Jan 29 06:58:20 crc kubenswrapper[5017]: E0129 06:58:20.495894 5017 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="7d3e4a4d-ee9a-4345-b8e5-a40416771caf" containerName="nova-api-api" Jan 29 06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.495924 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d3e4a4d-ee9a-4345-b8e5-a40416771caf" containerName="nova-api-api" Jan 29 06:58:20 crc kubenswrapper[5017]: E0129 06:58:20.496009 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71f6aede-754b-476f-8082-78f0e50b6a39" containerName="cinder-api-log" Jan 29 06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.496037 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="71f6aede-754b-476f-8082-78f0e50b6a39" containerName="cinder-api-log" Jan 29 06:58:20 crc kubenswrapper[5017]: E0129 06:58:20.496077 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c69fc6f-43e9-4fe5-b964-8db89e6ab354" containerName="probe" Jan 29 06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.496092 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c69fc6f-43e9-4fe5-b964-8db89e6ab354" containerName="probe" Jan 29 06:58:20 crc kubenswrapper[5017]: E0129 06:58:20.496126 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3131ebc7-0955-4d4d-8444-057df1cc52f1" containerName="sg-core" Jan 29 06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.496154 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="3131ebc7-0955-4d4d-8444-057df1cc52f1" containerName="sg-core" Jan 29 06:58:20 crc kubenswrapper[5017]: E0129 06:58:20.496187 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d082326-495c-4078-974e-714379243884" containerName="object-replicator" Jan 29 06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.496219 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d082326-495c-4078-974e-714379243884" containerName="object-replicator" Jan 29 06:58:20 crc kubenswrapper[5017]: E0129 06:58:20.496265 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4fe6966-2467-4c3b-b907-d3a8e88eb497" containerName="neutron-httpd" Jan 29 06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.496292 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4fe6966-2467-4c3b-b907-d3a8e88eb497" containerName="neutron-httpd" Jan 29 06:58:20 crc kubenswrapper[5017]: E0129 06:58:20.496305 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9df7814f-338e-40fb-95aa-f93dfa8307d6" containerName="glance-log" Jan 29 06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.496317 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="9df7814f-338e-40fb-95aa-f93dfa8307d6" containerName="glance-log" Jan 29 06:58:20 crc kubenswrapper[5017]: E0129 06:58:20.496330 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aaa7337b-7686-4fd2-9c52-6b76f9f3a3b1" containerName="nova-cell0-conductor-conductor" Jan 29 06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.496359 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="aaa7337b-7686-4fd2-9c52-6b76f9f3a3b1" containerName="nova-cell0-conductor-conductor" Jan 29 06:58:20 crc kubenswrapper[5017]: E0129 06:58:20.496395 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da406cff-454a-4287-a409-5ad51c535649" containerName="barbican-keystone-listener-log" Jan 29 06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.496421 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="da406cff-454a-4287-a409-5ad51c535649" containerName="barbican-keystone-listener-log" Jan 29 06:58:20 crc kubenswrapper[5017]: E0129 06:58:20.496449 5017 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="a909c2d3-90a4-41a0-af8e-ddb69ed4f41b" containerName="nova-scheduler-scheduler" Jan 29 06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.496479 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="a909c2d3-90a4-41a0-af8e-ddb69ed4f41b" containerName="nova-scheduler-scheduler" Jan 29 06:58:20 crc kubenswrapper[5017]: E0129 06:58:20.496530 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6ec780f-f6cc-4d8d-be76-f517dff0673c" containerName="kube-state-metrics" Jan 29 06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.496542 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6ec780f-f6cc-4d8d-be76-f517dff0673c" containerName="kube-state-metrics" Jan 29 06:58:20 crc kubenswrapper[5017]: E0129 06:58:20.496555 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3131ebc7-0955-4d4d-8444-057df1cc52f1" containerName="ceilometer-notification-agent" Jan 29 06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.496584 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="3131ebc7-0955-4d4d-8444-057df1cc52f1" containerName="ceilometer-notification-agent" Jan 29 06:58:20 crc kubenswrapper[5017]: E0129 06:58:20.496613 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d082326-495c-4078-974e-714379243884" containerName="account-replicator" Jan 29 06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.496639 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d082326-495c-4078-974e-714379243884" containerName="account-replicator" Jan 29 06:58:20 crc kubenswrapper[5017]: E0129 06:58:20.496670 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d082326-495c-4078-974e-714379243884" containerName="container-server" Jan 29 06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.496679 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d082326-495c-4078-974e-714379243884" containerName="container-server" Jan 29 06:58:20 crc kubenswrapper[5017]: E0129 06:58:20.496703 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d082326-495c-4078-974e-714379243884" containerName="object-auditor" Jan 29 06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.496724 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d082326-495c-4078-974e-714379243884" containerName="object-auditor" Jan 29 06:58:20 crc kubenswrapper[5017]: E0129 06:58:20.496746 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9df7814f-338e-40fb-95aa-f93dfa8307d6" containerName="glance-httpd" Jan 29 06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.496756 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="9df7814f-338e-40fb-95aa-f93dfa8307d6" containerName="glance-httpd" Jan 29 06:58:20 crc kubenswrapper[5017]: E0129 06:58:20.496769 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c69fc6f-43e9-4fe5-b964-8db89e6ab354" containerName="cinder-scheduler" Jan 29 06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.496778 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c69fc6f-43e9-4fe5-b964-8db89e6ab354" containerName="cinder-scheduler" Jan 29 06:58:20 crc kubenswrapper[5017]: E0129 06:58:20.496814 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a" containerName="rabbitmq" Jan 29 06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.496842 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a" containerName="rabbitmq" Jan 29 06:58:20 crc 
kubenswrapper[5017]: E0129 06:58:20.496852 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71f6aede-754b-476f-8082-78f0e50b6a39" containerName="cinder-api" Jan 29 06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.496860 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="71f6aede-754b-476f-8082-78f0e50b6a39" containerName="cinder-api" Jan 29 06:58:20 crc kubenswrapper[5017]: E0129 06:58:20.496876 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d082326-495c-4078-974e-714379243884" containerName="container-auditor" Jan 29 06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.496906 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d082326-495c-4078-974e-714379243884" containerName="container-auditor" Jan 29 06:58:20 crc kubenswrapper[5017]: E0129 06:58:20.496919 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3131ebc7-0955-4d4d-8444-057df1cc52f1" containerName="ceilometer-central-agent" Jan 29 06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.496928 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="3131ebc7-0955-4d4d-8444-057df1cc52f1" containerName="ceilometer-central-agent" Jan 29 06:58:20 crc kubenswrapper[5017]: E0129 06:58:20.496943 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da406cff-454a-4287-a409-5ad51c535649" containerName="barbican-keystone-listener" Jan 29 06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.496951 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="da406cff-454a-4287-a409-5ad51c535649" containerName="barbican-keystone-listener" Jan 29 06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.497158 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d082326-495c-4078-974e-714379243884" containerName="rsync" Jan 29 06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.497176 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="3131ebc7-0955-4d4d-8444-057df1cc52f1" containerName="ceilometer-central-agent" Jan 29 06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.497190 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f660da1-1d97-4b3b-ae3f-fb7ee90bf25a" containerName="rabbitmq" Jan 29 06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.497217 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="9df7814f-338e-40fb-95aa-f93dfa8307d6" containerName="glance-log" Jan 29 06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.497246 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d082326-495c-4078-974e-714379243884" containerName="object-auditor" Jan 29 06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.497283 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="c118297d-1c5d-4234-930c-9c0e6b5bb29b" containerName="barbican-worker-log" Jan 29 06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.497294 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="18edd5b3-27eb-43f3-8d6b-03490c243c78" containerName="nova-cell1-conductor-conductor" Jan 29 06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.497320 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d082326-495c-4078-974e-714379243884" containerName="account-replicator" Jan 29 06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.497331 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c69fc6f-43e9-4fe5-b964-8db89e6ab354" containerName="cinder-scheduler" Jan 29 06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.497340 5017 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="6d082326-495c-4078-974e-714379243884" containerName="container-server" Jan 29 06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.497348 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d082326-495c-4078-974e-714379243884" containerName="object-expirer" Jan 29 06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.497362 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="71f6aede-754b-476f-8082-78f0e50b6a39" containerName="cinder-api-log" Jan 29 06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.497390 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="d30b013f-453f-4282-8b22-2a5270027828" containerName="rabbitmq" Jan 29 06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.497418 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bf19cd1-b93c-449d-ba04-7fecd2ab65e2" containerName="mariadb-account-create-update" Jan 29 06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.497429 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="e41c27f8-0c27-4e3d-83b1-62a61abb4faf" containerName="glance-httpd" Jan 29 06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.497453 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="3131ebc7-0955-4d4d-8444-057df1cc52f1" containerName="ceilometer-notification-agent" Jan 29 06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.497464 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d082326-495c-4078-974e-714379243884" containerName="account-server" Jan 29 06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.497487 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d082326-495c-4078-974e-714379243884" containerName="object-updater" Jan 29 06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.497510 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6ec780f-f6cc-4d8d-be76-f517dff0673c" containerName="kube-state-metrics" Jan 29 06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.497538 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="55d2d70d-8578-47fc-a3a7-df7694c3f2a3" containerName="keystone-api" Jan 29 06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.497570 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d082326-495c-4078-974e-714379243884" containerName="container-replicator" Jan 29 06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.497590 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d082326-495c-4078-974e-714379243884" containerName="container-auditor" Jan 29 06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.497604 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d94b8e3-f4a6-4fc2-af59-57b33254cd74" containerName="nova-metadata-metadata" Jan 29 06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.497651 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4fe6966-2467-4c3b-b907-d3a8e88eb497" containerName="neutron-api" Jan 29 06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.497683 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc01ff67-baeb-47d1-90f5-9cff65c9dffa" containerName="placement-log" Jan 29 06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.497697 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d082326-495c-4078-974e-714379243884" containerName="swift-recon-cron" Jan 29 06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.497737 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bf19cd1-b93c-449d-ba04-7fecd2ab65e2" 
containerName="mariadb-account-create-update" Jan 29 06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.497768 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d082326-495c-4078-974e-714379243884" containerName="object-server" Jan 29 06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.497803 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="aaa7337b-7686-4fd2-9c52-6b76f9f3a3b1" containerName="nova-cell0-conductor-conductor" Jan 29 06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.497815 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="9af88cca-e43b-483d-beae-d6a56940aff7" containerName="galera" Jan 29 06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.497831 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d082326-495c-4078-974e-714379243884" containerName="account-auditor" Jan 29 06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.497859 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="131b99f7-3558-4aec-a0bf-7c1ef0f35a2b" containerName="ovsdb-server" Jan 29 06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.497891 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d082326-495c-4078-974e-714379243884" containerName="account-reaper" Jan 29 06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.497923 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="131b99f7-3558-4aec-a0bf-7c1ef0f35a2b" containerName="ovs-vswitchd" Jan 29 06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.497975 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc46a149-0256-4061-9e32-936b2ec12588" containerName="memcached" Jan 29 06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.497989 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="9df7814f-338e-40fb-95aa-f93dfa8307d6" containerName="glance-httpd" Jan 29 06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.498000 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="e41c27f8-0c27-4e3d-83b1-62a61abb4faf" containerName="glance-log" Jan 29 06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.498011 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="3131ebc7-0955-4d4d-8444-057df1cc52f1" containerName="proxy-httpd" Jan 29 06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.498021 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="c118297d-1c5d-4234-930c-9c0e6b5bb29b" containerName="barbican-worker" Jan 29 06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.498033 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="3131ebc7-0955-4d4d-8444-057df1cc52f1" containerName="sg-core" Jan 29 06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.498080 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d94b8e3-f4a6-4fc2-af59-57b33254cd74" containerName="nova-metadata-log" Jan 29 06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.498094 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc01ff67-baeb-47d1-90f5-9cff65c9dffa" containerName="placement-api" Jan 29 06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.498121 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4fe6966-2467-4c3b-b907-d3a8e88eb497" containerName="neutron-httpd" Jan 29 06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.498137 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="919074d0-f7a7-4d64-8339-744730688c4f" containerName="barbican-api-log" Jan 29 06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.498151 5017 
memory_manager.go:354] "RemoveStaleState removing state" podUID="6d082326-495c-4078-974e-714379243884" containerName="object-replicator" Jan 29 06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.498182 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="da406cff-454a-4287-a409-5ad51c535649" containerName="barbican-keystone-listener-log" Jan 29 06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.498210 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c57c864-37e8-46b9-b30d-1762f3858984" containerName="ovn-controller" Jan 29 06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.498240 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d3e4a4d-ee9a-4345-b8e5-a40416771caf" containerName="nova-api-api" Jan 29 06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.498269 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d082326-495c-4078-974e-714379243884" containerName="container-updater" Jan 29 06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.498296 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="71f6aede-754b-476f-8082-78f0e50b6a39" containerName="cinder-api" Jan 29 06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.498323 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="919074d0-f7a7-4d64-8339-744730688c4f" containerName="barbican-api" Jan 29 06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.498352 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d3e4a4d-ee9a-4345-b8e5-a40416771caf" containerName="nova-api-log" Jan 29 06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.498367 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="a909c2d3-90a4-41a0-af8e-ddb69ed4f41b" containerName="nova-scheduler-scheduler" Jan 29 06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.498382 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c69fc6f-43e9-4fe5-b964-8db89e6ab354" containerName="probe" Jan 29 06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.498395 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="da406cff-454a-4287-a409-5ad51c535649" containerName="barbican-keystone-listener" Jan 29 06:58:20 crc kubenswrapper[5017]: E0129 06:58:20.498866 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bf19cd1-b93c-449d-ba04-7fecd2ab65e2" containerName="mariadb-account-create-update" Jan 29 06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.498894 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bf19cd1-b93c-449d-ba04-7fecd2ab65e2" containerName="mariadb-account-create-update" Jan 29 06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.501430 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-75gzx" Jan 29 06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.513351 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-75gzx"] Jan 29 06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.642043 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n86h4\" (UniqueName: \"kubernetes.io/projected/485aaca4-79c9-43a6-890f-027930320d48-kube-api-access-n86h4\") pod \"redhat-operators-75gzx\" (UID: \"485aaca4-79c9-43a6-890f-027930320d48\") " pod="openshift-marketplace/redhat-operators-75gzx" Jan 29 06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.642106 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/485aaca4-79c9-43a6-890f-027930320d48-catalog-content\") pod \"redhat-operators-75gzx\" (UID: \"485aaca4-79c9-43a6-890f-027930320d48\") " pod="openshift-marketplace/redhat-operators-75gzx" Jan 29 06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.642140 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/485aaca4-79c9-43a6-890f-027930320d48-utilities\") pod \"redhat-operators-75gzx\" (UID: \"485aaca4-79c9-43a6-890f-027930320d48\") " pod="openshift-marketplace/redhat-operators-75gzx" Jan 29 06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.744176 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/485aaca4-79c9-43a6-890f-027930320d48-catalog-content\") pod \"redhat-operators-75gzx\" (UID: \"485aaca4-79c9-43a6-890f-027930320d48\") " pod="openshift-marketplace/redhat-operators-75gzx" Jan 29 06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.744584 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/485aaca4-79c9-43a6-890f-027930320d48-utilities\") pod \"redhat-operators-75gzx\" (UID: \"485aaca4-79c9-43a6-890f-027930320d48\") " pod="openshift-marketplace/redhat-operators-75gzx" Jan 29 06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.744767 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n86h4\" (UniqueName: \"kubernetes.io/projected/485aaca4-79c9-43a6-890f-027930320d48-kube-api-access-n86h4\") pod \"redhat-operators-75gzx\" (UID: \"485aaca4-79c9-43a6-890f-027930320d48\") " pod="openshift-marketplace/redhat-operators-75gzx" Jan 29 06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.745095 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/485aaca4-79c9-43a6-890f-027930320d48-catalog-content\") pod \"redhat-operators-75gzx\" (UID: \"485aaca4-79c9-43a6-890f-027930320d48\") " pod="openshift-marketplace/redhat-operators-75gzx" Jan 29 06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.745095 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/485aaca4-79c9-43a6-890f-027930320d48-utilities\") pod \"redhat-operators-75gzx\" (UID: \"485aaca4-79c9-43a6-890f-027930320d48\") " pod="openshift-marketplace/redhat-operators-75gzx" Jan 29 06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.770195 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-n86h4\" (UniqueName: \"kubernetes.io/projected/485aaca4-79c9-43a6-890f-027930320d48-kube-api-access-n86h4\") pod \"redhat-operators-75gzx\" (UID: \"485aaca4-79c9-43a6-890f-027930320d48\") " pod="openshift-marketplace/redhat-operators-75gzx" Jan 29 06:58:20 crc kubenswrapper[5017]: I0129 06:58:20.842427 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-75gzx" Jan 29 06:58:21 crc kubenswrapper[5017]: I0129 06:58:21.110617 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-75gzx"] Jan 29 06:58:21 crc kubenswrapper[5017]: I0129 06:58:21.997453 5017 generic.go:334] "Generic (PLEG): container finished" podID="485aaca4-79c9-43a6-890f-027930320d48" containerID="12fc5654c771fe4bf15c3060228e0c01b1f180ac16dbab51b1820586740b5ea3" exitCode=0 Jan 29 06:58:21 crc kubenswrapper[5017]: I0129 06:58:21.997541 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-75gzx" event={"ID":"485aaca4-79c9-43a6-890f-027930320d48","Type":"ContainerDied","Data":"12fc5654c771fe4bf15c3060228e0c01b1f180ac16dbab51b1820586740b5ea3"} Jan 29 06:58:21 crc kubenswrapper[5017]: I0129 06:58:21.998824 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-75gzx" event={"ID":"485aaca4-79c9-43a6-890f-027930320d48","Type":"ContainerStarted","Data":"9122fad2186d6cfd83a8508cd66dc659a3e91d9ea5b8582a34f7e3b0de0fe38f"} Jan 29 06:58:22 crc kubenswrapper[5017]: I0129 06:58:21.999983 5017 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 06:58:23 crc kubenswrapper[5017]: I0129 06:58:23.032593 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-75gzx" event={"ID":"485aaca4-79c9-43a6-890f-027930320d48","Type":"ContainerStarted","Data":"2012612f965f67f4adc9969822cc4628d236abda663ed5799a22b1b351a45f90"} Jan 29 06:58:24 crc kubenswrapper[5017]: I0129 06:58:24.045619 5017 generic.go:334] "Generic (PLEG): container finished" podID="485aaca4-79c9-43a6-890f-027930320d48" containerID="2012612f965f67f4adc9969822cc4628d236abda663ed5799a22b1b351a45f90" exitCode=0 Jan 29 06:58:24 crc kubenswrapper[5017]: I0129 06:58:24.045705 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-75gzx" event={"ID":"485aaca4-79c9-43a6-890f-027930320d48","Type":"ContainerDied","Data":"2012612f965f67f4adc9969822cc4628d236abda663ed5799a22b1b351a45f90"} Jan 29 06:58:25 crc kubenswrapper[5017]: I0129 06:58:25.058654 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-75gzx" event={"ID":"485aaca4-79c9-43a6-890f-027930320d48","Type":"ContainerStarted","Data":"debe5032b453f98cf46ac7f16354202f933989f5800354a61d4a5c71c48d5674"} Jan 29 06:58:25 crc kubenswrapper[5017]: I0129 06:58:25.084539 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-75gzx" podStartSLOduration=2.6389393 podStartE2EDuration="5.084519044s" podCreationTimestamp="2026-01-29 06:58:20 +0000 UTC" firstStartedPulling="2026-01-29 06:58:21.999712041 +0000 UTC m=+1388.374159651" lastFinishedPulling="2026-01-29 06:58:24.445291775 +0000 UTC m=+1390.819739395" observedRunningTime="2026-01-29 06:58:25.079902924 +0000 UTC m=+1391.454350574" watchObservedRunningTime="2026-01-29 06:58:25.084519044 +0000 UTC m=+1391.458966654" Jan 29 06:58:30 crc 
Jan 29 06:58:30 crc kubenswrapper[5017]: I0129 06:58:30.842585 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-75gzx"
Jan 29 06:58:30 crc kubenswrapper[5017]: I0129 06:58:30.843253 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-75gzx"
Jan 29 06:58:31 crc kubenswrapper[5017]: I0129 06:58:31.902050 5017 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-75gzx" podUID="485aaca4-79c9-43a6-890f-027930320d48" containerName="registry-server" probeResult="failure" output=<
Jan 29 06:58:31 crc kubenswrapper[5017]: timeout: failed to connect service ":50051" within 1s
Jan 29 06:58:31 crc kubenswrapper[5017]: >
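The failed startup probe is the marketplace catalog's registry-server, which serves its catalog over gRPC on port 50051; the "timeout: failed to connect service" output is the health check giving up while the just-started server is still loading its content. A rough client-side equivalent of that check, assuming the server exposes the standard grpc.health.v1 service:

    // Dial the registry's gRPC endpoint and run a health check within a 1s
    // budget, approximating the startup probe that failed above. The endpoint
    // and health service name are assumptions based on the probe output.
    package main

    import (
        "context"
        "fmt"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        healthpb "google.golang.org/grpc/health/grpc_health_v1"
    )

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), time.Second)
        defer cancel()

        conn, err := grpc.Dial("localhost:50051", grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            fmt.Println("probe failure:", err)
            return
        }
        defer conn.Close()

        resp, err := healthpb.NewHealthClient(conn).Check(ctx, &healthpb.HealthCheckRequest{})
        if err != nil {
            fmt.Println("probe failure:", err) // e.g. DeadlineExceeded while the catalog loads
            return
        }
        fmt.Println("probe status:", resp.Status) // SERVING once startup completes
    }

Nine seconds later the same probe flips to status="started", so the failure above was the catalog image warming up, not a broken pod.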
Need to start a new one" pod="openshift-marketplace/redhat-operators-75gzx" Jan 29 06:58:43 crc kubenswrapper[5017]: I0129 06:58:43.237232 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-75gzx" event={"ID":"485aaca4-79c9-43a6-890f-027930320d48","Type":"ContainerDied","Data":"debe5032b453f98cf46ac7f16354202f933989f5800354a61d4a5c71c48d5674"} Jan 29 06:58:43 crc kubenswrapper[5017]: I0129 06:58:43.237853 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-75gzx" event={"ID":"485aaca4-79c9-43a6-890f-027930320d48","Type":"ContainerDied","Data":"9122fad2186d6cfd83a8508cd66dc659a3e91d9ea5b8582a34f7e3b0de0fe38f"} Jan 29 06:58:43 crc kubenswrapper[5017]: I0129 06:58:43.237901 5017 scope.go:117] "RemoveContainer" containerID="debe5032b453f98cf46ac7f16354202f933989f5800354a61d4a5c71c48d5674" Jan 29 06:58:43 crc kubenswrapper[5017]: I0129 06:58:43.256113 5017 scope.go:117] "RemoveContainer" containerID="2012612f965f67f4adc9969822cc4628d236abda663ed5799a22b1b351a45f90" Jan 29 06:58:43 crc kubenswrapper[5017]: I0129 06:58:43.273823 5017 scope.go:117] "RemoveContainer" containerID="12fc5654c771fe4bf15c3060228e0c01b1f180ac16dbab51b1820586740b5ea3" Jan 29 06:58:43 crc kubenswrapper[5017]: I0129 06:58:43.303364 5017 scope.go:117] "RemoveContainer" containerID="debe5032b453f98cf46ac7f16354202f933989f5800354a61d4a5c71c48d5674" Jan 29 06:58:43 crc kubenswrapper[5017]: E0129 06:58:43.303848 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"debe5032b453f98cf46ac7f16354202f933989f5800354a61d4a5c71c48d5674\": container with ID starting with debe5032b453f98cf46ac7f16354202f933989f5800354a61d4a5c71c48d5674 not found: ID does not exist" containerID="debe5032b453f98cf46ac7f16354202f933989f5800354a61d4a5c71c48d5674" Jan 29 06:58:43 crc kubenswrapper[5017]: I0129 06:58:43.303913 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"debe5032b453f98cf46ac7f16354202f933989f5800354a61d4a5c71c48d5674"} err="failed to get container status \"debe5032b453f98cf46ac7f16354202f933989f5800354a61d4a5c71c48d5674\": rpc error: code = NotFound desc = could not find container \"debe5032b453f98cf46ac7f16354202f933989f5800354a61d4a5c71c48d5674\": container with ID starting with debe5032b453f98cf46ac7f16354202f933989f5800354a61d4a5c71c48d5674 not found: ID does not exist" Jan 29 06:58:43 crc kubenswrapper[5017]: I0129 06:58:43.303996 5017 scope.go:117] "RemoveContainer" containerID="2012612f965f67f4adc9969822cc4628d236abda663ed5799a22b1b351a45f90" Jan 29 06:58:43 crc kubenswrapper[5017]: E0129 06:58:43.306121 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2012612f965f67f4adc9969822cc4628d236abda663ed5799a22b1b351a45f90\": container with ID starting with 2012612f965f67f4adc9969822cc4628d236abda663ed5799a22b1b351a45f90 not found: ID does not exist" containerID="2012612f965f67f4adc9969822cc4628d236abda663ed5799a22b1b351a45f90" Jan 29 06:58:43 crc kubenswrapper[5017]: I0129 06:58:43.306194 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2012612f965f67f4adc9969822cc4628d236abda663ed5799a22b1b351a45f90"} err="failed to get container status \"2012612f965f67f4adc9969822cc4628d236abda663ed5799a22b1b351a45f90\": rpc error: code = NotFound desc = could not find container 
\"2012612f965f67f4adc9969822cc4628d236abda663ed5799a22b1b351a45f90\": container with ID starting with 2012612f965f67f4adc9969822cc4628d236abda663ed5799a22b1b351a45f90 not found: ID does not exist" Jan 29 06:58:43 crc kubenswrapper[5017]: I0129 06:58:43.306251 5017 scope.go:117] "RemoveContainer" containerID="12fc5654c771fe4bf15c3060228e0c01b1f180ac16dbab51b1820586740b5ea3" Jan 29 06:58:43 crc kubenswrapper[5017]: E0129 06:58:43.306511 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"12fc5654c771fe4bf15c3060228e0c01b1f180ac16dbab51b1820586740b5ea3\": container with ID starting with 12fc5654c771fe4bf15c3060228e0c01b1f180ac16dbab51b1820586740b5ea3 not found: ID does not exist" containerID="12fc5654c771fe4bf15c3060228e0c01b1f180ac16dbab51b1820586740b5ea3" Jan 29 06:58:43 crc kubenswrapper[5017]: I0129 06:58:43.306539 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12fc5654c771fe4bf15c3060228e0c01b1f180ac16dbab51b1820586740b5ea3"} err="failed to get container status \"12fc5654c771fe4bf15c3060228e0c01b1f180ac16dbab51b1820586740b5ea3\": rpc error: code = NotFound desc = could not find container \"12fc5654c771fe4bf15c3060228e0c01b1f180ac16dbab51b1820586740b5ea3\": container with ID starting with 12fc5654c771fe4bf15c3060228e0c01b1f180ac16dbab51b1820586740b5ea3 not found: ID does not exist" Jan 29 06:58:43 crc kubenswrapper[5017]: I0129 06:58:43.324304 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/485aaca4-79c9-43a6-890f-027930320d48-utilities\") pod \"485aaca4-79c9-43a6-890f-027930320d48\" (UID: \"485aaca4-79c9-43a6-890f-027930320d48\") " Jan 29 06:58:43 crc kubenswrapper[5017]: I0129 06:58:43.324422 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n86h4\" (UniqueName: \"kubernetes.io/projected/485aaca4-79c9-43a6-890f-027930320d48-kube-api-access-n86h4\") pod \"485aaca4-79c9-43a6-890f-027930320d48\" (UID: \"485aaca4-79c9-43a6-890f-027930320d48\") " Jan 29 06:58:43 crc kubenswrapper[5017]: I0129 06:58:43.324511 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/485aaca4-79c9-43a6-890f-027930320d48-catalog-content\") pod \"485aaca4-79c9-43a6-890f-027930320d48\" (UID: \"485aaca4-79c9-43a6-890f-027930320d48\") " Jan 29 06:58:43 crc kubenswrapper[5017]: I0129 06:58:43.326378 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/485aaca4-79c9-43a6-890f-027930320d48-utilities" (OuterVolumeSpecName: "utilities") pod "485aaca4-79c9-43a6-890f-027930320d48" (UID: "485aaca4-79c9-43a6-890f-027930320d48"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:58:43 crc kubenswrapper[5017]: I0129 06:58:43.332904 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/485aaca4-79c9-43a6-890f-027930320d48-kube-api-access-n86h4" (OuterVolumeSpecName: "kube-api-access-n86h4") pod "485aaca4-79c9-43a6-890f-027930320d48" (UID: "485aaca4-79c9-43a6-890f-027930320d48"). InnerVolumeSpecName "kube-api-access-n86h4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:58:43 crc kubenswrapper[5017]: I0129 06:58:43.426565 5017 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/485aaca4-79c9-43a6-890f-027930320d48-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 06:58:43 crc kubenswrapper[5017]: I0129 06:58:43.426620 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n86h4\" (UniqueName: \"kubernetes.io/projected/485aaca4-79c9-43a6-890f-027930320d48-kube-api-access-n86h4\") on node \"crc\" DevicePath \"\"" Jan 29 06:58:43 crc kubenswrapper[5017]: I0129 06:58:43.497330 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/485aaca4-79c9-43a6-890f-027930320d48-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "485aaca4-79c9-43a6-890f-027930320d48" (UID: "485aaca4-79c9-43a6-890f-027930320d48"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:58:43 crc kubenswrapper[5017]: I0129 06:58:43.527884 5017 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/485aaca4-79c9-43a6-890f-027930320d48-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 06:58:43 crc kubenswrapper[5017]: I0129 06:58:43.582936 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-75gzx"] Jan 29 06:58:43 crc kubenswrapper[5017]: I0129 06:58:43.594903 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-75gzx"] Jan 29 06:58:44 crc kubenswrapper[5017]: I0129 06:58:44.327738 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="485aaca4-79c9-43a6-890f-027930320d48" path="/var/lib/kubelet/pods/485aaca4-79c9-43a6-890f-027930320d48/volumes" Jan 29 06:59:19 crc kubenswrapper[5017]: I0129 06:59:19.208358 5017 scope.go:117] "RemoveContainer" containerID="d8893e43c97db3642630ad5123ebf5beb52136d708962a0e4af92dc35927a00f" Jan 29 06:59:19 crc kubenswrapper[5017]: I0129 06:59:19.278837 5017 scope.go:117] "RemoveContainer" containerID="d6a842890092872bc99ef3cc4b7a0a17a34027b3b1b59d333f9b2250a24984af" Jan 29 06:59:19 crc kubenswrapper[5017]: I0129 06:59:19.313987 5017 scope.go:117] "RemoveContainer" containerID="7ba63192b2f1d1f17f0843b49c0e7a26e36bba8db193c114f4b86dffcf9c4f57" Jan 29 06:59:19 crc kubenswrapper[5017]: I0129 06:59:19.337710 5017 scope.go:117] "RemoveContainer" containerID="4968edbc67d199e151c3cfadae23c59a5bd0ef66ae75d9c98b233b354687e22a" Jan 29 06:59:19 crc kubenswrapper[5017]: I0129 06:59:19.367607 5017 scope.go:117] "RemoveContainer" containerID="c7c3ba3b05749f587e1869bb2967a2104143dfa655977bf2b117fd010ff26d71" Jan 29 06:59:19 crc kubenswrapper[5017]: I0129 06:59:19.407717 5017 scope.go:117] "RemoveContainer" containerID="fc8181badbf22d0a0ca2a38752e9a72d9fc5dfd63d92985c7fd235fc34836d5a" Jan 29 06:59:19 crc kubenswrapper[5017]: I0129 06:59:19.446273 5017 scope.go:117] "RemoveContainer" containerID="05d82a67cf4b6d0a7b95e554fe5b8dc77f21d424eec76141ed3b8b739537b1dc" Jan 29 06:59:19 crc kubenswrapper[5017]: I0129 06:59:19.480316 5017 scope.go:117] "RemoveContainer" containerID="d6de78aeb2689362859b99dec9103a593ca802a2f6c8ce748edc505d9f000880" Jan 29 06:59:19 crc kubenswrapper[5017]: I0129 06:59:19.504278 5017 scope.go:117] "RemoveContainer" containerID="ae39ebd849460fe9df6348a2030070d4372c857c8e320a52e9f26c2c698a8ef8" Jan 29 06:59:19 crc kubenswrapper[5017]: 
I0129 06:59:19.547900 5017 scope.go:117] "RemoveContainer" containerID="ad39d19c0eae6d091513b0dda8def0c81ac302c26687b45f9f7175e673527e1c" Jan 29 06:59:19 crc kubenswrapper[5017]: I0129 06:59:19.588398 5017 scope.go:117] "RemoveContainer" containerID="8045cd391fa98d128b172401a18c6cadab6e557db29267f45341f19f2caf8280" Jan 29 06:59:19 crc kubenswrapper[5017]: I0129 06:59:19.607879 5017 scope.go:117] "RemoveContainer" containerID="3fbc7f0307ba48e49b763df81a77e31cf68a82ad47bf30dc6675ba61085c7a13" Jan 29 06:59:19 crc kubenswrapper[5017]: I0129 06:59:19.634007 5017 scope.go:117] "RemoveContainer" containerID="b62cfb03118814e317118c0c2e8d9bf6c156cdb23506386fa46d085f58c451c9" Jan 29 06:59:19 crc kubenswrapper[5017]: I0129 06:59:19.654432 5017 scope.go:117] "RemoveContainer" containerID="6db3967a16e3e221bcaee7e55539ba2cd58d8ae575785bf003c4558191c1cde0" Jan 29 06:59:19 crc kubenswrapper[5017]: I0129 06:59:19.673882 5017 scope.go:117] "RemoveContainer" containerID="2f645a2b34a7b23c32b5ae9082fb6698bb9e0e0dd2c92e99e4f7cb9a5d290fe2" Jan 29 06:59:19 crc kubenswrapper[5017]: I0129 06:59:19.690370 5017 scope.go:117] "RemoveContainer" containerID="7524bbc873f647c6883ae3bbc0be1f236c76e6c7fba1d892c0445d6d6604fd55" Jan 29 06:59:19 crc kubenswrapper[5017]: I0129 06:59:19.709418 5017 scope.go:117] "RemoveContainer" containerID="d1aae7294767caefa69112c93ebe589a451ce29b834f11fdb8bd104f755b2457" Jan 29 06:59:19 crc kubenswrapper[5017]: I0129 06:59:19.732989 5017 scope.go:117] "RemoveContainer" containerID="05b0917387915cab5bed915929f39d6918f89117f895626caf87657510319aca" Jan 29 06:59:19 crc kubenswrapper[5017]: I0129 06:59:19.753153 5017 scope.go:117] "RemoveContainer" containerID="7f718e4405e859db950088c39e7a46f62463a642095bd1f0bb62bcdf8b9c85bc" Jan 29 06:59:56 crc kubenswrapper[5017]: I0129 06:59:56.539578 5017 patch_prober.go:28] interesting pod/machine-config-daemon-895pl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 06:59:56 crc kubenswrapper[5017]: I0129 06:59:56.541324 5017 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 07:00:00 crc kubenswrapper[5017]: I0129 07:00:00.381888 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494500-b5nnz"] Jan 29 07:00:00 crc kubenswrapper[5017]: E0129 07:00:00.382425 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="485aaca4-79c9-43a6-890f-027930320d48" containerName="extract-content" Jan 29 07:00:00 crc kubenswrapper[5017]: I0129 07:00:00.382446 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="485aaca4-79c9-43a6-890f-027930320d48" containerName="extract-content" Jan 29 07:00:00 crc kubenswrapper[5017]: E0129 07:00:00.382459 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="485aaca4-79c9-43a6-890f-027930320d48" containerName="registry-server" Jan 29 07:00:00 crc kubenswrapper[5017]: I0129 07:00:00.382467 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="485aaca4-79c9-43a6-890f-027930320d48" containerName="registry-server" Jan 29 07:00:00 crc kubenswrapper[5017]: E0129 07:00:00.382484 5017 
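The burst of RemoveContainer calls at 06:59:19 (and again at 07:00:20 and 07:01:20 below), with no surrounding pod activity, looks like the kubelet's periodic container garbage collection, which prunes exited containers by age and per-pod count. An illustrative sweep in that spirit; the policy knobs are assumptions, not the kubelet's exact defaults:

    // Remove dead containers older than minAge, keeping at most maxPerPod of
    // the newest per pod. Loosely modeled on kubelet container GC policy.
    package main

    import (
        "fmt"
        "sort"
        "time"
    )

    type dead struct {
        id       string
        pod      string
        finished time.Time
    }

    func sweep(cs []dead, minAge time.Duration, maxPerPod int, now time.Time) []string {
        byPod := map[string][]dead{}
        for _, c := range cs {
            if now.Sub(c.finished) >= minAge { // recently-exited containers are kept for debugging
                byPod[c.pod] = append(byPod[c.pod], c)
            }
        }
        var remove []string
        for _, list := range byPod {
            sort.Slice(list, func(i, j int) bool { return list[i].finished.Before(list[j].finished) })
            if extra := len(list) - maxPerPod; extra > 0 {
                for _, c := range list[:extra] { // oldest first
                    remove = append(remove, c.id)
                }
            }
        }
        return remove
    }

    func main() {
        now := time.Now()
        fmt.Println(sweep([]dead{
            {"d8893e43", "pod-a", now.Add(-10 * time.Minute)},
            {"d6a84289", "pod-a", now.Add(-5 * time.Minute)},
        }, time.Minute, 1, now)) // -> [d8893e43]
    }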
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="485aaca4-79c9-43a6-890f-027930320d48" containerName="extract-utilities" Jan 29 07:00:00 crc kubenswrapper[5017]: I0129 07:00:00.382493 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="485aaca4-79c9-43a6-890f-027930320d48" containerName="extract-utilities" Jan 29 07:00:00 crc kubenswrapper[5017]: I0129 07:00:00.382694 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="485aaca4-79c9-43a6-890f-027930320d48" containerName="registry-server" Jan 29 07:00:00 crc kubenswrapper[5017]: I0129 07:00:00.383364 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494500-b5nnz" Jan 29 07:00:00 crc kubenswrapper[5017]: I0129 07:00:00.386526 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 29 07:00:00 crc kubenswrapper[5017]: I0129 07:00:00.387849 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 29 07:00:00 crc kubenswrapper[5017]: I0129 07:00:00.399740 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494500-b5nnz"] Jan 29 07:00:00 crc kubenswrapper[5017]: I0129 07:00:00.447324 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mprcv\" (UniqueName: \"kubernetes.io/projected/59d62406-d251-4109-8b53-199276f89853-kube-api-access-mprcv\") pod \"collect-profiles-29494500-b5nnz\" (UID: \"59d62406-d251-4109-8b53-199276f89853\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494500-b5nnz" Jan 29 07:00:00 crc kubenswrapper[5017]: I0129 07:00:00.447417 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/59d62406-d251-4109-8b53-199276f89853-config-volume\") pod \"collect-profiles-29494500-b5nnz\" (UID: \"59d62406-d251-4109-8b53-199276f89853\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494500-b5nnz" Jan 29 07:00:00 crc kubenswrapper[5017]: I0129 07:00:00.447447 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/59d62406-d251-4109-8b53-199276f89853-secret-volume\") pod \"collect-profiles-29494500-b5nnz\" (UID: \"59d62406-d251-4109-8b53-199276f89853\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494500-b5nnz" Jan 29 07:00:00 crc kubenswrapper[5017]: I0129 07:00:00.549901 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/59d62406-d251-4109-8b53-199276f89853-config-volume\") pod \"collect-profiles-29494500-b5nnz\" (UID: \"59d62406-d251-4109-8b53-199276f89853\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494500-b5nnz" Jan 29 07:00:00 crc kubenswrapper[5017]: I0129 07:00:00.549985 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/59d62406-d251-4109-8b53-199276f89853-secret-volume\") pod \"collect-profiles-29494500-b5nnz\" (UID: \"59d62406-d251-4109-8b53-199276f89853\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494500-b5nnz" Jan 29 07:00:00 crc kubenswrapper[5017]: 
I0129 07:00:00.550065 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mprcv\" (UniqueName: \"kubernetes.io/projected/59d62406-d251-4109-8b53-199276f89853-kube-api-access-mprcv\") pod \"collect-profiles-29494500-b5nnz\" (UID: \"59d62406-d251-4109-8b53-199276f89853\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494500-b5nnz" Jan 29 07:00:00 crc kubenswrapper[5017]: I0129 07:00:00.551916 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/59d62406-d251-4109-8b53-199276f89853-config-volume\") pod \"collect-profiles-29494500-b5nnz\" (UID: \"59d62406-d251-4109-8b53-199276f89853\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494500-b5nnz" Jan 29 07:00:00 crc kubenswrapper[5017]: I0129 07:00:00.566169 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/59d62406-d251-4109-8b53-199276f89853-secret-volume\") pod \"collect-profiles-29494500-b5nnz\" (UID: \"59d62406-d251-4109-8b53-199276f89853\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494500-b5nnz" Jan 29 07:00:00 crc kubenswrapper[5017]: I0129 07:00:00.567938 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mprcv\" (UniqueName: \"kubernetes.io/projected/59d62406-d251-4109-8b53-199276f89853-kube-api-access-mprcv\") pod \"collect-profiles-29494500-b5nnz\" (UID: \"59d62406-d251-4109-8b53-199276f89853\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494500-b5nnz" Jan 29 07:00:00 crc kubenswrapper[5017]: I0129 07:00:00.707271 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494500-b5nnz" Jan 29 07:00:01 crc kubenswrapper[5017]: I0129 07:00:01.203333 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494500-b5nnz"] Jan 29 07:00:01 crc kubenswrapper[5017]: I0129 07:00:01.355178 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494500-b5nnz" event={"ID":"59d62406-d251-4109-8b53-199276f89853","Type":"ContainerStarted","Data":"dbbf704b33962b8926aa4265edc64216a4c394cf1311e89cb7b9d611c186752a"} Jan 29 07:00:02 crc kubenswrapper[5017]: I0129 07:00:02.365508 5017 generic.go:334] "Generic (PLEG): container finished" podID="59d62406-d251-4109-8b53-199276f89853" containerID="589514c38a1db2f0cf875619ab13082e9ba23e54a5e90693d464e405f2c436dc" exitCode=0 Jan 29 07:00:02 crc kubenswrapper[5017]: I0129 07:00:02.365583 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494500-b5nnz" event={"ID":"59d62406-d251-4109-8b53-199276f89853","Type":"ContainerDied","Data":"589514c38a1db2f0cf875619ab13082e9ba23e54a5e90693d464e405f2c436dc"} Jan 29 07:00:03 crc kubenswrapper[5017]: I0129 07:00:03.728365 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494500-b5nnz" Jan 29 07:00:03 crc kubenswrapper[5017]: I0129 07:00:03.826850 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/59d62406-d251-4109-8b53-199276f89853-secret-volume\") pod \"59d62406-d251-4109-8b53-199276f89853\" (UID: \"59d62406-d251-4109-8b53-199276f89853\") " Jan 29 07:00:03 crc kubenswrapper[5017]: I0129 07:00:03.826945 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/59d62406-d251-4109-8b53-199276f89853-config-volume\") pod \"59d62406-d251-4109-8b53-199276f89853\" (UID: \"59d62406-d251-4109-8b53-199276f89853\") " Jan 29 07:00:03 crc kubenswrapper[5017]: I0129 07:00:03.827013 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mprcv\" (UniqueName: \"kubernetes.io/projected/59d62406-d251-4109-8b53-199276f89853-kube-api-access-mprcv\") pod \"59d62406-d251-4109-8b53-199276f89853\" (UID: \"59d62406-d251-4109-8b53-199276f89853\") " Jan 29 07:00:03 crc kubenswrapper[5017]: I0129 07:00:03.827709 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59d62406-d251-4109-8b53-199276f89853-config-volume" (OuterVolumeSpecName: "config-volume") pod "59d62406-d251-4109-8b53-199276f89853" (UID: "59d62406-d251-4109-8b53-199276f89853"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 07:00:03 crc kubenswrapper[5017]: I0129 07:00:03.833705 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59d62406-d251-4109-8b53-199276f89853-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "59d62406-d251-4109-8b53-199276f89853" (UID: "59d62406-d251-4109-8b53-199276f89853"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:00:03 crc kubenswrapper[5017]: I0129 07:00:03.835292 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59d62406-d251-4109-8b53-199276f89853-kube-api-access-mprcv" (OuterVolumeSpecName: "kube-api-access-mprcv") pod "59d62406-d251-4109-8b53-199276f89853" (UID: "59d62406-d251-4109-8b53-199276f89853"). InnerVolumeSpecName "kube-api-access-mprcv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:00:03 crc kubenswrapper[5017]: I0129 07:00:03.928896 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mprcv\" (UniqueName: \"kubernetes.io/projected/59d62406-d251-4109-8b53-199276f89853-kube-api-access-mprcv\") on node \"crc\" DevicePath \"\"" Jan 29 07:00:03 crc kubenswrapper[5017]: I0129 07:00:03.928993 5017 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/59d62406-d251-4109-8b53-199276f89853-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 29 07:00:03 crc kubenswrapper[5017]: I0129 07:00:03.929018 5017 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/59d62406-d251-4109-8b53-199276f89853-config-volume\") on node \"crc\" DevicePath \"\"" Jan 29 07:00:04 crc kubenswrapper[5017]: I0129 07:00:04.387143 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494500-b5nnz" event={"ID":"59d62406-d251-4109-8b53-199276f89853","Type":"ContainerDied","Data":"dbbf704b33962b8926aa4265edc64216a4c394cf1311e89cb7b9d611c186752a"} Jan 29 07:00:04 crc kubenswrapper[5017]: I0129 07:00:04.387199 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dbbf704b33962b8926aa4265edc64216a4c394cf1311e89cb7b9d611c186752a" Jan 29 07:00:04 crc kubenswrapper[5017]: I0129 07:00:04.387222 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494500-b5nnz" Jan 29 07:00:20 crc kubenswrapper[5017]: I0129 07:00:20.070848 5017 scope.go:117] "RemoveContainer" containerID="442e5abb015fed4f8c4c9b13038640d3c19653d0278b5712bce3dd46a40c4b6d" Jan 29 07:00:20 crc kubenswrapper[5017]: I0129 07:00:20.092386 5017 scope.go:117] "RemoveContainer" containerID="d610e404469ca76ec8329edcb79d636e2d0e6b2aa686e2a81c9c8111278f86cd" Jan 29 07:00:20 crc kubenswrapper[5017]: I0129 07:00:20.161416 5017 scope.go:117] "RemoveContainer" containerID="6cca86191df9f6f4e268e89b2a32499fc588d1bcd097392c370bdb142ec5eb90" Jan 29 07:00:20 crc kubenswrapper[5017]: I0129 07:00:20.201471 5017 scope.go:117] "RemoveContainer" containerID="10776bfed6b6bed671c469db271afab32367944233580d2f71aac9a20d6c8c5f" Jan 29 07:00:20 crc kubenswrapper[5017]: I0129 07:00:20.229879 5017 scope.go:117] "RemoveContainer" containerID="c3626a55e2e2035ab1274eb197cb04be9df1a946c7042639ef0362e5a1e980b2" Jan 29 07:00:20 crc kubenswrapper[5017]: I0129 07:00:20.288618 5017 scope.go:117] "RemoveContainer" containerID="ca433794f725d27e083742abb5dc11becd94156c42fefff51806909b06722939" Jan 29 07:00:20 crc kubenswrapper[5017]: I0129 07:00:20.339344 5017 scope.go:117] "RemoveContainer" containerID="9bc47a011cbee9c1f0f45b9670ed8044c92807a4765cd8f12b9edc01353442b0" Jan 29 07:00:20 crc kubenswrapper[5017]: I0129 07:00:20.379371 5017 scope.go:117] "RemoveContainer" containerID="7966f6aff7ce81cffecfcf8cdd883b5dbb79939b4152cdec01a78f8318795a91" Jan 29 07:00:26 crc kubenswrapper[5017]: I0129 07:00:26.539326 5017 patch_prober.go:28] interesting pod/machine-config-daemon-895pl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 07:00:26 crc kubenswrapper[5017]: I0129 07:00:26.540418 5017 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 07:00:43 crc kubenswrapper[5017]: I0129 07:00:43.023514 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-x625j"] Jan 29 07:00:43 crc kubenswrapper[5017]: E0129 07:00:43.024774 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59d62406-d251-4109-8b53-199276f89853" containerName="collect-profiles" Jan 29 07:00:43 crc kubenswrapper[5017]: I0129 07:00:43.024792 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="59d62406-d251-4109-8b53-199276f89853" containerName="collect-profiles" Jan 29 07:00:43 crc kubenswrapper[5017]: I0129 07:00:43.025005 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="59d62406-d251-4109-8b53-199276f89853" containerName="collect-profiles" Jan 29 07:00:43 crc kubenswrapper[5017]: I0129 07:00:43.026366 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x625j" Jan 29 07:00:43 crc kubenswrapper[5017]: I0129 07:00:43.048984 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-x625j"] Jan 29 07:00:43 crc kubenswrapper[5017]: I0129 07:00:43.118189 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53242c86-ab5f-4e0e-a472-a3f0ebb9f7f1-utilities\") pod \"community-operators-x625j\" (UID: \"53242c86-ab5f-4e0e-a472-a3f0ebb9f7f1\") " pod="openshift-marketplace/community-operators-x625j" Jan 29 07:00:43 crc kubenswrapper[5017]: I0129 07:00:43.118429 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53242c86-ab5f-4e0e-a472-a3f0ebb9f7f1-catalog-content\") pod \"community-operators-x625j\" (UID: \"53242c86-ab5f-4e0e-a472-a3f0ebb9f7f1\") " pod="openshift-marketplace/community-operators-x625j" Jan 29 07:00:43 crc kubenswrapper[5017]: I0129 07:00:43.118537 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvfhm\" (UniqueName: \"kubernetes.io/projected/53242c86-ab5f-4e0e-a472-a3f0ebb9f7f1-kube-api-access-tvfhm\") pod \"community-operators-x625j\" (UID: \"53242c86-ab5f-4e0e-a472-a3f0ebb9f7f1\") " pod="openshift-marketplace/community-operators-x625j" Jan 29 07:00:43 crc kubenswrapper[5017]: I0129 07:00:43.219940 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53242c86-ab5f-4e0e-a472-a3f0ebb9f7f1-utilities\") pod \"community-operators-x625j\" (UID: \"53242c86-ab5f-4e0e-a472-a3f0ebb9f7f1\") " pod="openshift-marketplace/community-operators-x625j" Jan 29 07:00:43 crc kubenswrapper[5017]: I0129 07:00:43.220637 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53242c86-ab5f-4e0e-a472-a3f0ebb9f7f1-catalog-content\") pod \"community-operators-x625j\" (UID: \"53242c86-ab5f-4e0e-a472-a3f0ebb9f7f1\") " pod="openshift-marketplace/community-operators-x625j" Jan 29 07:00:43 crc kubenswrapper[5017]: I0129 07:00:43.220673 5017 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-tvfhm\" (UniqueName: \"kubernetes.io/projected/53242c86-ab5f-4e0e-a472-a3f0ebb9f7f1-kube-api-access-tvfhm\") pod \"community-operators-x625j\" (UID: \"53242c86-ab5f-4e0e-a472-a3f0ebb9f7f1\") " pod="openshift-marketplace/community-operators-x625j" Jan 29 07:00:43 crc kubenswrapper[5017]: I0129 07:00:43.221561 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53242c86-ab5f-4e0e-a472-a3f0ebb9f7f1-utilities\") pod \"community-operators-x625j\" (UID: \"53242c86-ab5f-4e0e-a472-a3f0ebb9f7f1\") " pod="openshift-marketplace/community-operators-x625j" Jan 29 07:00:43 crc kubenswrapper[5017]: I0129 07:00:43.221722 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53242c86-ab5f-4e0e-a472-a3f0ebb9f7f1-catalog-content\") pod \"community-operators-x625j\" (UID: \"53242c86-ab5f-4e0e-a472-a3f0ebb9f7f1\") " pod="openshift-marketplace/community-operators-x625j" Jan 29 07:00:43 crc kubenswrapper[5017]: I0129 07:00:43.251722 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvfhm\" (UniqueName: \"kubernetes.io/projected/53242c86-ab5f-4e0e-a472-a3f0ebb9f7f1-kube-api-access-tvfhm\") pod \"community-operators-x625j\" (UID: \"53242c86-ab5f-4e0e-a472-a3f0ebb9f7f1\") " pod="openshift-marketplace/community-operators-x625j" Jan 29 07:00:43 crc kubenswrapper[5017]: I0129 07:00:43.367334 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x625j" Jan 29 07:00:43 crc kubenswrapper[5017]: I0129 07:00:43.686968 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-x625j"] Jan 29 07:00:43 crc kubenswrapper[5017]: I0129 07:00:43.783318 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x625j" event={"ID":"53242c86-ab5f-4e0e-a472-a3f0ebb9f7f1","Type":"ContainerStarted","Data":"0cf8180bab93b735654be06d7912e092435d4e821dfed4b9a2b7ec90a39cf729"} Jan 29 07:00:44 crc kubenswrapper[5017]: I0129 07:00:44.814349 5017 generic.go:334] "Generic (PLEG): container finished" podID="53242c86-ab5f-4e0e-a472-a3f0ebb9f7f1" containerID="f25ad9b2437453c8c1dca15881f4f27c5e2bf73043e28cfad171718bfd0d6c85" exitCode=0 Jan 29 07:00:44 crc kubenswrapper[5017]: I0129 07:00:44.814468 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x625j" event={"ID":"53242c86-ab5f-4e0e-a472-a3f0ebb9f7f1","Type":"ContainerDied","Data":"f25ad9b2437453c8c1dca15881f4f27c5e2bf73043e28cfad171718bfd0d6c85"} Jan 29 07:00:45 crc kubenswrapper[5017]: I0129 07:00:45.876804 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x625j" event={"ID":"53242c86-ab5f-4e0e-a472-a3f0ebb9f7f1","Type":"ContainerStarted","Data":"b828db43707c7cd342fe4b0aaa128a6d4b2ded923568a48f689ed71e1ba1078a"} Jan 29 07:00:46 crc kubenswrapper[5017]: I0129 07:00:46.894154 5017 generic.go:334] "Generic (PLEG): container finished" podID="53242c86-ab5f-4e0e-a472-a3f0ebb9f7f1" containerID="b828db43707c7cd342fe4b0aaa128a6d4b2ded923568a48f689ed71e1ba1078a" exitCode=0 Jan 29 07:00:46 crc kubenswrapper[5017]: I0129 07:00:46.894269 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x625j" 
event={"ID":"53242c86-ab5f-4e0e-a472-a3f0ebb9f7f1","Type":"ContainerDied","Data":"b828db43707c7cd342fe4b0aaa128a6d4b2ded923568a48f689ed71e1ba1078a"} Jan 29 07:00:47 crc kubenswrapper[5017]: I0129 07:00:47.907184 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x625j" event={"ID":"53242c86-ab5f-4e0e-a472-a3f0ebb9f7f1","Type":"ContainerStarted","Data":"3efc32c61f5317589656eb72477760eda4a04837a1892e428e06d9ba721441ab"} Jan 29 07:00:47 crc kubenswrapper[5017]: I0129 07:00:47.935898 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-x625j" podStartSLOduration=3.377248213 podStartE2EDuration="5.935872433s" podCreationTimestamp="2026-01-29 07:00:42 +0000 UTC" firstStartedPulling="2026-01-29 07:00:44.817176506 +0000 UTC m=+1531.191624156" lastFinishedPulling="2026-01-29 07:00:47.375800726 +0000 UTC m=+1533.750248376" observedRunningTime="2026-01-29 07:00:47.934678125 +0000 UTC m=+1534.309125745" watchObservedRunningTime="2026-01-29 07:00:47.935872433 +0000 UTC m=+1534.310320043" Jan 29 07:00:53 crc kubenswrapper[5017]: I0129 07:00:53.368394 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-x625j" Jan 29 07:00:53 crc kubenswrapper[5017]: I0129 07:00:53.369212 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-x625j" Jan 29 07:00:53 crc kubenswrapper[5017]: I0129 07:00:53.416211 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-x625j" Jan 29 07:00:54 crc kubenswrapper[5017]: I0129 07:00:54.028445 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-x625j" Jan 29 07:00:54 crc kubenswrapper[5017]: I0129 07:00:54.087533 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-x625j"] Jan 29 07:00:55 crc kubenswrapper[5017]: I0129 07:00:55.997770 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-x625j" podUID="53242c86-ab5f-4e0e-a472-a3f0ebb9f7f1" containerName="registry-server" containerID="cri-o://3efc32c61f5317589656eb72477760eda4a04837a1892e428e06d9ba721441ab" gracePeriod=2 Jan 29 07:00:56 crc kubenswrapper[5017]: I0129 07:00:56.531667 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-x625j" Jan 29 07:00:56 crc kubenswrapper[5017]: I0129 07:00:56.538970 5017 patch_prober.go:28] interesting pod/machine-config-daemon-895pl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 07:00:56 crc kubenswrapper[5017]: I0129 07:00:56.539065 5017 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 07:00:56 crc kubenswrapper[5017]: I0129 07:00:56.539161 5017 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-895pl" Jan 29 07:00:56 crc kubenswrapper[5017]: I0129 07:00:56.539992 5017 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8db4bfbb76a49639afbf231ff068e7d8be3ddf559761058f493b985b4831c9ab"} pod="openshift-machine-config-operator/machine-config-daemon-895pl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 07:00:56 crc kubenswrapper[5017]: I0129 07:00:56.540077 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" containerID="cri-o://8db4bfbb76a49639afbf231ff068e7d8be3ddf559761058f493b985b4831c9ab" gracePeriod=600 Jan 29 07:00:56 crc kubenswrapper[5017]: I0129 07:00:56.599743 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53242c86-ab5f-4e0e-a472-a3f0ebb9f7f1-utilities\") pod \"53242c86-ab5f-4e0e-a472-a3f0ebb9f7f1\" (UID: \"53242c86-ab5f-4e0e-a472-a3f0ebb9f7f1\") " Jan 29 07:00:56 crc kubenswrapper[5017]: I0129 07:00:56.600003 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53242c86-ab5f-4e0e-a472-a3f0ebb9f7f1-catalog-content\") pod \"53242c86-ab5f-4e0e-a472-a3f0ebb9f7f1\" (UID: \"53242c86-ab5f-4e0e-a472-a3f0ebb9f7f1\") " Jan 29 07:00:56 crc kubenswrapper[5017]: I0129 07:00:56.600049 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tvfhm\" (UniqueName: \"kubernetes.io/projected/53242c86-ab5f-4e0e-a472-a3f0ebb9f7f1-kube-api-access-tvfhm\") pod \"53242c86-ab5f-4e0e-a472-a3f0ebb9f7f1\" (UID: \"53242c86-ab5f-4e0e-a472-a3f0ebb9f7f1\") " Jan 29 07:00:56 crc kubenswrapper[5017]: I0129 07:00:56.600781 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53242c86-ab5f-4e0e-a472-a3f0ebb9f7f1-utilities" (OuterVolumeSpecName: "utilities") pod "53242c86-ab5f-4e0e-a472-a3f0ebb9f7f1" (UID: "53242c86-ab5f-4e0e-a472-a3f0ebb9f7f1"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:00:56 crc kubenswrapper[5017]: I0129 07:00:56.610495 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53242c86-ab5f-4e0e-a472-a3f0ebb9f7f1-kube-api-access-tvfhm" (OuterVolumeSpecName: "kube-api-access-tvfhm") pod "53242c86-ab5f-4e0e-a472-a3f0ebb9f7f1" (UID: "53242c86-ab5f-4e0e-a472-a3f0ebb9f7f1"). InnerVolumeSpecName "kube-api-access-tvfhm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:00:56 crc kubenswrapper[5017]: I0129 07:00:56.656373 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53242c86-ab5f-4e0e-a472-a3f0ebb9f7f1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "53242c86-ab5f-4e0e-a472-a3f0ebb9f7f1" (UID: "53242c86-ab5f-4e0e-a472-a3f0ebb9f7f1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:00:56 crc kubenswrapper[5017]: E0129 07:00:56.696629 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 07:00:56 crc kubenswrapper[5017]: I0129 07:00:56.703220 5017 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53242c86-ab5f-4e0e-a472-a3f0ebb9f7f1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 07:00:56 crc kubenswrapper[5017]: I0129 07:00:56.703277 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tvfhm\" (UniqueName: \"kubernetes.io/projected/53242c86-ab5f-4e0e-a472-a3f0ebb9f7f1-kube-api-access-tvfhm\") on node \"crc\" DevicePath \"\"" Jan 29 07:00:56 crc kubenswrapper[5017]: I0129 07:00:56.703302 5017 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53242c86-ab5f-4e0e-a472-a3f0ebb9f7f1-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 07:00:57 crc kubenswrapper[5017]: I0129 07:00:57.017034 5017 generic.go:334] "Generic (PLEG): container finished" podID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerID="8db4bfbb76a49639afbf231ff068e7d8be3ddf559761058f493b985b4831c9ab" exitCode=0 Jan 29 07:00:57 crc kubenswrapper[5017]: I0129 07:00:57.017086 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-895pl" event={"ID":"2672ef63-7861-4c3d-a1b4-03cc9d18f8e2","Type":"ContainerDied","Data":"8db4bfbb76a49639afbf231ff068e7d8be3ddf559761058f493b985b4831c9ab"} Jan 29 07:00:57 crc kubenswrapper[5017]: I0129 07:00:57.017336 5017 scope.go:117] "RemoveContainer" containerID="f0fa6e2a79db70c1d184fc860ee5a35d194bde9485b97eabb60341d87278b250" Jan 29 07:00:57 crc kubenswrapper[5017]: I0129 07:00:57.020050 5017 scope.go:117] "RemoveContainer" containerID="8db4bfbb76a49639afbf231ff068e7d8be3ddf559761058f493b985b4831c9ab" Jan 29 07:00:57 crc kubenswrapper[5017]: E0129 07:00:57.023701 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
Jan 29 07:00:57 crc kubenswrapper[5017]: I0129 07:00:57.026226 5017 generic.go:334] "Generic (PLEG): container finished" podID="53242c86-ab5f-4e0e-a472-a3f0ebb9f7f1" containerID="3efc32c61f5317589656eb72477760eda4a04837a1892e428e06d9ba721441ab" exitCode=0
Jan 29 07:00:57 crc kubenswrapper[5017]: I0129 07:00:57.026312 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x625j" event={"ID":"53242c86-ab5f-4e0e-a472-a3f0ebb9f7f1","Type":"ContainerDied","Data":"3efc32c61f5317589656eb72477760eda4a04837a1892e428e06d9ba721441ab"}
Jan 29 07:00:57 crc kubenswrapper[5017]: I0129 07:00:57.026372 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x625j" event={"ID":"53242c86-ab5f-4e0e-a472-a3f0ebb9f7f1","Type":"ContainerDied","Data":"0cf8180bab93b735654be06d7912e092435d4e821dfed4b9a2b7ec90a39cf729"}
Jan 29 07:00:57 crc kubenswrapper[5017]: I0129 07:00:57.026474 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x625j"
Jan 29 07:00:57 crc kubenswrapper[5017]: I0129 07:00:57.053371 5017 scope.go:117] "RemoveContainer" containerID="3efc32c61f5317589656eb72477760eda4a04837a1892e428e06d9ba721441ab"
Jan 29 07:00:57 crc kubenswrapper[5017]: I0129 07:00:57.082633 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-x625j"]
Jan 29 07:00:57 crc kubenswrapper[5017]: I0129 07:00:57.102205 5017 scope.go:117] "RemoveContainer" containerID="b828db43707c7cd342fe4b0aaa128a6d4b2ded923568a48f689ed71e1ba1078a"
Jan 29 07:00:57 crc kubenswrapper[5017]: I0129 07:00:57.104153 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-x625j"]
Jan 29 07:00:57 crc kubenswrapper[5017]: I0129 07:00:57.126776 5017 scope.go:117] "RemoveContainer" containerID="f25ad9b2437453c8c1dca15881f4f27c5e2bf73043e28cfad171718bfd0d6c85"
Jan 29 07:00:57 crc kubenswrapper[5017]: I0129 07:00:57.144747 5017 scope.go:117] "RemoveContainer" containerID="3efc32c61f5317589656eb72477760eda4a04837a1892e428e06d9ba721441ab"
Jan 29 07:00:57 crc kubenswrapper[5017]: E0129 07:00:57.145324 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3efc32c61f5317589656eb72477760eda4a04837a1892e428e06d9ba721441ab\": container with ID starting with 3efc32c61f5317589656eb72477760eda4a04837a1892e428e06d9ba721441ab not found: ID does not exist" containerID="3efc32c61f5317589656eb72477760eda4a04837a1892e428e06d9ba721441ab"
Jan 29 07:00:57 crc kubenswrapper[5017]: I0129 07:00:57.145417 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3efc32c61f5317589656eb72477760eda4a04837a1892e428e06d9ba721441ab"} err="failed to get container status \"3efc32c61f5317589656eb72477760eda4a04837a1892e428e06d9ba721441ab\": rpc error: code = NotFound desc = could not find container \"3efc32c61f5317589656eb72477760eda4a04837a1892e428e06d9ba721441ab\": container with ID starting with 3efc32c61f5317589656eb72477760eda4a04837a1892e428e06d9ba721441ab not found: ID does not exist"
Jan 29 07:00:57 crc kubenswrapper[5017]: I0129 07:00:57.145505 5017 scope.go:117] "RemoveContainer" containerID="b828db43707c7cd342fe4b0aaa128a6d4b2ded923568a48f689ed71e1ba1078a"
Jan 29 07:00:57 crc kubenswrapper[5017]: E0129 07:00:57.146005 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b828db43707c7cd342fe4b0aaa128a6d4b2ded923568a48f689ed71e1ba1078a\": container with ID starting with b828db43707c7cd342fe4b0aaa128a6d4b2ded923568a48f689ed71e1ba1078a not found: ID does not exist" containerID="b828db43707c7cd342fe4b0aaa128a6d4b2ded923568a48f689ed71e1ba1078a"
Jan 29 07:00:57 crc kubenswrapper[5017]: I0129 07:00:57.146080 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b828db43707c7cd342fe4b0aaa128a6d4b2ded923568a48f689ed71e1ba1078a"} err="failed to get container status \"b828db43707c7cd342fe4b0aaa128a6d4b2ded923568a48f689ed71e1ba1078a\": rpc error: code = NotFound desc = could not find container \"b828db43707c7cd342fe4b0aaa128a6d4b2ded923568a48f689ed71e1ba1078a\": container with ID starting with b828db43707c7cd342fe4b0aaa128a6d4b2ded923568a48f689ed71e1ba1078a not found: ID does not exist"
Jan 29 07:00:57 crc kubenswrapper[5017]: I0129 07:00:57.146140 5017 scope.go:117] "RemoveContainer" containerID="f25ad9b2437453c8c1dca15881f4f27c5e2bf73043e28cfad171718bfd0d6c85"
Jan 29 07:00:57 crc kubenswrapper[5017]: E0129 07:00:57.146778 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f25ad9b2437453c8c1dca15881f4f27c5e2bf73043e28cfad171718bfd0d6c85\": container with ID starting with f25ad9b2437453c8c1dca15881f4f27c5e2bf73043e28cfad171718bfd0d6c85 not found: ID does not exist" containerID="f25ad9b2437453c8c1dca15881f4f27c5e2bf73043e28cfad171718bfd0d6c85"
Jan 29 07:00:57 crc kubenswrapper[5017]: I0129 07:00:57.146854 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f25ad9b2437453c8c1dca15881f4f27c5e2bf73043e28cfad171718bfd0d6c85"} err="failed to get container status \"f25ad9b2437453c8c1dca15881f4f27c5e2bf73043e28cfad171718bfd0d6c85\": rpc error: code = NotFound desc = could not find container \"f25ad9b2437453c8c1dca15881f4f27c5e2bf73043e28cfad171718bfd0d6c85\": container with ID starting with f25ad9b2437453c8c1dca15881f4f27c5e2bf73043e28cfad171718bfd0d6c85 not found: ID does not exist"
Jan 29 07:00:58 crc kubenswrapper[5017]: I0129 07:00:58.330586 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53242c86-ab5f-4e0e-a472-a3f0ebb9f7f1" path="/var/lib/kubelet/pods/53242c86-ab5f-4e0e-a472-a3f0ebb9f7f1/volumes"
Jan 29 07:01:10 crc kubenswrapper[5017]: I0129 07:01:10.316810 5017 scope.go:117] "RemoveContainer" containerID="8db4bfbb76a49639afbf231ff068e7d8be3ddf559761058f493b985b4831c9ab"
Jan 29 07:01:10 crc kubenswrapper[5017]: E0129 07:01:10.317754 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2"
Jan 29 07:01:20 crc kubenswrapper[5017]: I0129 07:01:20.546621 5017 scope.go:117] "RemoveContainer" containerID="b76b348c9d2423b65dbe5c304b35ec3d820a382c48b70c033fa945d81791a518"
Jan 29 07:01:20 crc kubenswrapper[5017]: I0129 07:01:20.584497 5017 scope.go:117] "RemoveContainer" containerID="6196da4d03910bc572acc5cc95df2d7dcae2ef492670dad5d9b79065497f7322"
Jan 29 07:01:20 crc kubenswrapper[5017]: I0129 07:01:20.629524 5017 scope.go:117] "RemoveContainer" containerID="f77b5ae493cc98d9d20046895a4f54e4268e388e3d9a9a199fa78ef8cda58123"
Jan 29 07:01:20 crc kubenswrapper[5017]: I0129 07:01:20.658913 5017 scope.go:117] "RemoveContainer" containerID="0c316a09cd764cd0d7717d88d306f363aee8a406a7aab49e4974af5176af2934"
Jan 29 07:01:20 crc kubenswrapper[5017]: I0129 07:01:20.683402 5017 scope.go:117] "RemoveContainer" containerID="0fb96d66273e80c15d3617b63111d378c6fd6702c701289bf872838499797ba3"
Jan 29 07:01:20 crc kubenswrapper[5017]: I0129 07:01:20.706419 5017 scope.go:117] "RemoveContainer" containerID="9faf0d173c0b59296fed359030686d4a096d13b44a22d7aecdca4136e3a15a7a"
Jan 29 07:01:20 crc kubenswrapper[5017]: I0129 07:01:20.738911 5017 scope.go:117] "RemoveContainer" containerID="5a5553bca56453bc7dd383b571e644904570a6ff37646732ab46bfc63d4356bb"
Jan 29 07:01:20 crc kubenswrapper[5017]: I0129 07:01:20.761162 5017 scope.go:117] "RemoveContainer" containerID="53c11f2f216b2c0277625e17b5979d133bf92078884d2553967dc21bb5b5ee56"
Jan 29 07:01:20 crc kubenswrapper[5017]: I0129 07:01:20.789544 5017 scope.go:117] "RemoveContainer" containerID="3dfcd47ab3cac7b764900d044b1e708577dae1b232deab2ffb3281399ab171ba"
Jan 29 07:01:20 crc kubenswrapper[5017]: I0129 07:01:20.840222 5017 scope.go:117] "RemoveContainer" containerID="a87ac4d6fcd2f35bc4f4ce7965fee1e0793f77b9c950d2afd37c8a0be776737e"
Jan 29 07:01:20 crc kubenswrapper[5017]: I0129 07:01:20.869704 5017 scope.go:117] "RemoveContainer" containerID="55b4fe46b945125911fbc023ac6464f7526116f37457a6150cc275fbf8d698ab"
Jan 29 07:01:20 crc kubenswrapper[5017]: I0129 07:01:20.932380 5017 scope.go:117] "RemoveContainer" containerID="d58e4b20a14bff37480389b7c1c562703d0a87748863f2677e04997a3a47605b"
Jan 29 07:01:23 crc kubenswrapper[5017]: I0129 07:01:23.317303 5017 scope.go:117] "RemoveContainer" containerID="8db4bfbb76a49639afbf231ff068e7d8be3ddf559761058f493b985b4831c9ab"
Jan 29 07:01:23 crc kubenswrapper[5017]: E0129 07:01:23.318044 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2"
Jan 29 07:01:29 crc kubenswrapper[5017]: I0129 07:01:29.339491 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-qpg5c"]
Jan 29 07:01:29 crc kubenswrapper[5017]: E0129 07:01:29.343574 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53242c86-ab5f-4e0e-a472-a3f0ebb9f7f1" containerName="extract-content"
Jan 29 07:01:29 crc kubenswrapper[5017]: I0129 07:01:29.343622 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="53242c86-ab5f-4e0e-a472-a3f0ebb9f7f1" containerName="extract-content"
Jan 29 07:01:29 crc kubenswrapper[5017]: E0129 07:01:29.343639 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53242c86-ab5f-4e0e-a472-a3f0ebb9f7f1" containerName="registry-server"
Jan 29 07:01:29 crc kubenswrapper[5017]: I0129 07:01:29.343646 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="53242c86-ab5f-4e0e-a472-a3f0ebb9f7f1" containerName="registry-server"
Jan 29 07:01:29
crc kubenswrapper[5017]: E0129 07:01:29.343655 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53242c86-ab5f-4e0e-a472-a3f0ebb9f7f1" containerName="extract-utilities" Jan 29 07:01:29 crc kubenswrapper[5017]: I0129 07:01:29.343662 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="53242c86-ab5f-4e0e-a472-a3f0ebb9f7f1" containerName="extract-utilities" Jan 29 07:01:29 crc kubenswrapper[5017]: I0129 07:01:29.343850 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="53242c86-ab5f-4e0e-a472-a3f0ebb9f7f1" containerName="registry-server" Jan 29 07:01:29 crc kubenswrapper[5017]: I0129 07:01:29.345145 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qpg5c" Jan 29 07:01:29 crc kubenswrapper[5017]: I0129 07:01:29.369295 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qpg5c"] Jan 29 07:01:29 crc kubenswrapper[5017]: I0129 07:01:29.451825 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f82c3dd5-e32e-437f-a554-5605bd853d23-utilities\") pod \"certified-operators-qpg5c\" (UID: \"f82c3dd5-e32e-437f-a554-5605bd853d23\") " pod="openshift-marketplace/certified-operators-qpg5c" Jan 29 07:01:29 crc kubenswrapper[5017]: I0129 07:01:29.453284 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76vt4\" (UniqueName: \"kubernetes.io/projected/f82c3dd5-e32e-437f-a554-5605bd853d23-kube-api-access-76vt4\") pod \"certified-operators-qpg5c\" (UID: \"f82c3dd5-e32e-437f-a554-5605bd853d23\") " pod="openshift-marketplace/certified-operators-qpg5c" Jan 29 07:01:29 crc kubenswrapper[5017]: I0129 07:01:29.453715 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f82c3dd5-e32e-437f-a554-5605bd853d23-catalog-content\") pod \"certified-operators-qpg5c\" (UID: \"f82c3dd5-e32e-437f-a554-5605bd853d23\") " pod="openshift-marketplace/certified-operators-qpg5c" Jan 29 07:01:29 crc kubenswrapper[5017]: I0129 07:01:29.555606 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76vt4\" (UniqueName: \"kubernetes.io/projected/f82c3dd5-e32e-437f-a554-5605bd853d23-kube-api-access-76vt4\") pod \"certified-operators-qpg5c\" (UID: \"f82c3dd5-e32e-437f-a554-5605bd853d23\") " pod="openshift-marketplace/certified-operators-qpg5c" Jan 29 07:01:29 crc kubenswrapper[5017]: I0129 07:01:29.555721 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f82c3dd5-e32e-437f-a554-5605bd853d23-catalog-content\") pod \"certified-operators-qpg5c\" (UID: \"f82c3dd5-e32e-437f-a554-5605bd853d23\") " pod="openshift-marketplace/certified-operators-qpg5c" Jan 29 07:01:29 crc kubenswrapper[5017]: I0129 07:01:29.555862 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f82c3dd5-e32e-437f-a554-5605bd853d23-utilities\") pod \"certified-operators-qpg5c\" (UID: \"f82c3dd5-e32e-437f-a554-5605bd853d23\") " pod="openshift-marketplace/certified-operators-qpg5c" Jan 29 07:01:29 crc kubenswrapper[5017]: I0129 07:01:29.556470 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/f82c3dd5-e32e-437f-a554-5605bd853d23-catalog-content\") pod \"certified-operators-qpg5c\" (UID: \"f82c3dd5-e32e-437f-a554-5605bd853d23\") " pod="openshift-marketplace/certified-operators-qpg5c" Jan 29 07:01:29 crc kubenswrapper[5017]: I0129 07:01:29.556734 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f82c3dd5-e32e-437f-a554-5605bd853d23-utilities\") pod \"certified-operators-qpg5c\" (UID: \"f82c3dd5-e32e-437f-a554-5605bd853d23\") " pod="openshift-marketplace/certified-operators-qpg5c" Jan 29 07:01:29 crc kubenswrapper[5017]: I0129 07:01:29.586037 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76vt4\" (UniqueName: \"kubernetes.io/projected/f82c3dd5-e32e-437f-a554-5605bd853d23-kube-api-access-76vt4\") pod \"certified-operators-qpg5c\" (UID: \"f82c3dd5-e32e-437f-a554-5605bd853d23\") " pod="openshift-marketplace/certified-operators-qpg5c" Jan 29 07:01:29 crc kubenswrapper[5017]: I0129 07:01:29.704458 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qpg5c" Jan 29 07:01:30 crc kubenswrapper[5017]: I0129 07:01:30.222900 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qpg5c"] Jan 29 07:01:30 crc kubenswrapper[5017]: I0129 07:01:30.373591 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qpg5c" event={"ID":"f82c3dd5-e32e-437f-a554-5605bd853d23","Type":"ContainerStarted","Data":"6a0d31e52eb662ff54465a298a2d7392ece1906799fe66cb361f8953b6e21885"} Jan 29 07:01:31 crc kubenswrapper[5017]: I0129 07:01:31.390738 5017 generic.go:334] "Generic (PLEG): container finished" podID="f82c3dd5-e32e-437f-a554-5605bd853d23" containerID="ca9635b8af29ff8f0d8e17ff183c8875ea807de270d0201067b9782ed4b3a966" exitCode=0 Jan 29 07:01:31 crc kubenswrapper[5017]: I0129 07:01:31.390822 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qpg5c" event={"ID":"f82c3dd5-e32e-437f-a554-5605bd853d23","Type":"ContainerDied","Data":"ca9635b8af29ff8f0d8e17ff183c8875ea807de270d0201067b9782ed4b3a966"} Jan 29 07:01:32 crc kubenswrapper[5017]: I0129 07:01:32.405531 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qpg5c" event={"ID":"f82c3dd5-e32e-437f-a554-5605bd853d23","Type":"ContainerStarted","Data":"1ef44991bf083845b92c26c22610faf50e08624d7f4277abdddf8934dd637b3a"} Jan 29 07:01:33 crc kubenswrapper[5017]: I0129 07:01:33.415694 5017 generic.go:334] "Generic (PLEG): container finished" podID="f82c3dd5-e32e-437f-a554-5605bd853d23" containerID="1ef44991bf083845b92c26c22610faf50e08624d7f4277abdddf8934dd637b3a" exitCode=0 Jan 29 07:01:33 crc kubenswrapper[5017]: I0129 07:01:33.415775 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qpg5c" event={"ID":"f82c3dd5-e32e-437f-a554-5605bd853d23","Type":"ContainerDied","Data":"1ef44991bf083845b92c26c22610faf50e08624d7f4277abdddf8934dd637b3a"} Jan 29 07:01:33 crc kubenswrapper[5017]: I0129 07:01:33.415809 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qpg5c" event={"ID":"f82c3dd5-e32e-437f-a554-5605bd853d23","Type":"ContainerStarted","Data":"1ee0f0e4ba99c9c25c6f244d94b5ddb489880f40804f7f8cd3faa93acdc01434"} Jan 29 07:01:33 crc kubenswrapper[5017]: I0129 
07:01:33.445472 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-qpg5c" podStartSLOduration=2.9634957760000002 podStartE2EDuration="4.445446965s" podCreationTimestamp="2026-01-29 07:01:29 +0000 UTC" firstStartedPulling="2026-01-29 07:01:31.393572552 +0000 UTC m=+1577.768020212" lastFinishedPulling="2026-01-29 07:01:32.875523781 +0000 UTC m=+1579.249971401" observedRunningTime="2026-01-29 07:01:33.438390816 +0000 UTC m=+1579.812838436" watchObservedRunningTime="2026-01-29 07:01:33.445446965 +0000 UTC m=+1579.819894585" Jan 29 07:01:35 crc kubenswrapper[5017]: I0129 07:01:35.317180 5017 scope.go:117] "RemoveContainer" containerID="8db4bfbb76a49639afbf231ff068e7d8be3ddf559761058f493b985b4831c9ab" Jan 29 07:01:35 crc kubenswrapper[5017]: E0129 07:01:35.317975 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 07:01:39 crc kubenswrapper[5017]: I0129 07:01:39.704683 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-qpg5c" Jan 29 07:01:39 crc kubenswrapper[5017]: I0129 07:01:39.705652 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-qpg5c" Jan 29 07:01:39 crc kubenswrapper[5017]: I0129 07:01:39.776080 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-qpg5c" Jan 29 07:01:40 crc kubenswrapper[5017]: I0129 07:01:40.554379 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-qpg5c" Jan 29 07:01:40 crc kubenswrapper[5017]: I0129 07:01:40.648516 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qpg5c"] Jan 29 07:01:42 crc kubenswrapper[5017]: I0129 07:01:42.494441 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-qpg5c" podUID="f82c3dd5-e32e-437f-a554-5605bd853d23" containerName="registry-server" containerID="cri-o://1ee0f0e4ba99c9c25c6f244d94b5ddb489880f40804f7f8cd3faa93acdc01434" gracePeriod=2 Jan 29 07:01:43 crc kubenswrapper[5017]: I0129 07:01:43.118026 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qpg5c" Jan 29 07:01:43 crc kubenswrapper[5017]: I0129 07:01:43.201301 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f82c3dd5-e32e-437f-a554-5605bd853d23-catalog-content\") pod \"f82c3dd5-e32e-437f-a554-5605bd853d23\" (UID: \"f82c3dd5-e32e-437f-a554-5605bd853d23\") " Jan 29 07:01:43 crc kubenswrapper[5017]: I0129 07:01:43.201364 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f82c3dd5-e32e-437f-a554-5605bd853d23-utilities\") pod \"f82c3dd5-e32e-437f-a554-5605bd853d23\" (UID: \"f82c3dd5-e32e-437f-a554-5605bd853d23\") " Jan 29 07:01:43 crc kubenswrapper[5017]: I0129 07:01:43.201550 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-76vt4\" (UniqueName: \"kubernetes.io/projected/f82c3dd5-e32e-437f-a554-5605bd853d23-kube-api-access-76vt4\") pod \"f82c3dd5-e32e-437f-a554-5605bd853d23\" (UID: \"f82c3dd5-e32e-437f-a554-5605bd853d23\") " Jan 29 07:01:43 crc kubenswrapper[5017]: I0129 07:01:43.202551 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f82c3dd5-e32e-437f-a554-5605bd853d23-utilities" (OuterVolumeSpecName: "utilities") pod "f82c3dd5-e32e-437f-a554-5605bd853d23" (UID: "f82c3dd5-e32e-437f-a554-5605bd853d23"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:01:43 crc kubenswrapper[5017]: I0129 07:01:43.209187 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f82c3dd5-e32e-437f-a554-5605bd853d23-kube-api-access-76vt4" (OuterVolumeSpecName: "kube-api-access-76vt4") pod "f82c3dd5-e32e-437f-a554-5605bd853d23" (UID: "f82c3dd5-e32e-437f-a554-5605bd853d23"). InnerVolumeSpecName "kube-api-access-76vt4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:01:43 crc kubenswrapper[5017]: I0129 07:01:43.258435 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f82c3dd5-e32e-437f-a554-5605bd853d23-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f82c3dd5-e32e-437f-a554-5605bd853d23" (UID: "f82c3dd5-e32e-437f-a554-5605bd853d23"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:01:43 crc kubenswrapper[5017]: I0129 07:01:43.304031 5017 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f82c3dd5-e32e-437f-a554-5605bd853d23-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 07:01:43 crc kubenswrapper[5017]: I0129 07:01:43.304088 5017 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f82c3dd5-e32e-437f-a554-5605bd853d23-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 07:01:43 crc kubenswrapper[5017]: I0129 07:01:43.304104 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-76vt4\" (UniqueName: \"kubernetes.io/projected/f82c3dd5-e32e-437f-a554-5605bd853d23-kube-api-access-76vt4\") on node \"crc\" DevicePath \"\"" Jan 29 07:01:43 crc kubenswrapper[5017]: I0129 07:01:43.506058 5017 generic.go:334] "Generic (PLEG): container finished" podID="f82c3dd5-e32e-437f-a554-5605bd853d23" containerID="1ee0f0e4ba99c9c25c6f244d94b5ddb489880f40804f7f8cd3faa93acdc01434" exitCode=0 Jan 29 07:01:43 crc kubenswrapper[5017]: I0129 07:01:43.506113 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qpg5c" event={"ID":"f82c3dd5-e32e-437f-a554-5605bd853d23","Type":"ContainerDied","Data":"1ee0f0e4ba99c9c25c6f244d94b5ddb489880f40804f7f8cd3faa93acdc01434"} Jan 29 07:01:43 crc kubenswrapper[5017]: I0129 07:01:43.506152 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qpg5c" Jan 29 07:01:43 crc kubenswrapper[5017]: I0129 07:01:43.506171 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qpg5c" event={"ID":"f82c3dd5-e32e-437f-a554-5605bd853d23","Type":"ContainerDied","Data":"6a0d31e52eb662ff54465a298a2d7392ece1906799fe66cb361f8953b6e21885"} Jan 29 07:01:43 crc kubenswrapper[5017]: I0129 07:01:43.506199 5017 scope.go:117] "RemoveContainer" containerID="1ee0f0e4ba99c9c25c6f244d94b5ddb489880f40804f7f8cd3faa93acdc01434" Jan 29 07:01:43 crc kubenswrapper[5017]: I0129 07:01:43.538360 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qpg5c"] Jan 29 07:01:43 crc kubenswrapper[5017]: I0129 07:01:43.542414 5017 scope.go:117] "RemoveContainer" containerID="1ef44991bf083845b92c26c22610faf50e08624d7f4277abdddf8934dd637b3a" Jan 29 07:01:43 crc kubenswrapper[5017]: I0129 07:01:43.547462 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-qpg5c"] Jan 29 07:01:43 crc kubenswrapper[5017]: I0129 07:01:43.567244 5017 scope.go:117] "RemoveContainer" containerID="ca9635b8af29ff8f0d8e17ff183c8875ea807de270d0201067b9782ed4b3a966" Jan 29 07:01:43 crc kubenswrapper[5017]: I0129 07:01:43.601061 5017 scope.go:117] "RemoveContainer" containerID="1ee0f0e4ba99c9c25c6f244d94b5ddb489880f40804f7f8cd3faa93acdc01434" Jan 29 07:01:43 crc kubenswrapper[5017]: E0129 07:01:43.602316 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ee0f0e4ba99c9c25c6f244d94b5ddb489880f40804f7f8cd3faa93acdc01434\": container with ID starting with 1ee0f0e4ba99c9c25c6f244d94b5ddb489880f40804f7f8cd3faa93acdc01434 not found: ID does not exist" containerID="1ee0f0e4ba99c9c25c6f244d94b5ddb489880f40804f7f8cd3faa93acdc01434" Jan 29 07:01:43 crc kubenswrapper[5017]: I0129 07:01:43.602386 
5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ee0f0e4ba99c9c25c6f244d94b5ddb489880f40804f7f8cd3faa93acdc01434"} err="failed to get container status \"1ee0f0e4ba99c9c25c6f244d94b5ddb489880f40804f7f8cd3faa93acdc01434\": rpc error: code = NotFound desc = could not find container \"1ee0f0e4ba99c9c25c6f244d94b5ddb489880f40804f7f8cd3faa93acdc01434\": container with ID starting with 1ee0f0e4ba99c9c25c6f244d94b5ddb489880f40804f7f8cd3faa93acdc01434 not found: ID does not exist" Jan 29 07:01:43 crc kubenswrapper[5017]: I0129 07:01:43.602431 5017 scope.go:117] "RemoveContainer" containerID="1ef44991bf083845b92c26c22610faf50e08624d7f4277abdddf8934dd637b3a" Jan 29 07:01:43 crc kubenswrapper[5017]: E0129 07:01:43.603108 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ef44991bf083845b92c26c22610faf50e08624d7f4277abdddf8934dd637b3a\": container with ID starting with 1ef44991bf083845b92c26c22610faf50e08624d7f4277abdddf8934dd637b3a not found: ID does not exist" containerID="1ef44991bf083845b92c26c22610faf50e08624d7f4277abdddf8934dd637b3a" Jan 29 07:01:43 crc kubenswrapper[5017]: I0129 07:01:43.603159 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ef44991bf083845b92c26c22610faf50e08624d7f4277abdddf8934dd637b3a"} err="failed to get container status \"1ef44991bf083845b92c26c22610faf50e08624d7f4277abdddf8934dd637b3a\": rpc error: code = NotFound desc = could not find container \"1ef44991bf083845b92c26c22610faf50e08624d7f4277abdddf8934dd637b3a\": container with ID starting with 1ef44991bf083845b92c26c22610faf50e08624d7f4277abdddf8934dd637b3a not found: ID does not exist" Jan 29 07:01:43 crc kubenswrapper[5017]: I0129 07:01:43.603193 5017 scope.go:117] "RemoveContainer" containerID="ca9635b8af29ff8f0d8e17ff183c8875ea807de270d0201067b9782ed4b3a966" Jan 29 07:01:43 crc kubenswrapper[5017]: E0129 07:01:43.603610 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca9635b8af29ff8f0d8e17ff183c8875ea807de270d0201067b9782ed4b3a966\": container with ID starting with ca9635b8af29ff8f0d8e17ff183c8875ea807de270d0201067b9782ed4b3a966 not found: ID does not exist" containerID="ca9635b8af29ff8f0d8e17ff183c8875ea807de270d0201067b9782ed4b3a966" Jan 29 07:01:43 crc kubenswrapper[5017]: I0129 07:01:43.603638 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca9635b8af29ff8f0d8e17ff183c8875ea807de270d0201067b9782ed4b3a966"} err="failed to get container status \"ca9635b8af29ff8f0d8e17ff183c8875ea807de270d0201067b9782ed4b3a966\": rpc error: code = NotFound desc = could not find container \"ca9635b8af29ff8f0d8e17ff183c8875ea807de270d0201067b9782ed4b3a966\": container with ID starting with ca9635b8af29ff8f0d8e17ff183c8875ea807de270d0201067b9782ed4b3a966 not found: ID does not exist" Jan 29 07:01:44 crc kubenswrapper[5017]: I0129 07:01:44.326623 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f82c3dd5-e32e-437f-a554-5605bd853d23" path="/var/lib/kubelet/pods/f82c3dd5-e32e-437f-a554-5605bd853d23/volumes" Jan 29 07:01:47 crc kubenswrapper[5017]: I0129 07:01:47.316765 5017 scope.go:117] "RemoveContainer" containerID="8db4bfbb76a49639afbf231ff068e7d8be3ddf559761058f493b985b4831c9ab" Jan 29 07:01:47 crc kubenswrapper[5017]: E0129 07:01:47.317486 5017 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 07:01:58 crc kubenswrapper[5017]: I0129 07:01:58.317053 5017 scope.go:117] "RemoveContainer" containerID="8db4bfbb76a49639afbf231ff068e7d8be3ddf559761058f493b985b4831c9ab" Jan 29 07:01:58 crc kubenswrapper[5017]: E0129 07:01:58.318418 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 07:02:13 crc kubenswrapper[5017]: I0129 07:02:13.315879 5017 scope.go:117] "RemoveContainer" containerID="8db4bfbb76a49639afbf231ff068e7d8be3ddf559761058f493b985b4831c9ab" Jan 29 07:02:13 crc kubenswrapper[5017]: E0129 07:02:13.318151 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 07:02:21 crc kubenswrapper[5017]: I0129 07:02:21.111555 5017 scope.go:117] "RemoveContainer" containerID="7126adb7a43e0f5a3fc8b5d5e778cd421a3c22b31f278ea3a1f03d7a07ee2372" Jan 29 07:02:21 crc kubenswrapper[5017]: I0129 07:02:21.165844 5017 scope.go:117] "RemoveContainer" containerID="c788ef46bab35f53e33c6baaa48d0bd6f3b7c657b75f112f88d7a6c1905f839d" Jan 29 07:02:21 crc kubenswrapper[5017]: I0129 07:02:21.201376 5017 scope.go:117] "RemoveContainer" containerID="d6f73092504fdd82e85c07a381c7b43721cdbd10af54bf12d28c31365e79fc0b" Jan 29 07:02:21 crc kubenswrapper[5017]: I0129 07:02:21.234008 5017 scope.go:117] "RemoveContainer" containerID="ef3451697bd4ac7b679400900f02f4c1ce12227edc277ad5364540b647c35d9d" Jan 29 07:02:21 crc kubenswrapper[5017]: I0129 07:02:21.278251 5017 scope.go:117] "RemoveContainer" containerID="f85c4459d8f828738428873cf981d75378869b406a3bc14f9884b979c0dfee7a" Jan 29 07:02:26 crc kubenswrapper[5017]: I0129 07:02:26.315941 5017 scope.go:117] "RemoveContainer" containerID="8db4bfbb76a49639afbf231ff068e7d8be3ddf559761058f493b985b4831c9ab" Jan 29 07:02:26 crc kubenswrapper[5017]: E0129 07:02:26.317477 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 07:02:40 crc kubenswrapper[5017]: I0129 07:02:40.324299 5017 scope.go:117] "RemoveContainer" containerID="8db4bfbb76a49639afbf231ff068e7d8be3ddf559761058f493b985b4831c9ab" Jan 29 07:02:40 crc kubenswrapper[5017]: E0129 07:02:40.325917 5017 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 07:02:55 crc kubenswrapper[5017]: I0129 07:02:55.316716 5017 scope.go:117] "RemoveContainer" containerID="8db4bfbb76a49639afbf231ff068e7d8be3ddf559761058f493b985b4831c9ab" Jan 29 07:02:55 crc kubenswrapper[5017]: E0129 07:02:55.319471 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 07:03:07 crc kubenswrapper[5017]: I0129 07:03:07.317370 5017 scope.go:117] "RemoveContainer" containerID="8db4bfbb76a49639afbf231ff068e7d8be3ddf559761058f493b985b4831c9ab" Jan 29 07:03:07 crc kubenswrapper[5017]: E0129 07:03:07.319250 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 07:03:19 crc kubenswrapper[5017]: I0129 07:03:19.316605 5017 scope.go:117] "RemoveContainer" containerID="8db4bfbb76a49639afbf231ff068e7d8be3ddf559761058f493b985b4831c9ab" Jan 29 07:03:19 crc kubenswrapper[5017]: E0129 07:03:19.317764 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 07:03:21 crc kubenswrapper[5017]: I0129 07:03:21.406386 5017 scope.go:117] "RemoveContainer" containerID="c43149c8d6f94b05d7adff740855d48a649c2f7ea9f1f958ed0221ac67602ca1" Jan 29 07:03:21 crc kubenswrapper[5017]: I0129 07:03:21.440476 5017 scope.go:117] "RemoveContainer" containerID="5c19de805cb364596b5009993949de148a0ae873176b17c0e742ef83f5bf9bd2" Jan 29 07:03:21 crc kubenswrapper[5017]: I0129 07:03:21.470589 5017 scope.go:117] "RemoveContainer" containerID="dbd1a320619a408df31957da8f6a7e77196f94d06e34b7ba9e869e93506c7d1f" Jan 29 07:03:31 crc kubenswrapper[5017]: I0129 07:03:31.316312 5017 scope.go:117] "RemoveContainer" containerID="8db4bfbb76a49639afbf231ff068e7d8be3ddf559761058f493b985b4831c9ab" Jan 29 07:03:31 crc kubenswrapper[5017]: E0129 07:03:31.317064 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 07:03:44 crc kubenswrapper[5017]: I0129 07:03:44.321891 5017 scope.go:117] "RemoveContainer" containerID="8db4bfbb76a49639afbf231ff068e7d8be3ddf559761058f493b985b4831c9ab" Jan 29 07:03:44 crc kubenswrapper[5017]: E0129 07:03:44.323184 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 07:03:57 crc kubenswrapper[5017]: I0129 07:03:57.316805 5017 scope.go:117] "RemoveContainer" containerID="8db4bfbb76a49639afbf231ff068e7d8be3ddf559761058f493b985b4831c9ab" Jan 29 07:03:57 crc kubenswrapper[5017]: E0129 07:03:57.318089 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 07:04:12 crc kubenswrapper[5017]: I0129 07:04:12.317094 5017 scope.go:117] "RemoveContainer" containerID="8db4bfbb76a49639afbf231ff068e7d8be3ddf559761058f493b985b4831c9ab" Jan 29 07:04:12 crc kubenswrapper[5017]: E0129 07:04:12.318275 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 07:04:23 crc kubenswrapper[5017]: I0129 07:04:23.318675 5017 scope.go:117] "RemoveContainer" containerID="8db4bfbb76a49639afbf231ff068e7d8be3ddf559761058f493b985b4831c9ab" Jan 29 07:04:23 crc kubenswrapper[5017]: E0129 07:04:23.319823 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 07:04:36 crc kubenswrapper[5017]: I0129 07:04:36.316166 5017 scope.go:117] "RemoveContainer" containerID="8db4bfbb76a49639afbf231ff068e7d8be3ddf559761058f493b985b4831c9ab" Jan 29 07:04:36 crc kubenswrapper[5017]: E0129 07:04:36.316765 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 07:04:50 crc kubenswrapper[5017]: I0129 07:04:50.316086 5017 
scope.go:117] "RemoveContainer" containerID="8db4bfbb76a49639afbf231ff068e7d8be3ddf559761058f493b985b4831c9ab" Jan 29 07:04:50 crc kubenswrapper[5017]: E0129 07:04:50.317096 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 07:05:01 crc kubenswrapper[5017]: I0129 07:05:01.317434 5017 scope.go:117] "RemoveContainer" containerID="8db4bfbb76a49639afbf231ff068e7d8be3ddf559761058f493b985b4831c9ab" Jan 29 07:05:01 crc kubenswrapper[5017]: E0129 07:05:01.318876 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 07:05:16 crc kubenswrapper[5017]: I0129 07:05:16.324559 5017 scope.go:117] "RemoveContainer" containerID="8db4bfbb76a49639afbf231ff068e7d8be3ddf559761058f493b985b4831c9ab" Jan 29 07:05:16 crc kubenswrapper[5017]: E0129 07:05:16.326024 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 07:05:27 crc kubenswrapper[5017]: I0129 07:05:27.317563 5017 scope.go:117] "RemoveContainer" containerID="8db4bfbb76a49639afbf231ff068e7d8be3ddf559761058f493b985b4831c9ab" Jan 29 07:05:27 crc kubenswrapper[5017]: E0129 07:05:27.318895 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 07:05:40 crc kubenswrapper[5017]: I0129 07:05:40.317454 5017 scope.go:117] "RemoveContainer" containerID="8db4bfbb76a49639afbf231ff068e7d8be3ddf559761058f493b985b4831c9ab" Jan 29 07:05:40 crc kubenswrapper[5017]: E0129 07:05:40.318492 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 07:05:52 crc kubenswrapper[5017]: I0129 07:05:52.317175 5017 scope.go:117] "RemoveContainer" containerID="8db4bfbb76a49639afbf231ff068e7d8be3ddf559761058f493b985b4831c9ab" Jan 29 07:05:52 crc kubenswrapper[5017]: E0129 07:05:52.318194 5017 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 07:06:06 crc kubenswrapper[5017]: I0129 07:06:06.319785 5017 scope.go:117] "RemoveContainer" containerID="8db4bfbb76a49639afbf231ff068e7d8be3ddf559761058f493b985b4831c9ab" Jan 29 07:06:07 crc kubenswrapper[5017]: I0129 07:06:07.161414 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-895pl" event={"ID":"2672ef63-7861-4c3d-a1b4-03cc9d18f8e2","Type":"ContainerStarted","Data":"7a4f393bca20d197057515ffc36d02d4ae48cf3634f3962f9cfd446363d9d028"} Jan 29 07:08:26 crc kubenswrapper[5017]: I0129 07:08:26.538998 5017 patch_prober.go:28] interesting pod/machine-config-daemon-895pl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 07:08:26 crc kubenswrapper[5017]: I0129 07:08:26.540141 5017 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 07:08:56 crc kubenswrapper[5017]: I0129 07:08:56.539600 5017 patch_prober.go:28] interesting pod/machine-config-daemon-895pl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 07:08:56 crc kubenswrapper[5017]: I0129 07:08:56.541300 5017 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 07:09:26 crc kubenswrapper[5017]: I0129 07:09:26.539685 5017 patch_prober.go:28] interesting pod/machine-config-daemon-895pl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 07:09:26 crc kubenswrapper[5017]: I0129 07:09:26.540782 5017 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 07:09:26 crc kubenswrapper[5017]: I0129 07:09:26.540864 5017 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-895pl" Jan 29 07:09:26 crc kubenswrapper[5017]: I0129 07:09:26.542293 5017 kuberuntime_manager.go:1027] "Message for Container of pod" 
containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7a4f393bca20d197057515ffc36d02d4ae48cf3634f3962f9cfd446363d9d028"} pod="openshift-machine-config-operator/machine-config-daemon-895pl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 07:09:26 crc kubenswrapper[5017]: I0129 07:09:26.542417 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" containerID="cri-o://7a4f393bca20d197057515ffc36d02d4ae48cf3634f3962f9cfd446363d9d028" gracePeriod=600 Jan 29 07:09:27 crc kubenswrapper[5017]: I0129 07:09:27.131681 5017 generic.go:334] "Generic (PLEG): container finished" podID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerID="7a4f393bca20d197057515ffc36d02d4ae48cf3634f3962f9cfd446363d9d028" exitCode=0 Jan 29 07:09:27 crc kubenswrapper[5017]: I0129 07:09:27.131780 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-895pl" event={"ID":"2672ef63-7861-4c3d-a1b4-03cc9d18f8e2","Type":"ContainerDied","Data":"7a4f393bca20d197057515ffc36d02d4ae48cf3634f3962f9cfd446363d9d028"} Jan 29 07:09:27 crc kubenswrapper[5017]: I0129 07:09:27.132209 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-895pl" event={"ID":"2672ef63-7861-4c3d-a1b4-03cc9d18f8e2","Type":"ContainerStarted","Data":"b8c652088a514d7a24ecb0d049d2da5653192a5d4bc3111b294c6a18c62e26d7"} Jan 29 07:09:27 crc kubenswrapper[5017]: I0129 07:09:27.132242 5017 scope.go:117] "RemoveContainer" containerID="8db4bfbb76a49639afbf231ff068e7d8be3ddf559761058f493b985b4831c9ab" Jan 29 07:09:57 crc kubenswrapper[5017]: I0129 07:09:57.385900 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-t5nvf"] Jan 29 07:09:57 crc kubenswrapper[5017]: E0129 07:09:57.387150 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f82c3dd5-e32e-437f-a554-5605bd853d23" containerName="extract-utilities" Jan 29 07:09:57 crc kubenswrapper[5017]: I0129 07:09:57.387170 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="f82c3dd5-e32e-437f-a554-5605bd853d23" containerName="extract-utilities" Jan 29 07:09:57 crc kubenswrapper[5017]: E0129 07:09:57.387190 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f82c3dd5-e32e-437f-a554-5605bd853d23" containerName="registry-server" Jan 29 07:09:57 crc kubenswrapper[5017]: I0129 07:09:57.387198 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="f82c3dd5-e32e-437f-a554-5605bd853d23" containerName="registry-server" Jan 29 07:09:57 crc kubenswrapper[5017]: E0129 07:09:57.387225 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f82c3dd5-e32e-437f-a554-5605bd853d23" containerName="extract-content" Jan 29 07:09:57 crc kubenswrapper[5017]: I0129 07:09:57.387234 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="f82c3dd5-e32e-437f-a554-5605bd853d23" containerName="extract-content" Jan 29 07:09:57 crc kubenswrapper[5017]: I0129 07:09:57.387443 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="f82c3dd5-e32e-437f-a554-5605bd853d23" containerName="registry-server" Jan 29 07:09:57 crc kubenswrapper[5017]: I0129 07:09:57.389240 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t5nvf" Jan 29 07:09:57 crc kubenswrapper[5017]: I0129 07:09:57.410415 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-t5nvf"] Jan 29 07:09:57 crc kubenswrapper[5017]: I0129 07:09:57.428126 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsdrh\" (UniqueName: \"kubernetes.io/projected/96c1c05d-009c-4c0a-9acb-b8cc3505ff4b-kube-api-access-vsdrh\") pod \"redhat-marketplace-t5nvf\" (UID: \"96c1c05d-009c-4c0a-9acb-b8cc3505ff4b\") " pod="openshift-marketplace/redhat-marketplace-t5nvf" Jan 29 07:09:57 crc kubenswrapper[5017]: I0129 07:09:57.428560 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96c1c05d-009c-4c0a-9acb-b8cc3505ff4b-catalog-content\") pod \"redhat-marketplace-t5nvf\" (UID: \"96c1c05d-009c-4c0a-9acb-b8cc3505ff4b\") " pod="openshift-marketplace/redhat-marketplace-t5nvf" Jan 29 07:09:57 crc kubenswrapper[5017]: I0129 07:09:57.428833 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96c1c05d-009c-4c0a-9acb-b8cc3505ff4b-utilities\") pod \"redhat-marketplace-t5nvf\" (UID: \"96c1c05d-009c-4c0a-9acb-b8cc3505ff4b\") " pod="openshift-marketplace/redhat-marketplace-t5nvf" Jan 29 07:09:57 crc kubenswrapper[5017]: I0129 07:09:57.531068 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vsdrh\" (UniqueName: \"kubernetes.io/projected/96c1c05d-009c-4c0a-9acb-b8cc3505ff4b-kube-api-access-vsdrh\") pod \"redhat-marketplace-t5nvf\" (UID: \"96c1c05d-009c-4c0a-9acb-b8cc3505ff4b\") " pod="openshift-marketplace/redhat-marketplace-t5nvf" Jan 29 07:09:57 crc kubenswrapper[5017]: I0129 07:09:57.531436 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96c1c05d-009c-4c0a-9acb-b8cc3505ff4b-catalog-content\") pod \"redhat-marketplace-t5nvf\" (UID: \"96c1c05d-009c-4c0a-9acb-b8cc3505ff4b\") " pod="openshift-marketplace/redhat-marketplace-t5nvf" Jan 29 07:09:57 crc kubenswrapper[5017]: I0129 07:09:57.531604 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96c1c05d-009c-4c0a-9acb-b8cc3505ff4b-utilities\") pod \"redhat-marketplace-t5nvf\" (UID: \"96c1c05d-009c-4c0a-9acb-b8cc3505ff4b\") " pod="openshift-marketplace/redhat-marketplace-t5nvf" Jan 29 07:09:57 crc kubenswrapper[5017]: I0129 07:09:57.532206 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96c1c05d-009c-4c0a-9acb-b8cc3505ff4b-catalog-content\") pod \"redhat-marketplace-t5nvf\" (UID: \"96c1c05d-009c-4c0a-9acb-b8cc3505ff4b\") " pod="openshift-marketplace/redhat-marketplace-t5nvf" Jan 29 07:09:57 crc kubenswrapper[5017]: I0129 07:09:57.532252 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96c1c05d-009c-4c0a-9acb-b8cc3505ff4b-utilities\") pod \"redhat-marketplace-t5nvf\" (UID: \"96c1c05d-009c-4c0a-9acb-b8cc3505ff4b\") " pod="openshift-marketplace/redhat-marketplace-t5nvf" Jan 29 07:09:57 crc kubenswrapper[5017]: I0129 07:09:57.556667 5017 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-vsdrh\" (UniqueName: \"kubernetes.io/projected/96c1c05d-009c-4c0a-9acb-b8cc3505ff4b-kube-api-access-vsdrh\") pod \"redhat-marketplace-t5nvf\" (UID: \"96c1c05d-009c-4c0a-9acb-b8cc3505ff4b\") " pod="openshift-marketplace/redhat-marketplace-t5nvf" Jan 29 07:09:57 crc kubenswrapper[5017]: I0129 07:09:57.711793 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t5nvf" Jan 29 07:09:58 crc kubenswrapper[5017]: I0129 07:09:58.215284 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-t5nvf"] Jan 29 07:09:58 crc kubenswrapper[5017]: I0129 07:09:58.394355 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t5nvf" event={"ID":"96c1c05d-009c-4c0a-9acb-b8cc3505ff4b","Type":"ContainerStarted","Data":"efdeb14a93f7ffda3a3cf5ade04a791cc6e09e13dc69a1bbaf6df6352e826972"} Jan 29 07:09:59 crc kubenswrapper[5017]: I0129 07:09:59.409171 5017 generic.go:334] "Generic (PLEG): container finished" podID="96c1c05d-009c-4c0a-9acb-b8cc3505ff4b" containerID="1d1e192904de8975e58e9bd230881b1935cb903ae61520ebcb3a2fc319efb859" exitCode=0 Jan 29 07:09:59 crc kubenswrapper[5017]: I0129 07:09:59.409595 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t5nvf" event={"ID":"96c1c05d-009c-4c0a-9acb-b8cc3505ff4b","Type":"ContainerDied","Data":"1d1e192904de8975e58e9bd230881b1935cb903ae61520ebcb3a2fc319efb859"} Jan 29 07:09:59 crc kubenswrapper[5017]: I0129 07:09:59.412509 5017 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 07:10:00 crc kubenswrapper[5017]: I0129 07:10:00.425691 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t5nvf" event={"ID":"96c1c05d-009c-4c0a-9acb-b8cc3505ff4b","Type":"ContainerStarted","Data":"ac8566865fbc3b8d69481e0f14b4a7b13bb09e0aaec1b889a8725235e08cc766"} Jan 29 07:10:01 crc kubenswrapper[5017]: I0129 07:10:01.438405 5017 generic.go:334] "Generic (PLEG): container finished" podID="96c1c05d-009c-4c0a-9acb-b8cc3505ff4b" containerID="ac8566865fbc3b8d69481e0f14b4a7b13bb09e0aaec1b889a8725235e08cc766" exitCode=0 Jan 29 07:10:01 crc kubenswrapper[5017]: I0129 07:10:01.438483 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t5nvf" event={"ID":"96c1c05d-009c-4c0a-9acb-b8cc3505ff4b","Type":"ContainerDied","Data":"ac8566865fbc3b8d69481e0f14b4a7b13bb09e0aaec1b889a8725235e08cc766"} Jan 29 07:10:02 crc kubenswrapper[5017]: I0129 07:10:02.464583 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t5nvf" event={"ID":"96c1c05d-009c-4c0a-9acb-b8cc3505ff4b","Type":"ContainerStarted","Data":"2f65d74b08bd5f041d8e4a68323d6f5550a0d8f3d2afa015c2086427e08268bc"} Jan 29 07:10:02 crc kubenswrapper[5017]: I0129 07:10:02.495660 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-t5nvf" podStartSLOduration=3.016229783 podStartE2EDuration="5.495633936s" podCreationTimestamp="2026-01-29 07:09:57 +0000 UTC" firstStartedPulling="2026-01-29 07:09:59.41214092 +0000 UTC m=+2085.786588540" lastFinishedPulling="2026-01-29 07:10:01.891545073 +0000 UTC m=+2088.265992693" observedRunningTime="2026-01-29 07:10:02.486902582 +0000 UTC m=+2088.861350202" watchObservedRunningTime="2026-01-29 07:10:02.495633936 +0000 UTC 
m=+2088.870081556" Jan 29 07:10:07 crc kubenswrapper[5017]: I0129 07:10:07.713017 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-t5nvf" Jan 29 07:10:07 crc kubenswrapper[5017]: I0129 07:10:07.714327 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-t5nvf" Jan 29 07:10:07 crc kubenswrapper[5017]: I0129 07:10:07.782520 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-t5nvf" Jan 29 07:10:08 crc kubenswrapper[5017]: I0129 07:10:08.590798 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-t5nvf" Jan 29 07:10:08 crc kubenswrapper[5017]: I0129 07:10:08.667701 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-t5nvf"] Jan 29 07:10:10 crc kubenswrapper[5017]: I0129 07:10:10.533045 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-t5nvf" podUID="96c1c05d-009c-4c0a-9acb-b8cc3505ff4b" containerName="registry-server" containerID="cri-o://2f65d74b08bd5f041d8e4a68323d6f5550a0d8f3d2afa015c2086427e08268bc" gracePeriod=2 Jan 29 07:10:11 crc kubenswrapper[5017]: I0129 07:10:11.049328 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t5nvf" Jan 29 07:10:11 crc kubenswrapper[5017]: I0129 07:10:11.241708 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96c1c05d-009c-4c0a-9acb-b8cc3505ff4b-utilities\") pod \"96c1c05d-009c-4c0a-9acb-b8cc3505ff4b\" (UID: \"96c1c05d-009c-4c0a-9acb-b8cc3505ff4b\") " Jan 29 07:10:11 crc kubenswrapper[5017]: I0129 07:10:11.242403 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96c1c05d-009c-4c0a-9acb-b8cc3505ff4b-catalog-content\") pod \"96c1c05d-009c-4c0a-9acb-b8cc3505ff4b\" (UID: \"96c1c05d-009c-4c0a-9acb-b8cc3505ff4b\") " Jan 29 07:10:11 crc kubenswrapper[5017]: I0129 07:10:11.242598 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vsdrh\" (UniqueName: \"kubernetes.io/projected/96c1c05d-009c-4c0a-9acb-b8cc3505ff4b-kube-api-access-vsdrh\") pod \"96c1c05d-009c-4c0a-9acb-b8cc3505ff4b\" (UID: \"96c1c05d-009c-4c0a-9acb-b8cc3505ff4b\") " Jan 29 07:10:11 crc kubenswrapper[5017]: I0129 07:10:11.243170 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96c1c05d-009c-4c0a-9acb-b8cc3505ff4b-utilities" (OuterVolumeSpecName: "utilities") pod "96c1c05d-009c-4c0a-9acb-b8cc3505ff4b" (UID: "96c1c05d-009c-4c0a-9acb-b8cc3505ff4b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:10:11 crc kubenswrapper[5017]: I0129 07:10:11.249335 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96c1c05d-009c-4c0a-9acb-b8cc3505ff4b-kube-api-access-vsdrh" (OuterVolumeSpecName: "kube-api-access-vsdrh") pod "96c1c05d-009c-4c0a-9acb-b8cc3505ff4b" (UID: "96c1c05d-009c-4c0a-9acb-b8cc3505ff4b"). InnerVolumeSpecName "kube-api-access-vsdrh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:10:11 crc kubenswrapper[5017]: I0129 07:10:11.272661 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96c1c05d-009c-4c0a-9acb-b8cc3505ff4b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "96c1c05d-009c-4c0a-9acb-b8cc3505ff4b" (UID: "96c1c05d-009c-4c0a-9acb-b8cc3505ff4b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:10:11 crc kubenswrapper[5017]: I0129 07:10:11.344683 5017 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96c1c05d-009c-4c0a-9acb-b8cc3505ff4b-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 07:10:11 crc kubenswrapper[5017]: I0129 07:10:11.344725 5017 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96c1c05d-009c-4c0a-9acb-b8cc3505ff4b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 07:10:11 crc kubenswrapper[5017]: I0129 07:10:11.344743 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vsdrh\" (UniqueName: \"kubernetes.io/projected/96c1c05d-009c-4c0a-9acb-b8cc3505ff4b-kube-api-access-vsdrh\") on node \"crc\" DevicePath \"\"" Jan 29 07:10:11 crc kubenswrapper[5017]: I0129 07:10:11.544511 5017 generic.go:334] "Generic (PLEG): container finished" podID="96c1c05d-009c-4c0a-9acb-b8cc3505ff4b" containerID="2f65d74b08bd5f041d8e4a68323d6f5550a0d8f3d2afa015c2086427e08268bc" exitCode=0 Jan 29 07:10:11 crc kubenswrapper[5017]: I0129 07:10:11.544562 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t5nvf" event={"ID":"96c1c05d-009c-4c0a-9acb-b8cc3505ff4b","Type":"ContainerDied","Data":"2f65d74b08bd5f041d8e4a68323d6f5550a0d8f3d2afa015c2086427e08268bc"} Jan 29 07:10:11 crc kubenswrapper[5017]: I0129 07:10:11.544594 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t5nvf" event={"ID":"96c1c05d-009c-4c0a-9acb-b8cc3505ff4b","Type":"ContainerDied","Data":"efdeb14a93f7ffda3a3cf5ade04a791cc6e09e13dc69a1bbaf6df6352e826972"} Jan 29 07:10:11 crc kubenswrapper[5017]: I0129 07:10:11.544602 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t5nvf" Jan 29 07:10:11 crc kubenswrapper[5017]: I0129 07:10:11.544614 5017 scope.go:117] "RemoveContainer" containerID="2f65d74b08bd5f041d8e4a68323d6f5550a0d8f3d2afa015c2086427e08268bc" Jan 29 07:10:11 crc kubenswrapper[5017]: I0129 07:10:11.567310 5017 scope.go:117] "RemoveContainer" containerID="ac8566865fbc3b8d69481e0f14b4a7b13bb09e0aaec1b889a8725235e08cc766" Jan 29 07:10:11 crc kubenswrapper[5017]: I0129 07:10:11.592396 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-t5nvf"] Jan 29 07:10:11 crc kubenswrapper[5017]: I0129 07:10:11.602252 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-t5nvf"] Jan 29 07:10:11 crc kubenswrapper[5017]: I0129 07:10:11.606255 5017 scope.go:117] "RemoveContainer" containerID="1d1e192904de8975e58e9bd230881b1935cb903ae61520ebcb3a2fc319efb859" Jan 29 07:10:11 crc kubenswrapper[5017]: I0129 07:10:11.631179 5017 scope.go:117] "RemoveContainer" containerID="2f65d74b08bd5f041d8e4a68323d6f5550a0d8f3d2afa015c2086427e08268bc" Jan 29 07:10:11 crc kubenswrapper[5017]: E0129 07:10:11.631569 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f65d74b08bd5f041d8e4a68323d6f5550a0d8f3d2afa015c2086427e08268bc\": container with ID starting with 2f65d74b08bd5f041d8e4a68323d6f5550a0d8f3d2afa015c2086427e08268bc not found: ID does not exist" containerID="2f65d74b08bd5f041d8e4a68323d6f5550a0d8f3d2afa015c2086427e08268bc" Jan 29 07:10:11 crc kubenswrapper[5017]: I0129 07:10:11.631610 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f65d74b08bd5f041d8e4a68323d6f5550a0d8f3d2afa015c2086427e08268bc"} err="failed to get container status \"2f65d74b08bd5f041d8e4a68323d6f5550a0d8f3d2afa015c2086427e08268bc\": rpc error: code = NotFound desc = could not find container \"2f65d74b08bd5f041d8e4a68323d6f5550a0d8f3d2afa015c2086427e08268bc\": container with ID starting with 2f65d74b08bd5f041d8e4a68323d6f5550a0d8f3d2afa015c2086427e08268bc not found: ID does not exist" Jan 29 07:10:11 crc kubenswrapper[5017]: I0129 07:10:11.631637 5017 scope.go:117] "RemoveContainer" containerID="ac8566865fbc3b8d69481e0f14b4a7b13bb09e0aaec1b889a8725235e08cc766" Jan 29 07:10:11 crc kubenswrapper[5017]: E0129 07:10:11.631829 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac8566865fbc3b8d69481e0f14b4a7b13bb09e0aaec1b889a8725235e08cc766\": container with ID starting with ac8566865fbc3b8d69481e0f14b4a7b13bb09e0aaec1b889a8725235e08cc766 not found: ID does not exist" containerID="ac8566865fbc3b8d69481e0f14b4a7b13bb09e0aaec1b889a8725235e08cc766" Jan 29 07:10:11 crc kubenswrapper[5017]: I0129 07:10:11.631849 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac8566865fbc3b8d69481e0f14b4a7b13bb09e0aaec1b889a8725235e08cc766"} err="failed to get container status \"ac8566865fbc3b8d69481e0f14b4a7b13bb09e0aaec1b889a8725235e08cc766\": rpc error: code = NotFound desc = could not find container \"ac8566865fbc3b8d69481e0f14b4a7b13bb09e0aaec1b889a8725235e08cc766\": container with ID starting with ac8566865fbc3b8d69481e0f14b4a7b13bb09e0aaec1b889a8725235e08cc766 not found: ID does not exist" Jan 29 07:10:11 crc kubenswrapper[5017]: I0129 07:10:11.631863 5017 scope.go:117] "RemoveContainer" 
containerID="1d1e192904de8975e58e9bd230881b1935cb903ae61520ebcb3a2fc319efb859" Jan 29 07:10:11 crc kubenswrapper[5017]: E0129 07:10:11.632117 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d1e192904de8975e58e9bd230881b1935cb903ae61520ebcb3a2fc319efb859\": container with ID starting with 1d1e192904de8975e58e9bd230881b1935cb903ae61520ebcb3a2fc319efb859 not found: ID does not exist" containerID="1d1e192904de8975e58e9bd230881b1935cb903ae61520ebcb3a2fc319efb859" Jan 29 07:10:11 crc kubenswrapper[5017]: I0129 07:10:11.632137 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d1e192904de8975e58e9bd230881b1935cb903ae61520ebcb3a2fc319efb859"} err="failed to get container status \"1d1e192904de8975e58e9bd230881b1935cb903ae61520ebcb3a2fc319efb859\": rpc error: code = NotFound desc = could not find container \"1d1e192904de8975e58e9bd230881b1935cb903ae61520ebcb3a2fc319efb859\": container with ID starting with 1d1e192904de8975e58e9bd230881b1935cb903ae61520ebcb3a2fc319efb859 not found: ID does not exist" Jan 29 07:10:12 crc kubenswrapper[5017]: I0129 07:10:12.332536 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96c1c05d-009c-4c0a-9acb-b8cc3505ff4b" path="/var/lib/kubelet/pods/96c1c05d-009c-4c0a-9acb-b8cc3505ff4b/volumes" Jan 29 07:10:46 crc kubenswrapper[5017]: I0129 07:10:46.723568 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-kqrqt"] Jan 29 07:10:46 crc kubenswrapper[5017]: E0129 07:10:46.725393 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96c1c05d-009c-4c0a-9acb-b8cc3505ff4b" containerName="extract-content" Jan 29 07:10:46 crc kubenswrapper[5017]: I0129 07:10:46.725420 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="96c1c05d-009c-4c0a-9acb-b8cc3505ff4b" containerName="extract-content" Jan 29 07:10:46 crc kubenswrapper[5017]: E0129 07:10:46.725454 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96c1c05d-009c-4c0a-9acb-b8cc3505ff4b" containerName="extract-utilities" Jan 29 07:10:46 crc kubenswrapper[5017]: I0129 07:10:46.725466 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="96c1c05d-009c-4c0a-9acb-b8cc3505ff4b" containerName="extract-utilities" Jan 29 07:10:46 crc kubenswrapper[5017]: E0129 07:10:46.725485 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96c1c05d-009c-4c0a-9acb-b8cc3505ff4b" containerName="registry-server" Jan 29 07:10:46 crc kubenswrapper[5017]: I0129 07:10:46.725496 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="96c1c05d-009c-4c0a-9acb-b8cc3505ff4b" containerName="registry-server" Jan 29 07:10:46 crc kubenswrapper[5017]: I0129 07:10:46.725769 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="96c1c05d-009c-4c0a-9acb-b8cc3505ff4b" containerName="registry-server" Jan 29 07:10:46 crc kubenswrapper[5017]: I0129 07:10:46.727608 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kqrqt" Jan 29 07:10:46 crc kubenswrapper[5017]: I0129 07:10:46.746300 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kqrqt"] Jan 29 07:10:46 crc kubenswrapper[5017]: I0129 07:10:46.789814 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dh8rz\" (UniqueName: \"kubernetes.io/projected/4cb122f5-27e0-4d5f-942b-92338d96d165-kube-api-access-dh8rz\") pod \"community-operators-kqrqt\" (UID: \"4cb122f5-27e0-4d5f-942b-92338d96d165\") " pod="openshift-marketplace/community-operators-kqrqt" Jan 29 07:10:46 crc kubenswrapper[5017]: I0129 07:10:46.789897 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4cb122f5-27e0-4d5f-942b-92338d96d165-catalog-content\") pod \"community-operators-kqrqt\" (UID: \"4cb122f5-27e0-4d5f-942b-92338d96d165\") " pod="openshift-marketplace/community-operators-kqrqt" Jan 29 07:10:46 crc kubenswrapper[5017]: I0129 07:10:46.790003 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4cb122f5-27e0-4d5f-942b-92338d96d165-utilities\") pod \"community-operators-kqrqt\" (UID: \"4cb122f5-27e0-4d5f-942b-92338d96d165\") " pod="openshift-marketplace/community-operators-kqrqt" Jan 29 07:10:46 crc kubenswrapper[5017]: I0129 07:10:46.891428 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4cb122f5-27e0-4d5f-942b-92338d96d165-utilities\") pod \"community-operators-kqrqt\" (UID: \"4cb122f5-27e0-4d5f-942b-92338d96d165\") " pod="openshift-marketplace/community-operators-kqrqt" Jan 29 07:10:46 crc kubenswrapper[5017]: I0129 07:10:46.891514 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dh8rz\" (UniqueName: \"kubernetes.io/projected/4cb122f5-27e0-4d5f-942b-92338d96d165-kube-api-access-dh8rz\") pod \"community-operators-kqrqt\" (UID: \"4cb122f5-27e0-4d5f-942b-92338d96d165\") " pod="openshift-marketplace/community-operators-kqrqt" Jan 29 07:10:46 crc kubenswrapper[5017]: I0129 07:10:46.891558 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4cb122f5-27e0-4d5f-942b-92338d96d165-catalog-content\") pod \"community-operators-kqrqt\" (UID: \"4cb122f5-27e0-4d5f-942b-92338d96d165\") " pod="openshift-marketplace/community-operators-kqrqt" Jan 29 07:10:46 crc kubenswrapper[5017]: I0129 07:10:46.892126 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4cb122f5-27e0-4d5f-942b-92338d96d165-utilities\") pod \"community-operators-kqrqt\" (UID: \"4cb122f5-27e0-4d5f-942b-92338d96d165\") " pod="openshift-marketplace/community-operators-kqrqt" Jan 29 07:10:46 crc kubenswrapper[5017]: I0129 07:10:46.892193 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4cb122f5-27e0-4d5f-942b-92338d96d165-catalog-content\") pod \"community-operators-kqrqt\" (UID: \"4cb122f5-27e0-4d5f-942b-92338d96d165\") " pod="openshift-marketplace/community-operators-kqrqt" Jan 29 07:10:46 crc kubenswrapper[5017]: I0129 07:10:46.923365 5017 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-dh8rz\" (UniqueName: \"kubernetes.io/projected/4cb122f5-27e0-4d5f-942b-92338d96d165-kube-api-access-dh8rz\") pod \"community-operators-kqrqt\" (UID: \"4cb122f5-27e0-4d5f-942b-92338d96d165\") " pod="openshift-marketplace/community-operators-kqrqt" Jan 29 07:10:47 crc kubenswrapper[5017]: I0129 07:10:47.053274 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kqrqt" Jan 29 07:10:47 crc kubenswrapper[5017]: I0129 07:10:47.679116 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kqrqt"] Jan 29 07:10:47 crc kubenswrapper[5017]: I0129 07:10:47.892321 5017 generic.go:334] "Generic (PLEG): container finished" podID="4cb122f5-27e0-4d5f-942b-92338d96d165" containerID="b0040b69cbd513ce7242568c56229a32df4137de47bd9e2a50ea563b81de9bd4" exitCode=0 Jan 29 07:10:47 crc kubenswrapper[5017]: I0129 07:10:47.892376 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kqrqt" event={"ID":"4cb122f5-27e0-4d5f-942b-92338d96d165","Type":"ContainerDied","Data":"b0040b69cbd513ce7242568c56229a32df4137de47bd9e2a50ea563b81de9bd4"} Jan 29 07:10:47 crc kubenswrapper[5017]: I0129 07:10:47.892410 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kqrqt" event={"ID":"4cb122f5-27e0-4d5f-942b-92338d96d165","Type":"ContainerStarted","Data":"3fa889eb3dfe0613206490568c9f56601c5d32cca2daf1b40a9135554b183ff8"} Jan 29 07:10:48 crc kubenswrapper[5017]: I0129 07:10:48.901308 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kqrqt" event={"ID":"4cb122f5-27e0-4d5f-942b-92338d96d165","Type":"ContainerStarted","Data":"d6dba4d99e9afafa1b587c64cdafd65755bd756288aef531a93f277bb5bbea5f"} Jan 29 07:10:49 crc kubenswrapper[5017]: I0129 07:10:49.914621 5017 generic.go:334] "Generic (PLEG): container finished" podID="4cb122f5-27e0-4d5f-942b-92338d96d165" containerID="d6dba4d99e9afafa1b587c64cdafd65755bd756288aef531a93f277bb5bbea5f" exitCode=0 Jan 29 07:10:49 crc kubenswrapper[5017]: I0129 07:10:49.914730 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kqrqt" event={"ID":"4cb122f5-27e0-4d5f-942b-92338d96d165","Type":"ContainerDied","Data":"d6dba4d99e9afafa1b587c64cdafd65755bd756288aef531a93f277bb5bbea5f"} Jan 29 07:10:50 crc kubenswrapper[5017]: I0129 07:10:50.924139 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kqrqt" event={"ID":"4cb122f5-27e0-4d5f-942b-92338d96d165","Type":"ContainerStarted","Data":"525fe6cc75daead64a804fc86a6f60989e3d6cffda5443318d9b9cbb7ed4598c"} Jan 29 07:10:50 crc kubenswrapper[5017]: I0129 07:10:50.950027 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-kqrqt" podStartSLOduration=2.541771867 podStartE2EDuration="4.950006738s" podCreationTimestamp="2026-01-29 07:10:46 +0000 UTC" firstStartedPulling="2026-01-29 07:10:47.894667964 +0000 UTC m=+2134.269115574" lastFinishedPulling="2026-01-29 07:10:50.302902795 +0000 UTC m=+2136.677350445" observedRunningTime="2026-01-29 07:10:50.946047683 +0000 UTC m=+2137.320495303" watchObservedRunningTime="2026-01-29 07:10:50.950006738 +0000 UTC m=+2137.324454348" Jan 29 07:10:57 crc kubenswrapper[5017]: I0129 07:10:57.053872 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/community-operators-kqrqt" Jan 29 07:10:57 crc kubenswrapper[5017]: I0129 07:10:57.054710 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-kqrqt" Jan 29 07:10:57 crc kubenswrapper[5017]: I0129 07:10:57.108218 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-kqrqt" Jan 29 07:10:58 crc kubenswrapper[5017]: I0129 07:10:58.036443 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-kqrqt" Jan 29 07:10:58 crc kubenswrapper[5017]: I0129 07:10:58.094768 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kqrqt"] Jan 29 07:11:00 crc kubenswrapper[5017]: I0129 07:11:00.012454 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-kqrqt" podUID="4cb122f5-27e0-4d5f-942b-92338d96d165" containerName="registry-server" containerID="cri-o://525fe6cc75daead64a804fc86a6f60989e3d6cffda5443318d9b9cbb7ed4598c" gracePeriod=2 Jan 29 07:11:00 crc kubenswrapper[5017]: I0129 07:11:00.647814 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kqrqt" Jan 29 07:11:00 crc kubenswrapper[5017]: I0129 07:11:00.745015 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4cb122f5-27e0-4d5f-942b-92338d96d165-catalog-content\") pod \"4cb122f5-27e0-4d5f-942b-92338d96d165\" (UID: \"4cb122f5-27e0-4d5f-942b-92338d96d165\") " Jan 29 07:11:00 crc kubenswrapper[5017]: I0129 07:11:00.745125 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dh8rz\" (UniqueName: \"kubernetes.io/projected/4cb122f5-27e0-4d5f-942b-92338d96d165-kube-api-access-dh8rz\") pod \"4cb122f5-27e0-4d5f-942b-92338d96d165\" (UID: \"4cb122f5-27e0-4d5f-942b-92338d96d165\") " Jan 29 07:11:00 crc kubenswrapper[5017]: I0129 07:11:00.745337 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4cb122f5-27e0-4d5f-942b-92338d96d165-utilities\") pod \"4cb122f5-27e0-4d5f-942b-92338d96d165\" (UID: \"4cb122f5-27e0-4d5f-942b-92338d96d165\") " Jan 29 07:11:00 crc kubenswrapper[5017]: I0129 07:11:00.747864 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4cb122f5-27e0-4d5f-942b-92338d96d165-utilities" (OuterVolumeSpecName: "utilities") pod "4cb122f5-27e0-4d5f-942b-92338d96d165" (UID: "4cb122f5-27e0-4d5f-942b-92338d96d165"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:11:00 crc kubenswrapper[5017]: I0129 07:11:00.755333 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cb122f5-27e0-4d5f-942b-92338d96d165-kube-api-access-dh8rz" (OuterVolumeSpecName: "kube-api-access-dh8rz") pod "4cb122f5-27e0-4d5f-942b-92338d96d165" (UID: "4cb122f5-27e0-4d5f-942b-92338d96d165"). InnerVolumeSpecName "kube-api-access-dh8rz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:11:00 crc kubenswrapper[5017]: I0129 07:11:00.812801 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4cb122f5-27e0-4d5f-942b-92338d96d165-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4cb122f5-27e0-4d5f-942b-92338d96d165" (UID: "4cb122f5-27e0-4d5f-942b-92338d96d165"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:11:00 crc kubenswrapper[5017]: I0129 07:11:00.847319 5017 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4cb122f5-27e0-4d5f-942b-92338d96d165-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 07:11:00 crc kubenswrapper[5017]: I0129 07:11:00.847360 5017 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4cb122f5-27e0-4d5f-942b-92338d96d165-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 07:11:00 crc kubenswrapper[5017]: I0129 07:11:00.847375 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dh8rz\" (UniqueName: \"kubernetes.io/projected/4cb122f5-27e0-4d5f-942b-92338d96d165-kube-api-access-dh8rz\") on node \"crc\" DevicePath \"\"" Jan 29 07:11:01 crc kubenswrapper[5017]: I0129 07:11:01.023691 5017 generic.go:334] "Generic (PLEG): container finished" podID="4cb122f5-27e0-4d5f-942b-92338d96d165" containerID="525fe6cc75daead64a804fc86a6f60989e3d6cffda5443318d9b9cbb7ed4598c" exitCode=0 Jan 29 07:11:01 crc kubenswrapper[5017]: I0129 07:11:01.023742 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kqrqt" event={"ID":"4cb122f5-27e0-4d5f-942b-92338d96d165","Type":"ContainerDied","Data":"525fe6cc75daead64a804fc86a6f60989e3d6cffda5443318d9b9cbb7ed4598c"} Jan 29 07:11:01 crc kubenswrapper[5017]: I0129 07:11:01.023775 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kqrqt" event={"ID":"4cb122f5-27e0-4d5f-942b-92338d96d165","Type":"ContainerDied","Data":"3fa889eb3dfe0613206490568c9f56601c5d32cca2daf1b40a9135554b183ff8"} Jan 29 07:11:01 crc kubenswrapper[5017]: I0129 07:11:01.023793 5017 scope.go:117] "RemoveContainer" containerID="525fe6cc75daead64a804fc86a6f60989e3d6cffda5443318d9b9cbb7ed4598c" Jan 29 07:11:01 crc kubenswrapper[5017]: I0129 07:11:01.023950 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kqrqt" Jan 29 07:11:01 crc kubenswrapper[5017]: I0129 07:11:01.063911 5017 scope.go:117] "RemoveContainer" containerID="d6dba4d99e9afafa1b587c64cdafd65755bd756288aef531a93f277bb5bbea5f" Jan 29 07:11:01 crc kubenswrapper[5017]: I0129 07:11:01.068434 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kqrqt"] Jan 29 07:11:01 crc kubenswrapper[5017]: I0129 07:11:01.078706 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-kqrqt"] Jan 29 07:11:01 crc kubenswrapper[5017]: I0129 07:11:01.102096 5017 scope.go:117] "RemoveContainer" containerID="b0040b69cbd513ce7242568c56229a32df4137de47bd9e2a50ea563b81de9bd4" Jan 29 07:11:01 crc kubenswrapper[5017]: I0129 07:11:01.128721 5017 scope.go:117] "RemoveContainer" containerID="525fe6cc75daead64a804fc86a6f60989e3d6cffda5443318d9b9cbb7ed4598c" Jan 29 07:11:01 crc kubenswrapper[5017]: E0129 07:11:01.129545 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"525fe6cc75daead64a804fc86a6f60989e3d6cffda5443318d9b9cbb7ed4598c\": container with ID starting with 525fe6cc75daead64a804fc86a6f60989e3d6cffda5443318d9b9cbb7ed4598c not found: ID does not exist" containerID="525fe6cc75daead64a804fc86a6f60989e3d6cffda5443318d9b9cbb7ed4598c" Jan 29 07:11:01 crc kubenswrapper[5017]: I0129 07:11:01.129714 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"525fe6cc75daead64a804fc86a6f60989e3d6cffda5443318d9b9cbb7ed4598c"} err="failed to get container status \"525fe6cc75daead64a804fc86a6f60989e3d6cffda5443318d9b9cbb7ed4598c\": rpc error: code = NotFound desc = could not find container \"525fe6cc75daead64a804fc86a6f60989e3d6cffda5443318d9b9cbb7ed4598c\": container with ID starting with 525fe6cc75daead64a804fc86a6f60989e3d6cffda5443318d9b9cbb7ed4598c not found: ID does not exist" Jan 29 07:11:01 crc kubenswrapper[5017]: I0129 07:11:01.129761 5017 scope.go:117] "RemoveContainer" containerID="d6dba4d99e9afafa1b587c64cdafd65755bd756288aef531a93f277bb5bbea5f" Jan 29 07:11:01 crc kubenswrapper[5017]: E0129 07:11:01.130318 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d6dba4d99e9afafa1b587c64cdafd65755bd756288aef531a93f277bb5bbea5f\": container with ID starting with d6dba4d99e9afafa1b587c64cdafd65755bd756288aef531a93f277bb5bbea5f not found: ID does not exist" containerID="d6dba4d99e9afafa1b587c64cdafd65755bd756288aef531a93f277bb5bbea5f" Jan 29 07:11:01 crc kubenswrapper[5017]: I0129 07:11:01.130365 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6dba4d99e9afafa1b587c64cdafd65755bd756288aef531a93f277bb5bbea5f"} err="failed to get container status \"d6dba4d99e9afafa1b587c64cdafd65755bd756288aef531a93f277bb5bbea5f\": rpc error: code = NotFound desc = could not find container \"d6dba4d99e9afafa1b587c64cdafd65755bd756288aef531a93f277bb5bbea5f\": container with ID starting with d6dba4d99e9afafa1b587c64cdafd65755bd756288aef531a93f277bb5bbea5f not found: ID does not exist" Jan 29 07:11:01 crc kubenswrapper[5017]: I0129 07:11:01.130397 5017 scope.go:117] "RemoveContainer" containerID="b0040b69cbd513ce7242568c56229a32df4137de47bd9e2a50ea563b81de9bd4" Jan 29 07:11:01 crc kubenswrapper[5017]: E0129 07:11:01.130897 5017 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"b0040b69cbd513ce7242568c56229a32df4137de47bd9e2a50ea563b81de9bd4\": container with ID starting with b0040b69cbd513ce7242568c56229a32df4137de47bd9e2a50ea563b81de9bd4 not found: ID does not exist" containerID="b0040b69cbd513ce7242568c56229a32df4137de47bd9e2a50ea563b81de9bd4" Jan 29 07:11:01 crc kubenswrapper[5017]: I0129 07:11:01.130937 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0040b69cbd513ce7242568c56229a32df4137de47bd9e2a50ea563b81de9bd4"} err="failed to get container status \"b0040b69cbd513ce7242568c56229a32df4137de47bd9e2a50ea563b81de9bd4\": rpc error: code = NotFound desc = could not find container \"b0040b69cbd513ce7242568c56229a32df4137de47bd9e2a50ea563b81de9bd4\": container with ID starting with b0040b69cbd513ce7242568c56229a32df4137de47bd9e2a50ea563b81de9bd4 not found: ID does not exist" Jan 29 07:11:02 crc kubenswrapper[5017]: I0129 07:11:02.339872 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4cb122f5-27e0-4d5f-942b-92338d96d165" path="/var/lib/kubelet/pods/4cb122f5-27e0-4d5f-942b-92338d96d165/volumes" Jan 29 07:11:24 crc kubenswrapper[5017]: I0129 07:11:24.456690 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dwvmg"] Jan 29 07:11:24 crc kubenswrapper[5017]: E0129 07:11:24.461277 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cb122f5-27e0-4d5f-942b-92338d96d165" containerName="extract-content" Jan 29 07:11:24 crc kubenswrapper[5017]: I0129 07:11:24.461441 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cb122f5-27e0-4d5f-942b-92338d96d165" containerName="extract-content" Jan 29 07:11:24 crc kubenswrapper[5017]: E0129 07:11:24.461539 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cb122f5-27e0-4d5f-942b-92338d96d165" containerName="extract-utilities" Jan 29 07:11:24 crc kubenswrapper[5017]: I0129 07:11:24.461618 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cb122f5-27e0-4d5f-942b-92338d96d165" containerName="extract-utilities" Jan 29 07:11:24 crc kubenswrapper[5017]: E0129 07:11:24.461761 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cb122f5-27e0-4d5f-942b-92338d96d165" containerName="registry-server" Jan 29 07:11:24 crc kubenswrapper[5017]: I0129 07:11:24.461839 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cb122f5-27e0-4d5f-942b-92338d96d165" containerName="registry-server" Jan 29 07:11:24 crc kubenswrapper[5017]: I0129 07:11:24.462137 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="4cb122f5-27e0-4d5f-942b-92338d96d165" containerName="registry-server" Jan 29 07:11:24 crc kubenswrapper[5017]: I0129 07:11:24.463341 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dwvmg" Jan 29 07:11:24 crc kubenswrapper[5017]: I0129 07:11:24.496337 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dwvmg"] Jan 29 07:11:24 crc kubenswrapper[5017]: I0129 07:11:24.556655 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a886998f-81da-4505-8b3e-521bc6660530-catalog-content\") pod \"redhat-operators-dwvmg\" (UID: \"a886998f-81da-4505-8b3e-521bc6660530\") " pod="openshift-marketplace/redhat-operators-dwvmg" Jan 29 07:11:24 crc kubenswrapper[5017]: I0129 07:11:24.557116 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tz5lk\" (UniqueName: \"kubernetes.io/projected/a886998f-81da-4505-8b3e-521bc6660530-kube-api-access-tz5lk\") pod \"redhat-operators-dwvmg\" (UID: \"a886998f-81da-4505-8b3e-521bc6660530\") " pod="openshift-marketplace/redhat-operators-dwvmg" Jan 29 07:11:24 crc kubenswrapper[5017]: I0129 07:11:24.557276 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a886998f-81da-4505-8b3e-521bc6660530-utilities\") pod \"redhat-operators-dwvmg\" (UID: \"a886998f-81da-4505-8b3e-521bc6660530\") " pod="openshift-marketplace/redhat-operators-dwvmg" Jan 29 07:11:24 crc kubenswrapper[5017]: I0129 07:11:24.659277 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a886998f-81da-4505-8b3e-521bc6660530-utilities\") pod \"redhat-operators-dwvmg\" (UID: \"a886998f-81da-4505-8b3e-521bc6660530\") " pod="openshift-marketplace/redhat-operators-dwvmg" Jan 29 07:11:24 crc kubenswrapper[5017]: I0129 07:11:24.659426 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a886998f-81da-4505-8b3e-521bc6660530-catalog-content\") pod \"redhat-operators-dwvmg\" (UID: \"a886998f-81da-4505-8b3e-521bc6660530\") " pod="openshift-marketplace/redhat-operators-dwvmg" Jan 29 07:11:24 crc kubenswrapper[5017]: I0129 07:11:24.659462 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tz5lk\" (UniqueName: \"kubernetes.io/projected/a886998f-81da-4505-8b3e-521bc6660530-kube-api-access-tz5lk\") pod \"redhat-operators-dwvmg\" (UID: \"a886998f-81da-4505-8b3e-521bc6660530\") " pod="openshift-marketplace/redhat-operators-dwvmg" Jan 29 07:11:24 crc kubenswrapper[5017]: I0129 07:11:24.660287 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a886998f-81da-4505-8b3e-521bc6660530-utilities\") pod \"redhat-operators-dwvmg\" (UID: \"a886998f-81da-4505-8b3e-521bc6660530\") " pod="openshift-marketplace/redhat-operators-dwvmg" Jan 29 07:11:24 crc kubenswrapper[5017]: I0129 07:11:24.660401 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a886998f-81da-4505-8b3e-521bc6660530-catalog-content\") pod \"redhat-operators-dwvmg\" (UID: \"a886998f-81da-4505-8b3e-521bc6660530\") " pod="openshift-marketplace/redhat-operators-dwvmg" Jan 29 07:11:24 crc kubenswrapper[5017]: I0129 07:11:24.694139 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-tz5lk\" (UniqueName: \"kubernetes.io/projected/a886998f-81da-4505-8b3e-521bc6660530-kube-api-access-tz5lk\") pod \"redhat-operators-dwvmg\" (UID: \"a886998f-81da-4505-8b3e-521bc6660530\") " pod="openshift-marketplace/redhat-operators-dwvmg" Jan 29 07:11:24 crc kubenswrapper[5017]: I0129 07:11:24.813045 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dwvmg" Jan 29 07:11:25 crc kubenswrapper[5017]: I0129 07:11:25.324169 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dwvmg"] Jan 29 07:11:26 crc kubenswrapper[5017]: I0129 07:11:26.270109 5017 generic.go:334] "Generic (PLEG): container finished" podID="a886998f-81da-4505-8b3e-521bc6660530" containerID="15a39fba7719dab55825bc43b93ce79390c2cb35865a2e5e005aed1020dc6410" exitCode=0 Jan 29 07:11:26 crc kubenswrapper[5017]: I0129 07:11:26.270203 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dwvmg" event={"ID":"a886998f-81da-4505-8b3e-521bc6660530","Type":"ContainerDied","Data":"15a39fba7719dab55825bc43b93ce79390c2cb35865a2e5e005aed1020dc6410"} Jan 29 07:11:26 crc kubenswrapper[5017]: I0129 07:11:26.270530 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dwvmg" event={"ID":"a886998f-81da-4505-8b3e-521bc6660530","Type":"ContainerStarted","Data":"f329dbd5f3d8c66d53bb5ea7c2f45439a03d3bde644bc5f6f12230ed52e9b6bf"} Jan 29 07:11:26 crc kubenswrapper[5017]: I0129 07:11:26.538878 5017 patch_prober.go:28] interesting pod/machine-config-daemon-895pl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 07:11:26 crc kubenswrapper[5017]: I0129 07:11:26.539057 5017 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 07:11:27 crc kubenswrapper[5017]: I0129 07:11:27.281184 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dwvmg" event={"ID":"a886998f-81da-4505-8b3e-521bc6660530","Type":"ContainerStarted","Data":"59a69e3b5a1cc0f0f99a61bbeb2f02fd4bbe4cae4bef900732812aa7fa5b11ed"} Jan 29 07:11:28 crc kubenswrapper[5017]: I0129 07:11:28.305890 5017 generic.go:334] "Generic (PLEG): container finished" podID="a886998f-81da-4505-8b3e-521bc6660530" containerID="59a69e3b5a1cc0f0f99a61bbeb2f02fd4bbe4cae4bef900732812aa7fa5b11ed" exitCode=0 Jan 29 07:11:28 crc kubenswrapper[5017]: I0129 07:11:28.305969 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dwvmg" event={"ID":"a886998f-81da-4505-8b3e-521bc6660530","Type":"ContainerDied","Data":"59a69e3b5a1cc0f0f99a61bbeb2f02fd4bbe4cae4bef900732812aa7fa5b11ed"} Jan 29 07:11:29 crc kubenswrapper[5017]: I0129 07:11:29.320770 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dwvmg" event={"ID":"a886998f-81da-4505-8b3e-521bc6660530","Type":"ContainerStarted","Data":"c4945cebae31fc3555ac8347cd2db35668970faf260a79c0fa4417acc9e958e4"} Jan 29 07:11:29 crc kubenswrapper[5017]: I0129 07:11:29.363166 5017 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dwvmg" podStartSLOduration=2.82772645 podStartE2EDuration="5.363127279s" podCreationTimestamp="2026-01-29 07:11:24 +0000 UTC" firstStartedPulling="2026-01-29 07:11:26.272066195 +0000 UTC m=+2172.646513815" lastFinishedPulling="2026-01-29 07:11:28.807467004 +0000 UTC m=+2175.181914644" observedRunningTime="2026-01-29 07:11:29.354331148 +0000 UTC m=+2175.728778838" watchObservedRunningTime="2026-01-29 07:11:29.363127279 +0000 UTC m=+2175.737574919" Jan 29 07:11:34 crc kubenswrapper[5017]: I0129 07:11:34.813715 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dwvmg" Jan 29 07:11:34 crc kubenswrapper[5017]: I0129 07:11:34.814719 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dwvmg" Jan 29 07:11:35 crc kubenswrapper[5017]: I0129 07:11:35.872709 5017 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-dwvmg" podUID="a886998f-81da-4505-8b3e-521bc6660530" containerName="registry-server" probeResult="failure" output=< Jan 29 07:11:35 crc kubenswrapper[5017]: timeout: failed to connect service ":50051" within 1s Jan 29 07:11:35 crc kubenswrapper[5017]: > Jan 29 07:11:37 crc kubenswrapper[5017]: I0129 07:11:37.246612 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-z2j9w"] Jan 29 07:11:37 crc kubenswrapper[5017]: I0129 07:11:37.249658 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z2j9w" Jan 29 07:11:37 crc kubenswrapper[5017]: I0129 07:11:37.266878 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-z2j9w"] Jan 29 07:11:37 crc kubenswrapper[5017]: I0129 07:11:37.399537 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd71b0ca-c843-4109-8ee6-3c03b6b62d2d-catalog-content\") pod \"certified-operators-z2j9w\" (UID: \"cd71b0ca-c843-4109-8ee6-3c03b6b62d2d\") " pod="openshift-marketplace/certified-operators-z2j9w" Jan 29 07:11:37 crc kubenswrapper[5017]: I0129 07:11:37.399605 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd71b0ca-c843-4109-8ee6-3c03b6b62d2d-utilities\") pod \"certified-operators-z2j9w\" (UID: \"cd71b0ca-c843-4109-8ee6-3c03b6b62d2d\") " pod="openshift-marketplace/certified-operators-z2j9w" Jan 29 07:11:37 crc kubenswrapper[5017]: I0129 07:11:37.399663 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9p6fn\" (UniqueName: \"kubernetes.io/projected/cd71b0ca-c843-4109-8ee6-3c03b6b62d2d-kube-api-access-9p6fn\") pod \"certified-operators-z2j9w\" (UID: \"cd71b0ca-c843-4109-8ee6-3c03b6b62d2d\") " pod="openshift-marketplace/certified-operators-z2j9w" Jan 29 07:11:37 crc kubenswrapper[5017]: I0129 07:11:37.505146 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd71b0ca-c843-4109-8ee6-3c03b6b62d2d-catalog-content\") pod \"certified-operators-z2j9w\" (UID: \"cd71b0ca-c843-4109-8ee6-3c03b6b62d2d\") " pod="openshift-marketplace/certified-operators-z2j9w" Jan 29 07:11:37 crc 
kubenswrapper[5017]: I0129 07:11:37.505272 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd71b0ca-c843-4109-8ee6-3c03b6b62d2d-utilities\") pod \"certified-operators-z2j9w\" (UID: \"cd71b0ca-c843-4109-8ee6-3c03b6b62d2d\") " pod="openshift-marketplace/certified-operators-z2j9w" Jan 29 07:11:37 crc kubenswrapper[5017]: I0129 07:11:37.505390 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9p6fn\" (UniqueName: \"kubernetes.io/projected/cd71b0ca-c843-4109-8ee6-3c03b6b62d2d-kube-api-access-9p6fn\") pod \"certified-operators-z2j9w\" (UID: \"cd71b0ca-c843-4109-8ee6-3c03b6b62d2d\") " pod="openshift-marketplace/certified-operators-z2j9w" Jan 29 07:11:37 crc kubenswrapper[5017]: I0129 07:11:37.506610 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd71b0ca-c843-4109-8ee6-3c03b6b62d2d-catalog-content\") pod \"certified-operators-z2j9w\" (UID: \"cd71b0ca-c843-4109-8ee6-3c03b6b62d2d\") " pod="openshift-marketplace/certified-operators-z2j9w" Jan 29 07:11:37 crc kubenswrapper[5017]: I0129 07:11:37.507079 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd71b0ca-c843-4109-8ee6-3c03b6b62d2d-utilities\") pod \"certified-operators-z2j9w\" (UID: \"cd71b0ca-c843-4109-8ee6-3c03b6b62d2d\") " pod="openshift-marketplace/certified-operators-z2j9w" Jan 29 07:11:37 crc kubenswrapper[5017]: I0129 07:11:37.562498 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9p6fn\" (UniqueName: \"kubernetes.io/projected/cd71b0ca-c843-4109-8ee6-3c03b6b62d2d-kube-api-access-9p6fn\") pod \"certified-operators-z2j9w\" (UID: \"cd71b0ca-c843-4109-8ee6-3c03b6b62d2d\") " pod="openshift-marketplace/certified-operators-z2j9w" Jan 29 07:11:37 crc kubenswrapper[5017]: I0129 07:11:37.586360 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-z2j9w" Jan 29 07:11:38 crc kubenswrapper[5017]: W0129 07:11:38.094277 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcd71b0ca_c843_4109_8ee6_3c03b6b62d2d.slice/crio-d3fa5f26a6f6e24add4154672c7af173a62cf3d81a7f20c76c7b0e0ae5f87aca WatchSource:0}: Error finding container d3fa5f26a6f6e24add4154672c7af173a62cf3d81a7f20c76c7b0e0ae5f87aca: Status 404 returned error can't find the container with id d3fa5f26a6f6e24add4154672c7af173a62cf3d81a7f20c76c7b0e0ae5f87aca Jan 29 07:11:38 crc kubenswrapper[5017]: I0129 07:11:38.102408 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-z2j9w"] Jan 29 07:11:38 crc kubenswrapper[5017]: I0129 07:11:38.400446 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z2j9w" event={"ID":"cd71b0ca-c843-4109-8ee6-3c03b6b62d2d","Type":"ContainerStarted","Data":"28673e876962087e5c9adc838cba7ec420512d606828189d002891f0db8caf30"} Jan 29 07:11:38 crc kubenswrapper[5017]: I0129 07:11:38.400514 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z2j9w" event={"ID":"cd71b0ca-c843-4109-8ee6-3c03b6b62d2d","Type":"ContainerStarted","Data":"d3fa5f26a6f6e24add4154672c7af173a62cf3d81a7f20c76c7b0e0ae5f87aca"} Jan 29 07:11:39 crc kubenswrapper[5017]: I0129 07:11:39.413708 5017 generic.go:334] "Generic (PLEG): container finished" podID="cd71b0ca-c843-4109-8ee6-3c03b6b62d2d" containerID="28673e876962087e5c9adc838cba7ec420512d606828189d002891f0db8caf30" exitCode=0 Jan 29 07:11:39 crc kubenswrapper[5017]: I0129 07:11:39.413827 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z2j9w" event={"ID":"cd71b0ca-c843-4109-8ee6-3c03b6b62d2d","Type":"ContainerDied","Data":"28673e876962087e5c9adc838cba7ec420512d606828189d002891f0db8caf30"} Jan 29 07:11:41 crc kubenswrapper[5017]: I0129 07:11:41.437160 5017 generic.go:334] "Generic (PLEG): container finished" podID="cd71b0ca-c843-4109-8ee6-3c03b6b62d2d" containerID="4e4fbb5b8ae02eea72babb8aaa2699e84e8ecee6ef13ba6caf6b0a71ad403ed0" exitCode=0 Jan 29 07:11:41 crc kubenswrapper[5017]: I0129 07:11:41.437293 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z2j9w" event={"ID":"cd71b0ca-c843-4109-8ee6-3c03b6b62d2d","Type":"ContainerDied","Data":"4e4fbb5b8ae02eea72babb8aaa2699e84e8ecee6ef13ba6caf6b0a71ad403ed0"} Jan 29 07:11:42 crc kubenswrapper[5017]: I0129 07:11:42.451425 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z2j9w" event={"ID":"cd71b0ca-c843-4109-8ee6-3c03b6b62d2d","Type":"ContainerStarted","Data":"871f301952e19c4a5c9556e765941e5eea08067a8fcaa81452edc1dc4b96595c"} Jan 29 07:11:42 crc kubenswrapper[5017]: I0129 07:11:42.482860 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-z2j9w" podStartSLOduration=3.054075528 podStartE2EDuration="5.482838654s" podCreationTimestamp="2026-01-29 07:11:37 +0000 UTC" firstStartedPulling="2026-01-29 07:11:39.416102254 +0000 UTC m=+2185.790549884" lastFinishedPulling="2026-01-29 07:11:41.8448654 +0000 UTC m=+2188.219313010" observedRunningTime="2026-01-29 07:11:42.476527412 +0000 UTC m=+2188.850975022" watchObservedRunningTime="2026-01-29 07:11:42.482838654 +0000 UTC m=+2188.857286264" Jan 
29 07:11:44 crc kubenswrapper[5017]: I0129 07:11:44.867006 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dwvmg" Jan 29 07:11:44 crc kubenswrapper[5017]: I0129 07:11:44.947613 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dwvmg" Jan 29 07:11:47 crc kubenswrapper[5017]: I0129 07:11:47.621950 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-z2j9w" Jan 29 07:11:47 crc kubenswrapper[5017]: I0129 07:11:47.622611 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-z2j9w" Jan 29 07:11:47 crc kubenswrapper[5017]: I0129 07:11:47.690502 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-z2j9w" Jan 29 07:11:48 crc kubenswrapper[5017]: I0129 07:11:48.590743 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-z2j9w" Jan 29 07:11:49 crc kubenswrapper[5017]: I0129 07:11:49.241837 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dwvmg"] Jan 29 07:11:49 crc kubenswrapper[5017]: I0129 07:11:49.242295 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-dwvmg" podUID="a886998f-81da-4505-8b3e-521bc6660530" containerName="registry-server" containerID="cri-o://c4945cebae31fc3555ac8347cd2db35668970faf260a79c0fa4417acc9e958e4" gracePeriod=2 Jan 29 07:11:49 crc kubenswrapper[5017]: I0129 07:11:49.439001 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-z2j9w"] Jan 29 07:11:49 crc kubenswrapper[5017]: I0129 07:11:49.521221 5017 generic.go:334] "Generic (PLEG): container finished" podID="a886998f-81da-4505-8b3e-521bc6660530" containerID="c4945cebae31fc3555ac8347cd2db35668970faf260a79c0fa4417acc9e958e4" exitCode=0 Jan 29 07:11:49 crc kubenswrapper[5017]: I0129 07:11:49.521329 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dwvmg" event={"ID":"a886998f-81da-4505-8b3e-521bc6660530","Type":"ContainerDied","Data":"c4945cebae31fc3555ac8347cd2db35668970faf260a79c0fa4417acc9e958e4"} Jan 29 07:11:49 crc kubenswrapper[5017]: I0129 07:11:49.761711 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dwvmg" Jan 29 07:11:49 crc kubenswrapper[5017]: I0129 07:11:49.959676 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a886998f-81da-4505-8b3e-521bc6660530-utilities\") pod \"a886998f-81da-4505-8b3e-521bc6660530\" (UID: \"a886998f-81da-4505-8b3e-521bc6660530\") " Jan 29 07:11:49 crc kubenswrapper[5017]: I0129 07:11:49.959834 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tz5lk\" (UniqueName: \"kubernetes.io/projected/a886998f-81da-4505-8b3e-521bc6660530-kube-api-access-tz5lk\") pod \"a886998f-81da-4505-8b3e-521bc6660530\" (UID: \"a886998f-81da-4505-8b3e-521bc6660530\") " Jan 29 07:11:49 crc kubenswrapper[5017]: I0129 07:11:49.959858 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a886998f-81da-4505-8b3e-521bc6660530-catalog-content\") pod \"a886998f-81da-4505-8b3e-521bc6660530\" (UID: \"a886998f-81da-4505-8b3e-521bc6660530\") " Jan 29 07:11:49 crc kubenswrapper[5017]: I0129 07:11:49.961155 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a886998f-81da-4505-8b3e-521bc6660530-utilities" (OuterVolumeSpecName: "utilities") pod "a886998f-81da-4505-8b3e-521bc6660530" (UID: "a886998f-81da-4505-8b3e-521bc6660530"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:11:49 crc kubenswrapper[5017]: I0129 07:11:49.967400 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a886998f-81da-4505-8b3e-521bc6660530-kube-api-access-tz5lk" (OuterVolumeSpecName: "kube-api-access-tz5lk") pod "a886998f-81da-4505-8b3e-521bc6660530" (UID: "a886998f-81da-4505-8b3e-521bc6660530"). InnerVolumeSpecName "kube-api-access-tz5lk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:11:50 crc kubenswrapper[5017]: I0129 07:11:50.062511 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tz5lk\" (UniqueName: \"kubernetes.io/projected/a886998f-81da-4505-8b3e-521bc6660530-kube-api-access-tz5lk\") on node \"crc\" DevicePath \"\"" Jan 29 07:11:50 crc kubenswrapper[5017]: I0129 07:11:50.062570 5017 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a886998f-81da-4505-8b3e-521bc6660530-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 07:11:50 crc kubenswrapper[5017]: I0129 07:11:50.103191 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a886998f-81da-4505-8b3e-521bc6660530-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a886998f-81da-4505-8b3e-521bc6660530" (UID: "a886998f-81da-4505-8b3e-521bc6660530"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:11:50 crc kubenswrapper[5017]: I0129 07:11:50.163977 5017 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a886998f-81da-4505-8b3e-521bc6660530-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 07:11:50 crc kubenswrapper[5017]: I0129 07:11:50.534882 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dwvmg" Jan 29 07:11:50 crc kubenswrapper[5017]: I0129 07:11:50.534986 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-z2j9w" podUID="cd71b0ca-c843-4109-8ee6-3c03b6b62d2d" containerName="registry-server" containerID="cri-o://871f301952e19c4a5c9556e765941e5eea08067a8fcaa81452edc1dc4b96595c" gracePeriod=2 Jan 29 07:11:50 crc kubenswrapper[5017]: I0129 07:11:50.534866 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dwvmg" event={"ID":"a886998f-81da-4505-8b3e-521bc6660530","Type":"ContainerDied","Data":"f329dbd5f3d8c66d53bb5ea7c2f45439a03d3bde644bc5f6f12230ed52e9b6bf"} Jan 29 07:11:50 crc kubenswrapper[5017]: I0129 07:11:50.535465 5017 scope.go:117] "RemoveContainer" containerID="c4945cebae31fc3555ac8347cd2db35668970faf260a79c0fa4417acc9e958e4" Jan 29 07:11:50 crc kubenswrapper[5017]: I0129 07:11:50.569734 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dwvmg"] Jan 29 07:11:50 crc kubenswrapper[5017]: I0129 07:11:50.571475 5017 scope.go:117] "RemoveContainer" containerID="59a69e3b5a1cc0f0f99a61bbeb2f02fd4bbe4cae4bef900732812aa7fa5b11ed" Jan 29 07:11:50 crc kubenswrapper[5017]: I0129 07:11:50.578930 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-dwvmg"] Jan 29 07:11:50 crc kubenswrapper[5017]: I0129 07:11:50.591290 5017 scope.go:117] "RemoveContainer" containerID="15a39fba7719dab55825bc43b93ce79390c2cb35865a2e5e005aed1020dc6410" Jan 29 07:11:50 crc kubenswrapper[5017]: I0129 07:11:50.995875 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z2j9w" Jan 29 07:11:51 crc kubenswrapper[5017]: I0129 07:11:51.180210 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd71b0ca-c843-4109-8ee6-3c03b6b62d2d-utilities\") pod \"cd71b0ca-c843-4109-8ee6-3c03b6b62d2d\" (UID: \"cd71b0ca-c843-4109-8ee6-3c03b6b62d2d\") " Jan 29 07:11:51 crc kubenswrapper[5017]: I0129 07:11:51.180347 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9p6fn\" (UniqueName: \"kubernetes.io/projected/cd71b0ca-c843-4109-8ee6-3c03b6b62d2d-kube-api-access-9p6fn\") pod \"cd71b0ca-c843-4109-8ee6-3c03b6b62d2d\" (UID: \"cd71b0ca-c843-4109-8ee6-3c03b6b62d2d\") " Jan 29 07:11:51 crc kubenswrapper[5017]: I0129 07:11:51.180491 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd71b0ca-c843-4109-8ee6-3c03b6b62d2d-catalog-content\") pod \"cd71b0ca-c843-4109-8ee6-3c03b6b62d2d\" (UID: \"cd71b0ca-c843-4109-8ee6-3c03b6b62d2d\") " Jan 29 07:11:51 crc kubenswrapper[5017]: I0129 07:11:51.181244 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd71b0ca-c843-4109-8ee6-3c03b6b62d2d-utilities" (OuterVolumeSpecName: "utilities") pod "cd71b0ca-c843-4109-8ee6-3c03b6b62d2d" (UID: "cd71b0ca-c843-4109-8ee6-3c03b6b62d2d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:11:51 crc kubenswrapper[5017]: I0129 07:11:51.188251 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd71b0ca-c843-4109-8ee6-3c03b6b62d2d-kube-api-access-9p6fn" (OuterVolumeSpecName: "kube-api-access-9p6fn") pod "cd71b0ca-c843-4109-8ee6-3c03b6b62d2d" (UID: "cd71b0ca-c843-4109-8ee6-3c03b6b62d2d"). InnerVolumeSpecName "kube-api-access-9p6fn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:11:51 crc kubenswrapper[5017]: I0129 07:11:51.246180 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd71b0ca-c843-4109-8ee6-3c03b6b62d2d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cd71b0ca-c843-4109-8ee6-3c03b6b62d2d" (UID: "cd71b0ca-c843-4109-8ee6-3c03b6b62d2d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:11:51 crc kubenswrapper[5017]: I0129 07:11:51.282306 5017 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd71b0ca-c843-4109-8ee6-3c03b6b62d2d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 07:11:51 crc kubenswrapper[5017]: I0129 07:11:51.282352 5017 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd71b0ca-c843-4109-8ee6-3c03b6b62d2d-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 07:11:51 crc kubenswrapper[5017]: I0129 07:11:51.282364 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9p6fn\" (UniqueName: \"kubernetes.io/projected/cd71b0ca-c843-4109-8ee6-3c03b6b62d2d-kube-api-access-9p6fn\") on node \"crc\" DevicePath \"\"" Jan 29 07:11:51 crc kubenswrapper[5017]: I0129 07:11:51.547685 5017 generic.go:334] "Generic (PLEG): container finished" podID="cd71b0ca-c843-4109-8ee6-3c03b6b62d2d" containerID="871f301952e19c4a5c9556e765941e5eea08067a8fcaa81452edc1dc4b96595c" exitCode=0 Jan 29 07:11:51 crc kubenswrapper[5017]: I0129 07:11:51.547815 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-z2j9w" Jan 29 07:11:51 crc kubenswrapper[5017]: I0129 07:11:51.547797 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z2j9w" event={"ID":"cd71b0ca-c843-4109-8ee6-3c03b6b62d2d","Type":"ContainerDied","Data":"871f301952e19c4a5c9556e765941e5eea08067a8fcaa81452edc1dc4b96595c"} Jan 29 07:11:51 crc kubenswrapper[5017]: I0129 07:11:51.548013 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z2j9w" event={"ID":"cd71b0ca-c843-4109-8ee6-3c03b6b62d2d","Type":"ContainerDied","Data":"d3fa5f26a6f6e24add4154672c7af173a62cf3d81a7f20c76c7b0e0ae5f87aca"} Jan 29 07:11:51 crc kubenswrapper[5017]: I0129 07:11:51.548066 5017 scope.go:117] "RemoveContainer" containerID="871f301952e19c4a5c9556e765941e5eea08067a8fcaa81452edc1dc4b96595c" Jan 29 07:11:51 crc kubenswrapper[5017]: I0129 07:11:51.579953 5017 scope.go:117] "RemoveContainer" containerID="4e4fbb5b8ae02eea72babb8aaa2699e84e8ecee6ef13ba6caf6b0a71ad403ed0" Jan 29 07:11:51 crc kubenswrapper[5017]: I0129 07:11:51.615341 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-z2j9w"] Jan 29 07:11:51 crc kubenswrapper[5017]: I0129 07:11:51.630985 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-z2j9w"] Jan 29 07:11:51 crc kubenswrapper[5017]: I0129 07:11:51.642370 5017 scope.go:117] "RemoveContainer" containerID="28673e876962087e5c9adc838cba7ec420512d606828189d002891f0db8caf30" Jan 29 07:11:51 crc kubenswrapper[5017]: I0129 07:11:51.694905 5017 scope.go:117] "RemoveContainer" containerID="871f301952e19c4a5c9556e765941e5eea08067a8fcaa81452edc1dc4b96595c" Jan 29 07:11:51 crc kubenswrapper[5017]: E0129 07:11:51.695603 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"871f301952e19c4a5c9556e765941e5eea08067a8fcaa81452edc1dc4b96595c\": container with ID starting with 871f301952e19c4a5c9556e765941e5eea08067a8fcaa81452edc1dc4b96595c not found: ID does not exist" containerID="871f301952e19c4a5c9556e765941e5eea08067a8fcaa81452edc1dc4b96595c" Jan 29 07:11:51 crc kubenswrapper[5017]: I0129 07:11:51.695666 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"871f301952e19c4a5c9556e765941e5eea08067a8fcaa81452edc1dc4b96595c"} err="failed to get container status \"871f301952e19c4a5c9556e765941e5eea08067a8fcaa81452edc1dc4b96595c\": rpc error: code = NotFound desc = could not find container \"871f301952e19c4a5c9556e765941e5eea08067a8fcaa81452edc1dc4b96595c\": container with ID starting with 871f301952e19c4a5c9556e765941e5eea08067a8fcaa81452edc1dc4b96595c not found: ID does not exist" Jan 29 07:11:51 crc kubenswrapper[5017]: I0129 07:11:51.695705 5017 scope.go:117] "RemoveContainer" containerID="4e4fbb5b8ae02eea72babb8aaa2699e84e8ecee6ef13ba6caf6b0a71ad403ed0" Jan 29 07:11:51 crc kubenswrapper[5017]: E0129 07:11:51.696325 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e4fbb5b8ae02eea72babb8aaa2699e84e8ecee6ef13ba6caf6b0a71ad403ed0\": container with ID starting with 4e4fbb5b8ae02eea72babb8aaa2699e84e8ecee6ef13ba6caf6b0a71ad403ed0 not found: ID does not exist" containerID="4e4fbb5b8ae02eea72babb8aaa2699e84e8ecee6ef13ba6caf6b0a71ad403ed0" Jan 29 07:11:51 crc kubenswrapper[5017]: I0129 07:11:51.696397 5017 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e4fbb5b8ae02eea72babb8aaa2699e84e8ecee6ef13ba6caf6b0a71ad403ed0"} err="failed to get container status \"4e4fbb5b8ae02eea72babb8aaa2699e84e8ecee6ef13ba6caf6b0a71ad403ed0\": rpc error: code = NotFound desc = could not find container \"4e4fbb5b8ae02eea72babb8aaa2699e84e8ecee6ef13ba6caf6b0a71ad403ed0\": container with ID starting with 4e4fbb5b8ae02eea72babb8aaa2699e84e8ecee6ef13ba6caf6b0a71ad403ed0 not found: ID does not exist" Jan 29 07:11:51 crc kubenswrapper[5017]: I0129 07:11:51.696442 5017 scope.go:117] "RemoveContainer" containerID="28673e876962087e5c9adc838cba7ec420512d606828189d002891f0db8caf30" Jan 29 07:11:51 crc kubenswrapper[5017]: E0129 07:11:51.696889 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28673e876962087e5c9adc838cba7ec420512d606828189d002891f0db8caf30\": container with ID starting with 28673e876962087e5c9adc838cba7ec420512d606828189d002891f0db8caf30 not found: ID does not exist" containerID="28673e876962087e5c9adc838cba7ec420512d606828189d002891f0db8caf30" Jan 29 07:11:51 crc kubenswrapper[5017]: I0129 07:11:51.696931 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28673e876962087e5c9adc838cba7ec420512d606828189d002891f0db8caf30"} err="failed to get container status \"28673e876962087e5c9adc838cba7ec420512d606828189d002891f0db8caf30\": rpc error: code = NotFound desc = could not find container \"28673e876962087e5c9adc838cba7ec420512d606828189d002891f0db8caf30\": container with ID starting with 28673e876962087e5c9adc838cba7ec420512d606828189d002891f0db8caf30 not found: ID does not exist" Jan 29 07:11:52 crc kubenswrapper[5017]: I0129 07:11:52.336808 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a886998f-81da-4505-8b3e-521bc6660530" path="/var/lib/kubelet/pods/a886998f-81da-4505-8b3e-521bc6660530/volumes" Jan 29 07:11:52 crc kubenswrapper[5017]: I0129 07:11:52.338120 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd71b0ca-c843-4109-8ee6-3c03b6b62d2d" path="/var/lib/kubelet/pods/cd71b0ca-c843-4109-8ee6-3c03b6b62d2d/volumes" Jan 29 07:11:56 crc kubenswrapper[5017]: I0129 07:11:56.539644 5017 patch_prober.go:28] interesting pod/machine-config-daemon-895pl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 07:11:56 crc kubenswrapper[5017]: I0129 07:11:56.540485 5017 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 07:12:26 crc kubenswrapper[5017]: I0129 07:12:26.539846 5017 patch_prober.go:28] interesting pod/machine-config-daemon-895pl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 07:12:26 crc kubenswrapper[5017]: I0129 07:12:26.541148 5017 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-895pl" 
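The three RemoveContainer/NotFound exchanges above are harmless: the kubelet retries deletion of containers the runtime has already pruned, and treats NotFound as completion. A minimal sketch of that idempotent-delete pattern, using an in-memory stand-in rather than the real CRI API:

    # FakeRuntime and NotFoundError are illustrative stand-ins, not kubelet types.
    class NotFoundError(Exception):
        pass

    class FakeRuntime:
        def __init__(self, ids):
            self.containers = set(ids)
        def remove(self, cid):
            if cid not in self.containers:
                raise NotFoundError(cid)  # mirrors "code = NotFound ... ID does not exist"
            self.containers.discard(cid)

    def remove_container(rt, cid):
        try:
            rt.remove(cid)
        except NotFoundError:
            pass  # already gone: log and carry on, as the kubelet does above

    rt = FakeRuntime({"871f3019"})
    remove_container(rt, "871f3019")
    remove_container(rt, "871f3019")  # second attempt is a no-op, not a failure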
podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 07:12:26 crc kubenswrapper[5017]: I0129 07:12:26.541458 5017 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-895pl" Jan 29 07:12:26 crc kubenswrapper[5017]: I0129 07:12:26.542691 5017 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b8c652088a514d7a24ecb0d049d2da5653192a5d4bc3111b294c6a18c62e26d7"} pod="openshift-machine-config-operator/machine-config-daemon-895pl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 07:12:26 crc kubenswrapper[5017]: I0129 07:12:26.542804 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" containerID="cri-o://b8c652088a514d7a24ecb0d049d2da5653192a5d4bc3111b294c6a18c62e26d7" gracePeriod=600 Jan 29 07:12:26 crc kubenswrapper[5017]: E0129 07:12:26.679632 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 07:12:26 crc kubenswrapper[5017]: I0129 07:12:26.922317 5017 generic.go:334] "Generic (PLEG): container finished" podID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerID="b8c652088a514d7a24ecb0d049d2da5653192a5d4bc3111b294c6a18c62e26d7" exitCode=0 Jan 29 07:12:26 crc kubenswrapper[5017]: I0129 07:12:26.922382 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-895pl" event={"ID":"2672ef63-7861-4c3d-a1b4-03cc9d18f8e2","Type":"ContainerDied","Data":"b8c652088a514d7a24ecb0d049d2da5653192a5d4bc3111b294c6a18c62e26d7"} Jan 29 07:12:26 crc kubenswrapper[5017]: I0129 07:12:26.922498 5017 scope.go:117] "RemoveContainer" containerID="7a4f393bca20d197057515ffc36d02d4ae48cf3634f3962f9cfd446363d9d028" Jan 29 07:12:26 crc kubenswrapper[5017]: I0129 07:12:26.923736 5017 scope.go:117] "RemoveContainer" containerID="b8c652088a514d7a24ecb0d049d2da5653192a5d4bc3111b294c6a18c62e26d7" Jan 29 07:12:26 crc kubenswrapper[5017]: E0129 07:12:26.924732 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 07:12:42 crc kubenswrapper[5017]: I0129 07:12:42.316664 5017 scope.go:117] "RemoveContainer" containerID="b8c652088a514d7a24ecb0d049d2da5653192a5d4bc3111b294c6a18c62e26d7" Jan 29 07:12:42 crc kubenswrapper[5017]: E0129 07:12:42.318010 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 07:12:54 crc kubenswrapper[5017]: I0129 07:12:54.328592 5017 scope.go:117] "RemoveContainer" containerID="b8c652088a514d7a24ecb0d049d2da5653192a5d4bc3111b294c6a18c62e26d7" Jan 29 07:12:54 crc kubenswrapper[5017]: E0129 07:12:54.329954 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 07:13:07 crc kubenswrapper[5017]: I0129 07:13:07.317498 5017 scope.go:117] "RemoveContainer" containerID="b8c652088a514d7a24ecb0d049d2da5653192a5d4bc3111b294c6a18c62e26d7" Jan 29 07:13:07 crc kubenswrapper[5017]: E0129 07:13:07.318646 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 07:13:19 crc kubenswrapper[5017]: I0129 07:13:19.316395 5017 scope.go:117] "RemoveContainer" containerID="b8c652088a514d7a24ecb0d049d2da5653192a5d4bc3111b294c6a18c62e26d7" Jan 29 07:13:19 crc kubenswrapper[5017]: E0129 07:13:19.317419 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 07:13:32 crc kubenswrapper[5017]: I0129 07:13:32.316712 5017 scope.go:117] "RemoveContainer" containerID="b8c652088a514d7a24ecb0d049d2da5653192a5d4bc3111b294c6a18c62e26d7" Jan 29 07:13:32 crc kubenswrapper[5017]: E0129 07:13:32.318133 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 07:13:45 crc kubenswrapper[5017]: I0129 07:13:45.316006 5017 scope.go:117] "RemoveContainer" containerID="b8c652088a514d7a24ecb0d049d2da5653192a5d4bc3111b294c6a18c62e26d7" Jan 29 07:13:45 crc kubenswrapper[5017]: E0129 07:13:45.317252 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 07:14:00 crc kubenswrapper[5017]: I0129 07:14:00.541201 5017 scope.go:117] "RemoveContainer" containerID="b8c652088a514d7a24ecb0d049d2da5653192a5d4bc3111b294c6a18c62e26d7" Jan 29 07:14:00 crc kubenswrapper[5017]: E0129 07:14:00.548484 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 07:14:15 crc kubenswrapper[5017]: I0129 07:14:15.317620 5017 scope.go:117] "RemoveContainer" containerID="b8c652088a514d7a24ecb0d049d2da5653192a5d4bc3111b294c6a18c62e26d7" Jan 29 07:14:15 crc kubenswrapper[5017]: E0129 07:14:15.318659 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 07:14:28 crc kubenswrapper[5017]: I0129 07:14:28.316146 5017 scope.go:117] "RemoveContainer" containerID="b8c652088a514d7a24ecb0d049d2da5653192a5d4bc3111b294c6a18c62e26d7" Jan 29 07:14:28 crc kubenswrapper[5017]: E0129 07:14:28.317303 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 07:14:40 crc kubenswrapper[5017]: I0129 07:14:40.317037 5017 scope.go:117] "RemoveContainer" containerID="b8c652088a514d7a24ecb0d049d2da5653192a5d4bc3111b294c6a18c62e26d7" Jan 29 07:14:40 crc kubenswrapper[5017]: E0129 07:14:40.319573 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 07:14:55 crc kubenswrapper[5017]: I0129 07:14:55.317629 5017 scope.go:117] "RemoveContainer" containerID="b8c652088a514d7a24ecb0d049d2da5653192a5d4bc3111b294c6a18c62e26d7" Jan 29 07:14:55 crc kubenswrapper[5017]: E0129 07:14:55.318713 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 07:15:00 crc kubenswrapper[5017]: I0129 07:15:00.182780 5017 
Jan 29 07:15:00 crc kubenswrapper[5017]: I0129 07:15:00.182780 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494515-69fd9"]
Jan 29 07:15:00 crc kubenswrapper[5017]: E0129 07:15:00.183805 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a886998f-81da-4505-8b3e-521bc6660530" containerName="extract-content"
Jan 29 07:15:00 crc kubenswrapper[5017]: I0129 07:15:00.183825 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="a886998f-81da-4505-8b3e-521bc6660530" containerName="extract-content"
Jan 29 07:15:00 crc kubenswrapper[5017]: E0129 07:15:00.183842 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd71b0ca-c843-4109-8ee6-3c03b6b62d2d" containerName="extract-utilities"
Jan 29 07:15:00 crc kubenswrapper[5017]: I0129 07:15:00.183854 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd71b0ca-c843-4109-8ee6-3c03b6b62d2d" containerName="extract-utilities"
Jan 29 07:15:00 crc kubenswrapper[5017]: E0129 07:15:00.183875 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd71b0ca-c843-4109-8ee6-3c03b6b62d2d" containerName="registry-server"
Jan 29 07:15:00 crc kubenswrapper[5017]: I0129 07:15:00.183887 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd71b0ca-c843-4109-8ee6-3c03b6b62d2d" containerName="registry-server"
Jan 29 07:15:00 crc kubenswrapper[5017]: E0129 07:15:00.183906 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a886998f-81da-4505-8b3e-521bc6660530" containerName="extract-utilities"
Jan 29 07:15:00 crc kubenswrapper[5017]: I0129 07:15:00.183915 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="a886998f-81da-4505-8b3e-521bc6660530" containerName="extract-utilities"
Jan 29 07:15:00 crc kubenswrapper[5017]: E0129 07:15:00.183934 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd71b0ca-c843-4109-8ee6-3c03b6b62d2d" containerName="extract-content"
Jan 29 07:15:00 crc kubenswrapper[5017]: I0129 07:15:00.183943 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd71b0ca-c843-4109-8ee6-3c03b6b62d2d" containerName="extract-content"
Jan 29 07:15:00 crc kubenswrapper[5017]: E0129 07:15:00.183989 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a886998f-81da-4505-8b3e-521bc6660530" containerName="registry-server"
Jan 29 07:15:00 crc kubenswrapper[5017]: I0129 07:15:00.183999 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="a886998f-81da-4505-8b3e-521bc6660530" containerName="registry-server"
Jan 29 07:15:00 crc kubenswrapper[5017]: I0129 07:15:00.184221 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="a886998f-81da-4505-8b3e-521bc6660530" containerName="registry-server"
Jan 29 07:15:00 crc kubenswrapper[5017]: I0129 07:15:00.184252 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd71b0ca-c843-4109-8ee6-3c03b6b62d2d" containerName="registry-server"
Jan 29 07:15:00 crc kubenswrapper[5017]: I0129 07:15:00.185169 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494515-69fd9"
Jan 29 07:15:00 crc kubenswrapper[5017]: I0129 07:15:00.190231 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 29 07:15:00 crc kubenswrapper[5017]: I0129 07:15:00.191033 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 29 07:15:00 crc kubenswrapper[5017]: I0129 07:15:00.202839 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494515-69fd9"]
Jan 29 07:15:00 crc kubenswrapper[5017]: I0129 07:15:00.264610 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slxch\" (UniqueName: \"kubernetes.io/projected/eb82d2cb-9ac6-4e55-8ea1-d1b2b23d78ad-kube-api-access-slxch\") pod \"collect-profiles-29494515-69fd9\" (UID: \"eb82d2cb-9ac6-4e55-8ea1-d1b2b23d78ad\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494515-69fd9"
Jan 29 07:15:00 crc kubenswrapper[5017]: I0129 07:15:00.264706 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eb82d2cb-9ac6-4e55-8ea1-d1b2b23d78ad-config-volume\") pod \"collect-profiles-29494515-69fd9\" (UID: \"eb82d2cb-9ac6-4e55-8ea1-d1b2b23d78ad\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494515-69fd9"
Jan 29 07:15:00 crc kubenswrapper[5017]: I0129 07:15:00.264753 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/eb82d2cb-9ac6-4e55-8ea1-d1b2b23d78ad-secret-volume\") pod \"collect-profiles-29494515-69fd9\" (UID: \"eb82d2cb-9ac6-4e55-8ea1-d1b2b23d78ad\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494515-69fd9"
Jan 29 07:15:00 crc kubenswrapper[5017]: I0129 07:15:00.365990 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slxch\" (UniqueName: \"kubernetes.io/projected/eb82d2cb-9ac6-4e55-8ea1-d1b2b23d78ad-kube-api-access-slxch\") pod \"collect-profiles-29494515-69fd9\" (UID: \"eb82d2cb-9ac6-4e55-8ea1-d1b2b23d78ad\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494515-69fd9"
Jan 29 07:15:00 crc kubenswrapper[5017]: I0129 07:15:00.366440 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eb82d2cb-9ac6-4e55-8ea1-d1b2b23d78ad-config-volume\") pod \"collect-profiles-29494515-69fd9\" (UID: \"eb82d2cb-9ac6-4e55-8ea1-d1b2b23d78ad\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494515-69fd9"
Jan 29 07:15:00 crc kubenswrapper[5017]: I0129 07:15:00.366517 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/eb82d2cb-9ac6-4e55-8ea1-d1b2b23d78ad-secret-volume\") pod \"collect-profiles-29494515-69fd9\" (UID: \"eb82d2cb-9ac6-4e55-8ea1-d1b2b23d78ad\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494515-69fd9"
Jan 29 07:15:00 crc kubenswrapper[5017]: I0129 07:15:00.368461 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eb82d2cb-9ac6-4e55-8ea1-d1b2b23d78ad-config-volume\") pod \"collect-profiles-29494515-69fd9\" (UID: \"eb82d2cb-9ac6-4e55-8ea1-d1b2b23d78ad\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494515-69fd9"
Jan 29 07:15:00 crc kubenswrapper[5017]: I0129 07:15:00.376390 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/eb82d2cb-9ac6-4e55-8ea1-d1b2b23d78ad-secret-volume\") pod \"collect-profiles-29494515-69fd9\" (UID: \"eb82d2cb-9ac6-4e55-8ea1-d1b2b23d78ad\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494515-69fd9"
Jan 29 07:15:00 crc kubenswrapper[5017]: I0129 07:15:00.385332 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slxch\" (UniqueName: \"kubernetes.io/projected/eb82d2cb-9ac6-4e55-8ea1-d1b2b23d78ad-kube-api-access-slxch\") pod \"collect-profiles-29494515-69fd9\" (UID: \"eb82d2cb-9ac6-4e55-8ea1-d1b2b23d78ad\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494515-69fd9"
Jan 29 07:15:00 crc kubenswrapper[5017]: I0129 07:15:00.514307 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494515-69fd9"
Jan 29 07:15:01 crc kubenswrapper[5017]: I0129 07:15:01.262883 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494515-69fd9"]
Jan 29 07:15:01 crc kubenswrapper[5017]: I0129 07:15:01.386748 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494515-69fd9" event={"ID":"eb82d2cb-9ac6-4e55-8ea1-d1b2b23d78ad","Type":"ContainerStarted","Data":"c9c6745037c7ecb5eda9520356c78e3c14c50d254b71bc9f8169764e7f5f84c1"}
Jan 29 07:15:02 crc kubenswrapper[5017]: I0129 07:15:02.399580 5017 generic.go:334] "Generic (PLEG): container finished" podID="eb82d2cb-9ac6-4e55-8ea1-d1b2b23d78ad" containerID="4f41fba4d8083eefcd90bd30b5b67c549a19f0364648d70c3316c65291021b72" exitCode=0
Jan 29 07:15:02 crc kubenswrapper[5017]: I0129 07:15:02.400199 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494515-69fd9" event={"ID":"eb82d2cb-9ac6-4e55-8ea1-d1b2b23d78ad","Type":"ContainerDied","Data":"4f41fba4d8083eefcd90bd30b5b67c549a19f0364648d70c3316c65291021b72"}
Jan 29 07:15:03 crc kubenswrapper[5017]: I0129 07:15:03.761834 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494515-69fd9"
Jan 29 07:15:03 crc kubenswrapper[5017]: I0129 07:15:03.826794 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eb82d2cb-9ac6-4e55-8ea1-d1b2b23d78ad-config-volume\") pod \"eb82d2cb-9ac6-4e55-8ea1-d1b2b23d78ad\" (UID: \"eb82d2cb-9ac6-4e55-8ea1-d1b2b23d78ad\") "
Jan 29 07:15:03 crc kubenswrapper[5017]: I0129 07:15:03.826902 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-slxch\" (UniqueName: \"kubernetes.io/projected/eb82d2cb-9ac6-4e55-8ea1-d1b2b23d78ad-kube-api-access-slxch\") pod \"eb82d2cb-9ac6-4e55-8ea1-d1b2b23d78ad\" (UID: \"eb82d2cb-9ac6-4e55-8ea1-d1b2b23d78ad\") "
Jan 29 07:15:03 crc kubenswrapper[5017]: I0129 07:15:03.827052 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/eb82d2cb-9ac6-4e55-8ea1-d1b2b23d78ad-secret-volume\") pod \"eb82d2cb-9ac6-4e55-8ea1-d1b2b23d78ad\" (UID: \"eb82d2cb-9ac6-4e55-8ea1-d1b2b23d78ad\") "
Jan 29 07:15:03 crc kubenswrapper[5017]: I0129 07:15:03.828403 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb82d2cb-9ac6-4e55-8ea1-d1b2b23d78ad-config-volume" (OuterVolumeSpecName: "config-volume") pod "eb82d2cb-9ac6-4e55-8ea1-d1b2b23d78ad" (UID: "eb82d2cb-9ac6-4e55-8ea1-d1b2b23d78ad"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 07:15:03 crc kubenswrapper[5017]: I0129 07:15:03.847374 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb82d2cb-9ac6-4e55-8ea1-d1b2b23d78ad-kube-api-access-slxch" (OuterVolumeSpecName: "kube-api-access-slxch") pod "eb82d2cb-9ac6-4e55-8ea1-d1b2b23d78ad" (UID: "eb82d2cb-9ac6-4e55-8ea1-d1b2b23d78ad"). InnerVolumeSpecName "kube-api-access-slxch". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 07:15:03 crc kubenswrapper[5017]: I0129 07:15:03.847420 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb82d2cb-9ac6-4e55-8ea1-d1b2b23d78ad-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "eb82d2cb-9ac6-4e55-8ea1-d1b2b23d78ad" (UID: "eb82d2cb-9ac6-4e55-8ea1-d1b2b23d78ad"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 07:15:03 crc kubenswrapper[5017]: I0129 07:15:03.930067 5017 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/eb82d2cb-9ac6-4e55-8ea1-d1b2b23d78ad-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 29 07:15:03 crc kubenswrapper[5017]: I0129 07:15:03.930125 5017 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eb82d2cb-9ac6-4e55-8ea1-d1b2b23d78ad-config-volume\") on node \"crc\" DevicePath \"\""
Jan 29 07:15:03 crc kubenswrapper[5017]: I0129 07:15:03.930217 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-slxch\" (UniqueName: \"kubernetes.io/projected/eb82d2cb-9ac6-4e55-8ea1-d1b2b23d78ad-kube-api-access-slxch\") on node \"crc\" DevicePath \"\""
Jan 29 07:15:04 crc kubenswrapper[5017]: I0129 07:15:04.420194 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494515-69fd9" event={"ID":"eb82d2cb-9ac6-4e55-8ea1-d1b2b23d78ad","Type":"ContainerDied","Data":"c9c6745037c7ecb5eda9520356c78e3c14c50d254b71bc9f8169764e7f5f84c1"}
Jan 29 07:15:04 crc kubenswrapper[5017]: I0129 07:15:04.420243 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c9c6745037c7ecb5eda9520356c78e3c14c50d254b71bc9f8169764e7f5f84c1"
Jan 29 07:15:04 crc kubenswrapper[5017]: I0129 07:15:04.420330 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494515-69fd9"
Jan 29 07:15:04 crc kubenswrapper[5017]: I0129 07:15:04.858828 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494470-2sw2z"]
Jan 29 07:15:04 crc kubenswrapper[5017]: I0129 07:15:04.865202 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494470-2sw2z"]
Jan 29 07:15:06 crc kubenswrapper[5017]: I0129 07:15:06.331527 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fbf9f0a-9025-4eca-b0d2-00f87df0c16e" path="/var/lib/kubelet/pods/7fbf9f0a-9025-4eca-b0d2-00f87df0c16e/volumes"
pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 07:15:21 crc kubenswrapper[5017]: I0129 07:15:21.862444 5017 scope.go:117] "RemoveContainer" containerID="222f60d0ab5105c4428d853d7b9c05abd56bf70465004e58e5a328e648702c30" Jan 29 07:15:31 crc kubenswrapper[5017]: I0129 07:15:31.316358 5017 scope.go:117] "RemoveContainer" containerID="b8c652088a514d7a24ecb0d049d2da5653192a5d4bc3111b294c6a18c62e26d7" Jan 29 07:15:31 crc kubenswrapper[5017]: E0129 07:15:31.317296 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 07:15:43 crc kubenswrapper[5017]: I0129 07:15:43.316591 5017 scope.go:117] "RemoveContainer" containerID="b8c652088a514d7a24ecb0d049d2da5653192a5d4bc3111b294c6a18c62e26d7" Jan 29 07:15:43 crc kubenswrapper[5017]: E0129 07:15:43.317697 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 07:15:54 crc kubenswrapper[5017]: I0129 07:15:54.320895 5017 scope.go:117] "RemoveContainer" containerID="b8c652088a514d7a24ecb0d049d2da5653192a5d4bc3111b294c6a18c62e26d7" Jan 29 07:15:54 crc kubenswrapper[5017]: E0129 07:15:54.321598 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 07:16:07 crc kubenswrapper[5017]: I0129 07:16:07.316718 5017 scope.go:117] "RemoveContainer" containerID="b8c652088a514d7a24ecb0d049d2da5653192a5d4bc3111b294c6a18c62e26d7" Jan 29 07:16:07 crc kubenswrapper[5017]: E0129 07:16:07.318032 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 07:16:20 crc kubenswrapper[5017]: I0129 07:16:20.316645 5017 scope.go:117] "RemoveContainer" containerID="b8c652088a514d7a24ecb0d049d2da5653192a5d4bc3111b294c6a18c62e26d7" Jan 29 07:16:20 crc kubenswrapper[5017]: E0129 07:16:20.317749 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 07:16:32 crc kubenswrapper[5017]: I0129 07:16:32.317357 5017 scope.go:117] "RemoveContainer" containerID="b8c652088a514d7a24ecb0d049d2da5653192a5d4bc3111b294c6a18c62e26d7" Jan 29 07:16:32 crc kubenswrapper[5017]: E0129 07:16:32.318612 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 07:16:46 crc kubenswrapper[5017]: I0129 07:16:46.316242 5017 scope.go:117] "RemoveContainer" containerID="b8c652088a514d7a24ecb0d049d2da5653192a5d4bc3111b294c6a18c62e26d7" Jan 29 07:16:46 crc kubenswrapper[5017]: E0129 07:16:46.317704 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 07:16:59 crc kubenswrapper[5017]: I0129 07:16:59.318197 5017 scope.go:117] "RemoveContainer" containerID="b8c652088a514d7a24ecb0d049d2da5653192a5d4bc3111b294c6a18c62e26d7" Jan 29 07:16:59 crc kubenswrapper[5017]: E0129 07:16:59.319522 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 07:17:13 crc kubenswrapper[5017]: I0129 07:17:13.317238 5017 scope.go:117] "RemoveContainer" containerID="b8c652088a514d7a24ecb0d049d2da5653192a5d4bc3111b294c6a18c62e26d7" Jan 29 07:17:13 crc kubenswrapper[5017]: E0129 07:17:13.318772 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 07:17:25 crc kubenswrapper[5017]: I0129 07:17:25.317135 5017 scope.go:117] "RemoveContainer" containerID="b8c652088a514d7a24ecb0d049d2da5653192a5d4bc3111b294c6a18c62e26d7" Jan 29 07:17:25 crc kubenswrapper[5017]: E0129 07:17:25.318219 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 07:17:37 crc kubenswrapper[5017]: I0129 07:17:37.316552 5017 
scope.go:117] "RemoveContainer" containerID="b8c652088a514d7a24ecb0d049d2da5653192a5d4bc3111b294c6a18c62e26d7" Jan 29 07:17:37 crc kubenswrapper[5017]: I0129 07:17:37.852056 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-895pl" event={"ID":"2672ef63-7861-4c3d-a1b4-03cc9d18f8e2","Type":"ContainerStarted","Data":"e98e8e0874f864f0f71914aafdf553d2c7191a20f1bb371e58e6ce39d88cd9c5"} Jan 29 07:19:56 crc kubenswrapper[5017]: I0129 07:19:56.539528 5017 patch_prober.go:28] interesting pod/machine-config-daemon-895pl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 07:19:56 crc kubenswrapper[5017]: I0129 07:19:56.540243 5017 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 07:20:26 crc kubenswrapper[5017]: I0129 07:20:26.539684 5017 patch_prober.go:28] interesting pod/machine-config-daemon-895pl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 07:20:26 crc kubenswrapper[5017]: I0129 07:20:26.540783 5017 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 07:20:56 crc kubenswrapper[5017]: I0129 07:20:56.539377 5017 patch_prober.go:28] interesting pod/machine-config-daemon-895pl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 07:20:56 crc kubenswrapper[5017]: I0129 07:20:56.540288 5017 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 07:20:56 crc kubenswrapper[5017]: I0129 07:20:56.540370 5017 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-895pl" Jan 29 07:20:56 crc kubenswrapper[5017]: I0129 07:20:56.541599 5017 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e98e8e0874f864f0f71914aafdf553d2c7191a20f1bb371e58e6ce39d88cd9c5"} pod="openshift-machine-config-operator/machine-config-daemon-895pl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 07:20:56 crc kubenswrapper[5017]: I0129 07:20:56.541747 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-895pl" 
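The three failures above (07:19:56, 07:20:26, 07:20:56) are exactly 30s apart and the container is condemned after the third, which is consistent with periodSeconds=30 and failureThreshold=3; both values are inferred from the spacing, since the pod spec itself is not in the log. The arithmetic:

    from datetime import datetime

    fails = ["07:19:56", "07:20:26", "07:20:56"]  # "Probe failed" timestamps from above
    stamps = [datetime.strptime(t, "%H:%M:%S") for t in fails]
    print([(b - a).seconds for a, b in zip(stamps, stamps[1:])])  # [30, 30] -> periodSeconds=30
    print(len(fails))  # 3 consecutive failures, then "will be restarted"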
podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" containerID="cri-o://e98e8e0874f864f0f71914aafdf553d2c7191a20f1bb371e58e6ce39d88cd9c5" gracePeriod=600 Jan 29 07:20:56 crc kubenswrapper[5017]: I0129 07:20:56.738597 5017 generic.go:334] "Generic (PLEG): container finished" podID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerID="e98e8e0874f864f0f71914aafdf553d2c7191a20f1bb371e58e6ce39d88cd9c5" exitCode=0 Jan 29 07:20:56 crc kubenswrapper[5017]: I0129 07:20:56.738702 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-895pl" event={"ID":"2672ef63-7861-4c3d-a1b4-03cc9d18f8e2","Type":"ContainerDied","Data":"e98e8e0874f864f0f71914aafdf553d2c7191a20f1bb371e58e6ce39d88cd9c5"} Jan 29 07:20:56 crc kubenswrapper[5017]: I0129 07:20:56.738870 5017 scope.go:117] "RemoveContainer" containerID="b8c652088a514d7a24ecb0d049d2da5653192a5d4bc3111b294c6a18c62e26d7" Jan 29 07:20:57 crc kubenswrapper[5017]: I0129 07:20:57.748080 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-895pl" event={"ID":"2672ef63-7861-4c3d-a1b4-03cc9d18f8e2","Type":"ContainerStarted","Data":"eda95f7e260185a6ade0c5035863826083f02ef304688c7ab249e9487bc47a27"} Jan 29 07:21:24 crc kubenswrapper[5017]: I0129 07:21:24.368442 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-cz4q9"] Jan 29 07:21:24 crc kubenswrapper[5017]: E0129 07:21:24.369350 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb82d2cb-9ac6-4e55-8ea1-d1b2b23d78ad" containerName="collect-profiles" Jan 29 07:21:24 crc kubenswrapper[5017]: I0129 07:21:24.369364 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb82d2cb-9ac6-4e55-8ea1-d1b2b23d78ad" containerName="collect-profiles" Jan 29 07:21:24 crc kubenswrapper[5017]: I0129 07:21:24.369513 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb82d2cb-9ac6-4e55-8ea1-d1b2b23d78ad" containerName="collect-profiles" Jan 29 07:21:24 crc kubenswrapper[5017]: I0129 07:21:24.370544 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cz4q9" Jan 29 07:21:24 crc kubenswrapper[5017]: I0129 07:21:24.377455 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cz4q9"] Jan 29 07:21:24 crc kubenswrapper[5017]: I0129 07:21:24.481416 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c623f2ba-11f1-44e5-956e-d1c4e2693795-catalog-content\") pod \"redhat-marketplace-cz4q9\" (UID: \"c623f2ba-11f1-44e5-956e-d1c4e2693795\") " pod="openshift-marketplace/redhat-marketplace-cz4q9" Jan 29 07:21:24 crc kubenswrapper[5017]: I0129 07:21:24.481877 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mns7\" (UniqueName: \"kubernetes.io/projected/c623f2ba-11f1-44e5-956e-d1c4e2693795-kube-api-access-4mns7\") pod \"redhat-marketplace-cz4q9\" (UID: \"c623f2ba-11f1-44e5-956e-d1c4e2693795\") " pod="openshift-marketplace/redhat-marketplace-cz4q9" Jan 29 07:21:24 crc kubenswrapper[5017]: I0129 07:21:24.481941 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c623f2ba-11f1-44e5-956e-d1c4e2693795-utilities\") pod \"redhat-marketplace-cz4q9\" (UID: \"c623f2ba-11f1-44e5-956e-d1c4e2693795\") " pod="openshift-marketplace/redhat-marketplace-cz4q9" Jan 29 07:21:24 crc kubenswrapper[5017]: I0129 07:21:24.583290 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c623f2ba-11f1-44e5-956e-d1c4e2693795-catalog-content\") pod \"redhat-marketplace-cz4q9\" (UID: \"c623f2ba-11f1-44e5-956e-d1c4e2693795\") " pod="openshift-marketplace/redhat-marketplace-cz4q9" Jan 29 07:21:24 crc kubenswrapper[5017]: I0129 07:21:24.583393 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mns7\" (UniqueName: \"kubernetes.io/projected/c623f2ba-11f1-44e5-956e-d1c4e2693795-kube-api-access-4mns7\") pod \"redhat-marketplace-cz4q9\" (UID: \"c623f2ba-11f1-44e5-956e-d1c4e2693795\") " pod="openshift-marketplace/redhat-marketplace-cz4q9" Jan 29 07:21:24 crc kubenswrapper[5017]: I0129 07:21:24.583832 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c623f2ba-11f1-44e5-956e-d1c4e2693795-utilities\") pod \"redhat-marketplace-cz4q9\" (UID: \"c623f2ba-11f1-44e5-956e-d1c4e2693795\") " pod="openshift-marketplace/redhat-marketplace-cz4q9" Jan 29 07:21:24 crc kubenswrapper[5017]: I0129 07:21:24.584091 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c623f2ba-11f1-44e5-956e-d1c4e2693795-catalog-content\") pod \"redhat-marketplace-cz4q9\" (UID: \"c623f2ba-11f1-44e5-956e-d1c4e2693795\") " pod="openshift-marketplace/redhat-marketplace-cz4q9" Jan 29 07:21:24 crc kubenswrapper[5017]: I0129 07:21:24.584306 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c623f2ba-11f1-44e5-956e-d1c4e2693795-utilities\") pod \"redhat-marketplace-cz4q9\" (UID: \"c623f2ba-11f1-44e5-956e-d1c4e2693795\") " pod="openshift-marketplace/redhat-marketplace-cz4q9" Jan 29 07:21:24 crc kubenswrapper[5017]: I0129 07:21:24.604874 5017 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-4mns7\" (UniqueName: \"kubernetes.io/projected/c623f2ba-11f1-44e5-956e-d1c4e2693795-kube-api-access-4mns7\") pod \"redhat-marketplace-cz4q9\" (UID: \"c623f2ba-11f1-44e5-956e-d1c4e2693795\") " pod="openshift-marketplace/redhat-marketplace-cz4q9" Jan 29 07:21:24 crc kubenswrapper[5017]: I0129 07:21:24.691259 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cz4q9" Jan 29 07:21:25 crc kubenswrapper[5017]: I0129 07:21:25.236415 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cz4q9"] Jan 29 07:21:25 crc kubenswrapper[5017]: W0129 07:21:25.244890 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc623f2ba_11f1_44e5_956e_d1c4e2693795.slice/crio-34933ef16ef202b93b4d045bb9f04655c7fa390c1dddea2cc8d03be7a7763cfa WatchSource:0}: Error finding container 34933ef16ef202b93b4d045bb9f04655c7fa390c1dddea2cc8d03be7a7763cfa: Status 404 returned error can't find the container with id 34933ef16ef202b93b4d045bb9f04655c7fa390c1dddea2cc8d03be7a7763cfa Jan 29 07:21:25 crc kubenswrapper[5017]: I0129 07:21:25.991832 5017 generic.go:334] "Generic (PLEG): container finished" podID="c623f2ba-11f1-44e5-956e-d1c4e2693795" containerID="4301b625c90e2f0235b4663320172ce881a40d8c5ad2eee149d17a75b409d59f" exitCode=0 Jan 29 07:21:25 crc kubenswrapper[5017]: I0129 07:21:25.991912 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cz4q9" event={"ID":"c623f2ba-11f1-44e5-956e-d1c4e2693795","Type":"ContainerDied","Data":"4301b625c90e2f0235b4663320172ce881a40d8c5ad2eee149d17a75b409d59f"} Jan 29 07:21:25 crc kubenswrapper[5017]: I0129 07:21:25.992467 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cz4q9" event={"ID":"c623f2ba-11f1-44e5-956e-d1c4e2693795","Type":"ContainerStarted","Data":"34933ef16ef202b93b4d045bb9f04655c7fa390c1dddea2cc8d03be7a7763cfa"} Jan 29 07:21:25 crc kubenswrapper[5017]: I0129 07:21:25.998231 5017 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 07:21:27 crc kubenswrapper[5017]: I0129 07:21:27.003795 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cz4q9" event={"ID":"c623f2ba-11f1-44e5-956e-d1c4e2693795","Type":"ContainerStarted","Data":"42bb9fc79b5129c5187049cf5fb89a71b6b4c3f560cb03e1af1345f34c96d45c"} Jan 29 07:21:28 crc kubenswrapper[5017]: I0129 07:21:28.015300 5017 generic.go:334] "Generic (PLEG): container finished" podID="c623f2ba-11f1-44e5-956e-d1c4e2693795" containerID="42bb9fc79b5129c5187049cf5fb89a71b6b4c3f560cb03e1af1345f34c96d45c" exitCode=0 Jan 29 07:21:28 crc kubenswrapper[5017]: I0129 07:21:28.015387 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cz4q9" event={"ID":"c623f2ba-11f1-44e5-956e-d1c4e2693795","Type":"ContainerDied","Data":"42bb9fc79b5129c5187049cf5fb89a71b6b4c3f560cb03e1af1345f34c96d45c"} Jan 29 07:21:28 crc kubenswrapper[5017]: I0129 07:21:28.015442 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cz4q9" event={"ID":"c623f2ba-11f1-44e5-956e-d1c4e2693795","Type":"ContainerStarted","Data":"94815d2b5827c7d805952c16037c9da9334f595730df1e67b9df1e4126b6860d"} Jan 29 07:21:28 crc kubenswrapper[5017]: I0129 07:21:28.053322 5017 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-cz4q9" podStartSLOduration=2.22706697 podStartE2EDuration="4.053292977s" podCreationTimestamp="2026-01-29 07:21:24 +0000 UTC" firstStartedPulling="2026-01-29 07:21:25.9969204 +0000 UTC m=+2772.371368050" lastFinishedPulling="2026-01-29 07:21:27.823146447 +0000 UTC m=+2774.197594057" observedRunningTime="2026-01-29 07:21:28.045131422 +0000 UTC m=+2774.419579042" watchObservedRunningTime="2026-01-29 07:21:28.053292977 +0000 UTC m=+2774.427740627" Jan 29 07:21:28 crc kubenswrapper[5017]: I0129 07:21:28.732174 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2sd7r"] Jan 29 07:21:28 crc kubenswrapper[5017]: I0129 07:21:28.734057 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2sd7r" Jan 29 07:21:28 crc kubenswrapper[5017]: I0129 07:21:28.768858 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2sd7r"] Jan 29 07:21:28 crc kubenswrapper[5017]: I0129 07:21:28.850704 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5ccc\" (UniqueName: \"kubernetes.io/projected/8a543265-a4dc-4bbd-96bc-3d6f0dc52318-kube-api-access-c5ccc\") pod \"redhat-operators-2sd7r\" (UID: \"8a543265-a4dc-4bbd-96bc-3d6f0dc52318\") " pod="openshift-marketplace/redhat-operators-2sd7r" Jan 29 07:21:28 crc kubenswrapper[5017]: I0129 07:21:28.850805 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a543265-a4dc-4bbd-96bc-3d6f0dc52318-catalog-content\") pod \"redhat-operators-2sd7r\" (UID: \"8a543265-a4dc-4bbd-96bc-3d6f0dc52318\") " pod="openshift-marketplace/redhat-operators-2sd7r" Jan 29 07:21:28 crc kubenswrapper[5017]: I0129 07:21:28.850893 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a543265-a4dc-4bbd-96bc-3d6f0dc52318-utilities\") pod \"redhat-operators-2sd7r\" (UID: \"8a543265-a4dc-4bbd-96bc-3d6f0dc52318\") " pod="openshift-marketplace/redhat-operators-2sd7r" Jan 29 07:21:28 crc kubenswrapper[5017]: I0129 07:21:28.952724 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5ccc\" (UniqueName: \"kubernetes.io/projected/8a543265-a4dc-4bbd-96bc-3d6f0dc52318-kube-api-access-c5ccc\") pod \"redhat-operators-2sd7r\" (UID: \"8a543265-a4dc-4bbd-96bc-3d6f0dc52318\") " pod="openshift-marketplace/redhat-operators-2sd7r" Jan 29 07:21:28 crc kubenswrapper[5017]: I0129 07:21:28.952814 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a543265-a4dc-4bbd-96bc-3d6f0dc52318-catalog-content\") pod \"redhat-operators-2sd7r\" (UID: \"8a543265-a4dc-4bbd-96bc-3d6f0dc52318\") " pod="openshift-marketplace/redhat-operators-2sd7r" Jan 29 07:21:28 crc kubenswrapper[5017]: I0129 07:21:28.952875 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a543265-a4dc-4bbd-96bc-3d6f0dc52318-utilities\") pod \"redhat-operators-2sd7r\" (UID: \"8a543265-a4dc-4bbd-96bc-3d6f0dc52318\") " pod="openshift-marketplace/redhat-operators-2sd7r" Jan 29 07:21:28 crc kubenswrapper[5017]: I0129 
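The startup-latency numbers just logged for redhat-marketplace-cz4q9 are internally consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration additionally excludes the ~1.83s image-pull window. Recomputing from the logged values (timestamps truncated to microseconds, so the results are approximate):

    from datetime import datetime

    f = "%Y-%m-%d %H:%M:%S.%f"
    created = datetime.strptime("2026-01-29 07:21:24.000000", f)     # podCreationTimestamp
    running = datetime.strptime("2026-01-29 07:21:28.053292", f)     # watchObservedRunningTime
    pull_start = datetime.strptime("2026-01-29 07:21:25.996920", f)  # firstStartedPulling
    pull_end = datetime.strptime("2026-01-29 07:21:27.823146", f)    # lastFinishedPulling

    e2e = (running - created).total_seconds()
    slo = e2e - (pull_end - pull_start).total_seconds()
    print(e2e)  # ~4.053 -> matches podStartE2EDuration="4.053292977s"
    print(slo)  # ~2.227 -> matches podStartSLOduration=2.22706697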
Jan 29 07:21:28 crc kubenswrapper[5017]: I0129 07:21:28.732174 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2sd7r"]
Jan 29 07:21:28 crc kubenswrapper[5017]: I0129 07:21:28.734057 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2sd7r"
Jan 29 07:21:28 crc kubenswrapper[5017]: I0129 07:21:28.768858 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2sd7r"]
Jan 29 07:21:28 crc kubenswrapper[5017]: I0129 07:21:28.850704 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5ccc\" (UniqueName: \"kubernetes.io/projected/8a543265-a4dc-4bbd-96bc-3d6f0dc52318-kube-api-access-c5ccc\") pod \"redhat-operators-2sd7r\" (UID: \"8a543265-a4dc-4bbd-96bc-3d6f0dc52318\") " pod="openshift-marketplace/redhat-operators-2sd7r"
Jan 29 07:21:28 crc kubenswrapper[5017]: I0129 07:21:28.850805 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a543265-a4dc-4bbd-96bc-3d6f0dc52318-catalog-content\") pod \"redhat-operators-2sd7r\" (UID: \"8a543265-a4dc-4bbd-96bc-3d6f0dc52318\") " pod="openshift-marketplace/redhat-operators-2sd7r"
Jan 29 07:21:28 crc kubenswrapper[5017]: I0129 07:21:28.850893 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a543265-a4dc-4bbd-96bc-3d6f0dc52318-utilities\") pod \"redhat-operators-2sd7r\" (UID: \"8a543265-a4dc-4bbd-96bc-3d6f0dc52318\") " pod="openshift-marketplace/redhat-operators-2sd7r"
Jan 29 07:21:28 crc kubenswrapper[5017]: I0129 07:21:28.952724 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5ccc\" (UniqueName: \"kubernetes.io/projected/8a543265-a4dc-4bbd-96bc-3d6f0dc52318-kube-api-access-c5ccc\") pod \"redhat-operators-2sd7r\" (UID: \"8a543265-a4dc-4bbd-96bc-3d6f0dc52318\") " pod="openshift-marketplace/redhat-operators-2sd7r"
Jan 29 07:21:28 crc kubenswrapper[5017]: I0129 07:21:28.952814 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a543265-a4dc-4bbd-96bc-3d6f0dc52318-catalog-content\") pod \"redhat-operators-2sd7r\" (UID: \"8a543265-a4dc-4bbd-96bc-3d6f0dc52318\") " pod="openshift-marketplace/redhat-operators-2sd7r"
Jan 29 07:21:28 crc kubenswrapper[5017]: I0129 07:21:28.952875 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a543265-a4dc-4bbd-96bc-3d6f0dc52318-utilities\") pod \"redhat-operators-2sd7r\" (UID: \"8a543265-a4dc-4bbd-96bc-3d6f0dc52318\") " pod="openshift-marketplace/redhat-operators-2sd7r"
Jan 29 07:21:28 crc kubenswrapper[5017]: I0129 07:21:28.953433 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a543265-a4dc-4bbd-96bc-3d6f0dc52318-utilities\") pod \"redhat-operators-2sd7r\" (UID: \"8a543265-a4dc-4bbd-96bc-3d6f0dc52318\") " pod="openshift-marketplace/redhat-operators-2sd7r"
Jan 29 07:21:28 crc kubenswrapper[5017]: I0129 07:21:28.953646 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a543265-a4dc-4bbd-96bc-3d6f0dc52318-catalog-content\") pod \"redhat-operators-2sd7r\" (UID: \"8a543265-a4dc-4bbd-96bc-3d6f0dc52318\") " pod="openshift-marketplace/redhat-operators-2sd7r"
Jan 29 07:21:28 crc kubenswrapper[5017]: I0129 07:21:28.985720 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5ccc\" (UniqueName: \"kubernetes.io/projected/8a543265-a4dc-4bbd-96bc-3d6f0dc52318-kube-api-access-c5ccc\") pod \"redhat-operators-2sd7r\" (UID: \"8a543265-a4dc-4bbd-96bc-3d6f0dc52318\") " pod="openshift-marketplace/redhat-operators-2sd7r"
Jan 29 07:21:29 crc kubenswrapper[5017]: I0129 07:21:29.056299 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2sd7r"
Jan 29 07:21:29 crc kubenswrapper[5017]: I0129 07:21:29.502463 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2sd7r"]
Jan 29 07:21:29 crc kubenswrapper[5017]: W0129 07:21:29.506191 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8a543265_a4dc_4bbd_96bc_3d6f0dc52318.slice/crio-e40ef419ebae63c77a4a249409c087fb8a5aed83800825d72e89b175e62ad060 WatchSource:0}: Error finding container e40ef419ebae63c77a4a249409c087fb8a5aed83800825d72e89b175e62ad060: Status 404 returned error can't find the container with id e40ef419ebae63c77a4a249409c087fb8a5aed83800825d72e89b175e62ad060
Jan 29 07:21:30 crc kubenswrapper[5017]: I0129 07:21:30.040756 5017 generic.go:334] "Generic (PLEG): container finished" podID="8a543265-a4dc-4bbd-96bc-3d6f0dc52318" containerID="32b7cdb9396b86ee5184b9decc682c2c42af1f282c397157a69544ed100fb65e" exitCode=0
Jan 29 07:21:30 crc kubenswrapper[5017]: I0129 07:21:30.041155 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2sd7r" event={"ID":"8a543265-a4dc-4bbd-96bc-3d6f0dc52318","Type":"ContainerDied","Data":"32b7cdb9396b86ee5184b9decc682c2c42af1f282c397157a69544ed100fb65e"}
Jan 29 07:21:30 crc kubenswrapper[5017]: I0129 07:21:30.041189 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2sd7r" event={"ID":"8a543265-a4dc-4bbd-96bc-3d6f0dc52318","Type":"ContainerStarted","Data":"e40ef419ebae63c77a4a249409c087fb8a5aed83800825d72e89b175e62ad060"}
Jan 29 07:21:31 crc kubenswrapper[5017]: I0129 07:21:31.050721 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2sd7r" event={"ID":"8a543265-a4dc-4bbd-96bc-3d6f0dc52318","Type":"ContainerStarted","Data":"788af00eed28a5a46eed363a3eac1c7775258293a460144432a33187d2fa438b"}
Jan 29 07:21:32 crc kubenswrapper[5017]: I0129 07:21:32.060838 5017 generic.go:334] "Generic (PLEG): container finished" podID="8a543265-a4dc-4bbd-96bc-3d6f0dc52318" containerID="788af00eed28a5a46eed363a3eac1c7775258293a460144432a33187d2fa438b" exitCode=0
Jan 29 07:21:32 crc kubenswrapper[5017]: I0129 07:21:32.060905 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2sd7r" event={"ID":"8a543265-a4dc-4bbd-96bc-3d6f0dc52318","Type":"ContainerDied","Data":"788af00eed28a5a46eed363a3eac1c7775258293a460144432a33187d2fa438b"}
Jan 29 07:21:33 crc kubenswrapper[5017]: I0129 07:21:33.069878 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2sd7r" event={"ID":"8a543265-a4dc-4bbd-96bc-3d6f0dc52318","Type":"ContainerStarted","Data":"d62babdc9f8a74faba51db90bde6f077ee7256be7853479ea372a6a69003197d"}
Jan 29 07:21:33 crc kubenswrapper[5017]: I0129 07:21:33.096366 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2sd7r" podStartSLOduration=2.573909579 podStartE2EDuration="5.096338073s" podCreationTimestamp="2026-01-29 07:21:28 +0000 UTC" firstStartedPulling="2026-01-29 07:21:30.043210071 +0000 UTC m=+2776.417657721" lastFinishedPulling="2026-01-29 07:21:32.565638605 +0000 UTC m=+2778.940086215" observedRunningTime="2026-01-29 07:21:33.089028398 +0000 UTC m=+2779.463476008" watchObservedRunningTime="2026-01-29 07:21:33.096338073 +0000 UTC m=+2779.470785673"
Jan 29 07:21:34 crc kubenswrapper[5017]: I0129 07:21:34.691888 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-cz4q9"
Jan 29 07:21:34 crc kubenswrapper[5017]: I0129 07:21:34.692019 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-cz4q9"
Jan 29 07:21:34 crc kubenswrapper[5017]: I0129 07:21:34.763715 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-cz4q9"
Jan 29 07:21:35 crc kubenswrapper[5017]: I0129 07:21:35.152151 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-cz4q9"
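The probe transitions above show the cz4q9 registry-server going startup=unhealthy, then startup=started, then readiness=ready within about half a second of log time (07:21:34.69 to 07:21:35.15). A small extractor for those "SyncLoop (probe)" lines from a saved journal excerpt (the file path is an assumption):

    import re

    pat = re.compile(r'I\d{4} (\d\d:\d\d:\d\d\.\d+) \d+ kubelet\.go:2542\] '
                     r'"SyncLoop \(probe\)" probe="(\w+)" status="(\w*)" pod="([^"]+)"')
    with open("kubelet.log") as f:
        for line in f:
            m = pat.search(line)
            if m and m.group(4).endswith("redhat-marketplace-cz4q9"):
                print(m.group(1), m.group(2), m.group(3) or '""')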
Need to start a new one" pod="openshift-marketplace/community-operators-wmgs6" Jan 29 07:21:36 crc kubenswrapper[5017]: I0129 07:21:36.769078 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wmgs6"] Jan 29 07:21:36 crc kubenswrapper[5017]: I0129 07:21:36.886752 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd07cfb9-ce1d-454c-899a-ee264c885160-catalog-content\") pod \"community-operators-wmgs6\" (UID: \"dd07cfb9-ce1d-454c-899a-ee264c885160\") " pod="openshift-marketplace/community-operators-wmgs6" Jan 29 07:21:36 crc kubenswrapper[5017]: I0129 07:21:36.887088 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n87qf\" (UniqueName: \"kubernetes.io/projected/dd07cfb9-ce1d-454c-899a-ee264c885160-kube-api-access-n87qf\") pod \"community-operators-wmgs6\" (UID: \"dd07cfb9-ce1d-454c-899a-ee264c885160\") " pod="openshift-marketplace/community-operators-wmgs6" Jan 29 07:21:36 crc kubenswrapper[5017]: I0129 07:21:36.887331 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd07cfb9-ce1d-454c-899a-ee264c885160-utilities\") pod \"community-operators-wmgs6\" (UID: \"dd07cfb9-ce1d-454c-899a-ee264c885160\") " pod="openshift-marketplace/community-operators-wmgs6" Jan 29 07:21:36 crc kubenswrapper[5017]: I0129 07:21:36.989480 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd07cfb9-ce1d-454c-899a-ee264c885160-utilities\") pod \"community-operators-wmgs6\" (UID: \"dd07cfb9-ce1d-454c-899a-ee264c885160\") " pod="openshift-marketplace/community-operators-wmgs6" Jan 29 07:21:36 crc kubenswrapper[5017]: I0129 07:21:36.989684 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd07cfb9-ce1d-454c-899a-ee264c885160-catalog-content\") pod \"community-operators-wmgs6\" (UID: \"dd07cfb9-ce1d-454c-899a-ee264c885160\") " pod="openshift-marketplace/community-operators-wmgs6" Jan 29 07:21:36 crc kubenswrapper[5017]: I0129 07:21:36.989730 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n87qf\" (UniqueName: \"kubernetes.io/projected/dd07cfb9-ce1d-454c-899a-ee264c885160-kube-api-access-n87qf\") pod \"community-operators-wmgs6\" (UID: \"dd07cfb9-ce1d-454c-899a-ee264c885160\") " pod="openshift-marketplace/community-operators-wmgs6" Jan 29 07:21:36 crc kubenswrapper[5017]: I0129 07:21:36.989995 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd07cfb9-ce1d-454c-899a-ee264c885160-utilities\") pod \"community-operators-wmgs6\" (UID: \"dd07cfb9-ce1d-454c-899a-ee264c885160\") " pod="openshift-marketplace/community-operators-wmgs6" Jan 29 07:21:36 crc kubenswrapper[5017]: I0129 07:21:36.990267 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd07cfb9-ce1d-454c-899a-ee264c885160-catalog-content\") pod \"community-operators-wmgs6\" (UID: \"dd07cfb9-ce1d-454c-899a-ee264c885160\") " pod="openshift-marketplace/community-operators-wmgs6" Jan 29 07:21:37 crc kubenswrapper[5017]: I0129 07:21:37.014124 5017 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-n87qf\" (UniqueName: \"kubernetes.io/projected/dd07cfb9-ce1d-454c-899a-ee264c885160-kube-api-access-n87qf\") pod \"community-operators-wmgs6\" (UID: \"dd07cfb9-ce1d-454c-899a-ee264c885160\") " pod="openshift-marketplace/community-operators-wmgs6" Jan 29 07:21:37 crc kubenswrapper[5017]: I0129 07:21:37.077399 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wmgs6" Jan 29 07:21:37 crc kubenswrapper[5017]: I0129 07:21:37.330259 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cz4q9"] Jan 29 07:21:37 crc kubenswrapper[5017]: I0129 07:21:37.330533 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-cz4q9" podUID="c623f2ba-11f1-44e5-956e-d1c4e2693795" containerName="registry-server" containerID="cri-o://94815d2b5827c7d805952c16037c9da9334f595730df1e67b9df1e4126b6860d" gracePeriod=2 Jan 29 07:21:37 crc kubenswrapper[5017]: I0129 07:21:37.591771 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wmgs6"] Jan 29 07:21:37 crc kubenswrapper[5017]: W0129 07:21:37.621711 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddd07cfb9_ce1d_454c_899a_ee264c885160.slice/crio-3d39d6685f99f03453bd4336743ee44acb5f0c6a4db5eb0d8844f206a61c5a58 WatchSource:0}: Error finding container 3d39d6685f99f03453bd4336743ee44acb5f0c6a4db5eb0d8844f206a61c5a58: Status 404 returned error can't find the container with id 3d39d6685f99f03453bd4336743ee44acb5f0c6a4db5eb0d8844f206a61c5a58 Jan 29 07:21:37 crc kubenswrapper[5017]: I0129 07:21:37.768913 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cz4q9" Jan 29 07:21:37 crc kubenswrapper[5017]: I0129 07:21:37.902381 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4mns7\" (UniqueName: \"kubernetes.io/projected/c623f2ba-11f1-44e5-956e-d1c4e2693795-kube-api-access-4mns7\") pod \"c623f2ba-11f1-44e5-956e-d1c4e2693795\" (UID: \"c623f2ba-11f1-44e5-956e-d1c4e2693795\") " Jan 29 07:21:37 crc kubenswrapper[5017]: I0129 07:21:37.902472 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c623f2ba-11f1-44e5-956e-d1c4e2693795-catalog-content\") pod \"c623f2ba-11f1-44e5-956e-d1c4e2693795\" (UID: \"c623f2ba-11f1-44e5-956e-d1c4e2693795\") " Jan 29 07:21:37 crc kubenswrapper[5017]: I0129 07:21:37.902695 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c623f2ba-11f1-44e5-956e-d1c4e2693795-utilities\") pod \"c623f2ba-11f1-44e5-956e-d1c4e2693795\" (UID: \"c623f2ba-11f1-44e5-956e-d1c4e2693795\") " Jan 29 07:21:37 crc kubenswrapper[5017]: I0129 07:21:37.903799 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c623f2ba-11f1-44e5-956e-d1c4e2693795-utilities" (OuterVolumeSpecName: "utilities") pod "c623f2ba-11f1-44e5-956e-d1c4e2693795" (UID: "c623f2ba-11f1-44e5-956e-d1c4e2693795"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:21:37 crc kubenswrapper[5017]: I0129 07:21:37.908231 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c623f2ba-11f1-44e5-956e-d1c4e2693795-kube-api-access-4mns7" (OuterVolumeSpecName: "kube-api-access-4mns7") pod "c623f2ba-11f1-44e5-956e-d1c4e2693795" (UID: "c623f2ba-11f1-44e5-956e-d1c4e2693795"). InnerVolumeSpecName "kube-api-access-4mns7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:21:37 crc kubenswrapper[5017]: I0129 07:21:37.925670 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c623f2ba-11f1-44e5-956e-d1c4e2693795-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c623f2ba-11f1-44e5-956e-d1c4e2693795" (UID: "c623f2ba-11f1-44e5-956e-d1c4e2693795"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:21:38 crc kubenswrapper[5017]: I0129 07:21:38.004354 5017 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c623f2ba-11f1-44e5-956e-d1c4e2693795-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 07:21:38 crc kubenswrapper[5017]: I0129 07:21:38.004430 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4mns7\" (UniqueName: \"kubernetes.io/projected/c623f2ba-11f1-44e5-956e-d1c4e2693795-kube-api-access-4mns7\") on node \"crc\" DevicePath \"\"" Jan 29 07:21:38 crc kubenswrapper[5017]: I0129 07:21:38.004447 5017 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c623f2ba-11f1-44e5-956e-d1c4e2693795-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 07:21:38 crc kubenswrapper[5017]: I0129 07:21:38.116268 5017 generic.go:334] "Generic (PLEG): container finished" podID="c623f2ba-11f1-44e5-956e-d1c4e2693795" containerID="94815d2b5827c7d805952c16037c9da9334f595730df1e67b9df1e4126b6860d" exitCode=0 Jan 29 07:21:38 crc kubenswrapper[5017]: I0129 07:21:38.116344 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cz4q9" Jan 29 07:21:38 crc kubenswrapper[5017]: I0129 07:21:38.116348 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cz4q9" event={"ID":"c623f2ba-11f1-44e5-956e-d1c4e2693795","Type":"ContainerDied","Data":"94815d2b5827c7d805952c16037c9da9334f595730df1e67b9df1e4126b6860d"} Jan 29 07:21:38 crc kubenswrapper[5017]: I0129 07:21:38.116819 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cz4q9" event={"ID":"c623f2ba-11f1-44e5-956e-d1c4e2693795","Type":"ContainerDied","Data":"34933ef16ef202b93b4d045bb9f04655c7fa390c1dddea2cc8d03be7a7763cfa"} Jan 29 07:21:38 crc kubenswrapper[5017]: I0129 07:21:38.116860 5017 scope.go:117] "RemoveContainer" containerID="94815d2b5827c7d805952c16037c9da9334f595730df1e67b9df1e4126b6860d" Jan 29 07:21:38 crc kubenswrapper[5017]: I0129 07:21:38.118845 5017 generic.go:334] "Generic (PLEG): container finished" podID="dd07cfb9-ce1d-454c-899a-ee264c885160" containerID="1e36b6ff1b3e14fde108d8b81b8f5d666466fc1557380b2141004ae003724457" exitCode=0 Jan 29 07:21:38 crc kubenswrapper[5017]: I0129 07:21:38.118886 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wmgs6" event={"ID":"dd07cfb9-ce1d-454c-899a-ee264c885160","Type":"ContainerDied","Data":"1e36b6ff1b3e14fde108d8b81b8f5d666466fc1557380b2141004ae003724457"} Jan 29 07:21:38 crc kubenswrapper[5017]: I0129 07:21:38.118902 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wmgs6" event={"ID":"dd07cfb9-ce1d-454c-899a-ee264c885160","Type":"ContainerStarted","Data":"3d39d6685f99f03453bd4336743ee44acb5f0c6a4db5eb0d8844f206a61c5a58"} Jan 29 07:21:38 crc kubenswrapper[5017]: I0129 07:21:38.151354 5017 scope.go:117] "RemoveContainer" containerID="42bb9fc79b5129c5187049cf5fb89a71b6b4c3f560cb03e1af1345f34c96d45c" Jan 29 07:21:38 crc kubenswrapper[5017]: I0129 07:21:38.178357 5017 scope.go:117] "RemoveContainer" containerID="4301b625c90e2f0235b4663320172ce881a40d8c5ad2eee149d17a75b409d59f" Jan 29 07:21:38 crc kubenswrapper[5017]: I0129 07:21:38.199219 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cz4q9"] Jan 29 07:21:38 crc kubenswrapper[5017]: I0129 07:21:38.206535 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-cz4q9"] Jan 29 07:21:38 crc kubenswrapper[5017]: I0129 07:21:38.215378 5017 scope.go:117] "RemoveContainer" containerID="94815d2b5827c7d805952c16037c9da9334f595730df1e67b9df1e4126b6860d" Jan 29 07:21:38 crc kubenswrapper[5017]: E0129 07:21:38.216156 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"94815d2b5827c7d805952c16037c9da9334f595730df1e67b9df1e4126b6860d\": container with ID starting with 94815d2b5827c7d805952c16037c9da9334f595730df1e67b9df1e4126b6860d not found: ID does not exist" containerID="94815d2b5827c7d805952c16037c9da9334f595730df1e67b9df1e4126b6860d" Jan 29 07:21:38 crc kubenswrapper[5017]: I0129 07:21:38.216207 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94815d2b5827c7d805952c16037c9da9334f595730df1e67b9df1e4126b6860d"} err="failed to get container status \"94815d2b5827c7d805952c16037c9da9334f595730df1e67b9df1e4126b6860d\": rpc error: code = NotFound desc = could not find container 
\"94815d2b5827c7d805952c16037c9da9334f595730df1e67b9df1e4126b6860d\": container with ID starting with 94815d2b5827c7d805952c16037c9da9334f595730df1e67b9df1e4126b6860d not found: ID does not exist" Jan 29 07:21:38 crc kubenswrapper[5017]: I0129 07:21:38.216246 5017 scope.go:117] "RemoveContainer" containerID="42bb9fc79b5129c5187049cf5fb89a71b6b4c3f560cb03e1af1345f34c96d45c" Jan 29 07:21:38 crc kubenswrapper[5017]: E0129 07:21:38.216701 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42bb9fc79b5129c5187049cf5fb89a71b6b4c3f560cb03e1af1345f34c96d45c\": container with ID starting with 42bb9fc79b5129c5187049cf5fb89a71b6b4c3f560cb03e1af1345f34c96d45c not found: ID does not exist" containerID="42bb9fc79b5129c5187049cf5fb89a71b6b4c3f560cb03e1af1345f34c96d45c" Jan 29 07:21:38 crc kubenswrapper[5017]: I0129 07:21:38.216732 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42bb9fc79b5129c5187049cf5fb89a71b6b4c3f560cb03e1af1345f34c96d45c"} err="failed to get container status \"42bb9fc79b5129c5187049cf5fb89a71b6b4c3f560cb03e1af1345f34c96d45c\": rpc error: code = NotFound desc = could not find container \"42bb9fc79b5129c5187049cf5fb89a71b6b4c3f560cb03e1af1345f34c96d45c\": container with ID starting with 42bb9fc79b5129c5187049cf5fb89a71b6b4c3f560cb03e1af1345f34c96d45c not found: ID does not exist" Jan 29 07:21:38 crc kubenswrapper[5017]: I0129 07:21:38.216746 5017 scope.go:117] "RemoveContainer" containerID="4301b625c90e2f0235b4663320172ce881a40d8c5ad2eee149d17a75b409d59f" Jan 29 07:21:38 crc kubenswrapper[5017]: E0129 07:21:38.217072 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4301b625c90e2f0235b4663320172ce881a40d8c5ad2eee149d17a75b409d59f\": container with ID starting with 4301b625c90e2f0235b4663320172ce881a40d8c5ad2eee149d17a75b409d59f not found: ID does not exist" containerID="4301b625c90e2f0235b4663320172ce881a40d8c5ad2eee149d17a75b409d59f" Jan 29 07:21:38 crc kubenswrapper[5017]: I0129 07:21:38.217100 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4301b625c90e2f0235b4663320172ce881a40d8c5ad2eee149d17a75b409d59f"} err="failed to get container status \"4301b625c90e2f0235b4663320172ce881a40d8c5ad2eee149d17a75b409d59f\": rpc error: code = NotFound desc = could not find container \"4301b625c90e2f0235b4663320172ce881a40d8c5ad2eee149d17a75b409d59f\": container with ID starting with 4301b625c90e2f0235b4663320172ce881a40d8c5ad2eee149d17a75b409d59f not found: ID does not exist" Jan 29 07:21:38 crc kubenswrapper[5017]: I0129 07:21:38.325744 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c623f2ba-11f1-44e5-956e-d1c4e2693795" path="/var/lib/kubelet/pods/c623f2ba-11f1-44e5-956e-d1c4e2693795/volumes" Jan 29 07:21:39 crc kubenswrapper[5017]: I0129 07:21:39.056522 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2sd7r" Jan 29 07:21:39 crc kubenswrapper[5017]: I0129 07:21:39.056604 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2sd7r" Jan 29 07:21:40 crc kubenswrapper[5017]: I0129 07:21:40.115201 5017 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2sd7r" podUID="8a543265-a4dc-4bbd-96bc-3d6f0dc52318" containerName="registry-server" 
probeResult="failure" output=< Jan 29 07:21:40 crc kubenswrapper[5017]: timeout: failed to connect service ":50051" within 1s Jan 29 07:21:40 crc kubenswrapper[5017]: > Jan 29 07:21:40 crc kubenswrapper[5017]: I0129 07:21:40.137431 5017 generic.go:334] "Generic (PLEG): container finished" podID="dd07cfb9-ce1d-454c-899a-ee264c885160" containerID="d6fa7035fa1d2f7f49c35a0218a28638a0d825216e20d915725a7dd2c4e62a3a" exitCode=0 Jan 29 07:21:40 crc kubenswrapper[5017]: I0129 07:21:40.137496 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wmgs6" event={"ID":"dd07cfb9-ce1d-454c-899a-ee264c885160","Type":"ContainerDied","Data":"d6fa7035fa1d2f7f49c35a0218a28638a0d825216e20d915725a7dd2c4e62a3a"} Jan 29 07:21:41 crc kubenswrapper[5017]: I0129 07:21:41.145981 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wmgs6" event={"ID":"dd07cfb9-ce1d-454c-899a-ee264c885160","Type":"ContainerStarted","Data":"8334427d046e2619c7fe213aed61e4bb2fb547fcc5c9f6e1f89aec7eb2916023"} Jan 29 07:21:41 crc kubenswrapper[5017]: I0129 07:21:41.174340 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wmgs6" podStartSLOduration=2.676641945 podStartE2EDuration="5.174321046s" podCreationTimestamp="2026-01-29 07:21:36 +0000 UTC" firstStartedPulling="2026-01-29 07:21:38.121227604 +0000 UTC m=+2784.495675214" lastFinishedPulling="2026-01-29 07:21:40.618906705 +0000 UTC m=+2786.993354315" observedRunningTime="2026-01-29 07:21:41.172158194 +0000 UTC m=+2787.546605804" watchObservedRunningTime="2026-01-29 07:21:41.174321046 +0000 UTC m=+2787.548768656" Jan 29 07:21:47 crc kubenswrapper[5017]: I0129 07:21:47.078071 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wmgs6" Jan 29 07:21:47 crc kubenswrapper[5017]: I0129 07:21:47.078860 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wmgs6" Jan 29 07:21:47 crc kubenswrapper[5017]: I0129 07:21:47.138051 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wmgs6" Jan 29 07:21:47 crc kubenswrapper[5017]: I0129 07:21:47.246748 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wmgs6" Jan 29 07:21:47 crc kubenswrapper[5017]: I0129 07:21:47.390086 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wmgs6"] Jan 29 07:21:49 crc kubenswrapper[5017]: I0129 07:21:49.108153 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2sd7r" Jan 29 07:21:49 crc kubenswrapper[5017]: I0129 07:21:49.148463 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2sd7r" Jan 29 07:21:49 crc kubenswrapper[5017]: I0129 07:21:49.210591 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wmgs6" podUID="dd07cfb9-ce1d-454c-899a-ee264c885160" containerName="registry-server" containerID="cri-o://8334427d046e2619c7fe213aed61e4bb2fb547fcc5c9f6e1f89aec7eb2916023" gracePeriod=2 Jan 29 07:21:49 crc kubenswrapper[5017]: I0129 07:21:49.708326 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wmgs6" Jan 29 07:21:49 crc kubenswrapper[5017]: I0129 07:21:49.788716 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2sd7r"] Jan 29 07:21:49 crc kubenswrapper[5017]: I0129 07:21:49.804513 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd07cfb9-ce1d-454c-899a-ee264c885160-catalog-content\") pod \"dd07cfb9-ce1d-454c-899a-ee264c885160\" (UID: \"dd07cfb9-ce1d-454c-899a-ee264c885160\") " Jan 29 07:21:49 crc kubenswrapper[5017]: I0129 07:21:49.804805 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd07cfb9-ce1d-454c-899a-ee264c885160-utilities\") pod \"dd07cfb9-ce1d-454c-899a-ee264c885160\" (UID: \"dd07cfb9-ce1d-454c-899a-ee264c885160\") " Jan 29 07:21:49 crc kubenswrapper[5017]: I0129 07:21:49.804984 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n87qf\" (UniqueName: \"kubernetes.io/projected/dd07cfb9-ce1d-454c-899a-ee264c885160-kube-api-access-n87qf\") pod \"dd07cfb9-ce1d-454c-899a-ee264c885160\" (UID: \"dd07cfb9-ce1d-454c-899a-ee264c885160\") " Jan 29 07:21:49 crc kubenswrapper[5017]: I0129 07:21:49.807338 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd07cfb9-ce1d-454c-899a-ee264c885160-utilities" (OuterVolumeSpecName: "utilities") pod "dd07cfb9-ce1d-454c-899a-ee264c885160" (UID: "dd07cfb9-ce1d-454c-899a-ee264c885160"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:21:49 crc kubenswrapper[5017]: I0129 07:21:49.813779 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd07cfb9-ce1d-454c-899a-ee264c885160-kube-api-access-n87qf" (OuterVolumeSpecName: "kube-api-access-n87qf") pod "dd07cfb9-ce1d-454c-899a-ee264c885160" (UID: "dd07cfb9-ce1d-454c-899a-ee264c885160"). InnerVolumeSpecName "kube-api-access-n87qf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:21:49 crc kubenswrapper[5017]: I0129 07:21:49.867873 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd07cfb9-ce1d-454c-899a-ee264c885160-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dd07cfb9-ce1d-454c-899a-ee264c885160" (UID: "dd07cfb9-ce1d-454c-899a-ee264c885160"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:21:49 crc kubenswrapper[5017]: I0129 07:21:49.907341 5017 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd07cfb9-ce1d-454c-899a-ee264c885160-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 07:21:49 crc kubenswrapper[5017]: I0129 07:21:49.907391 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n87qf\" (UniqueName: \"kubernetes.io/projected/dd07cfb9-ce1d-454c-899a-ee264c885160-kube-api-access-n87qf\") on node \"crc\" DevicePath \"\"" Jan 29 07:21:49 crc kubenswrapper[5017]: I0129 07:21:49.907405 5017 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd07cfb9-ce1d-454c-899a-ee264c885160-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 07:21:50 crc kubenswrapper[5017]: I0129 07:21:50.224291 5017 generic.go:334] "Generic (PLEG): container finished" podID="dd07cfb9-ce1d-454c-899a-ee264c885160" containerID="8334427d046e2619c7fe213aed61e4bb2fb547fcc5c9f6e1f89aec7eb2916023" exitCode=0 Jan 29 07:21:50 crc kubenswrapper[5017]: I0129 07:21:50.224361 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wmgs6" event={"ID":"dd07cfb9-ce1d-454c-899a-ee264c885160","Type":"ContainerDied","Data":"8334427d046e2619c7fe213aed61e4bb2fb547fcc5c9f6e1f89aec7eb2916023"} Jan 29 07:21:50 crc kubenswrapper[5017]: I0129 07:21:50.224411 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wmgs6" event={"ID":"dd07cfb9-ce1d-454c-899a-ee264c885160","Type":"ContainerDied","Data":"3d39d6685f99f03453bd4336743ee44acb5f0c6a4db5eb0d8844f206a61c5a58"} Jan 29 07:21:50 crc kubenswrapper[5017]: I0129 07:21:50.224439 5017 scope.go:117] "RemoveContainer" containerID="8334427d046e2619c7fe213aed61e4bb2fb547fcc5c9f6e1f89aec7eb2916023" Jan 29 07:21:50 crc kubenswrapper[5017]: I0129 07:21:50.224446 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wmgs6" Jan 29 07:21:50 crc kubenswrapper[5017]: I0129 07:21:50.224636 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2sd7r" podUID="8a543265-a4dc-4bbd-96bc-3d6f0dc52318" containerName="registry-server" containerID="cri-o://d62babdc9f8a74faba51db90bde6f077ee7256be7853479ea372a6a69003197d" gracePeriod=2 Jan 29 07:21:50 crc kubenswrapper[5017]: I0129 07:21:50.265163 5017 scope.go:117] "RemoveContainer" containerID="d6fa7035fa1d2f7f49c35a0218a28638a0d825216e20d915725a7dd2c4e62a3a" Jan 29 07:21:50 crc kubenswrapper[5017]: I0129 07:21:50.271093 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wmgs6"] Jan 29 07:21:50 crc kubenswrapper[5017]: I0129 07:21:50.277465 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wmgs6"] Jan 29 07:21:50 crc kubenswrapper[5017]: I0129 07:21:50.298105 5017 scope.go:117] "RemoveContainer" containerID="1e36b6ff1b3e14fde108d8b81b8f5d666466fc1557380b2141004ae003724457" Jan 29 07:21:50 crc kubenswrapper[5017]: I0129 07:21:50.325885 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd07cfb9-ce1d-454c-899a-ee264c885160" path="/var/lib/kubelet/pods/dd07cfb9-ce1d-454c-899a-ee264c885160/volumes" Jan 29 07:21:50 crc kubenswrapper[5017]: I0129 07:21:50.425072 5017 scope.go:117] "RemoveContainer" containerID="8334427d046e2619c7fe213aed61e4bb2fb547fcc5c9f6e1f89aec7eb2916023" Jan 29 07:21:50 crc kubenswrapper[5017]: E0129 07:21:50.427529 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8334427d046e2619c7fe213aed61e4bb2fb547fcc5c9f6e1f89aec7eb2916023\": container with ID starting with 8334427d046e2619c7fe213aed61e4bb2fb547fcc5c9f6e1f89aec7eb2916023 not found: ID does not exist" containerID="8334427d046e2619c7fe213aed61e4bb2fb547fcc5c9f6e1f89aec7eb2916023" Jan 29 07:21:50 crc kubenswrapper[5017]: I0129 07:21:50.427564 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8334427d046e2619c7fe213aed61e4bb2fb547fcc5c9f6e1f89aec7eb2916023"} err="failed to get container status \"8334427d046e2619c7fe213aed61e4bb2fb547fcc5c9f6e1f89aec7eb2916023\": rpc error: code = NotFound desc = could not find container \"8334427d046e2619c7fe213aed61e4bb2fb547fcc5c9f6e1f89aec7eb2916023\": container with ID starting with 8334427d046e2619c7fe213aed61e4bb2fb547fcc5c9f6e1f89aec7eb2916023 not found: ID does not exist" Jan 29 07:21:50 crc kubenswrapper[5017]: I0129 07:21:50.427590 5017 scope.go:117] "RemoveContainer" containerID="d6fa7035fa1d2f7f49c35a0218a28638a0d825216e20d915725a7dd2c4e62a3a" Jan 29 07:21:50 crc kubenswrapper[5017]: E0129 07:21:50.428127 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d6fa7035fa1d2f7f49c35a0218a28638a0d825216e20d915725a7dd2c4e62a3a\": container with ID starting with d6fa7035fa1d2f7f49c35a0218a28638a0d825216e20d915725a7dd2c4e62a3a not found: ID does not exist" containerID="d6fa7035fa1d2f7f49c35a0218a28638a0d825216e20d915725a7dd2c4e62a3a" Jan 29 07:21:50 crc kubenswrapper[5017]: I0129 07:21:50.428151 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6fa7035fa1d2f7f49c35a0218a28638a0d825216e20d915725a7dd2c4e62a3a"} err="failed to get container status 
\"d6fa7035fa1d2f7f49c35a0218a28638a0d825216e20d915725a7dd2c4e62a3a\": rpc error: code = NotFound desc = could not find container \"d6fa7035fa1d2f7f49c35a0218a28638a0d825216e20d915725a7dd2c4e62a3a\": container with ID starting with d6fa7035fa1d2f7f49c35a0218a28638a0d825216e20d915725a7dd2c4e62a3a not found: ID does not exist" Jan 29 07:21:50 crc kubenswrapper[5017]: I0129 07:21:50.428165 5017 scope.go:117] "RemoveContainer" containerID="1e36b6ff1b3e14fde108d8b81b8f5d666466fc1557380b2141004ae003724457" Jan 29 07:21:50 crc kubenswrapper[5017]: E0129 07:21:50.428398 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e36b6ff1b3e14fde108d8b81b8f5d666466fc1557380b2141004ae003724457\": container with ID starting with 1e36b6ff1b3e14fde108d8b81b8f5d666466fc1557380b2141004ae003724457 not found: ID does not exist" containerID="1e36b6ff1b3e14fde108d8b81b8f5d666466fc1557380b2141004ae003724457" Jan 29 07:21:50 crc kubenswrapper[5017]: I0129 07:21:50.428419 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e36b6ff1b3e14fde108d8b81b8f5d666466fc1557380b2141004ae003724457"} err="failed to get container status \"1e36b6ff1b3e14fde108d8b81b8f5d666466fc1557380b2141004ae003724457\": rpc error: code = NotFound desc = could not find container \"1e36b6ff1b3e14fde108d8b81b8f5d666466fc1557380b2141004ae003724457\": container with ID starting with 1e36b6ff1b3e14fde108d8b81b8f5d666466fc1557380b2141004ae003724457 not found: ID does not exist" Jan 29 07:21:50 crc kubenswrapper[5017]: I0129 07:21:50.725448 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2sd7r" Jan 29 07:21:50 crc kubenswrapper[5017]: I0129 07:21:50.819344 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a543265-a4dc-4bbd-96bc-3d6f0dc52318-catalog-content\") pod \"8a543265-a4dc-4bbd-96bc-3d6f0dc52318\" (UID: \"8a543265-a4dc-4bbd-96bc-3d6f0dc52318\") " Jan 29 07:21:50 crc kubenswrapper[5017]: I0129 07:21:50.819679 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a543265-a4dc-4bbd-96bc-3d6f0dc52318-utilities\") pod \"8a543265-a4dc-4bbd-96bc-3d6f0dc52318\" (UID: \"8a543265-a4dc-4bbd-96bc-3d6f0dc52318\") " Jan 29 07:21:50 crc kubenswrapper[5017]: I0129 07:21:50.820745 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a543265-a4dc-4bbd-96bc-3d6f0dc52318-utilities" (OuterVolumeSpecName: "utilities") pod "8a543265-a4dc-4bbd-96bc-3d6f0dc52318" (UID: "8a543265-a4dc-4bbd-96bc-3d6f0dc52318"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:21:50 crc kubenswrapper[5017]: I0129 07:21:50.820795 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c5ccc\" (UniqueName: \"kubernetes.io/projected/8a543265-a4dc-4bbd-96bc-3d6f0dc52318-kube-api-access-c5ccc\") pod \"8a543265-a4dc-4bbd-96bc-3d6f0dc52318\" (UID: \"8a543265-a4dc-4bbd-96bc-3d6f0dc52318\") " Jan 29 07:21:50 crc kubenswrapper[5017]: I0129 07:21:50.821181 5017 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a543265-a4dc-4bbd-96bc-3d6f0dc52318-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 07:21:50 crc kubenswrapper[5017]: I0129 07:21:50.824676 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a543265-a4dc-4bbd-96bc-3d6f0dc52318-kube-api-access-c5ccc" (OuterVolumeSpecName: "kube-api-access-c5ccc") pod "8a543265-a4dc-4bbd-96bc-3d6f0dc52318" (UID: "8a543265-a4dc-4bbd-96bc-3d6f0dc52318"). InnerVolumeSpecName "kube-api-access-c5ccc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:21:50 crc kubenswrapper[5017]: I0129 07:21:50.922343 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c5ccc\" (UniqueName: \"kubernetes.io/projected/8a543265-a4dc-4bbd-96bc-3d6f0dc52318-kube-api-access-c5ccc\") on node \"crc\" DevicePath \"\"" Jan 29 07:21:50 crc kubenswrapper[5017]: I0129 07:21:50.948807 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a543265-a4dc-4bbd-96bc-3d6f0dc52318-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8a543265-a4dc-4bbd-96bc-3d6f0dc52318" (UID: "8a543265-a4dc-4bbd-96bc-3d6f0dc52318"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:21:51 crc kubenswrapper[5017]: I0129 07:21:51.024638 5017 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a543265-a4dc-4bbd-96bc-3d6f0dc52318-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 07:21:51 crc kubenswrapper[5017]: I0129 07:21:51.234501 5017 generic.go:334] "Generic (PLEG): container finished" podID="8a543265-a4dc-4bbd-96bc-3d6f0dc52318" containerID="d62babdc9f8a74faba51db90bde6f077ee7256be7853479ea372a6a69003197d" exitCode=0 Jan 29 07:21:51 crc kubenswrapper[5017]: I0129 07:21:51.234541 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2sd7r" event={"ID":"8a543265-a4dc-4bbd-96bc-3d6f0dc52318","Type":"ContainerDied","Data":"d62babdc9f8a74faba51db90bde6f077ee7256be7853479ea372a6a69003197d"} Jan 29 07:21:51 crc kubenswrapper[5017]: I0129 07:21:51.234570 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2sd7r" event={"ID":"8a543265-a4dc-4bbd-96bc-3d6f0dc52318","Type":"ContainerDied","Data":"e40ef419ebae63c77a4a249409c087fb8a5aed83800825d72e89b175e62ad060"} Jan 29 07:21:51 crc kubenswrapper[5017]: I0129 07:21:51.234594 5017 scope.go:117] "RemoveContainer" containerID="d62babdc9f8a74faba51db90bde6f077ee7256be7853479ea372a6a69003197d" Jan 29 07:21:51 crc kubenswrapper[5017]: I0129 07:21:51.234709 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2sd7r" Jan 29 07:21:51 crc kubenswrapper[5017]: I0129 07:21:51.275276 5017 scope.go:117] "RemoveContainer" containerID="788af00eed28a5a46eed363a3eac1c7775258293a460144432a33187d2fa438b" Jan 29 07:21:51 crc kubenswrapper[5017]: I0129 07:21:51.279751 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2sd7r"] Jan 29 07:21:51 crc kubenswrapper[5017]: I0129 07:21:51.288754 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2sd7r"] Jan 29 07:21:51 crc kubenswrapper[5017]: I0129 07:21:51.299589 5017 scope.go:117] "RemoveContainer" containerID="32b7cdb9396b86ee5184b9decc682c2c42af1f282c397157a69544ed100fb65e" Jan 29 07:21:51 crc kubenswrapper[5017]: I0129 07:21:51.328204 5017 scope.go:117] "RemoveContainer" containerID="d62babdc9f8a74faba51db90bde6f077ee7256be7853479ea372a6a69003197d" Jan 29 07:21:51 crc kubenswrapper[5017]: E0129 07:21:51.328892 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d62babdc9f8a74faba51db90bde6f077ee7256be7853479ea372a6a69003197d\": container with ID starting with d62babdc9f8a74faba51db90bde6f077ee7256be7853479ea372a6a69003197d not found: ID does not exist" containerID="d62babdc9f8a74faba51db90bde6f077ee7256be7853479ea372a6a69003197d" Jan 29 07:21:51 crc kubenswrapper[5017]: I0129 07:21:51.328981 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d62babdc9f8a74faba51db90bde6f077ee7256be7853479ea372a6a69003197d"} err="failed to get container status \"d62babdc9f8a74faba51db90bde6f077ee7256be7853479ea372a6a69003197d\": rpc error: code = NotFound desc = could not find container \"d62babdc9f8a74faba51db90bde6f077ee7256be7853479ea372a6a69003197d\": container with ID starting with d62babdc9f8a74faba51db90bde6f077ee7256be7853479ea372a6a69003197d not found: ID does not exist" Jan 29 07:21:51 crc kubenswrapper[5017]: I0129 07:21:51.329027 5017 scope.go:117] "RemoveContainer" containerID="788af00eed28a5a46eed363a3eac1c7775258293a460144432a33187d2fa438b" Jan 29 07:21:51 crc kubenswrapper[5017]: E0129 07:21:51.329531 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"788af00eed28a5a46eed363a3eac1c7775258293a460144432a33187d2fa438b\": container with ID starting with 788af00eed28a5a46eed363a3eac1c7775258293a460144432a33187d2fa438b not found: ID does not exist" containerID="788af00eed28a5a46eed363a3eac1c7775258293a460144432a33187d2fa438b" Jan 29 07:21:51 crc kubenswrapper[5017]: I0129 07:21:51.329577 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"788af00eed28a5a46eed363a3eac1c7775258293a460144432a33187d2fa438b"} err="failed to get container status \"788af00eed28a5a46eed363a3eac1c7775258293a460144432a33187d2fa438b\": rpc error: code = NotFound desc = could not find container \"788af00eed28a5a46eed363a3eac1c7775258293a460144432a33187d2fa438b\": container with ID starting with 788af00eed28a5a46eed363a3eac1c7775258293a460144432a33187d2fa438b not found: ID does not exist" Jan 29 07:21:51 crc kubenswrapper[5017]: I0129 07:21:51.329607 5017 scope.go:117] "RemoveContainer" containerID="32b7cdb9396b86ee5184b9decc682c2c42af1f282c397157a69544ed100fb65e" Jan 29 07:21:51 crc kubenswrapper[5017]: E0129 07:21:51.330179 5017 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"32b7cdb9396b86ee5184b9decc682c2c42af1f282c397157a69544ed100fb65e\": container with ID starting with 32b7cdb9396b86ee5184b9decc682c2c42af1f282c397157a69544ed100fb65e not found: ID does not exist" containerID="32b7cdb9396b86ee5184b9decc682c2c42af1f282c397157a69544ed100fb65e" Jan 29 07:21:51 crc kubenswrapper[5017]: I0129 07:21:51.330219 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32b7cdb9396b86ee5184b9decc682c2c42af1f282c397157a69544ed100fb65e"} err="failed to get container status \"32b7cdb9396b86ee5184b9decc682c2c42af1f282c397157a69544ed100fb65e\": rpc error: code = NotFound desc = could not find container \"32b7cdb9396b86ee5184b9decc682c2c42af1f282c397157a69544ed100fb65e\": container with ID starting with 32b7cdb9396b86ee5184b9decc682c2c42af1f282c397157a69544ed100fb65e not found: ID does not exist" Jan 29 07:21:52 crc kubenswrapper[5017]: I0129 07:21:52.328362 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a543265-a4dc-4bbd-96bc-3d6f0dc52318" path="/var/lib/kubelet/pods/8a543265-a4dc-4bbd-96bc-3d6f0dc52318/volumes" Jan 29 07:22:56 crc kubenswrapper[5017]: I0129 07:22:56.539659 5017 patch_prober.go:28] interesting pod/machine-config-daemon-895pl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 07:22:56 crc kubenswrapper[5017]: I0129 07:22:56.540650 5017 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 07:22:59 crc kubenswrapper[5017]: I0129 07:22:59.805785 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bn6n6"] Jan 29 07:22:59 crc kubenswrapper[5017]: E0129 07:22:59.806868 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a543265-a4dc-4bbd-96bc-3d6f0dc52318" containerName="registry-server" Jan 29 07:22:59 crc kubenswrapper[5017]: I0129 07:22:59.806888 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a543265-a4dc-4bbd-96bc-3d6f0dc52318" containerName="registry-server" Jan 29 07:22:59 crc kubenswrapper[5017]: E0129 07:22:59.806908 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c623f2ba-11f1-44e5-956e-d1c4e2693795" containerName="extract-content" Jan 29 07:22:59 crc kubenswrapper[5017]: I0129 07:22:59.806915 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="c623f2ba-11f1-44e5-956e-d1c4e2693795" containerName="extract-content" Jan 29 07:22:59 crc kubenswrapper[5017]: E0129 07:22:59.806931 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c623f2ba-11f1-44e5-956e-d1c4e2693795" containerName="registry-server" Jan 29 07:22:59 crc kubenswrapper[5017]: I0129 07:22:59.806939 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="c623f2ba-11f1-44e5-956e-d1c4e2693795" containerName="registry-server" Jan 29 07:22:59 crc kubenswrapper[5017]: E0129 07:22:59.806994 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a543265-a4dc-4bbd-96bc-3d6f0dc52318" containerName="extract-utilities" Jan 29 07:22:59 crc kubenswrapper[5017]: I0129 
07:22:59.807004 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a543265-a4dc-4bbd-96bc-3d6f0dc52318" containerName="extract-utilities" Jan 29 07:22:59 crc kubenswrapper[5017]: E0129 07:22:59.807048 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a543265-a4dc-4bbd-96bc-3d6f0dc52318" containerName="extract-content" Jan 29 07:22:59 crc kubenswrapper[5017]: I0129 07:22:59.807055 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a543265-a4dc-4bbd-96bc-3d6f0dc52318" containerName="extract-content" Jan 29 07:22:59 crc kubenswrapper[5017]: E0129 07:22:59.807067 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd07cfb9-ce1d-454c-899a-ee264c885160" containerName="extract-utilities" Jan 29 07:22:59 crc kubenswrapper[5017]: I0129 07:22:59.807075 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd07cfb9-ce1d-454c-899a-ee264c885160" containerName="extract-utilities" Jan 29 07:22:59 crc kubenswrapper[5017]: E0129 07:22:59.807086 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c623f2ba-11f1-44e5-956e-d1c4e2693795" containerName="extract-utilities" Jan 29 07:22:59 crc kubenswrapper[5017]: I0129 07:22:59.807092 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="c623f2ba-11f1-44e5-956e-d1c4e2693795" containerName="extract-utilities" Jan 29 07:22:59 crc kubenswrapper[5017]: E0129 07:22:59.807101 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd07cfb9-ce1d-454c-899a-ee264c885160" containerName="registry-server" Jan 29 07:22:59 crc kubenswrapper[5017]: I0129 07:22:59.807107 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd07cfb9-ce1d-454c-899a-ee264c885160" containerName="registry-server" Jan 29 07:22:59 crc kubenswrapper[5017]: E0129 07:22:59.807119 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd07cfb9-ce1d-454c-899a-ee264c885160" containerName="extract-content" Jan 29 07:22:59 crc kubenswrapper[5017]: I0129 07:22:59.807125 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd07cfb9-ce1d-454c-899a-ee264c885160" containerName="extract-content" Jan 29 07:22:59 crc kubenswrapper[5017]: I0129 07:22:59.807317 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd07cfb9-ce1d-454c-899a-ee264c885160" containerName="registry-server" Jan 29 07:22:59 crc kubenswrapper[5017]: I0129 07:22:59.807332 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="c623f2ba-11f1-44e5-956e-d1c4e2693795" containerName="registry-server" Jan 29 07:22:59 crc kubenswrapper[5017]: I0129 07:22:59.807350 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a543265-a4dc-4bbd-96bc-3d6f0dc52318" containerName="registry-server" Jan 29 07:22:59 crc kubenswrapper[5017]: I0129 07:22:59.808662 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bn6n6" Jan 29 07:22:59 crc kubenswrapper[5017]: I0129 07:22:59.823386 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bn6n6"] Jan 29 07:22:59 crc kubenswrapper[5017]: I0129 07:22:59.997535 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fab78c83-e3dd-4ff6-92bb-820b1c8efbd9-utilities\") pod \"certified-operators-bn6n6\" (UID: \"fab78c83-e3dd-4ff6-92bb-820b1c8efbd9\") " pod="openshift-marketplace/certified-operators-bn6n6" Jan 29 07:22:59 crc kubenswrapper[5017]: I0129 07:22:59.997653 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qx2tw\" (UniqueName: \"kubernetes.io/projected/fab78c83-e3dd-4ff6-92bb-820b1c8efbd9-kube-api-access-qx2tw\") pod \"certified-operators-bn6n6\" (UID: \"fab78c83-e3dd-4ff6-92bb-820b1c8efbd9\") " pod="openshift-marketplace/certified-operators-bn6n6" Jan 29 07:22:59 crc kubenswrapper[5017]: I0129 07:22:59.997704 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fab78c83-e3dd-4ff6-92bb-820b1c8efbd9-catalog-content\") pod \"certified-operators-bn6n6\" (UID: \"fab78c83-e3dd-4ff6-92bb-820b1c8efbd9\") " pod="openshift-marketplace/certified-operators-bn6n6" Jan 29 07:23:00 crc kubenswrapper[5017]: I0129 07:23:00.099421 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fab78c83-e3dd-4ff6-92bb-820b1c8efbd9-utilities\") pod \"certified-operators-bn6n6\" (UID: \"fab78c83-e3dd-4ff6-92bb-820b1c8efbd9\") " pod="openshift-marketplace/certified-operators-bn6n6" Jan 29 07:23:00 crc kubenswrapper[5017]: I0129 07:23:00.099497 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qx2tw\" (UniqueName: \"kubernetes.io/projected/fab78c83-e3dd-4ff6-92bb-820b1c8efbd9-kube-api-access-qx2tw\") pod \"certified-operators-bn6n6\" (UID: \"fab78c83-e3dd-4ff6-92bb-820b1c8efbd9\") " pod="openshift-marketplace/certified-operators-bn6n6" Jan 29 07:23:00 crc kubenswrapper[5017]: I0129 07:23:00.099544 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fab78c83-e3dd-4ff6-92bb-820b1c8efbd9-catalog-content\") pod \"certified-operators-bn6n6\" (UID: \"fab78c83-e3dd-4ff6-92bb-820b1c8efbd9\") " pod="openshift-marketplace/certified-operators-bn6n6" Jan 29 07:23:00 crc kubenswrapper[5017]: I0129 07:23:00.100094 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fab78c83-e3dd-4ff6-92bb-820b1c8efbd9-utilities\") pod \"certified-operators-bn6n6\" (UID: \"fab78c83-e3dd-4ff6-92bb-820b1c8efbd9\") " pod="openshift-marketplace/certified-operators-bn6n6" Jan 29 07:23:00 crc kubenswrapper[5017]: I0129 07:23:00.100483 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fab78c83-e3dd-4ff6-92bb-820b1c8efbd9-catalog-content\") pod \"certified-operators-bn6n6\" (UID: \"fab78c83-e3dd-4ff6-92bb-820b1c8efbd9\") " pod="openshift-marketplace/certified-operators-bn6n6" Jan 29 07:23:00 crc kubenswrapper[5017]: I0129 07:23:00.154719 5017 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-qx2tw\" (UniqueName: \"kubernetes.io/projected/fab78c83-e3dd-4ff6-92bb-820b1c8efbd9-kube-api-access-qx2tw\") pod \"certified-operators-bn6n6\" (UID: \"fab78c83-e3dd-4ff6-92bb-820b1c8efbd9\") " pod="openshift-marketplace/certified-operators-bn6n6" Jan 29 07:23:00 crc kubenswrapper[5017]: I0129 07:23:00.433073 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bn6n6" Jan 29 07:23:00 crc kubenswrapper[5017]: I0129 07:23:00.743760 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bn6n6"] Jan 29 07:23:00 crc kubenswrapper[5017]: I0129 07:23:00.978219 5017 generic.go:334] "Generic (PLEG): container finished" podID="fab78c83-e3dd-4ff6-92bb-820b1c8efbd9" containerID="89831cfb6238dd498e1a9e46f83e8fa124ba75587eb6fa3d27e4145de4931e91" exitCode=0 Jan 29 07:23:00 crc kubenswrapper[5017]: I0129 07:23:00.978294 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bn6n6" event={"ID":"fab78c83-e3dd-4ff6-92bb-820b1c8efbd9","Type":"ContainerDied","Data":"89831cfb6238dd498e1a9e46f83e8fa124ba75587eb6fa3d27e4145de4931e91"} Jan 29 07:23:00 crc kubenswrapper[5017]: I0129 07:23:00.978343 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bn6n6" event={"ID":"fab78c83-e3dd-4ff6-92bb-820b1c8efbd9","Type":"ContainerStarted","Data":"f716b85fef94f64b128ef547aedb44e0273064280618e5c08de5e07069728f8d"} Jan 29 07:23:01 crc kubenswrapper[5017]: I0129 07:23:01.991617 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bn6n6" event={"ID":"fab78c83-e3dd-4ff6-92bb-820b1c8efbd9","Type":"ContainerStarted","Data":"93a46969466db783f86fae5ad16deaa9fb97ba4ef07da4a7646f196ccf7402fd"} Jan 29 07:23:03 crc kubenswrapper[5017]: I0129 07:23:03.002344 5017 generic.go:334] "Generic (PLEG): container finished" podID="fab78c83-e3dd-4ff6-92bb-820b1c8efbd9" containerID="93a46969466db783f86fae5ad16deaa9fb97ba4ef07da4a7646f196ccf7402fd" exitCode=0 Jan 29 07:23:03 crc kubenswrapper[5017]: I0129 07:23:03.002421 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bn6n6" event={"ID":"fab78c83-e3dd-4ff6-92bb-820b1c8efbd9","Type":"ContainerDied","Data":"93a46969466db783f86fae5ad16deaa9fb97ba4ef07da4a7646f196ccf7402fd"} Jan 29 07:23:04 crc kubenswrapper[5017]: I0129 07:23:04.010806 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bn6n6" event={"ID":"fab78c83-e3dd-4ff6-92bb-820b1c8efbd9","Type":"ContainerStarted","Data":"3f3a2449aea2de644e4290d6eff2b7568b57827c5f90c50d6c62ddb21c6c3039"} Jan 29 07:23:04 crc kubenswrapper[5017]: I0129 07:23:04.030212 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bn6n6" podStartSLOduration=2.581569286 podStartE2EDuration="5.030194991s" podCreationTimestamp="2026-01-29 07:22:59 +0000 UTC" firstStartedPulling="2026-01-29 07:23:00.980467911 +0000 UTC m=+2867.354915531" lastFinishedPulling="2026-01-29 07:23:03.429093626 +0000 UTC m=+2869.803541236" observedRunningTime="2026-01-29 07:23:04.026936993 +0000 UTC m=+2870.401384613" watchObservedRunningTime="2026-01-29 07:23:04.030194991 +0000 UTC m=+2870.404642601" Jan 29 07:23:10 crc kubenswrapper[5017]: I0129 07:23:10.433881 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/certified-operators-bn6n6" Jan 29 07:23:10 crc kubenswrapper[5017]: I0129 07:23:10.434553 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-bn6n6" Jan 29 07:23:10 crc kubenswrapper[5017]: I0129 07:23:10.480343 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bn6n6" Jan 29 07:23:11 crc kubenswrapper[5017]: I0129 07:23:11.143731 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bn6n6" Jan 29 07:23:11 crc kubenswrapper[5017]: I0129 07:23:11.211450 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bn6n6"] Jan 29 07:23:13 crc kubenswrapper[5017]: I0129 07:23:13.103493 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-bn6n6" podUID="fab78c83-e3dd-4ff6-92bb-820b1c8efbd9" containerName="registry-server" containerID="cri-o://3f3a2449aea2de644e4290d6eff2b7568b57827c5f90c50d6c62ddb21c6c3039" gracePeriod=2 Jan 29 07:23:13 crc kubenswrapper[5017]: I0129 07:23:13.646112 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bn6n6" Jan 29 07:23:13 crc kubenswrapper[5017]: I0129 07:23:13.674341 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fab78c83-e3dd-4ff6-92bb-820b1c8efbd9-utilities\") pod \"fab78c83-e3dd-4ff6-92bb-820b1c8efbd9\" (UID: \"fab78c83-e3dd-4ff6-92bb-820b1c8efbd9\") " Jan 29 07:23:13 crc kubenswrapper[5017]: I0129 07:23:13.674568 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fab78c83-e3dd-4ff6-92bb-820b1c8efbd9-catalog-content\") pod \"fab78c83-e3dd-4ff6-92bb-820b1c8efbd9\" (UID: \"fab78c83-e3dd-4ff6-92bb-820b1c8efbd9\") " Jan 29 07:23:13 crc kubenswrapper[5017]: I0129 07:23:13.675154 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qx2tw\" (UniqueName: \"kubernetes.io/projected/fab78c83-e3dd-4ff6-92bb-820b1c8efbd9-kube-api-access-qx2tw\") pod \"fab78c83-e3dd-4ff6-92bb-820b1c8efbd9\" (UID: \"fab78c83-e3dd-4ff6-92bb-820b1c8efbd9\") " Jan 29 07:23:13 crc kubenswrapper[5017]: I0129 07:23:13.675912 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fab78c83-e3dd-4ff6-92bb-820b1c8efbd9-utilities" (OuterVolumeSpecName: "utilities") pod "fab78c83-e3dd-4ff6-92bb-820b1c8efbd9" (UID: "fab78c83-e3dd-4ff6-92bb-820b1c8efbd9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:23:13 crc kubenswrapper[5017]: I0129 07:23:13.685079 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fab78c83-e3dd-4ff6-92bb-820b1c8efbd9-kube-api-access-qx2tw" (OuterVolumeSpecName: "kube-api-access-qx2tw") pod "fab78c83-e3dd-4ff6-92bb-820b1c8efbd9" (UID: "fab78c83-e3dd-4ff6-92bb-820b1c8efbd9"). InnerVolumeSpecName "kube-api-access-qx2tw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:23:13 crc kubenswrapper[5017]: I0129 07:23:13.743865 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fab78c83-e3dd-4ff6-92bb-820b1c8efbd9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fab78c83-e3dd-4ff6-92bb-820b1c8efbd9" (UID: "fab78c83-e3dd-4ff6-92bb-820b1c8efbd9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:23:13 crc kubenswrapper[5017]: I0129 07:23:13.777862 5017 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fab78c83-e3dd-4ff6-92bb-820b1c8efbd9-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 07:23:13 crc kubenswrapper[5017]: I0129 07:23:13.777909 5017 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fab78c83-e3dd-4ff6-92bb-820b1c8efbd9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 07:23:13 crc kubenswrapper[5017]: I0129 07:23:13.777923 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qx2tw\" (UniqueName: \"kubernetes.io/projected/fab78c83-e3dd-4ff6-92bb-820b1c8efbd9-kube-api-access-qx2tw\") on node \"crc\" DevicePath \"\"" Jan 29 07:23:14 crc kubenswrapper[5017]: I0129 07:23:14.117695 5017 generic.go:334] "Generic (PLEG): container finished" podID="fab78c83-e3dd-4ff6-92bb-820b1c8efbd9" containerID="3f3a2449aea2de644e4290d6eff2b7568b57827c5f90c50d6c62ddb21c6c3039" exitCode=0 Jan 29 07:23:14 crc kubenswrapper[5017]: I0129 07:23:14.117755 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bn6n6" event={"ID":"fab78c83-e3dd-4ff6-92bb-820b1c8efbd9","Type":"ContainerDied","Data":"3f3a2449aea2de644e4290d6eff2b7568b57827c5f90c50d6c62ddb21c6c3039"} Jan 29 07:23:14 crc kubenswrapper[5017]: I0129 07:23:14.117833 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bn6n6" Jan 29 07:23:14 crc kubenswrapper[5017]: I0129 07:23:14.117899 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bn6n6" event={"ID":"fab78c83-e3dd-4ff6-92bb-820b1c8efbd9","Type":"ContainerDied","Data":"f716b85fef94f64b128ef547aedb44e0273064280618e5c08de5e07069728f8d"} Jan 29 07:23:14 crc kubenswrapper[5017]: I0129 07:23:14.117988 5017 scope.go:117] "RemoveContainer" containerID="3f3a2449aea2de644e4290d6eff2b7568b57827c5f90c50d6c62ddb21c6c3039" Jan 29 07:23:14 crc kubenswrapper[5017]: I0129 07:23:14.149169 5017 scope.go:117] "RemoveContainer" containerID="93a46969466db783f86fae5ad16deaa9fb97ba4ef07da4a7646f196ccf7402fd" Jan 29 07:23:14 crc kubenswrapper[5017]: I0129 07:23:14.172910 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bn6n6"] Jan 29 07:23:14 crc kubenswrapper[5017]: I0129 07:23:14.182788 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-bn6n6"] Jan 29 07:23:14 crc kubenswrapper[5017]: I0129 07:23:14.189457 5017 scope.go:117] "RemoveContainer" containerID="89831cfb6238dd498e1a9e46f83e8fa124ba75587eb6fa3d27e4145de4931e91" Jan 29 07:23:14 crc kubenswrapper[5017]: I0129 07:23:14.214005 5017 scope.go:117] "RemoveContainer" containerID="3f3a2449aea2de644e4290d6eff2b7568b57827c5f90c50d6c62ddb21c6c3039" Jan 29 07:23:14 crc kubenswrapper[5017]: E0129 07:23:14.214901 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f3a2449aea2de644e4290d6eff2b7568b57827c5f90c50d6c62ddb21c6c3039\": container with ID starting with 3f3a2449aea2de644e4290d6eff2b7568b57827c5f90c50d6c62ddb21c6c3039 not found: ID does not exist" containerID="3f3a2449aea2de644e4290d6eff2b7568b57827c5f90c50d6c62ddb21c6c3039" Jan 29 07:23:14 crc kubenswrapper[5017]: I0129 07:23:14.214990 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f3a2449aea2de644e4290d6eff2b7568b57827c5f90c50d6c62ddb21c6c3039"} err="failed to get container status \"3f3a2449aea2de644e4290d6eff2b7568b57827c5f90c50d6c62ddb21c6c3039\": rpc error: code = NotFound desc = could not find container \"3f3a2449aea2de644e4290d6eff2b7568b57827c5f90c50d6c62ddb21c6c3039\": container with ID starting with 3f3a2449aea2de644e4290d6eff2b7568b57827c5f90c50d6c62ddb21c6c3039 not found: ID does not exist" Jan 29 07:23:14 crc kubenswrapper[5017]: I0129 07:23:14.215034 5017 scope.go:117] "RemoveContainer" containerID="93a46969466db783f86fae5ad16deaa9fb97ba4ef07da4a7646f196ccf7402fd" Jan 29 07:23:14 crc kubenswrapper[5017]: E0129 07:23:14.215465 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93a46969466db783f86fae5ad16deaa9fb97ba4ef07da4a7646f196ccf7402fd\": container with ID starting with 93a46969466db783f86fae5ad16deaa9fb97ba4ef07da4a7646f196ccf7402fd not found: ID does not exist" containerID="93a46969466db783f86fae5ad16deaa9fb97ba4ef07da4a7646f196ccf7402fd" Jan 29 07:23:14 crc kubenswrapper[5017]: I0129 07:23:14.215509 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93a46969466db783f86fae5ad16deaa9fb97ba4ef07da4a7646f196ccf7402fd"} err="failed to get container status \"93a46969466db783f86fae5ad16deaa9fb97ba4ef07da4a7646f196ccf7402fd\": rpc error: code = NotFound desc = could not find 
container \"93a46969466db783f86fae5ad16deaa9fb97ba4ef07da4a7646f196ccf7402fd\": container with ID starting with 93a46969466db783f86fae5ad16deaa9fb97ba4ef07da4a7646f196ccf7402fd not found: ID does not exist" Jan 29 07:23:14 crc kubenswrapper[5017]: I0129 07:23:14.215533 5017 scope.go:117] "RemoveContainer" containerID="89831cfb6238dd498e1a9e46f83e8fa124ba75587eb6fa3d27e4145de4931e91" Jan 29 07:23:14 crc kubenswrapper[5017]: E0129 07:23:14.216127 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89831cfb6238dd498e1a9e46f83e8fa124ba75587eb6fa3d27e4145de4931e91\": container with ID starting with 89831cfb6238dd498e1a9e46f83e8fa124ba75587eb6fa3d27e4145de4931e91 not found: ID does not exist" containerID="89831cfb6238dd498e1a9e46f83e8fa124ba75587eb6fa3d27e4145de4931e91" Jan 29 07:23:14 crc kubenswrapper[5017]: I0129 07:23:14.217218 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89831cfb6238dd498e1a9e46f83e8fa124ba75587eb6fa3d27e4145de4931e91"} err="failed to get container status \"89831cfb6238dd498e1a9e46f83e8fa124ba75587eb6fa3d27e4145de4931e91\": rpc error: code = NotFound desc = could not find container \"89831cfb6238dd498e1a9e46f83e8fa124ba75587eb6fa3d27e4145de4931e91\": container with ID starting with 89831cfb6238dd498e1a9e46f83e8fa124ba75587eb6fa3d27e4145de4931e91 not found: ID does not exist" Jan 29 07:23:14 crc kubenswrapper[5017]: I0129 07:23:14.334896 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fab78c83-e3dd-4ff6-92bb-820b1c8efbd9" path="/var/lib/kubelet/pods/fab78c83-e3dd-4ff6-92bb-820b1c8efbd9/volumes" Jan 29 07:23:26 crc kubenswrapper[5017]: I0129 07:23:26.539706 5017 patch_prober.go:28] interesting pod/machine-config-daemon-895pl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 07:23:26 crc kubenswrapper[5017]: I0129 07:23:26.540492 5017 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 07:23:56 crc kubenswrapper[5017]: I0129 07:23:56.538855 5017 patch_prober.go:28] interesting pod/machine-config-daemon-895pl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 07:23:56 crc kubenswrapper[5017]: I0129 07:23:56.539586 5017 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 07:23:56 crc kubenswrapper[5017]: I0129 07:23:56.539646 5017 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-895pl" Jan 29 07:23:56 crc kubenswrapper[5017]: I0129 07:23:56.540581 5017 kuberuntime_manager.go:1027] "Message for Container of pod" 
containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"eda95f7e260185a6ade0c5035863826083f02ef304688c7ab249e9487bc47a27"} pod="openshift-machine-config-operator/machine-config-daemon-895pl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 07:23:56 crc kubenswrapper[5017]: I0129 07:23:56.540651 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" containerID="cri-o://eda95f7e260185a6ade0c5035863826083f02ef304688c7ab249e9487bc47a27" gracePeriod=600 Jan 29 07:23:56 crc kubenswrapper[5017]: E0129 07:23:56.674040 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 07:23:57 crc kubenswrapper[5017]: I0129 07:23:57.547017 5017 generic.go:334] "Generic (PLEG): container finished" podID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerID="eda95f7e260185a6ade0c5035863826083f02ef304688c7ab249e9487bc47a27" exitCode=0 Jan 29 07:23:57 crc kubenswrapper[5017]: I0129 07:23:57.547056 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-895pl" event={"ID":"2672ef63-7861-4c3d-a1b4-03cc9d18f8e2","Type":"ContainerDied","Data":"eda95f7e260185a6ade0c5035863826083f02ef304688c7ab249e9487bc47a27"} Jan 29 07:23:57 crc kubenswrapper[5017]: I0129 07:23:57.547595 5017 scope.go:117] "RemoveContainer" containerID="e98e8e0874f864f0f71914aafdf553d2c7191a20f1bb371e58e6ce39d88cd9c5" Jan 29 07:23:57 crc kubenswrapper[5017]: I0129 07:23:57.548345 5017 scope.go:117] "RemoveContainer" containerID="eda95f7e260185a6ade0c5035863826083f02ef304688c7ab249e9487bc47a27" Jan 29 07:23:57 crc kubenswrapper[5017]: E0129 07:23:57.548639 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 07:24:10 crc kubenswrapper[5017]: I0129 07:24:10.316231 5017 scope.go:117] "RemoveContainer" containerID="eda95f7e260185a6ade0c5035863826083f02ef304688c7ab249e9487bc47a27" Jan 29 07:24:10 crc kubenswrapper[5017]: E0129 07:24:10.318877 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 07:24:21 crc kubenswrapper[5017]: I0129 07:24:21.317712 5017 scope.go:117] "RemoveContainer" containerID="eda95f7e260185a6ade0c5035863826083f02ef304688c7ab249e9487bc47a27" Jan 29 07:24:21 crc kubenswrapper[5017]: E0129 07:24:21.319145 5017 
Jan 29 07:24:32 crc kubenswrapper[5017]: I0129 07:24:32.317613 5017 scope.go:117] "RemoveContainer" containerID="eda95f7e260185a6ade0c5035863826083f02ef304688c7ab249e9487bc47a27"
Jan 29 07:24:32 crc kubenswrapper[5017]: E0129 07:24:32.318915 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2"
Jan 29 07:24:46 crc kubenswrapper[5017]: I0129 07:24:46.317028 5017 scope.go:117] "RemoveContainer" containerID="eda95f7e260185a6ade0c5035863826083f02ef304688c7ab249e9487bc47a27"
Jan 29 07:24:46 crc kubenswrapper[5017]: E0129 07:24:46.318267 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2"
Jan 29 07:25:01 crc kubenswrapper[5017]: I0129 07:25:01.316877 5017 scope.go:117] "RemoveContainer" containerID="eda95f7e260185a6ade0c5035863826083f02ef304688c7ab249e9487bc47a27"
Jan 29 07:25:01 crc kubenswrapper[5017]: E0129 07:25:01.318518 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2"
Jan 29 07:25:15 crc kubenswrapper[5017]: I0129 07:25:15.316836 5017 scope.go:117] "RemoveContainer" containerID="eda95f7e260185a6ade0c5035863826083f02ef304688c7ab249e9487bc47a27"
Jan 29 07:25:15 crc kubenswrapper[5017]: E0129 07:25:15.318262 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2"
Jan 29 07:25:26 crc kubenswrapper[5017]: I0129 07:25:26.319954 5017 scope.go:117] "RemoveContainer" containerID="eda95f7e260185a6ade0c5035863826083f02ef304688c7ab249e9487bc47a27"
Jan 29 07:25:26 crc kubenswrapper[5017]: E0129 07:25:26.321344 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2"
Jan 29 07:25:38 crc kubenswrapper[5017]: I0129 07:25:38.316494 5017 scope.go:117] "RemoveContainer" containerID="eda95f7e260185a6ade0c5035863826083f02ef304688c7ab249e9487bc47a27"
Jan 29 07:25:38 crc kubenswrapper[5017]: E0129 07:25:38.317536 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2"
Jan 29 07:25:50 crc kubenswrapper[5017]: I0129 07:25:50.316619 5017 scope.go:117] "RemoveContainer" containerID="eda95f7e260185a6ade0c5035863826083f02ef304688c7ab249e9487bc47a27"
Jan 29 07:25:50 crc kubenswrapper[5017]: E0129 07:25:50.318141 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2"
Jan 29 07:26:03 crc kubenswrapper[5017]: I0129 07:26:03.319181 5017 scope.go:117] "RemoveContainer" containerID="eda95f7e260185a6ade0c5035863826083f02ef304688c7ab249e9487bc47a27"
Jan 29 07:26:03 crc kubenswrapper[5017]: E0129 07:26:03.322268 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2"
Jan 29 07:26:17 crc kubenswrapper[5017]: I0129 07:26:17.316773 5017 scope.go:117] "RemoveContainer" containerID="eda95f7e260185a6ade0c5035863826083f02ef304688c7ab249e9487bc47a27"
Jan 29 07:26:17 crc kubenswrapper[5017]: E0129 07:26:17.318305 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2"
Jan 29 07:26:28 crc kubenswrapper[5017]: I0129 07:26:28.317086 5017 scope.go:117] "RemoveContainer" containerID="eda95f7e260185a6ade0c5035863826083f02ef304688c7ab249e9487bc47a27"
Jan 29 07:26:28 crc kubenswrapper[5017]: E0129 07:26:28.319633 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2"
Jan 29 07:26:41 crc kubenswrapper[5017]: I0129 07:26:41.316556 5017 scope.go:117] "RemoveContainer" containerID="eda95f7e260185a6ade0c5035863826083f02ef304688c7ab249e9487bc47a27"
Jan 29 07:26:41 crc kubenswrapper[5017]: E0129 07:26:41.317556 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2"
Jan 29 07:26:56 crc kubenswrapper[5017]: I0129 07:26:56.319111 5017 scope.go:117] "RemoveContainer" containerID="eda95f7e260185a6ade0c5035863826083f02ef304688c7ab249e9487bc47a27"
Jan 29 07:26:56 crc kubenswrapper[5017]: E0129 07:26:56.320320 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2"
Jan 29 07:27:07 crc kubenswrapper[5017]: I0129 07:27:07.316903 5017 scope.go:117] "RemoveContainer" containerID="eda95f7e260185a6ade0c5035863826083f02ef304688c7ab249e9487bc47a27"
Jan 29 07:27:07 crc kubenswrapper[5017]: E0129 07:27:07.318205 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2"
Jan 29 07:27:21 crc kubenswrapper[5017]: I0129 07:27:21.316347 5017 scope.go:117] "RemoveContainer" containerID="eda95f7e260185a6ade0c5035863826083f02ef304688c7ab249e9487bc47a27"
Jan 29 07:27:21 crc kubenswrapper[5017]: E0129 07:27:21.317360 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2"
Jan 29 07:27:34 crc kubenswrapper[5017]: I0129 07:27:34.324938 5017 scope.go:117] "RemoveContainer" containerID="eda95f7e260185a6ade0c5035863826083f02ef304688c7ab249e9487bc47a27"
Jan 29 07:27:34 crc kubenswrapper[5017]: E0129 07:27:34.326729 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2"
Jan 29 07:27:45 crc kubenswrapper[5017]: I0129 07:27:45.317253 5017 scope.go:117] "RemoveContainer" containerID="eda95f7e260185a6ade0c5035863826083f02ef304688c7ab249e9487bc47a27"
Jan 29 07:27:45 crc kubenswrapper[5017]: E0129 07:27:45.318105 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2"
Jan 29 07:27:57 crc kubenswrapper[5017]: I0129 07:27:57.316041 5017 scope.go:117] "RemoveContainer" containerID="eda95f7e260185a6ade0c5035863826083f02ef304688c7ab249e9487bc47a27"
Jan 29 07:27:57 crc kubenswrapper[5017]: E0129 07:27:57.319726 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2"
Jan 29 07:28:09 crc kubenswrapper[5017]: I0129 07:28:09.316404 5017 scope.go:117] "RemoveContainer" containerID="eda95f7e260185a6ade0c5035863826083f02ef304688c7ab249e9487bc47a27"
Jan 29 07:28:09 crc kubenswrapper[5017]: E0129 07:28:09.317578 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2"
Jan 29 07:28:23 crc kubenswrapper[5017]: I0129 07:28:23.316894 5017 scope.go:117] "RemoveContainer" containerID="eda95f7e260185a6ade0c5035863826083f02ef304688c7ab249e9487bc47a27"
Jan 29 07:28:23 crc kubenswrapper[5017]: E0129 07:28:23.318392 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2"
Jan 29 07:28:36 crc kubenswrapper[5017]: I0129 07:28:36.317243 5017 scope.go:117] "RemoveContainer" containerID="eda95f7e260185a6ade0c5035863826083f02ef304688c7ab249e9487bc47a27"
Jan 29 07:28:36 crc kubenswrapper[5017]: E0129 07:28:36.318429 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2"
Jan 29 07:28:48 crc kubenswrapper[5017]: I0129 07:28:48.316560 5017 scope.go:117] "RemoveContainer" containerID="eda95f7e260185a6ade0c5035863826083f02ef304688c7ab249e9487bc47a27"
Jan 29 07:28:48 crc kubenswrapper[5017]: E0129 07:28:48.317890 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2"
Jan 29 07:28:59 crc kubenswrapper[5017]: I0129 07:28:59.316145 5017 scope.go:117] "RemoveContainer" containerID="eda95f7e260185a6ade0c5035863826083f02ef304688c7ab249e9487bc47a27"
Jan 29 07:28:59 crc kubenswrapper[5017]: I0129 07:28:59.804647 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-895pl" event={"ID":"2672ef63-7861-4c3d-a1b4-03cc9d18f8e2","Type":"ContainerStarted","Data":"98c58df4df387cb44119e0981b72137ad25fa21293135d12b782bb9ac47f771f"}
Jan 29 07:30:00 crc kubenswrapper[5017]: I0129 07:30:00.177230 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494530-9czx5"]
Jan 29 07:30:00 crc kubenswrapper[5017]: E0129 07:30:00.180014 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fab78c83-e3dd-4ff6-92bb-820b1c8efbd9" containerName="extract-utilities"
Jan 29 07:30:00 crc kubenswrapper[5017]: I0129 07:30:00.180051 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="fab78c83-e3dd-4ff6-92bb-820b1c8efbd9" containerName="extract-utilities"
Jan 29 07:30:00 crc kubenswrapper[5017]: E0129 07:30:00.180067 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fab78c83-e3dd-4ff6-92bb-820b1c8efbd9" containerName="extract-content"
Jan 29 07:30:00 crc kubenswrapper[5017]: I0129 07:30:00.180075 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="fab78c83-e3dd-4ff6-92bb-820b1c8efbd9" containerName="extract-content"
Jan 29 07:30:00 crc kubenswrapper[5017]: E0129 07:30:00.180117 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fab78c83-e3dd-4ff6-92bb-820b1c8efbd9" containerName="registry-server"
Jan 29 07:30:00 crc kubenswrapper[5017]: I0129 07:30:00.180123 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="fab78c83-e3dd-4ff6-92bb-820b1c8efbd9" containerName="registry-server"
Jan 29 07:30:00 crc kubenswrapper[5017]: I0129 07:30:00.180987 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="fab78c83-e3dd-4ff6-92bb-820b1c8efbd9" containerName="registry-server"
Jan 29 07:30:00 crc kubenswrapper[5017]: I0129 07:30:00.181807 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494530-9czx5"
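The cpu_manager / memory_manager entries just above fire when a new pod is admitted: the resource managers notice checkpointed per-container assignments left behind by a pod that no longer exists (here the deleted certified-operators pod, fab78c83) and drop each stale (podUID, containerName) key before admitting the new containers. A minimal sketch of that reconcile-on-admit sweep, with invented types (the real state is checkpointed to disk, which this ignores):

    package main

    import "fmt"

    // key identifies a container's resource assignment, like the log's
    // (podUID, containerName) pairs.
    type key struct{ podUID, container string }

    // removeStaleState drops assignments whose pod is no longer active,
    // in the spirit of cpu_manager's RemoveStaleState pass above.
    func removeStaleState(assignments map[key][]int, active map[string]bool) {
    	for k := range assignments {
    		if !active[k.podUID] {
    			fmt.Printf("RemoveStaleState: removing container %q of pod %s\n", k.container, k.podUID)
    			delete(assignments, k) // deleting during range is safe in Go
    		}
    	}
    }

    func main() {
    	assignments := map[key][]int{
    		{"fab78c83", "registry-server"}:       {2, 3}, // stale: pod was deleted
    		{"2672ef63", "machine-config-daemon"}: {0, 1},
    	}
    	active := map[string]bool{"2672ef63": true}
    	removeStaleState(assignments, active)
    	fmt.Println("remaining assignments:", len(assignments))
    }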
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494530-9czx5" Jan 29 07:30:00 crc kubenswrapper[5017]: I0129 07:30:00.186675 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 29 07:30:00 crc kubenswrapper[5017]: I0129 07:30:00.192694 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 29 07:30:00 crc kubenswrapper[5017]: I0129 07:30:00.202890 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494530-9czx5"] Jan 29 07:30:00 crc kubenswrapper[5017]: I0129 07:30:00.834728 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f138d5aa-7b32-4869-b1b4-a65a12f430fc-config-volume\") pod \"collect-profiles-29494530-9czx5\" (UID: \"f138d5aa-7b32-4869-b1b4-a65a12f430fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494530-9czx5" Jan 29 07:30:00 crc kubenswrapper[5017]: I0129 07:30:00.835399 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tg7db\" (UniqueName: \"kubernetes.io/projected/f138d5aa-7b32-4869-b1b4-a65a12f430fc-kube-api-access-tg7db\") pod \"collect-profiles-29494530-9czx5\" (UID: \"f138d5aa-7b32-4869-b1b4-a65a12f430fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494530-9czx5" Jan 29 07:30:00 crc kubenswrapper[5017]: I0129 07:30:00.837879 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f138d5aa-7b32-4869-b1b4-a65a12f430fc-secret-volume\") pod \"collect-profiles-29494530-9czx5\" (UID: \"f138d5aa-7b32-4869-b1b4-a65a12f430fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494530-9czx5" Jan 29 07:30:00 crc kubenswrapper[5017]: I0129 07:30:00.940100 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f138d5aa-7b32-4869-b1b4-a65a12f430fc-config-volume\") pod \"collect-profiles-29494530-9czx5\" (UID: \"f138d5aa-7b32-4869-b1b4-a65a12f430fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494530-9czx5" Jan 29 07:30:00 crc kubenswrapper[5017]: I0129 07:30:00.940200 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tg7db\" (UniqueName: \"kubernetes.io/projected/f138d5aa-7b32-4869-b1b4-a65a12f430fc-kube-api-access-tg7db\") pod \"collect-profiles-29494530-9czx5\" (UID: \"f138d5aa-7b32-4869-b1b4-a65a12f430fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494530-9czx5" Jan 29 07:30:00 crc kubenswrapper[5017]: I0129 07:30:00.940269 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f138d5aa-7b32-4869-b1b4-a65a12f430fc-secret-volume\") pod \"collect-profiles-29494530-9czx5\" (UID: \"f138d5aa-7b32-4869-b1b4-a65a12f430fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494530-9czx5" Jan 29 07:30:00 crc kubenswrapper[5017]: I0129 07:30:00.941481 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f138d5aa-7b32-4869-b1b4-a65a12f430fc-config-volume\") pod 
\"collect-profiles-29494530-9czx5\" (UID: \"f138d5aa-7b32-4869-b1b4-a65a12f430fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494530-9czx5" Jan 29 07:30:00 crc kubenswrapper[5017]: I0129 07:30:00.948250 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f138d5aa-7b32-4869-b1b4-a65a12f430fc-secret-volume\") pod \"collect-profiles-29494530-9czx5\" (UID: \"f138d5aa-7b32-4869-b1b4-a65a12f430fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494530-9czx5" Jan 29 07:30:00 crc kubenswrapper[5017]: I0129 07:30:00.963229 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tg7db\" (UniqueName: \"kubernetes.io/projected/f138d5aa-7b32-4869-b1b4-a65a12f430fc-kube-api-access-tg7db\") pod \"collect-profiles-29494530-9czx5\" (UID: \"f138d5aa-7b32-4869-b1b4-a65a12f430fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494530-9czx5" Jan 29 07:30:01 crc kubenswrapper[5017]: I0129 07:30:01.109656 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494530-9czx5" Jan 29 07:30:01 crc kubenswrapper[5017]: I0129 07:30:01.565431 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494530-9czx5"] Jan 29 07:30:01 crc kubenswrapper[5017]: I0129 07:30:01.855784 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494530-9czx5" event={"ID":"f138d5aa-7b32-4869-b1b4-a65a12f430fc","Type":"ContainerStarted","Data":"440a13c69bdd26fba6ace80ffe30f3168b17ae7aeba4259a9d5cc462d363690a"} Jan 29 07:30:01 crc kubenswrapper[5017]: I0129 07:30:01.856443 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494530-9czx5" event={"ID":"f138d5aa-7b32-4869-b1b4-a65a12f430fc","Type":"ContainerStarted","Data":"ef6f4f411d099914d1761acedc361f34c05972e4d1a749c092e5b610fba3b3be"} Jan 29 07:30:01 crc kubenswrapper[5017]: I0129 07:30:01.874058 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29494530-9czx5" podStartSLOduration=1.8740351039999998 podStartE2EDuration="1.874035104s" podCreationTimestamp="2026-01-29 07:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 07:30:01.872035116 +0000 UTC m=+3288.246482726" watchObservedRunningTime="2026-01-29 07:30:01.874035104 +0000 UTC m=+3288.248482714" Jan 29 07:30:02 crc kubenswrapper[5017]: I0129 07:30:02.865316 5017 generic.go:334] "Generic (PLEG): container finished" podID="f138d5aa-7b32-4869-b1b4-a65a12f430fc" containerID="440a13c69bdd26fba6ace80ffe30f3168b17ae7aeba4259a9d5cc462d363690a" exitCode=0 Jan 29 07:30:02 crc kubenswrapper[5017]: I0129 07:30:02.865383 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494530-9czx5" event={"ID":"f138d5aa-7b32-4869-b1b4-a65a12f430fc","Type":"ContainerDied","Data":"440a13c69bdd26fba6ace80ffe30f3168b17ae7aeba4259a9d5cc462d363690a"} Jan 29 07:30:04 crc kubenswrapper[5017]: I0129 07:30:04.216415 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494530-9czx5" Jan 29 07:30:04 crc kubenswrapper[5017]: I0129 07:30:04.302610 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f138d5aa-7b32-4869-b1b4-a65a12f430fc-secret-volume\") pod \"f138d5aa-7b32-4869-b1b4-a65a12f430fc\" (UID: \"f138d5aa-7b32-4869-b1b4-a65a12f430fc\") " Jan 29 07:30:04 crc kubenswrapper[5017]: I0129 07:30:04.302702 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tg7db\" (UniqueName: \"kubernetes.io/projected/f138d5aa-7b32-4869-b1b4-a65a12f430fc-kube-api-access-tg7db\") pod \"f138d5aa-7b32-4869-b1b4-a65a12f430fc\" (UID: \"f138d5aa-7b32-4869-b1b4-a65a12f430fc\") " Jan 29 07:30:04 crc kubenswrapper[5017]: I0129 07:30:04.302773 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f138d5aa-7b32-4869-b1b4-a65a12f430fc-config-volume\") pod \"f138d5aa-7b32-4869-b1b4-a65a12f430fc\" (UID: \"f138d5aa-7b32-4869-b1b4-a65a12f430fc\") " Jan 29 07:30:04 crc kubenswrapper[5017]: I0129 07:30:04.304194 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f138d5aa-7b32-4869-b1b4-a65a12f430fc-config-volume" (OuterVolumeSpecName: "config-volume") pod "f138d5aa-7b32-4869-b1b4-a65a12f430fc" (UID: "f138d5aa-7b32-4869-b1b4-a65a12f430fc"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 07:30:04 crc kubenswrapper[5017]: I0129 07:30:04.311874 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f138d5aa-7b32-4869-b1b4-a65a12f430fc-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "f138d5aa-7b32-4869-b1b4-a65a12f430fc" (UID: "f138d5aa-7b32-4869-b1b4-a65a12f430fc"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:30:04 crc kubenswrapper[5017]: I0129 07:30:04.312089 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f138d5aa-7b32-4869-b1b4-a65a12f430fc-kube-api-access-tg7db" (OuterVolumeSpecName: "kube-api-access-tg7db") pod "f138d5aa-7b32-4869-b1b4-a65a12f430fc" (UID: "f138d5aa-7b32-4869-b1b4-a65a12f430fc"). InnerVolumeSpecName "kube-api-access-tg7db". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:30:04 crc kubenswrapper[5017]: I0129 07:30:04.404790 5017 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f138d5aa-7b32-4869-b1b4-a65a12f430fc-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 29 07:30:04 crc kubenswrapper[5017]: I0129 07:30:04.404858 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tg7db\" (UniqueName: \"kubernetes.io/projected/f138d5aa-7b32-4869-b1b4-a65a12f430fc-kube-api-access-tg7db\") on node \"crc\" DevicePath \"\"" Jan 29 07:30:04 crc kubenswrapper[5017]: I0129 07:30:04.404877 5017 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f138d5aa-7b32-4869-b1b4-a65a12f430fc-config-volume\") on node \"crc\" DevicePath \"\"" Jan 29 07:30:04 crc kubenswrapper[5017]: I0129 07:30:04.661523 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494485-pqldc"] Jan 29 07:30:04 crc kubenswrapper[5017]: I0129 07:30:04.670824 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494485-pqldc"] Jan 29 07:30:04 crc kubenswrapper[5017]: I0129 07:30:04.892084 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494530-9czx5" event={"ID":"f138d5aa-7b32-4869-b1b4-a65a12f430fc","Type":"ContainerDied","Data":"ef6f4f411d099914d1761acedc361f34c05972e4d1a749c092e5b610fba3b3be"} Jan 29 07:30:04 crc kubenswrapper[5017]: I0129 07:30:04.892164 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494530-9czx5" Jan 29 07:30:04 crc kubenswrapper[5017]: I0129 07:30:04.892171 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef6f4f411d099914d1761acedc361f34c05972e4d1a749c092e5b610fba3b3be" Jan 29 07:30:06 crc kubenswrapper[5017]: I0129 07:30:06.327606 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb349dfb-25a6-4a65-b09e-c237a5369ea2" path="/var/lib/kubelet/pods/cb349dfb-25a6-4a65-b09e-c237a5369ea2/volumes" Jan 29 07:30:22 crc kubenswrapper[5017]: I0129 07:30:22.260849 5017 scope.go:117] "RemoveContainer" containerID="cf0045d22bb998156763c4ec75b918062dddf28012b2e870b5e5a811dd76bb2b" Jan 29 07:31:26 crc kubenswrapper[5017]: I0129 07:31:26.539253 5017 patch_prober.go:28] interesting pod/machine-config-daemon-895pl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 07:31:26 crc kubenswrapper[5017]: I0129 07:31:26.540094 5017 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 07:31:48 crc kubenswrapper[5017]: I0129 07:31:48.963087 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7b474"] Jan 29 07:31:48 crc kubenswrapper[5017]: E0129 07:31:48.964444 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f138d5aa-7b32-4869-b1b4-a65a12f430fc" 
containerName="collect-profiles" Jan 29 07:31:48 crc kubenswrapper[5017]: I0129 07:31:48.964464 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="f138d5aa-7b32-4869-b1b4-a65a12f430fc" containerName="collect-profiles" Jan 29 07:31:48 crc kubenswrapper[5017]: I0129 07:31:48.964699 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="f138d5aa-7b32-4869-b1b4-a65a12f430fc" containerName="collect-profiles" Jan 29 07:31:48 crc kubenswrapper[5017]: I0129 07:31:48.966233 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7b474" Jan 29 07:31:48 crc kubenswrapper[5017]: I0129 07:31:48.980889 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7b474"] Jan 29 07:31:49 crc kubenswrapper[5017]: I0129 07:31:49.104180 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7849275c-aa8e-434b-a908-b3db12987b83-catalog-content\") pod \"redhat-marketplace-7b474\" (UID: \"7849275c-aa8e-434b-a908-b3db12987b83\") " pod="openshift-marketplace/redhat-marketplace-7b474" Jan 29 07:31:49 crc kubenswrapper[5017]: I0129 07:31:49.104288 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrqfp\" (UniqueName: \"kubernetes.io/projected/7849275c-aa8e-434b-a908-b3db12987b83-kube-api-access-jrqfp\") pod \"redhat-marketplace-7b474\" (UID: \"7849275c-aa8e-434b-a908-b3db12987b83\") " pod="openshift-marketplace/redhat-marketplace-7b474" Jan 29 07:31:49 crc kubenswrapper[5017]: I0129 07:31:49.104508 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7849275c-aa8e-434b-a908-b3db12987b83-utilities\") pod \"redhat-marketplace-7b474\" (UID: \"7849275c-aa8e-434b-a908-b3db12987b83\") " pod="openshift-marketplace/redhat-marketplace-7b474" Jan 29 07:31:49 crc kubenswrapper[5017]: I0129 07:31:49.206505 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7849275c-aa8e-434b-a908-b3db12987b83-utilities\") pod \"redhat-marketplace-7b474\" (UID: \"7849275c-aa8e-434b-a908-b3db12987b83\") " pod="openshift-marketplace/redhat-marketplace-7b474" Jan 29 07:31:49 crc kubenswrapper[5017]: I0129 07:31:49.206650 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7849275c-aa8e-434b-a908-b3db12987b83-catalog-content\") pod \"redhat-marketplace-7b474\" (UID: \"7849275c-aa8e-434b-a908-b3db12987b83\") " pod="openshift-marketplace/redhat-marketplace-7b474" Jan 29 07:31:49 crc kubenswrapper[5017]: I0129 07:31:49.206681 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrqfp\" (UniqueName: \"kubernetes.io/projected/7849275c-aa8e-434b-a908-b3db12987b83-kube-api-access-jrqfp\") pod \"redhat-marketplace-7b474\" (UID: \"7849275c-aa8e-434b-a908-b3db12987b83\") " pod="openshift-marketplace/redhat-marketplace-7b474" Jan 29 07:31:49 crc kubenswrapper[5017]: I0129 07:31:49.207417 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7849275c-aa8e-434b-a908-b3db12987b83-utilities\") pod \"redhat-marketplace-7b474\" (UID: \"7849275c-aa8e-434b-a908-b3db12987b83\") " 
pod="openshift-marketplace/redhat-marketplace-7b474" Jan 29 07:31:49 crc kubenswrapper[5017]: I0129 07:31:49.207522 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7849275c-aa8e-434b-a908-b3db12987b83-catalog-content\") pod \"redhat-marketplace-7b474\" (UID: \"7849275c-aa8e-434b-a908-b3db12987b83\") " pod="openshift-marketplace/redhat-marketplace-7b474" Jan 29 07:31:49 crc kubenswrapper[5017]: I0129 07:31:49.231126 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrqfp\" (UniqueName: \"kubernetes.io/projected/7849275c-aa8e-434b-a908-b3db12987b83-kube-api-access-jrqfp\") pod \"redhat-marketplace-7b474\" (UID: \"7849275c-aa8e-434b-a908-b3db12987b83\") " pod="openshift-marketplace/redhat-marketplace-7b474" Jan 29 07:31:49 crc kubenswrapper[5017]: I0129 07:31:49.298701 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7b474" Jan 29 07:31:49 crc kubenswrapper[5017]: I0129 07:31:49.599237 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7b474"] Jan 29 07:31:49 crc kubenswrapper[5017]: W0129 07:31:49.604633 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7849275c_aa8e_434b_a908_b3db12987b83.slice/crio-695321a6d59fbe5f9bf83d00a63d3e5e9ca5f12b3034d93af7104d8cbb5702bf WatchSource:0}: Error finding container 695321a6d59fbe5f9bf83d00a63d3e5e9ca5f12b3034d93af7104d8cbb5702bf: Status 404 returned error can't find the container with id 695321a6d59fbe5f9bf83d00a63d3e5e9ca5f12b3034d93af7104d8cbb5702bf Jan 29 07:31:49 crc kubenswrapper[5017]: I0129 07:31:49.825583 5017 generic.go:334] "Generic (PLEG): container finished" podID="7849275c-aa8e-434b-a908-b3db12987b83" containerID="1aa2006a42424d87376acd691eca10f163edd82bc55113e8884cc629100b2321" exitCode=0 Jan 29 07:31:49 crc kubenswrapper[5017]: I0129 07:31:49.825663 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7b474" event={"ID":"7849275c-aa8e-434b-a908-b3db12987b83","Type":"ContainerDied","Data":"1aa2006a42424d87376acd691eca10f163edd82bc55113e8884cc629100b2321"} Jan 29 07:31:49 crc kubenswrapper[5017]: I0129 07:31:49.826073 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7b474" event={"ID":"7849275c-aa8e-434b-a908-b3db12987b83","Type":"ContainerStarted","Data":"695321a6d59fbe5f9bf83d00a63d3e5e9ca5f12b3034d93af7104d8cbb5702bf"} Jan 29 07:31:49 crc kubenswrapper[5017]: I0129 07:31:49.828388 5017 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 07:31:50 crc kubenswrapper[5017]: I0129 07:31:50.838713 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7b474" event={"ID":"7849275c-aa8e-434b-a908-b3db12987b83","Type":"ContainerDied","Data":"2786c822e8005ec9b50be896b06493b99bcf160c1e798e4d2b497fc9673d3f21"} Jan 29 07:31:50 crc kubenswrapper[5017]: I0129 07:31:50.838617 5017 generic.go:334] "Generic (PLEG): container finished" podID="7849275c-aa8e-434b-a908-b3db12987b83" containerID="2786c822e8005ec9b50be896b06493b99bcf160c1e798e4d2b497fc9673d3f21" exitCode=0 Jan 29 07:31:51 crc kubenswrapper[5017]: I0129 07:31:51.857262 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7b474" 
event={"ID":"7849275c-aa8e-434b-a908-b3db12987b83","Type":"ContainerStarted","Data":"ffdf4a0f0219b77ca8d646691d8b05d253f58f2fa5fb2f8fa8c4a6afc35d737e"} Jan 29 07:31:51 crc kubenswrapper[5017]: I0129 07:31:51.899563 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7b474" podStartSLOduration=2.456605849 podStartE2EDuration="3.899539887s" podCreationTimestamp="2026-01-29 07:31:48 +0000 UTC" firstStartedPulling="2026-01-29 07:31:49.828040809 +0000 UTC m=+3396.202488419" lastFinishedPulling="2026-01-29 07:31:51.270974847 +0000 UTC m=+3397.645422457" observedRunningTime="2026-01-29 07:31:51.896445572 +0000 UTC m=+3398.270893192" watchObservedRunningTime="2026-01-29 07:31:51.899539887 +0000 UTC m=+3398.273987507" Jan 29 07:31:56 crc kubenswrapper[5017]: I0129 07:31:56.538826 5017 patch_prober.go:28] interesting pod/machine-config-daemon-895pl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 07:31:56 crc kubenswrapper[5017]: I0129 07:31:56.539275 5017 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 07:31:59 crc kubenswrapper[5017]: I0129 07:31:59.299842 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-7b474" Jan 29 07:31:59 crc kubenswrapper[5017]: I0129 07:31:59.300248 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7b474" Jan 29 07:31:59 crc kubenswrapper[5017]: I0129 07:31:59.358335 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7b474" Jan 29 07:31:59 crc kubenswrapper[5017]: I0129 07:31:59.982598 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7b474" Jan 29 07:32:00 crc kubenswrapper[5017]: I0129 07:32:00.033427 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7b474"] Jan 29 07:32:01 crc kubenswrapper[5017]: I0129 07:32:01.942599 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-7b474" podUID="7849275c-aa8e-434b-a908-b3db12987b83" containerName="registry-server" containerID="cri-o://ffdf4a0f0219b77ca8d646691d8b05d253f58f2fa5fb2f8fa8c4a6afc35d737e" gracePeriod=2 Jan 29 07:32:02 crc kubenswrapper[5017]: I0129 07:32:02.381555 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7b474" Jan 29 07:32:02 crc kubenswrapper[5017]: I0129 07:32:02.451816 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7849275c-aa8e-434b-a908-b3db12987b83-utilities\") pod \"7849275c-aa8e-434b-a908-b3db12987b83\" (UID: \"7849275c-aa8e-434b-a908-b3db12987b83\") " Jan 29 07:32:02 crc kubenswrapper[5017]: I0129 07:32:02.451936 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7849275c-aa8e-434b-a908-b3db12987b83-catalog-content\") pod \"7849275c-aa8e-434b-a908-b3db12987b83\" (UID: \"7849275c-aa8e-434b-a908-b3db12987b83\") " Jan 29 07:32:02 crc kubenswrapper[5017]: I0129 07:32:02.452038 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jrqfp\" (UniqueName: \"kubernetes.io/projected/7849275c-aa8e-434b-a908-b3db12987b83-kube-api-access-jrqfp\") pod \"7849275c-aa8e-434b-a908-b3db12987b83\" (UID: \"7849275c-aa8e-434b-a908-b3db12987b83\") " Jan 29 07:32:02 crc kubenswrapper[5017]: I0129 07:32:02.453769 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7849275c-aa8e-434b-a908-b3db12987b83-utilities" (OuterVolumeSpecName: "utilities") pod "7849275c-aa8e-434b-a908-b3db12987b83" (UID: "7849275c-aa8e-434b-a908-b3db12987b83"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:32:02 crc kubenswrapper[5017]: I0129 07:32:02.460697 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7849275c-aa8e-434b-a908-b3db12987b83-kube-api-access-jrqfp" (OuterVolumeSpecName: "kube-api-access-jrqfp") pod "7849275c-aa8e-434b-a908-b3db12987b83" (UID: "7849275c-aa8e-434b-a908-b3db12987b83"). InnerVolumeSpecName "kube-api-access-jrqfp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:32:02 crc kubenswrapper[5017]: I0129 07:32:02.477369 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7849275c-aa8e-434b-a908-b3db12987b83-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7849275c-aa8e-434b-a908-b3db12987b83" (UID: "7849275c-aa8e-434b-a908-b3db12987b83"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:32:02 crc kubenswrapper[5017]: I0129 07:32:02.554108 5017 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7849275c-aa8e-434b-a908-b3db12987b83-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 07:32:02 crc kubenswrapper[5017]: I0129 07:32:02.554143 5017 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7849275c-aa8e-434b-a908-b3db12987b83-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 07:32:02 crc kubenswrapper[5017]: I0129 07:32:02.554156 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jrqfp\" (UniqueName: \"kubernetes.io/projected/7849275c-aa8e-434b-a908-b3db12987b83-kube-api-access-jrqfp\") on node \"crc\" DevicePath \"\"" Jan 29 07:32:02 crc kubenswrapper[5017]: I0129 07:32:02.955650 5017 generic.go:334] "Generic (PLEG): container finished" podID="7849275c-aa8e-434b-a908-b3db12987b83" containerID="ffdf4a0f0219b77ca8d646691d8b05d253f58f2fa5fb2f8fa8c4a6afc35d737e" exitCode=0 Jan 29 07:32:02 crc kubenswrapper[5017]: I0129 07:32:02.955722 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7b474" event={"ID":"7849275c-aa8e-434b-a908-b3db12987b83","Type":"ContainerDied","Data":"ffdf4a0f0219b77ca8d646691d8b05d253f58f2fa5fb2f8fa8c4a6afc35d737e"} Jan 29 07:32:02 crc kubenswrapper[5017]: I0129 07:32:02.955782 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7b474" Jan 29 07:32:02 crc kubenswrapper[5017]: I0129 07:32:02.955824 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7b474" event={"ID":"7849275c-aa8e-434b-a908-b3db12987b83","Type":"ContainerDied","Data":"695321a6d59fbe5f9bf83d00a63d3e5e9ca5f12b3034d93af7104d8cbb5702bf"} Jan 29 07:32:02 crc kubenswrapper[5017]: I0129 07:32:02.955858 5017 scope.go:117] "RemoveContainer" containerID="ffdf4a0f0219b77ca8d646691d8b05d253f58f2fa5fb2f8fa8c4a6afc35d737e" Jan 29 07:32:02 crc kubenswrapper[5017]: I0129 07:32:02.980684 5017 scope.go:117] "RemoveContainer" containerID="2786c822e8005ec9b50be896b06493b99bcf160c1e798e4d2b497fc9673d3f21" Jan 29 07:32:03 crc kubenswrapper[5017]: I0129 07:32:03.005392 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7b474"] Jan 29 07:32:03 crc kubenswrapper[5017]: I0129 07:32:03.011406 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-7b474"] Jan 29 07:32:03 crc kubenswrapper[5017]: I0129 07:32:03.028842 5017 scope.go:117] "RemoveContainer" containerID="1aa2006a42424d87376acd691eca10f163edd82bc55113e8884cc629100b2321" Jan 29 07:32:03 crc kubenswrapper[5017]: I0129 07:32:03.057539 5017 scope.go:117] "RemoveContainer" containerID="ffdf4a0f0219b77ca8d646691d8b05d253f58f2fa5fb2f8fa8c4a6afc35d737e" Jan 29 07:32:03 crc kubenswrapper[5017]: E0129 07:32:03.058333 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ffdf4a0f0219b77ca8d646691d8b05d253f58f2fa5fb2f8fa8c4a6afc35d737e\": container with ID starting with ffdf4a0f0219b77ca8d646691d8b05d253f58f2fa5fb2f8fa8c4a6afc35d737e not found: ID does not exist" containerID="ffdf4a0f0219b77ca8d646691d8b05d253f58f2fa5fb2f8fa8c4a6afc35d737e" Jan 29 07:32:03 crc kubenswrapper[5017]: I0129 07:32:03.058417 5017 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ffdf4a0f0219b77ca8d646691d8b05d253f58f2fa5fb2f8fa8c4a6afc35d737e"} err="failed to get container status \"ffdf4a0f0219b77ca8d646691d8b05d253f58f2fa5fb2f8fa8c4a6afc35d737e\": rpc error: code = NotFound desc = could not find container \"ffdf4a0f0219b77ca8d646691d8b05d253f58f2fa5fb2f8fa8c4a6afc35d737e\": container with ID starting with ffdf4a0f0219b77ca8d646691d8b05d253f58f2fa5fb2f8fa8c4a6afc35d737e not found: ID does not exist" Jan 29 07:32:03 crc kubenswrapper[5017]: I0129 07:32:03.058465 5017 scope.go:117] "RemoveContainer" containerID="2786c822e8005ec9b50be896b06493b99bcf160c1e798e4d2b497fc9673d3f21" Jan 29 07:32:03 crc kubenswrapper[5017]: E0129 07:32:03.059030 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2786c822e8005ec9b50be896b06493b99bcf160c1e798e4d2b497fc9673d3f21\": container with ID starting with 2786c822e8005ec9b50be896b06493b99bcf160c1e798e4d2b497fc9673d3f21 not found: ID does not exist" containerID="2786c822e8005ec9b50be896b06493b99bcf160c1e798e4d2b497fc9673d3f21" Jan 29 07:32:03 crc kubenswrapper[5017]: I0129 07:32:03.059082 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2786c822e8005ec9b50be896b06493b99bcf160c1e798e4d2b497fc9673d3f21"} err="failed to get container status \"2786c822e8005ec9b50be896b06493b99bcf160c1e798e4d2b497fc9673d3f21\": rpc error: code = NotFound desc = could not find container \"2786c822e8005ec9b50be896b06493b99bcf160c1e798e4d2b497fc9673d3f21\": container with ID starting with 2786c822e8005ec9b50be896b06493b99bcf160c1e798e4d2b497fc9673d3f21 not found: ID does not exist" Jan 29 07:32:03 crc kubenswrapper[5017]: I0129 07:32:03.059153 5017 scope.go:117] "RemoveContainer" containerID="1aa2006a42424d87376acd691eca10f163edd82bc55113e8884cc629100b2321" Jan 29 07:32:03 crc kubenswrapper[5017]: E0129 07:32:03.059614 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1aa2006a42424d87376acd691eca10f163edd82bc55113e8884cc629100b2321\": container with ID starting with 1aa2006a42424d87376acd691eca10f163edd82bc55113e8884cc629100b2321 not found: ID does not exist" containerID="1aa2006a42424d87376acd691eca10f163edd82bc55113e8884cc629100b2321" Jan 29 07:32:03 crc kubenswrapper[5017]: I0129 07:32:03.059643 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1aa2006a42424d87376acd691eca10f163edd82bc55113e8884cc629100b2321"} err="failed to get container status \"1aa2006a42424d87376acd691eca10f163edd82bc55113e8884cc629100b2321\": rpc error: code = NotFound desc = could not find container \"1aa2006a42424d87376acd691eca10f163edd82bc55113e8884cc629100b2321\": container with ID starting with 1aa2006a42424d87376acd691eca10f163edd82bc55113e8884cc629100b2321 not found: ID does not exist" Jan 29 07:32:04 crc kubenswrapper[5017]: I0129 07:32:04.330932 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7849275c-aa8e-434b-a908-b3db12987b83" path="/var/lib/kubelet/pods/7849275c-aa8e-434b-a908-b3db12987b83/volumes" Jan 29 07:32:26 crc kubenswrapper[5017]: I0129 07:32:26.539510 5017 patch_prober.go:28] interesting pod/machine-config-daemon-895pl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
Jan 29 07:32:26 crc kubenswrapper[5017]: I0129 07:32:26.540407 5017 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 29 07:32:26 crc kubenswrapper[5017]: I0129 07:32:26.540485 5017 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-895pl"
Jan 29 07:32:26 crc kubenswrapper[5017]: I0129 07:32:26.541648 5017 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"98c58df4df387cb44119e0981b72137ad25fa21293135d12b782bb9ac47f771f"} pod="openshift-machine-config-operator/machine-config-daemon-895pl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 29 07:32:26 crc kubenswrapper[5017]: I0129 07:32:26.541733 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" containerID="cri-o://98c58df4df387cb44119e0981b72137ad25fa21293135d12b782bb9ac47f771f" gracePeriod=600
Jan 29 07:32:27 crc kubenswrapper[5017]: I0129 07:32:27.177600 5017 generic.go:334] "Generic (PLEG): container finished" podID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerID="98c58df4df387cb44119e0981b72137ad25fa21293135d12b782bb9ac47f771f" exitCode=0
Jan 29 07:32:27 crc kubenswrapper[5017]: I0129 07:32:27.177717 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-895pl" event={"ID":"2672ef63-7861-4c3d-a1b4-03cc9d18f8e2","Type":"ContainerDied","Data":"98c58df4df387cb44119e0981b72137ad25fa21293135d12b782bb9ac47f771f"}
Jan 29 07:32:27 crc kubenswrapper[5017]: I0129 07:32:27.178677 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-895pl" event={"ID":"2672ef63-7861-4c3d-a1b4-03cc9d18f8e2","Type":"ContainerStarted","Data":"5e01b62b8a4eb7ca96687f9004d513e2db29ba172596fb145d9c596074e6a5b5"}
Jan 29 07:32:27 crc kubenswrapper[5017]: I0129 07:32:27.178713 5017 scope.go:117] "RemoveContainer" containerID="eda95f7e260185a6ade0c5035863826083f02ef304688c7ab249e9487bc47a27"
Jan 29 07:32:39 crc kubenswrapper[5017]: I0129 07:32:39.547868 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-dr2dw"]
Jan 29 07:32:39 crc kubenswrapper[5017]: E0129 07:32:39.549810 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7849275c-aa8e-434b-a908-b3db12987b83" containerName="extract-content"
Jan 29 07:32:39 crc kubenswrapper[5017]: I0129 07:32:39.549891 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="7849275c-aa8e-434b-a908-b3db12987b83" containerName="extract-content"
Jan 29 07:32:39 crc kubenswrapper[5017]: E0129 07:32:39.550522 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7849275c-aa8e-434b-a908-b3db12987b83" containerName="registry-server"
Jan 29 07:32:39 crc kubenswrapper[5017]: I0129 07:32:39.550607 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="7849275c-aa8e-434b-a908-b3db12987b83" containerName="registry-server"
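Stepping back to the 07:32:26 liveness failure above: the kill at gracePeriod=600 (versus gracePeriod=2 for the short-lived registry pod earlier) is the standard two-phase stop, SIGTERM first, SIGKILL only if the grace period expires; here the daemon exited promptly with exitCode=0. A minimal sketch of that sequence for an ordinary child process (signals and error handling simplified; the runtime does this through the container's init, not a direct child):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"syscall"
    	"time"
    )

    // stopWithGrace sends SIGTERM, waits up to grace for a clean exit, and
    // escalates to SIGKILL -- the shape behind "Killing container with a
    // grace period ... gracePeriod=600".
    func stopWithGrace(cmd *exec.Cmd, grace time.Duration) {
    	done := make(chan error, 1)
    	go func() { done <- cmd.Wait() }()
    	cmd.Process.Signal(syscall.SIGTERM)
    	select {
    	case <-done:
    		fmt.Println("exited within grace period")
    	case <-time.After(grace):
    		cmd.Process.Kill()
    		<-done
    		fmt.Println("grace period expired, killed")
    	}
    }

    func main() {
    	cmd := exec.Command("sleep", "30")
    	if err := cmd.Start(); err != nil {
    		panic(err)
    	}
    	stopWithGrace(cmd, 2*time.Second) // cf. gracePeriod=2 for registry-server
    }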
containerName="registry-server" Jan 29 07:32:39 crc kubenswrapper[5017]: E0129 07:32:39.550698 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7849275c-aa8e-434b-a908-b3db12987b83" containerName="extract-utilities" Jan 29 07:32:39 crc kubenswrapper[5017]: I0129 07:32:39.550778 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="7849275c-aa8e-434b-a908-b3db12987b83" containerName="extract-utilities" Jan 29 07:32:39 crc kubenswrapper[5017]: I0129 07:32:39.551064 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="7849275c-aa8e-434b-a908-b3db12987b83" containerName="registry-server" Jan 29 07:32:39 crc kubenswrapper[5017]: I0129 07:32:39.552263 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dr2dw" Jan 29 07:32:39 crc kubenswrapper[5017]: I0129 07:32:39.554878 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dr2dw"] Jan 29 07:32:39 crc kubenswrapper[5017]: I0129 07:32:39.609214 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae699d67-c99d-407a-b414-50a86da3d045-catalog-content\") pod \"community-operators-dr2dw\" (UID: \"ae699d67-c99d-407a-b414-50a86da3d045\") " pod="openshift-marketplace/community-operators-dr2dw" Jan 29 07:32:39 crc kubenswrapper[5017]: I0129 07:32:39.609287 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggdqs\" (UniqueName: \"kubernetes.io/projected/ae699d67-c99d-407a-b414-50a86da3d045-kube-api-access-ggdqs\") pod \"community-operators-dr2dw\" (UID: \"ae699d67-c99d-407a-b414-50a86da3d045\") " pod="openshift-marketplace/community-operators-dr2dw" Jan 29 07:32:39 crc kubenswrapper[5017]: I0129 07:32:39.609368 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae699d67-c99d-407a-b414-50a86da3d045-utilities\") pod \"community-operators-dr2dw\" (UID: \"ae699d67-c99d-407a-b414-50a86da3d045\") " pod="openshift-marketplace/community-operators-dr2dw" Jan 29 07:32:39 crc kubenswrapper[5017]: I0129 07:32:39.711243 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae699d67-c99d-407a-b414-50a86da3d045-catalog-content\") pod \"community-operators-dr2dw\" (UID: \"ae699d67-c99d-407a-b414-50a86da3d045\") " pod="openshift-marketplace/community-operators-dr2dw" Jan 29 07:32:39 crc kubenswrapper[5017]: I0129 07:32:39.711312 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggdqs\" (UniqueName: \"kubernetes.io/projected/ae699d67-c99d-407a-b414-50a86da3d045-kube-api-access-ggdqs\") pod \"community-operators-dr2dw\" (UID: \"ae699d67-c99d-407a-b414-50a86da3d045\") " pod="openshift-marketplace/community-operators-dr2dw" Jan 29 07:32:39 crc kubenswrapper[5017]: I0129 07:32:39.711361 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae699d67-c99d-407a-b414-50a86da3d045-utilities\") pod \"community-operators-dr2dw\" (UID: \"ae699d67-c99d-407a-b414-50a86da3d045\") " pod="openshift-marketplace/community-operators-dr2dw" Jan 29 07:32:39 crc kubenswrapper[5017]: I0129 07:32:39.711875 5017 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae699d67-c99d-407a-b414-50a86da3d045-catalog-content\") pod \"community-operators-dr2dw\" (UID: \"ae699d67-c99d-407a-b414-50a86da3d045\") " pod="openshift-marketplace/community-operators-dr2dw" Jan 29 07:32:39 crc kubenswrapper[5017]: I0129 07:32:39.712079 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae699d67-c99d-407a-b414-50a86da3d045-utilities\") pod \"community-operators-dr2dw\" (UID: \"ae699d67-c99d-407a-b414-50a86da3d045\") " pod="openshift-marketplace/community-operators-dr2dw" Jan 29 07:32:39 crc kubenswrapper[5017]: I0129 07:32:39.737849 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggdqs\" (UniqueName: \"kubernetes.io/projected/ae699d67-c99d-407a-b414-50a86da3d045-kube-api-access-ggdqs\") pod \"community-operators-dr2dw\" (UID: \"ae699d67-c99d-407a-b414-50a86da3d045\") " pod="openshift-marketplace/community-operators-dr2dw" Jan 29 07:32:39 crc kubenswrapper[5017]: I0129 07:32:39.877164 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dr2dw" Jan 29 07:32:40 crc kubenswrapper[5017]: I0129 07:32:40.432194 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dr2dw"] Jan 29 07:32:41 crc kubenswrapper[5017]: I0129 07:32:41.341709 5017 generic.go:334] "Generic (PLEG): container finished" podID="ae699d67-c99d-407a-b414-50a86da3d045" containerID="e3f68cc8aff8987b539f54f394a6deed5524a8ec4e7f5399ddbebd4258f5062c" exitCode=0 Jan 29 07:32:41 crc kubenswrapper[5017]: I0129 07:32:41.341890 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dr2dw" event={"ID":"ae699d67-c99d-407a-b414-50a86da3d045","Type":"ContainerDied","Data":"e3f68cc8aff8987b539f54f394a6deed5524a8ec4e7f5399ddbebd4258f5062c"} Jan 29 07:32:41 crc kubenswrapper[5017]: I0129 07:32:41.342342 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dr2dw" event={"ID":"ae699d67-c99d-407a-b414-50a86da3d045","Type":"ContainerStarted","Data":"ed428aa69373c7bea47981541570399789228dfe7b61b771f7634313d64dcddf"} Jan 29 07:32:41 crc kubenswrapper[5017]: I0129 07:32:41.937751 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ccgdz"] Jan 29 07:32:41 crc kubenswrapper[5017]: I0129 07:32:41.939644 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ccgdz" Jan 29 07:32:41 crc kubenswrapper[5017]: I0129 07:32:41.965742 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ccgdz"] Jan 29 07:32:42 crc kubenswrapper[5017]: I0129 07:32:42.045815 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fae38964-01c4-45f6-9761-80d52b27d664-catalog-content\") pod \"redhat-operators-ccgdz\" (UID: \"fae38964-01c4-45f6-9761-80d52b27d664\") " pod="openshift-marketplace/redhat-operators-ccgdz" Jan 29 07:32:42 crc kubenswrapper[5017]: I0129 07:32:42.045949 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fae38964-01c4-45f6-9761-80d52b27d664-utilities\") pod \"redhat-operators-ccgdz\" (UID: \"fae38964-01c4-45f6-9761-80d52b27d664\") " pod="openshift-marketplace/redhat-operators-ccgdz" Jan 29 07:32:42 crc kubenswrapper[5017]: I0129 07:32:42.046001 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrsk9\" (UniqueName: \"kubernetes.io/projected/fae38964-01c4-45f6-9761-80d52b27d664-kube-api-access-wrsk9\") pod \"redhat-operators-ccgdz\" (UID: \"fae38964-01c4-45f6-9761-80d52b27d664\") " pod="openshift-marketplace/redhat-operators-ccgdz" Jan 29 07:32:42 crc kubenswrapper[5017]: I0129 07:32:42.148375 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fae38964-01c4-45f6-9761-80d52b27d664-catalog-content\") pod \"redhat-operators-ccgdz\" (UID: \"fae38964-01c4-45f6-9761-80d52b27d664\") " pod="openshift-marketplace/redhat-operators-ccgdz" Jan 29 07:32:42 crc kubenswrapper[5017]: I0129 07:32:42.148478 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fae38964-01c4-45f6-9761-80d52b27d664-utilities\") pod \"redhat-operators-ccgdz\" (UID: \"fae38964-01c4-45f6-9761-80d52b27d664\") " pod="openshift-marketplace/redhat-operators-ccgdz" Jan 29 07:32:42 crc kubenswrapper[5017]: I0129 07:32:42.148523 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrsk9\" (UniqueName: \"kubernetes.io/projected/fae38964-01c4-45f6-9761-80d52b27d664-kube-api-access-wrsk9\") pod \"redhat-operators-ccgdz\" (UID: \"fae38964-01c4-45f6-9761-80d52b27d664\") " pod="openshift-marketplace/redhat-operators-ccgdz" Jan 29 07:32:42 crc kubenswrapper[5017]: I0129 07:32:42.149150 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fae38964-01c4-45f6-9761-80d52b27d664-catalog-content\") pod \"redhat-operators-ccgdz\" (UID: \"fae38964-01c4-45f6-9761-80d52b27d664\") " pod="openshift-marketplace/redhat-operators-ccgdz" Jan 29 07:32:42 crc kubenswrapper[5017]: I0129 07:32:42.149242 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fae38964-01c4-45f6-9761-80d52b27d664-utilities\") pod \"redhat-operators-ccgdz\" (UID: \"fae38964-01c4-45f6-9761-80d52b27d664\") " pod="openshift-marketplace/redhat-operators-ccgdz" Jan 29 07:32:42 crc kubenswrapper[5017]: I0129 07:32:42.172290 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-wrsk9\" (UniqueName: \"kubernetes.io/projected/fae38964-01c4-45f6-9761-80d52b27d664-kube-api-access-wrsk9\") pod \"redhat-operators-ccgdz\" (UID: \"fae38964-01c4-45f6-9761-80d52b27d664\") " pod="openshift-marketplace/redhat-operators-ccgdz" Jan 29 07:32:42 crc kubenswrapper[5017]: I0129 07:32:42.275344 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ccgdz" Jan 29 07:32:42 crc kubenswrapper[5017]: I0129 07:32:42.360950 5017 generic.go:334] "Generic (PLEG): container finished" podID="ae699d67-c99d-407a-b414-50a86da3d045" containerID="e71a8adb466daf0ce54816f5707107f48fbcfbce227fbb210a1be7ab51b541b3" exitCode=0 Jan 29 07:32:42 crc kubenswrapper[5017]: I0129 07:32:42.361200 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dr2dw" event={"ID":"ae699d67-c99d-407a-b414-50a86da3d045","Type":"ContainerDied","Data":"e71a8adb466daf0ce54816f5707107f48fbcfbce227fbb210a1be7ab51b541b3"} Jan 29 07:32:42 crc kubenswrapper[5017]: I0129 07:32:42.783610 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ccgdz"] Jan 29 07:32:43 crc kubenswrapper[5017]: I0129 07:32:43.369014 5017 generic.go:334] "Generic (PLEG): container finished" podID="fae38964-01c4-45f6-9761-80d52b27d664" containerID="7d36d318eb5677947de9a2b2866627a20154f24985247675df57e3cab1344e4c" exitCode=0 Jan 29 07:32:43 crc kubenswrapper[5017]: I0129 07:32:43.369099 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ccgdz" event={"ID":"fae38964-01c4-45f6-9761-80d52b27d664","Type":"ContainerDied","Data":"7d36d318eb5677947de9a2b2866627a20154f24985247675df57e3cab1344e4c"} Jan 29 07:32:43 crc kubenswrapper[5017]: I0129 07:32:43.370928 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ccgdz" event={"ID":"fae38964-01c4-45f6-9761-80d52b27d664","Type":"ContainerStarted","Data":"1295baeae8ad358324ec597a676fa758de567b7227f08c84d7b7894f2994cbfb"} Jan 29 07:32:43 crc kubenswrapper[5017]: I0129 07:32:43.376881 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dr2dw" event={"ID":"ae699d67-c99d-407a-b414-50a86da3d045","Type":"ContainerStarted","Data":"7959007233ea9638581924da153eb53e50558dd0087b69e1955b666a9596b830"} Jan 29 07:32:43 crc kubenswrapper[5017]: I0129 07:32:43.414387 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-dr2dw" podStartSLOduration=2.933862904 podStartE2EDuration="4.414358017s" podCreationTimestamp="2026-01-29 07:32:39 +0000 UTC" firstStartedPulling="2026-01-29 07:32:41.344037449 +0000 UTC m=+3447.718485059" lastFinishedPulling="2026-01-29 07:32:42.824532562 +0000 UTC m=+3449.198980172" observedRunningTime="2026-01-29 07:32:43.407351743 +0000 UTC m=+3449.781799363" watchObservedRunningTime="2026-01-29 07:32:43.414358017 +0000 UTC m=+3449.788805617" Jan 29 07:32:44 crc kubenswrapper[5017]: I0129 07:32:44.387014 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ccgdz" event={"ID":"fae38964-01c4-45f6-9761-80d52b27d664","Type":"ContainerStarted","Data":"5e1aefe3d1472e32b5a7083ceb25b040cf60ef98ebdf9ccd968a64449722c1cf"} Jan 29 07:32:45 crc kubenswrapper[5017]: I0129 07:32:45.397623 5017 generic.go:334] "Generic (PLEG): container finished" podID="fae38964-01c4-45f6-9761-80d52b27d664" 
containerID="5e1aefe3d1472e32b5a7083ceb25b040cf60ef98ebdf9ccd968a64449722c1cf" exitCode=0 Jan 29 07:32:45 crc kubenswrapper[5017]: I0129 07:32:45.397678 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ccgdz" event={"ID":"fae38964-01c4-45f6-9761-80d52b27d664","Type":"ContainerDied","Data":"5e1aefe3d1472e32b5a7083ceb25b040cf60ef98ebdf9ccd968a64449722c1cf"} Jan 29 07:32:46 crc kubenswrapper[5017]: I0129 07:32:46.407212 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ccgdz" event={"ID":"fae38964-01c4-45f6-9761-80d52b27d664","Type":"ContainerStarted","Data":"f911186cab70ad3ca52b9a9ab81678abebb90f68541e3237442cd39ed950b30d"} Jan 29 07:32:46 crc kubenswrapper[5017]: I0129 07:32:46.430764 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ccgdz" podStartSLOduration=2.917029247 podStartE2EDuration="5.430745367s" podCreationTimestamp="2026-01-29 07:32:41 +0000 UTC" firstStartedPulling="2026-01-29 07:32:43.370750719 +0000 UTC m=+3449.745198339" lastFinishedPulling="2026-01-29 07:32:45.884466819 +0000 UTC m=+3452.258914459" observedRunningTime="2026-01-29 07:32:46.42726175 +0000 UTC m=+3452.801709360" watchObservedRunningTime="2026-01-29 07:32:46.430745367 +0000 UTC m=+3452.805192977" Jan 29 07:32:49 crc kubenswrapper[5017]: I0129 07:32:49.877582 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-dr2dw" Jan 29 07:32:49 crc kubenswrapper[5017]: I0129 07:32:49.879289 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-dr2dw" Jan 29 07:32:49 crc kubenswrapper[5017]: I0129 07:32:49.930223 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-dr2dw" Jan 29 07:32:50 crc kubenswrapper[5017]: I0129 07:32:50.501579 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-dr2dw" Jan 29 07:32:50 crc kubenswrapper[5017]: I0129 07:32:50.562390 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dr2dw"] Jan 29 07:32:52 crc kubenswrapper[5017]: I0129 07:32:52.276361 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ccgdz" Jan 29 07:32:52 crc kubenswrapper[5017]: I0129 07:32:52.276418 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ccgdz" Jan 29 07:32:52 crc kubenswrapper[5017]: I0129 07:32:52.452410 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-dr2dw" podUID="ae699d67-c99d-407a-b414-50a86da3d045" containerName="registry-server" containerID="cri-o://7959007233ea9638581924da153eb53e50558dd0087b69e1955b666a9596b830" gracePeriod=2 Jan 29 07:32:53 crc kubenswrapper[5017]: I0129 07:32:53.343427 5017 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-ccgdz" podUID="fae38964-01c4-45f6-9761-80d52b27d664" containerName="registry-server" probeResult="failure" output=< Jan 29 07:32:53 crc kubenswrapper[5017]: timeout: failed to connect service ":50051" within 1s Jan 29 07:32:53 crc kubenswrapper[5017]: > Jan 29 07:32:53 crc kubenswrapper[5017]: I0129 07:32:53.485141 5017 generic.go:334] "Generic (PLEG): 
container finished" podID="ae699d67-c99d-407a-b414-50a86da3d045" containerID="7959007233ea9638581924da153eb53e50558dd0087b69e1955b666a9596b830" exitCode=0 Jan 29 07:32:53 crc kubenswrapper[5017]: I0129 07:32:53.485216 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dr2dw" event={"ID":"ae699d67-c99d-407a-b414-50a86da3d045","Type":"ContainerDied","Data":"7959007233ea9638581924da153eb53e50558dd0087b69e1955b666a9596b830"} Jan 29 07:32:53 crc kubenswrapper[5017]: I0129 07:32:53.485292 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dr2dw" event={"ID":"ae699d67-c99d-407a-b414-50a86da3d045","Type":"ContainerDied","Data":"ed428aa69373c7bea47981541570399789228dfe7b61b771f7634313d64dcddf"} Jan 29 07:32:53 crc kubenswrapper[5017]: I0129 07:32:53.485309 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed428aa69373c7bea47981541570399789228dfe7b61b771f7634313d64dcddf" Jan 29 07:32:53 crc kubenswrapper[5017]: I0129 07:32:53.501437 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dr2dw" Jan 29 07:32:53 crc kubenswrapper[5017]: I0129 07:32:53.681002 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ggdqs\" (UniqueName: \"kubernetes.io/projected/ae699d67-c99d-407a-b414-50a86da3d045-kube-api-access-ggdqs\") pod \"ae699d67-c99d-407a-b414-50a86da3d045\" (UID: \"ae699d67-c99d-407a-b414-50a86da3d045\") " Jan 29 07:32:53 crc kubenswrapper[5017]: I0129 07:32:53.681062 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae699d67-c99d-407a-b414-50a86da3d045-utilities\") pod \"ae699d67-c99d-407a-b414-50a86da3d045\" (UID: \"ae699d67-c99d-407a-b414-50a86da3d045\") " Jan 29 07:32:53 crc kubenswrapper[5017]: I0129 07:32:53.681143 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae699d67-c99d-407a-b414-50a86da3d045-catalog-content\") pod \"ae699d67-c99d-407a-b414-50a86da3d045\" (UID: \"ae699d67-c99d-407a-b414-50a86da3d045\") " Jan 29 07:32:53 crc kubenswrapper[5017]: I0129 07:32:53.682842 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae699d67-c99d-407a-b414-50a86da3d045-utilities" (OuterVolumeSpecName: "utilities") pod "ae699d67-c99d-407a-b414-50a86da3d045" (UID: "ae699d67-c99d-407a-b414-50a86da3d045"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:32:53 crc kubenswrapper[5017]: I0129 07:32:53.689751 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae699d67-c99d-407a-b414-50a86da3d045-kube-api-access-ggdqs" (OuterVolumeSpecName: "kube-api-access-ggdqs") pod "ae699d67-c99d-407a-b414-50a86da3d045" (UID: "ae699d67-c99d-407a-b414-50a86da3d045"). InnerVolumeSpecName "kube-api-access-ggdqs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:32:53 crc kubenswrapper[5017]: I0129 07:32:53.746361 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae699d67-c99d-407a-b414-50a86da3d045-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ae699d67-c99d-407a-b414-50a86da3d045" (UID: "ae699d67-c99d-407a-b414-50a86da3d045"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:32:53 crc kubenswrapper[5017]: I0129 07:32:53.783358 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ggdqs\" (UniqueName: \"kubernetes.io/projected/ae699d67-c99d-407a-b414-50a86da3d045-kube-api-access-ggdqs\") on node \"crc\" DevicePath \"\"" Jan 29 07:32:53 crc kubenswrapper[5017]: I0129 07:32:53.783428 5017 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae699d67-c99d-407a-b414-50a86da3d045-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 07:32:53 crc kubenswrapper[5017]: I0129 07:32:53.783441 5017 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae699d67-c99d-407a-b414-50a86da3d045-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 07:32:54 crc kubenswrapper[5017]: I0129 07:32:54.492054 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dr2dw" Jan 29 07:32:54 crc kubenswrapper[5017]: I0129 07:32:54.521804 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dr2dw"] Jan 29 07:32:54 crc kubenswrapper[5017]: I0129 07:32:54.529799 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-dr2dw"] Jan 29 07:32:56 crc kubenswrapper[5017]: I0129 07:32:56.327742 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae699d67-c99d-407a-b414-50a86da3d045" path="/var/lib/kubelet/pods/ae699d67-c99d-407a-b414-50a86da3d045/volumes" Jan 29 07:33:02 crc kubenswrapper[5017]: I0129 07:33:02.333305 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ccgdz" Jan 29 07:33:02 crc kubenswrapper[5017]: I0129 07:33:02.386450 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ccgdz" Jan 29 07:33:02 crc kubenswrapper[5017]: I0129 07:33:02.576125 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ccgdz"] Jan 29 07:33:03 crc kubenswrapper[5017]: I0129 07:33:03.576709 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ccgdz" podUID="fae38964-01c4-45f6-9761-80d52b27d664" containerName="registry-server" containerID="cri-o://f911186cab70ad3ca52b9a9ab81678abebb90f68541e3237442cd39ed950b30d" gracePeriod=2 Jan 29 07:33:04 crc kubenswrapper[5017]: I0129 07:33:04.058179 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ccgdz" Jan 29 07:33:04 crc kubenswrapper[5017]: I0129 07:33:04.154559 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wrsk9\" (UniqueName: \"kubernetes.io/projected/fae38964-01c4-45f6-9761-80d52b27d664-kube-api-access-wrsk9\") pod \"fae38964-01c4-45f6-9761-80d52b27d664\" (UID: \"fae38964-01c4-45f6-9761-80d52b27d664\") " Jan 29 07:33:04 crc kubenswrapper[5017]: I0129 07:33:04.154731 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fae38964-01c4-45f6-9761-80d52b27d664-utilities\") pod \"fae38964-01c4-45f6-9761-80d52b27d664\" (UID: \"fae38964-01c4-45f6-9761-80d52b27d664\") " Jan 29 07:33:04 crc kubenswrapper[5017]: I0129 07:33:04.154825 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fae38964-01c4-45f6-9761-80d52b27d664-catalog-content\") pod \"fae38964-01c4-45f6-9761-80d52b27d664\" (UID: \"fae38964-01c4-45f6-9761-80d52b27d664\") " Jan 29 07:33:04 crc kubenswrapper[5017]: I0129 07:33:04.156004 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fae38964-01c4-45f6-9761-80d52b27d664-utilities" (OuterVolumeSpecName: "utilities") pod "fae38964-01c4-45f6-9761-80d52b27d664" (UID: "fae38964-01c4-45f6-9761-80d52b27d664"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:33:04 crc kubenswrapper[5017]: I0129 07:33:04.167309 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fae38964-01c4-45f6-9761-80d52b27d664-kube-api-access-wrsk9" (OuterVolumeSpecName: "kube-api-access-wrsk9") pod "fae38964-01c4-45f6-9761-80d52b27d664" (UID: "fae38964-01c4-45f6-9761-80d52b27d664"). InnerVolumeSpecName "kube-api-access-wrsk9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:33:04 crc kubenswrapper[5017]: I0129 07:33:04.256874 5017 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fae38964-01c4-45f6-9761-80d52b27d664-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 07:33:04 crc kubenswrapper[5017]: I0129 07:33:04.256917 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wrsk9\" (UniqueName: \"kubernetes.io/projected/fae38964-01c4-45f6-9761-80d52b27d664-kube-api-access-wrsk9\") on node \"crc\" DevicePath \"\"" Jan 29 07:33:04 crc kubenswrapper[5017]: I0129 07:33:04.302282 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fae38964-01c4-45f6-9761-80d52b27d664-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fae38964-01c4-45f6-9761-80d52b27d664" (UID: "fae38964-01c4-45f6-9761-80d52b27d664"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:33:04 crc kubenswrapper[5017]: I0129 07:33:04.358141 5017 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fae38964-01c4-45f6-9761-80d52b27d664-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 07:33:04 crc kubenswrapper[5017]: I0129 07:33:04.589158 5017 generic.go:334] "Generic (PLEG): container finished" podID="fae38964-01c4-45f6-9761-80d52b27d664" containerID="f911186cab70ad3ca52b9a9ab81678abebb90f68541e3237442cd39ed950b30d" exitCode=0 Jan 29 07:33:04 crc kubenswrapper[5017]: I0129 07:33:04.589231 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ccgdz" event={"ID":"fae38964-01c4-45f6-9761-80d52b27d664","Type":"ContainerDied","Data":"f911186cab70ad3ca52b9a9ab81678abebb90f68541e3237442cd39ed950b30d"} Jan 29 07:33:04 crc kubenswrapper[5017]: I0129 07:33:04.589313 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ccgdz" Jan 29 07:33:04 crc kubenswrapper[5017]: I0129 07:33:04.589311 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ccgdz" event={"ID":"fae38964-01c4-45f6-9761-80d52b27d664","Type":"ContainerDied","Data":"1295baeae8ad358324ec597a676fa758de567b7227f08c84d7b7894f2994cbfb"} Jan 29 07:33:04 crc kubenswrapper[5017]: I0129 07:33:04.589387 5017 scope.go:117] "RemoveContainer" containerID="f911186cab70ad3ca52b9a9ab81678abebb90f68541e3237442cd39ed950b30d" Jan 29 07:33:04 crc kubenswrapper[5017]: I0129 07:33:04.637358 5017 scope.go:117] "RemoveContainer" containerID="5e1aefe3d1472e32b5a7083ceb25b040cf60ef98ebdf9ccd968a64449722c1cf" Jan 29 07:33:04 crc kubenswrapper[5017]: I0129 07:33:04.647088 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ccgdz"] Jan 29 07:33:04 crc kubenswrapper[5017]: I0129 07:33:04.654883 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ccgdz"] Jan 29 07:33:04 crc kubenswrapper[5017]: I0129 07:33:04.709160 5017 scope.go:117] "RemoveContainer" containerID="7d36d318eb5677947de9a2b2866627a20154f24985247675df57e3cab1344e4c" Jan 29 07:33:04 crc kubenswrapper[5017]: I0129 07:33:04.764289 5017 scope.go:117] "RemoveContainer" containerID="f911186cab70ad3ca52b9a9ab81678abebb90f68541e3237442cd39ed950b30d" Jan 29 07:33:04 crc kubenswrapper[5017]: E0129 07:33:04.764904 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f911186cab70ad3ca52b9a9ab81678abebb90f68541e3237442cd39ed950b30d\": container with ID starting with f911186cab70ad3ca52b9a9ab81678abebb90f68541e3237442cd39ed950b30d not found: ID does not exist" containerID="f911186cab70ad3ca52b9a9ab81678abebb90f68541e3237442cd39ed950b30d" Jan 29 07:33:04 crc kubenswrapper[5017]: I0129 07:33:04.764947 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f911186cab70ad3ca52b9a9ab81678abebb90f68541e3237442cd39ed950b30d"} err="failed to get container status \"f911186cab70ad3ca52b9a9ab81678abebb90f68541e3237442cd39ed950b30d\": rpc error: code = NotFound desc = could not find container \"f911186cab70ad3ca52b9a9ab81678abebb90f68541e3237442cd39ed950b30d\": container with ID starting with f911186cab70ad3ca52b9a9ab81678abebb90f68541e3237442cd39ed950b30d not found: ID does not exist" Jan 29 07:33:04 crc 
kubenswrapper[5017]: I0129 07:33:04.764987 5017 scope.go:117] "RemoveContainer" containerID="5e1aefe3d1472e32b5a7083ceb25b040cf60ef98ebdf9ccd968a64449722c1cf" Jan 29 07:33:04 crc kubenswrapper[5017]: E0129 07:33:04.765456 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e1aefe3d1472e32b5a7083ceb25b040cf60ef98ebdf9ccd968a64449722c1cf\": container with ID starting with 5e1aefe3d1472e32b5a7083ceb25b040cf60ef98ebdf9ccd968a64449722c1cf not found: ID does not exist" containerID="5e1aefe3d1472e32b5a7083ceb25b040cf60ef98ebdf9ccd968a64449722c1cf" Jan 29 07:33:04 crc kubenswrapper[5017]: I0129 07:33:04.765518 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e1aefe3d1472e32b5a7083ceb25b040cf60ef98ebdf9ccd968a64449722c1cf"} err="failed to get container status \"5e1aefe3d1472e32b5a7083ceb25b040cf60ef98ebdf9ccd968a64449722c1cf\": rpc error: code = NotFound desc = could not find container \"5e1aefe3d1472e32b5a7083ceb25b040cf60ef98ebdf9ccd968a64449722c1cf\": container with ID starting with 5e1aefe3d1472e32b5a7083ceb25b040cf60ef98ebdf9ccd968a64449722c1cf not found: ID does not exist" Jan 29 07:33:04 crc kubenswrapper[5017]: I0129 07:33:04.765549 5017 scope.go:117] "RemoveContainer" containerID="7d36d318eb5677947de9a2b2866627a20154f24985247675df57e3cab1344e4c" Jan 29 07:33:04 crc kubenswrapper[5017]: E0129 07:33:04.766163 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d36d318eb5677947de9a2b2866627a20154f24985247675df57e3cab1344e4c\": container with ID starting with 7d36d318eb5677947de9a2b2866627a20154f24985247675df57e3cab1344e4c not found: ID does not exist" containerID="7d36d318eb5677947de9a2b2866627a20154f24985247675df57e3cab1344e4c" Jan 29 07:33:04 crc kubenswrapper[5017]: I0129 07:33:04.766188 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d36d318eb5677947de9a2b2866627a20154f24985247675df57e3cab1344e4c"} err="failed to get container status \"7d36d318eb5677947de9a2b2866627a20154f24985247675df57e3cab1344e4c\": rpc error: code = NotFound desc = could not find container \"7d36d318eb5677947de9a2b2866627a20154f24985247675df57e3cab1344e4c\": container with ID starting with 7d36d318eb5677947de9a2b2866627a20154f24985247675df57e3cab1344e4c not found: ID does not exist" Jan 29 07:33:06 crc kubenswrapper[5017]: I0129 07:33:06.335960 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fae38964-01c4-45f6-9761-80d52b27d664" path="/var/lib/kubelet/pods/fae38964-01c4-45f6-9761-80d52b27d664/volumes" Jan 29 07:33:38 crc kubenswrapper[5017]: I0129 07:33:38.756650 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-v59f6"] Jan 29 07:33:38 crc kubenswrapper[5017]: E0129 07:33:38.757677 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae699d67-c99d-407a-b414-50a86da3d045" containerName="extract-content" Jan 29 07:33:38 crc kubenswrapper[5017]: I0129 07:33:38.757725 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae699d67-c99d-407a-b414-50a86da3d045" containerName="extract-content" Jan 29 07:33:38 crc kubenswrapper[5017]: E0129 07:33:38.757747 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fae38964-01c4-45f6-9761-80d52b27d664" containerName="extract-utilities" Jan 29 07:33:38 crc kubenswrapper[5017]: I0129 07:33:38.757758 5017 
state_mem.go:107] "Deleted CPUSet assignment" podUID="fae38964-01c4-45f6-9761-80d52b27d664" containerName="extract-utilities" Jan 29 07:33:38 crc kubenswrapper[5017]: E0129 07:33:38.757770 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fae38964-01c4-45f6-9761-80d52b27d664" containerName="extract-content" Jan 29 07:33:38 crc kubenswrapper[5017]: I0129 07:33:38.757778 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="fae38964-01c4-45f6-9761-80d52b27d664" containerName="extract-content" Jan 29 07:33:38 crc kubenswrapper[5017]: E0129 07:33:38.757794 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae699d67-c99d-407a-b414-50a86da3d045" containerName="extract-utilities" Jan 29 07:33:38 crc kubenswrapper[5017]: I0129 07:33:38.757801 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae699d67-c99d-407a-b414-50a86da3d045" containerName="extract-utilities" Jan 29 07:33:38 crc kubenswrapper[5017]: E0129 07:33:38.757826 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fae38964-01c4-45f6-9761-80d52b27d664" containerName="registry-server" Jan 29 07:33:38 crc kubenswrapper[5017]: I0129 07:33:38.757834 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="fae38964-01c4-45f6-9761-80d52b27d664" containerName="registry-server" Jan 29 07:33:38 crc kubenswrapper[5017]: E0129 07:33:38.757848 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae699d67-c99d-407a-b414-50a86da3d045" containerName="registry-server" Jan 29 07:33:38 crc kubenswrapper[5017]: I0129 07:33:38.757857 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae699d67-c99d-407a-b414-50a86da3d045" containerName="registry-server" Jan 29 07:33:38 crc kubenswrapper[5017]: I0129 07:33:38.758052 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="fae38964-01c4-45f6-9761-80d52b27d664" containerName="registry-server" Jan 29 07:33:38 crc kubenswrapper[5017]: I0129 07:33:38.758069 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae699d67-c99d-407a-b414-50a86da3d045" containerName="registry-server" Jan 29 07:33:38 crc kubenswrapper[5017]: I0129 07:33:38.759358 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-v59f6" Jan 29 07:33:38 crc kubenswrapper[5017]: I0129 07:33:38.819635 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-v59f6"] Jan 29 07:33:38 crc kubenswrapper[5017]: I0129 07:33:38.827279 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb276dda-a875-4d92-8d0d-9b5faa0b2bb3-utilities\") pod \"certified-operators-v59f6\" (UID: \"bb276dda-a875-4d92-8d0d-9b5faa0b2bb3\") " pod="openshift-marketplace/certified-operators-v59f6" Jan 29 07:33:38 crc kubenswrapper[5017]: I0129 07:33:38.827337 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zflnr\" (UniqueName: \"kubernetes.io/projected/bb276dda-a875-4d92-8d0d-9b5faa0b2bb3-kube-api-access-zflnr\") pod \"certified-operators-v59f6\" (UID: \"bb276dda-a875-4d92-8d0d-9b5faa0b2bb3\") " pod="openshift-marketplace/certified-operators-v59f6" Jan 29 07:33:38 crc kubenswrapper[5017]: I0129 07:33:38.827391 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb276dda-a875-4d92-8d0d-9b5faa0b2bb3-catalog-content\") pod \"certified-operators-v59f6\" (UID: \"bb276dda-a875-4d92-8d0d-9b5faa0b2bb3\") " pod="openshift-marketplace/certified-operators-v59f6" Jan 29 07:33:38 crc kubenswrapper[5017]: I0129 07:33:38.928602 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zflnr\" (UniqueName: \"kubernetes.io/projected/bb276dda-a875-4d92-8d0d-9b5faa0b2bb3-kube-api-access-zflnr\") pod \"certified-operators-v59f6\" (UID: \"bb276dda-a875-4d92-8d0d-9b5faa0b2bb3\") " pod="openshift-marketplace/certified-operators-v59f6" Jan 29 07:33:38 crc kubenswrapper[5017]: I0129 07:33:38.928705 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb276dda-a875-4d92-8d0d-9b5faa0b2bb3-catalog-content\") pod \"certified-operators-v59f6\" (UID: \"bb276dda-a875-4d92-8d0d-9b5faa0b2bb3\") " pod="openshift-marketplace/certified-operators-v59f6" Jan 29 07:33:38 crc kubenswrapper[5017]: I0129 07:33:38.928775 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb276dda-a875-4d92-8d0d-9b5faa0b2bb3-utilities\") pod \"certified-operators-v59f6\" (UID: \"bb276dda-a875-4d92-8d0d-9b5faa0b2bb3\") " pod="openshift-marketplace/certified-operators-v59f6" Jan 29 07:33:38 crc kubenswrapper[5017]: I0129 07:33:38.929263 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb276dda-a875-4d92-8d0d-9b5faa0b2bb3-utilities\") pod \"certified-operators-v59f6\" (UID: \"bb276dda-a875-4d92-8d0d-9b5faa0b2bb3\") " pod="openshift-marketplace/certified-operators-v59f6" Jan 29 07:33:38 crc kubenswrapper[5017]: I0129 07:33:38.929301 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb276dda-a875-4d92-8d0d-9b5faa0b2bb3-catalog-content\") pod \"certified-operators-v59f6\" (UID: \"bb276dda-a875-4d92-8d0d-9b5faa0b2bb3\") " pod="openshift-marketplace/certified-operators-v59f6" Jan 29 07:33:38 crc kubenswrapper[5017]: I0129 07:33:38.951737 5017 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-zflnr\" (UniqueName: \"kubernetes.io/projected/bb276dda-a875-4d92-8d0d-9b5faa0b2bb3-kube-api-access-zflnr\") pod \"certified-operators-v59f6\" (UID: \"bb276dda-a875-4d92-8d0d-9b5faa0b2bb3\") " pod="openshift-marketplace/certified-operators-v59f6" Jan 29 07:33:39 crc kubenswrapper[5017]: I0129 07:33:39.080941 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-v59f6" Jan 29 07:33:39 crc kubenswrapper[5017]: I0129 07:33:39.620649 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-v59f6"] Jan 29 07:33:39 crc kubenswrapper[5017]: I0129 07:33:39.902776 5017 generic.go:334] "Generic (PLEG): container finished" podID="bb276dda-a875-4d92-8d0d-9b5faa0b2bb3" containerID="1d5b0d0708e3d4b49527cec63924f6e6defc268d7c0ffd03a5279134d2a4de54" exitCode=0 Jan 29 07:33:39 crc kubenswrapper[5017]: I0129 07:33:39.902832 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v59f6" event={"ID":"bb276dda-a875-4d92-8d0d-9b5faa0b2bb3","Type":"ContainerDied","Data":"1d5b0d0708e3d4b49527cec63924f6e6defc268d7c0ffd03a5279134d2a4de54"} Jan 29 07:33:39 crc kubenswrapper[5017]: I0129 07:33:39.902867 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v59f6" event={"ID":"bb276dda-a875-4d92-8d0d-9b5faa0b2bb3","Type":"ContainerStarted","Data":"a152dd23420f7c24561add81eb82494f1377caf02df55fb4cd8992f29a1295dc"} Jan 29 07:33:40 crc kubenswrapper[5017]: I0129 07:33:40.913723 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v59f6" event={"ID":"bb276dda-a875-4d92-8d0d-9b5faa0b2bb3","Type":"ContainerStarted","Data":"75952ced8230f0c924685cdbccd3d25c7a7712a0bb0ab56dca08dd77e7e41e61"} Jan 29 07:33:41 crc kubenswrapper[5017]: I0129 07:33:41.924987 5017 generic.go:334] "Generic (PLEG): container finished" podID="bb276dda-a875-4d92-8d0d-9b5faa0b2bb3" containerID="75952ced8230f0c924685cdbccd3d25c7a7712a0bb0ab56dca08dd77e7e41e61" exitCode=0 Jan 29 07:33:41 crc kubenswrapper[5017]: I0129 07:33:41.925069 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v59f6" event={"ID":"bb276dda-a875-4d92-8d0d-9b5faa0b2bb3","Type":"ContainerDied","Data":"75952ced8230f0c924685cdbccd3d25c7a7712a0bb0ab56dca08dd77e7e41e61"} Jan 29 07:33:42 crc kubenswrapper[5017]: I0129 07:33:42.936042 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v59f6" event={"ID":"bb276dda-a875-4d92-8d0d-9b5faa0b2bb3","Type":"ContainerStarted","Data":"05933001fb94275b40c57a8570cd2b55de85c7478d61331e10e19afed3af2a16"} Jan 29 07:33:42 crc kubenswrapper[5017]: I0129 07:33:42.958133 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-v59f6" podStartSLOduration=2.54788567 podStartE2EDuration="4.958103538s" podCreationTimestamp="2026-01-29 07:33:38 +0000 UTC" firstStartedPulling="2026-01-29 07:33:39.904598892 +0000 UTC m=+3506.279046522" lastFinishedPulling="2026-01-29 07:33:42.31481677 +0000 UTC m=+3508.689264390" observedRunningTime="2026-01-29 07:33:42.954685423 +0000 UTC m=+3509.329133053" watchObservedRunningTime="2026-01-29 07:33:42.958103538 +0000 UTC m=+3509.332551158" Jan 29 07:33:49 crc kubenswrapper[5017]: I0129 07:33:49.081906 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/certified-operators-v59f6" Jan 29 07:33:49 crc kubenswrapper[5017]: I0129 07:33:49.082846 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-v59f6" Jan 29 07:33:49 crc kubenswrapper[5017]: I0129 07:33:49.146374 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-v59f6" Jan 29 07:33:50 crc kubenswrapper[5017]: I0129 07:33:50.045970 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-v59f6" Jan 29 07:33:50 crc kubenswrapper[5017]: I0129 07:33:50.130408 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-v59f6"] Jan 29 07:33:52 crc kubenswrapper[5017]: I0129 07:33:52.009408 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-v59f6" podUID="bb276dda-a875-4d92-8d0d-9b5faa0b2bb3" containerName="registry-server" containerID="cri-o://05933001fb94275b40c57a8570cd2b55de85c7478d61331e10e19afed3af2a16" gracePeriod=2 Jan 29 07:33:52 crc kubenswrapper[5017]: I0129 07:33:52.513099 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-v59f6" Jan 29 07:33:52 crc kubenswrapper[5017]: I0129 07:33:52.623137 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb276dda-a875-4d92-8d0d-9b5faa0b2bb3-utilities\") pod \"bb276dda-a875-4d92-8d0d-9b5faa0b2bb3\" (UID: \"bb276dda-a875-4d92-8d0d-9b5faa0b2bb3\") " Jan 29 07:33:52 crc kubenswrapper[5017]: I0129 07:33:52.623282 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zflnr\" (UniqueName: \"kubernetes.io/projected/bb276dda-a875-4d92-8d0d-9b5faa0b2bb3-kube-api-access-zflnr\") pod \"bb276dda-a875-4d92-8d0d-9b5faa0b2bb3\" (UID: \"bb276dda-a875-4d92-8d0d-9b5faa0b2bb3\") " Jan 29 07:33:52 crc kubenswrapper[5017]: I0129 07:33:52.623343 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb276dda-a875-4d92-8d0d-9b5faa0b2bb3-catalog-content\") pod \"bb276dda-a875-4d92-8d0d-9b5faa0b2bb3\" (UID: \"bb276dda-a875-4d92-8d0d-9b5faa0b2bb3\") " Jan 29 07:33:52 crc kubenswrapper[5017]: I0129 07:33:52.624734 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb276dda-a875-4d92-8d0d-9b5faa0b2bb3-utilities" (OuterVolumeSpecName: "utilities") pod "bb276dda-a875-4d92-8d0d-9b5faa0b2bb3" (UID: "bb276dda-a875-4d92-8d0d-9b5faa0b2bb3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:33:52 crc kubenswrapper[5017]: I0129 07:33:52.629128 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb276dda-a875-4d92-8d0d-9b5faa0b2bb3-kube-api-access-zflnr" (OuterVolumeSpecName: "kube-api-access-zflnr") pod "bb276dda-a875-4d92-8d0d-9b5faa0b2bb3" (UID: "bb276dda-a875-4d92-8d0d-9b5faa0b2bb3"). InnerVolumeSpecName "kube-api-access-zflnr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:33:52 crc kubenswrapper[5017]: I0129 07:33:52.678036 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb276dda-a875-4d92-8d0d-9b5faa0b2bb3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bb276dda-a875-4d92-8d0d-9b5faa0b2bb3" (UID: "bb276dda-a875-4d92-8d0d-9b5faa0b2bb3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:33:52 crc kubenswrapper[5017]: I0129 07:33:52.725118 5017 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb276dda-a875-4d92-8d0d-9b5faa0b2bb3-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 07:33:52 crc kubenswrapper[5017]: I0129 07:33:52.725158 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zflnr\" (UniqueName: \"kubernetes.io/projected/bb276dda-a875-4d92-8d0d-9b5faa0b2bb3-kube-api-access-zflnr\") on node \"crc\" DevicePath \"\"" Jan 29 07:33:52 crc kubenswrapper[5017]: I0129 07:33:52.725172 5017 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb276dda-a875-4d92-8d0d-9b5faa0b2bb3-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 07:33:53 crc kubenswrapper[5017]: I0129 07:33:53.019813 5017 generic.go:334] "Generic (PLEG): container finished" podID="bb276dda-a875-4d92-8d0d-9b5faa0b2bb3" containerID="05933001fb94275b40c57a8570cd2b55de85c7478d61331e10e19afed3af2a16" exitCode=0 Jan 29 07:33:53 crc kubenswrapper[5017]: I0129 07:33:53.019867 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v59f6" event={"ID":"bb276dda-a875-4d92-8d0d-9b5faa0b2bb3","Type":"ContainerDied","Data":"05933001fb94275b40c57a8570cd2b55de85c7478d61331e10e19afed3af2a16"} Jan 29 07:33:53 crc kubenswrapper[5017]: I0129 07:33:53.019901 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v59f6" event={"ID":"bb276dda-a875-4d92-8d0d-9b5faa0b2bb3","Type":"ContainerDied","Data":"a152dd23420f7c24561add81eb82494f1377caf02df55fb4cd8992f29a1295dc"} Jan 29 07:33:53 crc kubenswrapper[5017]: I0129 07:33:53.019924 5017 scope.go:117] "RemoveContainer" containerID="05933001fb94275b40c57a8570cd2b55de85c7478d61331e10e19afed3af2a16" Jan 29 07:33:53 crc kubenswrapper[5017]: I0129 07:33:53.020113 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-v59f6" Jan 29 07:33:53 crc kubenswrapper[5017]: I0129 07:33:53.047070 5017 scope.go:117] "RemoveContainer" containerID="75952ced8230f0c924685cdbccd3d25c7a7712a0bb0ab56dca08dd77e7e41e61" Jan 29 07:33:53 crc kubenswrapper[5017]: I0129 07:33:53.065386 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-v59f6"] Jan 29 07:33:53 crc kubenswrapper[5017]: I0129 07:33:53.070896 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-v59f6"] Jan 29 07:33:53 crc kubenswrapper[5017]: I0129 07:33:53.080373 5017 scope.go:117] "RemoveContainer" containerID="1d5b0d0708e3d4b49527cec63924f6e6defc268d7c0ffd03a5279134d2a4de54" Jan 29 07:33:53 crc kubenswrapper[5017]: I0129 07:33:53.107149 5017 scope.go:117] "RemoveContainer" containerID="05933001fb94275b40c57a8570cd2b55de85c7478d61331e10e19afed3af2a16" Jan 29 07:33:53 crc kubenswrapper[5017]: E0129 07:33:53.107883 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"05933001fb94275b40c57a8570cd2b55de85c7478d61331e10e19afed3af2a16\": container with ID starting with 05933001fb94275b40c57a8570cd2b55de85c7478d61331e10e19afed3af2a16 not found: ID does not exist" containerID="05933001fb94275b40c57a8570cd2b55de85c7478d61331e10e19afed3af2a16" Jan 29 07:33:53 crc kubenswrapper[5017]: I0129 07:33:53.107913 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05933001fb94275b40c57a8570cd2b55de85c7478d61331e10e19afed3af2a16"} err="failed to get container status \"05933001fb94275b40c57a8570cd2b55de85c7478d61331e10e19afed3af2a16\": rpc error: code = NotFound desc = could not find container \"05933001fb94275b40c57a8570cd2b55de85c7478d61331e10e19afed3af2a16\": container with ID starting with 05933001fb94275b40c57a8570cd2b55de85c7478d61331e10e19afed3af2a16 not found: ID does not exist" Jan 29 07:33:53 crc kubenswrapper[5017]: I0129 07:33:53.107938 5017 scope.go:117] "RemoveContainer" containerID="75952ced8230f0c924685cdbccd3d25c7a7712a0bb0ab56dca08dd77e7e41e61" Jan 29 07:33:53 crc kubenswrapper[5017]: E0129 07:33:53.108509 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75952ced8230f0c924685cdbccd3d25c7a7712a0bb0ab56dca08dd77e7e41e61\": container with ID starting with 75952ced8230f0c924685cdbccd3d25c7a7712a0bb0ab56dca08dd77e7e41e61 not found: ID does not exist" containerID="75952ced8230f0c924685cdbccd3d25c7a7712a0bb0ab56dca08dd77e7e41e61" Jan 29 07:33:53 crc kubenswrapper[5017]: I0129 07:33:53.108633 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75952ced8230f0c924685cdbccd3d25c7a7712a0bb0ab56dca08dd77e7e41e61"} err="failed to get container status \"75952ced8230f0c924685cdbccd3d25c7a7712a0bb0ab56dca08dd77e7e41e61\": rpc error: code = NotFound desc = could not find container \"75952ced8230f0c924685cdbccd3d25c7a7712a0bb0ab56dca08dd77e7e41e61\": container with ID starting with 75952ced8230f0c924685cdbccd3d25c7a7712a0bb0ab56dca08dd77e7e41e61 not found: ID does not exist" Jan 29 07:33:53 crc kubenswrapper[5017]: I0129 07:33:53.108731 5017 scope.go:117] "RemoveContainer" containerID="1d5b0d0708e3d4b49527cec63924f6e6defc268d7c0ffd03a5279134d2a4de54" Jan 29 07:33:53 crc kubenswrapper[5017]: E0129 07:33:53.109553 5017 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"1d5b0d0708e3d4b49527cec63924f6e6defc268d7c0ffd03a5279134d2a4de54\": container with ID starting with 1d5b0d0708e3d4b49527cec63924f6e6defc268d7c0ffd03a5279134d2a4de54 not found: ID does not exist" containerID="1d5b0d0708e3d4b49527cec63924f6e6defc268d7c0ffd03a5279134d2a4de54" Jan 29 07:33:53 crc kubenswrapper[5017]: I0129 07:33:53.109575 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d5b0d0708e3d4b49527cec63924f6e6defc268d7c0ffd03a5279134d2a4de54"} err="failed to get container status \"1d5b0d0708e3d4b49527cec63924f6e6defc268d7c0ffd03a5279134d2a4de54\": rpc error: code = NotFound desc = could not find container \"1d5b0d0708e3d4b49527cec63924f6e6defc268d7c0ffd03a5279134d2a4de54\": container with ID starting with 1d5b0d0708e3d4b49527cec63924f6e6defc268d7c0ffd03a5279134d2a4de54 not found: ID does not exist" Jan 29 07:33:54 crc kubenswrapper[5017]: I0129 07:33:54.327584 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb276dda-a875-4d92-8d0d-9b5faa0b2bb3" path="/var/lib/kubelet/pods/bb276dda-a875-4d92-8d0d-9b5faa0b2bb3/volumes" Jan 29 07:34:26 crc kubenswrapper[5017]: I0129 07:34:26.539319 5017 patch_prober.go:28] interesting pod/machine-config-daemon-895pl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 07:34:26 crc kubenswrapper[5017]: I0129 07:34:26.540568 5017 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 07:34:56 crc kubenswrapper[5017]: I0129 07:34:56.538988 5017 patch_prober.go:28] interesting pod/machine-config-daemon-895pl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 07:34:56 crc kubenswrapper[5017]: I0129 07:34:56.540062 5017 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 07:35:26 crc kubenswrapper[5017]: I0129 07:35:26.539039 5017 patch_prober.go:28] interesting pod/machine-config-daemon-895pl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 07:35:26 crc kubenswrapper[5017]: I0129 07:35:26.540099 5017 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 07:35:26 crc kubenswrapper[5017]: I0129 07:35:26.540163 5017 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-895pl" Jan 29 07:35:26 crc kubenswrapper[5017]: I0129 07:35:26.541090 5017 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5e01b62b8a4eb7ca96687f9004d513e2db29ba172596fb145d9c596074e6a5b5"} pod="openshift-machine-config-operator/machine-config-daemon-895pl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 07:35:26 crc kubenswrapper[5017]: I0129 07:35:26.541187 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" containerID="cri-o://5e01b62b8a4eb7ca96687f9004d513e2db29ba172596fb145d9c596074e6a5b5" gracePeriod=600 Jan 29 07:35:26 crc kubenswrapper[5017]: E0129 07:35:26.686298 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 07:35:26 crc kubenswrapper[5017]: I0129 07:35:26.890838 5017 generic.go:334] "Generic (PLEG): container finished" podID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerID="5e01b62b8a4eb7ca96687f9004d513e2db29ba172596fb145d9c596074e6a5b5" exitCode=0 Jan 29 07:35:26 crc kubenswrapper[5017]: I0129 07:35:26.890895 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-895pl" event={"ID":"2672ef63-7861-4c3d-a1b4-03cc9d18f8e2","Type":"ContainerDied","Data":"5e01b62b8a4eb7ca96687f9004d513e2db29ba172596fb145d9c596074e6a5b5"} Jan 29 07:35:26 crc kubenswrapper[5017]: I0129 07:35:26.890966 5017 scope.go:117] "RemoveContainer" containerID="98c58df4df387cb44119e0981b72137ad25fa21293135d12b782bb9ac47f771f" Jan 29 07:35:26 crc kubenswrapper[5017]: I0129 07:35:26.891588 5017 scope.go:117] "RemoveContainer" containerID="5e01b62b8a4eb7ca96687f9004d513e2db29ba172596fb145d9c596074e6a5b5" Jan 29 07:35:26 crc kubenswrapper[5017]: E0129 07:35:26.891835 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 07:35:40 crc kubenswrapper[5017]: I0129 07:35:40.317477 5017 scope.go:117] "RemoveContainer" containerID="5e01b62b8a4eb7ca96687f9004d513e2db29ba172596fb145d9c596074e6a5b5" Jan 29 07:35:40 crc kubenswrapper[5017]: E0129 07:35:40.318857 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 07:35:51 crc 
kubenswrapper[5017]: I0129 07:35:51.317363 5017 scope.go:117] "RemoveContainer" containerID="5e01b62b8a4eb7ca96687f9004d513e2db29ba172596fb145d9c596074e6a5b5" Jan 29 07:35:51 crc kubenswrapper[5017]: E0129 07:35:51.318729 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 07:36:02 crc kubenswrapper[5017]: I0129 07:36:02.316765 5017 scope.go:117] "RemoveContainer" containerID="5e01b62b8a4eb7ca96687f9004d513e2db29ba172596fb145d9c596074e6a5b5" Jan 29 07:36:02 crc kubenswrapper[5017]: E0129 07:36:02.317937 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 07:36:17 crc kubenswrapper[5017]: I0129 07:36:17.317260 5017 scope.go:117] "RemoveContainer" containerID="5e01b62b8a4eb7ca96687f9004d513e2db29ba172596fb145d9c596074e6a5b5" Jan 29 07:36:17 crc kubenswrapper[5017]: E0129 07:36:17.318542 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 07:36:30 crc kubenswrapper[5017]: I0129 07:36:30.317705 5017 scope.go:117] "RemoveContainer" containerID="5e01b62b8a4eb7ca96687f9004d513e2db29ba172596fb145d9c596074e6a5b5" Jan 29 07:36:30 crc kubenswrapper[5017]: E0129 07:36:30.319123 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 07:36:43 crc kubenswrapper[5017]: I0129 07:36:43.316530 5017 scope.go:117] "RemoveContainer" containerID="5e01b62b8a4eb7ca96687f9004d513e2db29ba172596fb145d9c596074e6a5b5" Jan 29 07:36:43 crc kubenswrapper[5017]: E0129 07:36:43.317708 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 07:36:57 crc kubenswrapper[5017]: I0129 07:36:57.316805 5017 scope.go:117] "RemoveContainer" containerID="5e01b62b8a4eb7ca96687f9004d513e2db29ba172596fb145d9c596074e6a5b5" Jan 29 07:36:57 crc 
kubenswrapper[5017]: E0129 07:36:57.318281 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 07:37:11 crc kubenswrapper[5017]: I0129 07:37:11.316096 5017 scope.go:117] "RemoveContainer" containerID="5e01b62b8a4eb7ca96687f9004d513e2db29ba172596fb145d9c596074e6a5b5" Jan 29 07:37:11 crc kubenswrapper[5017]: E0129 07:37:11.319054 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 07:37:22 crc kubenswrapper[5017]: I0129 07:37:22.316677 5017 scope.go:117] "RemoveContainer" containerID="5e01b62b8a4eb7ca96687f9004d513e2db29ba172596fb145d9c596074e6a5b5" Jan 29 07:37:22 crc kubenswrapper[5017]: E0129 07:37:22.321261 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 07:37:36 crc kubenswrapper[5017]: I0129 07:37:36.317244 5017 scope.go:117] "RemoveContainer" containerID="5e01b62b8a4eb7ca96687f9004d513e2db29ba172596fb145d9c596074e6a5b5" Jan 29 07:37:36 crc kubenswrapper[5017]: E0129 07:37:36.319412 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 07:37:50 crc kubenswrapper[5017]: I0129 07:37:50.316017 5017 scope.go:117] "RemoveContainer" containerID="5e01b62b8a4eb7ca96687f9004d513e2db29ba172596fb145d9c596074e6a5b5" Jan 29 07:37:50 crc kubenswrapper[5017]: E0129 07:37:50.317507 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 07:38:03 crc kubenswrapper[5017]: I0129 07:38:03.316169 5017 scope.go:117] "RemoveContainer" containerID="5e01b62b8a4eb7ca96687f9004d513e2db29ba172596fb145d9c596074e6a5b5" Jan 29 07:38:03 crc kubenswrapper[5017]: E0129 07:38:03.317044 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 07:38:14 crc kubenswrapper[5017]: I0129 07:38:14.327438 5017 scope.go:117] "RemoveContainer" containerID="5e01b62b8a4eb7ca96687f9004d513e2db29ba172596fb145d9c596074e6a5b5" Jan 29 07:38:14 crc kubenswrapper[5017]: E0129 07:38:14.328584 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 07:38:29 crc kubenswrapper[5017]: I0129 07:38:29.316250 5017 scope.go:117] "RemoveContainer" containerID="5e01b62b8a4eb7ca96687f9004d513e2db29ba172596fb145d9c596074e6a5b5" Jan 29 07:38:29 crc kubenswrapper[5017]: E0129 07:38:29.317284 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 07:38:43 crc kubenswrapper[5017]: I0129 07:38:43.316319 5017 scope.go:117] "RemoveContainer" containerID="5e01b62b8a4eb7ca96687f9004d513e2db29ba172596fb145d9c596074e6a5b5" Jan 29 07:38:43 crc kubenswrapper[5017]: E0129 07:38:43.317606 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 07:38:56 crc kubenswrapper[5017]: I0129 07:38:56.316605 5017 scope.go:117] "RemoveContainer" containerID="5e01b62b8a4eb7ca96687f9004d513e2db29ba172596fb145d9c596074e6a5b5" Jan 29 07:38:56 crc kubenswrapper[5017]: E0129 07:38:56.317563 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 07:39:07 crc kubenswrapper[5017]: I0129 07:39:07.316553 5017 scope.go:117] "RemoveContainer" containerID="5e01b62b8a4eb7ca96687f9004d513e2db29ba172596fb145d9c596074e6a5b5" Jan 29 07:39:07 crc kubenswrapper[5017]: E0129 07:39:07.317543 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 07:39:21 crc kubenswrapper[5017]: I0129 07:39:21.316296 5017 scope.go:117] "RemoveContainer" containerID="5e01b62b8a4eb7ca96687f9004d513e2db29ba172596fb145d9c596074e6a5b5" Jan 29 07:39:21 crc kubenswrapper[5017]: E0129 07:39:21.317253 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 07:39:22 crc kubenswrapper[5017]: I0129 07:39:22.489673 5017 scope.go:117] "RemoveContainer" containerID="e71a8adb466daf0ce54816f5707107f48fbcfbce227fbb210a1be7ab51b541b3" Jan 29 07:39:22 crc kubenswrapper[5017]: I0129 07:39:22.603498 5017 scope.go:117] "RemoveContainer" containerID="e3f68cc8aff8987b539f54f394a6deed5524a8ec4e7f5399ddbebd4258f5062c" Jan 29 07:39:22 crc kubenswrapper[5017]: I0129 07:39:22.623658 5017 scope.go:117] "RemoveContainer" containerID="7959007233ea9638581924da153eb53e50558dd0087b69e1955b666a9596b830" Jan 29 07:39:34 crc kubenswrapper[5017]: I0129 07:39:34.321638 5017 scope.go:117] "RemoveContainer" containerID="5e01b62b8a4eb7ca96687f9004d513e2db29ba172596fb145d9c596074e6a5b5" Jan 29 07:39:34 crc kubenswrapper[5017]: E0129 07:39:34.325075 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 07:39:46 crc kubenswrapper[5017]: I0129 07:39:46.316037 5017 scope.go:117] "RemoveContainer" containerID="5e01b62b8a4eb7ca96687f9004d513e2db29ba172596fb145d9c596074e6a5b5" Jan 29 07:39:46 crc kubenswrapper[5017]: E0129 07:39:46.316990 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 07:40:01 crc kubenswrapper[5017]: I0129 07:40:01.316229 5017 scope.go:117] "RemoveContainer" containerID="5e01b62b8a4eb7ca96687f9004d513e2db29ba172596fb145d9c596074e6a5b5" Jan 29 07:40:01 crc kubenswrapper[5017]: E0129 07:40:01.317258 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 07:40:13 crc kubenswrapper[5017]: I0129 07:40:13.317097 5017 scope.go:117] "RemoveContainer" containerID="5e01b62b8a4eb7ca96687f9004d513e2db29ba172596fb145d9c596074e6a5b5" Jan 29 07:40:13 crc 
kubenswrapper[5017]: E0129 07:40:13.318163 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 07:40:25 crc kubenswrapper[5017]: I0129 07:40:25.315768 5017 scope.go:117] "RemoveContainer" containerID="5e01b62b8a4eb7ca96687f9004d513e2db29ba172596fb145d9c596074e6a5b5" Jan 29 07:40:25 crc kubenswrapper[5017]: E0129 07:40:25.316890 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 07:40:38 crc kubenswrapper[5017]: I0129 07:40:38.317890 5017 scope.go:117] "RemoveContainer" containerID="5e01b62b8a4eb7ca96687f9004d513e2db29ba172596fb145d9c596074e6a5b5" Jan 29 07:40:38 crc kubenswrapper[5017]: I0129 07:40:38.774096 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-895pl" event={"ID":"2672ef63-7861-4c3d-a1b4-03cc9d18f8e2","Type":"ContainerStarted","Data":"eb998d5ae2b03c78419bca96658b161f9df20e5df45dc5009deb04c807457a71"} Jan 29 07:41:57 crc kubenswrapper[5017]: I0129 07:41:57.179210 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-74bx7"] Jan 29 07:41:57 crc kubenswrapper[5017]: E0129 07:41:57.180382 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb276dda-a875-4d92-8d0d-9b5faa0b2bb3" containerName="extract-utilities" Jan 29 07:41:57 crc kubenswrapper[5017]: I0129 07:41:57.180399 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb276dda-a875-4d92-8d0d-9b5faa0b2bb3" containerName="extract-utilities" Jan 29 07:41:57 crc kubenswrapper[5017]: E0129 07:41:57.180421 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb276dda-a875-4d92-8d0d-9b5faa0b2bb3" containerName="registry-server" Jan 29 07:41:57 crc kubenswrapper[5017]: I0129 07:41:57.180428 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb276dda-a875-4d92-8d0d-9b5faa0b2bb3" containerName="registry-server" Jan 29 07:41:57 crc kubenswrapper[5017]: E0129 07:41:57.180455 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb276dda-a875-4d92-8d0d-9b5faa0b2bb3" containerName="extract-content" Jan 29 07:41:57 crc kubenswrapper[5017]: I0129 07:41:57.180462 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb276dda-a875-4d92-8d0d-9b5faa0b2bb3" containerName="extract-content" Jan 29 07:41:57 crc kubenswrapper[5017]: I0129 07:41:57.180605 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb276dda-a875-4d92-8d0d-9b5faa0b2bb3" containerName="registry-server" Jan 29 07:41:57 crc kubenswrapper[5017]: I0129 07:41:57.187739 5017 util.go:30] "No sandbox for pod can be found. 
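
The long run of paired "RemoveContainer" / "CrashLoopBackOff: back-off 5m0s" entries above is the kubelet's sync loop re-evaluating the pod every 10-15s while the actual restart stays gated by the crash-loop back-off; the daemon is finally restarted at 07:40:38 (the ContainerStarted event). The back-off doubles per consecutive crash and is clamped at a maximum, which is where the constant "5m0s" in every error comes from. A toy sketch of that schedule follows; the 10s base and 5m cap match the upstream kubelet defaults, but treat the exact constants as assumptions rather than something this log proves:

    package main

    import (
        "fmt"
        "time"
    )

    // Prints the restart delays a kubelet-style exponential back-off
    // produces: a base delay doubling per consecutive crash, clamped
    // at a cap. The clamped value is the "back-off 5m0s" that every
    // CrashLoopBackOff error above repeats.
    func main() {
        const (
            base     = 10 * time.Second // assumed kubelet default
            maxDelay = 5 * time.Minute  // matches "back-off 5m0s" in the log
        )
        delay := base
        for restart := 1; restart <= 8; restart++ {
            fmt.Printf("restart %d: wait %v\n", restart, delay)
            delay *= 2
            if delay > maxDelay {
                delay = maxDelay
            }
        }
    }
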
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-74bx7" Jan 29 07:41:57 crc kubenswrapper[5017]: I0129 07:41:57.206643 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-74bx7"] Jan 29 07:41:57 crc kubenswrapper[5017]: I0129 07:41:57.318013 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mllhj\" (UniqueName: \"kubernetes.io/projected/dabe553c-d3e3-4a50-a587-13acddde92db-kube-api-access-mllhj\") pod \"redhat-marketplace-74bx7\" (UID: \"dabe553c-d3e3-4a50-a587-13acddde92db\") " pod="openshift-marketplace/redhat-marketplace-74bx7" Jan 29 07:41:57 crc kubenswrapper[5017]: I0129 07:41:57.318249 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dabe553c-d3e3-4a50-a587-13acddde92db-utilities\") pod \"redhat-marketplace-74bx7\" (UID: \"dabe553c-d3e3-4a50-a587-13acddde92db\") " pod="openshift-marketplace/redhat-marketplace-74bx7" Jan 29 07:41:57 crc kubenswrapper[5017]: I0129 07:41:57.318642 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dabe553c-d3e3-4a50-a587-13acddde92db-catalog-content\") pod \"redhat-marketplace-74bx7\" (UID: \"dabe553c-d3e3-4a50-a587-13acddde92db\") " pod="openshift-marketplace/redhat-marketplace-74bx7" Jan 29 07:41:57 crc kubenswrapper[5017]: I0129 07:41:57.420384 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dabe553c-d3e3-4a50-a587-13acddde92db-utilities\") pod \"redhat-marketplace-74bx7\" (UID: \"dabe553c-d3e3-4a50-a587-13acddde92db\") " pod="openshift-marketplace/redhat-marketplace-74bx7" Jan 29 07:41:57 crc kubenswrapper[5017]: I0129 07:41:57.420445 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dabe553c-d3e3-4a50-a587-13acddde92db-catalog-content\") pod \"redhat-marketplace-74bx7\" (UID: \"dabe553c-d3e3-4a50-a587-13acddde92db\") " pod="openshift-marketplace/redhat-marketplace-74bx7" Jan 29 07:41:57 crc kubenswrapper[5017]: I0129 07:41:57.420519 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mllhj\" (UniqueName: \"kubernetes.io/projected/dabe553c-d3e3-4a50-a587-13acddde92db-kube-api-access-mllhj\") pod \"redhat-marketplace-74bx7\" (UID: \"dabe553c-d3e3-4a50-a587-13acddde92db\") " pod="openshift-marketplace/redhat-marketplace-74bx7" Jan 29 07:41:57 crc kubenswrapper[5017]: I0129 07:41:57.421071 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dabe553c-d3e3-4a50-a587-13acddde92db-utilities\") pod \"redhat-marketplace-74bx7\" (UID: \"dabe553c-d3e3-4a50-a587-13acddde92db\") " pod="openshift-marketplace/redhat-marketplace-74bx7" Jan 29 07:41:57 crc kubenswrapper[5017]: I0129 07:41:57.421304 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dabe553c-d3e3-4a50-a587-13acddde92db-catalog-content\") pod \"redhat-marketplace-74bx7\" (UID: \"dabe553c-d3e3-4a50-a587-13acddde92db\") " pod="openshift-marketplace/redhat-marketplace-74bx7" Jan 29 07:41:57 crc kubenswrapper[5017]: I0129 07:41:57.447134 5017 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-mllhj\" (UniqueName: \"kubernetes.io/projected/dabe553c-d3e3-4a50-a587-13acddde92db-kube-api-access-mllhj\") pod \"redhat-marketplace-74bx7\" (UID: \"dabe553c-d3e3-4a50-a587-13acddde92db\") " pod="openshift-marketplace/redhat-marketplace-74bx7" Jan 29 07:41:57 crc kubenswrapper[5017]: I0129 07:41:57.531646 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-74bx7" Jan 29 07:41:58 crc kubenswrapper[5017]: I0129 07:41:58.021285 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-74bx7"] Jan 29 07:41:58 crc kubenswrapper[5017]: I0129 07:41:58.460329 5017 generic.go:334] "Generic (PLEG): container finished" podID="dabe553c-d3e3-4a50-a587-13acddde92db" containerID="89e2b3dd751c4be0aed20ad163ad5abd33e10ed73737efb7c146c06dfc6af2cf" exitCode=0 Jan 29 07:41:58 crc kubenswrapper[5017]: I0129 07:41:58.460420 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-74bx7" event={"ID":"dabe553c-d3e3-4a50-a587-13acddde92db","Type":"ContainerDied","Data":"89e2b3dd751c4be0aed20ad163ad5abd33e10ed73737efb7c146c06dfc6af2cf"} Jan 29 07:41:58 crc kubenswrapper[5017]: I0129 07:41:58.462667 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-74bx7" event={"ID":"dabe553c-d3e3-4a50-a587-13acddde92db","Type":"ContainerStarted","Data":"0437b8f00bb329235b2dccc0c493f4076887219c0a32ef9182f1a482833d95c2"} Jan 29 07:41:58 crc kubenswrapper[5017]: I0129 07:41:58.463373 5017 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 07:41:59 crc kubenswrapper[5017]: I0129 07:41:59.475058 5017 generic.go:334] "Generic (PLEG): container finished" podID="dabe553c-d3e3-4a50-a587-13acddde92db" containerID="70f2e9cde3f4153973bfd0f7ff700a72eac88234927a74a5b41bb1320d62012c" exitCode=0 Jan 29 07:41:59 crc kubenswrapper[5017]: I0129 07:41:59.475107 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-74bx7" event={"ID":"dabe553c-d3e3-4a50-a587-13acddde92db","Type":"ContainerDied","Data":"70f2e9cde3f4153973bfd0f7ff700a72eac88234927a74a5b41bb1320d62012c"} Jan 29 07:42:00 crc kubenswrapper[5017]: I0129 07:42:00.489125 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-74bx7" event={"ID":"dabe553c-d3e3-4a50-a587-13acddde92db","Type":"ContainerStarted","Data":"7f5d35d414c479009feaea65310508899b3db0dfb85121ba9e1fd06f0c3e5fa6"} Jan 29 07:42:00 crc kubenswrapper[5017]: I0129 07:42:00.524338 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-74bx7" podStartSLOduration=2.131737507 podStartE2EDuration="3.524316528s" podCreationTimestamp="2026-01-29 07:41:57 +0000 UTC" firstStartedPulling="2026-01-29 07:41:58.463083162 +0000 UTC m=+4004.837530772" lastFinishedPulling="2026-01-29 07:41:59.855662183 +0000 UTC m=+4006.230109793" observedRunningTime="2026-01-29 07:42:00.522615727 +0000 UTC m=+4006.897063347" watchObservedRunningTime="2026-01-29 07:42:00.524316528 +0000 UTC m=+4006.898764138" Jan 29 07:42:07 crc kubenswrapper[5017]: I0129 07:42:07.532037 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-74bx7" Jan 29 07:42:07 crc kubenswrapper[5017]: I0129 07:42:07.534048 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
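
The VerifyControllerAttachedVolume / MountVolume.SetUp entries above trace the standard volume setup for an openshift-marketplace catalog pod: two emptyDir volumes ("utilities" and "catalog-content") plus the projected service-account token volume ("kube-api-access-mllhj"). Only the volume names and plugin names (kubernetes.io/empty-dir, kubernetes.io/projected) come from the log; the sketch below reconstructs what the corresponding volumes stanza would contain, with everything else being boilerplate assumption:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        // Volume names and plugin types taken from the MountVolume.SetUp
        // entries; the projected volume's sources are elided because the
        // kubelet injects the token, kube-root-ca.crt and namespace there.
        volumes := []corev1.Volume{
            {Name: "utilities", VolumeSource: corev1.VolumeSource{
                EmptyDir: &corev1.EmptyDirVolumeSource{},
            }},
            {Name: "catalog-content", VolumeSource: corev1.VolumeSource{
                EmptyDir: &corev1.EmptyDirVolumeSource{},
            }},
            {Name: "kube-api-access-mllhj", VolumeSource: corev1.VolumeSource{
                Projected: &corev1.ProjectedVolumeSource{},
            }},
        }
        for _, v := range volumes {
            fmt.Println(v.Name)
        }
    }
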
status="unhealthy" pod="openshift-marketplace/redhat-marketplace-74bx7" Jan 29 07:42:07 crc kubenswrapper[5017]: I0129 07:42:07.607494 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-74bx7" Jan 29 07:42:08 crc kubenswrapper[5017]: I0129 07:42:08.623841 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-74bx7" Jan 29 07:42:11 crc kubenswrapper[5017]: I0129 07:42:11.161655 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-74bx7"] Jan 29 07:42:11 crc kubenswrapper[5017]: I0129 07:42:11.162236 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-74bx7" podUID="dabe553c-d3e3-4a50-a587-13acddde92db" containerName="registry-server" containerID="cri-o://7f5d35d414c479009feaea65310508899b3db0dfb85121ba9e1fd06f0c3e5fa6" gracePeriod=2 Jan 29 07:42:11 crc kubenswrapper[5017]: I0129 07:42:11.586652 5017 generic.go:334] "Generic (PLEG): container finished" podID="dabe553c-d3e3-4a50-a587-13acddde92db" containerID="7f5d35d414c479009feaea65310508899b3db0dfb85121ba9e1fd06f0c3e5fa6" exitCode=0 Jan 29 07:42:11 crc kubenswrapper[5017]: I0129 07:42:11.586737 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-74bx7" event={"ID":"dabe553c-d3e3-4a50-a587-13acddde92db","Type":"ContainerDied","Data":"7f5d35d414c479009feaea65310508899b3db0dfb85121ba9e1fd06f0c3e5fa6"} Jan 29 07:42:11 crc kubenswrapper[5017]: I0129 07:42:11.587282 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-74bx7" event={"ID":"dabe553c-d3e3-4a50-a587-13acddde92db","Type":"ContainerDied","Data":"0437b8f00bb329235b2dccc0c493f4076887219c0a32ef9182f1a482833d95c2"} Jan 29 07:42:11 crc kubenswrapper[5017]: I0129 07:42:11.587309 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0437b8f00bb329235b2dccc0c493f4076887219c0a32ef9182f1a482833d95c2" Jan 29 07:42:11 crc kubenswrapper[5017]: I0129 07:42:11.625229 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-74bx7" Jan 29 07:42:11 crc kubenswrapper[5017]: I0129 07:42:11.678078 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dabe553c-d3e3-4a50-a587-13acddde92db-utilities\") pod \"dabe553c-d3e3-4a50-a587-13acddde92db\" (UID: \"dabe553c-d3e3-4a50-a587-13acddde92db\") " Jan 29 07:42:11 crc kubenswrapper[5017]: I0129 07:42:11.678270 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dabe553c-d3e3-4a50-a587-13acddde92db-catalog-content\") pod \"dabe553c-d3e3-4a50-a587-13acddde92db\" (UID: \"dabe553c-d3e3-4a50-a587-13acddde92db\") " Jan 29 07:42:11 crc kubenswrapper[5017]: I0129 07:42:11.678338 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mllhj\" (UniqueName: \"kubernetes.io/projected/dabe553c-d3e3-4a50-a587-13acddde92db-kube-api-access-mllhj\") pod \"dabe553c-d3e3-4a50-a587-13acddde92db\" (UID: \"dabe553c-d3e3-4a50-a587-13acddde92db\") " Jan 29 07:42:11 crc kubenswrapper[5017]: I0129 07:42:11.679066 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dabe553c-d3e3-4a50-a587-13acddde92db-utilities" (OuterVolumeSpecName: "utilities") pod "dabe553c-d3e3-4a50-a587-13acddde92db" (UID: "dabe553c-d3e3-4a50-a587-13acddde92db"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:42:11 crc kubenswrapper[5017]: I0129 07:42:11.685212 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dabe553c-d3e3-4a50-a587-13acddde92db-kube-api-access-mllhj" (OuterVolumeSpecName: "kube-api-access-mllhj") pod "dabe553c-d3e3-4a50-a587-13acddde92db" (UID: "dabe553c-d3e3-4a50-a587-13acddde92db"). InnerVolumeSpecName "kube-api-access-mllhj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:42:11 crc kubenswrapper[5017]: I0129 07:42:11.702267 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dabe553c-d3e3-4a50-a587-13acddde92db-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dabe553c-d3e3-4a50-a587-13acddde92db" (UID: "dabe553c-d3e3-4a50-a587-13acddde92db"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:42:11 crc kubenswrapper[5017]: I0129 07:42:11.780404 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mllhj\" (UniqueName: \"kubernetes.io/projected/dabe553c-d3e3-4a50-a587-13acddde92db-kube-api-access-mllhj\") on node \"crc\" DevicePath \"\"" Jan 29 07:42:11 crc kubenswrapper[5017]: I0129 07:42:11.780444 5017 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dabe553c-d3e3-4a50-a587-13acddde92db-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 07:42:11 crc kubenswrapper[5017]: I0129 07:42:11.780455 5017 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dabe553c-d3e3-4a50-a587-13acddde92db-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 07:42:12 crc kubenswrapper[5017]: I0129 07:42:12.602483 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-74bx7" Jan 29 07:42:12 crc kubenswrapper[5017]: I0129 07:42:12.634967 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-74bx7"] Jan 29 07:42:12 crc kubenswrapper[5017]: I0129 07:42:12.641539 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-74bx7"] Jan 29 07:42:14 crc kubenswrapper[5017]: I0129 07:42:14.333379 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dabe553c-d3e3-4a50-a587-13acddde92db" path="/var/lib/kubelet/pods/dabe553c-d3e3-4a50-a587-13acddde92db/volumes" Jan 29 07:42:54 crc kubenswrapper[5017]: I0129 07:42:54.065401 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-tczmc"] Jan 29 07:42:54 crc kubenswrapper[5017]: E0129 07:42:54.066421 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dabe553c-d3e3-4a50-a587-13acddde92db" containerName="extract-content" Jan 29 07:42:54 crc kubenswrapper[5017]: I0129 07:42:54.066441 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="dabe553c-d3e3-4a50-a587-13acddde92db" containerName="extract-content" Jan 29 07:42:54 crc kubenswrapper[5017]: E0129 07:42:54.066483 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dabe553c-d3e3-4a50-a587-13acddde92db" containerName="registry-server" Jan 29 07:42:54 crc kubenswrapper[5017]: I0129 07:42:54.066493 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="dabe553c-d3e3-4a50-a587-13acddde92db" containerName="registry-server" Jan 29 07:42:54 crc kubenswrapper[5017]: E0129 07:42:54.066511 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dabe553c-d3e3-4a50-a587-13acddde92db" containerName="extract-utilities" Jan 29 07:42:54 crc kubenswrapper[5017]: I0129 07:42:54.066522 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="dabe553c-d3e3-4a50-a587-13acddde92db" containerName="extract-utilities" Jan 29 07:42:54 crc kubenswrapper[5017]: I0129 07:42:54.066716 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="dabe553c-d3e3-4a50-a587-13acddde92db" containerName="registry-server" Jan 29 07:42:54 crc kubenswrapper[5017]: I0129 07:42:54.068525 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tczmc" Jan 29 07:42:54 crc kubenswrapper[5017]: I0129 07:42:54.080236 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tczmc"] Jan 29 07:42:54 crc kubenswrapper[5017]: I0129 07:42:54.181216 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d709e6bd-06d9-43fc-9b00-d9c07e54bd60-catalog-content\") pod \"community-operators-tczmc\" (UID: \"d709e6bd-06d9-43fc-9b00-d9c07e54bd60\") " pod="openshift-marketplace/community-operators-tczmc" Jan 29 07:42:54 crc kubenswrapper[5017]: I0129 07:42:54.181615 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpw5x\" (UniqueName: \"kubernetes.io/projected/d709e6bd-06d9-43fc-9b00-d9c07e54bd60-kube-api-access-vpw5x\") pod \"community-operators-tczmc\" (UID: \"d709e6bd-06d9-43fc-9b00-d9c07e54bd60\") " pod="openshift-marketplace/community-operators-tczmc" Jan 29 07:42:54 crc kubenswrapper[5017]: I0129 07:42:54.181780 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d709e6bd-06d9-43fc-9b00-d9c07e54bd60-utilities\") pod \"community-operators-tczmc\" (UID: \"d709e6bd-06d9-43fc-9b00-d9c07e54bd60\") " pod="openshift-marketplace/community-operators-tczmc" Jan 29 07:42:54 crc kubenswrapper[5017]: I0129 07:42:54.283559 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vpw5x\" (UniqueName: \"kubernetes.io/projected/d709e6bd-06d9-43fc-9b00-d9c07e54bd60-kube-api-access-vpw5x\") pod \"community-operators-tczmc\" (UID: \"d709e6bd-06d9-43fc-9b00-d9c07e54bd60\") " pod="openshift-marketplace/community-operators-tczmc" Jan 29 07:42:54 crc kubenswrapper[5017]: I0129 07:42:54.283614 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d709e6bd-06d9-43fc-9b00-d9c07e54bd60-utilities\") pod \"community-operators-tczmc\" (UID: \"d709e6bd-06d9-43fc-9b00-d9c07e54bd60\") " pod="openshift-marketplace/community-operators-tczmc" Jan 29 07:42:54 crc kubenswrapper[5017]: I0129 07:42:54.283710 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d709e6bd-06d9-43fc-9b00-d9c07e54bd60-catalog-content\") pod \"community-operators-tczmc\" (UID: \"d709e6bd-06d9-43fc-9b00-d9c07e54bd60\") " pod="openshift-marketplace/community-operators-tczmc" Jan 29 07:42:54 crc kubenswrapper[5017]: I0129 07:42:54.284254 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d709e6bd-06d9-43fc-9b00-d9c07e54bd60-catalog-content\") pod \"community-operators-tczmc\" (UID: \"d709e6bd-06d9-43fc-9b00-d9c07e54bd60\") " pod="openshift-marketplace/community-operators-tczmc" Jan 29 07:42:54 crc kubenswrapper[5017]: I0129 07:42:54.284525 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d709e6bd-06d9-43fc-9b00-d9c07e54bd60-utilities\") pod \"community-operators-tczmc\" (UID: \"d709e6bd-06d9-43fc-9b00-d9c07e54bd60\") " pod="openshift-marketplace/community-operators-tczmc" Jan 29 07:42:54 crc kubenswrapper[5017]: I0129 07:42:54.309603 5017 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-vpw5x\" (UniqueName: \"kubernetes.io/projected/d709e6bd-06d9-43fc-9b00-d9c07e54bd60-kube-api-access-vpw5x\") pod \"community-operators-tczmc\" (UID: \"d709e6bd-06d9-43fc-9b00-d9c07e54bd60\") " pod="openshift-marketplace/community-operators-tczmc" Jan 29 07:42:54 crc kubenswrapper[5017]: I0129 07:42:54.390463 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tczmc" Jan 29 07:42:54 crc kubenswrapper[5017]: I0129 07:42:54.958227 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tczmc"] Jan 29 07:42:54 crc kubenswrapper[5017]: I0129 07:42:54.990842 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tczmc" event={"ID":"d709e6bd-06d9-43fc-9b00-d9c07e54bd60","Type":"ContainerStarted","Data":"81607f76ffd95264fbab0d63ae2251eb8ed35009e2a33d20986c580fa2fede99"} Jan 29 07:42:56 crc kubenswrapper[5017]: I0129 07:42:56.000858 5017 generic.go:334] "Generic (PLEG): container finished" podID="d709e6bd-06d9-43fc-9b00-d9c07e54bd60" containerID="0ae789f84c4506b7f64b8a2893363f47db7f4c09092e0c156f21458e62f34ebb" exitCode=0 Jan 29 07:42:56 crc kubenswrapper[5017]: I0129 07:42:56.000949 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tczmc" event={"ID":"d709e6bd-06d9-43fc-9b00-d9c07e54bd60","Type":"ContainerDied","Data":"0ae789f84c4506b7f64b8a2893363f47db7f4c09092e0c156f21458e62f34ebb"} Jan 29 07:42:56 crc kubenswrapper[5017]: I0129 07:42:56.538864 5017 patch_prober.go:28] interesting pod/machine-config-daemon-895pl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 07:42:56 crc kubenswrapper[5017]: I0129 07:42:56.539034 5017 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 07:42:57 crc kubenswrapper[5017]: I0129 07:42:57.011584 5017 generic.go:334] "Generic (PLEG): container finished" podID="d709e6bd-06d9-43fc-9b00-d9c07e54bd60" containerID="8e3c53a5521ccf7ac09b47fb28589f5b487d60af76d5f4d85b865ae58223aae7" exitCode=0 Jan 29 07:42:57 crc kubenswrapper[5017]: I0129 07:42:57.011645 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tczmc" event={"ID":"d709e6bd-06d9-43fc-9b00-d9c07e54bd60","Type":"ContainerDied","Data":"8e3c53a5521ccf7ac09b47fb28589f5b487d60af76d5f4d85b865ae58223aae7"} Jan 29 07:42:58 crc kubenswrapper[5017]: I0129 07:42:58.025905 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tczmc" event={"ID":"d709e6bd-06d9-43fc-9b00-d9c07e54bd60","Type":"ContainerStarted","Data":"051a04d2bb4c3700be9b86788c8d2f798f6c85d1f8b1999e20ea3773cd3eab24"} Jan 29 07:42:58 crc kubenswrapper[5017]: I0129 07:42:58.049107 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-tczmc" podStartSLOduration=2.624351878 podStartE2EDuration="4.04907893s" podCreationTimestamp="2026-01-29 07:42:54 +0000 UTC" 
firstStartedPulling="2026-01-29 07:42:56.003702574 +0000 UTC m=+4062.378150224" lastFinishedPulling="2026-01-29 07:42:57.428429606 +0000 UTC m=+4063.802877276" observedRunningTime="2026-01-29 07:42:58.0454209 +0000 UTC m=+4064.419868570" watchObservedRunningTime="2026-01-29 07:42:58.04907893 +0000 UTC m=+4064.423526550" Jan 29 07:43:04 crc kubenswrapper[5017]: I0129 07:43:04.391253 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-tczmc" Jan 29 07:43:04 crc kubenswrapper[5017]: I0129 07:43:04.392269 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-tczmc" Jan 29 07:43:04 crc kubenswrapper[5017]: I0129 07:43:04.433214 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-tczmc" Jan 29 07:43:05 crc kubenswrapper[5017]: I0129 07:43:05.145075 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-tczmc" Jan 29 07:43:05 crc kubenswrapper[5017]: I0129 07:43:05.204426 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tczmc"] Jan 29 07:43:07 crc kubenswrapper[5017]: I0129 07:43:07.114385 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-tczmc" podUID="d709e6bd-06d9-43fc-9b00-d9c07e54bd60" containerName="registry-server" containerID="cri-o://051a04d2bb4c3700be9b86788c8d2f798f6c85d1f8b1999e20ea3773cd3eab24" gracePeriod=2 Jan 29 07:43:07 crc kubenswrapper[5017]: I0129 07:43:07.561478 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tczmc" Jan 29 07:43:07 crc kubenswrapper[5017]: I0129 07:43:07.716117 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d709e6bd-06d9-43fc-9b00-d9c07e54bd60-catalog-content\") pod \"d709e6bd-06d9-43fc-9b00-d9c07e54bd60\" (UID: \"d709e6bd-06d9-43fc-9b00-d9c07e54bd60\") " Jan 29 07:43:07 crc kubenswrapper[5017]: I0129 07:43:07.716255 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vpw5x\" (UniqueName: \"kubernetes.io/projected/d709e6bd-06d9-43fc-9b00-d9c07e54bd60-kube-api-access-vpw5x\") pod \"d709e6bd-06d9-43fc-9b00-d9c07e54bd60\" (UID: \"d709e6bd-06d9-43fc-9b00-d9c07e54bd60\") " Jan 29 07:43:07 crc kubenswrapper[5017]: I0129 07:43:07.716403 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d709e6bd-06d9-43fc-9b00-d9c07e54bd60-utilities\") pod \"d709e6bd-06d9-43fc-9b00-d9c07e54bd60\" (UID: \"d709e6bd-06d9-43fc-9b00-d9c07e54bd60\") " Jan 29 07:43:07 crc kubenswrapper[5017]: I0129 07:43:07.717562 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d709e6bd-06d9-43fc-9b00-d9c07e54bd60-utilities" (OuterVolumeSpecName: "utilities") pod "d709e6bd-06d9-43fc-9b00-d9c07e54bd60" (UID: "d709e6bd-06d9-43fc-9b00-d9c07e54bd60"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:43:07 crc kubenswrapper[5017]: I0129 07:43:07.724396 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d709e6bd-06d9-43fc-9b00-d9c07e54bd60-kube-api-access-vpw5x" (OuterVolumeSpecName: "kube-api-access-vpw5x") pod "d709e6bd-06d9-43fc-9b00-d9c07e54bd60" (UID: "d709e6bd-06d9-43fc-9b00-d9c07e54bd60"). InnerVolumeSpecName "kube-api-access-vpw5x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:43:07 crc kubenswrapper[5017]: I0129 07:43:07.780448 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d709e6bd-06d9-43fc-9b00-d9c07e54bd60-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d709e6bd-06d9-43fc-9b00-d9c07e54bd60" (UID: "d709e6bd-06d9-43fc-9b00-d9c07e54bd60"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:43:07 crc kubenswrapper[5017]: I0129 07:43:07.818766 5017 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d709e6bd-06d9-43fc-9b00-d9c07e54bd60-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 07:43:07 crc kubenswrapper[5017]: I0129 07:43:07.818814 5017 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d709e6bd-06d9-43fc-9b00-d9c07e54bd60-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 07:43:07 crc kubenswrapper[5017]: I0129 07:43:07.818832 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vpw5x\" (UniqueName: \"kubernetes.io/projected/d709e6bd-06d9-43fc-9b00-d9c07e54bd60-kube-api-access-vpw5x\") on node \"crc\" DevicePath \"\"" Jan 29 07:43:08 crc kubenswrapper[5017]: I0129 07:43:08.125276 5017 generic.go:334] "Generic (PLEG): container finished" podID="d709e6bd-06d9-43fc-9b00-d9c07e54bd60" containerID="051a04d2bb4c3700be9b86788c8d2f798f6c85d1f8b1999e20ea3773cd3eab24" exitCode=0 Jan 29 07:43:08 crc kubenswrapper[5017]: I0129 07:43:08.125379 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tczmc" Jan 29 07:43:08 crc kubenswrapper[5017]: I0129 07:43:08.125354 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tczmc" event={"ID":"d709e6bd-06d9-43fc-9b00-d9c07e54bd60","Type":"ContainerDied","Data":"051a04d2bb4c3700be9b86788c8d2f798f6c85d1f8b1999e20ea3773cd3eab24"} Jan 29 07:43:08 crc kubenswrapper[5017]: I0129 07:43:08.126333 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tczmc" event={"ID":"d709e6bd-06d9-43fc-9b00-d9c07e54bd60","Type":"ContainerDied","Data":"81607f76ffd95264fbab0d63ae2251eb8ed35009e2a33d20986c580fa2fede99"} Jan 29 07:43:08 crc kubenswrapper[5017]: I0129 07:43:08.126364 5017 scope.go:117] "RemoveContainer" containerID="051a04d2bb4c3700be9b86788c8d2f798f6c85d1f8b1999e20ea3773cd3eab24" Jan 29 07:43:08 crc kubenswrapper[5017]: I0129 07:43:08.156923 5017 scope.go:117] "RemoveContainer" containerID="8e3c53a5521ccf7ac09b47fb28589f5b487d60af76d5f4d85b865ae58223aae7" Jan 29 07:43:08 crc kubenswrapper[5017]: I0129 07:43:08.161616 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tczmc"] Jan 29 07:43:08 crc kubenswrapper[5017]: I0129 07:43:08.169402 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-tczmc"] Jan 29 07:43:08 crc kubenswrapper[5017]: I0129 07:43:08.187819 5017 scope.go:117] "RemoveContainer" containerID="0ae789f84c4506b7f64b8a2893363f47db7f4c09092e0c156f21458e62f34ebb" Jan 29 07:43:08 crc kubenswrapper[5017]: I0129 07:43:08.209609 5017 scope.go:117] "RemoveContainer" containerID="051a04d2bb4c3700be9b86788c8d2f798f6c85d1f8b1999e20ea3773cd3eab24" Jan 29 07:43:08 crc kubenswrapper[5017]: E0129 07:43:08.211107 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"051a04d2bb4c3700be9b86788c8d2f798f6c85d1f8b1999e20ea3773cd3eab24\": container with ID starting with 051a04d2bb4c3700be9b86788c8d2f798f6c85d1f8b1999e20ea3773cd3eab24 not found: ID does not exist" containerID="051a04d2bb4c3700be9b86788c8d2f798f6c85d1f8b1999e20ea3773cd3eab24" Jan 29 07:43:08 crc kubenswrapper[5017]: I0129 07:43:08.211158 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"051a04d2bb4c3700be9b86788c8d2f798f6c85d1f8b1999e20ea3773cd3eab24"} err="failed to get container status \"051a04d2bb4c3700be9b86788c8d2f798f6c85d1f8b1999e20ea3773cd3eab24\": rpc error: code = NotFound desc = could not find container \"051a04d2bb4c3700be9b86788c8d2f798f6c85d1f8b1999e20ea3773cd3eab24\": container with ID starting with 051a04d2bb4c3700be9b86788c8d2f798f6c85d1f8b1999e20ea3773cd3eab24 not found: ID does not exist" Jan 29 07:43:08 crc kubenswrapper[5017]: I0129 07:43:08.211193 5017 scope.go:117] "RemoveContainer" containerID="8e3c53a5521ccf7ac09b47fb28589f5b487d60af76d5f4d85b865ae58223aae7" Jan 29 07:43:08 crc kubenswrapper[5017]: E0129 07:43:08.211462 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e3c53a5521ccf7ac09b47fb28589f5b487d60af76d5f4d85b865ae58223aae7\": container with ID starting with 8e3c53a5521ccf7ac09b47fb28589f5b487d60af76d5f4d85b865ae58223aae7 not found: ID does not exist" containerID="8e3c53a5521ccf7ac09b47fb28589f5b487d60af76d5f4d85b865ae58223aae7" Jan 29 07:43:08 crc kubenswrapper[5017]: I0129 07:43:08.211501 5017 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e3c53a5521ccf7ac09b47fb28589f5b487d60af76d5f4d85b865ae58223aae7"} err="failed to get container status \"8e3c53a5521ccf7ac09b47fb28589f5b487d60af76d5f4d85b865ae58223aae7\": rpc error: code = NotFound desc = could not find container \"8e3c53a5521ccf7ac09b47fb28589f5b487d60af76d5f4d85b865ae58223aae7\": container with ID starting with 8e3c53a5521ccf7ac09b47fb28589f5b487d60af76d5f4d85b865ae58223aae7 not found: ID does not exist" Jan 29 07:43:08 crc kubenswrapper[5017]: I0129 07:43:08.211521 5017 scope.go:117] "RemoveContainer" containerID="0ae789f84c4506b7f64b8a2893363f47db7f4c09092e0c156f21458e62f34ebb" Jan 29 07:43:08 crc kubenswrapper[5017]: E0129 07:43:08.213063 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ae789f84c4506b7f64b8a2893363f47db7f4c09092e0c156f21458e62f34ebb\": container with ID starting with 0ae789f84c4506b7f64b8a2893363f47db7f4c09092e0c156f21458e62f34ebb not found: ID does not exist" containerID="0ae789f84c4506b7f64b8a2893363f47db7f4c09092e0c156f21458e62f34ebb" Jan 29 07:43:08 crc kubenswrapper[5017]: I0129 07:43:08.213123 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ae789f84c4506b7f64b8a2893363f47db7f4c09092e0c156f21458e62f34ebb"} err="failed to get container status \"0ae789f84c4506b7f64b8a2893363f47db7f4c09092e0c156f21458e62f34ebb\": rpc error: code = NotFound desc = could not find container \"0ae789f84c4506b7f64b8a2893363f47db7f4c09092e0c156f21458e62f34ebb\": container with ID starting with 0ae789f84c4506b7f64b8a2893363f47db7f4c09092e0c156f21458e62f34ebb not found: ID does not exist" Jan 29 07:43:08 crc kubenswrapper[5017]: I0129 07:43:08.326258 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d709e6bd-06d9-43fc-9b00-d9c07e54bd60" path="/var/lib/kubelet/pods/d709e6bd-06d9-43fc-9b00-d9c07e54bd60/volumes" Jan 29 07:43:26 crc kubenswrapper[5017]: I0129 07:43:26.539366 5017 patch_prober.go:28] interesting pod/machine-config-daemon-895pl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 07:43:26 crc kubenswrapper[5017]: I0129 07:43:26.540444 5017 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 07:43:31 crc kubenswrapper[5017]: I0129 07:43:31.398649 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-nnc2s"] Jan 29 07:43:31 crc kubenswrapper[5017]: E0129 07:43:31.399586 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d709e6bd-06d9-43fc-9b00-d9c07e54bd60" containerName="extract-content" Jan 29 07:43:31 crc kubenswrapper[5017]: I0129 07:43:31.399607 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="d709e6bd-06d9-43fc-9b00-d9c07e54bd60" containerName="extract-content" Jan 29 07:43:31 crc kubenswrapper[5017]: E0129 07:43:31.399631 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d709e6bd-06d9-43fc-9b00-d9c07e54bd60" containerName="extract-utilities" Jan 29 07:43:31 crc 
kubenswrapper[5017]: I0129 07:43:31.399639 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="d709e6bd-06d9-43fc-9b00-d9c07e54bd60" containerName="extract-utilities" Jan 29 07:43:31 crc kubenswrapper[5017]: E0129 07:43:31.399649 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d709e6bd-06d9-43fc-9b00-d9c07e54bd60" containerName="registry-server" Jan 29 07:43:31 crc kubenswrapper[5017]: I0129 07:43:31.399658 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="d709e6bd-06d9-43fc-9b00-d9c07e54bd60" containerName="registry-server" Jan 29 07:43:31 crc kubenswrapper[5017]: I0129 07:43:31.399823 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="d709e6bd-06d9-43fc-9b00-d9c07e54bd60" containerName="registry-server" Jan 29 07:43:31 crc kubenswrapper[5017]: I0129 07:43:31.401171 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nnc2s" Jan 29 07:43:31 crc kubenswrapper[5017]: I0129 07:43:31.430273 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nnc2s"] Jan 29 07:43:31 crc kubenswrapper[5017]: I0129 07:43:31.534636 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sb8cz\" (UniqueName: \"kubernetes.io/projected/8844c8da-85f3-4983-ac54-cb1f84673bba-kube-api-access-sb8cz\") pod \"redhat-operators-nnc2s\" (UID: \"8844c8da-85f3-4983-ac54-cb1f84673bba\") " pod="openshift-marketplace/redhat-operators-nnc2s" Jan 29 07:43:31 crc kubenswrapper[5017]: I0129 07:43:31.534765 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8844c8da-85f3-4983-ac54-cb1f84673bba-utilities\") pod \"redhat-operators-nnc2s\" (UID: \"8844c8da-85f3-4983-ac54-cb1f84673bba\") " pod="openshift-marketplace/redhat-operators-nnc2s" Jan 29 07:43:31 crc kubenswrapper[5017]: I0129 07:43:31.534932 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8844c8da-85f3-4983-ac54-cb1f84673bba-catalog-content\") pod \"redhat-operators-nnc2s\" (UID: \"8844c8da-85f3-4983-ac54-cb1f84673bba\") " pod="openshift-marketplace/redhat-operators-nnc2s" Jan 29 07:43:31 crc kubenswrapper[5017]: I0129 07:43:31.637130 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sb8cz\" (UniqueName: \"kubernetes.io/projected/8844c8da-85f3-4983-ac54-cb1f84673bba-kube-api-access-sb8cz\") pod \"redhat-operators-nnc2s\" (UID: \"8844c8da-85f3-4983-ac54-cb1f84673bba\") " pod="openshift-marketplace/redhat-operators-nnc2s" Jan 29 07:43:31 crc kubenswrapper[5017]: I0129 07:43:31.637201 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8844c8da-85f3-4983-ac54-cb1f84673bba-utilities\") pod \"redhat-operators-nnc2s\" (UID: \"8844c8da-85f3-4983-ac54-cb1f84673bba\") " pod="openshift-marketplace/redhat-operators-nnc2s" Jan 29 07:43:31 crc kubenswrapper[5017]: I0129 07:43:31.637246 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8844c8da-85f3-4983-ac54-cb1f84673bba-catalog-content\") pod \"redhat-operators-nnc2s\" (UID: \"8844c8da-85f3-4983-ac54-cb1f84673bba\") " pod="openshift-marketplace/redhat-operators-nnc2s" Jan 29 07:43:31 
crc kubenswrapper[5017]: I0129 07:43:31.638080 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8844c8da-85f3-4983-ac54-cb1f84673bba-utilities\") pod \"redhat-operators-nnc2s\" (UID: \"8844c8da-85f3-4983-ac54-cb1f84673bba\") " pod="openshift-marketplace/redhat-operators-nnc2s" Jan 29 07:43:31 crc kubenswrapper[5017]: I0129 07:43:31.638162 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8844c8da-85f3-4983-ac54-cb1f84673bba-catalog-content\") pod \"redhat-operators-nnc2s\" (UID: \"8844c8da-85f3-4983-ac54-cb1f84673bba\") " pod="openshift-marketplace/redhat-operators-nnc2s" Jan 29 07:43:31 crc kubenswrapper[5017]: I0129 07:43:31.658928 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sb8cz\" (UniqueName: \"kubernetes.io/projected/8844c8da-85f3-4983-ac54-cb1f84673bba-kube-api-access-sb8cz\") pod \"redhat-operators-nnc2s\" (UID: \"8844c8da-85f3-4983-ac54-cb1f84673bba\") " pod="openshift-marketplace/redhat-operators-nnc2s" Jan 29 07:43:31 crc kubenswrapper[5017]: I0129 07:43:31.724156 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nnc2s" Jan 29 07:43:32 crc kubenswrapper[5017]: I0129 07:43:32.215353 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nnc2s"] Jan 29 07:43:32 crc kubenswrapper[5017]: I0129 07:43:32.326328 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nnc2s" event={"ID":"8844c8da-85f3-4983-ac54-cb1f84673bba","Type":"ContainerStarted","Data":"cf9199237a8485b62ce49312905d3c6f49cead0533e7c49b7d508dbdda66e8e3"} Jan 29 07:43:33 crc kubenswrapper[5017]: I0129 07:43:33.331390 5017 generic.go:334] "Generic (PLEG): container finished" podID="8844c8da-85f3-4983-ac54-cb1f84673bba" containerID="a607c454d0e33c4591338f858ca6520fa16dc0762029326bc9d44ce89b97ab93" exitCode=0 Jan 29 07:43:33 crc kubenswrapper[5017]: I0129 07:43:33.331504 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nnc2s" event={"ID":"8844c8da-85f3-4983-ac54-cb1f84673bba","Type":"ContainerDied","Data":"a607c454d0e33c4591338f858ca6520fa16dc0762029326bc9d44ce89b97ab93"} Jan 29 07:43:34 crc kubenswrapper[5017]: I0129 07:43:34.343555 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nnc2s" event={"ID":"8844c8da-85f3-4983-ac54-cb1f84673bba","Type":"ContainerStarted","Data":"067a2a4b76493e255fc139dc2e09483c6cf82168affe768c85fd15783956661f"} Jan 29 07:43:35 crc kubenswrapper[5017]: I0129 07:43:35.353880 5017 generic.go:334] "Generic (PLEG): container finished" podID="8844c8da-85f3-4983-ac54-cb1f84673bba" containerID="067a2a4b76493e255fc139dc2e09483c6cf82168affe768c85fd15783956661f" exitCode=0 Jan 29 07:43:35 crc kubenswrapper[5017]: I0129 07:43:35.353941 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nnc2s" event={"ID":"8844c8da-85f3-4983-ac54-cb1f84673bba","Type":"ContainerDied","Data":"067a2a4b76493e255fc139dc2e09483c6cf82168affe768c85fd15783956661f"} Jan 29 07:43:36 crc kubenswrapper[5017]: I0129 07:43:36.367066 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nnc2s" 
event={"ID":"8844c8da-85f3-4983-ac54-cb1f84673bba","Type":"ContainerStarted","Data":"c71bee7333fa295f0d3b2d21cc70d55563106598b5404abb9c17e47beeb4938d"} Jan 29 07:43:36 crc kubenswrapper[5017]: I0129 07:43:36.394890 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-nnc2s" podStartSLOduration=2.9580556529999997 podStartE2EDuration="5.394867998s" podCreationTimestamp="2026-01-29 07:43:31 +0000 UTC" firstStartedPulling="2026-01-29 07:43:33.333301588 +0000 UTC m=+4099.707749198" lastFinishedPulling="2026-01-29 07:43:35.770113933 +0000 UTC m=+4102.144561543" observedRunningTime="2026-01-29 07:43:36.390588363 +0000 UTC m=+4102.765035983" watchObservedRunningTime="2026-01-29 07:43:36.394867998 +0000 UTC m=+4102.769315618" Jan 29 07:43:41 crc kubenswrapper[5017]: I0129 07:43:41.724631 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-nnc2s" Jan 29 07:43:41 crc kubenswrapper[5017]: I0129 07:43:41.725842 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-nnc2s" Jan 29 07:43:42 crc kubenswrapper[5017]: I0129 07:43:42.772836 5017 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-nnc2s" podUID="8844c8da-85f3-4983-ac54-cb1f84673bba" containerName="registry-server" probeResult="failure" output=< Jan 29 07:43:42 crc kubenswrapper[5017]: timeout: failed to connect service ":50051" within 1s Jan 29 07:43:42 crc kubenswrapper[5017]: > Jan 29 07:43:51 crc kubenswrapper[5017]: I0129 07:43:51.774122 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-nnc2s" Jan 29 07:43:51 crc kubenswrapper[5017]: I0129 07:43:51.838352 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-nnc2s" Jan 29 07:43:52 crc kubenswrapper[5017]: I0129 07:43:52.025602 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nnc2s"] Jan 29 07:43:53 crc kubenswrapper[5017]: I0129 07:43:53.508830 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-nnc2s" podUID="8844c8da-85f3-4983-ac54-cb1f84673bba" containerName="registry-server" containerID="cri-o://c71bee7333fa295f0d3b2d21cc70d55563106598b5404abb9c17e47beeb4938d" gracePeriod=2 Jan 29 07:43:54 crc kubenswrapper[5017]: I0129 07:43:54.343604 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nnc2s" Jan 29 07:43:54 crc kubenswrapper[5017]: I0129 07:43:54.521810 5017 generic.go:334] "Generic (PLEG): container finished" podID="8844c8da-85f3-4983-ac54-cb1f84673bba" containerID="c71bee7333fa295f0d3b2d21cc70d55563106598b5404abb9c17e47beeb4938d" exitCode=0 Jan 29 07:43:54 crc kubenswrapper[5017]: I0129 07:43:54.521869 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nnc2s" event={"ID":"8844c8da-85f3-4983-ac54-cb1f84673bba","Type":"ContainerDied","Data":"c71bee7333fa295f0d3b2d21cc70d55563106598b5404abb9c17e47beeb4938d"} Jan 29 07:43:54 crc kubenswrapper[5017]: I0129 07:43:54.521884 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nnc2s" Jan 29 07:43:54 crc kubenswrapper[5017]: I0129 07:43:54.521913 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nnc2s" event={"ID":"8844c8da-85f3-4983-ac54-cb1f84673bba","Type":"ContainerDied","Data":"cf9199237a8485b62ce49312905d3c6f49cead0533e7c49b7d508dbdda66e8e3"} Jan 29 07:43:54 crc kubenswrapper[5017]: I0129 07:43:54.521940 5017 scope.go:117] "RemoveContainer" containerID="c71bee7333fa295f0d3b2d21cc70d55563106598b5404abb9c17e47beeb4938d" Jan 29 07:43:54 crc kubenswrapper[5017]: I0129 07:43:54.537835 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb8cz\" (UniqueName: \"kubernetes.io/projected/8844c8da-85f3-4983-ac54-cb1f84673bba-kube-api-access-sb8cz\") pod \"8844c8da-85f3-4983-ac54-cb1f84673bba\" (UID: \"8844c8da-85f3-4983-ac54-cb1f84673bba\") " Jan 29 07:43:54 crc kubenswrapper[5017]: I0129 07:43:54.538205 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8844c8da-85f3-4983-ac54-cb1f84673bba-utilities\") pod \"8844c8da-85f3-4983-ac54-cb1f84673bba\" (UID: \"8844c8da-85f3-4983-ac54-cb1f84673bba\") " Jan 29 07:43:54 crc kubenswrapper[5017]: I0129 07:43:54.538370 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8844c8da-85f3-4983-ac54-cb1f84673bba-catalog-content\") pod \"8844c8da-85f3-4983-ac54-cb1f84673bba\" (UID: \"8844c8da-85f3-4983-ac54-cb1f84673bba\") " Jan 29 07:43:54 crc kubenswrapper[5017]: I0129 07:43:54.539210 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8844c8da-85f3-4983-ac54-cb1f84673bba-utilities" (OuterVolumeSpecName: "utilities") pod "8844c8da-85f3-4983-ac54-cb1f84673bba" (UID: "8844c8da-85f3-4983-ac54-cb1f84673bba"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:43:54 crc kubenswrapper[5017]: I0129 07:43:54.546926 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8844c8da-85f3-4983-ac54-cb1f84673bba-kube-api-access-sb8cz" (OuterVolumeSpecName: "kube-api-access-sb8cz") pod "8844c8da-85f3-4983-ac54-cb1f84673bba" (UID: "8844c8da-85f3-4983-ac54-cb1f84673bba"). InnerVolumeSpecName "kube-api-access-sb8cz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:43:54 crc kubenswrapper[5017]: I0129 07:43:54.547293 5017 scope.go:117] "RemoveContainer" containerID="067a2a4b76493e255fc139dc2e09483c6cf82168affe768c85fd15783956661f" Jan 29 07:43:54 crc kubenswrapper[5017]: I0129 07:43:54.582723 5017 scope.go:117] "RemoveContainer" containerID="a607c454d0e33c4591338f858ca6520fa16dc0762029326bc9d44ce89b97ab93" Jan 29 07:43:54 crc kubenswrapper[5017]: I0129 07:43:54.605149 5017 scope.go:117] "RemoveContainer" containerID="c71bee7333fa295f0d3b2d21cc70d55563106598b5404abb9c17e47beeb4938d" Jan 29 07:43:54 crc kubenswrapper[5017]: E0129 07:43:54.609790 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c71bee7333fa295f0d3b2d21cc70d55563106598b5404abb9c17e47beeb4938d\": container with ID starting with c71bee7333fa295f0d3b2d21cc70d55563106598b5404abb9c17e47beeb4938d not found: ID does not exist" containerID="c71bee7333fa295f0d3b2d21cc70d55563106598b5404abb9c17e47beeb4938d" Jan 29 07:43:54 crc kubenswrapper[5017]: I0129 07:43:54.609831 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c71bee7333fa295f0d3b2d21cc70d55563106598b5404abb9c17e47beeb4938d"} err="failed to get container status \"c71bee7333fa295f0d3b2d21cc70d55563106598b5404abb9c17e47beeb4938d\": rpc error: code = NotFound desc = could not find container \"c71bee7333fa295f0d3b2d21cc70d55563106598b5404abb9c17e47beeb4938d\": container with ID starting with c71bee7333fa295f0d3b2d21cc70d55563106598b5404abb9c17e47beeb4938d not found: ID does not exist" Jan 29 07:43:54 crc kubenswrapper[5017]: I0129 07:43:54.609865 5017 scope.go:117] "RemoveContainer" containerID="067a2a4b76493e255fc139dc2e09483c6cf82168affe768c85fd15783956661f" Jan 29 07:43:54 crc kubenswrapper[5017]: E0129 07:43:54.610282 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"067a2a4b76493e255fc139dc2e09483c6cf82168affe768c85fd15783956661f\": container with ID starting with 067a2a4b76493e255fc139dc2e09483c6cf82168affe768c85fd15783956661f not found: ID does not exist" containerID="067a2a4b76493e255fc139dc2e09483c6cf82168affe768c85fd15783956661f" Jan 29 07:43:54 crc kubenswrapper[5017]: I0129 07:43:54.610370 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"067a2a4b76493e255fc139dc2e09483c6cf82168affe768c85fd15783956661f"} err="failed to get container status \"067a2a4b76493e255fc139dc2e09483c6cf82168affe768c85fd15783956661f\": rpc error: code = NotFound desc = could not find container \"067a2a4b76493e255fc139dc2e09483c6cf82168affe768c85fd15783956661f\": container with ID starting with 067a2a4b76493e255fc139dc2e09483c6cf82168affe768c85fd15783956661f not found: ID does not exist" Jan 29 07:43:54 crc kubenswrapper[5017]: I0129 07:43:54.610405 5017 scope.go:117] "RemoveContainer" containerID="a607c454d0e33c4591338f858ca6520fa16dc0762029326bc9d44ce89b97ab93" Jan 29 07:43:54 crc kubenswrapper[5017]: E0129 07:43:54.611205 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a607c454d0e33c4591338f858ca6520fa16dc0762029326bc9d44ce89b97ab93\": container with ID starting with a607c454d0e33c4591338f858ca6520fa16dc0762029326bc9d44ce89b97ab93 not found: ID does not exist" containerID="a607c454d0e33c4591338f858ca6520fa16dc0762029326bc9d44ce89b97ab93" Jan 29 07:43:54 crc 
kubenswrapper[5017]: I0129 07:43:54.611247 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a607c454d0e33c4591338f858ca6520fa16dc0762029326bc9d44ce89b97ab93"} err="failed to get container status \"a607c454d0e33c4591338f858ca6520fa16dc0762029326bc9d44ce89b97ab93\": rpc error: code = NotFound desc = could not find container \"a607c454d0e33c4591338f858ca6520fa16dc0762029326bc9d44ce89b97ab93\": container with ID starting with a607c454d0e33c4591338f858ca6520fa16dc0762029326bc9d44ce89b97ab93 not found: ID does not exist" Jan 29 07:43:54 crc kubenswrapper[5017]: I0129 07:43:54.640263 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb8cz\" (UniqueName: \"kubernetes.io/projected/8844c8da-85f3-4983-ac54-cb1f84673bba-kube-api-access-sb8cz\") on node \"crc\" DevicePath \"\"" Jan 29 07:43:54 crc kubenswrapper[5017]: I0129 07:43:54.640295 5017 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8844c8da-85f3-4983-ac54-cb1f84673bba-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 07:43:54 crc kubenswrapper[5017]: I0129 07:43:54.680877 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8844c8da-85f3-4983-ac54-cb1f84673bba-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8844c8da-85f3-4983-ac54-cb1f84673bba" (UID: "8844c8da-85f3-4983-ac54-cb1f84673bba"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:43:54 crc kubenswrapper[5017]: I0129 07:43:54.741771 5017 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8844c8da-85f3-4983-ac54-cb1f84673bba-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 07:43:54 crc kubenswrapper[5017]: I0129 07:43:54.867537 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nnc2s"] Jan 29 07:43:54 crc kubenswrapper[5017]: I0129 07:43:54.876031 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-nnc2s"] Jan 29 07:43:56 crc kubenswrapper[5017]: I0129 07:43:56.330825 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8844c8da-85f3-4983-ac54-cb1f84673bba" path="/var/lib/kubelet/pods/8844c8da-85f3-4983-ac54-cb1f84673bba/volumes" Jan 29 07:43:56 crc kubenswrapper[5017]: I0129 07:43:56.538783 5017 patch_prober.go:28] interesting pod/machine-config-daemon-895pl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 07:43:56 crc kubenswrapper[5017]: I0129 07:43:56.539211 5017 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 07:43:56 crc kubenswrapper[5017]: I0129 07:43:56.539267 5017 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-895pl" Jan 29 07:43:56 crc kubenswrapper[5017]: I0129 07:43:56.540135 5017 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"eb998d5ae2b03c78419bca96658b161f9df20e5df45dc5009deb04c807457a71"} pod="openshift-machine-config-operator/machine-config-daemon-895pl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 07:43:56 crc kubenswrapper[5017]: I0129 07:43:56.540211 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" containerID="cri-o://eb998d5ae2b03c78419bca96658b161f9df20e5df45dc5009deb04c807457a71" gracePeriod=600 Jan 29 07:43:57 crc kubenswrapper[5017]: I0129 07:43:57.549942 5017 generic.go:334] "Generic (PLEG): container finished" podID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerID="eb998d5ae2b03c78419bca96658b161f9df20e5df45dc5009deb04c807457a71" exitCode=0 Jan 29 07:43:57 crc kubenswrapper[5017]: I0129 07:43:57.550034 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-895pl" event={"ID":"2672ef63-7861-4c3d-a1b4-03cc9d18f8e2","Type":"ContainerDied","Data":"eb998d5ae2b03c78419bca96658b161f9df20e5df45dc5009deb04c807457a71"} Jan 29 07:43:57 crc kubenswrapper[5017]: I0129 07:43:57.550576 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-895pl" event={"ID":"2672ef63-7861-4c3d-a1b4-03cc9d18f8e2","Type":"ContainerStarted","Data":"ff0ef22b38a904235ce937687a5299a84b9a93c980c779f3d410e345948f7f25"} Jan 29 07:43:57 crc kubenswrapper[5017]: I0129 07:43:57.550616 5017 scope.go:117] "RemoveContainer" containerID="5e01b62b8a4eb7ca96687f9004d513e2db29ba172596fb145d9c596074e6a5b5" Jan 29 07:44:57 crc kubenswrapper[5017]: I0129 07:44:57.217384 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-65z4r"] Jan 29 07:44:57 crc kubenswrapper[5017]: E0129 07:44:57.218778 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8844c8da-85f3-4983-ac54-cb1f84673bba" containerName="extract-content" Jan 29 07:44:57 crc kubenswrapper[5017]: I0129 07:44:57.218800 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="8844c8da-85f3-4983-ac54-cb1f84673bba" containerName="extract-content" Jan 29 07:44:57 crc kubenswrapper[5017]: E0129 07:44:57.218824 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8844c8da-85f3-4983-ac54-cb1f84673bba" containerName="extract-utilities" Jan 29 07:44:57 crc kubenswrapper[5017]: I0129 07:44:57.218833 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="8844c8da-85f3-4983-ac54-cb1f84673bba" containerName="extract-utilities" Jan 29 07:44:57 crc kubenswrapper[5017]: E0129 07:44:57.218848 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8844c8da-85f3-4983-ac54-cb1f84673bba" containerName="registry-server" Jan 29 07:44:57 crc kubenswrapper[5017]: I0129 07:44:57.218856 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="8844c8da-85f3-4983-ac54-cb1f84673bba" containerName="registry-server" Jan 29 07:44:57 crc kubenswrapper[5017]: I0129 07:44:57.219087 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="8844c8da-85f3-4983-ac54-cb1f84673bba" containerName="registry-server" Jan 29 07:44:57 crc kubenswrapper[5017]: I0129 07:44:57.220581 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-65z4r" Jan 29 07:44:57 crc kubenswrapper[5017]: I0129 07:44:57.240188 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-65z4r"] Jan 29 07:44:57 crc kubenswrapper[5017]: I0129 07:44:57.390149 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgdhh\" (UniqueName: \"kubernetes.io/projected/4ae2106b-b9c8-43e9-9a0e-64720255c4df-kube-api-access-jgdhh\") pod \"certified-operators-65z4r\" (UID: \"4ae2106b-b9c8-43e9-9a0e-64720255c4df\") " pod="openshift-marketplace/certified-operators-65z4r" Jan 29 07:44:57 crc kubenswrapper[5017]: I0129 07:44:57.390261 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ae2106b-b9c8-43e9-9a0e-64720255c4df-utilities\") pod \"certified-operators-65z4r\" (UID: \"4ae2106b-b9c8-43e9-9a0e-64720255c4df\") " pod="openshift-marketplace/certified-operators-65z4r" Jan 29 07:44:57 crc kubenswrapper[5017]: I0129 07:44:57.390293 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ae2106b-b9c8-43e9-9a0e-64720255c4df-catalog-content\") pod \"certified-operators-65z4r\" (UID: \"4ae2106b-b9c8-43e9-9a0e-64720255c4df\") " pod="openshift-marketplace/certified-operators-65z4r" Jan 29 07:44:57 crc kubenswrapper[5017]: I0129 07:44:57.491979 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ae2106b-b9c8-43e9-9a0e-64720255c4df-utilities\") pod \"certified-operators-65z4r\" (UID: \"4ae2106b-b9c8-43e9-9a0e-64720255c4df\") " pod="openshift-marketplace/certified-operators-65z4r" Jan 29 07:44:57 crc kubenswrapper[5017]: I0129 07:44:57.492395 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ae2106b-b9c8-43e9-9a0e-64720255c4df-catalog-content\") pod \"certified-operators-65z4r\" (UID: \"4ae2106b-b9c8-43e9-9a0e-64720255c4df\") " pod="openshift-marketplace/certified-operators-65z4r" Jan 29 07:44:57 crc kubenswrapper[5017]: I0129 07:44:57.492664 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jgdhh\" (UniqueName: \"kubernetes.io/projected/4ae2106b-b9c8-43e9-9a0e-64720255c4df-kube-api-access-jgdhh\") pod \"certified-operators-65z4r\" (UID: \"4ae2106b-b9c8-43e9-9a0e-64720255c4df\") " pod="openshift-marketplace/certified-operators-65z4r" Jan 29 07:44:57 crc kubenswrapper[5017]: I0129 07:44:57.492705 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ae2106b-b9c8-43e9-9a0e-64720255c4df-utilities\") pod \"certified-operators-65z4r\" (UID: \"4ae2106b-b9c8-43e9-9a0e-64720255c4df\") " pod="openshift-marketplace/certified-operators-65z4r" Jan 29 07:44:57 crc kubenswrapper[5017]: I0129 07:44:57.494201 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ae2106b-b9c8-43e9-9a0e-64720255c4df-catalog-content\") pod \"certified-operators-65z4r\" (UID: \"4ae2106b-b9c8-43e9-9a0e-64720255c4df\") " pod="openshift-marketplace/certified-operators-65z4r" Jan 29 07:44:57 crc kubenswrapper[5017]: I0129 07:44:57.525633 5017 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-jgdhh\" (UniqueName: \"kubernetes.io/projected/4ae2106b-b9c8-43e9-9a0e-64720255c4df-kube-api-access-jgdhh\") pod \"certified-operators-65z4r\" (UID: \"4ae2106b-b9c8-43e9-9a0e-64720255c4df\") " pod="openshift-marketplace/certified-operators-65z4r" Jan 29 07:44:57 crc kubenswrapper[5017]: I0129 07:44:57.539479 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-65z4r" Jan 29 07:44:58 crc kubenswrapper[5017]: I0129 07:44:58.086706 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-65z4r"] Jan 29 07:44:59 crc kubenswrapper[5017]: I0129 07:44:59.040715 5017 generic.go:334] "Generic (PLEG): container finished" podID="4ae2106b-b9c8-43e9-9a0e-64720255c4df" containerID="969e10350177fad775cdaa133ebac9fb662afb271585ff957a72b739d703f547" exitCode=0 Jan 29 07:44:59 crc kubenswrapper[5017]: I0129 07:44:59.040830 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-65z4r" event={"ID":"4ae2106b-b9c8-43e9-9a0e-64720255c4df","Type":"ContainerDied","Data":"969e10350177fad775cdaa133ebac9fb662afb271585ff957a72b739d703f547"} Jan 29 07:44:59 crc kubenswrapper[5017]: I0129 07:44:59.041221 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-65z4r" event={"ID":"4ae2106b-b9c8-43e9-9a0e-64720255c4df","Type":"ContainerStarted","Data":"c73cd187e6ed86b31075e927bde57762169fd0d9eae01dafdfa6575aa4101c4f"} Jan 29 07:45:00 crc kubenswrapper[5017]: I0129 07:45:00.052992 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-65z4r" event={"ID":"4ae2106b-b9c8-43e9-9a0e-64720255c4df","Type":"ContainerStarted","Data":"5e84bcc9b3dcddda4fcdd1b732b89b0a2624ffc51c2a714bb04d0909a0a8b577"} Jan 29 07:45:00 crc kubenswrapper[5017]: I0129 07:45:00.202574 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494545-mhxnb"] Jan 29 07:45:00 crc kubenswrapper[5017]: I0129 07:45:00.203931 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494545-mhxnb" Jan 29 07:45:00 crc kubenswrapper[5017]: I0129 07:45:00.206613 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 29 07:45:00 crc kubenswrapper[5017]: I0129 07:45:00.215840 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494545-mhxnb"] Jan 29 07:45:00 crc kubenswrapper[5017]: I0129 07:45:00.218116 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 29 07:45:00 crc kubenswrapper[5017]: I0129 07:45:00.271165 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d3e1fed1-9cd5-4da5-8b48-390307883cff-config-volume\") pod \"collect-profiles-29494545-mhxnb\" (UID: \"d3e1fed1-9cd5-4da5-8b48-390307883cff\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494545-mhxnb" Jan 29 07:45:00 crc kubenswrapper[5017]: I0129 07:45:00.271244 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d3e1fed1-9cd5-4da5-8b48-390307883cff-secret-volume\") pod \"collect-profiles-29494545-mhxnb\" (UID: \"d3e1fed1-9cd5-4da5-8b48-390307883cff\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494545-mhxnb" Jan 29 07:45:00 crc kubenswrapper[5017]: I0129 07:45:00.271332 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmm6z\" (UniqueName: \"kubernetes.io/projected/d3e1fed1-9cd5-4da5-8b48-390307883cff-kube-api-access-kmm6z\") pod \"collect-profiles-29494545-mhxnb\" (UID: \"d3e1fed1-9cd5-4da5-8b48-390307883cff\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494545-mhxnb" Jan 29 07:45:00 crc kubenswrapper[5017]: I0129 07:45:00.373065 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmm6z\" (UniqueName: \"kubernetes.io/projected/d3e1fed1-9cd5-4da5-8b48-390307883cff-kube-api-access-kmm6z\") pod \"collect-profiles-29494545-mhxnb\" (UID: \"d3e1fed1-9cd5-4da5-8b48-390307883cff\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494545-mhxnb" Jan 29 07:45:00 crc kubenswrapper[5017]: I0129 07:45:00.373227 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d3e1fed1-9cd5-4da5-8b48-390307883cff-config-volume\") pod \"collect-profiles-29494545-mhxnb\" (UID: \"d3e1fed1-9cd5-4da5-8b48-390307883cff\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494545-mhxnb" Jan 29 07:45:00 crc kubenswrapper[5017]: I0129 07:45:00.373278 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d3e1fed1-9cd5-4da5-8b48-390307883cff-secret-volume\") pod \"collect-profiles-29494545-mhxnb\" (UID: \"d3e1fed1-9cd5-4da5-8b48-390307883cff\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494545-mhxnb" Jan 29 07:45:00 crc kubenswrapper[5017]: I0129 07:45:00.374659 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d3e1fed1-9cd5-4da5-8b48-390307883cff-config-volume\") pod 
\"collect-profiles-29494545-mhxnb\" (UID: \"d3e1fed1-9cd5-4da5-8b48-390307883cff\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494545-mhxnb" Jan 29 07:45:00 crc kubenswrapper[5017]: I0129 07:45:00.383815 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d3e1fed1-9cd5-4da5-8b48-390307883cff-secret-volume\") pod \"collect-profiles-29494545-mhxnb\" (UID: \"d3e1fed1-9cd5-4da5-8b48-390307883cff\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494545-mhxnb" Jan 29 07:45:00 crc kubenswrapper[5017]: I0129 07:45:00.394565 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmm6z\" (UniqueName: \"kubernetes.io/projected/d3e1fed1-9cd5-4da5-8b48-390307883cff-kube-api-access-kmm6z\") pod \"collect-profiles-29494545-mhxnb\" (UID: \"d3e1fed1-9cd5-4da5-8b48-390307883cff\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494545-mhxnb" Jan 29 07:45:00 crc kubenswrapper[5017]: I0129 07:45:00.522082 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494545-mhxnb" Jan 29 07:45:01 crc kubenswrapper[5017]: I0129 07:45:01.043793 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494545-mhxnb"] Jan 29 07:45:01 crc kubenswrapper[5017]: I0129 07:45:01.063257 5017 generic.go:334] "Generic (PLEG): container finished" podID="4ae2106b-b9c8-43e9-9a0e-64720255c4df" containerID="5e84bcc9b3dcddda4fcdd1b732b89b0a2624ffc51c2a714bb04d0909a0a8b577" exitCode=0 Jan 29 07:45:01 crc kubenswrapper[5017]: I0129 07:45:01.063306 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-65z4r" event={"ID":"4ae2106b-b9c8-43e9-9a0e-64720255c4df","Type":"ContainerDied","Data":"5e84bcc9b3dcddda4fcdd1b732b89b0a2624ffc51c2a714bb04d0909a0a8b577"} Jan 29 07:45:02 crc kubenswrapper[5017]: I0129 07:45:02.072874 5017 generic.go:334] "Generic (PLEG): container finished" podID="d3e1fed1-9cd5-4da5-8b48-390307883cff" containerID="c98aee6ff14f228c2ae16b620bccf2f7b7af5686620b8ff627b8ee49dba7b7a5" exitCode=0 Jan 29 07:45:02 crc kubenswrapper[5017]: I0129 07:45:02.072983 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494545-mhxnb" event={"ID":"d3e1fed1-9cd5-4da5-8b48-390307883cff","Type":"ContainerDied","Data":"c98aee6ff14f228c2ae16b620bccf2f7b7af5686620b8ff627b8ee49dba7b7a5"} Jan 29 07:45:02 crc kubenswrapper[5017]: I0129 07:45:02.073363 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494545-mhxnb" event={"ID":"d3e1fed1-9cd5-4da5-8b48-390307883cff","Type":"ContainerStarted","Data":"a39c86b3348b293d0f654a7233c405bef727af09a4e0b9c02fee1ef4880108fc"} Jan 29 07:45:02 crc kubenswrapper[5017]: I0129 07:45:02.076645 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-65z4r" event={"ID":"4ae2106b-b9c8-43e9-9a0e-64720255c4df","Type":"ContainerStarted","Data":"f23d7875f43a841e17e3257748f814e6a7ad9e568f26e5886b2f8e45a6e98c15"} Jan 29 07:45:02 crc kubenswrapper[5017]: I0129 07:45:02.112691 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-65z4r" podStartSLOduration=2.47391649 podStartE2EDuration="5.112661411s" podCreationTimestamp="2026-01-29 07:44:57 +0000 
UTC" firstStartedPulling="2026-01-29 07:44:59.04372874 +0000 UTC m=+4185.418176350" lastFinishedPulling="2026-01-29 07:45:01.682473661 +0000 UTC m=+4188.056921271" observedRunningTime="2026-01-29 07:45:02.107920785 +0000 UTC m=+4188.482368395" watchObservedRunningTime="2026-01-29 07:45:02.112661411 +0000 UTC m=+4188.487109021" Jan 29 07:45:03 crc kubenswrapper[5017]: I0129 07:45:03.342743 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494545-mhxnb" Jan 29 07:45:03 crc kubenswrapper[5017]: I0129 07:45:03.431139 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d3e1fed1-9cd5-4da5-8b48-390307883cff-config-volume\") pod \"d3e1fed1-9cd5-4da5-8b48-390307883cff\" (UID: \"d3e1fed1-9cd5-4da5-8b48-390307883cff\") " Jan 29 07:45:03 crc kubenswrapper[5017]: I0129 07:45:03.431255 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kmm6z\" (UniqueName: \"kubernetes.io/projected/d3e1fed1-9cd5-4da5-8b48-390307883cff-kube-api-access-kmm6z\") pod \"d3e1fed1-9cd5-4da5-8b48-390307883cff\" (UID: \"d3e1fed1-9cd5-4da5-8b48-390307883cff\") " Jan 29 07:45:03 crc kubenswrapper[5017]: I0129 07:45:03.431282 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d3e1fed1-9cd5-4da5-8b48-390307883cff-secret-volume\") pod \"d3e1fed1-9cd5-4da5-8b48-390307883cff\" (UID: \"d3e1fed1-9cd5-4da5-8b48-390307883cff\") " Jan 29 07:45:03 crc kubenswrapper[5017]: I0129 07:45:03.432194 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3e1fed1-9cd5-4da5-8b48-390307883cff-config-volume" (OuterVolumeSpecName: "config-volume") pod "d3e1fed1-9cd5-4da5-8b48-390307883cff" (UID: "d3e1fed1-9cd5-4da5-8b48-390307883cff"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 07:45:03 crc kubenswrapper[5017]: I0129 07:45:03.442465 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3e1fed1-9cd5-4da5-8b48-390307883cff-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "d3e1fed1-9cd5-4da5-8b48-390307883cff" (UID: "d3e1fed1-9cd5-4da5-8b48-390307883cff"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:45:03 crc kubenswrapper[5017]: I0129 07:45:03.442499 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3e1fed1-9cd5-4da5-8b48-390307883cff-kube-api-access-kmm6z" (OuterVolumeSpecName: "kube-api-access-kmm6z") pod "d3e1fed1-9cd5-4da5-8b48-390307883cff" (UID: "d3e1fed1-9cd5-4da5-8b48-390307883cff"). InnerVolumeSpecName "kube-api-access-kmm6z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:45:03 crc kubenswrapper[5017]: I0129 07:45:03.533356 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kmm6z\" (UniqueName: \"kubernetes.io/projected/d3e1fed1-9cd5-4da5-8b48-390307883cff-kube-api-access-kmm6z\") on node \"crc\" DevicePath \"\"" Jan 29 07:45:03 crc kubenswrapper[5017]: I0129 07:45:03.533401 5017 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d3e1fed1-9cd5-4da5-8b48-390307883cff-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 29 07:45:03 crc kubenswrapper[5017]: I0129 07:45:03.533416 5017 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d3e1fed1-9cd5-4da5-8b48-390307883cff-config-volume\") on node \"crc\" DevicePath \"\"" Jan 29 07:45:04 crc kubenswrapper[5017]: I0129 07:45:04.103014 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494545-mhxnb" event={"ID":"d3e1fed1-9cd5-4da5-8b48-390307883cff","Type":"ContainerDied","Data":"a39c86b3348b293d0f654a7233c405bef727af09a4e0b9c02fee1ef4880108fc"} Jan 29 07:45:04 crc kubenswrapper[5017]: I0129 07:45:04.103084 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a39c86b3348b293d0f654a7233c405bef727af09a4e0b9c02fee1ef4880108fc" Jan 29 07:45:04 crc kubenswrapper[5017]: I0129 07:45:04.103274 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494545-mhxnb" Jan 29 07:45:04 crc kubenswrapper[5017]: I0129 07:45:04.420881 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494500-b5nnz"] Jan 29 07:45:04 crc kubenswrapper[5017]: I0129 07:45:04.426894 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494500-b5nnz"] Jan 29 07:45:06 crc kubenswrapper[5017]: I0129 07:45:06.327343 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59d62406-d251-4109-8b53-199276f89853" path="/var/lib/kubelet/pods/59d62406-d251-4109-8b53-199276f89853/volumes" Jan 29 07:45:07 crc kubenswrapper[5017]: I0129 07:45:07.540319 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-65z4r" Jan 29 07:45:07 crc kubenswrapper[5017]: I0129 07:45:07.540385 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-65z4r" Jan 29 07:45:07 crc kubenswrapper[5017]: I0129 07:45:07.720162 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-65z4r" Jan 29 07:45:08 crc kubenswrapper[5017]: I0129 07:45:08.183216 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-65z4r" Jan 29 07:45:08 crc kubenswrapper[5017]: I0129 07:45:08.239190 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-65z4r"] Jan 29 07:45:10 crc kubenswrapper[5017]: I0129 07:45:10.154581 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-65z4r" podUID="4ae2106b-b9c8-43e9-9a0e-64720255c4df" containerName="registry-server" 
containerID="cri-o://f23d7875f43a841e17e3257748f814e6a7ad9e568f26e5886b2f8e45a6e98c15" gracePeriod=2 Jan 29 07:45:10 crc kubenswrapper[5017]: I0129 07:45:10.635803 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-65z4r" Jan 29 07:45:10 crc kubenswrapper[5017]: I0129 07:45:10.764601 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jgdhh\" (UniqueName: \"kubernetes.io/projected/4ae2106b-b9c8-43e9-9a0e-64720255c4df-kube-api-access-jgdhh\") pod \"4ae2106b-b9c8-43e9-9a0e-64720255c4df\" (UID: \"4ae2106b-b9c8-43e9-9a0e-64720255c4df\") " Jan 29 07:45:10 crc kubenswrapper[5017]: I0129 07:45:10.764706 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ae2106b-b9c8-43e9-9a0e-64720255c4df-utilities\") pod \"4ae2106b-b9c8-43e9-9a0e-64720255c4df\" (UID: \"4ae2106b-b9c8-43e9-9a0e-64720255c4df\") " Jan 29 07:45:10 crc kubenswrapper[5017]: I0129 07:45:10.764809 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ae2106b-b9c8-43e9-9a0e-64720255c4df-catalog-content\") pod \"4ae2106b-b9c8-43e9-9a0e-64720255c4df\" (UID: \"4ae2106b-b9c8-43e9-9a0e-64720255c4df\") " Jan 29 07:45:10 crc kubenswrapper[5017]: I0129 07:45:10.766075 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ae2106b-b9c8-43e9-9a0e-64720255c4df-utilities" (OuterVolumeSpecName: "utilities") pod "4ae2106b-b9c8-43e9-9a0e-64720255c4df" (UID: "4ae2106b-b9c8-43e9-9a0e-64720255c4df"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:45:10 crc kubenswrapper[5017]: I0129 07:45:10.771518 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ae2106b-b9c8-43e9-9a0e-64720255c4df-kube-api-access-jgdhh" (OuterVolumeSpecName: "kube-api-access-jgdhh") pod "4ae2106b-b9c8-43e9-9a0e-64720255c4df" (UID: "4ae2106b-b9c8-43e9-9a0e-64720255c4df"). InnerVolumeSpecName "kube-api-access-jgdhh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:45:10 crc kubenswrapper[5017]: I0129 07:45:10.817752 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ae2106b-b9c8-43e9-9a0e-64720255c4df-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4ae2106b-b9c8-43e9-9a0e-64720255c4df" (UID: "4ae2106b-b9c8-43e9-9a0e-64720255c4df"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:45:10 crc kubenswrapper[5017]: I0129 07:45:10.866926 5017 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ae2106b-b9c8-43e9-9a0e-64720255c4df-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 07:45:10 crc kubenswrapper[5017]: I0129 07:45:10.866985 5017 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ae2106b-b9c8-43e9-9a0e-64720255c4df-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 07:45:10 crc kubenswrapper[5017]: I0129 07:45:10.867000 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jgdhh\" (UniqueName: \"kubernetes.io/projected/4ae2106b-b9c8-43e9-9a0e-64720255c4df-kube-api-access-jgdhh\") on node \"crc\" DevicePath \"\"" Jan 29 07:45:11 crc kubenswrapper[5017]: I0129 07:45:11.166779 5017 generic.go:334] "Generic (PLEG): container finished" podID="4ae2106b-b9c8-43e9-9a0e-64720255c4df" containerID="f23d7875f43a841e17e3257748f814e6a7ad9e568f26e5886b2f8e45a6e98c15" exitCode=0 Jan 29 07:45:11 crc kubenswrapper[5017]: I0129 07:45:11.166832 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-65z4r" event={"ID":"4ae2106b-b9c8-43e9-9a0e-64720255c4df","Type":"ContainerDied","Data":"f23d7875f43a841e17e3257748f814e6a7ad9e568f26e5886b2f8e45a6e98c15"} Jan 29 07:45:11 crc kubenswrapper[5017]: I0129 07:45:11.166855 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-65z4r" Jan 29 07:45:11 crc kubenswrapper[5017]: I0129 07:45:11.166894 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-65z4r" event={"ID":"4ae2106b-b9c8-43e9-9a0e-64720255c4df","Type":"ContainerDied","Data":"c73cd187e6ed86b31075e927bde57762169fd0d9eae01dafdfa6575aa4101c4f"} Jan 29 07:45:11 crc kubenswrapper[5017]: I0129 07:45:11.166925 5017 scope.go:117] "RemoveContainer" containerID="f23d7875f43a841e17e3257748f814e6a7ad9e568f26e5886b2f8e45a6e98c15" Jan 29 07:45:11 crc kubenswrapper[5017]: I0129 07:45:11.193973 5017 scope.go:117] "RemoveContainer" containerID="5e84bcc9b3dcddda4fcdd1b732b89b0a2624ffc51c2a714bb04d0909a0a8b577" Jan 29 07:45:11 crc kubenswrapper[5017]: I0129 07:45:11.203932 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-65z4r"] Jan 29 07:45:11 crc kubenswrapper[5017]: I0129 07:45:11.213358 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-65z4r"] Jan 29 07:45:11 crc kubenswrapper[5017]: I0129 07:45:11.220010 5017 scope.go:117] "RemoveContainer" containerID="969e10350177fad775cdaa133ebac9fb662afb271585ff957a72b739d703f547" Jan 29 07:45:11 crc kubenswrapper[5017]: I0129 07:45:11.243255 5017 scope.go:117] "RemoveContainer" containerID="f23d7875f43a841e17e3257748f814e6a7ad9e568f26e5886b2f8e45a6e98c15" Jan 29 07:45:11 crc kubenswrapper[5017]: E0129 07:45:11.243776 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f23d7875f43a841e17e3257748f814e6a7ad9e568f26e5886b2f8e45a6e98c15\": container with ID starting with f23d7875f43a841e17e3257748f814e6a7ad9e568f26e5886b2f8e45a6e98c15 not found: ID does not exist" containerID="f23d7875f43a841e17e3257748f814e6a7ad9e568f26e5886b2f8e45a6e98c15" Jan 29 07:45:11 crc kubenswrapper[5017]: I0129 07:45:11.243814 
5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f23d7875f43a841e17e3257748f814e6a7ad9e568f26e5886b2f8e45a6e98c15"} err="failed to get container status \"f23d7875f43a841e17e3257748f814e6a7ad9e568f26e5886b2f8e45a6e98c15\": rpc error: code = NotFound desc = could not find container \"f23d7875f43a841e17e3257748f814e6a7ad9e568f26e5886b2f8e45a6e98c15\": container with ID starting with f23d7875f43a841e17e3257748f814e6a7ad9e568f26e5886b2f8e45a6e98c15 not found: ID does not exist" Jan 29 07:45:11 crc kubenswrapper[5017]: I0129 07:45:11.243845 5017 scope.go:117] "RemoveContainer" containerID="5e84bcc9b3dcddda4fcdd1b732b89b0a2624ffc51c2a714bb04d0909a0a8b577" Jan 29 07:45:11 crc kubenswrapper[5017]: E0129 07:45:11.244648 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e84bcc9b3dcddda4fcdd1b732b89b0a2624ffc51c2a714bb04d0909a0a8b577\": container with ID starting with 5e84bcc9b3dcddda4fcdd1b732b89b0a2624ffc51c2a714bb04d0909a0a8b577 not found: ID does not exist" containerID="5e84bcc9b3dcddda4fcdd1b732b89b0a2624ffc51c2a714bb04d0909a0a8b577" Jan 29 07:45:11 crc kubenswrapper[5017]: I0129 07:45:11.244700 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e84bcc9b3dcddda4fcdd1b732b89b0a2624ffc51c2a714bb04d0909a0a8b577"} err="failed to get container status \"5e84bcc9b3dcddda4fcdd1b732b89b0a2624ffc51c2a714bb04d0909a0a8b577\": rpc error: code = NotFound desc = could not find container \"5e84bcc9b3dcddda4fcdd1b732b89b0a2624ffc51c2a714bb04d0909a0a8b577\": container with ID starting with 5e84bcc9b3dcddda4fcdd1b732b89b0a2624ffc51c2a714bb04d0909a0a8b577 not found: ID does not exist" Jan 29 07:45:11 crc kubenswrapper[5017]: I0129 07:45:11.244739 5017 scope.go:117] "RemoveContainer" containerID="969e10350177fad775cdaa133ebac9fb662afb271585ff957a72b739d703f547" Jan 29 07:45:11 crc kubenswrapper[5017]: E0129 07:45:11.245075 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"969e10350177fad775cdaa133ebac9fb662afb271585ff957a72b739d703f547\": container with ID starting with 969e10350177fad775cdaa133ebac9fb662afb271585ff957a72b739d703f547 not found: ID does not exist" containerID="969e10350177fad775cdaa133ebac9fb662afb271585ff957a72b739d703f547" Jan 29 07:45:11 crc kubenswrapper[5017]: I0129 07:45:11.245098 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"969e10350177fad775cdaa133ebac9fb662afb271585ff957a72b739d703f547"} err="failed to get container status \"969e10350177fad775cdaa133ebac9fb662afb271585ff957a72b739d703f547\": rpc error: code = NotFound desc = could not find container \"969e10350177fad775cdaa133ebac9fb662afb271585ff957a72b739d703f547\": container with ID starting with 969e10350177fad775cdaa133ebac9fb662afb271585ff957a72b739d703f547 not found: ID does not exist" Jan 29 07:45:12 crc kubenswrapper[5017]: I0129 07:45:12.325779 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ae2106b-b9c8-43e9-9a0e-64720255c4df" path="/var/lib/kubelet/pods/4ae2106b-b9c8-43e9-9a0e-64720255c4df/volumes" Jan 29 07:45:22 crc kubenswrapper[5017]: I0129 07:45:22.798383 5017 scope.go:117] "RemoveContainer" containerID="589514c38a1db2f0cf875619ab13082e9ba23e54a5e90693d464e405f2c436dc" Jan 29 07:45:56 crc kubenswrapper[5017]: I0129 07:45:56.538881 5017 patch_prober.go:28] interesting 
pod/machine-config-daemon-895pl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 07:45:56 crc kubenswrapper[5017]: I0129 07:45:56.539659 5017 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 07:46:26 crc kubenswrapper[5017]: I0129 07:46:26.539669 5017 patch_prober.go:28] interesting pod/machine-config-daemon-895pl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 07:46:26 crc kubenswrapper[5017]: I0129 07:46:26.540666 5017 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 07:46:56 crc kubenswrapper[5017]: I0129 07:46:56.539321 5017 patch_prober.go:28] interesting pod/machine-config-daemon-895pl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 07:46:56 crc kubenswrapper[5017]: I0129 07:46:56.540067 5017 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 07:46:56 crc kubenswrapper[5017]: I0129 07:46:56.540136 5017 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-895pl" Jan 29 07:46:56 crc kubenswrapper[5017]: I0129 07:46:56.541575 5017 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ff0ef22b38a904235ce937687a5299a84b9a93c980c779f3d410e345948f7f25"} pod="openshift-machine-config-operator/machine-config-daemon-895pl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 07:46:56 crc kubenswrapper[5017]: I0129 07:46:56.541753 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" containerID="cri-o://ff0ef22b38a904235ce937687a5299a84b9a93c980c779f3d410e345948f7f25" gracePeriod=600 Jan 29 07:46:56 crc kubenswrapper[5017]: E0129 07:46:56.666826 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 07:46:57 crc kubenswrapper[5017]: I0129 07:46:57.019528 5017 generic.go:334] "Generic (PLEG): container finished" podID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerID="ff0ef22b38a904235ce937687a5299a84b9a93c980c779f3d410e345948f7f25" exitCode=0 Jan 29 07:46:57 crc kubenswrapper[5017]: I0129 07:46:57.019651 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-895pl" event={"ID":"2672ef63-7861-4c3d-a1b4-03cc9d18f8e2","Type":"ContainerDied","Data":"ff0ef22b38a904235ce937687a5299a84b9a93c980c779f3d410e345948f7f25"} Jan 29 07:46:57 crc kubenswrapper[5017]: I0129 07:46:57.019851 5017 scope.go:117] "RemoveContainer" containerID="eb998d5ae2b03c78419bca96658b161f9df20e5df45dc5009deb04c807457a71" Jan 29 07:46:57 crc kubenswrapper[5017]: I0129 07:46:57.021464 5017 scope.go:117] "RemoveContainer" containerID="ff0ef22b38a904235ce937687a5299a84b9a93c980c779f3d410e345948f7f25" Jan 29 07:46:57 crc kubenswrapper[5017]: E0129 07:46:57.021897 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 07:47:11 crc kubenswrapper[5017]: I0129 07:47:11.316720 5017 scope.go:117] "RemoveContainer" containerID="ff0ef22b38a904235ce937687a5299a84b9a93c980c779f3d410e345948f7f25" Jan 29 07:47:11 crc kubenswrapper[5017]: E0129 07:47:11.318022 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 07:47:22 crc kubenswrapper[5017]: I0129 07:47:22.316757 5017 scope.go:117] "RemoveContainer" containerID="ff0ef22b38a904235ce937687a5299a84b9a93c980c779f3d410e345948f7f25" Jan 29 07:47:22 crc kubenswrapper[5017]: E0129 07:47:22.317887 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 07:47:36 crc kubenswrapper[5017]: I0129 07:47:36.316657 5017 scope.go:117] "RemoveContainer" containerID="ff0ef22b38a904235ce937687a5299a84b9a93c980c779f3d410e345948f7f25" Jan 29 07:47:36 crc kubenswrapper[5017]: E0129 07:47:36.317722 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" 
podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 07:47:51 crc kubenswrapper[5017]: I0129 07:47:51.317612 5017 scope.go:117] "RemoveContainer" containerID="ff0ef22b38a904235ce937687a5299a84b9a93c980c779f3d410e345948f7f25" Jan 29 07:47:51 crc kubenswrapper[5017]: E0129 07:47:51.318602 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 07:48:04 crc kubenswrapper[5017]: I0129 07:48:04.321667 5017 scope.go:117] "RemoveContainer" containerID="ff0ef22b38a904235ce937687a5299a84b9a93c980c779f3d410e345948f7f25" Jan 29 07:48:04 crc kubenswrapper[5017]: E0129 07:48:04.322899 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 07:48:15 crc kubenswrapper[5017]: I0129 07:48:15.316717 5017 scope.go:117] "RemoveContainer" containerID="ff0ef22b38a904235ce937687a5299a84b9a93c980c779f3d410e345948f7f25" Jan 29 07:48:15 crc kubenswrapper[5017]: E0129 07:48:15.317823 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 07:48:22 crc kubenswrapper[5017]: I0129 07:48:22.899354 5017 scope.go:117] "RemoveContainer" containerID="7f5d35d414c479009feaea65310508899b3db0dfb85121ba9e1fd06f0c3e5fa6" Jan 29 07:48:22 crc kubenswrapper[5017]: I0129 07:48:22.922481 5017 scope.go:117] "RemoveContainer" containerID="70f2e9cde3f4153973bfd0f7ff700a72eac88234927a74a5b41bb1320d62012c" Jan 29 07:48:22 crc kubenswrapper[5017]: I0129 07:48:22.941571 5017 scope.go:117] "RemoveContainer" containerID="89e2b3dd751c4be0aed20ad163ad5abd33e10ed73737efb7c146c06dfc6af2cf" Jan 29 07:48:26 crc kubenswrapper[5017]: I0129 07:48:26.318366 5017 scope.go:117] "RemoveContainer" containerID="ff0ef22b38a904235ce937687a5299a84b9a93c980c779f3d410e345948f7f25" Jan 29 07:48:26 crc kubenswrapper[5017]: E0129 07:48:26.319596 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 07:48:37 crc kubenswrapper[5017]: I0129 07:48:37.316235 5017 scope.go:117] "RemoveContainer" containerID="ff0ef22b38a904235ce937687a5299a84b9a93c980c779f3d410e345948f7f25" Jan 29 07:48:37 crc kubenswrapper[5017]: E0129 07:48:37.317574 5017 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 07:48:50 crc kubenswrapper[5017]: I0129 07:48:50.316712 5017 scope.go:117] "RemoveContainer" containerID="ff0ef22b38a904235ce937687a5299a84b9a93c980c779f3d410e345948f7f25" Jan 29 07:48:50 crc kubenswrapper[5017]: E0129 07:48:50.317713 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 07:48:59 crc kubenswrapper[5017]: I0129 07:48:59.030565 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["crc-storage/crc-storage-crc-hdf2s"] Jan 29 07:48:59 crc kubenswrapper[5017]: I0129 07:48:59.036942 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["crc-storage/crc-storage-crc-hdf2s"] Jan 29 07:48:59 crc kubenswrapper[5017]: I0129 07:48:59.150772 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-xftgv"] Jan 29 07:48:59 crc kubenswrapper[5017]: E0129 07:48:59.151120 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ae2106b-b9c8-43e9-9a0e-64720255c4df" containerName="extract-content" Jan 29 07:48:59 crc kubenswrapper[5017]: I0129 07:48:59.151135 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ae2106b-b9c8-43e9-9a0e-64720255c4df" containerName="extract-content" Jan 29 07:48:59 crc kubenswrapper[5017]: E0129 07:48:59.151159 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ae2106b-b9c8-43e9-9a0e-64720255c4df" containerName="registry-server" Jan 29 07:48:59 crc kubenswrapper[5017]: I0129 07:48:59.151167 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ae2106b-b9c8-43e9-9a0e-64720255c4df" containerName="registry-server" Jan 29 07:48:59 crc kubenswrapper[5017]: E0129 07:48:59.151178 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3e1fed1-9cd5-4da5-8b48-390307883cff" containerName="collect-profiles" Jan 29 07:48:59 crc kubenswrapper[5017]: I0129 07:48:59.151185 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3e1fed1-9cd5-4da5-8b48-390307883cff" containerName="collect-profiles" Jan 29 07:48:59 crc kubenswrapper[5017]: E0129 07:48:59.151205 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ae2106b-b9c8-43e9-9a0e-64720255c4df" containerName="extract-utilities" Jan 29 07:48:59 crc kubenswrapper[5017]: I0129 07:48:59.151228 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ae2106b-b9c8-43e9-9a0e-64720255c4df" containerName="extract-utilities" Jan 29 07:48:59 crc kubenswrapper[5017]: I0129 07:48:59.151387 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3e1fed1-9cd5-4da5-8b48-390307883cff" containerName="collect-profiles" Jan 29 07:48:59 crc kubenswrapper[5017]: I0129 07:48:59.151404 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ae2106b-b9c8-43e9-9a0e-64720255c4df" containerName="registry-server" Jan 29 07:48:59 
crc kubenswrapper[5017]: I0129 07:48:59.151930 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-xftgv" Jan 29 07:48:59 crc kubenswrapper[5017]: I0129 07:48:59.157387 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Jan 29 07:48:59 crc kubenswrapper[5017]: I0129 07:48:59.157456 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Jan 29 07:48:59 crc kubenswrapper[5017]: I0129 07:48:59.157500 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Jan 29 07:48:59 crc kubenswrapper[5017]: I0129 07:48:59.158742 5017 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-bz6ff" Jan 29 07:48:59 crc kubenswrapper[5017]: I0129 07:48:59.169844 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-xftgv"] Jan 29 07:48:59 crc kubenswrapper[5017]: I0129 07:48:59.222188 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdt78\" (UniqueName: \"kubernetes.io/projected/5271f867-12e8-4fab-86d5-d700cf911ede-kube-api-access-hdt78\") pod \"crc-storage-crc-xftgv\" (UID: \"5271f867-12e8-4fab-86d5-d700cf911ede\") " pod="crc-storage/crc-storage-crc-xftgv" Jan 29 07:48:59 crc kubenswrapper[5017]: I0129 07:48:59.222264 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/5271f867-12e8-4fab-86d5-d700cf911ede-crc-storage\") pod \"crc-storage-crc-xftgv\" (UID: \"5271f867-12e8-4fab-86d5-d700cf911ede\") " pod="crc-storage/crc-storage-crc-xftgv" Jan 29 07:48:59 crc kubenswrapper[5017]: I0129 07:48:59.222496 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/5271f867-12e8-4fab-86d5-d700cf911ede-node-mnt\") pod \"crc-storage-crc-xftgv\" (UID: \"5271f867-12e8-4fab-86d5-d700cf911ede\") " pod="crc-storage/crc-storage-crc-xftgv" Jan 29 07:48:59 crc kubenswrapper[5017]: I0129 07:48:59.323341 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/5271f867-12e8-4fab-86d5-d700cf911ede-crc-storage\") pod \"crc-storage-crc-xftgv\" (UID: \"5271f867-12e8-4fab-86d5-d700cf911ede\") " pod="crc-storage/crc-storage-crc-xftgv" Jan 29 07:48:59 crc kubenswrapper[5017]: I0129 07:48:59.323407 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/5271f867-12e8-4fab-86d5-d700cf911ede-node-mnt\") pod \"crc-storage-crc-xftgv\" (UID: \"5271f867-12e8-4fab-86d5-d700cf911ede\") " pod="crc-storage/crc-storage-crc-xftgv" Jan 29 07:48:59 crc kubenswrapper[5017]: I0129 07:48:59.323783 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdt78\" (UniqueName: \"kubernetes.io/projected/5271f867-12e8-4fab-86d5-d700cf911ede-kube-api-access-hdt78\") pod \"crc-storage-crc-xftgv\" (UID: \"5271f867-12e8-4fab-86d5-d700cf911ede\") " pod="crc-storage/crc-storage-crc-xftgv" Jan 29 07:48:59 crc kubenswrapper[5017]: I0129 07:48:59.323832 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/5271f867-12e8-4fab-86d5-d700cf911ede-node-mnt\") pod 
\"crc-storage-crc-xftgv\" (UID: \"5271f867-12e8-4fab-86d5-d700cf911ede\") " pod="crc-storage/crc-storage-crc-xftgv" Jan 29 07:48:59 crc kubenswrapper[5017]: I0129 07:48:59.324383 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/5271f867-12e8-4fab-86d5-d700cf911ede-crc-storage\") pod \"crc-storage-crc-xftgv\" (UID: \"5271f867-12e8-4fab-86d5-d700cf911ede\") " pod="crc-storage/crc-storage-crc-xftgv" Jan 29 07:48:59 crc kubenswrapper[5017]: I0129 07:48:59.347889 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdt78\" (UniqueName: \"kubernetes.io/projected/5271f867-12e8-4fab-86d5-d700cf911ede-kube-api-access-hdt78\") pod \"crc-storage-crc-xftgv\" (UID: \"5271f867-12e8-4fab-86d5-d700cf911ede\") " pod="crc-storage/crc-storage-crc-xftgv" Jan 29 07:48:59 crc kubenswrapper[5017]: I0129 07:48:59.482259 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-xftgv" Jan 29 07:48:59 crc kubenswrapper[5017]: I0129 07:48:59.787633 5017 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 07:48:59 crc kubenswrapper[5017]: I0129 07:48:59.791202 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-xftgv"] Jan 29 07:49:00 crc kubenswrapper[5017]: I0129 07:49:00.048371 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-xftgv" event={"ID":"5271f867-12e8-4fab-86d5-d700cf911ede","Type":"ContainerStarted","Data":"e3369b160378d766d462314cccc11a35d334062b638eb7f87a5ee8c9f5db2b9d"} Jan 29 07:49:00 crc kubenswrapper[5017]: I0129 07:49:00.334049 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="150cf209-f2d1-4a9a-b965-cd5c4f41106f" path="/var/lib/kubelet/pods/150cf209-f2d1-4a9a-b965-cd5c4f41106f/volumes" Jan 29 07:49:01 crc kubenswrapper[5017]: I0129 07:49:01.064065 5017 generic.go:334] "Generic (PLEG): container finished" podID="5271f867-12e8-4fab-86d5-d700cf911ede" containerID="fb3bfd7444f0544e3eb2675f3a7ef90731a676452b1ccf1ffe46ef0e9bbebe60" exitCode=0 Jan 29 07:49:01 crc kubenswrapper[5017]: I0129 07:49:01.064158 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-xftgv" event={"ID":"5271f867-12e8-4fab-86d5-d700cf911ede","Type":"ContainerDied","Data":"fb3bfd7444f0544e3eb2675f3a7ef90731a676452b1ccf1ffe46ef0e9bbebe60"} Jan 29 07:49:02 crc kubenswrapper[5017]: I0129 07:49:02.316446 5017 scope.go:117] "RemoveContainer" containerID="ff0ef22b38a904235ce937687a5299a84b9a93c980c779f3d410e345948f7f25" Jan 29 07:49:02 crc kubenswrapper[5017]: E0129 07:49:02.317376 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 07:49:02 crc kubenswrapper[5017]: I0129 07:49:02.387881 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-xftgv" Jan 29 07:49:02 crc kubenswrapper[5017]: I0129 07:49:02.585699 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/5271f867-12e8-4fab-86d5-d700cf911ede-crc-storage\") pod \"5271f867-12e8-4fab-86d5-d700cf911ede\" (UID: \"5271f867-12e8-4fab-86d5-d700cf911ede\") " Jan 29 07:49:02 crc kubenswrapper[5017]: I0129 07:49:02.586183 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hdt78\" (UniqueName: \"kubernetes.io/projected/5271f867-12e8-4fab-86d5-d700cf911ede-kube-api-access-hdt78\") pod \"5271f867-12e8-4fab-86d5-d700cf911ede\" (UID: \"5271f867-12e8-4fab-86d5-d700cf911ede\") " Jan 29 07:49:02 crc kubenswrapper[5017]: I0129 07:49:02.586368 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/5271f867-12e8-4fab-86d5-d700cf911ede-node-mnt\") pod \"5271f867-12e8-4fab-86d5-d700cf911ede\" (UID: \"5271f867-12e8-4fab-86d5-d700cf911ede\") " Jan 29 07:49:02 crc kubenswrapper[5017]: I0129 07:49:02.586769 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5271f867-12e8-4fab-86d5-d700cf911ede-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "5271f867-12e8-4fab-86d5-d700cf911ede" (UID: "5271f867-12e8-4fab-86d5-d700cf911ede"). InnerVolumeSpecName "node-mnt". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 07:49:02 crc kubenswrapper[5017]: I0129 07:49:02.593899 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5271f867-12e8-4fab-86d5-d700cf911ede-kube-api-access-hdt78" (OuterVolumeSpecName: "kube-api-access-hdt78") pod "5271f867-12e8-4fab-86d5-d700cf911ede" (UID: "5271f867-12e8-4fab-86d5-d700cf911ede"). InnerVolumeSpecName "kube-api-access-hdt78". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:49:02 crc kubenswrapper[5017]: I0129 07:49:02.611082 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5271f867-12e8-4fab-86d5-d700cf911ede-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "5271f867-12e8-4fab-86d5-d700cf911ede" (UID: "5271f867-12e8-4fab-86d5-d700cf911ede"). InnerVolumeSpecName "crc-storage". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 07:49:02 crc kubenswrapper[5017]: I0129 07:49:02.688532 5017 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/5271f867-12e8-4fab-86d5-d700cf911ede-node-mnt\") on node \"crc\" DevicePath \"\"" Jan 29 07:49:02 crc kubenswrapper[5017]: I0129 07:49:02.688582 5017 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/5271f867-12e8-4fab-86d5-d700cf911ede-crc-storage\") on node \"crc\" DevicePath \"\"" Jan 29 07:49:02 crc kubenswrapper[5017]: I0129 07:49:02.689811 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hdt78\" (UniqueName: \"kubernetes.io/projected/5271f867-12e8-4fab-86d5-d700cf911ede-kube-api-access-hdt78\") on node \"crc\" DevicePath \"\"" Jan 29 07:49:03 crc kubenswrapper[5017]: I0129 07:49:03.084390 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-xftgv" event={"ID":"5271f867-12e8-4fab-86d5-d700cf911ede","Type":"ContainerDied","Data":"e3369b160378d766d462314cccc11a35d334062b638eb7f87a5ee8c9f5db2b9d"} Jan 29 07:49:03 crc kubenswrapper[5017]: I0129 07:49:03.084464 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e3369b160378d766d462314cccc11a35d334062b638eb7f87a5ee8c9f5db2b9d" Jan 29 07:49:03 crc kubenswrapper[5017]: I0129 07:49:03.084604 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-xftgv" Jan 29 07:49:04 crc kubenswrapper[5017]: I0129 07:49:04.744663 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["crc-storage/crc-storage-crc-xftgv"] Jan 29 07:49:04 crc kubenswrapper[5017]: I0129 07:49:04.752340 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["crc-storage/crc-storage-crc-xftgv"] Jan 29 07:49:04 crc kubenswrapper[5017]: I0129 07:49:04.884467 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-hx9w6"] Jan 29 07:49:04 crc kubenswrapper[5017]: E0129 07:49:04.884889 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5271f867-12e8-4fab-86d5-d700cf911ede" containerName="storage" Jan 29 07:49:04 crc kubenswrapper[5017]: I0129 07:49:04.884910 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="5271f867-12e8-4fab-86d5-d700cf911ede" containerName="storage" Jan 29 07:49:04 crc kubenswrapper[5017]: I0129 07:49:04.885067 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="5271f867-12e8-4fab-86d5-d700cf911ede" containerName="storage" Jan 29 07:49:04 crc kubenswrapper[5017]: I0129 07:49:04.885664 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-hx9w6" Jan 29 07:49:04 crc kubenswrapper[5017]: I0129 07:49:04.889489 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Jan 29 07:49:04 crc kubenswrapper[5017]: I0129 07:49:04.889575 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Jan 29 07:49:04 crc kubenswrapper[5017]: I0129 07:49:04.889746 5017 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-bz6ff" Jan 29 07:49:04 crc kubenswrapper[5017]: I0129 07:49:04.889893 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Jan 29 07:49:04 crc kubenswrapper[5017]: I0129 07:49:04.900823 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-hx9w6"] Jan 29 07:49:04 crc kubenswrapper[5017]: I0129 07:49:04.931636 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/8d4e117a-b788-42fa-94d4-1b8218ab2e67-crc-storage\") pod \"crc-storage-crc-hx9w6\" (UID: \"8d4e117a-b788-42fa-94d4-1b8218ab2e67\") " pod="crc-storage/crc-storage-crc-hx9w6" Jan 29 07:49:04 crc kubenswrapper[5017]: I0129 07:49:04.931694 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/8d4e117a-b788-42fa-94d4-1b8218ab2e67-node-mnt\") pod \"crc-storage-crc-hx9w6\" (UID: \"8d4e117a-b788-42fa-94d4-1b8218ab2e67\") " pod="crc-storage/crc-storage-crc-hx9w6" Jan 29 07:49:04 crc kubenswrapper[5017]: I0129 07:49:04.931832 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7qk8\" (UniqueName: \"kubernetes.io/projected/8d4e117a-b788-42fa-94d4-1b8218ab2e67-kube-api-access-h7qk8\") pod \"crc-storage-crc-hx9w6\" (UID: \"8d4e117a-b788-42fa-94d4-1b8218ab2e67\") " pod="crc-storage/crc-storage-crc-hx9w6" Jan 29 07:49:05 crc kubenswrapper[5017]: I0129 07:49:05.033850 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/8d4e117a-b788-42fa-94d4-1b8218ab2e67-crc-storage\") pod \"crc-storage-crc-hx9w6\" (UID: \"8d4e117a-b788-42fa-94d4-1b8218ab2e67\") " pod="crc-storage/crc-storage-crc-hx9w6" Jan 29 07:49:05 crc kubenswrapper[5017]: I0129 07:49:05.033932 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/8d4e117a-b788-42fa-94d4-1b8218ab2e67-node-mnt\") pod \"crc-storage-crc-hx9w6\" (UID: \"8d4e117a-b788-42fa-94d4-1b8218ab2e67\") " pod="crc-storage/crc-storage-crc-hx9w6" Jan 29 07:49:05 crc kubenswrapper[5017]: I0129 07:49:05.034008 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7qk8\" (UniqueName: \"kubernetes.io/projected/8d4e117a-b788-42fa-94d4-1b8218ab2e67-kube-api-access-h7qk8\") pod \"crc-storage-crc-hx9w6\" (UID: \"8d4e117a-b788-42fa-94d4-1b8218ab2e67\") " pod="crc-storage/crc-storage-crc-hx9w6" Jan 29 07:49:05 crc kubenswrapper[5017]: I0129 07:49:05.034411 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/8d4e117a-b788-42fa-94d4-1b8218ab2e67-node-mnt\") pod \"crc-storage-crc-hx9w6\" (UID: \"8d4e117a-b788-42fa-94d4-1b8218ab2e67\") " 
pod="crc-storage/crc-storage-crc-hx9w6" Jan 29 07:49:05 crc kubenswrapper[5017]: I0129 07:49:05.034891 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/8d4e117a-b788-42fa-94d4-1b8218ab2e67-crc-storage\") pod \"crc-storage-crc-hx9w6\" (UID: \"8d4e117a-b788-42fa-94d4-1b8218ab2e67\") " pod="crc-storage/crc-storage-crc-hx9w6" Jan 29 07:49:05 crc kubenswrapper[5017]: I0129 07:49:05.053223 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7qk8\" (UniqueName: \"kubernetes.io/projected/8d4e117a-b788-42fa-94d4-1b8218ab2e67-kube-api-access-h7qk8\") pod \"crc-storage-crc-hx9w6\" (UID: \"8d4e117a-b788-42fa-94d4-1b8218ab2e67\") " pod="crc-storage/crc-storage-crc-hx9w6" Jan 29 07:49:05 crc kubenswrapper[5017]: I0129 07:49:05.202423 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-hx9w6" Jan 29 07:49:05 crc kubenswrapper[5017]: I0129 07:49:05.674804 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-hx9w6"] Jan 29 07:49:06 crc kubenswrapper[5017]: I0129 07:49:06.109753 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-hx9w6" event={"ID":"8d4e117a-b788-42fa-94d4-1b8218ab2e67","Type":"ContainerStarted","Data":"2adf6cab5d230600861cef4a1a0f9d74e350786ab877cf63fc999ed51337050f"} Jan 29 07:49:06 crc kubenswrapper[5017]: I0129 07:49:06.327101 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5271f867-12e8-4fab-86d5-d700cf911ede" path="/var/lib/kubelet/pods/5271f867-12e8-4fab-86d5-d700cf911ede/volumes" Jan 29 07:49:07 crc kubenswrapper[5017]: I0129 07:49:07.120799 5017 generic.go:334] "Generic (PLEG): container finished" podID="8d4e117a-b788-42fa-94d4-1b8218ab2e67" containerID="fed197b22bacdd8be394d0e2953722d728b5cf26fc56210dc49f0d7efc5ad2d8" exitCode=0 Jan 29 07:49:07 crc kubenswrapper[5017]: I0129 07:49:07.120922 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-hx9w6" event={"ID":"8d4e117a-b788-42fa-94d4-1b8218ab2e67","Type":"ContainerDied","Data":"fed197b22bacdd8be394d0e2953722d728b5cf26fc56210dc49f0d7efc5ad2d8"} Jan 29 07:49:08 crc kubenswrapper[5017]: I0129 07:49:08.445060 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-hx9w6" Jan 29 07:49:08 crc kubenswrapper[5017]: I0129 07:49:08.490979 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h7qk8\" (UniqueName: \"kubernetes.io/projected/8d4e117a-b788-42fa-94d4-1b8218ab2e67-kube-api-access-h7qk8\") pod \"8d4e117a-b788-42fa-94d4-1b8218ab2e67\" (UID: \"8d4e117a-b788-42fa-94d4-1b8218ab2e67\") " Jan 29 07:49:08 crc kubenswrapper[5017]: I0129 07:49:08.491216 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/8d4e117a-b788-42fa-94d4-1b8218ab2e67-crc-storage\") pod \"8d4e117a-b788-42fa-94d4-1b8218ab2e67\" (UID: \"8d4e117a-b788-42fa-94d4-1b8218ab2e67\") " Jan 29 07:49:08 crc kubenswrapper[5017]: I0129 07:49:08.491323 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/8d4e117a-b788-42fa-94d4-1b8218ab2e67-node-mnt\") pod \"8d4e117a-b788-42fa-94d4-1b8218ab2e67\" (UID: \"8d4e117a-b788-42fa-94d4-1b8218ab2e67\") " Jan 29 07:49:08 crc kubenswrapper[5017]: I0129 07:49:08.491663 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d4e117a-b788-42fa-94d4-1b8218ab2e67-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "8d4e117a-b788-42fa-94d4-1b8218ab2e67" (UID: "8d4e117a-b788-42fa-94d4-1b8218ab2e67"). InnerVolumeSpecName "node-mnt". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 07:49:08 crc kubenswrapper[5017]: I0129 07:49:08.500908 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d4e117a-b788-42fa-94d4-1b8218ab2e67-kube-api-access-h7qk8" (OuterVolumeSpecName: "kube-api-access-h7qk8") pod "8d4e117a-b788-42fa-94d4-1b8218ab2e67" (UID: "8d4e117a-b788-42fa-94d4-1b8218ab2e67"). InnerVolumeSpecName "kube-api-access-h7qk8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:49:08 crc kubenswrapper[5017]: I0129 07:49:08.522491 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d4e117a-b788-42fa-94d4-1b8218ab2e67-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "8d4e117a-b788-42fa-94d4-1b8218ab2e67" (UID: "8d4e117a-b788-42fa-94d4-1b8218ab2e67"). InnerVolumeSpecName "crc-storage". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 07:49:08 crc kubenswrapper[5017]: I0129 07:49:08.592563 5017 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/8d4e117a-b788-42fa-94d4-1b8218ab2e67-node-mnt\") on node \"crc\" DevicePath \"\"" Jan 29 07:49:08 crc kubenswrapper[5017]: I0129 07:49:08.593081 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h7qk8\" (UniqueName: \"kubernetes.io/projected/8d4e117a-b788-42fa-94d4-1b8218ab2e67-kube-api-access-h7qk8\") on node \"crc\" DevicePath \"\"" Jan 29 07:49:08 crc kubenswrapper[5017]: I0129 07:49:08.593094 5017 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/8d4e117a-b788-42fa-94d4-1b8218ab2e67-crc-storage\") on node \"crc\" DevicePath \"\"" Jan 29 07:49:09 crc kubenswrapper[5017]: I0129 07:49:09.152918 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-hx9w6" event={"ID":"8d4e117a-b788-42fa-94d4-1b8218ab2e67","Type":"ContainerDied","Data":"2adf6cab5d230600861cef4a1a0f9d74e350786ab877cf63fc999ed51337050f"} Jan 29 07:49:09 crc kubenswrapper[5017]: I0129 07:49:09.153002 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2adf6cab5d230600861cef4a1a0f9d74e350786ab877cf63fc999ed51337050f" Jan 29 07:49:09 crc kubenswrapper[5017]: I0129 07:49:09.153007 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-hx9w6" Jan 29 07:49:15 crc kubenswrapper[5017]: I0129 07:49:15.316755 5017 scope.go:117] "RemoveContainer" containerID="ff0ef22b38a904235ce937687a5299a84b9a93c980c779f3d410e345948f7f25" Jan 29 07:49:15 crc kubenswrapper[5017]: E0129 07:49:15.317443 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 07:49:23 crc kubenswrapper[5017]: I0129 07:49:23.016663 5017 scope.go:117] "RemoveContainer" containerID="b20f12e3c00f15d4ce71f1344cfa33b544ebcf307d9492ad38717f36f5cb8017" Jan 29 07:49:27 crc kubenswrapper[5017]: I0129 07:49:27.316648 5017 scope.go:117] "RemoveContainer" containerID="ff0ef22b38a904235ce937687a5299a84b9a93c980c779f3d410e345948f7f25" Jan 29 07:49:27 crc kubenswrapper[5017]: E0129 07:49:27.317730 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 07:49:38 crc kubenswrapper[5017]: I0129 07:49:38.316944 5017 scope.go:117] "RemoveContainer" containerID="ff0ef22b38a904235ce937687a5299a84b9a93c980c779f3d410e345948f7f25" Jan 29 07:49:38 crc kubenswrapper[5017]: E0129 07:49:38.318025 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 07:49:50 crc kubenswrapper[5017]: I0129 07:49:50.316567 5017 scope.go:117] "RemoveContainer" containerID="ff0ef22b38a904235ce937687a5299a84b9a93c980c779f3d410e345948f7f25" Jan 29 07:49:50 crc kubenswrapper[5017]: E0129 07:49:50.317617 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 07:50:03 crc kubenswrapper[5017]: I0129 07:50:03.316425 5017 scope.go:117] "RemoveContainer" containerID="ff0ef22b38a904235ce937687a5299a84b9a93c980c779f3d410e345948f7f25" Jan 29 07:50:03 crc kubenswrapper[5017]: E0129 07:50:03.319484 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 07:50:16 crc kubenswrapper[5017]: I0129 07:50:16.318705 5017 scope.go:117] "RemoveContainer" containerID="ff0ef22b38a904235ce937687a5299a84b9a93c980c779f3d410e345948f7f25" Jan 29 07:50:16 crc kubenswrapper[5017]: E0129 07:50:16.319735 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 07:50:31 crc kubenswrapper[5017]: I0129 07:50:31.316452 5017 scope.go:117] "RemoveContainer" containerID="ff0ef22b38a904235ce937687a5299a84b9a93c980c779f3d410e345948f7f25" Jan 29 07:50:31 crc kubenswrapper[5017]: E0129 07:50:31.317493 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 07:50:46 crc kubenswrapper[5017]: I0129 07:50:46.316045 5017 scope.go:117] "RemoveContainer" containerID="ff0ef22b38a904235ce937687a5299a84b9a93c980c779f3d410e345948f7f25" Jan 29 07:50:46 crc kubenswrapper[5017]: E0129 07:50:46.317169 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" 
podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 07:51:01 crc kubenswrapper[5017]: I0129 07:51:01.316640 5017 scope.go:117] "RemoveContainer" containerID="ff0ef22b38a904235ce937687a5299a84b9a93c980c779f3d410e345948f7f25" Jan 29 07:51:01 crc kubenswrapper[5017]: E0129 07:51:01.317796 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 07:51:12 crc kubenswrapper[5017]: I0129 07:51:12.316861 5017 scope.go:117] "RemoveContainer" containerID="ff0ef22b38a904235ce937687a5299a84b9a93c980c779f3d410e345948f7f25" Jan 29 07:51:12 crc kubenswrapper[5017]: E0129 07:51:12.318079 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 07:51:27 crc kubenswrapper[5017]: I0129 07:51:27.316759 5017 scope.go:117] "RemoveContainer" containerID="ff0ef22b38a904235ce937687a5299a84b9a93c980c779f3d410e345948f7f25" Jan 29 07:51:27 crc kubenswrapper[5017]: E0129 07:51:27.317773 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 07:51:39 crc kubenswrapper[5017]: I0129 07:51:39.316290 5017 scope.go:117] "RemoveContainer" containerID="ff0ef22b38a904235ce937687a5299a84b9a93c980c779f3d410e345948f7f25" Jan 29 07:51:39 crc kubenswrapper[5017]: E0129 07:51:39.317450 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 07:51:54 crc kubenswrapper[5017]: I0129 07:51:54.321201 5017 scope.go:117] "RemoveContainer" containerID="ff0ef22b38a904235ce937687a5299a84b9a93c980c779f3d410e345948f7f25" Jan 29 07:51:54 crc kubenswrapper[5017]: E0129 07:51:54.322372 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 07:52:09 crc kubenswrapper[5017]: I0129 07:52:09.316434 5017 scope.go:117] "RemoveContainer" 
containerID="ff0ef22b38a904235ce937687a5299a84b9a93c980c779f3d410e345948f7f25" Jan 29 07:52:09 crc kubenswrapper[5017]: I0129 07:52:09.638808 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-895pl" event={"ID":"2672ef63-7861-4c3d-a1b4-03cc9d18f8e2","Type":"ContainerStarted","Data":"566051f95af55c3167bfb24eecc73af401c9e2cd360bef8281baf73b9b65699e"} Jan 29 07:52:27 crc kubenswrapper[5017]: I0129 07:52:27.793664 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-95587bc99-mrdlm"] Jan 29 07:52:27 crc kubenswrapper[5017]: E0129 07:52:27.796179 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d4e117a-b788-42fa-94d4-1b8218ab2e67" containerName="storage" Jan 29 07:52:27 crc kubenswrapper[5017]: I0129 07:52:27.796301 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d4e117a-b788-42fa-94d4-1b8218ab2e67" containerName="storage" Jan 29 07:52:27 crc kubenswrapper[5017]: I0129 07:52:27.796593 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d4e117a-b788-42fa-94d4-1b8218ab2e67" containerName="storage" Jan 29 07:52:27 crc kubenswrapper[5017]: I0129 07:52:27.797773 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-95587bc99-mrdlm" Jan 29 07:52:27 crc kubenswrapper[5017]: I0129 07:52:27.801343 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 29 07:52:27 crc kubenswrapper[5017]: I0129 07:52:27.801420 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 29 07:52:27 crc kubenswrapper[5017]: I0129 07:52:27.802896 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 29 07:52:27 crc kubenswrapper[5017]: I0129 07:52:27.803188 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-zpd7r" Jan 29 07:52:27 crc kubenswrapper[5017]: I0129 07:52:27.804598 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 29 07:52:27 crc kubenswrapper[5017]: I0129 07:52:27.812314 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-95587bc99-mrdlm"] Jan 29 07:52:27 crc kubenswrapper[5017]: I0129 07:52:27.952347 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ccf5cd60-96cd-454d-adac-537071b36e03-dns-svc\") pod \"dnsmasq-dns-95587bc99-mrdlm\" (UID: \"ccf5cd60-96cd-454d-adac-537071b36e03\") " pod="openstack/dnsmasq-dns-95587bc99-mrdlm" Jan 29 07:52:27 crc kubenswrapper[5017]: I0129 07:52:27.952457 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ccf5cd60-96cd-454d-adac-537071b36e03-config\") pod \"dnsmasq-dns-95587bc99-mrdlm\" (UID: \"ccf5cd60-96cd-454d-adac-537071b36e03\") " pod="openstack/dnsmasq-dns-95587bc99-mrdlm" Jan 29 07:52:27 crc kubenswrapper[5017]: I0129 07:52:27.952524 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8hlj\" (UniqueName: \"kubernetes.io/projected/ccf5cd60-96cd-454d-adac-537071b36e03-kube-api-access-k8hlj\") pod \"dnsmasq-dns-95587bc99-mrdlm\" (UID: \"ccf5cd60-96cd-454d-adac-537071b36e03\") " pod="openstack/dnsmasq-dns-95587bc99-mrdlm" Jan 29 07:52:28 crc 
kubenswrapper[5017]: I0129 07:52:28.054115 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8hlj\" (UniqueName: \"kubernetes.io/projected/ccf5cd60-96cd-454d-adac-537071b36e03-kube-api-access-k8hlj\") pod \"dnsmasq-dns-95587bc99-mrdlm\" (UID: \"ccf5cd60-96cd-454d-adac-537071b36e03\") " pod="openstack/dnsmasq-dns-95587bc99-mrdlm" Jan 29 07:52:28 crc kubenswrapper[5017]: I0129 07:52:28.054319 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ccf5cd60-96cd-454d-adac-537071b36e03-dns-svc\") pod \"dnsmasq-dns-95587bc99-mrdlm\" (UID: \"ccf5cd60-96cd-454d-adac-537071b36e03\") " pod="openstack/dnsmasq-dns-95587bc99-mrdlm" Jan 29 07:52:28 crc kubenswrapper[5017]: I0129 07:52:28.054388 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ccf5cd60-96cd-454d-adac-537071b36e03-config\") pod \"dnsmasq-dns-95587bc99-mrdlm\" (UID: \"ccf5cd60-96cd-454d-adac-537071b36e03\") " pod="openstack/dnsmasq-dns-95587bc99-mrdlm" Jan 29 07:52:28 crc kubenswrapper[5017]: I0129 07:52:28.055529 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ccf5cd60-96cd-454d-adac-537071b36e03-config\") pod \"dnsmasq-dns-95587bc99-mrdlm\" (UID: \"ccf5cd60-96cd-454d-adac-537071b36e03\") " pod="openstack/dnsmasq-dns-95587bc99-mrdlm" Jan 29 07:52:28 crc kubenswrapper[5017]: I0129 07:52:28.055625 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ccf5cd60-96cd-454d-adac-537071b36e03-dns-svc\") pod \"dnsmasq-dns-95587bc99-mrdlm\" (UID: \"ccf5cd60-96cd-454d-adac-537071b36e03\") " pod="openstack/dnsmasq-dns-95587bc99-mrdlm" Jan 29 07:52:28 crc kubenswrapper[5017]: I0129 07:52:28.084687 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8hlj\" (UniqueName: \"kubernetes.io/projected/ccf5cd60-96cd-454d-adac-537071b36e03-kube-api-access-k8hlj\") pod \"dnsmasq-dns-95587bc99-mrdlm\" (UID: \"ccf5cd60-96cd-454d-adac-537071b36e03\") " pod="openstack/dnsmasq-dns-95587bc99-mrdlm" Jan 29 07:52:28 crc kubenswrapper[5017]: I0129 07:52:28.115264 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5d79f765b5-g8qf5"] Jan 29 07:52:28 crc kubenswrapper[5017]: I0129 07:52:28.117075 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d79f765b5-g8qf5"
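
Two dnsmasq-dns pods with different ReplicaSet hashes (95587bc99 and 5d79f765b5) are created back to back here, the usual footprint of a Deployment rolling update in the openstack namespace. The earlier "Caches populated" lines show the kubelet's reflectors opening list-watch caches for exactly the ConfigMaps and Secrets the pod's volumes reference, so their contents can be kept current without polling. The same list-watch machinery is exposed by client-go informers; a sketch (namespace and handler are illustrative, clientset setup assumed):

package diag

import (
    "fmt"
    "time"

    corev1 "k8s.io/api/core/v1"
    "k8s.io/client-go/informers"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/cache"
)

// WatchConfigMaps opens a namespaced ConfigMap informer, the
// same list-watch pattern behind the kubelet's "Caches
// populated" reflector lines. Sketch only.
func WatchConfigMaps(cs kubernetes.Interface, namespace string, stop <-chan struct{}) {
    factory := informers.NewSharedInformerFactoryWithOptions(
        cs, 30*time.Second, informers.WithNamespace(namespace))
    inf := factory.Core().V1().ConfigMaps().Informer()
    inf.AddEventHandler(cache.ResourceEventHandlerFuncs{
        UpdateFunc: func(_, obj interface{}) {
            cm := obj.(*corev1.ConfigMap)
            fmt.Printf("ConfigMap %s/%s updated\n", cm.Namespace, cm.Name)
        },
    })
    factory.Start(stop)
    factory.WaitForCacheSync(stop)
}

Jan 29 07:52:28 crc kubenswrapper[5017]: I0129 07:52:28.122705 5017 util.go:30] "No sandbox for pod can be found.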
Need to start a new one" pod="openstack/dnsmasq-dns-95587bc99-mrdlm" Jan 29 07:52:28 crc kubenswrapper[5017]: I0129 07:52:28.146918 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d79f765b5-g8qf5"] Jan 29 07:52:28 crc kubenswrapper[5017]: I0129 07:52:28.257561 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef6c1386-cf00-47bc-844a-9c2a52050ae4-config\") pod \"dnsmasq-dns-5d79f765b5-g8qf5\" (UID: \"ef6c1386-cf00-47bc-844a-9c2a52050ae4\") " pod="openstack/dnsmasq-dns-5d79f765b5-g8qf5" Jan 29 07:52:28 crc kubenswrapper[5017]: I0129 07:52:28.257639 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ef6c1386-cf00-47bc-844a-9c2a52050ae4-dns-svc\") pod \"dnsmasq-dns-5d79f765b5-g8qf5\" (UID: \"ef6c1386-cf00-47bc-844a-9c2a52050ae4\") " pod="openstack/dnsmasq-dns-5d79f765b5-g8qf5" Jan 29 07:52:28 crc kubenswrapper[5017]: I0129 07:52:28.257678 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7pzr\" (UniqueName: \"kubernetes.io/projected/ef6c1386-cf00-47bc-844a-9c2a52050ae4-kube-api-access-q7pzr\") pod \"dnsmasq-dns-5d79f765b5-g8qf5\" (UID: \"ef6c1386-cf00-47bc-844a-9c2a52050ae4\") " pod="openstack/dnsmasq-dns-5d79f765b5-g8qf5" Jan 29 07:52:28 crc kubenswrapper[5017]: I0129 07:52:28.359026 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef6c1386-cf00-47bc-844a-9c2a52050ae4-config\") pod \"dnsmasq-dns-5d79f765b5-g8qf5\" (UID: \"ef6c1386-cf00-47bc-844a-9c2a52050ae4\") " pod="openstack/dnsmasq-dns-5d79f765b5-g8qf5" Jan 29 07:52:28 crc kubenswrapper[5017]: I0129 07:52:28.359079 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ef6c1386-cf00-47bc-844a-9c2a52050ae4-dns-svc\") pod \"dnsmasq-dns-5d79f765b5-g8qf5\" (UID: \"ef6c1386-cf00-47bc-844a-9c2a52050ae4\") " pod="openstack/dnsmasq-dns-5d79f765b5-g8qf5" Jan 29 07:52:28 crc kubenswrapper[5017]: I0129 07:52:28.359103 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7pzr\" (UniqueName: \"kubernetes.io/projected/ef6c1386-cf00-47bc-844a-9c2a52050ae4-kube-api-access-q7pzr\") pod \"dnsmasq-dns-5d79f765b5-g8qf5\" (UID: \"ef6c1386-cf00-47bc-844a-9c2a52050ae4\") " pod="openstack/dnsmasq-dns-5d79f765b5-g8qf5" Jan 29 07:52:28 crc kubenswrapper[5017]: I0129 07:52:28.362981 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ef6c1386-cf00-47bc-844a-9c2a52050ae4-dns-svc\") pod \"dnsmasq-dns-5d79f765b5-g8qf5\" (UID: \"ef6c1386-cf00-47bc-844a-9c2a52050ae4\") " pod="openstack/dnsmasq-dns-5d79f765b5-g8qf5" Jan 29 07:52:28 crc kubenswrapper[5017]: I0129 07:52:28.363009 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef6c1386-cf00-47bc-844a-9c2a52050ae4-config\") pod \"dnsmasq-dns-5d79f765b5-g8qf5\" (UID: \"ef6c1386-cf00-47bc-844a-9c2a52050ae4\") " pod="openstack/dnsmasq-dns-5d79f765b5-g8qf5" Jan 29 07:52:28 crc kubenswrapper[5017]: I0129 07:52:28.384367 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7pzr\" (UniqueName: 
\"kubernetes.io/projected/ef6c1386-cf00-47bc-844a-9c2a52050ae4-kube-api-access-q7pzr\") pod \"dnsmasq-dns-5d79f765b5-g8qf5\" (UID: \"ef6c1386-cf00-47bc-844a-9c2a52050ae4\") " pod="openstack/dnsmasq-dns-5d79f765b5-g8qf5" Jan 29 07:52:28 crc kubenswrapper[5017]: I0129 07:52:28.440354 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d79f765b5-g8qf5" Jan 29 07:52:28 crc kubenswrapper[5017]: I0129 07:52:28.525303 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-95587bc99-mrdlm"] Jan 29 07:52:28 crc kubenswrapper[5017]: I0129 07:52:28.794815 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95587bc99-mrdlm" event={"ID":"ccf5cd60-96cd-454d-adac-537071b36e03","Type":"ContainerStarted","Data":"37c75eb1ec2a14f11a45da5429ce35498cd8148393cc6fca8029ba255dfb76cb"} Jan 29 07:52:28 crc kubenswrapper[5017]: I0129 07:52:28.795275 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95587bc99-mrdlm" event={"ID":"ccf5cd60-96cd-454d-adac-537071b36e03","Type":"ContainerStarted","Data":"daeb4e2ff8b0ab1d08bd1807a276c75df7f27339ea7ab42f7286bf6fcb6404bf"} Jan 29 07:52:28 crc kubenswrapper[5017]: I0129 07:52:28.925533 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 29 07:52:28 crc kubenswrapper[5017]: I0129 07:52:28.926726 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 29 07:52:28 crc kubenswrapper[5017]: I0129 07:52:28.929167 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 29 07:52:28 crc kubenswrapper[5017]: I0129 07:52:28.929412 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 29 07:52:28 crc kubenswrapper[5017]: I0129 07:52:28.929432 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-q6trs" Jan 29 07:52:28 crc kubenswrapper[5017]: I0129 07:52:28.930110 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 29 07:52:28 crc kubenswrapper[5017]: I0129 07:52:28.930357 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 29 07:52:28 crc kubenswrapper[5017]: I0129 07:52:28.947440 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 29 07:52:29 crc kubenswrapper[5017]: I0129 07:52:29.069266 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d79f765b5-g8qf5"] Jan 29 07:52:29 crc kubenswrapper[5017]: I0129 07:52:29.074290 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/86dda784-351d-4bef-8daa-893cbc405934-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"86dda784-351d-4bef-8daa-893cbc405934\") " pod="openstack/rabbitmq-server-0" Jan 29 07:52:29 crc kubenswrapper[5017]: I0129 07:52:29.074343 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/86dda784-351d-4bef-8daa-893cbc405934-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"86dda784-351d-4bef-8daa-893cbc405934\") " pod="openstack/rabbitmq-server-0" Jan 29 07:52:29 crc kubenswrapper[5017]: I0129 07:52:29.074372 5017 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/86dda784-351d-4bef-8daa-893cbc405934-pod-info\") pod \"rabbitmq-server-0\" (UID: \"86dda784-351d-4bef-8daa-893cbc405934\") " pod="openstack/rabbitmq-server-0" Jan 29 07:52:29 crc kubenswrapper[5017]: I0129 07:52:29.074393 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/86dda784-351d-4bef-8daa-893cbc405934-server-conf\") pod \"rabbitmq-server-0\" (UID: \"86dda784-351d-4bef-8daa-893cbc405934\") " pod="openstack/rabbitmq-server-0" Jan 29 07:52:29 crc kubenswrapper[5017]: I0129 07:52:29.074420 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/86dda784-351d-4bef-8daa-893cbc405934-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"86dda784-351d-4bef-8daa-893cbc405934\") " pod="openstack/rabbitmq-server-0" Jan 29 07:52:29 crc kubenswrapper[5017]: I0129 07:52:29.074447 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-3d7fa02f-e6dd-4f2a-89f1-9a26e73bbcb4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3d7fa02f-e6dd-4f2a-89f1-9a26e73bbcb4\") pod \"rabbitmq-server-0\" (UID: \"86dda784-351d-4bef-8daa-893cbc405934\") " pod="openstack/rabbitmq-server-0" Jan 29 07:52:29 crc kubenswrapper[5017]: I0129 07:52:29.074474 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/86dda784-351d-4bef-8daa-893cbc405934-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"86dda784-351d-4bef-8daa-893cbc405934\") " pod="openstack/rabbitmq-server-0" Jan 29 07:52:29 crc kubenswrapper[5017]: I0129 07:52:29.074502 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dcp2\" (UniqueName: \"kubernetes.io/projected/86dda784-351d-4bef-8daa-893cbc405934-kube-api-access-7dcp2\") pod \"rabbitmq-server-0\" (UID: \"86dda784-351d-4bef-8daa-893cbc405934\") " pod="openstack/rabbitmq-server-0" Jan 29 07:52:29 crc kubenswrapper[5017]: I0129 07:52:29.074550 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/86dda784-351d-4bef-8daa-893cbc405934-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"86dda784-351d-4bef-8daa-893cbc405934\") " pod="openstack/rabbitmq-server-0" Jan 29 07:52:29 crc kubenswrapper[5017]: I0129 07:52:29.175955 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/86dda784-351d-4bef-8daa-893cbc405934-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"86dda784-351d-4bef-8daa-893cbc405934\") " pod="openstack/rabbitmq-server-0" Jan 29 07:52:29 crc kubenswrapper[5017]: I0129 07:52:29.176057 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/86dda784-351d-4bef-8daa-893cbc405934-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"86dda784-351d-4bef-8daa-893cbc405934\") " pod="openstack/rabbitmq-server-0" Jan 29 07:52:29 crc kubenswrapper[5017]: I0129 07:52:29.176094 5017 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/86dda784-351d-4bef-8daa-893cbc405934-pod-info\") pod \"rabbitmq-server-0\" (UID: \"86dda784-351d-4bef-8daa-893cbc405934\") " pod="openstack/rabbitmq-server-0" Jan 29 07:52:29 crc kubenswrapper[5017]: I0129 07:52:29.176118 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/86dda784-351d-4bef-8daa-893cbc405934-server-conf\") pod \"rabbitmq-server-0\" (UID: \"86dda784-351d-4bef-8daa-893cbc405934\") " pod="openstack/rabbitmq-server-0" Jan 29 07:52:29 crc kubenswrapper[5017]: I0129 07:52:29.176155 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/86dda784-351d-4bef-8daa-893cbc405934-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"86dda784-351d-4bef-8daa-893cbc405934\") " pod="openstack/rabbitmq-server-0" Jan 29 07:52:29 crc kubenswrapper[5017]: I0129 07:52:29.176184 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-3d7fa02f-e6dd-4f2a-89f1-9a26e73bbcb4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3d7fa02f-e6dd-4f2a-89f1-9a26e73bbcb4\") pod \"rabbitmq-server-0\" (UID: \"86dda784-351d-4bef-8daa-893cbc405934\") " pod="openstack/rabbitmq-server-0" Jan 29 07:52:29 crc kubenswrapper[5017]: I0129 07:52:29.176218 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/86dda784-351d-4bef-8daa-893cbc405934-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"86dda784-351d-4bef-8daa-893cbc405934\") " pod="openstack/rabbitmq-server-0" Jan 29 07:52:29 crc kubenswrapper[5017]: I0129 07:52:29.176255 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dcp2\" (UniqueName: \"kubernetes.io/projected/86dda784-351d-4bef-8daa-893cbc405934-kube-api-access-7dcp2\") pod \"rabbitmq-server-0\" (UID: \"86dda784-351d-4bef-8daa-893cbc405934\") " pod="openstack/rabbitmq-server-0" Jan 29 07:52:29 crc kubenswrapper[5017]: I0129 07:52:29.176320 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/86dda784-351d-4bef-8daa-893cbc405934-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"86dda784-351d-4bef-8daa-893cbc405934\") " pod="openstack/rabbitmq-server-0" Jan 29 07:52:29 crc kubenswrapper[5017]: I0129 07:52:29.176727 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/86dda784-351d-4bef-8daa-893cbc405934-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"86dda784-351d-4bef-8daa-893cbc405934\") " pod="openstack/rabbitmq-server-0" Jan 29 07:52:29 crc kubenswrapper[5017]: I0129 07:52:29.177246 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/86dda784-351d-4bef-8daa-893cbc405934-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"86dda784-351d-4bef-8daa-893cbc405934\") " pod="openstack/rabbitmq-server-0" Jan 29 07:52:29 crc kubenswrapper[5017]: I0129 07:52:29.178254 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/86dda784-351d-4bef-8daa-893cbc405934-server-conf\") pod \"rabbitmq-server-0\" (UID: 
\"86dda784-351d-4bef-8daa-893cbc405934\") " pod="openstack/rabbitmq-server-0" Jan 29 07:52:29 crc kubenswrapper[5017]: I0129 07:52:29.178687 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/86dda784-351d-4bef-8daa-893cbc405934-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"86dda784-351d-4bef-8daa-893cbc405934\") " pod="openstack/rabbitmq-server-0" Jan 29 07:52:29 crc kubenswrapper[5017]: I0129 07:52:29.180708 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/86dda784-351d-4bef-8daa-893cbc405934-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"86dda784-351d-4bef-8daa-893cbc405934\") " pod="openstack/rabbitmq-server-0" Jan 29 07:52:29 crc kubenswrapper[5017]: I0129 07:52:29.182622 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/86dda784-351d-4bef-8daa-893cbc405934-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"86dda784-351d-4bef-8daa-893cbc405934\") " pod="openstack/rabbitmq-server-0" Jan 29 07:52:29 crc kubenswrapper[5017]: I0129 07:52:29.183356 5017 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 29 07:52:29 crc kubenswrapper[5017]: I0129 07:52:29.183411 5017 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-3d7fa02f-e6dd-4f2a-89f1-9a26e73bbcb4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3d7fa02f-e6dd-4f2a-89f1-9a26e73bbcb4\") pod \"rabbitmq-server-0\" (UID: \"86dda784-351d-4bef-8daa-893cbc405934\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/22faddce87d0a1a5182aed12ec909295840853ffaa273b9024b5dc87691e16ec/globalmount\"" pod="openstack/rabbitmq-server-0" Jan 29 07:52:29 crc kubenswrapper[5017]: I0129 07:52:29.185231 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/86dda784-351d-4bef-8daa-893cbc405934-pod-info\") pod \"rabbitmq-server-0\" (UID: \"86dda784-351d-4bef-8daa-893cbc405934\") " pod="openstack/rabbitmq-server-0" Jan 29 07:52:29 crc kubenswrapper[5017]: I0129 07:52:29.201826 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dcp2\" (UniqueName: \"kubernetes.io/projected/86dda784-351d-4bef-8daa-893cbc405934-kube-api-access-7dcp2\") pod \"rabbitmq-server-0\" (UID: \"86dda784-351d-4bef-8daa-893cbc405934\") " pod="openstack/rabbitmq-server-0" Jan 29 07:52:29 crc kubenswrapper[5017]: I0129 07:52:29.218726 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 29 07:52:29 crc kubenswrapper[5017]: I0129 07:52:29.220194 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 29 07:52:29 crc kubenswrapper[5017]: I0129 07:52:29.222813 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 29 07:52:29 crc kubenswrapper[5017]: I0129 07:52:29.223116 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-zjxbz" Jan 29 07:52:29 crc kubenswrapper[5017]: I0129 07:52:29.234756 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 29 07:52:29 crc kubenswrapper[5017]: I0129 07:52:29.247403 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-3d7fa02f-e6dd-4f2a-89f1-9a26e73bbcb4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3d7fa02f-e6dd-4f2a-89f1-9a26e73bbcb4\") pod \"rabbitmq-server-0\" (UID: \"86dda784-351d-4bef-8daa-893cbc405934\") " pod="openstack/rabbitmq-server-0" Jan 29 07:52:29 crc kubenswrapper[5017]: I0129 07:52:29.270658 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 29 07:52:29 crc kubenswrapper[5017]: I0129 07:52:29.281745 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 29 07:52:29 crc kubenswrapper[5017]: I0129 07:52:29.288524 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 29 07:52:29 crc kubenswrapper[5017]: I0129 07:52:29.288766 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 29 07:52:29 crc kubenswrapper[5017]: I0129 07:52:29.288951 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 29 07:52:29 crc kubenswrapper[5017]: I0129 07:52:29.289232 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-8k2k9" Jan 29 07:52:29 crc kubenswrapper[5017]: I0129 07:52:29.289408 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 29 07:52:29 crc kubenswrapper[5017]: I0129 07:52:29.313234 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 29 07:52:29 crc kubenswrapper[5017]: I0129 07:52:29.385291 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/fa66ed9c-1190-4d8f-9026-e2f02e13aef5-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa66ed9c-1190-4d8f-9026-e2f02e13aef5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 07:52:29 crc kubenswrapper[5017]: I0129 07:52:29.386279 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-293dbb3a-e820-4e3b-bcfc-a6ac2f98aacf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-293dbb3a-e820-4e3b-bcfc-a6ac2f98aacf\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa66ed9c-1190-4d8f-9026-e2f02e13aef5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 07:52:29 crc kubenswrapper[5017]: I0129 07:52:29.386341 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/fa66ed9c-1190-4d8f-9026-e2f02e13aef5-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa66ed9c-1190-4d8f-9026-e2f02e13aef5\") " 
pod="openstack/rabbitmq-cell1-server-0" Jan 29 07:52:29 crc kubenswrapper[5017]: I0129 07:52:29.386370 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/13a33bc7-e8c6-4b03-820c-33912797c525-kolla-config\") pod \"memcached-0\" (UID: \"13a33bc7-e8c6-4b03-820c-33912797c525\") " pod="openstack/memcached-0" Jan 29 07:52:29 crc kubenswrapper[5017]: I0129 07:52:29.386410 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/fa66ed9c-1190-4d8f-9026-e2f02e13aef5-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa66ed9c-1190-4d8f-9026-e2f02e13aef5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 07:52:29 crc kubenswrapper[5017]: I0129 07:52:29.386442 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/fa66ed9c-1190-4d8f-9026-e2f02e13aef5-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa66ed9c-1190-4d8f-9026-e2f02e13aef5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 07:52:29 crc kubenswrapper[5017]: I0129 07:52:29.386474 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkb6t\" (UniqueName: \"kubernetes.io/projected/fa66ed9c-1190-4d8f-9026-e2f02e13aef5-kube-api-access-zkb6t\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa66ed9c-1190-4d8f-9026-e2f02e13aef5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 07:52:29 crc kubenswrapper[5017]: I0129 07:52:29.386493 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/fa66ed9c-1190-4d8f-9026-e2f02e13aef5-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa66ed9c-1190-4d8f-9026-e2f02e13aef5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 07:52:29 crc kubenswrapper[5017]: I0129 07:52:29.386542 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/fa66ed9c-1190-4d8f-9026-e2f02e13aef5-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa66ed9c-1190-4d8f-9026-e2f02e13aef5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 07:52:29 crc kubenswrapper[5017]: I0129 07:52:29.386589 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/13a33bc7-e8c6-4b03-820c-33912797c525-config-data\") pod \"memcached-0\" (UID: \"13a33bc7-e8c6-4b03-820c-33912797c525\") " pod="openstack/memcached-0" Jan 29 07:52:29 crc kubenswrapper[5017]: I0129 07:52:29.386634 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msqqp\" (UniqueName: \"kubernetes.io/projected/13a33bc7-e8c6-4b03-820c-33912797c525-kube-api-access-msqqp\") pod \"memcached-0\" (UID: \"13a33bc7-e8c6-4b03-820c-33912797c525\") " pod="openstack/memcached-0" Jan 29 07:52:29 crc kubenswrapper[5017]: I0129 07:52:29.386657 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/fa66ed9c-1190-4d8f-9026-e2f02e13aef5-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa66ed9c-1190-4d8f-9026-e2f02e13aef5\") " 
pod="openstack/rabbitmq-cell1-server-0" Jan 29 07:52:29 crc kubenswrapper[5017]: I0129 07:52:29.488859 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-293dbb3a-e820-4e3b-bcfc-a6ac2f98aacf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-293dbb3a-e820-4e3b-bcfc-a6ac2f98aacf\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa66ed9c-1190-4d8f-9026-e2f02e13aef5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 07:52:29 crc kubenswrapper[5017]: I0129 07:52:29.488943 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/fa66ed9c-1190-4d8f-9026-e2f02e13aef5-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa66ed9c-1190-4d8f-9026-e2f02e13aef5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 07:52:29 crc kubenswrapper[5017]: I0129 07:52:29.489005 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/13a33bc7-e8c6-4b03-820c-33912797c525-kolla-config\") pod \"memcached-0\" (UID: \"13a33bc7-e8c6-4b03-820c-33912797c525\") " pod="openstack/memcached-0" Jan 29 07:52:29 crc kubenswrapper[5017]: I0129 07:52:29.489061 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/fa66ed9c-1190-4d8f-9026-e2f02e13aef5-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa66ed9c-1190-4d8f-9026-e2f02e13aef5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 07:52:29 crc kubenswrapper[5017]: I0129 07:52:29.489097 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/fa66ed9c-1190-4d8f-9026-e2f02e13aef5-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa66ed9c-1190-4d8f-9026-e2f02e13aef5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 07:52:29 crc kubenswrapper[5017]: I0129 07:52:29.489127 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkb6t\" (UniqueName: \"kubernetes.io/projected/fa66ed9c-1190-4d8f-9026-e2f02e13aef5-kube-api-access-zkb6t\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa66ed9c-1190-4d8f-9026-e2f02e13aef5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 07:52:29 crc kubenswrapper[5017]: I0129 07:52:29.489158 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/fa66ed9c-1190-4d8f-9026-e2f02e13aef5-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa66ed9c-1190-4d8f-9026-e2f02e13aef5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 07:52:29 crc kubenswrapper[5017]: I0129 07:52:29.489196 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/fa66ed9c-1190-4d8f-9026-e2f02e13aef5-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa66ed9c-1190-4d8f-9026-e2f02e13aef5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 07:52:29 crc kubenswrapper[5017]: I0129 07:52:29.489281 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/13a33bc7-e8c6-4b03-820c-33912797c525-config-data\") pod \"memcached-0\" (UID: \"13a33bc7-e8c6-4b03-820c-33912797c525\") " pod="openstack/memcached-0" Jan 29 07:52:29 crc kubenswrapper[5017]: I0129 07:52:29.489321 5017 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-msqqp\" (UniqueName: \"kubernetes.io/projected/13a33bc7-e8c6-4b03-820c-33912797c525-kube-api-access-msqqp\") pod \"memcached-0\" (UID: \"13a33bc7-e8c6-4b03-820c-33912797c525\") " pod="openstack/memcached-0" Jan 29 07:52:29 crc kubenswrapper[5017]: I0129 07:52:29.489348 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/fa66ed9c-1190-4d8f-9026-e2f02e13aef5-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa66ed9c-1190-4d8f-9026-e2f02e13aef5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 07:52:29 crc kubenswrapper[5017]: I0129 07:52:29.489413 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/fa66ed9c-1190-4d8f-9026-e2f02e13aef5-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa66ed9c-1190-4d8f-9026-e2f02e13aef5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 07:52:29 crc kubenswrapper[5017]: I0129 07:52:29.491072 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/13a33bc7-e8c6-4b03-820c-33912797c525-config-data\") pod \"memcached-0\" (UID: \"13a33bc7-e8c6-4b03-820c-33912797c525\") " pod="openstack/memcached-0" Jan 29 07:52:29 crc kubenswrapper[5017]: I0129 07:52:29.493664 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/fa66ed9c-1190-4d8f-9026-e2f02e13aef5-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa66ed9c-1190-4d8f-9026-e2f02e13aef5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 07:52:29 crc kubenswrapper[5017]: I0129 07:52:29.494614 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/fa66ed9c-1190-4d8f-9026-e2f02e13aef5-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa66ed9c-1190-4d8f-9026-e2f02e13aef5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 07:52:29 crc kubenswrapper[5017]: I0129 07:52:29.495472 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/fa66ed9c-1190-4d8f-9026-e2f02e13aef5-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa66ed9c-1190-4d8f-9026-e2f02e13aef5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 07:52:29 crc kubenswrapper[5017]: I0129 07:52:29.495747 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/fa66ed9c-1190-4d8f-9026-e2f02e13aef5-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa66ed9c-1190-4d8f-9026-e2f02e13aef5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 07:52:29 crc kubenswrapper[5017]: I0129 07:52:29.496618 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/fa66ed9c-1190-4d8f-9026-e2f02e13aef5-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa66ed9c-1190-4d8f-9026-e2f02e13aef5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 07:52:29 crc kubenswrapper[5017]: I0129 07:52:29.496365 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/13a33bc7-e8c6-4b03-820c-33912797c525-kolla-config\") pod \"memcached-0\" (UID: 
\"13a33bc7-e8c6-4b03-820c-33912797c525\") " pod="openstack/memcached-0" Jan 29 07:52:29 crc kubenswrapper[5017]: I0129 07:52:29.496577 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/fa66ed9c-1190-4d8f-9026-e2f02e13aef5-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa66ed9c-1190-4d8f-9026-e2f02e13aef5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 07:52:29 crc kubenswrapper[5017]: I0129 07:52:29.499347 5017 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 29 07:52:29 crc kubenswrapper[5017]: I0129 07:52:29.499393 5017 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-293dbb3a-e820-4e3b-bcfc-a6ac2f98aacf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-293dbb3a-e820-4e3b-bcfc-a6ac2f98aacf\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa66ed9c-1190-4d8f-9026-e2f02e13aef5\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/76741904e1bce448ae9808ed33d8ddf821ea45d735ad33493dbbf09d8599ddb2/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Jan 29 07:52:29 crc kubenswrapper[5017]: I0129 07:52:29.505778 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/fa66ed9c-1190-4d8f-9026-e2f02e13aef5-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa66ed9c-1190-4d8f-9026-e2f02e13aef5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 07:52:29 crc kubenswrapper[5017]: I0129 07:52:29.514935 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkb6t\" (UniqueName: \"kubernetes.io/projected/fa66ed9c-1190-4d8f-9026-e2f02e13aef5-kube-api-access-zkb6t\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa66ed9c-1190-4d8f-9026-e2f02e13aef5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 07:52:29 crc kubenswrapper[5017]: I0129 07:52:29.522879 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-msqqp\" (UniqueName: \"kubernetes.io/projected/13a33bc7-e8c6-4b03-820c-33912797c525-kube-api-access-msqqp\") pod \"memcached-0\" (UID: \"13a33bc7-e8c6-4b03-820c-33912797c525\") " pod="openstack/memcached-0" Jan 29 07:52:29 crc kubenswrapper[5017]: I0129 07:52:29.538406 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-293dbb3a-e820-4e3b-bcfc-a6ac2f98aacf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-293dbb3a-e820-4e3b-bcfc-a6ac2f98aacf\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa66ed9c-1190-4d8f-9026-e2f02e13aef5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 07:52:29 crc kubenswrapper[5017]: I0129 07:52:29.549874 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 29 07:52:29 crc kubenswrapper[5017]: I0129 07:52:29.594296 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 29 07:52:29 crc kubenswrapper[5017]: I0129 07:52:29.622719 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 29 07:52:29 crc kubenswrapper[5017]: I0129 07:52:29.640711 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 29 07:52:29 crc kubenswrapper[5017]: I0129 07:52:29.647717 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 29 07:52:29 crc kubenswrapper[5017]: I0129 07:52:29.652755 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 29 07:52:29 crc kubenswrapper[5017]: I0129 07:52:29.653113 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-xpphf" Jan 29 07:52:29 crc kubenswrapper[5017]: I0129 07:52:29.659541 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 29 07:52:29 crc kubenswrapper[5017]: I0129 07:52:29.659919 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 29 07:52:29 crc kubenswrapper[5017]: I0129 07:52:29.662250 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 29 07:52:29 crc kubenswrapper[5017]: I0129 07:52:29.670170 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 29 07:52:29 crc kubenswrapper[5017]: I0129 07:52:29.794428 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/01c81767-cb91-41ba-b305-3aaff087606c-kolla-config\") pod \"openstack-galera-0\" (UID: \"01c81767-cb91-41ba-b305-3aaff087606c\") " pod="openstack/openstack-galera-0" Jan 29 07:52:29 crc kubenswrapper[5017]: I0129 07:52:29.794503 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/01c81767-cb91-41ba-b305-3aaff087606c-config-data-generated\") pod \"openstack-galera-0\" (UID: \"01c81767-cb91-41ba-b305-3aaff087606c\") " pod="openstack/openstack-galera-0" Jan 29 07:52:29 crc kubenswrapper[5017]: I0129 07:52:29.794549 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-74fb8eca-7a3c-4be5-96bc-c8e77f7aef00\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-74fb8eca-7a3c-4be5-96bc-c8e77f7aef00\") pod \"openstack-galera-0\" (UID: \"01c81767-cb91-41ba-b305-3aaff087606c\") " pod="openstack/openstack-galera-0" Jan 29 07:52:29 crc kubenswrapper[5017]: I0129 07:52:29.797662 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ckpf\" (UniqueName: \"kubernetes.io/projected/01c81767-cb91-41ba-b305-3aaff087606c-kube-api-access-2ckpf\") pod \"openstack-galera-0\" (UID: \"01c81767-cb91-41ba-b305-3aaff087606c\") " pod="openstack/openstack-galera-0" Jan 29 07:52:29 crc kubenswrapper[5017]: I0129 07:52:29.797771 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01c81767-cb91-41ba-b305-3aaff087606c-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"01c81767-cb91-41ba-b305-3aaff087606c\") " pod="openstack/openstack-galera-0" Jan 29 07:52:29 crc kubenswrapper[5017]: I0129 07:52:29.797917 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/01c81767-cb91-41ba-b305-3aaff087606c-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"01c81767-cb91-41ba-b305-3aaff087606c\") " pod="openstack/openstack-galera-0" Jan 29 07:52:29 crc kubenswrapper[5017]: I0129 07:52:29.797937 5017 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/01c81767-cb91-41ba-b305-3aaff087606c-config-data-default\") pod \"openstack-galera-0\" (UID: \"01c81767-cb91-41ba-b305-3aaff087606c\") " pod="openstack/openstack-galera-0" Jan 29 07:52:29 crc kubenswrapper[5017]: I0129 07:52:29.798010 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/01c81767-cb91-41ba-b305-3aaff087606c-operator-scripts\") pod \"openstack-galera-0\" (UID: \"01c81767-cb91-41ba-b305-3aaff087606c\") " pod="openstack/openstack-galera-0" Jan 29 07:52:29 crc kubenswrapper[5017]: I0129 07:52:29.812197 5017 generic.go:334] "Generic (PLEG): container finished" podID="ef6c1386-cf00-47bc-844a-9c2a52050ae4" containerID="4504902d8f57a999218bba0f72aad4d1f776de06f9588ec37a43971bb5a297a0" exitCode=0 Jan 29 07:52:29 crc kubenswrapper[5017]: I0129 07:52:29.812280 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d79f765b5-g8qf5" event={"ID":"ef6c1386-cf00-47bc-844a-9c2a52050ae4","Type":"ContainerDied","Data":"4504902d8f57a999218bba0f72aad4d1f776de06f9588ec37a43971bb5a297a0"} Jan 29 07:52:29 crc kubenswrapper[5017]: I0129 07:52:29.812310 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d79f765b5-g8qf5" event={"ID":"ef6c1386-cf00-47bc-844a-9c2a52050ae4","Type":"ContainerStarted","Data":"6dcf71937717de556dc321fd90192c00885939d2ae61d120f28275baf041cc08"} Jan 29 07:52:29 crc kubenswrapper[5017]: I0129 07:52:29.815917 5017 generic.go:334] "Generic (PLEG): container finished" podID="ccf5cd60-96cd-454d-adac-537071b36e03" containerID="37c75eb1ec2a14f11a45da5429ce35498cd8148393cc6fca8029ba255dfb76cb" exitCode=0 Jan 29 07:52:29 crc kubenswrapper[5017]: I0129 07:52:29.815995 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95587bc99-mrdlm" event={"ID":"ccf5cd60-96cd-454d-adac-537071b36e03","Type":"ContainerDied","Data":"37c75eb1ec2a14f11a45da5429ce35498cd8148393cc6fca8029ba255dfb76cb"} Jan 29 07:52:29 crc kubenswrapper[5017]: I0129 07:52:29.900096 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/01c81767-cb91-41ba-b305-3aaff087606c-kolla-config\") pod \"openstack-galera-0\" (UID: \"01c81767-cb91-41ba-b305-3aaff087606c\") " pod="openstack/openstack-galera-0" Jan 29 07:52:29 crc kubenswrapper[5017]: I0129 07:52:29.900650 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/01c81767-cb91-41ba-b305-3aaff087606c-config-data-generated\") pod \"openstack-galera-0\" (UID: \"01c81767-cb91-41ba-b305-3aaff087606c\") " pod="openstack/openstack-galera-0" Jan 29 07:52:29 crc kubenswrapper[5017]: I0129 07:52:29.900698 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-74fb8eca-7a3c-4be5-96bc-c8e77f7aef00\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-74fb8eca-7a3c-4be5-96bc-c8e77f7aef00\") pod \"openstack-galera-0\" (UID: \"01c81767-cb91-41ba-b305-3aaff087606c\") " pod="openstack/openstack-galera-0" Jan 29 07:52:29 crc kubenswrapper[5017]: I0129 07:52:29.900742 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ckpf\" (UniqueName: 
\"kubernetes.io/projected/01c81767-cb91-41ba-b305-3aaff087606c-kube-api-access-2ckpf\") pod \"openstack-galera-0\" (UID: \"01c81767-cb91-41ba-b305-3aaff087606c\") " pod="openstack/openstack-galera-0" Jan 29 07:52:29 crc kubenswrapper[5017]: I0129 07:52:29.900795 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01c81767-cb91-41ba-b305-3aaff087606c-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"01c81767-cb91-41ba-b305-3aaff087606c\") " pod="openstack/openstack-galera-0" Jan 29 07:52:29 crc kubenswrapper[5017]: I0129 07:52:29.900859 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/01c81767-cb91-41ba-b305-3aaff087606c-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"01c81767-cb91-41ba-b305-3aaff087606c\") " pod="openstack/openstack-galera-0" Jan 29 07:52:29 crc kubenswrapper[5017]: I0129 07:52:29.900877 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/01c81767-cb91-41ba-b305-3aaff087606c-config-data-default\") pod \"openstack-galera-0\" (UID: \"01c81767-cb91-41ba-b305-3aaff087606c\") " pod="openstack/openstack-galera-0" Jan 29 07:52:29 crc kubenswrapper[5017]: I0129 07:52:29.900909 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/01c81767-cb91-41ba-b305-3aaff087606c-operator-scripts\") pod \"openstack-galera-0\" (UID: \"01c81767-cb91-41ba-b305-3aaff087606c\") " pod="openstack/openstack-galera-0" Jan 29 07:52:29 crc kubenswrapper[5017]: I0129 07:52:29.902818 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/01c81767-cb91-41ba-b305-3aaff087606c-operator-scripts\") pod \"openstack-galera-0\" (UID: \"01c81767-cb91-41ba-b305-3aaff087606c\") " pod="openstack/openstack-galera-0" Jan 29 07:52:29 crc kubenswrapper[5017]: I0129 07:52:29.903482 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/01c81767-cb91-41ba-b305-3aaff087606c-kolla-config\") pod \"openstack-galera-0\" (UID: \"01c81767-cb91-41ba-b305-3aaff087606c\") " pod="openstack/openstack-galera-0" Jan 29 07:52:29 crc kubenswrapper[5017]: I0129 07:52:29.903870 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/01c81767-cb91-41ba-b305-3aaff087606c-config-data-generated\") pod \"openstack-galera-0\" (UID: \"01c81767-cb91-41ba-b305-3aaff087606c\") " pod="openstack/openstack-galera-0" Jan 29 07:52:29 crc kubenswrapper[5017]: I0129 07:52:29.907776 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/01c81767-cb91-41ba-b305-3aaff087606c-config-data-default\") pod \"openstack-galera-0\" (UID: \"01c81767-cb91-41ba-b305-3aaff087606c\") " pod="openstack/openstack-galera-0" Jan 29 07:52:29 crc kubenswrapper[5017]: I0129 07:52:29.910071 5017 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 29 07:52:29 crc kubenswrapper[5017]: I0129 07:52:29.910133 5017 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-74fb8eca-7a3c-4be5-96bc-c8e77f7aef00\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-74fb8eca-7a3c-4be5-96bc-c8e77f7aef00\") pod \"openstack-galera-0\" (UID: \"01c81767-cb91-41ba-b305-3aaff087606c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/6769f29119a4e5cad7dd88e27f83c9643e24d2a50dceb71ceb262a14a426fdf4/globalmount\"" pod="openstack/openstack-galera-0" Jan 29 07:52:29 crc kubenswrapper[5017]: I0129 07:52:29.915541 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01c81767-cb91-41ba-b305-3aaff087606c-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"01c81767-cb91-41ba-b305-3aaff087606c\") " pod="openstack/openstack-galera-0" Jan 29 07:52:29 crc kubenswrapper[5017]: I0129 07:52:29.920557 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/01c81767-cb91-41ba-b305-3aaff087606c-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"01c81767-cb91-41ba-b305-3aaff087606c\") " pod="openstack/openstack-galera-0" Jan 29 07:52:29 crc kubenswrapper[5017]: I0129 07:52:29.928454 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ckpf\" (UniqueName: \"kubernetes.io/projected/01c81767-cb91-41ba-b305-3aaff087606c-kube-api-access-2ckpf\") pod \"openstack-galera-0\" (UID: \"01c81767-cb91-41ba-b305-3aaff087606c\") " pod="openstack/openstack-galera-0" Jan 29 07:52:29 crc kubenswrapper[5017]: I0129 07:52:29.948384 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-74fb8eca-7a3c-4be5-96bc-c8e77f7aef00\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-74fb8eca-7a3c-4be5-96bc-c8e77f7aef00\") pod \"openstack-galera-0\" (UID: \"01c81767-cb91-41ba-b305-3aaff087606c\") " pod="openstack/openstack-galera-0" Jan 29 07:52:29 crc kubenswrapper[5017]: I0129 07:52:29.973728 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 29 07:52:30 crc kubenswrapper[5017]: I0129 07:52:30.096570 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 29 07:52:30 crc kubenswrapper[5017]: W0129 07:52:30.110662 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod86dda784_351d_4bef_8daa_893cbc405934.slice/crio-da36a84db2a416af818227c315a2dc505b15e133c6458a0f864358e366b8ab95 WatchSource:0}: Error finding container da36a84db2a416af818227c315a2dc505b15e133c6458a0f864358e366b8ab95: Status 404 returned error can't find the container with id da36a84db2a416af818227c315a2dc505b15e133c6458a0f864358e366b8ab95 Jan 29 07:52:30 crc kubenswrapper[5017]: I0129 07:52:30.228590 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 29 07:52:30 crc kubenswrapper[5017]: W0129 07:52:30.228811 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfa66ed9c_1190_4d8f_9026_e2f02e13aef5.slice/crio-d4ca116e17dc3f1a4f3319c399d44701efdaa6a70d528e41912d8f139482e899 WatchSource:0}: Error finding container d4ca116e17dc3f1a4f3319c399d44701efdaa6a70d528e41912d8f139482e899: Status 404 returned error can't find the container with id d4ca116e17dc3f1a4f3319c399d44701efdaa6a70d528e41912d8f139482e899 Jan 29 07:52:30 crc kubenswrapper[5017]: I0129 07:52:30.235864 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 29 07:52:30 crc kubenswrapper[5017]: W0129 07:52:30.241411 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod13a33bc7_e8c6_4b03_820c_33912797c525.slice/crio-edddc8e1431aca5142c61bbbddea0e0942ae0fc957277cd68a55d7a7bbd46d4c WatchSource:0}: Error finding container edddc8e1431aca5142c61bbbddea0e0942ae0fc957277cd68a55d7a7bbd46d4c: Status 404 returned error can't find the container with id edddc8e1431aca5142c61bbbddea0e0942ae0fc957277cd68a55d7a7bbd46d4c Jan 29 07:52:30 crc kubenswrapper[5017]: I0129 07:52:30.270348 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 29 07:52:30 crc kubenswrapper[5017]: W0129 07:52:30.273022 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01c81767_cb91_41ba_b305_3aaff087606c.slice/crio-2c495e5c97424276533b26d774d49c3165db8cb466cbb93d8f659bb7b8e2a210 WatchSource:0}: Error finding container 2c495e5c97424276533b26d774d49c3165db8cb466cbb93d8f659bb7b8e2a210: Status 404 returned error can't find the container with id 2c495e5c97424276533b26d774d49c3165db8cb466cbb93d8f659bb7b8e2a210 Jan 29 07:52:30 crc kubenswrapper[5017]: I0129 07:52:30.429649 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 29 07:52:30 crc kubenswrapper[5017]: I0129 07:52:30.431451 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 29 07:52:30 crc kubenswrapper[5017]: I0129 07:52:30.433675 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-4csh5" Jan 29 07:52:30 crc kubenswrapper[5017]: I0129 07:52:30.434261 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 29 07:52:30 crc kubenswrapper[5017]: I0129 07:52:30.434410 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 29 07:52:30 crc kubenswrapper[5017]: I0129 07:52:30.434751 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 29 07:52:30 crc kubenswrapper[5017]: I0129 07:52:30.470086 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 29 07:52:30 crc kubenswrapper[5017]: I0129 07:52:30.516764 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/366e61eb-22f5-44a6-905e-b6d5e6b926b0-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"366e61eb-22f5-44a6-905e-b6d5e6b926b0\") " pod="openstack/openstack-cell1-galera-0" Jan 29 07:52:30 crc kubenswrapper[5017]: I0129 07:52:30.516814 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/366e61eb-22f5-44a6-905e-b6d5e6b926b0-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"366e61eb-22f5-44a6-905e-b6d5e6b926b0\") " pod="openstack/openstack-cell1-galera-0" Jan 29 07:52:30 crc kubenswrapper[5017]: I0129 07:52:30.516856 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/366e61eb-22f5-44a6-905e-b6d5e6b926b0-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"366e61eb-22f5-44a6-905e-b6d5e6b926b0\") " pod="openstack/openstack-cell1-galera-0" Jan 29 07:52:30 crc kubenswrapper[5017]: I0129 07:52:30.516886 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgs6m\" (UniqueName: \"kubernetes.io/projected/366e61eb-22f5-44a6-905e-b6d5e6b926b0-kube-api-access-pgs6m\") pod \"openstack-cell1-galera-0\" (UID: \"366e61eb-22f5-44a6-905e-b6d5e6b926b0\") " pod="openstack/openstack-cell1-galera-0" Jan 29 07:52:30 crc kubenswrapper[5017]: I0129 07:52:30.516915 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/366e61eb-22f5-44a6-905e-b6d5e6b926b0-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"366e61eb-22f5-44a6-905e-b6d5e6b926b0\") " pod="openstack/openstack-cell1-galera-0" Jan 29 07:52:30 crc kubenswrapper[5017]: I0129 07:52:30.517020 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/366e61eb-22f5-44a6-905e-b6d5e6b926b0-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"366e61eb-22f5-44a6-905e-b6d5e6b926b0\") " pod="openstack/openstack-cell1-galera-0" Jan 29 07:52:30 crc kubenswrapper[5017]: I0129 07:52:30.517057 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"pvc-86ce9415-0ab8-4273-8bb5-ade30cc5c0b0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-86ce9415-0ab8-4273-8bb5-ade30cc5c0b0\") pod \"openstack-cell1-galera-0\" (UID: \"366e61eb-22f5-44a6-905e-b6d5e6b926b0\") " pod="openstack/openstack-cell1-galera-0" Jan 29 07:52:30 crc kubenswrapper[5017]: I0129 07:52:30.517083 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/366e61eb-22f5-44a6-905e-b6d5e6b926b0-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"366e61eb-22f5-44a6-905e-b6d5e6b926b0\") " pod="openstack/openstack-cell1-galera-0" Jan 29 07:52:30 crc kubenswrapper[5017]: I0129 07:52:30.618825 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/366e61eb-22f5-44a6-905e-b6d5e6b926b0-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"366e61eb-22f5-44a6-905e-b6d5e6b926b0\") " pod="openstack/openstack-cell1-galera-0" Jan 29 07:52:30 crc kubenswrapper[5017]: I0129 07:52:30.618894 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/366e61eb-22f5-44a6-905e-b6d5e6b926b0-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"366e61eb-22f5-44a6-905e-b6d5e6b926b0\") " pod="openstack/openstack-cell1-galera-0" Jan 29 07:52:30 crc kubenswrapper[5017]: I0129 07:52:30.618949 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/366e61eb-22f5-44a6-905e-b6d5e6b926b0-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"366e61eb-22f5-44a6-905e-b6d5e6b926b0\") " pod="openstack/openstack-cell1-galera-0" Jan 29 07:52:30 crc kubenswrapper[5017]: I0129 07:52:30.619025 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pgs6m\" (UniqueName: \"kubernetes.io/projected/366e61eb-22f5-44a6-905e-b6d5e6b926b0-kube-api-access-pgs6m\") pod \"openstack-cell1-galera-0\" (UID: \"366e61eb-22f5-44a6-905e-b6d5e6b926b0\") " pod="openstack/openstack-cell1-galera-0" Jan 29 07:52:30 crc kubenswrapper[5017]: I0129 07:52:30.619068 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/366e61eb-22f5-44a6-905e-b6d5e6b926b0-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"366e61eb-22f5-44a6-905e-b6d5e6b926b0\") " pod="openstack/openstack-cell1-galera-0" Jan 29 07:52:30 crc kubenswrapper[5017]: I0129 07:52:30.619139 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/366e61eb-22f5-44a6-905e-b6d5e6b926b0-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"366e61eb-22f5-44a6-905e-b6d5e6b926b0\") " pod="openstack/openstack-cell1-galera-0" Jan 29 07:52:30 crc kubenswrapper[5017]: I0129 07:52:30.619348 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-86ce9415-0ab8-4273-8bb5-ade30cc5c0b0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-86ce9415-0ab8-4273-8bb5-ade30cc5c0b0\") pod \"openstack-cell1-galera-0\" (UID: \"366e61eb-22f5-44a6-905e-b6d5e6b926b0\") " pod="openstack/openstack-cell1-galera-0" Jan 29 07:52:30 crc kubenswrapper[5017]: I0129 07:52:30.619624 5017 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/366e61eb-22f5-44a6-905e-b6d5e6b926b0-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"366e61eb-22f5-44a6-905e-b6d5e6b926b0\") " pod="openstack/openstack-cell1-galera-0" Jan 29 07:52:30 crc kubenswrapper[5017]: I0129 07:52:30.620366 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/366e61eb-22f5-44a6-905e-b6d5e6b926b0-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"366e61eb-22f5-44a6-905e-b6d5e6b926b0\") " pod="openstack/openstack-cell1-galera-0" Jan 29 07:52:30 crc kubenswrapper[5017]: I0129 07:52:30.620871 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/366e61eb-22f5-44a6-905e-b6d5e6b926b0-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"366e61eb-22f5-44a6-905e-b6d5e6b926b0\") " pod="openstack/openstack-cell1-galera-0" Jan 29 07:52:30 crc kubenswrapper[5017]: I0129 07:52:30.621096 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/366e61eb-22f5-44a6-905e-b6d5e6b926b0-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"366e61eb-22f5-44a6-905e-b6d5e6b926b0\") " pod="openstack/openstack-cell1-galera-0" Jan 29 07:52:30 crc kubenswrapper[5017]: I0129 07:52:30.621465 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/366e61eb-22f5-44a6-905e-b6d5e6b926b0-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"366e61eb-22f5-44a6-905e-b6d5e6b926b0\") " pod="openstack/openstack-cell1-galera-0" Jan 29 07:52:30 crc kubenswrapper[5017]: I0129 07:52:30.623463 5017 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 29 07:52:30 crc kubenswrapper[5017]: I0129 07:52:30.623897 5017 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-86ce9415-0ab8-4273-8bb5-ade30cc5c0b0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-86ce9415-0ab8-4273-8bb5-ade30cc5c0b0\") pod \"openstack-cell1-galera-0\" (UID: \"366e61eb-22f5-44a6-905e-b6d5e6b926b0\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/7536aa12a131f34e3ad79feb9d253c6b7a0ca750b49e98b118c90da3abfbd45a/globalmount\"" pod="openstack/openstack-cell1-galera-0" Jan 29 07:52:30 crc kubenswrapper[5017]: I0129 07:52:30.625442 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/366e61eb-22f5-44a6-905e-b6d5e6b926b0-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"366e61eb-22f5-44a6-905e-b6d5e6b926b0\") " pod="openstack/openstack-cell1-galera-0" Jan 29 07:52:30 crc kubenswrapper[5017]: I0129 07:52:30.626439 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/366e61eb-22f5-44a6-905e-b6d5e6b926b0-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"366e61eb-22f5-44a6-905e-b6d5e6b926b0\") " pod="openstack/openstack-cell1-galera-0" Jan 29 07:52:30 crc kubenswrapper[5017]: I0129 07:52:30.640001 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pgs6m\" (UniqueName: \"kubernetes.io/projected/366e61eb-22f5-44a6-905e-b6d5e6b926b0-kube-api-access-pgs6m\") pod \"openstack-cell1-galera-0\" (UID: \"366e61eb-22f5-44a6-905e-b6d5e6b926b0\") " pod="openstack/openstack-cell1-galera-0" Jan 29 07:52:30 crc kubenswrapper[5017]: I0129 07:52:30.656568 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-86ce9415-0ab8-4273-8bb5-ade30cc5c0b0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-86ce9415-0ab8-4273-8bb5-ade30cc5c0b0\") pod \"openstack-cell1-galera-0\" (UID: \"366e61eb-22f5-44a6-905e-b6d5e6b926b0\") " pod="openstack/openstack-cell1-galera-0" Jan 29 07:52:30 crc kubenswrapper[5017]: I0129 07:52:30.755273 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 29 07:52:30 crc kubenswrapper[5017]: I0129 07:52:30.827402 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"fa66ed9c-1190-4d8f-9026-e2f02e13aef5","Type":"ContainerStarted","Data":"d4ca116e17dc3f1a4f3319c399d44701efdaa6a70d528e41912d8f139482e899"} Jan 29 07:52:30 crc kubenswrapper[5017]: I0129 07:52:30.829774 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95587bc99-mrdlm" event={"ID":"ccf5cd60-96cd-454d-adac-537071b36e03","Type":"ContainerStarted","Data":"c572b145d05b2596233ef0fc0dce4ca3b5f495dbd341eb32b56b8b66c21e375b"} Jan 29 07:52:30 crc kubenswrapper[5017]: I0129 07:52:30.830156 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-95587bc99-mrdlm" Jan 29 07:52:30 crc kubenswrapper[5017]: I0129 07:52:30.832346 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"01c81767-cb91-41ba-b305-3aaff087606c","Type":"ContainerStarted","Data":"452d2c0e2b846fca8dd78cef503d2e0ac63819eedff9623928e13dabfa71a608"} Jan 29 07:52:30 crc kubenswrapper[5017]: I0129 07:52:30.832394 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"01c81767-cb91-41ba-b305-3aaff087606c","Type":"ContainerStarted","Data":"2c495e5c97424276533b26d774d49c3165db8cb466cbb93d8f659bb7b8e2a210"} Jan 29 07:52:30 crc kubenswrapper[5017]: I0129 07:52:30.834850 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d79f765b5-g8qf5" event={"ID":"ef6c1386-cf00-47bc-844a-9c2a52050ae4","Type":"ContainerStarted","Data":"026499f109eda5953bedcbd62c95223b31a41b3c299e1043876b87002b4ccf4c"} Jan 29 07:52:30 crc kubenswrapper[5017]: I0129 07:52:30.835585 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5d79f765b5-g8qf5" Jan 29 07:52:30 crc kubenswrapper[5017]: I0129 07:52:30.836785 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"86dda784-351d-4bef-8daa-893cbc405934","Type":"ContainerStarted","Data":"da36a84db2a416af818227c315a2dc505b15e133c6458a0f864358e366b8ab95"} Jan 29 07:52:30 crc kubenswrapper[5017]: I0129 07:52:30.839032 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"13a33bc7-e8c6-4b03-820c-33912797c525","Type":"ContainerStarted","Data":"1f08fd66d4a0b4e1e635a7d3a69373b41086829ec5f1e870835ec58d47407995"} Jan 29 07:52:30 crc kubenswrapper[5017]: I0129 07:52:30.839064 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"13a33bc7-e8c6-4b03-820c-33912797c525","Type":"ContainerStarted","Data":"edddc8e1431aca5142c61bbbddea0e0942ae0fc957277cd68a55d7a7bbd46d4c"} Jan 29 07:52:30 crc kubenswrapper[5017]: I0129 07:52:30.839587 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 29 07:52:30 crc kubenswrapper[5017]: I0129 07:52:30.861477 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-95587bc99-mrdlm" podStartSLOduration=3.861452502 podStartE2EDuration="3.861452502s" podCreationTimestamp="2026-01-29 07:52:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 07:52:30.856802787 +0000 UTC m=+4637.231250407" watchObservedRunningTime="2026-01-29 
07:52:30.861452502 +0000 UTC m=+4637.235900122" Jan 29 07:52:30 crc kubenswrapper[5017]: I0129 07:52:30.902593 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=1.902567903 podStartE2EDuration="1.902567903s" podCreationTimestamp="2026-01-29 07:52:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 07:52:30.897262312 +0000 UTC m=+4637.271709922" watchObservedRunningTime="2026-01-29 07:52:30.902567903 +0000 UTC m=+4637.277015513" Jan 29 07:52:30 crc kubenswrapper[5017]: I0129 07:52:30.927619 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5d79f765b5-g8qf5" podStartSLOduration=2.927586878 podStartE2EDuration="2.927586878s" podCreationTimestamp="2026-01-29 07:52:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 07:52:30.921049327 +0000 UTC m=+4637.295496937" watchObservedRunningTime="2026-01-29 07:52:30.927586878 +0000 UTC m=+4637.302034488" Jan 29 07:52:31 crc kubenswrapper[5017]: I0129 07:52:31.245253 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 29 07:52:31 crc kubenswrapper[5017]: I0129 07:52:31.850509 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"366e61eb-22f5-44a6-905e-b6d5e6b926b0","Type":"ContainerStarted","Data":"7273d8d6f590c673646f91762c013dfccb20f973cd07f16d223fd4ebae45a036"} Jan 29 07:52:31 crc kubenswrapper[5017]: I0129 07:52:31.850884 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"366e61eb-22f5-44a6-905e-b6d5e6b926b0","Type":"ContainerStarted","Data":"70c9188d7e3135c2dc4dc4e210e0af228536b608aebedc6597d09486b41db921"} Jan 29 07:52:31 crc kubenswrapper[5017]: I0129 07:52:31.852789 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"86dda784-351d-4bef-8daa-893cbc405934","Type":"ContainerStarted","Data":"226489a6f4e09538843d89b1cb09c4ba81aed9e5f9b400b153e4a059c2bff436"} Jan 29 07:52:31 crc kubenswrapper[5017]: I0129 07:52:31.854743 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"fa66ed9c-1190-4d8f-9026-e2f02e13aef5","Type":"ContainerStarted","Data":"38cfcc765209098ab93b43b9983a2f35430b3765320491b412e0100362222835"} Jan 29 07:52:34 crc kubenswrapper[5017]: I0129 07:52:34.882736 5017 generic.go:334] "Generic (PLEG): container finished" podID="01c81767-cb91-41ba-b305-3aaff087606c" containerID="452d2c0e2b846fca8dd78cef503d2e0ac63819eedff9623928e13dabfa71a608" exitCode=0 Jan 29 07:52:34 crc kubenswrapper[5017]: I0129 07:52:34.882851 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"01c81767-cb91-41ba-b305-3aaff087606c","Type":"ContainerDied","Data":"452d2c0e2b846fca8dd78cef503d2e0ac63819eedff9623928e13dabfa71a608"} Jan 29 07:52:35 crc kubenswrapper[5017]: I0129 07:52:35.896558 5017 generic.go:334] "Generic (PLEG): container finished" podID="366e61eb-22f5-44a6-905e-b6d5e6b926b0" containerID="7273d8d6f590c673646f91762c013dfccb20f973cd07f16d223fd4ebae45a036" exitCode=0 Jan 29 07:52:35 crc kubenswrapper[5017]: I0129 07:52:35.896652 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" 
event={"ID":"366e61eb-22f5-44a6-905e-b6d5e6b926b0","Type":"ContainerDied","Data":"7273d8d6f590c673646f91762c013dfccb20f973cd07f16d223fd4ebae45a036"} Jan 29 07:52:35 crc kubenswrapper[5017]: I0129 07:52:35.899539 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"01c81767-cb91-41ba-b305-3aaff087606c","Type":"ContainerStarted","Data":"bfbaa67c4705473c8b73ec5e3828c65b2e57fa2e4dd41036e04f1524432dbcfa"} Jan 29 07:52:35 crc kubenswrapper[5017]: I0129 07:52:35.963252 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=7.963228961 podStartE2EDuration="7.963228961s" podCreationTimestamp="2026-01-29 07:52:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 07:52:35.952917087 +0000 UTC m=+4642.327364697" watchObservedRunningTime="2026-01-29 07:52:35.963228961 +0000 UTC m=+4642.337676571" Jan 29 07:52:36 crc kubenswrapper[5017]: I0129 07:52:36.915082 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"366e61eb-22f5-44a6-905e-b6d5e6b926b0","Type":"ContainerStarted","Data":"ea360cb411c443d5caad918d06aacae725ec1dcb6df1aefddaf683b7e89ff323"} Jan 29 07:52:36 crc kubenswrapper[5017]: I0129 07:52:36.938895 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=7.938857307 podStartE2EDuration="7.938857307s" podCreationTimestamp="2026-01-29 07:52:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 07:52:36.936507709 +0000 UTC m=+4643.310955319" watchObservedRunningTime="2026-01-29 07:52:36.938857307 +0000 UTC m=+4643.313304917" Jan 29 07:52:38 crc kubenswrapper[5017]: I0129 07:52:38.125080 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-95587bc99-mrdlm" Jan 29 07:52:38 crc kubenswrapper[5017]: I0129 07:52:38.443222 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5d79f765b5-g8qf5" Jan 29 07:52:38 crc kubenswrapper[5017]: I0129 07:52:38.516059 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-95587bc99-mrdlm"] Jan 29 07:52:38 crc kubenswrapper[5017]: I0129 07:52:38.930000 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-95587bc99-mrdlm" podUID="ccf5cd60-96cd-454d-adac-537071b36e03" containerName="dnsmasq-dns" containerID="cri-o://c572b145d05b2596233ef0fc0dce4ca3b5f495dbd341eb32b56b8b66c21e375b" gracePeriod=10 Jan 29 07:52:39 crc kubenswrapper[5017]: I0129 07:52:39.456942 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-95587bc99-mrdlm" Jan 29 07:52:39 crc kubenswrapper[5017]: I0129 07:52:39.583387 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ccf5cd60-96cd-454d-adac-537071b36e03-config\") pod \"ccf5cd60-96cd-454d-adac-537071b36e03\" (UID: \"ccf5cd60-96cd-454d-adac-537071b36e03\") " Jan 29 07:52:39 crc kubenswrapper[5017]: I0129 07:52:39.583552 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ccf5cd60-96cd-454d-adac-537071b36e03-dns-svc\") pod \"ccf5cd60-96cd-454d-adac-537071b36e03\" (UID: \"ccf5cd60-96cd-454d-adac-537071b36e03\") " Jan 29 07:52:39 crc kubenswrapper[5017]: I0129 07:52:39.583583 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k8hlj\" (UniqueName: \"kubernetes.io/projected/ccf5cd60-96cd-454d-adac-537071b36e03-kube-api-access-k8hlj\") pod \"ccf5cd60-96cd-454d-adac-537071b36e03\" (UID: \"ccf5cd60-96cd-454d-adac-537071b36e03\") " Jan 29 07:52:39 crc kubenswrapper[5017]: I0129 07:52:39.596228 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 29 07:52:39 crc kubenswrapper[5017]: I0129 07:52:39.602195 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ccf5cd60-96cd-454d-adac-537071b36e03-kube-api-access-k8hlj" (OuterVolumeSpecName: "kube-api-access-k8hlj") pod "ccf5cd60-96cd-454d-adac-537071b36e03" (UID: "ccf5cd60-96cd-454d-adac-537071b36e03"). InnerVolumeSpecName "kube-api-access-k8hlj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:52:39 crc kubenswrapper[5017]: I0129 07:52:39.633691 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ccf5cd60-96cd-454d-adac-537071b36e03-config" (OuterVolumeSpecName: "config") pod "ccf5cd60-96cd-454d-adac-537071b36e03" (UID: "ccf5cd60-96cd-454d-adac-537071b36e03"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 07:52:39 crc kubenswrapper[5017]: I0129 07:52:39.650567 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ccf5cd60-96cd-454d-adac-537071b36e03-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ccf5cd60-96cd-454d-adac-537071b36e03" (UID: "ccf5cd60-96cd-454d-adac-537071b36e03"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 07:52:39 crc kubenswrapper[5017]: I0129 07:52:39.685757 5017 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ccf5cd60-96cd-454d-adac-537071b36e03-config\") on node \"crc\" DevicePath \"\"" Jan 29 07:52:39 crc kubenswrapper[5017]: I0129 07:52:39.685797 5017 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ccf5cd60-96cd-454d-adac-537071b36e03-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 07:52:39 crc kubenswrapper[5017]: I0129 07:52:39.685808 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k8hlj\" (UniqueName: \"kubernetes.io/projected/ccf5cd60-96cd-454d-adac-537071b36e03-kube-api-access-k8hlj\") on node \"crc\" DevicePath \"\"" Jan 29 07:52:39 crc kubenswrapper[5017]: I0129 07:52:39.946865 5017 generic.go:334] "Generic (PLEG): container finished" podID="ccf5cd60-96cd-454d-adac-537071b36e03" containerID="c572b145d05b2596233ef0fc0dce4ca3b5f495dbd341eb32b56b8b66c21e375b" exitCode=0 Jan 29 07:52:39 crc kubenswrapper[5017]: I0129 07:52:39.946952 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95587bc99-mrdlm" event={"ID":"ccf5cd60-96cd-454d-adac-537071b36e03","Type":"ContainerDied","Data":"c572b145d05b2596233ef0fc0dce4ca3b5f495dbd341eb32b56b8b66c21e375b"} Jan 29 07:52:39 crc kubenswrapper[5017]: I0129 07:52:39.947008 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95587bc99-mrdlm" event={"ID":"ccf5cd60-96cd-454d-adac-537071b36e03","Type":"ContainerDied","Data":"daeb4e2ff8b0ab1d08bd1807a276c75df7f27339ea7ab42f7286bf6fcb6404bf"} Jan 29 07:52:39 crc kubenswrapper[5017]: I0129 07:52:39.947049 5017 scope.go:117] "RemoveContainer" containerID="c572b145d05b2596233ef0fc0dce4ca3b5f495dbd341eb32b56b8b66c21e375b" Jan 29 07:52:39 crc kubenswrapper[5017]: I0129 07:52:39.947290 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-95587bc99-mrdlm" Jan 29 07:52:39 crc kubenswrapper[5017]: I0129 07:52:39.978196 5017 scope.go:117] "RemoveContainer" containerID="37c75eb1ec2a14f11a45da5429ce35498cd8148393cc6fca8029ba255dfb76cb" Jan 29 07:52:39 crc kubenswrapper[5017]: I0129 07:52:39.982233 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 29 07:52:39 crc kubenswrapper[5017]: I0129 07:52:39.982292 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 29 07:52:39 crc kubenswrapper[5017]: I0129 07:52:39.991946 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-95587bc99-mrdlm"] Jan 29 07:52:39 crc kubenswrapper[5017]: I0129 07:52:39.998415 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-95587bc99-mrdlm"] Jan 29 07:52:40 crc kubenswrapper[5017]: I0129 07:52:40.006456 5017 scope.go:117] "RemoveContainer" containerID="c572b145d05b2596233ef0fc0dce4ca3b5f495dbd341eb32b56b8b66c21e375b" Jan 29 07:52:40 crc kubenswrapper[5017]: E0129 07:52:40.007094 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c572b145d05b2596233ef0fc0dce4ca3b5f495dbd341eb32b56b8b66c21e375b\": container with ID starting with c572b145d05b2596233ef0fc0dce4ca3b5f495dbd341eb32b56b8b66c21e375b not found: ID does not exist" containerID="c572b145d05b2596233ef0fc0dce4ca3b5f495dbd341eb32b56b8b66c21e375b" Jan 29 07:52:40 crc kubenswrapper[5017]: I0129 07:52:40.007159 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c572b145d05b2596233ef0fc0dce4ca3b5f495dbd341eb32b56b8b66c21e375b"} err="failed to get container status \"c572b145d05b2596233ef0fc0dce4ca3b5f495dbd341eb32b56b8b66c21e375b\": rpc error: code = NotFound desc = could not find container \"c572b145d05b2596233ef0fc0dce4ca3b5f495dbd341eb32b56b8b66c21e375b\": container with ID starting with c572b145d05b2596233ef0fc0dce4ca3b5f495dbd341eb32b56b8b66c21e375b not found: ID does not exist" Jan 29 07:52:40 crc kubenswrapper[5017]: I0129 07:52:40.007201 5017 scope.go:117] "RemoveContainer" containerID="37c75eb1ec2a14f11a45da5429ce35498cd8148393cc6fca8029ba255dfb76cb" Jan 29 07:52:40 crc kubenswrapper[5017]: E0129 07:52:40.007630 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37c75eb1ec2a14f11a45da5429ce35498cd8148393cc6fca8029ba255dfb76cb\": container with ID starting with 37c75eb1ec2a14f11a45da5429ce35498cd8148393cc6fca8029ba255dfb76cb not found: ID does not exist" containerID="37c75eb1ec2a14f11a45da5429ce35498cd8148393cc6fca8029ba255dfb76cb" Jan 29 07:52:40 crc kubenswrapper[5017]: I0129 07:52:40.007681 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37c75eb1ec2a14f11a45da5429ce35498cd8148393cc6fca8029ba255dfb76cb"} err="failed to get container status \"37c75eb1ec2a14f11a45da5429ce35498cd8148393cc6fca8029ba255dfb76cb\": rpc error: code = NotFound desc = could not find container \"37c75eb1ec2a14f11a45da5429ce35498cd8148393cc6fca8029ba255dfb76cb\": container with ID starting with 37c75eb1ec2a14f11a45da5429ce35498cd8148393cc6fca8029ba255dfb76cb not found: ID does not exist" Jan 29 07:52:40 crc kubenswrapper[5017]: I0129 07:52:40.060624 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/openstack-galera-0" Jan 29 07:52:40 crc kubenswrapper[5017]: I0129 07:52:40.325622 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ccf5cd60-96cd-454d-adac-537071b36e03" path="/var/lib/kubelet/pods/ccf5cd60-96cd-454d-adac-537071b36e03/volumes" Jan 29 07:52:40 crc kubenswrapper[5017]: I0129 07:52:40.755646 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 29 07:52:40 crc kubenswrapper[5017]: I0129 07:52:40.755698 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 29 07:52:41 crc kubenswrapper[5017]: I0129 07:52:41.055278 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 29 07:52:43 crc kubenswrapper[5017]: I0129 07:52:43.341136 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 29 07:52:43 crc kubenswrapper[5017]: I0129 07:52:43.426918 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 29 07:52:48 crc kubenswrapper[5017]: I0129 07:52:48.587456 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-dljmf"] Jan 29 07:52:48 crc kubenswrapper[5017]: E0129 07:52:48.588813 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccf5cd60-96cd-454d-adac-537071b36e03" containerName="init" Jan 29 07:52:48 crc kubenswrapper[5017]: I0129 07:52:48.588835 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccf5cd60-96cd-454d-adac-537071b36e03" containerName="init" Jan 29 07:52:48 crc kubenswrapper[5017]: E0129 07:52:48.588872 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccf5cd60-96cd-454d-adac-537071b36e03" containerName="dnsmasq-dns" Jan 29 07:52:48 crc kubenswrapper[5017]: I0129 07:52:48.588880 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccf5cd60-96cd-454d-adac-537071b36e03" containerName="dnsmasq-dns" Jan 29 07:52:48 crc kubenswrapper[5017]: I0129 07:52:48.589152 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="ccf5cd60-96cd-454d-adac-537071b36e03" containerName="dnsmasq-dns" Jan 29 07:52:48 crc kubenswrapper[5017]: I0129 07:52:48.589898 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-dljmf" Jan 29 07:52:48 crc kubenswrapper[5017]: I0129 07:52:48.593678 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 29 07:52:48 crc kubenswrapper[5017]: I0129 07:52:48.604882 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-dljmf"] Jan 29 07:52:48 crc kubenswrapper[5017]: I0129 07:52:48.745523 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nv8l\" (UniqueName: \"kubernetes.io/projected/c91478e0-d8ad-4fb0-8674-05b574bb1f36-kube-api-access-4nv8l\") pod \"root-account-create-update-dljmf\" (UID: \"c91478e0-d8ad-4fb0-8674-05b574bb1f36\") " pod="openstack/root-account-create-update-dljmf" Jan 29 07:52:48 crc kubenswrapper[5017]: I0129 07:52:48.746694 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c91478e0-d8ad-4fb0-8674-05b574bb1f36-operator-scripts\") pod \"root-account-create-update-dljmf\" (UID: \"c91478e0-d8ad-4fb0-8674-05b574bb1f36\") " pod="openstack/root-account-create-update-dljmf" Jan 29 07:52:48 crc kubenswrapper[5017]: I0129 07:52:48.848480 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c91478e0-d8ad-4fb0-8674-05b574bb1f36-operator-scripts\") pod \"root-account-create-update-dljmf\" (UID: \"c91478e0-d8ad-4fb0-8674-05b574bb1f36\") " pod="openstack/root-account-create-update-dljmf" Jan 29 07:52:48 crc kubenswrapper[5017]: I0129 07:52:48.848620 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nv8l\" (UniqueName: \"kubernetes.io/projected/c91478e0-d8ad-4fb0-8674-05b574bb1f36-kube-api-access-4nv8l\") pod \"root-account-create-update-dljmf\" (UID: \"c91478e0-d8ad-4fb0-8674-05b574bb1f36\") " pod="openstack/root-account-create-update-dljmf" Jan 29 07:52:48 crc kubenswrapper[5017]: I0129 07:52:48.849421 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c91478e0-d8ad-4fb0-8674-05b574bb1f36-operator-scripts\") pod \"root-account-create-update-dljmf\" (UID: \"c91478e0-d8ad-4fb0-8674-05b574bb1f36\") " pod="openstack/root-account-create-update-dljmf" Jan 29 07:52:48 crc kubenswrapper[5017]: I0129 07:52:48.873143 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nv8l\" (UniqueName: \"kubernetes.io/projected/c91478e0-d8ad-4fb0-8674-05b574bb1f36-kube-api-access-4nv8l\") pod \"root-account-create-update-dljmf\" (UID: \"c91478e0-d8ad-4fb0-8674-05b574bb1f36\") " pod="openstack/root-account-create-update-dljmf" Jan 29 07:52:48 crc kubenswrapper[5017]: I0129 07:52:48.917438 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-dljmf" Jan 29 07:52:49 crc kubenswrapper[5017]: I0129 07:52:49.347793 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-dljmf"] Jan 29 07:52:49 crc kubenswrapper[5017]: W0129 07:52:49.353237 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc91478e0_d8ad_4fb0_8674_05b574bb1f36.slice/crio-f86969e4d2eb9802edfc2039ed253eeb1844316013f314aef94ef54426e76378 WatchSource:0}: Error finding container f86969e4d2eb9802edfc2039ed253eeb1844316013f314aef94ef54426e76378: Status 404 returned error can't find the container with id f86969e4d2eb9802edfc2039ed253eeb1844316013f314aef94ef54426e76378 Jan 29 07:52:50 crc kubenswrapper[5017]: I0129 07:52:50.043786 5017 generic.go:334] "Generic (PLEG): container finished" podID="c91478e0-d8ad-4fb0-8674-05b574bb1f36" containerID="5e26068ad8aad47aa698e5039e45501c78d8c2b17f4731196b324045eef3432c" exitCode=0 Jan 29 07:52:50 crc kubenswrapper[5017]: I0129 07:52:50.043841 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-dljmf" event={"ID":"c91478e0-d8ad-4fb0-8674-05b574bb1f36","Type":"ContainerDied","Data":"5e26068ad8aad47aa698e5039e45501c78d8c2b17f4731196b324045eef3432c"} Jan 29 07:52:50 crc kubenswrapper[5017]: I0129 07:52:50.043900 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-dljmf" event={"ID":"c91478e0-d8ad-4fb0-8674-05b574bb1f36","Type":"ContainerStarted","Data":"f86969e4d2eb9802edfc2039ed253eeb1844316013f314aef94ef54426e76378"} Jan 29 07:52:51 crc kubenswrapper[5017]: I0129 07:52:51.390029 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-dljmf" Jan 29 07:52:51 crc kubenswrapper[5017]: I0129 07:52:51.496345 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c91478e0-d8ad-4fb0-8674-05b574bb1f36-operator-scripts\") pod \"c91478e0-d8ad-4fb0-8674-05b574bb1f36\" (UID: \"c91478e0-d8ad-4fb0-8674-05b574bb1f36\") " Jan 29 07:52:51 crc kubenswrapper[5017]: I0129 07:52:51.496503 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4nv8l\" (UniqueName: \"kubernetes.io/projected/c91478e0-d8ad-4fb0-8674-05b574bb1f36-kube-api-access-4nv8l\") pod \"c91478e0-d8ad-4fb0-8674-05b574bb1f36\" (UID: \"c91478e0-d8ad-4fb0-8674-05b574bb1f36\") " Jan 29 07:52:51 crc kubenswrapper[5017]: I0129 07:52:51.498407 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c91478e0-d8ad-4fb0-8674-05b574bb1f36-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c91478e0-d8ad-4fb0-8674-05b574bb1f36" (UID: "c91478e0-d8ad-4fb0-8674-05b574bb1f36"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 07:52:51 crc kubenswrapper[5017]: I0129 07:52:51.502530 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c91478e0-d8ad-4fb0-8674-05b574bb1f36-kube-api-access-4nv8l" (OuterVolumeSpecName: "kube-api-access-4nv8l") pod "c91478e0-d8ad-4fb0-8674-05b574bb1f36" (UID: "c91478e0-d8ad-4fb0-8674-05b574bb1f36"). InnerVolumeSpecName "kube-api-access-4nv8l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:52:51 crc kubenswrapper[5017]: I0129 07:52:51.599506 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4nv8l\" (UniqueName: \"kubernetes.io/projected/c91478e0-d8ad-4fb0-8674-05b574bb1f36-kube-api-access-4nv8l\") on node \"crc\" DevicePath \"\"" Jan 29 07:52:51 crc kubenswrapper[5017]: I0129 07:52:51.599561 5017 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c91478e0-d8ad-4fb0-8674-05b574bb1f36-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 07:52:52 crc kubenswrapper[5017]: I0129 07:52:52.065437 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-dljmf" event={"ID":"c91478e0-d8ad-4fb0-8674-05b574bb1f36","Type":"ContainerDied","Data":"f86969e4d2eb9802edfc2039ed253eeb1844316013f314aef94ef54426e76378"} Jan 29 07:52:52 crc kubenswrapper[5017]: I0129 07:52:52.065990 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f86969e4d2eb9802edfc2039ed253eeb1844316013f314aef94ef54426e76378" Jan 29 07:52:52 crc kubenswrapper[5017]: I0129 07:52:52.065609 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-dljmf" Jan 29 07:52:54 crc kubenswrapper[5017]: I0129 07:52:54.416419 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-dljmf"] Jan 29 07:52:54 crc kubenswrapper[5017]: I0129 07:52:54.422393 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-dljmf"] Jan 29 07:52:56 crc kubenswrapper[5017]: I0129 07:52:56.329621 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c91478e0-d8ad-4fb0-8674-05b574bb1f36" path="/var/lib/kubelet/pods/c91478e0-d8ad-4fb0-8674-05b574bb1f36/volumes" Jan 29 07:52:59 crc kubenswrapper[5017]: I0129 07:52:59.438974 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-sqg8w"] Jan 29 07:52:59 crc kubenswrapper[5017]: E0129 07:52:59.439839 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c91478e0-d8ad-4fb0-8674-05b574bb1f36" containerName="mariadb-account-create-update" Jan 29 07:52:59 crc kubenswrapper[5017]: I0129 07:52:59.439859 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="c91478e0-d8ad-4fb0-8674-05b574bb1f36" containerName="mariadb-account-create-update" Jan 29 07:52:59 crc kubenswrapper[5017]: I0129 07:52:59.440062 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="c91478e0-d8ad-4fb0-8674-05b574bb1f36" containerName="mariadb-account-create-update" Jan 29 07:52:59 crc kubenswrapper[5017]: I0129 07:52:59.440843 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-sqg8w" Jan 29 07:52:59 crc kubenswrapper[5017]: I0129 07:52:59.443910 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 29 07:52:59 crc kubenswrapper[5017]: I0129 07:52:59.453501 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-sqg8w"] Jan 29 07:52:59 crc kubenswrapper[5017]: I0129 07:52:59.641904 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzsbm\" (UniqueName: \"kubernetes.io/projected/87bce803-e003-4ee6-8811-f8c968ed0f71-kube-api-access-kzsbm\") pod \"root-account-create-update-sqg8w\" (UID: \"87bce803-e003-4ee6-8811-f8c968ed0f71\") " pod="openstack/root-account-create-update-sqg8w" Jan 29 07:52:59 crc kubenswrapper[5017]: I0129 07:52:59.642574 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87bce803-e003-4ee6-8811-f8c968ed0f71-operator-scripts\") pod \"root-account-create-update-sqg8w\" (UID: \"87bce803-e003-4ee6-8811-f8c968ed0f71\") " pod="openstack/root-account-create-update-sqg8w" Jan 29 07:52:59 crc kubenswrapper[5017]: I0129 07:52:59.744121 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87bce803-e003-4ee6-8811-f8c968ed0f71-operator-scripts\") pod \"root-account-create-update-sqg8w\" (UID: \"87bce803-e003-4ee6-8811-f8c968ed0f71\") " pod="openstack/root-account-create-update-sqg8w" Jan 29 07:52:59 crc kubenswrapper[5017]: I0129 07:52:59.744294 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzsbm\" (UniqueName: \"kubernetes.io/projected/87bce803-e003-4ee6-8811-f8c968ed0f71-kube-api-access-kzsbm\") pod \"root-account-create-update-sqg8w\" (UID: \"87bce803-e003-4ee6-8811-f8c968ed0f71\") " pod="openstack/root-account-create-update-sqg8w" Jan 29 07:52:59 crc kubenswrapper[5017]: I0129 07:52:59.744909 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87bce803-e003-4ee6-8811-f8c968ed0f71-operator-scripts\") pod \"root-account-create-update-sqg8w\" (UID: \"87bce803-e003-4ee6-8811-f8c968ed0f71\") " pod="openstack/root-account-create-update-sqg8w" Jan 29 07:52:59 crc kubenswrapper[5017]: I0129 07:52:59.773012 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzsbm\" (UniqueName: \"kubernetes.io/projected/87bce803-e003-4ee6-8811-f8c968ed0f71-kube-api-access-kzsbm\") pod \"root-account-create-update-sqg8w\" (UID: \"87bce803-e003-4ee6-8811-f8c968ed0f71\") " pod="openstack/root-account-create-update-sqg8w" Jan 29 07:53:00 crc kubenswrapper[5017]: I0129 07:53:00.070272 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-sqg8w" Jan 29 07:53:00 crc kubenswrapper[5017]: I0129 07:53:00.540149 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-sqg8w"] Jan 29 07:53:01 crc kubenswrapper[5017]: I0129 07:53:01.144412 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-sqg8w" event={"ID":"87bce803-e003-4ee6-8811-f8c968ed0f71","Type":"ContainerStarted","Data":"ad450d585b3b08adfdd4b7d64acb2163026fdbf2a02052da64e970fc0524b7fe"} Jan 29 07:53:01 crc kubenswrapper[5017]: I0129 07:53:01.144469 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-sqg8w" event={"ID":"87bce803-e003-4ee6-8811-f8c968ed0f71","Type":"ContainerStarted","Data":"c5d135da92b8756a5bd0e4d96823851921417fc8a0f89b60517d191001ae8c1b"} Jan 29 07:53:02 crc kubenswrapper[5017]: I0129 07:53:02.152293 5017 generic.go:334] "Generic (PLEG): container finished" podID="87bce803-e003-4ee6-8811-f8c968ed0f71" containerID="ad450d585b3b08adfdd4b7d64acb2163026fdbf2a02052da64e970fc0524b7fe" exitCode=0 Jan 29 07:53:02 crc kubenswrapper[5017]: I0129 07:53:02.152352 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-sqg8w" event={"ID":"87bce803-e003-4ee6-8811-f8c968ed0f71","Type":"ContainerDied","Data":"ad450d585b3b08adfdd4b7d64acb2163026fdbf2a02052da64e970fc0524b7fe"} Jan 29 07:53:03 crc kubenswrapper[5017]: I0129 07:53:03.453857 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-sqg8w" Jan 29 07:53:03 crc kubenswrapper[5017]: I0129 07:53:03.608718 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kzsbm\" (UniqueName: \"kubernetes.io/projected/87bce803-e003-4ee6-8811-f8c968ed0f71-kube-api-access-kzsbm\") pod \"87bce803-e003-4ee6-8811-f8c968ed0f71\" (UID: \"87bce803-e003-4ee6-8811-f8c968ed0f71\") " Jan 29 07:53:03 crc kubenswrapper[5017]: I0129 07:53:03.608785 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87bce803-e003-4ee6-8811-f8c968ed0f71-operator-scripts\") pod \"87bce803-e003-4ee6-8811-f8c968ed0f71\" (UID: \"87bce803-e003-4ee6-8811-f8c968ed0f71\") " Jan 29 07:53:03 crc kubenswrapper[5017]: I0129 07:53:03.609982 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87bce803-e003-4ee6-8811-f8c968ed0f71-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "87bce803-e003-4ee6-8811-f8c968ed0f71" (UID: "87bce803-e003-4ee6-8811-f8c968ed0f71"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 07:53:03 crc kubenswrapper[5017]: I0129 07:53:03.616474 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87bce803-e003-4ee6-8811-f8c968ed0f71-kube-api-access-kzsbm" (OuterVolumeSpecName: "kube-api-access-kzsbm") pod "87bce803-e003-4ee6-8811-f8c968ed0f71" (UID: "87bce803-e003-4ee6-8811-f8c968ed0f71"). InnerVolumeSpecName "kube-api-access-kzsbm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:53:03 crc kubenswrapper[5017]: I0129 07:53:03.710664 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kzsbm\" (UniqueName: \"kubernetes.io/projected/87bce803-e003-4ee6-8811-f8c968ed0f71-kube-api-access-kzsbm\") on node \"crc\" DevicePath \"\"" Jan 29 07:53:03 crc kubenswrapper[5017]: I0129 07:53:03.710702 5017 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87bce803-e003-4ee6-8811-f8c968ed0f71-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 07:53:04 crc kubenswrapper[5017]: I0129 07:53:04.170777 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-sqg8w" Jan 29 07:53:04 crc kubenswrapper[5017]: I0129 07:53:04.170815 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-sqg8w" event={"ID":"87bce803-e003-4ee6-8811-f8c968ed0f71","Type":"ContainerDied","Data":"c5d135da92b8756a5bd0e4d96823851921417fc8a0f89b60517d191001ae8c1b"} Jan 29 07:53:04 crc kubenswrapper[5017]: I0129 07:53:04.170871 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5d135da92b8756a5bd0e4d96823851921417fc8a0f89b60517d191001ae8c1b" Jan 29 07:53:04 crc kubenswrapper[5017]: I0129 07:53:04.181699 5017 generic.go:334] "Generic (PLEG): container finished" podID="86dda784-351d-4bef-8daa-893cbc405934" containerID="226489a6f4e09538843d89b1cb09c4ba81aed9e5f9b400b153e4a059c2bff436" exitCode=0 Jan 29 07:53:04 crc kubenswrapper[5017]: I0129 07:53:04.182773 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"86dda784-351d-4bef-8daa-893cbc405934","Type":"ContainerDied","Data":"226489a6f4e09538843d89b1cb09c4ba81aed9e5f9b400b153e4a059c2bff436"} Jan 29 07:53:04 crc kubenswrapper[5017]: I0129 07:53:04.187350 5017 generic.go:334] "Generic (PLEG): container finished" podID="fa66ed9c-1190-4d8f-9026-e2f02e13aef5" containerID="38cfcc765209098ab93b43b9983a2f35430b3765320491b412e0100362222835" exitCode=0 Jan 29 07:53:04 crc kubenswrapper[5017]: I0129 07:53:04.187449 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"fa66ed9c-1190-4d8f-9026-e2f02e13aef5","Type":"ContainerDied","Data":"38cfcc765209098ab93b43b9983a2f35430b3765320491b412e0100362222835"} Jan 29 07:53:05 crc kubenswrapper[5017]: I0129 07:53:05.198079 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"fa66ed9c-1190-4d8f-9026-e2f02e13aef5","Type":"ContainerStarted","Data":"60d0f3ccd73811b756e5aa498c07d0be0a9caa3b01799539ba2bfc6560b22037"} Jan 29 07:53:05 crc kubenswrapper[5017]: I0129 07:53:05.199020 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 29 07:53:05 crc kubenswrapper[5017]: I0129 07:53:05.200837 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"86dda784-351d-4bef-8daa-893cbc405934","Type":"ContainerStarted","Data":"305ff93e615b0c36d1c9ca1f543a493796cd9a60eca09c42d652f22ded7dde28"} Jan 29 07:53:05 crc kubenswrapper[5017]: I0129 07:53:05.201123 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 29 07:53:05 crc kubenswrapper[5017]: I0129 07:53:05.230578 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.23054594 podStartE2EDuration="37.23054594s" podCreationTimestamp="2026-01-29 07:52:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 07:53:05.223891917 +0000 UTC m=+4671.598339537" watchObservedRunningTime="2026-01-29 07:53:05.23054594 +0000 UTC m=+4671.604993550" Jan 29 07:53:05 crc kubenswrapper[5017]: I0129 07:53:05.256111 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=38.256092869 podStartE2EDuration="38.256092869s" podCreationTimestamp="2026-01-29 07:52:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 07:53:05.25250382 +0000 UTC m=+4671.626951430" watchObservedRunningTime="2026-01-29 07:53:05.256092869 +0000 UTC m=+4671.630540479" Jan 29 07:53:19 crc kubenswrapper[5017]: I0129 07:53:19.554217 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 29 07:53:19 crc kubenswrapper[5017]: I0129 07:53:19.626262 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 29 07:53:22 crc kubenswrapper[5017]: I0129 07:53:22.784277 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-699964fbc-9w5ks"] Jan 29 07:53:22 crc kubenswrapper[5017]: E0129 07:53:22.785465 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87bce803-e003-4ee6-8811-f8c968ed0f71" containerName="mariadb-account-create-update" Jan 29 07:53:22 crc kubenswrapper[5017]: I0129 07:53:22.785487 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="87bce803-e003-4ee6-8811-f8c968ed0f71" containerName="mariadb-account-create-update" Jan 29 07:53:22 crc kubenswrapper[5017]: I0129 07:53:22.785721 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="87bce803-e003-4ee6-8811-f8c968ed0f71" containerName="mariadb-account-create-update" Jan 29 07:53:22 crc kubenswrapper[5017]: I0129 07:53:22.786886 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-699964fbc-9w5ks" Jan 29 07:53:22 crc kubenswrapper[5017]: I0129 07:53:22.795351 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-699964fbc-9w5ks"] Jan 29 07:53:22 crc kubenswrapper[5017]: I0129 07:53:22.850378 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64aefc1a-6f20-40e9-a8f5-767c661de180-config\") pod \"dnsmasq-dns-699964fbc-9w5ks\" (UID: \"64aefc1a-6f20-40e9-a8f5-767c661de180\") " pod="openstack/dnsmasq-dns-699964fbc-9w5ks" Jan 29 07:53:22 crc kubenswrapper[5017]: I0129 07:53:22.850448 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/64aefc1a-6f20-40e9-a8f5-767c661de180-dns-svc\") pod \"dnsmasq-dns-699964fbc-9w5ks\" (UID: \"64aefc1a-6f20-40e9-a8f5-767c661de180\") " pod="openstack/dnsmasq-dns-699964fbc-9w5ks" Jan 29 07:53:22 crc kubenswrapper[5017]: I0129 07:53:22.850475 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgskj\" (UniqueName: \"kubernetes.io/projected/64aefc1a-6f20-40e9-a8f5-767c661de180-kube-api-access-mgskj\") pod \"dnsmasq-dns-699964fbc-9w5ks\" (UID: \"64aefc1a-6f20-40e9-a8f5-767c661de180\") " pod="openstack/dnsmasq-dns-699964fbc-9w5ks" Jan 29 07:53:22 crc kubenswrapper[5017]: I0129 07:53:22.952832 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64aefc1a-6f20-40e9-a8f5-767c661de180-config\") pod \"dnsmasq-dns-699964fbc-9w5ks\" (UID: \"64aefc1a-6f20-40e9-a8f5-767c661de180\") " pod="openstack/dnsmasq-dns-699964fbc-9w5ks" Jan 29 07:53:22 crc kubenswrapper[5017]: I0129 07:53:22.952907 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/64aefc1a-6f20-40e9-a8f5-767c661de180-dns-svc\") pod \"dnsmasq-dns-699964fbc-9w5ks\" (UID: \"64aefc1a-6f20-40e9-a8f5-767c661de180\") " pod="openstack/dnsmasq-dns-699964fbc-9w5ks" Jan 29 07:53:22 crc kubenswrapper[5017]: I0129 07:53:22.952938 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgskj\" (UniqueName: \"kubernetes.io/projected/64aefc1a-6f20-40e9-a8f5-767c661de180-kube-api-access-mgskj\") pod \"dnsmasq-dns-699964fbc-9w5ks\" (UID: \"64aefc1a-6f20-40e9-a8f5-767c661de180\") " pod="openstack/dnsmasq-dns-699964fbc-9w5ks" Jan 29 07:53:22 crc kubenswrapper[5017]: I0129 07:53:22.953920 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64aefc1a-6f20-40e9-a8f5-767c661de180-config\") pod \"dnsmasq-dns-699964fbc-9w5ks\" (UID: \"64aefc1a-6f20-40e9-a8f5-767c661de180\") " pod="openstack/dnsmasq-dns-699964fbc-9w5ks" Jan 29 07:53:22 crc kubenswrapper[5017]: I0129 07:53:22.954135 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/64aefc1a-6f20-40e9-a8f5-767c661de180-dns-svc\") pod \"dnsmasq-dns-699964fbc-9w5ks\" (UID: \"64aefc1a-6f20-40e9-a8f5-767c661de180\") " pod="openstack/dnsmasq-dns-699964fbc-9w5ks" Jan 29 07:53:22 crc kubenswrapper[5017]: I0129 07:53:22.974293 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgskj\" (UniqueName: 
\"kubernetes.io/projected/64aefc1a-6f20-40e9-a8f5-767c661de180-kube-api-access-mgskj\") pod \"dnsmasq-dns-699964fbc-9w5ks\" (UID: \"64aefc1a-6f20-40e9-a8f5-767c661de180\") " pod="openstack/dnsmasq-dns-699964fbc-9w5ks" Jan 29 07:53:23 crc kubenswrapper[5017]: I0129 07:53:23.107577 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-699964fbc-9w5ks" Jan 29 07:53:23 crc kubenswrapper[5017]: I0129 07:53:23.506394 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 29 07:53:23 crc kubenswrapper[5017]: I0129 07:53:23.592104 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-699964fbc-9w5ks"] Jan 29 07:53:24 crc kubenswrapper[5017]: I0129 07:53:24.344395 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 29 07:53:24 crc kubenswrapper[5017]: I0129 07:53:24.402492 5017 generic.go:334] "Generic (PLEG): container finished" podID="64aefc1a-6f20-40e9-a8f5-767c661de180" containerID="5eaa8c238ccbe631e068663bfaaeef60d64f1f98489ad3cd3418f60bdb63b7dc" exitCode=0 Jan 29 07:53:24 crc kubenswrapper[5017]: I0129 07:53:24.402544 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-699964fbc-9w5ks" event={"ID":"64aefc1a-6f20-40e9-a8f5-767c661de180","Type":"ContainerDied","Data":"5eaa8c238ccbe631e068663bfaaeef60d64f1f98489ad3cd3418f60bdb63b7dc"} Jan 29 07:53:24 crc kubenswrapper[5017]: I0129 07:53:24.402581 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-699964fbc-9w5ks" event={"ID":"64aefc1a-6f20-40e9-a8f5-767c661de180","Type":"ContainerStarted","Data":"3977abcc31659f36858cb980adebe150f60082f433bf2681ecfe4b9ac340584d"} Jan 29 07:53:25 crc kubenswrapper[5017]: I0129 07:53:25.412429 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-699964fbc-9w5ks" event={"ID":"64aefc1a-6f20-40e9-a8f5-767c661de180","Type":"ContainerStarted","Data":"940cbae81d1ddafadef91e19c788dfc730108cbb6da7a1f91ab4d6c6b502d6d1"} Jan 29 07:53:25 crc kubenswrapper[5017]: I0129 07:53:25.412591 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-699964fbc-9w5ks" Jan 29 07:53:25 crc kubenswrapper[5017]: I0129 07:53:25.435382 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-699964fbc-9w5ks" podStartSLOduration=3.435355004 podStartE2EDuration="3.435355004s" podCreationTimestamp="2026-01-29 07:53:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 07:53:25.428750801 +0000 UTC m=+4691.803198411" watchObservedRunningTime="2026-01-29 07:53:25.435355004 +0000 UTC m=+4691.809802614" Jan 29 07:53:25 crc kubenswrapper[5017]: I0129 07:53:25.521252 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="86dda784-351d-4bef-8daa-893cbc405934" containerName="rabbitmq" containerID="cri-o://305ff93e615b0c36d1c9ca1f543a493796cd9a60eca09c42d652f22ded7dde28" gracePeriod=604798 Jan 29 07:53:26 crc kubenswrapper[5017]: I0129 07:53:26.263321 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="fa66ed9c-1190-4d8f-9026-e2f02e13aef5" containerName="rabbitmq" containerID="cri-o://60d0f3ccd73811b756e5aa498c07d0be0a9caa3b01799539ba2bfc6560b22037" gracePeriod=604799 Jan 29 07:53:29 crc 
kubenswrapper[5017]: I0129 07:53:29.551793 5017 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="86dda784-351d-4bef-8daa-893cbc405934" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.241:5672: connect: connection refused" Jan 29 07:53:29 crc kubenswrapper[5017]: I0129 07:53:29.623479 5017 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="fa66ed9c-1190-4d8f-9026-e2f02e13aef5" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.243:5672: connect: connection refused" Jan 29 07:53:32 crc kubenswrapper[5017]: I0129 07:53:32.101110 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 29 07:53:32 crc kubenswrapper[5017]: I0129 07:53:32.117647 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/86dda784-351d-4bef-8daa-893cbc405934-rabbitmq-confd\") pod \"86dda784-351d-4bef-8daa-893cbc405934\" (UID: \"86dda784-351d-4bef-8daa-893cbc405934\") " Jan 29 07:53:32 crc kubenswrapper[5017]: I0129 07:53:32.117786 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/86dda784-351d-4bef-8daa-893cbc405934-erlang-cookie-secret\") pod \"86dda784-351d-4bef-8daa-893cbc405934\" (UID: \"86dda784-351d-4bef-8daa-893cbc405934\") " Jan 29 07:53:32 crc kubenswrapper[5017]: I0129 07:53:32.117854 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/86dda784-351d-4bef-8daa-893cbc405934-plugins-conf\") pod \"86dda784-351d-4bef-8daa-893cbc405934\" (UID: \"86dda784-351d-4bef-8daa-893cbc405934\") " Jan 29 07:53:32 crc kubenswrapper[5017]: I0129 07:53:32.117935 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/86dda784-351d-4bef-8daa-893cbc405934-server-conf\") pod \"86dda784-351d-4bef-8daa-893cbc405934\" (UID: \"86dda784-351d-4bef-8daa-893cbc405934\") " Jan 29 07:53:32 crc kubenswrapper[5017]: I0129 07:53:32.118048 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7dcp2\" (UniqueName: \"kubernetes.io/projected/86dda784-351d-4bef-8daa-893cbc405934-kube-api-access-7dcp2\") pod \"86dda784-351d-4bef-8daa-893cbc405934\" (UID: \"86dda784-351d-4bef-8daa-893cbc405934\") " Jan 29 07:53:32 crc kubenswrapper[5017]: I0129 07:53:32.118089 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/86dda784-351d-4bef-8daa-893cbc405934-pod-info\") pod \"86dda784-351d-4bef-8daa-893cbc405934\" (UID: \"86dda784-351d-4bef-8daa-893cbc405934\") " Jan 29 07:53:32 crc kubenswrapper[5017]: I0129 07:53:32.118158 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/86dda784-351d-4bef-8daa-893cbc405934-rabbitmq-erlang-cookie\") pod \"86dda784-351d-4bef-8daa-893cbc405934\" (UID: \"86dda784-351d-4bef-8daa-893cbc405934\") " Jan 29 07:53:32 crc kubenswrapper[5017]: I0129 07:53:32.118440 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3d7fa02f-e6dd-4f2a-89f1-9a26e73bbcb4\") pod \"86dda784-351d-4bef-8daa-893cbc405934\" (UID: \"86dda784-351d-4bef-8daa-893cbc405934\") " Jan 29 07:53:32 crc kubenswrapper[5017]: I0129 07:53:32.118478 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/86dda784-351d-4bef-8daa-893cbc405934-rabbitmq-plugins\") pod \"86dda784-351d-4bef-8daa-893cbc405934\" (UID: \"86dda784-351d-4bef-8daa-893cbc405934\") " Jan 29 07:53:32 crc kubenswrapper[5017]: I0129 07:53:32.119701 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86dda784-351d-4bef-8daa-893cbc405934-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "86dda784-351d-4bef-8daa-893cbc405934" (UID: "86dda784-351d-4bef-8daa-893cbc405934"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:53:32 crc kubenswrapper[5017]: I0129 07:53:32.121257 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86dda784-351d-4bef-8daa-893cbc405934-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "86dda784-351d-4bef-8daa-893cbc405934" (UID: "86dda784-351d-4bef-8daa-893cbc405934"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:53:32 crc kubenswrapper[5017]: I0129 07:53:32.121259 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86dda784-351d-4bef-8daa-893cbc405934-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "86dda784-351d-4bef-8daa-893cbc405934" (UID: "86dda784-351d-4bef-8daa-893cbc405934"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 07:53:32 crc kubenswrapper[5017]: I0129 07:53:32.126560 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86dda784-351d-4bef-8daa-893cbc405934-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "86dda784-351d-4bef-8daa-893cbc405934" (UID: "86dda784-351d-4bef-8daa-893cbc405934"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:53:32 crc kubenswrapper[5017]: I0129 07:53:32.135486 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/86dda784-351d-4bef-8daa-893cbc405934-pod-info" (OuterVolumeSpecName: "pod-info") pod "86dda784-351d-4bef-8daa-893cbc405934" (UID: "86dda784-351d-4bef-8daa-893cbc405934"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 29 07:53:32 crc kubenswrapper[5017]: I0129 07:53:32.138294 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86dda784-351d-4bef-8daa-893cbc405934-kube-api-access-7dcp2" (OuterVolumeSpecName: "kube-api-access-7dcp2") pod "86dda784-351d-4bef-8daa-893cbc405934" (UID: "86dda784-351d-4bef-8daa-893cbc405934"). InnerVolumeSpecName "kube-api-access-7dcp2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:53:32 crc kubenswrapper[5017]: I0129 07:53:32.148931 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3d7fa02f-e6dd-4f2a-89f1-9a26e73bbcb4" (OuterVolumeSpecName: "persistence") pod "86dda784-351d-4bef-8daa-893cbc405934" (UID: "86dda784-351d-4bef-8daa-893cbc405934"). InnerVolumeSpecName "pvc-3d7fa02f-e6dd-4f2a-89f1-9a26e73bbcb4". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 29 07:53:32 crc kubenswrapper[5017]: I0129 07:53:32.165749 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86dda784-351d-4bef-8daa-893cbc405934-server-conf" (OuterVolumeSpecName: "server-conf") pod "86dda784-351d-4bef-8daa-893cbc405934" (UID: "86dda784-351d-4bef-8daa-893cbc405934"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 07:53:32 crc kubenswrapper[5017]: I0129 07:53:32.220778 5017 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/86dda784-351d-4bef-8daa-893cbc405934-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 29 07:53:32 crc kubenswrapper[5017]: I0129 07:53:32.220812 5017 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/86dda784-351d-4bef-8daa-893cbc405934-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 29 07:53:32 crc kubenswrapper[5017]: I0129 07:53:32.220827 5017 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/86dda784-351d-4bef-8daa-893cbc405934-server-conf\") on node \"crc\" DevicePath \"\"" Jan 29 07:53:32 crc kubenswrapper[5017]: I0129 07:53:32.220837 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7dcp2\" (UniqueName: \"kubernetes.io/projected/86dda784-351d-4bef-8daa-893cbc405934-kube-api-access-7dcp2\") on node \"crc\" DevicePath \"\"" Jan 29 07:53:32 crc kubenswrapper[5017]: I0129 07:53:32.220846 5017 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/86dda784-351d-4bef-8daa-893cbc405934-pod-info\") on node \"crc\" DevicePath \"\"" Jan 29 07:53:32 crc kubenswrapper[5017]: I0129 07:53:32.220856 5017 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/86dda784-351d-4bef-8daa-893cbc405934-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 29 07:53:32 crc kubenswrapper[5017]: I0129 07:53:32.220891 5017 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-3d7fa02f-e6dd-4f2a-89f1-9a26e73bbcb4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3d7fa02f-e6dd-4f2a-89f1-9a26e73bbcb4\") on node \"crc\" " Jan 29 07:53:32 crc kubenswrapper[5017]: I0129 07:53:32.220902 5017 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/86dda784-351d-4bef-8daa-893cbc405934-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 29 07:53:32 crc kubenswrapper[5017]: I0129 07:53:32.235568 5017 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Jan 29 07:53:32 crc kubenswrapper[5017]: I0129 07:53:32.235817 5017 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-3d7fa02f-e6dd-4f2a-89f1-9a26e73bbcb4" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3d7fa02f-e6dd-4f2a-89f1-9a26e73bbcb4") on node "crc" Jan 29 07:53:32 crc kubenswrapper[5017]: I0129 07:53:32.238092 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86dda784-351d-4bef-8daa-893cbc405934-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "86dda784-351d-4bef-8daa-893cbc405934" (UID: "86dda784-351d-4bef-8daa-893cbc405934"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:53:32 crc kubenswrapper[5017]: I0129 07:53:32.322506 5017 reconciler_common.go:293] "Volume detached for volume \"pvc-3d7fa02f-e6dd-4f2a-89f1-9a26e73bbcb4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3d7fa02f-e6dd-4f2a-89f1-9a26e73bbcb4\") on node \"crc\" DevicePath \"\"" Jan 29 07:53:32 crc kubenswrapper[5017]: I0129 07:53:32.322557 5017 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/86dda784-351d-4bef-8daa-893cbc405934-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 29 07:53:32 crc kubenswrapper[5017]: I0129 07:53:32.482777 5017 generic.go:334] "Generic (PLEG): container finished" podID="86dda784-351d-4bef-8daa-893cbc405934" containerID="305ff93e615b0c36d1c9ca1f543a493796cd9a60eca09c42d652f22ded7dde28" exitCode=0 Jan 29 07:53:32 crc kubenswrapper[5017]: I0129 07:53:32.482837 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"86dda784-351d-4bef-8daa-893cbc405934","Type":"ContainerDied","Data":"305ff93e615b0c36d1c9ca1f543a493796cd9a60eca09c42d652f22ded7dde28"} Jan 29 07:53:32 crc kubenswrapper[5017]: I0129 07:53:32.482867 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 29 07:53:32 crc kubenswrapper[5017]: I0129 07:53:32.482911 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"86dda784-351d-4bef-8daa-893cbc405934","Type":"ContainerDied","Data":"da36a84db2a416af818227c315a2dc505b15e133c6458a0f864358e366b8ab95"} Jan 29 07:53:32 crc kubenswrapper[5017]: I0129 07:53:32.482937 5017 scope.go:117] "RemoveContainer" containerID="305ff93e615b0c36d1c9ca1f543a493796cd9a60eca09c42d652f22ded7dde28" Jan 29 07:53:32 crc kubenswrapper[5017]: I0129 07:53:32.530979 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 29 07:53:32 crc kubenswrapper[5017]: I0129 07:53:32.547967 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 29 07:53:32 crc kubenswrapper[5017]: I0129 07:53:32.564379 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 29 07:53:32 crc kubenswrapper[5017]: E0129 07:53:32.564889 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86dda784-351d-4bef-8daa-893cbc405934" containerName="rabbitmq" Jan 29 07:53:32 crc kubenswrapper[5017]: I0129 07:53:32.564913 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="86dda784-351d-4bef-8daa-893cbc405934" containerName="rabbitmq" Jan 29 07:53:32 crc kubenswrapper[5017]: E0129 07:53:32.564947 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86dda784-351d-4bef-8daa-893cbc405934" containerName="setup-container" Jan 29 07:53:32 crc kubenswrapper[5017]: I0129 07:53:32.564956 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="86dda784-351d-4bef-8daa-893cbc405934" containerName="setup-container" Jan 29 07:53:32 crc kubenswrapper[5017]: I0129 07:53:32.565142 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="86dda784-351d-4bef-8daa-893cbc405934" containerName="rabbitmq" Jan 29 07:53:32 crc kubenswrapper[5017]: I0129 07:53:32.566371 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 29 07:53:32 crc kubenswrapper[5017]: I0129 07:53:32.569393 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 29 07:53:32 crc kubenswrapper[5017]: I0129 07:53:32.569654 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-q6trs" Jan 29 07:53:32 crc kubenswrapper[5017]: I0129 07:53:32.569787 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 29 07:53:32 crc kubenswrapper[5017]: I0129 07:53:32.570196 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 29 07:53:32 crc kubenswrapper[5017]: I0129 07:53:32.570393 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 29 07:53:32 crc kubenswrapper[5017]: I0129 07:53:32.576106 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 29 07:53:32 crc kubenswrapper[5017]: I0129 07:53:32.603460 5017 scope.go:117] "RemoveContainer" containerID="226489a6f4e09538843d89b1cb09c4ba81aed9e5f9b400b153e4a059c2bff436" Jan 29 07:53:32 crc kubenswrapper[5017]: I0129 07:53:32.627579 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/aba4a07e-d542-4c62-b2bf-414140c4715f-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"aba4a07e-d542-4c62-b2bf-414140c4715f\") " pod="openstack/rabbitmq-server-0" Jan 29 07:53:32 crc kubenswrapper[5017]: I0129 07:53:32.627637 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-3d7fa02f-e6dd-4f2a-89f1-9a26e73bbcb4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3d7fa02f-e6dd-4f2a-89f1-9a26e73bbcb4\") pod \"rabbitmq-server-0\" (UID: \"aba4a07e-d542-4c62-b2bf-414140c4715f\") " pod="openstack/rabbitmq-server-0" Jan 29 07:53:32 crc kubenswrapper[5017]: I0129 07:53:32.627670 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/aba4a07e-d542-4c62-b2bf-414140c4715f-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"aba4a07e-d542-4c62-b2bf-414140c4715f\") " pod="openstack/rabbitmq-server-0" Jan 29 07:53:32 crc kubenswrapper[5017]: I0129 07:53:32.627700 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/aba4a07e-d542-4c62-b2bf-414140c4715f-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"aba4a07e-d542-4c62-b2bf-414140c4715f\") " pod="openstack/rabbitmq-server-0" Jan 29 07:53:32 crc kubenswrapper[5017]: I0129 07:53:32.627722 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/aba4a07e-d542-4c62-b2bf-414140c4715f-server-conf\") pod \"rabbitmq-server-0\" (UID: \"aba4a07e-d542-4c62-b2bf-414140c4715f\") " pod="openstack/rabbitmq-server-0" Jan 29 07:53:32 crc kubenswrapper[5017]: I0129 07:53:32.627746 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/aba4a07e-d542-4c62-b2bf-414140c4715f-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: 
\"aba4a07e-d542-4c62-b2bf-414140c4715f\") " pod="openstack/rabbitmq-server-0" Jan 29 07:53:32 crc kubenswrapper[5017]: I0129 07:53:32.627783 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/aba4a07e-d542-4c62-b2bf-414140c4715f-pod-info\") pod \"rabbitmq-server-0\" (UID: \"aba4a07e-d542-4c62-b2bf-414140c4715f\") " pod="openstack/rabbitmq-server-0" Jan 29 07:53:32 crc kubenswrapper[5017]: I0129 07:53:32.627810 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/aba4a07e-d542-4c62-b2bf-414140c4715f-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"aba4a07e-d542-4c62-b2bf-414140c4715f\") " pod="openstack/rabbitmq-server-0" Jan 29 07:53:32 crc kubenswrapper[5017]: I0129 07:53:32.627846 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tx9gf\" (UniqueName: \"kubernetes.io/projected/aba4a07e-d542-4c62-b2bf-414140c4715f-kube-api-access-tx9gf\") pod \"rabbitmq-server-0\" (UID: \"aba4a07e-d542-4c62-b2bf-414140c4715f\") " pod="openstack/rabbitmq-server-0" Jan 29 07:53:32 crc kubenswrapper[5017]: I0129 07:53:32.628453 5017 scope.go:117] "RemoveContainer" containerID="305ff93e615b0c36d1c9ca1f543a493796cd9a60eca09c42d652f22ded7dde28" Jan 29 07:53:32 crc kubenswrapper[5017]: E0129 07:53:32.628827 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"305ff93e615b0c36d1c9ca1f543a493796cd9a60eca09c42d652f22ded7dde28\": container with ID starting with 305ff93e615b0c36d1c9ca1f543a493796cd9a60eca09c42d652f22ded7dde28 not found: ID does not exist" containerID="305ff93e615b0c36d1c9ca1f543a493796cd9a60eca09c42d652f22ded7dde28" Jan 29 07:53:32 crc kubenswrapper[5017]: I0129 07:53:32.628865 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"305ff93e615b0c36d1c9ca1f543a493796cd9a60eca09c42d652f22ded7dde28"} err="failed to get container status \"305ff93e615b0c36d1c9ca1f543a493796cd9a60eca09c42d652f22ded7dde28\": rpc error: code = NotFound desc = could not find container \"305ff93e615b0c36d1c9ca1f543a493796cd9a60eca09c42d652f22ded7dde28\": container with ID starting with 305ff93e615b0c36d1c9ca1f543a493796cd9a60eca09c42d652f22ded7dde28 not found: ID does not exist" Jan 29 07:53:32 crc kubenswrapper[5017]: I0129 07:53:32.628898 5017 scope.go:117] "RemoveContainer" containerID="226489a6f4e09538843d89b1cb09c4ba81aed9e5f9b400b153e4a059c2bff436" Jan 29 07:53:32 crc kubenswrapper[5017]: E0129 07:53:32.629174 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"226489a6f4e09538843d89b1cb09c4ba81aed9e5f9b400b153e4a059c2bff436\": container with ID starting with 226489a6f4e09538843d89b1cb09c4ba81aed9e5f9b400b153e4a059c2bff436 not found: ID does not exist" containerID="226489a6f4e09538843d89b1cb09c4ba81aed9e5f9b400b153e4a059c2bff436" Jan 29 07:53:32 crc kubenswrapper[5017]: I0129 07:53:32.629201 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"226489a6f4e09538843d89b1cb09c4ba81aed9e5f9b400b153e4a059c2bff436"} err="failed to get container status \"226489a6f4e09538843d89b1cb09c4ba81aed9e5f9b400b153e4a059c2bff436\": rpc error: code = NotFound desc = could not find container 
\"226489a6f4e09538843d89b1cb09c4ba81aed9e5f9b400b153e4a059c2bff436\": container with ID starting with 226489a6f4e09538843d89b1cb09c4ba81aed9e5f9b400b153e4a059c2bff436 not found: ID does not exist" Jan 29 07:53:32 crc kubenswrapper[5017]: I0129 07:53:32.735490 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tx9gf\" (UniqueName: \"kubernetes.io/projected/aba4a07e-d542-4c62-b2bf-414140c4715f-kube-api-access-tx9gf\") pod \"rabbitmq-server-0\" (UID: \"aba4a07e-d542-4c62-b2bf-414140c4715f\") " pod="openstack/rabbitmq-server-0" Jan 29 07:53:32 crc kubenswrapper[5017]: I0129 07:53:32.735599 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/aba4a07e-d542-4c62-b2bf-414140c4715f-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"aba4a07e-d542-4c62-b2bf-414140c4715f\") " pod="openstack/rabbitmq-server-0" Jan 29 07:53:32 crc kubenswrapper[5017]: I0129 07:53:32.735644 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-3d7fa02f-e6dd-4f2a-89f1-9a26e73bbcb4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3d7fa02f-e6dd-4f2a-89f1-9a26e73bbcb4\") pod \"rabbitmq-server-0\" (UID: \"aba4a07e-d542-4c62-b2bf-414140c4715f\") " pod="openstack/rabbitmq-server-0" Jan 29 07:53:32 crc kubenswrapper[5017]: I0129 07:53:32.735688 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/aba4a07e-d542-4c62-b2bf-414140c4715f-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"aba4a07e-d542-4c62-b2bf-414140c4715f\") " pod="openstack/rabbitmq-server-0" Jan 29 07:53:32 crc kubenswrapper[5017]: I0129 07:53:32.735731 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/aba4a07e-d542-4c62-b2bf-414140c4715f-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"aba4a07e-d542-4c62-b2bf-414140c4715f\") " pod="openstack/rabbitmq-server-0" Jan 29 07:53:32 crc kubenswrapper[5017]: I0129 07:53:32.735761 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/aba4a07e-d542-4c62-b2bf-414140c4715f-server-conf\") pod \"rabbitmq-server-0\" (UID: \"aba4a07e-d542-4c62-b2bf-414140c4715f\") " pod="openstack/rabbitmq-server-0" Jan 29 07:53:32 crc kubenswrapper[5017]: I0129 07:53:32.735796 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/aba4a07e-d542-4c62-b2bf-414140c4715f-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"aba4a07e-d542-4c62-b2bf-414140c4715f\") " pod="openstack/rabbitmq-server-0" Jan 29 07:53:32 crc kubenswrapper[5017]: I0129 07:53:32.735842 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/aba4a07e-d542-4c62-b2bf-414140c4715f-pod-info\") pod \"rabbitmq-server-0\" (UID: \"aba4a07e-d542-4c62-b2bf-414140c4715f\") " pod="openstack/rabbitmq-server-0" Jan 29 07:53:32 crc kubenswrapper[5017]: I0129 07:53:32.735878 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/aba4a07e-d542-4c62-b2bf-414140c4715f-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"aba4a07e-d542-4c62-b2bf-414140c4715f\") " 
pod="openstack/rabbitmq-server-0" Jan 29 07:53:32 crc kubenswrapper[5017]: I0129 07:53:32.737218 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/aba4a07e-d542-4c62-b2bf-414140c4715f-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"aba4a07e-d542-4c62-b2bf-414140c4715f\") " pod="openstack/rabbitmq-server-0" Jan 29 07:53:32 crc kubenswrapper[5017]: I0129 07:53:32.737887 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/aba4a07e-d542-4c62-b2bf-414140c4715f-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"aba4a07e-d542-4c62-b2bf-414140c4715f\") " pod="openstack/rabbitmq-server-0" Jan 29 07:53:32 crc kubenswrapper[5017]: I0129 07:53:32.739218 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/aba4a07e-d542-4c62-b2bf-414140c4715f-server-conf\") pod \"rabbitmq-server-0\" (UID: \"aba4a07e-d542-4c62-b2bf-414140c4715f\") " pod="openstack/rabbitmq-server-0" Jan 29 07:53:32 crc kubenswrapper[5017]: I0129 07:53:32.739886 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/aba4a07e-d542-4c62-b2bf-414140c4715f-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"aba4a07e-d542-4c62-b2bf-414140c4715f\") " pod="openstack/rabbitmq-server-0" Jan 29 07:53:32 crc kubenswrapper[5017]: I0129 07:53:32.741487 5017 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 29 07:53:32 crc kubenswrapper[5017]: I0129 07:53:32.741525 5017 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-3d7fa02f-e6dd-4f2a-89f1-9a26e73bbcb4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3d7fa02f-e6dd-4f2a-89f1-9a26e73bbcb4\") pod \"rabbitmq-server-0\" (UID: \"aba4a07e-d542-4c62-b2bf-414140c4715f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/22faddce87d0a1a5182aed12ec909295840853ffaa273b9024b5dc87691e16ec/globalmount\"" pod="openstack/rabbitmq-server-0" Jan 29 07:53:32 crc kubenswrapper[5017]: I0129 07:53:32.743452 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/aba4a07e-d542-4c62-b2bf-414140c4715f-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"aba4a07e-d542-4c62-b2bf-414140c4715f\") " pod="openstack/rabbitmq-server-0" Jan 29 07:53:32 crc kubenswrapper[5017]: I0129 07:53:32.745074 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/aba4a07e-d542-4c62-b2bf-414140c4715f-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"aba4a07e-d542-4c62-b2bf-414140c4715f\") " pod="openstack/rabbitmq-server-0" Jan 29 07:53:32 crc kubenswrapper[5017]: I0129 07:53:32.746681 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/aba4a07e-d542-4c62-b2bf-414140c4715f-pod-info\") pod \"rabbitmq-server-0\" (UID: \"aba4a07e-d542-4c62-b2bf-414140c4715f\") " pod="openstack/rabbitmq-server-0" Jan 29 07:53:32 crc kubenswrapper[5017]: I0129 07:53:32.760593 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tx9gf\" (UniqueName: 
\"kubernetes.io/projected/aba4a07e-d542-4c62-b2bf-414140c4715f-kube-api-access-tx9gf\") pod \"rabbitmq-server-0\" (UID: \"aba4a07e-d542-4c62-b2bf-414140c4715f\") " pod="openstack/rabbitmq-server-0" Jan 29 07:53:32 crc kubenswrapper[5017]: I0129 07:53:32.779504 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-3d7fa02f-e6dd-4f2a-89f1-9a26e73bbcb4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3d7fa02f-e6dd-4f2a-89f1-9a26e73bbcb4\") pod \"rabbitmq-server-0\" (UID: \"aba4a07e-d542-4c62-b2bf-414140c4715f\") " pod="openstack/rabbitmq-server-0" Jan 29 07:53:32 crc kubenswrapper[5017]: I0129 07:53:32.861021 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 29 07:53:32 crc kubenswrapper[5017]: I0129 07:53:32.895103 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 29 07:53:32 crc kubenswrapper[5017]: I0129 07:53:32.939122 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-293dbb3a-e820-4e3b-bcfc-a6ac2f98aacf\") pod \"fa66ed9c-1190-4d8f-9026-e2f02e13aef5\" (UID: \"fa66ed9c-1190-4d8f-9026-e2f02e13aef5\") " Jan 29 07:53:32 crc kubenswrapper[5017]: I0129 07:53:32.939248 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/fa66ed9c-1190-4d8f-9026-e2f02e13aef5-rabbitmq-confd\") pod \"fa66ed9c-1190-4d8f-9026-e2f02e13aef5\" (UID: \"fa66ed9c-1190-4d8f-9026-e2f02e13aef5\") " Jan 29 07:53:32 crc kubenswrapper[5017]: I0129 07:53:32.939297 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/fa66ed9c-1190-4d8f-9026-e2f02e13aef5-rabbitmq-erlang-cookie\") pod \"fa66ed9c-1190-4d8f-9026-e2f02e13aef5\" (UID: \"fa66ed9c-1190-4d8f-9026-e2f02e13aef5\") " Jan 29 07:53:32 crc kubenswrapper[5017]: I0129 07:53:32.939365 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/fa66ed9c-1190-4d8f-9026-e2f02e13aef5-rabbitmq-plugins\") pod \"fa66ed9c-1190-4d8f-9026-e2f02e13aef5\" (UID: \"fa66ed9c-1190-4d8f-9026-e2f02e13aef5\") " Jan 29 07:53:32 crc kubenswrapper[5017]: I0129 07:53:32.939435 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/fa66ed9c-1190-4d8f-9026-e2f02e13aef5-server-conf\") pod \"fa66ed9c-1190-4d8f-9026-e2f02e13aef5\" (UID: \"fa66ed9c-1190-4d8f-9026-e2f02e13aef5\") " Jan 29 07:53:32 crc kubenswrapper[5017]: I0129 07:53:32.939507 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/fa66ed9c-1190-4d8f-9026-e2f02e13aef5-plugins-conf\") pod \"fa66ed9c-1190-4d8f-9026-e2f02e13aef5\" (UID: \"fa66ed9c-1190-4d8f-9026-e2f02e13aef5\") " Jan 29 07:53:32 crc kubenswrapper[5017]: I0129 07:53:32.939593 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/fa66ed9c-1190-4d8f-9026-e2f02e13aef5-pod-info\") pod \"fa66ed9c-1190-4d8f-9026-e2f02e13aef5\" (UID: \"fa66ed9c-1190-4d8f-9026-e2f02e13aef5\") " Jan 29 07:53:32 crc kubenswrapper[5017]: I0129 07:53:32.939629 5017 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/fa66ed9c-1190-4d8f-9026-e2f02e13aef5-erlang-cookie-secret\") pod \"fa66ed9c-1190-4d8f-9026-e2f02e13aef5\" (UID: \"fa66ed9c-1190-4d8f-9026-e2f02e13aef5\") " Jan 29 07:53:32 crc kubenswrapper[5017]: I0129 07:53:32.939699 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkb6t\" (UniqueName: \"kubernetes.io/projected/fa66ed9c-1190-4d8f-9026-e2f02e13aef5-kube-api-access-zkb6t\") pod \"fa66ed9c-1190-4d8f-9026-e2f02e13aef5\" (UID: \"fa66ed9c-1190-4d8f-9026-e2f02e13aef5\") " Jan 29 07:53:32 crc kubenswrapper[5017]: I0129 07:53:32.940991 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa66ed9c-1190-4d8f-9026-e2f02e13aef5-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "fa66ed9c-1190-4d8f-9026-e2f02e13aef5" (UID: "fa66ed9c-1190-4d8f-9026-e2f02e13aef5"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:53:32 crc kubenswrapper[5017]: I0129 07:53:32.941225 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa66ed9c-1190-4d8f-9026-e2f02e13aef5-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "fa66ed9c-1190-4d8f-9026-e2f02e13aef5" (UID: "fa66ed9c-1190-4d8f-9026-e2f02e13aef5"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 07:53:32 crc kubenswrapper[5017]: I0129 07:53:32.942491 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa66ed9c-1190-4d8f-9026-e2f02e13aef5-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "fa66ed9c-1190-4d8f-9026-e2f02e13aef5" (UID: "fa66ed9c-1190-4d8f-9026-e2f02e13aef5"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:53:32 crc kubenswrapper[5017]: I0129 07:53:32.950353 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa66ed9c-1190-4d8f-9026-e2f02e13aef5-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "fa66ed9c-1190-4d8f-9026-e2f02e13aef5" (UID: "fa66ed9c-1190-4d8f-9026-e2f02e13aef5"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:53:32 crc kubenswrapper[5017]: I0129 07:53:32.952755 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/fa66ed9c-1190-4d8f-9026-e2f02e13aef5-pod-info" (OuterVolumeSpecName: "pod-info") pod "fa66ed9c-1190-4d8f-9026-e2f02e13aef5" (UID: "fa66ed9c-1190-4d8f-9026-e2f02e13aef5"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 29 07:53:32 crc kubenswrapper[5017]: I0129 07:53:32.962295 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa66ed9c-1190-4d8f-9026-e2f02e13aef5-kube-api-access-zkb6t" (OuterVolumeSpecName: "kube-api-access-zkb6t") pod "fa66ed9c-1190-4d8f-9026-e2f02e13aef5" (UID: "fa66ed9c-1190-4d8f-9026-e2f02e13aef5"). InnerVolumeSpecName "kube-api-access-zkb6t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:53:32 crc kubenswrapper[5017]: I0129 07:53:32.965635 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa66ed9c-1190-4d8f-9026-e2f02e13aef5-server-conf" (OuterVolumeSpecName: "server-conf") pod "fa66ed9c-1190-4d8f-9026-e2f02e13aef5" (UID: "fa66ed9c-1190-4d8f-9026-e2f02e13aef5"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 07:53:32 crc kubenswrapper[5017]: I0129 07:53:32.968211 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-293dbb3a-e820-4e3b-bcfc-a6ac2f98aacf" (OuterVolumeSpecName: "persistence") pod "fa66ed9c-1190-4d8f-9026-e2f02e13aef5" (UID: "fa66ed9c-1190-4d8f-9026-e2f02e13aef5"). InnerVolumeSpecName "pvc-293dbb3a-e820-4e3b-bcfc-a6ac2f98aacf". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 29 07:53:33 crc kubenswrapper[5017]: I0129 07:53:33.042029 5017 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/fa66ed9c-1190-4d8f-9026-e2f02e13aef5-pod-info\") on node \"crc\" DevicePath \"\"" Jan 29 07:53:33 crc kubenswrapper[5017]: I0129 07:53:33.042662 5017 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/fa66ed9c-1190-4d8f-9026-e2f02e13aef5-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 29 07:53:33 crc kubenswrapper[5017]: I0129 07:53:33.042683 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkb6t\" (UniqueName: \"kubernetes.io/projected/fa66ed9c-1190-4d8f-9026-e2f02e13aef5-kube-api-access-zkb6t\") on node \"crc\" DevicePath \"\"" Jan 29 07:53:33 crc kubenswrapper[5017]: I0129 07:53:33.042740 5017 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-293dbb3a-e820-4e3b-bcfc-a6ac2f98aacf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-293dbb3a-e820-4e3b-bcfc-a6ac2f98aacf\") on node \"crc\" " Jan 29 07:53:33 crc kubenswrapper[5017]: I0129 07:53:33.042757 5017 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/fa66ed9c-1190-4d8f-9026-e2f02e13aef5-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 29 07:53:33 crc kubenswrapper[5017]: I0129 07:53:33.042771 5017 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/fa66ed9c-1190-4d8f-9026-e2f02e13aef5-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 29 07:53:33 crc kubenswrapper[5017]: I0129 07:53:33.042783 5017 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/fa66ed9c-1190-4d8f-9026-e2f02e13aef5-server-conf\") on node \"crc\" DevicePath \"\"" Jan 29 07:53:33 crc kubenswrapper[5017]: I0129 07:53:33.042795 5017 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/fa66ed9c-1190-4d8f-9026-e2f02e13aef5-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 29 07:53:33 crc kubenswrapper[5017]: I0129 07:53:33.064302 5017 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Jan 29 07:53:33 crc kubenswrapper[5017]: I0129 07:53:33.064564 5017 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-293dbb3a-e820-4e3b-bcfc-a6ac2f98aacf" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-293dbb3a-e820-4e3b-bcfc-a6ac2f98aacf") on node "crc" Jan 29 07:53:33 crc kubenswrapper[5017]: I0129 07:53:33.071846 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa66ed9c-1190-4d8f-9026-e2f02e13aef5-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "fa66ed9c-1190-4d8f-9026-e2f02e13aef5" (UID: "fa66ed9c-1190-4d8f-9026-e2f02e13aef5"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:53:33 crc kubenswrapper[5017]: I0129 07:53:33.109199 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-699964fbc-9w5ks" Jan 29 07:53:33 crc kubenswrapper[5017]: I0129 07:53:33.144202 5017 reconciler_common.go:293] "Volume detached for volume \"pvc-293dbb3a-e820-4e3b-bcfc-a6ac2f98aacf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-293dbb3a-e820-4e3b-bcfc-a6ac2f98aacf\") on node \"crc\" DevicePath \"\"" Jan 29 07:53:33 crc kubenswrapper[5017]: I0129 07:53:33.144252 5017 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/fa66ed9c-1190-4d8f-9026-e2f02e13aef5-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 29 07:53:33 crc kubenswrapper[5017]: I0129 07:53:33.162889 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d79f765b5-g8qf5"] Jan 29 07:53:33 crc kubenswrapper[5017]: I0129 07:53:33.163285 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5d79f765b5-g8qf5" podUID="ef6c1386-cf00-47bc-844a-9c2a52050ae4" containerName="dnsmasq-dns" containerID="cri-o://026499f109eda5953bedcbd62c95223b31a41b3c299e1043876b87002b4ccf4c" gracePeriod=10 Jan 29 07:53:33 crc kubenswrapper[5017]: I0129 07:53:33.409477 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 29 07:53:33 crc kubenswrapper[5017]: I0129 07:53:33.514736 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"aba4a07e-d542-4c62-b2bf-414140c4715f","Type":"ContainerStarted","Data":"69f9bae219869b299f64123090e15bbf14875d247a2c036b6570ff1e5e245126"} Jan 29 07:53:33 crc kubenswrapper[5017]: I0129 07:53:33.519134 5017 generic.go:334] "Generic (PLEG): container finished" podID="fa66ed9c-1190-4d8f-9026-e2f02e13aef5" containerID="60d0f3ccd73811b756e5aa498c07d0be0a9caa3b01799539ba2bfc6560b22037" exitCode=0 Jan 29 07:53:33 crc kubenswrapper[5017]: I0129 07:53:33.519261 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 29 07:53:33 crc kubenswrapper[5017]: I0129 07:53:33.520347 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"fa66ed9c-1190-4d8f-9026-e2f02e13aef5","Type":"ContainerDied","Data":"60d0f3ccd73811b756e5aa498c07d0be0a9caa3b01799539ba2bfc6560b22037"} Jan 29 07:53:33 crc kubenswrapper[5017]: I0129 07:53:33.520415 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"fa66ed9c-1190-4d8f-9026-e2f02e13aef5","Type":"ContainerDied","Data":"d4ca116e17dc3f1a4f3319c399d44701efdaa6a70d528e41912d8f139482e899"} Jan 29 07:53:33 crc kubenswrapper[5017]: I0129 07:53:33.520458 5017 scope.go:117] "RemoveContainer" containerID="60d0f3ccd73811b756e5aa498c07d0be0a9caa3b01799539ba2bfc6560b22037" Jan 29 07:53:33 crc kubenswrapper[5017]: I0129 07:53:33.535627 5017 generic.go:334] "Generic (PLEG): container finished" podID="ef6c1386-cf00-47bc-844a-9c2a52050ae4" containerID="026499f109eda5953bedcbd62c95223b31a41b3c299e1043876b87002b4ccf4c" exitCode=0 Jan 29 07:53:33 crc kubenswrapper[5017]: I0129 07:53:33.535767 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d79f765b5-g8qf5" event={"ID":"ef6c1386-cf00-47bc-844a-9c2a52050ae4","Type":"ContainerDied","Data":"026499f109eda5953bedcbd62c95223b31a41b3c299e1043876b87002b4ccf4c"} Jan 29 07:53:33 crc kubenswrapper[5017]: I0129 07:53:33.555530 5017 scope.go:117] "RemoveContainer" containerID="38cfcc765209098ab93b43b9983a2f35430b3765320491b412e0100362222835" Jan 29 07:53:33 crc kubenswrapper[5017]: I0129 07:53:33.573758 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 29 07:53:33 crc kubenswrapper[5017]: I0129 07:53:33.585237 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 29 07:53:33 crc kubenswrapper[5017]: I0129 07:53:33.599304 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 29 07:53:33 crc kubenswrapper[5017]: E0129 07:53:33.599823 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa66ed9c-1190-4d8f-9026-e2f02e13aef5" containerName="rabbitmq" Jan 29 07:53:33 crc kubenswrapper[5017]: I0129 07:53:33.599844 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa66ed9c-1190-4d8f-9026-e2f02e13aef5" containerName="rabbitmq" Jan 29 07:53:33 crc kubenswrapper[5017]: E0129 07:53:33.599874 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa66ed9c-1190-4d8f-9026-e2f02e13aef5" containerName="setup-container" Jan 29 07:53:33 crc kubenswrapper[5017]: I0129 07:53:33.599883 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa66ed9c-1190-4d8f-9026-e2f02e13aef5" containerName="setup-container" Jan 29 07:53:33 crc kubenswrapper[5017]: I0129 07:53:33.600084 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa66ed9c-1190-4d8f-9026-e2f02e13aef5" containerName="rabbitmq" Jan 29 07:53:33 crc kubenswrapper[5017]: I0129 07:53:33.601606 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 29 07:53:33 crc kubenswrapper[5017]: I0129 07:53:33.603522 5017 scope.go:117] "RemoveContainer" containerID="60d0f3ccd73811b756e5aa498c07d0be0a9caa3b01799539ba2bfc6560b22037" Jan 29 07:53:33 crc kubenswrapper[5017]: I0129 07:53:33.604695 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 29 07:53:33 crc kubenswrapper[5017]: I0129 07:53:33.605010 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-8k2k9" Jan 29 07:53:33 crc kubenswrapper[5017]: I0129 07:53:33.605158 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 29 07:53:33 crc kubenswrapper[5017]: I0129 07:53:33.605510 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 29 07:53:33 crc kubenswrapper[5017]: I0129 07:53:33.605724 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 29 07:53:33 crc kubenswrapper[5017]: E0129 07:53:33.607708 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"60d0f3ccd73811b756e5aa498c07d0be0a9caa3b01799539ba2bfc6560b22037\": container with ID starting with 60d0f3ccd73811b756e5aa498c07d0be0a9caa3b01799539ba2bfc6560b22037 not found: ID does not exist" containerID="60d0f3ccd73811b756e5aa498c07d0be0a9caa3b01799539ba2bfc6560b22037" Jan 29 07:53:33 crc kubenswrapper[5017]: I0129 07:53:33.607754 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60d0f3ccd73811b756e5aa498c07d0be0a9caa3b01799539ba2bfc6560b22037"} err="failed to get container status \"60d0f3ccd73811b756e5aa498c07d0be0a9caa3b01799539ba2bfc6560b22037\": rpc error: code = NotFound desc = could not find container \"60d0f3ccd73811b756e5aa498c07d0be0a9caa3b01799539ba2bfc6560b22037\": container with ID starting with 60d0f3ccd73811b756e5aa498c07d0be0a9caa3b01799539ba2bfc6560b22037 not found: ID does not exist" Jan 29 07:53:33 crc kubenswrapper[5017]: I0129 07:53:33.607787 5017 scope.go:117] "RemoveContainer" containerID="38cfcc765209098ab93b43b9983a2f35430b3765320491b412e0100362222835" Jan 29 07:53:33 crc kubenswrapper[5017]: E0129 07:53:33.609047 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38cfcc765209098ab93b43b9983a2f35430b3765320491b412e0100362222835\": container with ID starting with 38cfcc765209098ab93b43b9983a2f35430b3765320491b412e0100362222835 not found: ID does not exist" containerID="38cfcc765209098ab93b43b9983a2f35430b3765320491b412e0100362222835" Jan 29 07:53:33 crc kubenswrapper[5017]: I0129 07:53:33.609077 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38cfcc765209098ab93b43b9983a2f35430b3765320491b412e0100362222835"} err="failed to get container status \"38cfcc765209098ab93b43b9983a2f35430b3765320491b412e0100362222835\": rpc error: code = NotFound desc = could not find container \"38cfcc765209098ab93b43b9983a2f35430b3765320491b412e0100362222835\": container with ID starting with 38cfcc765209098ab93b43b9983a2f35430b3765320491b412e0100362222835 not found: ID does not exist" Jan 29 07:53:33 crc kubenswrapper[5017]: I0129 07:53:33.612862 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5d79f765b5-g8qf5" Jan 29 07:53:33 crc kubenswrapper[5017]: I0129 07:53:33.636430 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 29 07:53:33 crc kubenswrapper[5017]: I0129 07:53:33.756015 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef6c1386-cf00-47bc-844a-9c2a52050ae4-config\") pod \"ef6c1386-cf00-47bc-844a-9c2a52050ae4\" (UID: \"ef6c1386-cf00-47bc-844a-9c2a52050ae4\") " Jan 29 07:53:33 crc kubenswrapper[5017]: I0129 07:53:33.756157 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q7pzr\" (UniqueName: \"kubernetes.io/projected/ef6c1386-cf00-47bc-844a-9c2a52050ae4-kube-api-access-q7pzr\") pod \"ef6c1386-cf00-47bc-844a-9c2a52050ae4\" (UID: \"ef6c1386-cf00-47bc-844a-9c2a52050ae4\") " Jan 29 07:53:33 crc kubenswrapper[5017]: I0129 07:53:33.756192 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ef6c1386-cf00-47bc-844a-9c2a52050ae4-dns-svc\") pod \"ef6c1386-cf00-47bc-844a-9c2a52050ae4\" (UID: \"ef6c1386-cf00-47bc-844a-9c2a52050ae4\") " Jan 29 07:53:33 crc kubenswrapper[5017]: I0129 07:53:33.756515 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2105bda0-b02e-49b2-9024-5ae1c94a9753-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"2105bda0-b02e-49b2-9024-5ae1c94a9753\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 07:53:33 crc kubenswrapper[5017]: I0129 07:53:33.756547 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lctr\" (UniqueName: \"kubernetes.io/projected/2105bda0-b02e-49b2-9024-5ae1c94a9753-kube-api-access-5lctr\") pod \"rabbitmq-cell1-server-0\" (UID: \"2105bda0-b02e-49b2-9024-5ae1c94a9753\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 07:53:33 crc kubenswrapper[5017]: I0129 07:53:33.756630 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2105bda0-b02e-49b2-9024-5ae1c94a9753-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2105bda0-b02e-49b2-9024-5ae1c94a9753\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 07:53:33 crc kubenswrapper[5017]: I0129 07:53:33.756687 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2105bda0-b02e-49b2-9024-5ae1c94a9753-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"2105bda0-b02e-49b2-9024-5ae1c94a9753\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 07:53:33 crc kubenswrapper[5017]: I0129 07:53:33.756752 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2105bda0-b02e-49b2-9024-5ae1c94a9753-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"2105bda0-b02e-49b2-9024-5ae1c94a9753\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 07:53:33 crc kubenswrapper[5017]: I0129 07:53:33.756774 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2105bda0-b02e-49b2-9024-5ae1c94a9753-rabbitmq-confd\") 
pod \"rabbitmq-cell1-server-0\" (UID: \"2105bda0-b02e-49b2-9024-5ae1c94a9753\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 07:53:33 crc kubenswrapper[5017]: I0129 07:53:33.757001 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2105bda0-b02e-49b2-9024-5ae1c94a9753-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"2105bda0-b02e-49b2-9024-5ae1c94a9753\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 07:53:33 crc kubenswrapper[5017]: I0129 07:53:33.757340 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2105bda0-b02e-49b2-9024-5ae1c94a9753-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2105bda0-b02e-49b2-9024-5ae1c94a9753\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 07:53:33 crc kubenswrapper[5017]: I0129 07:53:33.757534 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-293dbb3a-e820-4e3b-bcfc-a6ac2f98aacf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-293dbb3a-e820-4e3b-bcfc-a6ac2f98aacf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2105bda0-b02e-49b2-9024-5ae1c94a9753\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 07:53:33 crc kubenswrapper[5017]: I0129 07:53:33.763592 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef6c1386-cf00-47bc-844a-9c2a52050ae4-kube-api-access-q7pzr" (OuterVolumeSpecName: "kube-api-access-q7pzr") pod "ef6c1386-cf00-47bc-844a-9c2a52050ae4" (UID: "ef6c1386-cf00-47bc-844a-9c2a52050ae4"). InnerVolumeSpecName "kube-api-access-q7pzr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:53:33 crc kubenswrapper[5017]: I0129 07:53:33.796337 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef6c1386-cf00-47bc-844a-9c2a52050ae4-config" (OuterVolumeSpecName: "config") pod "ef6c1386-cf00-47bc-844a-9c2a52050ae4" (UID: "ef6c1386-cf00-47bc-844a-9c2a52050ae4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 07:53:33 crc kubenswrapper[5017]: I0129 07:53:33.799003 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef6c1386-cf00-47bc-844a-9c2a52050ae4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ef6c1386-cf00-47bc-844a-9c2a52050ae4" (UID: "ef6c1386-cf00-47bc-844a-9c2a52050ae4"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 07:53:33 crc kubenswrapper[5017]: I0129 07:53:33.859160 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2105bda0-b02e-49b2-9024-5ae1c94a9753-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"2105bda0-b02e-49b2-9024-5ae1c94a9753\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 07:53:33 crc kubenswrapper[5017]: I0129 07:53:33.859232 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5lctr\" (UniqueName: \"kubernetes.io/projected/2105bda0-b02e-49b2-9024-5ae1c94a9753-kube-api-access-5lctr\") pod \"rabbitmq-cell1-server-0\" (UID: \"2105bda0-b02e-49b2-9024-5ae1c94a9753\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 07:53:33 crc kubenswrapper[5017]: I0129 07:53:33.859269 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2105bda0-b02e-49b2-9024-5ae1c94a9753-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2105bda0-b02e-49b2-9024-5ae1c94a9753\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 07:53:33 crc kubenswrapper[5017]: I0129 07:53:33.859315 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2105bda0-b02e-49b2-9024-5ae1c94a9753-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"2105bda0-b02e-49b2-9024-5ae1c94a9753\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 07:53:33 crc kubenswrapper[5017]: I0129 07:53:33.859347 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2105bda0-b02e-49b2-9024-5ae1c94a9753-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"2105bda0-b02e-49b2-9024-5ae1c94a9753\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 07:53:33 crc kubenswrapper[5017]: I0129 07:53:33.859367 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2105bda0-b02e-49b2-9024-5ae1c94a9753-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"2105bda0-b02e-49b2-9024-5ae1c94a9753\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 07:53:33 crc kubenswrapper[5017]: I0129 07:53:33.859423 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2105bda0-b02e-49b2-9024-5ae1c94a9753-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"2105bda0-b02e-49b2-9024-5ae1c94a9753\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 07:53:33 crc kubenswrapper[5017]: I0129 07:53:33.859468 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2105bda0-b02e-49b2-9024-5ae1c94a9753-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2105bda0-b02e-49b2-9024-5ae1c94a9753\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 07:53:33 crc kubenswrapper[5017]: I0129 07:53:33.859506 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-293dbb3a-e820-4e3b-bcfc-a6ac2f98aacf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-293dbb3a-e820-4e3b-bcfc-a6ac2f98aacf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2105bda0-b02e-49b2-9024-5ae1c94a9753\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 07:53:33 crc 
kubenswrapper[5017]: I0129 07:53:33.859567 5017 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef6c1386-cf00-47bc-844a-9c2a52050ae4-config\") on node \"crc\" DevicePath \"\"" Jan 29 07:53:33 crc kubenswrapper[5017]: I0129 07:53:33.859580 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q7pzr\" (UniqueName: \"kubernetes.io/projected/ef6c1386-cf00-47bc-844a-9c2a52050ae4-kube-api-access-q7pzr\") on node \"crc\" DevicePath \"\"" Jan 29 07:53:33 crc kubenswrapper[5017]: I0129 07:53:33.859590 5017 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ef6c1386-cf00-47bc-844a-9c2a52050ae4-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 07:53:33 crc kubenswrapper[5017]: I0129 07:53:33.860402 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2105bda0-b02e-49b2-9024-5ae1c94a9753-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"2105bda0-b02e-49b2-9024-5ae1c94a9753\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 07:53:33 crc kubenswrapper[5017]: I0129 07:53:33.860651 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2105bda0-b02e-49b2-9024-5ae1c94a9753-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"2105bda0-b02e-49b2-9024-5ae1c94a9753\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 07:53:33 crc kubenswrapper[5017]: I0129 07:53:33.861288 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2105bda0-b02e-49b2-9024-5ae1c94a9753-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2105bda0-b02e-49b2-9024-5ae1c94a9753\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 07:53:33 crc kubenswrapper[5017]: I0129 07:53:33.861990 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2105bda0-b02e-49b2-9024-5ae1c94a9753-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2105bda0-b02e-49b2-9024-5ae1c94a9753\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 07:53:33 crc kubenswrapper[5017]: I0129 07:53:33.863976 5017 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 29 07:53:33 crc kubenswrapper[5017]: I0129 07:53:33.864025 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2105bda0-b02e-49b2-9024-5ae1c94a9753-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"2105bda0-b02e-49b2-9024-5ae1c94a9753\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 07:53:33 crc kubenswrapper[5017]: I0129 07:53:33.864031 5017 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-293dbb3a-e820-4e3b-bcfc-a6ac2f98aacf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-293dbb3a-e820-4e3b-bcfc-a6ac2f98aacf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2105bda0-b02e-49b2-9024-5ae1c94a9753\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/76741904e1bce448ae9808ed33d8ddf821ea45d735ad33493dbbf09d8599ddb2/globalmount\"" pod="openstack/rabbitmq-cell1-server-0"
Jan 29 07:53:33 crc kubenswrapper[5017]: I0129 07:53:33.865035 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2105bda0-b02e-49b2-9024-5ae1c94a9753-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"2105bda0-b02e-49b2-9024-5ae1c94a9753\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 07:53:33 crc kubenswrapper[5017]: I0129 07:53:33.866862 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2105bda0-b02e-49b2-9024-5ae1c94a9753-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"2105bda0-b02e-49b2-9024-5ae1c94a9753\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 07:53:33 crc kubenswrapper[5017]: I0129 07:53:33.880702 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5lctr\" (UniqueName: \"kubernetes.io/projected/2105bda0-b02e-49b2-9024-5ae1c94a9753-kube-api-access-5lctr\") pod \"rabbitmq-cell1-server-0\" (UID: \"2105bda0-b02e-49b2-9024-5ae1c94a9753\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 07:53:33 crc kubenswrapper[5017]: I0129 07:53:33.893539 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-293dbb3a-e820-4e3b-bcfc-a6ac2f98aacf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-293dbb3a-e820-4e3b-bcfc-a6ac2f98aacf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2105bda0-b02e-49b2-9024-5ae1c94a9753\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 07:53:33 crc kubenswrapper[5017]: I0129 07:53:33.928367 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Jan 29 07:53:34 crc kubenswrapper[5017]: I0129 07:53:34.155434 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 29 07:53:34 crc kubenswrapper[5017]: W0129 07:53:34.158023 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2105bda0_b02e_49b2_9024_5ae1c94a9753.slice/crio-a1e240eb4d1404f6a660b45b0797b68ecb3e0bcb84fe7cbd3916b035d6e85cad WatchSource:0}: Error finding container a1e240eb4d1404f6a660b45b0797b68ecb3e0bcb84fe7cbd3916b035d6e85cad: Status 404 returned error can't find the container with id a1e240eb4d1404f6a660b45b0797b68ecb3e0bcb84fe7cbd3916b035d6e85cad
Jan 29 07:53:34 crc kubenswrapper[5017]: I0129 07:53:34.329735 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86dda784-351d-4bef-8daa-893cbc405934" path="/var/lib/kubelet/pods/86dda784-351d-4bef-8daa-893cbc405934/volumes"
Jan 29 07:53:34 crc kubenswrapper[5017]: I0129 07:53:34.330625 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa66ed9c-1190-4d8f-9026-e2f02e13aef5" path="/var/lib/kubelet/pods/fa66ed9c-1190-4d8f-9026-e2f02e13aef5/volumes"
Jan 29 07:53:34 crc kubenswrapper[5017]: I0129 07:53:34.554768 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d79f765b5-g8qf5" event={"ID":"ef6c1386-cf00-47bc-844a-9c2a52050ae4","Type":"ContainerDied","Data":"6dcf71937717de556dc321fd90192c00885939d2ae61d120f28275baf041cc08"}
Jan 29 07:53:34 crc kubenswrapper[5017]: I0129 07:53:34.554830 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d79f765b5-g8qf5"
Jan 29 07:53:34 crc kubenswrapper[5017]: I0129 07:53:34.554864 5017 scope.go:117] "RemoveContainer" containerID="026499f109eda5953bedcbd62c95223b31a41b3c299e1043876b87002b4ccf4c"
Jan 29 07:53:34 crc kubenswrapper[5017]: I0129 07:53:34.558727 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2105bda0-b02e-49b2-9024-5ae1c94a9753","Type":"ContainerStarted","Data":"a1e240eb4d1404f6a660b45b0797b68ecb3e0bcb84fe7cbd3916b035d6e85cad"}
Jan 29 07:53:34 crc kubenswrapper[5017]: I0129 07:53:34.597678 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d79f765b5-g8qf5"]
Jan 29 07:53:34 crc kubenswrapper[5017]: I0129 07:53:34.607054 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5d79f765b5-g8qf5"]
Jan 29 07:53:34 crc kubenswrapper[5017]: I0129 07:53:34.614830 5017 scope.go:117] "RemoveContainer" containerID="4504902d8f57a999218bba0f72aad4d1f776de06f9588ec37a43971bb5a297a0"
Jan 29 07:53:35 crc kubenswrapper[5017]: I0129 07:53:35.573568 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"aba4a07e-d542-4c62-b2bf-414140c4715f","Type":"ContainerStarted","Data":"593c0177c1e2d9728f5c61945a6155d29a67b3b581090dff93b77eabb9301d58"}
Jan 29 07:53:36 crc kubenswrapper[5017]: I0129 07:53:36.332527 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef6c1386-cf00-47bc-844a-9c2a52050ae4" path="/var/lib/kubelet/pods/ef6c1386-cf00-47bc-844a-9c2a52050ae4/volumes"
Jan 29 07:53:36 crc kubenswrapper[5017]: I0129 07:53:36.582998 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2105bda0-b02e-49b2-9024-5ae1c94a9753","Type":"ContainerStarted","Data":"f0854688b38e924aa4f55a8076d015cb75e4d5dbc5b369641f4bd2d6e45e3f1c"}
Jan 29 07:53:38 crc kubenswrapper[5017]: I0129 07:53:38.442115 5017 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5d79f765b5-g8qf5" podUID="ef6c1386-cf00-47bc-844a-9c2a52050ae4" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.240:5353: i/o timeout"
Jan 29 07:53:55 crc kubenswrapper[5017]: I0129 07:53:55.477661 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-w89r4"]
Jan 29 07:53:55 crc kubenswrapper[5017]: E0129 07:53:55.478741 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef6c1386-cf00-47bc-844a-9c2a52050ae4" containerName="init"
Jan 29 07:53:55 crc kubenswrapper[5017]: I0129 07:53:55.478760 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef6c1386-cf00-47bc-844a-9c2a52050ae4" containerName="init"
Jan 29 07:53:55 crc kubenswrapper[5017]: E0129 07:53:55.478776 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef6c1386-cf00-47bc-844a-9c2a52050ae4" containerName="dnsmasq-dns"
Jan 29 07:53:55 crc kubenswrapper[5017]: I0129 07:53:55.478783 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef6c1386-cf00-47bc-844a-9c2a52050ae4" containerName="dnsmasq-dns"
Jan 29 07:53:55 crc kubenswrapper[5017]: I0129 07:53:55.479064 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef6c1386-cf00-47bc-844a-9c2a52050ae4" containerName="dnsmasq-dns"
Jan 29 07:53:55 crc kubenswrapper[5017]: I0129 07:53:55.480651 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-w89r4"
Jan 29 07:53:55 crc kubenswrapper[5017]: I0129 07:53:55.487336 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-w89r4"]
Jan 29 07:53:55 crc kubenswrapper[5017]: I0129 07:53:55.609967 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/606019d0-69a8-4cc1-b1f9-40e72a2a9f33-utilities\") pod \"redhat-operators-w89r4\" (UID: \"606019d0-69a8-4cc1-b1f9-40e72a2a9f33\") " pod="openshift-marketplace/redhat-operators-w89r4"
Jan 29 07:53:55 crc kubenswrapper[5017]: I0129 07:53:55.610218 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/606019d0-69a8-4cc1-b1f9-40e72a2a9f33-catalog-content\") pod \"redhat-operators-w89r4\" (UID: \"606019d0-69a8-4cc1-b1f9-40e72a2a9f33\") " pod="openshift-marketplace/redhat-operators-w89r4"
Jan 29 07:53:55 crc kubenswrapper[5017]: I0129 07:53:55.610348 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbnzz\" (UniqueName: \"kubernetes.io/projected/606019d0-69a8-4cc1-b1f9-40e72a2a9f33-kube-api-access-rbnzz\") pod \"redhat-operators-w89r4\" (UID: \"606019d0-69a8-4cc1-b1f9-40e72a2a9f33\") " pod="openshift-marketplace/redhat-operators-w89r4"
Jan 29 07:53:55 crc kubenswrapper[5017]: I0129 07:53:55.711879 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/606019d0-69a8-4cc1-b1f9-40e72a2a9f33-catalog-content\") pod \"redhat-operators-w89r4\" (UID: \"606019d0-69a8-4cc1-b1f9-40e72a2a9f33\") " pod="openshift-marketplace/redhat-operators-w89r4"
Jan 29 07:53:55 crc kubenswrapper[5017]: I0129 07:53:55.711995 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbnzz\" (UniqueName: \"kubernetes.io/projected/606019d0-69a8-4cc1-b1f9-40e72a2a9f33-kube-api-access-rbnzz\") pod \"redhat-operators-w89r4\" (UID: \"606019d0-69a8-4cc1-b1f9-40e72a2a9f33\") " pod="openshift-marketplace/redhat-operators-w89r4"
Jan 29 07:53:55 crc kubenswrapper[5017]: I0129 07:53:55.712074 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/606019d0-69a8-4cc1-b1f9-40e72a2a9f33-utilities\") pod \"redhat-operators-w89r4\" (UID: \"606019d0-69a8-4cc1-b1f9-40e72a2a9f33\") " pod="openshift-marketplace/redhat-operators-w89r4"
Jan 29 07:53:55 crc kubenswrapper[5017]: I0129 07:53:55.712496 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/606019d0-69a8-4cc1-b1f9-40e72a2a9f33-catalog-content\") pod \"redhat-operators-w89r4\" (UID: \"606019d0-69a8-4cc1-b1f9-40e72a2a9f33\") " pod="openshift-marketplace/redhat-operators-w89r4"
Jan 29 07:53:55 crc kubenswrapper[5017]: I0129 07:53:55.712521 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/606019d0-69a8-4cc1-b1f9-40e72a2a9f33-utilities\") pod \"redhat-operators-w89r4\" (UID: \"606019d0-69a8-4cc1-b1f9-40e72a2a9f33\") " pod="openshift-marketplace/redhat-operators-w89r4"
Jan 29 07:53:55 crc kubenswrapper[5017]: I0129 07:53:55.734088 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbnzz\" (UniqueName: \"kubernetes.io/projected/606019d0-69a8-4cc1-b1f9-40e72a2a9f33-kube-api-access-rbnzz\") pod \"redhat-operators-w89r4\" (UID: \"606019d0-69a8-4cc1-b1f9-40e72a2a9f33\") " pod="openshift-marketplace/redhat-operators-w89r4"
Jan 29 07:53:55 crc kubenswrapper[5017]: I0129 07:53:55.800526 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-w89r4"
Need to start a new one" pod="openshift-marketplace/redhat-operators-w89r4" Jan 29 07:53:56 crc kubenswrapper[5017]: I0129 07:53:56.064517 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-w89r4"] Jan 29 07:53:56 crc kubenswrapper[5017]: I0129 07:53:56.750309 5017 generic.go:334] "Generic (PLEG): container finished" podID="606019d0-69a8-4cc1-b1f9-40e72a2a9f33" containerID="074ee92b07bc17d7c164adfc5e2a39f905bde4d6218b5e1169ee8b66a1e6dac7" exitCode=0 Jan 29 07:53:56 crc kubenswrapper[5017]: I0129 07:53:56.750489 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w89r4" event={"ID":"606019d0-69a8-4cc1-b1f9-40e72a2a9f33","Type":"ContainerDied","Data":"074ee92b07bc17d7c164adfc5e2a39f905bde4d6218b5e1169ee8b66a1e6dac7"} Jan 29 07:53:56 crc kubenswrapper[5017]: I0129 07:53:56.750808 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w89r4" event={"ID":"606019d0-69a8-4cc1-b1f9-40e72a2a9f33","Type":"ContainerStarted","Data":"d93103dc0d85d84d97df87cd75c5e2651815c7be367981d829961c43d835e50c"} Jan 29 07:53:57 crc kubenswrapper[5017]: I0129 07:53:57.762253 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w89r4" event={"ID":"606019d0-69a8-4cc1-b1f9-40e72a2a9f33","Type":"ContainerStarted","Data":"4179d0c73e5e3a7a03048b8b3020ed6db11e43c4fd46d33149e530f263473c58"} Jan 29 07:53:58 crc kubenswrapper[5017]: I0129 07:53:58.774177 5017 generic.go:334] "Generic (PLEG): container finished" podID="606019d0-69a8-4cc1-b1f9-40e72a2a9f33" containerID="4179d0c73e5e3a7a03048b8b3020ed6db11e43c4fd46d33149e530f263473c58" exitCode=0 Jan 29 07:53:58 crc kubenswrapper[5017]: I0129 07:53:58.774265 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w89r4" event={"ID":"606019d0-69a8-4cc1-b1f9-40e72a2a9f33","Type":"ContainerDied","Data":"4179d0c73e5e3a7a03048b8b3020ed6db11e43c4fd46d33149e530f263473c58"} Jan 29 07:53:59 crc kubenswrapper[5017]: I0129 07:53:59.790432 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w89r4" event={"ID":"606019d0-69a8-4cc1-b1f9-40e72a2a9f33","Type":"ContainerStarted","Data":"4b2e520f3c27c5a4e81f174a8ad973c8ea09313199910453cdd402f0e6b31d3e"} Jan 29 07:54:05 crc kubenswrapper[5017]: I0129 07:54:05.801794 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-w89r4" Jan 29 07:54:05 crc kubenswrapper[5017]: I0129 07:54:05.802636 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-w89r4" Jan 29 07:54:05 crc kubenswrapper[5017]: I0129 07:54:05.854978 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-w89r4" Jan 29 07:54:05 crc kubenswrapper[5017]: I0129 07:54:05.882872 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-w89r4" podStartSLOduration=8.461464596 podStartE2EDuration="10.882850001s" podCreationTimestamp="2026-01-29 07:53:55 +0000 UTC" firstStartedPulling="2026-01-29 07:53:56.753048731 +0000 UTC m=+4723.127496341" lastFinishedPulling="2026-01-29 07:53:59.174434136 +0000 UTC m=+4725.548881746" observedRunningTime="2026-01-29 07:53:59.82106652 +0000 UTC m=+4726.195514150" watchObservedRunningTime="2026-01-29 07:54:05.882850001 +0000 UTC m=+4732.257297611" Jan 
29 07:54:05 crc kubenswrapper[5017]: I0129 07:54:05.901769 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-w89r4" Jan 29 07:54:06 crc kubenswrapper[5017]: I0129 07:54:06.100743 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-w89r4"] Jan 29 07:54:07 crc kubenswrapper[5017]: I0129 07:54:07.868390 5017 generic.go:334] "Generic (PLEG): container finished" podID="aba4a07e-d542-4c62-b2bf-414140c4715f" containerID="593c0177c1e2d9728f5c61945a6155d29a67b3b581090dff93b77eabb9301d58" exitCode=0 Jan 29 07:54:07 crc kubenswrapper[5017]: I0129 07:54:07.868486 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"aba4a07e-d542-4c62-b2bf-414140c4715f","Type":"ContainerDied","Data":"593c0177c1e2d9728f5c61945a6155d29a67b3b581090dff93b77eabb9301d58"} Jan 29 07:54:07 crc kubenswrapper[5017]: I0129 07:54:07.869582 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-w89r4" podUID="606019d0-69a8-4cc1-b1f9-40e72a2a9f33" containerName="registry-server" containerID="cri-o://4b2e520f3c27c5a4e81f174a8ad973c8ea09313199910453cdd402f0e6b31d3e" gracePeriod=2 Jan 29 07:54:08 crc kubenswrapper[5017]: I0129 07:54:08.880883 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"aba4a07e-d542-4c62-b2bf-414140c4715f","Type":"ContainerStarted","Data":"4fbc2cef4eb0e4900ffe9772c7a9d4e95ace5f9372b53cbb507c27f1b6713e61"} Jan 29 07:54:08 crc kubenswrapper[5017]: I0129 07:54:08.881993 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 29 07:54:08 crc kubenswrapper[5017]: I0129 07:54:08.888359 5017 generic.go:334] "Generic (PLEG): container finished" podID="2105bda0-b02e-49b2-9024-5ae1c94a9753" containerID="f0854688b38e924aa4f55a8076d015cb75e4d5dbc5b369641f4bd2d6e45e3f1c" exitCode=0 Jan 29 07:54:08 crc kubenswrapper[5017]: I0129 07:54:08.888451 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2105bda0-b02e-49b2-9024-5ae1c94a9753","Type":"ContainerDied","Data":"f0854688b38e924aa4f55a8076d015cb75e4d5dbc5b369641f4bd2d6e45e3f1c"} Jan 29 07:54:08 crc kubenswrapper[5017]: I0129 07:54:08.892411 5017 generic.go:334] "Generic (PLEG): container finished" podID="606019d0-69a8-4cc1-b1f9-40e72a2a9f33" containerID="4b2e520f3c27c5a4e81f174a8ad973c8ea09313199910453cdd402f0e6b31d3e" exitCode=0 Jan 29 07:54:08 crc kubenswrapper[5017]: I0129 07:54:08.892474 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w89r4" event={"ID":"606019d0-69a8-4cc1-b1f9-40e72a2a9f33","Type":"ContainerDied","Data":"4b2e520f3c27c5a4e81f174a8ad973c8ea09313199910453cdd402f0e6b31d3e"} Jan 29 07:54:08 crc kubenswrapper[5017]: I0129 07:54:08.923886 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.923853446 podStartE2EDuration="36.923853446s" podCreationTimestamp="2026-01-29 07:53:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 07:54:08.911856981 +0000 UTC m=+4735.286304611" watchObservedRunningTime="2026-01-29 07:54:08.923853446 +0000 UTC m=+4735.298301056" Jan 29 07:54:09 crc kubenswrapper[5017]: I0129 07:54:09.046643 5017 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-w89r4" Jan 29 07:54:09 crc kubenswrapper[5017]: I0129 07:54:09.159526 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/606019d0-69a8-4cc1-b1f9-40e72a2a9f33-catalog-content\") pod \"606019d0-69a8-4cc1-b1f9-40e72a2a9f33\" (UID: \"606019d0-69a8-4cc1-b1f9-40e72a2a9f33\") " Jan 29 07:54:09 crc kubenswrapper[5017]: I0129 07:54:09.159732 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rbnzz\" (UniqueName: \"kubernetes.io/projected/606019d0-69a8-4cc1-b1f9-40e72a2a9f33-kube-api-access-rbnzz\") pod \"606019d0-69a8-4cc1-b1f9-40e72a2a9f33\" (UID: \"606019d0-69a8-4cc1-b1f9-40e72a2a9f33\") " Jan 29 07:54:09 crc kubenswrapper[5017]: I0129 07:54:09.159821 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/606019d0-69a8-4cc1-b1f9-40e72a2a9f33-utilities\") pod \"606019d0-69a8-4cc1-b1f9-40e72a2a9f33\" (UID: \"606019d0-69a8-4cc1-b1f9-40e72a2a9f33\") " Jan 29 07:54:09 crc kubenswrapper[5017]: I0129 07:54:09.160952 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/606019d0-69a8-4cc1-b1f9-40e72a2a9f33-utilities" (OuterVolumeSpecName: "utilities") pod "606019d0-69a8-4cc1-b1f9-40e72a2a9f33" (UID: "606019d0-69a8-4cc1-b1f9-40e72a2a9f33"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:54:09 crc kubenswrapper[5017]: I0129 07:54:09.168042 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/606019d0-69a8-4cc1-b1f9-40e72a2a9f33-kube-api-access-rbnzz" (OuterVolumeSpecName: "kube-api-access-rbnzz") pod "606019d0-69a8-4cc1-b1f9-40e72a2a9f33" (UID: "606019d0-69a8-4cc1-b1f9-40e72a2a9f33"). InnerVolumeSpecName "kube-api-access-rbnzz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:54:09 crc kubenswrapper[5017]: I0129 07:54:09.263631 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rbnzz\" (UniqueName: \"kubernetes.io/projected/606019d0-69a8-4cc1-b1f9-40e72a2a9f33-kube-api-access-rbnzz\") on node \"crc\" DevicePath \"\"" Jan 29 07:54:09 crc kubenswrapper[5017]: I0129 07:54:09.264174 5017 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/606019d0-69a8-4cc1-b1f9-40e72a2a9f33-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 07:54:09 crc kubenswrapper[5017]: I0129 07:54:09.904398 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2105bda0-b02e-49b2-9024-5ae1c94a9753","Type":"ContainerStarted","Data":"6f47816a24f4c140854adf9f57396e4ebf06dc4f6d1d6c23e7b73666a0595317"} Jan 29 07:54:09 crc kubenswrapper[5017]: I0129 07:54:09.905301 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 29 07:54:09 crc kubenswrapper[5017]: I0129 07:54:09.908219 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w89r4" event={"ID":"606019d0-69a8-4cc1-b1f9-40e72a2a9f33","Type":"ContainerDied","Data":"d93103dc0d85d84d97df87cd75c5e2651815c7be367981d829961c43d835e50c"} Jan 29 07:54:09 crc kubenswrapper[5017]: I0129 07:54:09.908270 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-w89r4" Jan 29 07:54:09 crc kubenswrapper[5017]: I0129 07:54:09.908280 5017 scope.go:117] "RemoveContainer" containerID="4b2e520f3c27c5a4e81f174a8ad973c8ea09313199910453cdd402f0e6b31d3e" Jan 29 07:54:09 crc kubenswrapper[5017]: I0129 07:54:09.936123 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=36.936096042 podStartE2EDuration="36.936096042s" podCreationTimestamp="2026-01-29 07:53:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 07:54:09.932803132 +0000 UTC m=+4736.307250742" watchObservedRunningTime="2026-01-29 07:54:09.936096042 +0000 UTC m=+4736.310543652" Jan 29 07:54:09 crc kubenswrapper[5017]: I0129 07:54:09.936638 5017 scope.go:117] "RemoveContainer" containerID="4179d0c73e5e3a7a03048b8b3020ed6db11e43c4fd46d33149e530f263473c58" Jan 29 07:54:09 crc kubenswrapper[5017]: I0129 07:54:09.960629 5017 scope.go:117] "RemoveContainer" containerID="074ee92b07bc17d7c164adfc5e2a39f905bde4d6218b5e1169ee8b66a1e6dac7" Jan 29 07:54:10 crc kubenswrapper[5017]: I0129 07:54:10.294033 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/606019d0-69a8-4cc1-b1f9-40e72a2a9f33-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "606019d0-69a8-4cc1-b1f9-40e72a2a9f33" (UID: "606019d0-69a8-4cc1-b1f9-40e72a2a9f33"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:54:10 crc kubenswrapper[5017]: I0129 07:54:10.383172 5017 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/606019d0-69a8-4cc1-b1f9-40e72a2a9f33-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 07:54:10 crc kubenswrapper[5017]: I0129 07:54:10.532305 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-w89r4"] Jan 29 07:54:10 crc kubenswrapper[5017]: I0129 07:54:10.548988 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-w89r4"] Jan 29 07:54:12 crc kubenswrapper[5017]: I0129 07:54:12.326932 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="606019d0-69a8-4cc1-b1f9-40e72a2a9f33" path="/var/lib/kubelet/pods/606019d0-69a8-4cc1-b1f9-40e72a2a9f33/volumes" Jan 29 07:54:22 crc kubenswrapper[5017]: I0129 07:54:22.898157 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 29 07:54:23 crc kubenswrapper[5017]: I0129 07:54:23.934550 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 29 07:54:26 crc kubenswrapper[5017]: I0129 07:54:26.539561 5017 patch_prober.go:28] interesting pod/machine-config-daemon-895pl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 07:54:26 crc kubenswrapper[5017]: I0129 07:54:26.540050 5017 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: 
connect: connection refused" Jan 29 07:54:35 crc kubenswrapper[5017]: I0129 07:54:35.217197 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"] Jan 29 07:54:35 crc kubenswrapper[5017]: E0129 07:54:35.218187 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="606019d0-69a8-4cc1-b1f9-40e72a2a9f33" containerName="extract-utilities" Jan 29 07:54:35 crc kubenswrapper[5017]: I0129 07:54:35.218207 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="606019d0-69a8-4cc1-b1f9-40e72a2a9f33" containerName="extract-utilities" Jan 29 07:54:35 crc kubenswrapper[5017]: E0129 07:54:35.218222 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="606019d0-69a8-4cc1-b1f9-40e72a2a9f33" containerName="extract-content" Jan 29 07:54:35 crc kubenswrapper[5017]: I0129 07:54:35.218230 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="606019d0-69a8-4cc1-b1f9-40e72a2a9f33" containerName="extract-content" Jan 29 07:54:35 crc kubenswrapper[5017]: E0129 07:54:35.218258 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="606019d0-69a8-4cc1-b1f9-40e72a2a9f33" containerName="registry-server" Jan 29 07:54:35 crc kubenswrapper[5017]: I0129 07:54:35.218267 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="606019d0-69a8-4cc1-b1f9-40e72a2a9f33" containerName="registry-server" Jan 29 07:54:35 crc kubenswrapper[5017]: I0129 07:54:35.218427 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="606019d0-69a8-4cc1-b1f9-40e72a2a9f33" containerName="registry-server" Jan 29 07:54:35 crc kubenswrapper[5017]: I0129 07:54:35.219035 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Jan 29 07:54:35 crc kubenswrapper[5017]: I0129 07:54:35.222198 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-pd95q" Jan 29 07:54:35 crc kubenswrapper[5017]: I0129 07:54:35.229111 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Jan 29 07:54:35 crc kubenswrapper[5017]: I0129 07:54:35.298364 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6zng\" (UniqueName: \"kubernetes.io/projected/716dea68-702b-4d0f-a3ff-8d53a7d7d571-kube-api-access-j6zng\") pod \"mariadb-client\" (UID: \"716dea68-702b-4d0f-a3ff-8d53a7d7d571\") " pod="openstack/mariadb-client" Jan 29 07:54:35 crc kubenswrapper[5017]: I0129 07:54:35.399914 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6zng\" (UniqueName: \"kubernetes.io/projected/716dea68-702b-4d0f-a3ff-8d53a7d7d571-kube-api-access-j6zng\") pod \"mariadb-client\" (UID: \"716dea68-702b-4d0f-a3ff-8d53a7d7d571\") " pod="openstack/mariadb-client" Jan 29 07:54:35 crc kubenswrapper[5017]: I0129 07:54:35.422868 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6zng\" (UniqueName: \"kubernetes.io/projected/716dea68-702b-4d0f-a3ff-8d53a7d7d571-kube-api-access-j6zng\") pod \"mariadb-client\" (UID: \"716dea68-702b-4d0f-a3ff-8d53a7d7d571\") " pod="openstack/mariadb-client" Jan 29 07:54:35 crc kubenswrapper[5017]: I0129 07:54:35.542338 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Jan 29 07:54:36 crc kubenswrapper[5017]: I0129 07:54:36.093647 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Jan 29 07:54:36 crc kubenswrapper[5017]: I0129 07:54:36.100662 5017 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 07:54:36 crc kubenswrapper[5017]: I0129 07:54:36.116410 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"716dea68-702b-4d0f-a3ff-8d53a7d7d571","Type":"ContainerStarted","Data":"96590c5ecf48bf45ec6a4b7030da67e45770b351f6360577ff7f3b21dd76ef53"} Jan 29 07:54:37 crc kubenswrapper[5017]: I0129 07:54:37.127393 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"716dea68-702b-4d0f-a3ff-8d53a7d7d571","Type":"ContainerStarted","Data":"52ffdc89f2cfeb4f89a04453c5fd40526b1be0bb2dbd9fe5f957d8bc08774015"} Jan 29 07:54:37 crc kubenswrapper[5017]: I0129 07:54:37.157517 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mariadb-client" podStartSLOduration=1.647915856 podStartE2EDuration="2.157474398s" podCreationTimestamp="2026-01-29 07:54:35 +0000 UTC" firstStartedPulling="2026-01-29 07:54:36.100365668 +0000 UTC m=+4762.474813278" lastFinishedPulling="2026-01-29 07:54:36.60992421 +0000 UTC m=+4762.984371820" observedRunningTime="2026-01-29 07:54:37.149008879 +0000 UTC m=+4763.523456499" watchObservedRunningTime="2026-01-29 07:54:37.157474398 +0000 UTC m=+4763.531922018" Jan 29 07:54:50 crc kubenswrapper[5017]: I0129 07:54:50.898088 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Jan 29 07:54:50 crc kubenswrapper[5017]: I0129 07:54:50.899312 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/mariadb-client" podUID="716dea68-702b-4d0f-a3ff-8d53a7d7d571" containerName="mariadb-client" containerID="cri-o://52ffdc89f2cfeb4f89a04453c5fd40526b1be0bb2dbd9fe5f957d8bc08774015" gracePeriod=30 Jan 29 07:54:51 crc kubenswrapper[5017]: I0129 07:54:51.250642 5017 generic.go:334] "Generic (PLEG): container finished" podID="716dea68-702b-4d0f-a3ff-8d53a7d7d571" containerID="52ffdc89f2cfeb4f89a04453c5fd40526b1be0bb2dbd9fe5f957d8bc08774015" exitCode=143 Jan 29 07:54:51 crc kubenswrapper[5017]: I0129 07:54:51.250735 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"716dea68-702b-4d0f-a3ff-8d53a7d7d571","Type":"ContainerDied","Data":"52ffdc89f2cfeb4f89a04453c5fd40526b1be0bb2dbd9fe5f957d8bc08774015"} Jan 29 07:54:51 crc kubenswrapper[5017]: I0129 07:54:51.482085 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Jan 29 07:54:51 crc kubenswrapper[5017]: I0129 07:54:51.630576 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j6zng\" (UniqueName: \"kubernetes.io/projected/716dea68-702b-4d0f-a3ff-8d53a7d7d571-kube-api-access-j6zng\") pod \"716dea68-702b-4d0f-a3ff-8d53a7d7d571\" (UID: \"716dea68-702b-4d0f-a3ff-8d53a7d7d571\") " Jan 29 07:54:51 crc kubenswrapper[5017]: I0129 07:54:51.646574 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/716dea68-702b-4d0f-a3ff-8d53a7d7d571-kube-api-access-j6zng" (OuterVolumeSpecName: "kube-api-access-j6zng") pod "716dea68-702b-4d0f-a3ff-8d53a7d7d571" (UID: "716dea68-702b-4d0f-a3ff-8d53a7d7d571"). 
InnerVolumeSpecName "kube-api-access-j6zng". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:54:51 crc kubenswrapper[5017]: I0129 07:54:51.734854 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j6zng\" (UniqueName: \"kubernetes.io/projected/716dea68-702b-4d0f-a3ff-8d53a7d7d571-kube-api-access-j6zng\") on node \"crc\" DevicePath \"\"" Jan 29 07:54:52 crc kubenswrapper[5017]: I0129 07:54:52.260747 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"716dea68-702b-4d0f-a3ff-8d53a7d7d571","Type":"ContainerDied","Data":"96590c5ecf48bf45ec6a4b7030da67e45770b351f6360577ff7f3b21dd76ef53"} Jan 29 07:54:52 crc kubenswrapper[5017]: I0129 07:54:52.260812 5017 scope.go:117] "RemoveContainer" containerID="52ffdc89f2cfeb4f89a04453c5fd40526b1be0bb2dbd9fe5f957d8bc08774015" Jan 29 07:54:52 crc kubenswrapper[5017]: I0129 07:54:52.260946 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Jan 29 07:54:52 crc kubenswrapper[5017]: I0129 07:54:52.300300 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Jan 29 07:54:52 crc kubenswrapper[5017]: I0129 07:54:52.308190 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"] Jan 29 07:54:52 crc kubenswrapper[5017]: I0129 07:54:52.328153 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="716dea68-702b-4d0f-a3ff-8d53a7d7d571" path="/var/lib/kubelet/pods/716dea68-702b-4d0f-a3ff-8d53a7d7d571/volumes" Jan 29 07:54:56 crc kubenswrapper[5017]: I0129 07:54:56.539100 5017 patch_prober.go:28] interesting pod/machine-config-daemon-895pl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 07:54:56 crc kubenswrapper[5017]: I0129 07:54:56.540148 5017 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 07:55:23 crc kubenswrapper[5017]: I0129 07:55:23.245520 5017 scope.go:117] "RemoveContainer" containerID="fb3bfd7444f0544e3eb2675f3a7ef90731a676452b1ccf1ffe46ef0e9bbebe60" Jan 29 07:55:26 crc kubenswrapper[5017]: I0129 07:55:26.539387 5017 patch_prober.go:28] interesting pod/machine-config-daemon-895pl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 07:55:26 crc kubenswrapper[5017]: I0129 07:55:26.539995 5017 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 07:55:26 crc kubenswrapper[5017]: I0129 07:55:26.540059 5017 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-895pl" Jan 29 07:55:26 crc kubenswrapper[5017]: I0129 07:55:26.541634 5017 
kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"566051f95af55c3167bfb24eecc73af401c9e2cd360bef8281baf73b9b65699e"} pod="openshift-machine-config-operator/machine-config-daemon-895pl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 07:55:26 crc kubenswrapper[5017]: I0129 07:55:26.541805 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" containerID="cri-o://566051f95af55c3167bfb24eecc73af401c9e2cd360bef8281baf73b9b65699e" gracePeriod=600 Jan 29 07:55:27 crc kubenswrapper[5017]: I0129 07:55:27.587543 5017 generic.go:334] "Generic (PLEG): container finished" podID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerID="566051f95af55c3167bfb24eecc73af401c9e2cd360bef8281baf73b9b65699e" exitCode=0 Jan 29 07:55:27 crc kubenswrapper[5017]: I0129 07:55:27.587606 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-895pl" event={"ID":"2672ef63-7861-4c3d-a1b4-03cc9d18f8e2","Type":"ContainerDied","Data":"566051f95af55c3167bfb24eecc73af401c9e2cd360bef8281baf73b9b65699e"} Jan 29 07:55:27 crc kubenswrapper[5017]: I0129 07:55:27.588421 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-895pl" event={"ID":"2672ef63-7861-4c3d-a1b4-03cc9d18f8e2","Type":"ContainerStarted","Data":"c58783d1bd795ffac490b01aaccb060f3071e08daa88d33a09f7ce8715d64300"} Jan 29 07:55:27 crc kubenswrapper[5017]: I0129 07:55:27.588444 5017 scope.go:117] "RemoveContainer" containerID="ff0ef22b38a904235ce937687a5299a84b9a93c980c779f3d410e345948f7f25" Jan 29 07:57:06 crc kubenswrapper[5017]: I0129 07:57:06.438460 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4t5qj"] Jan 29 07:57:06 crc kubenswrapper[5017]: E0129 07:57:06.439734 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="716dea68-702b-4d0f-a3ff-8d53a7d7d571" containerName="mariadb-client" Jan 29 07:57:06 crc kubenswrapper[5017]: I0129 07:57:06.439751 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="716dea68-702b-4d0f-a3ff-8d53a7d7d571" containerName="mariadb-client" Jan 29 07:57:06 crc kubenswrapper[5017]: I0129 07:57:06.439971 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="716dea68-702b-4d0f-a3ff-8d53a7d7d571" containerName="mariadb-client" Jan 29 07:57:06 crc kubenswrapper[5017]: I0129 07:57:06.441383 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4t5qj" Jan 29 07:57:06 crc kubenswrapper[5017]: I0129 07:57:06.458817 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4t5qj"] Jan 29 07:57:06 crc kubenswrapper[5017]: I0129 07:57:06.525746 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwnn9\" (UniqueName: \"kubernetes.io/projected/1f1d207d-015b-4483-a744-325171ed4e47-kube-api-access-vwnn9\") pod \"community-operators-4t5qj\" (UID: \"1f1d207d-015b-4483-a744-325171ed4e47\") " pod="openshift-marketplace/community-operators-4t5qj" Jan 29 07:57:06 crc kubenswrapper[5017]: I0129 07:57:06.526304 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f1d207d-015b-4483-a744-325171ed4e47-utilities\") pod \"community-operators-4t5qj\" (UID: \"1f1d207d-015b-4483-a744-325171ed4e47\") " pod="openshift-marketplace/community-operators-4t5qj" Jan 29 07:57:06 crc kubenswrapper[5017]: I0129 07:57:06.526343 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f1d207d-015b-4483-a744-325171ed4e47-catalog-content\") pod \"community-operators-4t5qj\" (UID: \"1f1d207d-015b-4483-a744-325171ed4e47\") " pod="openshift-marketplace/community-operators-4t5qj" Jan 29 07:57:06 crc kubenswrapper[5017]: I0129 07:57:06.627819 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwnn9\" (UniqueName: \"kubernetes.io/projected/1f1d207d-015b-4483-a744-325171ed4e47-kube-api-access-vwnn9\") pod \"community-operators-4t5qj\" (UID: \"1f1d207d-015b-4483-a744-325171ed4e47\") " pod="openshift-marketplace/community-operators-4t5qj" Jan 29 07:57:06 crc kubenswrapper[5017]: I0129 07:57:06.627923 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f1d207d-015b-4483-a744-325171ed4e47-utilities\") pod \"community-operators-4t5qj\" (UID: \"1f1d207d-015b-4483-a744-325171ed4e47\") " pod="openshift-marketplace/community-operators-4t5qj" Jan 29 07:57:06 crc kubenswrapper[5017]: I0129 07:57:06.627986 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f1d207d-015b-4483-a744-325171ed4e47-catalog-content\") pod \"community-operators-4t5qj\" (UID: \"1f1d207d-015b-4483-a744-325171ed4e47\") " pod="openshift-marketplace/community-operators-4t5qj" Jan 29 07:57:06 crc kubenswrapper[5017]: I0129 07:57:06.628615 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f1d207d-015b-4483-a744-325171ed4e47-catalog-content\") pod \"community-operators-4t5qj\" (UID: \"1f1d207d-015b-4483-a744-325171ed4e47\") " pod="openshift-marketplace/community-operators-4t5qj" Jan 29 07:57:06 crc kubenswrapper[5017]: I0129 07:57:06.628686 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f1d207d-015b-4483-a744-325171ed4e47-utilities\") pod \"community-operators-4t5qj\" (UID: \"1f1d207d-015b-4483-a744-325171ed4e47\") " pod="openshift-marketplace/community-operators-4t5qj" Jan 29 07:57:06 crc kubenswrapper[5017]: I0129 07:57:06.646251 5017 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openshift-marketplace/certified-operators-bntz2"] Jan 29 07:57:06 crc kubenswrapper[5017]: I0129 07:57:06.648448 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bntz2" Jan 29 07:57:06 crc kubenswrapper[5017]: I0129 07:57:06.677736 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwnn9\" (UniqueName: \"kubernetes.io/projected/1f1d207d-015b-4483-a744-325171ed4e47-kube-api-access-vwnn9\") pod \"community-operators-4t5qj\" (UID: \"1f1d207d-015b-4483-a744-325171ed4e47\") " pod="openshift-marketplace/community-operators-4t5qj" Jan 29 07:57:06 crc kubenswrapper[5017]: I0129 07:57:06.693301 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bntz2"] Jan 29 07:57:06 crc kubenswrapper[5017]: I0129 07:57:06.729682 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14fa37e1-9c41-43fe-a1e2-d5b7d4268f0c-catalog-content\") pod \"certified-operators-bntz2\" (UID: \"14fa37e1-9c41-43fe-a1e2-d5b7d4268f0c\") " pod="openshift-marketplace/certified-operators-bntz2" Jan 29 07:57:06 crc kubenswrapper[5017]: I0129 07:57:06.729744 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzdx5\" (UniqueName: \"kubernetes.io/projected/14fa37e1-9c41-43fe-a1e2-d5b7d4268f0c-kube-api-access-hzdx5\") pod \"certified-operators-bntz2\" (UID: \"14fa37e1-9c41-43fe-a1e2-d5b7d4268f0c\") " pod="openshift-marketplace/certified-operators-bntz2" Jan 29 07:57:06 crc kubenswrapper[5017]: I0129 07:57:06.729774 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14fa37e1-9c41-43fe-a1e2-d5b7d4268f0c-utilities\") pod \"certified-operators-bntz2\" (UID: \"14fa37e1-9c41-43fe-a1e2-d5b7d4268f0c\") " pod="openshift-marketplace/certified-operators-bntz2" Jan 29 07:57:06 crc kubenswrapper[5017]: I0129 07:57:06.777528 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4t5qj" Jan 29 07:57:06 crc kubenswrapper[5017]: I0129 07:57:06.832048 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14fa37e1-9c41-43fe-a1e2-d5b7d4268f0c-catalog-content\") pod \"certified-operators-bntz2\" (UID: \"14fa37e1-9c41-43fe-a1e2-d5b7d4268f0c\") " pod="openshift-marketplace/certified-operators-bntz2" Jan 29 07:57:06 crc kubenswrapper[5017]: I0129 07:57:06.832118 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzdx5\" (UniqueName: \"kubernetes.io/projected/14fa37e1-9c41-43fe-a1e2-d5b7d4268f0c-kube-api-access-hzdx5\") pod \"certified-operators-bntz2\" (UID: \"14fa37e1-9c41-43fe-a1e2-d5b7d4268f0c\") " pod="openshift-marketplace/certified-operators-bntz2" Jan 29 07:57:06 crc kubenswrapper[5017]: I0129 07:57:06.832147 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14fa37e1-9c41-43fe-a1e2-d5b7d4268f0c-utilities\") pod \"certified-operators-bntz2\" (UID: \"14fa37e1-9c41-43fe-a1e2-d5b7d4268f0c\") " pod="openshift-marketplace/certified-operators-bntz2" Jan 29 07:57:06 crc kubenswrapper[5017]: I0129 07:57:06.832892 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14fa37e1-9c41-43fe-a1e2-d5b7d4268f0c-utilities\") pod \"certified-operators-bntz2\" (UID: \"14fa37e1-9c41-43fe-a1e2-d5b7d4268f0c\") " pod="openshift-marketplace/certified-operators-bntz2" Jan 29 07:57:06 crc kubenswrapper[5017]: I0129 07:57:06.833200 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14fa37e1-9c41-43fe-a1e2-d5b7d4268f0c-catalog-content\") pod \"certified-operators-bntz2\" (UID: \"14fa37e1-9c41-43fe-a1e2-d5b7d4268f0c\") " pod="openshift-marketplace/certified-operators-bntz2" Jan 29 07:57:06 crc kubenswrapper[5017]: I0129 07:57:06.879199 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzdx5\" (UniqueName: \"kubernetes.io/projected/14fa37e1-9c41-43fe-a1e2-d5b7d4268f0c-kube-api-access-hzdx5\") pod \"certified-operators-bntz2\" (UID: \"14fa37e1-9c41-43fe-a1e2-d5b7d4268f0c\") " pod="openshift-marketplace/certified-operators-bntz2" Jan 29 07:57:06 crc kubenswrapper[5017]: I0129 07:57:06.972485 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bntz2" Jan 29 07:57:07 crc kubenswrapper[5017]: I0129 07:57:07.326210 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4t5qj"] Jan 29 07:57:08 crc kubenswrapper[5017]: I0129 07:57:07.698789 5017 generic.go:334] "Generic (PLEG): container finished" podID="1f1d207d-015b-4483-a744-325171ed4e47" containerID="5ae952de06828fc32686dabf4c82d8c780bc91414e1f6f2bbce57ab820b5582e" exitCode=0 Jan 29 07:57:08 crc kubenswrapper[5017]: I0129 07:57:07.700123 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4t5qj" event={"ID":"1f1d207d-015b-4483-a744-325171ed4e47","Type":"ContainerDied","Data":"5ae952de06828fc32686dabf4c82d8c780bc91414e1f6f2bbce57ab820b5582e"} Jan 29 07:57:08 crc kubenswrapper[5017]: I0129 07:57:07.700207 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4t5qj" event={"ID":"1f1d207d-015b-4483-a744-325171ed4e47","Type":"ContainerStarted","Data":"2dce1f7226e4bb4292a6e3242c1cda59b5c0cb97e15eb97663310783e1c34041"} Jan 29 07:57:08 crc kubenswrapper[5017]: I0129 07:57:07.834468 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bntz2"] Jan 29 07:57:08 crc kubenswrapper[5017]: I0129 07:57:08.708764 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4t5qj" event={"ID":"1f1d207d-015b-4483-a744-325171ed4e47","Type":"ContainerStarted","Data":"849351c60ad11d7feefd93d7fe3438975deef87cdaeae92ff261df4e13c2fbfc"} Jan 29 07:57:08 crc kubenswrapper[5017]: I0129 07:57:08.711332 5017 generic.go:334] "Generic (PLEG): container finished" podID="14fa37e1-9c41-43fe-a1e2-d5b7d4268f0c" containerID="98b8e318de3d709f72161d90621f62a021b11bbaa68f5461f9323eecca96cc6b" exitCode=0 Jan 29 07:57:08 crc kubenswrapper[5017]: I0129 07:57:08.711373 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bntz2" event={"ID":"14fa37e1-9c41-43fe-a1e2-d5b7d4268f0c","Type":"ContainerDied","Data":"98b8e318de3d709f72161d90621f62a021b11bbaa68f5461f9323eecca96cc6b"} Jan 29 07:57:08 crc kubenswrapper[5017]: I0129 07:57:08.711398 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bntz2" event={"ID":"14fa37e1-9c41-43fe-a1e2-d5b7d4268f0c","Type":"ContainerStarted","Data":"e35006d7d132316bdee57dad6ea92adb0800bfa3fe2c03e833a2092b9366bf19"} Jan 29 07:57:09 crc kubenswrapper[5017]: I0129 07:57:09.722851 5017 generic.go:334] "Generic (PLEG): container finished" podID="1f1d207d-015b-4483-a744-325171ed4e47" containerID="849351c60ad11d7feefd93d7fe3438975deef87cdaeae92ff261df4e13c2fbfc" exitCode=0 Jan 29 07:57:09 crc kubenswrapper[5017]: I0129 07:57:09.722935 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4t5qj" event={"ID":"1f1d207d-015b-4483-a744-325171ed4e47","Type":"ContainerDied","Data":"849351c60ad11d7feefd93d7fe3438975deef87cdaeae92ff261df4e13c2fbfc"} Jan 29 07:57:09 crc kubenswrapper[5017]: I0129 07:57:09.727923 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bntz2" event={"ID":"14fa37e1-9c41-43fe-a1e2-d5b7d4268f0c","Type":"ContainerStarted","Data":"d5161a671bd6ede82479168c36b640522ccf176678536000cea44a45d117c1f3"} Jan 29 07:57:10 crc kubenswrapper[5017]: I0129 07:57:10.742770 5017 generic.go:334] "Generic (PLEG): 
container finished" podID="14fa37e1-9c41-43fe-a1e2-d5b7d4268f0c" containerID="d5161a671bd6ede82479168c36b640522ccf176678536000cea44a45d117c1f3" exitCode=0 Jan 29 07:57:10 crc kubenswrapper[5017]: I0129 07:57:10.742878 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bntz2" event={"ID":"14fa37e1-9c41-43fe-a1e2-d5b7d4268f0c","Type":"ContainerDied","Data":"d5161a671bd6ede82479168c36b640522ccf176678536000cea44a45d117c1f3"} Jan 29 07:57:10 crc kubenswrapper[5017]: I0129 07:57:10.747009 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4t5qj" event={"ID":"1f1d207d-015b-4483-a744-325171ed4e47","Type":"ContainerStarted","Data":"b172814af8f868b4857b81ce89fd5963da1b50352fdd17eaae4b6defb5d21822"} Jan 29 07:57:10 crc kubenswrapper[5017]: I0129 07:57:10.822302 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4t5qj" podStartSLOduration=2.322216381 podStartE2EDuration="4.8222728s" podCreationTimestamp="2026-01-29 07:57:06 +0000 UTC" firstStartedPulling="2026-01-29 07:57:07.702663883 +0000 UTC m=+4914.077111493" lastFinishedPulling="2026-01-29 07:57:10.202720302 +0000 UTC m=+4916.577167912" observedRunningTime="2026-01-29 07:57:10.796865275 +0000 UTC m=+4917.171312885" watchObservedRunningTime="2026-01-29 07:57:10.8222728 +0000 UTC m=+4917.196720410" Jan 29 07:57:11 crc kubenswrapper[5017]: I0129 07:57:11.038743 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-dc4bj"] Jan 29 07:57:11 crc kubenswrapper[5017]: I0129 07:57:11.040270 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dc4bj" Jan 29 07:57:11 crc kubenswrapper[5017]: I0129 07:57:11.112926 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dc4bj"] Jan 29 07:57:11 crc kubenswrapper[5017]: I0129 07:57:11.166885 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pr9g\" (UniqueName: \"kubernetes.io/projected/5e5b65cd-8fb5-4691-8c29-ff3f7a80e55f-kube-api-access-9pr9g\") pod \"redhat-marketplace-dc4bj\" (UID: \"5e5b65cd-8fb5-4691-8c29-ff3f7a80e55f\") " pod="openshift-marketplace/redhat-marketplace-dc4bj" Jan 29 07:57:11 crc kubenswrapper[5017]: I0129 07:57:11.167003 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e5b65cd-8fb5-4691-8c29-ff3f7a80e55f-catalog-content\") pod \"redhat-marketplace-dc4bj\" (UID: \"5e5b65cd-8fb5-4691-8c29-ff3f7a80e55f\") " pod="openshift-marketplace/redhat-marketplace-dc4bj" Jan 29 07:57:11 crc kubenswrapper[5017]: I0129 07:57:11.167083 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e5b65cd-8fb5-4691-8c29-ff3f7a80e55f-utilities\") pod \"redhat-marketplace-dc4bj\" (UID: \"5e5b65cd-8fb5-4691-8c29-ff3f7a80e55f\") " pod="openshift-marketplace/redhat-marketplace-dc4bj" Jan 29 07:57:11 crc kubenswrapper[5017]: I0129 07:57:11.268250 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e5b65cd-8fb5-4691-8c29-ff3f7a80e55f-catalog-content\") pod \"redhat-marketplace-dc4bj\" (UID: \"5e5b65cd-8fb5-4691-8c29-ff3f7a80e55f\") " 
pod="openshift-marketplace/redhat-marketplace-dc4bj" Jan 29 07:57:11 crc kubenswrapper[5017]: I0129 07:57:11.268339 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e5b65cd-8fb5-4691-8c29-ff3f7a80e55f-utilities\") pod \"redhat-marketplace-dc4bj\" (UID: \"5e5b65cd-8fb5-4691-8c29-ff3f7a80e55f\") " pod="openshift-marketplace/redhat-marketplace-dc4bj" Jan 29 07:57:11 crc kubenswrapper[5017]: I0129 07:57:11.268419 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9pr9g\" (UniqueName: \"kubernetes.io/projected/5e5b65cd-8fb5-4691-8c29-ff3f7a80e55f-kube-api-access-9pr9g\") pod \"redhat-marketplace-dc4bj\" (UID: \"5e5b65cd-8fb5-4691-8c29-ff3f7a80e55f\") " pod="openshift-marketplace/redhat-marketplace-dc4bj" Jan 29 07:57:11 crc kubenswrapper[5017]: I0129 07:57:11.269158 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e5b65cd-8fb5-4691-8c29-ff3f7a80e55f-catalog-content\") pod \"redhat-marketplace-dc4bj\" (UID: \"5e5b65cd-8fb5-4691-8c29-ff3f7a80e55f\") " pod="openshift-marketplace/redhat-marketplace-dc4bj" Jan 29 07:57:11 crc kubenswrapper[5017]: I0129 07:57:11.269208 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e5b65cd-8fb5-4691-8c29-ff3f7a80e55f-utilities\") pod \"redhat-marketplace-dc4bj\" (UID: \"5e5b65cd-8fb5-4691-8c29-ff3f7a80e55f\") " pod="openshift-marketplace/redhat-marketplace-dc4bj" Jan 29 07:57:11 crc kubenswrapper[5017]: I0129 07:57:11.299658 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9pr9g\" (UniqueName: \"kubernetes.io/projected/5e5b65cd-8fb5-4691-8c29-ff3f7a80e55f-kube-api-access-9pr9g\") pod \"redhat-marketplace-dc4bj\" (UID: \"5e5b65cd-8fb5-4691-8c29-ff3f7a80e55f\") " pod="openshift-marketplace/redhat-marketplace-dc4bj" Jan 29 07:57:11 crc kubenswrapper[5017]: I0129 07:57:11.356196 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dc4bj" Jan 29 07:57:11 crc kubenswrapper[5017]: I0129 07:57:11.658826 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dc4bj"] Jan 29 07:57:11 crc kubenswrapper[5017]: W0129 07:57:11.671195 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5e5b65cd_8fb5_4691_8c29_ff3f7a80e55f.slice/crio-3f95a6045ff32f7c5003f6e98ae81dc8c75d9803a206cf5e81cd6665bf083312 WatchSource:0}: Error finding container 3f95a6045ff32f7c5003f6e98ae81dc8c75d9803a206cf5e81cd6665bf083312: Status 404 returned error can't find the container with id 3f95a6045ff32f7c5003f6e98ae81dc8c75d9803a206cf5e81cd6665bf083312 Jan 29 07:57:11 crc kubenswrapper[5017]: I0129 07:57:11.756562 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dc4bj" event={"ID":"5e5b65cd-8fb5-4691-8c29-ff3f7a80e55f","Type":"ContainerStarted","Data":"3f95a6045ff32f7c5003f6e98ae81dc8c75d9803a206cf5e81cd6665bf083312"} Jan 29 07:57:11 crc kubenswrapper[5017]: I0129 07:57:11.760902 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bntz2" event={"ID":"14fa37e1-9c41-43fe-a1e2-d5b7d4268f0c","Type":"ContainerStarted","Data":"e46d348f22fc80e3e3abad23218e1d7bdedc6956fa3c3a7b5234dd9707b47b8f"} Jan 29 07:57:11 crc kubenswrapper[5017]: I0129 07:57:11.785733 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bntz2" podStartSLOduration=3.343157431 podStartE2EDuration="5.785679955s" podCreationTimestamp="2026-01-29 07:57:06 +0000 UTC" firstStartedPulling="2026-01-29 07:57:08.713343081 +0000 UTC m=+4915.087790691" lastFinishedPulling="2026-01-29 07:57:11.155865605 +0000 UTC m=+4917.530313215" observedRunningTime="2026-01-29 07:57:11.78504356 +0000 UTC m=+4918.159491190" watchObservedRunningTime="2026-01-29 07:57:11.785679955 +0000 UTC m=+4918.160127575" Jan 29 07:57:12 crc kubenswrapper[5017]: I0129 07:57:12.767936 5017 generic.go:334] "Generic (PLEG): container finished" podID="5e5b65cd-8fb5-4691-8c29-ff3f7a80e55f" containerID="b47888c662260a53c354593639c54ef0cdc7306c7f21ea66c6e4f9c840c6b11b" exitCode=0 Jan 29 07:57:12 crc kubenswrapper[5017]: I0129 07:57:12.769267 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dc4bj" event={"ID":"5e5b65cd-8fb5-4691-8c29-ff3f7a80e55f","Type":"ContainerDied","Data":"b47888c662260a53c354593639c54ef0cdc7306c7f21ea66c6e4f9c840c6b11b"} Jan 29 07:57:13 crc kubenswrapper[5017]: I0129 07:57:13.779253 5017 generic.go:334] "Generic (PLEG): container finished" podID="5e5b65cd-8fb5-4691-8c29-ff3f7a80e55f" containerID="de1081d8f4c5c685692779e9a8243e28e317300248ff7131359fdcb44fc46e71" exitCode=0 Jan 29 07:57:13 crc kubenswrapper[5017]: I0129 07:57:13.779355 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dc4bj" event={"ID":"5e5b65cd-8fb5-4691-8c29-ff3f7a80e55f","Type":"ContainerDied","Data":"de1081d8f4c5c685692779e9a8243e28e317300248ff7131359fdcb44fc46e71"} Jan 29 07:57:14 crc kubenswrapper[5017]: I0129 07:57:14.792380 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dc4bj" event={"ID":"5e5b65cd-8fb5-4691-8c29-ff3f7a80e55f","Type":"ContainerStarted","Data":"45657e38ed26abc8e6afa17d48eaf6636b490238a7323370d1d907c60027f1ce"} Jan 29 
07:57:14 crc kubenswrapper[5017]: I0129 07:57:14.819367 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-dc4bj" podStartSLOduration=2.392006174 podStartE2EDuration="3.819345859s" podCreationTimestamp="2026-01-29 07:57:11 +0000 UTC" firstStartedPulling="2026-01-29 07:57:12.77059153 +0000 UTC m=+4919.145039140" lastFinishedPulling="2026-01-29 07:57:14.197931215 +0000 UTC m=+4920.572378825" observedRunningTime="2026-01-29 07:57:14.812299456 +0000 UTC m=+4921.186747066" watchObservedRunningTime="2026-01-29 07:57:14.819345859 +0000 UTC m=+4921.193793469" Jan 29 07:57:16 crc kubenswrapper[5017]: I0129 07:57:16.777818 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4t5qj" Jan 29 07:57:16 crc kubenswrapper[5017]: I0129 07:57:16.777902 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4t5qj" Jan 29 07:57:16 crc kubenswrapper[5017]: I0129 07:57:16.856161 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4t5qj" Jan 29 07:57:16 crc kubenswrapper[5017]: I0129 07:57:16.911850 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4t5qj" Jan 29 07:57:16 crc kubenswrapper[5017]: I0129 07:57:16.972987 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bntz2" Jan 29 07:57:16 crc kubenswrapper[5017]: I0129 07:57:16.973057 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-bntz2" Jan 29 07:57:17 crc kubenswrapper[5017]: I0129 07:57:17.016167 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bntz2" Jan 29 07:57:18 crc kubenswrapper[5017]: I0129 07:57:18.123897 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bntz2" Jan 29 07:57:21 crc kubenswrapper[5017]: I0129 07:57:21.357067 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-dc4bj" Jan 29 07:57:21 crc kubenswrapper[5017]: I0129 07:57:21.357611 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-dc4bj" Jan 29 07:57:21 crc kubenswrapper[5017]: I0129 07:57:21.401341 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-dc4bj" Jan 29 07:57:21 crc kubenswrapper[5017]: I0129 07:57:21.427456 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4t5qj"] Jan 29 07:57:21 crc kubenswrapper[5017]: I0129 07:57:21.427805 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4t5qj" podUID="1f1d207d-015b-4483-a744-325171ed4e47" containerName="registry-server" containerID="cri-o://b172814af8f868b4857b81ce89fd5963da1b50352fdd17eaae4b6defb5d21822" gracePeriod=2 Jan 29 07:57:21 crc kubenswrapper[5017]: I0129 07:57:21.835506 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4t5qj" Jan 29 07:57:21 crc kubenswrapper[5017]: I0129 07:57:21.864264 5017 generic.go:334] "Generic (PLEG): container finished" podID="1f1d207d-015b-4483-a744-325171ed4e47" containerID="b172814af8f868b4857b81ce89fd5963da1b50352fdd17eaae4b6defb5d21822" exitCode=0 Jan 29 07:57:21 crc kubenswrapper[5017]: I0129 07:57:21.864362 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4t5qj" Jan 29 07:57:21 crc kubenswrapper[5017]: I0129 07:57:21.864364 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4t5qj" event={"ID":"1f1d207d-015b-4483-a744-325171ed4e47","Type":"ContainerDied","Data":"b172814af8f868b4857b81ce89fd5963da1b50352fdd17eaae4b6defb5d21822"} Jan 29 07:57:21 crc kubenswrapper[5017]: I0129 07:57:21.864438 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4t5qj" event={"ID":"1f1d207d-015b-4483-a744-325171ed4e47","Type":"ContainerDied","Data":"2dce1f7226e4bb4292a6e3242c1cda59b5c0cb97e15eb97663310783e1c34041"} Jan 29 07:57:21 crc kubenswrapper[5017]: I0129 07:57:21.864465 5017 scope.go:117] "RemoveContainer" containerID="b172814af8f868b4857b81ce89fd5963da1b50352fdd17eaae4b6defb5d21822" Jan 29 07:57:21 crc kubenswrapper[5017]: I0129 07:57:21.891825 5017 scope.go:117] "RemoveContainer" containerID="849351c60ad11d7feefd93d7fe3438975deef87cdaeae92ff261df4e13c2fbfc" Jan 29 07:57:21 crc kubenswrapper[5017]: I0129 07:57:21.913558 5017 scope.go:117] "RemoveContainer" containerID="5ae952de06828fc32686dabf4c82d8c780bc91414e1f6f2bbce57ab820b5582e" Jan 29 07:57:21 crc kubenswrapper[5017]: I0129 07:57:21.926228 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-dc4bj" Jan 29 07:57:21 crc kubenswrapper[5017]: I0129 07:57:21.968628 5017 scope.go:117] "RemoveContainer" containerID="b172814af8f868b4857b81ce89fd5963da1b50352fdd17eaae4b6defb5d21822" Jan 29 07:57:21 crc kubenswrapper[5017]: I0129 07:57:21.969752 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f1d207d-015b-4483-a744-325171ed4e47-utilities\") pod \"1f1d207d-015b-4483-a744-325171ed4e47\" (UID: \"1f1d207d-015b-4483-a744-325171ed4e47\") " Jan 29 07:57:21 crc kubenswrapper[5017]: I0129 07:57:21.970032 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vwnn9\" (UniqueName: \"kubernetes.io/projected/1f1d207d-015b-4483-a744-325171ed4e47-kube-api-access-vwnn9\") pod \"1f1d207d-015b-4483-a744-325171ed4e47\" (UID: \"1f1d207d-015b-4483-a744-325171ed4e47\") " Jan 29 07:57:21 crc kubenswrapper[5017]: I0129 07:57:21.970112 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f1d207d-015b-4483-a744-325171ed4e47-catalog-content\") pod \"1f1d207d-015b-4483-a744-325171ed4e47\" (UID: \"1f1d207d-015b-4483-a744-325171ed4e47\") " Jan 29 07:57:21 crc kubenswrapper[5017]: I0129 07:57:21.971139 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f1d207d-015b-4483-a744-325171ed4e47-utilities" (OuterVolumeSpecName: "utilities") pod "1f1d207d-015b-4483-a744-325171ed4e47" (UID: "1f1d207d-015b-4483-a744-325171ed4e47"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:57:21 crc kubenswrapper[5017]: I0129 07:57:21.979344 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f1d207d-015b-4483-a744-325171ed4e47-kube-api-access-vwnn9" (OuterVolumeSpecName: "kube-api-access-vwnn9") pod "1f1d207d-015b-4483-a744-325171ed4e47" (UID: "1f1d207d-015b-4483-a744-325171ed4e47"). InnerVolumeSpecName "kube-api-access-vwnn9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:57:21 crc kubenswrapper[5017]: E0129 07:57:21.979483 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b172814af8f868b4857b81ce89fd5963da1b50352fdd17eaae4b6defb5d21822\": container with ID starting with b172814af8f868b4857b81ce89fd5963da1b50352fdd17eaae4b6defb5d21822 not found: ID does not exist" containerID="b172814af8f868b4857b81ce89fd5963da1b50352fdd17eaae4b6defb5d21822" Jan 29 07:57:21 crc kubenswrapper[5017]: I0129 07:57:21.979535 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b172814af8f868b4857b81ce89fd5963da1b50352fdd17eaae4b6defb5d21822"} err="failed to get container status \"b172814af8f868b4857b81ce89fd5963da1b50352fdd17eaae4b6defb5d21822\": rpc error: code = NotFound desc = could not find container \"b172814af8f868b4857b81ce89fd5963da1b50352fdd17eaae4b6defb5d21822\": container with ID starting with b172814af8f868b4857b81ce89fd5963da1b50352fdd17eaae4b6defb5d21822 not found: ID does not exist" Jan 29 07:57:21 crc kubenswrapper[5017]: I0129 07:57:21.979565 5017 scope.go:117] "RemoveContainer" containerID="849351c60ad11d7feefd93d7fe3438975deef87cdaeae92ff261df4e13c2fbfc" Jan 29 07:57:21 crc kubenswrapper[5017]: E0129 07:57:21.980354 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"849351c60ad11d7feefd93d7fe3438975deef87cdaeae92ff261df4e13c2fbfc\": container with ID starting with 849351c60ad11d7feefd93d7fe3438975deef87cdaeae92ff261df4e13c2fbfc not found: ID does not exist" containerID="849351c60ad11d7feefd93d7fe3438975deef87cdaeae92ff261df4e13c2fbfc" Jan 29 07:57:21 crc kubenswrapper[5017]: I0129 07:57:21.980428 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"849351c60ad11d7feefd93d7fe3438975deef87cdaeae92ff261df4e13c2fbfc"} err="failed to get container status \"849351c60ad11d7feefd93d7fe3438975deef87cdaeae92ff261df4e13c2fbfc\": rpc error: code = NotFound desc = could not find container \"849351c60ad11d7feefd93d7fe3438975deef87cdaeae92ff261df4e13c2fbfc\": container with ID starting with 849351c60ad11d7feefd93d7fe3438975deef87cdaeae92ff261df4e13c2fbfc not found: ID does not exist" Jan 29 07:57:21 crc kubenswrapper[5017]: I0129 07:57:21.980480 5017 scope.go:117] "RemoveContainer" containerID="5ae952de06828fc32686dabf4c82d8c780bc91414e1f6f2bbce57ab820b5582e" Jan 29 07:57:21 crc kubenswrapper[5017]: E0129 07:57:21.980981 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ae952de06828fc32686dabf4c82d8c780bc91414e1f6f2bbce57ab820b5582e\": container with ID starting with 5ae952de06828fc32686dabf4c82d8c780bc91414e1f6f2bbce57ab820b5582e not found: ID does not exist" containerID="5ae952de06828fc32686dabf4c82d8c780bc91414e1f6f2bbce57ab820b5582e" Jan 29 07:57:21 crc kubenswrapper[5017]: I0129 07:57:21.981023 5017 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ae952de06828fc32686dabf4c82d8c780bc91414e1f6f2bbce57ab820b5582e"} err="failed to get container status \"5ae952de06828fc32686dabf4c82d8c780bc91414e1f6f2bbce57ab820b5582e\": rpc error: code = NotFound desc = could not find container \"5ae952de06828fc32686dabf4c82d8c780bc91414e1f6f2bbce57ab820b5582e\": container with ID starting with 5ae952de06828fc32686dabf4c82d8c780bc91414e1f6f2bbce57ab820b5582e not found: ID does not exist" Jan 29 07:57:22 crc kubenswrapper[5017]: I0129 07:57:22.033386 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f1d207d-015b-4483-a744-325171ed4e47-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1f1d207d-015b-4483-a744-325171ed4e47" (UID: "1f1d207d-015b-4483-a744-325171ed4e47"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:57:22 crc kubenswrapper[5017]: I0129 07:57:22.072428 5017 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f1d207d-015b-4483-a744-325171ed4e47-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 07:57:22 crc kubenswrapper[5017]: I0129 07:57:22.072480 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vwnn9\" (UniqueName: \"kubernetes.io/projected/1f1d207d-015b-4483-a744-325171ed4e47-kube-api-access-vwnn9\") on node \"crc\" DevicePath \"\"" Jan 29 07:57:22 crc kubenswrapper[5017]: I0129 07:57:22.072492 5017 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f1d207d-015b-4483-a744-325171ed4e47-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 07:57:22 crc kubenswrapper[5017]: I0129 07:57:22.213148 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4t5qj"] Jan 29 07:57:22 crc kubenswrapper[5017]: I0129 07:57:22.220367 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4t5qj"] Jan 29 07:57:22 crc kubenswrapper[5017]: I0129 07:57:22.325318 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f1d207d-015b-4483-a744-325171ed4e47" path="/var/lib/kubelet/pods/1f1d207d-015b-4483-a744-325171ed4e47/volumes" Jan 29 07:57:23 crc kubenswrapper[5017]: I0129 07:57:23.429492 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bntz2"] Jan 29 07:57:23 crc kubenswrapper[5017]: I0129 07:57:23.431147 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-bntz2" podUID="14fa37e1-9c41-43fe-a1e2-d5b7d4268f0c" containerName="registry-server" containerID="cri-o://e46d348f22fc80e3e3abad23218e1d7bdedc6956fa3c3a7b5234dd9707b47b8f" gracePeriod=2 Jan 29 07:57:23 crc kubenswrapper[5017]: I0129 07:57:23.834905 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bntz2" Jan 29 07:57:23 crc kubenswrapper[5017]: I0129 07:57:23.895513 5017 generic.go:334] "Generic (PLEG): container finished" podID="14fa37e1-9c41-43fe-a1e2-d5b7d4268f0c" containerID="e46d348f22fc80e3e3abad23218e1d7bdedc6956fa3c3a7b5234dd9707b47b8f" exitCode=0 Jan 29 07:57:23 crc kubenswrapper[5017]: I0129 07:57:23.895589 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bntz2" event={"ID":"14fa37e1-9c41-43fe-a1e2-d5b7d4268f0c","Type":"ContainerDied","Data":"e46d348f22fc80e3e3abad23218e1d7bdedc6956fa3c3a7b5234dd9707b47b8f"} Jan 29 07:57:23 crc kubenswrapper[5017]: I0129 07:57:23.895639 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bntz2" event={"ID":"14fa37e1-9c41-43fe-a1e2-d5b7d4268f0c","Type":"ContainerDied","Data":"e35006d7d132316bdee57dad6ea92adb0800bfa3fe2c03e833a2092b9366bf19"} Jan 29 07:57:23 crc kubenswrapper[5017]: I0129 07:57:23.895679 5017 scope.go:117] "RemoveContainer" containerID="e46d348f22fc80e3e3abad23218e1d7bdedc6956fa3c3a7b5234dd9707b47b8f" Jan 29 07:57:23 crc kubenswrapper[5017]: I0129 07:57:23.895698 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bntz2" Jan 29 07:57:23 crc kubenswrapper[5017]: I0129 07:57:23.905258 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14fa37e1-9c41-43fe-a1e2-d5b7d4268f0c-catalog-content\") pod \"14fa37e1-9c41-43fe-a1e2-d5b7d4268f0c\" (UID: \"14fa37e1-9c41-43fe-a1e2-d5b7d4268f0c\") " Jan 29 07:57:23 crc kubenswrapper[5017]: I0129 07:57:23.905326 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hzdx5\" (UniqueName: \"kubernetes.io/projected/14fa37e1-9c41-43fe-a1e2-d5b7d4268f0c-kube-api-access-hzdx5\") pod \"14fa37e1-9c41-43fe-a1e2-d5b7d4268f0c\" (UID: \"14fa37e1-9c41-43fe-a1e2-d5b7d4268f0c\") " Jan 29 07:57:23 crc kubenswrapper[5017]: I0129 07:57:23.905499 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14fa37e1-9c41-43fe-a1e2-d5b7d4268f0c-utilities\") pod \"14fa37e1-9c41-43fe-a1e2-d5b7d4268f0c\" (UID: \"14fa37e1-9c41-43fe-a1e2-d5b7d4268f0c\") " Jan 29 07:57:23 crc kubenswrapper[5017]: I0129 07:57:23.906779 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14fa37e1-9c41-43fe-a1e2-d5b7d4268f0c-utilities" (OuterVolumeSpecName: "utilities") pod "14fa37e1-9c41-43fe-a1e2-d5b7d4268f0c" (UID: "14fa37e1-9c41-43fe-a1e2-d5b7d4268f0c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:57:23 crc kubenswrapper[5017]: I0129 07:57:23.917362 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14fa37e1-9c41-43fe-a1e2-d5b7d4268f0c-kube-api-access-hzdx5" (OuterVolumeSpecName: "kube-api-access-hzdx5") pod "14fa37e1-9c41-43fe-a1e2-d5b7d4268f0c" (UID: "14fa37e1-9c41-43fe-a1e2-d5b7d4268f0c"). InnerVolumeSpecName "kube-api-access-hzdx5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:57:23 crc kubenswrapper[5017]: I0129 07:57:23.919945 5017 scope.go:117] "RemoveContainer" containerID="d5161a671bd6ede82479168c36b640522ccf176678536000cea44a45d117c1f3" Jan 29 07:57:23 crc kubenswrapper[5017]: I0129 07:57:23.956034 5017 scope.go:117] "RemoveContainer" containerID="98b8e318de3d709f72161d90621f62a021b11bbaa68f5461f9323eecca96cc6b" Jan 29 07:57:23 crc kubenswrapper[5017]: I0129 07:57:23.957482 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14fa37e1-9c41-43fe-a1e2-d5b7d4268f0c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "14fa37e1-9c41-43fe-a1e2-d5b7d4268f0c" (UID: "14fa37e1-9c41-43fe-a1e2-d5b7d4268f0c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:57:23 crc kubenswrapper[5017]: I0129 07:57:23.984713 5017 scope.go:117] "RemoveContainer" containerID="e46d348f22fc80e3e3abad23218e1d7bdedc6956fa3c3a7b5234dd9707b47b8f" Jan 29 07:57:23 crc kubenswrapper[5017]: E0129 07:57:23.985293 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e46d348f22fc80e3e3abad23218e1d7bdedc6956fa3c3a7b5234dd9707b47b8f\": container with ID starting with e46d348f22fc80e3e3abad23218e1d7bdedc6956fa3c3a7b5234dd9707b47b8f not found: ID does not exist" containerID="e46d348f22fc80e3e3abad23218e1d7bdedc6956fa3c3a7b5234dd9707b47b8f" Jan 29 07:57:23 crc kubenswrapper[5017]: I0129 07:57:23.985336 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e46d348f22fc80e3e3abad23218e1d7bdedc6956fa3c3a7b5234dd9707b47b8f"} err="failed to get container status \"e46d348f22fc80e3e3abad23218e1d7bdedc6956fa3c3a7b5234dd9707b47b8f\": rpc error: code = NotFound desc = could not find container \"e46d348f22fc80e3e3abad23218e1d7bdedc6956fa3c3a7b5234dd9707b47b8f\": container with ID starting with e46d348f22fc80e3e3abad23218e1d7bdedc6956fa3c3a7b5234dd9707b47b8f not found: ID does not exist" Jan 29 07:57:23 crc kubenswrapper[5017]: I0129 07:57:23.985368 5017 scope.go:117] "RemoveContainer" containerID="d5161a671bd6ede82479168c36b640522ccf176678536000cea44a45d117c1f3" Jan 29 07:57:23 crc kubenswrapper[5017]: E0129 07:57:23.985751 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d5161a671bd6ede82479168c36b640522ccf176678536000cea44a45d117c1f3\": container with ID starting with d5161a671bd6ede82479168c36b640522ccf176678536000cea44a45d117c1f3 not found: ID does not exist" containerID="d5161a671bd6ede82479168c36b640522ccf176678536000cea44a45d117c1f3" Jan 29 07:57:23 crc kubenswrapper[5017]: I0129 07:57:23.985827 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5161a671bd6ede82479168c36b640522ccf176678536000cea44a45d117c1f3"} err="failed to get container status \"d5161a671bd6ede82479168c36b640522ccf176678536000cea44a45d117c1f3\": rpc error: code = NotFound desc = could not find container \"d5161a671bd6ede82479168c36b640522ccf176678536000cea44a45d117c1f3\": container with ID starting with d5161a671bd6ede82479168c36b640522ccf176678536000cea44a45d117c1f3 not found: ID does not exist" Jan 29 07:57:23 crc kubenswrapper[5017]: I0129 07:57:23.985871 5017 scope.go:117] "RemoveContainer" containerID="98b8e318de3d709f72161d90621f62a021b11bbaa68f5461f9323eecca96cc6b" Jan 29 07:57:23 crc kubenswrapper[5017]: 
E0129 07:57:23.987192 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98b8e318de3d709f72161d90621f62a021b11bbaa68f5461f9323eecca96cc6b\": container with ID starting with 98b8e318de3d709f72161d90621f62a021b11bbaa68f5461f9323eecca96cc6b not found: ID does not exist" containerID="98b8e318de3d709f72161d90621f62a021b11bbaa68f5461f9323eecca96cc6b" Jan 29 07:57:23 crc kubenswrapper[5017]: I0129 07:57:23.987291 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98b8e318de3d709f72161d90621f62a021b11bbaa68f5461f9323eecca96cc6b"} err="failed to get container status \"98b8e318de3d709f72161d90621f62a021b11bbaa68f5461f9323eecca96cc6b\": rpc error: code = NotFound desc = could not find container \"98b8e318de3d709f72161d90621f62a021b11bbaa68f5461f9323eecca96cc6b\": container with ID starting with 98b8e318de3d709f72161d90621f62a021b11bbaa68f5461f9323eecca96cc6b not found: ID does not exist" Jan 29 07:57:24 crc kubenswrapper[5017]: I0129 07:57:24.007987 5017 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14fa37e1-9c41-43fe-a1e2-d5b7d4268f0c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 07:57:24 crc kubenswrapper[5017]: I0129 07:57:24.008044 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hzdx5\" (UniqueName: \"kubernetes.io/projected/14fa37e1-9c41-43fe-a1e2-d5b7d4268f0c-kube-api-access-hzdx5\") on node \"crc\" DevicePath \"\"" Jan 29 07:57:24 crc kubenswrapper[5017]: I0129 07:57:24.008062 5017 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14fa37e1-9c41-43fe-a1e2-d5b7d4268f0c-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 07:57:24 crc kubenswrapper[5017]: I0129 07:57:24.243787 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bntz2"] Jan 29 07:57:24 crc kubenswrapper[5017]: I0129 07:57:24.254153 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-bntz2"] Jan 29 07:57:24 crc kubenswrapper[5017]: I0129 07:57:24.330460 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14fa37e1-9c41-43fe-a1e2-d5b7d4268f0c" path="/var/lib/kubelet/pods/14fa37e1-9c41-43fe-a1e2-d5b7d4268f0c/volumes" Jan 29 07:57:26 crc kubenswrapper[5017]: I0129 07:57:26.539393 5017 patch_prober.go:28] interesting pod/machine-config-daemon-895pl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 07:57:26 crc kubenswrapper[5017]: I0129 07:57:26.539979 5017 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 07:57:28 crc kubenswrapper[5017]: I0129 07:57:28.227931 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dc4bj"] Jan 29 07:57:28 crc kubenswrapper[5017]: I0129 07:57:28.230068 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-dc4bj" 
podUID="5e5b65cd-8fb5-4691-8c29-ff3f7a80e55f" containerName="registry-server" containerID="cri-o://45657e38ed26abc8e6afa17d48eaf6636b490238a7323370d1d907c60027f1ce" gracePeriod=2 Jan 29 07:57:28 crc kubenswrapper[5017]: I0129 07:57:28.646097 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dc4bj" Jan 29 07:57:28 crc kubenswrapper[5017]: I0129 07:57:28.797394 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e5b65cd-8fb5-4691-8c29-ff3f7a80e55f-utilities\") pod \"5e5b65cd-8fb5-4691-8c29-ff3f7a80e55f\" (UID: \"5e5b65cd-8fb5-4691-8c29-ff3f7a80e55f\") " Jan 29 07:57:28 crc kubenswrapper[5017]: I0129 07:57:28.797491 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9pr9g\" (UniqueName: \"kubernetes.io/projected/5e5b65cd-8fb5-4691-8c29-ff3f7a80e55f-kube-api-access-9pr9g\") pod \"5e5b65cd-8fb5-4691-8c29-ff3f7a80e55f\" (UID: \"5e5b65cd-8fb5-4691-8c29-ff3f7a80e55f\") " Jan 29 07:57:28 crc kubenswrapper[5017]: I0129 07:57:28.797648 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e5b65cd-8fb5-4691-8c29-ff3f7a80e55f-catalog-content\") pod \"5e5b65cd-8fb5-4691-8c29-ff3f7a80e55f\" (UID: \"5e5b65cd-8fb5-4691-8c29-ff3f7a80e55f\") " Jan 29 07:57:28 crc kubenswrapper[5017]: I0129 07:57:28.800009 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e5b65cd-8fb5-4691-8c29-ff3f7a80e55f-utilities" (OuterVolumeSpecName: "utilities") pod "5e5b65cd-8fb5-4691-8c29-ff3f7a80e55f" (UID: "5e5b65cd-8fb5-4691-8c29-ff3f7a80e55f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:57:28 crc kubenswrapper[5017]: I0129 07:57:28.808709 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e5b65cd-8fb5-4691-8c29-ff3f7a80e55f-kube-api-access-9pr9g" (OuterVolumeSpecName: "kube-api-access-9pr9g") pod "5e5b65cd-8fb5-4691-8c29-ff3f7a80e55f" (UID: "5e5b65cd-8fb5-4691-8c29-ff3f7a80e55f"). InnerVolumeSpecName "kube-api-access-9pr9g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:57:28 crc kubenswrapper[5017]: I0129 07:57:28.827635 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e5b65cd-8fb5-4691-8c29-ff3f7a80e55f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5e5b65cd-8fb5-4691-8c29-ff3f7a80e55f" (UID: "5e5b65cd-8fb5-4691-8c29-ff3f7a80e55f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:57:28 crc kubenswrapper[5017]: I0129 07:57:28.899684 5017 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e5b65cd-8fb5-4691-8c29-ff3f7a80e55f-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 07:57:28 crc kubenswrapper[5017]: I0129 07:57:28.899730 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9pr9g\" (UniqueName: \"kubernetes.io/projected/5e5b65cd-8fb5-4691-8c29-ff3f7a80e55f-kube-api-access-9pr9g\") on node \"crc\" DevicePath \"\"" Jan 29 07:57:28 crc kubenswrapper[5017]: I0129 07:57:28.899740 5017 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e5b65cd-8fb5-4691-8c29-ff3f7a80e55f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 07:57:28 crc kubenswrapper[5017]: I0129 07:57:28.966472 5017 generic.go:334] "Generic (PLEG): container finished" podID="5e5b65cd-8fb5-4691-8c29-ff3f7a80e55f" containerID="45657e38ed26abc8e6afa17d48eaf6636b490238a7323370d1d907c60027f1ce" exitCode=0 Jan 29 07:57:28 crc kubenswrapper[5017]: I0129 07:57:28.966563 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dc4bj" event={"ID":"5e5b65cd-8fb5-4691-8c29-ff3f7a80e55f","Type":"ContainerDied","Data":"45657e38ed26abc8e6afa17d48eaf6636b490238a7323370d1d907c60027f1ce"} Jan 29 07:57:28 crc kubenswrapper[5017]: I0129 07:57:28.966642 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dc4bj" event={"ID":"5e5b65cd-8fb5-4691-8c29-ff3f7a80e55f","Type":"ContainerDied","Data":"3f95a6045ff32f7c5003f6e98ae81dc8c75d9803a206cf5e81cd6665bf083312"} Jan 29 07:57:28 crc kubenswrapper[5017]: I0129 07:57:28.966633 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dc4bj" Jan 29 07:57:28 crc kubenswrapper[5017]: I0129 07:57:28.966667 5017 scope.go:117] "RemoveContainer" containerID="45657e38ed26abc8e6afa17d48eaf6636b490238a7323370d1d907c60027f1ce" Jan 29 07:57:29 crc kubenswrapper[5017]: I0129 07:57:29.004878 5017 scope.go:117] "RemoveContainer" containerID="de1081d8f4c5c685692779e9a8243e28e317300248ff7131359fdcb44fc46e71" Jan 29 07:57:29 crc kubenswrapper[5017]: I0129 07:57:29.014069 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dc4bj"] Jan 29 07:57:29 crc kubenswrapper[5017]: I0129 07:57:29.020322 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-dc4bj"] Jan 29 07:57:29 crc kubenswrapper[5017]: I0129 07:57:29.036951 5017 scope.go:117] "RemoveContainer" containerID="b47888c662260a53c354593639c54ef0cdc7306c7f21ea66c6e4f9c840c6b11b" Jan 29 07:57:29 crc kubenswrapper[5017]: I0129 07:57:29.055587 5017 scope.go:117] "RemoveContainer" containerID="45657e38ed26abc8e6afa17d48eaf6636b490238a7323370d1d907c60027f1ce" Jan 29 07:57:29 crc kubenswrapper[5017]: E0129 07:57:29.056364 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45657e38ed26abc8e6afa17d48eaf6636b490238a7323370d1d907c60027f1ce\": container with ID starting with 45657e38ed26abc8e6afa17d48eaf6636b490238a7323370d1d907c60027f1ce not found: ID does not exist" containerID="45657e38ed26abc8e6afa17d48eaf6636b490238a7323370d1d907c60027f1ce" Jan 29 07:57:29 crc kubenswrapper[5017]: I0129 07:57:29.056426 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45657e38ed26abc8e6afa17d48eaf6636b490238a7323370d1d907c60027f1ce"} err="failed to get container status \"45657e38ed26abc8e6afa17d48eaf6636b490238a7323370d1d907c60027f1ce\": rpc error: code = NotFound desc = could not find container \"45657e38ed26abc8e6afa17d48eaf6636b490238a7323370d1d907c60027f1ce\": container with ID starting with 45657e38ed26abc8e6afa17d48eaf6636b490238a7323370d1d907c60027f1ce not found: ID does not exist" Jan 29 07:57:29 crc kubenswrapper[5017]: I0129 07:57:29.056472 5017 scope.go:117] "RemoveContainer" containerID="de1081d8f4c5c685692779e9a8243e28e317300248ff7131359fdcb44fc46e71" Jan 29 07:57:29 crc kubenswrapper[5017]: E0129 07:57:29.056908 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de1081d8f4c5c685692779e9a8243e28e317300248ff7131359fdcb44fc46e71\": container with ID starting with de1081d8f4c5c685692779e9a8243e28e317300248ff7131359fdcb44fc46e71 not found: ID does not exist" containerID="de1081d8f4c5c685692779e9a8243e28e317300248ff7131359fdcb44fc46e71" Jan 29 07:57:29 crc kubenswrapper[5017]: I0129 07:57:29.056984 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de1081d8f4c5c685692779e9a8243e28e317300248ff7131359fdcb44fc46e71"} err="failed to get container status \"de1081d8f4c5c685692779e9a8243e28e317300248ff7131359fdcb44fc46e71\": rpc error: code = NotFound desc = could not find container \"de1081d8f4c5c685692779e9a8243e28e317300248ff7131359fdcb44fc46e71\": container with ID starting with de1081d8f4c5c685692779e9a8243e28e317300248ff7131359fdcb44fc46e71 not found: ID does not exist" Jan 29 07:57:29 crc kubenswrapper[5017]: I0129 07:57:29.057026 5017 scope.go:117] "RemoveContainer" 
containerID="b47888c662260a53c354593639c54ef0cdc7306c7f21ea66c6e4f9c840c6b11b" Jan 29 07:57:29 crc kubenswrapper[5017]: E0129 07:57:29.057455 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b47888c662260a53c354593639c54ef0cdc7306c7f21ea66c6e4f9c840c6b11b\": container with ID starting with b47888c662260a53c354593639c54ef0cdc7306c7f21ea66c6e4f9c840c6b11b not found: ID does not exist" containerID="b47888c662260a53c354593639c54ef0cdc7306c7f21ea66c6e4f9c840c6b11b" Jan 29 07:57:29 crc kubenswrapper[5017]: I0129 07:57:29.057516 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b47888c662260a53c354593639c54ef0cdc7306c7f21ea66c6e4f9c840c6b11b"} err="failed to get container status \"b47888c662260a53c354593639c54ef0cdc7306c7f21ea66c6e4f9c840c6b11b\": rpc error: code = NotFound desc = could not find container \"b47888c662260a53c354593639c54ef0cdc7306c7f21ea66c6e4f9c840c6b11b\": container with ID starting with b47888c662260a53c354593639c54ef0cdc7306c7f21ea66c6e4f9c840c6b11b not found: ID does not exist" Jan 29 07:57:30 crc kubenswrapper[5017]: I0129 07:57:30.327943 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e5b65cd-8fb5-4691-8c29-ff3f7a80e55f" path="/var/lib/kubelet/pods/5e5b65cd-8fb5-4691-8c29-ff3f7a80e55f/volumes" Jan 29 07:57:56 crc kubenswrapper[5017]: I0129 07:57:56.539797 5017 patch_prober.go:28] interesting pod/machine-config-daemon-895pl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 07:57:56 crc kubenswrapper[5017]: I0129 07:57:56.540626 5017 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 07:58:26 crc kubenswrapper[5017]: I0129 07:58:26.539234 5017 patch_prober.go:28] interesting pod/machine-config-daemon-895pl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 07:58:26 crc kubenswrapper[5017]: I0129 07:58:26.540127 5017 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 07:58:26 crc kubenswrapper[5017]: I0129 07:58:26.540185 5017 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-895pl" Jan 29 07:58:26 crc kubenswrapper[5017]: I0129 07:58:26.541165 5017 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c58783d1bd795ffac490b01aaccb060f3071e08daa88d33a09f7ce8715d64300"} pod="openshift-machine-config-operator/machine-config-daemon-895pl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 07:58:26 crc 
kubenswrapper[5017]: I0129 07:58:26.541230 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" containerID="cri-o://c58783d1bd795ffac490b01aaccb060f3071e08daa88d33a09f7ce8715d64300" gracePeriod=600 Jan 29 07:58:26 crc kubenswrapper[5017]: E0129 07:58:26.668184 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 07:58:27 crc kubenswrapper[5017]: I0129 07:58:27.499586 5017 generic.go:334] "Generic (PLEG): container finished" podID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerID="c58783d1bd795ffac490b01aaccb060f3071e08daa88d33a09f7ce8715d64300" exitCode=0 Jan 29 07:58:27 crc kubenswrapper[5017]: I0129 07:58:27.499663 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-895pl" event={"ID":"2672ef63-7861-4c3d-a1b4-03cc9d18f8e2","Type":"ContainerDied","Data":"c58783d1bd795ffac490b01aaccb060f3071e08daa88d33a09f7ce8715d64300"} Jan 29 07:58:27 crc kubenswrapper[5017]: I0129 07:58:27.499721 5017 scope.go:117] "RemoveContainer" containerID="566051f95af55c3167bfb24eecc73af401c9e2cd360bef8281baf73b9b65699e" Jan 29 07:58:27 crc kubenswrapper[5017]: I0129 07:58:27.500772 5017 scope.go:117] "RemoveContainer" containerID="c58783d1bd795ffac490b01aaccb060f3071e08daa88d33a09f7ce8715d64300" Jan 29 07:58:27 crc kubenswrapper[5017]: E0129 07:58:27.501408 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 07:58:41 crc kubenswrapper[5017]: I0129 07:58:41.316463 5017 scope.go:117] "RemoveContainer" containerID="c58783d1bd795ffac490b01aaccb060f3071e08daa88d33a09f7ce8715d64300" Jan 29 07:58:41 crc kubenswrapper[5017]: E0129 07:58:41.317631 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 07:58:54 crc kubenswrapper[5017]: I0129 07:58:54.320683 5017 scope.go:117] "RemoveContainer" containerID="c58783d1bd795ffac490b01aaccb060f3071e08daa88d33a09f7ce8715d64300" Jan 29 07:58:54 crc kubenswrapper[5017]: E0129 07:58:54.321837 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 07:59:07 crc kubenswrapper[5017]: I0129 07:59:07.694989 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-copy-data"] Jan 29 07:59:07 crc kubenswrapper[5017]: E0129 07:59:07.696226 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f1d207d-015b-4483-a744-325171ed4e47" containerName="extract-utilities" Jan 29 07:59:07 crc kubenswrapper[5017]: I0129 07:59:07.696248 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f1d207d-015b-4483-a744-325171ed4e47" containerName="extract-utilities" Jan 29 07:59:07 crc kubenswrapper[5017]: E0129 07:59:07.696260 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14fa37e1-9c41-43fe-a1e2-d5b7d4268f0c" containerName="extract-utilities" Jan 29 07:59:07 crc kubenswrapper[5017]: I0129 07:59:07.696269 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="14fa37e1-9c41-43fe-a1e2-d5b7d4268f0c" containerName="extract-utilities" Jan 29 07:59:07 crc kubenswrapper[5017]: E0129 07:59:07.696288 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e5b65cd-8fb5-4691-8c29-ff3f7a80e55f" containerName="extract-content" Jan 29 07:59:07 crc kubenswrapper[5017]: I0129 07:59:07.696297 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e5b65cd-8fb5-4691-8c29-ff3f7a80e55f" containerName="extract-content" Jan 29 07:59:07 crc kubenswrapper[5017]: E0129 07:59:07.696312 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14fa37e1-9c41-43fe-a1e2-d5b7d4268f0c" containerName="registry-server" Jan 29 07:59:07 crc kubenswrapper[5017]: I0129 07:59:07.696318 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="14fa37e1-9c41-43fe-a1e2-d5b7d4268f0c" containerName="registry-server" Jan 29 07:59:07 crc kubenswrapper[5017]: E0129 07:59:07.696327 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e5b65cd-8fb5-4691-8c29-ff3f7a80e55f" containerName="registry-server" Jan 29 07:59:07 crc kubenswrapper[5017]: I0129 07:59:07.696333 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e5b65cd-8fb5-4691-8c29-ff3f7a80e55f" containerName="registry-server" Jan 29 07:59:07 crc kubenswrapper[5017]: E0129 07:59:07.696343 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14fa37e1-9c41-43fe-a1e2-d5b7d4268f0c" containerName="extract-content" Jan 29 07:59:07 crc kubenswrapper[5017]: I0129 07:59:07.696352 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="14fa37e1-9c41-43fe-a1e2-d5b7d4268f0c" containerName="extract-content" Jan 29 07:59:07 crc kubenswrapper[5017]: E0129 07:59:07.696361 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f1d207d-015b-4483-a744-325171ed4e47" containerName="registry-server" Jan 29 07:59:07 crc kubenswrapper[5017]: I0129 07:59:07.696367 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f1d207d-015b-4483-a744-325171ed4e47" containerName="registry-server" Jan 29 07:59:07 crc kubenswrapper[5017]: E0129 07:59:07.696382 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e5b65cd-8fb5-4691-8c29-ff3f7a80e55f" containerName="extract-utilities" Jan 29 07:59:07 crc kubenswrapper[5017]: I0129 07:59:07.696388 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e5b65cd-8fb5-4691-8c29-ff3f7a80e55f" containerName="extract-utilities" Jan 29 07:59:07 crc kubenswrapper[5017]: E0129 07:59:07.696399 5017 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="1f1d207d-015b-4483-a744-325171ed4e47" containerName="extract-content" Jan 29 07:59:07 crc kubenswrapper[5017]: I0129 07:59:07.696405 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f1d207d-015b-4483-a744-325171ed4e47" containerName="extract-content" Jan 29 07:59:07 crc kubenswrapper[5017]: I0129 07:59:07.696580 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="14fa37e1-9c41-43fe-a1e2-d5b7d4268f0c" containerName="registry-server" Jan 29 07:59:07 crc kubenswrapper[5017]: I0129 07:59:07.696596 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e5b65cd-8fb5-4691-8c29-ff3f7a80e55f" containerName="registry-server" Jan 29 07:59:07 crc kubenswrapper[5017]: I0129 07:59:07.696607 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f1d207d-015b-4483-a744-325171ed4e47" containerName="registry-server" Jan 29 07:59:07 crc kubenswrapper[5017]: I0129 07:59:07.697435 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-copy-data" Jan 29 07:59:07 crc kubenswrapper[5017]: I0129 07:59:07.700655 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-pd95q" Jan 29 07:59:07 crc kubenswrapper[5017]: I0129 07:59:07.707465 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-copy-data"] Jan 29 07:59:07 crc kubenswrapper[5017]: I0129 07:59:07.846051 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cbkz\" (UniqueName: \"kubernetes.io/projected/42541727-68fc-4f89-a968-7305509acd78-kube-api-access-5cbkz\") pod \"mariadb-copy-data\" (UID: \"42541727-68fc-4f89-a968-7305509acd78\") " pod="openstack/mariadb-copy-data" Jan 29 07:59:07 crc kubenswrapper[5017]: I0129 07:59:07.846113 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b70f5329-0f47-412d-8faf-8b9bccaabfd0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b70f5329-0f47-412d-8faf-8b9bccaabfd0\") pod \"mariadb-copy-data\" (UID: \"42541727-68fc-4f89-a968-7305509acd78\") " pod="openstack/mariadb-copy-data" Jan 29 07:59:07 crc kubenswrapper[5017]: I0129 07:59:07.948452 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5cbkz\" (UniqueName: \"kubernetes.io/projected/42541727-68fc-4f89-a968-7305509acd78-kube-api-access-5cbkz\") pod \"mariadb-copy-data\" (UID: \"42541727-68fc-4f89-a968-7305509acd78\") " pod="openstack/mariadb-copy-data" Jan 29 07:59:07 crc kubenswrapper[5017]: I0129 07:59:07.948571 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b70f5329-0f47-412d-8faf-8b9bccaabfd0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b70f5329-0f47-412d-8faf-8b9bccaabfd0\") pod \"mariadb-copy-data\" (UID: \"42541727-68fc-4f89-a968-7305509acd78\") " pod="openstack/mariadb-copy-data" Jan 29 07:59:07 crc kubenswrapper[5017]: I0129 07:59:07.954521 5017 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 29 07:59:07 crc kubenswrapper[5017]: I0129 07:59:07.954583 5017 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b70f5329-0f47-412d-8faf-8b9bccaabfd0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b70f5329-0f47-412d-8faf-8b9bccaabfd0\") pod \"mariadb-copy-data\" (UID: \"42541727-68fc-4f89-a968-7305509acd78\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1639ec1c795dff8b7f84f314f89bcf11235e0dfe366f44bfac27bb4ad76da76e/globalmount\"" pod="openstack/mariadb-copy-data" Jan 29 07:59:07 crc kubenswrapper[5017]: I0129 07:59:07.969220 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5cbkz\" (UniqueName: \"kubernetes.io/projected/42541727-68fc-4f89-a968-7305509acd78-kube-api-access-5cbkz\") pod \"mariadb-copy-data\" (UID: \"42541727-68fc-4f89-a968-7305509acd78\") " pod="openstack/mariadb-copy-data" Jan 29 07:59:07 crc kubenswrapper[5017]: I0129 07:59:07.982985 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b70f5329-0f47-412d-8faf-8b9bccaabfd0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b70f5329-0f47-412d-8faf-8b9bccaabfd0\") pod \"mariadb-copy-data\" (UID: \"42541727-68fc-4f89-a968-7305509acd78\") " pod="openstack/mariadb-copy-data" Jan 29 07:59:08 crc kubenswrapper[5017]: I0129 07:59:08.032743 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-copy-data" Jan 29 07:59:08 crc kubenswrapper[5017]: I0129 07:59:08.628029 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-copy-data"] Jan 29 07:59:08 crc kubenswrapper[5017]: W0129 07:59:08.635791 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod42541727_68fc_4f89_a968_7305509acd78.slice/crio-ee10fcf05f9c92bc8334b5e083189280be2a97d84e9ca534ee0885cf3b2eaf45 WatchSource:0}: Error finding container ee10fcf05f9c92bc8334b5e083189280be2a97d84e9ca534ee0885cf3b2eaf45: Status 404 returned error can't find the container with id ee10fcf05f9c92bc8334b5e083189280be2a97d84e9ca534ee0885cf3b2eaf45 Jan 29 07:59:08 crc kubenswrapper[5017]: I0129 07:59:08.920842 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"42541727-68fc-4f89-a968-7305509acd78","Type":"ContainerStarted","Data":"f05a60c4b3125c3cf243c204ce31daaddd9da0857dd72067e93ac9ec5688ce00"} Jan 29 07:59:08 crc kubenswrapper[5017]: I0129 07:59:08.921316 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"42541727-68fc-4f89-a968-7305509acd78","Type":"ContainerStarted","Data":"ee10fcf05f9c92bc8334b5e083189280be2a97d84e9ca534ee0885cf3b2eaf45"} Jan 29 07:59:08 crc kubenswrapper[5017]: I0129 07:59:08.942100 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mariadb-copy-data" podStartSLOduration=2.942072617 podStartE2EDuration="2.942072617s" podCreationTimestamp="2026-01-29 07:59:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 07:59:08.93719355 +0000 UTC m=+5035.311641180" watchObservedRunningTime="2026-01-29 07:59:08.942072617 +0000 UTC m=+5035.316520227" Jan 29 07:59:09 crc kubenswrapper[5017]: I0129 07:59:09.316538 5017 scope.go:117] "RemoveContainer" 
containerID="c58783d1bd795ffac490b01aaccb060f3071e08daa88d33a09f7ce8715d64300" Jan 29 07:59:09 crc kubenswrapper[5017]: E0129 07:59:09.317136 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 07:59:11 crc kubenswrapper[5017]: I0129 07:59:11.653424 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"] Jan 29 07:59:11 crc kubenswrapper[5017]: I0129 07:59:11.657078 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Jan 29 07:59:11 crc kubenswrapper[5017]: I0129 07:59:11.664621 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Jan 29 07:59:11 crc kubenswrapper[5017]: I0129 07:59:11.820236 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssb6v\" (UniqueName: \"kubernetes.io/projected/2cfd606e-331c-4a51-81dc-ccd98c7e3e6b-kube-api-access-ssb6v\") pod \"mariadb-client\" (UID: \"2cfd606e-331c-4a51-81dc-ccd98c7e3e6b\") " pod="openstack/mariadb-client" Jan 29 07:59:11 crc kubenswrapper[5017]: I0129 07:59:11.921689 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ssb6v\" (UniqueName: \"kubernetes.io/projected/2cfd606e-331c-4a51-81dc-ccd98c7e3e6b-kube-api-access-ssb6v\") pod \"mariadb-client\" (UID: \"2cfd606e-331c-4a51-81dc-ccd98c7e3e6b\") " pod="openstack/mariadb-client" Jan 29 07:59:11 crc kubenswrapper[5017]: I0129 07:59:11.945290 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ssb6v\" (UniqueName: \"kubernetes.io/projected/2cfd606e-331c-4a51-81dc-ccd98c7e3e6b-kube-api-access-ssb6v\") pod \"mariadb-client\" (UID: \"2cfd606e-331c-4a51-81dc-ccd98c7e3e6b\") " pod="openstack/mariadb-client" Jan 29 07:59:11 crc kubenswrapper[5017]: I0129 07:59:11.997652 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Jan 29 07:59:12 crc kubenswrapper[5017]: I0129 07:59:12.460881 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Jan 29 07:59:12 crc kubenswrapper[5017]: W0129 07:59:12.465705 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2cfd606e_331c_4a51_81dc_ccd98c7e3e6b.slice/crio-887e17ab1445e24496ba95499a69e9afa9b24ad77b1bef20f52c967356ec02f2 WatchSource:0}: Error finding container 887e17ab1445e24496ba95499a69e9afa9b24ad77b1bef20f52c967356ec02f2: Status 404 returned error can't find the container with id 887e17ab1445e24496ba95499a69e9afa9b24ad77b1bef20f52c967356ec02f2 Jan 29 07:59:12 crc kubenswrapper[5017]: I0129 07:59:12.954918 5017 generic.go:334] "Generic (PLEG): container finished" podID="2cfd606e-331c-4a51-81dc-ccd98c7e3e6b" containerID="a0741905ab8c241d32170ce94719ce5472071a9695bf7c0176377964e99a094f" exitCode=0 Jan 29 07:59:12 crc kubenswrapper[5017]: I0129 07:59:12.954998 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"2cfd606e-331c-4a51-81dc-ccd98c7e3e6b","Type":"ContainerDied","Data":"a0741905ab8c241d32170ce94719ce5472071a9695bf7c0176377964e99a094f"} Jan 29 07:59:12 crc kubenswrapper[5017]: I0129 07:59:12.955385 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"2cfd606e-331c-4a51-81dc-ccd98c7e3e6b","Type":"ContainerStarted","Data":"887e17ab1445e24496ba95499a69e9afa9b24ad77b1bef20f52c967356ec02f2"} Jan 29 07:59:14 crc kubenswrapper[5017]: I0129 07:59:14.275129 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Jan 29 07:59:14 crc kubenswrapper[5017]: I0129 07:59:14.307771 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client_2cfd606e-331c-4a51-81dc-ccd98c7e3e6b/mariadb-client/0.log" Jan 29 07:59:14 crc kubenswrapper[5017]: I0129 07:59:14.358519 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Jan 29 07:59:14 crc kubenswrapper[5017]: I0129 07:59:14.366976 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"] Jan 29 07:59:14 crc kubenswrapper[5017]: I0129 07:59:14.370764 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ssb6v\" (UniqueName: \"kubernetes.io/projected/2cfd606e-331c-4a51-81dc-ccd98c7e3e6b-kube-api-access-ssb6v\") pod \"2cfd606e-331c-4a51-81dc-ccd98c7e3e6b\" (UID: \"2cfd606e-331c-4a51-81dc-ccd98c7e3e6b\") " Jan 29 07:59:14 crc kubenswrapper[5017]: I0129 07:59:14.393215 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cfd606e-331c-4a51-81dc-ccd98c7e3e6b-kube-api-access-ssb6v" (OuterVolumeSpecName: "kube-api-access-ssb6v") pod "2cfd606e-331c-4a51-81dc-ccd98c7e3e6b" (UID: "2cfd606e-331c-4a51-81dc-ccd98c7e3e6b"). InnerVolumeSpecName "kube-api-access-ssb6v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:59:14 crc kubenswrapper[5017]: I0129 07:59:14.473116 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"] Jan 29 07:59:14 crc kubenswrapper[5017]: I0129 07:59:14.473588 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ssb6v\" (UniqueName: \"kubernetes.io/projected/2cfd606e-331c-4a51-81dc-ccd98c7e3e6b-kube-api-access-ssb6v\") on node \"crc\" DevicePath \"\"" Jan 29 07:59:14 crc kubenswrapper[5017]: E0129 07:59:14.473686 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cfd606e-331c-4a51-81dc-ccd98c7e3e6b" containerName="mariadb-client" Jan 29 07:59:14 crc kubenswrapper[5017]: I0129 07:59:14.473710 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cfd606e-331c-4a51-81dc-ccd98c7e3e6b" containerName="mariadb-client" Jan 29 07:59:14 crc kubenswrapper[5017]: I0129 07:59:14.474035 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cfd606e-331c-4a51-81dc-ccd98c7e3e6b" containerName="mariadb-client" Jan 29 07:59:14 crc kubenswrapper[5017]: I0129 07:59:14.474894 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Jan 29 07:59:14 crc kubenswrapper[5017]: I0129 07:59:14.481982 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Jan 29 07:59:14 crc kubenswrapper[5017]: I0129 07:59:14.496303 5017 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/mariadb-client" oldPodUID="2cfd606e-331c-4a51-81dc-ccd98c7e3e6b" podUID="0a719b41-1b68-457e-81c4-f7e16fbc348f" Jan 29 07:59:14 crc kubenswrapper[5017]: I0129 07:59:14.575552 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6c9fn\" (UniqueName: \"kubernetes.io/projected/0a719b41-1b68-457e-81c4-f7e16fbc348f-kube-api-access-6c9fn\") pod \"mariadb-client\" (UID: \"0a719b41-1b68-457e-81c4-f7e16fbc348f\") " pod="openstack/mariadb-client" Jan 29 07:59:14 crc kubenswrapper[5017]: I0129 07:59:14.678683 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6c9fn\" (UniqueName: \"kubernetes.io/projected/0a719b41-1b68-457e-81c4-f7e16fbc348f-kube-api-access-6c9fn\") pod \"mariadb-client\" (UID: \"0a719b41-1b68-457e-81c4-f7e16fbc348f\") " pod="openstack/mariadb-client" Jan 29 07:59:14 crc kubenswrapper[5017]: I0129 07:59:14.703044 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6c9fn\" (UniqueName: \"kubernetes.io/projected/0a719b41-1b68-457e-81c4-f7e16fbc348f-kube-api-access-6c9fn\") pod \"mariadb-client\" (UID: \"0a719b41-1b68-457e-81c4-f7e16fbc348f\") " pod="openstack/mariadb-client" Jan 29 07:59:14 crc kubenswrapper[5017]: I0129 07:59:14.795096 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Jan 29 07:59:14 crc kubenswrapper[5017]: I0129 07:59:14.980575 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="887e17ab1445e24496ba95499a69e9afa9b24ad77b1bef20f52c967356ec02f2" Jan 29 07:59:14 crc kubenswrapper[5017]: I0129 07:59:14.980650 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Jan 29 07:59:14 crc kubenswrapper[5017]: I0129 07:59:14.985790 5017 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/mariadb-client" oldPodUID="2cfd606e-331c-4a51-81dc-ccd98c7e3e6b" podUID="0a719b41-1b68-457e-81c4-f7e16fbc348f" Jan 29 07:59:15 crc kubenswrapper[5017]: I0129 07:59:15.002254 5017 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/mariadb-client" oldPodUID="2cfd606e-331c-4a51-81dc-ccd98c7e3e6b" podUID="0a719b41-1b68-457e-81c4-f7e16fbc348f" Jan 29 07:59:15 crc kubenswrapper[5017]: I0129 07:59:15.261640 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Jan 29 07:59:15 crc kubenswrapper[5017]: I0129 07:59:15.989933 5017 generic.go:334] "Generic (PLEG): container finished" podID="0a719b41-1b68-457e-81c4-f7e16fbc348f" containerID="26803d0137462548da50798083b8c3cc8fe0a816bdc6c92d494171e76650bf0d" exitCode=0 Jan 29 07:59:15 crc kubenswrapper[5017]: I0129 07:59:15.990387 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"0a719b41-1b68-457e-81c4-f7e16fbc348f","Type":"ContainerDied","Data":"26803d0137462548da50798083b8c3cc8fe0a816bdc6c92d494171e76650bf0d"} Jan 29 07:59:15 crc kubenswrapper[5017]: I0129 07:59:15.990419 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"0a719b41-1b68-457e-81c4-f7e16fbc348f","Type":"ContainerStarted","Data":"54d4d02e684d64cb1838886914cd9b1c161121fc4faa599a8378154422b3411d"} Jan 29 07:59:16 crc kubenswrapper[5017]: I0129 07:59:16.326623 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2cfd606e-331c-4a51-81dc-ccd98c7e3e6b" path="/var/lib/kubelet/pods/2cfd606e-331c-4a51-81dc-ccd98c7e3e6b/volumes" Jan 29 07:59:17 crc kubenswrapper[5017]: I0129 07:59:17.756173 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Jan 29 07:59:17 crc kubenswrapper[5017]: I0129 07:59:17.777607 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client_0a719b41-1b68-457e-81c4-f7e16fbc348f/mariadb-client/0.log" Jan 29 07:59:17 crc kubenswrapper[5017]: I0129 07:59:17.813840 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Jan 29 07:59:17 crc kubenswrapper[5017]: I0129 07:59:17.820438 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"] Jan 29 07:59:17 crc kubenswrapper[5017]: I0129 07:59:17.846724 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6c9fn\" (UniqueName: \"kubernetes.io/projected/0a719b41-1b68-457e-81c4-f7e16fbc348f-kube-api-access-6c9fn\") pod \"0a719b41-1b68-457e-81c4-f7e16fbc348f\" (UID: \"0a719b41-1b68-457e-81c4-f7e16fbc348f\") " Jan 29 07:59:17 crc kubenswrapper[5017]: I0129 07:59:17.886774 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a719b41-1b68-457e-81c4-f7e16fbc348f-kube-api-access-6c9fn" (OuterVolumeSpecName: "kube-api-access-6c9fn") pod "0a719b41-1b68-457e-81c4-f7e16fbc348f" (UID: "0a719b41-1b68-457e-81c4-f7e16fbc348f"). InnerVolumeSpecName "kube-api-access-6c9fn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:59:17 crc kubenswrapper[5017]: I0129 07:59:17.948534 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6c9fn\" (UniqueName: \"kubernetes.io/projected/0a719b41-1b68-457e-81c4-f7e16fbc348f-kube-api-access-6c9fn\") on node \"crc\" DevicePath \"\"" Jan 29 07:59:18 crc kubenswrapper[5017]: I0129 07:59:18.007520 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="54d4d02e684d64cb1838886914cd9b1c161121fc4faa599a8378154422b3411d" Jan 29 07:59:18 crc kubenswrapper[5017]: I0129 07:59:18.007629 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Jan 29 07:59:18 crc kubenswrapper[5017]: I0129 07:59:18.327560 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a719b41-1b68-457e-81c4-f7e16fbc348f" path="/var/lib/kubelet/pods/0a719b41-1b68-457e-81c4-f7e16fbc348f/volumes" Jan 29 07:59:23 crc kubenswrapper[5017]: I0129 07:59:23.402134 5017 scope.go:117] "RemoveContainer" containerID="5e26068ad8aad47aa698e5039e45501c78d8c2b17f4731196b324045eef3432c" Jan 29 07:59:24 crc kubenswrapper[5017]: I0129 07:59:24.321526 5017 scope.go:117] "RemoveContainer" containerID="c58783d1bd795ffac490b01aaccb060f3071e08daa88d33a09f7ce8715d64300" Jan 29 07:59:24 crc kubenswrapper[5017]: E0129 07:59:24.322134 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 07:59:39 crc kubenswrapper[5017]: I0129 07:59:39.316005 5017 scope.go:117] "RemoveContainer" containerID="c58783d1bd795ffac490b01aaccb060f3071e08daa88d33a09f7ce8715d64300" Jan 29 07:59:39 crc kubenswrapper[5017]: E0129 07:59:39.317189 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 07:59:54 crc kubenswrapper[5017]: I0129 07:59:54.326130 5017 scope.go:117] "RemoveContainer" containerID="c58783d1bd795ffac490b01aaccb060f3071e08daa88d33a09f7ce8715d64300" Jan 29 07:59:54 crc kubenswrapper[5017]: E0129 07:59:54.327543 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.236505 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 29 07:59:57 crc kubenswrapper[5017]: E0129 07:59:57.237797 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a719b41-1b68-457e-81c4-f7e16fbc348f" containerName="mariadb-client" Jan 29 07:59:57 crc 
kubenswrapper[5017]: I0129 07:59:57.237817 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a719b41-1b68-457e-81c4-f7e16fbc348f" containerName="mariadb-client" Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.238034 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a719b41-1b68-457e-81c4-f7e16fbc348f" containerName="mariadb-client" Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.239061 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.241338 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.241453 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-2wd52" Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.241348 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.254465 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.268551 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-1"] Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.270130 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-1" Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.279784 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-2"] Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.284667 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-2" Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.302168 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-1"] Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.322363 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-2"] Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.344921 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-5665c337-2052-4480-b6e9-4ac2ae19a229\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5665c337-2052-4480-b6e9-4ac2ae19a229\") pod \"ovsdbserver-nb-0\" (UID: \"eec52b57-cfbe-49e2-aa22-112f785bff7c\") " pod="openstack/ovsdbserver-nb-0" Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.345021 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwpwm\" (UniqueName: \"kubernetes.io/projected/eec52b57-cfbe-49e2-aa22-112f785bff7c-kube-api-access-zwpwm\") pod \"ovsdbserver-nb-0\" (UID: \"eec52b57-cfbe-49e2-aa22-112f785bff7c\") " pod="openstack/ovsdbserver-nb-0" Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.345090 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/eec52b57-cfbe-49e2-aa22-112f785bff7c-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"eec52b57-cfbe-49e2-aa22-112f785bff7c\") " pod="openstack/ovsdbserver-nb-0" Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.345117 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eec52b57-cfbe-49e2-aa22-112f785bff7c-config\") pod \"ovsdbserver-nb-0\" (UID: \"eec52b57-cfbe-49e2-aa22-112f785bff7c\") " pod="openstack/ovsdbserver-nb-0" Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.345151 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/eec52b57-cfbe-49e2-aa22-112f785bff7c-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"eec52b57-cfbe-49e2-aa22-112f785bff7c\") " pod="openstack/ovsdbserver-nb-0" Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.345177 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eec52b57-cfbe-49e2-aa22-112f785bff7c-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"eec52b57-cfbe-49e2-aa22-112f785bff7c\") " pod="openstack/ovsdbserver-nb-0" Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.431468 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.434014 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.437339 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-9z48t" Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.437582 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.437685 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.447823 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/eac0b731-771a-4164-a5a1-f17bad61fb30-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"eac0b731-771a-4164-a5a1-f17bad61fb30\") " pod="openstack/ovsdbserver-nb-2" Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.447899 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2151d723-7e34-4b11-b8ed-4621c7075e80\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2151d723-7e34-4b11-b8ed-4621c7075e80\") pod \"ovsdbserver-nb-1\" (UID: \"02b0ddcd-bef7-4916-957c-f6b1aaa4fc4f\") " pod="openstack/ovsdbserver-nb-1" Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.447975 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/eec52b57-cfbe-49e2-aa22-112f785bff7c-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"eec52b57-cfbe-49e2-aa22-112f785bff7c\") " pod="openstack/ovsdbserver-nb-0" Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.448004 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eec52b57-cfbe-49e2-aa22-112f785bff7c-config\") pod \"ovsdbserver-nb-0\" (UID: \"eec52b57-cfbe-49e2-aa22-112f785bff7c\") " pod="openstack/ovsdbserver-nb-0" Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.448052 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/02b0ddcd-bef7-4916-957c-f6b1aaa4fc4f-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"02b0ddcd-bef7-4916-957c-f6b1aaa4fc4f\") " pod="openstack/ovsdbserver-nb-1" Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.448078 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/eec52b57-cfbe-49e2-aa22-112f785bff7c-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"eec52b57-cfbe-49e2-aa22-112f785bff7c\") " pod="openstack/ovsdbserver-nb-0" Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.448100 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/eac0b731-771a-4164-a5a1-f17bad61fb30-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"eac0b731-771a-4164-a5a1-f17bad61fb30\") " pod="openstack/ovsdbserver-nb-2" Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.448138 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eec52b57-cfbe-49e2-aa22-112f785bff7c-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"eec52b57-cfbe-49e2-aa22-112f785bff7c\") " 
pod="openstack/ovsdbserver-nb-0" Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.448166 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02b0ddcd-bef7-4916-957c-f6b1aaa4fc4f-config\") pod \"ovsdbserver-nb-1\" (UID: \"02b0ddcd-bef7-4916-957c-f6b1aaa4fc4f\") " pod="openstack/ovsdbserver-nb-1" Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.448212 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eac0b731-771a-4164-a5a1-f17bad61fb30-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"eac0b731-771a-4164-a5a1-f17bad61fb30\") " pod="openstack/ovsdbserver-nb-2" Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.448255 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c000751b-1a46-4a10-bd1f-83272b49afed\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c000751b-1a46-4a10-bd1f-83272b49afed\") pod \"ovsdbserver-nb-2\" (UID: \"eac0b731-771a-4164-a5a1-f17bad61fb30\") " pod="openstack/ovsdbserver-nb-2" Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.448286 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eac0b731-771a-4164-a5a1-f17bad61fb30-config\") pod \"ovsdbserver-nb-2\" (UID: \"eac0b731-771a-4164-a5a1-f17bad61fb30\") " pod="openstack/ovsdbserver-nb-2" Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.448327 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-5665c337-2052-4480-b6e9-4ac2ae19a229\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5665c337-2052-4480-b6e9-4ac2ae19a229\") pod \"ovsdbserver-nb-0\" (UID: \"eec52b57-cfbe-49e2-aa22-112f785bff7c\") " pod="openstack/ovsdbserver-nb-0" Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.448367 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02b0ddcd-bef7-4916-957c-f6b1aaa4fc4f-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"02b0ddcd-bef7-4916-957c-f6b1aaa4fc4f\") " pod="openstack/ovsdbserver-nb-1" Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.448414 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwpwm\" (UniqueName: \"kubernetes.io/projected/eec52b57-cfbe-49e2-aa22-112f785bff7c-kube-api-access-zwpwm\") pod \"ovsdbserver-nb-0\" (UID: \"eec52b57-cfbe-49e2-aa22-112f785bff7c\") " pod="openstack/ovsdbserver-nb-0" Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.448445 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nl4zz\" (UniqueName: \"kubernetes.io/projected/eac0b731-771a-4164-a5a1-f17bad61fb30-kube-api-access-nl4zz\") pod \"ovsdbserver-nb-2\" (UID: \"eac0b731-771a-4164-a5a1-f17bad61fb30\") " pod="openstack/ovsdbserver-nb-2" Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.448479 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52dtr\" (UniqueName: \"kubernetes.io/projected/02b0ddcd-bef7-4916-957c-f6b1aaa4fc4f-kube-api-access-52dtr\") pod \"ovsdbserver-nb-1\" (UID: \"02b0ddcd-bef7-4916-957c-f6b1aaa4fc4f\") " pod="openstack/ovsdbserver-nb-1" 
Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.448510 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/02b0ddcd-bef7-4916-957c-f6b1aaa4fc4f-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"02b0ddcd-bef7-4916-957c-f6b1aaa4fc4f\") " pod="openstack/ovsdbserver-nb-1"
Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.454093 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.455194 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/eec52b57-cfbe-49e2-aa22-112f785bff7c-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"eec52b57-cfbe-49e2-aa22-112f785bff7c\") " pod="openstack/ovsdbserver-nb-0"
Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.456656 5017 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.456683 5017 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-5665c337-2052-4480-b6e9-4ac2ae19a229\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5665c337-2052-4480-b6e9-4ac2ae19a229\") pod \"ovsdbserver-nb-0\" (UID: \"eec52b57-cfbe-49e2-aa22-112f785bff7c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/8c724b3f8768df0528ca8679a2d692a72d1ddad91e34ab12965cade9a514ebce/globalmount\"" pod="openstack/ovsdbserver-nb-0"
Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.457125 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eec52b57-cfbe-49e2-aa22-112f785bff7c-config\") pod \"ovsdbserver-nb-0\" (UID: \"eec52b57-cfbe-49e2-aa22-112f785bff7c\") " pod="openstack/ovsdbserver-nb-0"
Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.457420 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/eec52b57-cfbe-49e2-aa22-112f785bff7c-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"eec52b57-cfbe-49e2-aa22-112f785bff7c\") " pod="openstack/ovsdbserver-nb-0"
Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.464387 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eec52b57-cfbe-49e2-aa22-112f785bff7c-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"eec52b57-cfbe-49e2-aa22-112f785bff7c\") " pod="openstack/ovsdbserver-nb-0"
Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.470143 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-1"]
Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.472542 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-1"
Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.477490 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwpwm\" (UniqueName: \"kubernetes.io/projected/eec52b57-cfbe-49e2-aa22-112f785bff7c-kube-api-access-zwpwm\") pod \"ovsdbserver-nb-0\" (UID: \"eec52b57-cfbe-49e2-aa22-112f785bff7c\") " pod="openstack/ovsdbserver-nb-0"
Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.491256 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-2"]
Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.504249 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-2"
Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.505921 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-2"]
Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.506221 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-5665c337-2052-4480-b6e9-4ac2ae19a229\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5665c337-2052-4480-b6e9-4ac2ae19a229\") pod \"ovsdbserver-nb-0\" (UID: \"eec52b57-cfbe-49e2-aa22-112f785bff7c\") " pod="openstack/ovsdbserver-nb-0"
Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.514325 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-1"]
Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.550216 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02b0ddcd-bef7-4916-957c-f6b1aaa4fc4f-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"02b0ddcd-bef7-4916-957c-f6b1aaa4fc4f\") " pod="openstack/ovsdbserver-nb-1"
Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.550440 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-62b7a992-ef27-4834-9272-dd4a1cbb095f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-62b7a992-ef27-4834-9272-dd4a1cbb095f\") pod \"ovsdbserver-sb-0\" (UID: \"fc9b2718-9b96-4d4b-ade8-5394392229f9\") " pod="openstack/ovsdbserver-sb-0"
Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.550473 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc9b2718-9b96-4d4b-ade8-5394392229f9-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"fc9b2718-9b96-4d4b-ade8-5394392229f9\") " pod="openstack/ovsdbserver-sb-0"
Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.550505 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nl4zz\" (UniqueName: \"kubernetes.io/projected/eac0b731-771a-4164-a5a1-f17bad61fb30-kube-api-access-nl4zz\") pod \"ovsdbserver-nb-2\" (UID: \"eac0b731-771a-4164-a5a1-f17bad61fb30\") " pod="openstack/ovsdbserver-nb-2"
Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.550531 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52dtr\" (UniqueName: \"kubernetes.io/projected/02b0ddcd-bef7-4916-957c-f6b1aaa4fc4f-kube-api-access-52dtr\") pod \"ovsdbserver-nb-1\" (UID: \"02b0ddcd-bef7-4916-957c-f6b1aaa4fc4f\") " pod="openstack/ovsdbserver-nb-1"
Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.550551 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a23012e-ce8c-4a9a-b812-f5fa91f22623-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"0a23012e-ce8c-4a9a-b812-f5fa91f22623\") " pod="openstack/ovsdbserver-sb-1"
Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.550609 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0a23012e-ce8c-4a9a-b812-f5fa91f22623-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"0a23012e-ce8c-4a9a-b812-f5fa91f22623\") " pod="openstack/ovsdbserver-sb-1"
Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.550631 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/02b0ddcd-bef7-4916-957c-f6b1aaa4fc4f-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"02b0ddcd-bef7-4916-957c-f6b1aaa4fc4f\") " pod="openstack/ovsdbserver-nb-1"
Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.550676 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/fc9b2718-9b96-4d4b-ade8-5394392229f9-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"fc9b2718-9b96-4d4b-ade8-5394392229f9\") " pod="openstack/ovsdbserver-sb-0"
Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.550709 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/eac0b731-771a-4164-a5a1-f17bad61fb30-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"eac0b731-771a-4164-a5a1-f17bad61fb30\") " pod="openstack/ovsdbserver-nb-2"
Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.550757 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2151d723-7e34-4b11-b8ed-4621c7075e80\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2151d723-7e34-4b11-b8ed-4621c7075e80\") pod \"ovsdbserver-nb-1\" (UID: \"02b0ddcd-bef7-4916-957c-f6b1aaa4fc4f\") " pod="openstack/ovsdbserver-nb-1"
Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.550784 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qhsk\" (UniqueName: \"kubernetes.io/projected/fc9b2718-9b96-4d4b-ade8-5394392229f9-kube-api-access-5qhsk\") pod \"ovsdbserver-sb-0\" (UID: \"fc9b2718-9b96-4d4b-ade8-5394392229f9\") " pod="openstack/ovsdbserver-sb-0"
Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.550850 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc9b2718-9b96-4d4b-ade8-5394392229f9-config\") pod \"ovsdbserver-sb-0\" (UID: \"fc9b2718-9b96-4d4b-ade8-5394392229f9\") " pod="openstack/ovsdbserver-sb-0"
Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.551108 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fc9b2718-9b96-4d4b-ade8-5394392229f9-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"fc9b2718-9b96-4d4b-ade8-5394392229f9\") " pod="openstack/ovsdbserver-sb-0"
Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.551166 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/02b0ddcd-bef7-4916-957c-f6b1aaa4fc4f-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"02b0ddcd-bef7-4916-957c-f6b1aaa4fc4f\") " pod="openstack/ovsdbserver-nb-1"
Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.552148 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/eac0b731-771a-4164-a5a1-f17bad61fb30-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"eac0b731-771a-4164-a5a1-f17bad61fb30\") " pod="openstack/ovsdbserver-nb-2"
Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.552178 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4kvf\" (UniqueName: \"kubernetes.io/projected/0a23012e-ce8c-4a9a-b812-f5fa91f22623-kube-api-access-n4kvf\") pod \"ovsdbserver-sb-1\" (UID: \"0a23012e-ce8c-4a9a-b812-f5fa91f22623\") " pod="openstack/ovsdbserver-sb-1"
Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.552507 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/02b0ddcd-bef7-4916-957c-f6b1aaa4fc4f-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"02b0ddcd-bef7-4916-957c-f6b1aaa4fc4f\") " pod="openstack/ovsdbserver-nb-1"
Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.552532 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/02b0ddcd-bef7-4916-957c-f6b1aaa4fc4f-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"02b0ddcd-bef7-4916-957c-f6b1aaa4fc4f\") " pod="openstack/ovsdbserver-nb-1"
Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.553469 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02b0ddcd-bef7-4916-957c-f6b1aaa4fc4f-config\") pod \"ovsdbserver-nb-1\" (UID: \"02b0ddcd-bef7-4916-957c-f6b1aaa4fc4f\") " pod="openstack/ovsdbserver-nb-1"
Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.552234 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02b0ddcd-bef7-4916-957c-f6b1aaa4fc4f-config\") pod \"ovsdbserver-nb-1\" (UID: \"02b0ddcd-bef7-4916-957c-f6b1aaa4fc4f\") " pod="openstack/ovsdbserver-nb-1"
Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.554073 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eac0b731-771a-4164-a5a1-f17bad61fb30-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"eac0b731-771a-4164-a5a1-f17bad61fb30\") " pod="openstack/ovsdbserver-nb-2"
Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.554110 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0a23012e-ce8c-4a9a-b812-f5fa91f22623-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"0a23012e-ce8c-4a9a-b812-f5fa91f22623\") " pod="openstack/ovsdbserver-sb-1"
Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.554140 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a530a51d-05d9-4a18-8684-cae2fc12d16a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a530a51d-05d9-4a18-8684-cae2fc12d16a\") pod \"ovsdbserver-sb-1\" (UID: \"0a23012e-ce8c-4a9a-b812-f5fa91f22623\") " pod="openstack/ovsdbserver-sb-1"
Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.554151 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/eac0b731-771a-4164-a5a1-f17bad61fb30-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"eac0b731-771a-4164-a5a1-f17bad61fb30\") " pod="openstack/ovsdbserver-nb-2"
Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.554163 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a23012e-ce8c-4a9a-b812-f5fa91f22623-config\") pod \"ovsdbserver-sb-1\" (UID: \"0a23012e-ce8c-4a9a-b812-f5fa91f22623\") " pod="openstack/ovsdbserver-sb-1"
Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.554252 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c000751b-1a46-4a10-bd1f-83272b49afed\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c000751b-1a46-4a10-bd1f-83272b49afed\") pod \"ovsdbserver-nb-2\" (UID: \"eac0b731-771a-4164-a5a1-f17bad61fb30\") " pod="openstack/ovsdbserver-nb-2"
Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.554300 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eac0b731-771a-4164-a5a1-f17bad61fb30-config\") pod \"ovsdbserver-nb-2\" (UID: \"eac0b731-771a-4164-a5a1-f17bad61fb30\") " pod="openstack/ovsdbserver-nb-2"
Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.555222 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/eac0b731-771a-4164-a5a1-f17bad61fb30-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"eac0b731-771a-4164-a5a1-f17bad61fb30\") " pod="openstack/ovsdbserver-nb-2"
Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.555466 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eac0b731-771a-4164-a5a1-f17bad61fb30-config\") pod \"ovsdbserver-nb-2\" (UID: \"eac0b731-771a-4164-a5a1-f17bad61fb30\") " pod="openstack/ovsdbserver-nb-2"
Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.556193 5017 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.556227 5017 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c000751b-1a46-4a10-bd1f-83272b49afed\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c000751b-1a46-4a10-bd1f-83272b49afed\") pod \"ovsdbserver-nb-2\" (UID: \"eac0b731-771a-4164-a5a1-f17bad61fb30\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e1f01e3cf26bdde5cd0715bee557eaee423a2ca76cf141f50e482f450065ab47/globalmount\"" pod="openstack/ovsdbserver-nb-2"
Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.556340 5017 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.556372 5017 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2151d723-7e34-4b11-b8ed-4621c7075e80\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2151d723-7e34-4b11-b8ed-4621c7075e80\") pod \"ovsdbserver-nb-1\" (UID: \"02b0ddcd-bef7-4916-957c-f6b1aaa4fc4f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/8e8f6bb24d0cc3c8ffb0af16a57bd5f0fabde48785cedea09611c4da3e1a0172/globalmount\"" pod="openstack/ovsdbserver-nb-1"
Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.558174 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02b0ddcd-bef7-4916-957c-f6b1aaa4fc4f-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"02b0ddcd-bef7-4916-957c-f6b1aaa4fc4f\") " pod="openstack/ovsdbserver-nb-1"
Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.558873 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eac0b731-771a-4164-a5a1-f17bad61fb30-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"eac0b731-771a-4164-a5a1-f17bad61fb30\") " pod="openstack/ovsdbserver-nb-2"
Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.569927 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nl4zz\" (UniqueName: \"kubernetes.io/projected/eac0b731-771a-4164-a5a1-f17bad61fb30-kube-api-access-nl4zz\") pod \"ovsdbserver-nb-2\" (UID: \"eac0b731-771a-4164-a5a1-f17bad61fb30\") " pod="openstack/ovsdbserver-nb-2"
Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.570868 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.572152 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52dtr\" (UniqueName: \"kubernetes.io/projected/02b0ddcd-bef7-4916-957c-f6b1aaa4fc4f-kube-api-access-52dtr\") pod \"ovsdbserver-nb-1\" (UID: \"02b0ddcd-bef7-4916-957c-f6b1aaa4fc4f\") " pod="openstack/ovsdbserver-nb-1"
Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.585198 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c000751b-1a46-4a10-bd1f-83272b49afed\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c000751b-1a46-4a10-bd1f-83272b49afed\") pod \"ovsdbserver-nb-2\" (UID: \"eac0b731-771a-4164-a5a1-f17bad61fb30\") " pod="openstack/ovsdbserver-nb-2"
Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.599129 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2151d723-7e34-4b11-b8ed-4621c7075e80\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2151d723-7e34-4b11-b8ed-4621c7075e80\") pod \"ovsdbserver-nb-1\" (UID: \"02b0ddcd-bef7-4916-957c-f6b1aaa4fc4f\") " pod="openstack/ovsdbserver-nb-1"
Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.602120 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-2"
Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.657323 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0a23012e-ce8c-4a9a-b812-f5fa91f22623-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"0a23012e-ce8c-4a9a-b812-f5fa91f22623\") " pod="openstack/ovsdbserver-sb-1"
Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.657901 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a23012e-ce8c-4a9a-b812-f5fa91f22623-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"0a23012e-ce8c-4a9a-b812-f5fa91f22623\") " pod="openstack/ovsdbserver-sb-1"
Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.657950 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/fc9b2718-9b96-4d4b-ade8-5394392229f9-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"fc9b2718-9b96-4d4b-ade8-5394392229f9\") " pod="openstack/ovsdbserver-sb-0"
Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.658028 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qhsk\" (UniqueName: \"kubernetes.io/projected/fc9b2718-9b96-4d4b-ade8-5394392229f9-kube-api-access-5qhsk\") pod \"ovsdbserver-sb-0\" (UID: \"fc9b2718-9b96-4d4b-ade8-5394392229f9\") " pod="openstack/ovsdbserver-sb-0"
Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.658054 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0a23012e-ce8c-4a9a-b812-f5fa91f22623-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"0a23012e-ce8c-4a9a-b812-f5fa91f22623\") " pod="openstack/ovsdbserver-sb-1"
Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.658073 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a10eac92-4703-47fd-b022-0dcca527b076-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"a10eac92-4703-47fd-b022-0dcca527b076\") " pod="openstack/ovsdbserver-sb-2"
Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.658126 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc9b2718-9b96-4d4b-ade8-5394392229f9-config\") pod \"ovsdbserver-sb-0\" (UID: \"fc9b2718-9b96-4d4b-ade8-5394392229f9\") " pod="openstack/ovsdbserver-sb-0"
Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.658166 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fc9b2718-9b96-4d4b-ade8-5394392229f9-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"fc9b2718-9b96-4d4b-ade8-5394392229f9\") " pod="openstack/ovsdbserver-sb-0"
Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.658192 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dss2\" (UniqueName: \"kubernetes.io/projected/a10eac92-4703-47fd-b022-0dcca527b076-kube-api-access-9dss2\") pod \"ovsdbserver-sb-2\" (UID: \"a10eac92-4703-47fd-b022-0dcca527b076\") " pod="openstack/ovsdbserver-sb-2"
Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.658233 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4kvf\" (UniqueName: \"kubernetes.io/projected/0a23012e-ce8c-4a9a-b812-f5fa91f22623-kube-api-access-n4kvf\") pod \"ovsdbserver-sb-1\" (UID: \"0a23012e-ce8c-4a9a-b812-f5fa91f22623\") " pod="openstack/ovsdbserver-sb-1"
Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.658303 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0a23012e-ce8c-4a9a-b812-f5fa91f22623-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"0a23012e-ce8c-4a9a-b812-f5fa91f22623\") " pod="openstack/ovsdbserver-sb-1"
Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.658336 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-a530a51d-05d9-4a18-8684-cae2fc12d16a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a530a51d-05d9-4a18-8684-cae2fc12d16a\") pod \"ovsdbserver-sb-1\" (UID: \"0a23012e-ce8c-4a9a-b812-f5fa91f22623\") " pod="openstack/ovsdbserver-sb-1"
Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.658358 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a23012e-ce8c-4a9a-b812-f5fa91f22623-config\") pod \"ovsdbserver-sb-1\" (UID: \"0a23012e-ce8c-4a9a-b812-f5fa91f22623\") " pod="openstack/ovsdbserver-sb-1"
Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.658390 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c7543f93-1868-4509-931c-6ec1a03131fa\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c7543f93-1868-4509-931c-6ec1a03131fa\") pod \"ovsdbserver-sb-2\" (UID: \"a10eac92-4703-47fd-b022-0dcca527b076\") " pod="openstack/ovsdbserver-sb-2"
Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.658420 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a10eac92-4703-47fd-b022-0dcca527b076-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"a10eac92-4703-47fd-b022-0dcca527b076\") " pod="openstack/ovsdbserver-sb-2"
Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.658468 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a10eac92-4703-47fd-b022-0dcca527b076-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"a10eac92-4703-47fd-b022-0dcca527b076\") " pod="openstack/ovsdbserver-sb-2"
Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.658509 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/fc9b2718-9b96-4d4b-ade8-5394392229f9-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"fc9b2718-9b96-4d4b-ade8-5394392229f9\") " pod="openstack/ovsdbserver-sb-0"
Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.658511 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-62b7a992-ef27-4834-9272-dd4a1cbb095f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-62b7a992-ef27-4834-9272-dd4a1cbb095f\") pod \"ovsdbserver-sb-0\" (UID: \"fc9b2718-9b96-4d4b-ade8-5394392229f9\") " pod="openstack/ovsdbserver-sb-0"
Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.658556 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc9b2718-9b96-4d4b-ade8-5394392229f9-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"fc9b2718-9b96-4d4b-ade8-5394392229f9\") " pod="openstack/ovsdbserver-sb-0"
Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.658597 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a10eac92-4703-47fd-b022-0dcca527b076-config\") pod \"ovsdbserver-sb-2\" (UID: \"a10eac92-4703-47fd-b022-0dcca527b076\") " pod="openstack/ovsdbserver-sb-2"
Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.659258 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc9b2718-9b96-4d4b-ade8-5394392229f9-config\") pod \"ovsdbserver-sb-0\" (UID: \"fc9b2718-9b96-4d4b-ade8-5394392229f9\") " pod="openstack/ovsdbserver-sb-0"
Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.660349 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fc9b2718-9b96-4d4b-ade8-5394392229f9-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"fc9b2718-9b96-4d4b-ade8-5394392229f9\") " pod="openstack/ovsdbserver-sb-0"
Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.660682 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0a23012e-ce8c-4a9a-b812-f5fa91f22623-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"0a23012e-ce8c-4a9a-b812-f5fa91f22623\") " pod="openstack/ovsdbserver-sb-1"
Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.660839 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a23012e-ce8c-4a9a-b812-f5fa91f22623-config\") pod \"ovsdbserver-sb-1\" (UID: \"0a23012e-ce8c-4a9a-b812-f5fa91f22623\") " pod="openstack/ovsdbserver-sb-1"
Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.662949 5017 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.663014 5017 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-a530a51d-05d9-4a18-8684-cae2fc12d16a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a530a51d-05d9-4a18-8684-cae2fc12d16a\") pod \"ovsdbserver-sb-1\" (UID: \"0a23012e-ce8c-4a9a-b812-f5fa91f22623\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/a163d8d41e3022e840ffba5051f9df690022c82e0797b97f9848f58fee02b1e1/globalmount\"" pod="openstack/ovsdbserver-sb-1"
Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.664916 5017 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.665170 5017 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-62b7a992-ef27-4834-9272-dd4a1cbb095f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-62b7a992-ef27-4834-9272-dd4a1cbb095f\") pod \"ovsdbserver-sb-0\" (UID: \"fc9b2718-9b96-4d4b-ade8-5394392229f9\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/6b9dcd011617b0819da3812934953f73333e86e711088bdceb57291d92d4fcb5/globalmount\"" pod="openstack/ovsdbserver-sb-0"
Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.667257 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc9b2718-9b96-4d4b-ade8-5394392229f9-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"fc9b2718-9b96-4d4b-ade8-5394392229f9\") " pod="openstack/ovsdbserver-sb-0"
Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.670545 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a23012e-ce8c-4a9a-b812-f5fa91f22623-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"0a23012e-ce8c-4a9a-b812-f5fa91f22623\") " pod="openstack/ovsdbserver-sb-1"
Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.686709 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qhsk\" (UniqueName: \"kubernetes.io/projected/fc9b2718-9b96-4d4b-ade8-5394392229f9-kube-api-access-5qhsk\") pod \"ovsdbserver-sb-0\" (UID: \"fc9b2718-9b96-4d4b-ade8-5394392229f9\") " pod="openstack/ovsdbserver-sb-0"
Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.688306 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4kvf\" (UniqueName: \"kubernetes.io/projected/0a23012e-ce8c-4a9a-b812-f5fa91f22623-kube-api-access-n4kvf\") pod \"ovsdbserver-sb-1\" (UID: \"0a23012e-ce8c-4a9a-b812-f5fa91f22623\") " pod="openstack/ovsdbserver-sb-1"
Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.704710 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-a530a51d-05d9-4a18-8684-cae2fc12d16a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a530a51d-05d9-4a18-8684-cae2fc12d16a\") pod \"ovsdbserver-sb-1\" (UID: \"0a23012e-ce8c-4a9a-b812-f5fa91f22623\") " pod="openstack/ovsdbserver-sb-1"
Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.722267 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-62b7a992-ef27-4834-9272-dd4a1cbb095f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-62b7a992-ef27-4834-9272-dd4a1cbb095f\") pod \"ovsdbserver-sb-0\" (UID: \"fc9b2718-9b96-4d4b-ade8-5394392229f9\") " pod="openstack/ovsdbserver-sb-0"
Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.760435 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a10eac92-4703-47fd-b022-0dcca527b076-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"a10eac92-4703-47fd-b022-0dcca527b076\") " pod="openstack/ovsdbserver-sb-2"
Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.760518 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a10eac92-4703-47fd-b022-0dcca527b076-config\") pod \"ovsdbserver-sb-2\" (UID: \"a10eac92-4703-47fd-b022-0dcca527b076\") " pod="openstack/ovsdbserver-sb-2"
Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.760587 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a10eac92-4703-47fd-b022-0dcca527b076-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"a10eac92-4703-47fd-b022-0dcca527b076\") " pod="openstack/ovsdbserver-sb-2"
Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.760620 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dss2\" (UniqueName: \"kubernetes.io/projected/a10eac92-4703-47fd-b022-0dcca527b076-kube-api-access-9dss2\") pod \"ovsdbserver-sb-2\" (UID: \"a10eac92-4703-47fd-b022-0dcca527b076\") " pod="openstack/ovsdbserver-sb-2"
Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.760678 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c7543f93-1868-4509-931c-6ec1a03131fa\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c7543f93-1868-4509-931c-6ec1a03131fa\") pod \"ovsdbserver-sb-2\" (UID: \"a10eac92-4703-47fd-b022-0dcca527b076\") " pod="openstack/ovsdbserver-sb-2"
Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.760704 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a10eac92-4703-47fd-b022-0dcca527b076-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"a10eac92-4703-47fd-b022-0dcca527b076\") " pod="openstack/ovsdbserver-sb-2"
Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.761609 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a10eac92-4703-47fd-b022-0dcca527b076-config\") pod \"ovsdbserver-sb-2\" (UID: \"a10eac92-4703-47fd-b022-0dcca527b076\") " pod="openstack/ovsdbserver-sb-2"
Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.762345 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a10eac92-4703-47fd-b022-0dcca527b076-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"a10eac92-4703-47fd-b022-0dcca527b076\") " pod="openstack/ovsdbserver-sb-2"
Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.762344 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a10eac92-4703-47fd-b022-0dcca527b076-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"a10eac92-4703-47fd-b022-0dcca527b076\") " pod="openstack/ovsdbserver-sb-2"
Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.763767 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.764838 5017 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.764874 5017 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c7543f93-1868-4509-931c-6ec1a03131fa\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c7543f93-1868-4509-931c-6ec1a03131fa\") pod \"ovsdbserver-sb-2\" (UID: \"a10eac92-4703-47fd-b022-0dcca527b076\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/ceadb3ea14d3777b0155d815b3fc096dc392ffa3b4e641031cb600f9ab25d930/globalmount\"" pod="openstack/ovsdbserver-sb-2"
Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.766176 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a10eac92-4703-47fd-b022-0dcca527b076-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"a10eac92-4703-47fd-b022-0dcca527b076\") " pod="openstack/ovsdbserver-sb-2"
Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.783809 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dss2\" (UniqueName: \"kubernetes.io/projected/a10eac92-4703-47fd-b022-0dcca527b076-kube-api-access-9dss2\") pod \"ovsdbserver-sb-2\" (UID: \"a10eac92-4703-47fd-b022-0dcca527b076\") " pod="openstack/ovsdbserver-sb-2"
Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.812387 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c7543f93-1868-4509-931c-6ec1a03131fa\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c7543f93-1868-4509-931c-6ec1a03131fa\") pod \"ovsdbserver-sb-2\" (UID: \"a10eac92-4703-47fd-b022-0dcca527b076\") " pod="openstack/ovsdbserver-sb-2"
Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.845363 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-1"
Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.852336 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-2"
Jan 29 07:59:57 crc kubenswrapper[5017]: I0129 07:59:57.883336 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-1"
Jan 29 07:59:58 crc kubenswrapper[5017]: I0129 07:59:58.157876 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Jan 29 07:59:58 crc kubenswrapper[5017]: I0129 07:59:58.272575 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-2"]
Jan 29 07:59:58 crc kubenswrapper[5017]: W0129 07:59:58.273078 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeac0b731_771a_4164_a5a1_f17bad61fb30.slice/crio-6921869d8d0e9f1fa1d3ea14b06544cf466df33dc241b2a8edb2a6208c00b874 WatchSource:0}: Error finding container 6921869d8d0e9f1fa1d3ea14b06544cf466df33dc241b2a8edb2a6208c00b874: Status 404 returned error can't find the container with id 6921869d8d0e9f1fa1d3ea14b06544cf466df33dc241b2a8edb2a6208c00b874
Jan 29 07:59:58 crc kubenswrapper[5017]: I0129 07:59:58.364725 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-1"]
Jan 29 07:59:58 crc kubenswrapper[5017]: I0129 07:59:58.369374 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"eec52b57-cfbe-49e2-aa22-112f785bff7c","Type":"ContainerStarted","Data":"304e808fa371e5039d0db6736cd414a9f862ea0ba34cf9ae3795614c2a456a4c"}
Jan 29 07:59:58 crc kubenswrapper[5017]: I0129 07:59:58.373716 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"eac0b731-771a-4164-a5a1-f17bad61fb30","Type":"ContainerStarted","Data":"6921869d8d0e9f1fa1d3ea14b06544cf466df33dc241b2a8edb2a6208c00b874"}
Jan 29 07:59:58 crc kubenswrapper[5017]: W0129 07:59:58.379941 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0a23012e_ce8c_4a9a_b812_f5fa91f22623.slice/crio-7828f6853acfbc39cef9f42709d493d60be1534200680a9e1f56275d922c76b4 WatchSource:0}: Error finding container 7828f6853acfbc39cef9f42709d493d60be1534200680a9e1f56275d922c76b4: Status 404 returned error can't find the container with id 7828f6853acfbc39cef9f42709d493d60be1534200680a9e1f56275d922c76b4
Jan 29 07:59:58 crc kubenswrapper[5017]: I0129 07:59:58.632015 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-1"]
Jan 29 07:59:59 crc kubenswrapper[5017]: I0129 07:59:59.390684 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"eec52b57-cfbe-49e2-aa22-112f785bff7c","Type":"ContainerStarted","Data":"ac3b1e4a10429c60655d7e41c1cc1ef92e9fac3cf6372b432b77753474f78c48"}
Jan 29 07:59:59 crc kubenswrapper[5017]: I0129 07:59:59.391244 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"eec52b57-cfbe-49e2-aa22-112f785bff7c","Type":"ContainerStarted","Data":"1d0363b0e1e5fcbfdafa908e499eb4230d92c1c61cb5cdf1a709e72048717eb3"}
Jan 29 07:59:59 crc kubenswrapper[5017]: I0129 07:59:59.394148 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"eac0b731-771a-4164-a5a1-f17bad61fb30","Type":"ContainerStarted","Data":"d1ddd9cfb8511d9b45d995b61fa891d500e50204e18e4e12b82d7669ef28415c"}
Jan 29 07:59:59 crc kubenswrapper[5017]: I0129 07:59:59.394201 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"eac0b731-771a-4164-a5a1-f17bad61fb30","Type":"ContainerStarted","Data":"5e4809f7eccf9dbd6f9cb4deea21674cc8d184d048905dae1c6c26d8aa44a1ed"}
Jan 29 07:59:59 crc kubenswrapper[5017]: I0129 07:59:59.400298 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"0a23012e-ce8c-4a9a-b812-f5fa91f22623","Type":"ContainerStarted","Data":"e3f2d82ce901e6ab2955b4f4b26fcaa97bcb29d6dd7bcb8727d83d278da30c74"}
Jan 29 07:59:59 crc kubenswrapper[5017]: I0129 07:59:59.400354 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"0a23012e-ce8c-4a9a-b812-f5fa91f22623","Type":"ContainerStarted","Data":"be6e28b527cd76a64151542358126189cce1242885ef6f2eb36fdaa4e80231a4"}
Jan 29 07:59:59 crc kubenswrapper[5017]: I0129 07:59:59.400371 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"0a23012e-ce8c-4a9a-b812-f5fa91f22623","Type":"ContainerStarted","Data":"7828f6853acfbc39cef9f42709d493d60be1534200680a9e1f56275d922c76b4"}
Jan 29 07:59:59 crc kubenswrapper[5017]: I0129 07:59:59.405589 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"02b0ddcd-bef7-4916-957c-f6b1aaa4fc4f","Type":"ContainerStarted","Data":"19a7575ea895e30a6d3c21a1b97f7a6274a446ef5823cabd04e0a1eef6a05016"}
Jan 29 07:59:59 crc kubenswrapper[5017]: I0129 07:59:59.405637 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"02b0ddcd-bef7-4916-957c-f6b1aaa4fc4f","Type":"ContainerStarted","Data":"8dbc55d730dbe31f9f320f138b01c394ccc3956d5652ef29e9e65ff2e394c3f5"}
Jan 29 07:59:59 crc kubenswrapper[5017]: I0129 07:59:59.405650 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"02b0ddcd-bef7-4916-957c-f6b1aaa4fc4f","Type":"ContainerStarted","Data":"ddc9217f18f68a28350e859cc7bedf0c9e232be859ded03e76c6d947bd8c47fa"}
Jan 29 07:59:59 crc kubenswrapper[5017]: I0129 07:59:59.416476 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=3.416447573 podStartE2EDuration="3.416447573s" podCreationTimestamp="2026-01-29 07:59:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 07:59:59.410399487 +0000 UTC m=+5085.784847097" watchObservedRunningTime="2026-01-29 07:59:59.416447573 +0000 UTC m=+5085.790895183"
Jan 29 07:59:59 crc kubenswrapper[5017]: I0129 07:59:59.440944 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-2" podStartSLOduration=3.44090624 podStartE2EDuration="3.44090624s" podCreationTimestamp="2026-01-29 07:59:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 07:59:59.42840738 +0000 UTC m=+5085.802854990" watchObservedRunningTime="2026-01-29 07:59:59.44090624 +0000 UTC m=+5085.815353870"
Jan 29 07:59:59 crc kubenswrapper[5017]: I0129 07:59:59.457949 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-1" podStartSLOduration=3.457921868 podStartE2EDuration="3.457921868s" podCreationTimestamp="2026-01-29 07:59:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 07:59:59.449383863 +0000 UTC m=+5085.823831463" watchObservedRunningTime="2026-01-29 07:59:59.457921868 +0000 UTC m=+5085.832369488"
Jan 29 07:59:59 crc kubenswrapper[5017]: W0129 07:59:59.473270 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfc9b2718_9b96_4d4b_ade8_5394392229f9.slice/crio-3847821498ed605fda9443e4130fe7b08cff342d0f0874fedba55b9214495542 WatchSource:0}: Error finding container 3847821498ed605fda9443e4130fe7b08cff342d0f0874fedba55b9214495542: Status 404 returned error can't find the container with id 3847821498ed605fda9443e4130fe7b08cff342d0f0874fedba55b9214495542
Jan 29 07:59:59 crc kubenswrapper[5017]: I0129 07:59:59.473382 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Jan 29 07:59:59 crc kubenswrapper[5017]: I0129 07:59:59.476263 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-1" podStartSLOduration=3.476233088 podStartE2EDuration="3.476233088s" podCreationTimestamp="2026-01-29 07:59:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 07:59:59.47379249 +0000 UTC m=+5085.848240100" watchObservedRunningTime="2026-01-29 07:59:59.476233088 +0000 UTC m=+5085.850680698"
Jan 29 07:59:59 crc kubenswrapper[5017]: I0129 07:59:59.645229 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-2"]
Jan 29 08:00:00 crc kubenswrapper[5017]: I0129 08:00:00.155601 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494560-z4d8w"]
Jan 29 08:00:00 crc kubenswrapper[5017]: I0129 08:00:00.158194 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494560-z4d8w"
Jan 29 08:00:00 crc kubenswrapper[5017]: I0129 08:00:00.161561 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 29 08:00:00 crc kubenswrapper[5017]: I0129 08:00:00.162456 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 29 08:00:00 crc kubenswrapper[5017]: I0129 08:00:00.165103 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494560-z4d8w"]
Jan 29 08:00:00 crc kubenswrapper[5017]: I0129 08:00:00.320236 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9rq9\" (UniqueName: \"kubernetes.io/projected/aa539280-1219-4242-8b9a-69ef09b61530-kube-api-access-g9rq9\") pod \"collect-profiles-29494560-z4d8w\" (UID: \"aa539280-1219-4242-8b9a-69ef09b61530\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494560-z4d8w"
Jan 29 08:00:00 crc kubenswrapper[5017]: I0129 08:00:00.320322 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/aa539280-1219-4242-8b9a-69ef09b61530-secret-volume\") pod \"collect-profiles-29494560-z4d8w\" (UID: \"aa539280-1219-4242-8b9a-69ef09b61530\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494560-z4d8w"
Jan 29 08:00:00 crc kubenswrapper[5017]: I0129 08:00:00.320443 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aa539280-1219-4242-8b9a-69ef09b61530-config-volume\") pod \"collect-profiles-29494560-z4d8w\" (UID: \"aa539280-1219-4242-8b9a-69ef09b61530\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494560-z4d8w"
Jan 29 08:00:00 crc kubenswrapper[5017]: I0129 08:00:00.415624 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"fc9b2718-9b96-4d4b-ade8-5394392229f9","Type":"ContainerStarted","Data":"d8f5f6d12d1c1ae25da8bd935b711bff164d21dbb064f2fb6d820f2e50f8b10a"}
Jan 29 08:00:00 crc kubenswrapper[5017]: I0129 08:00:00.415676 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"fc9b2718-9b96-4d4b-ade8-5394392229f9","Type":"ContainerStarted","Data":"cde710a627174755937779f33d5b0cdf86c0b76f02c5ed779d060af2e3d407b7"}
Jan 29 08:00:00 crc kubenswrapper[5017]: I0129 08:00:00.415691 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"fc9b2718-9b96-4d4b-ade8-5394392229f9","Type":"ContainerStarted","Data":"3847821498ed605fda9443e4130fe7b08cff342d0f0874fedba55b9214495542"}
Jan 29 08:00:00 crc kubenswrapper[5017]: I0129 08:00:00.419168 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"a10eac92-4703-47fd-b022-0dcca527b076","Type":"ContainerStarted","Data":"f4897fdfecf2892465bffffb415dc2ff9599c45922fc248648d6b078e14f43c8"}
Jan 29 08:00:00 crc kubenswrapper[5017]: I0129 08:00:00.419262 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"a10eac92-4703-47fd-b022-0dcca527b076","Type":"ContainerStarted","Data":"b7a159d5e6c59fc5bc913b49519a60c7d58e8c542ff007ccf82b85db6ee6a6e3"}
Jan 29 08:00:00 crc kubenswrapper[5017]: I0129 08:00:00.419276 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"a10eac92-4703-47fd-b022-0dcca527b076","Type":"ContainerStarted","Data":"ae43fd6582bbf856522d6fff9ca5b80290a08a9ff09c80aa92e980657384733e"}
Jan 29 08:00:00 crc kubenswrapper[5017]: I0129 08:00:00.421632 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/aa539280-1219-4242-8b9a-69ef09b61530-secret-volume\") pod \"collect-profiles-29494560-z4d8w\" (UID: \"aa539280-1219-4242-8b9a-69ef09b61530\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494560-z4d8w"
Jan 29 08:00:00 crc kubenswrapper[5017]: I0129 08:00:00.421718 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aa539280-1219-4242-8b9a-69ef09b61530-config-volume\") pod \"collect-profiles-29494560-z4d8w\" (UID: \"aa539280-1219-4242-8b9a-69ef09b61530\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494560-z4d8w"
Jan 29 08:00:00 crc kubenswrapper[5017]: I0129 08:00:00.421886 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9rq9\" (UniqueName: \"kubernetes.io/projected/aa539280-1219-4242-8b9a-69ef09b61530-kube-api-access-g9rq9\") pod \"collect-profiles-29494560-z4d8w\" (UID: \"aa539280-1219-4242-8b9a-69ef09b61530\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494560-z4d8w"
Jan 29 08:00:00 crc kubenswrapper[5017]: I0129 08:00:00.422919 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aa539280-1219-4242-8b9a-69ef09b61530-config-volume\") pod \"collect-profiles-29494560-z4d8w\" (UID: \"aa539280-1219-4242-8b9a-69ef09b61530\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494560-z4d8w"
Jan 29 08:00:00 crc kubenswrapper[5017]: I0129 08:00:00.430437 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/aa539280-1219-4242-8b9a-69ef09b61530-secret-volume\") pod \"collect-profiles-29494560-z4d8w\" (UID: \"aa539280-1219-4242-8b9a-69ef09b61530\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494560-z4d8w"
Jan 29 08:00:00 crc kubenswrapper[5017]: I0129 08:00:00.439861 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9rq9\" (UniqueName: \"kubernetes.io/projected/aa539280-1219-4242-8b9a-69ef09b61530-kube-api-access-g9rq9\") pod \"collect-profiles-29494560-z4d8w\" (UID: \"aa539280-1219-4242-8b9a-69ef09b61530\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494560-z4d8w"
Jan 29 08:00:00 crc kubenswrapper[5017]: I0129 08:00:00.446668 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=4.446645806 podStartE2EDuration="4.446645806s" podCreationTimestamp="2026-01-29 07:59:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:00:00.44138472 +0000 UTC m=+5086.815832330" watchObservedRunningTime="2026-01-29 08:00:00.446645806 +0000 UTC m=+5086.821093416"
Jan 29 08:00:00 crc kubenswrapper[5017]: I0129 08:00:00.493885 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494560-z4d8w"
Jan 29 08:00:00 crc kubenswrapper[5017]: I0129 08:00:00.571625 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0"
Jan 29 08:00:00 crc kubenswrapper[5017]: I0129 08:00:00.602567 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-2"
Jan 29 08:00:00 crc kubenswrapper[5017]: I0129 08:00:00.765356 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0"
Jan 29 08:00:00 crc kubenswrapper[5017]: I0129 08:00:00.846497 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-1"
Jan 29 08:00:00 crc kubenswrapper[5017]: I0129 08:00:00.853771 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-2"
Jan 29 08:00:00 crc kubenswrapper[5017]: I0129 08:00:00.884516 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-1"
Jan 29 08:00:00 crc kubenswrapper[5017]: I0129 08:00:00.948217 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-2" podStartSLOduration=4.948183458 podStartE2EDuration="4.948183458s" podCreationTimestamp="2026-01-29 07:59:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:00:00.466384501 +0000 UTC m=+5086.840832121" watchObservedRunningTime="2026-01-29 08:00:00.948183458 +0000 UTC m=+5087.322631068"
Jan 29 08:00:00 crc kubenswrapper[5017]: I0129 08:00:00.955707 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494560-z4d8w"]
Jan 29 08:00:00 crc kubenswrapper[5017]: W0129 08:00:00.962695 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaa539280_1219_4242_8b9a_69ef09b61530.slice/crio-3029031598a987feb59549d206b807ba948f6f3850d18d93cb8cbec1624e69e2 WatchSource:0}: Error finding container 3029031598a987feb59549d206b807ba948f6f3850d18d93cb8cbec1624e69e2: Status 404 returned error can't find the container with id 3029031598a987feb59549d206b807ba948f6f3850d18d93cb8cbec1624e69e2
Jan 29 08:00:01 crc kubenswrapper[5017]: I0129 08:00:01.431713 5017 generic.go:334] "Generic (PLEG): container finished" podID="aa539280-1219-4242-8b9a-69ef09b61530" containerID="72e3b9603cb9c971c389843f1acbd8f5e5d6e2e3ace0a0087a68893c902b9b3b" exitCode=0
Jan 29 08:00:01 crc kubenswrapper[5017]: I0129 08:00:01.432015 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494560-z4d8w" event={"ID":"aa539280-1219-4242-8b9a-69ef09b61530","Type":"ContainerDied","Data":"72e3b9603cb9c971c389843f1acbd8f5e5d6e2e3ace0a0087a68893c902b9b3b"}
Jan 29 08:00:01 crc kubenswrapper[5017]: I0129 08:00:01.432078 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494560-z4d8w" event={"ID":"aa539280-1219-4242-8b9a-69ef09b61530","Type":"ContainerStarted","Data":"3029031598a987feb59549d206b807ba948f6f3850d18d93cb8cbec1624e69e2"}
Jan 29 08:00:02 crc kubenswrapper[5017]: I0129 08:00:02.572240 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0"
Jan 29 08:00:02 crc kubenswrapper[5017]: I0129 08:00:02.603313 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-2"
Jan 29 08:00:02 crc kubenswrapper[5017]: I0129 08:00:02.733846 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494560-z4d8w"
Jan 29 08:00:02 crc kubenswrapper[5017]: I0129 08:00:02.765454 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0"
Jan 29 08:00:02 crc kubenswrapper[5017]: I0129 08:00:02.845947 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-1"
Jan 29 08:00:02 crc kubenswrapper[5017]: I0129 08:00:02.853387 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-2"
Jan 29 08:00:02 crc kubenswrapper[5017]: I0129 08:00:02.873091 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aa539280-1219-4242-8b9a-69ef09b61530-config-volume\") pod \"aa539280-1219-4242-8b9a-69ef09b61530\" (UID: \"aa539280-1219-4242-8b9a-69ef09b61530\") "
Jan 29 08:00:02 crc kubenswrapper[5017]: I0129 08:00:02.873332 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/aa539280-1219-4242-8b9a-69ef09b61530-secret-volume\") pod \"aa539280-1219-4242-8b9a-69ef09b61530\" (UID: \"aa539280-1219-4242-8b9a-69ef09b61530\") "
Jan 29 08:00:02 crc kubenswrapper[5017]: I0129 08:00:02.873449 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g9rq9\" (UniqueName: \"kubernetes.io/projected/aa539280-1219-4242-8b9a-69ef09b61530-kube-api-access-g9rq9\") pod \"aa539280-1219-4242-8b9a-69ef09b61530\" (UID: \"aa539280-1219-4242-8b9a-69ef09b61530\") "
Jan 29 08:00:02 crc kubenswrapper[5017]: I0129 08:00:02.874427 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa539280-1219-4242-8b9a-69ef09b61530-config-volume" (OuterVolumeSpecName: "config-volume") pod "aa539280-1219-4242-8b9a-69ef09b61530" (UID: "aa539280-1219-4242-8b9a-69ef09b61530"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 08:00:02 crc kubenswrapper[5017]: I0129 08:00:02.875481 5017 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aa539280-1219-4242-8b9a-69ef09b61530-config-volume\") on node \"crc\" DevicePath \"\""
Jan 29 08:00:02 crc kubenswrapper[5017]: I0129 08:00:02.881503 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa539280-1219-4242-8b9a-69ef09b61530-kube-api-access-g9rq9" (OuterVolumeSpecName: "kube-api-access-g9rq9") pod "aa539280-1219-4242-8b9a-69ef09b61530" (UID: "aa539280-1219-4242-8b9a-69ef09b61530"). InnerVolumeSpecName "kube-api-access-g9rq9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 08:00:02 crc kubenswrapper[5017]: I0129 08:00:02.881898 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa539280-1219-4242-8b9a-69ef09b61530-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "aa539280-1219-4242-8b9a-69ef09b61530" (UID: "aa539280-1219-4242-8b9a-69ef09b61530"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 08:00:02 crc kubenswrapper[5017]: I0129 08:00:02.883843 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-1"
Jan 29 08:00:02 crc kubenswrapper[5017]: I0129 08:00:02.978672 5017 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/aa539280-1219-4242-8b9a-69ef09b61530-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 29 08:00:02 crc kubenswrapper[5017]: I0129 08:00:02.978730 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g9rq9\" (UniqueName: \"kubernetes.io/projected/aa539280-1219-4242-8b9a-69ef09b61530-kube-api-access-g9rq9\") on node \"crc\" DevicePath \"\""
Jan 29 08:00:03 crc kubenswrapper[5017]: I0129 08:00:03.463617 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494560-z4d8w"
Jan 29 08:00:03 crc kubenswrapper[5017]: I0129 08:00:03.464776 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494560-z4d8w" event={"ID":"aa539280-1219-4242-8b9a-69ef09b61530","Type":"ContainerDied","Data":"3029031598a987feb59549d206b807ba948f6f3850d18d93cb8cbec1624e69e2"}
Jan 29 08:00:03 crc kubenswrapper[5017]: I0129 08:00:03.464823 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3029031598a987feb59549d206b807ba948f6f3850d18d93cb8cbec1624e69e2"
Jan 29 08:00:03 crc kubenswrapper[5017]: I0129 08:00:03.619212 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0"
Jan 29 08:00:03 crc kubenswrapper[5017]: I0129 08:00:03.652289 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-2"
Jan 29 08:00:03 crc kubenswrapper[5017]: I0129 08:00:03.666513 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0"
Jan 29 08:00:03 crc kubenswrapper[5017]: I0129 08:00:03.710988 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-2"
Jan 29 08:00:03 crc kubenswrapper[5017]: I0129 08:00:03.819766 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0"
Jan 29 08:00:03 crc kubenswrapper[5017]: I0129 08:00:03.870776 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494515-69fd9"]
Jan 29 08:00:03 crc kubenswrapper[5017]: I0129 08:00:03.879387 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494515-69fd9"]
Jan 29 08:00:03 crc kubenswrapper[5017]: I0129 08:00:03.921350 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-2"
Jan 29 08:00:03 crc kubenswrapper[5017]: I0129 08:00:03.921441 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-1"
Jan 29 08:00:03 crc kubenswrapper[5017]: I0129 08:00:03.931221 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-74dfc458f9-62lbc"]
Jan 29 08:00:03 crc kubenswrapper[5017]: E0129 08:00:03.931679 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa539280-1219-4242-8b9a-69ef09b61530" containerName="collect-profiles"
Jan 29 08:00:03 crc kubenswrapper[5017]: I0129 08:00:03.931705 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa539280-1219-4242-8b9a-69ef09b61530" containerName="collect-profiles"
Jan 29 08:00:03 crc kubenswrapper[5017]: I0129 08:00:03.931894 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa539280-1219-4242-8b9a-69ef09b61530" containerName="collect-profiles"
Jan 29 08:00:03 crc kubenswrapper[5017]: I0129 08:00:03.933304 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74dfc458f9-62lbc"
Jan 29 08:00:03 crc kubenswrapper[5017]: I0129 08:00:03.942688 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb"
Jan 29 08:00:03 crc kubenswrapper[5017]: I0129 08:00:03.955657 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74dfc458f9-62lbc"]
Jan 29 08:00:03 crc kubenswrapper[5017]: I0129 08:00:03.970041 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-1"
Jan 29 08:00:03 crc kubenswrapper[5017]: I0129 08:00:03.993072 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-1"
Jan 29 08:00:04 crc kubenswrapper[5017]: I0129 08:00:04.001746 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpdnp\" (UniqueName: \"kubernetes.io/projected/a05d3c64-107b-49c2-b034-260e328ee015-kube-api-access-dpdnp\") pod \"dnsmasq-dns-74dfc458f9-62lbc\" (UID: \"a05d3c64-107b-49c2-b034-260e328ee015\") " pod="openstack/dnsmasq-dns-74dfc458f9-62lbc"
Jan 29 08:00:04 crc kubenswrapper[5017]: I0129 08:00:04.001847 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a05d3c64-107b-49c2-b034-260e328ee015-config\") pod \"dnsmasq-dns-74dfc458f9-62lbc\" (UID: \"a05d3c64-107b-49c2-b034-260e328ee015\") " pod="openstack/dnsmasq-dns-74dfc458f9-62lbc"
Jan 29 08:00:04 crc kubenswrapper[5017]: I0129 08:00:04.001954 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a05d3c64-107b-49c2-b034-260e328ee015-dns-svc\") pod \"dnsmasq-dns-74dfc458f9-62lbc\" (UID: \"a05d3c64-107b-49c2-b034-260e328ee015\") " pod="openstack/dnsmasq-dns-74dfc458f9-62lbc"
Jan 29 08:00:04 crc kubenswrapper[5017]: I0129 08:00:04.002032 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a05d3c64-107b-49c2-b034-260e328ee015-ovsdbserver-nb\") pod \"dnsmasq-dns-74dfc458f9-62lbc\" (UID: \"a05d3c64-107b-49c2-b034-260e328ee015\") " pod="openstack/dnsmasq-dns-74dfc458f9-62lbc"
Jan 29 08:00:04 crc kubenswrapper[5017]: I0129 08:00:04.014064 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-1"
Jan 29 08:00:04 crc kubenswrapper[5017]: I0129 08:00:04.103721 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a05d3c64-107b-49c2-b034-260e328ee015-dns-svc\") pod \"dnsmasq-dns-74dfc458f9-62lbc\" (UID: \"a05d3c64-107b-49c2-b034-260e328ee015\") " pod="openstack/dnsmasq-dns-74dfc458f9-62lbc"
Jan 29 08:00:04 crc kubenswrapper[5017]: I0129 08:00:04.103810 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a05d3c64-107b-49c2-b034-260e328ee015-ovsdbserver-nb\") pod \"dnsmasq-dns-74dfc458f9-62lbc\" (UID: \"a05d3c64-107b-49c2-b034-260e328ee015\") " pod="openstack/dnsmasq-dns-74dfc458f9-62lbc"
Jan 29 08:00:04 crc kubenswrapper[5017]: I0129 08:00:04.103890 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dpdnp\" (UniqueName: \"kubernetes.io/projected/a05d3c64-107b-49c2-b034-260e328ee015-kube-api-access-dpdnp\") pod \"dnsmasq-dns-74dfc458f9-62lbc\" (UID: \"a05d3c64-107b-49c2-b034-260e328ee015\") " pod="openstack/dnsmasq-dns-74dfc458f9-62lbc"
Jan 29 08:00:04 crc kubenswrapper[5017]: I0129 08:00:04.103934 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a05d3c64-107b-49c2-b034-260e328ee015-config\") pod \"dnsmasq-dns-74dfc458f9-62lbc\" (UID: \"a05d3c64-107b-49c2-b034-260e328ee015\") " pod="openstack/dnsmasq-dns-74dfc458f9-62lbc"
Jan 29 08:00:04 crc kubenswrapper[5017]: I0129 08:00:04.106625 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a05d3c64-107b-49c2-b034-260e328ee015-dns-svc\") pod \"dnsmasq-dns-74dfc458f9-62lbc\" (UID: \"a05d3c64-107b-49c2-b034-260e328ee015\") " pod="openstack/dnsmasq-dns-74dfc458f9-62lbc"
Jan 29 08:00:04 crc kubenswrapper[5017]: I0129 08:00:04.107813 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a05d3c64-107b-49c2-b034-260e328ee015-ovsdbserver-nb\") pod \"dnsmasq-dns-74dfc458f9-62lbc\" (UID: \"a05d3c64-107b-49c2-b034-260e328ee015\") " pod="openstack/dnsmasq-dns-74dfc458f9-62lbc"
Jan 29 08:00:04 crc kubenswrapper[5017]: I0129 08:00:04.109768 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a05d3c64-107b-49c2-b034-260e328ee015-config\") pod \"dnsmasq-dns-74dfc458f9-62lbc\" (UID: \"a05d3c64-107b-49c2-b034-260e328ee015\") " pod="openstack/dnsmasq-dns-74dfc458f9-62lbc"
Jan 29 08:00:04 crc kubenswrapper[5017]: I0129 08:00:04.135071 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dpdnp\" (UniqueName: \"kubernetes.io/projected/a05d3c64-107b-49c2-b034-260e328ee015-kube-api-access-dpdnp\") pod \"dnsmasq-dns-74dfc458f9-62lbc\" (UID: \"a05d3c64-107b-49c2-b034-260e328ee015\") " pod="openstack/dnsmasq-dns-74dfc458f9-62lbc"
Jan 29 08:00:04 crc kubenswrapper[5017]: I0129 08:00:04.261312 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74dfc458f9-62lbc"
Jan 29 08:00:04 crc kubenswrapper[5017]: I0129 08:00:04.361043 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb82d2cb-9ac6-4e55-8ea1-d1b2b23d78ad" path="/var/lib/kubelet/pods/eb82d2cb-9ac6-4e55-8ea1-d1b2b23d78ad/volumes"
Jan 29 08:00:04 crc kubenswrapper[5017]: I0129 08:00:04.447449 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74dfc458f9-62lbc"]
Jan 29 08:00:04 crc kubenswrapper[5017]: I0129 08:00:04.483158 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-848d7d5c67-28tlc"]
Jan 29 08:00:04 crc kubenswrapper[5017]: I0129 08:00:04.493363 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-848d7d5c67-28tlc" Jan 29 08:00:04 crc kubenswrapper[5017]: I0129 08:00:04.497943 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 29 08:00:04 crc kubenswrapper[5017]: I0129 08:00:04.518784 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-848d7d5c67-28tlc"] Jan 29 08:00:04 crc kubenswrapper[5017]: I0129 08:00:04.574406 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-2" Jan 29 08:00:04 crc kubenswrapper[5017]: I0129 08:00:04.607068 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 29 08:00:04 crc kubenswrapper[5017]: I0129 08:00:04.721527 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0e3a2105-3f64-43fd-986a-7d91a611b845-ovsdbserver-sb\") pod \"dnsmasq-dns-848d7d5c67-28tlc\" (UID: \"0e3a2105-3f64-43fd-986a-7d91a611b845\") " pod="openstack/dnsmasq-dns-848d7d5c67-28tlc" Jan 29 08:00:04 crc kubenswrapper[5017]: I0129 08:00:04.721649 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0e3a2105-3f64-43fd-986a-7d91a611b845-ovsdbserver-nb\") pod \"dnsmasq-dns-848d7d5c67-28tlc\" (UID: \"0e3a2105-3f64-43fd-986a-7d91a611b845\") " pod="openstack/dnsmasq-dns-848d7d5c67-28tlc" Jan 29 08:00:04 crc kubenswrapper[5017]: I0129 08:00:04.721784 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0e3a2105-3f64-43fd-986a-7d91a611b845-dns-svc\") pod \"dnsmasq-dns-848d7d5c67-28tlc\" (UID: \"0e3a2105-3f64-43fd-986a-7d91a611b845\") " pod="openstack/dnsmasq-dns-848d7d5c67-28tlc" Jan 29 08:00:04 crc kubenswrapper[5017]: I0129 08:00:04.721918 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e3a2105-3f64-43fd-986a-7d91a611b845-config\") pod \"dnsmasq-dns-848d7d5c67-28tlc\" (UID: \"0e3a2105-3f64-43fd-986a-7d91a611b845\") " pod="openstack/dnsmasq-dns-848d7d5c67-28tlc" Jan 29 08:00:04 crc kubenswrapper[5017]: I0129 08:00:04.721943 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w44t5\" (UniqueName: \"kubernetes.io/projected/0e3a2105-3f64-43fd-986a-7d91a611b845-kube-api-access-w44t5\") pod \"dnsmasq-dns-848d7d5c67-28tlc\" (UID: \"0e3a2105-3f64-43fd-986a-7d91a611b845\") " pod="openstack/dnsmasq-dns-848d7d5c67-28tlc" Jan 29 08:00:04 crc kubenswrapper[5017]: I0129 08:00:04.823301 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e3a2105-3f64-43fd-986a-7d91a611b845-config\") pod \"dnsmasq-dns-848d7d5c67-28tlc\" (UID: \"0e3a2105-3f64-43fd-986a-7d91a611b845\") " pod="openstack/dnsmasq-dns-848d7d5c67-28tlc" Jan 29 08:00:04 crc kubenswrapper[5017]: I0129 08:00:04.823355 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w44t5\" (UniqueName: \"kubernetes.io/projected/0e3a2105-3f64-43fd-986a-7d91a611b845-kube-api-access-w44t5\") pod \"dnsmasq-dns-848d7d5c67-28tlc\" (UID: \"0e3a2105-3f64-43fd-986a-7d91a611b845\") " pod="openstack/dnsmasq-dns-848d7d5c67-28tlc" Jan 29 08:00:04 
crc kubenswrapper[5017]: I0129 08:00:04.823430 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0e3a2105-3f64-43fd-986a-7d91a611b845-ovsdbserver-sb\") pod \"dnsmasq-dns-848d7d5c67-28tlc\" (UID: \"0e3a2105-3f64-43fd-986a-7d91a611b845\") " pod="openstack/dnsmasq-dns-848d7d5c67-28tlc" Jan 29 08:00:04 crc kubenswrapper[5017]: I0129 08:00:04.823458 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0e3a2105-3f64-43fd-986a-7d91a611b845-ovsdbserver-nb\") pod \"dnsmasq-dns-848d7d5c67-28tlc\" (UID: \"0e3a2105-3f64-43fd-986a-7d91a611b845\") " pod="openstack/dnsmasq-dns-848d7d5c67-28tlc" Jan 29 08:00:04 crc kubenswrapper[5017]: I0129 08:00:04.823504 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0e3a2105-3f64-43fd-986a-7d91a611b845-dns-svc\") pod \"dnsmasq-dns-848d7d5c67-28tlc\" (UID: \"0e3a2105-3f64-43fd-986a-7d91a611b845\") " pod="openstack/dnsmasq-dns-848d7d5c67-28tlc" Jan 29 08:00:04 crc kubenswrapper[5017]: I0129 08:00:04.824622 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e3a2105-3f64-43fd-986a-7d91a611b845-config\") pod \"dnsmasq-dns-848d7d5c67-28tlc\" (UID: \"0e3a2105-3f64-43fd-986a-7d91a611b845\") " pod="openstack/dnsmasq-dns-848d7d5c67-28tlc" Jan 29 08:00:04 crc kubenswrapper[5017]: I0129 08:00:04.824622 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0e3a2105-3f64-43fd-986a-7d91a611b845-dns-svc\") pod \"dnsmasq-dns-848d7d5c67-28tlc\" (UID: \"0e3a2105-3f64-43fd-986a-7d91a611b845\") " pod="openstack/dnsmasq-dns-848d7d5c67-28tlc" Jan 29 08:00:04 crc kubenswrapper[5017]: I0129 08:00:04.825326 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0e3a2105-3f64-43fd-986a-7d91a611b845-ovsdbserver-nb\") pod \"dnsmasq-dns-848d7d5c67-28tlc\" (UID: \"0e3a2105-3f64-43fd-986a-7d91a611b845\") " pod="openstack/dnsmasq-dns-848d7d5c67-28tlc" Jan 29 08:00:04 crc kubenswrapper[5017]: I0129 08:00:04.825539 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0e3a2105-3f64-43fd-986a-7d91a611b845-ovsdbserver-sb\") pod \"dnsmasq-dns-848d7d5c67-28tlc\" (UID: \"0e3a2105-3f64-43fd-986a-7d91a611b845\") " pod="openstack/dnsmasq-dns-848d7d5c67-28tlc" Jan 29 08:00:04 crc kubenswrapper[5017]: I0129 08:00:04.862819 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w44t5\" (UniqueName: \"kubernetes.io/projected/0e3a2105-3f64-43fd-986a-7d91a611b845-kube-api-access-w44t5\") pod \"dnsmasq-dns-848d7d5c67-28tlc\" (UID: \"0e3a2105-3f64-43fd-986a-7d91a611b845\") " pod="openstack/dnsmasq-dns-848d7d5c67-28tlc" Jan 29 08:00:04 crc kubenswrapper[5017]: I0129 08:00:04.927131 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74dfc458f9-62lbc"] Jan 29 08:00:05 crc kubenswrapper[5017]: I0129 08:00:05.128257 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-848d7d5c67-28tlc" Jan 29 08:00:05 crc kubenswrapper[5017]: I0129 08:00:05.486833 5017 generic.go:334] "Generic (PLEG): container finished" podID="a05d3c64-107b-49c2-b034-260e328ee015" containerID="bb04b351c1f1ec342482d3dd29da9418be69fdbe9895bd268e37d6d8a831d615" exitCode=0 Jan 29 08:00:05 crc kubenswrapper[5017]: I0129 08:00:05.486997 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dfc458f9-62lbc" event={"ID":"a05d3c64-107b-49c2-b034-260e328ee015","Type":"ContainerDied","Data":"bb04b351c1f1ec342482d3dd29da9418be69fdbe9895bd268e37d6d8a831d615"} Jan 29 08:00:05 crc kubenswrapper[5017]: I0129 08:00:05.487456 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dfc458f9-62lbc" event={"ID":"a05d3c64-107b-49c2-b034-260e328ee015","Type":"ContainerStarted","Data":"654ce7498f6afe0d468014d16a17638aa82a857156e705a11f5128bed4395456"} Jan 29 08:00:05 crc kubenswrapper[5017]: I0129 08:00:05.627580 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-848d7d5c67-28tlc"] Jan 29 08:00:05 crc kubenswrapper[5017]: I0129 08:00:05.803916 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74dfc458f9-62lbc" Jan 29 08:00:05 crc kubenswrapper[5017]: I0129 08:00:05.844563 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dpdnp\" (UniqueName: \"kubernetes.io/projected/a05d3c64-107b-49c2-b034-260e328ee015-kube-api-access-dpdnp\") pod \"a05d3c64-107b-49c2-b034-260e328ee015\" (UID: \"a05d3c64-107b-49c2-b034-260e328ee015\") " Jan 29 08:00:05 crc kubenswrapper[5017]: I0129 08:00:05.844801 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a05d3c64-107b-49c2-b034-260e328ee015-dns-svc\") pod \"a05d3c64-107b-49c2-b034-260e328ee015\" (UID: \"a05d3c64-107b-49c2-b034-260e328ee015\") " Jan 29 08:00:05 crc kubenswrapper[5017]: I0129 08:00:05.844980 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a05d3c64-107b-49c2-b034-260e328ee015-ovsdbserver-nb\") pod \"a05d3c64-107b-49c2-b034-260e328ee015\" (UID: \"a05d3c64-107b-49c2-b034-260e328ee015\") " Jan 29 08:00:05 crc kubenswrapper[5017]: I0129 08:00:05.845026 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a05d3c64-107b-49c2-b034-260e328ee015-config\") pod \"a05d3c64-107b-49c2-b034-260e328ee015\" (UID: \"a05d3c64-107b-49c2-b034-260e328ee015\") " Jan 29 08:00:05 crc kubenswrapper[5017]: I0129 08:00:05.851794 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a05d3c64-107b-49c2-b034-260e328ee015-kube-api-access-dpdnp" (OuterVolumeSpecName: "kube-api-access-dpdnp") pod "a05d3c64-107b-49c2-b034-260e328ee015" (UID: "a05d3c64-107b-49c2-b034-260e328ee015"). InnerVolumeSpecName "kube-api-access-dpdnp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:00:05 crc kubenswrapper[5017]: I0129 08:00:05.872038 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a05d3c64-107b-49c2-b034-260e328ee015-config" (OuterVolumeSpecName: "config") pod "a05d3c64-107b-49c2-b034-260e328ee015" (UID: "a05d3c64-107b-49c2-b034-260e328ee015"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:00:05 crc kubenswrapper[5017]: I0129 08:00:05.872453 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a05d3c64-107b-49c2-b034-260e328ee015-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a05d3c64-107b-49c2-b034-260e328ee015" (UID: "a05d3c64-107b-49c2-b034-260e328ee015"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:00:05 crc kubenswrapper[5017]: I0129 08:00:05.876612 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a05d3c64-107b-49c2-b034-260e328ee015-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a05d3c64-107b-49c2-b034-260e328ee015" (UID: "a05d3c64-107b-49c2-b034-260e328ee015"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:00:05 crc kubenswrapper[5017]: I0129 08:00:05.947154 5017 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a05d3c64-107b-49c2-b034-260e328ee015-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 08:00:05 crc kubenswrapper[5017]: I0129 08:00:05.947213 5017 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a05d3c64-107b-49c2-b034-260e328ee015-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 08:00:05 crc kubenswrapper[5017]: I0129 08:00:05.947227 5017 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a05d3c64-107b-49c2-b034-260e328ee015-config\") on node \"crc\" DevicePath \"\"" Jan 29 08:00:05 crc kubenswrapper[5017]: I0129 08:00:05.947241 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dpdnp\" (UniqueName: \"kubernetes.io/projected/a05d3c64-107b-49c2-b034-260e328ee015-kube-api-access-dpdnp\") on node \"crc\" DevicePath \"\"" Jan 29 08:00:06 crc kubenswrapper[5017]: E0129 08:00:06.487517 5017 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda05d3c64_107b_49c2_b034_260e328ee015.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda05d3c64_107b_49c2_b034_260e328ee015.slice/crio-654ce7498f6afe0d468014d16a17638aa82a857156e705a11f5128bed4395456\": RecentStats: unable to find data in memory cache]" Jan 29 08:00:06 crc kubenswrapper[5017]: I0129 08:00:06.513473 5017 generic.go:334] "Generic (PLEG): container finished" podID="0e3a2105-3f64-43fd-986a-7d91a611b845" containerID="6b090cb752f37187e942be7b07fe28991e5c0b14eabc92496a9072ffa22421e0" exitCode=0 Jan 29 08:00:06 crc kubenswrapper[5017]: I0129 08:00:06.513604 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848d7d5c67-28tlc" event={"ID":"0e3a2105-3f64-43fd-986a-7d91a611b845","Type":"ContainerDied","Data":"6b090cb752f37187e942be7b07fe28991e5c0b14eabc92496a9072ffa22421e0"} Jan 29 08:00:06 crc kubenswrapper[5017]: I0129 08:00:06.513658 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848d7d5c67-28tlc" event={"ID":"0e3a2105-3f64-43fd-986a-7d91a611b845","Type":"ContainerStarted","Data":"b58ebe51d2671f9a153b13bfbe45e95c919612fcffa054be0210a77799145d54"} Jan 29 08:00:06 crc kubenswrapper[5017]: I0129 08:00:06.518266 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-74dfc458f9-62lbc" event={"ID":"a05d3c64-107b-49c2-b034-260e328ee015","Type":"ContainerDied","Data":"654ce7498f6afe0d468014d16a17638aa82a857156e705a11f5128bed4395456"} Jan 29 08:00:06 crc kubenswrapper[5017]: I0129 08:00:06.518393 5017 scope.go:117] "RemoveContainer" containerID="bb04b351c1f1ec342482d3dd29da9418be69fdbe9895bd268e37d6d8a831d615" Jan 29 08:00:06 crc kubenswrapper[5017]: I0129 08:00:06.518422 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74dfc458f9-62lbc" Jan 29 08:00:06 crc kubenswrapper[5017]: I0129 08:00:06.612428 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74dfc458f9-62lbc"] Jan 29 08:00:06 crc kubenswrapper[5017]: I0129 08:00:06.638235 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-74dfc458f9-62lbc"] Jan 29 08:00:07 crc kubenswrapper[5017]: I0129 08:00:07.315595 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-copy-data"] Jan 29 08:00:07 crc kubenswrapper[5017]: E0129 08:00:07.317728 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a05d3c64-107b-49c2-b034-260e328ee015" containerName="init" Jan 29 08:00:07 crc kubenswrapper[5017]: I0129 08:00:07.317778 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="a05d3c64-107b-49c2-b034-260e328ee015" containerName="init" Jan 29 08:00:07 crc kubenswrapper[5017]: I0129 08:00:07.317994 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="a05d3c64-107b-49c2-b034-260e328ee015" containerName="init" Jan 29 08:00:07 crc kubenswrapper[5017]: I0129 08:00:07.319144 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-copy-data" Jan 29 08:00:07 crc kubenswrapper[5017]: I0129 08:00:07.321773 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovn-data-cert" Jan 29 08:00:07 crc kubenswrapper[5017]: I0129 08:00:07.327827 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-copy-data"] Jan 29 08:00:07 crc kubenswrapper[5017]: I0129 08:00:07.380890 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lb8kt\" (UniqueName: \"kubernetes.io/projected/e31ff6be-5757-4479-85a9-1fe9a40834a3-kube-api-access-lb8kt\") pod \"ovn-copy-data\" (UID: \"e31ff6be-5757-4479-85a9-1fe9a40834a3\") " pod="openstack/ovn-copy-data" Jan 29 08:00:07 crc kubenswrapper[5017]: I0129 08:00:07.381057 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/e31ff6be-5757-4479-85a9-1fe9a40834a3-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"e31ff6be-5757-4479-85a9-1fe9a40834a3\") " pod="openstack/ovn-copy-data" Jan 29 08:00:07 crc kubenswrapper[5017]: I0129 08:00:07.381372 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b9e46e00-e9be-421e-bcbf-0b561dcfdd9e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b9e46e00-e9be-421e-bcbf-0b561dcfdd9e\") pod \"ovn-copy-data\" (UID: \"e31ff6be-5757-4479-85a9-1fe9a40834a3\") " pod="openstack/ovn-copy-data" Jan 29 08:00:07 crc kubenswrapper[5017]: I0129 08:00:07.483910 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b9e46e00-e9be-421e-bcbf-0b561dcfdd9e\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b9e46e00-e9be-421e-bcbf-0b561dcfdd9e\") pod \"ovn-copy-data\" (UID: \"e31ff6be-5757-4479-85a9-1fe9a40834a3\") " pod="openstack/ovn-copy-data" Jan 29 08:00:07 crc kubenswrapper[5017]: I0129 08:00:07.484118 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lb8kt\" (UniqueName: \"kubernetes.io/projected/e31ff6be-5757-4479-85a9-1fe9a40834a3-kube-api-access-lb8kt\") pod \"ovn-copy-data\" (UID: \"e31ff6be-5757-4479-85a9-1fe9a40834a3\") " pod="openstack/ovn-copy-data" Jan 29 08:00:07 crc kubenswrapper[5017]: I0129 08:00:07.484177 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/e31ff6be-5757-4479-85a9-1fe9a40834a3-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"e31ff6be-5757-4479-85a9-1fe9a40834a3\") " pod="openstack/ovn-copy-data" Jan 29 08:00:07 crc kubenswrapper[5017]: I0129 08:00:07.490721 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/e31ff6be-5757-4479-85a9-1fe9a40834a3-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"e31ff6be-5757-4479-85a9-1fe9a40834a3\") " pod="openstack/ovn-copy-data" Jan 29 08:00:07 crc kubenswrapper[5017]: I0129 08:00:07.496582 5017 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 29 08:00:07 crc kubenswrapper[5017]: I0129 08:00:07.496645 5017 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b9e46e00-e9be-421e-bcbf-0b561dcfdd9e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b9e46e00-e9be-421e-bcbf-0b561dcfdd9e\") pod \"ovn-copy-data\" (UID: \"e31ff6be-5757-4479-85a9-1fe9a40834a3\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/383b7cf922d658d22bc3d2485e1e84a81daf6738fa47943b367b30989d83c5b6/globalmount\"" pod="openstack/ovn-copy-data" Jan 29 08:00:07 crc kubenswrapper[5017]: I0129 08:00:07.504436 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lb8kt\" (UniqueName: \"kubernetes.io/projected/e31ff6be-5757-4479-85a9-1fe9a40834a3-kube-api-access-lb8kt\") pod \"ovn-copy-data\" (UID: \"e31ff6be-5757-4479-85a9-1fe9a40834a3\") " pod="openstack/ovn-copy-data" Jan 29 08:00:07 crc kubenswrapper[5017]: I0129 08:00:07.531573 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848d7d5c67-28tlc" event={"ID":"0e3a2105-3f64-43fd-986a-7d91a611b845","Type":"ContainerStarted","Data":"21f901b2adbe83f245cd62b54f0de7c986d54eaa6d4a783ae3f353b966b4159e"} Jan 29 08:00:07 crc kubenswrapper[5017]: I0129 08:00:07.532317 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-848d7d5c67-28tlc" Jan 29 08:00:07 crc kubenswrapper[5017]: I0129 08:00:07.546921 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b9e46e00-e9be-421e-bcbf-0b561dcfdd9e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b9e46e00-e9be-421e-bcbf-0b561dcfdd9e\") pod \"ovn-copy-data\" (UID: \"e31ff6be-5757-4479-85a9-1fe9a40834a3\") " pod="openstack/ovn-copy-data" Jan 29 08:00:07 crc kubenswrapper[5017]: I0129 08:00:07.561995 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-848d7d5c67-28tlc" podStartSLOduration=3.561970175 podStartE2EDuration="3.561970175s" 
podCreationTimestamp="2026-01-29 08:00:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:00:07.556650328 +0000 UTC m=+5093.931097938" watchObservedRunningTime="2026-01-29 08:00:07.561970175 +0000 UTC m=+5093.936417785" Jan 29 08:00:07 crc kubenswrapper[5017]: I0129 08:00:07.646018 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-copy-data" Jan 29 08:00:08 crc kubenswrapper[5017]: I0129 08:00:08.172907 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-copy-data"] Jan 29 08:00:08 crc kubenswrapper[5017]: I0129 08:00:08.177499 5017 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 08:00:08 crc kubenswrapper[5017]: I0129 08:00:08.317274 5017 scope.go:117] "RemoveContainer" containerID="c58783d1bd795ffac490b01aaccb060f3071e08daa88d33a09f7ce8715d64300" Jan 29 08:00:08 crc kubenswrapper[5017]: E0129 08:00:08.317582 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:00:08 crc kubenswrapper[5017]: I0129 08:00:08.327781 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a05d3c64-107b-49c2-b034-260e328ee015" path="/var/lib/kubelet/pods/a05d3c64-107b-49c2-b034-260e328ee015/volumes" Jan 29 08:00:08 crc kubenswrapper[5017]: I0129 08:00:08.545703 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"e31ff6be-5757-4479-85a9-1fe9a40834a3","Type":"ContainerStarted","Data":"a362d5b67e684be385f2b2ebe0c74080586725e38e59db4fc029d981ce2fde01"} Jan 29 08:00:09 crc kubenswrapper[5017]: I0129 08:00:09.557147 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"e31ff6be-5757-4479-85a9-1fe9a40834a3","Type":"ContainerStarted","Data":"cb42e665f80863fdfc378f03c76875dabd8c80f73b7a97d1c1735ac7262c7224"} Jan 29 08:00:09 crc kubenswrapper[5017]: I0129 08:00:09.575367 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-copy-data" podStartSLOduration=3.005318418 podStartE2EDuration="3.575342103s" podCreationTimestamp="2026-01-29 08:00:06 +0000 UTC" firstStartedPulling="2026-01-29 08:00:08.177180196 +0000 UTC m=+5094.551627816" lastFinishedPulling="2026-01-29 08:00:08.747203891 +0000 UTC m=+5095.121651501" observedRunningTime="2026-01-29 08:00:09.571474871 +0000 UTC m=+5095.945922501" watchObservedRunningTime="2026-01-29 08:00:09.575342103 +0000 UTC m=+5095.949789713" Jan 29 08:00:14 crc kubenswrapper[5017]: I0129 08:00:14.406029 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 29 08:00:14 crc kubenswrapper[5017]: I0129 08:00:14.428031 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 29 08:00:14 crc kubenswrapper[5017]: I0129 08:00:14.433718 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-2kcr2" Jan 29 08:00:14 crc kubenswrapper[5017]: I0129 08:00:14.435034 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 29 08:00:14 crc kubenswrapper[5017]: I0129 08:00:14.441281 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 29 08:00:14 crc kubenswrapper[5017]: I0129 08:00:14.453274 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 29 08:00:14 crc kubenswrapper[5017]: I0129 08:00:14.621315 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ed6df609-936e-4744-b4e8-d1ad883e850d-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"ed6df609-936e-4744-b4e8-d1ad883e850d\") " pod="openstack/ovn-northd-0" Jan 29 08:00:14 crc kubenswrapper[5017]: I0129 08:00:14.621640 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j87n8\" (UniqueName: \"kubernetes.io/projected/ed6df609-936e-4744-b4e8-d1ad883e850d-kube-api-access-j87n8\") pod \"ovn-northd-0\" (UID: \"ed6df609-936e-4744-b4e8-d1ad883e850d\") " pod="openstack/ovn-northd-0" Jan 29 08:00:14 crc kubenswrapper[5017]: I0129 08:00:14.621823 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed6df609-936e-4744-b4e8-d1ad883e850d-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"ed6df609-936e-4744-b4e8-d1ad883e850d\") " pod="openstack/ovn-northd-0" Jan 29 08:00:14 crc kubenswrapper[5017]: I0129 08:00:14.622000 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed6df609-936e-4744-b4e8-d1ad883e850d-config\") pod \"ovn-northd-0\" (UID: \"ed6df609-936e-4744-b4e8-d1ad883e850d\") " pod="openstack/ovn-northd-0" Jan 29 08:00:14 crc kubenswrapper[5017]: I0129 08:00:14.622147 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ed6df609-936e-4744-b4e8-d1ad883e850d-scripts\") pod \"ovn-northd-0\" (UID: \"ed6df609-936e-4744-b4e8-d1ad883e850d\") " pod="openstack/ovn-northd-0" Jan 29 08:00:14 crc kubenswrapper[5017]: I0129 08:00:14.724781 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed6df609-936e-4744-b4e8-d1ad883e850d-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"ed6df609-936e-4744-b4e8-d1ad883e850d\") " pod="openstack/ovn-northd-0" Jan 29 08:00:14 crc kubenswrapper[5017]: I0129 08:00:14.725287 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed6df609-936e-4744-b4e8-d1ad883e850d-config\") pod \"ovn-northd-0\" (UID: \"ed6df609-936e-4744-b4e8-d1ad883e850d\") " pod="openstack/ovn-northd-0" Jan 29 08:00:14 crc kubenswrapper[5017]: I0129 08:00:14.725346 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ed6df609-936e-4744-b4e8-d1ad883e850d-scripts\") pod \"ovn-northd-0\" (UID: 
\"ed6df609-936e-4744-b4e8-d1ad883e850d\") " pod="openstack/ovn-northd-0" Jan 29 08:00:14 crc kubenswrapper[5017]: I0129 08:00:14.725390 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ed6df609-936e-4744-b4e8-d1ad883e850d-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"ed6df609-936e-4744-b4e8-d1ad883e850d\") " pod="openstack/ovn-northd-0" Jan 29 08:00:14 crc kubenswrapper[5017]: I0129 08:00:14.725432 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j87n8\" (UniqueName: \"kubernetes.io/projected/ed6df609-936e-4744-b4e8-d1ad883e850d-kube-api-access-j87n8\") pod \"ovn-northd-0\" (UID: \"ed6df609-936e-4744-b4e8-d1ad883e850d\") " pod="openstack/ovn-northd-0" Jan 29 08:00:14 crc kubenswrapper[5017]: I0129 08:00:14.726279 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed6df609-936e-4744-b4e8-d1ad883e850d-config\") pod \"ovn-northd-0\" (UID: \"ed6df609-936e-4744-b4e8-d1ad883e850d\") " pod="openstack/ovn-northd-0" Jan 29 08:00:14 crc kubenswrapper[5017]: I0129 08:00:14.726309 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ed6df609-936e-4744-b4e8-d1ad883e850d-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"ed6df609-936e-4744-b4e8-d1ad883e850d\") " pod="openstack/ovn-northd-0" Jan 29 08:00:14 crc kubenswrapper[5017]: I0129 08:00:14.726627 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ed6df609-936e-4744-b4e8-d1ad883e850d-scripts\") pod \"ovn-northd-0\" (UID: \"ed6df609-936e-4744-b4e8-d1ad883e850d\") " pod="openstack/ovn-northd-0" Jan 29 08:00:14 crc kubenswrapper[5017]: I0129 08:00:14.738026 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed6df609-936e-4744-b4e8-d1ad883e850d-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"ed6df609-936e-4744-b4e8-d1ad883e850d\") " pod="openstack/ovn-northd-0" Jan 29 08:00:14 crc kubenswrapper[5017]: I0129 08:00:14.744499 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j87n8\" (UniqueName: \"kubernetes.io/projected/ed6df609-936e-4744-b4e8-d1ad883e850d-kube-api-access-j87n8\") pod \"ovn-northd-0\" (UID: \"ed6df609-936e-4744-b4e8-d1ad883e850d\") " pod="openstack/ovn-northd-0" Jan 29 08:00:14 crc kubenswrapper[5017]: I0129 08:00:14.759570 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 29 08:00:15 crc kubenswrapper[5017]: I0129 08:00:15.130232 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-848d7d5c67-28tlc" Jan 29 08:00:15 crc kubenswrapper[5017]: I0129 08:00:15.202046 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-699964fbc-9w5ks"] Jan 29 08:00:15 crc kubenswrapper[5017]: I0129 08:00:15.202438 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-699964fbc-9w5ks" podUID="64aefc1a-6f20-40e9-a8f5-767c661de180" containerName="dnsmasq-dns" containerID="cri-o://940cbae81d1ddafadef91e19c788dfc730108cbb6da7a1f91ab4d6c6b502d6d1" gracePeriod=10 Jan 29 08:00:15 crc kubenswrapper[5017]: I0129 08:00:15.268124 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 29 08:00:15 crc kubenswrapper[5017]: W0129 08:00:15.291381 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poded6df609_936e_4744_b4e8_d1ad883e850d.slice/crio-9d4da0550dc958490d43e70aff8ba723e72f2bf71ddf1c77de05e6c35e6914e3 WatchSource:0}: Error finding container 9d4da0550dc958490d43e70aff8ba723e72f2bf71ddf1c77de05e6c35e6914e3: Status 404 returned error can't find the container with id 9d4da0550dc958490d43e70aff8ba723e72f2bf71ddf1c77de05e6c35e6914e3 Jan 29 08:00:15 crc kubenswrapper[5017]: I0129 08:00:15.651124 5017 generic.go:334] "Generic (PLEG): container finished" podID="64aefc1a-6f20-40e9-a8f5-767c661de180" containerID="940cbae81d1ddafadef91e19c788dfc730108cbb6da7a1f91ab4d6c6b502d6d1" exitCode=0 Jan 29 08:00:15 crc kubenswrapper[5017]: I0129 08:00:15.651372 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-699964fbc-9w5ks" event={"ID":"64aefc1a-6f20-40e9-a8f5-767c661de180","Type":"ContainerDied","Data":"940cbae81d1ddafadef91e19c788dfc730108cbb6da7a1f91ab4d6c6b502d6d1"} Jan 29 08:00:15 crc kubenswrapper[5017]: I0129 08:00:15.653924 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"ed6df609-936e-4744-b4e8-d1ad883e850d","Type":"ContainerStarted","Data":"7250340754c80946d8f64be7bc0fb48f894d72cad3811d7638587b73a9f6cffa"} Jan 29 08:00:15 crc kubenswrapper[5017]: I0129 08:00:15.653977 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"ed6df609-936e-4744-b4e8-d1ad883e850d","Type":"ContainerStarted","Data":"9d4da0550dc958490d43e70aff8ba723e72f2bf71ddf1c77de05e6c35e6914e3"} Jan 29 08:00:15 crc kubenswrapper[5017]: I0129 08:00:15.679736 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-699964fbc-9w5ks" Jan 29 08:00:15 crc kubenswrapper[5017]: I0129 08:00:15.746082 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64aefc1a-6f20-40e9-a8f5-767c661de180-config\") pod \"64aefc1a-6f20-40e9-a8f5-767c661de180\" (UID: \"64aefc1a-6f20-40e9-a8f5-767c661de180\") " Jan 29 08:00:15 crc kubenswrapper[5017]: I0129 08:00:15.746291 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mgskj\" (UniqueName: \"kubernetes.io/projected/64aefc1a-6f20-40e9-a8f5-767c661de180-kube-api-access-mgskj\") pod \"64aefc1a-6f20-40e9-a8f5-767c661de180\" (UID: \"64aefc1a-6f20-40e9-a8f5-767c661de180\") " Jan 29 08:00:15 crc kubenswrapper[5017]: I0129 08:00:15.746482 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/64aefc1a-6f20-40e9-a8f5-767c661de180-dns-svc\") pod \"64aefc1a-6f20-40e9-a8f5-767c661de180\" (UID: \"64aefc1a-6f20-40e9-a8f5-767c661de180\") " Jan 29 08:00:15 crc kubenswrapper[5017]: I0129 08:00:15.757539 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64aefc1a-6f20-40e9-a8f5-767c661de180-kube-api-access-mgskj" (OuterVolumeSpecName: "kube-api-access-mgskj") pod "64aefc1a-6f20-40e9-a8f5-767c661de180" (UID: "64aefc1a-6f20-40e9-a8f5-767c661de180"). InnerVolumeSpecName "kube-api-access-mgskj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:00:15 crc kubenswrapper[5017]: I0129 08:00:15.788493 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64aefc1a-6f20-40e9-a8f5-767c661de180-config" (OuterVolumeSpecName: "config") pod "64aefc1a-6f20-40e9-a8f5-767c661de180" (UID: "64aefc1a-6f20-40e9-a8f5-767c661de180"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:00:15 crc kubenswrapper[5017]: I0129 08:00:15.790840 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64aefc1a-6f20-40e9-a8f5-767c661de180-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "64aefc1a-6f20-40e9-a8f5-767c661de180" (UID: "64aefc1a-6f20-40e9-a8f5-767c661de180"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:00:15 crc kubenswrapper[5017]: I0129 08:00:15.851336 5017 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64aefc1a-6f20-40e9-a8f5-767c661de180-config\") on node \"crc\" DevicePath \"\"" Jan 29 08:00:15 crc kubenswrapper[5017]: I0129 08:00:15.851878 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mgskj\" (UniqueName: \"kubernetes.io/projected/64aefc1a-6f20-40e9-a8f5-767c661de180-kube-api-access-mgskj\") on node \"crc\" DevicePath \"\"" Jan 29 08:00:15 crc kubenswrapper[5017]: I0129 08:00:15.851891 5017 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/64aefc1a-6f20-40e9-a8f5-767c661de180-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 08:00:16 crc kubenswrapper[5017]: I0129 08:00:16.666170 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"ed6df609-936e-4744-b4e8-d1ad883e850d","Type":"ContainerStarted","Data":"62367cbd29b0671059778f0ca078b1cc9f53f7701e21453e1e7ab7ee7271166d"} Jan 29 08:00:16 crc kubenswrapper[5017]: I0129 08:00:16.666280 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 29 08:00:16 crc kubenswrapper[5017]: I0129 08:00:16.670072 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-699964fbc-9w5ks" event={"ID":"64aefc1a-6f20-40e9-a8f5-767c661de180","Type":"ContainerDied","Data":"3977abcc31659f36858cb980adebe150f60082f433bf2681ecfe4b9ac340584d"} Jan 29 08:00:16 crc kubenswrapper[5017]: I0129 08:00:16.670143 5017 scope.go:117] "RemoveContainer" containerID="940cbae81d1ddafadef91e19c788dfc730108cbb6da7a1f91ab4d6c6b502d6d1" Jan 29 08:00:16 crc kubenswrapper[5017]: I0129 08:00:16.670199 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-699964fbc-9w5ks" Jan 29 08:00:16 crc kubenswrapper[5017]: I0129 08:00:16.694791 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.694758322 podStartE2EDuration="2.694758322s" podCreationTimestamp="2026-01-29 08:00:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:00:16.691021592 +0000 UTC m=+5103.065469222" watchObservedRunningTime="2026-01-29 08:00:16.694758322 +0000 UTC m=+5103.069205932" Jan 29 08:00:16 crc kubenswrapper[5017]: I0129 08:00:16.699655 5017 scope.go:117] "RemoveContainer" containerID="5eaa8c238ccbe631e068663bfaaeef60d64f1f98489ad3cd3418f60bdb63b7dc" Jan 29 08:00:16 crc kubenswrapper[5017]: I0129 08:00:16.732738 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-699964fbc-9w5ks"] Jan 29 08:00:16 crc kubenswrapper[5017]: I0129 08:00:16.759932 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-699964fbc-9w5ks"] Jan 29 08:00:18 crc kubenswrapper[5017]: I0129 08:00:18.329647 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64aefc1a-6f20-40e9-a8f5-767c661de180" path="/var/lib/kubelet/pods/64aefc1a-6f20-40e9-a8f5-767c661de180/volumes" Jan 29 08:00:19 crc kubenswrapper[5017]: I0129 08:00:19.039122 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-6ndsf"] Jan 29 08:00:19 crc kubenswrapper[5017]: E0129 08:00:19.039908 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64aefc1a-6f20-40e9-a8f5-767c661de180" containerName="init" Jan 29 08:00:19 crc kubenswrapper[5017]: I0129 08:00:19.039931 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="64aefc1a-6f20-40e9-a8f5-767c661de180" containerName="init" Jan 29 08:00:19 crc kubenswrapper[5017]: E0129 08:00:19.039994 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64aefc1a-6f20-40e9-a8f5-767c661de180" containerName="dnsmasq-dns" Jan 29 08:00:19 crc kubenswrapper[5017]: I0129 08:00:19.040005 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="64aefc1a-6f20-40e9-a8f5-767c661de180" containerName="dnsmasq-dns" Jan 29 08:00:19 crc kubenswrapper[5017]: I0129 08:00:19.040219 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="64aefc1a-6f20-40e9-a8f5-767c661de180" containerName="dnsmasq-dns" Jan 29 08:00:19 crc kubenswrapper[5017]: I0129 08:00:19.041000 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-6ndsf" Jan 29 08:00:19 crc kubenswrapper[5017]: I0129 08:00:19.060330 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-6ndsf"] Jan 29 08:00:19 crc kubenswrapper[5017]: I0129 08:00:19.112697 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48wf4\" (UniqueName: \"kubernetes.io/projected/a84b8b13-ac28-4baf-aeae-6e977d8b2654-kube-api-access-48wf4\") pod \"keystone-db-create-6ndsf\" (UID: \"a84b8b13-ac28-4baf-aeae-6e977d8b2654\") " pod="openstack/keystone-db-create-6ndsf" Jan 29 08:00:19 crc kubenswrapper[5017]: I0129 08:00:19.112784 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a84b8b13-ac28-4baf-aeae-6e977d8b2654-operator-scripts\") pod \"keystone-db-create-6ndsf\" (UID: \"a84b8b13-ac28-4baf-aeae-6e977d8b2654\") " pod="openstack/keystone-db-create-6ndsf" Jan 29 08:00:19 crc kubenswrapper[5017]: I0129 08:00:19.136626 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-87eb-account-create-update-tkbkw"] Jan 29 08:00:19 crc kubenswrapper[5017]: I0129 08:00:19.138199 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-87eb-account-create-update-tkbkw" Jan 29 08:00:19 crc kubenswrapper[5017]: I0129 08:00:19.149632 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 29 08:00:19 crc kubenswrapper[5017]: I0129 08:00:19.153936 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-87eb-account-create-update-tkbkw"] Jan 29 08:00:19 crc kubenswrapper[5017]: I0129 08:00:19.216461 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48wf4\" (UniqueName: \"kubernetes.io/projected/a84b8b13-ac28-4baf-aeae-6e977d8b2654-kube-api-access-48wf4\") pod \"keystone-db-create-6ndsf\" (UID: \"a84b8b13-ac28-4baf-aeae-6e977d8b2654\") " pod="openstack/keystone-db-create-6ndsf" Jan 29 08:00:19 crc kubenswrapper[5017]: I0129 08:00:19.216575 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a84b8b13-ac28-4baf-aeae-6e977d8b2654-operator-scripts\") pod \"keystone-db-create-6ndsf\" (UID: \"a84b8b13-ac28-4baf-aeae-6e977d8b2654\") " pod="openstack/keystone-db-create-6ndsf" Jan 29 08:00:19 crc kubenswrapper[5017]: I0129 08:00:19.216725 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scl4v\" (UniqueName: \"kubernetes.io/projected/3c49b28c-c970-454a-b44b-bce67b8315aa-kube-api-access-scl4v\") pod \"keystone-87eb-account-create-update-tkbkw\" (UID: \"3c49b28c-c970-454a-b44b-bce67b8315aa\") " pod="openstack/keystone-87eb-account-create-update-tkbkw" Jan 29 08:00:19 crc kubenswrapper[5017]: I0129 08:00:19.216894 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3c49b28c-c970-454a-b44b-bce67b8315aa-operator-scripts\") pod \"keystone-87eb-account-create-update-tkbkw\" (UID: \"3c49b28c-c970-454a-b44b-bce67b8315aa\") " pod="openstack/keystone-87eb-account-create-update-tkbkw" Jan 29 08:00:19 crc kubenswrapper[5017]: I0129 08:00:19.217695 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a84b8b13-ac28-4baf-aeae-6e977d8b2654-operator-scripts\") pod \"keystone-db-create-6ndsf\" (UID: \"a84b8b13-ac28-4baf-aeae-6e977d8b2654\") " pod="openstack/keystone-db-create-6ndsf" Jan 29 08:00:19 crc kubenswrapper[5017]: I0129 08:00:19.246373 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48wf4\" (UniqueName: \"kubernetes.io/projected/a84b8b13-ac28-4baf-aeae-6e977d8b2654-kube-api-access-48wf4\") pod \"keystone-db-create-6ndsf\" (UID: \"a84b8b13-ac28-4baf-aeae-6e977d8b2654\") " pod="openstack/keystone-db-create-6ndsf" Jan 29 08:00:19 crc kubenswrapper[5017]: I0129 08:00:19.316622 5017 scope.go:117] "RemoveContainer" containerID="c58783d1bd795ffac490b01aaccb060f3071e08daa88d33a09f7ce8715d64300" Jan 29 08:00:19 crc kubenswrapper[5017]: E0129 08:00:19.317017 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:00:19 crc kubenswrapper[5017]: I0129 08:00:19.318141 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-scl4v\" (UniqueName: \"kubernetes.io/projected/3c49b28c-c970-454a-b44b-bce67b8315aa-kube-api-access-scl4v\") pod \"keystone-87eb-account-create-update-tkbkw\" (UID: \"3c49b28c-c970-454a-b44b-bce67b8315aa\") " pod="openstack/keystone-87eb-account-create-update-tkbkw" Jan 29 08:00:19 crc kubenswrapper[5017]: I0129 08:00:19.318239 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3c49b28c-c970-454a-b44b-bce67b8315aa-operator-scripts\") pod \"keystone-87eb-account-create-update-tkbkw\" (UID: \"3c49b28c-c970-454a-b44b-bce67b8315aa\") " pod="openstack/keystone-87eb-account-create-update-tkbkw" Jan 29 08:00:19 crc kubenswrapper[5017]: I0129 08:00:19.319164 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3c49b28c-c970-454a-b44b-bce67b8315aa-operator-scripts\") pod \"keystone-87eb-account-create-update-tkbkw\" (UID: \"3c49b28c-c970-454a-b44b-bce67b8315aa\") " pod="openstack/keystone-87eb-account-create-update-tkbkw" Jan 29 08:00:19 crc kubenswrapper[5017]: I0129 08:00:19.336903 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-scl4v\" (UniqueName: \"kubernetes.io/projected/3c49b28c-c970-454a-b44b-bce67b8315aa-kube-api-access-scl4v\") pod \"keystone-87eb-account-create-update-tkbkw\" (UID: \"3c49b28c-c970-454a-b44b-bce67b8315aa\") " pod="openstack/keystone-87eb-account-create-update-tkbkw" Jan 29 08:00:19 crc kubenswrapper[5017]: I0129 08:00:19.359997 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-6ndsf" Jan 29 08:00:19 crc kubenswrapper[5017]: I0129 08:00:19.457690 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-87eb-account-create-update-tkbkw" Jan 29 08:00:19 crc kubenswrapper[5017]: I0129 08:00:19.822431 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-6ndsf"] Jan 29 08:00:19 crc kubenswrapper[5017]: W0129 08:00:19.824922 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda84b8b13_ac28_4baf_aeae_6e977d8b2654.slice/crio-ee925d0d32a05aebac82e4a1b769e6f8c8b85e82d0a36f79dbaf4ac79ad15bcc WatchSource:0}: Error finding container ee925d0d32a05aebac82e4a1b769e6f8c8b85e82d0a36f79dbaf4ac79ad15bcc: Status 404 returned error can't find the container with id ee925d0d32a05aebac82e4a1b769e6f8c8b85e82d0a36f79dbaf4ac79ad15bcc Jan 29 08:00:19 crc kubenswrapper[5017]: I0129 08:00:19.943315 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-87eb-account-create-update-tkbkw"] Jan 29 08:00:19 crc kubenswrapper[5017]: W0129 08:00:19.953301 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3c49b28c_c970_454a_b44b_bce67b8315aa.slice/crio-b625f180287416328d0f8d275d04e187657933f00b4af5b96efecaec002a52a4 WatchSource:0}: Error finding container b625f180287416328d0f8d275d04e187657933f00b4af5b96efecaec002a52a4: Status 404 returned error can't find the container with id b625f180287416328d0f8d275d04e187657933f00b4af5b96efecaec002a52a4 Jan 29 08:00:20 crc kubenswrapper[5017]: I0129 08:00:20.708586 5017 generic.go:334] "Generic (PLEG): container finished" podID="a84b8b13-ac28-4baf-aeae-6e977d8b2654" containerID="38d8048df02847d1d4200651c39c86e12a0550f6fd5637f63a9942d76824b6d4" exitCode=0 Jan 29 08:00:20 crc kubenswrapper[5017]: I0129 08:00:20.708698 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-6ndsf" event={"ID":"a84b8b13-ac28-4baf-aeae-6e977d8b2654","Type":"ContainerDied","Data":"38d8048df02847d1d4200651c39c86e12a0550f6fd5637f63a9942d76824b6d4"} Jan 29 08:00:20 crc kubenswrapper[5017]: I0129 08:00:20.709185 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-6ndsf" event={"ID":"a84b8b13-ac28-4baf-aeae-6e977d8b2654","Type":"ContainerStarted","Data":"ee925d0d32a05aebac82e4a1b769e6f8c8b85e82d0a36f79dbaf4ac79ad15bcc"} Jan 29 08:00:20 crc kubenswrapper[5017]: I0129 08:00:20.712372 5017 generic.go:334] "Generic (PLEG): container finished" podID="3c49b28c-c970-454a-b44b-bce67b8315aa" containerID="210dcdb05f65c0dc4340b1405c5e856656d2944d4ed9fb4c23b0eb7b5f135499" exitCode=0 Jan 29 08:00:20 crc kubenswrapper[5017]: I0129 08:00:20.712424 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-87eb-account-create-update-tkbkw" event={"ID":"3c49b28c-c970-454a-b44b-bce67b8315aa","Type":"ContainerDied","Data":"210dcdb05f65c0dc4340b1405c5e856656d2944d4ed9fb4c23b0eb7b5f135499"} Jan 29 08:00:20 crc kubenswrapper[5017]: I0129 08:00:20.712457 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-87eb-account-create-update-tkbkw" event={"ID":"3c49b28c-c970-454a-b44b-bce67b8315aa","Type":"ContainerStarted","Data":"b625f180287416328d0f8d275d04e187657933f00b4af5b96efecaec002a52a4"} Jan 29 08:00:22 crc kubenswrapper[5017]: I0129 08:00:22.124538 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-87eb-account-create-update-tkbkw" Jan 29 08:00:22 crc kubenswrapper[5017]: I0129 08:00:22.128845 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-6ndsf" Jan 29 08:00:22 crc kubenswrapper[5017]: I0129 08:00:22.286922 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-scl4v\" (UniqueName: \"kubernetes.io/projected/3c49b28c-c970-454a-b44b-bce67b8315aa-kube-api-access-scl4v\") pod \"3c49b28c-c970-454a-b44b-bce67b8315aa\" (UID: \"3c49b28c-c970-454a-b44b-bce67b8315aa\") " Jan 29 08:00:22 crc kubenswrapper[5017]: I0129 08:00:22.287007 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3c49b28c-c970-454a-b44b-bce67b8315aa-operator-scripts\") pod \"3c49b28c-c970-454a-b44b-bce67b8315aa\" (UID: \"3c49b28c-c970-454a-b44b-bce67b8315aa\") " Jan 29 08:00:22 crc kubenswrapper[5017]: I0129 08:00:22.287053 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-48wf4\" (UniqueName: \"kubernetes.io/projected/a84b8b13-ac28-4baf-aeae-6e977d8b2654-kube-api-access-48wf4\") pod \"a84b8b13-ac28-4baf-aeae-6e977d8b2654\" (UID: \"a84b8b13-ac28-4baf-aeae-6e977d8b2654\") " Jan 29 08:00:22 crc kubenswrapper[5017]: I0129 08:00:22.287353 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a84b8b13-ac28-4baf-aeae-6e977d8b2654-operator-scripts\") pod \"a84b8b13-ac28-4baf-aeae-6e977d8b2654\" (UID: \"a84b8b13-ac28-4baf-aeae-6e977d8b2654\") " Jan 29 08:00:22 crc kubenswrapper[5017]: I0129 08:00:22.288204 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a84b8b13-ac28-4baf-aeae-6e977d8b2654-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a84b8b13-ac28-4baf-aeae-6e977d8b2654" (UID: "a84b8b13-ac28-4baf-aeae-6e977d8b2654"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:00:22 crc kubenswrapper[5017]: I0129 08:00:22.288347 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c49b28c-c970-454a-b44b-bce67b8315aa-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3c49b28c-c970-454a-b44b-bce67b8315aa" (UID: "3c49b28c-c970-454a-b44b-bce67b8315aa"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:00:22 crc kubenswrapper[5017]: I0129 08:00:22.293897 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a84b8b13-ac28-4baf-aeae-6e977d8b2654-kube-api-access-48wf4" (OuterVolumeSpecName: "kube-api-access-48wf4") pod "a84b8b13-ac28-4baf-aeae-6e977d8b2654" (UID: "a84b8b13-ac28-4baf-aeae-6e977d8b2654"). InnerVolumeSpecName "kube-api-access-48wf4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:00:22 crc kubenswrapper[5017]: I0129 08:00:22.299117 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c49b28c-c970-454a-b44b-bce67b8315aa-kube-api-access-scl4v" (OuterVolumeSpecName: "kube-api-access-scl4v") pod "3c49b28c-c970-454a-b44b-bce67b8315aa" (UID: "3c49b28c-c970-454a-b44b-bce67b8315aa"). InnerVolumeSpecName "kube-api-access-scl4v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:00:22 crc kubenswrapper[5017]: I0129 08:00:22.390190 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-scl4v\" (UniqueName: \"kubernetes.io/projected/3c49b28c-c970-454a-b44b-bce67b8315aa-kube-api-access-scl4v\") on node \"crc\" DevicePath \"\"" Jan 29 08:00:22 crc kubenswrapper[5017]: I0129 08:00:22.390238 5017 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3c49b28c-c970-454a-b44b-bce67b8315aa-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 08:00:22 crc kubenswrapper[5017]: I0129 08:00:22.390248 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-48wf4\" (UniqueName: \"kubernetes.io/projected/a84b8b13-ac28-4baf-aeae-6e977d8b2654-kube-api-access-48wf4\") on node \"crc\" DevicePath \"\"" Jan 29 08:00:22 crc kubenswrapper[5017]: I0129 08:00:22.390258 5017 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a84b8b13-ac28-4baf-aeae-6e977d8b2654-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 08:00:22 crc kubenswrapper[5017]: I0129 08:00:22.732344 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-87eb-account-create-update-tkbkw" event={"ID":"3c49b28c-c970-454a-b44b-bce67b8315aa","Type":"ContainerDied","Data":"b625f180287416328d0f8d275d04e187657933f00b4af5b96efecaec002a52a4"} Jan 29 08:00:22 crc kubenswrapper[5017]: I0129 08:00:22.732796 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b625f180287416328d0f8d275d04e187657933f00b4af5b96efecaec002a52a4" Jan 29 08:00:22 crc kubenswrapper[5017]: I0129 08:00:22.732561 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-87eb-account-create-update-tkbkw" Jan 29 08:00:22 crc kubenswrapper[5017]: I0129 08:00:22.735154 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-6ndsf" event={"ID":"a84b8b13-ac28-4baf-aeae-6e977d8b2654","Type":"ContainerDied","Data":"ee925d0d32a05aebac82e4a1b769e6f8c8b85e82d0a36f79dbaf4ac79ad15bcc"} Jan 29 08:00:22 crc kubenswrapper[5017]: I0129 08:00:22.735297 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee925d0d32a05aebac82e4a1b769e6f8c8b85e82d0a36f79dbaf4ac79ad15bcc" Jan 29 08:00:22 crc kubenswrapper[5017]: I0129 08:00:22.735210 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-6ndsf" Jan 29 08:00:23 crc kubenswrapper[5017]: I0129 08:00:23.473994 5017 scope.go:117] "RemoveContainer" containerID="4f41fba4d8083eefcd90bd30b5b67c549a19f0364648d70c3316c65291021b72" Jan 29 08:00:24 crc kubenswrapper[5017]: I0129 08:00:24.697276 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-gdn7c"] Jan 29 08:00:24 crc kubenswrapper[5017]: E0129 08:00:24.700203 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c49b28c-c970-454a-b44b-bce67b8315aa" containerName="mariadb-account-create-update" Jan 29 08:00:24 crc kubenswrapper[5017]: I0129 08:00:24.700231 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c49b28c-c970-454a-b44b-bce67b8315aa" containerName="mariadb-account-create-update" Jan 29 08:00:24 crc kubenswrapper[5017]: E0129 08:00:24.700261 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a84b8b13-ac28-4baf-aeae-6e977d8b2654" containerName="mariadb-database-create" Jan 29 08:00:24 crc kubenswrapper[5017]: I0129 08:00:24.700271 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="a84b8b13-ac28-4baf-aeae-6e977d8b2654" containerName="mariadb-database-create" Jan 29 08:00:24 crc kubenswrapper[5017]: I0129 08:00:24.700491 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c49b28c-c970-454a-b44b-bce67b8315aa" containerName="mariadb-account-create-update" Jan 29 08:00:24 crc kubenswrapper[5017]: I0129 08:00:24.700508 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="a84b8b13-ac28-4baf-aeae-6e977d8b2654" containerName="mariadb-database-create" Jan 29 08:00:24 crc kubenswrapper[5017]: I0129 08:00:24.701630 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-gdn7c" Jan 29 08:00:24 crc kubenswrapper[5017]: I0129 08:00:24.709118 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-6ptwj" Jan 29 08:00:24 crc kubenswrapper[5017]: I0129 08:00:24.709300 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 29 08:00:24 crc kubenswrapper[5017]: I0129 08:00:24.709427 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 29 08:00:24 crc kubenswrapper[5017]: I0129 08:00:24.709564 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 29 08:00:24 crc kubenswrapper[5017]: I0129 08:00:24.719122 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-gdn7c"] Jan 29 08:00:24 crc kubenswrapper[5017]: I0129 08:00:24.842710 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7shlj\" (UniqueName: \"kubernetes.io/projected/e6b40d75-4d17-4903-9d35-5f4ccb411b25-kube-api-access-7shlj\") pod \"keystone-db-sync-gdn7c\" (UID: \"e6b40d75-4d17-4903-9d35-5f4ccb411b25\") " pod="openstack/keystone-db-sync-gdn7c" Jan 29 08:00:24 crc kubenswrapper[5017]: I0129 08:00:24.842792 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6b40d75-4d17-4903-9d35-5f4ccb411b25-combined-ca-bundle\") pod \"keystone-db-sync-gdn7c\" (UID: \"e6b40d75-4d17-4903-9d35-5f4ccb411b25\") " pod="openstack/keystone-db-sync-gdn7c" Jan 29 08:00:24 crc kubenswrapper[5017]: I0129 08:00:24.842815 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6b40d75-4d17-4903-9d35-5f4ccb411b25-config-data\") pod \"keystone-db-sync-gdn7c\" (UID: \"e6b40d75-4d17-4903-9d35-5f4ccb411b25\") " pod="openstack/keystone-db-sync-gdn7c" Jan 29 08:00:24 crc kubenswrapper[5017]: I0129 08:00:24.946194 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7shlj\" (UniqueName: \"kubernetes.io/projected/e6b40d75-4d17-4903-9d35-5f4ccb411b25-kube-api-access-7shlj\") pod \"keystone-db-sync-gdn7c\" (UID: \"e6b40d75-4d17-4903-9d35-5f4ccb411b25\") " pod="openstack/keystone-db-sync-gdn7c" Jan 29 08:00:24 crc kubenswrapper[5017]: I0129 08:00:24.946273 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6b40d75-4d17-4903-9d35-5f4ccb411b25-combined-ca-bundle\") pod \"keystone-db-sync-gdn7c\" (UID: \"e6b40d75-4d17-4903-9d35-5f4ccb411b25\") " pod="openstack/keystone-db-sync-gdn7c" Jan 29 08:00:24 crc kubenswrapper[5017]: I0129 08:00:24.946302 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6b40d75-4d17-4903-9d35-5f4ccb411b25-config-data\") pod \"keystone-db-sync-gdn7c\" (UID: \"e6b40d75-4d17-4903-9d35-5f4ccb411b25\") " pod="openstack/keystone-db-sync-gdn7c" Jan 29 08:00:24 crc kubenswrapper[5017]: I0129 08:00:24.954282 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6b40d75-4d17-4903-9d35-5f4ccb411b25-config-data\") pod \"keystone-db-sync-gdn7c\" (UID: \"e6b40d75-4d17-4903-9d35-5f4ccb411b25\") " 
pod="openstack/keystone-db-sync-gdn7c" Jan 29 08:00:24 crc kubenswrapper[5017]: I0129 08:00:24.963059 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6b40d75-4d17-4903-9d35-5f4ccb411b25-combined-ca-bundle\") pod \"keystone-db-sync-gdn7c\" (UID: \"e6b40d75-4d17-4903-9d35-5f4ccb411b25\") " pod="openstack/keystone-db-sync-gdn7c" Jan 29 08:00:24 crc kubenswrapper[5017]: I0129 08:00:24.964618 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7shlj\" (UniqueName: \"kubernetes.io/projected/e6b40d75-4d17-4903-9d35-5f4ccb411b25-kube-api-access-7shlj\") pod \"keystone-db-sync-gdn7c\" (UID: \"e6b40d75-4d17-4903-9d35-5f4ccb411b25\") " pod="openstack/keystone-db-sync-gdn7c" Jan 29 08:00:25 crc kubenswrapper[5017]: I0129 08:00:25.035268 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-gdn7c" Jan 29 08:00:25 crc kubenswrapper[5017]: I0129 08:00:25.566530 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-gdn7c"] Jan 29 08:00:25 crc kubenswrapper[5017]: I0129 08:00:25.761694 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-gdn7c" event={"ID":"e6b40d75-4d17-4903-9d35-5f4ccb411b25","Type":"ContainerStarted","Data":"69f46fc963ea4a6f70f0336bf2676a4c11aaef5d461c0a295a7a74ea9aefb83b"} Jan 29 08:00:25 crc kubenswrapper[5017]: I0129 08:00:25.761765 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-gdn7c" event={"ID":"e6b40d75-4d17-4903-9d35-5f4ccb411b25","Type":"ContainerStarted","Data":"103968f42745286c5433dcaf6820c345ec85aa4361d261e015534ed16f3853c6"} Jan 29 08:00:25 crc kubenswrapper[5017]: I0129 08:00:25.788606 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-gdn7c" podStartSLOduration=1.788568804 podStartE2EDuration="1.788568804s" podCreationTimestamp="2026-01-29 08:00:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:00:25.783013 +0000 UTC m=+5112.157460630" watchObservedRunningTime="2026-01-29 08:00:25.788568804 +0000 UTC m=+5112.163016424" Jan 29 08:00:27 crc kubenswrapper[5017]: I0129 08:00:27.781143 5017 generic.go:334] "Generic (PLEG): container finished" podID="e6b40d75-4d17-4903-9d35-5f4ccb411b25" containerID="69f46fc963ea4a6f70f0336bf2676a4c11aaef5d461c0a295a7a74ea9aefb83b" exitCode=0 Jan 29 08:00:27 crc kubenswrapper[5017]: I0129 08:00:27.781270 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-gdn7c" event={"ID":"e6b40d75-4d17-4903-9d35-5f4ccb411b25","Type":"ContainerDied","Data":"69f46fc963ea4a6f70f0336bf2676a4c11aaef5d461c0a295a7a74ea9aefb83b"} Jan 29 08:00:29 crc kubenswrapper[5017]: I0129 08:00:29.158195 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-gdn7c" Jan 29 08:00:29 crc kubenswrapper[5017]: I0129 08:00:29.282110 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7shlj\" (UniqueName: \"kubernetes.io/projected/e6b40d75-4d17-4903-9d35-5f4ccb411b25-kube-api-access-7shlj\") pod \"e6b40d75-4d17-4903-9d35-5f4ccb411b25\" (UID: \"e6b40d75-4d17-4903-9d35-5f4ccb411b25\") " Jan 29 08:00:29 crc kubenswrapper[5017]: I0129 08:00:29.282239 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6b40d75-4d17-4903-9d35-5f4ccb411b25-combined-ca-bundle\") pod \"e6b40d75-4d17-4903-9d35-5f4ccb411b25\" (UID: \"e6b40d75-4d17-4903-9d35-5f4ccb411b25\") " Jan 29 08:00:29 crc kubenswrapper[5017]: I0129 08:00:29.282461 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6b40d75-4d17-4903-9d35-5f4ccb411b25-config-data\") pod \"e6b40d75-4d17-4903-9d35-5f4ccb411b25\" (UID: \"e6b40d75-4d17-4903-9d35-5f4ccb411b25\") " Jan 29 08:00:29 crc kubenswrapper[5017]: I0129 08:00:29.290349 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6b40d75-4d17-4903-9d35-5f4ccb411b25-kube-api-access-7shlj" (OuterVolumeSpecName: "kube-api-access-7shlj") pod "e6b40d75-4d17-4903-9d35-5f4ccb411b25" (UID: "e6b40d75-4d17-4903-9d35-5f4ccb411b25"). InnerVolumeSpecName "kube-api-access-7shlj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:00:29 crc kubenswrapper[5017]: I0129 08:00:29.304892 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6b40d75-4d17-4903-9d35-5f4ccb411b25-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e6b40d75-4d17-4903-9d35-5f4ccb411b25" (UID: "e6b40d75-4d17-4903-9d35-5f4ccb411b25"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:00:29 crc kubenswrapper[5017]: I0129 08:00:29.333256 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6b40d75-4d17-4903-9d35-5f4ccb411b25-config-data" (OuterVolumeSpecName: "config-data") pod "e6b40d75-4d17-4903-9d35-5f4ccb411b25" (UID: "e6b40d75-4d17-4903-9d35-5f4ccb411b25"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:00:29 crc kubenswrapper[5017]: I0129 08:00:29.384997 5017 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6b40d75-4d17-4903-9d35-5f4ccb411b25-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:00:29 crc kubenswrapper[5017]: I0129 08:00:29.385036 5017 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6b40d75-4d17-4903-9d35-5f4ccb411b25-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 08:00:29 crc kubenswrapper[5017]: I0129 08:00:29.385050 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7shlj\" (UniqueName: \"kubernetes.io/projected/e6b40d75-4d17-4903-9d35-5f4ccb411b25-kube-api-access-7shlj\") on node \"crc\" DevicePath \"\"" Jan 29 08:00:29 crc kubenswrapper[5017]: I0129 08:00:29.806011 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-gdn7c" event={"ID":"e6b40d75-4d17-4903-9d35-5f4ccb411b25","Type":"ContainerDied","Data":"103968f42745286c5433dcaf6820c345ec85aa4361d261e015534ed16f3853c6"} Jan 29 08:00:29 crc kubenswrapper[5017]: I0129 08:00:29.806464 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="103968f42745286c5433dcaf6820c345ec85aa4361d261e015534ed16f3853c6" Jan 29 08:00:29 crc kubenswrapper[5017]: I0129 08:00:29.806108 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-gdn7c" Jan 29 08:00:30 crc kubenswrapper[5017]: I0129 08:00:30.089389 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b6cd57555-9kq9j"] Jan 29 08:00:30 crc kubenswrapper[5017]: E0129 08:00:30.090466 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6b40d75-4d17-4903-9d35-5f4ccb411b25" containerName="keystone-db-sync" Jan 29 08:00:30 crc kubenswrapper[5017]: I0129 08:00:30.090557 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6b40d75-4d17-4903-9d35-5f4ccb411b25" containerName="keystone-db-sync" Jan 29 08:00:30 crc kubenswrapper[5017]: I0129 08:00:30.090867 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6b40d75-4d17-4903-9d35-5f4ccb411b25" containerName="keystone-db-sync" Jan 29 08:00:30 crc kubenswrapper[5017]: I0129 08:00:30.098762 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e0bad2ee-5b49-4893-b84a-9f28d470c04b-dns-svc\") pod \"dnsmasq-dns-b6cd57555-9kq9j\" (UID: \"e0bad2ee-5b49-4893-b84a-9f28d470c04b\") " pod="openstack/dnsmasq-dns-b6cd57555-9kq9j" Jan 29 08:00:30 crc kubenswrapper[5017]: I0129 08:00:30.098829 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e0bad2ee-5b49-4893-b84a-9f28d470c04b-ovsdbserver-sb\") pod \"dnsmasq-dns-b6cd57555-9kq9j\" (UID: \"e0bad2ee-5b49-4893-b84a-9f28d470c04b\") " pod="openstack/dnsmasq-dns-b6cd57555-9kq9j" Jan 29 08:00:30 crc kubenswrapper[5017]: I0129 08:00:30.098860 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jr49d\" (UniqueName: \"kubernetes.io/projected/e0bad2ee-5b49-4893-b84a-9f28d470c04b-kube-api-access-jr49d\") pod \"dnsmasq-dns-b6cd57555-9kq9j\" (UID: \"e0bad2ee-5b49-4893-b84a-9f28d470c04b\") " 
pod="openstack/dnsmasq-dns-b6cd57555-9kq9j" Jan 29 08:00:30 crc kubenswrapper[5017]: I0129 08:00:30.098881 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0bad2ee-5b49-4893-b84a-9f28d470c04b-config\") pod \"dnsmasq-dns-b6cd57555-9kq9j\" (UID: \"e0bad2ee-5b49-4893-b84a-9f28d470c04b\") " pod="openstack/dnsmasq-dns-b6cd57555-9kq9j" Jan 29 08:00:30 crc kubenswrapper[5017]: I0129 08:00:30.098907 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e0bad2ee-5b49-4893-b84a-9f28d470c04b-ovsdbserver-nb\") pod \"dnsmasq-dns-b6cd57555-9kq9j\" (UID: \"e0bad2ee-5b49-4893-b84a-9f28d470c04b\") " pod="openstack/dnsmasq-dns-b6cd57555-9kq9j" Jan 29 08:00:30 crc kubenswrapper[5017]: I0129 08:00:30.101425 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b6cd57555-9kq9j" Jan 29 08:00:30 crc kubenswrapper[5017]: I0129 08:00:30.129965 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-nvmz6"] Jan 29 08:00:30 crc kubenswrapper[5017]: I0129 08:00:30.131782 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-nvmz6" Jan 29 08:00:30 crc kubenswrapper[5017]: I0129 08:00:30.135300 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 29 08:00:30 crc kubenswrapper[5017]: I0129 08:00:30.135743 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-6ptwj" Jan 29 08:00:30 crc kubenswrapper[5017]: I0129 08:00:30.136097 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 29 08:00:30 crc kubenswrapper[5017]: I0129 08:00:30.136410 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 29 08:00:30 crc kubenswrapper[5017]: I0129 08:00:30.136697 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 29 08:00:30 crc kubenswrapper[5017]: I0129 08:00:30.153893 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b6cd57555-9kq9j"] Jan 29 08:00:30 crc kubenswrapper[5017]: I0129 08:00:30.168382 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-nvmz6"] Jan 29 08:00:30 crc kubenswrapper[5017]: I0129 08:00:30.201206 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0bad2ee-5b49-4893-b84a-9f28d470c04b-config\") pod \"dnsmasq-dns-b6cd57555-9kq9j\" (UID: \"e0bad2ee-5b49-4893-b84a-9f28d470c04b\") " pod="openstack/dnsmasq-dns-b6cd57555-9kq9j" Jan 29 08:00:30 crc kubenswrapper[5017]: I0129 08:00:30.201297 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e0bad2ee-5b49-4893-b84a-9f28d470c04b-ovsdbserver-nb\") pod \"dnsmasq-dns-b6cd57555-9kq9j\" (UID: \"e0bad2ee-5b49-4893-b84a-9f28d470c04b\") " pod="openstack/dnsmasq-dns-b6cd57555-9kq9j" Jan 29 08:00:30 crc kubenswrapper[5017]: I0129 08:00:30.201332 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/746411fb-4038-425d-8c43-7e1969344ae3-combined-ca-bundle\") pod 
\"keystone-bootstrap-nvmz6\" (UID: \"746411fb-4038-425d-8c43-7e1969344ae3\") " pod="openstack/keystone-bootstrap-nvmz6" Jan 29 08:00:30 crc kubenswrapper[5017]: I0129 08:00:30.201418 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/746411fb-4038-425d-8c43-7e1969344ae3-config-data\") pod \"keystone-bootstrap-nvmz6\" (UID: \"746411fb-4038-425d-8c43-7e1969344ae3\") " pod="openstack/keystone-bootstrap-nvmz6" Jan 29 08:00:30 crc kubenswrapper[5017]: I0129 08:00:30.201447 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e0bad2ee-5b49-4893-b84a-9f28d470c04b-dns-svc\") pod \"dnsmasq-dns-b6cd57555-9kq9j\" (UID: \"e0bad2ee-5b49-4893-b84a-9f28d470c04b\") " pod="openstack/dnsmasq-dns-b6cd57555-9kq9j" Jan 29 08:00:30 crc kubenswrapper[5017]: I0129 08:00:30.201470 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6l9q\" (UniqueName: \"kubernetes.io/projected/746411fb-4038-425d-8c43-7e1969344ae3-kube-api-access-h6l9q\") pod \"keystone-bootstrap-nvmz6\" (UID: \"746411fb-4038-425d-8c43-7e1969344ae3\") " pod="openstack/keystone-bootstrap-nvmz6" Jan 29 08:00:30 crc kubenswrapper[5017]: I0129 08:00:30.201515 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e0bad2ee-5b49-4893-b84a-9f28d470c04b-ovsdbserver-sb\") pod \"dnsmasq-dns-b6cd57555-9kq9j\" (UID: \"e0bad2ee-5b49-4893-b84a-9f28d470c04b\") " pod="openstack/dnsmasq-dns-b6cd57555-9kq9j" Jan 29 08:00:30 crc kubenswrapper[5017]: I0129 08:00:30.201535 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/746411fb-4038-425d-8c43-7e1969344ae3-scripts\") pod \"keystone-bootstrap-nvmz6\" (UID: \"746411fb-4038-425d-8c43-7e1969344ae3\") " pod="openstack/keystone-bootstrap-nvmz6" Jan 29 08:00:30 crc kubenswrapper[5017]: I0129 08:00:30.201550 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/746411fb-4038-425d-8c43-7e1969344ae3-credential-keys\") pod \"keystone-bootstrap-nvmz6\" (UID: \"746411fb-4038-425d-8c43-7e1969344ae3\") " pod="openstack/keystone-bootstrap-nvmz6" Jan 29 08:00:30 crc kubenswrapper[5017]: I0129 08:00:30.201584 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/746411fb-4038-425d-8c43-7e1969344ae3-fernet-keys\") pod \"keystone-bootstrap-nvmz6\" (UID: \"746411fb-4038-425d-8c43-7e1969344ae3\") " pod="openstack/keystone-bootstrap-nvmz6" Jan 29 08:00:30 crc kubenswrapper[5017]: I0129 08:00:30.201608 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jr49d\" (UniqueName: \"kubernetes.io/projected/e0bad2ee-5b49-4893-b84a-9f28d470c04b-kube-api-access-jr49d\") pod \"dnsmasq-dns-b6cd57555-9kq9j\" (UID: \"e0bad2ee-5b49-4893-b84a-9f28d470c04b\") " pod="openstack/dnsmasq-dns-b6cd57555-9kq9j" Jan 29 08:00:30 crc kubenswrapper[5017]: I0129 08:00:30.204068 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e0bad2ee-5b49-4893-b84a-9f28d470c04b-ovsdbserver-nb\") pod \"dnsmasq-dns-b6cd57555-9kq9j\" (UID: 
\"e0bad2ee-5b49-4893-b84a-9f28d470c04b\") " pod="openstack/dnsmasq-dns-b6cd57555-9kq9j" Jan 29 08:00:30 crc kubenswrapper[5017]: I0129 08:00:30.204476 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e0bad2ee-5b49-4893-b84a-9f28d470c04b-dns-svc\") pod \"dnsmasq-dns-b6cd57555-9kq9j\" (UID: \"e0bad2ee-5b49-4893-b84a-9f28d470c04b\") " pod="openstack/dnsmasq-dns-b6cd57555-9kq9j" Jan 29 08:00:30 crc kubenswrapper[5017]: I0129 08:00:30.204600 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0bad2ee-5b49-4893-b84a-9f28d470c04b-config\") pod \"dnsmasq-dns-b6cd57555-9kq9j\" (UID: \"e0bad2ee-5b49-4893-b84a-9f28d470c04b\") " pod="openstack/dnsmasq-dns-b6cd57555-9kq9j" Jan 29 08:00:30 crc kubenswrapper[5017]: I0129 08:00:30.205086 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e0bad2ee-5b49-4893-b84a-9f28d470c04b-ovsdbserver-sb\") pod \"dnsmasq-dns-b6cd57555-9kq9j\" (UID: \"e0bad2ee-5b49-4893-b84a-9f28d470c04b\") " pod="openstack/dnsmasq-dns-b6cd57555-9kq9j" Jan 29 08:00:30 crc kubenswrapper[5017]: I0129 08:00:30.222044 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jr49d\" (UniqueName: \"kubernetes.io/projected/e0bad2ee-5b49-4893-b84a-9f28d470c04b-kube-api-access-jr49d\") pod \"dnsmasq-dns-b6cd57555-9kq9j\" (UID: \"e0bad2ee-5b49-4893-b84a-9f28d470c04b\") " pod="openstack/dnsmasq-dns-b6cd57555-9kq9j" Jan 29 08:00:30 crc kubenswrapper[5017]: I0129 08:00:30.302716 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/746411fb-4038-425d-8c43-7e1969344ae3-scripts\") pod \"keystone-bootstrap-nvmz6\" (UID: \"746411fb-4038-425d-8c43-7e1969344ae3\") " pod="openstack/keystone-bootstrap-nvmz6" Jan 29 08:00:30 crc kubenswrapper[5017]: I0129 08:00:30.302775 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/746411fb-4038-425d-8c43-7e1969344ae3-credential-keys\") pod \"keystone-bootstrap-nvmz6\" (UID: \"746411fb-4038-425d-8c43-7e1969344ae3\") " pod="openstack/keystone-bootstrap-nvmz6" Jan 29 08:00:30 crc kubenswrapper[5017]: I0129 08:00:30.302824 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/746411fb-4038-425d-8c43-7e1969344ae3-fernet-keys\") pod \"keystone-bootstrap-nvmz6\" (UID: \"746411fb-4038-425d-8c43-7e1969344ae3\") " pod="openstack/keystone-bootstrap-nvmz6" Jan 29 08:00:30 crc kubenswrapper[5017]: I0129 08:00:30.302872 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/746411fb-4038-425d-8c43-7e1969344ae3-combined-ca-bundle\") pod \"keystone-bootstrap-nvmz6\" (UID: \"746411fb-4038-425d-8c43-7e1969344ae3\") " pod="openstack/keystone-bootstrap-nvmz6" Jan 29 08:00:30 crc kubenswrapper[5017]: I0129 08:00:30.303003 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/746411fb-4038-425d-8c43-7e1969344ae3-config-data\") pod \"keystone-bootstrap-nvmz6\" (UID: \"746411fb-4038-425d-8c43-7e1969344ae3\") " pod="openstack/keystone-bootstrap-nvmz6" Jan 29 08:00:30 crc kubenswrapper[5017]: I0129 08:00:30.303037 5017 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6l9q\" (UniqueName: \"kubernetes.io/projected/746411fb-4038-425d-8c43-7e1969344ae3-kube-api-access-h6l9q\") pod \"keystone-bootstrap-nvmz6\" (UID: \"746411fb-4038-425d-8c43-7e1969344ae3\") " pod="openstack/keystone-bootstrap-nvmz6" Jan 29 08:00:30 crc kubenswrapper[5017]: I0129 08:00:30.309595 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/746411fb-4038-425d-8c43-7e1969344ae3-credential-keys\") pod \"keystone-bootstrap-nvmz6\" (UID: \"746411fb-4038-425d-8c43-7e1969344ae3\") " pod="openstack/keystone-bootstrap-nvmz6" Jan 29 08:00:30 crc kubenswrapper[5017]: I0129 08:00:30.311538 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/746411fb-4038-425d-8c43-7e1969344ae3-combined-ca-bundle\") pod \"keystone-bootstrap-nvmz6\" (UID: \"746411fb-4038-425d-8c43-7e1969344ae3\") " pod="openstack/keystone-bootstrap-nvmz6" Jan 29 08:00:30 crc kubenswrapper[5017]: I0129 08:00:30.313319 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/746411fb-4038-425d-8c43-7e1969344ae3-config-data\") pod \"keystone-bootstrap-nvmz6\" (UID: \"746411fb-4038-425d-8c43-7e1969344ae3\") " pod="openstack/keystone-bootstrap-nvmz6" Jan 29 08:00:30 crc kubenswrapper[5017]: I0129 08:00:30.314634 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/746411fb-4038-425d-8c43-7e1969344ae3-fernet-keys\") pod \"keystone-bootstrap-nvmz6\" (UID: \"746411fb-4038-425d-8c43-7e1969344ae3\") " pod="openstack/keystone-bootstrap-nvmz6" Jan 29 08:00:30 crc kubenswrapper[5017]: I0129 08:00:30.316517 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/746411fb-4038-425d-8c43-7e1969344ae3-scripts\") pod \"keystone-bootstrap-nvmz6\" (UID: \"746411fb-4038-425d-8c43-7e1969344ae3\") " pod="openstack/keystone-bootstrap-nvmz6" Jan 29 08:00:30 crc kubenswrapper[5017]: I0129 08:00:30.329797 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6l9q\" (UniqueName: \"kubernetes.io/projected/746411fb-4038-425d-8c43-7e1969344ae3-kube-api-access-h6l9q\") pod \"keystone-bootstrap-nvmz6\" (UID: \"746411fb-4038-425d-8c43-7e1969344ae3\") " pod="openstack/keystone-bootstrap-nvmz6" Jan 29 08:00:30 crc kubenswrapper[5017]: I0129 08:00:30.421783 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b6cd57555-9kq9j" Jan 29 08:00:30 crc kubenswrapper[5017]: I0129 08:00:30.458337 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-nvmz6" Jan 29 08:00:30 crc kubenswrapper[5017]: I0129 08:00:30.713473 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b6cd57555-9kq9j"] Jan 29 08:00:30 crc kubenswrapper[5017]: I0129 08:00:30.814223 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-nvmz6"] Jan 29 08:00:30 crc kubenswrapper[5017]: I0129 08:00:30.824703 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b6cd57555-9kq9j" event={"ID":"e0bad2ee-5b49-4893-b84a-9f28d470c04b","Type":"ContainerStarted","Data":"fedf2faf4931e2645133de1d785e8ca0f9050df0358c8dad7026d0cc47f0df89"} Jan 29 08:00:30 crc kubenswrapper[5017]: I0129 08:00:30.828142 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-nvmz6" event={"ID":"746411fb-4038-425d-8c43-7e1969344ae3","Type":"ContainerStarted","Data":"7fc22934aa97ad8f37156730edbd5984252c44a4f413f21f765e964bc587e8cf"} Jan 29 08:00:31 crc kubenswrapper[5017]: I0129 08:00:31.843302 5017 generic.go:334] "Generic (PLEG): container finished" podID="e0bad2ee-5b49-4893-b84a-9f28d470c04b" containerID="ebfc084db8d577e4a64ca05ee9252ce4af172e4e289e57da6221c73e23a0ccf9" exitCode=0 Jan 29 08:00:31 crc kubenswrapper[5017]: I0129 08:00:31.845089 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b6cd57555-9kq9j" event={"ID":"e0bad2ee-5b49-4893-b84a-9f28d470c04b","Type":"ContainerDied","Data":"ebfc084db8d577e4a64ca05ee9252ce4af172e4e289e57da6221c73e23a0ccf9"} Jan 29 08:00:31 crc kubenswrapper[5017]: I0129 08:00:31.851208 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-nvmz6" event={"ID":"746411fb-4038-425d-8c43-7e1969344ae3","Type":"ContainerStarted","Data":"34dca0050763a2da4967372bd8aba9facd08414b60e496a43378969d02274d69"} Jan 29 08:00:31 crc kubenswrapper[5017]: I0129 08:00:31.892210 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-nvmz6" podStartSLOduration=1.892187373 podStartE2EDuration="1.892187373s" podCreationTimestamp="2026-01-29 08:00:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:00:31.891149279 +0000 UTC m=+5118.265596889" watchObservedRunningTime="2026-01-29 08:00:31.892187373 +0000 UTC m=+5118.266634983" Jan 29 08:00:32 crc kubenswrapper[5017]: I0129 08:00:32.320260 5017 scope.go:117] "RemoveContainer" containerID="c58783d1bd795ffac490b01aaccb060f3071e08daa88d33a09f7ce8715d64300" Jan 29 08:00:32 crc kubenswrapper[5017]: E0129 08:00:32.320928 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:00:32 crc kubenswrapper[5017]: I0129 08:00:32.862310 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b6cd57555-9kq9j" event={"ID":"e0bad2ee-5b49-4893-b84a-9f28d470c04b","Type":"ContainerStarted","Data":"886f8f26dced61b8d88f0e14150de18c5bd57ffc5dc572fc523efed9e0dbe2f6"} Jan 29 08:00:32 crc kubenswrapper[5017]: I0129 08:00:32.892501 5017 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/dnsmasq-dns-b6cd57555-9kq9j" podStartSLOduration=2.892466439 podStartE2EDuration="2.892466439s" podCreationTimestamp="2026-01-29 08:00:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:00:32.884929267 +0000 UTC m=+5119.259376887" watchObservedRunningTime="2026-01-29 08:00:32.892466439 +0000 UTC m=+5119.266914049" Jan 29 08:00:33 crc kubenswrapper[5017]: I0129 08:00:33.870739 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b6cd57555-9kq9j" Jan 29 08:00:34 crc kubenswrapper[5017]: I0129 08:00:34.821679 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 29 08:00:34 crc kubenswrapper[5017]: I0129 08:00:34.881282 5017 generic.go:334] "Generic (PLEG): container finished" podID="746411fb-4038-425d-8c43-7e1969344ae3" containerID="34dca0050763a2da4967372bd8aba9facd08414b60e496a43378969d02274d69" exitCode=0 Jan 29 08:00:34 crc kubenswrapper[5017]: I0129 08:00:34.882374 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-nvmz6" event={"ID":"746411fb-4038-425d-8c43-7e1969344ae3","Type":"ContainerDied","Data":"34dca0050763a2da4967372bd8aba9facd08414b60e496a43378969d02274d69"} Jan 29 08:00:36 crc kubenswrapper[5017]: I0129 08:00:36.278209 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-nvmz6" Jan 29 08:00:36 crc kubenswrapper[5017]: I0129 08:00:36.427677 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h6l9q\" (UniqueName: \"kubernetes.io/projected/746411fb-4038-425d-8c43-7e1969344ae3-kube-api-access-h6l9q\") pod \"746411fb-4038-425d-8c43-7e1969344ae3\" (UID: \"746411fb-4038-425d-8c43-7e1969344ae3\") " Jan 29 08:00:36 crc kubenswrapper[5017]: I0129 08:00:36.428136 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/746411fb-4038-425d-8c43-7e1969344ae3-credential-keys\") pod \"746411fb-4038-425d-8c43-7e1969344ae3\" (UID: \"746411fb-4038-425d-8c43-7e1969344ae3\") " Jan 29 08:00:36 crc kubenswrapper[5017]: I0129 08:00:36.428189 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/746411fb-4038-425d-8c43-7e1969344ae3-fernet-keys\") pod \"746411fb-4038-425d-8c43-7e1969344ae3\" (UID: \"746411fb-4038-425d-8c43-7e1969344ae3\") " Jan 29 08:00:36 crc kubenswrapper[5017]: I0129 08:00:36.428276 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/746411fb-4038-425d-8c43-7e1969344ae3-combined-ca-bundle\") pod \"746411fb-4038-425d-8c43-7e1969344ae3\" (UID: \"746411fb-4038-425d-8c43-7e1969344ae3\") " Jan 29 08:00:36 crc kubenswrapper[5017]: I0129 08:00:36.428321 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/746411fb-4038-425d-8c43-7e1969344ae3-config-data\") pod \"746411fb-4038-425d-8c43-7e1969344ae3\" (UID: \"746411fb-4038-425d-8c43-7e1969344ae3\") " Jan 29 08:00:36 crc kubenswrapper[5017]: I0129 08:00:36.428358 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/746411fb-4038-425d-8c43-7e1969344ae3-scripts\") pod \"746411fb-4038-425d-8c43-7e1969344ae3\" (UID: \"746411fb-4038-425d-8c43-7e1969344ae3\") " Jan 29 08:00:36 crc kubenswrapper[5017]: I0129 08:00:36.434535 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/746411fb-4038-425d-8c43-7e1969344ae3-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "746411fb-4038-425d-8c43-7e1969344ae3" (UID: "746411fb-4038-425d-8c43-7e1969344ae3"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:00:36 crc kubenswrapper[5017]: I0129 08:00:36.434946 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/746411fb-4038-425d-8c43-7e1969344ae3-scripts" (OuterVolumeSpecName: "scripts") pod "746411fb-4038-425d-8c43-7e1969344ae3" (UID: "746411fb-4038-425d-8c43-7e1969344ae3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:00:36 crc kubenswrapper[5017]: I0129 08:00:36.435166 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/746411fb-4038-425d-8c43-7e1969344ae3-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "746411fb-4038-425d-8c43-7e1969344ae3" (UID: "746411fb-4038-425d-8c43-7e1969344ae3"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:00:36 crc kubenswrapper[5017]: I0129 08:00:36.436923 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/746411fb-4038-425d-8c43-7e1969344ae3-kube-api-access-h6l9q" (OuterVolumeSpecName: "kube-api-access-h6l9q") pod "746411fb-4038-425d-8c43-7e1969344ae3" (UID: "746411fb-4038-425d-8c43-7e1969344ae3"). InnerVolumeSpecName "kube-api-access-h6l9q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:00:36 crc kubenswrapper[5017]: I0129 08:00:36.451811 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/746411fb-4038-425d-8c43-7e1969344ae3-config-data" (OuterVolumeSpecName: "config-data") pod "746411fb-4038-425d-8c43-7e1969344ae3" (UID: "746411fb-4038-425d-8c43-7e1969344ae3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:00:36 crc kubenswrapper[5017]: I0129 08:00:36.481721 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/746411fb-4038-425d-8c43-7e1969344ae3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "746411fb-4038-425d-8c43-7e1969344ae3" (UID: "746411fb-4038-425d-8c43-7e1969344ae3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:00:36 crc kubenswrapper[5017]: I0129 08:00:36.531308 5017 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/746411fb-4038-425d-8c43-7e1969344ae3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:00:36 crc kubenswrapper[5017]: I0129 08:00:36.531345 5017 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/746411fb-4038-425d-8c43-7e1969344ae3-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 08:00:36 crc kubenswrapper[5017]: I0129 08:00:36.531356 5017 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/746411fb-4038-425d-8c43-7e1969344ae3-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 08:00:36 crc kubenswrapper[5017]: I0129 08:00:36.531368 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h6l9q\" (UniqueName: \"kubernetes.io/projected/746411fb-4038-425d-8c43-7e1969344ae3-kube-api-access-h6l9q\") on node \"crc\" DevicePath \"\"" Jan 29 08:00:36 crc kubenswrapper[5017]: I0129 08:00:36.531384 5017 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/746411fb-4038-425d-8c43-7e1969344ae3-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 29 08:00:36 crc kubenswrapper[5017]: I0129 08:00:36.531396 5017 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/746411fb-4038-425d-8c43-7e1969344ae3-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 29 08:00:36 crc kubenswrapper[5017]: I0129 08:00:36.902301 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-nvmz6" event={"ID":"746411fb-4038-425d-8c43-7e1969344ae3","Type":"ContainerDied","Data":"7fc22934aa97ad8f37156730edbd5984252c44a4f413f21f765e964bc587e8cf"} Jan 29 08:00:36 crc kubenswrapper[5017]: I0129 08:00:36.902358 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-nvmz6" Jan 29 08:00:36 crc kubenswrapper[5017]: I0129 08:00:36.902361 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7fc22934aa97ad8f37156730edbd5984252c44a4f413f21f765e964bc587e8cf" Jan 29 08:00:36 crc kubenswrapper[5017]: I0129 08:00:36.998809 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-nvmz6"] Jan 29 08:00:37 crc kubenswrapper[5017]: I0129 08:00:37.006288 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-nvmz6"] Jan 29 08:00:37 crc kubenswrapper[5017]: I0129 08:00:37.090708 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-x9j8w"] Jan 29 08:00:37 crc kubenswrapper[5017]: E0129 08:00:37.091276 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="746411fb-4038-425d-8c43-7e1969344ae3" containerName="keystone-bootstrap" Jan 29 08:00:37 crc kubenswrapper[5017]: I0129 08:00:37.091309 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="746411fb-4038-425d-8c43-7e1969344ae3" containerName="keystone-bootstrap" Jan 29 08:00:37 crc kubenswrapper[5017]: I0129 08:00:37.091580 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="746411fb-4038-425d-8c43-7e1969344ae3" containerName="keystone-bootstrap" Jan 29 08:00:37 crc kubenswrapper[5017]: I0129 08:00:37.092539 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-x9j8w" Jan 29 08:00:37 crc kubenswrapper[5017]: I0129 08:00:37.099564 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 29 08:00:37 crc kubenswrapper[5017]: I0129 08:00:37.099579 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-6ptwj" Jan 29 08:00:37 crc kubenswrapper[5017]: I0129 08:00:37.100698 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 29 08:00:37 crc kubenswrapper[5017]: I0129 08:00:37.101783 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 29 08:00:37 crc kubenswrapper[5017]: I0129 08:00:37.102246 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 29 08:00:37 crc kubenswrapper[5017]: I0129 08:00:37.109988 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-x9j8w"] Jan 29 08:00:37 crc kubenswrapper[5017]: I0129 08:00:37.150818 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d74e8c79-a281-42a3-b709-3045966eea64-scripts\") pod \"keystone-bootstrap-x9j8w\" (UID: \"d74e8c79-a281-42a3-b709-3045966eea64\") " pod="openstack/keystone-bootstrap-x9j8w" Jan 29 08:00:37 crc kubenswrapper[5017]: I0129 08:00:37.150908 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d74e8c79-a281-42a3-b709-3045966eea64-fernet-keys\") pod \"keystone-bootstrap-x9j8w\" (UID: \"d74e8c79-a281-42a3-b709-3045966eea64\") " pod="openstack/keystone-bootstrap-x9j8w" Jan 29 08:00:37 crc kubenswrapper[5017]: I0129 08:00:37.151169 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fht4\" (UniqueName: 
\"kubernetes.io/projected/d74e8c79-a281-42a3-b709-3045966eea64-kube-api-access-8fht4\") pod \"keystone-bootstrap-x9j8w\" (UID: \"d74e8c79-a281-42a3-b709-3045966eea64\") " pod="openstack/keystone-bootstrap-x9j8w" Jan 29 08:00:37 crc kubenswrapper[5017]: I0129 08:00:37.151347 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d74e8c79-a281-42a3-b709-3045966eea64-credential-keys\") pod \"keystone-bootstrap-x9j8w\" (UID: \"d74e8c79-a281-42a3-b709-3045966eea64\") " pod="openstack/keystone-bootstrap-x9j8w" Jan 29 08:00:37 crc kubenswrapper[5017]: I0129 08:00:37.151409 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d74e8c79-a281-42a3-b709-3045966eea64-combined-ca-bundle\") pod \"keystone-bootstrap-x9j8w\" (UID: \"d74e8c79-a281-42a3-b709-3045966eea64\") " pod="openstack/keystone-bootstrap-x9j8w" Jan 29 08:00:37 crc kubenswrapper[5017]: I0129 08:00:37.151437 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d74e8c79-a281-42a3-b709-3045966eea64-config-data\") pod \"keystone-bootstrap-x9j8w\" (UID: \"d74e8c79-a281-42a3-b709-3045966eea64\") " pod="openstack/keystone-bootstrap-x9j8w" Jan 29 08:00:37 crc kubenswrapper[5017]: I0129 08:00:37.252911 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d74e8c79-a281-42a3-b709-3045966eea64-scripts\") pod \"keystone-bootstrap-x9j8w\" (UID: \"d74e8c79-a281-42a3-b709-3045966eea64\") " pod="openstack/keystone-bootstrap-x9j8w" Jan 29 08:00:37 crc kubenswrapper[5017]: I0129 08:00:37.253585 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d74e8c79-a281-42a3-b709-3045966eea64-fernet-keys\") pod \"keystone-bootstrap-x9j8w\" (UID: \"d74e8c79-a281-42a3-b709-3045966eea64\") " pod="openstack/keystone-bootstrap-x9j8w" Jan 29 08:00:37 crc kubenswrapper[5017]: I0129 08:00:37.253636 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fht4\" (UniqueName: \"kubernetes.io/projected/d74e8c79-a281-42a3-b709-3045966eea64-kube-api-access-8fht4\") pod \"keystone-bootstrap-x9j8w\" (UID: \"d74e8c79-a281-42a3-b709-3045966eea64\") " pod="openstack/keystone-bootstrap-x9j8w" Jan 29 08:00:37 crc kubenswrapper[5017]: I0129 08:00:37.254256 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d74e8c79-a281-42a3-b709-3045966eea64-credential-keys\") pod \"keystone-bootstrap-x9j8w\" (UID: \"d74e8c79-a281-42a3-b709-3045966eea64\") " pod="openstack/keystone-bootstrap-x9j8w" Jan 29 08:00:37 crc kubenswrapper[5017]: I0129 08:00:37.254324 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d74e8c79-a281-42a3-b709-3045966eea64-combined-ca-bundle\") pod \"keystone-bootstrap-x9j8w\" (UID: \"d74e8c79-a281-42a3-b709-3045966eea64\") " pod="openstack/keystone-bootstrap-x9j8w" Jan 29 08:00:37 crc kubenswrapper[5017]: I0129 08:00:37.255015 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d74e8c79-a281-42a3-b709-3045966eea64-config-data\") 
pod \"keystone-bootstrap-x9j8w\" (UID: \"d74e8c79-a281-42a3-b709-3045966eea64\") " pod="openstack/keystone-bootstrap-x9j8w" Jan 29 08:00:37 crc kubenswrapper[5017]: I0129 08:00:37.256897 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d74e8c79-a281-42a3-b709-3045966eea64-scripts\") pod \"keystone-bootstrap-x9j8w\" (UID: \"d74e8c79-a281-42a3-b709-3045966eea64\") " pod="openstack/keystone-bootstrap-x9j8w" Jan 29 08:00:37 crc kubenswrapper[5017]: I0129 08:00:37.257952 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d74e8c79-a281-42a3-b709-3045966eea64-credential-keys\") pod \"keystone-bootstrap-x9j8w\" (UID: \"d74e8c79-a281-42a3-b709-3045966eea64\") " pod="openstack/keystone-bootstrap-x9j8w" Jan 29 08:00:37 crc kubenswrapper[5017]: I0129 08:00:37.259930 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d74e8c79-a281-42a3-b709-3045966eea64-config-data\") pod \"keystone-bootstrap-x9j8w\" (UID: \"d74e8c79-a281-42a3-b709-3045966eea64\") " pod="openstack/keystone-bootstrap-x9j8w" Jan 29 08:00:37 crc kubenswrapper[5017]: I0129 08:00:37.265519 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d74e8c79-a281-42a3-b709-3045966eea64-fernet-keys\") pod \"keystone-bootstrap-x9j8w\" (UID: \"d74e8c79-a281-42a3-b709-3045966eea64\") " pod="openstack/keystone-bootstrap-x9j8w" Jan 29 08:00:37 crc kubenswrapper[5017]: I0129 08:00:37.266441 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d74e8c79-a281-42a3-b709-3045966eea64-combined-ca-bundle\") pod \"keystone-bootstrap-x9j8w\" (UID: \"d74e8c79-a281-42a3-b709-3045966eea64\") " pod="openstack/keystone-bootstrap-x9j8w" Jan 29 08:00:37 crc kubenswrapper[5017]: I0129 08:00:37.276004 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fht4\" (UniqueName: \"kubernetes.io/projected/d74e8c79-a281-42a3-b709-3045966eea64-kube-api-access-8fht4\") pod \"keystone-bootstrap-x9j8w\" (UID: \"d74e8c79-a281-42a3-b709-3045966eea64\") " pod="openstack/keystone-bootstrap-x9j8w" Jan 29 08:00:37 crc kubenswrapper[5017]: I0129 08:00:37.421846 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-x9j8w" Jan 29 08:00:37 crc kubenswrapper[5017]: I0129 08:00:37.910274 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-x9j8w"] Jan 29 08:00:37 crc kubenswrapper[5017]: W0129 08:00:37.916869 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd74e8c79_a281_42a3_b709_3045966eea64.slice/crio-8df52c35ca2c364211a955897c08fc549907c7976d325f1e625c36ab83f047c7 WatchSource:0}: Error finding container 8df52c35ca2c364211a955897c08fc549907c7976d325f1e625c36ab83f047c7: Status 404 returned error can't find the container with id 8df52c35ca2c364211a955897c08fc549907c7976d325f1e625c36ab83f047c7 Jan 29 08:00:38 crc kubenswrapper[5017]: I0129 08:00:38.338321 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="746411fb-4038-425d-8c43-7e1969344ae3" path="/var/lib/kubelet/pods/746411fb-4038-425d-8c43-7e1969344ae3/volumes" Jan 29 08:00:38 crc kubenswrapper[5017]: I0129 08:00:38.920326 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-x9j8w" event={"ID":"d74e8c79-a281-42a3-b709-3045966eea64","Type":"ContainerStarted","Data":"4194c55d0ded9ac52934efd1495dfab80bf21021761c09a619c693339f745a46"} Jan 29 08:00:38 crc kubenswrapper[5017]: I0129 08:00:38.920380 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-x9j8w" event={"ID":"d74e8c79-a281-42a3-b709-3045966eea64","Type":"ContainerStarted","Data":"8df52c35ca2c364211a955897c08fc549907c7976d325f1e625c36ab83f047c7"} Jan 29 08:00:38 crc kubenswrapper[5017]: I0129 08:00:38.953200 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-x9j8w" podStartSLOduration=1.953173888 podStartE2EDuration="1.953173888s" podCreationTimestamp="2026-01-29 08:00:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:00:38.948904685 +0000 UTC m=+5125.323352375" watchObservedRunningTime="2026-01-29 08:00:38.953173888 +0000 UTC m=+5125.327621508" Jan 29 08:00:40 crc kubenswrapper[5017]: I0129 08:00:40.424278 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-b6cd57555-9kq9j" Jan 29 08:00:40 crc kubenswrapper[5017]: I0129 08:00:40.496747 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848d7d5c67-28tlc"] Jan 29 08:00:40 crc kubenswrapper[5017]: I0129 08:00:40.497232 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-848d7d5c67-28tlc" podUID="0e3a2105-3f64-43fd-986a-7d91a611b845" containerName="dnsmasq-dns" containerID="cri-o://21f901b2adbe83f245cd62b54f0de7c986d54eaa6d4a783ae3f353b966b4159e" gracePeriod=10 Jan 29 08:00:40 crc kubenswrapper[5017]: I0129 08:00:40.947287 5017 generic.go:334] "Generic (PLEG): container finished" podID="0e3a2105-3f64-43fd-986a-7d91a611b845" containerID="21f901b2adbe83f245cd62b54f0de7c986d54eaa6d4a783ae3f353b966b4159e" exitCode=0 Jan 29 08:00:40 crc kubenswrapper[5017]: I0129 08:00:40.947501 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848d7d5c67-28tlc" event={"ID":"0e3a2105-3f64-43fd-986a-7d91a611b845","Type":"ContainerDied","Data":"21f901b2adbe83f245cd62b54f0de7c986d54eaa6d4a783ae3f353b966b4159e"} Jan 29 08:00:41 crc kubenswrapper[5017]: I0129 08:00:41.014302 5017 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848d7d5c67-28tlc" Jan 29 08:00:41 crc kubenswrapper[5017]: I0129 08:00:41.044050 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w44t5\" (UniqueName: \"kubernetes.io/projected/0e3a2105-3f64-43fd-986a-7d91a611b845-kube-api-access-w44t5\") pod \"0e3a2105-3f64-43fd-986a-7d91a611b845\" (UID: \"0e3a2105-3f64-43fd-986a-7d91a611b845\") " Jan 29 08:00:41 crc kubenswrapper[5017]: I0129 08:00:41.044170 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e3a2105-3f64-43fd-986a-7d91a611b845-config\") pod \"0e3a2105-3f64-43fd-986a-7d91a611b845\" (UID: \"0e3a2105-3f64-43fd-986a-7d91a611b845\") " Jan 29 08:00:41 crc kubenswrapper[5017]: I0129 08:00:41.044396 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0e3a2105-3f64-43fd-986a-7d91a611b845-dns-svc\") pod \"0e3a2105-3f64-43fd-986a-7d91a611b845\" (UID: \"0e3a2105-3f64-43fd-986a-7d91a611b845\") " Jan 29 08:00:41 crc kubenswrapper[5017]: I0129 08:00:41.044482 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0e3a2105-3f64-43fd-986a-7d91a611b845-ovsdbserver-sb\") pod \"0e3a2105-3f64-43fd-986a-7d91a611b845\" (UID: \"0e3a2105-3f64-43fd-986a-7d91a611b845\") " Jan 29 08:00:41 crc kubenswrapper[5017]: I0129 08:00:41.044606 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0e3a2105-3f64-43fd-986a-7d91a611b845-ovsdbserver-nb\") pod \"0e3a2105-3f64-43fd-986a-7d91a611b845\" (UID: \"0e3a2105-3f64-43fd-986a-7d91a611b845\") " Jan 29 08:00:41 crc kubenswrapper[5017]: I0129 08:00:41.069253 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e3a2105-3f64-43fd-986a-7d91a611b845-kube-api-access-w44t5" (OuterVolumeSpecName: "kube-api-access-w44t5") pod "0e3a2105-3f64-43fd-986a-7d91a611b845" (UID: "0e3a2105-3f64-43fd-986a-7d91a611b845"). InnerVolumeSpecName "kube-api-access-w44t5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:00:41 crc kubenswrapper[5017]: I0129 08:00:41.097778 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e3a2105-3f64-43fd-986a-7d91a611b845-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0e3a2105-3f64-43fd-986a-7d91a611b845" (UID: "0e3a2105-3f64-43fd-986a-7d91a611b845"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:00:41 crc kubenswrapper[5017]: I0129 08:00:41.112048 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e3a2105-3f64-43fd-986a-7d91a611b845-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0e3a2105-3f64-43fd-986a-7d91a611b845" (UID: "0e3a2105-3f64-43fd-986a-7d91a611b845"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:00:41 crc kubenswrapper[5017]: I0129 08:00:41.117555 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e3a2105-3f64-43fd-986a-7d91a611b845-config" (OuterVolumeSpecName: "config") pod "0e3a2105-3f64-43fd-986a-7d91a611b845" (UID: "0e3a2105-3f64-43fd-986a-7d91a611b845"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:00:41 crc kubenswrapper[5017]: I0129 08:00:41.132177 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e3a2105-3f64-43fd-986a-7d91a611b845-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0e3a2105-3f64-43fd-986a-7d91a611b845" (UID: "0e3a2105-3f64-43fd-986a-7d91a611b845"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:00:41 crc kubenswrapper[5017]: I0129 08:00:41.148546 5017 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0e3a2105-3f64-43fd-986a-7d91a611b845-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 08:00:41 crc kubenswrapper[5017]: I0129 08:00:41.148605 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w44t5\" (UniqueName: \"kubernetes.io/projected/0e3a2105-3f64-43fd-986a-7d91a611b845-kube-api-access-w44t5\") on node \"crc\" DevicePath \"\"" Jan 29 08:00:41 crc kubenswrapper[5017]: I0129 08:00:41.148626 5017 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e3a2105-3f64-43fd-986a-7d91a611b845-config\") on node \"crc\" DevicePath \"\"" Jan 29 08:00:41 crc kubenswrapper[5017]: I0129 08:00:41.148641 5017 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0e3a2105-3f64-43fd-986a-7d91a611b845-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 08:00:41 crc kubenswrapper[5017]: I0129 08:00:41.148654 5017 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0e3a2105-3f64-43fd-986a-7d91a611b845-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 08:00:41 crc kubenswrapper[5017]: I0129 08:00:41.960622 5017 generic.go:334] "Generic (PLEG): container finished" podID="d74e8c79-a281-42a3-b709-3045966eea64" containerID="4194c55d0ded9ac52934efd1495dfab80bf21021761c09a619c693339f745a46" exitCode=0 Jan 29 08:00:41 crc kubenswrapper[5017]: I0129 08:00:41.960732 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-x9j8w" event={"ID":"d74e8c79-a281-42a3-b709-3045966eea64","Type":"ContainerDied","Data":"4194c55d0ded9ac52934efd1495dfab80bf21021761c09a619c693339f745a46"} Jan 29 08:00:41 crc kubenswrapper[5017]: I0129 08:00:41.964295 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848d7d5c67-28tlc" event={"ID":"0e3a2105-3f64-43fd-986a-7d91a611b845","Type":"ContainerDied","Data":"b58ebe51d2671f9a153b13bfbe45e95c919612fcffa054be0210a77799145d54"} Jan 29 08:00:41 crc kubenswrapper[5017]: I0129 08:00:41.964649 5017 scope.go:117] "RemoveContainer" containerID="21f901b2adbe83f245cd62b54f0de7c986d54eaa6d4a783ae3f353b966b4159e" Jan 29 08:00:41 crc kubenswrapper[5017]: I0129 08:00:41.964518 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-848d7d5c67-28tlc" Jan 29 08:00:41 crc kubenswrapper[5017]: I0129 08:00:41.997236 5017 scope.go:117] "RemoveContainer" containerID="6b090cb752f37187e942be7b07fe28991e5c0b14eabc92496a9072ffa22421e0" Jan 29 08:00:42 crc kubenswrapper[5017]: I0129 08:00:42.011511 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848d7d5c67-28tlc"] Jan 29 08:00:42 crc kubenswrapper[5017]: I0129 08:00:42.021315 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-848d7d5c67-28tlc"] Jan 29 08:00:42 crc kubenswrapper[5017]: I0129 08:00:42.328685 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e3a2105-3f64-43fd-986a-7d91a611b845" path="/var/lib/kubelet/pods/0e3a2105-3f64-43fd-986a-7d91a611b845/volumes" Jan 29 08:00:43 crc kubenswrapper[5017]: I0129 08:00:43.353467 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-x9j8w" Jan 29 08:00:43 crc kubenswrapper[5017]: I0129 08:00:43.391057 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d74e8c79-a281-42a3-b709-3045966eea64-credential-keys\") pod \"d74e8c79-a281-42a3-b709-3045966eea64\" (UID: \"d74e8c79-a281-42a3-b709-3045966eea64\") " Jan 29 08:00:43 crc kubenswrapper[5017]: I0129 08:00:43.391159 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d74e8c79-a281-42a3-b709-3045966eea64-config-data\") pod \"d74e8c79-a281-42a3-b709-3045966eea64\" (UID: \"d74e8c79-a281-42a3-b709-3045966eea64\") " Jan 29 08:00:43 crc kubenswrapper[5017]: I0129 08:00:43.391182 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d74e8c79-a281-42a3-b709-3045966eea64-fernet-keys\") pod \"d74e8c79-a281-42a3-b709-3045966eea64\" (UID: \"d74e8c79-a281-42a3-b709-3045966eea64\") " Jan 29 08:00:43 crc kubenswrapper[5017]: I0129 08:00:43.391236 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8fht4\" (UniqueName: \"kubernetes.io/projected/d74e8c79-a281-42a3-b709-3045966eea64-kube-api-access-8fht4\") pod \"d74e8c79-a281-42a3-b709-3045966eea64\" (UID: \"d74e8c79-a281-42a3-b709-3045966eea64\") " Jan 29 08:00:43 crc kubenswrapper[5017]: I0129 08:00:43.391304 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d74e8c79-a281-42a3-b709-3045966eea64-scripts\") pod \"d74e8c79-a281-42a3-b709-3045966eea64\" (UID: \"d74e8c79-a281-42a3-b709-3045966eea64\") " Jan 29 08:00:43 crc kubenswrapper[5017]: I0129 08:00:43.391423 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d74e8c79-a281-42a3-b709-3045966eea64-combined-ca-bundle\") pod \"d74e8c79-a281-42a3-b709-3045966eea64\" (UID: \"d74e8c79-a281-42a3-b709-3045966eea64\") " Jan 29 08:00:43 crc kubenswrapper[5017]: I0129 08:00:43.399135 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d74e8c79-a281-42a3-b709-3045966eea64-scripts" (OuterVolumeSpecName: "scripts") pod "d74e8c79-a281-42a3-b709-3045966eea64" (UID: "d74e8c79-a281-42a3-b709-3045966eea64"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:00:43 crc kubenswrapper[5017]: I0129 08:00:43.399196 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d74e8c79-a281-42a3-b709-3045966eea64-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "d74e8c79-a281-42a3-b709-3045966eea64" (UID: "d74e8c79-a281-42a3-b709-3045966eea64"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:00:43 crc kubenswrapper[5017]: I0129 08:00:43.399289 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d74e8c79-a281-42a3-b709-3045966eea64-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "d74e8c79-a281-42a3-b709-3045966eea64" (UID: "d74e8c79-a281-42a3-b709-3045966eea64"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:00:43 crc kubenswrapper[5017]: I0129 08:00:43.401124 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d74e8c79-a281-42a3-b709-3045966eea64-kube-api-access-8fht4" (OuterVolumeSpecName: "kube-api-access-8fht4") pod "d74e8c79-a281-42a3-b709-3045966eea64" (UID: "d74e8c79-a281-42a3-b709-3045966eea64"). InnerVolumeSpecName "kube-api-access-8fht4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:00:43 crc kubenswrapper[5017]: I0129 08:00:43.417216 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d74e8c79-a281-42a3-b709-3045966eea64-config-data" (OuterVolumeSpecName: "config-data") pod "d74e8c79-a281-42a3-b709-3045966eea64" (UID: "d74e8c79-a281-42a3-b709-3045966eea64"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:00:43 crc kubenswrapper[5017]: I0129 08:00:43.436922 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d74e8c79-a281-42a3-b709-3045966eea64-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d74e8c79-a281-42a3-b709-3045966eea64" (UID: "d74e8c79-a281-42a3-b709-3045966eea64"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:00:43 crc kubenswrapper[5017]: I0129 08:00:43.494232 5017 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d74e8c79-a281-42a3-b709-3045966eea64-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 08:00:43 crc kubenswrapper[5017]: I0129 08:00:43.494281 5017 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d74e8c79-a281-42a3-b709-3045966eea64-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:00:43 crc kubenswrapper[5017]: I0129 08:00:43.494296 5017 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d74e8c79-a281-42a3-b709-3045966eea64-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 29 08:00:43 crc kubenswrapper[5017]: I0129 08:00:43.494307 5017 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d74e8c79-a281-42a3-b709-3045966eea64-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 08:00:43 crc kubenswrapper[5017]: I0129 08:00:43.494318 5017 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d74e8c79-a281-42a3-b709-3045966eea64-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 29 08:00:43 crc kubenswrapper[5017]: I0129 08:00:43.494329 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8fht4\" (UniqueName: \"kubernetes.io/projected/d74e8c79-a281-42a3-b709-3045966eea64-kube-api-access-8fht4\") on node \"crc\" DevicePath \"\"" Jan 29 08:00:44 crc kubenswrapper[5017]: I0129 08:00:44.003114 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-x9j8w" event={"ID":"d74e8c79-a281-42a3-b709-3045966eea64","Type":"ContainerDied","Data":"8df52c35ca2c364211a955897c08fc549907c7976d325f1e625c36ab83f047c7"} Jan 29 08:00:44 crc kubenswrapper[5017]: I0129 08:00:44.005207 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8df52c35ca2c364211a955897c08fc549907c7976d325f1e625c36ab83f047c7" Jan 29 08:00:44 crc kubenswrapper[5017]: I0129 08:00:44.005337 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-x9j8w" Jan 29 08:00:44 crc kubenswrapper[5017]: I0129 08:00:44.081615 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-7c6bdcf98c-m44f5"] Jan 29 08:00:44 crc kubenswrapper[5017]: E0129 08:00:44.082108 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e3a2105-3f64-43fd-986a-7d91a611b845" containerName="dnsmasq-dns" Jan 29 08:00:44 crc kubenswrapper[5017]: I0129 08:00:44.082134 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e3a2105-3f64-43fd-986a-7d91a611b845" containerName="dnsmasq-dns" Jan 29 08:00:44 crc kubenswrapper[5017]: E0129 08:00:44.082148 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e3a2105-3f64-43fd-986a-7d91a611b845" containerName="init" Jan 29 08:00:44 crc kubenswrapper[5017]: I0129 08:00:44.082155 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e3a2105-3f64-43fd-986a-7d91a611b845" containerName="init" Jan 29 08:00:44 crc kubenswrapper[5017]: E0129 08:00:44.082176 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d74e8c79-a281-42a3-b709-3045966eea64" containerName="keystone-bootstrap" Jan 29 08:00:44 crc kubenswrapper[5017]: I0129 08:00:44.082183 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="d74e8c79-a281-42a3-b709-3045966eea64" containerName="keystone-bootstrap" Jan 29 08:00:44 crc kubenswrapper[5017]: I0129 08:00:44.082355 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="d74e8c79-a281-42a3-b709-3045966eea64" containerName="keystone-bootstrap" Jan 29 08:00:44 crc kubenswrapper[5017]: I0129 08:00:44.082379 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e3a2105-3f64-43fd-986a-7d91a611b845" containerName="dnsmasq-dns" Jan 29 08:00:44 crc kubenswrapper[5017]: I0129 08:00:44.083036 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-7c6bdcf98c-m44f5" Jan 29 08:00:44 crc kubenswrapper[5017]: I0129 08:00:44.087093 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 29 08:00:44 crc kubenswrapper[5017]: I0129 08:00:44.087478 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 29 08:00:44 crc kubenswrapper[5017]: I0129 08:00:44.102013 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 29 08:00:44 crc kubenswrapper[5017]: I0129 08:00:44.102509 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-6ptwj" Jan 29 08:00:44 crc kubenswrapper[5017]: I0129 08:00:44.106170 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7c6bdcf98c-m44f5"] Jan 29 08:00:44 crc kubenswrapper[5017]: I0129 08:00:44.205151 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmlm5\" (UniqueName: \"kubernetes.io/projected/a15bec4c-245f-4fa6-ba0d-5efcaea6aab9-kube-api-access-dmlm5\") pod \"keystone-7c6bdcf98c-m44f5\" (UID: \"a15bec4c-245f-4fa6-ba0d-5efcaea6aab9\") " pod="openstack/keystone-7c6bdcf98c-m44f5" Jan 29 08:00:44 crc kubenswrapper[5017]: I0129 08:00:44.205286 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a15bec4c-245f-4fa6-ba0d-5efcaea6aab9-combined-ca-bundle\") pod \"keystone-7c6bdcf98c-m44f5\" (UID: \"a15bec4c-245f-4fa6-ba0d-5efcaea6aab9\") " pod="openstack/keystone-7c6bdcf98c-m44f5" Jan 29 08:00:44 crc kubenswrapper[5017]: I0129 08:00:44.205325 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a15bec4c-245f-4fa6-ba0d-5efcaea6aab9-fernet-keys\") pod \"keystone-7c6bdcf98c-m44f5\" (UID: \"a15bec4c-245f-4fa6-ba0d-5efcaea6aab9\") " pod="openstack/keystone-7c6bdcf98c-m44f5" Jan 29 08:00:44 crc kubenswrapper[5017]: I0129 08:00:44.205376 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a15bec4c-245f-4fa6-ba0d-5efcaea6aab9-config-data\") pod \"keystone-7c6bdcf98c-m44f5\" (UID: \"a15bec4c-245f-4fa6-ba0d-5efcaea6aab9\") " pod="openstack/keystone-7c6bdcf98c-m44f5" Jan 29 08:00:44 crc kubenswrapper[5017]: I0129 08:00:44.205438 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a15bec4c-245f-4fa6-ba0d-5efcaea6aab9-scripts\") pod \"keystone-7c6bdcf98c-m44f5\" (UID: \"a15bec4c-245f-4fa6-ba0d-5efcaea6aab9\") " pod="openstack/keystone-7c6bdcf98c-m44f5" Jan 29 08:00:44 crc kubenswrapper[5017]: I0129 08:00:44.205470 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a15bec4c-245f-4fa6-ba0d-5efcaea6aab9-credential-keys\") pod \"keystone-7c6bdcf98c-m44f5\" (UID: \"a15bec4c-245f-4fa6-ba0d-5efcaea6aab9\") " pod="openstack/keystone-7c6bdcf98c-m44f5" Jan 29 08:00:44 crc kubenswrapper[5017]: I0129 08:00:44.307195 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a15bec4c-245f-4fa6-ba0d-5efcaea6aab9-combined-ca-bundle\") pod 
\"keystone-7c6bdcf98c-m44f5\" (UID: \"a15bec4c-245f-4fa6-ba0d-5efcaea6aab9\") " pod="openstack/keystone-7c6bdcf98c-m44f5" Jan 29 08:00:44 crc kubenswrapper[5017]: I0129 08:00:44.307256 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a15bec4c-245f-4fa6-ba0d-5efcaea6aab9-fernet-keys\") pod \"keystone-7c6bdcf98c-m44f5\" (UID: \"a15bec4c-245f-4fa6-ba0d-5efcaea6aab9\") " pod="openstack/keystone-7c6bdcf98c-m44f5" Jan 29 08:00:44 crc kubenswrapper[5017]: I0129 08:00:44.307328 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a15bec4c-245f-4fa6-ba0d-5efcaea6aab9-config-data\") pod \"keystone-7c6bdcf98c-m44f5\" (UID: \"a15bec4c-245f-4fa6-ba0d-5efcaea6aab9\") " pod="openstack/keystone-7c6bdcf98c-m44f5" Jan 29 08:00:44 crc kubenswrapper[5017]: I0129 08:00:44.307398 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a15bec4c-245f-4fa6-ba0d-5efcaea6aab9-scripts\") pod \"keystone-7c6bdcf98c-m44f5\" (UID: \"a15bec4c-245f-4fa6-ba0d-5efcaea6aab9\") " pod="openstack/keystone-7c6bdcf98c-m44f5" Jan 29 08:00:44 crc kubenswrapper[5017]: I0129 08:00:44.307429 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a15bec4c-245f-4fa6-ba0d-5efcaea6aab9-credential-keys\") pod \"keystone-7c6bdcf98c-m44f5\" (UID: \"a15bec4c-245f-4fa6-ba0d-5efcaea6aab9\") " pod="openstack/keystone-7c6bdcf98c-m44f5" Jan 29 08:00:44 crc kubenswrapper[5017]: I0129 08:00:44.307458 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmlm5\" (UniqueName: \"kubernetes.io/projected/a15bec4c-245f-4fa6-ba0d-5efcaea6aab9-kube-api-access-dmlm5\") pod \"keystone-7c6bdcf98c-m44f5\" (UID: \"a15bec4c-245f-4fa6-ba0d-5efcaea6aab9\") " pod="openstack/keystone-7c6bdcf98c-m44f5" Jan 29 08:00:44 crc kubenswrapper[5017]: I0129 08:00:44.313671 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a15bec4c-245f-4fa6-ba0d-5efcaea6aab9-combined-ca-bundle\") pod \"keystone-7c6bdcf98c-m44f5\" (UID: \"a15bec4c-245f-4fa6-ba0d-5efcaea6aab9\") " pod="openstack/keystone-7c6bdcf98c-m44f5" Jan 29 08:00:44 crc kubenswrapper[5017]: I0129 08:00:44.313720 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a15bec4c-245f-4fa6-ba0d-5efcaea6aab9-fernet-keys\") pod \"keystone-7c6bdcf98c-m44f5\" (UID: \"a15bec4c-245f-4fa6-ba0d-5efcaea6aab9\") " pod="openstack/keystone-7c6bdcf98c-m44f5" Jan 29 08:00:44 crc kubenswrapper[5017]: I0129 08:00:44.313794 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a15bec4c-245f-4fa6-ba0d-5efcaea6aab9-config-data\") pod \"keystone-7c6bdcf98c-m44f5\" (UID: \"a15bec4c-245f-4fa6-ba0d-5efcaea6aab9\") " pod="openstack/keystone-7c6bdcf98c-m44f5" Jan 29 08:00:44 crc kubenswrapper[5017]: I0129 08:00:44.314241 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a15bec4c-245f-4fa6-ba0d-5efcaea6aab9-scripts\") pod \"keystone-7c6bdcf98c-m44f5\" (UID: \"a15bec4c-245f-4fa6-ba0d-5efcaea6aab9\") " pod="openstack/keystone-7c6bdcf98c-m44f5" Jan 29 08:00:44 crc kubenswrapper[5017]: I0129 08:00:44.321831 5017 
scope.go:117] "RemoveContainer" containerID="c58783d1bd795ffac490b01aaccb060f3071e08daa88d33a09f7ce8715d64300" Jan 29 08:00:44 crc kubenswrapper[5017]: E0129 08:00:44.322247 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:00:44 crc kubenswrapper[5017]: I0129 08:00:44.329638 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmlm5\" (UniqueName: \"kubernetes.io/projected/a15bec4c-245f-4fa6-ba0d-5efcaea6aab9-kube-api-access-dmlm5\") pod \"keystone-7c6bdcf98c-m44f5\" (UID: \"a15bec4c-245f-4fa6-ba0d-5efcaea6aab9\") " pod="openstack/keystone-7c6bdcf98c-m44f5" Jan 29 08:00:44 crc kubenswrapper[5017]: I0129 08:00:44.330157 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a15bec4c-245f-4fa6-ba0d-5efcaea6aab9-credential-keys\") pod \"keystone-7c6bdcf98c-m44f5\" (UID: \"a15bec4c-245f-4fa6-ba0d-5efcaea6aab9\") " pod="openstack/keystone-7c6bdcf98c-m44f5" Jan 29 08:00:44 crc kubenswrapper[5017]: I0129 08:00:44.444172 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7c6bdcf98c-m44f5" Jan 29 08:00:45 crc kubenswrapper[5017]: I0129 08:00:45.099923 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7c6bdcf98c-m44f5"] Jan 29 08:00:46 crc kubenswrapper[5017]: I0129 08:00:46.029123 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7c6bdcf98c-m44f5" event={"ID":"a15bec4c-245f-4fa6-ba0d-5efcaea6aab9","Type":"ContainerStarted","Data":"47f2f5dede440bf1a41b4185549f3bbbb8c3e2623070dcddcc985c9a9ff6e6f6"} Jan 29 08:00:46 crc kubenswrapper[5017]: I0129 08:00:46.029612 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-7c6bdcf98c-m44f5" Jan 29 08:00:46 crc kubenswrapper[5017]: I0129 08:00:46.029634 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7c6bdcf98c-m44f5" event={"ID":"a15bec4c-245f-4fa6-ba0d-5efcaea6aab9","Type":"ContainerStarted","Data":"a7d7fe26e48653cd34e2828152f207c658dc8fc1601fefcec8c2c3e521faa8ad"} Jan 29 08:00:46 crc kubenswrapper[5017]: I0129 08:00:46.056468 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-7c6bdcf98c-m44f5" podStartSLOduration=2.056434768 podStartE2EDuration="2.056434768s" podCreationTimestamp="2026-01-29 08:00:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:00:46.049812199 +0000 UTC m=+5132.424259809" watchObservedRunningTime="2026-01-29 08:00:46.056434768 +0000 UTC m=+5132.430882388" Jan 29 08:00:59 crc kubenswrapper[5017]: I0129 08:00:59.316813 5017 scope.go:117] "RemoveContainer" containerID="c58783d1bd795ffac490b01aaccb060f3071e08daa88d33a09f7ce8715d64300" Jan 29 08:00:59 crc kubenswrapper[5017]: E0129 08:00:59.317673 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
Jan 29 08:01:00 crc kubenswrapper[5017]: I0129 08:01:00.154152 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29494561-v8j5w"]
Jan 29 08:01:00 crc kubenswrapper[5017]: I0129 08:01:00.155940 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29494561-v8j5w"
Jan 29 08:01:00 crc kubenswrapper[5017]: I0129 08:01:00.166540 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29494561-v8j5w"]
Jan 29 08:01:00 crc kubenswrapper[5017]: I0129 08:01:00.258549 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46955347-1e4d-4ae1-97d7-611434a6def3-combined-ca-bundle\") pod \"keystone-cron-29494561-v8j5w\" (UID: \"46955347-1e4d-4ae1-97d7-611434a6def3\") " pod="openstack/keystone-cron-29494561-v8j5w"
Jan 29 08:01:00 crc kubenswrapper[5017]: I0129 08:01:00.258619 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbzzd\" (UniqueName: \"kubernetes.io/projected/46955347-1e4d-4ae1-97d7-611434a6def3-kube-api-access-lbzzd\") pod \"keystone-cron-29494561-v8j5w\" (UID: \"46955347-1e4d-4ae1-97d7-611434a6def3\") " pod="openstack/keystone-cron-29494561-v8j5w"
Jan 29 08:01:00 crc kubenswrapper[5017]: I0129 08:01:00.258856 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/46955347-1e4d-4ae1-97d7-611434a6def3-fernet-keys\") pod \"keystone-cron-29494561-v8j5w\" (UID: \"46955347-1e4d-4ae1-97d7-611434a6def3\") " pod="openstack/keystone-cron-29494561-v8j5w"
Jan 29 08:01:00 crc kubenswrapper[5017]: I0129 08:01:00.258903 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46955347-1e4d-4ae1-97d7-611434a6def3-config-data\") pod \"keystone-cron-29494561-v8j5w\" (UID: \"46955347-1e4d-4ae1-97d7-611434a6def3\") " pod="openstack/keystone-cron-29494561-v8j5w"
Jan 29 08:01:00 crc kubenswrapper[5017]: I0129 08:01:00.361352 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46955347-1e4d-4ae1-97d7-611434a6def3-combined-ca-bundle\") pod \"keystone-cron-29494561-v8j5w\" (UID: \"46955347-1e4d-4ae1-97d7-611434a6def3\") " pod="openstack/keystone-cron-29494561-v8j5w"
Jan 29 08:01:00 crc kubenswrapper[5017]: I0129 08:01:00.361447 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbzzd\" (UniqueName: \"kubernetes.io/projected/46955347-1e4d-4ae1-97d7-611434a6def3-kube-api-access-lbzzd\") pod \"keystone-cron-29494561-v8j5w\" (UID: \"46955347-1e4d-4ae1-97d7-611434a6def3\") " pod="openstack/keystone-cron-29494561-v8j5w"
Jan 29 08:01:00 crc kubenswrapper[5017]: I0129 08:01:00.361608 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/46955347-1e4d-4ae1-97d7-611434a6def3-fernet-keys\") pod \"keystone-cron-29494561-v8j5w\" (UID: \"46955347-1e4d-4ae1-97d7-611434a6def3\") " pod="openstack/keystone-cron-29494561-v8j5w"
Jan 29 08:01:00 crc kubenswrapper[5017]: I0129 08:01:00.361644 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46955347-1e4d-4ae1-97d7-611434a6def3-config-data\") pod \"keystone-cron-29494561-v8j5w\" (UID: \"46955347-1e4d-4ae1-97d7-611434a6def3\") " pod="openstack/keystone-cron-29494561-v8j5w"
Jan 29 08:01:00 crc kubenswrapper[5017]: I0129 08:01:00.369540 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46955347-1e4d-4ae1-97d7-611434a6def3-combined-ca-bundle\") pod \"keystone-cron-29494561-v8j5w\" (UID: \"46955347-1e4d-4ae1-97d7-611434a6def3\") " pod="openstack/keystone-cron-29494561-v8j5w"
Jan 29 08:01:00 crc kubenswrapper[5017]: I0129 08:01:00.369600 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/46955347-1e4d-4ae1-97d7-611434a6def3-fernet-keys\") pod \"keystone-cron-29494561-v8j5w\" (UID: \"46955347-1e4d-4ae1-97d7-611434a6def3\") " pod="openstack/keystone-cron-29494561-v8j5w"
Jan 29 08:01:00 crc kubenswrapper[5017]: I0129 08:01:00.369667 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46955347-1e4d-4ae1-97d7-611434a6def3-config-data\") pod \"keystone-cron-29494561-v8j5w\" (UID: \"46955347-1e4d-4ae1-97d7-611434a6def3\") " pod="openstack/keystone-cron-29494561-v8j5w"
Jan 29 08:01:00 crc kubenswrapper[5017]: I0129 08:01:00.379973 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbzzd\" (UniqueName: \"kubernetes.io/projected/46955347-1e4d-4ae1-97d7-611434a6def3-kube-api-access-lbzzd\") pod \"keystone-cron-29494561-v8j5w\" (UID: \"46955347-1e4d-4ae1-97d7-611434a6def3\") " pod="openstack/keystone-cron-29494561-v8j5w"
Jan 29 08:01:00 crc kubenswrapper[5017]: I0129 08:01:00.492437 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29494561-v8j5w"
Jan 29 08:01:01 crc kubenswrapper[5017]: I0129 08:01:01.000322 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29494561-v8j5w"]
Jan 29 08:01:01 crc kubenswrapper[5017]: I0129 08:01:01.181480 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29494561-v8j5w" event={"ID":"46955347-1e4d-4ae1-97d7-611434a6def3","Type":"ContainerStarted","Data":"f0e00e162fd8da11b2034ad550d0bc10d6e7138b562e97a693c7abeec0e4e2fc"}
Jan 29 08:01:02 crc kubenswrapper[5017]: I0129 08:01:02.193245 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29494561-v8j5w" event={"ID":"46955347-1e4d-4ae1-97d7-611434a6def3","Type":"ContainerStarted","Data":"6873101d79cbe326bfd7c4832764b043ac46f57ee0b70461c22eabfdf034c886"}
Jan 29 08:01:02 crc kubenswrapper[5017]: I0129 08:01:02.222838 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29494561-v8j5w" podStartSLOduration=2.222806412 podStartE2EDuration="2.222806412s" podCreationTimestamp="2026-01-29 08:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:01:02.215385694 +0000 UTC m=+5148.589833314" watchObservedRunningTime="2026-01-29 08:01:02.222806412 +0000 UTC m=+5148.597254032"
Jan 29 08:01:04 crc kubenswrapper[5017]: I0129 08:01:04.212125 5017 generic.go:334] "Generic (PLEG): container finished" podID="46955347-1e4d-4ae1-97d7-611434a6def3" containerID="6873101d79cbe326bfd7c4832764b043ac46f57ee0b70461c22eabfdf034c886" exitCode=0
Jan 29 08:01:04 crc kubenswrapper[5017]: I0129 08:01:04.212248 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29494561-v8j5w" event={"ID":"46955347-1e4d-4ae1-97d7-611434a6def3","Type":"ContainerDied","Data":"6873101d79cbe326bfd7c4832764b043ac46f57ee0b70461c22eabfdf034c886"}
Jan 29 08:01:05 crc kubenswrapper[5017]: I0129 08:01:05.592539 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29494561-v8j5w"
Need to start a new one" pod="openstack/keystone-cron-29494561-v8j5w" Jan 29 08:01:05 crc kubenswrapper[5017]: I0129 08:01:05.675879 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/46955347-1e4d-4ae1-97d7-611434a6def3-fernet-keys\") pod \"46955347-1e4d-4ae1-97d7-611434a6def3\" (UID: \"46955347-1e4d-4ae1-97d7-611434a6def3\") " Jan 29 08:01:05 crc kubenswrapper[5017]: I0129 08:01:05.675953 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lbzzd\" (UniqueName: \"kubernetes.io/projected/46955347-1e4d-4ae1-97d7-611434a6def3-kube-api-access-lbzzd\") pod \"46955347-1e4d-4ae1-97d7-611434a6def3\" (UID: \"46955347-1e4d-4ae1-97d7-611434a6def3\") " Jan 29 08:01:05 crc kubenswrapper[5017]: I0129 08:01:05.676033 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46955347-1e4d-4ae1-97d7-611434a6def3-combined-ca-bundle\") pod \"46955347-1e4d-4ae1-97d7-611434a6def3\" (UID: \"46955347-1e4d-4ae1-97d7-611434a6def3\") " Jan 29 08:01:05 crc kubenswrapper[5017]: I0129 08:01:05.676150 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46955347-1e4d-4ae1-97d7-611434a6def3-config-data\") pod \"46955347-1e4d-4ae1-97d7-611434a6def3\" (UID: \"46955347-1e4d-4ae1-97d7-611434a6def3\") " Jan 29 08:01:05 crc kubenswrapper[5017]: I0129 08:01:05.685287 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46955347-1e4d-4ae1-97d7-611434a6def3-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "46955347-1e4d-4ae1-97d7-611434a6def3" (UID: "46955347-1e4d-4ae1-97d7-611434a6def3"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:01:05 crc kubenswrapper[5017]: I0129 08:01:05.697232 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46955347-1e4d-4ae1-97d7-611434a6def3-kube-api-access-lbzzd" (OuterVolumeSpecName: "kube-api-access-lbzzd") pod "46955347-1e4d-4ae1-97d7-611434a6def3" (UID: "46955347-1e4d-4ae1-97d7-611434a6def3"). InnerVolumeSpecName "kube-api-access-lbzzd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:01:05 crc kubenswrapper[5017]: I0129 08:01:05.708745 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46955347-1e4d-4ae1-97d7-611434a6def3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "46955347-1e4d-4ae1-97d7-611434a6def3" (UID: "46955347-1e4d-4ae1-97d7-611434a6def3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:01:05 crc kubenswrapper[5017]: I0129 08:01:05.751742 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46955347-1e4d-4ae1-97d7-611434a6def3-config-data" (OuterVolumeSpecName: "config-data") pod "46955347-1e4d-4ae1-97d7-611434a6def3" (UID: "46955347-1e4d-4ae1-97d7-611434a6def3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:01:05 crc kubenswrapper[5017]: I0129 08:01:05.778946 5017 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46955347-1e4d-4ae1-97d7-611434a6def3-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 08:01:05 crc kubenswrapper[5017]: I0129 08:01:05.779026 5017 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/46955347-1e4d-4ae1-97d7-611434a6def3-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 29 08:01:05 crc kubenswrapper[5017]: I0129 08:01:05.779045 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lbzzd\" (UniqueName: \"kubernetes.io/projected/46955347-1e4d-4ae1-97d7-611434a6def3-kube-api-access-lbzzd\") on node \"crc\" DevicePath \"\"" Jan 29 08:01:05 crc kubenswrapper[5017]: I0129 08:01:05.779064 5017 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46955347-1e4d-4ae1-97d7-611434a6def3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:01:06 crc kubenswrapper[5017]: I0129 08:01:06.233235 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29494561-v8j5w" event={"ID":"46955347-1e4d-4ae1-97d7-611434a6def3","Type":"ContainerDied","Data":"f0e00e162fd8da11b2034ad550d0bc10d6e7138b562e97a693c7abeec0e4e2fc"} Jan 29 08:01:06 crc kubenswrapper[5017]: I0129 08:01:06.233290 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f0e00e162fd8da11b2034ad550d0bc10d6e7138b562e97a693c7abeec0e4e2fc" Jan 29 08:01:06 crc kubenswrapper[5017]: I0129 08:01:06.233365 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29494561-v8j5w" Jan 29 08:01:13 crc kubenswrapper[5017]: I0129 08:01:13.317050 5017 scope.go:117] "RemoveContainer" containerID="c58783d1bd795ffac490b01aaccb060f3071e08daa88d33a09f7ce8715d64300" Jan 29 08:01:13 crc kubenswrapper[5017]: E0129 08:01:13.317809 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:01:16 crc kubenswrapper[5017]: I0129 08:01:16.139551 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-7c6bdcf98c-m44f5" Jan 29 08:01:19 crc kubenswrapper[5017]: I0129 08:01:19.048494 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 29 08:01:19 crc kubenswrapper[5017]: E0129 08:01:19.050097 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46955347-1e4d-4ae1-97d7-611434a6def3" containerName="keystone-cron" Jan 29 08:01:19 crc kubenswrapper[5017]: I0129 08:01:19.050123 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="46955347-1e4d-4ae1-97d7-611434a6def3" containerName="keystone-cron" Jan 29 08:01:19 crc kubenswrapper[5017]: I0129 08:01:19.050413 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="46955347-1e4d-4ae1-97d7-611434a6def3" containerName="keystone-cron" Jan 29 08:01:19 crc kubenswrapper[5017]: I0129 08:01:19.051323 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 29 08:01:19 crc kubenswrapper[5017]: I0129 08:01:19.055947 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 29 08:01:19 crc kubenswrapper[5017]: I0129 08:01:19.056190 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-s868l" Jan 29 08:01:19 crc kubenswrapper[5017]: I0129 08:01:19.056479 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 29 08:01:19 crc kubenswrapper[5017]: I0129 08:01:19.058300 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 29 08:01:19 crc kubenswrapper[5017]: I0129 08:01:19.228137 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/953f3dbb-a423-4244-a833-a876051cb0d2-openstack-config-secret\") pod \"openstackclient\" (UID: \"953f3dbb-a423-4244-a833-a876051cb0d2\") " pod="openstack/openstackclient" Jan 29 08:01:19 crc kubenswrapper[5017]: I0129 08:01:19.228187 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s44dh\" (UniqueName: \"kubernetes.io/projected/953f3dbb-a423-4244-a833-a876051cb0d2-kube-api-access-s44dh\") pod \"openstackclient\" (UID: \"953f3dbb-a423-4244-a833-a876051cb0d2\") " pod="openstack/openstackclient" Jan 29 08:01:19 crc kubenswrapper[5017]: I0129 08:01:19.228209 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/953f3dbb-a423-4244-a833-a876051cb0d2-openstack-config\") pod \"openstackclient\" (UID: \"953f3dbb-a423-4244-a833-a876051cb0d2\") " pod="openstack/openstackclient" Jan 29 08:01:19 crc kubenswrapper[5017]: I0129 08:01:19.330157 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/953f3dbb-a423-4244-a833-a876051cb0d2-openstack-config-secret\") pod \"openstackclient\" (UID: \"953f3dbb-a423-4244-a833-a876051cb0d2\") " pod="openstack/openstackclient" Jan 29 08:01:19 crc kubenswrapper[5017]: I0129 08:01:19.330484 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s44dh\" (UniqueName: \"kubernetes.io/projected/953f3dbb-a423-4244-a833-a876051cb0d2-kube-api-access-s44dh\") pod \"openstackclient\" (UID: \"953f3dbb-a423-4244-a833-a876051cb0d2\") " pod="openstack/openstackclient" Jan 29 08:01:19 crc kubenswrapper[5017]: I0129 08:01:19.330575 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/953f3dbb-a423-4244-a833-a876051cb0d2-openstack-config\") pod \"openstackclient\" (UID: \"953f3dbb-a423-4244-a833-a876051cb0d2\") " pod="openstack/openstackclient" Jan 29 08:01:19 crc kubenswrapper[5017]: I0129 08:01:19.331630 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/953f3dbb-a423-4244-a833-a876051cb0d2-openstack-config\") pod \"openstackclient\" (UID: \"953f3dbb-a423-4244-a833-a876051cb0d2\") " pod="openstack/openstackclient" Jan 29 08:01:19 crc kubenswrapper[5017]: I0129 08:01:19.349994 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/953f3dbb-a423-4244-a833-a876051cb0d2-openstack-config-secret\") pod \"openstackclient\" (UID: \"953f3dbb-a423-4244-a833-a876051cb0d2\") " pod="openstack/openstackclient" Jan 29 08:01:19 crc kubenswrapper[5017]: I0129 08:01:19.353751 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s44dh\" (UniqueName: \"kubernetes.io/projected/953f3dbb-a423-4244-a833-a876051cb0d2-kube-api-access-s44dh\") pod \"openstackclient\" (UID: \"953f3dbb-a423-4244-a833-a876051cb0d2\") " pod="openstack/openstackclient" Jan 29 08:01:19 crc kubenswrapper[5017]: I0129 08:01:19.383247 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 29 08:01:19 crc kubenswrapper[5017]: I0129 08:01:19.750107 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 29 08:01:20 crc kubenswrapper[5017]: I0129 08:01:20.369526 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"953f3dbb-a423-4244-a833-a876051cb0d2","Type":"ContainerStarted","Data":"c2aaec687ce597b2dbbb430cd9a948a22c08d1fa849e8ae385cf3b9741599908"} Jan 29 08:01:20 crc kubenswrapper[5017]: I0129 08:01:20.370063 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"953f3dbb-a423-4244-a833-a876051cb0d2","Type":"ContainerStarted","Data":"efc833d73679b9bf1caa62fa6d4bf20fd24afa34173e43e96878e12b992bac00"} Jan 29 08:01:20 crc kubenswrapper[5017]: I0129 08:01:20.393221 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=1.393196871 podStartE2EDuration="1.393196871s" podCreationTimestamp="2026-01-29 08:01:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:01:20.387155796 +0000 UTC m=+5166.761603406" watchObservedRunningTime="2026-01-29 08:01:20.393196871 +0000 UTC m=+5166.767644481" Jan 29 08:01:27 crc kubenswrapper[5017]: I0129 08:01:27.316382 5017 scope.go:117] "RemoveContainer" containerID="c58783d1bd795ffac490b01aaccb060f3071e08daa88d33a09f7ce8715d64300" Jan 29 08:01:27 crc kubenswrapper[5017]: E0129 08:01:27.317133 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:01:38 crc kubenswrapper[5017]: I0129 08:01:38.317041 5017 scope.go:117] "RemoveContainer" containerID="c58783d1bd795ffac490b01aaccb060f3071e08daa88d33a09f7ce8715d64300" Jan 29 08:01:38 crc kubenswrapper[5017]: E0129 08:01:38.318412 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:01:52 crc kubenswrapper[5017]: I0129 08:01:52.317067 5017 scope.go:117] "RemoveContainer" 
containerID="c58783d1bd795ffac490b01aaccb060f3071e08daa88d33a09f7ce8715d64300" Jan 29 08:01:52 crc kubenswrapper[5017]: E0129 08:01:52.318163 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:02:04 crc kubenswrapper[5017]: I0129 08:02:04.321898 5017 scope.go:117] "RemoveContainer" containerID="c58783d1bd795ffac490b01aaccb060f3071e08daa88d33a09f7ce8715d64300" Jan 29 08:02:04 crc kubenswrapper[5017]: E0129 08:02:04.323124 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:02:19 crc kubenswrapper[5017]: I0129 08:02:19.316710 5017 scope.go:117] "RemoveContainer" containerID="c58783d1bd795ffac490b01aaccb060f3071e08daa88d33a09f7ce8715d64300" Jan 29 08:02:19 crc kubenswrapper[5017]: E0129 08:02:19.317898 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:02:31 crc kubenswrapper[5017]: I0129 08:02:31.317096 5017 scope.go:117] "RemoveContainer" containerID="c58783d1bd795ffac490b01aaccb060f3071e08daa88d33a09f7ce8715d64300" Jan 29 08:02:31 crc kubenswrapper[5017]: E0129 08:02:31.318211 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:02:43 crc kubenswrapper[5017]: I0129 08:02:43.316405 5017 scope.go:117] "RemoveContainer" containerID="c58783d1bd795ffac490b01aaccb060f3071e08daa88d33a09f7ce8715d64300" Jan 29 08:02:43 crc kubenswrapper[5017]: E0129 08:02:43.317278 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:02:54 crc kubenswrapper[5017]: I0129 08:02:54.322041 5017 scope.go:117] "RemoveContainer" containerID="c58783d1bd795ffac490b01aaccb060f3071e08daa88d33a09f7ce8715d64300" Jan 29 08:02:54 crc kubenswrapper[5017]: E0129 08:02:54.323055 5017 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:03:03 crc kubenswrapper[5017]: I0129 08:03:03.057160 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-sqg8w"] Jan 29 08:03:03 crc kubenswrapper[5017]: I0129 08:03:03.067071 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-sqg8w"] Jan 29 08:03:04 crc kubenswrapper[5017]: I0129 08:03:04.325763 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87bce803-e003-4ee6-8811-f8c968ed0f71" path="/var/lib/kubelet/pods/87bce803-e003-4ee6-8811-f8c968ed0f71/volumes" Jan 29 08:03:06 crc kubenswrapper[5017]: I0129 08:03:06.748415 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-8266-account-create-update-dzrm4"] Jan 29 08:03:06 crc kubenswrapper[5017]: I0129 08:03:06.749973 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-8266-account-create-update-dzrm4" Jan 29 08:03:06 crc kubenswrapper[5017]: I0129 08:03:06.752701 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 29 08:03:06 crc kubenswrapper[5017]: I0129 08:03:06.755582 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-rd2dv"] Jan 29 08:03:06 crc kubenswrapper[5017]: I0129 08:03:06.757019 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-rd2dv" Jan 29 08:03:06 crc kubenswrapper[5017]: I0129 08:03:06.794391 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-8266-account-create-update-dzrm4"] Jan 29 08:03:06 crc kubenswrapper[5017]: I0129 08:03:06.802028 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-rd2dv"] Jan 29 08:03:06 crc kubenswrapper[5017]: I0129 08:03:06.897247 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vh5kf\" (UniqueName: \"kubernetes.io/projected/203ccd24-c1b1-4e3a-8b76-e47f88f21791-kube-api-access-vh5kf\") pod \"barbican-db-create-rd2dv\" (UID: \"203ccd24-c1b1-4e3a-8b76-e47f88f21791\") " pod="openstack/barbican-db-create-rd2dv" Jan 29 08:03:06 crc kubenswrapper[5017]: I0129 08:03:06.897311 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/203ccd24-c1b1-4e3a-8b76-e47f88f21791-operator-scripts\") pod \"barbican-db-create-rd2dv\" (UID: \"203ccd24-c1b1-4e3a-8b76-e47f88f21791\") " pod="openstack/barbican-db-create-rd2dv" Jan 29 08:03:06 crc kubenswrapper[5017]: I0129 08:03:06.897352 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f48f0b82-d5fe-4687-956c-779a52a0bf67-operator-scripts\") pod \"barbican-8266-account-create-update-dzrm4\" (UID: \"f48f0b82-d5fe-4687-956c-779a52a0bf67\") " pod="openstack/barbican-8266-account-create-update-dzrm4" Jan 29 08:03:06 crc kubenswrapper[5017]: I0129 08:03:06.897551 5017 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbcwf\" (UniqueName: \"kubernetes.io/projected/f48f0b82-d5fe-4687-956c-779a52a0bf67-kube-api-access-bbcwf\") pod \"barbican-8266-account-create-update-dzrm4\" (UID: \"f48f0b82-d5fe-4687-956c-779a52a0bf67\") " pod="openstack/barbican-8266-account-create-update-dzrm4" Jan 29 08:03:07 crc kubenswrapper[5017]: I0129 08:03:07.003163 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vh5kf\" (UniqueName: \"kubernetes.io/projected/203ccd24-c1b1-4e3a-8b76-e47f88f21791-kube-api-access-vh5kf\") pod \"barbican-db-create-rd2dv\" (UID: \"203ccd24-c1b1-4e3a-8b76-e47f88f21791\") " pod="openstack/barbican-db-create-rd2dv" Jan 29 08:03:07 crc kubenswrapper[5017]: I0129 08:03:07.003255 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/203ccd24-c1b1-4e3a-8b76-e47f88f21791-operator-scripts\") pod \"barbican-db-create-rd2dv\" (UID: \"203ccd24-c1b1-4e3a-8b76-e47f88f21791\") " pod="openstack/barbican-db-create-rd2dv" Jan 29 08:03:07 crc kubenswrapper[5017]: I0129 08:03:07.003307 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f48f0b82-d5fe-4687-956c-779a52a0bf67-operator-scripts\") pod \"barbican-8266-account-create-update-dzrm4\" (UID: \"f48f0b82-d5fe-4687-956c-779a52a0bf67\") " pod="openstack/barbican-8266-account-create-update-dzrm4" Jan 29 08:03:07 crc kubenswrapper[5017]: I0129 08:03:07.003363 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbcwf\" (UniqueName: \"kubernetes.io/projected/f48f0b82-d5fe-4687-956c-779a52a0bf67-kube-api-access-bbcwf\") pod \"barbican-8266-account-create-update-dzrm4\" (UID: \"f48f0b82-d5fe-4687-956c-779a52a0bf67\") " pod="openstack/barbican-8266-account-create-update-dzrm4" Jan 29 08:03:07 crc kubenswrapper[5017]: I0129 08:03:07.004300 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/203ccd24-c1b1-4e3a-8b76-e47f88f21791-operator-scripts\") pod \"barbican-db-create-rd2dv\" (UID: \"203ccd24-c1b1-4e3a-8b76-e47f88f21791\") " pod="openstack/barbican-db-create-rd2dv" Jan 29 08:03:07 crc kubenswrapper[5017]: I0129 08:03:07.004373 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f48f0b82-d5fe-4687-956c-779a52a0bf67-operator-scripts\") pod \"barbican-8266-account-create-update-dzrm4\" (UID: \"f48f0b82-d5fe-4687-956c-779a52a0bf67\") " pod="openstack/barbican-8266-account-create-update-dzrm4" Jan 29 08:03:07 crc kubenswrapper[5017]: I0129 08:03:07.026268 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vh5kf\" (UniqueName: \"kubernetes.io/projected/203ccd24-c1b1-4e3a-8b76-e47f88f21791-kube-api-access-vh5kf\") pod \"barbican-db-create-rd2dv\" (UID: \"203ccd24-c1b1-4e3a-8b76-e47f88f21791\") " pod="openstack/barbican-db-create-rd2dv" Jan 29 08:03:07 crc kubenswrapper[5017]: I0129 08:03:07.026273 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbcwf\" (UniqueName: \"kubernetes.io/projected/f48f0b82-d5fe-4687-956c-779a52a0bf67-kube-api-access-bbcwf\") pod \"barbican-8266-account-create-update-dzrm4\" (UID: \"f48f0b82-d5fe-4687-956c-779a52a0bf67\") " 
pod="openstack/barbican-8266-account-create-update-dzrm4" Jan 29 08:03:07 crc kubenswrapper[5017]: I0129 08:03:07.078568 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-8266-account-create-update-dzrm4" Jan 29 08:03:07 crc kubenswrapper[5017]: I0129 08:03:07.095594 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-rd2dv" Jan 29 08:03:07 crc kubenswrapper[5017]: I0129 08:03:07.590328 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-rd2dv"] Jan 29 08:03:07 crc kubenswrapper[5017]: I0129 08:03:07.654315 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-8266-account-create-update-dzrm4"] Jan 29 08:03:07 crc kubenswrapper[5017]: W0129 08:03:07.677225 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf48f0b82_d5fe_4687_956c_779a52a0bf67.slice/crio-8f29635a560d93684f6987c1508897b75a84812f7490ed7fdaf2bbb7a9611df0 WatchSource:0}: Error finding container 8f29635a560d93684f6987c1508897b75a84812f7490ed7fdaf2bbb7a9611df0: Status 404 returned error can't find the container with id 8f29635a560d93684f6987c1508897b75a84812f7490ed7fdaf2bbb7a9611df0 Jan 29 08:03:08 crc kubenswrapper[5017]: I0129 08:03:08.432001 5017 generic.go:334] "Generic (PLEG): container finished" podID="f48f0b82-d5fe-4687-956c-779a52a0bf67" containerID="c51e66d68d04613b9c87ff35a3adeb0a430f02851c79256577987e405af5d776" exitCode=0 Jan 29 08:03:08 crc kubenswrapper[5017]: I0129 08:03:08.432155 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-8266-account-create-update-dzrm4" event={"ID":"f48f0b82-d5fe-4687-956c-779a52a0bf67","Type":"ContainerDied","Data":"c51e66d68d04613b9c87ff35a3adeb0a430f02851c79256577987e405af5d776"} Jan 29 08:03:08 crc kubenswrapper[5017]: I0129 08:03:08.432652 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-8266-account-create-update-dzrm4" event={"ID":"f48f0b82-d5fe-4687-956c-779a52a0bf67","Type":"ContainerStarted","Data":"8f29635a560d93684f6987c1508897b75a84812f7490ed7fdaf2bbb7a9611df0"} Jan 29 08:03:08 crc kubenswrapper[5017]: I0129 08:03:08.434696 5017 generic.go:334] "Generic (PLEG): container finished" podID="203ccd24-c1b1-4e3a-8b76-e47f88f21791" containerID="7e9b1392f50de03b869456cb2e23a005dca93670d2e82d9698a8dd0df427434a" exitCode=0 Jan 29 08:03:08 crc kubenswrapper[5017]: I0129 08:03:08.434742 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-rd2dv" event={"ID":"203ccd24-c1b1-4e3a-8b76-e47f88f21791","Type":"ContainerDied","Data":"7e9b1392f50de03b869456cb2e23a005dca93670d2e82d9698a8dd0df427434a"} Jan 29 08:03:08 crc kubenswrapper[5017]: I0129 08:03:08.434767 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-rd2dv" event={"ID":"203ccd24-c1b1-4e3a-8b76-e47f88f21791","Type":"ContainerStarted","Data":"54d7427ba16b50ec4e619d07e73c2e425bd3cbe7a66ef2a0715619ddb8ee696e"} Jan 29 08:03:09 crc kubenswrapper[5017]: I0129 08:03:09.316668 5017 scope.go:117] "RemoveContainer" containerID="c58783d1bd795ffac490b01aaccb060f3071e08daa88d33a09f7ce8715d64300" Jan 29 08:03:09 crc kubenswrapper[5017]: E0129 08:03:09.316942 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:03:09 crc kubenswrapper[5017]: I0129 08:03:09.853695 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-8266-account-create-update-dzrm4" Jan 29 08:03:09 crc kubenswrapper[5017]: I0129 08:03:09.861878 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-rd2dv" Jan 29 08:03:09 crc kubenswrapper[5017]: I0129 08:03:09.964684 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vh5kf\" (UniqueName: \"kubernetes.io/projected/203ccd24-c1b1-4e3a-8b76-e47f88f21791-kube-api-access-vh5kf\") pod \"203ccd24-c1b1-4e3a-8b76-e47f88f21791\" (UID: \"203ccd24-c1b1-4e3a-8b76-e47f88f21791\") " Jan 29 08:03:09 crc kubenswrapper[5017]: I0129 08:03:09.964788 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/203ccd24-c1b1-4e3a-8b76-e47f88f21791-operator-scripts\") pod \"203ccd24-c1b1-4e3a-8b76-e47f88f21791\" (UID: \"203ccd24-c1b1-4e3a-8b76-e47f88f21791\") " Jan 29 08:03:09 crc kubenswrapper[5017]: I0129 08:03:09.964980 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bbcwf\" (UniqueName: \"kubernetes.io/projected/f48f0b82-d5fe-4687-956c-779a52a0bf67-kube-api-access-bbcwf\") pod \"f48f0b82-d5fe-4687-956c-779a52a0bf67\" (UID: \"f48f0b82-d5fe-4687-956c-779a52a0bf67\") " Jan 29 08:03:09 crc kubenswrapper[5017]: I0129 08:03:09.965034 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f48f0b82-d5fe-4687-956c-779a52a0bf67-operator-scripts\") pod \"f48f0b82-d5fe-4687-956c-779a52a0bf67\" (UID: \"f48f0b82-d5fe-4687-956c-779a52a0bf67\") " Jan 29 08:03:09 crc kubenswrapper[5017]: I0129 08:03:09.966114 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/203ccd24-c1b1-4e3a-8b76-e47f88f21791-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "203ccd24-c1b1-4e3a-8b76-e47f88f21791" (UID: "203ccd24-c1b1-4e3a-8b76-e47f88f21791"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:03:09 crc kubenswrapper[5017]: I0129 08:03:09.966212 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f48f0b82-d5fe-4687-956c-779a52a0bf67-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f48f0b82-d5fe-4687-956c-779a52a0bf67" (UID: "f48f0b82-d5fe-4687-956c-779a52a0bf67"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:03:09 crc kubenswrapper[5017]: I0129 08:03:09.972418 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f48f0b82-d5fe-4687-956c-779a52a0bf67-kube-api-access-bbcwf" (OuterVolumeSpecName: "kube-api-access-bbcwf") pod "f48f0b82-d5fe-4687-956c-779a52a0bf67" (UID: "f48f0b82-d5fe-4687-956c-779a52a0bf67"). InnerVolumeSpecName "kube-api-access-bbcwf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:03:09 crc kubenswrapper[5017]: I0129 08:03:09.972496 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/203ccd24-c1b1-4e3a-8b76-e47f88f21791-kube-api-access-vh5kf" (OuterVolumeSpecName: "kube-api-access-vh5kf") pod "203ccd24-c1b1-4e3a-8b76-e47f88f21791" (UID: "203ccd24-c1b1-4e3a-8b76-e47f88f21791"). InnerVolumeSpecName "kube-api-access-vh5kf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:03:10 crc kubenswrapper[5017]: I0129 08:03:10.068070 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bbcwf\" (UniqueName: \"kubernetes.io/projected/f48f0b82-d5fe-4687-956c-779a52a0bf67-kube-api-access-bbcwf\") on node \"crc\" DevicePath \"\"" Jan 29 08:03:10 crc kubenswrapper[5017]: I0129 08:03:10.068112 5017 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f48f0b82-d5fe-4687-956c-779a52a0bf67-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 08:03:10 crc kubenswrapper[5017]: I0129 08:03:10.068140 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vh5kf\" (UniqueName: \"kubernetes.io/projected/203ccd24-c1b1-4e3a-8b76-e47f88f21791-kube-api-access-vh5kf\") on node \"crc\" DevicePath \"\"" Jan 29 08:03:10 crc kubenswrapper[5017]: I0129 08:03:10.068151 5017 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/203ccd24-c1b1-4e3a-8b76-e47f88f21791-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 08:03:10 crc kubenswrapper[5017]: I0129 08:03:10.453064 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-8266-account-create-update-dzrm4" event={"ID":"f48f0b82-d5fe-4687-956c-779a52a0bf67","Type":"ContainerDied","Data":"8f29635a560d93684f6987c1508897b75a84812f7490ed7fdaf2bbb7a9611df0"} Jan 29 08:03:10 crc kubenswrapper[5017]: I0129 08:03:10.453115 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8f29635a560d93684f6987c1508897b75a84812f7490ed7fdaf2bbb7a9611df0" Jan 29 08:03:10 crc kubenswrapper[5017]: I0129 08:03:10.453178 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-8266-account-create-update-dzrm4" Jan 29 08:03:10 crc kubenswrapper[5017]: I0129 08:03:10.455879 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-rd2dv" Jan 29 08:03:10 crc kubenswrapper[5017]: I0129 08:03:10.455860 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-rd2dv" event={"ID":"203ccd24-c1b1-4e3a-8b76-e47f88f21791","Type":"ContainerDied","Data":"54d7427ba16b50ec4e619d07e73c2e425bd3cbe7a66ef2a0715619ddb8ee696e"} Jan 29 08:03:10 crc kubenswrapper[5017]: I0129 08:03:10.456014 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="54d7427ba16b50ec4e619d07e73c2e425bd3cbe7a66ef2a0715619ddb8ee696e" Jan 29 08:03:10 crc kubenswrapper[5017]: E0129 08:03:10.477312 5017 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf48f0b82_d5fe_4687_956c_779a52a0bf67.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod203ccd24_c1b1_4e3a_8b76_e47f88f21791.slice\": RecentStats: unable to find data in memory cache]" Jan 29 08:03:12 crc kubenswrapper[5017]: I0129 08:03:12.026475 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-5jcwd"] Jan 29 08:03:12 crc kubenswrapper[5017]: E0129 08:03:12.027167 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f48f0b82-d5fe-4687-956c-779a52a0bf67" containerName="mariadb-account-create-update" Jan 29 08:03:12 crc kubenswrapper[5017]: I0129 08:03:12.027183 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="f48f0b82-d5fe-4687-956c-779a52a0bf67" containerName="mariadb-account-create-update" Jan 29 08:03:12 crc kubenswrapper[5017]: E0129 08:03:12.027217 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="203ccd24-c1b1-4e3a-8b76-e47f88f21791" containerName="mariadb-database-create" Jan 29 08:03:12 crc kubenswrapper[5017]: I0129 08:03:12.027224 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="203ccd24-c1b1-4e3a-8b76-e47f88f21791" containerName="mariadb-database-create" Jan 29 08:03:12 crc kubenswrapper[5017]: I0129 08:03:12.027369 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="203ccd24-c1b1-4e3a-8b76-e47f88f21791" containerName="mariadb-database-create" Jan 29 08:03:12 crc kubenswrapper[5017]: I0129 08:03:12.027389 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="f48f0b82-d5fe-4687-956c-779a52a0bf67" containerName="mariadb-account-create-update" Jan 29 08:03:12 crc kubenswrapper[5017]: I0129 08:03:12.028021 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-5jcwd" Jan 29 08:03:12 crc kubenswrapper[5017]: I0129 08:03:12.031331 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-zpzrc" Jan 29 08:03:12 crc kubenswrapper[5017]: I0129 08:03:12.031521 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 29 08:03:12 crc kubenswrapper[5017]: I0129 08:03:12.044611 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-5jcwd"] Jan 29 08:03:12 crc kubenswrapper[5017]: I0129 08:03:12.116007 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b4ffe21-5562-4339-b707-08b117ecce8f-combined-ca-bundle\") pod \"barbican-db-sync-5jcwd\" (UID: \"1b4ffe21-5562-4339-b707-08b117ecce8f\") " pod="openstack/barbican-db-sync-5jcwd" Jan 29 08:03:12 crc kubenswrapper[5017]: I0129 08:03:12.116146 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1b4ffe21-5562-4339-b707-08b117ecce8f-db-sync-config-data\") pod \"barbican-db-sync-5jcwd\" (UID: \"1b4ffe21-5562-4339-b707-08b117ecce8f\") " pod="openstack/barbican-db-sync-5jcwd" Jan 29 08:03:12 crc kubenswrapper[5017]: I0129 08:03:12.116458 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vr8z\" (UniqueName: \"kubernetes.io/projected/1b4ffe21-5562-4339-b707-08b117ecce8f-kube-api-access-6vr8z\") pod \"barbican-db-sync-5jcwd\" (UID: \"1b4ffe21-5562-4339-b707-08b117ecce8f\") " pod="openstack/barbican-db-sync-5jcwd" Jan 29 08:03:12 crc kubenswrapper[5017]: I0129 08:03:12.218277 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b4ffe21-5562-4339-b707-08b117ecce8f-combined-ca-bundle\") pod \"barbican-db-sync-5jcwd\" (UID: \"1b4ffe21-5562-4339-b707-08b117ecce8f\") " pod="openstack/barbican-db-sync-5jcwd" Jan 29 08:03:12 crc kubenswrapper[5017]: I0129 08:03:12.218360 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1b4ffe21-5562-4339-b707-08b117ecce8f-db-sync-config-data\") pod \"barbican-db-sync-5jcwd\" (UID: \"1b4ffe21-5562-4339-b707-08b117ecce8f\") " pod="openstack/barbican-db-sync-5jcwd" Jan 29 08:03:12 crc kubenswrapper[5017]: I0129 08:03:12.218436 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vr8z\" (UniqueName: \"kubernetes.io/projected/1b4ffe21-5562-4339-b707-08b117ecce8f-kube-api-access-6vr8z\") pod \"barbican-db-sync-5jcwd\" (UID: \"1b4ffe21-5562-4339-b707-08b117ecce8f\") " pod="openstack/barbican-db-sync-5jcwd" Jan 29 08:03:12 crc kubenswrapper[5017]: I0129 08:03:12.226456 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b4ffe21-5562-4339-b707-08b117ecce8f-combined-ca-bundle\") pod \"barbican-db-sync-5jcwd\" (UID: \"1b4ffe21-5562-4339-b707-08b117ecce8f\") " pod="openstack/barbican-db-sync-5jcwd" Jan 29 08:03:12 crc kubenswrapper[5017]: I0129 08:03:12.227490 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/1b4ffe21-5562-4339-b707-08b117ecce8f-db-sync-config-data\") pod \"barbican-db-sync-5jcwd\" (UID: \"1b4ffe21-5562-4339-b707-08b117ecce8f\") " pod="openstack/barbican-db-sync-5jcwd" Jan 29 08:03:12 crc kubenswrapper[5017]: I0129 08:03:12.237676 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vr8z\" (UniqueName: \"kubernetes.io/projected/1b4ffe21-5562-4339-b707-08b117ecce8f-kube-api-access-6vr8z\") pod \"barbican-db-sync-5jcwd\" (UID: \"1b4ffe21-5562-4339-b707-08b117ecce8f\") " pod="openstack/barbican-db-sync-5jcwd" Jan 29 08:03:12 crc kubenswrapper[5017]: I0129 08:03:12.359871 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-5jcwd" Jan 29 08:03:12 crc kubenswrapper[5017]: I0129 08:03:12.981857 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-5jcwd"] Jan 29 08:03:13 crc kubenswrapper[5017]: I0129 08:03:13.504608 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-5jcwd" event={"ID":"1b4ffe21-5562-4339-b707-08b117ecce8f","Type":"ContainerStarted","Data":"a573821c78ef9509c0618e4fc6bc30da548231b880713b842998e78f7d2db170"} Jan 29 08:03:13 crc kubenswrapper[5017]: I0129 08:03:13.505207 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-5jcwd" event={"ID":"1b4ffe21-5562-4339-b707-08b117ecce8f","Type":"ContainerStarted","Data":"1330711cf571697c03aa5e3bf81ded8fd571757d89e5c18affd8ac51563dbd03"} Jan 29 08:03:13 crc kubenswrapper[5017]: I0129 08:03:13.529036 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-5jcwd" podStartSLOduration=1.529004392 podStartE2EDuration="1.529004392s" podCreationTimestamp="2026-01-29 08:03:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:03:13.519832662 +0000 UTC m=+5279.894280272" watchObservedRunningTime="2026-01-29 08:03:13.529004392 +0000 UTC m=+5279.903452012" Jan 29 08:03:14 crc kubenswrapper[5017]: I0129 08:03:14.523860 5017 generic.go:334] "Generic (PLEG): container finished" podID="1b4ffe21-5562-4339-b707-08b117ecce8f" containerID="a573821c78ef9509c0618e4fc6bc30da548231b880713b842998e78f7d2db170" exitCode=0 Jan 29 08:03:14 crc kubenswrapper[5017]: I0129 08:03:14.523929 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-5jcwd" event={"ID":"1b4ffe21-5562-4339-b707-08b117ecce8f","Type":"ContainerDied","Data":"a573821c78ef9509c0618e4fc6bc30da548231b880713b842998e78f7d2db170"} Jan 29 08:03:15 crc kubenswrapper[5017]: I0129 08:03:15.868626 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-5jcwd" Jan 29 08:03:15 crc kubenswrapper[5017]: I0129 08:03:15.996787 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1b4ffe21-5562-4339-b707-08b117ecce8f-db-sync-config-data\") pod \"1b4ffe21-5562-4339-b707-08b117ecce8f\" (UID: \"1b4ffe21-5562-4339-b707-08b117ecce8f\") " Jan 29 08:03:15 crc kubenswrapper[5017]: I0129 08:03:15.997014 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b4ffe21-5562-4339-b707-08b117ecce8f-combined-ca-bundle\") pod \"1b4ffe21-5562-4339-b707-08b117ecce8f\" (UID: \"1b4ffe21-5562-4339-b707-08b117ecce8f\") " Jan 29 08:03:15 crc kubenswrapper[5017]: I0129 08:03:15.997054 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6vr8z\" (UniqueName: \"kubernetes.io/projected/1b4ffe21-5562-4339-b707-08b117ecce8f-kube-api-access-6vr8z\") pod \"1b4ffe21-5562-4339-b707-08b117ecce8f\" (UID: \"1b4ffe21-5562-4339-b707-08b117ecce8f\") " Jan 29 08:03:16 crc kubenswrapper[5017]: I0129 08:03:16.004150 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b4ffe21-5562-4339-b707-08b117ecce8f-kube-api-access-6vr8z" (OuterVolumeSpecName: "kube-api-access-6vr8z") pod "1b4ffe21-5562-4339-b707-08b117ecce8f" (UID: "1b4ffe21-5562-4339-b707-08b117ecce8f"). InnerVolumeSpecName "kube-api-access-6vr8z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:03:16 crc kubenswrapper[5017]: I0129 08:03:16.004704 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b4ffe21-5562-4339-b707-08b117ecce8f-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "1b4ffe21-5562-4339-b707-08b117ecce8f" (UID: "1b4ffe21-5562-4339-b707-08b117ecce8f"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:03:16 crc kubenswrapper[5017]: I0129 08:03:16.022827 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b4ffe21-5562-4339-b707-08b117ecce8f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1b4ffe21-5562-4339-b707-08b117ecce8f" (UID: "1b4ffe21-5562-4339-b707-08b117ecce8f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:03:16 crc kubenswrapper[5017]: I0129 08:03:16.100108 5017 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b4ffe21-5562-4339-b707-08b117ecce8f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:03:16 crc kubenswrapper[5017]: I0129 08:03:16.100178 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6vr8z\" (UniqueName: \"kubernetes.io/projected/1b4ffe21-5562-4339-b707-08b117ecce8f-kube-api-access-6vr8z\") on node \"crc\" DevicePath \"\"" Jan 29 08:03:16 crc kubenswrapper[5017]: I0129 08:03:16.100191 5017 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1b4ffe21-5562-4339-b707-08b117ecce8f-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 08:03:16 crc kubenswrapper[5017]: I0129 08:03:16.545446 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-5jcwd" event={"ID":"1b4ffe21-5562-4339-b707-08b117ecce8f","Type":"ContainerDied","Data":"1330711cf571697c03aa5e3bf81ded8fd571757d89e5c18affd8ac51563dbd03"} Jan 29 08:03:16 crc kubenswrapper[5017]: I0129 08:03:16.545533 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1330711cf571697c03aa5e3bf81ded8fd571757d89e5c18affd8ac51563dbd03" Jan 29 08:03:16 crc kubenswrapper[5017]: I0129 08:03:16.545590 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-5jcwd" Jan 29 08:03:16 crc kubenswrapper[5017]: I0129 08:03:16.796224 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-754cd5b757-bzlkt"] Jan 29 08:03:16 crc kubenswrapper[5017]: E0129 08:03:16.796779 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b4ffe21-5562-4339-b707-08b117ecce8f" containerName="barbican-db-sync" Jan 29 08:03:16 crc kubenswrapper[5017]: I0129 08:03:16.796807 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b4ffe21-5562-4339-b707-08b117ecce8f" containerName="barbican-db-sync" Jan 29 08:03:16 crc kubenswrapper[5017]: I0129 08:03:16.797075 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b4ffe21-5562-4339-b707-08b117ecce8f" containerName="barbican-db-sync" Jan 29 08:03:16 crc kubenswrapper[5017]: I0129 08:03:16.798321 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-754cd5b757-bzlkt" Jan 29 08:03:16 crc kubenswrapper[5017]: I0129 08:03:16.803849 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-zpzrc" Jan 29 08:03:16 crc kubenswrapper[5017]: I0129 08:03:16.804281 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 29 08:03:16 crc kubenswrapper[5017]: I0129 08:03:16.804918 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 29 08:03:16 crc kubenswrapper[5017]: I0129 08:03:16.816648 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-b6df7d4b-jg8rj"] Jan 29 08:03:16 crc kubenswrapper[5017]: I0129 08:03:16.825375 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-b6df7d4b-jg8rj" Jan 29 08:03:16 crc kubenswrapper[5017]: I0129 08:03:16.835294 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-754cd5b757-bzlkt"] Jan 29 08:03:16 crc kubenswrapper[5017]: I0129 08:03:16.835829 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 29 08:03:16 crc kubenswrapper[5017]: I0129 08:03:16.864103 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-b6df7d4b-jg8rj"] Jan 29 08:03:16 crc kubenswrapper[5017]: I0129 08:03:16.920017 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3b563542-6a58-4b54-8345-0ddd0ce400ab-config-data-custom\") pod \"barbican-worker-754cd5b757-bzlkt\" (UID: \"3b563542-6a58-4b54-8345-0ddd0ce400ab\") " pod="openstack/barbican-worker-754cd5b757-bzlkt" Jan 29 08:03:16 crc kubenswrapper[5017]: I0129 08:03:16.920115 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/50399e4a-ae5c-44e8-a7b5-32201b2be9c7-logs\") pod \"barbican-keystone-listener-b6df7d4b-jg8rj\" (UID: \"50399e4a-ae5c-44e8-a7b5-32201b2be9c7\") " pod="openstack/barbican-keystone-listener-b6df7d4b-jg8rj" Jan 29 08:03:16 crc kubenswrapper[5017]: I0129 08:03:16.920171 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqdpm\" (UniqueName: \"kubernetes.io/projected/50399e4a-ae5c-44e8-a7b5-32201b2be9c7-kube-api-access-hqdpm\") pod \"barbican-keystone-listener-b6df7d4b-jg8rj\" (UID: \"50399e4a-ae5c-44e8-a7b5-32201b2be9c7\") " pod="openstack/barbican-keystone-listener-b6df7d4b-jg8rj" Jan 29 08:03:16 crc kubenswrapper[5017]: I0129 08:03:16.920201 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b563542-6a58-4b54-8345-0ddd0ce400ab-logs\") pod \"barbican-worker-754cd5b757-bzlkt\" (UID: \"3b563542-6a58-4b54-8345-0ddd0ce400ab\") " pod="openstack/barbican-worker-754cd5b757-bzlkt" Jan 29 08:03:16 crc kubenswrapper[5017]: I0129 08:03:16.920241 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50399e4a-ae5c-44e8-a7b5-32201b2be9c7-combined-ca-bundle\") pod \"barbican-keystone-listener-b6df7d4b-jg8rj\" (UID: \"50399e4a-ae5c-44e8-a7b5-32201b2be9c7\") " pod="openstack/barbican-keystone-listener-b6df7d4b-jg8rj" Jan 29 08:03:16 crc kubenswrapper[5017]: I0129 08:03:16.920307 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/50399e4a-ae5c-44e8-a7b5-32201b2be9c7-config-data-custom\") pod \"barbican-keystone-listener-b6df7d4b-jg8rj\" (UID: \"50399e4a-ae5c-44e8-a7b5-32201b2be9c7\") " pod="openstack/barbican-keystone-listener-b6df7d4b-jg8rj" Jan 29 08:03:16 crc kubenswrapper[5017]: I0129 08:03:16.920336 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b563542-6a58-4b54-8345-0ddd0ce400ab-config-data\") pod \"barbican-worker-754cd5b757-bzlkt\" (UID: \"3b563542-6a58-4b54-8345-0ddd0ce400ab\") " 
pod="openstack/barbican-worker-754cd5b757-bzlkt" Jan 29 08:03:16 crc kubenswrapper[5017]: I0129 08:03:16.920398 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nqgh\" (UniqueName: \"kubernetes.io/projected/3b563542-6a58-4b54-8345-0ddd0ce400ab-kube-api-access-6nqgh\") pod \"barbican-worker-754cd5b757-bzlkt\" (UID: \"3b563542-6a58-4b54-8345-0ddd0ce400ab\") " pod="openstack/barbican-worker-754cd5b757-bzlkt" Jan 29 08:03:16 crc kubenswrapper[5017]: I0129 08:03:16.920457 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b563542-6a58-4b54-8345-0ddd0ce400ab-combined-ca-bundle\") pod \"barbican-worker-754cd5b757-bzlkt\" (UID: \"3b563542-6a58-4b54-8345-0ddd0ce400ab\") " pod="openstack/barbican-worker-754cd5b757-bzlkt" Jan 29 08:03:16 crc kubenswrapper[5017]: I0129 08:03:16.920490 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50399e4a-ae5c-44e8-a7b5-32201b2be9c7-config-data\") pod \"barbican-keystone-listener-b6df7d4b-jg8rj\" (UID: \"50399e4a-ae5c-44e8-a7b5-32201b2be9c7\") " pod="openstack/barbican-keystone-listener-b6df7d4b-jg8rj" Jan 29 08:03:16 crc kubenswrapper[5017]: I0129 08:03:16.933656 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6dd8bbd5cf-56g7r"] Jan 29 08:03:16 crc kubenswrapper[5017]: I0129 08:03:16.935778 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6dd8bbd5cf-56g7r" Jan 29 08:03:16 crc kubenswrapper[5017]: I0129 08:03:16.955346 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6dd8bbd5cf-56g7r"] Jan 29 08:03:17 crc kubenswrapper[5017]: I0129 08:03:17.021820 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l95ns\" (UniqueName: \"kubernetes.io/projected/2543d2ea-e2cd-45e5-b6e6-8665b2514591-kube-api-access-l95ns\") pod \"dnsmasq-dns-6dd8bbd5cf-56g7r\" (UID: \"2543d2ea-e2cd-45e5-b6e6-8665b2514591\") " pod="openstack/dnsmasq-dns-6dd8bbd5cf-56g7r" Jan 29 08:03:17 crc kubenswrapper[5017]: I0129 08:03:17.021878 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6nqgh\" (UniqueName: \"kubernetes.io/projected/3b563542-6a58-4b54-8345-0ddd0ce400ab-kube-api-access-6nqgh\") pod \"barbican-worker-754cd5b757-bzlkt\" (UID: \"3b563542-6a58-4b54-8345-0ddd0ce400ab\") " pod="openstack/barbican-worker-754cd5b757-bzlkt" Jan 29 08:03:17 crc kubenswrapper[5017]: I0129 08:03:17.021903 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2543d2ea-e2cd-45e5-b6e6-8665b2514591-config\") pod \"dnsmasq-dns-6dd8bbd5cf-56g7r\" (UID: \"2543d2ea-e2cd-45e5-b6e6-8665b2514591\") " pod="openstack/dnsmasq-dns-6dd8bbd5cf-56g7r" Jan 29 08:03:17 crc kubenswrapper[5017]: I0129 08:03:17.021935 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b563542-6a58-4b54-8345-0ddd0ce400ab-combined-ca-bundle\") pod \"barbican-worker-754cd5b757-bzlkt\" (UID: \"3b563542-6a58-4b54-8345-0ddd0ce400ab\") " pod="openstack/barbican-worker-754cd5b757-bzlkt" Jan 29 08:03:17 crc kubenswrapper[5017]: I0129 08:03:17.021972 5017 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50399e4a-ae5c-44e8-a7b5-32201b2be9c7-config-data\") pod \"barbican-keystone-listener-b6df7d4b-jg8rj\" (UID: \"50399e4a-ae5c-44e8-a7b5-32201b2be9c7\") " pod="openstack/barbican-keystone-listener-b6df7d4b-jg8rj" Jan 29 08:03:17 crc kubenswrapper[5017]: I0129 08:03:17.022004 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2543d2ea-e2cd-45e5-b6e6-8665b2514591-ovsdbserver-sb\") pod \"dnsmasq-dns-6dd8bbd5cf-56g7r\" (UID: \"2543d2ea-e2cd-45e5-b6e6-8665b2514591\") " pod="openstack/dnsmasq-dns-6dd8bbd5cf-56g7r" Jan 29 08:03:17 crc kubenswrapper[5017]: I0129 08:03:17.022023 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2543d2ea-e2cd-45e5-b6e6-8665b2514591-dns-svc\") pod \"dnsmasq-dns-6dd8bbd5cf-56g7r\" (UID: \"2543d2ea-e2cd-45e5-b6e6-8665b2514591\") " pod="openstack/dnsmasq-dns-6dd8bbd5cf-56g7r" Jan 29 08:03:17 crc kubenswrapper[5017]: I0129 08:03:17.022061 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2543d2ea-e2cd-45e5-b6e6-8665b2514591-ovsdbserver-nb\") pod \"dnsmasq-dns-6dd8bbd5cf-56g7r\" (UID: \"2543d2ea-e2cd-45e5-b6e6-8665b2514591\") " pod="openstack/dnsmasq-dns-6dd8bbd5cf-56g7r" Jan 29 08:03:17 crc kubenswrapper[5017]: I0129 08:03:17.022090 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3b563542-6a58-4b54-8345-0ddd0ce400ab-config-data-custom\") pod \"barbican-worker-754cd5b757-bzlkt\" (UID: \"3b563542-6a58-4b54-8345-0ddd0ce400ab\") " pod="openstack/barbican-worker-754cd5b757-bzlkt" Jan 29 08:03:17 crc kubenswrapper[5017]: I0129 08:03:17.022114 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/50399e4a-ae5c-44e8-a7b5-32201b2be9c7-logs\") pod \"barbican-keystone-listener-b6df7d4b-jg8rj\" (UID: \"50399e4a-ae5c-44e8-a7b5-32201b2be9c7\") " pod="openstack/barbican-keystone-listener-b6df7d4b-jg8rj" Jan 29 08:03:17 crc kubenswrapper[5017]: I0129 08:03:17.022148 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqdpm\" (UniqueName: \"kubernetes.io/projected/50399e4a-ae5c-44e8-a7b5-32201b2be9c7-kube-api-access-hqdpm\") pod \"barbican-keystone-listener-b6df7d4b-jg8rj\" (UID: \"50399e4a-ae5c-44e8-a7b5-32201b2be9c7\") " pod="openstack/barbican-keystone-listener-b6df7d4b-jg8rj" Jan 29 08:03:17 crc kubenswrapper[5017]: I0129 08:03:17.022171 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b563542-6a58-4b54-8345-0ddd0ce400ab-logs\") pod \"barbican-worker-754cd5b757-bzlkt\" (UID: \"3b563542-6a58-4b54-8345-0ddd0ce400ab\") " pod="openstack/barbican-worker-754cd5b757-bzlkt" Jan 29 08:03:17 crc kubenswrapper[5017]: I0129 08:03:17.022195 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50399e4a-ae5c-44e8-a7b5-32201b2be9c7-combined-ca-bundle\") pod \"barbican-keystone-listener-b6df7d4b-jg8rj\" (UID: \"50399e4a-ae5c-44e8-a7b5-32201b2be9c7\") " pod="openstack/barbican-keystone-listener-b6df7d4b-jg8rj" Jan 29 
08:03:17 crc kubenswrapper[5017]: I0129 08:03:17.022433 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/50399e4a-ae5c-44e8-a7b5-32201b2be9c7-config-data-custom\") pod \"barbican-keystone-listener-b6df7d4b-jg8rj\" (UID: \"50399e4a-ae5c-44e8-a7b5-32201b2be9c7\") " pod="openstack/barbican-keystone-listener-b6df7d4b-jg8rj" Jan 29 08:03:17 crc kubenswrapper[5017]: I0129 08:03:17.022526 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b563542-6a58-4b54-8345-0ddd0ce400ab-config-data\") pod \"barbican-worker-754cd5b757-bzlkt\" (UID: \"3b563542-6a58-4b54-8345-0ddd0ce400ab\") " pod="openstack/barbican-worker-754cd5b757-bzlkt" Jan 29 08:03:17 crc kubenswrapper[5017]: I0129 08:03:17.023917 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/50399e4a-ae5c-44e8-a7b5-32201b2be9c7-logs\") pod \"barbican-keystone-listener-b6df7d4b-jg8rj\" (UID: \"50399e4a-ae5c-44e8-a7b5-32201b2be9c7\") " pod="openstack/barbican-keystone-listener-b6df7d4b-jg8rj" Jan 29 08:03:17 crc kubenswrapper[5017]: I0129 08:03:17.024185 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b563542-6a58-4b54-8345-0ddd0ce400ab-logs\") pod \"barbican-worker-754cd5b757-bzlkt\" (UID: \"3b563542-6a58-4b54-8345-0ddd0ce400ab\") " pod="openstack/barbican-worker-754cd5b757-bzlkt" Jan 29 08:03:17 crc kubenswrapper[5017]: I0129 08:03:17.028149 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3b563542-6a58-4b54-8345-0ddd0ce400ab-config-data-custom\") pod \"barbican-worker-754cd5b757-bzlkt\" (UID: \"3b563542-6a58-4b54-8345-0ddd0ce400ab\") " pod="openstack/barbican-worker-754cd5b757-bzlkt" Jan 29 08:03:17 crc kubenswrapper[5017]: I0129 08:03:17.028562 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b563542-6a58-4b54-8345-0ddd0ce400ab-config-data\") pod \"barbican-worker-754cd5b757-bzlkt\" (UID: \"3b563542-6a58-4b54-8345-0ddd0ce400ab\") " pod="openstack/barbican-worker-754cd5b757-bzlkt" Jan 29 08:03:17 crc kubenswrapper[5017]: I0129 08:03:17.029594 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50399e4a-ae5c-44e8-a7b5-32201b2be9c7-combined-ca-bundle\") pod \"barbican-keystone-listener-b6df7d4b-jg8rj\" (UID: \"50399e4a-ae5c-44e8-a7b5-32201b2be9c7\") " pod="openstack/barbican-keystone-listener-b6df7d4b-jg8rj" Jan 29 08:03:17 crc kubenswrapper[5017]: I0129 08:03:17.032732 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/50399e4a-ae5c-44e8-a7b5-32201b2be9c7-config-data-custom\") pod \"barbican-keystone-listener-b6df7d4b-jg8rj\" (UID: \"50399e4a-ae5c-44e8-a7b5-32201b2be9c7\") " pod="openstack/barbican-keystone-listener-b6df7d4b-jg8rj" Jan 29 08:03:17 crc kubenswrapper[5017]: I0129 08:03:17.033488 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50399e4a-ae5c-44e8-a7b5-32201b2be9c7-config-data\") pod \"barbican-keystone-listener-b6df7d4b-jg8rj\" (UID: \"50399e4a-ae5c-44e8-a7b5-32201b2be9c7\") " pod="openstack/barbican-keystone-listener-b6df7d4b-jg8rj" 
Jan 29 08:03:17 crc kubenswrapper[5017]: I0129 08:03:17.047008 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b563542-6a58-4b54-8345-0ddd0ce400ab-combined-ca-bundle\") pod \"barbican-worker-754cd5b757-bzlkt\" (UID: \"3b563542-6a58-4b54-8345-0ddd0ce400ab\") " pod="openstack/barbican-worker-754cd5b757-bzlkt"
Jan 29 08:03:17 crc kubenswrapper[5017]: I0129 08:03:17.048355 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6nqgh\" (UniqueName: \"kubernetes.io/projected/3b563542-6a58-4b54-8345-0ddd0ce400ab-kube-api-access-6nqgh\") pod \"barbican-worker-754cd5b757-bzlkt\" (UID: \"3b563542-6a58-4b54-8345-0ddd0ce400ab\") " pod="openstack/barbican-worker-754cd5b757-bzlkt"
Jan 29 08:03:17 crc kubenswrapper[5017]: I0129 08:03:17.055596 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqdpm\" (UniqueName: \"kubernetes.io/projected/50399e4a-ae5c-44e8-a7b5-32201b2be9c7-kube-api-access-hqdpm\") pod \"barbican-keystone-listener-b6df7d4b-jg8rj\" (UID: \"50399e4a-ae5c-44e8-a7b5-32201b2be9c7\") " pod="openstack/barbican-keystone-listener-b6df7d4b-jg8rj"
Jan 29 08:03:17 crc kubenswrapper[5017]: I0129 08:03:17.104556 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-67f5fdbfcb-p7f8f"]
Jan 29 08:03:17 crc kubenswrapper[5017]: I0129 08:03:17.106912 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-67f5fdbfcb-p7f8f"
Jan 29 08:03:17 crc kubenswrapper[5017]: I0129 08:03:17.116206 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data"
Jan 29 08:03:17 crc kubenswrapper[5017]: I0129 08:03:17.124217 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-67f5fdbfcb-p7f8f"]
Jan 29 08:03:17 crc kubenswrapper[5017]: I0129 08:03:17.125444 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l95ns\" (UniqueName: \"kubernetes.io/projected/2543d2ea-e2cd-45e5-b6e6-8665b2514591-kube-api-access-l95ns\") pod \"dnsmasq-dns-6dd8bbd5cf-56g7r\" (UID: \"2543d2ea-e2cd-45e5-b6e6-8665b2514591\") " pod="openstack/dnsmasq-dns-6dd8bbd5cf-56g7r"
Jan 29 08:03:17 crc kubenswrapper[5017]: I0129 08:03:17.125530 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2543d2ea-e2cd-45e5-b6e6-8665b2514591-config\") pod \"dnsmasq-dns-6dd8bbd5cf-56g7r\" (UID: \"2543d2ea-e2cd-45e5-b6e6-8665b2514591\") " pod="openstack/dnsmasq-dns-6dd8bbd5cf-56g7r"
Jan 29 08:03:17 crc kubenswrapper[5017]: I0129 08:03:17.126044 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2543d2ea-e2cd-45e5-b6e6-8665b2514591-ovsdbserver-sb\") pod \"dnsmasq-dns-6dd8bbd5cf-56g7r\" (UID: \"2543d2ea-e2cd-45e5-b6e6-8665b2514591\") " pod="openstack/dnsmasq-dns-6dd8bbd5cf-56g7r"
Jan 29 08:03:17 crc kubenswrapper[5017]: I0129 08:03:17.126085 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2543d2ea-e2cd-45e5-b6e6-8665b2514591-dns-svc\") pod \"dnsmasq-dns-6dd8bbd5cf-56g7r\" (UID: \"2543d2ea-e2cd-45e5-b6e6-8665b2514591\") " pod="openstack/dnsmasq-dns-6dd8bbd5cf-56g7r"
Jan 29 08:03:17 crc kubenswrapper[5017]: I0129 08:03:17.126139 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2543d2ea-e2cd-45e5-b6e6-8665b2514591-ovsdbserver-nb\") pod \"dnsmasq-dns-6dd8bbd5cf-56g7r\" (UID: \"2543d2ea-e2cd-45e5-b6e6-8665b2514591\") " pod="openstack/dnsmasq-dns-6dd8bbd5cf-56g7r"
Jan 29 08:03:17 crc kubenswrapper[5017]: I0129 08:03:17.129042 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2543d2ea-e2cd-45e5-b6e6-8665b2514591-ovsdbserver-sb\") pod \"dnsmasq-dns-6dd8bbd5cf-56g7r\" (UID: \"2543d2ea-e2cd-45e5-b6e6-8665b2514591\") " pod="openstack/dnsmasq-dns-6dd8bbd5cf-56g7r"
Jan 29 08:03:17 crc kubenswrapper[5017]: I0129 08:03:17.129049 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2543d2ea-e2cd-45e5-b6e6-8665b2514591-config\") pod \"dnsmasq-dns-6dd8bbd5cf-56g7r\" (UID: \"2543d2ea-e2cd-45e5-b6e6-8665b2514591\") " pod="openstack/dnsmasq-dns-6dd8bbd5cf-56g7r"
Jan 29 08:03:17 crc kubenswrapper[5017]: I0129 08:03:17.130521 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2543d2ea-e2cd-45e5-b6e6-8665b2514591-ovsdbserver-nb\") pod \"dnsmasq-dns-6dd8bbd5cf-56g7r\" (UID: \"2543d2ea-e2cd-45e5-b6e6-8665b2514591\") " pod="openstack/dnsmasq-dns-6dd8bbd5cf-56g7r"
Jan 29 08:03:17 crc kubenswrapper[5017]: I0129 08:03:17.131687 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2543d2ea-e2cd-45e5-b6e6-8665b2514591-dns-svc\") pod \"dnsmasq-dns-6dd8bbd5cf-56g7r\" (UID: \"2543d2ea-e2cd-45e5-b6e6-8665b2514591\") " pod="openstack/dnsmasq-dns-6dd8bbd5cf-56g7r"
Jan 29 08:03:17 crc kubenswrapper[5017]: I0129 08:03:17.132039 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-754cd5b757-bzlkt"
Jan 29 08:03:17 crc kubenswrapper[5017]: I0129 08:03:17.163947 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-b6df7d4b-jg8rj"
Jan 29 08:03:17 crc kubenswrapper[5017]: I0129 08:03:17.177009 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l95ns\" (UniqueName: \"kubernetes.io/projected/2543d2ea-e2cd-45e5-b6e6-8665b2514591-kube-api-access-l95ns\") pod \"dnsmasq-dns-6dd8bbd5cf-56g7r\" (UID: \"2543d2ea-e2cd-45e5-b6e6-8665b2514591\") " pod="openstack/dnsmasq-dns-6dd8bbd5cf-56g7r"
Jan 29 08:03:17 crc kubenswrapper[5017]: I0129 08:03:17.230108 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bba72f6-364a-41ba-903b-2378cbacaef5-config-data\") pod \"barbican-api-67f5fdbfcb-p7f8f\" (UID: \"5bba72f6-364a-41ba-903b-2378cbacaef5\") " pod="openstack/barbican-api-67f5fdbfcb-p7f8f"
Jan 29 08:03:17 crc kubenswrapper[5017]: I0129 08:03:17.230207 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5bba72f6-364a-41ba-903b-2378cbacaef5-config-data-custom\") pod \"barbican-api-67f5fdbfcb-p7f8f\" (UID: \"5bba72f6-364a-41ba-903b-2378cbacaef5\") " pod="openstack/barbican-api-67f5fdbfcb-p7f8f"
Jan 29 08:03:17 crc kubenswrapper[5017]: I0129 08:03:17.230241 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xkbm\" (UniqueName: \"kubernetes.io/projected/5bba72f6-364a-41ba-903b-2378cbacaef5-kube-api-access-2xkbm\") pod \"barbican-api-67f5fdbfcb-p7f8f\" (UID: \"5bba72f6-364a-41ba-903b-2378cbacaef5\") " pod="openstack/barbican-api-67f5fdbfcb-p7f8f"
Jan 29 08:03:17 crc kubenswrapper[5017]: I0129 08:03:17.230276 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bba72f6-364a-41ba-903b-2378cbacaef5-combined-ca-bundle\") pod \"barbican-api-67f5fdbfcb-p7f8f\" (UID: \"5bba72f6-364a-41ba-903b-2378cbacaef5\") " pod="openstack/barbican-api-67f5fdbfcb-p7f8f"
Jan 29 08:03:17 crc kubenswrapper[5017]: I0129 08:03:17.230344 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5bba72f6-364a-41ba-903b-2378cbacaef5-logs\") pod \"barbican-api-67f5fdbfcb-p7f8f\" (UID: \"5bba72f6-364a-41ba-903b-2378cbacaef5\") " pod="openstack/barbican-api-67f5fdbfcb-p7f8f"
Jan 29 08:03:17 crc kubenswrapper[5017]: I0129 08:03:17.260688 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6dd8bbd5cf-56g7r"
Jan 29 08:03:17 crc kubenswrapper[5017]: I0129 08:03:17.332008 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xkbm\" (UniqueName: \"kubernetes.io/projected/5bba72f6-364a-41ba-903b-2378cbacaef5-kube-api-access-2xkbm\") pod \"barbican-api-67f5fdbfcb-p7f8f\" (UID: \"5bba72f6-364a-41ba-903b-2378cbacaef5\") " pod="openstack/barbican-api-67f5fdbfcb-p7f8f"
Jan 29 08:03:17 crc kubenswrapper[5017]: I0129 08:03:17.332576 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bba72f6-364a-41ba-903b-2378cbacaef5-combined-ca-bundle\") pod \"barbican-api-67f5fdbfcb-p7f8f\" (UID: \"5bba72f6-364a-41ba-903b-2378cbacaef5\") " pod="openstack/barbican-api-67f5fdbfcb-p7f8f"
Jan 29 08:03:17 crc kubenswrapper[5017]: I0129 08:03:17.332694 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5bba72f6-364a-41ba-903b-2378cbacaef5-logs\") pod \"barbican-api-67f5fdbfcb-p7f8f\" (UID: \"5bba72f6-364a-41ba-903b-2378cbacaef5\") " pod="openstack/barbican-api-67f5fdbfcb-p7f8f"
Jan 29 08:03:17 crc kubenswrapper[5017]: I0129 08:03:17.332757 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bba72f6-364a-41ba-903b-2378cbacaef5-config-data\") pod \"barbican-api-67f5fdbfcb-p7f8f\" (UID: \"5bba72f6-364a-41ba-903b-2378cbacaef5\") " pod="openstack/barbican-api-67f5fdbfcb-p7f8f"
Jan 29 08:03:17 crc kubenswrapper[5017]: I0129 08:03:17.332818 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5bba72f6-364a-41ba-903b-2378cbacaef5-config-data-custom\") pod \"barbican-api-67f5fdbfcb-p7f8f\" (UID: \"5bba72f6-364a-41ba-903b-2378cbacaef5\") " pod="openstack/barbican-api-67f5fdbfcb-p7f8f"
Jan 29 08:03:17 crc kubenswrapper[5017]: I0129 08:03:17.333618 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5bba72f6-364a-41ba-903b-2378cbacaef5-logs\") pod \"barbican-api-67f5fdbfcb-p7f8f\" (UID: \"5bba72f6-364a-41ba-903b-2378cbacaef5\") " pod="openstack/barbican-api-67f5fdbfcb-p7f8f"
Jan 29 08:03:17 crc kubenswrapper[5017]: I0129 08:03:17.341435 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bba72f6-364a-41ba-903b-2378cbacaef5-config-data\") pod \"barbican-api-67f5fdbfcb-p7f8f\" (UID: \"5bba72f6-364a-41ba-903b-2378cbacaef5\") " pod="openstack/barbican-api-67f5fdbfcb-p7f8f"
Jan 29 08:03:17 crc kubenswrapper[5017]: I0129 08:03:17.342064 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5bba72f6-364a-41ba-903b-2378cbacaef5-config-data-custom\") pod \"barbican-api-67f5fdbfcb-p7f8f\" (UID: \"5bba72f6-364a-41ba-903b-2378cbacaef5\") " pod="openstack/barbican-api-67f5fdbfcb-p7f8f"
Jan 29 08:03:17 crc kubenswrapper[5017]: I0129 08:03:17.343427 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bba72f6-364a-41ba-903b-2378cbacaef5-combined-ca-bundle\") pod \"barbican-api-67f5fdbfcb-p7f8f\" (UID: \"5bba72f6-364a-41ba-903b-2378cbacaef5\") " pod="openstack/barbican-api-67f5fdbfcb-p7f8f"
Jan 29 08:03:17 crc kubenswrapper[5017]: I0129 08:03:17.353348 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xkbm\" (UniqueName: \"kubernetes.io/projected/5bba72f6-364a-41ba-903b-2378cbacaef5-kube-api-access-2xkbm\") pod \"barbican-api-67f5fdbfcb-p7f8f\" (UID: \"5bba72f6-364a-41ba-903b-2378cbacaef5\") " pod="openstack/barbican-api-67f5fdbfcb-p7f8f"
Jan 29 08:03:17 crc kubenswrapper[5017]: I0129 08:03:17.446825 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-67f5fdbfcb-p7f8f"
Jan 29 08:03:17 crc kubenswrapper[5017]: W0129 08:03:17.513694 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3b563542_6a58_4b54_8345_0ddd0ce400ab.slice/crio-913f689ece02d24127582dd8aa185c380e1d77a2c74fa1d5a14295b449f5ceb0 WatchSource:0}: Error finding container 913f689ece02d24127582dd8aa185c380e1d77a2c74fa1d5a14295b449f5ceb0: Status 404 returned error can't find the container with id 913f689ece02d24127582dd8aa185c380e1d77a2c74fa1d5a14295b449f5ceb0
Jan 29 08:03:17 crc kubenswrapper[5017]: I0129 08:03:17.534902 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-754cd5b757-bzlkt"]
Jan 29 08:03:17 crc kubenswrapper[5017]: I0129 08:03:17.591526 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-754cd5b757-bzlkt" event={"ID":"3b563542-6a58-4b54-8345-0ddd0ce400ab","Type":"ContainerStarted","Data":"913f689ece02d24127582dd8aa185c380e1d77a2c74fa1d5a14295b449f5ceb0"}
Jan 29 08:03:17 crc kubenswrapper[5017]: I0129 08:03:17.618286 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-b6df7d4b-jg8rj"]
Jan 29 08:03:17 crc kubenswrapper[5017]: I0129 08:03:17.902858 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6dd8bbd5cf-56g7r"]
Jan 29 08:03:17 crc kubenswrapper[5017]: W0129 08:03:17.920416 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2543d2ea_e2cd_45e5_b6e6_8665b2514591.slice/crio-23632c4175b944676999b3731672439e9b7a60903da0fd37b644b122cd1bddda WatchSource:0}: Error finding container 23632c4175b944676999b3731672439e9b7a60903da0fd37b644b122cd1bddda: Status 404 returned error can't find the container with id 23632c4175b944676999b3731672439e9b7a60903da0fd37b644b122cd1bddda
Jan 29 08:03:18 crc kubenswrapper[5017]: I0129 08:03:18.033323 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-67f5fdbfcb-p7f8f"]
Jan 29 08:03:18 crc kubenswrapper[5017]: W0129 08:03:18.047037 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5bba72f6_364a_41ba_903b_2378cbacaef5.slice/crio-2f0e32733008a0a05e23b58dcd1713943c78651f76cfc097e54dc6a39d6eaeda WatchSource:0}: Error finding container 2f0e32733008a0a05e23b58dcd1713943c78651f76cfc097e54dc6a39d6eaeda: Status 404 returned error can't find the container with id 2f0e32733008a0a05e23b58dcd1713943c78651f76cfc097e54dc6a39d6eaeda
Jan 29 08:03:18 crc kubenswrapper[5017]: I0129 08:03:18.607160 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-67f5fdbfcb-p7f8f" event={"ID":"5bba72f6-364a-41ba-903b-2378cbacaef5","Type":"ContainerStarted","Data":"6c5de0192eb97a7a1b209b47e10c766dfbd76fa2437f04731fe16118cb45fc63"}
Jan 29 08:03:18 crc kubenswrapper[5017]: I0129 08:03:18.608001 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-67f5fdbfcb-p7f8f" event={"ID":"5bba72f6-364a-41ba-903b-2378cbacaef5","Type":"ContainerStarted","Data":"1eacc7ebfbbf646e5a1d0bcb91a8e7e20b9387f141462641db75b00dd58b0d85"}
Jan 29 08:03:18 crc kubenswrapper[5017]: I0129 08:03:18.608032 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-67f5fdbfcb-p7f8f"
Jan 29 08:03:18 crc kubenswrapper[5017]: I0129 08:03:18.608047 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-67f5fdbfcb-p7f8f"
Jan 29 08:03:18 crc kubenswrapper[5017]: I0129 08:03:18.608074 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-67f5fdbfcb-p7f8f" event={"ID":"5bba72f6-364a-41ba-903b-2378cbacaef5","Type":"ContainerStarted","Data":"2f0e32733008a0a05e23b58dcd1713943c78651f76cfc097e54dc6a39d6eaeda"}
Jan 29 08:03:18 crc kubenswrapper[5017]: I0129 08:03:18.610156 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-b6df7d4b-jg8rj" event={"ID":"50399e4a-ae5c-44e8-a7b5-32201b2be9c7","Type":"ContainerStarted","Data":"42ca726288ca3657702059867164dee2aa79a2490807586a8c1dfc5e8bab65fc"}
Jan 29 08:03:18 crc kubenswrapper[5017]: I0129 08:03:18.610183 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-b6df7d4b-jg8rj" event={"ID":"50399e4a-ae5c-44e8-a7b5-32201b2be9c7","Type":"ContainerStarted","Data":"376bb61561ef9178e98f0d0123071ab58ae2e9d09fb40888bd70b52e68c267a9"}
Jan 29 08:03:18 crc kubenswrapper[5017]: I0129 08:03:18.610216 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-b6df7d4b-jg8rj" event={"ID":"50399e4a-ae5c-44e8-a7b5-32201b2be9c7","Type":"ContainerStarted","Data":"7a66ef5e94f904c5abcfb33325016bb2b40341d9192e9e1a1a6409995c2f1622"}
Jan 29 08:03:18 crc kubenswrapper[5017]: I0129 08:03:18.614642 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-754cd5b757-bzlkt" event={"ID":"3b563542-6a58-4b54-8345-0ddd0ce400ab","Type":"ContainerStarted","Data":"081ddb09368172e71955bdd8d28ae9ee8c810a5822e0611352203537ba79b17b"}
Jan 29 08:03:18 crc kubenswrapper[5017]: I0129 08:03:18.614684 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-754cd5b757-bzlkt" event={"ID":"3b563542-6a58-4b54-8345-0ddd0ce400ab","Type":"ContainerStarted","Data":"bf5509e5ea2c2f2f0e79b147418ca916ab5f8f825888b94375471a8237d64657"}
Jan 29 08:03:18 crc kubenswrapper[5017]: I0129 08:03:18.616808 5017 generic.go:334] "Generic (PLEG): container finished" podID="2543d2ea-e2cd-45e5-b6e6-8665b2514591" containerID="6cf3272580e30b81f7e84b1e399d58d6dddb8f8f2038b87c7cceabee7fc47b3c" exitCode=0
Jan 29 08:03:18 crc kubenswrapper[5017]: I0129 08:03:18.616864 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6dd8bbd5cf-56g7r" event={"ID":"2543d2ea-e2cd-45e5-b6e6-8665b2514591","Type":"ContainerDied","Data":"6cf3272580e30b81f7e84b1e399d58d6dddb8f8f2038b87c7cceabee7fc47b3c"}
Jan 29 08:03:18 crc kubenswrapper[5017]: I0129 08:03:18.616914 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6dd8bbd5cf-56g7r" event={"ID":"2543d2ea-e2cd-45e5-b6e6-8665b2514591","Type":"ContainerStarted","Data":"23632c4175b944676999b3731672439e9b7a60903da0fd37b644b122cd1bddda"}
Jan 29 08:03:18 crc kubenswrapper[5017]: I0129 08:03:18.636847 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-67f5fdbfcb-p7f8f" podStartSLOduration=1.636823164 podStartE2EDuration="1.636823164s" podCreationTimestamp="2026-01-29 08:03:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:03:18.629860297 +0000 UTC m=+5285.004307907" watchObservedRunningTime="2026-01-29 08:03:18.636823164 +0000 UTC m=+5285.011270774"
Jan 29 08:03:18 crc kubenswrapper[5017]: I0129 08:03:18.667483 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-754cd5b757-bzlkt" podStartSLOduration=2.6674618 podStartE2EDuration="2.6674618s" podCreationTimestamp="2026-01-29 08:03:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:03:18.64831591 +0000 UTC m=+5285.022763510" watchObservedRunningTime="2026-01-29 08:03:18.6674618 +0000 UTC m=+5285.041909410"
Jan 29 08:03:18 crc kubenswrapper[5017]: I0129 08:03:18.704378 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-b6df7d4b-jg8rj" podStartSLOduration=2.704340515 podStartE2EDuration="2.704340515s" podCreationTimestamp="2026-01-29 08:03:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:03:18.670927482 +0000 UTC m=+5285.045375092" watchObservedRunningTime="2026-01-29 08:03:18.704340515 +0000 UTC m=+5285.078788125"
Jan 29 08:03:19 crc kubenswrapper[5017]: I0129 08:03:19.640543 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6dd8bbd5cf-56g7r" event={"ID":"2543d2ea-e2cd-45e5-b6e6-8665b2514591","Type":"ContainerStarted","Data":"ed88b812b63015d148d1e06a4c59d882f57dd25c63a828816b2ef37fdf0cc65f"}
Jan 29 08:03:19 crc kubenswrapper[5017]: I0129 08:03:19.668061 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6dd8bbd5cf-56g7r" podStartSLOduration=3.668040412 podStartE2EDuration="3.668040412s" podCreationTimestamp="2026-01-29 08:03:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:03:19.662233622 +0000 UTC m=+5286.036681232" watchObservedRunningTime="2026-01-29 08:03:19.668040412 +0000 UTC m=+5286.042488022"
Jan 29 08:03:20 crc kubenswrapper[5017]: I0129 08:03:20.653291 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6dd8bbd5cf-56g7r"
Jan 29 08:03:22 crc kubenswrapper[5017]: I0129 08:03:22.317169 5017 scope.go:117] "RemoveContainer" containerID="c58783d1bd795ffac490b01aaccb060f3071e08daa88d33a09f7ce8715d64300"
Jan 29 08:03:22 crc kubenswrapper[5017]: E0129 08:03:22.317671 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2"
Jan 29 08:03:23 crc kubenswrapper[5017]: I0129 08:03:23.624167 5017 scope.go:117] "RemoveContainer" containerID="ad450d585b3b08adfdd4b7d64acb2163026fdbf2a02052da64e970fc0524b7fe"
Jan 29 08:03:27 crc kubenswrapper[5017]: I0129 08:03:27.263302 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6dd8bbd5cf-56g7r"
Jan 29 08:03:27 crc kubenswrapper[5017]: I0129 08:03:27.337604 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b6cd57555-9kq9j"]
Jan 29 08:03:27 crc kubenswrapper[5017]: I0129 08:03:27.338760 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-b6cd57555-9kq9j" podUID="e0bad2ee-5b49-4893-b84a-9f28d470c04b" containerName="dnsmasq-dns" containerID="cri-o://886f8f26dced61b8d88f0e14150de18c5bd57ffc5dc572fc523efed9e0dbe2f6" gracePeriod=10
Jan 29 08:03:27 crc kubenswrapper[5017]: I0129 08:03:27.720982 5017 generic.go:334] "Generic (PLEG): container finished" podID="e0bad2ee-5b49-4893-b84a-9f28d470c04b" containerID="886f8f26dced61b8d88f0e14150de18c5bd57ffc5dc572fc523efed9e0dbe2f6" exitCode=0
Jan 29 08:03:27 crc kubenswrapper[5017]: I0129 08:03:27.721462 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b6cd57555-9kq9j" event={"ID":"e0bad2ee-5b49-4893-b84a-9f28d470c04b","Type":"ContainerDied","Data":"886f8f26dced61b8d88f0e14150de18c5bd57ffc5dc572fc523efed9e0dbe2f6"}
Jan 29 08:03:27 crc kubenswrapper[5017]: I0129 08:03:27.838786 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b6cd57555-9kq9j"
Jan 29 08:03:27 crc kubenswrapper[5017]: I0129 08:03:27.993187 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jr49d\" (UniqueName: \"kubernetes.io/projected/e0bad2ee-5b49-4893-b84a-9f28d470c04b-kube-api-access-jr49d\") pod \"e0bad2ee-5b49-4893-b84a-9f28d470c04b\" (UID: \"e0bad2ee-5b49-4893-b84a-9f28d470c04b\") "
Jan 29 08:03:27 crc kubenswrapper[5017]: I0129 08:03:27.993285 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e0bad2ee-5b49-4893-b84a-9f28d470c04b-ovsdbserver-nb\") pod \"e0bad2ee-5b49-4893-b84a-9f28d470c04b\" (UID: \"e0bad2ee-5b49-4893-b84a-9f28d470c04b\") "
Jan 29 08:03:27 crc kubenswrapper[5017]: I0129 08:03:27.993356 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e0bad2ee-5b49-4893-b84a-9f28d470c04b-ovsdbserver-sb\") pod \"e0bad2ee-5b49-4893-b84a-9f28d470c04b\" (UID: \"e0bad2ee-5b49-4893-b84a-9f28d470c04b\") "
Jan 29 08:03:27 crc kubenswrapper[5017]: I0129 08:03:27.993438 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0bad2ee-5b49-4893-b84a-9f28d470c04b-config\") pod \"e0bad2ee-5b49-4893-b84a-9f28d470c04b\" (UID: \"e0bad2ee-5b49-4893-b84a-9f28d470c04b\") "
Jan 29 08:03:27 crc kubenswrapper[5017]: I0129 08:03:27.993484 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e0bad2ee-5b49-4893-b84a-9f28d470c04b-dns-svc\") pod \"e0bad2ee-5b49-4893-b84a-9f28d470c04b\" (UID: \"e0bad2ee-5b49-4893-b84a-9f28d470c04b\") "
Jan 29 08:03:28 crc kubenswrapper[5017]: I0129 08:03:28.008160 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0bad2ee-5b49-4893-b84a-9f28d470c04b-kube-api-access-jr49d" (OuterVolumeSpecName: "kube-api-access-jr49d") pod "e0bad2ee-5b49-4893-b84a-9f28d470c04b" (UID: "e0bad2ee-5b49-4893-b84a-9f28d470c04b").
InnerVolumeSpecName "kube-api-access-jr49d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:03:28 crc kubenswrapper[5017]: I0129 08:03:28.037870 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0bad2ee-5b49-4893-b84a-9f28d470c04b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e0bad2ee-5b49-4893-b84a-9f28d470c04b" (UID: "e0bad2ee-5b49-4893-b84a-9f28d470c04b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:03:28 crc kubenswrapper[5017]: I0129 08:03:28.048448 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0bad2ee-5b49-4893-b84a-9f28d470c04b-config" (OuterVolumeSpecName: "config") pod "e0bad2ee-5b49-4893-b84a-9f28d470c04b" (UID: "e0bad2ee-5b49-4893-b84a-9f28d470c04b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:03:28 crc kubenswrapper[5017]: I0129 08:03:28.049374 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0bad2ee-5b49-4893-b84a-9f28d470c04b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e0bad2ee-5b49-4893-b84a-9f28d470c04b" (UID: "e0bad2ee-5b49-4893-b84a-9f28d470c04b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:03:28 crc kubenswrapper[5017]: I0129 08:03:28.069352 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0bad2ee-5b49-4893-b84a-9f28d470c04b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e0bad2ee-5b49-4893-b84a-9f28d470c04b" (UID: "e0bad2ee-5b49-4893-b84a-9f28d470c04b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:03:28 crc kubenswrapper[5017]: I0129 08:03:28.096329 5017 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0bad2ee-5b49-4893-b84a-9f28d470c04b-config\") on node \"crc\" DevicePath \"\"" Jan 29 08:03:28 crc kubenswrapper[5017]: I0129 08:03:28.096398 5017 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e0bad2ee-5b49-4893-b84a-9f28d470c04b-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 08:03:28 crc kubenswrapper[5017]: I0129 08:03:28.096415 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jr49d\" (UniqueName: \"kubernetes.io/projected/e0bad2ee-5b49-4893-b84a-9f28d470c04b-kube-api-access-jr49d\") on node \"crc\" DevicePath \"\"" Jan 29 08:03:28 crc kubenswrapper[5017]: I0129 08:03:28.096433 5017 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e0bad2ee-5b49-4893-b84a-9f28d470c04b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 08:03:28 crc kubenswrapper[5017]: I0129 08:03:28.096445 5017 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e0bad2ee-5b49-4893-b84a-9f28d470c04b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 08:03:28 crc kubenswrapper[5017]: I0129 08:03:28.733637 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b6cd57555-9kq9j" event={"ID":"e0bad2ee-5b49-4893-b84a-9f28d470c04b","Type":"ContainerDied","Data":"fedf2faf4931e2645133de1d785e8ca0f9050df0358c8dad7026d0cc47f0df89"} Jan 29 08:03:28 crc kubenswrapper[5017]: I0129 08:03:28.734280 5017 scope.go:117] 
"RemoveContainer" containerID="886f8f26dced61b8d88f0e14150de18c5bd57ffc5dc572fc523efed9e0dbe2f6" Jan 29 08:03:28 crc kubenswrapper[5017]: I0129 08:03:28.733700 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b6cd57555-9kq9j" Jan 29 08:03:28 crc kubenswrapper[5017]: I0129 08:03:28.772461 5017 scope.go:117] "RemoveContainer" containerID="ebfc084db8d577e4a64ca05ee9252ce4af172e4e289e57da6221c73e23a0ccf9" Jan 29 08:03:28 crc kubenswrapper[5017]: I0129 08:03:28.772832 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b6cd57555-9kq9j"] Jan 29 08:03:28 crc kubenswrapper[5017]: I0129 08:03:28.780471 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b6cd57555-9kq9j"] Jan 29 08:03:29 crc kubenswrapper[5017]: I0129 08:03:29.111320 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-67f5fdbfcb-p7f8f" Jan 29 08:03:29 crc kubenswrapper[5017]: I0129 08:03:29.141067 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-67f5fdbfcb-p7f8f" Jan 29 08:03:30 crc kubenswrapper[5017]: I0129 08:03:30.329765 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0bad2ee-5b49-4893-b84a-9f28d470c04b" path="/var/lib/kubelet/pods/e0bad2ee-5b49-4893-b84a-9f28d470c04b/volumes" Jan 29 08:03:35 crc kubenswrapper[5017]: I0129 08:03:35.317408 5017 scope.go:117] "RemoveContainer" containerID="c58783d1bd795ffac490b01aaccb060f3071e08daa88d33a09f7ce8715d64300" Jan 29 08:03:35 crc kubenswrapper[5017]: I0129 08:03:35.799511 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-895pl" event={"ID":"2672ef63-7861-4c3d-a1b4-03cc9d18f8e2","Type":"ContainerStarted","Data":"654247425c9be1ad39bf4a420f5d59cd286d394ddb9732f299ef1d927e039684"} Jan 29 08:03:41 crc kubenswrapper[5017]: I0129 08:03:41.670120 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-jzhwl"] Jan 29 08:03:41 crc kubenswrapper[5017]: E0129 08:03:41.671368 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0bad2ee-5b49-4893-b84a-9f28d470c04b" containerName="dnsmasq-dns" Jan 29 08:03:41 crc kubenswrapper[5017]: I0129 08:03:41.671386 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0bad2ee-5b49-4893-b84a-9f28d470c04b" containerName="dnsmasq-dns" Jan 29 08:03:41 crc kubenswrapper[5017]: E0129 08:03:41.671398 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0bad2ee-5b49-4893-b84a-9f28d470c04b" containerName="init" Jan 29 08:03:41 crc kubenswrapper[5017]: I0129 08:03:41.671404 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0bad2ee-5b49-4893-b84a-9f28d470c04b" containerName="init" Jan 29 08:03:41 crc kubenswrapper[5017]: I0129 08:03:41.671572 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0bad2ee-5b49-4893-b84a-9f28d470c04b" containerName="dnsmasq-dns" Jan 29 08:03:41 crc kubenswrapper[5017]: I0129 08:03:41.672164 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-jzhwl" Jan 29 08:03:41 crc kubenswrapper[5017]: I0129 08:03:41.689066 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-jzhwl"] Jan 29 08:03:41 crc kubenswrapper[5017]: I0129 08:03:41.764045 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-8c0b-account-create-update-475f9"] Jan 29 08:03:41 crc kubenswrapper[5017]: I0129 08:03:41.765353 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-8c0b-account-create-update-475f9" Jan 29 08:03:41 crc kubenswrapper[5017]: I0129 08:03:41.768156 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 29 08:03:41 crc kubenswrapper[5017]: I0129 08:03:41.773881 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-8c0b-account-create-update-475f9"] Jan 29 08:03:41 crc kubenswrapper[5017]: I0129 08:03:41.801337 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88436924-55af-421f-8da0-ba80f463a5e7-operator-scripts\") pod \"neutron-db-create-jzhwl\" (UID: \"88436924-55af-421f-8da0-ba80f463a5e7\") " pod="openstack/neutron-db-create-jzhwl" Jan 29 08:03:41 crc kubenswrapper[5017]: I0129 08:03:41.801406 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s76ts\" (UniqueName: \"kubernetes.io/projected/88436924-55af-421f-8da0-ba80f463a5e7-kube-api-access-s76ts\") pod \"neutron-db-create-jzhwl\" (UID: \"88436924-55af-421f-8da0-ba80f463a5e7\") " pod="openstack/neutron-db-create-jzhwl" Jan 29 08:03:41 crc kubenswrapper[5017]: I0129 08:03:41.903689 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54jnd\" (UniqueName: \"kubernetes.io/projected/4e153971-7167-42ca-b001-2cf231b13310-kube-api-access-54jnd\") pod \"neutron-8c0b-account-create-update-475f9\" (UID: \"4e153971-7167-42ca-b001-2cf231b13310\") " pod="openstack/neutron-8c0b-account-create-update-475f9" Jan 29 08:03:41 crc kubenswrapper[5017]: I0129 08:03:41.903838 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88436924-55af-421f-8da0-ba80f463a5e7-operator-scripts\") pod \"neutron-db-create-jzhwl\" (UID: \"88436924-55af-421f-8da0-ba80f463a5e7\") " pod="openstack/neutron-db-create-jzhwl" Jan 29 08:03:41 crc kubenswrapper[5017]: I0129 08:03:41.903874 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s76ts\" (UniqueName: \"kubernetes.io/projected/88436924-55af-421f-8da0-ba80f463a5e7-kube-api-access-s76ts\") pod \"neutron-db-create-jzhwl\" (UID: \"88436924-55af-421f-8da0-ba80f463a5e7\") " pod="openstack/neutron-db-create-jzhwl" Jan 29 08:03:41 crc kubenswrapper[5017]: I0129 08:03:41.903901 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4e153971-7167-42ca-b001-2cf231b13310-operator-scripts\") pod \"neutron-8c0b-account-create-update-475f9\" (UID: \"4e153971-7167-42ca-b001-2cf231b13310\") " pod="openstack/neutron-8c0b-account-create-update-475f9" Jan 29 08:03:41 crc kubenswrapper[5017]: I0129 08:03:41.904746 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88436924-55af-421f-8da0-ba80f463a5e7-operator-scripts\") pod \"neutron-db-create-jzhwl\" (UID: \"88436924-55af-421f-8da0-ba80f463a5e7\") " pod="openstack/neutron-db-create-jzhwl" Jan 29 08:03:41 crc kubenswrapper[5017]: I0129 08:03:41.929066 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s76ts\" (UniqueName: \"kubernetes.io/projected/88436924-55af-421f-8da0-ba80f463a5e7-kube-api-access-s76ts\") pod \"neutron-db-create-jzhwl\" (UID: \"88436924-55af-421f-8da0-ba80f463a5e7\") " pod="openstack/neutron-db-create-jzhwl" Jan 29 08:03:41 crc kubenswrapper[5017]: I0129 08:03:41.996567 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-jzhwl" Jan 29 08:03:42 crc kubenswrapper[5017]: I0129 08:03:42.005733 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54jnd\" (UniqueName: \"kubernetes.io/projected/4e153971-7167-42ca-b001-2cf231b13310-kube-api-access-54jnd\") pod \"neutron-8c0b-account-create-update-475f9\" (UID: \"4e153971-7167-42ca-b001-2cf231b13310\") " pod="openstack/neutron-8c0b-account-create-update-475f9" Jan 29 08:03:42 crc kubenswrapper[5017]: I0129 08:03:42.005797 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4e153971-7167-42ca-b001-2cf231b13310-operator-scripts\") pod \"neutron-8c0b-account-create-update-475f9\" (UID: \"4e153971-7167-42ca-b001-2cf231b13310\") " pod="openstack/neutron-8c0b-account-create-update-475f9" Jan 29 08:03:42 crc kubenswrapper[5017]: I0129 08:03:42.006577 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4e153971-7167-42ca-b001-2cf231b13310-operator-scripts\") pod \"neutron-8c0b-account-create-update-475f9\" (UID: \"4e153971-7167-42ca-b001-2cf231b13310\") " pod="openstack/neutron-8c0b-account-create-update-475f9" Jan 29 08:03:42 crc kubenswrapper[5017]: I0129 08:03:42.025610 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54jnd\" (UniqueName: \"kubernetes.io/projected/4e153971-7167-42ca-b001-2cf231b13310-kube-api-access-54jnd\") pod \"neutron-8c0b-account-create-update-475f9\" (UID: \"4e153971-7167-42ca-b001-2cf231b13310\") " pod="openstack/neutron-8c0b-account-create-update-475f9" Jan 29 08:03:42 crc kubenswrapper[5017]: I0129 08:03:42.092406 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-8c0b-account-create-update-475f9" Jan 29 08:03:42 crc kubenswrapper[5017]: I0129 08:03:42.601502 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-jzhwl"] Jan 29 08:03:42 crc kubenswrapper[5017]: I0129 08:03:42.617590 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-8c0b-account-create-update-475f9"] Jan 29 08:03:42 crc kubenswrapper[5017]: I0129 08:03:42.880347 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-jzhwl" event={"ID":"88436924-55af-421f-8da0-ba80f463a5e7","Type":"ContainerStarted","Data":"c041202c0a44d442a6d91b733dda5b71b6fcb3cbbd02753674830b13dc6ecc70"} Jan 29 08:03:42 crc kubenswrapper[5017]: I0129 08:03:42.880783 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-jzhwl" event={"ID":"88436924-55af-421f-8da0-ba80f463a5e7","Type":"ContainerStarted","Data":"77a2cbb34f6ad2182728a1c9a91cd7dee32567df6762b928f0676176e1d85ce9"} Jan 29 08:03:42 crc kubenswrapper[5017]: I0129 08:03:42.892162 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8c0b-account-create-update-475f9" event={"ID":"4e153971-7167-42ca-b001-2cf231b13310","Type":"ContainerStarted","Data":"4e51714d7510f6d84cfd2d03f866c97c8b20ddfed7b753c95a9f8602689949e7"} Jan 29 08:03:42 crc kubenswrapper[5017]: I0129 08:03:42.892251 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8c0b-account-create-update-475f9" event={"ID":"4e153971-7167-42ca-b001-2cf231b13310","Type":"ContainerStarted","Data":"8d8b005a0b1ef4fcfaa0dbf2bae22a486a050e3140f94148f66fd814813b2781"} Jan 29 08:03:42 crc kubenswrapper[5017]: I0129 08:03:42.904637 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-jzhwl" podStartSLOduration=1.90459454 podStartE2EDuration="1.90459454s" podCreationTimestamp="2026-01-29 08:03:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:03:42.900482091 +0000 UTC m=+5309.274929711" watchObservedRunningTime="2026-01-29 08:03:42.90459454 +0000 UTC m=+5309.279042150" Jan 29 08:03:42 crc kubenswrapper[5017]: I0129 08:03:42.925722 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-8c0b-account-create-update-475f9" podStartSLOduration=1.925686536 podStartE2EDuration="1.925686536s" podCreationTimestamp="2026-01-29 08:03:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:03:42.922571622 +0000 UTC m=+5309.297019232" watchObservedRunningTime="2026-01-29 08:03:42.925686536 +0000 UTC m=+5309.300134146" Jan 29 08:03:43 crc kubenswrapper[5017]: I0129 08:03:43.903939 5017 generic.go:334] "Generic (PLEG): container finished" podID="88436924-55af-421f-8da0-ba80f463a5e7" containerID="c041202c0a44d442a6d91b733dda5b71b6fcb3cbbd02753674830b13dc6ecc70" exitCode=0 Jan 29 08:03:43 crc kubenswrapper[5017]: I0129 08:03:43.904066 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-jzhwl" event={"ID":"88436924-55af-421f-8da0-ba80f463a5e7","Type":"ContainerDied","Data":"c041202c0a44d442a6d91b733dda5b71b6fcb3cbbd02753674830b13dc6ecc70"} Jan 29 08:03:43 crc kubenswrapper[5017]: I0129 08:03:43.907432 5017 generic.go:334] "Generic (PLEG): container finished" 
podID="4e153971-7167-42ca-b001-2cf231b13310" containerID="4e51714d7510f6d84cfd2d03f866c97c8b20ddfed7b753c95a9f8602689949e7" exitCode=0 Jan 29 08:03:43 crc kubenswrapper[5017]: I0129 08:03:43.907467 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8c0b-account-create-update-475f9" event={"ID":"4e153971-7167-42ca-b001-2cf231b13310","Type":"ContainerDied","Data":"4e51714d7510f6d84cfd2d03f866c97c8b20ddfed7b753c95a9f8602689949e7"} Jan 29 08:03:45 crc kubenswrapper[5017]: I0129 08:03:45.389441 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-8c0b-account-create-update-475f9" Jan 29 08:03:45 crc kubenswrapper[5017]: I0129 08:03:45.397993 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-jzhwl" Jan 29 08:03:45 crc kubenswrapper[5017]: I0129 08:03:45.474778 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-54jnd\" (UniqueName: \"kubernetes.io/projected/4e153971-7167-42ca-b001-2cf231b13310-kube-api-access-54jnd\") pod \"4e153971-7167-42ca-b001-2cf231b13310\" (UID: \"4e153971-7167-42ca-b001-2cf231b13310\") " Jan 29 08:03:45 crc kubenswrapper[5017]: I0129 08:03:45.474924 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4e153971-7167-42ca-b001-2cf231b13310-operator-scripts\") pod \"4e153971-7167-42ca-b001-2cf231b13310\" (UID: \"4e153971-7167-42ca-b001-2cf231b13310\") " Jan 29 08:03:45 crc kubenswrapper[5017]: I0129 08:03:45.476679 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e153971-7167-42ca-b001-2cf231b13310-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4e153971-7167-42ca-b001-2cf231b13310" (UID: "4e153971-7167-42ca-b001-2cf231b13310"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:03:45 crc kubenswrapper[5017]: I0129 08:03:45.482210 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e153971-7167-42ca-b001-2cf231b13310-kube-api-access-54jnd" (OuterVolumeSpecName: "kube-api-access-54jnd") pod "4e153971-7167-42ca-b001-2cf231b13310" (UID: "4e153971-7167-42ca-b001-2cf231b13310"). InnerVolumeSpecName "kube-api-access-54jnd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:03:45 crc kubenswrapper[5017]: I0129 08:03:45.577637 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s76ts\" (UniqueName: \"kubernetes.io/projected/88436924-55af-421f-8da0-ba80f463a5e7-kube-api-access-s76ts\") pod \"88436924-55af-421f-8da0-ba80f463a5e7\" (UID: \"88436924-55af-421f-8da0-ba80f463a5e7\") " Jan 29 08:03:45 crc kubenswrapper[5017]: I0129 08:03:45.577708 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88436924-55af-421f-8da0-ba80f463a5e7-operator-scripts\") pod \"88436924-55af-421f-8da0-ba80f463a5e7\" (UID: \"88436924-55af-421f-8da0-ba80f463a5e7\") " Jan 29 08:03:45 crc kubenswrapper[5017]: I0129 08:03:45.578822 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88436924-55af-421f-8da0-ba80f463a5e7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "88436924-55af-421f-8da0-ba80f463a5e7" (UID: "88436924-55af-421f-8da0-ba80f463a5e7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:03:45 crc kubenswrapper[5017]: I0129 08:03:45.579173 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-54jnd\" (UniqueName: \"kubernetes.io/projected/4e153971-7167-42ca-b001-2cf231b13310-kube-api-access-54jnd\") on node \"crc\" DevicePath \"\"" Jan 29 08:03:45 crc kubenswrapper[5017]: I0129 08:03:45.579196 5017 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4e153971-7167-42ca-b001-2cf231b13310-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 08:03:45 crc kubenswrapper[5017]: I0129 08:03:45.582415 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88436924-55af-421f-8da0-ba80f463a5e7-kube-api-access-s76ts" (OuterVolumeSpecName: "kube-api-access-s76ts") pod "88436924-55af-421f-8da0-ba80f463a5e7" (UID: "88436924-55af-421f-8da0-ba80f463a5e7"). InnerVolumeSpecName "kube-api-access-s76ts". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:03:45 crc kubenswrapper[5017]: I0129 08:03:45.681788 5017 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88436924-55af-421f-8da0-ba80f463a5e7-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 08:03:45 crc kubenswrapper[5017]: I0129 08:03:45.682237 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s76ts\" (UniqueName: \"kubernetes.io/projected/88436924-55af-421f-8da0-ba80f463a5e7-kube-api-access-s76ts\") on node \"crc\" DevicePath \"\"" Jan 29 08:03:45 crc kubenswrapper[5017]: I0129 08:03:45.929404 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8c0b-account-create-update-475f9" event={"ID":"4e153971-7167-42ca-b001-2cf231b13310","Type":"ContainerDied","Data":"8d8b005a0b1ef4fcfaa0dbf2bae22a486a050e3140f94148f66fd814813b2781"} Jan 29 08:03:45 crc kubenswrapper[5017]: I0129 08:03:45.929461 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d8b005a0b1ef4fcfaa0dbf2bae22a486a050e3140f94148f66fd814813b2781" Jan 29 08:03:45 crc kubenswrapper[5017]: I0129 08:03:45.929530 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-8c0b-account-create-update-475f9" Jan 29 08:03:45 crc kubenswrapper[5017]: I0129 08:03:45.932111 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-jzhwl" event={"ID":"88436924-55af-421f-8da0-ba80f463a5e7","Type":"ContainerDied","Data":"77a2cbb34f6ad2182728a1c9a91cd7dee32567df6762b928f0676176e1d85ce9"} Jan 29 08:03:45 crc kubenswrapper[5017]: I0129 08:03:45.932189 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="77a2cbb34f6ad2182728a1c9a91cd7dee32567df6762b928f0676176e1d85ce9" Jan 29 08:03:45 crc kubenswrapper[5017]: I0129 08:03:45.932221 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-jzhwl" Jan 29 08:03:47 crc kubenswrapper[5017]: I0129 08:03:47.044135 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-qwfkb"] Jan 29 08:03:47 crc kubenswrapper[5017]: E0129 08:03:47.044728 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e153971-7167-42ca-b001-2cf231b13310" containerName="mariadb-account-create-update" Jan 29 08:03:47 crc kubenswrapper[5017]: I0129 08:03:47.044746 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e153971-7167-42ca-b001-2cf231b13310" containerName="mariadb-account-create-update" Jan 29 08:03:47 crc kubenswrapper[5017]: E0129 08:03:47.044763 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88436924-55af-421f-8da0-ba80f463a5e7" containerName="mariadb-database-create" Jan 29 08:03:47 crc kubenswrapper[5017]: I0129 08:03:47.044769 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="88436924-55af-421f-8da0-ba80f463a5e7" containerName="mariadb-database-create" Jan 29 08:03:47 crc kubenswrapper[5017]: I0129 08:03:47.045694 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e153971-7167-42ca-b001-2cf231b13310" containerName="mariadb-account-create-update" Jan 29 08:03:47 crc kubenswrapper[5017]: I0129 08:03:47.045736 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="88436924-55af-421f-8da0-ba80f463a5e7" containerName="mariadb-database-create" Jan 29 08:03:47 crc kubenswrapper[5017]: I0129 08:03:47.046856 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-qwfkb" Jan 29 08:03:47 crc kubenswrapper[5017]: I0129 08:03:47.049265 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 29 08:03:47 crc kubenswrapper[5017]: I0129 08:03:47.050361 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-cmtx5" Jan 29 08:03:47 crc kubenswrapper[5017]: I0129 08:03:47.055555 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 29 08:03:47 crc kubenswrapper[5017]: I0129 08:03:47.059631 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-qwfkb"] Jan 29 08:03:47 crc kubenswrapper[5017]: I0129 08:03:47.215543 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/075ffa52-6c07-4cd7-9c9d-3fec46d4c6a4-combined-ca-bundle\") pod \"neutron-db-sync-qwfkb\" (UID: \"075ffa52-6c07-4cd7-9c9d-3fec46d4c6a4\") " pod="openstack/neutron-db-sync-qwfkb" Jan 29 08:03:47 crc kubenswrapper[5017]: I0129 08:03:47.215635 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/075ffa52-6c07-4cd7-9c9d-3fec46d4c6a4-config\") pod \"neutron-db-sync-qwfkb\" (UID: \"075ffa52-6c07-4cd7-9c9d-3fec46d4c6a4\") " pod="openstack/neutron-db-sync-qwfkb" Jan 29 08:03:47 crc kubenswrapper[5017]: I0129 08:03:47.215675 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78k6r\" (UniqueName: \"kubernetes.io/projected/075ffa52-6c07-4cd7-9c9d-3fec46d4c6a4-kube-api-access-78k6r\") pod \"neutron-db-sync-qwfkb\" (UID: \"075ffa52-6c07-4cd7-9c9d-3fec46d4c6a4\") " pod="openstack/neutron-db-sync-qwfkb" Jan 29 08:03:47 crc kubenswrapper[5017]: I0129 08:03:47.317357 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/075ffa52-6c07-4cd7-9c9d-3fec46d4c6a4-config\") pod \"neutron-db-sync-qwfkb\" (UID: \"075ffa52-6c07-4cd7-9c9d-3fec46d4c6a4\") " pod="openstack/neutron-db-sync-qwfkb" Jan 29 08:03:47 crc kubenswrapper[5017]: I0129 08:03:47.317405 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78k6r\" (UniqueName: \"kubernetes.io/projected/075ffa52-6c07-4cd7-9c9d-3fec46d4c6a4-kube-api-access-78k6r\") pod \"neutron-db-sync-qwfkb\" (UID: \"075ffa52-6c07-4cd7-9c9d-3fec46d4c6a4\") " pod="openstack/neutron-db-sync-qwfkb" Jan 29 08:03:47 crc kubenswrapper[5017]: I0129 08:03:47.317518 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/075ffa52-6c07-4cd7-9c9d-3fec46d4c6a4-combined-ca-bundle\") pod \"neutron-db-sync-qwfkb\" (UID: \"075ffa52-6c07-4cd7-9c9d-3fec46d4c6a4\") " pod="openstack/neutron-db-sync-qwfkb" Jan 29 08:03:47 crc kubenswrapper[5017]: I0129 08:03:47.324008 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/075ffa52-6c07-4cd7-9c9d-3fec46d4c6a4-config\") pod \"neutron-db-sync-qwfkb\" (UID: \"075ffa52-6c07-4cd7-9c9d-3fec46d4c6a4\") " pod="openstack/neutron-db-sync-qwfkb" Jan 29 08:03:47 crc kubenswrapper[5017]: I0129 08:03:47.331846 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/075ffa52-6c07-4cd7-9c9d-3fec46d4c6a4-combined-ca-bundle\") pod \"neutron-db-sync-qwfkb\" (UID: \"075ffa52-6c07-4cd7-9c9d-3fec46d4c6a4\") " pod="openstack/neutron-db-sync-qwfkb" Jan 29 08:03:47 crc kubenswrapper[5017]: I0129 08:03:47.334535 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78k6r\" (UniqueName: \"kubernetes.io/projected/075ffa52-6c07-4cd7-9c9d-3fec46d4c6a4-kube-api-access-78k6r\") pod \"neutron-db-sync-qwfkb\" (UID: \"075ffa52-6c07-4cd7-9c9d-3fec46d4c6a4\") " pod="openstack/neutron-db-sync-qwfkb" Jan 29 08:03:47 crc kubenswrapper[5017]: I0129 08:03:47.379191 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-qwfkb" Jan 29 08:03:47 crc kubenswrapper[5017]: I0129 08:03:47.840888 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-qwfkb"] Jan 29 08:03:47 crc kubenswrapper[5017]: I0129 08:03:47.951214 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-qwfkb" event={"ID":"075ffa52-6c07-4cd7-9c9d-3fec46d4c6a4","Type":"ContainerStarted","Data":"fce7a542f51619b6260fa113686dc0aee093accbc3bf13133cf0fe3e7a8b1fbe"} Jan 29 08:03:48 crc kubenswrapper[5017]: I0129 08:03:48.964112 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-qwfkb" event={"ID":"075ffa52-6c07-4cd7-9c9d-3fec46d4c6a4","Type":"ContainerStarted","Data":"a09c25f576a8313a628ebe6bdd676faab91b69281b091517a762bc6ad8c45c0b"} Jan 29 08:03:48 crc kubenswrapper[5017]: I0129 08:03:48.985301 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-qwfkb" podStartSLOduration=1.985273099 podStartE2EDuration="1.985273099s" podCreationTimestamp="2026-01-29 08:03:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:03:48.979794007 +0000 UTC m=+5315.354241627" watchObservedRunningTime="2026-01-29 08:03:48.985273099 +0000 UTC m=+5315.359720709" Jan 29 08:03:53 crc kubenswrapper[5017]: I0129 08:03:52.999933 5017 generic.go:334] "Generic (PLEG): container finished" podID="075ffa52-6c07-4cd7-9c9d-3fec46d4c6a4" containerID="a09c25f576a8313a628ebe6bdd676faab91b69281b091517a762bc6ad8c45c0b" exitCode=0 Jan 29 08:03:53 crc kubenswrapper[5017]: I0129 08:03:53.000193 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-qwfkb" event={"ID":"075ffa52-6c07-4cd7-9c9d-3fec46d4c6a4","Type":"ContainerDied","Data":"a09c25f576a8313a628ebe6bdd676faab91b69281b091517a762bc6ad8c45c0b"} Jan 29 08:03:54 crc kubenswrapper[5017]: I0129 08:03:54.549912 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-qwfkb" Jan 29 08:03:54 crc kubenswrapper[5017]: I0129 08:03:54.665353 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/075ffa52-6c07-4cd7-9c9d-3fec46d4c6a4-config\") pod \"075ffa52-6c07-4cd7-9c9d-3fec46d4c6a4\" (UID: \"075ffa52-6c07-4cd7-9c9d-3fec46d4c6a4\") " Jan 29 08:03:54 crc kubenswrapper[5017]: I0129 08:03:54.665460 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/075ffa52-6c07-4cd7-9c9d-3fec46d4c6a4-combined-ca-bundle\") pod \"075ffa52-6c07-4cd7-9c9d-3fec46d4c6a4\" (UID: \"075ffa52-6c07-4cd7-9c9d-3fec46d4c6a4\") " Jan 29 08:03:54 crc kubenswrapper[5017]: I0129 08:03:54.665605 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-78k6r\" (UniqueName: \"kubernetes.io/projected/075ffa52-6c07-4cd7-9c9d-3fec46d4c6a4-kube-api-access-78k6r\") pod \"075ffa52-6c07-4cd7-9c9d-3fec46d4c6a4\" (UID: \"075ffa52-6c07-4cd7-9c9d-3fec46d4c6a4\") " Jan 29 08:03:54 crc kubenswrapper[5017]: I0129 08:03:54.673631 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/075ffa52-6c07-4cd7-9c9d-3fec46d4c6a4-kube-api-access-78k6r" (OuterVolumeSpecName: "kube-api-access-78k6r") pod "075ffa52-6c07-4cd7-9c9d-3fec46d4c6a4" (UID: "075ffa52-6c07-4cd7-9c9d-3fec46d4c6a4"). InnerVolumeSpecName "kube-api-access-78k6r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:03:54 crc kubenswrapper[5017]: I0129 08:03:54.694178 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/075ffa52-6c07-4cd7-9c9d-3fec46d4c6a4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "075ffa52-6c07-4cd7-9c9d-3fec46d4c6a4" (UID: "075ffa52-6c07-4cd7-9c9d-3fec46d4c6a4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:03:54 crc kubenswrapper[5017]: I0129 08:03:54.698665 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/075ffa52-6c07-4cd7-9c9d-3fec46d4c6a4-config" (OuterVolumeSpecName: "config") pod "075ffa52-6c07-4cd7-9c9d-3fec46d4c6a4" (UID: "075ffa52-6c07-4cd7-9c9d-3fec46d4c6a4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:03:54 crc kubenswrapper[5017]: I0129 08:03:54.767444 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-78k6r\" (UniqueName: \"kubernetes.io/projected/075ffa52-6c07-4cd7-9c9d-3fec46d4c6a4-kube-api-access-78k6r\") on node \"crc\" DevicePath \"\"" Jan 29 08:03:54 crc kubenswrapper[5017]: I0129 08:03:54.767487 5017 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/075ffa52-6c07-4cd7-9c9d-3fec46d4c6a4-config\") on node \"crc\" DevicePath \"\"" Jan 29 08:03:54 crc kubenswrapper[5017]: I0129 08:03:54.767498 5017 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/075ffa52-6c07-4cd7-9c9d-3fec46d4c6a4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:03:55 crc kubenswrapper[5017]: I0129 08:03:55.023708 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-qwfkb" event={"ID":"075ffa52-6c07-4cd7-9c9d-3fec46d4c6a4","Type":"ContainerDied","Data":"fce7a542f51619b6260fa113686dc0aee093accbc3bf13133cf0fe3e7a8b1fbe"} Jan 29 08:03:55 crc kubenswrapper[5017]: I0129 08:03:55.023753 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-qwfkb" Jan 29 08:03:55 crc kubenswrapper[5017]: I0129 08:03:55.023763 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fce7a542f51619b6260fa113686dc0aee093accbc3bf13133cf0fe3e7a8b1fbe" Jan 29 08:03:55 crc kubenswrapper[5017]: I0129 08:03:55.214812 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5d455fc967-h89rn"] Jan 29 08:03:55 crc kubenswrapper[5017]: E0129 08:03:55.215644 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="075ffa52-6c07-4cd7-9c9d-3fec46d4c6a4" containerName="neutron-db-sync" Jan 29 08:03:55 crc kubenswrapper[5017]: I0129 08:03:55.215749 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="075ffa52-6c07-4cd7-9c9d-3fec46d4c6a4" containerName="neutron-db-sync" Jan 29 08:03:55 crc kubenswrapper[5017]: I0129 08:03:55.216087 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="075ffa52-6c07-4cd7-9c9d-3fec46d4c6a4" containerName="neutron-db-sync" Jan 29 08:03:55 crc kubenswrapper[5017]: I0129 08:03:55.219936 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5d455fc967-h89rn" Jan 29 08:03:55 crc kubenswrapper[5017]: I0129 08:03:55.258053 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d455fc967-h89rn"] Jan 29 08:03:55 crc kubenswrapper[5017]: I0129 08:03:55.276291 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nlhx\" (UniqueName: \"kubernetes.io/projected/99c11145-e568-4c3a-993a-289881689134-kube-api-access-5nlhx\") pod \"dnsmasq-dns-5d455fc967-h89rn\" (UID: \"99c11145-e568-4c3a-993a-289881689134\") " pod="openstack/dnsmasq-dns-5d455fc967-h89rn" Jan 29 08:03:55 crc kubenswrapper[5017]: I0129 08:03:55.276646 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/99c11145-e568-4c3a-993a-289881689134-dns-svc\") pod \"dnsmasq-dns-5d455fc967-h89rn\" (UID: \"99c11145-e568-4c3a-993a-289881689134\") " pod="openstack/dnsmasq-dns-5d455fc967-h89rn" Jan 29 08:03:55 crc kubenswrapper[5017]: I0129 08:03:55.276786 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/99c11145-e568-4c3a-993a-289881689134-ovsdbserver-nb\") pod \"dnsmasq-dns-5d455fc967-h89rn\" (UID: \"99c11145-e568-4c3a-993a-289881689134\") " pod="openstack/dnsmasq-dns-5d455fc967-h89rn" Jan 29 08:03:55 crc kubenswrapper[5017]: I0129 08:03:55.276947 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99c11145-e568-4c3a-993a-289881689134-config\") pod \"dnsmasq-dns-5d455fc967-h89rn\" (UID: \"99c11145-e568-4c3a-993a-289881689134\") " pod="openstack/dnsmasq-dns-5d455fc967-h89rn" Jan 29 08:03:55 crc kubenswrapper[5017]: I0129 08:03:55.277061 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/99c11145-e568-4c3a-993a-289881689134-ovsdbserver-sb\") pod \"dnsmasq-dns-5d455fc967-h89rn\" (UID: \"99c11145-e568-4c3a-993a-289881689134\") " pod="openstack/dnsmasq-dns-5d455fc967-h89rn" Jan 29 08:03:55 crc kubenswrapper[5017]: I0129 08:03:55.378844 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/99c11145-e568-4c3a-993a-289881689134-ovsdbserver-nb\") pod \"dnsmasq-dns-5d455fc967-h89rn\" (UID: \"99c11145-e568-4c3a-993a-289881689134\") " pod="openstack/dnsmasq-dns-5d455fc967-h89rn" Jan 29 08:03:55 crc kubenswrapper[5017]: I0129 08:03:55.380190 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/99c11145-e568-4c3a-993a-289881689134-ovsdbserver-nb\") pod \"dnsmasq-dns-5d455fc967-h89rn\" (UID: \"99c11145-e568-4c3a-993a-289881689134\") " pod="openstack/dnsmasq-dns-5d455fc967-h89rn" Jan 29 08:03:55 crc kubenswrapper[5017]: I0129 08:03:55.380980 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99c11145-e568-4c3a-993a-289881689134-config\") pod \"dnsmasq-dns-5d455fc967-h89rn\" (UID: \"99c11145-e568-4c3a-993a-289881689134\") " pod="openstack/dnsmasq-dns-5d455fc967-h89rn" Jan 29 08:03:55 crc kubenswrapper[5017]: I0129 08:03:55.381130 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/99c11145-e568-4c3a-993a-289881689134-ovsdbserver-sb\") pod \"dnsmasq-dns-5d455fc967-h89rn\" (UID: \"99c11145-e568-4c3a-993a-289881689134\") " pod="openstack/dnsmasq-dns-5d455fc967-h89rn" Jan 29 08:03:55 crc kubenswrapper[5017]: I0129 08:03:55.381260 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5nlhx\" (UniqueName: \"kubernetes.io/projected/99c11145-e568-4c3a-993a-289881689134-kube-api-access-5nlhx\") pod \"dnsmasq-dns-5d455fc967-h89rn\" (UID: \"99c11145-e568-4c3a-993a-289881689134\") " pod="openstack/dnsmasq-dns-5d455fc967-h89rn" Jan 29 08:03:55 crc kubenswrapper[5017]: I0129 08:03:55.381355 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/99c11145-e568-4c3a-993a-289881689134-dns-svc\") pod \"dnsmasq-dns-5d455fc967-h89rn\" (UID: \"99c11145-e568-4c3a-993a-289881689134\") " pod="openstack/dnsmasq-dns-5d455fc967-h89rn" Jan 29 08:03:55 crc kubenswrapper[5017]: I0129 08:03:55.382847 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/99c11145-e568-4c3a-993a-289881689134-dns-svc\") pod \"dnsmasq-dns-5d455fc967-h89rn\" (UID: \"99c11145-e568-4c3a-993a-289881689134\") " pod="openstack/dnsmasq-dns-5d455fc967-h89rn" Jan 29 08:03:55 crc kubenswrapper[5017]: I0129 08:03:55.383089 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99c11145-e568-4c3a-993a-289881689134-config\") pod \"dnsmasq-dns-5d455fc967-h89rn\" (UID: \"99c11145-e568-4c3a-993a-289881689134\") " pod="openstack/dnsmasq-dns-5d455fc967-h89rn" Jan 29 08:03:55 crc kubenswrapper[5017]: I0129 08:03:55.383126 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/99c11145-e568-4c3a-993a-289881689134-ovsdbserver-sb\") pod \"dnsmasq-dns-5d455fc967-h89rn\" (UID: \"99c11145-e568-4c3a-993a-289881689134\") " pod="openstack/dnsmasq-dns-5d455fc967-h89rn" Jan 29 08:03:55 crc kubenswrapper[5017]: I0129 08:03:55.411983 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5nlhx\" (UniqueName: \"kubernetes.io/projected/99c11145-e568-4c3a-993a-289881689134-kube-api-access-5nlhx\") pod \"dnsmasq-dns-5d455fc967-h89rn\" (UID: \"99c11145-e568-4c3a-993a-289881689134\") " pod="openstack/dnsmasq-dns-5d455fc967-h89rn" Jan 29 08:03:55 crc kubenswrapper[5017]: I0129 08:03:55.547606 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d455fc967-h89rn" Jan 29 08:03:55 crc kubenswrapper[5017]: I0129 08:03:55.555943 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7f4578d465-ntwp5"] Jan 29 08:03:55 crc kubenswrapper[5017]: I0129 08:03:55.558514 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7f4578d465-ntwp5" Jan 29 08:03:55 crc kubenswrapper[5017]: I0129 08:03:55.561138 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-cmtx5" Jan 29 08:03:55 crc kubenswrapper[5017]: I0129 08:03:55.561495 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 29 08:03:55 crc kubenswrapper[5017]: I0129 08:03:55.561706 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 29 08:03:55 crc kubenswrapper[5017]: I0129 08:03:55.573135 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7f4578d465-ntwp5"] Jan 29 08:03:55 crc kubenswrapper[5017]: I0129 08:03:55.687134 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/932d89c9-7469-4386-a5eb-f35774719f27-config\") pod \"neutron-7f4578d465-ntwp5\" (UID: \"932d89c9-7469-4386-a5eb-f35774719f27\") " pod="openstack/neutron-7f4578d465-ntwp5" Jan 29 08:03:55 crc kubenswrapper[5017]: I0129 08:03:55.687197 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/932d89c9-7469-4386-a5eb-f35774719f27-httpd-config\") pod \"neutron-7f4578d465-ntwp5\" (UID: \"932d89c9-7469-4386-a5eb-f35774719f27\") " pod="openstack/neutron-7f4578d465-ntwp5" Jan 29 08:03:55 crc kubenswrapper[5017]: I0129 08:03:55.687239 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwmk6\" (UniqueName: \"kubernetes.io/projected/932d89c9-7469-4386-a5eb-f35774719f27-kube-api-access-gwmk6\") pod \"neutron-7f4578d465-ntwp5\" (UID: \"932d89c9-7469-4386-a5eb-f35774719f27\") " pod="openstack/neutron-7f4578d465-ntwp5" Jan 29 08:03:55 crc kubenswrapper[5017]: I0129 08:03:55.687302 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/932d89c9-7469-4386-a5eb-f35774719f27-combined-ca-bundle\") pod \"neutron-7f4578d465-ntwp5\" (UID: \"932d89c9-7469-4386-a5eb-f35774719f27\") " pod="openstack/neutron-7f4578d465-ntwp5" Jan 29 08:03:55 crc kubenswrapper[5017]: I0129 08:03:55.789447 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/932d89c9-7469-4386-a5eb-f35774719f27-combined-ca-bundle\") pod \"neutron-7f4578d465-ntwp5\" (UID: \"932d89c9-7469-4386-a5eb-f35774719f27\") " pod="openstack/neutron-7f4578d465-ntwp5" Jan 29 08:03:55 crc kubenswrapper[5017]: I0129 08:03:55.789646 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/932d89c9-7469-4386-a5eb-f35774719f27-config\") pod \"neutron-7f4578d465-ntwp5\" (UID: \"932d89c9-7469-4386-a5eb-f35774719f27\") " pod="openstack/neutron-7f4578d465-ntwp5" Jan 29 08:03:55 crc kubenswrapper[5017]: I0129 08:03:55.789674 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/932d89c9-7469-4386-a5eb-f35774719f27-httpd-config\") pod \"neutron-7f4578d465-ntwp5\" (UID: \"932d89c9-7469-4386-a5eb-f35774719f27\") " pod="openstack/neutron-7f4578d465-ntwp5" Jan 29 08:03:55 crc kubenswrapper[5017]: I0129 08:03:55.789730 5017 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-gwmk6\" (UniqueName: \"kubernetes.io/projected/932d89c9-7469-4386-a5eb-f35774719f27-kube-api-access-gwmk6\") pod \"neutron-7f4578d465-ntwp5\" (UID: \"932d89c9-7469-4386-a5eb-f35774719f27\") " pod="openstack/neutron-7f4578d465-ntwp5" Jan 29 08:03:55 crc kubenswrapper[5017]: I0129 08:03:55.799504 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/932d89c9-7469-4386-a5eb-f35774719f27-config\") pod \"neutron-7f4578d465-ntwp5\" (UID: \"932d89c9-7469-4386-a5eb-f35774719f27\") " pod="openstack/neutron-7f4578d465-ntwp5" Jan 29 08:03:55 crc kubenswrapper[5017]: I0129 08:03:55.799993 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/932d89c9-7469-4386-a5eb-f35774719f27-combined-ca-bundle\") pod \"neutron-7f4578d465-ntwp5\" (UID: \"932d89c9-7469-4386-a5eb-f35774719f27\") " pod="openstack/neutron-7f4578d465-ntwp5" Jan 29 08:03:55 crc kubenswrapper[5017]: I0129 08:03:55.801083 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/932d89c9-7469-4386-a5eb-f35774719f27-httpd-config\") pod \"neutron-7f4578d465-ntwp5\" (UID: \"932d89c9-7469-4386-a5eb-f35774719f27\") " pod="openstack/neutron-7f4578d465-ntwp5" Jan 29 08:03:55 crc kubenswrapper[5017]: I0129 08:03:55.810816 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwmk6\" (UniqueName: \"kubernetes.io/projected/932d89c9-7469-4386-a5eb-f35774719f27-kube-api-access-gwmk6\") pod \"neutron-7f4578d465-ntwp5\" (UID: \"932d89c9-7469-4386-a5eb-f35774719f27\") " pod="openstack/neutron-7f4578d465-ntwp5" Jan 29 08:03:55 crc kubenswrapper[5017]: I0129 08:03:55.967315 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7f4578d465-ntwp5" Jan 29 08:03:56 crc kubenswrapper[5017]: I0129 08:03:56.105931 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d455fc967-h89rn"] Jan 29 08:03:56 crc kubenswrapper[5017]: W0129 08:03:56.119203 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod99c11145_e568_4c3a_993a_289881689134.slice/crio-b3e1d9924f1f925d6159719df61fd3af2101c7a2d8d35f8bef9ae6fc86161a94 WatchSource:0}: Error finding container b3e1d9924f1f925d6159719df61fd3af2101c7a2d8d35f8bef9ae6fc86161a94: Status 404 returned error can't find the container with id b3e1d9924f1f925d6159719df61fd3af2101c7a2d8d35f8bef9ae6fc86161a94 Jan 29 08:03:56 crc kubenswrapper[5017]: I0129 08:03:56.614923 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7f4578d465-ntwp5"] Jan 29 08:03:57 crc kubenswrapper[5017]: I0129 08:03:57.049379 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7f4578d465-ntwp5" event={"ID":"932d89c9-7469-4386-a5eb-f35774719f27","Type":"ContainerStarted","Data":"abd95bb8c3b48cf6ed80a715633314468da9f8da03f32b25a3e0c36983608ed0"} Jan 29 08:03:57 crc kubenswrapper[5017]: I0129 08:03:57.050083 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7f4578d465-ntwp5" event={"ID":"932d89c9-7469-4386-a5eb-f35774719f27","Type":"ContainerStarted","Data":"33c611d68c9668eb03a51e12b3eb8713f1d71ba1729aff3e219b51f66fbb174a"} Jan 29 08:03:57 crc kubenswrapper[5017]: I0129 08:03:57.050161 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7f4578d465-ntwp5" event={"ID":"932d89c9-7469-4386-a5eb-f35774719f27","Type":"ContainerStarted","Data":"dc0bac297ca30185f84a23d26d0397fdaff066b7117149804bbaf304cf1a3c3f"} Jan 29 08:03:57 crc kubenswrapper[5017]: I0129 08:03:57.050248 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7f4578d465-ntwp5" Jan 29 08:03:57 crc kubenswrapper[5017]: I0129 08:03:57.053875 5017 generic.go:334] "Generic (PLEG): container finished" podID="99c11145-e568-4c3a-993a-289881689134" containerID="d123d868059f6c36e00acba0fe36172b1c25f5b97632efacf49c97835c71532c" exitCode=0 Jan 29 08:03:57 crc kubenswrapper[5017]: I0129 08:03:57.053998 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d455fc967-h89rn" event={"ID":"99c11145-e568-4c3a-993a-289881689134","Type":"ContainerDied","Data":"d123d868059f6c36e00acba0fe36172b1c25f5b97632efacf49c97835c71532c"} Jan 29 08:03:57 crc kubenswrapper[5017]: I0129 08:03:57.056629 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d455fc967-h89rn" event={"ID":"99c11145-e568-4c3a-993a-289881689134","Type":"ContainerStarted","Data":"b3e1d9924f1f925d6159719df61fd3af2101c7a2d8d35f8bef9ae6fc86161a94"} Jan 29 08:03:57 crc kubenswrapper[5017]: I0129 08:03:57.085248 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7f4578d465-ntwp5" podStartSLOduration=2.085221378 podStartE2EDuration="2.085221378s" podCreationTimestamp="2026-01-29 08:03:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:03:57.080064974 +0000 UTC m=+5323.454512604" watchObservedRunningTime="2026-01-29 08:03:57.085221378 +0000 UTC m=+5323.459668988" Jan 29 08:03:58 crc kubenswrapper[5017]: I0129 08:03:58.063648 5017 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d455fc967-h89rn" event={"ID":"99c11145-e568-4c3a-993a-289881689134","Type":"ContainerStarted","Data":"e507ad840a1b67975c72dac2667bb13db3c31c087d34bf3edce322743c185169"} Jan 29 08:03:58 crc kubenswrapper[5017]: I0129 08:03:58.088533 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5d455fc967-h89rn" podStartSLOduration=3.088510136 podStartE2EDuration="3.088510136s" podCreationTimestamp="2026-01-29 08:03:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:03:58.082667535 +0000 UTC m=+5324.457115145" watchObservedRunningTime="2026-01-29 08:03:58.088510136 +0000 UTC m=+5324.462957746" Jan 29 08:03:59 crc kubenswrapper[5017]: I0129 08:03:59.072472 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5d455fc967-h89rn" Jan 29 08:04:05 crc kubenswrapper[5017]: I0129 08:04:05.550002 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5d455fc967-h89rn" Jan 29 08:04:05 crc kubenswrapper[5017]: I0129 08:04:05.610504 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6dd8bbd5cf-56g7r"] Jan 29 08:04:05 crc kubenswrapper[5017]: I0129 08:04:05.610839 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6dd8bbd5cf-56g7r" podUID="2543d2ea-e2cd-45e5-b6e6-8665b2514591" containerName="dnsmasq-dns" containerID="cri-o://ed88b812b63015d148d1e06a4c59d882f57dd25c63a828816b2ef37fdf0cc65f" gracePeriod=10 Jan 29 08:04:06 crc kubenswrapper[5017]: I0129 08:04:06.147764 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6dd8bbd5cf-56g7r" Jan 29 08:04:06 crc kubenswrapper[5017]: I0129 08:04:06.158168 5017 generic.go:334] "Generic (PLEG): container finished" podID="2543d2ea-e2cd-45e5-b6e6-8665b2514591" containerID="ed88b812b63015d148d1e06a4c59d882f57dd25c63a828816b2ef37fdf0cc65f" exitCode=0 Jan 29 08:04:06 crc kubenswrapper[5017]: I0129 08:04:06.158230 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6dd8bbd5cf-56g7r" event={"ID":"2543d2ea-e2cd-45e5-b6e6-8665b2514591","Type":"ContainerDied","Data":"ed88b812b63015d148d1e06a4c59d882f57dd25c63a828816b2ef37fdf0cc65f"} Jan 29 08:04:06 crc kubenswrapper[5017]: I0129 08:04:06.158267 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6dd8bbd5cf-56g7r" event={"ID":"2543d2ea-e2cd-45e5-b6e6-8665b2514591","Type":"ContainerDied","Data":"23632c4175b944676999b3731672439e9b7a60903da0fd37b644b122cd1bddda"} Jan 29 08:04:06 crc kubenswrapper[5017]: I0129 08:04:06.158290 5017 scope.go:117] "RemoveContainer" containerID="ed88b812b63015d148d1e06a4c59d882f57dd25c63a828816b2ef37fdf0cc65f" Jan 29 08:04:06 crc kubenswrapper[5017]: I0129 08:04:06.158401 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6dd8bbd5cf-56g7r" Jan 29 08:04:06 crc kubenswrapper[5017]: I0129 08:04:06.202580 5017 scope.go:117] "RemoveContainer" containerID="6cf3272580e30b81f7e84b1e399d58d6dddb8f8f2038b87c7cceabee7fc47b3c" Jan 29 08:04:06 crc kubenswrapper[5017]: I0129 08:04:06.230989 5017 scope.go:117] "RemoveContainer" containerID="ed88b812b63015d148d1e06a4c59d882f57dd25c63a828816b2ef37fdf0cc65f" Jan 29 08:04:06 crc kubenswrapper[5017]: E0129 08:04:06.231915 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed88b812b63015d148d1e06a4c59d882f57dd25c63a828816b2ef37fdf0cc65f\": container with ID starting with ed88b812b63015d148d1e06a4c59d882f57dd25c63a828816b2ef37fdf0cc65f not found: ID does not exist" containerID="ed88b812b63015d148d1e06a4c59d882f57dd25c63a828816b2ef37fdf0cc65f" Jan 29 08:04:06 crc kubenswrapper[5017]: I0129 08:04:06.231994 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed88b812b63015d148d1e06a4c59d882f57dd25c63a828816b2ef37fdf0cc65f"} err="failed to get container status \"ed88b812b63015d148d1e06a4c59d882f57dd25c63a828816b2ef37fdf0cc65f\": rpc error: code = NotFound desc = could not find container \"ed88b812b63015d148d1e06a4c59d882f57dd25c63a828816b2ef37fdf0cc65f\": container with ID starting with ed88b812b63015d148d1e06a4c59d882f57dd25c63a828816b2ef37fdf0cc65f not found: ID does not exist" Jan 29 08:04:06 crc kubenswrapper[5017]: I0129 08:04:06.232027 5017 scope.go:117] "RemoveContainer" containerID="6cf3272580e30b81f7e84b1e399d58d6dddb8f8f2038b87c7cceabee7fc47b3c" Jan 29 08:04:06 crc kubenswrapper[5017]: E0129 08:04:06.232478 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6cf3272580e30b81f7e84b1e399d58d6dddb8f8f2038b87c7cceabee7fc47b3c\": container with ID starting with 6cf3272580e30b81f7e84b1e399d58d6dddb8f8f2038b87c7cceabee7fc47b3c not found: ID does not exist" containerID="6cf3272580e30b81f7e84b1e399d58d6dddb8f8f2038b87c7cceabee7fc47b3c" Jan 29 08:04:06 crc kubenswrapper[5017]: I0129 08:04:06.232527 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6cf3272580e30b81f7e84b1e399d58d6dddb8f8f2038b87c7cceabee7fc47b3c"} err="failed to get container status \"6cf3272580e30b81f7e84b1e399d58d6dddb8f8f2038b87c7cceabee7fc47b3c\": rpc error: code = NotFound desc = could not find container \"6cf3272580e30b81f7e84b1e399d58d6dddb8f8f2038b87c7cceabee7fc47b3c\": container with ID starting with 6cf3272580e30b81f7e84b1e399d58d6dddb8f8f2038b87c7cceabee7fc47b3c not found: ID does not exist" Jan 29 08:04:06 crc kubenswrapper[5017]: I0129 08:04:06.314228 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2543d2ea-e2cd-45e5-b6e6-8665b2514591-dns-svc\") pod \"2543d2ea-e2cd-45e5-b6e6-8665b2514591\" (UID: \"2543d2ea-e2cd-45e5-b6e6-8665b2514591\") " Jan 29 08:04:06 crc kubenswrapper[5017]: I0129 08:04:06.314378 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2543d2ea-e2cd-45e5-b6e6-8665b2514591-ovsdbserver-nb\") pod \"2543d2ea-e2cd-45e5-b6e6-8665b2514591\" (UID: \"2543d2ea-e2cd-45e5-b6e6-8665b2514591\") " Jan 29 08:04:06 crc kubenswrapper[5017]: I0129 08:04:06.314402 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2543d2ea-e2cd-45e5-b6e6-8665b2514591-ovsdbserver-sb\") pod \"2543d2ea-e2cd-45e5-b6e6-8665b2514591\" (UID: \"2543d2ea-e2cd-45e5-b6e6-8665b2514591\") " Jan 29 08:04:06 crc kubenswrapper[5017]: I0129 08:04:06.314451 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2543d2ea-e2cd-45e5-b6e6-8665b2514591-config\") pod \"2543d2ea-e2cd-45e5-b6e6-8665b2514591\" (UID: \"2543d2ea-e2cd-45e5-b6e6-8665b2514591\") " Jan 29 08:04:06 crc kubenswrapper[5017]: I0129 08:04:06.314473 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l95ns\" (UniqueName: \"kubernetes.io/projected/2543d2ea-e2cd-45e5-b6e6-8665b2514591-kube-api-access-l95ns\") pod \"2543d2ea-e2cd-45e5-b6e6-8665b2514591\" (UID: \"2543d2ea-e2cd-45e5-b6e6-8665b2514591\") " Jan 29 08:04:06 crc kubenswrapper[5017]: I0129 08:04:06.325680 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2543d2ea-e2cd-45e5-b6e6-8665b2514591-kube-api-access-l95ns" (OuterVolumeSpecName: "kube-api-access-l95ns") pod "2543d2ea-e2cd-45e5-b6e6-8665b2514591" (UID: "2543d2ea-e2cd-45e5-b6e6-8665b2514591"). InnerVolumeSpecName "kube-api-access-l95ns". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:04:06 crc kubenswrapper[5017]: I0129 08:04:06.362283 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2543d2ea-e2cd-45e5-b6e6-8665b2514591-config" (OuterVolumeSpecName: "config") pod "2543d2ea-e2cd-45e5-b6e6-8665b2514591" (UID: "2543d2ea-e2cd-45e5-b6e6-8665b2514591"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:04:06 crc kubenswrapper[5017]: I0129 08:04:06.367879 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2543d2ea-e2cd-45e5-b6e6-8665b2514591-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2543d2ea-e2cd-45e5-b6e6-8665b2514591" (UID: "2543d2ea-e2cd-45e5-b6e6-8665b2514591"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:04:06 crc kubenswrapper[5017]: I0129 08:04:06.369451 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2543d2ea-e2cd-45e5-b6e6-8665b2514591-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2543d2ea-e2cd-45e5-b6e6-8665b2514591" (UID: "2543d2ea-e2cd-45e5-b6e6-8665b2514591"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:04:06 crc kubenswrapper[5017]: I0129 08:04:06.370988 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2543d2ea-e2cd-45e5-b6e6-8665b2514591-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2543d2ea-e2cd-45e5-b6e6-8665b2514591" (UID: "2543d2ea-e2cd-45e5-b6e6-8665b2514591"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:04:06 crc kubenswrapper[5017]: I0129 08:04:06.418003 5017 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2543d2ea-e2cd-45e5-b6e6-8665b2514591-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 08:04:06 crc kubenswrapper[5017]: I0129 08:04:06.418543 5017 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2543d2ea-e2cd-45e5-b6e6-8665b2514591-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 08:04:06 crc kubenswrapper[5017]: I0129 08:04:06.418705 5017 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2543d2ea-e2cd-45e5-b6e6-8665b2514591-config\") on node \"crc\" DevicePath \"\"" Jan 29 08:04:06 crc kubenswrapper[5017]: I0129 08:04:06.418850 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l95ns\" (UniqueName: \"kubernetes.io/projected/2543d2ea-e2cd-45e5-b6e6-8665b2514591-kube-api-access-l95ns\") on node \"crc\" DevicePath \"\"" Jan 29 08:04:06 crc kubenswrapper[5017]: I0129 08:04:06.419135 5017 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2543d2ea-e2cd-45e5-b6e6-8665b2514591-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 08:04:06 crc kubenswrapper[5017]: I0129 08:04:06.518799 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6dd8bbd5cf-56g7r"] Jan 29 08:04:06 crc kubenswrapper[5017]: I0129 08:04:06.530830 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6dd8bbd5cf-56g7r"] Jan 29 08:04:08 crc kubenswrapper[5017]: I0129 08:04:08.329665 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2543d2ea-e2cd-45e5-b6e6-8665b2514591" path="/var/lib/kubelet/pods/2543d2ea-e2cd-45e5-b6e6-8665b2514591/volumes" Jan 29 08:04:25 crc kubenswrapper[5017]: I0129 08:04:25.982558 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-7f4578d465-ntwp5" Jan 29 08:04:32 crc kubenswrapper[5017]: I0129 08:04:32.885198 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-pscn6"] Jan 29 08:04:32 crc kubenswrapper[5017]: E0129 08:04:32.886482 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2543d2ea-e2cd-45e5-b6e6-8665b2514591" containerName="dnsmasq-dns" Jan 29 08:04:32 crc kubenswrapper[5017]: I0129 08:04:32.886509 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="2543d2ea-e2cd-45e5-b6e6-8665b2514591" containerName="dnsmasq-dns" Jan 29 08:04:32 crc kubenswrapper[5017]: E0129 08:04:32.886536 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2543d2ea-e2cd-45e5-b6e6-8665b2514591" containerName="init" Jan 29 08:04:32 crc kubenswrapper[5017]: I0129 08:04:32.886547 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="2543d2ea-e2cd-45e5-b6e6-8665b2514591" containerName="init" Jan 29 08:04:32 crc kubenswrapper[5017]: I0129 08:04:32.886750 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="2543d2ea-e2cd-45e5-b6e6-8665b2514591" containerName="dnsmasq-dns" Jan 29 08:04:32 crc kubenswrapper[5017]: I0129 08:04:32.887410 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-pscn6" Jan 29 08:04:32 crc kubenswrapper[5017]: I0129 08:04:32.913206 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-pscn6"] Jan 29 08:04:32 crc kubenswrapper[5017]: I0129 08:04:32.969604 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvfws\" (UniqueName: \"kubernetes.io/projected/b4f7ce1c-2e5b-4d84-a3bd-8e6ffdbda792-kube-api-access-cvfws\") pod \"glance-db-create-pscn6\" (UID: \"b4f7ce1c-2e5b-4d84-a3bd-8e6ffdbda792\") " pod="openstack/glance-db-create-pscn6" Jan 29 08:04:32 crc kubenswrapper[5017]: I0129 08:04:32.969697 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4f7ce1c-2e5b-4d84-a3bd-8e6ffdbda792-operator-scripts\") pod \"glance-db-create-pscn6\" (UID: \"b4f7ce1c-2e5b-4d84-a3bd-8e6ffdbda792\") " pod="openstack/glance-db-create-pscn6" Jan 29 08:04:32 crc kubenswrapper[5017]: I0129 08:04:32.969808 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-3494-account-create-update-j4dxd"] Jan 29 08:04:32 crc kubenswrapper[5017]: I0129 08:04:32.971582 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-3494-account-create-update-j4dxd" Jan 29 08:04:32 crc kubenswrapper[5017]: I0129 08:04:32.974646 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 29 08:04:32 crc kubenswrapper[5017]: I0129 08:04:32.983618 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-3494-account-create-update-j4dxd"] Jan 29 08:04:33 crc kubenswrapper[5017]: I0129 08:04:33.072210 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncvcz\" (UniqueName: \"kubernetes.io/projected/6e77bce8-2efe-4f18-b9df-6cd6e7d36e31-kube-api-access-ncvcz\") pod \"glance-3494-account-create-update-j4dxd\" (UID: \"6e77bce8-2efe-4f18-b9df-6cd6e7d36e31\") " pod="openstack/glance-3494-account-create-update-j4dxd" Jan 29 08:04:33 crc kubenswrapper[5017]: I0129 08:04:33.072306 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvfws\" (UniqueName: \"kubernetes.io/projected/b4f7ce1c-2e5b-4d84-a3bd-8e6ffdbda792-kube-api-access-cvfws\") pod \"glance-db-create-pscn6\" (UID: \"b4f7ce1c-2e5b-4d84-a3bd-8e6ffdbda792\") " pod="openstack/glance-db-create-pscn6" Jan 29 08:04:33 crc kubenswrapper[5017]: I0129 08:04:33.072374 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4f7ce1c-2e5b-4d84-a3bd-8e6ffdbda792-operator-scripts\") pod \"glance-db-create-pscn6\" (UID: \"b4f7ce1c-2e5b-4d84-a3bd-8e6ffdbda792\") " pod="openstack/glance-db-create-pscn6" Jan 29 08:04:33 crc kubenswrapper[5017]: I0129 08:04:33.072482 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6e77bce8-2efe-4f18-b9df-6cd6e7d36e31-operator-scripts\") pod \"glance-3494-account-create-update-j4dxd\" (UID: \"6e77bce8-2efe-4f18-b9df-6cd6e7d36e31\") " pod="openstack/glance-3494-account-create-update-j4dxd" Jan 29 08:04:33 crc kubenswrapper[5017]: I0129 08:04:33.074238 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/b4f7ce1c-2e5b-4d84-a3bd-8e6ffdbda792-operator-scripts\") pod \"glance-db-create-pscn6\" (UID: \"b4f7ce1c-2e5b-4d84-a3bd-8e6ffdbda792\") " pod="openstack/glance-db-create-pscn6" Jan 29 08:04:33 crc kubenswrapper[5017]: I0129 08:04:33.099789 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvfws\" (UniqueName: \"kubernetes.io/projected/b4f7ce1c-2e5b-4d84-a3bd-8e6ffdbda792-kube-api-access-cvfws\") pod \"glance-db-create-pscn6\" (UID: \"b4f7ce1c-2e5b-4d84-a3bd-8e6ffdbda792\") " pod="openstack/glance-db-create-pscn6" Jan 29 08:04:33 crc kubenswrapper[5017]: I0129 08:04:33.174263 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6e77bce8-2efe-4f18-b9df-6cd6e7d36e31-operator-scripts\") pod \"glance-3494-account-create-update-j4dxd\" (UID: \"6e77bce8-2efe-4f18-b9df-6cd6e7d36e31\") " pod="openstack/glance-3494-account-create-update-j4dxd" Jan 29 08:04:33 crc kubenswrapper[5017]: I0129 08:04:33.174701 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncvcz\" (UniqueName: \"kubernetes.io/projected/6e77bce8-2efe-4f18-b9df-6cd6e7d36e31-kube-api-access-ncvcz\") pod \"glance-3494-account-create-update-j4dxd\" (UID: \"6e77bce8-2efe-4f18-b9df-6cd6e7d36e31\") " pod="openstack/glance-3494-account-create-update-j4dxd" Jan 29 08:04:33 crc kubenswrapper[5017]: I0129 08:04:33.175344 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6e77bce8-2efe-4f18-b9df-6cd6e7d36e31-operator-scripts\") pod \"glance-3494-account-create-update-j4dxd\" (UID: \"6e77bce8-2efe-4f18-b9df-6cd6e7d36e31\") " pod="openstack/glance-3494-account-create-update-j4dxd" Jan 29 08:04:33 crc kubenswrapper[5017]: I0129 08:04:33.197213 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ncvcz\" (UniqueName: \"kubernetes.io/projected/6e77bce8-2efe-4f18-b9df-6cd6e7d36e31-kube-api-access-ncvcz\") pod \"glance-3494-account-create-update-j4dxd\" (UID: \"6e77bce8-2efe-4f18-b9df-6cd6e7d36e31\") " pod="openstack/glance-3494-account-create-update-j4dxd" Jan 29 08:04:33 crc kubenswrapper[5017]: I0129 08:04:33.217424 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-pscn6" Jan 29 08:04:33 crc kubenswrapper[5017]: I0129 08:04:33.289914 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-3494-account-create-update-j4dxd" Jan 29 08:04:33 crc kubenswrapper[5017]: I0129 08:04:33.686809 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-pscn6"] Jan 29 08:04:33 crc kubenswrapper[5017]: I0129 08:04:33.799620 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-3494-account-create-update-j4dxd"] Jan 29 08:04:33 crc kubenswrapper[5017]: W0129 08:04:33.813838 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6e77bce8_2efe_4f18_b9df_6cd6e7d36e31.slice/crio-ec127506b68e0312b84fff4358fd69b782c071618c9790014825995829adab67 WatchSource:0}: Error finding container ec127506b68e0312b84fff4358fd69b782c071618c9790014825995829adab67: Status 404 returned error can't find the container with id ec127506b68e0312b84fff4358fd69b782c071618c9790014825995829adab67 Jan 29 08:04:34 crc kubenswrapper[5017]: I0129 08:04:34.485328 5017 generic.go:334] "Generic (PLEG): container finished" podID="6e77bce8-2efe-4f18-b9df-6cd6e7d36e31" containerID="5dda04fed68552f36aa1b3bd678efb7946e5f6f0dffd99a2de8f064e40a65e89" exitCode=0 Jan 29 08:04:34 crc kubenswrapper[5017]: I0129 08:04:34.485462 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-3494-account-create-update-j4dxd" event={"ID":"6e77bce8-2efe-4f18-b9df-6cd6e7d36e31","Type":"ContainerDied","Data":"5dda04fed68552f36aa1b3bd678efb7946e5f6f0dffd99a2de8f064e40a65e89"} Jan 29 08:04:34 crc kubenswrapper[5017]: I0129 08:04:34.485504 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-3494-account-create-update-j4dxd" event={"ID":"6e77bce8-2efe-4f18-b9df-6cd6e7d36e31","Type":"ContainerStarted","Data":"ec127506b68e0312b84fff4358fd69b782c071618c9790014825995829adab67"} Jan 29 08:04:34 crc kubenswrapper[5017]: I0129 08:04:34.487431 5017 generic.go:334] "Generic (PLEG): container finished" podID="b4f7ce1c-2e5b-4d84-a3bd-8e6ffdbda792" containerID="78338d2b9917439c27accb752beb4fb33ecc7e2a21b40e4fe9dea74b772e3b7a" exitCode=0 Jan 29 08:04:34 crc kubenswrapper[5017]: I0129 08:04:34.487478 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-pscn6" event={"ID":"b4f7ce1c-2e5b-4d84-a3bd-8e6ffdbda792","Type":"ContainerDied","Data":"78338d2b9917439c27accb752beb4fb33ecc7e2a21b40e4fe9dea74b772e3b7a"} Jan 29 08:04:34 crc kubenswrapper[5017]: I0129 08:04:34.487537 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-pscn6" event={"ID":"b4f7ce1c-2e5b-4d84-a3bd-8e6ffdbda792","Type":"ContainerStarted","Data":"5e41ef2c7d77b03050121ad5c008029b31082633535b49adb2f387674dec8104"} Jan 29 08:04:35 crc kubenswrapper[5017]: I0129 08:04:35.911468 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-3494-account-create-update-j4dxd" Jan 29 08:04:35 crc kubenswrapper[5017]: I0129 08:04:35.921236 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-pscn6" Jan 29 08:04:35 crc kubenswrapper[5017]: I0129 08:04:35.936120 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6e77bce8-2efe-4f18-b9df-6cd6e7d36e31-operator-scripts\") pod \"6e77bce8-2efe-4f18-b9df-6cd6e7d36e31\" (UID: \"6e77bce8-2efe-4f18-b9df-6cd6e7d36e31\") " Jan 29 08:04:35 crc kubenswrapper[5017]: I0129 08:04:35.936197 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4f7ce1c-2e5b-4d84-a3bd-8e6ffdbda792-operator-scripts\") pod \"b4f7ce1c-2e5b-4d84-a3bd-8e6ffdbda792\" (UID: \"b4f7ce1c-2e5b-4d84-a3bd-8e6ffdbda792\") " Jan 29 08:04:35 crc kubenswrapper[5017]: I0129 08:04:35.936262 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ncvcz\" (UniqueName: \"kubernetes.io/projected/6e77bce8-2efe-4f18-b9df-6cd6e7d36e31-kube-api-access-ncvcz\") pod \"6e77bce8-2efe-4f18-b9df-6cd6e7d36e31\" (UID: \"6e77bce8-2efe-4f18-b9df-6cd6e7d36e31\") " Jan 29 08:04:35 crc kubenswrapper[5017]: I0129 08:04:35.936413 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cvfws\" (UniqueName: \"kubernetes.io/projected/b4f7ce1c-2e5b-4d84-a3bd-8e6ffdbda792-kube-api-access-cvfws\") pod \"b4f7ce1c-2e5b-4d84-a3bd-8e6ffdbda792\" (UID: \"b4f7ce1c-2e5b-4d84-a3bd-8e6ffdbda792\") " Jan 29 08:04:35 crc kubenswrapper[5017]: I0129 08:04:35.936999 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4f7ce1c-2e5b-4d84-a3bd-8e6ffdbda792-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b4f7ce1c-2e5b-4d84-a3bd-8e6ffdbda792" (UID: "b4f7ce1c-2e5b-4d84-a3bd-8e6ffdbda792"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:04:35 crc kubenswrapper[5017]: I0129 08:04:35.937009 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e77bce8-2efe-4f18-b9df-6cd6e7d36e31-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6e77bce8-2efe-4f18-b9df-6cd6e7d36e31" (UID: "6e77bce8-2efe-4f18-b9df-6cd6e7d36e31"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:04:35 crc kubenswrapper[5017]: I0129 08:04:35.937930 5017 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6e77bce8-2efe-4f18-b9df-6cd6e7d36e31-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 08:04:35 crc kubenswrapper[5017]: I0129 08:04:35.937950 5017 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4f7ce1c-2e5b-4d84-a3bd-8e6ffdbda792-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 08:04:35 crc kubenswrapper[5017]: I0129 08:04:35.944428 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4f7ce1c-2e5b-4d84-a3bd-8e6ffdbda792-kube-api-access-cvfws" (OuterVolumeSpecName: "kube-api-access-cvfws") pod "b4f7ce1c-2e5b-4d84-a3bd-8e6ffdbda792" (UID: "b4f7ce1c-2e5b-4d84-a3bd-8e6ffdbda792"). InnerVolumeSpecName "kube-api-access-cvfws". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:04:35 crc kubenswrapper[5017]: I0129 08:04:35.945803 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e77bce8-2efe-4f18-b9df-6cd6e7d36e31-kube-api-access-ncvcz" (OuterVolumeSpecName: "kube-api-access-ncvcz") pod "6e77bce8-2efe-4f18-b9df-6cd6e7d36e31" (UID: "6e77bce8-2efe-4f18-b9df-6cd6e7d36e31"). InnerVolumeSpecName "kube-api-access-ncvcz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:04:36 crc kubenswrapper[5017]: I0129 08:04:36.039877 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ncvcz\" (UniqueName: \"kubernetes.io/projected/6e77bce8-2efe-4f18-b9df-6cd6e7d36e31-kube-api-access-ncvcz\") on node \"crc\" DevicePath \"\"" Jan 29 08:04:36 crc kubenswrapper[5017]: I0129 08:04:36.039937 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cvfws\" (UniqueName: \"kubernetes.io/projected/b4f7ce1c-2e5b-4d84-a3bd-8e6ffdbda792-kube-api-access-cvfws\") on node \"crc\" DevicePath \"\"" Jan 29 08:04:36 crc kubenswrapper[5017]: I0129 08:04:36.516009 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-3494-account-create-update-j4dxd" event={"ID":"6e77bce8-2efe-4f18-b9df-6cd6e7d36e31","Type":"ContainerDied","Data":"ec127506b68e0312b84fff4358fd69b782c071618c9790014825995829adab67"} Jan 29 08:04:36 crc kubenswrapper[5017]: I0129 08:04:36.516344 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec127506b68e0312b84fff4358fd69b782c071618c9790014825995829adab67" Jan 29 08:04:36 crc kubenswrapper[5017]: I0129 08:04:36.516369 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-3494-account-create-update-j4dxd" Jan 29 08:04:36 crc kubenswrapper[5017]: I0129 08:04:36.519218 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-pscn6" event={"ID":"b4f7ce1c-2e5b-4d84-a3bd-8e6ffdbda792","Type":"ContainerDied","Data":"5e41ef2c7d77b03050121ad5c008029b31082633535b49adb2f387674dec8104"} Jan 29 08:04:36 crc kubenswrapper[5017]: I0129 08:04:36.519285 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e41ef2c7d77b03050121ad5c008029b31082633535b49adb2f387674dec8104" Jan 29 08:04:36 crc kubenswrapper[5017]: I0129 08:04:36.519704 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-pscn6" Jan 29 08:04:38 crc kubenswrapper[5017]: I0129 08:04:38.158550 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-gmv6z"] Jan 29 08:04:38 crc kubenswrapper[5017]: E0129 08:04:38.164087 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e77bce8-2efe-4f18-b9df-6cd6e7d36e31" containerName="mariadb-account-create-update" Jan 29 08:04:38 crc kubenswrapper[5017]: I0129 08:04:38.164134 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e77bce8-2efe-4f18-b9df-6cd6e7d36e31" containerName="mariadb-account-create-update" Jan 29 08:04:38 crc kubenswrapper[5017]: E0129 08:04:38.164182 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4f7ce1c-2e5b-4d84-a3bd-8e6ffdbda792" containerName="mariadb-database-create" Jan 29 08:04:38 crc kubenswrapper[5017]: I0129 08:04:38.164191 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4f7ce1c-2e5b-4d84-a3bd-8e6ffdbda792" containerName="mariadb-database-create" Jan 29 08:04:38 crc kubenswrapper[5017]: I0129 08:04:38.164534 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e77bce8-2efe-4f18-b9df-6cd6e7d36e31" containerName="mariadb-account-create-update" Jan 29 08:04:38 crc kubenswrapper[5017]: I0129 08:04:38.164561 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4f7ce1c-2e5b-4d84-a3bd-8e6ffdbda792" containerName="mariadb-database-create" Jan 29 08:04:38 crc kubenswrapper[5017]: I0129 08:04:38.165455 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-gmv6z" Jan 29 08:04:38 crc kubenswrapper[5017]: I0129 08:04:38.167544 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 29 08:04:38 crc kubenswrapper[5017]: I0129 08:04:38.167654 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-k9zb7" Jan 29 08:04:38 crc kubenswrapper[5017]: I0129 08:04:38.183166 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b19bcfb0-b10d-4b27-a6d9-8ca701b3d251-combined-ca-bundle\") pod \"glance-db-sync-gmv6z\" (UID: \"b19bcfb0-b10d-4b27-a6d9-8ca701b3d251\") " pod="openstack/glance-db-sync-gmv6z" Jan 29 08:04:38 crc kubenswrapper[5017]: I0129 08:04:38.183304 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b19bcfb0-b10d-4b27-a6d9-8ca701b3d251-config-data\") pod \"glance-db-sync-gmv6z\" (UID: \"b19bcfb0-b10d-4b27-a6d9-8ca701b3d251\") " pod="openstack/glance-db-sync-gmv6z" Jan 29 08:04:38 crc kubenswrapper[5017]: I0129 08:04:38.183522 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jl2xr\" (UniqueName: \"kubernetes.io/projected/b19bcfb0-b10d-4b27-a6d9-8ca701b3d251-kube-api-access-jl2xr\") pod \"glance-db-sync-gmv6z\" (UID: \"b19bcfb0-b10d-4b27-a6d9-8ca701b3d251\") " pod="openstack/glance-db-sync-gmv6z" Jan 29 08:04:38 crc kubenswrapper[5017]: I0129 08:04:38.183841 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b19bcfb0-b10d-4b27-a6d9-8ca701b3d251-db-sync-config-data\") pod \"glance-db-sync-gmv6z\" (UID: \"b19bcfb0-b10d-4b27-a6d9-8ca701b3d251\") " 
pod="openstack/glance-db-sync-gmv6z" Jan 29 08:04:38 crc kubenswrapper[5017]: I0129 08:04:38.192547 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-gmv6z"] Jan 29 08:04:38 crc kubenswrapper[5017]: I0129 08:04:38.285846 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b19bcfb0-b10d-4b27-a6d9-8ca701b3d251-combined-ca-bundle\") pod \"glance-db-sync-gmv6z\" (UID: \"b19bcfb0-b10d-4b27-a6d9-8ca701b3d251\") " pod="openstack/glance-db-sync-gmv6z" Jan 29 08:04:38 crc kubenswrapper[5017]: I0129 08:04:38.286499 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b19bcfb0-b10d-4b27-a6d9-8ca701b3d251-config-data\") pod \"glance-db-sync-gmv6z\" (UID: \"b19bcfb0-b10d-4b27-a6d9-8ca701b3d251\") " pod="openstack/glance-db-sync-gmv6z" Jan 29 08:04:38 crc kubenswrapper[5017]: I0129 08:04:38.286541 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jl2xr\" (UniqueName: \"kubernetes.io/projected/b19bcfb0-b10d-4b27-a6d9-8ca701b3d251-kube-api-access-jl2xr\") pod \"glance-db-sync-gmv6z\" (UID: \"b19bcfb0-b10d-4b27-a6d9-8ca701b3d251\") " pod="openstack/glance-db-sync-gmv6z" Jan 29 08:04:38 crc kubenswrapper[5017]: I0129 08:04:38.286633 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b19bcfb0-b10d-4b27-a6d9-8ca701b3d251-db-sync-config-data\") pod \"glance-db-sync-gmv6z\" (UID: \"b19bcfb0-b10d-4b27-a6d9-8ca701b3d251\") " pod="openstack/glance-db-sync-gmv6z" Jan 29 08:04:38 crc kubenswrapper[5017]: I0129 08:04:38.293697 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b19bcfb0-b10d-4b27-a6d9-8ca701b3d251-db-sync-config-data\") pod \"glance-db-sync-gmv6z\" (UID: \"b19bcfb0-b10d-4b27-a6d9-8ca701b3d251\") " pod="openstack/glance-db-sync-gmv6z" Jan 29 08:04:38 crc kubenswrapper[5017]: I0129 08:04:38.294092 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b19bcfb0-b10d-4b27-a6d9-8ca701b3d251-config-data\") pod \"glance-db-sync-gmv6z\" (UID: \"b19bcfb0-b10d-4b27-a6d9-8ca701b3d251\") " pod="openstack/glance-db-sync-gmv6z" Jan 29 08:04:38 crc kubenswrapper[5017]: I0129 08:04:38.306598 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b19bcfb0-b10d-4b27-a6d9-8ca701b3d251-combined-ca-bundle\") pod \"glance-db-sync-gmv6z\" (UID: \"b19bcfb0-b10d-4b27-a6d9-8ca701b3d251\") " pod="openstack/glance-db-sync-gmv6z" Jan 29 08:04:38 crc kubenswrapper[5017]: I0129 08:04:38.307846 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jl2xr\" (UniqueName: \"kubernetes.io/projected/b19bcfb0-b10d-4b27-a6d9-8ca701b3d251-kube-api-access-jl2xr\") pod \"glance-db-sync-gmv6z\" (UID: \"b19bcfb0-b10d-4b27-a6d9-8ca701b3d251\") " pod="openstack/glance-db-sync-gmv6z" Jan 29 08:04:38 crc kubenswrapper[5017]: I0129 08:04:38.489179 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-gmv6z" Jan 29 08:04:39 crc kubenswrapper[5017]: I0129 08:04:39.118898 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-gmv6z"] Jan 29 08:04:39 crc kubenswrapper[5017]: I0129 08:04:39.586646 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-gmv6z" event={"ID":"b19bcfb0-b10d-4b27-a6d9-8ca701b3d251","Type":"ContainerStarted","Data":"e9c9b2259d9a05c48a1084ae80afcdf9ef1ac9d4a97aa2bd2304f3b3f1f9b859"} Jan 29 08:04:40 crc kubenswrapper[5017]: I0129 08:04:40.596409 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-gmv6z" event={"ID":"b19bcfb0-b10d-4b27-a6d9-8ca701b3d251","Type":"ContainerStarted","Data":"be442bb7f7caa1d142e1e0c0c6980c86f751d92c460445419a08c6de49061f8e"} Jan 29 08:04:40 crc kubenswrapper[5017]: I0129 08:04:40.615625 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-gmv6z" podStartSLOduration=2.6155927549999998 podStartE2EDuration="2.615592755s" podCreationTimestamp="2026-01-29 08:04:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:04:40.61331218 +0000 UTC m=+5366.987759810" watchObservedRunningTime="2026-01-29 08:04:40.615592755 +0000 UTC m=+5366.990040365" Jan 29 08:04:43 crc kubenswrapper[5017]: I0129 08:04:43.634596 5017 generic.go:334] "Generic (PLEG): container finished" podID="b19bcfb0-b10d-4b27-a6d9-8ca701b3d251" containerID="be442bb7f7caa1d142e1e0c0c6980c86f751d92c460445419a08c6de49061f8e" exitCode=0 Jan 29 08:04:43 crc kubenswrapper[5017]: I0129 08:04:43.634726 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-gmv6z" event={"ID":"b19bcfb0-b10d-4b27-a6d9-8ca701b3d251","Type":"ContainerDied","Data":"be442bb7f7caa1d142e1e0c0c6980c86f751d92c460445419a08c6de49061f8e"} Jan 29 08:04:45 crc kubenswrapper[5017]: I0129 08:04:45.034745 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-gmv6z" Jan 29 08:04:45 crc kubenswrapper[5017]: I0129 08:04:45.233064 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jl2xr\" (UniqueName: \"kubernetes.io/projected/b19bcfb0-b10d-4b27-a6d9-8ca701b3d251-kube-api-access-jl2xr\") pod \"b19bcfb0-b10d-4b27-a6d9-8ca701b3d251\" (UID: \"b19bcfb0-b10d-4b27-a6d9-8ca701b3d251\") " Jan 29 08:04:45 crc kubenswrapper[5017]: I0129 08:04:45.233376 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b19bcfb0-b10d-4b27-a6d9-8ca701b3d251-db-sync-config-data\") pod \"b19bcfb0-b10d-4b27-a6d9-8ca701b3d251\" (UID: \"b19bcfb0-b10d-4b27-a6d9-8ca701b3d251\") " Jan 29 08:04:45 crc kubenswrapper[5017]: I0129 08:04:45.233428 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b19bcfb0-b10d-4b27-a6d9-8ca701b3d251-combined-ca-bundle\") pod \"b19bcfb0-b10d-4b27-a6d9-8ca701b3d251\" (UID: \"b19bcfb0-b10d-4b27-a6d9-8ca701b3d251\") " Jan 29 08:04:45 crc kubenswrapper[5017]: I0129 08:04:45.233528 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b19bcfb0-b10d-4b27-a6d9-8ca701b3d251-config-data\") pod \"b19bcfb0-b10d-4b27-a6d9-8ca701b3d251\" (UID: \"b19bcfb0-b10d-4b27-a6d9-8ca701b3d251\") " Jan 29 08:04:45 crc kubenswrapper[5017]: I0129 08:04:45.242195 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b19bcfb0-b10d-4b27-a6d9-8ca701b3d251-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "b19bcfb0-b10d-4b27-a6d9-8ca701b3d251" (UID: "b19bcfb0-b10d-4b27-a6d9-8ca701b3d251"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:04:45 crc kubenswrapper[5017]: I0129 08:04:45.243130 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b19bcfb0-b10d-4b27-a6d9-8ca701b3d251-kube-api-access-jl2xr" (OuterVolumeSpecName: "kube-api-access-jl2xr") pod "b19bcfb0-b10d-4b27-a6d9-8ca701b3d251" (UID: "b19bcfb0-b10d-4b27-a6d9-8ca701b3d251"). InnerVolumeSpecName "kube-api-access-jl2xr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:04:45 crc kubenswrapper[5017]: I0129 08:04:45.272505 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b19bcfb0-b10d-4b27-a6d9-8ca701b3d251-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b19bcfb0-b10d-4b27-a6d9-8ca701b3d251" (UID: "b19bcfb0-b10d-4b27-a6d9-8ca701b3d251"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:04:45 crc kubenswrapper[5017]: I0129 08:04:45.314579 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b19bcfb0-b10d-4b27-a6d9-8ca701b3d251-config-data" (OuterVolumeSpecName: "config-data") pod "b19bcfb0-b10d-4b27-a6d9-8ca701b3d251" (UID: "b19bcfb0-b10d-4b27-a6d9-8ca701b3d251"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:04:45 crc kubenswrapper[5017]: I0129 08:04:45.336910 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jl2xr\" (UniqueName: \"kubernetes.io/projected/b19bcfb0-b10d-4b27-a6d9-8ca701b3d251-kube-api-access-jl2xr\") on node \"crc\" DevicePath \"\"" Jan 29 08:04:45 crc kubenswrapper[5017]: I0129 08:04:45.336981 5017 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b19bcfb0-b10d-4b27-a6d9-8ca701b3d251-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 08:04:45 crc kubenswrapper[5017]: I0129 08:04:45.337001 5017 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b19bcfb0-b10d-4b27-a6d9-8ca701b3d251-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:04:45 crc kubenswrapper[5017]: I0129 08:04:45.337016 5017 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b19bcfb0-b10d-4b27-a6d9-8ca701b3d251-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 08:04:45 crc kubenswrapper[5017]: I0129 08:04:45.657779 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-gmv6z" event={"ID":"b19bcfb0-b10d-4b27-a6d9-8ca701b3d251","Type":"ContainerDied","Data":"e9c9b2259d9a05c48a1084ae80afcdf9ef1ac9d4a97aa2bd2304f3b3f1f9b859"} Jan 29 08:04:45 crc kubenswrapper[5017]: I0129 08:04:45.657833 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e9c9b2259d9a05c48a1084ae80afcdf9ef1ac9d4a97aa2bd2304f3b3f1f9b859" Jan 29 08:04:45 crc kubenswrapper[5017]: I0129 08:04:45.657902 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-gmv6z" Jan 29 08:04:46 crc kubenswrapper[5017]: I0129 08:04:46.152266 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c74747687-njvq6"] Jan 29 08:04:46 crc kubenswrapper[5017]: E0129 08:04:46.152965 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b19bcfb0-b10d-4b27-a6d9-8ca701b3d251" containerName="glance-db-sync" Jan 29 08:04:46 crc kubenswrapper[5017]: I0129 08:04:46.152981 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="b19bcfb0-b10d-4b27-a6d9-8ca701b3d251" containerName="glance-db-sync" Jan 29 08:04:46 crc kubenswrapper[5017]: I0129 08:04:46.153168 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="b19bcfb0-b10d-4b27-a6d9-8ca701b3d251" containerName="glance-db-sync" Jan 29 08:04:46 crc kubenswrapper[5017]: I0129 08:04:46.154121 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c74747687-njvq6" Jan 29 08:04:46 crc kubenswrapper[5017]: I0129 08:04:46.166302 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rp8lc\" (UniqueName: \"kubernetes.io/projected/5284dd1a-96ec-439f-a7f0-27df8a4cd656-kube-api-access-rp8lc\") pod \"dnsmasq-dns-5c74747687-njvq6\" (UID: \"5284dd1a-96ec-439f-a7f0-27df8a4cd656\") " pod="openstack/dnsmasq-dns-5c74747687-njvq6" Jan 29 08:04:46 crc kubenswrapper[5017]: I0129 08:04:46.166344 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5284dd1a-96ec-439f-a7f0-27df8a4cd656-config\") pod \"dnsmasq-dns-5c74747687-njvq6\" (UID: \"5284dd1a-96ec-439f-a7f0-27df8a4cd656\") " pod="openstack/dnsmasq-dns-5c74747687-njvq6" Jan 29 08:04:46 crc kubenswrapper[5017]: I0129 08:04:46.166392 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5284dd1a-96ec-439f-a7f0-27df8a4cd656-dns-svc\") pod \"dnsmasq-dns-5c74747687-njvq6\" (UID: \"5284dd1a-96ec-439f-a7f0-27df8a4cd656\") " pod="openstack/dnsmasq-dns-5c74747687-njvq6" Jan 29 08:04:46 crc kubenswrapper[5017]: I0129 08:04:46.166756 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5284dd1a-96ec-439f-a7f0-27df8a4cd656-ovsdbserver-sb\") pod \"dnsmasq-dns-5c74747687-njvq6\" (UID: \"5284dd1a-96ec-439f-a7f0-27df8a4cd656\") " pod="openstack/dnsmasq-dns-5c74747687-njvq6" Jan 29 08:04:46 crc kubenswrapper[5017]: I0129 08:04:46.166924 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5284dd1a-96ec-439f-a7f0-27df8a4cd656-ovsdbserver-nb\") pod \"dnsmasq-dns-5c74747687-njvq6\" (UID: \"5284dd1a-96ec-439f-a7f0-27df8a4cd656\") " pod="openstack/dnsmasq-dns-5c74747687-njvq6" Jan 29 08:04:46 crc kubenswrapper[5017]: I0129 08:04:46.173606 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 08:04:46 crc kubenswrapper[5017]: I0129 08:04:46.175599 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 08:04:46 crc kubenswrapper[5017]: I0129 08:04:46.182612 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 29 08:04:46 crc kubenswrapper[5017]: I0129 08:04:46.183777 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 29 08:04:46 crc kubenswrapper[5017]: I0129 08:04:46.186444 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 29 08:04:46 crc kubenswrapper[5017]: I0129 08:04:46.191636 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-k9zb7" Jan 29 08:04:46 crc kubenswrapper[5017]: I0129 08:04:46.203672 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c74747687-njvq6"] Jan 29 08:04:46 crc kubenswrapper[5017]: I0129 08:04:46.253003 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 08:04:46 crc kubenswrapper[5017]: I0129 08:04:46.271762 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5284dd1a-96ec-439f-a7f0-27df8a4cd656-ovsdbserver-nb\") pod \"dnsmasq-dns-5c74747687-njvq6\" (UID: \"5284dd1a-96ec-439f-a7f0-27df8a4cd656\") " pod="openstack/dnsmasq-dns-5c74747687-njvq6" Jan 29 08:04:46 crc kubenswrapper[5017]: I0129 08:04:46.271820 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3166fea-5e21-4e40-a24f-6d1b64e65ca1-logs\") pod \"glance-default-external-api-0\" (UID: \"c3166fea-5e21-4e40-a24f-6d1b64e65ca1\") " pod="openstack/glance-default-external-api-0" Jan 29 08:04:46 crc kubenswrapper[5017]: I0129 08:04:46.271849 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3166fea-5e21-4e40-a24f-6d1b64e65ca1-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c3166fea-5e21-4e40-a24f-6d1b64e65ca1\") " pod="openstack/glance-default-external-api-0" Jan 29 08:04:46 crc kubenswrapper[5017]: I0129 08:04:46.271871 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rp8lc\" (UniqueName: \"kubernetes.io/projected/5284dd1a-96ec-439f-a7f0-27df8a4cd656-kube-api-access-rp8lc\") pod \"dnsmasq-dns-5c74747687-njvq6\" (UID: \"5284dd1a-96ec-439f-a7f0-27df8a4cd656\") " pod="openstack/dnsmasq-dns-5c74747687-njvq6" Jan 29 08:04:46 crc kubenswrapper[5017]: I0129 08:04:46.271891 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5284dd1a-96ec-439f-a7f0-27df8a4cd656-config\") pod \"dnsmasq-dns-5c74747687-njvq6\" (UID: \"5284dd1a-96ec-439f-a7f0-27df8a4cd656\") " pod="openstack/dnsmasq-dns-5c74747687-njvq6" Jan 29 08:04:46 crc kubenswrapper[5017]: I0129 08:04:46.271909 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/c3166fea-5e21-4e40-a24f-6d1b64e65ca1-ceph\") pod \"glance-default-external-api-0\" (UID: \"c3166fea-5e21-4e40-a24f-6d1b64e65ca1\") " pod="openstack/glance-default-external-api-0" Jan 29 08:04:46 crc kubenswrapper[5017]: I0129 08:04:46.271929 5017 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c3166fea-5e21-4e40-a24f-6d1b64e65ca1-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c3166fea-5e21-4e40-a24f-6d1b64e65ca1\") " pod="openstack/glance-default-external-api-0" Jan 29 08:04:46 crc kubenswrapper[5017]: I0129 08:04:46.271950 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5284dd1a-96ec-439f-a7f0-27df8a4cd656-dns-svc\") pod \"dnsmasq-dns-5c74747687-njvq6\" (UID: \"5284dd1a-96ec-439f-a7f0-27df8a4cd656\") " pod="openstack/dnsmasq-dns-5c74747687-njvq6" Jan 29 08:04:46 crc kubenswrapper[5017]: I0129 08:04:46.272005 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4jz8\" (UniqueName: \"kubernetes.io/projected/c3166fea-5e21-4e40-a24f-6d1b64e65ca1-kube-api-access-d4jz8\") pod \"glance-default-external-api-0\" (UID: \"c3166fea-5e21-4e40-a24f-6d1b64e65ca1\") " pod="openstack/glance-default-external-api-0" Jan 29 08:04:46 crc kubenswrapper[5017]: I0129 08:04:46.272055 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5284dd1a-96ec-439f-a7f0-27df8a4cd656-ovsdbserver-sb\") pod \"dnsmasq-dns-5c74747687-njvq6\" (UID: \"5284dd1a-96ec-439f-a7f0-27df8a4cd656\") " pod="openstack/dnsmasq-dns-5c74747687-njvq6" Jan 29 08:04:46 crc kubenswrapper[5017]: I0129 08:04:46.272081 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c3166fea-5e21-4e40-a24f-6d1b64e65ca1-scripts\") pod \"glance-default-external-api-0\" (UID: \"c3166fea-5e21-4e40-a24f-6d1b64e65ca1\") " pod="openstack/glance-default-external-api-0" Jan 29 08:04:46 crc kubenswrapper[5017]: I0129 08:04:46.272098 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3166fea-5e21-4e40-a24f-6d1b64e65ca1-config-data\") pod \"glance-default-external-api-0\" (UID: \"c3166fea-5e21-4e40-a24f-6d1b64e65ca1\") " pod="openstack/glance-default-external-api-0" Jan 29 08:04:46 crc kubenswrapper[5017]: I0129 08:04:46.273058 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5284dd1a-96ec-439f-a7f0-27df8a4cd656-ovsdbserver-nb\") pod \"dnsmasq-dns-5c74747687-njvq6\" (UID: \"5284dd1a-96ec-439f-a7f0-27df8a4cd656\") " pod="openstack/dnsmasq-dns-5c74747687-njvq6" Jan 29 08:04:46 crc kubenswrapper[5017]: I0129 08:04:46.273808 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5284dd1a-96ec-439f-a7f0-27df8a4cd656-config\") pod \"dnsmasq-dns-5c74747687-njvq6\" (UID: \"5284dd1a-96ec-439f-a7f0-27df8a4cd656\") " pod="openstack/dnsmasq-dns-5c74747687-njvq6" Jan 29 08:04:46 crc kubenswrapper[5017]: I0129 08:04:46.274382 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5284dd1a-96ec-439f-a7f0-27df8a4cd656-dns-svc\") pod \"dnsmasq-dns-5c74747687-njvq6\" (UID: \"5284dd1a-96ec-439f-a7f0-27df8a4cd656\") " pod="openstack/dnsmasq-dns-5c74747687-njvq6" Jan 29 08:04:46 crc kubenswrapper[5017]: I0129 08:04:46.274987 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5284dd1a-96ec-439f-a7f0-27df8a4cd656-ovsdbserver-sb\") pod \"dnsmasq-dns-5c74747687-njvq6\" (UID: \"5284dd1a-96ec-439f-a7f0-27df8a4cd656\") " pod="openstack/dnsmasq-dns-5c74747687-njvq6" Jan 29 08:04:46 crc kubenswrapper[5017]: I0129 08:04:46.299137 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rp8lc\" (UniqueName: \"kubernetes.io/projected/5284dd1a-96ec-439f-a7f0-27df8a4cd656-kube-api-access-rp8lc\") pod \"dnsmasq-dns-5c74747687-njvq6\" (UID: \"5284dd1a-96ec-439f-a7f0-27df8a4cd656\") " pod="openstack/dnsmasq-dns-5c74747687-njvq6" Jan 29 08:04:46 crc kubenswrapper[5017]: I0129 08:04:46.373362 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3166fea-5e21-4e40-a24f-6d1b64e65ca1-logs\") pod \"glance-default-external-api-0\" (UID: \"c3166fea-5e21-4e40-a24f-6d1b64e65ca1\") " pod="openstack/glance-default-external-api-0" Jan 29 08:04:46 crc kubenswrapper[5017]: I0129 08:04:46.373444 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3166fea-5e21-4e40-a24f-6d1b64e65ca1-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c3166fea-5e21-4e40-a24f-6d1b64e65ca1\") " pod="openstack/glance-default-external-api-0" Jan 29 08:04:46 crc kubenswrapper[5017]: I0129 08:04:46.373470 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/c3166fea-5e21-4e40-a24f-6d1b64e65ca1-ceph\") pod \"glance-default-external-api-0\" (UID: \"c3166fea-5e21-4e40-a24f-6d1b64e65ca1\") " pod="openstack/glance-default-external-api-0" Jan 29 08:04:46 crc kubenswrapper[5017]: I0129 08:04:46.373495 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c3166fea-5e21-4e40-a24f-6d1b64e65ca1-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c3166fea-5e21-4e40-a24f-6d1b64e65ca1\") " pod="openstack/glance-default-external-api-0" Jan 29 08:04:46 crc kubenswrapper[5017]: I0129 08:04:46.373553 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4jz8\" (UniqueName: \"kubernetes.io/projected/c3166fea-5e21-4e40-a24f-6d1b64e65ca1-kube-api-access-d4jz8\") pod \"glance-default-external-api-0\" (UID: \"c3166fea-5e21-4e40-a24f-6d1b64e65ca1\") " pod="openstack/glance-default-external-api-0" Jan 29 08:04:46 crc kubenswrapper[5017]: I0129 08:04:46.373623 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c3166fea-5e21-4e40-a24f-6d1b64e65ca1-scripts\") pod \"glance-default-external-api-0\" (UID: \"c3166fea-5e21-4e40-a24f-6d1b64e65ca1\") " pod="openstack/glance-default-external-api-0" Jan 29 08:04:46 crc kubenswrapper[5017]: I0129 08:04:46.373645 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3166fea-5e21-4e40-a24f-6d1b64e65ca1-config-data\") pod \"glance-default-external-api-0\" (UID: \"c3166fea-5e21-4e40-a24f-6d1b64e65ca1\") " pod="openstack/glance-default-external-api-0" Jan 29 08:04:46 crc kubenswrapper[5017]: I0129 08:04:46.375048 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3166fea-5e21-4e40-a24f-6d1b64e65ca1-logs\") pod 
\"glance-default-external-api-0\" (UID: \"c3166fea-5e21-4e40-a24f-6d1b64e65ca1\") " pod="openstack/glance-default-external-api-0" Jan 29 08:04:46 crc kubenswrapper[5017]: I0129 08:04:46.375320 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c3166fea-5e21-4e40-a24f-6d1b64e65ca1-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c3166fea-5e21-4e40-a24f-6d1b64e65ca1\") " pod="openstack/glance-default-external-api-0" Jan 29 08:04:46 crc kubenswrapper[5017]: I0129 08:04:46.404848 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/c3166fea-5e21-4e40-a24f-6d1b64e65ca1-ceph\") pod \"glance-default-external-api-0\" (UID: \"c3166fea-5e21-4e40-a24f-6d1b64e65ca1\") " pod="openstack/glance-default-external-api-0" Jan 29 08:04:46 crc kubenswrapper[5017]: I0129 08:04:46.405524 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3166fea-5e21-4e40-a24f-6d1b64e65ca1-config-data\") pod \"glance-default-external-api-0\" (UID: \"c3166fea-5e21-4e40-a24f-6d1b64e65ca1\") " pod="openstack/glance-default-external-api-0" Jan 29 08:04:46 crc kubenswrapper[5017]: I0129 08:04:46.416991 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3166fea-5e21-4e40-a24f-6d1b64e65ca1-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c3166fea-5e21-4e40-a24f-6d1b64e65ca1\") " pod="openstack/glance-default-external-api-0" Jan 29 08:04:46 crc kubenswrapper[5017]: I0129 08:04:46.419687 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c3166fea-5e21-4e40-a24f-6d1b64e65ca1-scripts\") pod \"glance-default-external-api-0\" (UID: \"c3166fea-5e21-4e40-a24f-6d1b64e65ca1\") " pod="openstack/glance-default-external-api-0" Jan 29 08:04:46 crc kubenswrapper[5017]: I0129 08:04:46.422606 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4jz8\" (UniqueName: \"kubernetes.io/projected/c3166fea-5e21-4e40-a24f-6d1b64e65ca1-kube-api-access-d4jz8\") pod \"glance-default-external-api-0\" (UID: \"c3166fea-5e21-4e40-a24f-6d1b64e65ca1\") " pod="openstack/glance-default-external-api-0" Jan 29 08:04:46 crc kubenswrapper[5017]: I0129 08:04:46.476816 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c74747687-njvq6" Jan 29 08:04:46 crc kubenswrapper[5017]: I0129 08:04:46.519144 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 08:04:46 crc kubenswrapper[5017]: I0129 08:04:46.550418 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 08:04:46 crc kubenswrapper[5017]: I0129 08:04:46.551844 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 08:04:46 crc kubenswrapper[5017]: I0129 08:04:46.555726 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 29 08:04:46 crc kubenswrapper[5017]: I0129 08:04:46.576075 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 08:04:46 crc kubenswrapper[5017]: I0129 08:04:46.683045 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66dfbef7-be9f-48f5-8048-681134c26dea-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"66dfbef7-be9f-48f5-8048-681134c26dea\") " pod="openstack/glance-default-internal-api-0" Jan 29 08:04:46 crc kubenswrapper[5017]: I0129 08:04:46.683820 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66dfbef7-be9f-48f5-8048-681134c26dea-config-data\") pod \"glance-default-internal-api-0\" (UID: \"66dfbef7-be9f-48f5-8048-681134c26dea\") " pod="openstack/glance-default-internal-api-0" Jan 29 08:04:46 crc kubenswrapper[5017]: I0129 08:04:46.683869 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/66dfbef7-be9f-48f5-8048-681134c26dea-logs\") pod \"glance-default-internal-api-0\" (UID: \"66dfbef7-be9f-48f5-8048-681134c26dea\") " pod="openstack/glance-default-internal-api-0" Jan 29 08:04:46 crc kubenswrapper[5017]: I0129 08:04:46.684026 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/66dfbef7-be9f-48f5-8048-681134c26dea-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"66dfbef7-be9f-48f5-8048-681134c26dea\") " pod="openstack/glance-default-internal-api-0" Jan 29 08:04:46 crc kubenswrapper[5017]: I0129 08:04:46.684341 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/66dfbef7-be9f-48f5-8048-681134c26dea-ceph\") pod \"glance-default-internal-api-0\" (UID: \"66dfbef7-be9f-48f5-8048-681134c26dea\") " pod="openstack/glance-default-internal-api-0" Jan 29 08:04:46 crc kubenswrapper[5017]: I0129 08:04:46.684636 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5h998\" (UniqueName: \"kubernetes.io/projected/66dfbef7-be9f-48f5-8048-681134c26dea-kube-api-access-5h998\") pod \"glance-default-internal-api-0\" (UID: \"66dfbef7-be9f-48f5-8048-681134c26dea\") " pod="openstack/glance-default-internal-api-0" Jan 29 08:04:46 crc kubenswrapper[5017]: I0129 08:04:46.685509 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66dfbef7-be9f-48f5-8048-681134c26dea-scripts\") pod \"glance-default-internal-api-0\" (UID: \"66dfbef7-be9f-48f5-8048-681134c26dea\") " pod="openstack/glance-default-internal-api-0" Jan 29 08:04:46 crc kubenswrapper[5017]: I0129 08:04:46.787185 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66dfbef7-be9f-48f5-8048-681134c26dea-config-data\") pod \"glance-default-internal-api-0\" (UID: 
\"66dfbef7-be9f-48f5-8048-681134c26dea\") " pod="openstack/glance-default-internal-api-0" Jan 29 08:04:46 crc kubenswrapper[5017]: I0129 08:04:46.787241 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/66dfbef7-be9f-48f5-8048-681134c26dea-logs\") pod \"glance-default-internal-api-0\" (UID: \"66dfbef7-be9f-48f5-8048-681134c26dea\") " pod="openstack/glance-default-internal-api-0" Jan 29 08:04:46 crc kubenswrapper[5017]: I0129 08:04:46.787273 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/66dfbef7-be9f-48f5-8048-681134c26dea-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"66dfbef7-be9f-48f5-8048-681134c26dea\") " pod="openstack/glance-default-internal-api-0" Jan 29 08:04:46 crc kubenswrapper[5017]: I0129 08:04:46.787305 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/66dfbef7-be9f-48f5-8048-681134c26dea-ceph\") pod \"glance-default-internal-api-0\" (UID: \"66dfbef7-be9f-48f5-8048-681134c26dea\") " pod="openstack/glance-default-internal-api-0" Jan 29 08:04:46 crc kubenswrapper[5017]: I0129 08:04:46.787338 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5h998\" (UniqueName: \"kubernetes.io/projected/66dfbef7-be9f-48f5-8048-681134c26dea-kube-api-access-5h998\") pod \"glance-default-internal-api-0\" (UID: \"66dfbef7-be9f-48f5-8048-681134c26dea\") " pod="openstack/glance-default-internal-api-0" Jan 29 08:04:46 crc kubenswrapper[5017]: I0129 08:04:46.787386 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66dfbef7-be9f-48f5-8048-681134c26dea-scripts\") pod \"glance-default-internal-api-0\" (UID: \"66dfbef7-be9f-48f5-8048-681134c26dea\") " pod="openstack/glance-default-internal-api-0" Jan 29 08:04:46 crc kubenswrapper[5017]: I0129 08:04:46.787426 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66dfbef7-be9f-48f5-8048-681134c26dea-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"66dfbef7-be9f-48f5-8048-681134c26dea\") " pod="openstack/glance-default-internal-api-0" Jan 29 08:04:46 crc kubenswrapper[5017]: I0129 08:04:46.788448 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/66dfbef7-be9f-48f5-8048-681134c26dea-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"66dfbef7-be9f-48f5-8048-681134c26dea\") " pod="openstack/glance-default-internal-api-0" Jan 29 08:04:46 crc kubenswrapper[5017]: I0129 08:04:46.788818 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/66dfbef7-be9f-48f5-8048-681134c26dea-logs\") pod \"glance-default-internal-api-0\" (UID: \"66dfbef7-be9f-48f5-8048-681134c26dea\") " pod="openstack/glance-default-internal-api-0" Jan 29 08:04:46 crc kubenswrapper[5017]: I0129 08:04:46.793908 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66dfbef7-be9f-48f5-8048-681134c26dea-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"66dfbef7-be9f-48f5-8048-681134c26dea\") " pod="openstack/glance-default-internal-api-0" Jan 29 08:04:46 crc kubenswrapper[5017]: 
I0129 08:04:46.793996 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/66dfbef7-be9f-48f5-8048-681134c26dea-ceph\") pod \"glance-default-internal-api-0\" (UID: \"66dfbef7-be9f-48f5-8048-681134c26dea\") " pod="openstack/glance-default-internal-api-0" Jan 29 08:04:46 crc kubenswrapper[5017]: I0129 08:04:46.794804 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66dfbef7-be9f-48f5-8048-681134c26dea-config-data\") pod \"glance-default-internal-api-0\" (UID: \"66dfbef7-be9f-48f5-8048-681134c26dea\") " pod="openstack/glance-default-internal-api-0" Jan 29 08:04:46 crc kubenswrapper[5017]: I0129 08:04:46.795833 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66dfbef7-be9f-48f5-8048-681134c26dea-scripts\") pod \"glance-default-internal-api-0\" (UID: \"66dfbef7-be9f-48f5-8048-681134c26dea\") " pod="openstack/glance-default-internal-api-0" Jan 29 08:04:46 crc kubenswrapper[5017]: I0129 08:04:46.812092 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5h998\" (UniqueName: \"kubernetes.io/projected/66dfbef7-be9f-48f5-8048-681134c26dea-kube-api-access-5h998\") pod \"glance-default-internal-api-0\" (UID: \"66dfbef7-be9f-48f5-8048-681134c26dea\") " pod="openstack/glance-default-internal-api-0" Jan 29 08:04:46 crc kubenswrapper[5017]: I0129 08:04:46.919877 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 08:04:47 crc kubenswrapper[5017]: I0129 08:04:47.089497 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c74747687-njvq6"] Jan 29 08:04:47 crc kubenswrapper[5017]: I0129 08:04:47.249473 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 08:04:47 crc kubenswrapper[5017]: W0129 08:04:47.252571 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc3166fea_5e21_4e40_a24f_6d1b64e65ca1.slice/crio-cc3085f24d04fbcc4b7fd458663ed9c6377dc8c7aae32309bba86c08a736d8c2 WatchSource:0}: Error finding container cc3085f24d04fbcc4b7fd458663ed9c6377dc8c7aae32309bba86c08a736d8c2: Status 404 returned error can't find the container with id cc3085f24d04fbcc4b7fd458663ed9c6377dc8c7aae32309bba86c08a736d8c2 Jan 29 08:04:47 crc kubenswrapper[5017]: I0129 08:04:47.496180 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 08:04:47 crc kubenswrapper[5017]: I0129 08:04:47.603903 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 08:04:47 crc kubenswrapper[5017]: I0129 08:04:47.715358 5017 generic.go:334] "Generic (PLEG): container finished" podID="5284dd1a-96ec-439f-a7f0-27df8a4cd656" containerID="a3f44aa9e3b368a30df0071c059f008e20be4843769ae4c59c5d2dc3c087a683" exitCode=0 Jan 29 08:04:47 crc kubenswrapper[5017]: I0129 08:04:47.715499 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c74747687-njvq6" event={"ID":"5284dd1a-96ec-439f-a7f0-27df8a4cd656","Type":"ContainerDied","Data":"a3f44aa9e3b368a30df0071c059f008e20be4843769ae4c59c5d2dc3c087a683"} Jan 29 08:04:47 crc kubenswrapper[5017]: I0129 08:04:47.715548 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-5c74747687-njvq6" event={"ID":"5284dd1a-96ec-439f-a7f0-27df8a4cd656","Type":"ContainerStarted","Data":"6a2034ef5cf1d731f175527c036936a12fb038cc79a111f90dd4ef5f1d00b76b"} Jan 29 08:04:47 crc kubenswrapper[5017]: I0129 08:04:47.717380 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c3166fea-5e21-4e40-a24f-6d1b64e65ca1","Type":"ContainerStarted","Data":"cc3085f24d04fbcc4b7fd458663ed9c6377dc8c7aae32309bba86c08a736d8c2"} Jan 29 08:04:47 crc kubenswrapper[5017]: I0129 08:04:47.720930 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"66dfbef7-be9f-48f5-8048-681134c26dea","Type":"ContainerStarted","Data":"0b8052ffd52ad805bde39bd82cf8b2cef65de9e78c406cd1192332973d6dd278"} Jan 29 08:04:48 crc kubenswrapper[5017]: I0129 08:04:48.737550 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c74747687-njvq6" event={"ID":"5284dd1a-96ec-439f-a7f0-27df8a4cd656","Type":"ContainerStarted","Data":"ae9ff78e01547deab561284c84ff72c42142622e775e1e96f494d5c951288d40"} Jan 29 08:04:48 crc kubenswrapper[5017]: I0129 08:04:48.738405 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c74747687-njvq6" Jan 29 08:04:48 crc kubenswrapper[5017]: I0129 08:04:48.741626 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c3166fea-5e21-4e40-a24f-6d1b64e65ca1","Type":"ContainerStarted","Data":"027fa126a9c714b46f701be6e7c2904a42a7c6618666edec79a384c3b86aa4f9"} Jan 29 08:04:48 crc kubenswrapper[5017]: I0129 08:04:48.741683 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c3166fea-5e21-4e40-a24f-6d1b64e65ca1","Type":"ContainerStarted","Data":"d79123ad8d35a2ed33e4e6f4e2428247fe8135d81425f9a75b03cf6f123ef573"} Jan 29 08:04:48 crc kubenswrapper[5017]: I0129 08:04:48.741831 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="c3166fea-5e21-4e40-a24f-6d1b64e65ca1" containerName="glance-log" containerID="cri-o://d79123ad8d35a2ed33e4e6f4e2428247fe8135d81425f9a75b03cf6f123ef573" gracePeriod=30 Jan 29 08:04:48 crc kubenswrapper[5017]: I0129 08:04:48.741946 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="c3166fea-5e21-4e40-a24f-6d1b64e65ca1" containerName="glance-httpd" containerID="cri-o://027fa126a9c714b46f701be6e7c2904a42a7c6618666edec79a384c3b86aa4f9" gracePeriod=30 Jan 29 08:04:48 crc kubenswrapper[5017]: I0129 08:04:48.746677 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"66dfbef7-be9f-48f5-8048-681134c26dea","Type":"ContainerStarted","Data":"86b9d85c71dfd028d9f3368e26833f57f331ef22c6068be8a6902e475d441d7a"} Jan 29 08:04:48 crc kubenswrapper[5017]: I0129 08:04:48.770682 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c74747687-njvq6" podStartSLOduration=2.7706574980000003 podStartE2EDuration="2.770657498s" podCreationTimestamp="2026-01-29 08:04:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:04:48.759772896 +0000 UTC m=+5375.134220506" watchObservedRunningTime="2026-01-29 08:04:48.770657498 +0000 UTC 
m=+5375.145105108" Jan 29 08:04:48 crc kubenswrapper[5017]: I0129 08:04:48.794031 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=2.794008348 podStartE2EDuration="2.794008348s" podCreationTimestamp="2026-01-29 08:04:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:04:48.792551513 +0000 UTC m=+5375.166999123" watchObservedRunningTime="2026-01-29 08:04:48.794008348 +0000 UTC m=+5375.168455958" Jan 29 08:04:49 crc kubenswrapper[5017]: I0129 08:04:49.231342 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 08:04:49 crc kubenswrapper[5017]: I0129 08:04:49.387573 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3166fea-5e21-4e40-a24f-6d1b64e65ca1-config-data\") pod \"c3166fea-5e21-4e40-a24f-6d1b64e65ca1\" (UID: \"c3166fea-5e21-4e40-a24f-6d1b64e65ca1\") " Jan 29 08:04:49 crc kubenswrapper[5017]: I0129 08:04:49.387644 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4jz8\" (UniqueName: \"kubernetes.io/projected/c3166fea-5e21-4e40-a24f-6d1b64e65ca1-kube-api-access-d4jz8\") pod \"c3166fea-5e21-4e40-a24f-6d1b64e65ca1\" (UID: \"c3166fea-5e21-4e40-a24f-6d1b64e65ca1\") " Jan 29 08:04:49 crc kubenswrapper[5017]: I0129 08:04:49.387705 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3166fea-5e21-4e40-a24f-6d1b64e65ca1-logs\") pod \"c3166fea-5e21-4e40-a24f-6d1b64e65ca1\" (UID: \"c3166fea-5e21-4e40-a24f-6d1b64e65ca1\") " Jan 29 08:04:49 crc kubenswrapper[5017]: I0129 08:04:49.388295 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c3166fea-5e21-4e40-a24f-6d1b64e65ca1-httpd-run\") pod \"c3166fea-5e21-4e40-a24f-6d1b64e65ca1\" (UID: \"c3166fea-5e21-4e40-a24f-6d1b64e65ca1\") " Jan 29 08:04:49 crc kubenswrapper[5017]: I0129 08:04:49.388450 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3166fea-5e21-4e40-a24f-6d1b64e65ca1-logs" (OuterVolumeSpecName: "logs") pod "c3166fea-5e21-4e40-a24f-6d1b64e65ca1" (UID: "c3166fea-5e21-4e40-a24f-6d1b64e65ca1"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:04:49 crc kubenswrapper[5017]: I0129 08:04:49.388563 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c3166fea-5e21-4e40-a24f-6d1b64e65ca1-scripts\") pod \"c3166fea-5e21-4e40-a24f-6d1b64e65ca1\" (UID: \"c3166fea-5e21-4e40-a24f-6d1b64e65ca1\") " Jan 29 08:04:49 crc kubenswrapper[5017]: I0129 08:04:49.388641 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3166fea-5e21-4e40-a24f-6d1b64e65ca1-combined-ca-bundle\") pod \"c3166fea-5e21-4e40-a24f-6d1b64e65ca1\" (UID: \"c3166fea-5e21-4e40-a24f-6d1b64e65ca1\") " Jan 29 08:04:49 crc kubenswrapper[5017]: I0129 08:04:49.388645 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3166fea-5e21-4e40-a24f-6d1b64e65ca1-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "c3166fea-5e21-4e40-a24f-6d1b64e65ca1" (UID: "c3166fea-5e21-4e40-a24f-6d1b64e65ca1"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:04:49 crc kubenswrapper[5017]: I0129 08:04:49.388802 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/c3166fea-5e21-4e40-a24f-6d1b64e65ca1-ceph\") pod \"c3166fea-5e21-4e40-a24f-6d1b64e65ca1\" (UID: \"c3166fea-5e21-4e40-a24f-6d1b64e65ca1\") " Jan 29 08:04:49 crc kubenswrapper[5017]: I0129 08:04:49.389711 5017 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3166fea-5e21-4e40-a24f-6d1b64e65ca1-logs\") on node \"crc\" DevicePath \"\"" Jan 29 08:04:49 crc kubenswrapper[5017]: I0129 08:04:49.389740 5017 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c3166fea-5e21-4e40-a24f-6d1b64e65ca1-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 29 08:04:49 crc kubenswrapper[5017]: I0129 08:04:49.396021 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3166fea-5e21-4e40-a24f-6d1b64e65ca1-ceph" (OuterVolumeSpecName: "ceph") pod "c3166fea-5e21-4e40-a24f-6d1b64e65ca1" (UID: "c3166fea-5e21-4e40-a24f-6d1b64e65ca1"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:04:49 crc kubenswrapper[5017]: I0129 08:04:49.409161 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3166fea-5e21-4e40-a24f-6d1b64e65ca1-scripts" (OuterVolumeSpecName: "scripts") pod "c3166fea-5e21-4e40-a24f-6d1b64e65ca1" (UID: "c3166fea-5e21-4e40-a24f-6d1b64e65ca1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:04:49 crc kubenswrapper[5017]: I0129 08:04:49.411908 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3166fea-5e21-4e40-a24f-6d1b64e65ca1-kube-api-access-d4jz8" (OuterVolumeSpecName: "kube-api-access-d4jz8") pod "c3166fea-5e21-4e40-a24f-6d1b64e65ca1" (UID: "c3166fea-5e21-4e40-a24f-6d1b64e65ca1"). InnerVolumeSpecName "kube-api-access-d4jz8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:04:49 crc kubenswrapper[5017]: I0129 08:04:49.419104 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3166fea-5e21-4e40-a24f-6d1b64e65ca1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c3166fea-5e21-4e40-a24f-6d1b64e65ca1" (UID: "c3166fea-5e21-4e40-a24f-6d1b64e65ca1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:04:49 crc kubenswrapper[5017]: I0129 08:04:49.446245 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3166fea-5e21-4e40-a24f-6d1b64e65ca1-config-data" (OuterVolumeSpecName: "config-data") pod "c3166fea-5e21-4e40-a24f-6d1b64e65ca1" (UID: "c3166fea-5e21-4e40-a24f-6d1b64e65ca1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:04:49 crc kubenswrapper[5017]: I0129 08:04:49.491582 5017 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c3166fea-5e21-4e40-a24f-6d1b64e65ca1-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 08:04:49 crc kubenswrapper[5017]: I0129 08:04:49.491637 5017 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3166fea-5e21-4e40-a24f-6d1b64e65ca1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:04:49 crc kubenswrapper[5017]: I0129 08:04:49.491652 5017 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/c3166fea-5e21-4e40-a24f-6d1b64e65ca1-ceph\") on node \"crc\" DevicePath \"\"" Jan 29 08:04:49 crc kubenswrapper[5017]: I0129 08:04:49.491666 5017 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3166fea-5e21-4e40-a24f-6d1b64e65ca1-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 08:04:49 crc kubenswrapper[5017]: I0129 08:04:49.491681 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4jz8\" (UniqueName: \"kubernetes.io/projected/c3166fea-5e21-4e40-a24f-6d1b64e65ca1-kube-api-access-d4jz8\") on node \"crc\" DevicePath \"\"" Jan 29 08:04:49 crc kubenswrapper[5017]: I0129 08:04:49.728887 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 08:04:49 crc kubenswrapper[5017]: I0129 08:04:49.764064 5017 generic.go:334] "Generic (PLEG): container finished" podID="c3166fea-5e21-4e40-a24f-6d1b64e65ca1" containerID="027fa126a9c714b46f701be6e7c2904a42a7c6618666edec79a384c3b86aa4f9" exitCode=143 Jan 29 08:04:49 crc kubenswrapper[5017]: I0129 08:04:49.764107 5017 generic.go:334] "Generic (PLEG): container finished" podID="c3166fea-5e21-4e40-a24f-6d1b64e65ca1" containerID="d79123ad8d35a2ed33e4e6f4e2428247fe8135d81425f9a75b03cf6f123ef573" exitCode=143 Jan 29 08:04:49 crc kubenswrapper[5017]: I0129 08:04:49.764174 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c3166fea-5e21-4e40-a24f-6d1b64e65ca1","Type":"ContainerDied","Data":"027fa126a9c714b46f701be6e7c2904a42a7c6618666edec79a384c3b86aa4f9"} Jan 29 08:04:49 crc kubenswrapper[5017]: I0129 08:04:49.764213 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c3166fea-5e21-4e40-a24f-6d1b64e65ca1","Type":"ContainerDied","Data":"d79123ad8d35a2ed33e4e6f4e2428247fe8135d81425f9a75b03cf6f123ef573"} Jan 
29 08:04:49 crc kubenswrapper[5017]: I0129 08:04:49.764229 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c3166fea-5e21-4e40-a24f-6d1b64e65ca1","Type":"ContainerDied","Data":"cc3085f24d04fbcc4b7fd458663ed9c6377dc8c7aae32309bba86c08a736d8c2"} Jan 29 08:04:49 crc kubenswrapper[5017]: I0129 08:04:49.764250 5017 scope.go:117] "RemoveContainer" containerID="027fa126a9c714b46f701be6e7c2904a42a7c6618666edec79a384c3b86aa4f9" Jan 29 08:04:49 crc kubenswrapper[5017]: I0129 08:04:49.764406 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 08:04:49 crc kubenswrapper[5017]: I0129 08:04:49.768950 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"66dfbef7-be9f-48f5-8048-681134c26dea","Type":"ContainerStarted","Data":"37227e16ddca74ef54a465a205284d4756482aa859c5743da943e0759d83ad6b"} Jan 29 08:04:49 crc kubenswrapper[5017]: I0129 08:04:49.799694 5017 scope.go:117] "RemoveContainer" containerID="d79123ad8d35a2ed33e4e6f4e2428247fe8135d81425f9a75b03cf6f123ef573" Jan 29 08:04:49 crc kubenswrapper[5017]: I0129 08:04:49.808261 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.808206248 podStartE2EDuration="3.808206248s" podCreationTimestamp="2026-01-29 08:04:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:04:49.801989088 +0000 UTC m=+5376.176436718" watchObservedRunningTime="2026-01-29 08:04:49.808206248 +0000 UTC m=+5376.182653868" Jan 29 08:04:49 crc kubenswrapper[5017]: I0129 08:04:49.838610 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 08:04:49 crc kubenswrapper[5017]: I0129 08:04:49.843508 5017 scope.go:117] "RemoveContainer" containerID="027fa126a9c714b46f701be6e7c2904a42a7c6618666edec79a384c3b86aa4f9" Jan 29 08:04:49 crc kubenswrapper[5017]: E0129 08:04:49.844308 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"027fa126a9c714b46f701be6e7c2904a42a7c6618666edec79a384c3b86aa4f9\": container with ID starting with 027fa126a9c714b46f701be6e7c2904a42a7c6618666edec79a384c3b86aa4f9 not found: ID does not exist" containerID="027fa126a9c714b46f701be6e7c2904a42a7c6618666edec79a384c3b86aa4f9" Jan 29 08:04:49 crc kubenswrapper[5017]: I0129 08:04:49.844377 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"027fa126a9c714b46f701be6e7c2904a42a7c6618666edec79a384c3b86aa4f9"} err="failed to get container status \"027fa126a9c714b46f701be6e7c2904a42a7c6618666edec79a384c3b86aa4f9\": rpc error: code = NotFound desc = could not find container \"027fa126a9c714b46f701be6e7c2904a42a7c6618666edec79a384c3b86aa4f9\": container with ID starting with 027fa126a9c714b46f701be6e7c2904a42a7c6618666edec79a384c3b86aa4f9 not found: ID does not exist" Jan 29 08:04:49 crc kubenswrapper[5017]: I0129 08:04:49.844415 5017 scope.go:117] "RemoveContainer" containerID="d79123ad8d35a2ed33e4e6f4e2428247fe8135d81425f9a75b03cf6f123ef573" Jan 29 08:04:49 crc kubenswrapper[5017]: E0129 08:04:49.849660 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"d79123ad8d35a2ed33e4e6f4e2428247fe8135d81425f9a75b03cf6f123ef573\": container with ID starting with d79123ad8d35a2ed33e4e6f4e2428247fe8135d81425f9a75b03cf6f123ef573 not found: ID does not exist" containerID="d79123ad8d35a2ed33e4e6f4e2428247fe8135d81425f9a75b03cf6f123ef573" Jan 29 08:04:49 crc kubenswrapper[5017]: I0129 08:04:49.849701 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d79123ad8d35a2ed33e4e6f4e2428247fe8135d81425f9a75b03cf6f123ef573"} err="failed to get container status \"d79123ad8d35a2ed33e4e6f4e2428247fe8135d81425f9a75b03cf6f123ef573\": rpc error: code = NotFound desc = could not find container \"d79123ad8d35a2ed33e4e6f4e2428247fe8135d81425f9a75b03cf6f123ef573\": container with ID starting with d79123ad8d35a2ed33e4e6f4e2428247fe8135d81425f9a75b03cf6f123ef573 not found: ID does not exist" Jan 29 08:04:49 crc kubenswrapper[5017]: I0129 08:04:49.849742 5017 scope.go:117] "RemoveContainer" containerID="027fa126a9c714b46f701be6e7c2904a42a7c6618666edec79a384c3b86aa4f9" Jan 29 08:04:49 crc kubenswrapper[5017]: I0129 08:04:49.851349 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"027fa126a9c714b46f701be6e7c2904a42a7c6618666edec79a384c3b86aa4f9"} err="failed to get container status \"027fa126a9c714b46f701be6e7c2904a42a7c6618666edec79a384c3b86aa4f9\": rpc error: code = NotFound desc = could not find container \"027fa126a9c714b46f701be6e7c2904a42a7c6618666edec79a384c3b86aa4f9\": container with ID starting with 027fa126a9c714b46f701be6e7c2904a42a7c6618666edec79a384c3b86aa4f9 not found: ID does not exist" Jan 29 08:04:49 crc kubenswrapper[5017]: I0129 08:04:49.851400 5017 scope.go:117] "RemoveContainer" containerID="d79123ad8d35a2ed33e4e6f4e2428247fe8135d81425f9a75b03cf6f123ef573" Jan 29 08:04:49 crc kubenswrapper[5017]: I0129 08:04:49.854915 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d79123ad8d35a2ed33e4e6f4e2428247fe8135d81425f9a75b03cf6f123ef573"} err="failed to get container status \"d79123ad8d35a2ed33e4e6f4e2428247fe8135d81425f9a75b03cf6f123ef573\": rpc error: code = NotFound desc = could not find container \"d79123ad8d35a2ed33e4e6f4e2428247fe8135d81425f9a75b03cf6f123ef573\": container with ID starting with d79123ad8d35a2ed33e4e6f4e2428247fe8135d81425f9a75b03cf6f123ef573 not found: ID does not exist" Jan 29 08:04:49 crc kubenswrapper[5017]: I0129 08:04:49.860649 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 08:04:49 crc kubenswrapper[5017]: I0129 08:04:49.874161 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 08:04:49 crc kubenswrapper[5017]: E0129 08:04:49.874739 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3166fea-5e21-4e40-a24f-6d1b64e65ca1" containerName="glance-httpd" Jan 29 08:04:49 crc kubenswrapper[5017]: I0129 08:04:49.874768 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3166fea-5e21-4e40-a24f-6d1b64e65ca1" containerName="glance-httpd" Jan 29 08:04:49 crc kubenswrapper[5017]: E0129 08:04:49.874791 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3166fea-5e21-4e40-a24f-6d1b64e65ca1" containerName="glance-log" Jan 29 08:04:49 crc kubenswrapper[5017]: I0129 08:04:49.874799 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3166fea-5e21-4e40-a24f-6d1b64e65ca1" containerName="glance-log" Jan 29 08:04:49 crc 
kubenswrapper[5017]: I0129 08:04:49.875085 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3166fea-5e21-4e40-a24f-6d1b64e65ca1" containerName="glance-httpd" Jan 29 08:04:49 crc kubenswrapper[5017]: I0129 08:04:49.875117 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3166fea-5e21-4e40-a24f-6d1b64e65ca1" containerName="glance-log" Jan 29 08:04:49 crc kubenswrapper[5017]: I0129 08:04:49.876620 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 08:04:49 crc kubenswrapper[5017]: I0129 08:04:49.880333 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 29 08:04:49 crc kubenswrapper[5017]: I0129 08:04:49.895101 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 08:04:49 crc kubenswrapper[5017]: I0129 08:04:49.901039 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8fd04424-aea6-47bb-b4bf-833ab5c9ea57-scripts\") pod \"glance-default-external-api-0\" (UID: \"8fd04424-aea6-47bb-b4bf-833ab5c9ea57\") " pod="openstack/glance-default-external-api-0" Jan 29 08:04:49 crc kubenswrapper[5017]: I0129 08:04:49.901290 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8fd04424-aea6-47bb-b4bf-833ab5c9ea57-logs\") pod \"glance-default-external-api-0\" (UID: \"8fd04424-aea6-47bb-b4bf-833ab5c9ea57\") " pod="openstack/glance-default-external-api-0" Jan 29 08:04:49 crc kubenswrapper[5017]: I0129 08:04:49.901549 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fd04424-aea6-47bb-b4bf-833ab5c9ea57-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"8fd04424-aea6-47bb-b4bf-833ab5c9ea57\") " pod="openstack/glance-default-external-api-0" Jan 29 08:04:49 crc kubenswrapper[5017]: I0129 08:04:49.901702 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwmzx\" (UniqueName: \"kubernetes.io/projected/8fd04424-aea6-47bb-b4bf-833ab5c9ea57-kube-api-access-pwmzx\") pod \"glance-default-external-api-0\" (UID: \"8fd04424-aea6-47bb-b4bf-833ab5c9ea57\") " pod="openstack/glance-default-external-api-0" Jan 29 08:04:49 crc kubenswrapper[5017]: I0129 08:04:49.901811 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8fd04424-aea6-47bb-b4bf-833ab5c9ea57-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"8fd04424-aea6-47bb-b4bf-833ab5c9ea57\") " pod="openstack/glance-default-external-api-0" Jan 29 08:04:49 crc kubenswrapper[5017]: I0129 08:04:49.901934 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fd04424-aea6-47bb-b4bf-833ab5c9ea57-config-data\") pod \"glance-default-external-api-0\" (UID: \"8fd04424-aea6-47bb-b4bf-833ab5c9ea57\") " pod="openstack/glance-default-external-api-0" Jan 29 08:04:49 crc kubenswrapper[5017]: I0129 08:04:49.902109 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: 
\"kubernetes.io/projected/8fd04424-aea6-47bb-b4bf-833ab5c9ea57-ceph\") pod \"glance-default-external-api-0\" (UID: \"8fd04424-aea6-47bb-b4bf-833ab5c9ea57\") " pod="openstack/glance-default-external-api-0" Jan 29 08:04:50 crc kubenswrapper[5017]: I0129 08:04:50.004861 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwmzx\" (UniqueName: \"kubernetes.io/projected/8fd04424-aea6-47bb-b4bf-833ab5c9ea57-kube-api-access-pwmzx\") pod \"glance-default-external-api-0\" (UID: \"8fd04424-aea6-47bb-b4bf-833ab5c9ea57\") " pod="openstack/glance-default-external-api-0" Jan 29 08:04:50 crc kubenswrapper[5017]: I0129 08:04:50.004930 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8fd04424-aea6-47bb-b4bf-833ab5c9ea57-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"8fd04424-aea6-47bb-b4bf-833ab5c9ea57\") " pod="openstack/glance-default-external-api-0" Jan 29 08:04:50 crc kubenswrapper[5017]: I0129 08:04:50.005002 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fd04424-aea6-47bb-b4bf-833ab5c9ea57-config-data\") pod \"glance-default-external-api-0\" (UID: \"8fd04424-aea6-47bb-b4bf-833ab5c9ea57\") " pod="openstack/glance-default-external-api-0" Jan 29 08:04:50 crc kubenswrapper[5017]: I0129 08:04:50.005049 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/8fd04424-aea6-47bb-b4bf-833ab5c9ea57-ceph\") pod \"glance-default-external-api-0\" (UID: \"8fd04424-aea6-47bb-b4bf-833ab5c9ea57\") " pod="openstack/glance-default-external-api-0" Jan 29 08:04:50 crc kubenswrapper[5017]: I0129 08:04:50.005093 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8fd04424-aea6-47bb-b4bf-833ab5c9ea57-scripts\") pod \"glance-default-external-api-0\" (UID: \"8fd04424-aea6-47bb-b4bf-833ab5c9ea57\") " pod="openstack/glance-default-external-api-0" Jan 29 08:04:50 crc kubenswrapper[5017]: I0129 08:04:50.005127 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8fd04424-aea6-47bb-b4bf-833ab5c9ea57-logs\") pod \"glance-default-external-api-0\" (UID: \"8fd04424-aea6-47bb-b4bf-833ab5c9ea57\") " pod="openstack/glance-default-external-api-0" Jan 29 08:04:50 crc kubenswrapper[5017]: I0129 08:04:50.005204 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fd04424-aea6-47bb-b4bf-833ab5c9ea57-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"8fd04424-aea6-47bb-b4bf-833ab5c9ea57\") " pod="openstack/glance-default-external-api-0" Jan 29 08:04:50 crc kubenswrapper[5017]: I0129 08:04:50.006342 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8fd04424-aea6-47bb-b4bf-833ab5c9ea57-logs\") pod \"glance-default-external-api-0\" (UID: \"8fd04424-aea6-47bb-b4bf-833ab5c9ea57\") " pod="openstack/glance-default-external-api-0" Jan 29 08:04:50 crc kubenswrapper[5017]: I0129 08:04:50.010325 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8fd04424-aea6-47bb-b4bf-833ab5c9ea57-httpd-run\") pod \"glance-default-external-api-0\" (UID: 
\"8fd04424-aea6-47bb-b4bf-833ab5c9ea57\") " pod="openstack/glance-default-external-api-0" Jan 29 08:04:50 crc kubenswrapper[5017]: I0129 08:04:50.011733 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fd04424-aea6-47bb-b4bf-833ab5c9ea57-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"8fd04424-aea6-47bb-b4bf-833ab5c9ea57\") " pod="openstack/glance-default-external-api-0" Jan 29 08:04:50 crc kubenswrapper[5017]: I0129 08:04:50.013638 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fd04424-aea6-47bb-b4bf-833ab5c9ea57-config-data\") pod \"glance-default-external-api-0\" (UID: \"8fd04424-aea6-47bb-b4bf-833ab5c9ea57\") " pod="openstack/glance-default-external-api-0" Jan 29 08:04:50 crc kubenswrapper[5017]: I0129 08:04:50.014274 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8fd04424-aea6-47bb-b4bf-833ab5c9ea57-scripts\") pod \"glance-default-external-api-0\" (UID: \"8fd04424-aea6-47bb-b4bf-833ab5c9ea57\") " pod="openstack/glance-default-external-api-0" Jan 29 08:04:50 crc kubenswrapper[5017]: I0129 08:04:50.021618 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/8fd04424-aea6-47bb-b4bf-833ab5c9ea57-ceph\") pod \"glance-default-external-api-0\" (UID: \"8fd04424-aea6-47bb-b4bf-833ab5c9ea57\") " pod="openstack/glance-default-external-api-0" Jan 29 08:04:50 crc kubenswrapper[5017]: I0129 08:04:50.028056 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwmzx\" (UniqueName: \"kubernetes.io/projected/8fd04424-aea6-47bb-b4bf-833ab5c9ea57-kube-api-access-pwmzx\") pod \"glance-default-external-api-0\" (UID: \"8fd04424-aea6-47bb-b4bf-833ab5c9ea57\") " pod="openstack/glance-default-external-api-0" Jan 29 08:04:50 crc kubenswrapper[5017]: I0129 08:04:50.199028 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 08:04:50 crc kubenswrapper[5017]: I0129 08:04:50.331623 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3166fea-5e21-4e40-a24f-6d1b64e65ca1" path="/var/lib/kubelet/pods/c3166fea-5e21-4e40-a24f-6d1b64e65ca1/volumes" Jan 29 08:04:50 crc kubenswrapper[5017]: I0129 08:04:50.781911 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="66dfbef7-be9f-48f5-8048-681134c26dea" containerName="glance-log" containerID="cri-o://86b9d85c71dfd028d9f3368e26833f57f331ef22c6068be8a6902e475d441d7a" gracePeriod=30 Jan 29 08:04:50 crc kubenswrapper[5017]: I0129 08:04:50.781967 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="66dfbef7-be9f-48f5-8048-681134c26dea" containerName="glance-httpd" containerID="cri-o://37227e16ddca74ef54a465a205284d4756482aa859c5743da943e0759d83ad6b" gracePeriod=30 Jan 29 08:04:50 crc kubenswrapper[5017]: I0129 08:04:50.864662 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 08:04:50 crc kubenswrapper[5017]: W0129 08:04:50.869499 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8fd04424_aea6_47bb_b4bf_833ab5c9ea57.slice/crio-97f3d8e4960cdaf0522fd3bcf4d0e2915715a355746d8f2f6b9dfe75919e0fe4 WatchSource:0}: Error finding container 97f3d8e4960cdaf0522fd3bcf4d0e2915715a355746d8f2f6b9dfe75919e0fe4: Status 404 returned error can't find the container with id 97f3d8e4960cdaf0522fd3bcf4d0e2915715a355746d8f2f6b9dfe75919e0fe4 Jan 29 08:04:51 crc kubenswrapper[5017]: I0129 08:04:51.450734 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 08:04:51 crc kubenswrapper[5017]: I0129 08:04:51.551021 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5h998\" (UniqueName: \"kubernetes.io/projected/66dfbef7-be9f-48f5-8048-681134c26dea-kube-api-access-5h998\") pod \"66dfbef7-be9f-48f5-8048-681134c26dea\" (UID: \"66dfbef7-be9f-48f5-8048-681134c26dea\") " Jan 29 08:04:51 crc kubenswrapper[5017]: I0129 08:04:51.551662 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/66dfbef7-be9f-48f5-8048-681134c26dea-httpd-run\") pod \"66dfbef7-be9f-48f5-8048-681134c26dea\" (UID: \"66dfbef7-be9f-48f5-8048-681134c26dea\") " Jan 29 08:04:51 crc kubenswrapper[5017]: I0129 08:04:51.551736 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66dfbef7-be9f-48f5-8048-681134c26dea-scripts\") pod \"66dfbef7-be9f-48f5-8048-681134c26dea\" (UID: \"66dfbef7-be9f-48f5-8048-681134c26dea\") " Jan 29 08:04:51 crc kubenswrapper[5017]: I0129 08:04:51.551971 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66dfbef7-be9f-48f5-8048-681134c26dea-config-data\") pod \"66dfbef7-be9f-48f5-8048-681134c26dea\" (UID: \"66dfbef7-be9f-48f5-8048-681134c26dea\") " Jan 29 08:04:51 crc kubenswrapper[5017]: I0129 08:04:51.552033 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66dfbef7-be9f-48f5-8048-681134c26dea-combined-ca-bundle\") pod \"66dfbef7-be9f-48f5-8048-681134c26dea\" (UID: \"66dfbef7-be9f-48f5-8048-681134c26dea\") " Jan 29 08:04:51 crc kubenswrapper[5017]: I0129 08:04:51.552094 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/66dfbef7-be9f-48f5-8048-681134c26dea-logs\") pod \"66dfbef7-be9f-48f5-8048-681134c26dea\" (UID: \"66dfbef7-be9f-48f5-8048-681134c26dea\") " Jan 29 08:04:51 crc kubenswrapper[5017]: I0129 08:04:51.552119 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/66dfbef7-be9f-48f5-8048-681134c26dea-ceph\") pod \"66dfbef7-be9f-48f5-8048-681134c26dea\" (UID: \"66dfbef7-be9f-48f5-8048-681134c26dea\") " Jan 29 08:04:51 crc kubenswrapper[5017]: I0129 08:04:51.552793 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66dfbef7-be9f-48f5-8048-681134c26dea-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "66dfbef7-be9f-48f5-8048-681134c26dea" (UID: "66dfbef7-be9f-48f5-8048-681134c26dea"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:04:51 crc kubenswrapper[5017]: I0129 08:04:51.557208 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66dfbef7-be9f-48f5-8048-681134c26dea-logs" (OuterVolumeSpecName: "logs") pod "66dfbef7-be9f-48f5-8048-681134c26dea" (UID: "66dfbef7-be9f-48f5-8048-681134c26dea"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:04:51 crc kubenswrapper[5017]: I0129 08:04:51.557356 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66dfbef7-be9f-48f5-8048-681134c26dea-scripts" (OuterVolumeSpecName: "scripts") pod "66dfbef7-be9f-48f5-8048-681134c26dea" (UID: "66dfbef7-be9f-48f5-8048-681134c26dea"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:04:51 crc kubenswrapper[5017]: I0129 08:04:51.558902 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66dfbef7-be9f-48f5-8048-681134c26dea-ceph" (OuterVolumeSpecName: "ceph") pod "66dfbef7-be9f-48f5-8048-681134c26dea" (UID: "66dfbef7-be9f-48f5-8048-681134c26dea"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:04:51 crc kubenswrapper[5017]: I0129 08:04:51.563345 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66dfbef7-be9f-48f5-8048-681134c26dea-kube-api-access-5h998" (OuterVolumeSpecName: "kube-api-access-5h998") pod "66dfbef7-be9f-48f5-8048-681134c26dea" (UID: "66dfbef7-be9f-48f5-8048-681134c26dea"). InnerVolumeSpecName "kube-api-access-5h998". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:04:51 crc kubenswrapper[5017]: I0129 08:04:51.580835 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66dfbef7-be9f-48f5-8048-681134c26dea-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "66dfbef7-be9f-48f5-8048-681134c26dea" (UID: "66dfbef7-be9f-48f5-8048-681134c26dea"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:04:51 crc kubenswrapper[5017]: I0129 08:04:51.617976 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66dfbef7-be9f-48f5-8048-681134c26dea-config-data" (OuterVolumeSpecName: "config-data") pod "66dfbef7-be9f-48f5-8048-681134c26dea" (UID: "66dfbef7-be9f-48f5-8048-681134c26dea"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:04:51 crc kubenswrapper[5017]: I0129 08:04:51.654411 5017 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66dfbef7-be9f-48f5-8048-681134c26dea-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 08:04:51 crc kubenswrapper[5017]: I0129 08:04:51.654448 5017 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66dfbef7-be9f-48f5-8048-681134c26dea-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:04:51 crc kubenswrapper[5017]: I0129 08:04:51.654461 5017 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/66dfbef7-be9f-48f5-8048-681134c26dea-logs\") on node \"crc\" DevicePath \"\"" Jan 29 08:04:51 crc kubenswrapper[5017]: I0129 08:04:51.654470 5017 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/66dfbef7-be9f-48f5-8048-681134c26dea-ceph\") on node \"crc\" DevicePath \"\"" Jan 29 08:04:51 crc kubenswrapper[5017]: I0129 08:04:51.654489 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5h998\" (UniqueName: \"kubernetes.io/projected/66dfbef7-be9f-48f5-8048-681134c26dea-kube-api-access-5h998\") on node \"crc\" DevicePath \"\"" Jan 29 08:04:51 crc kubenswrapper[5017]: I0129 08:04:51.654500 5017 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/66dfbef7-be9f-48f5-8048-681134c26dea-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 29 08:04:51 crc kubenswrapper[5017]: I0129 08:04:51.654510 5017 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66dfbef7-be9f-48f5-8048-681134c26dea-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 08:04:51 crc kubenswrapper[5017]: I0129 08:04:51.793469 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8fd04424-aea6-47bb-b4bf-833ab5c9ea57","Type":"ContainerStarted","Data":"c9ff0ee3329df41941bcea7c2383ccb1819a04fd5d49040732f48194c10e5e80"} Jan 29 08:04:51 crc kubenswrapper[5017]: I0129 08:04:51.794122 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8fd04424-aea6-47bb-b4bf-833ab5c9ea57","Type":"ContainerStarted","Data":"97f3d8e4960cdaf0522fd3bcf4d0e2915715a355746d8f2f6b9dfe75919e0fe4"} Jan 29 08:04:51 crc kubenswrapper[5017]: I0129 08:04:51.796051 5017 generic.go:334] "Generic (PLEG): container finished" podID="66dfbef7-be9f-48f5-8048-681134c26dea" containerID="37227e16ddca74ef54a465a205284d4756482aa859c5743da943e0759d83ad6b" exitCode=0 Jan 29 08:04:51 crc kubenswrapper[5017]: I0129 08:04:51.796100 5017 generic.go:334] "Generic (PLEG): container finished" podID="66dfbef7-be9f-48f5-8048-681134c26dea" containerID="86b9d85c71dfd028d9f3368e26833f57f331ef22c6068be8a6902e475d441d7a" exitCode=143 Jan 29 08:04:51 crc kubenswrapper[5017]: I0129 08:04:51.796138 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"66dfbef7-be9f-48f5-8048-681134c26dea","Type":"ContainerDied","Data":"37227e16ddca74ef54a465a205284d4756482aa859c5743da943e0759d83ad6b"} Jan 29 08:04:51 crc kubenswrapper[5017]: I0129 08:04:51.796181 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"66dfbef7-be9f-48f5-8048-681134c26dea","Type":"ContainerDied","Data":"86b9d85c71dfd028d9f3368e26833f57f331ef22c6068be8a6902e475d441d7a"} Jan 29 08:04:51 crc kubenswrapper[5017]: I0129 08:04:51.796199 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"66dfbef7-be9f-48f5-8048-681134c26dea","Type":"ContainerDied","Data":"0b8052ffd52ad805bde39bd82cf8b2cef65de9e78c406cd1192332973d6dd278"} Jan 29 08:04:51 crc kubenswrapper[5017]: I0129 08:04:51.796220 5017 scope.go:117] "RemoveContainer" containerID="37227e16ddca74ef54a465a205284d4756482aa859c5743da943e0759d83ad6b" Jan 29 08:04:51 crc kubenswrapper[5017]: I0129 08:04:51.796222 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 08:04:51 crc kubenswrapper[5017]: I0129 08:04:51.834606 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 08:04:51 crc kubenswrapper[5017]: I0129 08:04:51.842238 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 08:04:51 crc kubenswrapper[5017]: I0129 08:04:51.860767 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 08:04:51 crc kubenswrapper[5017]: E0129 08:04:51.861382 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66dfbef7-be9f-48f5-8048-681134c26dea" containerName="glance-log" Jan 29 08:04:51 crc kubenswrapper[5017]: I0129 08:04:51.861501 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="66dfbef7-be9f-48f5-8048-681134c26dea" containerName="glance-log" Jan 29 08:04:51 crc kubenswrapper[5017]: E0129 08:04:51.861532 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66dfbef7-be9f-48f5-8048-681134c26dea" containerName="glance-httpd" Jan 29 08:04:51 crc kubenswrapper[5017]: I0129 08:04:51.861540 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="66dfbef7-be9f-48f5-8048-681134c26dea" containerName="glance-httpd" Jan 29 08:04:51 crc kubenswrapper[5017]: I0129 08:04:51.861743 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="66dfbef7-be9f-48f5-8048-681134c26dea" containerName="glance-log" Jan 29 08:04:51 crc kubenswrapper[5017]: I0129 08:04:51.861766 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="66dfbef7-be9f-48f5-8048-681134c26dea" containerName="glance-httpd" Jan 29 08:04:51 crc kubenswrapper[5017]: I0129 08:04:51.862873 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 08:04:51 crc kubenswrapper[5017]: I0129 08:04:51.863081 5017 scope.go:117] "RemoveContainer" containerID="86b9d85c71dfd028d9f3368e26833f57f331ef22c6068be8a6902e475d441d7a" Jan 29 08:04:51 crc kubenswrapper[5017]: I0129 08:04:51.866600 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 29 08:04:51 crc kubenswrapper[5017]: I0129 08:04:51.911001 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 08:04:51 crc kubenswrapper[5017]: I0129 08:04:51.921555 5017 scope.go:117] "RemoveContainer" containerID="37227e16ddca74ef54a465a205284d4756482aa859c5743da943e0759d83ad6b" Jan 29 08:04:51 crc kubenswrapper[5017]: E0129 08:04:51.922361 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37227e16ddca74ef54a465a205284d4756482aa859c5743da943e0759d83ad6b\": container with ID starting with 37227e16ddca74ef54a465a205284d4756482aa859c5743da943e0759d83ad6b not found: ID does not exist" containerID="37227e16ddca74ef54a465a205284d4756482aa859c5743da943e0759d83ad6b" Jan 29 08:04:51 crc kubenswrapper[5017]: I0129 08:04:51.922421 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37227e16ddca74ef54a465a205284d4756482aa859c5743da943e0759d83ad6b"} err="failed to get container status \"37227e16ddca74ef54a465a205284d4756482aa859c5743da943e0759d83ad6b\": rpc error: code = NotFound desc = could not find container \"37227e16ddca74ef54a465a205284d4756482aa859c5743da943e0759d83ad6b\": container with ID starting with 37227e16ddca74ef54a465a205284d4756482aa859c5743da943e0759d83ad6b not found: ID does not exist" Jan 29 08:04:51 crc kubenswrapper[5017]: I0129 08:04:51.922461 5017 scope.go:117] "RemoveContainer" containerID="86b9d85c71dfd028d9f3368e26833f57f331ef22c6068be8a6902e475d441d7a" Jan 29 08:04:51 crc kubenswrapper[5017]: E0129 08:04:51.922913 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86b9d85c71dfd028d9f3368e26833f57f331ef22c6068be8a6902e475d441d7a\": container with ID starting with 86b9d85c71dfd028d9f3368e26833f57f331ef22c6068be8a6902e475d441d7a not found: ID does not exist" containerID="86b9d85c71dfd028d9f3368e26833f57f331ef22c6068be8a6902e475d441d7a" Jan 29 08:04:51 crc kubenswrapper[5017]: I0129 08:04:51.923049 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86b9d85c71dfd028d9f3368e26833f57f331ef22c6068be8a6902e475d441d7a"} err="failed to get container status \"86b9d85c71dfd028d9f3368e26833f57f331ef22c6068be8a6902e475d441d7a\": rpc error: code = NotFound desc = could not find container \"86b9d85c71dfd028d9f3368e26833f57f331ef22c6068be8a6902e475d441d7a\": container with ID starting with 86b9d85c71dfd028d9f3368e26833f57f331ef22c6068be8a6902e475d441d7a not found: ID does not exist" Jan 29 08:04:51 crc kubenswrapper[5017]: I0129 08:04:51.923085 5017 scope.go:117] "RemoveContainer" containerID="37227e16ddca74ef54a465a205284d4756482aa859c5743da943e0759d83ad6b" Jan 29 08:04:51 crc kubenswrapper[5017]: I0129 08:04:51.924287 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37227e16ddca74ef54a465a205284d4756482aa859c5743da943e0759d83ad6b"} err="failed to get container status 
\"37227e16ddca74ef54a465a205284d4756482aa859c5743da943e0759d83ad6b\": rpc error: code = NotFound desc = could not find container \"37227e16ddca74ef54a465a205284d4756482aa859c5743da943e0759d83ad6b\": container with ID starting with 37227e16ddca74ef54a465a205284d4756482aa859c5743da943e0759d83ad6b not found: ID does not exist" Jan 29 08:04:51 crc kubenswrapper[5017]: I0129 08:04:51.924357 5017 scope.go:117] "RemoveContainer" containerID="86b9d85c71dfd028d9f3368e26833f57f331ef22c6068be8a6902e475d441d7a" Jan 29 08:04:51 crc kubenswrapper[5017]: I0129 08:04:51.924999 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86b9d85c71dfd028d9f3368e26833f57f331ef22c6068be8a6902e475d441d7a"} err="failed to get container status \"86b9d85c71dfd028d9f3368e26833f57f331ef22c6068be8a6902e475d441d7a\": rpc error: code = NotFound desc = could not find container \"86b9d85c71dfd028d9f3368e26833f57f331ef22c6068be8a6902e475d441d7a\": container with ID starting with 86b9d85c71dfd028d9f3368e26833f57f331ef22c6068be8a6902e475d441d7a not found: ID does not exist" Jan 29 08:04:51 crc kubenswrapper[5017]: I0129 08:04:51.960537 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35a7800b-fba3-4d8a-99d3-c8bc57cf37b4-config-data\") pod \"glance-default-internal-api-0\" (UID: \"35a7800b-fba3-4d8a-99d3-c8bc57cf37b4\") " pod="openstack/glance-default-internal-api-0" Jan 29 08:04:51 crc kubenswrapper[5017]: I0129 08:04:51.960586 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/35a7800b-fba3-4d8a-99d3-c8bc57cf37b4-ceph\") pod \"glance-default-internal-api-0\" (UID: \"35a7800b-fba3-4d8a-99d3-c8bc57cf37b4\") " pod="openstack/glance-default-internal-api-0" Jan 29 08:04:51 crc kubenswrapper[5017]: I0129 08:04:51.960610 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/35a7800b-fba3-4d8a-99d3-c8bc57cf37b4-scripts\") pod \"glance-default-internal-api-0\" (UID: \"35a7800b-fba3-4d8a-99d3-c8bc57cf37b4\") " pod="openstack/glance-default-internal-api-0" Jan 29 08:04:51 crc kubenswrapper[5017]: I0129 08:04:51.960803 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdvgx\" (UniqueName: \"kubernetes.io/projected/35a7800b-fba3-4d8a-99d3-c8bc57cf37b4-kube-api-access-mdvgx\") pod \"glance-default-internal-api-0\" (UID: \"35a7800b-fba3-4d8a-99d3-c8bc57cf37b4\") " pod="openstack/glance-default-internal-api-0" Jan 29 08:04:51 crc kubenswrapper[5017]: I0129 08:04:51.960926 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35a7800b-fba3-4d8a-99d3-c8bc57cf37b4-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"35a7800b-fba3-4d8a-99d3-c8bc57cf37b4\") " pod="openstack/glance-default-internal-api-0" Jan 29 08:04:51 crc kubenswrapper[5017]: I0129 08:04:51.961013 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/35a7800b-fba3-4d8a-99d3-c8bc57cf37b4-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"35a7800b-fba3-4d8a-99d3-c8bc57cf37b4\") " pod="openstack/glance-default-internal-api-0" Jan 29 08:04:51 crc 
kubenswrapper[5017]: I0129 08:04:51.961186 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/35a7800b-fba3-4d8a-99d3-c8bc57cf37b4-logs\") pod \"glance-default-internal-api-0\" (UID: \"35a7800b-fba3-4d8a-99d3-c8bc57cf37b4\") " pod="openstack/glance-default-internal-api-0" Jan 29 08:04:52 crc kubenswrapper[5017]: I0129 08:04:52.063322 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/35a7800b-fba3-4d8a-99d3-c8bc57cf37b4-ceph\") pod \"glance-default-internal-api-0\" (UID: \"35a7800b-fba3-4d8a-99d3-c8bc57cf37b4\") " pod="openstack/glance-default-internal-api-0" Jan 29 08:04:52 crc kubenswrapper[5017]: I0129 08:04:52.063404 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/35a7800b-fba3-4d8a-99d3-c8bc57cf37b4-scripts\") pod \"glance-default-internal-api-0\" (UID: \"35a7800b-fba3-4d8a-99d3-c8bc57cf37b4\") " pod="openstack/glance-default-internal-api-0" Jan 29 08:04:52 crc kubenswrapper[5017]: I0129 08:04:52.063450 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mdvgx\" (UniqueName: \"kubernetes.io/projected/35a7800b-fba3-4d8a-99d3-c8bc57cf37b4-kube-api-access-mdvgx\") pod \"glance-default-internal-api-0\" (UID: \"35a7800b-fba3-4d8a-99d3-c8bc57cf37b4\") " pod="openstack/glance-default-internal-api-0" Jan 29 08:04:52 crc kubenswrapper[5017]: I0129 08:04:52.063489 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35a7800b-fba3-4d8a-99d3-c8bc57cf37b4-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"35a7800b-fba3-4d8a-99d3-c8bc57cf37b4\") " pod="openstack/glance-default-internal-api-0" Jan 29 08:04:52 crc kubenswrapper[5017]: I0129 08:04:52.064762 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/35a7800b-fba3-4d8a-99d3-c8bc57cf37b4-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"35a7800b-fba3-4d8a-99d3-c8bc57cf37b4\") " pod="openstack/glance-default-internal-api-0" Jan 29 08:04:52 crc kubenswrapper[5017]: I0129 08:04:52.064884 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/35a7800b-fba3-4d8a-99d3-c8bc57cf37b4-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"35a7800b-fba3-4d8a-99d3-c8bc57cf37b4\") " pod="openstack/glance-default-internal-api-0" Jan 29 08:04:52 crc kubenswrapper[5017]: I0129 08:04:52.065568 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/35a7800b-fba3-4d8a-99d3-c8bc57cf37b4-logs\") pod \"glance-default-internal-api-0\" (UID: \"35a7800b-fba3-4d8a-99d3-c8bc57cf37b4\") " pod="openstack/glance-default-internal-api-0" Jan 29 08:04:52 crc kubenswrapper[5017]: I0129 08:04:52.065893 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35a7800b-fba3-4d8a-99d3-c8bc57cf37b4-config-data\") pod \"glance-default-internal-api-0\" (UID: \"35a7800b-fba3-4d8a-99d3-c8bc57cf37b4\") " pod="openstack/glance-default-internal-api-0" Jan 29 08:04:52 crc kubenswrapper[5017]: I0129 08:04:52.066310 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/35a7800b-fba3-4d8a-99d3-c8bc57cf37b4-logs\") pod \"glance-default-internal-api-0\" (UID: \"35a7800b-fba3-4d8a-99d3-c8bc57cf37b4\") " pod="openstack/glance-default-internal-api-0" Jan 29 08:04:52 crc kubenswrapper[5017]: I0129 08:04:52.070754 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/35a7800b-fba3-4d8a-99d3-c8bc57cf37b4-ceph\") pod \"glance-default-internal-api-0\" (UID: \"35a7800b-fba3-4d8a-99d3-c8bc57cf37b4\") " pod="openstack/glance-default-internal-api-0" Jan 29 08:04:52 crc kubenswrapper[5017]: I0129 08:04:52.071455 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/35a7800b-fba3-4d8a-99d3-c8bc57cf37b4-scripts\") pod \"glance-default-internal-api-0\" (UID: \"35a7800b-fba3-4d8a-99d3-c8bc57cf37b4\") " pod="openstack/glance-default-internal-api-0" Jan 29 08:04:52 crc kubenswrapper[5017]: I0129 08:04:52.074004 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35a7800b-fba3-4d8a-99d3-c8bc57cf37b4-config-data\") pod \"glance-default-internal-api-0\" (UID: \"35a7800b-fba3-4d8a-99d3-c8bc57cf37b4\") " pod="openstack/glance-default-internal-api-0" Jan 29 08:04:52 crc kubenswrapper[5017]: I0129 08:04:52.074461 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35a7800b-fba3-4d8a-99d3-c8bc57cf37b4-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"35a7800b-fba3-4d8a-99d3-c8bc57cf37b4\") " pod="openstack/glance-default-internal-api-0" Jan 29 08:04:52 crc kubenswrapper[5017]: I0129 08:04:52.089040 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mdvgx\" (UniqueName: \"kubernetes.io/projected/35a7800b-fba3-4d8a-99d3-c8bc57cf37b4-kube-api-access-mdvgx\") pod \"glance-default-internal-api-0\" (UID: \"35a7800b-fba3-4d8a-99d3-c8bc57cf37b4\") " pod="openstack/glance-default-internal-api-0" Jan 29 08:04:52 crc kubenswrapper[5017]: I0129 08:04:52.206178 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 08:04:52 crc kubenswrapper[5017]: I0129 08:04:52.346060 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66dfbef7-be9f-48f5-8048-681134c26dea" path="/var/lib/kubelet/pods/66dfbef7-be9f-48f5-8048-681134c26dea/volumes" Jan 29 08:04:52 crc kubenswrapper[5017]: I0129 08:04:52.814882 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 08:04:52 crc kubenswrapper[5017]: I0129 08:04:52.815871 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8fd04424-aea6-47bb-b4bf-833ab5c9ea57","Type":"ContainerStarted","Data":"8b6cbcde06dbdad04287b85fb2bd8ca25cdf080f9864141836bb63fa032ef620"} Jan 29 08:04:52 crc kubenswrapper[5017]: I0129 08:04:52.857918 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.857886317 podStartE2EDuration="3.857886317s" podCreationTimestamp="2026-01-29 08:04:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:04:52.840514499 +0000 UTC m=+5379.214962109" watchObservedRunningTime="2026-01-29 08:04:52.857886317 +0000 UTC m=+5379.232333927" Jan 29 08:04:53 crc kubenswrapper[5017]: I0129 08:04:53.830188 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"35a7800b-fba3-4d8a-99d3-c8bc57cf37b4","Type":"ContainerStarted","Data":"a6a970e35eb8877153f15baa7bb62816a72fe62277d358f01aa92447687f1807"} Jan 29 08:04:53 crc kubenswrapper[5017]: I0129 08:04:53.830262 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"35a7800b-fba3-4d8a-99d3-c8bc57cf37b4","Type":"ContainerStarted","Data":"c168e409a143e93b33beb7b59e4a8c9d96c335d8bfc5b9e6815832048263250e"} Jan 29 08:04:54 crc kubenswrapper[5017]: I0129 08:04:54.844078 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"35a7800b-fba3-4d8a-99d3-c8bc57cf37b4","Type":"ContainerStarted","Data":"208feea7a15c43c3b401c6c2979853e03b1d20050ae08cd5bbcde48673fcf10e"} Jan 29 08:04:54 crc kubenswrapper[5017]: I0129 08:04:54.883270 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.883239002 podStartE2EDuration="3.883239002s" podCreationTimestamp="2026-01-29 08:04:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:04:54.866719335 +0000 UTC m=+5381.241166975" watchObservedRunningTime="2026-01-29 08:04:54.883239002 +0000 UTC m=+5381.257686612" Jan 29 08:04:56 crc kubenswrapper[5017]: I0129 08:04:56.479139 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c74747687-njvq6" Jan 29 08:04:56 crc kubenswrapper[5017]: I0129 08:04:56.572745 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d455fc967-h89rn"] Jan 29 08:04:56 crc kubenswrapper[5017]: I0129 08:04:56.574357 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5d455fc967-h89rn" podUID="99c11145-e568-4c3a-993a-289881689134" containerName="dnsmasq-dns" 
containerID="cri-o://e507ad840a1b67975c72dac2667bb13db3c31c087d34bf3edce322743c185169" gracePeriod=10 Jan 29 08:04:56 crc kubenswrapper[5017]: I0129 08:04:56.881466 5017 generic.go:334] "Generic (PLEG): container finished" podID="99c11145-e568-4c3a-993a-289881689134" containerID="e507ad840a1b67975c72dac2667bb13db3c31c087d34bf3edce322743c185169" exitCode=0 Jan 29 08:04:56 crc kubenswrapper[5017]: I0129 08:04:56.881554 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d455fc967-h89rn" event={"ID":"99c11145-e568-4c3a-993a-289881689134","Type":"ContainerDied","Data":"e507ad840a1b67975c72dac2667bb13db3c31c087d34bf3edce322743c185169"} Jan 29 08:04:57 crc kubenswrapper[5017]: I0129 08:04:57.217525 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d455fc967-h89rn" Jan 29 08:04:57 crc kubenswrapper[5017]: I0129 08:04:57.393123 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99c11145-e568-4c3a-993a-289881689134-config\") pod \"99c11145-e568-4c3a-993a-289881689134\" (UID: \"99c11145-e568-4c3a-993a-289881689134\") " Jan 29 08:04:57 crc kubenswrapper[5017]: I0129 08:04:57.393179 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/99c11145-e568-4c3a-993a-289881689134-ovsdbserver-nb\") pod \"99c11145-e568-4c3a-993a-289881689134\" (UID: \"99c11145-e568-4c3a-993a-289881689134\") " Jan 29 08:04:57 crc kubenswrapper[5017]: I0129 08:04:57.393272 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5nlhx\" (UniqueName: \"kubernetes.io/projected/99c11145-e568-4c3a-993a-289881689134-kube-api-access-5nlhx\") pod \"99c11145-e568-4c3a-993a-289881689134\" (UID: \"99c11145-e568-4c3a-993a-289881689134\") " Jan 29 08:04:57 crc kubenswrapper[5017]: I0129 08:04:57.393442 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/99c11145-e568-4c3a-993a-289881689134-ovsdbserver-sb\") pod \"99c11145-e568-4c3a-993a-289881689134\" (UID: \"99c11145-e568-4c3a-993a-289881689134\") " Jan 29 08:04:57 crc kubenswrapper[5017]: I0129 08:04:57.393465 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/99c11145-e568-4c3a-993a-289881689134-dns-svc\") pod \"99c11145-e568-4c3a-993a-289881689134\" (UID: \"99c11145-e568-4c3a-993a-289881689134\") " Jan 29 08:04:57 crc kubenswrapper[5017]: I0129 08:04:57.403475 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99c11145-e568-4c3a-993a-289881689134-kube-api-access-5nlhx" (OuterVolumeSpecName: "kube-api-access-5nlhx") pod "99c11145-e568-4c3a-993a-289881689134" (UID: "99c11145-e568-4c3a-993a-289881689134"). InnerVolumeSpecName "kube-api-access-5nlhx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:04:57 crc kubenswrapper[5017]: I0129 08:04:57.456541 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99c11145-e568-4c3a-993a-289881689134-config" (OuterVolumeSpecName: "config") pod "99c11145-e568-4c3a-993a-289881689134" (UID: "99c11145-e568-4c3a-993a-289881689134"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:04:57 crc kubenswrapper[5017]: I0129 08:04:57.469542 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99c11145-e568-4c3a-993a-289881689134-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "99c11145-e568-4c3a-993a-289881689134" (UID: "99c11145-e568-4c3a-993a-289881689134"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:04:57 crc kubenswrapper[5017]: I0129 08:04:57.472285 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99c11145-e568-4c3a-993a-289881689134-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "99c11145-e568-4c3a-993a-289881689134" (UID: "99c11145-e568-4c3a-993a-289881689134"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:04:57 crc kubenswrapper[5017]: I0129 08:04:57.480317 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99c11145-e568-4c3a-993a-289881689134-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "99c11145-e568-4c3a-993a-289881689134" (UID: "99c11145-e568-4c3a-993a-289881689134"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:04:57 crc kubenswrapper[5017]: I0129 08:04:57.496382 5017 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/99c11145-e568-4c3a-993a-289881689134-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 08:04:57 crc kubenswrapper[5017]: I0129 08:04:57.496424 5017 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/99c11145-e568-4c3a-993a-289881689134-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 08:04:57 crc kubenswrapper[5017]: I0129 08:04:57.496437 5017 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99c11145-e568-4c3a-993a-289881689134-config\") on node \"crc\" DevicePath \"\"" Jan 29 08:04:57 crc kubenswrapper[5017]: I0129 08:04:57.496446 5017 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/99c11145-e568-4c3a-993a-289881689134-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 08:04:57 crc kubenswrapper[5017]: I0129 08:04:57.496459 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5nlhx\" (UniqueName: \"kubernetes.io/projected/99c11145-e568-4c3a-993a-289881689134-kube-api-access-5nlhx\") on node \"crc\" DevicePath \"\"" Jan 29 08:04:57 crc kubenswrapper[5017]: I0129 08:04:57.897267 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d455fc967-h89rn" event={"ID":"99c11145-e568-4c3a-993a-289881689134","Type":"ContainerDied","Data":"b3e1d9924f1f925d6159719df61fd3af2101c7a2d8d35f8bef9ae6fc86161a94"} Jan 29 08:04:57 crc kubenswrapper[5017]: I0129 08:04:57.897376 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5d455fc967-h89rn" Jan 29 08:04:57 crc kubenswrapper[5017]: I0129 08:04:57.897872 5017 scope.go:117] "RemoveContainer" containerID="e507ad840a1b67975c72dac2667bb13db3c31c087d34bf3edce322743c185169" Jan 29 08:04:57 crc kubenswrapper[5017]: I0129 08:04:57.939412 5017 scope.go:117] "RemoveContainer" containerID="d123d868059f6c36e00acba0fe36172b1c25f5b97632efacf49c97835c71532c" Jan 29 08:04:57 crc kubenswrapper[5017]: I0129 08:04:57.943830 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d455fc967-h89rn"] Jan 29 08:04:57 crc kubenswrapper[5017]: I0129 08:04:57.952865 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5d455fc967-h89rn"] Jan 29 08:04:58 crc kubenswrapper[5017]: I0129 08:04:58.329031 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99c11145-e568-4c3a-993a-289881689134" path="/var/lib/kubelet/pods/99c11145-e568-4c3a-993a-289881689134/volumes" Jan 29 08:05:00 crc kubenswrapper[5017]: I0129 08:05:00.200076 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 29 08:05:00 crc kubenswrapper[5017]: I0129 08:05:00.200524 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 29 08:05:00 crc kubenswrapper[5017]: I0129 08:05:00.241148 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 29 08:05:00 crc kubenswrapper[5017]: I0129 08:05:00.254846 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 29 08:05:00 crc kubenswrapper[5017]: I0129 08:05:00.935028 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 29 08:05:00 crc kubenswrapper[5017]: I0129 08:05:00.935211 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 29 08:05:02 crc kubenswrapper[5017]: I0129 08:05:02.207436 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 29 08:05:02 crc kubenswrapper[5017]: I0129 08:05:02.207542 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 29 08:05:02 crc kubenswrapper[5017]: I0129 08:05:02.250074 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 29 08:05:02 crc kubenswrapper[5017]: I0129 08:05:02.267346 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 29 08:05:02 crc kubenswrapper[5017]: I0129 08:05:02.952043 5017 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 08:05:02 crc kubenswrapper[5017]: I0129 08:05:02.952389 5017 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 08:05:02 crc kubenswrapper[5017]: I0129 08:05:02.952773 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 29 08:05:02 crc kubenswrapper[5017]: I0129 08:05:02.952968 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 29 08:05:03 crc kubenswrapper[5017]: I0129 08:05:03.099238 5017 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 29 08:05:03 crc kubenswrapper[5017]: I0129 08:05:03.181589 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 29 08:05:04 crc kubenswrapper[5017]: I0129 08:05:04.970760 5017 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 08:05:04 crc kubenswrapper[5017]: I0129 08:05:04.971264 5017 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 08:05:05 crc kubenswrapper[5017]: I0129 08:05:05.086826 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 29 08:05:05 crc kubenswrapper[5017]: I0129 08:05:05.127812 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 29 08:05:13 crc kubenswrapper[5017]: I0129 08:05:13.076593 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ht5bd"] Jan 29 08:05:13 crc kubenswrapper[5017]: E0129 08:05:13.078307 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99c11145-e568-4c3a-993a-289881689134" containerName="dnsmasq-dns" Jan 29 08:05:13 crc kubenswrapper[5017]: I0129 08:05:13.078324 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="99c11145-e568-4c3a-993a-289881689134" containerName="dnsmasq-dns" Jan 29 08:05:13 crc kubenswrapper[5017]: E0129 08:05:13.078357 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99c11145-e568-4c3a-993a-289881689134" containerName="init" Jan 29 08:05:13 crc kubenswrapper[5017]: I0129 08:05:13.078364 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="99c11145-e568-4c3a-993a-289881689134" containerName="init" Jan 29 08:05:13 crc kubenswrapper[5017]: I0129 08:05:13.078614 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="99c11145-e568-4c3a-993a-289881689134" containerName="dnsmasq-dns" Jan 29 08:05:13 crc kubenswrapper[5017]: I0129 08:05:13.082265 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ht5bd" Jan 29 08:05:13 crc kubenswrapper[5017]: I0129 08:05:13.089041 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ht5bd"] Jan 29 08:05:13 crc kubenswrapper[5017]: I0129 08:05:13.248893 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62a2b45e-53f3-4c67-ad22-c1c87017fade-catalog-content\") pod \"redhat-operators-ht5bd\" (UID: \"62a2b45e-53f3-4c67-ad22-c1c87017fade\") " pod="openshift-marketplace/redhat-operators-ht5bd" Jan 29 08:05:13 crc kubenswrapper[5017]: I0129 08:05:13.248995 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2mk8\" (UniqueName: \"kubernetes.io/projected/62a2b45e-53f3-4c67-ad22-c1c87017fade-kube-api-access-h2mk8\") pod \"redhat-operators-ht5bd\" (UID: \"62a2b45e-53f3-4c67-ad22-c1c87017fade\") " pod="openshift-marketplace/redhat-operators-ht5bd" Jan 29 08:05:13 crc kubenswrapper[5017]: I0129 08:05:13.249383 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62a2b45e-53f3-4c67-ad22-c1c87017fade-utilities\") pod \"redhat-operators-ht5bd\" (UID: \"62a2b45e-53f3-4c67-ad22-c1c87017fade\") " pod="openshift-marketplace/redhat-operators-ht5bd" Jan 29 08:05:13 crc kubenswrapper[5017]: I0129 08:05:13.351633 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62a2b45e-53f3-4c67-ad22-c1c87017fade-catalog-content\") pod \"redhat-operators-ht5bd\" (UID: \"62a2b45e-53f3-4c67-ad22-c1c87017fade\") " pod="openshift-marketplace/redhat-operators-ht5bd" Jan 29 08:05:13 crc kubenswrapper[5017]: I0129 08:05:13.351728 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2mk8\" (UniqueName: \"kubernetes.io/projected/62a2b45e-53f3-4c67-ad22-c1c87017fade-kube-api-access-h2mk8\") pod \"redhat-operators-ht5bd\" (UID: \"62a2b45e-53f3-4c67-ad22-c1c87017fade\") " pod="openshift-marketplace/redhat-operators-ht5bd" Jan 29 08:05:13 crc kubenswrapper[5017]: I0129 08:05:13.351792 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62a2b45e-53f3-4c67-ad22-c1c87017fade-utilities\") pod \"redhat-operators-ht5bd\" (UID: \"62a2b45e-53f3-4c67-ad22-c1c87017fade\") " pod="openshift-marketplace/redhat-operators-ht5bd" Jan 29 08:05:13 crc kubenswrapper[5017]: I0129 08:05:13.352335 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62a2b45e-53f3-4c67-ad22-c1c87017fade-catalog-content\") pod \"redhat-operators-ht5bd\" (UID: \"62a2b45e-53f3-4c67-ad22-c1c87017fade\") " pod="openshift-marketplace/redhat-operators-ht5bd" Jan 29 08:05:13 crc kubenswrapper[5017]: I0129 08:05:13.352409 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62a2b45e-53f3-4c67-ad22-c1c87017fade-utilities\") pod \"redhat-operators-ht5bd\" (UID: \"62a2b45e-53f3-4c67-ad22-c1c87017fade\") " pod="openshift-marketplace/redhat-operators-ht5bd" Jan 29 08:05:13 crc kubenswrapper[5017]: I0129 08:05:13.373412 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-h2mk8\" (UniqueName: \"kubernetes.io/projected/62a2b45e-53f3-4c67-ad22-c1c87017fade-kube-api-access-h2mk8\") pod \"redhat-operators-ht5bd\" (UID: \"62a2b45e-53f3-4c67-ad22-c1c87017fade\") " pod="openshift-marketplace/redhat-operators-ht5bd" Jan 29 08:05:13 crc kubenswrapper[5017]: I0129 08:05:13.458787 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ht5bd" Jan 29 08:05:13 crc kubenswrapper[5017]: I0129 08:05:13.930928 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ht5bd"] Jan 29 08:05:14 crc kubenswrapper[5017]: I0129 08:05:14.080509 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ht5bd" event={"ID":"62a2b45e-53f3-4c67-ad22-c1c87017fade","Type":"ContainerStarted","Data":"701f3bbbeb70184b5c174844dccb8781ff891d275800b796a1407a18ac7d4c2b"} Jan 29 08:05:15 crc kubenswrapper[5017]: I0129 08:05:15.092351 5017 generic.go:334] "Generic (PLEG): container finished" podID="62a2b45e-53f3-4c67-ad22-c1c87017fade" containerID="0ed0af496246a3160f931e8b2cd09b153e4e12c85c8ee676081504956200139d" exitCode=0 Jan 29 08:05:15 crc kubenswrapper[5017]: I0129 08:05:15.092477 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ht5bd" event={"ID":"62a2b45e-53f3-4c67-ad22-c1c87017fade","Type":"ContainerDied","Data":"0ed0af496246a3160f931e8b2cd09b153e4e12c85c8ee676081504956200139d"} Jan 29 08:05:15 crc kubenswrapper[5017]: I0129 08:05:15.095890 5017 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 08:05:15 crc kubenswrapper[5017]: I0129 08:05:15.498143 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-7m4xr"] Jan 29 08:05:15 crc kubenswrapper[5017]: I0129 08:05:15.499779 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-7m4xr" Jan 29 08:05:15 crc kubenswrapper[5017]: I0129 08:05:15.522909 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-7m4xr"] Jan 29 08:05:15 crc kubenswrapper[5017]: I0129 08:05:15.609017 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a79553a3-6dd6-42d0-a988-dfec53583ae2-operator-scripts\") pod \"placement-db-create-7m4xr\" (UID: \"a79553a3-6dd6-42d0-a988-dfec53583ae2\") " pod="openstack/placement-db-create-7m4xr" Jan 29 08:05:15 crc kubenswrapper[5017]: I0129 08:05:15.609194 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8wpx\" (UniqueName: \"kubernetes.io/projected/a79553a3-6dd6-42d0-a988-dfec53583ae2-kube-api-access-w8wpx\") pod \"placement-db-create-7m4xr\" (UID: \"a79553a3-6dd6-42d0-a988-dfec53583ae2\") " pod="openstack/placement-db-create-7m4xr" Jan 29 08:05:15 crc kubenswrapper[5017]: I0129 08:05:15.610780 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-375b-account-create-update-jplg6"] Jan 29 08:05:15 crc kubenswrapper[5017]: I0129 08:05:15.622055 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-375b-account-create-update-jplg6" Jan 29 08:05:15 crc kubenswrapper[5017]: I0129 08:05:15.623069 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-375b-account-create-update-jplg6"] Jan 29 08:05:15 crc kubenswrapper[5017]: I0129 08:05:15.632468 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 29 08:05:15 crc kubenswrapper[5017]: I0129 08:05:15.711944 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c7eebcf-3d82-45d6-9975-75a81fc8dad8-operator-scripts\") pod \"placement-375b-account-create-update-jplg6\" (UID: \"4c7eebcf-3d82-45d6-9975-75a81fc8dad8\") " pod="openstack/placement-375b-account-create-update-jplg6" Jan 29 08:05:15 crc kubenswrapper[5017]: I0129 08:05:15.712077 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a79553a3-6dd6-42d0-a988-dfec53583ae2-operator-scripts\") pod \"placement-db-create-7m4xr\" (UID: \"a79553a3-6dd6-42d0-a988-dfec53583ae2\") " pod="openstack/placement-db-create-7m4xr" Jan 29 08:05:15 crc kubenswrapper[5017]: I0129 08:05:15.712201 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8wpx\" (UniqueName: \"kubernetes.io/projected/a79553a3-6dd6-42d0-a988-dfec53583ae2-kube-api-access-w8wpx\") pod \"placement-db-create-7m4xr\" (UID: \"a79553a3-6dd6-42d0-a988-dfec53583ae2\") " pod="openstack/placement-db-create-7m4xr" Jan 29 08:05:15 crc kubenswrapper[5017]: I0129 08:05:15.712317 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlkn8\" (UniqueName: \"kubernetes.io/projected/4c7eebcf-3d82-45d6-9975-75a81fc8dad8-kube-api-access-hlkn8\") pod \"placement-375b-account-create-update-jplg6\" (UID: \"4c7eebcf-3d82-45d6-9975-75a81fc8dad8\") " pod="openstack/placement-375b-account-create-update-jplg6" Jan 29 08:05:15 crc kubenswrapper[5017]: I0129 08:05:15.713119 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a79553a3-6dd6-42d0-a988-dfec53583ae2-operator-scripts\") pod \"placement-db-create-7m4xr\" (UID: \"a79553a3-6dd6-42d0-a988-dfec53583ae2\") " pod="openstack/placement-db-create-7m4xr" Jan 29 08:05:15 crc kubenswrapper[5017]: I0129 08:05:15.737747 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8wpx\" (UniqueName: \"kubernetes.io/projected/a79553a3-6dd6-42d0-a988-dfec53583ae2-kube-api-access-w8wpx\") pod \"placement-db-create-7m4xr\" (UID: \"a79553a3-6dd6-42d0-a988-dfec53583ae2\") " pod="openstack/placement-db-create-7m4xr" Jan 29 08:05:15 crc kubenswrapper[5017]: I0129 08:05:15.814848 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hlkn8\" (UniqueName: \"kubernetes.io/projected/4c7eebcf-3d82-45d6-9975-75a81fc8dad8-kube-api-access-hlkn8\") pod \"placement-375b-account-create-update-jplg6\" (UID: \"4c7eebcf-3d82-45d6-9975-75a81fc8dad8\") " pod="openstack/placement-375b-account-create-update-jplg6" Jan 29 08:05:15 crc kubenswrapper[5017]: I0129 08:05:15.815484 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/4c7eebcf-3d82-45d6-9975-75a81fc8dad8-operator-scripts\") pod \"placement-375b-account-create-update-jplg6\" (UID: \"4c7eebcf-3d82-45d6-9975-75a81fc8dad8\") " pod="openstack/placement-375b-account-create-update-jplg6" Jan 29 08:05:15 crc kubenswrapper[5017]: I0129 08:05:15.816828 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c7eebcf-3d82-45d6-9975-75a81fc8dad8-operator-scripts\") pod \"placement-375b-account-create-update-jplg6\" (UID: \"4c7eebcf-3d82-45d6-9975-75a81fc8dad8\") " pod="openstack/placement-375b-account-create-update-jplg6" Jan 29 08:05:15 crc kubenswrapper[5017]: I0129 08:05:15.837592 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hlkn8\" (UniqueName: \"kubernetes.io/projected/4c7eebcf-3d82-45d6-9975-75a81fc8dad8-kube-api-access-hlkn8\") pod \"placement-375b-account-create-update-jplg6\" (UID: \"4c7eebcf-3d82-45d6-9975-75a81fc8dad8\") " pod="openstack/placement-375b-account-create-update-jplg6" Jan 29 08:05:15 crc kubenswrapper[5017]: I0129 08:05:15.869189 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-7m4xr" Jan 29 08:05:15 crc kubenswrapper[5017]: I0129 08:05:15.964007 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-375b-account-create-update-jplg6" Jan 29 08:05:16 crc kubenswrapper[5017]: I0129 08:05:16.117222 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ht5bd" event={"ID":"62a2b45e-53f3-4c67-ad22-c1c87017fade","Type":"ContainerStarted","Data":"7e8aa9201d0584dcbe8db355beee9828c7074e1d56f52754d0c642aea4f4b1dd"} Jan 29 08:05:16 crc kubenswrapper[5017]: I0129 08:05:16.378116 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-7m4xr"] Jan 29 08:05:16 crc kubenswrapper[5017]: I0129 08:05:16.487505 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-375b-account-create-update-jplg6"] Jan 29 08:05:16 crc kubenswrapper[5017]: W0129 08:05:16.509528 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4c7eebcf_3d82_45d6_9975_75a81fc8dad8.slice/crio-4ee8fbb39563533e06f762617b253d8b625d384fe3af1a87d099785e369ab490 WatchSource:0}: Error finding container 4ee8fbb39563533e06f762617b253d8b625d384fe3af1a87d099785e369ab490: Status 404 returned error can't find the container with id 4ee8fbb39563533e06f762617b253d8b625d384fe3af1a87d099785e369ab490 Jan 29 08:05:17 crc kubenswrapper[5017]: I0129 08:05:17.130068 5017 generic.go:334] "Generic (PLEG): container finished" podID="62a2b45e-53f3-4c67-ad22-c1c87017fade" containerID="7e8aa9201d0584dcbe8db355beee9828c7074e1d56f52754d0c642aea4f4b1dd" exitCode=0 Jan 29 08:05:17 crc kubenswrapper[5017]: I0129 08:05:17.130163 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ht5bd" event={"ID":"62a2b45e-53f3-4c67-ad22-c1c87017fade","Type":"ContainerDied","Data":"7e8aa9201d0584dcbe8db355beee9828c7074e1d56f52754d0c642aea4f4b1dd"} Jan 29 08:05:17 crc kubenswrapper[5017]: I0129 08:05:17.133499 5017 generic.go:334] "Generic (PLEG): container finished" podID="4c7eebcf-3d82-45d6-9975-75a81fc8dad8" containerID="08eb5048b0e3d7a0705b53b6a50ecf6bf794d5dfb65fd1e1a1abded4ec4d6f40" exitCode=0 Jan 29 08:05:17 crc kubenswrapper[5017]: I0129 08:05:17.133605 
5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-375b-account-create-update-jplg6" event={"ID":"4c7eebcf-3d82-45d6-9975-75a81fc8dad8","Type":"ContainerDied","Data":"08eb5048b0e3d7a0705b53b6a50ecf6bf794d5dfb65fd1e1a1abded4ec4d6f40"} Jan 29 08:05:17 crc kubenswrapper[5017]: I0129 08:05:17.133737 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-375b-account-create-update-jplg6" event={"ID":"4c7eebcf-3d82-45d6-9975-75a81fc8dad8","Type":"ContainerStarted","Data":"4ee8fbb39563533e06f762617b253d8b625d384fe3af1a87d099785e369ab490"} Jan 29 08:05:17 crc kubenswrapper[5017]: I0129 08:05:17.139545 5017 generic.go:334] "Generic (PLEG): container finished" podID="a79553a3-6dd6-42d0-a988-dfec53583ae2" containerID="bb9e5f071bfe8a3c4fa32e0b0b39f9da2fb48c01145525ae4fb5e1d5ba3c5cdb" exitCode=0 Jan 29 08:05:17 crc kubenswrapper[5017]: I0129 08:05:17.139637 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-7m4xr" event={"ID":"a79553a3-6dd6-42d0-a988-dfec53583ae2","Type":"ContainerDied","Data":"bb9e5f071bfe8a3c4fa32e0b0b39f9da2fb48c01145525ae4fb5e1d5ba3c5cdb"} Jan 29 08:05:17 crc kubenswrapper[5017]: I0129 08:05:17.139729 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-7m4xr" event={"ID":"a79553a3-6dd6-42d0-a988-dfec53583ae2","Type":"ContainerStarted","Data":"ec6cdc525a7f2d3e8f2f231d0126610c2eebac6d5f41c64c9e672df4231bffab"} Jan 29 08:05:18 crc kubenswrapper[5017]: I0129 08:05:18.156185 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ht5bd" event={"ID":"62a2b45e-53f3-4c67-ad22-c1c87017fade","Type":"ContainerStarted","Data":"c93420469d9952d63c03bd6b1c04bda6fdd7e512b39a2879d1f9c085172310ff"} Jan 29 08:05:18 crc kubenswrapper[5017]: I0129 08:05:18.187239 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ht5bd" podStartSLOduration=2.755427345 podStartE2EDuration="5.187210949s" podCreationTimestamp="2026-01-29 08:05:13 +0000 UTC" firstStartedPulling="2026-01-29 08:05:15.095631805 +0000 UTC m=+5401.470079415" lastFinishedPulling="2026-01-29 08:05:17.527415379 +0000 UTC m=+5403.901863019" observedRunningTime="2026-01-29 08:05:18.176941123 +0000 UTC m=+5404.551388743" watchObservedRunningTime="2026-01-29 08:05:18.187210949 +0000 UTC m=+5404.561658569" Jan 29 08:05:18 crc kubenswrapper[5017]: I0129 08:05:18.575790 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-375b-account-create-update-jplg6" Jan 29 08:05:18 crc kubenswrapper[5017]: I0129 08:05:18.580181 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c7eebcf-3d82-45d6-9975-75a81fc8dad8-operator-scripts\") pod \"4c7eebcf-3d82-45d6-9975-75a81fc8dad8\" (UID: \"4c7eebcf-3d82-45d6-9975-75a81fc8dad8\") " Jan 29 08:05:18 crc kubenswrapper[5017]: I0129 08:05:18.580261 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hlkn8\" (UniqueName: \"kubernetes.io/projected/4c7eebcf-3d82-45d6-9975-75a81fc8dad8-kube-api-access-hlkn8\") pod \"4c7eebcf-3d82-45d6-9975-75a81fc8dad8\" (UID: \"4c7eebcf-3d82-45d6-9975-75a81fc8dad8\") " Jan 29 08:05:18 crc kubenswrapper[5017]: I0129 08:05:18.582069 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c7eebcf-3d82-45d6-9975-75a81fc8dad8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4c7eebcf-3d82-45d6-9975-75a81fc8dad8" (UID: "4c7eebcf-3d82-45d6-9975-75a81fc8dad8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:05:18 crc kubenswrapper[5017]: I0129 08:05:18.586227 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-7m4xr" Jan 29 08:05:18 crc kubenswrapper[5017]: I0129 08:05:18.591312 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c7eebcf-3d82-45d6-9975-75a81fc8dad8-kube-api-access-hlkn8" (OuterVolumeSpecName: "kube-api-access-hlkn8") pod "4c7eebcf-3d82-45d6-9975-75a81fc8dad8" (UID: "4c7eebcf-3d82-45d6-9975-75a81fc8dad8"). InnerVolumeSpecName "kube-api-access-hlkn8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:05:18 crc kubenswrapper[5017]: I0129 08:05:18.681736 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w8wpx\" (UniqueName: \"kubernetes.io/projected/a79553a3-6dd6-42d0-a988-dfec53583ae2-kube-api-access-w8wpx\") pod \"a79553a3-6dd6-42d0-a988-dfec53583ae2\" (UID: \"a79553a3-6dd6-42d0-a988-dfec53583ae2\") " Jan 29 08:05:18 crc kubenswrapper[5017]: I0129 08:05:18.681997 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a79553a3-6dd6-42d0-a988-dfec53583ae2-operator-scripts\") pod \"a79553a3-6dd6-42d0-a988-dfec53583ae2\" (UID: \"a79553a3-6dd6-42d0-a988-dfec53583ae2\") " Jan 29 08:05:18 crc kubenswrapper[5017]: I0129 08:05:18.682612 5017 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c7eebcf-3d82-45d6-9975-75a81fc8dad8-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 08:05:18 crc kubenswrapper[5017]: I0129 08:05:18.682632 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hlkn8\" (UniqueName: \"kubernetes.io/projected/4c7eebcf-3d82-45d6-9975-75a81fc8dad8-kube-api-access-hlkn8\") on node \"crc\" DevicePath \"\"" Jan 29 08:05:18 crc kubenswrapper[5017]: I0129 08:05:18.683406 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a79553a3-6dd6-42d0-a988-dfec53583ae2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a79553a3-6dd6-42d0-a988-dfec53583ae2" (UID: "a79553a3-6dd6-42d0-a988-dfec53583ae2"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:05:18 crc kubenswrapper[5017]: I0129 08:05:18.686500 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a79553a3-6dd6-42d0-a988-dfec53583ae2-kube-api-access-w8wpx" (OuterVolumeSpecName: "kube-api-access-w8wpx") pod "a79553a3-6dd6-42d0-a988-dfec53583ae2" (UID: "a79553a3-6dd6-42d0-a988-dfec53583ae2"). InnerVolumeSpecName "kube-api-access-w8wpx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:05:18 crc kubenswrapper[5017]: I0129 08:05:18.784543 5017 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a79553a3-6dd6-42d0-a988-dfec53583ae2-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 08:05:18 crc kubenswrapper[5017]: I0129 08:05:18.784595 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w8wpx\" (UniqueName: \"kubernetes.io/projected/a79553a3-6dd6-42d0-a988-dfec53583ae2-kube-api-access-w8wpx\") on node \"crc\" DevicePath \"\"" Jan 29 08:05:19 crc kubenswrapper[5017]: I0129 08:05:19.173144 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-7m4xr" event={"ID":"a79553a3-6dd6-42d0-a988-dfec53583ae2","Type":"ContainerDied","Data":"ec6cdc525a7f2d3e8f2f231d0126610c2eebac6d5f41c64c9e672df4231bffab"} Jan 29 08:05:19 crc kubenswrapper[5017]: I0129 08:05:19.173274 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec6cdc525a7f2d3e8f2f231d0126610c2eebac6d5f41c64c9e672df4231bffab" Jan 29 08:05:19 crc kubenswrapper[5017]: I0129 08:05:19.173179 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-7m4xr" Jan 29 08:05:19 crc kubenswrapper[5017]: I0129 08:05:19.176102 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-375b-account-create-update-jplg6" Jan 29 08:05:19 crc kubenswrapper[5017]: I0129 08:05:19.176110 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-375b-account-create-update-jplg6" event={"ID":"4c7eebcf-3d82-45d6-9975-75a81fc8dad8","Type":"ContainerDied","Data":"4ee8fbb39563533e06f762617b253d8b625d384fe3af1a87d099785e369ab490"} Jan 29 08:05:19 crc kubenswrapper[5017]: I0129 08:05:19.176175 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ee8fbb39563533e06f762617b253d8b625d384fe3af1a87d099785e369ab490" Jan 29 08:05:21 crc kubenswrapper[5017]: I0129 08:05:21.022358 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-79bdcbbc8c-cl7gx"] Jan 29 08:05:21 crc kubenswrapper[5017]: E0129 08:05:21.023710 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c7eebcf-3d82-45d6-9975-75a81fc8dad8" containerName="mariadb-account-create-update" Jan 29 08:05:21 crc kubenswrapper[5017]: I0129 08:05:21.023734 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c7eebcf-3d82-45d6-9975-75a81fc8dad8" containerName="mariadb-account-create-update" Jan 29 08:05:21 crc kubenswrapper[5017]: E0129 08:05:21.023819 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a79553a3-6dd6-42d0-a988-dfec53583ae2" containerName="mariadb-database-create" Jan 29 08:05:21 crc kubenswrapper[5017]: I0129 08:05:21.023830 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="a79553a3-6dd6-42d0-a988-dfec53583ae2" containerName="mariadb-database-create" Jan 29 08:05:21 crc kubenswrapper[5017]: I0129 08:05:21.024161 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="a79553a3-6dd6-42d0-a988-dfec53583ae2" containerName="mariadb-database-create" Jan 29 08:05:21 crc kubenswrapper[5017]: I0129 08:05:21.024203 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c7eebcf-3d82-45d6-9975-75a81fc8dad8" containerName="mariadb-account-create-update" Jan 29 08:05:21 crc kubenswrapper[5017]: I0129 08:05:21.034134 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79bdcbbc8c-cl7gx" Jan 29 08:05:21 crc kubenswrapper[5017]: I0129 08:05:21.036520 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kf2nj\" (UniqueName: \"kubernetes.io/projected/b4033ebc-2592-46c2-bae5-e0ed922e85bc-kube-api-access-kf2nj\") pod \"dnsmasq-dns-79bdcbbc8c-cl7gx\" (UID: \"b4033ebc-2592-46c2-bae5-e0ed922e85bc\") " pod="openstack/dnsmasq-dns-79bdcbbc8c-cl7gx" Jan 29 08:05:21 crc kubenswrapper[5017]: I0129 08:05:21.036719 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b4033ebc-2592-46c2-bae5-e0ed922e85bc-ovsdbserver-sb\") pod \"dnsmasq-dns-79bdcbbc8c-cl7gx\" (UID: \"b4033ebc-2592-46c2-bae5-e0ed922e85bc\") " pod="openstack/dnsmasq-dns-79bdcbbc8c-cl7gx" Jan 29 08:05:21 crc kubenswrapper[5017]: I0129 08:05:21.036779 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4033ebc-2592-46c2-bae5-e0ed922e85bc-config\") pod \"dnsmasq-dns-79bdcbbc8c-cl7gx\" (UID: \"b4033ebc-2592-46c2-bae5-e0ed922e85bc\") " pod="openstack/dnsmasq-dns-79bdcbbc8c-cl7gx" Jan 29 08:05:21 crc kubenswrapper[5017]: I0129 08:05:21.036818 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b4033ebc-2592-46c2-bae5-e0ed922e85bc-ovsdbserver-nb\") pod \"dnsmasq-dns-79bdcbbc8c-cl7gx\" (UID: \"b4033ebc-2592-46c2-bae5-e0ed922e85bc\") " pod="openstack/dnsmasq-dns-79bdcbbc8c-cl7gx" Jan 29 08:05:21 crc kubenswrapper[5017]: I0129 08:05:21.036857 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b4033ebc-2592-46c2-bae5-e0ed922e85bc-dns-svc\") pod \"dnsmasq-dns-79bdcbbc8c-cl7gx\" (UID: \"b4033ebc-2592-46c2-bae5-e0ed922e85bc\") " pod="openstack/dnsmasq-dns-79bdcbbc8c-cl7gx" Jan 29 08:05:21 crc kubenswrapper[5017]: I0129 08:05:21.061441 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79bdcbbc8c-cl7gx"] Jan 29 08:05:21 crc kubenswrapper[5017]: I0129 08:05:21.120569 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-jtptj"] Jan 29 08:05:21 crc kubenswrapper[5017]: I0129 08:05:21.123660 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-jtptj" Jan 29 08:05:21 crc kubenswrapper[5017]: I0129 08:05:21.129078 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 29 08:05:21 crc kubenswrapper[5017]: I0129 08:05:21.129569 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-w98d9" Jan 29 08:05:21 crc kubenswrapper[5017]: I0129 08:05:21.129739 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 29 08:05:21 crc kubenswrapper[5017]: I0129 08:05:21.141081 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6m74c\" (UniqueName: \"kubernetes.io/projected/06ff737a-ae98-4539-a127-3543f2b5e31e-kube-api-access-6m74c\") pod \"placement-db-sync-jtptj\" (UID: \"06ff737a-ae98-4539-a127-3543f2b5e31e\") " pod="openstack/placement-db-sync-jtptj" Jan 29 08:05:21 crc kubenswrapper[5017]: I0129 08:05:21.141128 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06ff737a-ae98-4539-a127-3543f2b5e31e-scripts\") pod \"placement-db-sync-jtptj\" (UID: \"06ff737a-ae98-4539-a127-3543f2b5e31e\") " pod="openstack/placement-db-sync-jtptj" Jan 29 08:05:21 crc kubenswrapper[5017]: I0129 08:05:21.141157 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06ff737a-ae98-4539-a127-3543f2b5e31e-config-data\") pod \"placement-db-sync-jtptj\" (UID: \"06ff737a-ae98-4539-a127-3543f2b5e31e\") " pod="openstack/placement-db-sync-jtptj" Jan 29 08:05:21 crc kubenswrapper[5017]: I0129 08:05:21.141174 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06ff737a-ae98-4539-a127-3543f2b5e31e-logs\") pod \"placement-db-sync-jtptj\" (UID: \"06ff737a-ae98-4539-a127-3543f2b5e31e\") " pod="openstack/placement-db-sync-jtptj" Jan 29 08:05:21 crc kubenswrapper[5017]: I0129 08:05:21.141208 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b4033ebc-2592-46c2-bae5-e0ed922e85bc-ovsdbserver-sb\") pod \"dnsmasq-dns-79bdcbbc8c-cl7gx\" (UID: \"b4033ebc-2592-46c2-bae5-e0ed922e85bc\") " pod="openstack/dnsmasq-dns-79bdcbbc8c-cl7gx" Jan 29 08:05:21 crc kubenswrapper[5017]: I0129 08:05:21.141260 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4033ebc-2592-46c2-bae5-e0ed922e85bc-config\") pod \"dnsmasq-dns-79bdcbbc8c-cl7gx\" (UID: \"b4033ebc-2592-46c2-bae5-e0ed922e85bc\") " pod="openstack/dnsmasq-dns-79bdcbbc8c-cl7gx" Jan 29 08:05:21 crc kubenswrapper[5017]: I0129 08:05:21.141317 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06ff737a-ae98-4539-a127-3543f2b5e31e-combined-ca-bundle\") pod \"placement-db-sync-jtptj\" (UID: \"06ff737a-ae98-4539-a127-3543f2b5e31e\") " pod="openstack/placement-db-sync-jtptj" Jan 29 08:05:21 crc kubenswrapper[5017]: I0129 08:05:21.141369 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b4033ebc-2592-46c2-bae5-e0ed922e85bc-ovsdbserver-nb\") 
pod \"dnsmasq-dns-79bdcbbc8c-cl7gx\" (UID: \"b4033ebc-2592-46c2-bae5-e0ed922e85bc\") " pod="openstack/dnsmasq-dns-79bdcbbc8c-cl7gx" Jan 29 08:05:21 crc kubenswrapper[5017]: I0129 08:05:21.141404 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b4033ebc-2592-46c2-bae5-e0ed922e85bc-dns-svc\") pod \"dnsmasq-dns-79bdcbbc8c-cl7gx\" (UID: \"b4033ebc-2592-46c2-bae5-e0ed922e85bc\") " pod="openstack/dnsmasq-dns-79bdcbbc8c-cl7gx" Jan 29 08:05:21 crc kubenswrapper[5017]: I0129 08:05:21.141479 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kf2nj\" (UniqueName: \"kubernetes.io/projected/b4033ebc-2592-46c2-bae5-e0ed922e85bc-kube-api-access-kf2nj\") pod \"dnsmasq-dns-79bdcbbc8c-cl7gx\" (UID: \"b4033ebc-2592-46c2-bae5-e0ed922e85bc\") " pod="openstack/dnsmasq-dns-79bdcbbc8c-cl7gx" Jan 29 08:05:21 crc kubenswrapper[5017]: I0129 08:05:21.143729 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b4033ebc-2592-46c2-bae5-e0ed922e85bc-ovsdbserver-nb\") pod \"dnsmasq-dns-79bdcbbc8c-cl7gx\" (UID: \"b4033ebc-2592-46c2-bae5-e0ed922e85bc\") " pod="openstack/dnsmasq-dns-79bdcbbc8c-cl7gx" Jan 29 08:05:21 crc kubenswrapper[5017]: I0129 08:05:21.145207 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-jtptj"] Jan 29 08:05:21 crc kubenswrapper[5017]: I0129 08:05:21.147516 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b4033ebc-2592-46c2-bae5-e0ed922e85bc-ovsdbserver-sb\") pod \"dnsmasq-dns-79bdcbbc8c-cl7gx\" (UID: \"b4033ebc-2592-46c2-bae5-e0ed922e85bc\") " pod="openstack/dnsmasq-dns-79bdcbbc8c-cl7gx" Jan 29 08:05:21 crc kubenswrapper[5017]: I0129 08:05:21.149491 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b4033ebc-2592-46c2-bae5-e0ed922e85bc-dns-svc\") pod \"dnsmasq-dns-79bdcbbc8c-cl7gx\" (UID: \"b4033ebc-2592-46c2-bae5-e0ed922e85bc\") " pod="openstack/dnsmasq-dns-79bdcbbc8c-cl7gx" Jan 29 08:05:21 crc kubenswrapper[5017]: I0129 08:05:21.149510 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4033ebc-2592-46c2-bae5-e0ed922e85bc-config\") pod \"dnsmasq-dns-79bdcbbc8c-cl7gx\" (UID: \"b4033ebc-2592-46c2-bae5-e0ed922e85bc\") " pod="openstack/dnsmasq-dns-79bdcbbc8c-cl7gx" Jan 29 08:05:21 crc kubenswrapper[5017]: I0129 08:05:21.188803 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kf2nj\" (UniqueName: \"kubernetes.io/projected/b4033ebc-2592-46c2-bae5-e0ed922e85bc-kube-api-access-kf2nj\") pod \"dnsmasq-dns-79bdcbbc8c-cl7gx\" (UID: \"b4033ebc-2592-46c2-bae5-e0ed922e85bc\") " pod="openstack/dnsmasq-dns-79bdcbbc8c-cl7gx" Jan 29 08:05:21 crc kubenswrapper[5017]: I0129 08:05:21.243332 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6m74c\" (UniqueName: \"kubernetes.io/projected/06ff737a-ae98-4539-a127-3543f2b5e31e-kube-api-access-6m74c\") pod \"placement-db-sync-jtptj\" (UID: \"06ff737a-ae98-4539-a127-3543f2b5e31e\") " pod="openstack/placement-db-sync-jtptj" Jan 29 08:05:21 crc kubenswrapper[5017]: I0129 08:05:21.243400 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/06ff737a-ae98-4539-a127-3543f2b5e31e-scripts\") pod \"placement-db-sync-jtptj\" (UID: \"06ff737a-ae98-4539-a127-3543f2b5e31e\") " pod="openstack/placement-db-sync-jtptj" Jan 29 08:05:21 crc kubenswrapper[5017]: I0129 08:05:21.243432 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06ff737a-ae98-4539-a127-3543f2b5e31e-config-data\") pod \"placement-db-sync-jtptj\" (UID: \"06ff737a-ae98-4539-a127-3543f2b5e31e\") " pod="openstack/placement-db-sync-jtptj" Jan 29 08:05:21 crc kubenswrapper[5017]: I0129 08:05:21.243455 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06ff737a-ae98-4539-a127-3543f2b5e31e-logs\") pod \"placement-db-sync-jtptj\" (UID: \"06ff737a-ae98-4539-a127-3543f2b5e31e\") " pod="openstack/placement-db-sync-jtptj" Jan 29 08:05:21 crc kubenswrapper[5017]: I0129 08:05:21.243508 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06ff737a-ae98-4539-a127-3543f2b5e31e-combined-ca-bundle\") pod \"placement-db-sync-jtptj\" (UID: \"06ff737a-ae98-4539-a127-3543f2b5e31e\") " pod="openstack/placement-db-sync-jtptj" Jan 29 08:05:21 crc kubenswrapper[5017]: I0129 08:05:21.244538 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06ff737a-ae98-4539-a127-3543f2b5e31e-logs\") pod \"placement-db-sync-jtptj\" (UID: \"06ff737a-ae98-4539-a127-3543f2b5e31e\") " pod="openstack/placement-db-sync-jtptj" Jan 29 08:05:21 crc kubenswrapper[5017]: I0129 08:05:21.248255 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06ff737a-ae98-4539-a127-3543f2b5e31e-combined-ca-bundle\") pod \"placement-db-sync-jtptj\" (UID: \"06ff737a-ae98-4539-a127-3543f2b5e31e\") " pod="openstack/placement-db-sync-jtptj" Jan 29 08:05:21 crc kubenswrapper[5017]: I0129 08:05:21.248651 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06ff737a-ae98-4539-a127-3543f2b5e31e-config-data\") pod \"placement-db-sync-jtptj\" (UID: \"06ff737a-ae98-4539-a127-3543f2b5e31e\") " pod="openstack/placement-db-sync-jtptj" Jan 29 08:05:21 crc kubenswrapper[5017]: I0129 08:05:21.251894 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06ff737a-ae98-4539-a127-3543f2b5e31e-scripts\") pod \"placement-db-sync-jtptj\" (UID: \"06ff737a-ae98-4539-a127-3543f2b5e31e\") " pod="openstack/placement-db-sync-jtptj" Jan 29 08:05:21 crc kubenswrapper[5017]: I0129 08:05:21.268760 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6m74c\" (UniqueName: \"kubernetes.io/projected/06ff737a-ae98-4539-a127-3543f2b5e31e-kube-api-access-6m74c\") pod \"placement-db-sync-jtptj\" (UID: \"06ff737a-ae98-4539-a127-3543f2b5e31e\") " pod="openstack/placement-db-sync-jtptj" Jan 29 08:05:21 crc kubenswrapper[5017]: I0129 08:05:21.398564 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79bdcbbc8c-cl7gx" Jan 29 08:05:21 crc kubenswrapper[5017]: I0129 08:05:21.459872 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-jtptj" Jan 29 08:05:21 crc kubenswrapper[5017]: I0129 08:05:21.891555 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79bdcbbc8c-cl7gx"] Jan 29 08:05:21 crc kubenswrapper[5017]: W0129 08:05:21.899225 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb4033ebc_2592_46c2_bae5_e0ed922e85bc.slice/crio-2a088402ae2fed7539e4468a6e5d96e99e66f86c34f3033e9f85fac6d63ce88f WatchSource:0}: Error finding container 2a088402ae2fed7539e4468a6e5d96e99e66f86c34f3033e9f85fac6d63ce88f: Status 404 returned error can't find the container with id 2a088402ae2fed7539e4468a6e5d96e99e66f86c34f3033e9f85fac6d63ce88f Jan 29 08:05:22 crc kubenswrapper[5017]: I0129 08:05:22.023314 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-jtptj"] Jan 29 08:05:22 crc kubenswrapper[5017]: W0129 08:05:22.025831 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod06ff737a_ae98_4539_a127_3543f2b5e31e.slice/crio-8407d44aa6aa6788340fdc67ec61e2a1bb16be03df046b1e86a4a9463b003541 WatchSource:0}: Error finding container 8407d44aa6aa6788340fdc67ec61e2a1bb16be03df046b1e86a4a9463b003541: Status 404 returned error can't find the container with id 8407d44aa6aa6788340fdc67ec61e2a1bb16be03df046b1e86a4a9463b003541 Jan 29 08:05:22 crc kubenswrapper[5017]: I0129 08:05:22.217924 5017 generic.go:334] "Generic (PLEG): container finished" podID="b4033ebc-2592-46c2-bae5-e0ed922e85bc" containerID="d5b952f26db971c66a61d7c380e6115a17242a0283b9efd05331830e338ca682" exitCode=0 Jan 29 08:05:22 crc kubenswrapper[5017]: I0129 08:05:22.218018 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bdcbbc8c-cl7gx" event={"ID":"b4033ebc-2592-46c2-bae5-e0ed922e85bc","Type":"ContainerDied","Data":"d5b952f26db971c66a61d7c380e6115a17242a0283b9efd05331830e338ca682"} Jan 29 08:05:22 crc kubenswrapper[5017]: I0129 08:05:22.218542 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bdcbbc8c-cl7gx" event={"ID":"b4033ebc-2592-46c2-bae5-e0ed922e85bc","Type":"ContainerStarted","Data":"2a088402ae2fed7539e4468a6e5d96e99e66f86c34f3033e9f85fac6d63ce88f"} Jan 29 08:05:22 crc kubenswrapper[5017]: I0129 08:05:22.221456 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-jtptj" event={"ID":"06ff737a-ae98-4539-a127-3543f2b5e31e","Type":"ContainerStarted","Data":"8407d44aa6aa6788340fdc67ec61e2a1bb16be03df046b1e86a4a9463b003541"} Jan 29 08:05:23 crc kubenswrapper[5017]: I0129 08:05:23.234745 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bdcbbc8c-cl7gx" event={"ID":"b4033ebc-2592-46c2-bae5-e0ed922e85bc","Type":"ContainerStarted","Data":"c75d6eb225024f79242044e5a63a39d57c546d3a4e9edc7e4d920fc4a410fc6f"} Jan 29 08:05:23 crc kubenswrapper[5017]: I0129 08:05:23.235748 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-79bdcbbc8c-cl7gx" Jan 29 08:05:23 crc kubenswrapper[5017]: I0129 08:05:23.240845 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-jtptj" event={"ID":"06ff737a-ae98-4539-a127-3543f2b5e31e","Type":"ContainerStarted","Data":"9f918704b5310072bfa2c0af3192198402a4a13ffd89cfc0b731b5f1a509bc57"} Jan 29 08:05:23 crc kubenswrapper[5017]: I0129 08:05:23.259222 5017 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-79bdcbbc8c-cl7gx" podStartSLOduration=3.259195441 podStartE2EDuration="3.259195441s" podCreationTimestamp="2026-01-29 08:05:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:05:23.254043337 +0000 UTC m=+5409.628490967" watchObservedRunningTime="2026-01-29 08:05:23.259195441 +0000 UTC m=+5409.633643051" Jan 29 08:05:23 crc kubenswrapper[5017]: I0129 08:05:23.275833 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-jtptj" podStartSLOduration=2.27580339 podStartE2EDuration="2.27580339s" podCreationTimestamp="2026-01-29 08:05:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:05:23.270746208 +0000 UTC m=+5409.645193818" watchObservedRunningTime="2026-01-29 08:05:23.27580339 +0000 UTC m=+5409.650251000" Jan 29 08:05:23 crc kubenswrapper[5017]: I0129 08:05:23.458923 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ht5bd" Jan 29 08:05:23 crc kubenswrapper[5017]: I0129 08:05:23.460268 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ht5bd" Jan 29 08:05:23 crc kubenswrapper[5017]: I0129 08:05:23.750921 5017 scope.go:117] "RemoveContainer" containerID="26803d0137462548da50798083b8c3cc8fe0a816bdc6c92d494171e76650bf0d" Jan 29 08:05:23 crc kubenswrapper[5017]: I0129 08:05:23.779848 5017 scope.go:117] "RemoveContainer" containerID="a0741905ab8c241d32170ce94719ce5472071a9695bf7c0176377964e99a094f" Jan 29 08:05:24 crc kubenswrapper[5017]: I0129 08:05:24.255576 5017 generic.go:334] "Generic (PLEG): container finished" podID="06ff737a-ae98-4539-a127-3543f2b5e31e" containerID="9f918704b5310072bfa2c0af3192198402a4a13ffd89cfc0b731b5f1a509bc57" exitCode=0 Jan 29 08:05:24 crc kubenswrapper[5017]: I0129 08:05:24.255664 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-jtptj" event={"ID":"06ff737a-ae98-4539-a127-3543f2b5e31e","Type":"ContainerDied","Data":"9f918704b5310072bfa2c0af3192198402a4a13ffd89cfc0b731b5f1a509bc57"} Jan 29 08:05:24 crc kubenswrapper[5017]: I0129 08:05:24.519751 5017 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-ht5bd" podUID="62a2b45e-53f3-4c67-ad22-c1c87017fade" containerName="registry-server" probeResult="failure" output=< Jan 29 08:05:24 crc kubenswrapper[5017]: timeout: failed to connect service ":50051" within 1s Jan 29 08:05:24 crc kubenswrapper[5017]: > Jan 29 08:05:25 crc kubenswrapper[5017]: I0129 08:05:25.669820 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-jtptj" Jan 29 08:05:25 crc kubenswrapper[5017]: I0129 08:05:25.855710 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06ff737a-ae98-4539-a127-3543f2b5e31e-logs\") pod \"06ff737a-ae98-4539-a127-3543f2b5e31e\" (UID: \"06ff737a-ae98-4539-a127-3543f2b5e31e\") " Jan 29 08:05:25 crc kubenswrapper[5017]: I0129 08:05:25.855859 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06ff737a-ae98-4539-a127-3543f2b5e31e-config-data\") pod \"06ff737a-ae98-4539-a127-3543f2b5e31e\" (UID: \"06ff737a-ae98-4539-a127-3543f2b5e31e\") " Jan 29 08:05:25 crc kubenswrapper[5017]: I0129 08:05:25.856032 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06ff737a-ae98-4539-a127-3543f2b5e31e-scripts\") pod \"06ff737a-ae98-4539-a127-3543f2b5e31e\" (UID: \"06ff737a-ae98-4539-a127-3543f2b5e31e\") " Jan 29 08:05:25 crc kubenswrapper[5017]: I0129 08:05:25.856075 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6m74c\" (UniqueName: \"kubernetes.io/projected/06ff737a-ae98-4539-a127-3543f2b5e31e-kube-api-access-6m74c\") pod \"06ff737a-ae98-4539-a127-3543f2b5e31e\" (UID: \"06ff737a-ae98-4539-a127-3543f2b5e31e\") " Jan 29 08:05:25 crc kubenswrapper[5017]: I0129 08:05:25.856110 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06ff737a-ae98-4539-a127-3543f2b5e31e-combined-ca-bundle\") pod \"06ff737a-ae98-4539-a127-3543f2b5e31e\" (UID: \"06ff737a-ae98-4539-a127-3543f2b5e31e\") " Jan 29 08:05:25 crc kubenswrapper[5017]: I0129 08:05:25.856418 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06ff737a-ae98-4539-a127-3543f2b5e31e-logs" (OuterVolumeSpecName: "logs") pod "06ff737a-ae98-4539-a127-3543f2b5e31e" (UID: "06ff737a-ae98-4539-a127-3543f2b5e31e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:05:25 crc kubenswrapper[5017]: I0129 08:05:25.856797 5017 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06ff737a-ae98-4539-a127-3543f2b5e31e-logs\") on node \"crc\" DevicePath \"\"" Jan 29 08:05:25 crc kubenswrapper[5017]: I0129 08:05:25.864609 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06ff737a-ae98-4539-a127-3543f2b5e31e-scripts" (OuterVolumeSpecName: "scripts") pod "06ff737a-ae98-4539-a127-3543f2b5e31e" (UID: "06ff737a-ae98-4539-a127-3543f2b5e31e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:05:25 crc kubenswrapper[5017]: I0129 08:05:25.864716 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06ff737a-ae98-4539-a127-3543f2b5e31e-kube-api-access-6m74c" (OuterVolumeSpecName: "kube-api-access-6m74c") pod "06ff737a-ae98-4539-a127-3543f2b5e31e" (UID: "06ff737a-ae98-4539-a127-3543f2b5e31e"). InnerVolumeSpecName "kube-api-access-6m74c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:05:25 crc kubenswrapper[5017]: I0129 08:05:25.885477 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06ff737a-ae98-4539-a127-3543f2b5e31e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "06ff737a-ae98-4539-a127-3543f2b5e31e" (UID: "06ff737a-ae98-4539-a127-3543f2b5e31e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:05:25 crc kubenswrapper[5017]: I0129 08:05:25.886320 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06ff737a-ae98-4539-a127-3543f2b5e31e-config-data" (OuterVolumeSpecName: "config-data") pod "06ff737a-ae98-4539-a127-3543f2b5e31e" (UID: "06ff737a-ae98-4539-a127-3543f2b5e31e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:05:25 crc kubenswrapper[5017]: I0129 08:05:25.958877 5017 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06ff737a-ae98-4539-a127-3543f2b5e31e-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 08:05:25 crc kubenswrapper[5017]: I0129 08:05:25.958934 5017 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06ff737a-ae98-4539-a127-3543f2b5e31e-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 08:05:25 crc kubenswrapper[5017]: I0129 08:05:25.958947 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6m74c\" (UniqueName: \"kubernetes.io/projected/06ff737a-ae98-4539-a127-3543f2b5e31e-kube-api-access-6m74c\") on node \"crc\" DevicePath \"\"" Jan 29 08:05:25 crc kubenswrapper[5017]: I0129 08:05:25.958984 5017 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06ff737a-ae98-4539-a127-3543f2b5e31e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:05:26 crc kubenswrapper[5017]: I0129 08:05:26.276202 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-jtptj" event={"ID":"06ff737a-ae98-4539-a127-3543f2b5e31e","Type":"ContainerDied","Data":"8407d44aa6aa6788340fdc67ec61e2a1bb16be03df046b1e86a4a9463b003541"} Jan 29 08:05:26 crc kubenswrapper[5017]: I0129 08:05:26.277196 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8407d44aa6aa6788340fdc67ec61e2a1bb16be03df046b1e86a4a9463b003541" Jan 29 08:05:26 crc kubenswrapper[5017]: I0129 08:05:26.276362 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-jtptj" Jan 29 08:05:26 crc kubenswrapper[5017]: I0129 08:05:26.477825 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-c755b4dd6-c5f4t"] Jan 29 08:05:26 crc kubenswrapper[5017]: E0129 08:05:26.478437 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06ff737a-ae98-4539-a127-3543f2b5e31e" containerName="placement-db-sync" Jan 29 08:05:26 crc kubenswrapper[5017]: I0129 08:05:26.478468 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="06ff737a-ae98-4539-a127-3543f2b5e31e" containerName="placement-db-sync" Jan 29 08:05:26 crc kubenswrapper[5017]: I0129 08:05:26.478773 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="06ff737a-ae98-4539-a127-3543f2b5e31e" containerName="placement-db-sync" Jan 29 08:05:26 crc kubenswrapper[5017]: I0129 08:05:26.480355 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-c755b4dd6-c5f4t" Jan 29 08:05:26 crc kubenswrapper[5017]: I0129 08:05:26.482303 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-w98d9" Jan 29 08:05:26 crc kubenswrapper[5017]: I0129 08:05:26.482747 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 29 08:05:26 crc kubenswrapper[5017]: I0129 08:05:26.483641 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 29 08:05:26 crc kubenswrapper[5017]: I0129 08:05:26.505794 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-c755b4dd6-c5f4t"] Jan 29 08:05:26 crc kubenswrapper[5017]: I0129 08:05:26.572824 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac7c779a-6c7f-4f09-abe0-a42882712730-config-data\") pod \"placement-c755b4dd6-c5f4t\" (UID: \"ac7c779a-6c7f-4f09-abe0-a42882712730\") " pod="openstack/placement-c755b4dd6-c5f4t" Jan 29 08:05:26 crc kubenswrapper[5017]: I0129 08:05:26.572884 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ac7c779a-6c7f-4f09-abe0-a42882712730-logs\") pod \"placement-c755b4dd6-c5f4t\" (UID: \"ac7c779a-6c7f-4f09-abe0-a42882712730\") " pod="openstack/placement-c755b4dd6-c5f4t" Jan 29 08:05:26 crc kubenswrapper[5017]: I0129 08:05:26.572925 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48szp\" (UniqueName: \"kubernetes.io/projected/ac7c779a-6c7f-4f09-abe0-a42882712730-kube-api-access-48szp\") pod \"placement-c755b4dd6-c5f4t\" (UID: \"ac7c779a-6c7f-4f09-abe0-a42882712730\") " pod="openstack/placement-c755b4dd6-c5f4t" Jan 29 08:05:26 crc kubenswrapper[5017]: I0129 08:05:26.573309 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac7c779a-6c7f-4f09-abe0-a42882712730-combined-ca-bundle\") pod \"placement-c755b4dd6-c5f4t\" (UID: \"ac7c779a-6c7f-4f09-abe0-a42882712730\") " pod="openstack/placement-c755b4dd6-c5f4t" Jan 29 08:05:26 crc kubenswrapper[5017]: I0129 08:05:26.573672 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ac7c779a-6c7f-4f09-abe0-a42882712730-scripts\") pod 
\"placement-c755b4dd6-c5f4t\" (UID: \"ac7c779a-6c7f-4f09-abe0-a42882712730\") " pod="openstack/placement-c755b4dd6-c5f4t" Jan 29 08:05:26 crc kubenswrapper[5017]: I0129 08:05:26.676309 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac7c779a-6c7f-4f09-abe0-a42882712730-config-data\") pod \"placement-c755b4dd6-c5f4t\" (UID: \"ac7c779a-6c7f-4f09-abe0-a42882712730\") " pod="openstack/placement-c755b4dd6-c5f4t" Jan 29 08:05:26 crc kubenswrapper[5017]: I0129 08:05:26.676365 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ac7c779a-6c7f-4f09-abe0-a42882712730-logs\") pod \"placement-c755b4dd6-c5f4t\" (UID: \"ac7c779a-6c7f-4f09-abe0-a42882712730\") " pod="openstack/placement-c755b4dd6-c5f4t" Jan 29 08:05:26 crc kubenswrapper[5017]: I0129 08:05:26.676395 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48szp\" (UniqueName: \"kubernetes.io/projected/ac7c779a-6c7f-4f09-abe0-a42882712730-kube-api-access-48szp\") pod \"placement-c755b4dd6-c5f4t\" (UID: \"ac7c779a-6c7f-4f09-abe0-a42882712730\") " pod="openstack/placement-c755b4dd6-c5f4t" Jan 29 08:05:26 crc kubenswrapper[5017]: I0129 08:05:26.676432 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac7c779a-6c7f-4f09-abe0-a42882712730-combined-ca-bundle\") pod \"placement-c755b4dd6-c5f4t\" (UID: \"ac7c779a-6c7f-4f09-abe0-a42882712730\") " pod="openstack/placement-c755b4dd6-c5f4t" Jan 29 08:05:26 crc kubenswrapper[5017]: I0129 08:05:26.676470 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ac7c779a-6c7f-4f09-abe0-a42882712730-scripts\") pod \"placement-c755b4dd6-c5f4t\" (UID: \"ac7c779a-6c7f-4f09-abe0-a42882712730\") " pod="openstack/placement-c755b4dd6-c5f4t" Jan 29 08:05:26 crc kubenswrapper[5017]: I0129 08:05:26.677091 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ac7c779a-6c7f-4f09-abe0-a42882712730-logs\") pod \"placement-c755b4dd6-c5f4t\" (UID: \"ac7c779a-6c7f-4f09-abe0-a42882712730\") " pod="openstack/placement-c755b4dd6-c5f4t" Jan 29 08:05:26 crc kubenswrapper[5017]: I0129 08:05:26.683414 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ac7c779a-6c7f-4f09-abe0-a42882712730-scripts\") pod \"placement-c755b4dd6-c5f4t\" (UID: \"ac7c779a-6c7f-4f09-abe0-a42882712730\") " pod="openstack/placement-c755b4dd6-c5f4t" Jan 29 08:05:26 crc kubenswrapper[5017]: I0129 08:05:26.685596 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac7c779a-6c7f-4f09-abe0-a42882712730-combined-ca-bundle\") pod \"placement-c755b4dd6-c5f4t\" (UID: \"ac7c779a-6c7f-4f09-abe0-a42882712730\") " pod="openstack/placement-c755b4dd6-c5f4t" Jan 29 08:05:26 crc kubenswrapper[5017]: I0129 08:05:26.685995 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac7c779a-6c7f-4f09-abe0-a42882712730-config-data\") pod \"placement-c755b4dd6-c5f4t\" (UID: \"ac7c779a-6c7f-4f09-abe0-a42882712730\") " pod="openstack/placement-c755b4dd6-c5f4t" Jan 29 08:05:26 crc kubenswrapper[5017]: I0129 08:05:26.695863 5017 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48szp\" (UniqueName: \"kubernetes.io/projected/ac7c779a-6c7f-4f09-abe0-a42882712730-kube-api-access-48szp\") pod \"placement-c755b4dd6-c5f4t\" (UID: \"ac7c779a-6c7f-4f09-abe0-a42882712730\") " pod="openstack/placement-c755b4dd6-c5f4t" Jan 29 08:05:26 crc kubenswrapper[5017]: I0129 08:05:26.803071 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-c755b4dd6-c5f4t" Jan 29 08:05:27 crc kubenswrapper[5017]: I0129 08:05:27.299063 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-c755b4dd6-c5f4t"] Jan 29 08:05:28 crc kubenswrapper[5017]: I0129 08:05:28.303681 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-c755b4dd6-c5f4t" event={"ID":"ac7c779a-6c7f-4f09-abe0-a42882712730","Type":"ContainerStarted","Data":"bc4a7e9f1c476da5e934f40f1a9b692167ca8c393e8ba376b1da8335fa03a281"} Jan 29 08:05:28 crc kubenswrapper[5017]: I0129 08:05:28.306071 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-c755b4dd6-c5f4t" Jan 29 08:05:28 crc kubenswrapper[5017]: I0129 08:05:28.306107 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-c755b4dd6-c5f4t" Jan 29 08:05:28 crc kubenswrapper[5017]: I0129 08:05:28.306121 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-c755b4dd6-c5f4t" event={"ID":"ac7c779a-6c7f-4f09-abe0-a42882712730","Type":"ContainerStarted","Data":"11abe5ffbb1064b41ab294227c6af1921ac74a0f972c46e0aab4cc0334fe1245"} Jan 29 08:05:28 crc kubenswrapper[5017]: I0129 08:05:28.306143 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-c755b4dd6-c5f4t" event={"ID":"ac7c779a-6c7f-4f09-abe0-a42882712730","Type":"ContainerStarted","Data":"4bdefcc119763c8ff199a2a0e62117148c857b65ab5c9be98cd6995c5a85434b"} Jan 29 08:05:28 crc kubenswrapper[5017]: I0129 08:05:28.335463 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-c755b4dd6-c5f4t" podStartSLOduration=2.335437565 podStartE2EDuration="2.335437565s" podCreationTimestamp="2026-01-29 08:05:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:05:28.331125321 +0000 UTC m=+5414.705573151" watchObservedRunningTime="2026-01-29 08:05:28.335437565 +0000 UTC m=+5414.709885175" Jan 29 08:05:31 crc kubenswrapper[5017]: I0129 08:05:31.400235 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-79bdcbbc8c-cl7gx" Jan 29 08:05:31 crc kubenswrapper[5017]: I0129 08:05:31.479800 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c74747687-njvq6"] Jan 29 08:05:31 crc kubenswrapper[5017]: I0129 08:05:31.480138 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c74747687-njvq6" podUID="5284dd1a-96ec-439f-a7f0-27df8a4cd656" containerName="dnsmasq-dns" containerID="cri-o://ae9ff78e01547deab561284c84ff72c42142622e775e1e96f494d5c951288d40" gracePeriod=10 Jan 29 08:05:31 crc kubenswrapper[5017]: I0129 08:05:31.994309 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c74747687-njvq6" Jan 29 08:05:32 crc kubenswrapper[5017]: I0129 08:05:32.120697 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rp8lc\" (UniqueName: \"kubernetes.io/projected/5284dd1a-96ec-439f-a7f0-27df8a4cd656-kube-api-access-rp8lc\") pod \"5284dd1a-96ec-439f-a7f0-27df8a4cd656\" (UID: \"5284dd1a-96ec-439f-a7f0-27df8a4cd656\") " Jan 29 08:05:32 crc kubenswrapper[5017]: I0129 08:05:32.120810 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5284dd1a-96ec-439f-a7f0-27df8a4cd656-ovsdbserver-sb\") pod \"5284dd1a-96ec-439f-a7f0-27df8a4cd656\" (UID: \"5284dd1a-96ec-439f-a7f0-27df8a4cd656\") " Jan 29 08:05:32 crc kubenswrapper[5017]: I0129 08:05:32.120878 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5284dd1a-96ec-439f-a7f0-27df8a4cd656-dns-svc\") pod \"5284dd1a-96ec-439f-a7f0-27df8a4cd656\" (UID: \"5284dd1a-96ec-439f-a7f0-27df8a4cd656\") " Jan 29 08:05:32 crc kubenswrapper[5017]: I0129 08:05:32.120900 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5284dd1a-96ec-439f-a7f0-27df8a4cd656-config\") pod \"5284dd1a-96ec-439f-a7f0-27df8a4cd656\" (UID: \"5284dd1a-96ec-439f-a7f0-27df8a4cd656\") " Jan 29 08:05:32 crc kubenswrapper[5017]: I0129 08:05:32.121007 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5284dd1a-96ec-439f-a7f0-27df8a4cd656-ovsdbserver-nb\") pod \"5284dd1a-96ec-439f-a7f0-27df8a4cd656\" (UID: \"5284dd1a-96ec-439f-a7f0-27df8a4cd656\") " Jan 29 08:05:32 crc kubenswrapper[5017]: I0129 08:05:32.128435 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5284dd1a-96ec-439f-a7f0-27df8a4cd656-kube-api-access-rp8lc" (OuterVolumeSpecName: "kube-api-access-rp8lc") pod "5284dd1a-96ec-439f-a7f0-27df8a4cd656" (UID: "5284dd1a-96ec-439f-a7f0-27df8a4cd656"). InnerVolumeSpecName "kube-api-access-rp8lc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:05:32 crc kubenswrapper[5017]: I0129 08:05:32.166225 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5284dd1a-96ec-439f-a7f0-27df8a4cd656-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5284dd1a-96ec-439f-a7f0-27df8a4cd656" (UID: "5284dd1a-96ec-439f-a7f0-27df8a4cd656"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:05:32 crc kubenswrapper[5017]: I0129 08:05:32.166254 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5284dd1a-96ec-439f-a7f0-27df8a4cd656-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5284dd1a-96ec-439f-a7f0-27df8a4cd656" (UID: "5284dd1a-96ec-439f-a7f0-27df8a4cd656"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:05:32 crc kubenswrapper[5017]: I0129 08:05:32.167108 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5284dd1a-96ec-439f-a7f0-27df8a4cd656-config" (OuterVolumeSpecName: "config") pod "5284dd1a-96ec-439f-a7f0-27df8a4cd656" (UID: "5284dd1a-96ec-439f-a7f0-27df8a4cd656"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:05:32 crc kubenswrapper[5017]: I0129 08:05:32.167895 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5284dd1a-96ec-439f-a7f0-27df8a4cd656-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5284dd1a-96ec-439f-a7f0-27df8a4cd656" (UID: "5284dd1a-96ec-439f-a7f0-27df8a4cd656"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:05:32 crc kubenswrapper[5017]: I0129 08:05:32.223537 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rp8lc\" (UniqueName: \"kubernetes.io/projected/5284dd1a-96ec-439f-a7f0-27df8a4cd656-kube-api-access-rp8lc\") on node \"crc\" DevicePath \"\"" Jan 29 08:05:32 crc kubenswrapper[5017]: I0129 08:05:32.223596 5017 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5284dd1a-96ec-439f-a7f0-27df8a4cd656-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 08:05:32 crc kubenswrapper[5017]: I0129 08:05:32.223648 5017 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5284dd1a-96ec-439f-a7f0-27df8a4cd656-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 08:05:32 crc kubenswrapper[5017]: I0129 08:05:32.223661 5017 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5284dd1a-96ec-439f-a7f0-27df8a4cd656-config\") on node \"crc\" DevicePath \"\"" Jan 29 08:05:32 crc kubenswrapper[5017]: I0129 08:05:32.223671 5017 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5284dd1a-96ec-439f-a7f0-27df8a4cd656-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 08:05:32 crc kubenswrapper[5017]: I0129 08:05:32.346502 5017 generic.go:334] "Generic (PLEG): container finished" podID="5284dd1a-96ec-439f-a7f0-27df8a4cd656" containerID="ae9ff78e01547deab561284c84ff72c42142622e775e1e96f494d5c951288d40" exitCode=0 Jan 29 08:05:32 crc kubenswrapper[5017]: I0129 08:05:32.346574 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c74747687-njvq6" Jan 29 08:05:32 crc kubenswrapper[5017]: I0129 08:05:32.346584 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c74747687-njvq6" event={"ID":"5284dd1a-96ec-439f-a7f0-27df8a4cd656","Type":"ContainerDied","Data":"ae9ff78e01547deab561284c84ff72c42142622e775e1e96f494d5c951288d40"} Jan 29 08:05:32 crc kubenswrapper[5017]: I0129 08:05:32.346712 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c74747687-njvq6" event={"ID":"5284dd1a-96ec-439f-a7f0-27df8a4cd656","Type":"ContainerDied","Data":"6a2034ef5cf1d731f175527c036936a12fb038cc79a111f90dd4ef5f1d00b76b"} Jan 29 08:05:32 crc kubenswrapper[5017]: I0129 08:05:32.346781 5017 scope.go:117] "RemoveContainer" containerID="ae9ff78e01547deab561284c84ff72c42142622e775e1e96f494d5c951288d40" Jan 29 08:05:32 crc kubenswrapper[5017]: I0129 08:05:32.381439 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c74747687-njvq6"] Jan 29 08:05:32 crc kubenswrapper[5017]: I0129 08:05:32.382463 5017 scope.go:117] "RemoveContainer" containerID="a3f44aa9e3b368a30df0071c059f008e20be4843769ae4c59c5d2dc3c087a683" Jan 29 08:05:32 crc kubenswrapper[5017]: I0129 08:05:32.392561 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c74747687-njvq6"] Jan 29 08:05:32 crc kubenswrapper[5017]: I0129 08:05:32.402790 5017 scope.go:117] "RemoveContainer" containerID="ae9ff78e01547deab561284c84ff72c42142622e775e1e96f494d5c951288d40" Jan 29 08:05:32 crc kubenswrapper[5017]: E0129 08:05:32.404121 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae9ff78e01547deab561284c84ff72c42142622e775e1e96f494d5c951288d40\": container with ID starting with ae9ff78e01547deab561284c84ff72c42142622e775e1e96f494d5c951288d40 not found: ID does not exist" containerID="ae9ff78e01547deab561284c84ff72c42142622e775e1e96f494d5c951288d40" Jan 29 08:05:32 crc kubenswrapper[5017]: I0129 08:05:32.404178 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae9ff78e01547deab561284c84ff72c42142622e775e1e96f494d5c951288d40"} err="failed to get container status \"ae9ff78e01547deab561284c84ff72c42142622e775e1e96f494d5c951288d40\": rpc error: code = NotFound desc = could not find container \"ae9ff78e01547deab561284c84ff72c42142622e775e1e96f494d5c951288d40\": container with ID starting with ae9ff78e01547deab561284c84ff72c42142622e775e1e96f494d5c951288d40 not found: ID does not exist" Jan 29 08:05:32 crc kubenswrapper[5017]: I0129 08:05:32.404218 5017 scope.go:117] "RemoveContainer" containerID="a3f44aa9e3b368a30df0071c059f008e20be4843769ae4c59c5d2dc3c087a683" Jan 29 08:05:32 crc kubenswrapper[5017]: E0129 08:05:32.404865 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3f44aa9e3b368a30df0071c059f008e20be4843769ae4c59c5d2dc3c087a683\": container with ID starting with a3f44aa9e3b368a30df0071c059f008e20be4843769ae4c59c5d2dc3c087a683 not found: ID does not exist" containerID="a3f44aa9e3b368a30df0071c059f008e20be4843769ae4c59c5d2dc3c087a683" Jan 29 08:05:32 crc kubenswrapper[5017]: I0129 08:05:32.404922 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3f44aa9e3b368a30df0071c059f008e20be4843769ae4c59c5d2dc3c087a683"} err="failed to get container status 
\"a3f44aa9e3b368a30df0071c059f008e20be4843769ae4c59c5d2dc3c087a683\": rpc error: code = NotFound desc = could not find container \"a3f44aa9e3b368a30df0071c059f008e20be4843769ae4c59c5d2dc3c087a683\": container with ID starting with a3f44aa9e3b368a30df0071c059f008e20be4843769ae4c59c5d2dc3c087a683 not found: ID does not exist" Jan 29 08:05:33 crc kubenswrapper[5017]: I0129 08:05:33.508627 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ht5bd" Jan 29 08:05:33 crc kubenswrapper[5017]: I0129 08:05:33.583770 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ht5bd" Jan 29 08:05:33 crc kubenswrapper[5017]: I0129 08:05:33.750994 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ht5bd"] Jan 29 08:05:34 crc kubenswrapper[5017]: I0129 08:05:34.332081 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5284dd1a-96ec-439f-a7f0-27df8a4cd656" path="/var/lib/kubelet/pods/5284dd1a-96ec-439f-a7f0-27df8a4cd656/volumes" Jan 29 08:05:35 crc kubenswrapper[5017]: I0129 08:05:35.378840 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ht5bd" podUID="62a2b45e-53f3-4c67-ad22-c1c87017fade" containerName="registry-server" containerID="cri-o://c93420469d9952d63c03bd6b1c04bda6fdd7e512b39a2879d1f9c085172310ff" gracePeriod=2 Jan 29 08:05:35 crc kubenswrapper[5017]: I0129 08:05:35.857990 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ht5bd" Jan 29 08:05:36 crc kubenswrapper[5017]: I0129 08:05:36.006838 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62a2b45e-53f3-4c67-ad22-c1c87017fade-utilities\") pod \"62a2b45e-53f3-4c67-ad22-c1c87017fade\" (UID: \"62a2b45e-53f3-4c67-ad22-c1c87017fade\") " Jan 29 08:05:36 crc kubenswrapper[5017]: I0129 08:05:36.007029 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h2mk8\" (UniqueName: \"kubernetes.io/projected/62a2b45e-53f3-4c67-ad22-c1c87017fade-kube-api-access-h2mk8\") pod \"62a2b45e-53f3-4c67-ad22-c1c87017fade\" (UID: \"62a2b45e-53f3-4c67-ad22-c1c87017fade\") " Jan 29 08:05:36 crc kubenswrapper[5017]: I0129 08:05:36.007063 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62a2b45e-53f3-4c67-ad22-c1c87017fade-catalog-content\") pod \"62a2b45e-53f3-4c67-ad22-c1c87017fade\" (UID: \"62a2b45e-53f3-4c67-ad22-c1c87017fade\") " Jan 29 08:05:36 crc kubenswrapper[5017]: I0129 08:05:36.008151 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62a2b45e-53f3-4c67-ad22-c1c87017fade-utilities" (OuterVolumeSpecName: "utilities") pod "62a2b45e-53f3-4c67-ad22-c1c87017fade" (UID: "62a2b45e-53f3-4c67-ad22-c1c87017fade"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:05:36 crc kubenswrapper[5017]: I0129 08:05:36.018164 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62a2b45e-53f3-4c67-ad22-c1c87017fade-kube-api-access-h2mk8" (OuterVolumeSpecName: "kube-api-access-h2mk8") pod "62a2b45e-53f3-4c67-ad22-c1c87017fade" (UID: "62a2b45e-53f3-4c67-ad22-c1c87017fade"). 
InnerVolumeSpecName "kube-api-access-h2mk8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:05:36 crc kubenswrapper[5017]: I0129 08:05:36.109733 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h2mk8\" (UniqueName: \"kubernetes.io/projected/62a2b45e-53f3-4c67-ad22-c1c87017fade-kube-api-access-h2mk8\") on node \"crc\" DevicePath \"\"" Jan 29 08:05:36 crc kubenswrapper[5017]: I0129 08:05:36.109770 5017 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62a2b45e-53f3-4c67-ad22-c1c87017fade-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 08:05:36 crc kubenswrapper[5017]: I0129 08:05:36.132054 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62a2b45e-53f3-4c67-ad22-c1c87017fade-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "62a2b45e-53f3-4c67-ad22-c1c87017fade" (UID: "62a2b45e-53f3-4c67-ad22-c1c87017fade"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:05:36 crc kubenswrapper[5017]: I0129 08:05:36.213950 5017 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62a2b45e-53f3-4c67-ad22-c1c87017fade-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 08:05:36 crc kubenswrapper[5017]: I0129 08:05:36.398279 5017 generic.go:334] "Generic (PLEG): container finished" podID="62a2b45e-53f3-4c67-ad22-c1c87017fade" containerID="c93420469d9952d63c03bd6b1c04bda6fdd7e512b39a2879d1f9c085172310ff" exitCode=0 Jan 29 08:05:36 crc kubenswrapper[5017]: I0129 08:05:36.398444 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ht5bd" Jan 29 08:05:36 crc kubenswrapper[5017]: I0129 08:05:36.398463 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ht5bd" event={"ID":"62a2b45e-53f3-4c67-ad22-c1c87017fade","Type":"ContainerDied","Data":"c93420469d9952d63c03bd6b1c04bda6fdd7e512b39a2879d1f9c085172310ff"} Jan 29 08:05:36 crc kubenswrapper[5017]: I0129 08:05:36.398559 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ht5bd" event={"ID":"62a2b45e-53f3-4c67-ad22-c1c87017fade","Type":"ContainerDied","Data":"701f3bbbeb70184b5c174844dccb8781ff891d275800b796a1407a18ac7d4c2b"} Jan 29 08:05:36 crc kubenswrapper[5017]: I0129 08:05:36.398909 5017 scope.go:117] "RemoveContainer" containerID="c93420469d9952d63c03bd6b1c04bda6fdd7e512b39a2879d1f9c085172310ff" Jan 29 08:05:36 crc kubenswrapper[5017]: I0129 08:05:36.439691 5017 scope.go:117] "RemoveContainer" containerID="7e8aa9201d0584dcbe8db355beee9828c7074e1d56f52754d0c642aea4f4b1dd" Jan 29 08:05:36 crc kubenswrapper[5017]: I0129 08:05:36.451024 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ht5bd"] Jan 29 08:05:36 crc kubenswrapper[5017]: I0129 08:05:36.460522 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ht5bd"] Jan 29 08:05:36 crc kubenswrapper[5017]: I0129 08:05:36.470742 5017 scope.go:117] "RemoveContainer" containerID="0ed0af496246a3160f931e8b2cd09b153e4e12c85c8ee676081504956200139d" Jan 29 08:05:36 crc kubenswrapper[5017]: I0129 08:05:36.508501 5017 scope.go:117] "RemoveContainer" containerID="c93420469d9952d63c03bd6b1c04bda6fdd7e512b39a2879d1f9c085172310ff" Jan 29 08:05:36 crc kubenswrapper[5017]: E0129 
08:05:36.509261 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c93420469d9952d63c03bd6b1c04bda6fdd7e512b39a2879d1f9c085172310ff\": container with ID starting with c93420469d9952d63c03bd6b1c04bda6fdd7e512b39a2879d1f9c085172310ff not found: ID does not exist" containerID="c93420469d9952d63c03bd6b1c04bda6fdd7e512b39a2879d1f9c085172310ff" Jan 29 08:05:36 crc kubenswrapper[5017]: I0129 08:05:36.509318 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c93420469d9952d63c03bd6b1c04bda6fdd7e512b39a2879d1f9c085172310ff"} err="failed to get container status \"c93420469d9952d63c03bd6b1c04bda6fdd7e512b39a2879d1f9c085172310ff\": rpc error: code = NotFound desc = could not find container \"c93420469d9952d63c03bd6b1c04bda6fdd7e512b39a2879d1f9c085172310ff\": container with ID starting with c93420469d9952d63c03bd6b1c04bda6fdd7e512b39a2879d1f9c085172310ff not found: ID does not exist" Jan 29 08:05:36 crc kubenswrapper[5017]: I0129 08:05:36.509347 5017 scope.go:117] "RemoveContainer" containerID="7e8aa9201d0584dcbe8db355beee9828c7074e1d56f52754d0c642aea4f4b1dd" Jan 29 08:05:36 crc kubenswrapper[5017]: E0129 08:05:36.509856 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e8aa9201d0584dcbe8db355beee9828c7074e1d56f52754d0c642aea4f4b1dd\": container with ID starting with 7e8aa9201d0584dcbe8db355beee9828c7074e1d56f52754d0c642aea4f4b1dd not found: ID does not exist" containerID="7e8aa9201d0584dcbe8db355beee9828c7074e1d56f52754d0c642aea4f4b1dd" Jan 29 08:05:36 crc kubenswrapper[5017]: I0129 08:05:36.509892 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e8aa9201d0584dcbe8db355beee9828c7074e1d56f52754d0c642aea4f4b1dd"} err="failed to get container status \"7e8aa9201d0584dcbe8db355beee9828c7074e1d56f52754d0c642aea4f4b1dd\": rpc error: code = NotFound desc = could not find container \"7e8aa9201d0584dcbe8db355beee9828c7074e1d56f52754d0c642aea4f4b1dd\": container with ID starting with 7e8aa9201d0584dcbe8db355beee9828c7074e1d56f52754d0c642aea4f4b1dd not found: ID does not exist" Jan 29 08:05:36 crc kubenswrapper[5017]: I0129 08:05:36.509908 5017 scope.go:117] "RemoveContainer" containerID="0ed0af496246a3160f931e8b2cd09b153e4e12c85c8ee676081504956200139d" Jan 29 08:05:36 crc kubenswrapper[5017]: E0129 08:05:36.510342 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ed0af496246a3160f931e8b2cd09b153e4e12c85c8ee676081504956200139d\": container with ID starting with 0ed0af496246a3160f931e8b2cd09b153e4e12c85c8ee676081504956200139d not found: ID does not exist" containerID="0ed0af496246a3160f931e8b2cd09b153e4e12c85c8ee676081504956200139d" Jan 29 08:05:36 crc kubenswrapper[5017]: I0129 08:05:36.510380 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ed0af496246a3160f931e8b2cd09b153e4e12c85c8ee676081504956200139d"} err="failed to get container status \"0ed0af496246a3160f931e8b2cd09b153e4e12c85c8ee676081504956200139d\": rpc error: code = NotFound desc = could not find container \"0ed0af496246a3160f931e8b2cd09b153e4e12c85c8ee676081504956200139d\": container with ID starting with 0ed0af496246a3160f931e8b2cd09b153e4e12c85c8ee676081504956200139d not found: ID does not exist" Jan 29 08:05:38 crc kubenswrapper[5017]: I0129 08:05:38.334770 5017 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62a2b45e-53f3-4c67-ad22-c1c87017fade" path="/var/lib/kubelet/pods/62a2b45e-53f3-4c67-ad22-c1c87017fade/volumes" Jan 29 08:05:56 crc kubenswrapper[5017]: I0129 08:05:56.539769 5017 patch_prober.go:28] interesting pod/machine-config-daemon-895pl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 08:05:56 crc kubenswrapper[5017]: I0129 08:05:56.540784 5017 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 08:05:57 crc kubenswrapper[5017]: I0129 08:05:57.931161 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-c755b4dd6-c5f4t" Jan 29 08:05:57 crc kubenswrapper[5017]: I0129 08:05:57.935984 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-c755b4dd6-c5f4t" Jan 29 08:06:22 crc kubenswrapper[5017]: I0129 08:06:22.663194 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-v4l25"] Jan 29 08:06:22 crc kubenswrapper[5017]: E0129 08:06:22.664326 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62a2b45e-53f3-4c67-ad22-c1c87017fade" containerName="extract-content" Jan 29 08:06:22 crc kubenswrapper[5017]: I0129 08:06:22.664360 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="62a2b45e-53f3-4c67-ad22-c1c87017fade" containerName="extract-content" Jan 29 08:06:22 crc kubenswrapper[5017]: E0129 08:06:22.664389 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62a2b45e-53f3-4c67-ad22-c1c87017fade" containerName="extract-utilities" Jan 29 08:06:22 crc kubenswrapper[5017]: I0129 08:06:22.664398 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="62a2b45e-53f3-4c67-ad22-c1c87017fade" containerName="extract-utilities" Jan 29 08:06:22 crc kubenswrapper[5017]: E0129 08:06:22.664414 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62a2b45e-53f3-4c67-ad22-c1c87017fade" containerName="registry-server" Jan 29 08:06:22 crc kubenswrapper[5017]: I0129 08:06:22.664420 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="62a2b45e-53f3-4c67-ad22-c1c87017fade" containerName="registry-server" Jan 29 08:06:22 crc kubenswrapper[5017]: E0129 08:06:22.664429 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5284dd1a-96ec-439f-a7f0-27df8a4cd656" containerName="dnsmasq-dns" Jan 29 08:06:22 crc kubenswrapper[5017]: I0129 08:06:22.664435 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="5284dd1a-96ec-439f-a7f0-27df8a4cd656" containerName="dnsmasq-dns" Jan 29 08:06:22 crc kubenswrapper[5017]: E0129 08:06:22.664446 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5284dd1a-96ec-439f-a7f0-27df8a4cd656" containerName="init" Jan 29 08:06:22 crc kubenswrapper[5017]: I0129 08:06:22.664455 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="5284dd1a-96ec-439f-a7f0-27df8a4cd656" containerName="init" Jan 29 08:06:22 crc kubenswrapper[5017]: I0129 08:06:22.664635 5017 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="5284dd1a-96ec-439f-a7f0-27df8a4cd656" containerName="dnsmasq-dns" Jan 29 08:06:22 crc kubenswrapper[5017]: I0129 08:06:22.664645 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="62a2b45e-53f3-4c67-ad22-c1c87017fade" containerName="registry-server" Jan 29 08:06:22 crc kubenswrapper[5017]: I0129 08:06:22.665351 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-v4l25" Jan 29 08:06:22 crc kubenswrapper[5017]: I0129 08:06:22.677052 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-v4l25"] Jan 29 08:06:22 crc kubenswrapper[5017]: I0129 08:06:22.738171 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gf9kx\" (UniqueName: \"kubernetes.io/projected/d6489a54-ddc7-4aec-9a44-1230030f9481-kube-api-access-gf9kx\") pod \"nova-api-db-create-v4l25\" (UID: \"d6489a54-ddc7-4aec-9a44-1230030f9481\") " pod="openstack/nova-api-db-create-v4l25" Jan 29 08:06:22 crc kubenswrapper[5017]: I0129 08:06:22.738292 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d6489a54-ddc7-4aec-9a44-1230030f9481-operator-scripts\") pod \"nova-api-db-create-v4l25\" (UID: \"d6489a54-ddc7-4aec-9a44-1230030f9481\") " pod="openstack/nova-api-db-create-v4l25" Jan 29 08:06:22 crc kubenswrapper[5017]: I0129 08:06:22.744655 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-7qnrn"] Jan 29 08:06:22 crc kubenswrapper[5017]: I0129 08:06:22.746182 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-7qnrn" Jan 29 08:06:22 crc kubenswrapper[5017]: I0129 08:06:22.773794 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-9099-account-create-update-xsb2t"] Jan 29 08:06:22 crc kubenswrapper[5017]: I0129 08:06:22.775991 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-9099-account-create-update-xsb2t" Jan 29 08:06:22 crc kubenswrapper[5017]: I0129 08:06:22.784444 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 29 08:06:22 crc kubenswrapper[5017]: I0129 08:06:22.788867 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-7qnrn"] Jan 29 08:06:22 crc kubenswrapper[5017]: I0129 08:06:22.804108 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-9099-account-create-update-xsb2t"] Jan 29 08:06:22 crc kubenswrapper[5017]: I0129 08:06:22.850982 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2grld\" (UniqueName: \"kubernetes.io/projected/cf9739d1-fc25-47b0-ab03-fe97aa0f4450-kube-api-access-2grld\") pod \"nova-api-9099-account-create-update-xsb2t\" (UID: \"cf9739d1-fc25-47b0-ab03-fe97aa0f4450\") " pod="openstack/nova-api-9099-account-create-update-xsb2t" Jan 29 08:06:22 crc kubenswrapper[5017]: I0129 08:06:22.851115 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9pwl\" (UniqueName: \"kubernetes.io/projected/9387069d-a63d-4c3a-8eca-ec58392dbc4f-kube-api-access-w9pwl\") pod \"nova-cell0-db-create-7qnrn\" (UID: \"9387069d-a63d-4c3a-8eca-ec58392dbc4f\") " pod="openstack/nova-cell0-db-create-7qnrn" Jan 29 08:06:22 crc kubenswrapper[5017]: I0129 08:06:22.851286 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gf9kx\" (UniqueName: \"kubernetes.io/projected/d6489a54-ddc7-4aec-9a44-1230030f9481-kube-api-access-gf9kx\") pod \"nova-api-db-create-v4l25\" (UID: \"d6489a54-ddc7-4aec-9a44-1230030f9481\") " pod="openstack/nova-api-db-create-v4l25" Jan 29 08:06:22 crc kubenswrapper[5017]: I0129 08:06:22.851371 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d6489a54-ddc7-4aec-9a44-1230030f9481-operator-scripts\") pod \"nova-api-db-create-v4l25\" (UID: \"d6489a54-ddc7-4aec-9a44-1230030f9481\") " pod="openstack/nova-api-db-create-v4l25" Jan 29 08:06:22 crc kubenswrapper[5017]: I0129 08:06:22.851411 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9387069d-a63d-4c3a-8eca-ec58392dbc4f-operator-scripts\") pod \"nova-cell0-db-create-7qnrn\" (UID: \"9387069d-a63d-4c3a-8eca-ec58392dbc4f\") " pod="openstack/nova-cell0-db-create-7qnrn" Jan 29 08:06:22 crc kubenswrapper[5017]: I0129 08:06:22.851538 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf9739d1-fc25-47b0-ab03-fe97aa0f4450-operator-scripts\") pod \"nova-api-9099-account-create-update-xsb2t\" (UID: \"cf9739d1-fc25-47b0-ab03-fe97aa0f4450\") " pod="openstack/nova-api-9099-account-create-update-xsb2t" Jan 29 08:06:22 crc kubenswrapper[5017]: I0129 08:06:22.853027 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d6489a54-ddc7-4aec-9a44-1230030f9481-operator-scripts\") pod \"nova-api-db-create-v4l25\" (UID: \"d6489a54-ddc7-4aec-9a44-1230030f9481\") " pod="openstack/nova-api-db-create-v4l25" Jan 29 08:06:22 crc kubenswrapper[5017]: I0129 08:06:22.895737 5017 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/nova-cell1-db-create-c9sv7"] Jan 29 08:06:22 crc kubenswrapper[5017]: I0129 08:06:22.899157 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-c9sv7" Jan 29 08:06:22 crc kubenswrapper[5017]: I0129 08:06:22.923437 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gf9kx\" (UniqueName: \"kubernetes.io/projected/d6489a54-ddc7-4aec-9a44-1230030f9481-kube-api-access-gf9kx\") pod \"nova-api-db-create-v4l25\" (UID: \"d6489a54-ddc7-4aec-9a44-1230030f9481\") " pod="openstack/nova-api-db-create-v4l25" Jan 29 08:06:22 crc kubenswrapper[5017]: I0129 08:06:22.967521 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4gbl\" (UniqueName: \"kubernetes.io/projected/2347b806-7dec-460c-b4bc-aa0d3610d919-kube-api-access-q4gbl\") pod \"nova-cell1-db-create-c9sv7\" (UID: \"2347b806-7dec-460c-b4bc-aa0d3610d919\") " pod="openstack/nova-cell1-db-create-c9sv7" Jan 29 08:06:22 crc kubenswrapper[5017]: I0129 08:06:22.967597 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9387069d-a63d-4c3a-8eca-ec58392dbc4f-operator-scripts\") pod \"nova-cell0-db-create-7qnrn\" (UID: \"9387069d-a63d-4c3a-8eca-ec58392dbc4f\") " pod="openstack/nova-cell0-db-create-7qnrn" Jan 29 08:06:22 crc kubenswrapper[5017]: I0129 08:06:22.967656 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf9739d1-fc25-47b0-ab03-fe97aa0f4450-operator-scripts\") pod \"nova-api-9099-account-create-update-xsb2t\" (UID: \"cf9739d1-fc25-47b0-ab03-fe97aa0f4450\") " pod="openstack/nova-api-9099-account-create-update-xsb2t" Jan 29 08:06:22 crc kubenswrapper[5017]: I0129 08:06:22.967747 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2grld\" (UniqueName: \"kubernetes.io/projected/cf9739d1-fc25-47b0-ab03-fe97aa0f4450-kube-api-access-2grld\") pod \"nova-api-9099-account-create-update-xsb2t\" (UID: \"cf9739d1-fc25-47b0-ab03-fe97aa0f4450\") " pod="openstack/nova-api-9099-account-create-update-xsb2t" Jan 29 08:06:22 crc kubenswrapper[5017]: I0129 08:06:22.967766 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2347b806-7dec-460c-b4bc-aa0d3610d919-operator-scripts\") pod \"nova-cell1-db-create-c9sv7\" (UID: \"2347b806-7dec-460c-b4bc-aa0d3610d919\") " pod="openstack/nova-cell1-db-create-c9sv7" Jan 29 08:06:22 crc kubenswrapper[5017]: I0129 08:06:22.967804 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9pwl\" (UniqueName: \"kubernetes.io/projected/9387069d-a63d-4c3a-8eca-ec58392dbc4f-kube-api-access-w9pwl\") pod \"nova-cell0-db-create-7qnrn\" (UID: \"9387069d-a63d-4c3a-8eca-ec58392dbc4f\") " pod="openstack/nova-cell0-db-create-7qnrn" Jan 29 08:06:22 crc kubenswrapper[5017]: I0129 08:06:22.969818 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9387069d-a63d-4c3a-8eca-ec58392dbc4f-operator-scripts\") pod \"nova-cell0-db-create-7qnrn\" (UID: \"9387069d-a63d-4c3a-8eca-ec58392dbc4f\") " pod="openstack/nova-cell0-db-create-7qnrn" Jan 29 08:06:22 crc kubenswrapper[5017]: I0129 08:06:22.969980 5017 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-c9sv7"] Jan 29 08:06:22 crc kubenswrapper[5017]: I0129 08:06:22.972109 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf9739d1-fc25-47b0-ab03-fe97aa0f4450-operator-scripts\") pod \"nova-api-9099-account-create-update-xsb2t\" (UID: \"cf9739d1-fc25-47b0-ab03-fe97aa0f4450\") " pod="openstack/nova-api-9099-account-create-update-xsb2t" Jan 29 08:06:23 crc kubenswrapper[5017]: I0129 08:06:23.001466 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-68ed-account-create-update-flqjd"] Jan 29 08:06:23 crc kubenswrapper[5017]: I0129 08:06:23.003015 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2grld\" (UniqueName: \"kubernetes.io/projected/cf9739d1-fc25-47b0-ab03-fe97aa0f4450-kube-api-access-2grld\") pod \"nova-api-9099-account-create-update-xsb2t\" (UID: \"cf9739d1-fc25-47b0-ab03-fe97aa0f4450\") " pod="openstack/nova-api-9099-account-create-update-xsb2t" Jan 29 08:06:23 crc kubenswrapper[5017]: I0129 08:06:23.003546 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-v4l25" Jan 29 08:06:23 crc kubenswrapper[5017]: I0129 08:06:23.007644 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-68ed-account-create-update-flqjd" Jan 29 08:06:23 crc kubenswrapper[5017]: I0129 08:06:23.011086 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9pwl\" (UniqueName: \"kubernetes.io/projected/9387069d-a63d-4c3a-8eca-ec58392dbc4f-kube-api-access-w9pwl\") pod \"nova-cell0-db-create-7qnrn\" (UID: \"9387069d-a63d-4c3a-8eca-ec58392dbc4f\") " pod="openstack/nova-cell0-db-create-7qnrn" Jan 29 08:06:23 crc kubenswrapper[5017]: I0129 08:06:23.011141 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 29 08:06:23 crc kubenswrapper[5017]: I0129 08:06:23.044691 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-68ed-account-create-update-flqjd"] Jan 29 08:06:23 crc kubenswrapper[5017]: I0129 08:06:23.069450 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2347b806-7dec-460c-b4bc-aa0d3610d919-operator-scripts\") pod \"nova-cell1-db-create-c9sv7\" (UID: \"2347b806-7dec-460c-b4bc-aa0d3610d919\") " pod="openstack/nova-cell1-db-create-c9sv7" Jan 29 08:06:23 crc kubenswrapper[5017]: I0129 08:06:23.069566 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4gbl\" (UniqueName: \"kubernetes.io/projected/2347b806-7dec-460c-b4bc-aa0d3610d919-kube-api-access-q4gbl\") pod \"nova-cell1-db-create-c9sv7\" (UID: \"2347b806-7dec-460c-b4bc-aa0d3610d919\") " pod="openstack/nova-cell1-db-create-c9sv7" Jan 29 08:06:23 crc kubenswrapper[5017]: I0129 08:06:23.070244 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-7qnrn" Jan 29 08:06:23 crc kubenswrapper[5017]: I0129 08:06:23.070542 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2347b806-7dec-460c-b4bc-aa0d3610d919-operator-scripts\") pod \"nova-cell1-db-create-c9sv7\" (UID: \"2347b806-7dec-460c-b4bc-aa0d3610d919\") " pod="openstack/nova-cell1-db-create-c9sv7" Jan 29 08:06:23 crc kubenswrapper[5017]: I0129 08:06:23.091612 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4gbl\" (UniqueName: \"kubernetes.io/projected/2347b806-7dec-460c-b4bc-aa0d3610d919-kube-api-access-q4gbl\") pod \"nova-cell1-db-create-c9sv7\" (UID: \"2347b806-7dec-460c-b4bc-aa0d3610d919\") " pod="openstack/nova-cell1-db-create-c9sv7" Jan 29 08:06:23 crc kubenswrapper[5017]: I0129 08:06:23.101090 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-9099-account-create-update-xsb2t" Jan 29 08:06:23 crc kubenswrapper[5017]: I0129 08:06:23.180041 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfxjg\" (UniqueName: \"kubernetes.io/projected/51de0e5f-ded1-478a-aeeb-ace024d4e989-kube-api-access-wfxjg\") pod \"nova-cell0-68ed-account-create-update-flqjd\" (UID: \"51de0e5f-ded1-478a-aeeb-ace024d4e989\") " pod="openstack/nova-cell0-68ed-account-create-update-flqjd" Jan 29 08:06:23 crc kubenswrapper[5017]: I0129 08:06:23.180201 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51de0e5f-ded1-478a-aeeb-ace024d4e989-operator-scripts\") pod \"nova-cell0-68ed-account-create-update-flqjd\" (UID: \"51de0e5f-ded1-478a-aeeb-ace024d4e989\") " pod="openstack/nova-cell0-68ed-account-create-update-flqjd" Jan 29 08:06:23 crc kubenswrapper[5017]: I0129 08:06:23.183052 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-0b9e-account-create-update-qccg5"] Jan 29 08:06:23 crc kubenswrapper[5017]: I0129 08:06:23.185044 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-0b9e-account-create-update-qccg5" Jan 29 08:06:23 crc kubenswrapper[5017]: I0129 08:06:23.189098 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 29 08:06:23 crc kubenswrapper[5017]: I0129 08:06:23.206485 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-0b9e-account-create-update-qccg5"] Jan 29 08:06:23 crc kubenswrapper[5017]: I0129 08:06:23.283775 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wfxjg\" (UniqueName: \"kubernetes.io/projected/51de0e5f-ded1-478a-aeeb-ace024d4e989-kube-api-access-wfxjg\") pod \"nova-cell0-68ed-account-create-update-flqjd\" (UID: \"51de0e5f-ded1-478a-aeeb-ace024d4e989\") " pod="openstack/nova-cell0-68ed-account-create-update-flqjd" Jan 29 08:06:23 crc kubenswrapper[5017]: I0129 08:06:23.283883 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51de0e5f-ded1-478a-aeeb-ace024d4e989-operator-scripts\") pod \"nova-cell0-68ed-account-create-update-flqjd\" (UID: \"51de0e5f-ded1-478a-aeeb-ace024d4e989\") " pod="openstack/nova-cell0-68ed-account-create-update-flqjd" Jan 29 08:06:23 crc kubenswrapper[5017]: I0129 08:06:23.285041 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51de0e5f-ded1-478a-aeeb-ace024d4e989-operator-scripts\") pod \"nova-cell0-68ed-account-create-update-flqjd\" (UID: \"51de0e5f-ded1-478a-aeeb-ace024d4e989\") " pod="openstack/nova-cell0-68ed-account-create-update-flqjd" Jan 29 08:06:23 crc kubenswrapper[5017]: I0129 08:06:23.288211 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-c9sv7" Jan 29 08:06:23 crc kubenswrapper[5017]: I0129 08:06:23.304824 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wfxjg\" (UniqueName: \"kubernetes.io/projected/51de0e5f-ded1-478a-aeeb-ace024d4e989-kube-api-access-wfxjg\") pod \"nova-cell0-68ed-account-create-update-flqjd\" (UID: \"51de0e5f-ded1-478a-aeeb-ace024d4e989\") " pod="openstack/nova-cell0-68ed-account-create-update-flqjd" Jan 29 08:06:23 crc kubenswrapper[5017]: I0129 08:06:23.385475 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/36fd9878-0286-43b5-a979-72c8c4a4ef4a-operator-scripts\") pod \"nova-cell1-0b9e-account-create-update-qccg5\" (UID: \"36fd9878-0286-43b5-a979-72c8c4a4ef4a\") " pod="openstack/nova-cell1-0b9e-account-create-update-qccg5" Jan 29 08:06:23 crc kubenswrapper[5017]: I0129 08:06:23.387037 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrzx2\" (UniqueName: \"kubernetes.io/projected/36fd9878-0286-43b5-a979-72c8c4a4ef4a-kube-api-access-zrzx2\") pod \"nova-cell1-0b9e-account-create-update-qccg5\" (UID: \"36fd9878-0286-43b5-a979-72c8c4a4ef4a\") " pod="openstack/nova-cell1-0b9e-account-create-update-qccg5" Jan 29 08:06:23 crc kubenswrapper[5017]: I0129 08:06:23.489543 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/36fd9878-0286-43b5-a979-72c8c4a4ef4a-operator-scripts\") pod \"nova-cell1-0b9e-account-create-update-qccg5\" (UID: \"36fd9878-0286-43b5-a979-72c8c4a4ef4a\") " pod="openstack/nova-cell1-0b9e-account-create-update-qccg5" Jan 29 08:06:23 crc kubenswrapper[5017]: I0129 08:06:23.489616 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrzx2\" (UniqueName: \"kubernetes.io/projected/36fd9878-0286-43b5-a979-72c8c4a4ef4a-kube-api-access-zrzx2\") pod \"nova-cell1-0b9e-account-create-update-qccg5\" (UID: \"36fd9878-0286-43b5-a979-72c8c4a4ef4a\") " pod="openstack/nova-cell1-0b9e-account-create-update-qccg5" Jan 29 08:06:23 crc kubenswrapper[5017]: I0129 08:06:23.490616 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-68ed-account-create-update-flqjd" Jan 29 08:06:23 crc kubenswrapper[5017]: I0129 08:06:23.491852 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/36fd9878-0286-43b5-a979-72c8c4a4ef4a-operator-scripts\") pod \"nova-cell1-0b9e-account-create-update-qccg5\" (UID: \"36fd9878-0286-43b5-a979-72c8c4a4ef4a\") " pod="openstack/nova-cell1-0b9e-account-create-update-qccg5" Jan 29 08:06:23 crc kubenswrapper[5017]: I0129 08:06:23.512660 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrzx2\" (UniqueName: \"kubernetes.io/projected/36fd9878-0286-43b5-a979-72c8c4a4ef4a-kube-api-access-zrzx2\") pod \"nova-cell1-0b9e-account-create-update-qccg5\" (UID: \"36fd9878-0286-43b5-a979-72c8c4a4ef4a\") " pod="openstack/nova-cell1-0b9e-account-create-update-qccg5" Jan 29 08:06:23 crc kubenswrapper[5017]: I0129 08:06:23.521642 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-0b9e-account-create-update-qccg5" Jan 29 08:06:23 crc kubenswrapper[5017]: I0129 08:06:23.619089 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-v4l25"] Jan 29 08:06:23 crc kubenswrapper[5017]: I0129 08:06:23.677842 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-c9sv7"] Jan 29 08:06:23 crc kubenswrapper[5017]: I0129 08:06:23.769486 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-7qnrn"] Jan 29 08:06:23 crc kubenswrapper[5017]: I0129 08:06:23.783926 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-9099-account-create-update-xsb2t"] Jan 29 08:06:23 crc kubenswrapper[5017]: I0129 08:06:23.833810 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-68ed-account-create-update-flqjd"] Jan 29 08:06:23 crc kubenswrapper[5017]: W0129 08:06:23.877213 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod51de0e5f_ded1_478a_aeeb_ace024d4e989.slice/crio-1483cca40da8c16271b1e6b8aa6b92851c37051e549a8194a2617db17e04f859 WatchSource:0}: Error finding container 1483cca40da8c16271b1e6b8aa6b92851c37051e549a8194a2617db17e04f859: Status 404 returned error can't find the container with id 1483cca40da8c16271b1e6b8aa6b92851c37051e549a8194a2617db17e04f859 Jan 29 08:06:24 crc kubenswrapper[5017]: I0129 08:06:24.001332 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-9099-account-create-update-xsb2t" event={"ID":"cf9739d1-fc25-47b0-ab03-fe97aa0f4450","Type":"ContainerStarted","Data":"f9e43341797e7d79c881af193fcc36fa1f11dbb64392c0af25adaa53dabdc995"} Jan 29 08:06:24 crc kubenswrapper[5017]: I0129 08:06:24.003220 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-7qnrn" event={"ID":"9387069d-a63d-4c3a-8eca-ec58392dbc4f","Type":"ContainerStarted","Data":"2f50f466a9333e39db750051a735cef3b460b1b825ff12feebcd227e55af437c"} Jan 29 08:06:24 crc kubenswrapper[5017]: I0129 08:06:24.005159 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-v4l25" event={"ID":"d6489a54-ddc7-4aec-9a44-1230030f9481","Type":"ContainerStarted","Data":"963278024b402400ecac71127df61e12f1cbb3581f17e9ec8143518a976b103e"} Jan 29 08:06:24 crc kubenswrapper[5017]: I0129 08:06:24.005235 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-v4l25" event={"ID":"d6489a54-ddc7-4aec-9a44-1230030f9481","Type":"ContainerStarted","Data":"28d4a6f0262eefa278fabe2d0514b8fb20bf7a7f3bf61f10369f3289bec6f407"} Jan 29 08:06:24 crc kubenswrapper[5017]: I0129 08:06:24.008384 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-68ed-account-create-update-flqjd" event={"ID":"51de0e5f-ded1-478a-aeeb-ace024d4e989","Type":"ContainerStarted","Data":"1483cca40da8c16271b1e6b8aa6b92851c37051e549a8194a2617db17e04f859"} Jan 29 08:06:24 crc kubenswrapper[5017]: I0129 08:06:24.009946 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-c9sv7" event={"ID":"2347b806-7dec-460c-b4bc-aa0d3610d919","Type":"ContainerStarted","Data":"a21b4a42ef4e7c66b2d92df3913bb9b86c126aca823e761dfdf805640a9abf45"} Jan 29 08:06:24 crc kubenswrapper[5017]: I0129 08:06:24.026590 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-v4l25" 
podStartSLOduration=2.026565605 podStartE2EDuration="2.026565605s" podCreationTimestamp="2026-01-29 08:06:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:06:24.024544666 +0000 UTC m=+5470.398992276" watchObservedRunningTime="2026-01-29 08:06:24.026565605 +0000 UTC m=+5470.401013215" Jan 29 08:06:24 crc kubenswrapper[5017]: I0129 08:06:24.131435 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-0b9e-account-create-update-qccg5"] Jan 29 08:06:25 crc kubenswrapper[5017]: I0129 08:06:25.021484 5017 generic.go:334] "Generic (PLEG): container finished" podID="51de0e5f-ded1-478a-aeeb-ace024d4e989" containerID="b5c24ed1eb0847aa7b57d86e669db3e50b586126b0755812de3bf05fbfc9166b" exitCode=0 Jan 29 08:06:25 crc kubenswrapper[5017]: I0129 08:06:25.021548 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-68ed-account-create-update-flqjd" event={"ID":"51de0e5f-ded1-478a-aeeb-ace024d4e989","Type":"ContainerDied","Data":"b5c24ed1eb0847aa7b57d86e669db3e50b586126b0755812de3bf05fbfc9166b"} Jan 29 08:06:25 crc kubenswrapper[5017]: I0129 08:06:25.024198 5017 generic.go:334] "Generic (PLEG): container finished" podID="36fd9878-0286-43b5-a979-72c8c4a4ef4a" containerID="4d34f4a24875a6ba96c5b2f047a613fbe8db3198d78ec6bde6d0f867ba063ef5" exitCode=0 Jan 29 08:06:25 crc kubenswrapper[5017]: I0129 08:06:25.024258 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-0b9e-account-create-update-qccg5" event={"ID":"36fd9878-0286-43b5-a979-72c8c4a4ef4a","Type":"ContainerDied","Data":"4d34f4a24875a6ba96c5b2f047a613fbe8db3198d78ec6bde6d0f867ba063ef5"} Jan 29 08:06:25 crc kubenswrapper[5017]: I0129 08:06:25.024827 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-0b9e-account-create-update-qccg5" event={"ID":"36fd9878-0286-43b5-a979-72c8c4a4ef4a","Type":"ContainerStarted","Data":"1d780f9cb6193f4b93eb0cedcbc6656fdf2316a644958702806399139c35b212"} Jan 29 08:06:25 crc kubenswrapper[5017]: I0129 08:06:25.026235 5017 generic.go:334] "Generic (PLEG): container finished" podID="2347b806-7dec-460c-b4bc-aa0d3610d919" containerID="de5885b812adb71f54e4a354a41a99de98154a421b3ca4af7edd43cafa3b610a" exitCode=0 Jan 29 08:06:25 crc kubenswrapper[5017]: I0129 08:06:25.026373 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-c9sv7" event={"ID":"2347b806-7dec-460c-b4bc-aa0d3610d919","Type":"ContainerDied","Data":"de5885b812adb71f54e4a354a41a99de98154a421b3ca4af7edd43cafa3b610a"} Jan 29 08:06:25 crc kubenswrapper[5017]: I0129 08:06:25.027921 5017 generic.go:334] "Generic (PLEG): container finished" podID="cf9739d1-fc25-47b0-ab03-fe97aa0f4450" containerID="0afd2fc41daf290e3ee6037a685a4991454a4c5ab6cafb1cd0eafda127f6ecd1" exitCode=0 Jan 29 08:06:25 crc kubenswrapper[5017]: I0129 08:06:25.028040 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-9099-account-create-update-xsb2t" event={"ID":"cf9739d1-fc25-47b0-ab03-fe97aa0f4450","Type":"ContainerDied","Data":"0afd2fc41daf290e3ee6037a685a4991454a4c5ab6cafb1cd0eafda127f6ecd1"} Jan 29 08:06:25 crc kubenswrapper[5017]: I0129 08:06:25.030092 5017 generic.go:334] "Generic (PLEG): container finished" podID="9387069d-a63d-4c3a-8eca-ec58392dbc4f" containerID="e158ce8600919676fcf3d9f970a0145bfa2d4d34d1217e6614f74c1436018d7a" exitCode=0 Jan 29 08:06:25 crc kubenswrapper[5017]: I0129 08:06:25.030168 5017 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-7qnrn" event={"ID":"9387069d-a63d-4c3a-8eca-ec58392dbc4f","Type":"ContainerDied","Data":"e158ce8600919676fcf3d9f970a0145bfa2d4d34d1217e6614f74c1436018d7a"} Jan 29 08:06:25 crc kubenswrapper[5017]: I0129 08:06:25.032073 5017 generic.go:334] "Generic (PLEG): container finished" podID="d6489a54-ddc7-4aec-9a44-1230030f9481" containerID="963278024b402400ecac71127df61e12f1cbb3581f17e9ec8143518a976b103e" exitCode=0 Jan 29 08:06:25 crc kubenswrapper[5017]: I0129 08:06:25.032126 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-v4l25" event={"ID":"d6489a54-ddc7-4aec-9a44-1230030f9481","Type":"ContainerDied","Data":"963278024b402400ecac71127df61e12f1cbb3581f17e9ec8143518a976b103e"} Jan 29 08:06:26 crc kubenswrapper[5017]: I0129 08:06:26.418499 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-9099-account-create-update-xsb2t" Jan 29 08:06:26 crc kubenswrapper[5017]: I0129 08:06:26.445638 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-0b9e-account-create-update-qccg5" Jan 29 08:06:26 crc kubenswrapper[5017]: I0129 08:06:26.539318 5017 patch_prober.go:28] interesting pod/machine-config-daemon-895pl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 08:06:26 crc kubenswrapper[5017]: I0129 08:06:26.539430 5017 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 08:06:26 crc kubenswrapper[5017]: I0129 08:06:26.566080 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2grld\" (UniqueName: \"kubernetes.io/projected/cf9739d1-fc25-47b0-ab03-fe97aa0f4450-kube-api-access-2grld\") pod \"cf9739d1-fc25-47b0-ab03-fe97aa0f4450\" (UID: \"cf9739d1-fc25-47b0-ab03-fe97aa0f4450\") " Jan 29 08:06:26 crc kubenswrapper[5017]: I0129 08:06:26.566168 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zrzx2\" (UniqueName: \"kubernetes.io/projected/36fd9878-0286-43b5-a979-72c8c4a4ef4a-kube-api-access-zrzx2\") pod \"36fd9878-0286-43b5-a979-72c8c4a4ef4a\" (UID: \"36fd9878-0286-43b5-a979-72c8c4a4ef4a\") " Jan 29 08:06:26 crc kubenswrapper[5017]: I0129 08:06:26.566283 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/36fd9878-0286-43b5-a979-72c8c4a4ef4a-operator-scripts\") pod \"36fd9878-0286-43b5-a979-72c8c4a4ef4a\" (UID: \"36fd9878-0286-43b5-a979-72c8c4a4ef4a\") " Jan 29 08:06:26 crc kubenswrapper[5017]: I0129 08:06:26.566347 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf9739d1-fc25-47b0-ab03-fe97aa0f4450-operator-scripts\") pod \"cf9739d1-fc25-47b0-ab03-fe97aa0f4450\" (UID: \"cf9739d1-fc25-47b0-ab03-fe97aa0f4450\") " Jan 29 08:06:26 crc kubenswrapper[5017]: I0129 08:06:26.567280 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/cf9739d1-fc25-47b0-ab03-fe97aa0f4450-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cf9739d1-fc25-47b0-ab03-fe97aa0f4450" (UID: "cf9739d1-fc25-47b0-ab03-fe97aa0f4450"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:06:26 crc kubenswrapper[5017]: I0129 08:06:26.568178 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36fd9878-0286-43b5-a979-72c8c4a4ef4a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "36fd9878-0286-43b5-a979-72c8c4a4ef4a" (UID: "36fd9878-0286-43b5-a979-72c8c4a4ef4a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:06:26 crc kubenswrapper[5017]: I0129 08:06:26.574146 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36fd9878-0286-43b5-a979-72c8c4a4ef4a-kube-api-access-zrzx2" (OuterVolumeSpecName: "kube-api-access-zrzx2") pod "36fd9878-0286-43b5-a979-72c8c4a4ef4a" (UID: "36fd9878-0286-43b5-a979-72c8c4a4ef4a"). InnerVolumeSpecName "kube-api-access-zrzx2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:06:26 crc kubenswrapper[5017]: I0129 08:06:26.574429 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf9739d1-fc25-47b0-ab03-fe97aa0f4450-kube-api-access-2grld" (OuterVolumeSpecName: "kube-api-access-2grld") pod "cf9739d1-fc25-47b0-ab03-fe97aa0f4450" (UID: "cf9739d1-fc25-47b0-ab03-fe97aa0f4450"). InnerVolumeSpecName "kube-api-access-2grld". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:06:26 crc kubenswrapper[5017]: I0129 08:06:26.666641 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-c9sv7" Jan 29 08:06:26 crc kubenswrapper[5017]: I0129 08:06:26.668548 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2grld\" (UniqueName: \"kubernetes.io/projected/cf9739d1-fc25-47b0-ab03-fe97aa0f4450-kube-api-access-2grld\") on node \"crc\" DevicePath \"\"" Jan 29 08:06:26 crc kubenswrapper[5017]: I0129 08:06:26.668592 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zrzx2\" (UniqueName: \"kubernetes.io/projected/36fd9878-0286-43b5-a979-72c8c4a4ef4a-kube-api-access-zrzx2\") on node \"crc\" DevicePath \"\"" Jan 29 08:06:26 crc kubenswrapper[5017]: I0129 08:06:26.668604 5017 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/36fd9878-0286-43b5-a979-72c8c4a4ef4a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 08:06:26 crc kubenswrapper[5017]: I0129 08:06:26.668615 5017 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf9739d1-fc25-47b0-ab03-fe97aa0f4450-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 08:06:26 crc kubenswrapper[5017]: I0129 08:06:26.673408 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-68ed-account-create-update-flqjd" Jan 29 08:06:26 crc kubenswrapper[5017]: I0129 08:06:26.684029 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-7qnrn" Jan 29 08:06:26 crc kubenswrapper[5017]: I0129 08:06:26.701534 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-v4l25" Jan 29 08:06:26 crc kubenswrapper[5017]: I0129 08:06:26.769260 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wfxjg\" (UniqueName: \"kubernetes.io/projected/51de0e5f-ded1-478a-aeeb-ace024d4e989-kube-api-access-wfxjg\") pod \"51de0e5f-ded1-478a-aeeb-ace024d4e989\" (UID: \"51de0e5f-ded1-478a-aeeb-ace024d4e989\") " Jan 29 08:06:26 crc kubenswrapper[5017]: I0129 08:06:26.769340 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51de0e5f-ded1-478a-aeeb-ace024d4e989-operator-scripts\") pod \"51de0e5f-ded1-478a-aeeb-ace024d4e989\" (UID: \"51de0e5f-ded1-478a-aeeb-ace024d4e989\") " Jan 29 08:06:26 crc kubenswrapper[5017]: I0129 08:06:26.769376 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4gbl\" (UniqueName: \"kubernetes.io/projected/2347b806-7dec-460c-b4bc-aa0d3610d919-kube-api-access-q4gbl\") pod \"2347b806-7dec-460c-b4bc-aa0d3610d919\" (UID: \"2347b806-7dec-460c-b4bc-aa0d3610d919\") " Jan 29 08:06:26 crc kubenswrapper[5017]: I0129 08:06:26.769920 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2347b806-7dec-460c-b4bc-aa0d3610d919-operator-scripts\") pod \"2347b806-7dec-460c-b4bc-aa0d3610d919\" (UID: \"2347b806-7dec-460c-b4bc-aa0d3610d919\") " Jan 29 08:06:26 crc kubenswrapper[5017]: I0129 08:06:26.770191 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51de0e5f-ded1-478a-aeeb-ace024d4e989-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "51de0e5f-ded1-478a-aeeb-ace024d4e989" (UID: "51de0e5f-ded1-478a-aeeb-ace024d4e989"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:06:26 crc kubenswrapper[5017]: I0129 08:06:26.770610 5017 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51de0e5f-ded1-478a-aeeb-ace024d4e989-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 08:06:26 crc kubenswrapper[5017]: I0129 08:06:26.770786 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2347b806-7dec-460c-b4bc-aa0d3610d919-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2347b806-7dec-460c-b4bc-aa0d3610d919" (UID: "2347b806-7dec-460c-b4bc-aa0d3610d919"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:06:26 crc kubenswrapper[5017]: I0129 08:06:26.773394 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51de0e5f-ded1-478a-aeeb-ace024d4e989-kube-api-access-wfxjg" (OuterVolumeSpecName: "kube-api-access-wfxjg") pod "51de0e5f-ded1-478a-aeeb-ace024d4e989" (UID: "51de0e5f-ded1-478a-aeeb-ace024d4e989"). InnerVolumeSpecName "kube-api-access-wfxjg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:06:26 crc kubenswrapper[5017]: I0129 08:06:26.774157 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2347b806-7dec-460c-b4bc-aa0d3610d919-kube-api-access-q4gbl" (OuterVolumeSpecName: "kube-api-access-q4gbl") pod "2347b806-7dec-460c-b4bc-aa0d3610d919" (UID: "2347b806-7dec-460c-b4bc-aa0d3610d919"). 
InnerVolumeSpecName "kube-api-access-q4gbl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:06:26 crc kubenswrapper[5017]: I0129 08:06:26.872007 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9387069d-a63d-4c3a-8eca-ec58392dbc4f-operator-scripts\") pod \"9387069d-a63d-4c3a-8eca-ec58392dbc4f\" (UID: \"9387069d-a63d-4c3a-8eca-ec58392dbc4f\") " Jan 29 08:06:26 crc kubenswrapper[5017]: I0129 08:06:26.872172 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf9kx\" (UniqueName: \"kubernetes.io/projected/d6489a54-ddc7-4aec-9a44-1230030f9481-kube-api-access-gf9kx\") pod \"d6489a54-ddc7-4aec-9a44-1230030f9481\" (UID: \"d6489a54-ddc7-4aec-9a44-1230030f9481\") " Jan 29 08:06:26 crc kubenswrapper[5017]: I0129 08:06:26.872324 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d6489a54-ddc7-4aec-9a44-1230030f9481-operator-scripts\") pod \"d6489a54-ddc7-4aec-9a44-1230030f9481\" (UID: \"d6489a54-ddc7-4aec-9a44-1230030f9481\") " Jan 29 08:06:26 crc kubenswrapper[5017]: I0129 08:06:26.872725 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9387069d-a63d-4c3a-8eca-ec58392dbc4f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9387069d-a63d-4c3a-8eca-ec58392dbc4f" (UID: "9387069d-a63d-4c3a-8eca-ec58392dbc4f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:06:26 crc kubenswrapper[5017]: I0129 08:06:26.872887 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6489a54-ddc7-4aec-9a44-1230030f9481-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d6489a54-ddc7-4aec-9a44-1230030f9481" (UID: "d6489a54-ddc7-4aec-9a44-1230030f9481"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:06:26 crc kubenswrapper[5017]: I0129 08:06:26.873030 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9pwl\" (UniqueName: \"kubernetes.io/projected/9387069d-a63d-4c3a-8eca-ec58392dbc4f-kube-api-access-w9pwl\") pod \"9387069d-a63d-4c3a-8eca-ec58392dbc4f\" (UID: \"9387069d-a63d-4c3a-8eca-ec58392dbc4f\") " Jan 29 08:06:26 crc kubenswrapper[5017]: I0129 08:06:26.873759 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wfxjg\" (UniqueName: \"kubernetes.io/projected/51de0e5f-ded1-478a-aeeb-ace024d4e989-kube-api-access-wfxjg\") on node \"crc\" DevicePath \"\"" Jan 29 08:06:26 crc kubenswrapper[5017]: I0129 08:06:26.873781 5017 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d6489a54-ddc7-4aec-9a44-1230030f9481-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 08:06:26 crc kubenswrapper[5017]: I0129 08:06:26.873793 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q4gbl\" (UniqueName: \"kubernetes.io/projected/2347b806-7dec-460c-b4bc-aa0d3610d919-kube-api-access-q4gbl\") on node \"crc\" DevicePath \"\"" Jan 29 08:06:26 crc kubenswrapper[5017]: I0129 08:06:26.873804 5017 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9387069d-a63d-4c3a-8eca-ec58392dbc4f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 08:06:26 crc kubenswrapper[5017]: I0129 08:06:26.873813 5017 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2347b806-7dec-460c-b4bc-aa0d3610d919-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 08:06:26 crc kubenswrapper[5017]: I0129 08:06:26.875537 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6489a54-ddc7-4aec-9a44-1230030f9481-kube-api-access-gf9kx" (OuterVolumeSpecName: "kube-api-access-gf9kx") pod "d6489a54-ddc7-4aec-9a44-1230030f9481" (UID: "d6489a54-ddc7-4aec-9a44-1230030f9481"). InnerVolumeSpecName "kube-api-access-gf9kx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:06:26 crc kubenswrapper[5017]: I0129 08:06:26.876396 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9387069d-a63d-4c3a-8eca-ec58392dbc4f-kube-api-access-w9pwl" (OuterVolumeSpecName: "kube-api-access-w9pwl") pod "9387069d-a63d-4c3a-8eca-ec58392dbc4f" (UID: "9387069d-a63d-4c3a-8eca-ec58392dbc4f"). InnerVolumeSpecName "kube-api-access-w9pwl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:06:26 crc kubenswrapper[5017]: I0129 08:06:26.975902 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf9kx\" (UniqueName: \"kubernetes.io/projected/d6489a54-ddc7-4aec-9a44-1230030f9481-kube-api-access-gf9kx\") on node \"crc\" DevicePath \"\"" Jan 29 08:06:26 crc kubenswrapper[5017]: I0129 08:06:26.975939 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9pwl\" (UniqueName: \"kubernetes.io/projected/9387069d-a63d-4c3a-8eca-ec58392dbc4f-kube-api-access-w9pwl\") on node \"crc\" DevicePath \"\"" Jan 29 08:06:27 crc kubenswrapper[5017]: I0129 08:06:27.052874 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-0b9e-account-create-update-qccg5" event={"ID":"36fd9878-0286-43b5-a979-72c8c4a4ef4a","Type":"ContainerDied","Data":"1d780f9cb6193f4b93eb0cedcbc6656fdf2316a644958702806399139c35b212"} Jan 29 08:06:27 crc kubenswrapper[5017]: I0129 08:06:27.053074 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d780f9cb6193f4b93eb0cedcbc6656fdf2316a644958702806399139c35b212" Jan 29 08:06:27 crc kubenswrapper[5017]: I0129 08:06:27.053465 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-0b9e-account-create-update-qccg5" Jan 29 08:06:27 crc kubenswrapper[5017]: I0129 08:06:27.061754 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-c9sv7" event={"ID":"2347b806-7dec-460c-b4bc-aa0d3610d919","Type":"ContainerDied","Data":"a21b4a42ef4e7c66b2d92df3913bb9b86c126aca823e761dfdf805640a9abf45"} Jan 29 08:06:27 crc kubenswrapper[5017]: I0129 08:06:27.061840 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a21b4a42ef4e7c66b2d92df3913bb9b86c126aca823e761dfdf805640a9abf45" Jan 29 08:06:27 crc kubenswrapper[5017]: I0129 08:06:27.061775 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-c9sv7" Jan 29 08:06:27 crc kubenswrapper[5017]: I0129 08:06:27.071527 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-9099-account-create-update-xsb2t" Jan 29 08:06:27 crc kubenswrapper[5017]: I0129 08:06:27.071828 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-9099-account-create-update-xsb2t" event={"ID":"cf9739d1-fc25-47b0-ab03-fe97aa0f4450","Type":"ContainerDied","Data":"f9e43341797e7d79c881af193fcc36fa1f11dbb64392c0af25adaa53dabdc995"} Jan 29 08:06:27 crc kubenswrapper[5017]: I0129 08:06:27.072148 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f9e43341797e7d79c881af193fcc36fa1f11dbb64392c0af25adaa53dabdc995" Jan 29 08:06:27 crc kubenswrapper[5017]: I0129 08:06:27.077321 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-7qnrn" event={"ID":"9387069d-a63d-4c3a-8eca-ec58392dbc4f","Type":"ContainerDied","Data":"2f50f466a9333e39db750051a735cef3b460b1b825ff12feebcd227e55af437c"} Jan 29 08:06:27 crc kubenswrapper[5017]: I0129 08:06:27.077382 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f50f466a9333e39db750051a735cef3b460b1b825ff12feebcd227e55af437c" Jan 29 08:06:27 crc kubenswrapper[5017]: I0129 08:06:27.077386 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-7qnrn" Jan 29 08:06:27 crc kubenswrapper[5017]: I0129 08:06:27.089149 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-v4l25" event={"ID":"d6489a54-ddc7-4aec-9a44-1230030f9481","Type":"ContainerDied","Data":"28d4a6f0262eefa278fabe2d0514b8fb20bf7a7f3bf61f10369f3289bec6f407"} Jan 29 08:06:27 crc kubenswrapper[5017]: I0129 08:06:27.089228 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="28d4a6f0262eefa278fabe2d0514b8fb20bf7a7f3bf61f10369f3289bec6f407" Jan 29 08:06:27 crc kubenswrapper[5017]: I0129 08:06:27.089174 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-v4l25" Jan 29 08:06:27 crc kubenswrapper[5017]: I0129 08:06:27.100319 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-68ed-account-create-update-flqjd" event={"ID":"51de0e5f-ded1-478a-aeeb-ace024d4e989","Type":"ContainerDied","Data":"1483cca40da8c16271b1e6b8aa6b92851c37051e549a8194a2617db17e04f859"} Jan 29 08:06:27 crc kubenswrapper[5017]: I0129 08:06:27.100391 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1483cca40da8c16271b1e6b8aa6b92851c37051e549a8194a2617db17e04f859" Jan 29 08:06:27 crc kubenswrapper[5017]: I0129 08:06:27.100528 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-68ed-account-create-update-flqjd" Jan 29 08:06:28 crc kubenswrapper[5017]: I0129 08:06:28.162568 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-gstsq"] Jan 29 08:06:28 crc kubenswrapper[5017]: E0129 08:06:28.163460 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2347b806-7dec-460c-b4bc-aa0d3610d919" containerName="mariadb-database-create" Jan 29 08:06:28 crc kubenswrapper[5017]: I0129 08:06:28.163479 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="2347b806-7dec-460c-b4bc-aa0d3610d919" containerName="mariadb-database-create" Jan 29 08:06:28 crc kubenswrapper[5017]: E0129 08:06:28.163494 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51de0e5f-ded1-478a-aeeb-ace024d4e989" containerName="mariadb-account-create-update" Jan 29 08:06:28 crc kubenswrapper[5017]: I0129 08:06:28.163504 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="51de0e5f-ded1-478a-aeeb-ace024d4e989" containerName="mariadb-account-create-update" Jan 29 08:06:28 crc kubenswrapper[5017]: E0129 08:06:28.163524 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9387069d-a63d-4c3a-8eca-ec58392dbc4f" containerName="mariadb-database-create" Jan 29 08:06:28 crc kubenswrapper[5017]: I0129 08:06:28.163533 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="9387069d-a63d-4c3a-8eca-ec58392dbc4f" containerName="mariadb-database-create" Jan 29 08:06:28 crc kubenswrapper[5017]: E0129 08:06:28.163553 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6489a54-ddc7-4aec-9a44-1230030f9481" containerName="mariadb-database-create" Jan 29 08:06:28 crc kubenswrapper[5017]: I0129 08:06:28.163562 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6489a54-ddc7-4aec-9a44-1230030f9481" containerName="mariadb-database-create" Jan 29 08:06:28 crc kubenswrapper[5017]: E0129 08:06:28.163577 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36fd9878-0286-43b5-a979-72c8c4a4ef4a" 
containerName="mariadb-account-create-update" Jan 29 08:06:28 crc kubenswrapper[5017]: I0129 08:06:28.163586 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="36fd9878-0286-43b5-a979-72c8c4a4ef4a" containerName="mariadb-account-create-update" Jan 29 08:06:28 crc kubenswrapper[5017]: E0129 08:06:28.163601 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf9739d1-fc25-47b0-ab03-fe97aa0f4450" containerName="mariadb-account-create-update" Jan 29 08:06:28 crc kubenswrapper[5017]: I0129 08:06:28.163610 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf9739d1-fc25-47b0-ab03-fe97aa0f4450" containerName="mariadb-account-create-update" Jan 29 08:06:28 crc kubenswrapper[5017]: I0129 08:06:28.163848 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf9739d1-fc25-47b0-ab03-fe97aa0f4450" containerName="mariadb-account-create-update" Jan 29 08:06:28 crc kubenswrapper[5017]: I0129 08:06:28.163889 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="2347b806-7dec-460c-b4bc-aa0d3610d919" containerName="mariadb-database-create" Jan 29 08:06:28 crc kubenswrapper[5017]: I0129 08:06:28.163899 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="9387069d-a63d-4c3a-8eca-ec58392dbc4f" containerName="mariadb-database-create" Jan 29 08:06:28 crc kubenswrapper[5017]: I0129 08:06:28.163913 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="51de0e5f-ded1-478a-aeeb-ace024d4e989" containerName="mariadb-account-create-update" Jan 29 08:06:28 crc kubenswrapper[5017]: I0129 08:06:28.163928 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6489a54-ddc7-4aec-9a44-1230030f9481" containerName="mariadb-database-create" Jan 29 08:06:28 crc kubenswrapper[5017]: I0129 08:06:28.163940 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="36fd9878-0286-43b5-a979-72c8c4a4ef4a" containerName="mariadb-account-create-update" Jan 29 08:06:28 crc kubenswrapper[5017]: I0129 08:06:28.164779 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-gstsq" Jan 29 08:06:28 crc kubenswrapper[5017]: I0129 08:06:28.167854 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 29 08:06:28 crc kubenswrapper[5017]: I0129 08:06:28.167923 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-94sjn" Jan 29 08:06:28 crc kubenswrapper[5017]: I0129 08:06:28.168038 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 29 08:06:28 crc kubenswrapper[5017]: I0129 08:06:28.191166 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-gstsq"] Jan 29 08:06:28 crc kubenswrapper[5017]: I0129 08:06:28.301242 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4209978a-668e-40a2-80da-5dbd9b790e94-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-gstsq\" (UID: \"4209978a-668e-40a2-80da-5dbd9b790e94\") " pod="openstack/nova-cell0-conductor-db-sync-gstsq" Jan 29 08:06:28 crc kubenswrapper[5017]: I0129 08:06:28.301304 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4209978a-668e-40a2-80da-5dbd9b790e94-config-data\") pod \"nova-cell0-conductor-db-sync-gstsq\" (UID: \"4209978a-668e-40a2-80da-5dbd9b790e94\") " pod="openstack/nova-cell0-conductor-db-sync-gstsq" Jan 29 08:06:28 crc kubenswrapper[5017]: I0129 08:06:28.301454 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4209978a-668e-40a2-80da-5dbd9b790e94-scripts\") pod \"nova-cell0-conductor-db-sync-gstsq\" (UID: \"4209978a-668e-40a2-80da-5dbd9b790e94\") " pod="openstack/nova-cell0-conductor-db-sync-gstsq" Jan 29 08:06:28 crc kubenswrapper[5017]: I0129 08:06:28.301911 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgnhw\" (UniqueName: \"kubernetes.io/projected/4209978a-668e-40a2-80da-5dbd9b790e94-kube-api-access-qgnhw\") pod \"nova-cell0-conductor-db-sync-gstsq\" (UID: \"4209978a-668e-40a2-80da-5dbd9b790e94\") " pod="openstack/nova-cell0-conductor-db-sync-gstsq" Jan 29 08:06:28 crc kubenswrapper[5017]: I0129 08:06:28.403445 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgnhw\" (UniqueName: \"kubernetes.io/projected/4209978a-668e-40a2-80da-5dbd9b790e94-kube-api-access-qgnhw\") pod \"nova-cell0-conductor-db-sync-gstsq\" (UID: \"4209978a-668e-40a2-80da-5dbd9b790e94\") " pod="openstack/nova-cell0-conductor-db-sync-gstsq" Jan 29 08:06:28 crc kubenswrapper[5017]: I0129 08:06:28.403561 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4209978a-668e-40a2-80da-5dbd9b790e94-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-gstsq\" (UID: \"4209978a-668e-40a2-80da-5dbd9b790e94\") " pod="openstack/nova-cell0-conductor-db-sync-gstsq" Jan 29 08:06:28 crc kubenswrapper[5017]: I0129 08:06:28.403602 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4209978a-668e-40a2-80da-5dbd9b790e94-config-data\") pod \"nova-cell0-conductor-db-sync-gstsq\" 
(UID: \"4209978a-668e-40a2-80da-5dbd9b790e94\") " pod="openstack/nova-cell0-conductor-db-sync-gstsq" Jan 29 08:06:28 crc kubenswrapper[5017]: I0129 08:06:28.403624 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4209978a-668e-40a2-80da-5dbd9b790e94-scripts\") pod \"nova-cell0-conductor-db-sync-gstsq\" (UID: \"4209978a-668e-40a2-80da-5dbd9b790e94\") " pod="openstack/nova-cell0-conductor-db-sync-gstsq" Jan 29 08:06:28 crc kubenswrapper[5017]: I0129 08:06:28.410543 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4209978a-668e-40a2-80da-5dbd9b790e94-config-data\") pod \"nova-cell0-conductor-db-sync-gstsq\" (UID: \"4209978a-668e-40a2-80da-5dbd9b790e94\") " pod="openstack/nova-cell0-conductor-db-sync-gstsq" Jan 29 08:06:28 crc kubenswrapper[5017]: I0129 08:06:28.411109 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4209978a-668e-40a2-80da-5dbd9b790e94-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-gstsq\" (UID: \"4209978a-668e-40a2-80da-5dbd9b790e94\") " pod="openstack/nova-cell0-conductor-db-sync-gstsq" Jan 29 08:06:28 crc kubenswrapper[5017]: I0129 08:06:28.412402 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4209978a-668e-40a2-80da-5dbd9b790e94-scripts\") pod \"nova-cell0-conductor-db-sync-gstsq\" (UID: \"4209978a-668e-40a2-80da-5dbd9b790e94\") " pod="openstack/nova-cell0-conductor-db-sync-gstsq" Jan 29 08:06:28 crc kubenswrapper[5017]: I0129 08:06:28.429619 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgnhw\" (UniqueName: \"kubernetes.io/projected/4209978a-668e-40a2-80da-5dbd9b790e94-kube-api-access-qgnhw\") pod \"nova-cell0-conductor-db-sync-gstsq\" (UID: \"4209978a-668e-40a2-80da-5dbd9b790e94\") " pod="openstack/nova-cell0-conductor-db-sync-gstsq" Jan 29 08:06:28 crc kubenswrapper[5017]: I0129 08:06:28.485305 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-gstsq" Jan 29 08:06:28 crc kubenswrapper[5017]: I0129 08:06:28.748070 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-gstsq"] Jan 29 08:06:28 crc kubenswrapper[5017]: W0129 08:06:28.753604 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4209978a_668e_40a2_80da_5dbd9b790e94.slice/crio-bf835362b1ee8b2251689d53fbd1699a62474139fea15abec315c2f8ad43e136 WatchSource:0}: Error finding container bf835362b1ee8b2251689d53fbd1699a62474139fea15abec315c2f8ad43e136: Status 404 returned error can't find the container with id bf835362b1ee8b2251689d53fbd1699a62474139fea15abec315c2f8ad43e136 Jan 29 08:06:29 crc kubenswrapper[5017]: I0129 08:06:29.122071 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-gstsq" event={"ID":"4209978a-668e-40a2-80da-5dbd9b790e94","Type":"ContainerStarted","Data":"e4258c445c5125c41dc2ccb7d359954856dc2e770320b3a9017df8ea7caaacf4"} Jan 29 08:06:29 crc kubenswrapper[5017]: I0129 08:06:29.122612 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-gstsq" event={"ID":"4209978a-668e-40a2-80da-5dbd9b790e94","Type":"ContainerStarted","Data":"bf835362b1ee8b2251689d53fbd1699a62474139fea15abec315c2f8ad43e136"} Jan 29 08:06:29 crc kubenswrapper[5017]: I0129 08:06:29.141399 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-gstsq" podStartSLOduration=1.141367595 podStartE2EDuration="1.141367595s" podCreationTimestamp="2026-01-29 08:06:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:06:29.14079056 +0000 UTC m=+5475.515238190" watchObservedRunningTime="2026-01-29 08:06:29.141367595 +0000 UTC m=+5475.515815205" Jan 29 08:06:35 crc kubenswrapper[5017]: I0129 08:06:35.200511 5017 generic.go:334] "Generic (PLEG): container finished" podID="4209978a-668e-40a2-80da-5dbd9b790e94" containerID="e4258c445c5125c41dc2ccb7d359954856dc2e770320b3a9017df8ea7caaacf4" exitCode=0 Jan 29 08:06:35 crc kubenswrapper[5017]: I0129 08:06:35.201555 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-gstsq" event={"ID":"4209978a-668e-40a2-80da-5dbd9b790e94","Type":"ContainerDied","Data":"e4258c445c5125c41dc2ccb7d359954856dc2e770320b3a9017df8ea7caaacf4"} Jan 29 08:06:36 crc kubenswrapper[5017]: I0129 08:06:36.563296 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-gstsq" Jan 29 08:06:36 crc kubenswrapper[5017]: I0129 08:06:36.691009 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4209978a-668e-40a2-80da-5dbd9b790e94-scripts\") pod \"4209978a-668e-40a2-80da-5dbd9b790e94\" (UID: \"4209978a-668e-40a2-80da-5dbd9b790e94\") " Jan 29 08:06:36 crc kubenswrapper[5017]: I0129 08:06:36.691308 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgnhw\" (UniqueName: \"kubernetes.io/projected/4209978a-668e-40a2-80da-5dbd9b790e94-kube-api-access-qgnhw\") pod \"4209978a-668e-40a2-80da-5dbd9b790e94\" (UID: \"4209978a-668e-40a2-80da-5dbd9b790e94\") " Jan 29 08:06:36 crc kubenswrapper[5017]: I0129 08:06:36.691368 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4209978a-668e-40a2-80da-5dbd9b790e94-combined-ca-bundle\") pod \"4209978a-668e-40a2-80da-5dbd9b790e94\" (UID: \"4209978a-668e-40a2-80da-5dbd9b790e94\") " Jan 29 08:06:36 crc kubenswrapper[5017]: I0129 08:06:36.691562 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4209978a-668e-40a2-80da-5dbd9b790e94-config-data\") pod \"4209978a-668e-40a2-80da-5dbd9b790e94\" (UID: \"4209978a-668e-40a2-80da-5dbd9b790e94\") " Jan 29 08:06:36 crc kubenswrapper[5017]: I0129 08:06:36.699299 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4209978a-668e-40a2-80da-5dbd9b790e94-kube-api-access-qgnhw" (OuterVolumeSpecName: "kube-api-access-qgnhw") pod "4209978a-668e-40a2-80da-5dbd9b790e94" (UID: "4209978a-668e-40a2-80da-5dbd9b790e94"). InnerVolumeSpecName "kube-api-access-qgnhw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:06:36 crc kubenswrapper[5017]: I0129 08:06:36.699686 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4209978a-668e-40a2-80da-5dbd9b790e94-scripts" (OuterVolumeSpecName: "scripts") pod "4209978a-668e-40a2-80da-5dbd9b790e94" (UID: "4209978a-668e-40a2-80da-5dbd9b790e94"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:06:36 crc kubenswrapper[5017]: I0129 08:06:36.721669 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4209978a-668e-40a2-80da-5dbd9b790e94-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4209978a-668e-40a2-80da-5dbd9b790e94" (UID: "4209978a-668e-40a2-80da-5dbd9b790e94"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:06:36 crc kubenswrapper[5017]: I0129 08:06:36.722312 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4209978a-668e-40a2-80da-5dbd9b790e94-config-data" (OuterVolumeSpecName: "config-data") pod "4209978a-668e-40a2-80da-5dbd9b790e94" (UID: "4209978a-668e-40a2-80da-5dbd9b790e94"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:06:36 crc kubenswrapper[5017]: I0129 08:06:36.793721 5017 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4209978a-668e-40a2-80da-5dbd9b790e94-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 08:06:36 crc kubenswrapper[5017]: I0129 08:06:36.793772 5017 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4209978a-668e-40a2-80da-5dbd9b790e94-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 08:06:36 crc kubenswrapper[5017]: I0129 08:06:36.793790 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qgnhw\" (UniqueName: \"kubernetes.io/projected/4209978a-668e-40a2-80da-5dbd9b790e94-kube-api-access-qgnhw\") on node \"crc\" DevicePath \"\"" Jan 29 08:06:36 crc kubenswrapper[5017]: I0129 08:06:36.793801 5017 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4209978a-668e-40a2-80da-5dbd9b790e94-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:06:37 crc kubenswrapper[5017]: I0129 08:06:37.223757 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-gstsq" event={"ID":"4209978a-668e-40a2-80da-5dbd9b790e94","Type":"ContainerDied","Data":"bf835362b1ee8b2251689d53fbd1699a62474139fea15abec315c2f8ad43e136"} Jan 29 08:06:37 crc kubenswrapper[5017]: I0129 08:06:37.224829 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf835362b1ee8b2251689d53fbd1699a62474139fea15abec315c2f8ad43e136" Jan 29 08:06:37 crc kubenswrapper[5017]: I0129 08:06:37.224450 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-gstsq" Jan 29 08:06:37 crc kubenswrapper[5017]: I0129 08:06:37.334222 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 29 08:06:37 crc kubenswrapper[5017]: E0129 08:06:37.334580 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4209978a-668e-40a2-80da-5dbd9b790e94" containerName="nova-cell0-conductor-db-sync" Jan 29 08:06:37 crc kubenswrapper[5017]: I0129 08:06:37.334598 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="4209978a-668e-40a2-80da-5dbd9b790e94" containerName="nova-cell0-conductor-db-sync" Jan 29 08:06:37 crc kubenswrapper[5017]: I0129 08:06:37.334869 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="4209978a-668e-40a2-80da-5dbd9b790e94" containerName="nova-cell0-conductor-db-sync" Jan 29 08:06:37 crc kubenswrapper[5017]: I0129 08:06:37.341430 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 29 08:06:37 crc kubenswrapper[5017]: I0129 08:06:37.349224 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 29 08:06:37 crc kubenswrapper[5017]: I0129 08:06:37.349269 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-94sjn" Jan 29 08:06:37 crc kubenswrapper[5017]: I0129 08:06:37.350170 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 29 08:06:37 crc kubenswrapper[5017]: I0129 08:06:37.515585 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqpkz\" (UniqueName: \"kubernetes.io/projected/6aa9495b-d470-41ce-b861-c410fc4e8aaf-kube-api-access-dqpkz\") pod \"nova-cell0-conductor-0\" (UID: \"6aa9495b-d470-41ce-b861-c410fc4e8aaf\") " pod="openstack/nova-cell0-conductor-0" Jan 29 08:06:37 crc kubenswrapper[5017]: I0129 08:06:37.515692 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6aa9495b-d470-41ce-b861-c410fc4e8aaf-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"6aa9495b-d470-41ce-b861-c410fc4e8aaf\") " pod="openstack/nova-cell0-conductor-0" Jan 29 08:06:37 crc kubenswrapper[5017]: I0129 08:06:37.515846 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6aa9495b-d470-41ce-b861-c410fc4e8aaf-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"6aa9495b-d470-41ce-b861-c410fc4e8aaf\") " pod="openstack/nova-cell0-conductor-0" Jan 29 08:06:37 crc kubenswrapper[5017]: I0129 08:06:37.618378 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6aa9495b-d470-41ce-b861-c410fc4e8aaf-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"6aa9495b-d470-41ce-b861-c410fc4e8aaf\") " pod="openstack/nova-cell0-conductor-0" Jan 29 08:06:37 crc kubenswrapper[5017]: I0129 08:06:37.618488 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqpkz\" (UniqueName: \"kubernetes.io/projected/6aa9495b-d470-41ce-b861-c410fc4e8aaf-kube-api-access-dqpkz\") pod \"nova-cell0-conductor-0\" (UID: \"6aa9495b-d470-41ce-b861-c410fc4e8aaf\") " pod="openstack/nova-cell0-conductor-0" Jan 29 08:06:37 crc kubenswrapper[5017]: I0129 08:06:37.618543 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6aa9495b-d470-41ce-b861-c410fc4e8aaf-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"6aa9495b-d470-41ce-b861-c410fc4e8aaf\") " pod="openstack/nova-cell0-conductor-0" Jan 29 08:06:37 crc kubenswrapper[5017]: I0129 08:06:37.625053 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6aa9495b-d470-41ce-b861-c410fc4e8aaf-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"6aa9495b-d470-41ce-b861-c410fc4e8aaf\") " pod="openstack/nova-cell0-conductor-0" Jan 29 08:06:37 crc kubenswrapper[5017]: I0129 08:06:37.627038 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6aa9495b-d470-41ce-b861-c410fc4e8aaf-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" 
(UID: \"6aa9495b-d470-41ce-b861-c410fc4e8aaf\") " pod="openstack/nova-cell0-conductor-0" Jan 29 08:06:37 crc kubenswrapper[5017]: I0129 08:06:37.647122 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqpkz\" (UniqueName: \"kubernetes.io/projected/6aa9495b-d470-41ce-b861-c410fc4e8aaf-kube-api-access-dqpkz\") pod \"nova-cell0-conductor-0\" (UID: \"6aa9495b-d470-41ce-b861-c410fc4e8aaf\") " pod="openstack/nova-cell0-conductor-0" Jan 29 08:06:37 crc kubenswrapper[5017]: I0129 08:06:37.660089 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 29 08:06:38 crc kubenswrapper[5017]: I0129 08:06:38.108213 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 29 08:06:38 crc kubenswrapper[5017]: I0129 08:06:38.236575 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"6aa9495b-d470-41ce-b861-c410fc4e8aaf","Type":"ContainerStarted","Data":"88105743287daaeaf51858372699da35148395ea781beeacd8d1c1fe5e829d6a"} Jan 29 08:06:39 crc kubenswrapper[5017]: I0129 08:06:39.250266 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"6aa9495b-d470-41ce-b861-c410fc4e8aaf","Type":"ContainerStarted","Data":"d74781fe0dbda69dd5aeefa4658fea02c1b713d2f990afdde736a5ec00217985"} Jan 29 08:06:39 crc kubenswrapper[5017]: I0129 08:06:39.250442 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 29 08:06:39 crc kubenswrapper[5017]: I0129 08:06:39.280365 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.280339438 podStartE2EDuration="2.280339438s" podCreationTimestamp="2026-01-29 08:06:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:06:39.272851378 +0000 UTC m=+5485.647299008" watchObservedRunningTime="2026-01-29 08:06:39.280339438 +0000 UTC m=+5485.654787058" Jan 29 08:06:47 crc kubenswrapper[5017]: I0129 08:06:47.687124 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 29 08:06:48 crc kubenswrapper[5017]: I0129 08:06:48.142312 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-pld5z"] Jan 29 08:06:48 crc kubenswrapper[5017]: I0129 08:06:48.144290 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-pld5z" Jan 29 08:06:48 crc kubenswrapper[5017]: I0129 08:06:48.152536 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 29 08:06:48 crc kubenswrapper[5017]: I0129 08:06:48.152884 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 29 08:06:48 crc kubenswrapper[5017]: I0129 08:06:48.163511 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-pld5z"] Jan 29 08:06:48 crc kubenswrapper[5017]: I0129 08:06:48.257351 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04718292-6393-48d0-9428-127033978d5a-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-pld5z\" (UID: \"04718292-6393-48d0-9428-127033978d5a\") " pod="openstack/nova-cell0-cell-mapping-pld5z" Jan 29 08:06:48 crc kubenswrapper[5017]: I0129 08:06:48.257443 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04718292-6393-48d0-9428-127033978d5a-scripts\") pod \"nova-cell0-cell-mapping-pld5z\" (UID: \"04718292-6393-48d0-9428-127033978d5a\") " pod="openstack/nova-cell0-cell-mapping-pld5z" Jan 29 08:06:48 crc kubenswrapper[5017]: I0129 08:06:48.257480 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhxlw\" (UniqueName: \"kubernetes.io/projected/04718292-6393-48d0-9428-127033978d5a-kube-api-access-zhxlw\") pod \"nova-cell0-cell-mapping-pld5z\" (UID: \"04718292-6393-48d0-9428-127033978d5a\") " pod="openstack/nova-cell0-cell-mapping-pld5z" Jan 29 08:06:48 crc kubenswrapper[5017]: I0129 08:06:48.257535 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04718292-6393-48d0-9428-127033978d5a-config-data\") pod \"nova-cell0-cell-mapping-pld5z\" (UID: \"04718292-6393-48d0-9428-127033978d5a\") " pod="openstack/nova-cell0-cell-mapping-pld5z" Jan 29 08:06:48 crc kubenswrapper[5017]: I0129 08:06:48.282046 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 29 08:06:48 crc kubenswrapper[5017]: I0129 08:06:48.287453 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 08:06:48 crc kubenswrapper[5017]: I0129 08:06:48.308263 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 29 08:06:48 crc kubenswrapper[5017]: I0129 08:06:48.315850 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 29 08:06:48 crc kubenswrapper[5017]: I0129 08:06:48.357633 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 08:06:48 crc kubenswrapper[5017]: I0129 08:06:48.359231 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 08:06:48 crc kubenswrapper[5017]: I0129 08:06:48.359719 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhxlw\" (UniqueName: \"kubernetes.io/projected/04718292-6393-48d0-9428-127033978d5a-kube-api-access-zhxlw\") pod \"nova-cell0-cell-mapping-pld5z\" (UID: \"04718292-6393-48d0-9428-127033978d5a\") " pod="openstack/nova-cell0-cell-mapping-pld5z" Jan 29 08:06:48 crc kubenswrapper[5017]: I0129 08:06:48.359818 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6464cf43-5d37-42ba-b987-79124757db7d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6464cf43-5d37-42ba-b987-79124757db7d\") " pod="openstack/nova-api-0" Jan 29 08:06:48 crc kubenswrapper[5017]: I0129 08:06:48.359858 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04718292-6393-48d0-9428-127033978d5a-config-data\") pod \"nova-cell0-cell-mapping-pld5z\" (UID: \"04718292-6393-48d0-9428-127033978d5a\") " pod="openstack/nova-cell0-cell-mapping-pld5z" Jan 29 08:06:48 crc kubenswrapper[5017]: I0129 08:06:48.359928 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvwh5\" (UniqueName: \"kubernetes.io/projected/6464cf43-5d37-42ba-b987-79124757db7d-kube-api-access-pvwh5\") pod \"nova-api-0\" (UID: \"6464cf43-5d37-42ba-b987-79124757db7d\") " pod="openstack/nova-api-0" Jan 29 08:06:48 crc kubenswrapper[5017]: I0129 08:06:48.360041 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04718292-6393-48d0-9428-127033978d5a-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-pld5z\" (UID: \"04718292-6393-48d0-9428-127033978d5a\") " pod="openstack/nova-cell0-cell-mapping-pld5z" Jan 29 08:06:48 crc kubenswrapper[5017]: I0129 08:06:48.360076 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6464cf43-5d37-42ba-b987-79124757db7d-logs\") pod \"nova-api-0\" (UID: \"6464cf43-5d37-42ba-b987-79124757db7d\") " pod="openstack/nova-api-0" Jan 29 08:06:48 crc kubenswrapper[5017]: I0129 08:06:48.360097 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6464cf43-5d37-42ba-b987-79124757db7d-config-data\") pod \"nova-api-0\" (UID: \"6464cf43-5d37-42ba-b987-79124757db7d\") " pod="openstack/nova-api-0" Jan 29 08:06:48 crc kubenswrapper[5017]: I0129 08:06:48.360129 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04718292-6393-48d0-9428-127033978d5a-scripts\") pod \"nova-cell0-cell-mapping-pld5z\" (UID: \"04718292-6393-48d0-9428-127033978d5a\") " pod="openstack/nova-cell0-cell-mapping-pld5z" Jan 29 08:06:48 crc kubenswrapper[5017]: I0129 08:06:48.363679 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 29 08:06:48 crc kubenswrapper[5017]: I0129 08:06:48.367194 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 08:06:48 crc kubenswrapper[5017]: I0129 08:06:48.385810 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/04718292-6393-48d0-9428-127033978d5a-scripts\") pod \"nova-cell0-cell-mapping-pld5z\" (UID: \"04718292-6393-48d0-9428-127033978d5a\") " pod="openstack/nova-cell0-cell-mapping-pld5z" Jan 29 08:06:48 crc kubenswrapper[5017]: I0129 08:06:48.386394 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04718292-6393-48d0-9428-127033978d5a-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-pld5z\" (UID: \"04718292-6393-48d0-9428-127033978d5a\") " pod="openstack/nova-cell0-cell-mapping-pld5z" Jan 29 08:06:48 crc kubenswrapper[5017]: I0129 08:06:48.397944 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhxlw\" (UniqueName: \"kubernetes.io/projected/04718292-6393-48d0-9428-127033978d5a-kube-api-access-zhxlw\") pod \"nova-cell0-cell-mapping-pld5z\" (UID: \"04718292-6393-48d0-9428-127033978d5a\") " pod="openstack/nova-cell0-cell-mapping-pld5z" Jan 29 08:06:48 crc kubenswrapper[5017]: I0129 08:06:48.398457 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04718292-6393-48d0-9428-127033978d5a-config-data\") pod \"nova-cell0-cell-mapping-pld5z\" (UID: \"04718292-6393-48d0-9428-127033978d5a\") " pod="openstack/nova-cell0-cell-mapping-pld5z" Jan 29 08:06:48 crc kubenswrapper[5017]: I0129 08:06:48.461475 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2096ff6e-e6b6-444c-9f4f-c7737907d58a-config-data\") pod \"nova-scheduler-0\" (UID: \"2096ff6e-e6b6-444c-9f4f-c7737907d58a\") " pod="openstack/nova-scheduler-0" Jan 29 08:06:48 crc kubenswrapper[5017]: I0129 08:06:48.461597 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6464cf43-5d37-42ba-b987-79124757db7d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6464cf43-5d37-42ba-b987-79124757db7d\") " pod="openstack/nova-api-0" Jan 29 08:06:48 crc kubenswrapper[5017]: I0129 08:06:48.461622 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2096ff6e-e6b6-444c-9f4f-c7737907d58a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"2096ff6e-e6b6-444c-9f4f-c7737907d58a\") " pod="openstack/nova-scheduler-0" Jan 29 08:06:48 crc kubenswrapper[5017]: I0129 08:06:48.461750 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvwh5\" (UniqueName: \"kubernetes.io/projected/6464cf43-5d37-42ba-b987-79124757db7d-kube-api-access-pvwh5\") pod \"nova-api-0\" (UID: \"6464cf43-5d37-42ba-b987-79124757db7d\") " pod="openstack/nova-api-0" Jan 29 08:06:48 crc kubenswrapper[5017]: I0129 08:06:48.461830 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6464cf43-5d37-42ba-b987-79124757db7d-config-data\") pod \"nova-api-0\" (UID: \"6464cf43-5d37-42ba-b987-79124757db7d\") " pod="openstack/nova-api-0" Jan 29 08:06:48 crc kubenswrapper[5017]: I0129 08:06:48.461849 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6464cf43-5d37-42ba-b987-79124757db7d-logs\") pod \"nova-api-0\" (UID: \"6464cf43-5d37-42ba-b987-79124757db7d\") " pod="openstack/nova-api-0" 
Jan 29 08:06:48 crc kubenswrapper[5017]: I0129 08:06:48.461878 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hg7z8\" (UniqueName: \"kubernetes.io/projected/2096ff6e-e6b6-444c-9f4f-c7737907d58a-kube-api-access-hg7z8\") pod \"nova-scheduler-0\" (UID: \"2096ff6e-e6b6-444c-9f4f-c7737907d58a\") " pod="openstack/nova-scheduler-0" Jan 29 08:06:48 crc kubenswrapper[5017]: I0129 08:06:48.466127 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6464cf43-5d37-42ba-b987-79124757db7d-logs\") pod \"nova-api-0\" (UID: \"6464cf43-5d37-42ba-b987-79124757db7d\") " pod="openstack/nova-api-0" Jan 29 08:06:48 crc kubenswrapper[5017]: I0129 08:06:48.467701 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-pld5z" Jan 29 08:06:48 crc kubenswrapper[5017]: I0129 08:06:48.479634 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6464cf43-5d37-42ba-b987-79124757db7d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6464cf43-5d37-42ba-b987-79124757db7d\") " pod="openstack/nova-api-0" Jan 29 08:06:48 crc kubenswrapper[5017]: I0129 08:06:48.483094 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 08:06:48 crc kubenswrapper[5017]: I0129 08:06:48.484482 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6464cf43-5d37-42ba-b987-79124757db7d-config-data\") pod \"nova-api-0\" (UID: \"6464cf43-5d37-42ba-b987-79124757db7d\") " pod="openstack/nova-api-0" Jan 29 08:06:48 crc kubenswrapper[5017]: I0129 08:06:48.487842 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 29 08:06:48 crc kubenswrapper[5017]: I0129 08:06:48.492577 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 29 08:06:48 crc kubenswrapper[5017]: I0129 08:06:48.512164 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvwh5\" (UniqueName: \"kubernetes.io/projected/6464cf43-5d37-42ba-b987-79124757db7d-kube-api-access-pvwh5\") pod \"nova-api-0\" (UID: \"6464cf43-5d37-42ba-b987-79124757db7d\") " pod="openstack/nova-api-0" Jan 29 08:06:48 crc kubenswrapper[5017]: I0129 08:06:48.515074 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 08:06:48 crc kubenswrapper[5017]: I0129 08:06:48.554316 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 29 08:06:48 crc kubenswrapper[5017]: I0129 08:06:48.556360 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 08:06:48 crc kubenswrapper[5017]: I0129 08:06:48.558515 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 29 08:06:48 crc kubenswrapper[5017]: I0129 08:06:48.563945 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2096ff6e-e6b6-444c-9f4f-c7737907d58a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"2096ff6e-e6b6-444c-9f4f-c7737907d58a\") " pod="openstack/nova-scheduler-0" Jan 29 08:06:48 crc kubenswrapper[5017]: I0129 08:06:48.564081 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e102c94-f461-4486-b5d9-a304b48eaad2-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6e102c94-f461-4486-b5d9-a304b48eaad2\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 08:06:48 crc kubenswrapper[5017]: I0129 08:06:48.564108 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmj6h\" (UniqueName: \"kubernetes.io/projected/6e102c94-f461-4486-b5d9-a304b48eaad2-kube-api-access-kmj6h\") pod \"nova-cell1-novncproxy-0\" (UID: \"6e102c94-f461-4486-b5d9-a304b48eaad2\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 08:06:48 crc kubenswrapper[5017]: I0129 08:06:48.564135 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e102c94-f461-4486-b5d9-a304b48eaad2-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"6e102c94-f461-4486-b5d9-a304b48eaad2\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 08:06:48 crc kubenswrapper[5017]: I0129 08:06:48.564211 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hg7z8\" (UniqueName: \"kubernetes.io/projected/2096ff6e-e6b6-444c-9f4f-c7737907d58a-kube-api-access-hg7z8\") pod \"nova-scheduler-0\" (UID: \"2096ff6e-e6b6-444c-9f4f-c7737907d58a\") " pod="openstack/nova-scheduler-0" Jan 29 08:06:48 crc kubenswrapper[5017]: I0129 08:06:48.564255 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2096ff6e-e6b6-444c-9f4f-c7737907d58a-config-data\") pod \"nova-scheduler-0\" (UID: \"2096ff6e-e6b6-444c-9f4f-c7737907d58a\") " pod="openstack/nova-scheduler-0" Jan 29 08:06:48 crc kubenswrapper[5017]: I0129 08:06:48.572796 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2096ff6e-e6b6-444c-9f4f-c7737907d58a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"2096ff6e-e6b6-444c-9f4f-c7737907d58a\") " pod="openstack/nova-scheduler-0" Jan 29 08:06:48 crc kubenswrapper[5017]: I0129 08:06:48.575720 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2096ff6e-e6b6-444c-9f4f-c7737907d58a-config-data\") pod \"nova-scheduler-0\" (UID: \"2096ff6e-e6b6-444c-9f4f-c7737907d58a\") " pod="openstack/nova-scheduler-0" Jan 29 08:06:48 crc kubenswrapper[5017]: I0129 08:06:48.593716 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 08:06:48 crc kubenswrapper[5017]: I0129 08:06:48.621716 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 29 08:06:48 crc kubenswrapper[5017]: I0129 08:06:48.627611 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hg7z8\" (UniqueName: \"kubernetes.io/projected/2096ff6e-e6b6-444c-9f4f-c7737907d58a-kube-api-access-hg7z8\") pod \"nova-scheduler-0\" (UID: \"2096ff6e-e6b6-444c-9f4f-c7737907d58a\") " pod="openstack/nova-scheduler-0" Jan 29 08:06:48 crc kubenswrapper[5017]: I0129 08:06:48.669352 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/850df674-c042-4147-b2dc-1dbae9c7a4b4-logs\") pod \"nova-metadata-0\" (UID: \"850df674-c042-4147-b2dc-1dbae9c7a4b4\") " pod="openstack/nova-metadata-0" Jan 29 08:06:48 crc kubenswrapper[5017]: I0129 08:06:48.669721 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/850df674-c042-4147-b2dc-1dbae9c7a4b4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"850df674-c042-4147-b2dc-1dbae9c7a4b4\") " pod="openstack/nova-metadata-0" Jan 29 08:06:48 crc kubenswrapper[5017]: I0129 08:06:48.669848 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e102c94-f461-4486-b5d9-a304b48eaad2-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6e102c94-f461-4486-b5d9-a304b48eaad2\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 08:06:48 crc kubenswrapper[5017]: I0129 08:06:48.669940 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmj6h\" (UniqueName: \"kubernetes.io/projected/6e102c94-f461-4486-b5d9-a304b48eaad2-kube-api-access-kmj6h\") pod \"nova-cell1-novncproxy-0\" (UID: \"6e102c94-f461-4486-b5d9-a304b48eaad2\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 08:06:48 crc kubenswrapper[5017]: I0129 08:06:48.670044 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e102c94-f461-4486-b5d9-a304b48eaad2-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"6e102c94-f461-4486-b5d9-a304b48eaad2\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 08:06:48 crc kubenswrapper[5017]: I0129 08:06:48.670250 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbmlp\" (UniqueName: \"kubernetes.io/projected/850df674-c042-4147-b2dc-1dbae9c7a4b4-kube-api-access-cbmlp\") pod \"nova-metadata-0\" (UID: \"850df674-c042-4147-b2dc-1dbae9c7a4b4\") " pod="openstack/nova-metadata-0" Jan 29 08:06:48 crc kubenswrapper[5017]: I0129 08:06:48.670359 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/850df674-c042-4147-b2dc-1dbae9c7a4b4-config-data\") pod \"nova-metadata-0\" (UID: \"850df674-c042-4147-b2dc-1dbae9c7a4b4\") " pod="openstack/nova-metadata-0" Jan 29 08:06:48 crc kubenswrapper[5017]: I0129 08:06:48.679100 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 08:06:48 crc kubenswrapper[5017]: I0129 08:06:48.681203 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e102c94-f461-4486-b5d9-a304b48eaad2-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6e102c94-f461-4486-b5d9-a304b48eaad2\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 08:06:48 crc kubenswrapper[5017]: I0129 08:06:48.700829 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e102c94-f461-4486-b5d9-a304b48eaad2-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"6e102c94-f461-4486-b5d9-a304b48eaad2\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 08:06:48 crc kubenswrapper[5017]: I0129 08:06:48.721804 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmj6h\" (UniqueName: \"kubernetes.io/projected/6e102c94-f461-4486-b5d9-a304b48eaad2-kube-api-access-kmj6h\") pod \"nova-cell1-novncproxy-0\" (UID: \"6e102c94-f461-4486-b5d9-a304b48eaad2\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 08:06:48 crc kubenswrapper[5017]: I0129 08:06:48.775331 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/850df674-c042-4147-b2dc-1dbae9c7a4b4-logs\") pod \"nova-metadata-0\" (UID: \"850df674-c042-4147-b2dc-1dbae9c7a4b4\") " pod="openstack/nova-metadata-0" Jan 29 08:06:48 crc kubenswrapper[5017]: I0129 08:06:48.775396 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/850df674-c042-4147-b2dc-1dbae9c7a4b4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"850df674-c042-4147-b2dc-1dbae9c7a4b4\") " pod="openstack/nova-metadata-0" Jan 29 08:06:48 crc kubenswrapper[5017]: I0129 08:06:48.775536 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbmlp\" (UniqueName: \"kubernetes.io/projected/850df674-c042-4147-b2dc-1dbae9c7a4b4-kube-api-access-cbmlp\") pod \"nova-metadata-0\" (UID: \"850df674-c042-4147-b2dc-1dbae9c7a4b4\") " pod="openstack/nova-metadata-0" Jan 29 08:06:48 crc kubenswrapper[5017]: I0129 08:06:48.775570 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/850df674-c042-4147-b2dc-1dbae9c7a4b4-config-data\") pod \"nova-metadata-0\" (UID: \"850df674-c042-4147-b2dc-1dbae9c7a4b4\") " pod="openstack/nova-metadata-0" Jan 29 08:06:48 crc kubenswrapper[5017]: I0129 08:06:48.778305 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/850df674-c042-4147-b2dc-1dbae9c7a4b4-logs\") pod \"nova-metadata-0\" (UID: \"850df674-c042-4147-b2dc-1dbae9c7a4b4\") " pod="openstack/nova-metadata-0" Jan 29 08:06:48 crc kubenswrapper[5017]: I0129 08:06:48.783451 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/850df674-c042-4147-b2dc-1dbae9c7a4b4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"850df674-c042-4147-b2dc-1dbae9c7a4b4\") " pod="openstack/nova-metadata-0" Jan 29 08:06:48 crc kubenswrapper[5017]: I0129 08:06:48.786418 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/850df674-c042-4147-b2dc-1dbae9c7a4b4-config-data\") pod \"nova-metadata-0\" (UID: \"850df674-c042-4147-b2dc-1dbae9c7a4b4\") " pod="openstack/nova-metadata-0" Jan 29 08:06:48 crc kubenswrapper[5017]: I0129 08:06:48.827501 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbmlp\" (UniqueName: \"kubernetes.io/projected/850df674-c042-4147-b2dc-1dbae9c7a4b4-kube-api-access-cbmlp\") pod \"nova-metadata-0\" (UID: \"850df674-c042-4147-b2dc-1dbae9c7a4b4\") " pod="openstack/nova-metadata-0" Jan 29 08:06:48 crc kubenswrapper[5017]: I0129 08:06:48.829409 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6f46f5c5cf-27wvj"] Jan 29 08:06:48 crc kubenswrapper[5017]: I0129 08:06:48.831270 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f46f5c5cf-27wvj" Jan 29 08:06:48 crc kubenswrapper[5017]: I0129 08:06:48.851695 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f46f5c5cf-27wvj"] Jan 29 08:06:48 crc kubenswrapper[5017]: I0129 08:06:48.879067 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0c5d06cc-0a4b-41cd-bf60-ff61ac5027fa-ovsdbserver-sb\") pod \"dnsmasq-dns-6f46f5c5cf-27wvj\" (UID: \"0c5d06cc-0a4b-41cd-bf60-ff61ac5027fa\") " pod="openstack/dnsmasq-dns-6f46f5c5cf-27wvj" Jan 29 08:06:48 crc kubenswrapper[5017]: I0129 08:06:48.879133 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxshv\" (UniqueName: \"kubernetes.io/projected/0c5d06cc-0a4b-41cd-bf60-ff61ac5027fa-kube-api-access-zxshv\") pod \"dnsmasq-dns-6f46f5c5cf-27wvj\" (UID: \"0c5d06cc-0a4b-41cd-bf60-ff61ac5027fa\") " pod="openstack/dnsmasq-dns-6f46f5c5cf-27wvj" Jan 29 08:06:48 crc kubenswrapper[5017]: I0129 08:06:48.879217 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0c5d06cc-0a4b-41cd-bf60-ff61ac5027fa-ovsdbserver-nb\") pod \"dnsmasq-dns-6f46f5c5cf-27wvj\" (UID: \"0c5d06cc-0a4b-41cd-bf60-ff61ac5027fa\") " pod="openstack/dnsmasq-dns-6f46f5c5cf-27wvj" Jan 29 08:06:48 crc kubenswrapper[5017]: I0129 08:06:48.879269 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0c5d06cc-0a4b-41cd-bf60-ff61ac5027fa-dns-svc\") pod \"dnsmasq-dns-6f46f5c5cf-27wvj\" (UID: \"0c5d06cc-0a4b-41cd-bf60-ff61ac5027fa\") " pod="openstack/dnsmasq-dns-6f46f5c5cf-27wvj" Jan 29 08:06:48 crc kubenswrapper[5017]: I0129 08:06:48.879301 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c5d06cc-0a4b-41cd-bf60-ff61ac5027fa-config\") pod \"dnsmasq-dns-6f46f5c5cf-27wvj\" (UID: \"0c5d06cc-0a4b-41cd-bf60-ff61ac5027fa\") " pod="openstack/dnsmasq-dns-6f46f5c5cf-27wvj" Jan 29 08:06:48 crc kubenswrapper[5017]: I0129 08:06:48.973931 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 29 08:06:48 crc kubenswrapper[5017]: I0129 08:06:48.985512 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0c5d06cc-0a4b-41cd-bf60-ff61ac5027fa-ovsdbserver-nb\") pod \"dnsmasq-dns-6f46f5c5cf-27wvj\" (UID: \"0c5d06cc-0a4b-41cd-bf60-ff61ac5027fa\") " pod="openstack/dnsmasq-dns-6f46f5c5cf-27wvj" Jan 29 08:06:48 crc kubenswrapper[5017]: I0129 08:06:48.985605 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0c5d06cc-0a4b-41cd-bf60-ff61ac5027fa-dns-svc\") pod \"dnsmasq-dns-6f46f5c5cf-27wvj\" (UID: \"0c5d06cc-0a4b-41cd-bf60-ff61ac5027fa\") " pod="openstack/dnsmasq-dns-6f46f5c5cf-27wvj" Jan 29 08:06:48 crc kubenswrapper[5017]: I0129 08:06:48.985640 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c5d06cc-0a4b-41cd-bf60-ff61ac5027fa-config\") pod \"dnsmasq-dns-6f46f5c5cf-27wvj\" (UID: \"0c5d06cc-0a4b-41cd-bf60-ff61ac5027fa\") " pod="openstack/dnsmasq-dns-6f46f5c5cf-27wvj" Jan 29 08:06:48 crc kubenswrapper[5017]: I0129 08:06:48.985681 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0c5d06cc-0a4b-41cd-bf60-ff61ac5027fa-ovsdbserver-sb\") pod \"dnsmasq-dns-6f46f5c5cf-27wvj\" (UID: \"0c5d06cc-0a4b-41cd-bf60-ff61ac5027fa\") " pod="openstack/dnsmasq-dns-6f46f5c5cf-27wvj" Jan 29 08:06:48 crc kubenswrapper[5017]: I0129 08:06:48.985716 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxshv\" (UniqueName: \"kubernetes.io/projected/0c5d06cc-0a4b-41cd-bf60-ff61ac5027fa-kube-api-access-zxshv\") pod \"dnsmasq-dns-6f46f5c5cf-27wvj\" (UID: \"0c5d06cc-0a4b-41cd-bf60-ff61ac5027fa\") " pod="openstack/dnsmasq-dns-6f46f5c5cf-27wvj" Jan 29 08:06:48 crc kubenswrapper[5017]: I0129 08:06:48.987196 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0c5d06cc-0a4b-41cd-bf60-ff61ac5027fa-ovsdbserver-nb\") pod \"dnsmasq-dns-6f46f5c5cf-27wvj\" (UID: \"0c5d06cc-0a4b-41cd-bf60-ff61ac5027fa\") " pod="openstack/dnsmasq-dns-6f46f5c5cf-27wvj" Jan 29 08:06:48 crc kubenswrapper[5017]: I0129 08:06:48.987863 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0c5d06cc-0a4b-41cd-bf60-ff61ac5027fa-dns-svc\") pod \"dnsmasq-dns-6f46f5c5cf-27wvj\" (UID: \"0c5d06cc-0a4b-41cd-bf60-ff61ac5027fa\") " pod="openstack/dnsmasq-dns-6f46f5c5cf-27wvj" Jan 29 08:06:48 crc kubenswrapper[5017]: I0129 08:06:48.993415 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c5d06cc-0a4b-41cd-bf60-ff61ac5027fa-config\") pod \"dnsmasq-dns-6f46f5c5cf-27wvj\" (UID: \"0c5d06cc-0a4b-41cd-bf60-ff61ac5027fa\") " pod="openstack/dnsmasq-dns-6f46f5c5cf-27wvj" Jan 29 08:06:48 crc kubenswrapper[5017]: I0129 08:06:48.994079 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0c5d06cc-0a4b-41cd-bf60-ff61ac5027fa-ovsdbserver-sb\") pod \"dnsmasq-dns-6f46f5c5cf-27wvj\" (UID: \"0c5d06cc-0a4b-41cd-bf60-ff61ac5027fa\") " pod="openstack/dnsmasq-dns-6f46f5c5cf-27wvj" Jan 29 08:06:49 crc kubenswrapper[5017]: I0129 08:06:49.007659 
5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 08:06:49 crc kubenswrapper[5017]: I0129 08:06:49.048071 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxshv\" (UniqueName: \"kubernetes.io/projected/0c5d06cc-0a4b-41cd-bf60-ff61ac5027fa-kube-api-access-zxshv\") pod \"dnsmasq-dns-6f46f5c5cf-27wvj\" (UID: \"0c5d06cc-0a4b-41cd-bf60-ff61ac5027fa\") " pod="openstack/dnsmasq-dns-6f46f5c5cf-27wvj" Jan 29 08:06:49 crc kubenswrapper[5017]: I0129 08:06:49.252818 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f46f5c5cf-27wvj" Jan 29 08:06:49 crc kubenswrapper[5017]: I0129 08:06:49.352825 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 29 08:06:49 crc kubenswrapper[5017]: I0129 08:06:49.515981 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-pld5z"] Jan 29 08:06:49 crc kubenswrapper[5017]: I0129 08:06:49.779652 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 08:06:49 crc kubenswrapper[5017]: W0129 08:06:49.791866 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod850df674_c042_4147_b2dc_1dbae9c7a4b4.slice/crio-ad8f9864082151bd55bdc03fac34770ed24e2c1ea9fd767263acbf15f109a91d WatchSource:0}: Error finding container ad8f9864082151bd55bdc03fac34770ed24e2c1ea9fd767263acbf15f109a91d: Status 404 returned error can't find the container with id ad8f9864082151bd55bdc03fac34770ed24e2c1ea9fd767263acbf15f109a91d Jan 29 08:06:49 crc kubenswrapper[5017]: I0129 08:06:49.843676 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 08:06:49 crc kubenswrapper[5017]: I0129 08:06:49.889189 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-fbgqf"] Jan 29 08:06:49 crc kubenswrapper[5017]: I0129 08:06:49.907287 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-fbgqf" Jan 29 08:06:49 crc kubenswrapper[5017]: I0129 08:06:49.919143 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-fbgqf"] Jan 29 08:06:49 crc kubenswrapper[5017]: I0129 08:06:49.924412 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 29 08:06:49 crc kubenswrapper[5017]: I0129 08:06:49.925027 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 29 08:06:49 crc kubenswrapper[5017]: I0129 08:06:49.940125 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 08:06:50 crc kubenswrapper[5017]: I0129 08:06:50.080344 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ec216ce-93c5-4f31-8b8d-c05cfe023664-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-fbgqf\" (UID: \"4ec216ce-93c5-4f31-8b8d-c05cfe023664\") " pod="openstack/nova-cell1-conductor-db-sync-fbgqf" Jan 29 08:06:50 crc kubenswrapper[5017]: I0129 08:06:50.080454 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ec216ce-93c5-4f31-8b8d-c05cfe023664-config-data\") pod \"nova-cell1-conductor-db-sync-fbgqf\" (UID: \"4ec216ce-93c5-4f31-8b8d-c05cfe023664\") " pod="openstack/nova-cell1-conductor-db-sync-fbgqf" Jan 29 08:06:50 crc kubenswrapper[5017]: I0129 08:06:50.080495 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4ec216ce-93c5-4f31-8b8d-c05cfe023664-scripts\") pod \"nova-cell1-conductor-db-sync-fbgqf\" (UID: \"4ec216ce-93c5-4f31-8b8d-c05cfe023664\") " pod="openstack/nova-cell1-conductor-db-sync-fbgqf" Jan 29 08:06:50 crc kubenswrapper[5017]: I0129 08:06:50.080535 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4g62\" (UniqueName: \"kubernetes.io/projected/4ec216ce-93c5-4f31-8b8d-c05cfe023664-kube-api-access-k4g62\") pod \"nova-cell1-conductor-db-sync-fbgqf\" (UID: \"4ec216ce-93c5-4f31-8b8d-c05cfe023664\") " pod="openstack/nova-cell1-conductor-db-sync-fbgqf" Jan 29 08:06:50 crc kubenswrapper[5017]: I0129 08:06:50.182069 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ec216ce-93c5-4f31-8b8d-c05cfe023664-config-data\") pod \"nova-cell1-conductor-db-sync-fbgqf\" (UID: \"4ec216ce-93c5-4f31-8b8d-c05cfe023664\") " pod="openstack/nova-cell1-conductor-db-sync-fbgqf" Jan 29 08:06:50 crc kubenswrapper[5017]: I0129 08:06:50.182529 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4ec216ce-93c5-4f31-8b8d-c05cfe023664-scripts\") pod \"nova-cell1-conductor-db-sync-fbgqf\" (UID: \"4ec216ce-93c5-4f31-8b8d-c05cfe023664\") " pod="openstack/nova-cell1-conductor-db-sync-fbgqf" Jan 29 08:06:50 crc kubenswrapper[5017]: I0129 08:06:50.182572 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4g62\" (UniqueName: \"kubernetes.io/projected/4ec216ce-93c5-4f31-8b8d-c05cfe023664-kube-api-access-k4g62\") pod \"nova-cell1-conductor-db-sync-fbgqf\" (UID: 
\"4ec216ce-93c5-4f31-8b8d-c05cfe023664\") " pod="openstack/nova-cell1-conductor-db-sync-fbgqf" Jan 29 08:06:50 crc kubenswrapper[5017]: I0129 08:06:50.182661 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ec216ce-93c5-4f31-8b8d-c05cfe023664-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-fbgqf\" (UID: \"4ec216ce-93c5-4f31-8b8d-c05cfe023664\") " pod="openstack/nova-cell1-conductor-db-sync-fbgqf" Jan 29 08:06:50 crc kubenswrapper[5017]: I0129 08:06:50.191289 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ec216ce-93c5-4f31-8b8d-c05cfe023664-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-fbgqf\" (UID: \"4ec216ce-93c5-4f31-8b8d-c05cfe023664\") " pod="openstack/nova-cell1-conductor-db-sync-fbgqf" Jan 29 08:06:50 crc kubenswrapper[5017]: I0129 08:06:50.192904 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4ec216ce-93c5-4f31-8b8d-c05cfe023664-scripts\") pod \"nova-cell1-conductor-db-sync-fbgqf\" (UID: \"4ec216ce-93c5-4f31-8b8d-c05cfe023664\") " pod="openstack/nova-cell1-conductor-db-sync-fbgqf" Jan 29 08:06:50 crc kubenswrapper[5017]: I0129 08:06:50.197261 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ec216ce-93c5-4f31-8b8d-c05cfe023664-config-data\") pod \"nova-cell1-conductor-db-sync-fbgqf\" (UID: \"4ec216ce-93c5-4f31-8b8d-c05cfe023664\") " pod="openstack/nova-cell1-conductor-db-sync-fbgqf" Jan 29 08:06:50 crc kubenswrapper[5017]: I0129 08:06:50.199272 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f46f5c5cf-27wvj"] Jan 29 08:06:50 crc kubenswrapper[5017]: I0129 08:06:50.216389 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4g62\" (UniqueName: \"kubernetes.io/projected/4ec216ce-93c5-4f31-8b8d-c05cfe023664-kube-api-access-k4g62\") pod \"nova-cell1-conductor-db-sync-fbgqf\" (UID: \"4ec216ce-93c5-4f31-8b8d-c05cfe023664\") " pod="openstack/nova-cell1-conductor-db-sync-fbgqf" Jan 29 08:06:50 crc kubenswrapper[5017]: I0129 08:06:50.241373 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-fbgqf" Jan 29 08:06:50 crc kubenswrapper[5017]: I0129 08:06:50.424843 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"850df674-c042-4147-b2dc-1dbae9c7a4b4","Type":"ContainerStarted","Data":"8df70c4953fe6975a7d8f573c6c03b09eeaa1c2304361e954562917d262e208c"} Jan 29 08:06:50 crc kubenswrapper[5017]: I0129 08:06:50.424915 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"850df674-c042-4147-b2dc-1dbae9c7a4b4","Type":"ContainerStarted","Data":"c9230020d8602b482b4a2f29fb3b352b2400e8e7148e8489821b16fdd17341cc"} Jan 29 08:06:50 crc kubenswrapper[5017]: I0129 08:06:50.424934 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"850df674-c042-4147-b2dc-1dbae9c7a4b4","Type":"ContainerStarted","Data":"ad8f9864082151bd55bdc03fac34770ed24e2c1ea9fd767263acbf15f109a91d"} Jan 29 08:06:50 crc kubenswrapper[5017]: I0129 08:06:50.443934 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f46f5c5cf-27wvj" event={"ID":"0c5d06cc-0a4b-41cd-bf60-ff61ac5027fa","Type":"ContainerStarted","Data":"fae7c2935e94aa1cf2a3bd2e0cff279432da0dcc06770c9cbe8e42067d670b0f"} Jan 29 08:06:50 crc kubenswrapper[5017]: I0129 08:06:50.510256 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2096ff6e-e6b6-444c-9f4f-c7737907d58a","Type":"ContainerStarted","Data":"63346eb38fa2ad7efe6061a8a511feb7dd6e1c21ea2cd0baf5fdf75f8025480f"} Jan 29 08:06:50 crc kubenswrapper[5017]: I0129 08:06:50.510331 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2096ff6e-e6b6-444c-9f4f-c7737907d58a","Type":"ContainerStarted","Data":"d223eccdc908c4bb1e6d71b80d32172f34443881ff9b0bab10c0aaa22882dcef"} Jan 29 08:06:50 crc kubenswrapper[5017]: I0129 08:06:50.532121 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6464cf43-5d37-42ba-b987-79124757db7d","Type":"ContainerStarted","Data":"94fdaad3f32460a8669a46b8a00b925329cb4966f3ba9f2bc08ea19ea01945eb"} Jan 29 08:06:50 crc kubenswrapper[5017]: I0129 08:06:50.532206 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6464cf43-5d37-42ba-b987-79124757db7d","Type":"ContainerStarted","Data":"b17854deb09661993fb86d7108702e5e928dbf49abb132cc8f3f4e803ed1df33"} Jan 29 08:06:50 crc kubenswrapper[5017]: I0129 08:06:50.532223 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6464cf43-5d37-42ba-b987-79124757db7d","Type":"ContainerStarted","Data":"ab0f1e35975464d4be4af84503e9ad198d81855b1efd55919685842e894717ec"} Jan 29 08:06:50 crc kubenswrapper[5017]: I0129 08:06:50.537196 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.537137119 podStartE2EDuration="2.537137119s" podCreationTimestamp="2026-01-29 08:06:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:06:50.482928078 +0000 UTC m=+5496.857375688" watchObservedRunningTime="2026-01-29 08:06:50.537137119 +0000 UTC m=+5496.911584749" Jan 29 08:06:50 crc kubenswrapper[5017]: I0129 08:06:50.555025 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-pld5z" 
event={"ID":"04718292-6393-48d0-9428-127033978d5a","Type":"ContainerStarted","Data":"0f5621f628677a551cda6250355e03dfb283863c189f11b417fe65d8d99430f4"} Jan 29 08:06:50 crc kubenswrapper[5017]: I0129 08:06:50.555095 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-pld5z" event={"ID":"04718292-6393-48d0-9428-127033978d5a","Type":"ContainerStarted","Data":"d4ad68a5fde9b34ca2f85aa4e80c64f33790e487c6c923a48b58892e3ba950d6"} Jan 29 08:06:50 crc kubenswrapper[5017]: I0129 08:06:50.590211 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.590168883 podStartE2EDuration="2.590168883s" podCreationTimestamp="2026-01-29 08:06:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:06:50.531104575 +0000 UTC m=+5496.905552195" watchObservedRunningTime="2026-01-29 08:06:50.590168883 +0000 UTC m=+5496.964616503" Jan 29 08:06:50 crc kubenswrapper[5017]: I0129 08:06:50.611478 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"6e102c94-f461-4486-b5d9-a304b48eaad2","Type":"ContainerStarted","Data":"7a02b75bf67f65f549a9f021ddcec2c267b304bed16acc56dac38ad783b3f5c2"} Jan 29 08:06:50 crc kubenswrapper[5017]: I0129 08:06:50.611533 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"6e102c94-f461-4486-b5d9-a304b48eaad2","Type":"ContainerStarted","Data":"fc079f775234845fdbfb31042ef04e5a6cc738548431de3444504ad35ef433b4"} Jan 29 08:06:50 crc kubenswrapper[5017]: I0129 08:06:50.622434 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.622396096 podStartE2EDuration="2.622396096s" podCreationTimestamp="2026-01-29 08:06:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:06:50.593279617 +0000 UTC m=+5496.967727237" watchObservedRunningTime="2026-01-29 08:06:50.622396096 +0000 UTC m=+5496.996843706" Jan 29 08:06:50 crc kubenswrapper[5017]: I0129 08:06:50.648903 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-pld5z" podStartSLOduration=2.648866662 podStartE2EDuration="2.648866662s" podCreationTimestamp="2026-01-29 08:06:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:06:50.636564527 +0000 UTC m=+5497.011012157" watchObservedRunningTime="2026-01-29 08:06:50.648866662 +0000 UTC m=+5497.023314272" Jan 29 08:06:50 crc kubenswrapper[5017]: I0129 08:06:50.678813 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.6787798 podStartE2EDuration="2.6787798s" podCreationTimestamp="2026-01-29 08:06:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:06:50.662621613 +0000 UTC m=+5497.037069223" watchObservedRunningTime="2026-01-29 08:06:50.6787798 +0000 UTC m=+5497.053227410" Jan 29 08:06:50 crc kubenswrapper[5017]: I0129 08:06:50.879917 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-fbgqf"] Jan 29 08:06:51 crc kubenswrapper[5017]: I0129 
08:06:51.638591 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-fbgqf" event={"ID":"4ec216ce-93c5-4f31-8b8d-c05cfe023664","Type":"ContainerStarted","Data":"43fcc94a563999101b63549864c3c447ba0b9c05460eb5b950f5bb3200509006"} Jan 29 08:06:51 crc kubenswrapper[5017]: I0129 08:06:51.639278 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-fbgqf" event={"ID":"4ec216ce-93c5-4f31-8b8d-c05cfe023664","Type":"ContainerStarted","Data":"93da70afcd02f6953cca07e5616ec18669b16c7d81520e9132c1d2fb39e14267"} Jan 29 08:06:51 crc kubenswrapper[5017]: I0129 08:06:51.648927 5017 generic.go:334] "Generic (PLEG): container finished" podID="0c5d06cc-0a4b-41cd-bf60-ff61ac5027fa" containerID="533cf16622fbdf3b777c06941708592fcf3f3b4741ffca78a2aec42cb3737f8c" exitCode=0 Jan 29 08:06:51 crc kubenswrapper[5017]: I0129 08:06:51.653945 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f46f5c5cf-27wvj" event={"ID":"0c5d06cc-0a4b-41cd-bf60-ff61ac5027fa","Type":"ContainerDied","Data":"533cf16622fbdf3b777c06941708592fcf3f3b4741ffca78a2aec42cb3737f8c"} Jan 29 08:06:51 crc kubenswrapper[5017]: I0129 08:06:51.676477 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-fbgqf" podStartSLOduration=2.676441833 podStartE2EDuration="2.676441833s" podCreationTimestamp="2026-01-29 08:06:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:06:51.661592236 +0000 UTC m=+5498.036039846" watchObservedRunningTime="2026-01-29 08:06:51.676441833 +0000 UTC m=+5498.050889443" Jan 29 08:06:52 crc kubenswrapper[5017]: I0129 08:06:52.663834 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f46f5c5cf-27wvj" event={"ID":"0c5d06cc-0a4b-41cd-bf60-ff61ac5027fa","Type":"ContainerStarted","Data":"641318345e9528276f2a62e8175252dd2f086dd4ddec6d7aa5b185053721d90e"} Jan 29 08:06:52 crc kubenswrapper[5017]: I0129 08:06:52.664389 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6f46f5c5cf-27wvj" Jan 29 08:06:52 crc kubenswrapper[5017]: I0129 08:06:52.712318 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6f46f5c5cf-27wvj" podStartSLOduration=4.712281171 podStartE2EDuration="4.712281171s" podCreationTimestamp="2026-01-29 08:06:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:06:52.691507403 +0000 UTC m=+5499.065955013" watchObservedRunningTime="2026-01-29 08:06:52.712281171 +0000 UTC m=+5499.086728781" Jan 29 08:06:53 crc kubenswrapper[5017]: I0129 08:06:53.680291 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 29 08:06:53 crc kubenswrapper[5017]: I0129 08:06:53.975782 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 29 08:06:54 crc kubenswrapper[5017]: I0129 08:06:54.009003 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 29 08:06:54 crc kubenswrapper[5017]: I0129 08:06:54.009068 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 29 08:06:54 crc kubenswrapper[5017]: I0129 08:06:54.683045 5017 generic.go:334] 
"Generic (PLEG): container finished" podID="4ec216ce-93c5-4f31-8b8d-c05cfe023664" containerID="43fcc94a563999101b63549864c3c447ba0b9c05460eb5b950f5bb3200509006" exitCode=0 Jan 29 08:06:54 crc kubenswrapper[5017]: I0129 08:06:54.683114 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-fbgqf" event={"ID":"4ec216ce-93c5-4f31-8b8d-c05cfe023664","Type":"ContainerDied","Data":"43fcc94a563999101b63549864c3c447ba0b9c05460eb5b950f5bb3200509006"} Jan 29 08:06:56 crc kubenswrapper[5017]: I0129 08:06:56.141038 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-fbgqf" Jan 29 08:06:56 crc kubenswrapper[5017]: I0129 08:06:56.264643 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ec216ce-93c5-4f31-8b8d-c05cfe023664-combined-ca-bundle\") pod \"4ec216ce-93c5-4f31-8b8d-c05cfe023664\" (UID: \"4ec216ce-93c5-4f31-8b8d-c05cfe023664\") " Jan 29 08:06:56 crc kubenswrapper[5017]: I0129 08:06:56.264817 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k4g62\" (UniqueName: \"kubernetes.io/projected/4ec216ce-93c5-4f31-8b8d-c05cfe023664-kube-api-access-k4g62\") pod \"4ec216ce-93c5-4f31-8b8d-c05cfe023664\" (UID: \"4ec216ce-93c5-4f31-8b8d-c05cfe023664\") " Jan 29 08:06:56 crc kubenswrapper[5017]: I0129 08:06:56.265043 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ec216ce-93c5-4f31-8b8d-c05cfe023664-config-data\") pod \"4ec216ce-93c5-4f31-8b8d-c05cfe023664\" (UID: \"4ec216ce-93c5-4f31-8b8d-c05cfe023664\") " Jan 29 08:06:56 crc kubenswrapper[5017]: I0129 08:06:56.265109 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4ec216ce-93c5-4f31-8b8d-c05cfe023664-scripts\") pod \"4ec216ce-93c5-4f31-8b8d-c05cfe023664\" (UID: \"4ec216ce-93c5-4f31-8b8d-c05cfe023664\") " Jan 29 08:06:56 crc kubenswrapper[5017]: I0129 08:06:56.273938 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ec216ce-93c5-4f31-8b8d-c05cfe023664-kube-api-access-k4g62" (OuterVolumeSpecName: "kube-api-access-k4g62") pod "4ec216ce-93c5-4f31-8b8d-c05cfe023664" (UID: "4ec216ce-93c5-4f31-8b8d-c05cfe023664"). InnerVolumeSpecName "kube-api-access-k4g62". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:06:56 crc kubenswrapper[5017]: I0129 08:06:56.276907 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ec216ce-93c5-4f31-8b8d-c05cfe023664-scripts" (OuterVolumeSpecName: "scripts") pod "4ec216ce-93c5-4f31-8b8d-c05cfe023664" (UID: "4ec216ce-93c5-4f31-8b8d-c05cfe023664"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:06:56 crc kubenswrapper[5017]: I0129 08:06:56.299096 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ec216ce-93c5-4f31-8b8d-c05cfe023664-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4ec216ce-93c5-4f31-8b8d-c05cfe023664" (UID: "4ec216ce-93c5-4f31-8b8d-c05cfe023664"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:06:56 crc kubenswrapper[5017]: I0129 08:06:56.299622 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ec216ce-93c5-4f31-8b8d-c05cfe023664-config-data" (OuterVolumeSpecName: "config-data") pod "4ec216ce-93c5-4f31-8b8d-c05cfe023664" (UID: "4ec216ce-93c5-4f31-8b8d-c05cfe023664"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:06:56 crc kubenswrapper[5017]: I0129 08:06:56.372528 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k4g62\" (UniqueName: \"kubernetes.io/projected/4ec216ce-93c5-4f31-8b8d-c05cfe023664-kube-api-access-k4g62\") on node \"crc\" DevicePath \"\"" Jan 29 08:06:56 crc kubenswrapper[5017]: I0129 08:06:56.372572 5017 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ec216ce-93c5-4f31-8b8d-c05cfe023664-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 08:06:56 crc kubenswrapper[5017]: I0129 08:06:56.372584 5017 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4ec216ce-93c5-4f31-8b8d-c05cfe023664-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 08:06:56 crc kubenswrapper[5017]: I0129 08:06:56.372595 5017 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ec216ce-93c5-4f31-8b8d-c05cfe023664-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:06:56 crc kubenswrapper[5017]: I0129 08:06:56.539157 5017 patch_prober.go:28] interesting pod/machine-config-daemon-895pl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 08:06:56 crc kubenswrapper[5017]: I0129 08:06:56.539241 5017 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 08:06:56 crc kubenswrapper[5017]: I0129 08:06:56.539301 5017 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-895pl" Jan 29 08:06:56 crc kubenswrapper[5017]: I0129 08:06:56.541004 5017 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"654247425c9be1ad39bf4a420f5d59cd286d394ddb9732f299ef1d927e039684"} pod="openshift-machine-config-operator/machine-config-daemon-895pl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 08:06:56 crc kubenswrapper[5017]: I0129 08:06:56.541071 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" containerID="cri-o://654247425c9be1ad39bf4a420f5d59cd286d394ddb9732f299ef1d927e039684" gracePeriod=600 Jan 29 08:06:56 crc kubenswrapper[5017]: I0129 08:06:56.712443 5017 generic.go:334] "Generic (PLEG): container finished" podID="04718292-6393-48d0-9428-127033978d5a" 
containerID="0f5621f628677a551cda6250355e03dfb283863c189f11b417fe65d8d99430f4" exitCode=0 Jan 29 08:06:56 crc kubenswrapper[5017]: I0129 08:06:56.712586 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-pld5z" event={"ID":"04718292-6393-48d0-9428-127033978d5a","Type":"ContainerDied","Data":"0f5621f628677a551cda6250355e03dfb283863c189f11b417fe65d8d99430f4"} Jan 29 08:06:56 crc kubenswrapper[5017]: I0129 08:06:56.716994 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-fbgqf" event={"ID":"4ec216ce-93c5-4f31-8b8d-c05cfe023664","Type":"ContainerDied","Data":"93da70afcd02f6953cca07e5616ec18669b16c7d81520e9132c1d2fb39e14267"} Jan 29 08:06:56 crc kubenswrapper[5017]: I0129 08:06:56.717061 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="93da70afcd02f6953cca07e5616ec18669b16c7d81520e9132c1d2fb39e14267" Jan 29 08:06:56 crc kubenswrapper[5017]: I0129 08:06:56.717060 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-fbgqf" Jan 29 08:06:56 crc kubenswrapper[5017]: I0129 08:06:56.729815 5017 generic.go:334] "Generic (PLEG): container finished" podID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerID="654247425c9be1ad39bf4a420f5d59cd286d394ddb9732f299ef1d927e039684" exitCode=0 Jan 29 08:06:56 crc kubenswrapper[5017]: I0129 08:06:56.729883 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-895pl" event={"ID":"2672ef63-7861-4c3d-a1b4-03cc9d18f8e2","Type":"ContainerDied","Data":"654247425c9be1ad39bf4a420f5d59cd286d394ddb9732f299ef1d927e039684"} Jan 29 08:06:56 crc kubenswrapper[5017]: I0129 08:06:56.730202 5017 scope.go:117] "RemoveContainer" containerID="c58783d1bd795ffac490b01aaccb060f3071e08daa88d33a09f7ce8715d64300" Jan 29 08:06:56 crc kubenswrapper[5017]: I0129 08:06:56.809235 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 29 08:06:56 crc kubenswrapper[5017]: E0129 08:06:56.809776 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ec216ce-93c5-4f31-8b8d-c05cfe023664" containerName="nova-cell1-conductor-db-sync" Jan 29 08:06:56 crc kubenswrapper[5017]: I0129 08:06:56.809796 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ec216ce-93c5-4f31-8b8d-c05cfe023664" containerName="nova-cell1-conductor-db-sync" Jan 29 08:06:56 crc kubenswrapper[5017]: I0129 08:06:56.810033 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ec216ce-93c5-4f31-8b8d-c05cfe023664" containerName="nova-cell1-conductor-db-sync" Jan 29 08:06:56 crc kubenswrapper[5017]: I0129 08:06:56.810839 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 29 08:06:56 crc kubenswrapper[5017]: I0129 08:06:56.814371 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 29 08:06:56 crc kubenswrapper[5017]: I0129 08:06:56.853722 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 29 08:06:56 crc kubenswrapper[5017]: I0129 08:06:56.986532 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32079be6-0441-460f-b3fa-d05533ee59f5-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"32079be6-0441-460f-b3fa-d05533ee59f5\") " pod="openstack/nova-cell1-conductor-0" Jan 29 08:06:56 crc kubenswrapper[5017]: I0129 08:06:56.986608 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hr7pt\" (UniqueName: \"kubernetes.io/projected/32079be6-0441-460f-b3fa-d05533ee59f5-kube-api-access-hr7pt\") pod \"nova-cell1-conductor-0\" (UID: \"32079be6-0441-460f-b3fa-d05533ee59f5\") " pod="openstack/nova-cell1-conductor-0" Jan 29 08:06:56 crc kubenswrapper[5017]: I0129 08:06:56.986690 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32079be6-0441-460f-b3fa-d05533ee59f5-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"32079be6-0441-460f-b3fa-d05533ee59f5\") " pod="openstack/nova-cell1-conductor-0" Jan 29 08:06:57 crc kubenswrapper[5017]: I0129 08:06:57.088380 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32079be6-0441-460f-b3fa-d05533ee59f5-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"32079be6-0441-460f-b3fa-d05533ee59f5\") " pod="openstack/nova-cell1-conductor-0" Jan 29 08:06:57 crc kubenswrapper[5017]: I0129 08:06:57.088555 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32079be6-0441-460f-b3fa-d05533ee59f5-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"32079be6-0441-460f-b3fa-d05533ee59f5\") " pod="openstack/nova-cell1-conductor-0" Jan 29 08:06:57 crc kubenswrapper[5017]: I0129 08:06:57.088614 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hr7pt\" (UniqueName: \"kubernetes.io/projected/32079be6-0441-460f-b3fa-d05533ee59f5-kube-api-access-hr7pt\") pod \"nova-cell1-conductor-0\" (UID: \"32079be6-0441-460f-b3fa-d05533ee59f5\") " pod="openstack/nova-cell1-conductor-0" Jan 29 08:06:57 crc kubenswrapper[5017]: I0129 08:06:57.098821 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32079be6-0441-460f-b3fa-d05533ee59f5-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"32079be6-0441-460f-b3fa-d05533ee59f5\") " pod="openstack/nova-cell1-conductor-0" Jan 29 08:06:57 crc kubenswrapper[5017]: I0129 08:06:57.099740 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32079be6-0441-460f-b3fa-d05533ee59f5-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"32079be6-0441-460f-b3fa-d05533ee59f5\") " pod="openstack/nova-cell1-conductor-0" Jan 29 08:06:57 crc kubenswrapper[5017]: I0129 08:06:57.111595 5017 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hr7pt\" (UniqueName: \"kubernetes.io/projected/32079be6-0441-460f-b3fa-d05533ee59f5-kube-api-access-hr7pt\") pod \"nova-cell1-conductor-0\" (UID: \"32079be6-0441-460f-b3fa-d05533ee59f5\") " pod="openstack/nova-cell1-conductor-0" Jan 29 08:06:57 crc kubenswrapper[5017]: I0129 08:06:57.146577 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 29 08:06:57 crc kubenswrapper[5017]: I0129 08:06:57.682291 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 29 08:06:57 crc kubenswrapper[5017]: I0129 08:06:57.743407 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-895pl" event={"ID":"2672ef63-7861-4c3d-a1b4-03cc9d18f8e2","Type":"ContainerStarted","Data":"45e4c349dc67bf5910f42f2a3df0c872cf61c8acd759f4f9b895eefc7afaf645"} Jan 29 08:06:57 crc kubenswrapper[5017]: I0129 08:06:57.745241 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"32079be6-0441-460f-b3fa-d05533ee59f5","Type":"ContainerStarted","Data":"23d3f649d484204e1ad6e9196e5a77554606d0e6cce3a967a05c2c3b7572fb61"} Jan 29 08:06:58 crc kubenswrapper[5017]: I0129 08:06:58.073051 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-pld5z" Jan 29 08:06:58 crc kubenswrapper[5017]: I0129 08:06:58.216119 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04718292-6393-48d0-9428-127033978d5a-config-data\") pod \"04718292-6393-48d0-9428-127033978d5a\" (UID: \"04718292-6393-48d0-9428-127033978d5a\") " Jan 29 08:06:58 crc kubenswrapper[5017]: I0129 08:06:58.216211 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04718292-6393-48d0-9428-127033978d5a-scripts\") pod \"04718292-6393-48d0-9428-127033978d5a\" (UID: \"04718292-6393-48d0-9428-127033978d5a\") " Jan 29 08:06:58 crc kubenswrapper[5017]: I0129 08:06:58.216308 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zhxlw\" (UniqueName: \"kubernetes.io/projected/04718292-6393-48d0-9428-127033978d5a-kube-api-access-zhxlw\") pod \"04718292-6393-48d0-9428-127033978d5a\" (UID: \"04718292-6393-48d0-9428-127033978d5a\") " Jan 29 08:06:58 crc kubenswrapper[5017]: I0129 08:06:58.216437 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04718292-6393-48d0-9428-127033978d5a-combined-ca-bundle\") pod \"04718292-6393-48d0-9428-127033978d5a\" (UID: \"04718292-6393-48d0-9428-127033978d5a\") " Jan 29 08:06:58 crc kubenswrapper[5017]: I0129 08:06:58.221268 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04718292-6393-48d0-9428-127033978d5a-scripts" (OuterVolumeSpecName: "scripts") pod "04718292-6393-48d0-9428-127033978d5a" (UID: "04718292-6393-48d0-9428-127033978d5a"). InnerVolumeSpecName "scripts". 
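The volume traffic in this stretch follows the kubelet volume manager's reconciliation order: VerifyControllerAttachedVolume and MountVolume/SetUp for the volumes a newly added pod (nova-cell1-conductor-0) needs, then UnmountVolume/TearDown followed by "Volume detached" once a finished pod (the cell-mapping job) is cleaned up. A toy model of that loop, under the simplifying assumption that a volume is just a name; illustrative only, not the kubelet's actual desired/actual state-of-world code:

    package main

    import "fmt"

    // reconcile drives the actual set of mounted volumes toward the desired
    // set, mirroring the ordering visible in the log: mounts for newly added
    // pods first, teardowns for deleted pods after.
    func reconcile(desired, actual map[string]bool) {
        for vol := range desired {
            if !actual[vol] {
                fmt.Printf("VerifyControllerAttachedVolume started for volume %q\n", vol)
                fmt.Printf("MountVolume.SetUp succeeded for volume %q\n", vol)
                actual[vol] = true
            }
        }
        for vol := range actual {
            if !desired[vol] {
                fmt.Printf("UnmountVolume.TearDown succeeded for volume %q\n", vol)
                fmt.Printf("Volume detached for volume %q\n", vol)
                delete(actual, vol)
            }
        }
    }

    func main() {
        actual := map[string]bool{}
        // Pod added: its volumes get verified and mounted.
        reconcile(map[string]bool{"config-data": true, "combined-ca-bundle": true, "kube-api-access-hr7pt": true}, actual)
        // Pod deleted: the same volumes are torn down and detached.
        reconcile(map[string]bool{}, actual)
    }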
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:06:58 crc kubenswrapper[5017]: I0129 08:06:58.223008 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04718292-6393-48d0-9428-127033978d5a-kube-api-access-zhxlw" (OuterVolumeSpecName: "kube-api-access-zhxlw") pod "04718292-6393-48d0-9428-127033978d5a" (UID: "04718292-6393-48d0-9428-127033978d5a"). InnerVolumeSpecName "kube-api-access-zhxlw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:06:58 crc kubenswrapper[5017]: I0129 08:06:58.243862 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04718292-6393-48d0-9428-127033978d5a-config-data" (OuterVolumeSpecName: "config-data") pod "04718292-6393-48d0-9428-127033978d5a" (UID: "04718292-6393-48d0-9428-127033978d5a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:06:58 crc kubenswrapper[5017]: I0129 08:06:58.244925 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04718292-6393-48d0-9428-127033978d5a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "04718292-6393-48d0-9428-127033978d5a" (UID: "04718292-6393-48d0-9428-127033978d5a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:06:58 crc kubenswrapper[5017]: I0129 08:06:58.324596 5017 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04718292-6393-48d0-9428-127033978d5a-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 08:06:58 crc kubenswrapper[5017]: I0129 08:06:58.325440 5017 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04718292-6393-48d0-9428-127033978d5a-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 08:06:58 crc kubenswrapper[5017]: I0129 08:06:58.325555 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zhxlw\" (UniqueName: \"kubernetes.io/projected/04718292-6393-48d0-9428-127033978d5a-kube-api-access-zhxlw\") on node \"crc\" DevicePath \"\"" Jan 29 08:06:58 crc kubenswrapper[5017]: I0129 08:06:58.325762 5017 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04718292-6393-48d0-9428-127033978d5a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:06:58 crc kubenswrapper[5017]: I0129 08:06:58.623310 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 29 08:06:58 crc kubenswrapper[5017]: I0129 08:06:58.623832 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 29 08:06:58 crc kubenswrapper[5017]: I0129 08:06:58.680903 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 29 08:06:58 crc kubenswrapper[5017]: I0129 08:06:58.731747 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 29 08:06:58 crc kubenswrapper[5017]: I0129 08:06:58.760043 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-pld5z" event={"ID":"04718292-6393-48d0-9428-127033978d5a","Type":"ContainerDied","Data":"d4ad68a5fde9b34ca2f85aa4e80c64f33790e487c6c923a48b58892e3ba950d6"} Jan 29 08:06:58 crc kubenswrapper[5017]: I0129 08:06:58.760072 5017 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-pld5z" Jan 29 08:06:58 crc kubenswrapper[5017]: I0129 08:06:58.760100 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d4ad68a5fde9b34ca2f85aa4e80c64f33790e487c6c923a48b58892e3ba950d6" Jan 29 08:06:58 crc kubenswrapper[5017]: I0129 08:06:58.765791 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"32079be6-0441-460f-b3fa-d05533ee59f5","Type":"ContainerStarted","Data":"45f280b96f2e8e8f89505d66ac245b618faa9b3234db33973195c9dc73c5b869"} Jan 29 08:06:58 crc kubenswrapper[5017]: I0129 08:06:58.801626 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 29 08:06:58 crc kubenswrapper[5017]: I0129 08:06:58.805801 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.805773328 podStartE2EDuration="2.805773328s" podCreationTimestamp="2026-01-29 08:06:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:06:58.793913763 +0000 UTC m=+5505.168361383" watchObservedRunningTime="2026-01-29 08:06:58.805773328 +0000 UTC m=+5505.180220938" Jan 29 08:06:58 crc kubenswrapper[5017]: I0129 08:06:58.936166 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 29 08:06:58 crc kubenswrapper[5017]: I0129 08:06:58.936608 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="6464cf43-5d37-42ba-b987-79124757db7d" containerName="nova-api-log" containerID="cri-o://b17854deb09661993fb86d7108702e5e928dbf49abb132cc8f3f4e803ed1df33" gracePeriod=30 Jan 29 08:06:58 crc kubenswrapper[5017]: I0129 08:06:58.937135 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="6464cf43-5d37-42ba-b987-79124757db7d" containerName="nova-api-api" containerID="cri-o://94fdaad3f32460a8669a46b8a00b925329cb4966f3ba9f2bc08ea19ea01945eb" gracePeriod=30 Jan 29 08:06:58 crc kubenswrapper[5017]: I0129 08:06:58.968969 5017 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="6464cf43-5d37-42ba-b987-79124757db7d" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.58:8774/\": EOF" Jan 29 08:06:58 crc kubenswrapper[5017]: I0129 08:06:58.976155 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 29 08:06:59 crc kubenswrapper[5017]: I0129 08:06:59.007632 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 08:06:59 crc kubenswrapper[5017]: I0129 08:06:59.008062 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="850df674-c042-4147-b2dc-1dbae9c7a4b4" containerName="nova-metadata-log" containerID="cri-o://c9230020d8602b482b4a2f29fb3b352b2400e8e7148e8489821b16fdd17341cc" gracePeriod=30 Jan 29 08:06:59 crc kubenswrapper[5017]: I0129 08:06:59.008571 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="850df674-c042-4147-b2dc-1dbae9c7a4b4" containerName="nova-metadata-metadata" containerID="cri-o://8df70c4953fe6975a7d8f573c6c03b09eeaa1c2304361e954562917d262e208c" gracePeriod=30 Jan 29 08:06:59 crc kubenswrapper[5017]: I0129 
Jan 29 08:06:59 crc kubenswrapper[5017]: I0129 08:06:59.022182 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0"
Jan 29 08:06:59 crc kubenswrapper[5017]: I0129 08:06:59.255159 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6f46f5c5cf-27wvj"
Jan 29 08:06:59 crc kubenswrapper[5017]: I0129 08:06:59.332649 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79bdcbbc8c-cl7gx"]
Jan 29 08:06:59 crc kubenswrapper[5017]: I0129 08:06:59.333290 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-79bdcbbc8c-cl7gx" podUID="b4033ebc-2592-46c2-bae5-e0ed922e85bc" containerName="dnsmasq-dns" containerID="cri-o://c75d6eb225024f79242044e5a63a39d57c546d3a4e9edc7e4d920fc4a410fc6f" gracePeriod=10
Jan 29 08:06:59 crc kubenswrapper[5017]: I0129 08:06:59.406446 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 29 08:06:59 crc kubenswrapper[5017]: I0129 08:06:59.788850 5017 generic.go:334] "Generic (PLEG): container finished" podID="6464cf43-5d37-42ba-b987-79124757db7d" containerID="b17854deb09661993fb86d7108702e5e928dbf49abb132cc8f3f4e803ed1df33" exitCode=143
Jan 29 08:06:59 crc kubenswrapper[5017]: I0129 08:06:59.788950 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6464cf43-5d37-42ba-b987-79124757db7d","Type":"ContainerDied","Data":"b17854deb09661993fb86d7108702e5e928dbf49abb132cc8f3f4e803ed1df33"}
Jan 29 08:06:59 crc kubenswrapper[5017]: I0129 08:06:59.798639 5017 generic.go:334] "Generic (PLEG): container finished" podID="b4033ebc-2592-46c2-bae5-e0ed922e85bc" containerID="c75d6eb225024f79242044e5a63a39d57c546d3a4e9edc7e4d920fc4a410fc6f" exitCode=0
Jan 29 08:06:59 crc kubenswrapper[5017]: I0129 08:06:59.799156 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bdcbbc8c-cl7gx" event={"ID":"b4033ebc-2592-46c2-bae5-e0ed922e85bc","Type":"ContainerDied","Data":"c75d6eb225024f79242044e5a63a39d57c546d3a4e9edc7e4d920fc4a410fc6f"}
Jan 29 08:06:59 crc kubenswrapper[5017]: I0129 08:06:59.799803 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bdcbbc8c-cl7gx" event={"ID":"b4033ebc-2592-46c2-bae5-e0ed922e85bc","Type":"ContainerDied","Data":"2a088402ae2fed7539e4468a6e5d96e99e66f86c34f3033e9f85fac6d63ce88f"}
Jan 29 08:06:59 crc kubenswrapper[5017]: I0129 08:06:59.799823 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a088402ae2fed7539e4468a6e5d96e99e66f86c34f3033e9f85fac6d63ce88f"
Jan 29 08:06:59 crc kubenswrapper[5017]: I0129 08:06:59.823146 5017 generic.go:334] "Generic (PLEG): container finished" podID="850df674-c042-4147-b2dc-1dbae9c7a4b4" containerID="8df70c4953fe6975a7d8f573c6c03b09eeaa1c2304361e954562917d262e208c" exitCode=0
Jan 29 08:06:59 crc kubenswrapper[5017]: I0129 08:06:59.823186 5017 generic.go:334] "Generic (PLEG): container finished" podID="850df674-c042-4147-b2dc-1dbae9c7a4b4" containerID="c9230020d8602b482b4a2f29fb3b352b2400e8e7148e8489821b16fdd17341cc" exitCode=143
Jan 29 08:06:59 crc kubenswrapper[5017]: I0129 08:06:59.824096 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"850df674-c042-4147-b2dc-1dbae9c7a4b4","Type":"ContainerDied","Data":"8df70c4953fe6975a7d8f573c6c03b09eeaa1c2304361e954562917d262e208c"}
Jan 29 08:06:59 crc kubenswrapper[5017]: I0129 08:06:59.824167 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"850df674-c042-4147-b2dc-1dbae9c7a4b4","Type":"ContainerDied","Data":"c9230020d8602b482b4a2f29fb3b352b2400e8e7148e8489821b16fdd17341cc"}
Jan 29 08:06:59 crc kubenswrapper[5017]: I0129 08:06:59.825031 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0"
Jan 29 08:06:59 crc kubenswrapper[5017]: I0129 08:06:59.839478 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0"
Jan 29 08:06:59 crc kubenswrapper[5017]: I0129 08:06:59.850866 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79bdcbbc8c-cl7gx"
Jan 29 08:06:59 crc kubenswrapper[5017]: I0129 08:06:59.986780 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b4033ebc-2592-46c2-bae5-e0ed922e85bc-ovsdbserver-nb\") pod \"b4033ebc-2592-46c2-bae5-e0ed922e85bc\" (UID: \"b4033ebc-2592-46c2-bae5-e0ed922e85bc\") "
Jan 29 08:06:59 crc kubenswrapper[5017]: I0129 08:06:59.987033 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b4033ebc-2592-46c2-bae5-e0ed922e85bc-dns-svc\") pod \"b4033ebc-2592-46c2-bae5-e0ed922e85bc\" (UID: \"b4033ebc-2592-46c2-bae5-e0ed922e85bc\") "
Jan 29 08:06:59 crc kubenswrapper[5017]: I0129 08:06:59.987104 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b4033ebc-2592-46c2-bae5-e0ed922e85bc-ovsdbserver-sb\") pod \"b4033ebc-2592-46c2-bae5-e0ed922e85bc\" (UID: \"b4033ebc-2592-46c2-bae5-e0ed922e85bc\") "
Jan 29 08:06:59 crc kubenswrapper[5017]: I0129 08:06:59.987172 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kf2nj\" (UniqueName: \"kubernetes.io/projected/b4033ebc-2592-46c2-bae5-e0ed922e85bc-kube-api-access-kf2nj\") pod \"b4033ebc-2592-46c2-bae5-e0ed922e85bc\" (UID: \"b4033ebc-2592-46c2-bae5-e0ed922e85bc\") "
Jan 29 08:06:59 crc kubenswrapper[5017]: I0129 08:06:59.987212 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4033ebc-2592-46c2-bae5-e0ed922e85bc-config\") pod \"b4033ebc-2592-46c2-bae5-e0ed922e85bc\" (UID: \"b4033ebc-2592-46c2-bae5-e0ed922e85bc\") "
Jan 29 08:06:59 crc kubenswrapper[5017]: I0129 08:06:59.989646 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 29 08:07:00 crc kubenswrapper[5017]: I0129 08:07:00.000215 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4033ebc-2592-46c2-bae5-e0ed922e85bc-kube-api-access-kf2nj" (OuterVolumeSpecName: "kube-api-access-kf2nj") pod "b4033ebc-2592-46c2-bae5-e0ed922e85bc" (UID: "b4033ebc-2592-46c2-bae5-e0ed922e85bc"). InnerVolumeSpecName "kube-api-access-kf2nj". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:07:00 crc kubenswrapper[5017]: I0129 08:07:00.063409 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4033ebc-2592-46c2-bae5-e0ed922e85bc-config" (OuterVolumeSpecName: "config") pod "b4033ebc-2592-46c2-bae5-e0ed922e85bc" (UID: "b4033ebc-2592-46c2-bae5-e0ed922e85bc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:07:00 crc kubenswrapper[5017]: I0129 08:07:00.080031 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4033ebc-2592-46c2-bae5-e0ed922e85bc-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b4033ebc-2592-46c2-bae5-e0ed922e85bc" (UID: "b4033ebc-2592-46c2-bae5-e0ed922e85bc"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:07:00 crc kubenswrapper[5017]: I0129 08:07:00.089206 5017 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b4033ebc-2592-46c2-bae5-e0ed922e85bc-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 08:07:00 crc kubenswrapper[5017]: I0129 08:07:00.089253 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kf2nj\" (UniqueName: \"kubernetes.io/projected/b4033ebc-2592-46c2-bae5-e0ed922e85bc-kube-api-access-kf2nj\") on node \"crc\" DevicePath \"\"" Jan 29 08:07:00 crc kubenswrapper[5017]: I0129 08:07:00.089280 5017 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4033ebc-2592-46c2-bae5-e0ed922e85bc-config\") on node \"crc\" DevicePath \"\"" Jan 29 08:07:00 crc kubenswrapper[5017]: I0129 08:07:00.094235 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4033ebc-2592-46c2-bae5-e0ed922e85bc-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b4033ebc-2592-46c2-bae5-e0ed922e85bc" (UID: "b4033ebc-2592-46c2-bae5-e0ed922e85bc"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:07:00 crc kubenswrapper[5017]: I0129 08:07:00.094291 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4033ebc-2592-46c2-bae5-e0ed922e85bc-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b4033ebc-2592-46c2-bae5-e0ed922e85bc" (UID: "b4033ebc-2592-46c2-bae5-e0ed922e85bc"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:07:00 crc kubenswrapper[5017]: I0129 08:07:00.190904 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/850df674-c042-4147-b2dc-1dbae9c7a4b4-logs\") pod \"850df674-c042-4147-b2dc-1dbae9c7a4b4\" (UID: \"850df674-c042-4147-b2dc-1dbae9c7a4b4\") " Jan 29 08:07:00 crc kubenswrapper[5017]: I0129 08:07:00.191017 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/850df674-c042-4147-b2dc-1dbae9c7a4b4-config-data\") pod \"850df674-c042-4147-b2dc-1dbae9c7a4b4\" (UID: \"850df674-c042-4147-b2dc-1dbae9c7a4b4\") " Jan 29 08:07:00 crc kubenswrapper[5017]: I0129 08:07:00.191117 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/850df674-c042-4147-b2dc-1dbae9c7a4b4-combined-ca-bundle\") pod \"850df674-c042-4147-b2dc-1dbae9c7a4b4\" (UID: \"850df674-c042-4147-b2dc-1dbae9c7a4b4\") " Jan 29 08:07:00 crc kubenswrapper[5017]: I0129 08:07:00.191382 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cbmlp\" (UniqueName: \"kubernetes.io/projected/850df674-c042-4147-b2dc-1dbae9c7a4b4-kube-api-access-cbmlp\") pod \"850df674-c042-4147-b2dc-1dbae9c7a4b4\" (UID: \"850df674-c042-4147-b2dc-1dbae9c7a4b4\") " Jan 29 08:07:00 crc kubenswrapper[5017]: I0129 08:07:00.191997 5017 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b4033ebc-2592-46c2-bae5-e0ed922e85bc-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 08:07:00 crc kubenswrapper[5017]: I0129 08:07:00.192020 5017 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b4033ebc-2592-46c2-bae5-e0ed922e85bc-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 08:07:00 crc kubenswrapper[5017]: I0129 08:07:00.195443 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/850df674-c042-4147-b2dc-1dbae9c7a4b4-kube-api-access-cbmlp" (OuterVolumeSpecName: "kube-api-access-cbmlp") pod "850df674-c042-4147-b2dc-1dbae9c7a4b4" (UID: "850df674-c042-4147-b2dc-1dbae9c7a4b4"). InnerVolumeSpecName "kube-api-access-cbmlp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:07:00 crc kubenswrapper[5017]: I0129 08:07:00.195750 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/850df674-c042-4147-b2dc-1dbae9c7a4b4-logs" (OuterVolumeSpecName: "logs") pod "850df674-c042-4147-b2dc-1dbae9c7a4b4" (UID: "850df674-c042-4147-b2dc-1dbae9c7a4b4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:07:00 crc kubenswrapper[5017]: I0129 08:07:00.223758 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/850df674-c042-4147-b2dc-1dbae9c7a4b4-config-data" (OuterVolumeSpecName: "config-data") pod "850df674-c042-4147-b2dc-1dbae9c7a4b4" (UID: "850df674-c042-4147-b2dc-1dbae9c7a4b4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:07:00 crc kubenswrapper[5017]: I0129 08:07:00.227232 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/850df674-c042-4147-b2dc-1dbae9c7a4b4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "850df674-c042-4147-b2dc-1dbae9c7a4b4" (UID: "850df674-c042-4147-b2dc-1dbae9c7a4b4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:07:00 crc kubenswrapper[5017]: I0129 08:07:00.294328 5017 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/850df674-c042-4147-b2dc-1dbae9c7a4b4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:07:00 crc kubenswrapper[5017]: I0129 08:07:00.294435 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cbmlp\" (UniqueName: \"kubernetes.io/projected/850df674-c042-4147-b2dc-1dbae9c7a4b4-kube-api-access-cbmlp\") on node \"crc\" DevicePath \"\"" Jan 29 08:07:00 crc kubenswrapper[5017]: I0129 08:07:00.294451 5017 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/850df674-c042-4147-b2dc-1dbae9c7a4b4-logs\") on node \"crc\" DevicePath \"\"" Jan 29 08:07:00 crc kubenswrapper[5017]: I0129 08:07:00.294463 5017 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/850df674-c042-4147-b2dc-1dbae9c7a4b4-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 08:07:00 crc kubenswrapper[5017]: I0129 08:07:00.835089 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="2096ff6e-e6b6-444c-9f4f-c7737907d58a" containerName="nova-scheduler-scheduler" containerID="cri-o://63346eb38fa2ad7efe6061a8a511feb7dd6e1c21ea2cd0baf5fdf75f8025480f" gracePeriod=30 Jan 29 08:07:00 crc kubenswrapper[5017]: I0129 08:07:00.835221 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 08:07:00 crc kubenswrapper[5017]: I0129 08:07:00.836484 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"850df674-c042-4147-b2dc-1dbae9c7a4b4","Type":"ContainerDied","Data":"ad8f9864082151bd55bdc03fac34770ed24e2c1ea9fd767263acbf15f109a91d"} Jan 29 08:07:00 crc kubenswrapper[5017]: I0129 08:07:00.836558 5017 scope.go:117] "RemoveContainer" containerID="8df70c4953fe6975a7d8f573c6c03b09eeaa1c2304361e954562917d262e208c" Jan 29 08:07:00 crc kubenswrapper[5017]: I0129 08:07:00.836626 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79bdcbbc8c-cl7gx" Jan 29 08:07:00 crc kubenswrapper[5017]: I0129 08:07:00.867109 5017 scope.go:117] "RemoveContainer" containerID="c9230020d8602b482b4a2f29fb3b352b2400e8e7148e8489821b16fdd17341cc" Jan 29 08:07:00 crc kubenswrapper[5017]: I0129 08:07:00.890029 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 08:07:00 crc kubenswrapper[5017]: I0129 08:07:00.900321 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 08:07:00 crc kubenswrapper[5017]: I0129 08:07:00.913356 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79bdcbbc8c-cl7gx"] Jan 29 08:07:00 crc kubenswrapper[5017]: I0129 08:07:00.925583 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-79bdcbbc8c-cl7gx"] Jan 29 08:07:00 crc kubenswrapper[5017]: I0129 08:07:00.935479 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 29 08:07:00 crc kubenswrapper[5017]: E0129 08:07:00.936226 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="850df674-c042-4147-b2dc-1dbae9c7a4b4" containerName="nova-metadata-metadata" Jan 29 08:07:00 crc kubenswrapper[5017]: I0129 08:07:00.936251 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="850df674-c042-4147-b2dc-1dbae9c7a4b4" containerName="nova-metadata-metadata" Jan 29 08:07:00 crc kubenswrapper[5017]: E0129 08:07:00.936279 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4033ebc-2592-46c2-bae5-e0ed922e85bc" containerName="dnsmasq-dns" Jan 29 08:07:00 crc kubenswrapper[5017]: I0129 08:07:00.936290 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4033ebc-2592-46c2-bae5-e0ed922e85bc" containerName="dnsmasq-dns" Jan 29 08:07:00 crc kubenswrapper[5017]: E0129 08:07:00.936317 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4033ebc-2592-46c2-bae5-e0ed922e85bc" containerName="init" Jan 29 08:07:00 crc kubenswrapper[5017]: I0129 08:07:00.936327 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4033ebc-2592-46c2-bae5-e0ed922e85bc" containerName="init" Jan 29 08:07:00 crc kubenswrapper[5017]: E0129 08:07:00.936343 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="850df674-c042-4147-b2dc-1dbae9c7a4b4" containerName="nova-metadata-log" Jan 29 08:07:00 crc kubenswrapper[5017]: I0129 08:07:00.936350 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="850df674-c042-4147-b2dc-1dbae9c7a4b4" containerName="nova-metadata-log" Jan 29 08:07:00 crc kubenswrapper[5017]: E0129 08:07:00.936366 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04718292-6393-48d0-9428-127033978d5a" containerName="nova-manage" Jan 29 08:07:00 crc kubenswrapper[5017]: I0129 08:07:00.936375 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="04718292-6393-48d0-9428-127033978d5a" containerName="nova-manage" Jan 29 08:07:00 crc kubenswrapper[5017]: I0129 08:07:00.936649 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="04718292-6393-48d0-9428-127033978d5a" containerName="nova-manage" Jan 29 08:07:00 crc kubenswrapper[5017]: I0129 08:07:00.936670 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4033ebc-2592-46c2-bae5-e0ed922e85bc" containerName="dnsmasq-dns" Jan 29 08:07:00 crc kubenswrapper[5017]: I0129 08:07:00.936685 5017 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="850df674-c042-4147-b2dc-1dbae9c7a4b4" containerName="nova-metadata-metadata" Jan 29 08:07:00 crc kubenswrapper[5017]: I0129 08:07:00.936698 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="850df674-c042-4147-b2dc-1dbae9c7a4b4" containerName="nova-metadata-log" Jan 29 08:07:00 crc kubenswrapper[5017]: I0129 08:07:00.937924 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 08:07:00 crc kubenswrapper[5017]: I0129 08:07:00.942755 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 29 08:07:00 crc kubenswrapper[5017]: I0129 08:07:00.951154 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 08:07:01 crc kubenswrapper[5017]: I0129 08:07:01.111215 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/078397a7-6b11-41b2-96f7-e10be0bfc1a7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"078397a7-6b11-41b2-96f7-e10be0bfc1a7\") " pod="openstack/nova-metadata-0" Jan 29 08:07:01 crc kubenswrapper[5017]: I0129 08:07:01.112217 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/078397a7-6b11-41b2-96f7-e10be0bfc1a7-logs\") pod \"nova-metadata-0\" (UID: \"078397a7-6b11-41b2-96f7-e10be0bfc1a7\") " pod="openstack/nova-metadata-0" Jan 29 08:07:01 crc kubenswrapper[5017]: I0129 08:07:01.112452 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/078397a7-6b11-41b2-96f7-e10be0bfc1a7-config-data\") pod \"nova-metadata-0\" (UID: \"078397a7-6b11-41b2-96f7-e10be0bfc1a7\") " pod="openstack/nova-metadata-0" Jan 29 08:07:01 crc kubenswrapper[5017]: I0129 08:07:01.112603 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5w4f7\" (UniqueName: \"kubernetes.io/projected/078397a7-6b11-41b2-96f7-e10be0bfc1a7-kube-api-access-5w4f7\") pod \"nova-metadata-0\" (UID: \"078397a7-6b11-41b2-96f7-e10be0bfc1a7\") " pod="openstack/nova-metadata-0" Jan 29 08:07:01 crc kubenswrapper[5017]: I0129 08:07:01.214682 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/078397a7-6b11-41b2-96f7-e10be0bfc1a7-config-data\") pod \"nova-metadata-0\" (UID: \"078397a7-6b11-41b2-96f7-e10be0bfc1a7\") " pod="openstack/nova-metadata-0" Jan 29 08:07:01 crc kubenswrapper[5017]: I0129 08:07:01.214776 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5w4f7\" (UniqueName: \"kubernetes.io/projected/078397a7-6b11-41b2-96f7-e10be0bfc1a7-kube-api-access-5w4f7\") pod \"nova-metadata-0\" (UID: \"078397a7-6b11-41b2-96f7-e10be0bfc1a7\") " pod="openstack/nova-metadata-0" Jan 29 08:07:01 crc kubenswrapper[5017]: I0129 08:07:01.214911 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/078397a7-6b11-41b2-96f7-e10be0bfc1a7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"078397a7-6b11-41b2-96f7-e10be0bfc1a7\") " pod="openstack/nova-metadata-0" Jan 29 08:07:01 crc kubenswrapper[5017]: I0129 08:07:01.215027 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/078397a7-6b11-41b2-96f7-e10be0bfc1a7-logs\") pod \"nova-metadata-0\" (UID: \"078397a7-6b11-41b2-96f7-e10be0bfc1a7\") " pod="openstack/nova-metadata-0" Jan 29 08:07:01 crc kubenswrapper[5017]: I0129 08:07:01.215618 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/078397a7-6b11-41b2-96f7-e10be0bfc1a7-logs\") pod \"nova-metadata-0\" (UID: \"078397a7-6b11-41b2-96f7-e10be0bfc1a7\") " pod="openstack/nova-metadata-0" Jan 29 08:07:01 crc kubenswrapper[5017]: I0129 08:07:01.222752 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/078397a7-6b11-41b2-96f7-e10be0bfc1a7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"078397a7-6b11-41b2-96f7-e10be0bfc1a7\") " pod="openstack/nova-metadata-0" Jan 29 08:07:01 crc kubenswrapper[5017]: I0129 08:07:01.227741 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/078397a7-6b11-41b2-96f7-e10be0bfc1a7-config-data\") pod \"nova-metadata-0\" (UID: \"078397a7-6b11-41b2-96f7-e10be0bfc1a7\") " pod="openstack/nova-metadata-0" Jan 29 08:07:01 crc kubenswrapper[5017]: I0129 08:07:01.235397 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5w4f7\" (UniqueName: \"kubernetes.io/projected/078397a7-6b11-41b2-96f7-e10be0bfc1a7-kube-api-access-5w4f7\") pod \"nova-metadata-0\" (UID: \"078397a7-6b11-41b2-96f7-e10be0bfc1a7\") " pod="openstack/nova-metadata-0" Jan 29 08:07:01 crc kubenswrapper[5017]: I0129 08:07:01.273041 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 08:07:01 crc kubenswrapper[5017]: I0129 08:07:01.771205 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 08:07:01 crc kubenswrapper[5017]: I0129 08:07:01.848103 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"078397a7-6b11-41b2-96f7-e10be0bfc1a7","Type":"ContainerStarted","Data":"e59522f1dda354a76c946b673d1f897700367f1916ac26b2947c4d3ae0a7f18d"} Jan 29 08:07:02 crc kubenswrapper[5017]: I0129 08:07:02.180312 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 29 08:07:02 crc kubenswrapper[5017]: I0129 08:07:02.740498 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="850df674-c042-4147-b2dc-1dbae9c7a4b4" path="/var/lib/kubelet/pods/850df674-c042-4147-b2dc-1dbae9c7a4b4/volumes" Jan 29 08:07:02 crc kubenswrapper[5017]: I0129 08:07:02.741672 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4033ebc-2592-46c2-bae5-e0ed922e85bc" path="/var/lib/kubelet/pods/b4033ebc-2592-46c2-bae5-e0ed922e85bc/volumes" Jan 29 08:07:02 crc kubenswrapper[5017]: I0129 08:07:02.884438 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"078397a7-6b11-41b2-96f7-e10be0bfc1a7","Type":"ContainerStarted","Data":"908571adb90dee3ab58cd04e224d437dbbf0ae6446f49a18adbc17235b27bc20"} Jan 29 08:07:02 crc kubenswrapper[5017]: I0129 08:07:02.885154 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"078397a7-6b11-41b2-96f7-e10be0bfc1a7","Type":"ContainerStarted","Data":"421fd3e61fe79b1538deb762a9458f78c99a3f6cad59c6753816504009dfd308"} Jan 29 08:07:02 crc kubenswrapper[5017]: I0129 
Jan 29 08:07:02 crc kubenswrapper[5017]: I0129 08:07:02.893343 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2096ff6e-e6b6-444c-9f4f-c7737907d58a","Type":"ContainerDied","Data":"63346eb38fa2ad7efe6061a8a511feb7dd6e1c21ea2cd0baf5fdf75f8025480f"}
Jan 29 08:07:02 crc kubenswrapper[5017]: I0129 08:07:02.913899 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.9138747780000003 podStartE2EDuration="2.913874778s" podCreationTimestamp="2026-01-29 08:07:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:07:02.908240383 +0000 UTC m=+5509.282688003" watchObservedRunningTime="2026-01-29 08:07:02.913874778 +0000 UTC m=+5509.288322388"
Jan 29 08:07:03 crc kubenswrapper[5017]: I0129 08:07:03.142274 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-ds8jd"]
Jan 29 08:07:03 crc kubenswrapper[5017]: I0129 08:07:03.143781 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-ds8jd"
Jan 29 08:07:03 crc kubenswrapper[5017]: I0129 08:07:03.151191 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data"
Jan 29 08:07:03 crc kubenswrapper[5017]: I0129 08:07:03.151519 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts"
Jan 29 08:07:03 crc kubenswrapper[5017]: I0129 08:07:03.172377 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-ds8jd"]
Jan 29 08:07:03 crc kubenswrapper[5017]: I0129 08:07:03.330752 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/adf35922-c42a-4ea4-9d61-670469b4512a-scripts\") pod \"nova-cell1-cell-mapping-ds8jd\" (UID: \"adf35922-c42a-4ea4-9d61-670469b4512a\") " pod="openstack/nova-cell1-cell-mapping-ds8jd"
Jan 29 08:07:03 crc kubenswrapper[5017]: I0129 08:07:03.330825 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adf35922-c42a-4ea4-9d61-670469b4512a-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-ds8jd\" (UID: \"adf35922-c42a-4ea4-9d61-670469b4512a\") " pod="openstack/nova-cell1-cell-mapping-ds8jd"
Jan 29 08:07:03 crc kubenswrapper[5017]: I0129 08:07:03.330879 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/adf35922-c42a-4ea4-9d61-670469b4512a-config-data\") pod \"nova-cell1-cell-mapping-ds8jd\" (UID: \"adf35922-c42a-4ea4-9d61-670469b4512a\") " pod="openstack/nova-cell1-cell-mapping-ds8jd"
Jan 29 08:07:03 crc kubenswrapper[5017]: I0129 08:07:03.330903 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpwtk\" (UniqueName: \"kubernetes.io/projected/adf35922-c42a-4ea4-9d61-670469b4512a-kube-api-access-zpwtk\") pod \"nova-cell1-cell-mapping-ds8jd\" (UID: \"adf35922-c42a-4ea4-9d61-670469b4512a\") " pod="openstack/nova-cell1-cell-mapping-ds8jd"
Jan 29 08:07:03 crc kubenswrapper[5017]: I0129 08:07:03.432978 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/adf35922-c42a-4ea4-9d61-670469b4512a-scripts\") pod \"nova-cell1-cell-mapping-ds8jd\" (UID: \"adf35922-c42a-4ea4-9d61-670469b4512a\") " pod="openstack/nova-cell1-cell-mapping-ds8jd"
Jan 29 08:07:03 crc kubenswrapper[5017]: I0129 08:07:03.435486 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adf35922-c42a-4ea4-9d61-670469b4512a-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-ds8jd\" (UID: \"adf35922-c42a-4ea4-9d61-670469b4512a\") " pod="openstack/nova-cell1-cell-mapping-ds8jd"
Jan 29 08:07:03 crc kubenswrapper[5017]: I0129 08:07:03.435564 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/adf35922-c42a-4ea4-9d61-670469b4512a-config-data\") pod \"nova-cell1-cell-mapping-ds8jd\" (UID: \"adf35922-c42a-4ea4-9d61-670469b4512a\") " pod="openstack/nova-cell1-cell-mapping-ds8jd"
Jan 29 08:07:03 crc kubenswrapper[5017]: I0129 08:07:03.435590 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpwtk\" (UniqueName: \"kubernetes.io/projected/adf35922-c42a-4ea4-9d61-670469b4512a-kube-api-access-zpwtk\") pod \"nova-cell1-cell-mapping-ds8jd\" (UID: \"adf35922-c42a-4ea4-9d61-670469b4512a\") " pod="openstack/nova-cell1-cell-mapping-ds8jd"
Jan 29 08:07:03 crc kubenswrapper[5017]: I0129 08:07:03.441037 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/adf35922-c42a-4ea4-9d61-670469b4512a-scripts\") pod \"nova-cell1-cell-mapping-ds8jd\" (UID: \"adf35922-c42a-4ea4-9d61-670469b4512a\") " pod="openstack/nova-cell1-cell-mapping-ds8jd"
Jan 29 08:07:03 crc kubenswrapper[5017]: I0129 08:07:03.441849 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/adf35922-c42a-4ea4-9d61-670469b4512a-config-data\") pod \"nova-cell1-cell-mapping-ds8jd\" (UID: \"adf35922-c42a-4ea4-9d61-670469b4512a\") " pod="openstack/nova-cell1-cell-mapping-ds8jd"
Jan 29 08:07:03 crc kubenswrapper[5017]: I0129 08:07:03.452467 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adf35922-c42a-4ea4-9d61-670469b4512a-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-ds8jd\" (UID: \"adf35922-c42a-4ea4-9d61-670469b4512a\") " pod="openstack/nova-cell1-cell-mapping-ds8jd"
Jan 29 08:07:03 crc kubenswrapper[5017]: I0129 08:07:03.455980 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpwtk\" (UniqueName: \"kubernetes.io/projected/adf35922-c42a-4ea4-9d61-670469b4512a-kube-api-access-zpwtk\") pod \"nova-cell1-cell-mapping-ds8jd\" (UID: \"adf35922-c42a-4ea4-9d61-670469b4512a\") " pod="openstack/nova-cell1-cell-mapping-ds8jd"
Jan 29 08:07:03 crc kubenswrapper[5017]: I0129 08:07:03.466100 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-ds8jd"
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-ds8jd" Jan 29 08:07:03 crc kubenswrapper[5017]: E0129 08:07:03.681739 5017 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 63346eb38fa2ad7efe6061a8a511feb7dd6e1c21ea2cd0baf5fdf75f8025480f is running failed: container process not found" containerID="63346eb38fa2ad7efe6061a8a511feb7dd6e1c21ea2cd0baf5fdf75f8025480f" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 29 08:07:03 crc kubenswrapper[5017]: E0129 08:07:03.682258 5017 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 63346eb38fa2ad7efe6061a8a511feb7dd6e1c21ea2cd0baf5fdf75f8025480f is running failed: container process not found" containerID="63346eb38fa2ad7efe6061a8a511feb7dd6e1c21ea2cd0baf5fdf75f8025480f" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 29 08:07:03 crc kubenswrapper[5017]: E0129 08:07:03.682492 5017 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 63346eb38fa2ad7efe6061a8a511feb7dd6e1c21ea2cd0baf5fdf75f8025480f is running failed: container process not found" containerID="63346eb38fa2ad7efe6061a8a511feb7dd6e1c21ea2cd0baf5fdf75f8025480f" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 29 08:07:03 crc kubenswrapper[5017]: E0129 08:07:03.682539 5017 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 63346eb38fa2ad7efe6061a8a511feb7dd6e1c21ea2cd0baf5fdf75f8025480f is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="2096ff6e-e6b6-444c-9f4f-c7737907d58a" containerName="nova-scheduler-scheduler" Jan 29 08:07:03 crc kubenswrapper[5017]: I0129 08:07:03.739368 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 08:07:03 crc kubenswrapper[5017]: I0129 08:07:03.750640 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2096ff6e-e6b6-444c-9f4f-c7737907d58a-combined-ca-bundle\") pod \"2096ff6e-e6b6-444c-9f4f-c7737907d58a\" (UID: \"2096ff6e-e6b6-444c-9f4f-c7737907d58a\") " Jan 29 08:07:03 crc kubenswrapper[5017]: I0129 08:07:03.750739 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2096ff6e-e6b6-444c-9f4f-c7737907d58a-config-data\") pod \"2096ff6e-e6b6-444c-9f4f-c7737907d58a\" (UID: \"2096ff6e-e6b6-444c-9f4f-c7737907d58a\") " Jan 29 08:07:03 crc kubenswrapper[5017]: I0129 08:07:03.750839 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hg7z8\" (UniqueName: \"kubernetes.io/projected/2096ff6e-e6b6-444c-9f4f-c7737907d58a-kube-api-access-hg7z8\") pod \"2096ff6e-e6b6-444c-9f4f-c7737907d58a\" (UID: \"2096ff6e-e6b6-444c-9f4f-c7737907d58a\") " Jan 29 08:07:03 crc kubenswrapper[5017]: I0129 08:07:03.759296 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2096ff6e-e6b6-444c-9f4f-c7737907d58a-kube-api-access-hg7z8" (OuterVolumeSpecName: "kube-api-access-hg7z8") pod "2096ff6e-e6b6-444c-9f4f-c7737907d58a" (UID: "2096ff6e-e6b6-444c-9f4f-c7737907d58a"). 
InnerVolumeSpecName "kube-api-access-hg7z8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:07:03 crc kubenswrapper[5017]: I0129 08:07:03.788514 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2096ff6e-e6b6-444c-9f4f-c7737907d58a-config-data" (OuterVolumeSpecName: "config-data") pod "2096ff6e-e6b6-444c-9f4f-c7737907d58a" (UID: "2096ff6e-e6b6-444c-9f4f-c7737907d58a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:07:03 crc kubenswrapper[5017]: I0129 08:07:03.789973 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2096ff6e-e6b6-444c-9f4f-c7737907d58a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2096ff6e-e6b6-444c-9f4f-c7737907d58a" (UID: "2096ff6e-e6b6-444c-9f4f-c7737907d58a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:07:03 crc kubenswrapper[5017]: I0129 08:07:03.853619 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hg7z8\" (UniqueName: \"kubernetes.io/projected/2096ff6e-e6b6-444c-9f4f-c7737907d58a-kube-api-access-hg7z8\") on node \"crc\" DevicePath \"\"" Jan 29 08:07:03 crc kubenswrapper[5017]: I0129 08:07:03.853676 5017 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2096ff6e-e6b6-444c-9f4f-c7737907d58a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:07:03 crc kubenswrapper[5017]: I0129 08:07:03.853691 5017 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2096ff6e-e6b6-444c-9f4f-c7737907d58a-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 08:07:03 crc kubenswrapper[5017]: I0129 08:07:03.916637 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2096ff6e-e6b6-444c-9f4f-c7737907d58a","Type":"ContainerDied","Data":"d223eccdc908c4bb1e6d71b80d32172f34443881ff9b0bab10c0aaa22882dcef"} Jan 29 08:07:03 crc kubenswrapper[5017]: I0129 08:07:03.916710 5017 scope.go:117] "RemoveContainer" containerID="63346eb38fa2ad7efe6061a8a511feb7dd6e1c21ea2cd0baf5fdf75f8025480f" Jan 29 08:07:03 crc kubenswrapper[5017]: I0129 08:07:03.916704 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 08:07:03 crc kubenswrapper[5017]: I0129 08:07:03.957480 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 08:07:03 crc kubenswrapper[5017]: I0129 08:07:03.971911 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 08:07:03 crc kubenswrapper[5017]: I0129 08:07:03.991507 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 08:07:03 crc kubenswrapper[5017]: E0129 08:07:03.992089 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2096ff6e-e6b6-444c-9f4f-c7737907d58a" containerName="nova-scheduler-scheduler" Jan 29 08:07:03 crc kubenswrapper[5017]: I0129 08:07:03.992112 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="2096ff6e-e6b6-444c-9f4f-c7737907d58a" containerName="nova-scheduler-scheduler" Jan 29 08:07:03 crc kubenswrapper[5017]: I0129 08:07:03.992304 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="2096ff6e-e6b6-444c-9f4f-c7737907d58a" containerName="nova-scheduler-scheduler" Jan 29 08:07:03 crc kubenswrapper[5017]: I0129 08:07:03.993179 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 08:07:03 crc kubenswrapper[5017]: I0129 08:07:03.995981 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 29 08:07:04 crc kubenswrapper[5017]: I0129 08:07:04.011695 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 08:07:04 crc kubenswrapper[5017]: W0129 08:07:04.051237 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podadf35922_c42a_4ea4_9d61_670469b4512a.slice/crio-c9b019986958a0359c26c53adff371461a0fa474077b2152cbba77ed49f5dd3c WatchSource:0}: Error finding container c9b019986958a0359c26c53adff371461a0fa474077b2152cbba77ed49f5dd3c: Status 404 returned error can't find the container with id c9b019986958a0359c26c53adff371461a0fa474077b2152cbba77ed49f5dd3c Jan 29 08:07:04 crc kubenswrapper[5017]: I0129 08:07:04.058017 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8de9521a-dc9e-4dc3-a51d-fb1be1bfe2be-config-data\") pod \"nova-scheduler-0\" (UID: \"8de9521a-dc9e-4dc3-a51d-fb1be1bfe2be\") " pod="openstack/nova-scheduler-0" Jan 29 08:07:04 crc kubenswrapper[5017]: I0129 08:07:04.058222 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8de9521a-dc9e-4dc3-a51d-fb1be1bfe2be-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"8de9521a-dc9e-4dc3-a51d-fb1be1bfe2be\") " pod="openstack/nova-scheduler-0" Jan 29 08:07:04 crc kubenswrapper[5017]: I0129 08:07:04.058321 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txdsr\" (UniqueName: \"kubernetes.io/projected/8de9521a-dc9e-4dc3-a51d-fb1be1bfe2be-kube-api-access-txdsr\") pod \"nova-scheduler-0\" (UID: \"8de9521a-dc9e-4dc3-a51d-fb1be1bfe2be\") " pod="openstack/nova-scheduler-0" Jan 29 08:07:04 crc kubenswrapper[5017]: I0129 08:07:04.070124 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-ds8jd"] Jan 29 08:07:04 crc kubenswrapper[5017]: I0129 
08:07:04.160897 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8de9521a-dc9e-4dc3-a51d-fb1be1bfe2be-config-data\") pod \"nova-scheduler-0\" (UID: \"8de9521a-dc9e-4dc3-a51d-fb1be1bfe2be\") " pod="openstack/nova-scheduler-0" Jan 29 08:07:04 crc kubenswrapper[5017]: I0129 08:07:04.161141 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8de9521a-dc9e-4dc3-a51d-fb1be1bfe2be-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"8de9521a-dc9e-4dc3-a51d-fb1be1bfe2be\") " pod="openstack/nova-scheduler-0" Jan 29 08:07:04 crc kubenswrapper[5017]: I0129 08:07:04.162004 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txdsr\" (UniqueName: \"kubernetes.io/projected/8de9521a-dc9e-4dc3-a51d-fb1be1bfe2be-kube-api-access-txdsr\") pod \"nova-scheduler-0\" (UID: \"8de9521a-dc9e-4dc3-a51d-fb1be1bfe2be\") " pod="openstack/nova-scheduler-0" Jan 29 08:07:04 crc kubenswrapper[5017]: I0129 08:07:04.168045 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8de9521a-dc9e-4dc3-a51d-fb1be1bfe2be-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"8de9521a-dc9e-4dc3-a51d-fb1be1bfe2be\") " pod="openstack/nova-scheduler-0" Jan 29 08:07:04 crc kubenswrapper[5017]: I0129 08:07:04.170675 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8de9521a-dc9e-4dc3-a51d-fb1be1bfe2be-config-data\") pod \"nova-scheduler-0\" (UID: \"8de9521a-dc9e-4dc3-a51d-fb1be1bfe2be\") " pod="openstack/nova-scheduler-0" Jan 29 08:07:04 crc kubenswrapper[5017]: I0129 08:07:04.179789 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txdsr\" (UniqueName: \"kubernetes.io/projected/8de9521a-dc9e-4dc3-a51d-fb1be1bfe2be-kube-api-access-txdsr\") pod \"nova-scheduler-0\" (UID: \"8de9521a-dc9e-4dc3-a51d-fb1be1bfe2be\") " pod="openstack/nova-scheduler-0" Jan 29 08:07:04 crc kubenswrapper[5017]: I0129 08:07:04.313713 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 08:07:04 crc kubenswrapper[5017]: I0129 08:07:04.330429 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2096ff6e-e6b6-444c-9f4f-c7737907d58a" path="/var/lib/kubelet/pods/2096ff6e-e6b6-444c-9f4f-c7737907d58a/volumes" Jan 29 08:07:04 crc kubenswrapper[5017]: I0129 08:07:04.929204 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-ds8jd" event={"ID":"adf35922-c42a-4ea4-9d61-670469b4512a","Type":"ContainerStarted","Data":"99ae773943c31d04b6d790a18468c00cc81e246935cdbe6dc6c0698fa9f56f20"} Jan 29 08:07:04 crc kubenswrapper[5017]: I0129 08:07:04.929784 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-ds8jd" event={"ID":"adf35922-c42a-4ea4-9d61-670469b4512a","Type":"ContainerStarted","Data":"c9b019986958a0359c26c53adff371461a0fa474077b2152cbba77ed49f5dd3c"} Jan 29 08:07:04 crc kubenswrapper[5017]: I0129 08:07:04.949239 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-ds8jd" podStartSLOduration=1.9492138639999999 podStartE2EDuration="1.949213864s" podCreationTimestamp="2026-01-29 08:07:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:07:04.945119215 +0000 UTC m=+5511.319566845" watchObservedRunningTime="2026-01-29 08:07:04.949213864 +0000 UTC m=+5511.323661474" Jan 29 08:07:04 crc kubenswrapper[5017]: I0129 08:07:04.978425 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 08:07:05 crc kubenswrapper[5017]: E0129 08:07:05.944359 5017 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6464cf43_5d37_42ba_b987_79124757db7d.slice/crio-conmon-94fdaad3f32460a8669a46b8a00b925329cb4966f3ba9f2bc08ea19ea01945eb.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6464cf43_5d37_42ba_b987_79124757db7d.slice/crio-94fdaad3f32460a8669a46b8a00b925329cb4966f3ba9f2bc08ea19ea01945eb.scope\": RecentStats: unable to find data in memory cache]" Jan 29 08:07:05 crc kubenswrapper[5017]: I0129 08:07:05.953375 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8de9521a-dc9e-4dc3-a51d-fb1be1bfe2be","Type":"ContainerStarted","Data":"c48fe2d969f3d2c10a1b927126513c3c68f38e1e1c59e73cd560c4894c6cf255"} Jan 29 08:07:05 crc kubenswrapper[5017]: I0129 08:07:05.953430 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8de9521a-dc9e-4dc3-a51d-fb1be1bfe2be","Type":"ContainerStarted","Data":"8c64307321db8545ac37b3e0d4ab7d09451053f6622d2c8ed013e44e536f8684"} Jan 29 08:07:05 crc kubenswrapper[5017]: I0129 08:07:05.961392 5017 generic.go:334] "Generic (PLEG): container finished" podID="6464cf43-5d37-42ba-b987-79124757db7d" containerID="94fdaad3f32460a8669a46b8a00b925329cb4966f3ba9f2bc08ea19ea01945eb" exitCode=0 Jan 29 08:07:05 crc kubenswrapper[5017]: I0129 08:07:05.961466 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6464cf43-5d37-42ba-b987-79124757db7d","Type":"ContainerDied","Data":"94fdaad3f32460a8669a46b8a00b925329cb4966f3ba9f2bc08ea19ea01945eb"} Jan 29 08:07:05 crc kubenswrapper[5017]: I0129 08:07:05.983683 
5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.983656109 podStartE2EDuration="2.983656109s" podCreationTimestamp="2026-01-29 08:07:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:07:05.976896437 +0000 UTC m=+5512.351344057" watchObservedRunningTime="2026-01-29 08:07:05.983656109 +0000 UTC m=+5512.358103719" Jan 29 08:07:06 crc kubenswrapper[5017]: I0129 08:07:06.241859 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 08:07:06 crc kubenswrapper[5017]: I0129 08:07:06.274062 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 29 08:07:06 crc kubenswrapper[5017]: I0129 08:07:06.274123 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 29 08:07:06 crc kubenswrapper[5017]: I0129 08:07:06.317560 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6464cf43-5d37-42ba-b987-79124757db7d-combined-ca-bundle\") pod \"6464cf43-5d37-42ba-b987-79124757db7d\" (UID: \"6464cf43-5d37-42ba-b987-79124757db7d\") " Jan 29 08:07:06 crc kubenswrapper[5017]: I0129 08:07:06.317615 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pvwh5\" (UniqueName: \"kubernetes.io/projected/6464cf43-5d37-42ba-b987-79124757db7d-kube-api-access-pvwh5\") pod \"6464cf43-5d37-42ba-b987-79124757db7d\" (UID: \"6464cf43-5d37-42ba-b987-79124757db7d\") " Jan 29 08:07:06 crc kubenswrapper[5017]: I0129 08:07:06.317796 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6464cf43-5d37-42ba-b987-79124757db7d-config-data\") pod \"6464cf43-5d37-42ba-b987-79124757db7d\" (UID: \"6464cf43-5d37-42ba-b987-79124757db7d\") " Jan 29 08:07:06 crc kubenswrapper[5017]: I0129 08:07:06.317852 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6464cf43-5d37-42ba-b987-79124757db7d-logs\") pod \"6464cf43-5d37-42ba-b987-79124757db7d\" (UID: \"6464cf43-5d37-42ba-b987-79124757db7d\") " Jan 29 08:07:06 crc kubenswrapper[5017]: I0129 08:07:06.318425 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6464cf43-5d37-42ba-b987-79124757db7d-logs" (OuterVolumeSpecName: "logs") pod "6464cf43-5d37-42ba-b987-79124757db7d" (UID: "6464cf43-5d37-42ba-b987-79124757db7d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:07:06 crc kubenswrapper[5017]: I0129 08:07:06.326776 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6464cf43-5d37-42ba-b987-79124757db7d-kube-api-access-pvwh5" (OuterVolumeSpecName: "kube-api-access-pvwh5") pod "6464cf43-5d37-42ba-b987-79124757db7d" (UID: "6464cf43-5d37-42ba-b987-79124757db7d"). InnerVolumeSpecName "kube-api-access-pvwh5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:07:06 crc kubenswrapper[5017]: I0129 08:07:06.358140 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6464cf43-5d37-42ba-b987-79124757db7d-config-data" (OuterVolumeSpecName: "config-data") pod "6464cf43-5d37-42ba-b987-79124757db7d" (UID: "6464cf43-5d37-42ba-b987-79124757db7d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:07:06 crc kubenswrapper[5017]: I0129 08:07:06.380176 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6464cf43-5d37-42ba-b987-79124757db7d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6464cf43-5d37-42ba-b987-79124757db7d" (UID: "6464cf43-5d37-42ba-b987-79124757db7d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:07:06 crc kubenswrapper[5017]: I0129 08:07:06.422282 5017 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6464cf43-5d37-42ba-b987-79124757db7d-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 08:07:06 crc kubenswrapper[5017]: I0129 08:07:06.422479 5017 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6464cf43-5d37-42ba-b987-79124757db7d-logs\") on node \"crc\" DevicePath \"\"" Jan 29 08:07:06 crc kubenswrapper[5017]: I0129 08:07:06.422531 5017 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6464cf43-5d37-42ba-b987-79124757db7d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:07:06 crc kubenswrapper[5017]: I0129 08:07:06.422550 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pvwh5\" (UniqueName: \"kubernetes.io/projected/6464cf43-5d37-42ba-b987-79124757db7d-kube-api-access-pvwh5\") on node \"crc\" DevicePath \"\"" Jan 29 08:07:06 crc kubenswrapper[5017]: I0129 08:07:06.973452 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6464cf43-5d37-42ba-b987-79124757db7d","Type":"ContainerDied","Data":"ab0f1e35975464d4be4af84503e9ad198d81855b1efd55919685842e894717ec"} Jan 29 08:07:06 crc kubenswrapper[5017]: I0129 08:07:06.973492 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 29 08:07:06 crc kubenswrapper[5017]: I0129 08:07:06.973551 5017 scope.go:117] "RemoveContainer" containerID="94fdaad3f32460a8669a46b8a00b925329cb4966f3ba9f2bc08ea19ea01945eb" Jan 29 08:07:07 crc kubenswrapper[5017]: I0129 08:07:07.014439 5017 scope.go:117] "RemoveContainer" containerID="b17854deb09661993fb86d7108702e5e928dbf49abb132cc8f3f4e803ed1df33" Jan 29 08:07:07 crc kubenswrapper[5017]: I0129 08:07:07.019062 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 29 08:07:07 crc kubenswrapper[5017]: I0129 08:07:07.032054 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 29 08:07:07 crc kubenswrapper[5017]: I0129 08:07:07.047265 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 29 08:07:07 crc kubenswrapper[5017]: E0129 08:07:07.047787 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6464cf43-5d37-42ba-b987-79124757db7d" containerName="nova-api-api" Jan 29 08:07:07 crc kubenswrapper[5017]: I0129 08:07:07.047806 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="6464cf43-5d37-42ba-b987-79124757db7d" containerName="nova-api-api" Jan 29 08:07:07 crc kubenswrapper[5017]: E0129 08:07:07.047836 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6464cf43-5d37-42ba-b987-79124757db7d" containerName="nova-api-log" Jan 29 08:07:07 crc kubenswrapper[5017]: I0129 08:07:07.047844 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="6464cf43-5d37-42ba-b987-79124757db7d" containerName="nova-api-log" Jan 29 08:07:07 crc kubenswrapper[5017]: I0129 08:07:07.048032 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="6464cf43-5d37-42ba-b987-79124757db7d" containerName="nova-api-log" Jan 29 08:07:07 crc kubenswrapper[5017]: I0129 08:07:07.048054 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="6464cf43-5d37-42ba-b987-79124757db7d" containerName="nova-api-api" Jan 29 08:07:07 crc kubenswrapper[5017]: I0129 08:07:07.049148 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 29 08:07:07 crc kubenswrapper[5017]: I0129 08:07:07.053401 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 29 08:07:07 crc kubenswrapper[5017]: I0129 08:07:07.082067 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 29 08:07:07 crc kubenswrapper[5017]: I0129 08:07:07.136572 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29b35887-79d2-4305-b245-25812eb4ed6a-logs\") pod \"nova-api-0\" (UID: \"29b35887-79d2-4305-b245-25812eb4ed6a\") " pod="openstack/nova-api-0" Jan 29 08:07:07 crc kubenswrapper[5017]: I0129 08:07:07.136700 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rftj2\" (UniqueName: \"kubernetes.io/projected/29b35887-79d2-4305-b245-25812eb4ed6a-kube-api-access-rftj2\") pod \"nova-api-0\" (UID: \"29b35887-79d2-4305-b245-25812eb4ed6a\") " pod="openstack/nova-api-0" Jan 29 08:07:07 crc kubenswrapper[5017]: I0129 08:07:07.136760 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29b35887-79d2-4305-b245-25812eb4ed6a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"29b35887-79d2-4305-b245-25812eb4ed6a\") " pod="openstack/nova-api-0" Jan 29 08:07:07 crc kubenswrapper[5017]: I0129 08:07:07.136864 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29b35887-79d2-4305-b245-25812eb4ed6a-config-data\") pod \"nova-api-0\" (UID: \"29b35887-79d2-4305-b245-25812eb4ed6a\") " pod="openstack/nova-api-0" Jan 29 08:07:07 crc kubenswrapper[5017]: I0129 08:07:07.238074 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29b35887-79d2-4305-b245-25812eb4ed6a-config-data\") pod \"nova-api-0\" (UID: \"29b35887-79d2-4305-b245-25812eb4ed6a\") " pod="openstack/nova-api-0" Jan 29 08:07:07 crc kubenswrapper[5017]: I0129 08:07:07.238169 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29b35887-79d2-4305-b245-25812eb4ed6a-logs\") pod \"nova-api-0\" (UID: \"29b35887-79d2-4305-b245-25812eb4ed6a\") " pod="openstack/nova-api-0" Jan 29 08:07:07 crc kubenswrapper[5017]: I0129 08:07:07.238213 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rftj2\" (UniqueName: \"kubernetes.io/projected/29b35887-79d2-4305-b245-25812eb4ed6a-kube-api-access-rftj2\") pod \"nova-api-0\" (UID: \"29b35887-79d2-4305-b245-25812eb4ed6a\") " pod="openstack/nova-api-0" Jan 29 08:07:07 crc kubenswrapper[5017]: I0129 08:07:07.238263 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29b35887-79d2-4305-b245-25812eb4ed6a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"29b35887-79d2-4305-b245-25812eb4ed6a\") " pod="openstack/nova-api-0" Jan 29 08:07:07 crc kubenswrapper[5017]: I0129 08:07:07.238815 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29b35887-79d2-4305-b245-25812eb4ed6a-logs\") pod \"nova-api-0\" (UID: \"29b35887-79d2-4305-b245-25812eb4ed6a\") " 
pod="openstack/nova-api-0" Jan 29 08:07:07 crc kubenswrapper[5017]: I0129 08:07:07.246900 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29b35887-79d2-4305-b245-25812eb4ed6a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"29b35887-79d2-4305-b245-25812eb4ed6a\") " pod="openstack/nova-api-0" Jan 29 08:07:07 crc kubenswrapper[5017]: I0129 08:07:07.247190 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29b35887-79d2-4305-b245-25812eb4ed6a-config-data\") pod \"nova-api-0\" (UID: \"29b35887-79d2-4305-b245-25812eb4ed6a\") " pod="openstack/nova-api-0" Jan 29 08:07:07 crc kubenswrapper[5017]: I0129 08:07:07.263093 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rftj2\" (UniqueName: \"kubernetes.io/projected/29b35887-79d2-4305-b245-25812eb4ed6a-kube-api-access-rftj2\") pod \"nova-api-0\" (UID: \"29b35887-79d2-4305-b245-25812eb4ed6a\") " pod="openstack/nova-api-0" Jan 29 08:07:07 crc kubenswrapper[5017]: I0129 08:07:07.399776 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 08:07:07 crc kubenswrapper[5017]: I0129 08:07:07.912023 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 29 08:07:07 crc kubenswrapper[5017]: I0129 08:07:07.991543 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"29b35887-79d2-4305-b245-25812eb4ed6a","Type":"ContainerStarted","Data":"f01b71427daf002bceac8f95a1567df040415fb7c70ec239cb8c5971f9c0c7ce"} Jan 29 08:07:08 crc kubenswrapper[5017]: I0129 08:07:08.330600 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6464cf43-5d37-42ba-b987-79124757db7d" path="/var/lib/kubelet/pods/6464cf43-5d37-42ba-b987-79124757db7d/volumes" Jan 29 08:07:09 crc kubenswrapper[5017]: I0129 08:07:09.002093 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"29b35887-79d2-4305-b245-25812eb4ed6a","Type":"ContainerStarted","Data":"7079fb357733a94cd88d6e2961c621500609db5cd7d53fff0481b2c34437efb8"} Jan 29 08:07:09 crc kubenswrapper[5017]: I0129 08:07:09.002169 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"29b35887-79d2-4305-b245-25812eb4ed6a","Type":"ContainerStarted","Data":"2fe9d8890e465779de03247ddf9b47589f4443f84f489922d0999c6dca6ee932"} Jan 29 08:07:09 crc kubenswrapper[5017]: I0129 08:07:09.044483 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.044461475 podStartE2EDuration="2.044461475s" podCreationTimestamp="2026-01-29 08:07:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:07:09.033702596 +0000 UTC m=+5515.408150206" watchObservedRunningTime="2026-01-29 08:07:09.044461475 +0000 UTC m=+5515.418909085" Jan 29 08:07:09 crc kubenswrapper[5017]: I0129 08:07:09.313922 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 29 08:07:11 crc kubenswrapper[5017]: I0129 08:07:11.024407 5017 generic.go:334] "Generic (PLEG): container finished" podID="adf35922-c42a-4ea4-9d61-670469b4512a" containerID="99ae773943c31d04b6d790a18468c00cc81e246935cdbe6dc6c0698fa9f56f20" exitCode=0 Jan 29 08:07:11 crc kubenswrapper[5017]: I0129 
08:07:11.024463 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-ds8jd" event={"ID":"adf35922-c42a-4ea4-9d61-670469b4512a","Type":"ContainerDied","Data":"99ae773943c31d04b6d790a18468c00cc81e246935cdbe6dc6c0698fa9f56f20"} Jan 29 08:07:11 crc kubenswrapper[5017]: I0129 08:07:11.274228 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 29 08:07:11 crc kubenswrapper[5017]: I0129 08:07:11.274510 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 29 08:07:12 crc kubenswrapper[5017]: I0129 08:07:12.358188 5017 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="078397a7-6b11-41b2-96f7-e10be0bfc1a7" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"http://10.217.1.65:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 08:07:12 crc kubenswrapper[5017]: I0129 08:07:12.358214 5017 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="078397a7-6b11-41b2-96f7-e10be0bfc1a7" containerName="nova-metadata-log" probeResult="failure" output="Get \"http://10.217.1.65:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 08:07:12 crc kubenswrapper[5017]: I0129 08:07:12.449047 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-ds8jd" Jan 29 08:07:12 crc kubenswrapper[5017]: I0129 08:07:12.587423 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adf35922-c42a-4ea4-9d61-670469b4512a-combined-ca-bundle\") pod \"adf35922-c42a-4ea4-9d61-670469b4512a\" (UID: \"adf35922-c42a-4ea4-9d61-670469b4512a\") " Jan 29 08:07:12 crc kubenswrapper[5017]: I0129 08:07:12.588738 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zpwtk\" (UniqueName: \"kubernetes.io/projected/adf35922-c42a-4ea4-9d61-670469b4512a-kube-api-access-zpwtk\") pod \"adf35922-c42a-4ea4-9d61-670469b4512a\" (UID: \"adf35922-c42a-4ea4-9d61-670469b4512a\") " Jan 29 08:07:12 crc kubenswrapper[5017]: I0129 08:07:12.588787 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/adf35922-c42a-4ea4-9d61-670469b4512a-config-data\") pod \"adf35922-c42a-4ea4-9d61-670469b4512a\" (UID: \"adf35922-c42a-4ea4-9d61-670469b4512a\") " Jan 29 08:07:12 crc kubenswrapper[5017]: I0129 08:07:12.588832 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/adf35922-c42a-4ea4-9d61-670469b4512a-scripts\") pod \"adf35922-c42a-4ea4-9d61-670469b4512a\" (UID: \"adf35922-c42a-4ea4-9d61-670469b4512a\") " Jan 29 08:07:12 crc kubenswrapper[5017]: I0129 08:07:12.596026 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/adf35922-c42a-4ea4-9d61-670469b4512a-scripts" (OuterVolumeSpecName: "scripts") pod "adf35922-c42a-4ea4-9d61-670469b4512a" (UID: "adf35922-c42a-4ea4-9d61-670469b4512a"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:07:12 crc kubenswrapper[5017]: I0129 08:07:12.620712 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/adf35922-c42a-4ea4-9d61-670469b4512a-kube-api-access-zpwtk" (OuterVolumeSpecName: "kube-api-access-zpwtk") pod "adf35922-c42a-4ea4-9d61-670469b4512a" (UID: "adf35922-c42a-4ea4-9d61-670469b4512a"). InnerVolumeSpecName "kube-api-access-zpwtk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:07:12 crc kubenswrapper[5017]: I0129 08:07:12.624036 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/adf35922-c42a-4ea4-9d61-670469b4512a-config-data" (OuterVolumeSpecName: "config-data") pod "adf35922-c42a-4ea4-9d61-670469b4512a" (UID: "adf35922-c42a-4ea4-9d61-670469b4512a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:07:12 crc kubenswrapper[5017]: I0129 08:07:12.631035 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/adf35922-c42a-4ea4-9d61-670469b4512a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "adf35922-c42a-4ea4-9d61-670469b4512a" (UID: "adf35922-c42a-4ea4-9d61-670469b4512a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:07:12 crc kubenswrapper[5017]: I0129 08:07:12.697779 5017 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/adf35922-c42a-4ea4-9d61-670469b4512a-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 08:07:12 crc kubenswrapper[5017]: I0129 08:07:12.697838 5017 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adf35922-c42a-4ea4-9d61-670469b4512a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:07:12 crc kubenswrapper[5017]: I0129 08:07:12.697855 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zpwtk\" (UniqueName: \"kubernetes.io/projected/adf35922-c42a-4ea4-9d61-670469b4512a-kube-api-access-zpwtk\") on node \"crc\" DevicePath \"\"" Jan 29 08:07:12 crc kubenswrapper[5017]: I0129 08:07:12.697868 5017 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/adf35922-c42a-4ea4-9d61-670469b4512a-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 08:07:13 crc kubenswrapper[5017]: I0129 08:07:13.045660 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-ds8jd" event={"ID":"adf35922-c42a-4ea4-9d61-670469b4512a","Type":"ContainerDied","Data":"c9b019986958a0359c26c53adff371461a0fa474077b2152cbba77ed49f5dd3c"} Jan 29 08:07:13 crc kubenswrapper[5017]: I0129 08:07:13.045722 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c9b019986958a0359c26c53adff371461a0fa474077b2152cbba77ed49f5dd3c" Jan 29 08:07:13 crc kubenswrapper[5017]: I0129 08:07:13.045799 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-ds8jd" Jan 29 08:07:13 crc kubenswrapper[5017]: I0129 08:07:13.243653 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 08:07:13 crc kubenswrapper[5017]: I0129 08:07:13.244163 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="8de9521a-dc9e-4dc3-a51d-fb1be1bfe2be" containerName="nova-scheduler-scheduler" containerID="cri-o://c48fe2d969f3d2c10a1b927126513c3c68f38e1e1c59e73cd560c4894c6cf255" gracePeriod=30 Jan 29 08:07:13 crc kubenswrapper[5017]: I0129 08:07:13.253702 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 29 08:07:13 crc kubenswrapper[5017]: I0129 08:07:13.253936 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="29b35887-79d2-4305-b245-25812eb4ed6a" containerName="nova-api-log" containerID="cri-o://2fe9d8890e465779de03247ddf9b47589f4443f84f489922d0999c6dca6ee932" gracePeriod=30 Jan 29 08:07:13 crc kubenswrapper[5017]: I0129 08:07:13.254843 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="29b35887-79d2-4305-b245-25812eb4ed6a" containerName="nova-api-api" containerID="cri-o://7079fb357733a94cd88d6e2961c621500609db5cd7d53fff0481b2c34437efb8" gracePeriod=30 Jan 29 08:07:13 crc kubenswrapper[5017]: I0129 08:07:13.410129 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 08:07:13 crc kubenswrapper[5017]: I0129 08:07:13.411454 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="078397a7-6b11-41b2-96f7-e10be0bfc1a7" containerName="nova-metadata-log" containerID="cri-o://421fd3e61fe79b1538deb762a9458f78c99a3f6cad59c6753816504009dfd308" gracePeriod=30 Jan 29 08:07:13 crc kubenswrapper[5017]: I0129 08:07:13.412124 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="078397a7-6b11-41b2-96f7-e10be0bfc1a7" containerName="nova-metadata-metadata" containerID="cri-o://908571adb90dee3ab58cd04e224d437dbbf0ae6446f49a18adbc17235b27bc20" gracePeriod=30 Jan 29 08:07:14 crc kubenswrapper[5017]: I0129 08:07:14.085263 5017 generic.go:334] "Generic (PLEG): container finished" podID="078397a7-6b11-41b2-96f7-e10be0bfc1a7" containerID="421fd3e61fe79b1538deb762a9458f78c99a3f6cad59c6753816504009dfd308" exitCode=143 Jan 29 08:07:14 crc kubenswrapper[5017]: I0129 08:07:14.085766 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"078397a7-6b11-41b2-96f7-e10be0bfc1a7","Type":"ContainerDied","Data":"421fd3e61fe79b1538deb762a9458f78c99a3f6cad59c6753816504009dfd308"} Jan 29 08:07:14 crc kubenswrapper[5017]: I0129 08:07:14.089121 5017 generic.go:334] "Generic (PLEG): container finished" podID="29b35887-79d2-4305-b245-25812eb4ed6a" containerID="7079fb357733a94cd88d6e2961c621500609db5cd7d53fff0481b2c34437efb8" exitCode=0 Jan 29 08:07:14 crc kubenswrapper[5017]: I0129 08:07:14.089143 5017 generic.go:334] "Generic (PLEG): container finished" podID="29b35887-79d2-4305-b245-25812eb4ed6a" containerID="2fe9d8890e465779de03247ddf9b47589f4443f84f489922d0999c6dca6ee932" exitCode=143 Jan 29 08:07:14 crc kubenswrapper[5017]: I0129 08:07:14.089162 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"29b35887-79d2-4305-b245-25812eb4ed6a","Type":"ContainerDied","Data":"7079fb357733a94cd88d6e2961c621500609db5cd7d53fff0481b2c34437efb8"} Jan 29 08:07:14 crc kubenswrapper[5017]: I0129 08:07:14.089184 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"29b35887-79d2-4305-b245-25812eb4ed6a","Type":"ContainerDied","Data":"2fe9d8890e465779de03247ddf9b47589f4443f84f489922d0999c6dca6ee932"} Jan 29 08:07:14 crc kubenswrapper[5017]: I0129 08:07:14.268129 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 08:07:14 crc kubenswrapper[5017]: I0129 08:07:14.432518 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rftj2\" (UniqueName: \"kubernetes.io/projected/29b35887-79d2-4305-b245-25812eb4ed6a-kube-api-access-rftj2\") pod \"29b35887-79d2-4305-b245-25812eb4ed6a\" (UID: \"29b35887-79d2-4305-b245-25812eb4ed6a\") " Jan 29 08:07:14 crc kubenswrapper[5017]: I0129 08:07:14.432687 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29b35887-79d2-4305-b245-25812eb4ed6a-config-data\") pod \"29b35887-79d2-4305-b245-25812eb4ed6a\" (UID: \"29b35887-79d2-4305-b245-25812eb4ed6a\") " Jan 29 08:07:14 crc kubenswrapper[5017]: I0129 08:07:14.432791 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29b35887-79d2-4305-b245-25812eb4ed6a-logs\") pod \"29b35887-79d2-4305-b245-25812eb4ed6a\" (UID: \"29b35887-79d2-4305-b245-25812eb4ed6a\") " Jan 29 08:07:14 crc kubenswrapper[5017]: I0129 08:07:14.432831 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29b35887-79d2-4305-b245-25812eb4ed6a-combined-ca-bundle\") pod \"29b35887-79d2-4305-b245-25812eb4ed6a\" (UID: \"29b35887-79d2-4305-b245-25812eb4ed6a\") " Jan 29 08:07:14 crc kubenswrapper[5017]: I0129 08:07:14.433280 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29b35887-79d2-4305-b245-25812eb4ed6a-logs" (OuterVolumeSpecName: "logs") pod "29b35887-79d2-4305-b245-25812eb4ed6a" (UID: "29b35887-79d2-4305-b245-25812eb4ed6a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:07:14 crc kubenswrapper[5017]: I0129 08:07:14.433378 5017 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29b35887-79d2-4305-b245-25812eb4ed6a-logs\") on node \"crc\" DevicePath \"\"" Jan 29 08:07:14 crc kubenswrapper[5017]: I0129 08:07:14.447292 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29b35887-79d2-4305-b245-25812eb4ed6a-kube-api-access-rftj2" (OuterVolumeSpecName: "kube-api-access-rftj2") pod "29b35887-79d2-4305-b245-25812eb4ed6a" (UID: "29b35887-79d2-4305-b245-25812eb4ed6a"). InnerVolumeSpecName "kube-api-access-rftj2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:07:14 crc kubenswrapper[5017]: I0129 08:07:14.465744 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29b35887-79d2-4305-b245-25812eb4ed6a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "29b35887-79d2-4305-b245-25812eb4ed6a" (UID: "29b35887-79d2-4305-b245-25812eb4ed6a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:07:14 crc kubenswrapper[5017]: I0129 08:07:14.479161 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29b35887-79d2-4305-b245-25812eb4ed6a-config-data" (OuterVolumeSpecName: "config-data") pod "29b35887-79d2-4305-b245-25812eb4ed6a" (UID: "29b35887-79d2-4305-b245-25812eb4ed6a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:07:14 crc kubenswrapper[5017]: I0129 08:07:14.535327 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rftj2\" (UniqueName: \"kubernetes.io/projected/29b35887-79d2-4305-b245-25812eb4ed6a-kube-api-access-rftj2\") on node \"crc\" DevicePath \"\"" Jan 29 08:07:14 crc kubenswrapper[5017]: I0129 08:07:14.535390 5017 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29b35887-79d2-4305-b245-25812eb4ed6a-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 08:07:14 crc kubenswrapper[5017]: I0129 08:07:14.535403 5017 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29b35887-79d2-4305-b245-25812eb4ed6a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:07:15 crc kubenswrapper[5017]: I0129 08:07:15.106809 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"29b35887-79d2-4305-b245-25812eb4ed6a","Type":"ContainerDied","Data":"f01b71427daf002bceac8f95a1567df040415fb7c70ec239cb8c5971f9c0c7ce"} Jan 29 08:07:15 crc kubenswrapper[5017]: I0129 08:07:15.106951 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 08:07:15 crc kubenswrapper[5017]: I0129 08:07:15.107375 5017 scope.go:117] "RemoveContainer" containerID="7079fb357733a94cd88d6e2961c621500609db5cd7d53fff0481b2c34437efb8" Jan 29 08:07:15 crc kubenswrapper[5017]: I0129 08:07:15.143197 5017 scope.go:117] "RemoveContainer" containerID="2fe9d8890e465779de03247ddf9b47589f4443f84f489922d0999c6dca6ee932" Jan 29 08:07:15 crc kubenswrapper[5017]: I0129 08:07:15.160506 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 29 08:07:15 crc kubenswrapper[5017]: I0129 08:07:15.171877 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 29 08:07:15 crc kubenswrapper[5017]: I0129 08:07:15.193905 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 29 08:07:15 crc kubenswrapper[5017]: E0129 08:07:15.194490 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29b35887-79d2-4305-b245-25812eb4ed6a" containerName="nova-api-api" Jan 29 08:07:15 crc kubenswrapper[5017]: I0129 08:07:15.194507 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="29b35887-79d2-4305-b245-25812eb4ed6a" containerName="nova-api-api" Jan 29 08:07:15 crc kubenswrapper[5017]: E0129 08:07:15.194526 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29b35887-79d2-4305-b245-25812eb4ed6a" containerName="nova-api-log" Jan 29 08:07:15 crc kubenswrapper[5017]: I0129 08:07:15.194533 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="29b35887-79d2-4305-b245-25812eb4ed6a" containerName="nova-api-log" Jan 29 08:07:15 crc kubenswrapper[5017]: E0129 08:07:15.194550 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adf35922-c42a-4ea4-9d61-670469b4512a" containerName="nova-manage" Jan 29 08:07:15 
crc kubenswrapper[5017]: I0129 08:07:15.194558 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="adf35922-c42a-4ea4-9d61-670469b4512a" containerName="nova-manage" Jan 29 08:07:15 crc kubenswrapper[5017]: I0129 08:07:15.194741 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="29b35887-79d2-4305-b245-25812eb4ed6a" containerName="nova-api-log" Jan 29 08:07:15 crc kubenswrapper[5017]: I0129 08:07:15.194767 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="adf35922-c42a-4ea4-9d61-670469b4512a" containerName="nova-manage" Jan 29 08:07:15 crc kubenswrapper[5017]: I0129 08:07:15.194801 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="29b35887-79d2-4305-b245-25812eb4ed6a" containerName="nova-api-api" Jan 29 08:07:15 crc kubenswrapper[5017]: I0129 08:07:15.196073 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 08:07:15 crc kubenswrapper[5017]: I0129 08:07:15.201332 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 29 08:07:15 crc kubenswrapper[5017]: I0129 08:07:15.204733 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 29 08:07:15 crc kubenswrapper[5017]: I0129 08:07:15.349985 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89895d2a-1660-449b-8bf2-ea704cb93dd1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"89895d2a-1660-449b-8bf2-ea704cb93dd1\") " pod="openstack/nova-api-0" Jan 29 08:07:15 crc kubenswrapper[5017]: I0129 08:07:15.350072 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbqkm\" (UniqueName: \"kubernetes.io/projected/89895d2a-1660-449b-8bf2-ea704cb93dd1-kube-api-access-mbqkm\") pod \"nova-api-0\" (UID: \"89895d2a-1660-449b-8bf2-ea704cb93dd1\") " pod="openstack/nova-api-0" Jan 29 08:07:15 crc kubenswrapper[5017]: I0129 08:07:15.350350 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89895d2a-1660-449b-8bf2-ea704cb93dd1-logs\") pod \"nova-api-0\" (UID: \"89895d2a-1660-449b-8bf2-ea704cb93dd1\") " pod="openstack/nova-api-0" Jan 29 08:07:15 crc kubenswrapper[5017]: I0129 08:07:15.350531 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89895d2a-1660-449b-8bf2-ea704cb93dd1-config-data\") pod \"nova-api-0\" (UID: \"89895d2a-1660-449b-8bf2-ea704cb93dd1\") " pod="openstack/nova-api-0" Jan 29 08:07:15 crc kubenswrapper[5017]: I0129 08:07:15.453121 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbqkm\" (UniqueName: \"kubernetes.io/projected/89895d2a-1660-449b-8bf2-ea704cb93dd1-kube-api-access-mbqkm\") pod \"nova-api-0\" (UID: \"89895d2a-1660-449b-8bf2-ea704cb93dd1\") " pod="openstack/nova-api-0" Jan 29 08:07:15 crc kubenswrapper[5017]: I0129 08:07:15.453239 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89895d2a-1660-449b-8bf2-ea704cb93dd1-logs\") pod \"nova-api-0\" (UID: \"89895d2a-1660-449b-8bf2-ea704cb93dd1\") " pod="openstack/nova-api-0" Jan 29 08:07:15 crc kubenswrapper[5017]: I0129 08:07:15.453296 5017 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89895d2a-1660-449b-8bf2-ea704cb93dd1-config-data\") pod \"nova-api-0\" (UID: \"89895d2a-1660-449b-8bf2-ea704cb93dd1\") " pod="openstack/nova-api-0" Jan 29 08:07:15 crc kubenswrapper[5017]: I0129 08:07:15.453940 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89895d2a-1660-449b-8bf2-ea704cb93dd1-logs\") pod \"nova-api-0\" (UID: \"89895d2a-1660-449b-8bf2-ea704cb93dd1\") " pod="openstack/nova-api-0" Jan 29 08:07:15 crc kubenswrapper[5017]: I0129 08:07:15.455497 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89895d2a-1660-449b-8bf2-ea704cb93dd1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"89895d2a-1660-449b-8bf2-ea704cb93dd1\") " pod="openstack/nova-api-0" Jan 29 08:07:15 crc kubenswrapper[5017]: I0129 08:07:15.467101 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89895d2a-1660-449b-8bf2-ea704cb93dd1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"89895d2a-1660-449b-8bf2-ea704cb93dd1\") " pod="openstack/nova-api-0" Jan 29 08:07:15 crc kubenswrapper[5017]: I0129 08:07:15.468817 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89895d2a-1660-449b-8bf2-ea704cb93dd1-config-data\") pod \"nova-api-0\" (UID: \"89895d2a-1660-449b-8bf2-ea704cb93dd1\") " pod="openstack/nova-api-0" Jan 29 08:07:15 crc kubenswrapper[5017]: I0129 08:07:15.473201 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbqkm\" (UniqueName: \"kubernetes.io/projected/89895d2a-1660-449b-8bf2-ea704cb93dd1-kube-api-access-mbqkm\") pod \"nova-api-0\" (UID: \"89895d2a-1660-449b-8bf2-ea704cb93dd1\") " pod="openstack/nova-api-0" Jan 29 08:07:15 crc kubenswrapper[5017]: I0129 08:07:15.527645 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 29 08:07:15 crc kubenswrapper[5017]: I0129 08:07:15.987465 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 29 08:07:15 crc kubenswrapper[5017]: W0129 08:07:15.997525 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod89895d2a_1660_449b_8bf2_ea704cb93dd1.slice/crio-c5afa3e6ceab2ae4266e354e7c7544c10e3bf5e5ed1050a254e6d53e6ddc52c1 WatchSource:0}: Error finding container c5afa3e6ceab2ae4266e354e7c7544c10e3bf5e5ed1050a254e6d53e6ddc52c1: Status 404 returned error can't find the container with id c5afa3e6ceab2ae4266e354e7c7544c10e3bf5e5ed1050a254e6d53e6ddc52c1 Jan 29 08:07:16 crc kubenswrapper[5017]: I0129 08:07:16.121031 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"89895d2a-1660-449b-8bf2-ea704cb93dd1","Type":"ContainerStarted","Data":"c5afa3e6ceab2ae4266e354e7c7544c10e3bf5e5ed1050a254e6d53e6ddc52c1"} Jan 29 08:07:16 crc kubenswrapper[5017]: I0129 08:07:16.329212 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29b35887-79d2-4305-b245-25812eb4ed6a" path="/var/lib/kubelet/pods/29b35887-79d2-4305-b245-25812eb4ed6a/volumes" Jan 29 08:07:17 crc kubenswrapper[5017]: I0129 08:07:17.147126 5017 generic.go:334] "Generic (PLEG): container finished" podID="078397a7-6b11-41b2-96f7-e10be0bfc1a7" containerID="908571adb90dee3ab58cd04e224d437dbbf0ae6446f49a18adbc17235b27bc20" exitCode=0 Jan 29 08:07:17 crc kubenswrapper[5017]: I0129 08:07:17.147239 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"078397a7-6b11-41b2-96f7-e10be0bfc1a7","Type":"ContainerDied","Data":"908571adb90dee3ab58cd04e224d437dbbf0ae6446f49a18adbc17235b27bc20"} Jan 29 08:07:17 crc kubenswrapper[5017]: I0129 08:07:17.152153 5017 generic.go:334] "Generic (PLEG): container finished" podID="8de9521a-dc9e-4dc3-a51d-fb1be1bfe2be" containerID="c48fe2d969f3d2c10a1b927126513c3c68f38e1e1c59e73cd560c4894c6cf255" exitCode=0 Jan 29 08:07:17 crc kubenswrapper[5017]: I0129 08:07:17.152221 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8de9521a-dc9e-4dc3-a51d-fb1be1bfe2be","Type":"ContainerDied","Data":"c48fe2d969f3d2c10a1b927126513c3c68f38e1e1c59e73cd560c4894c6cf255"} Jan 29 08:07:17 crc kubenswrapper[5017]: I0129 08:07:17.155012 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"89895d2a-1660-449b-8bf2-ea704cb93dd1","Type":"ContainerStarted","Data":"f2269f40906caac53176c413f7b85fd16c81dcd3c9ac2beae85ded2767b45b89"} Jan 29 08:07:17 crc kubenswrapper[5017]: I0129 08:07:17.155046 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"89895d2a-1660-449b-8bf2-ea704cb93dd1","Type":"ContainerStarted","Data":"282b0d36dfa465cecc6e292600dda239b9da90c42c0091e35a8c4e10eab4296c"} Jan 29 08:07:17 crc kubenswrapper[5017]: I0129 08:07:17.187333 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.187293504 podStartE2EDuration="2.187293504s" podCreationTimestamp="2026-01-29 08:07:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:07:17.181608037 +0000 UTC m=+5523.556055657" watchObservedRunningTime="2026-01-29 08:07:17.187293504 +0000 UTC m=+5523.561741104" Jan 29 
08:07:17 crc kubenswrapper[5017]: I0129 08:07:17.321995 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 08:07:17 crc kubenswrapper[5017]: I0129 08:07:17.328381 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 08:07:17 crc kubenswrapper[5017]: I0129 08:07:17.405690 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8de9521a-dc9e-4dc3-a51d-fb1be1bfe2be-combined-ca-bundle\") pod \"8de9521a-dc9e-4dc3-a51d-fb1be1bfe2be\" (UID: \"8de9521a-dc9e-4dc3-a51d-fb1be1bfe2be\") " Jan 29 08:07:17 crc kubenswrapper[5017]: I0129 08:07:17.405866 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-txdsr\" (UniqueName: \"kubernetes.io/projected/8de9521a-dc9e-4dc3-a51d-fb1be1bfe2be-kube-api-access-txdsr\") pod \"8de9521a-dc9e-4dc3-a51d-fb1be1bfe2be\" (UID: \"8de9521a-dc9e-4dc3-a51d-fb1be1bfe2be\") " Jan 29 08:07:17 crc kubenswrapper[5017]: I0129 08:07:17.406026 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8de9521a-dc9e-4dc3-a51d-fb1be1bfe2be-config-data\") pod \"8de9521a-dc9e-4dc3-a51d-fb1be1bfe2be\" (UID: \"8de9521a-dc9e-4dc3-a51d-fb1be1bfe2be\") " Jan 29 08:07:17 crc kubenswrapper[5017]: I0129 08:07:17.446611 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8de9521a-dc9e-4dc3-a51d-fb1be1bfe2be-kube-api-access-txdsr" (OuterVolumeSpecName: "kube-api-access-txdsr") pod "8de9521a-dc9e-4dc3-a51d-fb1be1bfe2be" (UID: "8de9521a-dc9e-4dc3-a51d-fb1be1bfe2be"). InnerVolumeSpecName "kube-api-access-txdsr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:07:17 crc kubenswrapper[5017]: I0129 08:07:17.505444 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8de9521a-dc9e-4dc3-a51d-fb1be1bfe2be-config-data" (OuterVolumeSpecName: "config-data") pod "8de9521a-dc9e-4dc3-a51d-fb1be1bfe2be" (UID: "8de9521a-dc9e-4dc3-a51d-fb1be1bfe2be"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:07:17 crc kubenswrapper[5017]: I0129 08:07:17.517017 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/078397a7-6b11-41b2-96f7-e10be0bfc1a7-config-data\") pod \"078397a7-6b11-41b2-96f7-e10be0bfc1a7\" (UID: \"078397a7-6b11-41b2-96f7-e10be0bfc1a7\") " Jan 29 08:07:17 crc kubenswrapper[5017]: I0129 08:07:17.517081 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/078397a7-6b11-41b2-96f7-e10be0bfc1a7-logs\") pod \"078397a7-6b11-41b2-96f7-e10be0bfc1a7\" (UID: \"078397a7-6b11-41b2-96f7-e10be0bfc1a7\") " Jan 29 08:07:17 crc kubenswrapper[5017]: I0129 08:07:17.517191 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/078397a7-6b11-41b2-96f7-e10be0bfc1a7-combined-ca-bundle\") pod \"078397a7-6b11-41b2-96f7-e10be0bfc1a7\" (UID: \"078397a7-6b11-41b2-96f7-e10be0bfc1a7\") " Jan 29 08:07:17 crc kubenswrapper[5017]: I0129 08:07:17.517382 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5w4f7\" (UniqueName: \"kubernetes.io/projected/078397a7-6b11-41b2-96f7-e10be0bfc1a7-kube-api-access-5w4f7\") pod \"078397a7-6b11-41b2-96f7-e10be0bfc1a7\" (UID: \"078397a7-6b11-41b2-96f7-e10be0bfc1a7\") " Jan 29 08:07:17 crc kubenswrapper[5017]: I0129 08:07:17.528671 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/078397a7-6b11-41b2-96f7-e10be0bfc1a7-logs" (OuterVolumeSpecName: "logs") pod "078397a7-6b11-41b2-96f7-e10be0bfc1a7" (UID: "078397a7-6b11-41b2-96f7-e10be0bfc1a7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:07:17 crc kubenswrapper[5017]: I0129 08:07:17.529115 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-txdsr\" (UniqueName: \"kubernetes.io/projected/8de9521a-dc9e-4dc3-a51d-fb1be1bfe2be-kube-api-access-txdsr\") on node \"crc\" DevicePath \"\"" Jan 29 08:07:17 crc kubenswrapper[5017]: I0129 08:07:17.529137 5017 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8de9521a-dc9e-4dc3-a51d-fb1be1bfe2be-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 08:07:17 crc kubenswrapper[5017]: I0129 08:07:17.529150 5017 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/078397a7-6b11-41b2-96f7-e10be0bfc1a7-logs\") on node \"crc\" DevicePath \"\"" Jan 29 08:07:17 crc kubenswrapper[5017]: I0129 08:07:17.535592 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8de9521a-dc9e-4dc3-a51d-fb1be1bfe2be-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8de9521a-dc9e-4dc3-a51d-fb1be1bfe2be" (UID: "8de9521a-dc9e-4dc3-a51d-fb1be1bfe2be"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:07:17 crc kubenswrapper[5017]: I0129 08:07:17.553706 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/078397a7-6b11-41b2-96f7-e10be0bfc1a7-kube-api-access-5w4f7" (OuterVolumeSpecName: "kube-api-access-5w4f7") pod "078397a7-6b11-41b2-96f7-e10be0bfc1a7" (UID: "078397a7-6b11-41b2-96f7-e10be0bfc1a7"). InnerVolumeSpecName "kube-api-access-5w4f7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:07:17 crc kubenswrapper[5017]: I0129 08:07:17.651672 5017 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8de9521a-dc9e-4dc3-a51d-fb1be1bfe2be-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:07:17 crc kubenswrapper[5017]: I0129 08:07:17.651732 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5w4f7\" (UniqueName: \"kubernetes.io/projected/078397a7-6b11-41b2-96f7-e10be0bfc1a7-kube-api-access-5w4f7\") on node \"crc\" DevicePath \"\"" Jan 29 08:07:17 crc kubenswrapper[5017]: I0129 08:07:17.665247 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/078397a7-6b11-41b2-96f7-e10be0bfc1a7-config-data" (OuterVolumeSpecName: "config-data") pod "078397a7-6b11-41b2-96f7-e10be0bfc1a7" (UID: "078397a7-6b11-41b2-96f7-e10be0bfc1a7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:07:17 crc kubenswrapper[5017]: I0129 08:07:17.691226 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/078397a7-6b11-41b2-96f7-e10be0bfc1a7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "078397a7-6b11-41b2-96f7-e10be0bfc1a7" (UID: "078397a7-6b11-41b2-96f7-e10be0bfc1a7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:07:17 crc kubenswrapper[5017]: I0129 08:07:17.753708 5017 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/078397a7-6b11-41b2-96f7-e10be0bfc1a7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:07:17 crc kubenswrapper[5017]: I0129 08:07:17.753789 5017 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/078397a7-6b11-41b2-96f7-e10be0bfc1a7-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 08:07:18 crc kubenswrapper[5017]: I0129 08:07:18.173546 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"078397a7-6b11-41b2-96f7-e10be0bfc1a7","Type":"ContainerDied","Data":"e59522f1dda354a76c946b673d1f897700367f1916ac26b2947c4d3ae0a7f18d"} Jan 29 08:07:18 crc kubenswrapper[5017]: I0129 08:07:18.173635 5017 scope.go:117] "RemoveContainer" containerID="908571adb90dee3ab58cd04e224d437dbbf0ae6446f49a18adbc17235b27bc20" Jan 29 08:07:18 crc kubenswrapper[5017]: I0129 08:07:18.173557 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 08:07:18 crc kubenswrapper[5017]: I0129 08:07:18.177377 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8de9521a-dc9e-4dc3-a51d-fb1be1bfe2be","Type":"ContainerDied","Data":"8c64307321db8545ac37b3e0d4ab7d09451053f6622d2c8ed013e44e536f8684"} Jan 29 08:07:18 crc kubenswrapper[5017]: I0129 08:07:18.177417 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 08:07:18 crc kubenswrapper[5017]: I0129 08:07:18.200869 5017 scope.go:117] "RemoveContainer" containerID="421fd3e61fe79b1538deb762a9458f78c99a3f6cad59c6753816504009dfd308" Jan 29 08:07:18 crc kubenswrapper[5017]: I0129 08:07:18.225478 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 08:07:18 crc kubenswrapper[5017]: I0129 08:07:18.238194 5017 scope.go:117] "RemoveContainer" containerID="c48fe2d969f3d2c10a1b927126513c3c68f38e1e1c59e73cd560c4894c6cf255" Jan 29 08:07:18 crc kubenswrapper[5017]: I0129 08:07:18.247007 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 08:07:18 crc kubenswrapper[5017]: I0129 08:07:18.279239 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 08:07:18 crc kubenswrapper[5017]: I0129 08:07:18.304277 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 08:07:18 crc kubenswrapper[5017]: I0129 08:07:18.343908 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="078397a7-6b11-41b2-96f7-e10be0bfc1a7" path="/var/lib/kubelet/pods/078397a7-6b11-41b2-96f7-e10be0bfc1a7/volumes" Jan 29 08:07:18 crc kubenswrapper[5017]: I0129 08:07:18.344622 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8de9521a-dc9e-4dc3-a51d-fb1be1bfe2be" path="/var/lib/kubelet/pods/8de9521a-dc9e-4dc3-a51d-fb1be1bfe2be/volumes" Jan 29 08:07:18 crc kubenswrapper[5017]: I0129 08:07:18.345384 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 29 08:07:18 crc kubenswrapper[5017]: E0129 08:07:18.345768 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8de9521a-dc9e-4dc3-a51d-fb1be1bfe2be" containerName="nova-scheduler-scheduler" Jan 29 08:07:18 crc kubenswrapper[5017]: I0129 08:07:18.345793 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="8de9521a-dc9e-4dc3-a51d-fb1be1bfe2be" containerName="nova-scheduler-scheduler" Jan 29 08:07:18 crc kubenswrapper[5017]: E0129 08:07:18.345807 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="078397a7-6b11-41b2-96f7-e10be0bfc1a7" containerName="nova-metadata-metadata" Jan 29 08:07:18 crc kubenswrapper[5017]: I0129 08:07:18.345816 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="078397a7-6b11-41b2-96f7-e10be0bfc1a7" containerName="nova-metadata-metadata" Jan 29 08:07:18 crc kubenswrapper[5017]: E0129 08:07:18.345857 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="078397a7-6b11-41b2-96f7-e10be0bfc1a7" containerName="nova-metadata-log" Jan 29 08:07:18 crc kubenswrapper[5017]: I0129 08:07:18.345863 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="078397a7-6b11-41b2-96f7-e10be0bfc1a7" containerName="nova-metadata-log" Jan 29 08:07:18 crc kubenswrapper[5017]: I0129 08:07:18.346668 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="8de9521a-dc9e-4dc3-a51d-fb1be1bfe2be" containerName="nova-scheduler-scheduler" Jan 29 08:07:18 crc kubenswrapper[5017]: I0129 08:07:18.346703 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="078397a7-6b11-41b2-96f7-e10be0bfc1a7" containerName="nova-metadata-metadata" Jan 29 08:07:18 crc kubenswrapper[5017]: I0129 08:07:18.346715 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="078397a7-6b11-41b2-96f7-e10be0bfc1a7" containerName="nova-metadata-log" Jan 29 08:07:18 crc 
kubenswrapper[5017]: I0129 08:07:18.348119 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 08:07:18 crc kubenswrapper[5017]: I0129 08:07:18.348420 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 08:07:18 crc kubenswrapper[5017]: I0129 08:07:18.348545 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 08:07:18 crc kubenswrapper[5017]: I0129 08:07:18.351300 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 29 08:07:18 crc kubenswrapper[5017]: I0129 08:07:18.358867 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 08:07:18 crc kubenswrapper[5017]: I0129 08:07:18.361787 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 29 08:07:18 crc kubenswrapper[5017]: I0129 08:07:18.362120 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 08:07:18 crc kubenswrapper[5017]: I0129 08:07:18.364371 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cfds\" (UniqueName: \"kubernetes.io/projected/570006eb-aed7-44e3-89f0-61483bfa5fc3-kube-api-access-2cfds\") pod \"nova-metadata-0\" (UID: \"570006eb-aed7-44e3-89f0-61483bfa5fc3\") " pod="openstack/nova-metadata-0" Jan 29 08:07:18 crc kubenswrapper[5017]: I0129 08:07:18.364413 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/570006eb-aed7-44e3-89f0-61483bfa5fc3-config-data\") pod \"nova-metadata-0\" (UID: \"570006eb-aed7-44e3-89f0-61483bfa5fc3\") " pod="openstack/nova-metadata-0" Jan 29 08:07:18 crc kubenswrapper[5017]: I0129 08:07:18.364517 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/570006eb-aed7-44e3-89f0-61483bfa5fc3-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"570006eb-aed7-44e3-89f0-61483bfa5fc3\") " pod="openstack/nova-metadata-0" Jan 29 08:07:18 crc kubenswrapper[5017]: I0129 08:07:18.364565 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/570006eb-aed7-44e3-89f0-61483bfa5fc3-logs\") pod \"nova-metadata-0\" (UID: \"570006eb-aed7-44e3-89f0-61483bfa5fc3\") " pod="openstack/nova-metadata-0" Jan 29 08:07:18 crc kubenswrapper[5017]: I0129 08:07:18.482797 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/570006eb-aed7-44e3-89f0-61483bfa5fc3-config-data\") pod \"nova-metadata-0\" (UID: \"570006eb-aed7-44e3-89f0-61483bfa5fc3\") " pod="openstack/nova-metadata-0" Jan 29 08:07:18 crc kubenswrapper[5017]: I0129 08:07:18.483439 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bd61008-f300-4b3d-afee-3c9c00f3bb43-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0bd61008-f300-4b3d-afee-3c9c00f3bb43\") " pod="openstack/nova-scheduler-0" Jan 29 08:07:18 crc kubenswrapper[5017]: I0129 08:07:18.483565 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/570006eb-aed7-44e3-89f0-61483bfa5fc3-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"570006eb-aed7-44e3-89f0-61483bfa5fc3\") " pod="openstack/nova-metadata-0" Jan 29 08:07:18 crc kubenswrapper[5017]: I0129 08:07:18.483637 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/570006eb-aed7-44e3-89f0-61483bfa5fc3-logs\") pod \"nova-metadata-0\" (UID: \"570006eb-aed7-44e3-89f0-61483bfa5fc3\") " pod="openstack/nova-metadata-0" Jan 29 08:07:18 crc kubenswrapper[5017]: I0129 08:07:18.483813 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bd61008-f300-4b3d-afee-3c9c00f3bb43-config-data\") pod \"nova-scheduler-0\" (UID: \"0bd61008-f300-4b3d-afee-3c9c00f3bb43\") " pod="openstack/nova-scheduler-0" Jan 29 08:07:18 crc kubenswrapper[5017]: I0129 08:07:18.484094 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zv6db\" (UniqueName: \"kubernetes.io/projected/0bd61008-f300-4b3d-afee-3c9c00f3bb43-kube-api-access-zv6db\") pod \"nova-scheduler-0\" (UID: \"0bd61008-f300-4b3d-afee-3c9c00f3bb43\") " pod="openstack/nova-scheduler-0" Jan 29 08:07:18 crc kubenswrapper[5017]: I0129 08:07:18.484199 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/570006eb-aed7-44e3-89f0-61483bfa5fc3-logs\") pod \"nova-metadata-0\" (UID: \"570006eb-aed7-44e3-89f0-61483bfa5fc3\") " pod="openstack/nova-metadata-0" Jan 29 08:07:18 crc kubenswrapper[5017]: I0129 08:07:18.484209 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cfds\" (UniqueName: \"kubernetes.io/projected/570006eb-aed7-44e3-89f0-61483bfa5fc3-kube-api-access-2cfds\") pod \"nova-metadata-0\" (UID: \"570006eb-aed7-44e3-89f0-61483bfa5fc3\") " pod="openstack/nova-metadata-0" Jan 29 08:07:18 crc kubenswrapper[5017]: I0129 08:07:18.487498 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/570006eb-aed7-44e3-89f0-61483bfa5fc3-config-data\") pod \"nova-metadata-0\" (UID: \"570006eb-aed7-44e3-89f0-61483bfa5fc3\") " pod="openstack/nova-metadata-0" Jan 29 08:07:18 crc kubenswrapper[5017]: I0129 08:07:18.498619 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/570006eb-aed7-44e3-89f0-61483bfa5fc3-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"570006eb-aed7-44e3-89f0-61483bfa5fc3\") " pod="openstack/nova-metadata-0" Jan 29 08:07:18 crc kubenswrapper[5017]: I0129 08:07:18.502268 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2cfds\" (UniqueName: \"kubernetes.io/projected/570006eb-aed7-44e3-89f0-61483bfa5fc3-kube-api-access-2cfds\") pod \"nova-metadata-0\" (UID: \"570006eb-aed7-44e3-89f0-61483bfa5fc3\") " pod="openstack/nova-metadata-0" Jan 29 08:07:18 crc kubenswrapper[5017]: I0129 08:07:18.585750 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zv6db\" (UniqueName: \"kubernetes.io/projected/0bd61008-f300-4b3d-afee-3c9c00f3bb43-kube-api-access-zv6db\") pod \"nova-scheduler-0\" (UID: \"0bd61008-f300-4b3d-afee-3c9c00f3bb43\") " pod="openstack/nova-scheduler-0" Jan 29 08:07:18 crc kubenswrapper[5017]: I0129 08:07:18.585871 5017 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bd61008-f300-4b3d-afee-3c9c00f3bb43-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0bd61008-f300-4b3d-afee-3c9c00f3bb43\") " pod="openstack/nova-scheduler-0" Jan 29 08:07:18 crc kubenswrapper[5017]: I0129 08:07:18.586071 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bd61008-f300-4b3d-afee-3c9c00f3bb43-config-data\") pod \"nova-scheduler-0\" (UID: \"0bd61008-f300-4b3d-afee-3c9c00f3bb43\") " pod="openstack/nova-scheduler-0" Jan 29 08:07:18 crc kubenswrapper[5017]: I0129 08:07:18.591644 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bd61008-f300-4b3d-afee-3c9c00f3bb43-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0bd61008-f300-4b3d-afee-3c9c00f3bb43\") " pod="openstack/nova-scheduler-0" Jan 29 08:07:18 crc kubenswrapper[5017]: I0129 08:07:18.591638 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bd61008-f300-4b3d-afee-3c9c00f3bb43-config-data\") pod \"nova-scheduler-0\" (UID: \"0bd61008-f300-4b3d-afee-3c9c00f3bb43\") " pod="openstack/nova-scheduler-0" Jan 29 08:07:18 crc kubenswrapper[5017]: I0129 08:07:18.604576 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zv6db\" (UniqueName: \"kubernetes.io/projected/0bd61008-f300-4b3d-afee-3c9c00f3bb43-kube-api-access-zv6db\") pod \"nova-scheduler-0\" (UID: \"0bd61008-f300-4b3d-afee-3c9c00f3bb43\") " pod="openstack/nova-scheduler-0" Jan 29 08:07:18 crc kubenswrapper[5017]: I0129 08:07:18.674849 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 08:07:18 crc kubenswrapper[5017]: I0129 08:07:18.691368 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 08:07:19 crc kubenswrapper[5017]: I0129 08:07:19.160384 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 08:07:19 crc kubenswrapper[5017]: I0129 08:07:19.216845 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 08:07:19 crc kubenswrapper[5017]: I0129 08:07:19.217361 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"570006eb-aed7-44e3-89f0-61483bfa5fc3","Type":"ContainerStarted","Data":"c459eaf9c95dd5a8a0a1c621945ebad32cf4b222371aac8493c32443641680d2"} Jan 29 08:07:19 crc kubenswrapper[5017]: W0129 08:07:19.229432 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0bd61008_f300_4b3d_afee_3c9c00f3bb43.slice/crio-cec999a3f2ea500e5fc42996a2fb478417b5cad3c073a3c0ca255438a22ec885 WatchSource:0}: Error finding container cec999a3f2ea500e5fc42996a2fb478417b5cad3c073a3c0ca255438a22ec885: Status 404 returned error can't find the container with id cec999a3f2ea500e5fc42996a2fb478417b5cad3c073a3c0ca255438a22ec885 Jan 29 08:07:20 crc kubenswrapper[5017]: I0129 08:07:20.228886 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"570006eb-aed7-44e3-89f0-61483bfa5fc3","Type":"ContainerStarted","Data":"dcd0b9896f21e9a2551116172d6a8327bc3b1ab3cee5858973770899221a43d8"} Jan 29 08:07:20 crc kubenswrapper[5017]: I0129 08:07:20.229460 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"570006eb-aed7-44e3-89f0-61483bfa5fc3","Type":"ContainerStarted","Data":"4365d3d9f73fb118fa6d5b7d6dd4f02f76ed3b0ef373e8fbec3379bc9fa32db8"} Jan 29 08:07:20 crc kubenswrapper[5017]: I0129 08:07:20.231896 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0bd61008-f300-4b3d-afee-3c9c00f3bb43","Type":"ContainerStarted","Data":"84c3a1d751bd88266e5d7628c6c3f95f08ae7f2c22d3a011b12bdd9731c34d7d"} Jan 29 08:07:20 crc kubenswrapper[5017]: I0129 08:07:20.231942 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0bd61008-f300-4b3d-afee-3c9c00f3bb43","Type":"ContainerStarted","Data":"cec999a3f2ea500e5fc42996a2fb478417b5cad3c073a3c0ca255438a22ec885"} Jan 29 08:07:20 crc kubenswrapper[5017]: I0129 08:07:20.290281 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.290256011 podStartE2EDuration="2.290256011s" podCreationTimestamp="2026-01-29 08:07:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:07:20.26354135 +0000 UTC m=+5526.637988970" watchObservedRunningTime="2026-01-29 08:07:20.290256011 +0000 UTC m=+5526.664703621" Jan 29 08:07:20 crc kubenswrapper[5017]: I0129 08:07:20.292348 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.292339701 podStartE2EDuration="2.292339701s" podCreationTimestamp="2026-01-29 08:07:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:07:20.289177355 +0000 UTC m=+5526.663624995" watchObservedRunningTime="2026-01-29 08:07:20.292339701 +0000 UTC m=+5526.666787311" Jan 29 08:07:23 crc 
kubenswrapper[5017]: I0129 08:07:23.622560 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qlngd"] Jan 29 08:07:23 crc kubenswrapper[5017]: I0129 08:07:23.625424 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qlngd" Jan 29 08:07:23 crc kubenswrapper[5017]: I0129 08:07:23.638617 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qlngd"] Jan 29 08:07:23 crc kubenswrapper[5017]: I0129 08:07:23.695044 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 29 08:07:23 crc kubenswrapper[5017]: I0129 08:07:23.695130 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 29 08:07:23 crc kubenswrapper[5017]: I0129 08:07:23.695148 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 29 08:07:23 crc kubenswrapper[5017]: I0129 08:07:23.697889 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/449c691d-7745-4598-947d-0db73acce3fa-catalog-content\") pod \"community-operators-qlngd\" (UID: \"449c691d-7745-4598-947d-0db73acce3fa\") " pod="openshift-marketplace/community-operators-qlngd" Jan 29 08:07:23 crc kubenswrapper[5017]: I0129 08:07:23.697972 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wzbj\" (UniqueName: \"kubernetes.io/projected/449c691d-7745-4598-947d-0db73acce3fa-kube-api-access-7wzbj\") pod \"community-operators-qlngd\" (UID: \"449c691d-7745-4598-947d-0db73acce3fa\") " pod="openshift-marketplace/community-operators-qlngd" Jan 29 08:07:23 crc kubenswrapper[5017]: I0129 08:07:23.698200 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/449c691d-7745-4598-947d-0db73acce3fa-utilities\") pod \"community-operators-qlngd\" (UID: \"449c691d-7745-4598-947d-0db73acce3fa\") " pod="openshift-marketplace/community-operators-qlngd" Jan 29 08:07:23 crc kubenswrapper[5017]: I0129 08:07:23.801374 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/449c691d-7745-4598-947d-0db73acce3fa-catalog-content\") pod \"community-operators-qlngd\" (UID: \"449c691d-7745-4598-947d-0db73acce3fa\") " pod="openshift-marketplace/community-operators-qlngd" Jan 29 08:07:23 crc kubenswrapper[5017]: I0129 08:07:23.801472 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7wzbj\" (UniqueName: \"kubernetes.io/projected/449c691d-7745-4598-947d-0db73acce3fa-kube-api-access-7wzbj\") pod \"community-operators-qlngd\" (UID: \"449c691d-7745-4598-947d-0db73acce3fa\") " pod="openshift-marketplace/community-operators-qlngd" Jan 29 08:07:23 crc kubenswrapper[5017]: I0129 08:07:23.801641 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/449c691d-7745-4598-947d-0db73acce3fa-utilities\") pod \"community-operators-qlngd\" (UID: \"449c691d-7745-4598-947d-0db73acce3fa\") " pod="openshift-marketplace/community-operators-qlngd" Jan 29 08:07:23 crc kubenswrapper[5017]: I0129 08:07:23.802089 5017 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/449c691d-7745-4598-947d-0db73acce3fa-catalog-content\") pod \"community-operators-qlngd\" (UID: \"449c691d-7745-4598-947d-0db73acce3fa\") " pod="openshift-marketplace/community-operators-qlngd" Jan 29 08:07:23 crc kubenswrapper[5017]: I0129 08:07:23.802147 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/449c691d-7745-4598-947d-0db73acce3fa-utilities\") pod \"community-operators-qlngd\" (UID: \"449c691d-7745-4598-947d-0db73acce3fa\") " pod="openshift-marketplace/community-operators-qlngd" Jan 29 08:07:23 crc kubenswrapper[5017]: I0129 08:07:23.826449 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wzbj\" (UniqueName: \"kubernetes.io/projected/449c691d-7745-4598-947d-0db73acce3fa-kube-api-access-7wzbj\") pod \"community-operators-qlngd\" (UID: \"449c691d-7745-4598-947d-0db73acce3fa\") " pod="openshift-marketplace/community-operators-qlngd" Jan 29 08:07:23 crc kubenswrapper[5017]: I0129 08:07:23.986742 5017 scope.go:117] "RemoveContainer" containerID="34dca0050763a2da4967372bd8aba9facd08414b60e496a43378969d02274d69" Jan 29 08:07:24 crc kubenswrapper[5017]: I0129 08:07:24.025141 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qlngd" Jan 29 08:07:24 crc kubenswrapper[5017]: I0129 08:07:24.633506 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qlngd"] Jan 29 08:07:25 crc kubenswrapper[5017]: I0129 08:07:25.290623 5017 generic.go:334] "Generic (PLEG): container finished" podID="449c691d-7745-4598-947d-0db73acce3fa" containerID="362a40b3132d4805e9185dce72bf1f6cc2f318043b6ee9fb6f371db809e978aa" exitCode=0 Jan 29 08:07:25 crc kubenswrapper[5017]: I0129 08:07:25.290992 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qlngd" event={"ID":"449c691d-7745-4598-947d-0db73acce3fa","Type":"ContainerDied","Data":"362a40b3132d4805e9185dce72bf1f6cc2f318043b6ee9fb6f371db809e978aa"} Jan 29 08:07:25 crc kubenswrapper[5017]: I0129 08:07:25.291503 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qlngd" event={"ID":"449c691d-7745-4598-947d-0db73acce3fa","Type":"ContainerStarted","Data":"f28a06a7e7a18986916ae000d7ba1924ae17380bb81bb783142586895791c1bd"} Jan 29 08:07:25 crc kubenswrapper[5017]: I0129 08:07:25.528112 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 29 08:07:25 crc kubenswrapper[5017]: I0129 08:07:25.528443 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 29 08:07:26 crc kubenswrapper[5017]: I0129 08:07:26.611225 5017 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="89895d2a-1660-449b-8bf2-ea704cb93dd1" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.69:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 08:07:26 crc kubenswrapper[5017]: I0129 08:07:26.611331 5017 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="89895d2a-1660-449b-8bf2-ea704cb93dd1" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.69:8774/\": context deadline exceeded (Client.Timeout exceeded while 
awaiting headers)" Jan 29 08:07:27 crc kubenswrapper[5017]: I0129 08:07:27.319097 5017 generic.go:334] "Generic (PLEG): container finished" podID="449c691d-7745-4598-947d-0db73acce3fa" containerID="b8b9387b6ff87b6a22bc20fcfab69c2ca29bcb4014305417e13c4bd501b4055d" exitCode=0 Jan 29 08:07:27 crc kubenswrapper[5017]: I0129 08:07:27.319162 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qlngd" event={"ID":"449c691d-7745-4598-947d-0db73acce3fa","Type":"ContainerDied","Data":"b8b9387b6ff87b6a22bc20fcfab69c2ca29bcb4014305417e13c4bd501b4055d"} Jan 29 08:07:28 crc kubenswrapper[5017]: I0129 08:07:28.329845 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qlngd" event={"ID":"449c691d-7745-4598-947d-0db73acce3fa","Type":"ContainerStarted","Data":"b0b4a9005b41b1c5c048ae94236be197109ee181f6d982c43c4bd517cd09c892"} Jan 29 08:07:28 crc kubenswrapper[5017]: I0129 08:07:28.355852 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qlngd" podStartSLOduration=2.76490926 podStartE2EDuration="5.355829725s" podCreationTimestamp="2026-01-29 08:07:23 +0000 UTC" firstStartedPulling="2026-01-29 08:07:25.29632099 +0000 UTC m=+5531.670768640" lastFinishedPulling="2026-01-29 08:07:27.887241485 +0000 UTC m=+5534.261689105" observedRunningTime="2026-01-29 08:07:28.350295022 +0000 UTC m=+5534.724742652" watchObservedRunningTime="2026-01-29 08:07:28.355829725 +0000 UTC m=+5534.730277335" Jan 29 08:07:28 crc kubenswrapper[5017]: I0129 08:07:28.675431 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 29 08:07:28 crc kubenswrapper[5017]: I0129 08:07:28.675497 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 29 08:07:28 crc kubenswrapper[5017]: I0129 08:07:28.692404 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 29 08:07:28 crc kubenswrapper[5017]: I0129 08:07:28.722875 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 29 08:07:29 crc kubenswrapper[5017]: I0129 08:07:29.403207 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 29 08:07:29 crc kubenswrapper[5017]: I0129 08:07:29.758360 5017 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="570006eb-aed7-44e3-89f0-61483bfa5fc3" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"http://10.217.1.70:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 08:07:29 crc kubenswrapper[5017]: I0129 08:07:29.758363 5017 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="570006eb-aed7-44e3-89f0-61483bfa5fc3" containerName="nova-metadata-log" probeResult="failure" output="Get \"http://10.217.1.70:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 08:07:34 crc kubenswrapper[5017]: I0129 08:07:34.026138 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-qlngd" Jan 29 08:07:34 crc kubenswrapper[5017]: I0129 08:07:34.026768 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qlngd" Jan 29 08:07:34 crc 
kubenswrapper[5017]: I0129 08:07:34.072977 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-qlngd" Jan 29 08:07:34 crc kubenswrapper[5017]: I0129 08:07:34.443191 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qlngd" Jan 29 08:07:34 crc kubenswrapper[5017]: I0129 08:07:34.501327 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qlngd"] Jan 29 08:07:35 crc kubenswrapper[5017]: I0129 08:07:35.535658 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 29 08:07:35 crc kubenswrapper[5017]: I0129 08:07:35.536755 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 29 08:07:35 crc kubenswrapper[5017]: I0129 08:07:35.543576 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 29 08:07:35 crc kubenswrapper[5017]: I0129 08:07:35.546346 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 29 08:07:36 crc kubenswrapper[5017]: I0129 08:07:36.413330 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-qlngd" podUID="449c691d-7745-4598-947d-0db73acce3fa" containerName="registry-server" containerID="cri-o://b0b4a9005b41b1c5c048ae94236be197109ee181f6d982c43c4bd517cd09c892" gracePeriod=2 Jan 29 08:07:36 crc kubenswrapper[5017]: I0129 08:07:36.413435 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 29 08:07:36 crc kubenswrapper[5017]: I0129 08:07:36.419336 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 29 08:07:36 crc kubenswrapper[5017]: I0129 08:07:36.673767 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-56768c9697-24rfp"] Jan 29 08:07:36 crc kubenswrapper[5017]: I0129 08:07:36.679061 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56768c9697-24rfp" Jan 29 08:07:36 crc kubenswrapper[5017]: I0129 08:07:36.700928 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56768c9697-24rfp"] Jan 29 08:07:36 crc kubenswrapper[5017]: I0129 08:07:36.742815 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/afeb803e-6e7b-42f8-a146-7fbc986e4a80-ovsdbserver-sb\") pod \"dnsmasq-dns-56768c9697-24rfp\" (UID: \"afeb803e-6e7b-42f8-a146-7fbc986e4a80\") " pod="openstack/dnsmasq-dns-56768c9697-24rfp" Jan 29 08:07:36 crc kubenswrapper[5017]: I0129 08:07:36.742925 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afeb803e-6e7b-42f8-a146-7fbc986e4a80-config\") pod \"dnsmasq-dns-56768c9697-24rfp\" (UID: \"afeb803e-6e7b-42f8-a146-7fbc986e4a80\") " pod="openstack/dnsmasq-dns-56768c9697-24rfp" Jan 29 08:07:36 crc kubenswrapper[5017]: I0129 08:07:36.742988 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2s5xk\" (UniqueName: \"kubernetes.io/projected/afeb803e-6e7b-42f8-a146-7fbc986e4a80-kube-api-access-2s5xk\") pod \"dnsmasq-dns-56768c9697-24rfp\" (UID: \"afeb803e-6e7b-42f8-a146-7fbc986e4a80\") " pod="openstack/dnsmasq-dns-56768c9697-24rfp" Jan 29 08:07:36 crc kubenswrapper[5017]: I0129 08:07:36.743020 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/afeb803e-6e7b-42f8-a146-7fbc986e4a80-ovsdbserver-nb\") pod \"dnsmasq-dns-56768c9697-24rfp\" (UID: \"afeb803e-6e7b-42f8-a146-7fbc986e4a80\") " pod="openstack/dnsmasq-dns-56768c9697-24rfp" Jan 29 08:07:36 crc kubenswrapper[5017]: I0129 08:07:36.743073 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/afeb803e-6e7b-42f8-a146-7fbc986e4a80-dns-svc\") pod \"dnsmasq-dns-56768c9697-24rfp\" (UID: \"afeb803e-6e7b-42f8-a146-7fbc986e4a80\") " pod="openstack/dnsmasq-dns-56768c9697-24rfp" Jan 29 08:07:36 crc kubenswrapper[5017]: I0129 08:07:36.848983 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2s5xk\" (UniqueName: \"kubernetes.io/projected/afeb803e-6e7b-42f8-a146-7fbc986e4a80-kube-api-access-2s5xk\") pod \"dnsmasq-dns-56768c9697-24rfp\" (UID: \"afeb803e-6e7b-42f8-a146-7fbc986e4a80\") " pod="openstack/dnsmasq-dns-56768c9697-24rfp" Jan 29 08:07:36 crc kubenswrapper[5017]: I0129 08:07:36.849168 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/afeb803e-6e7b-42f8-a146-7fbc986e4a80-ovsdbserver-nb\") pod \"dnsmasq-dns-56768c9697-24rfp\" (UID: \"afeb803e-6e7b-42f8-a146-7fbc986e4a80\") " pod="openstack/dnsmasq-dns-56768c9697-24rfp" Jan 29 08:07:36 crc kubenswrapper[5017]: I0129 08:07:36.849339 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/afeb803e-6e7b-42f8-a146-7fbc986e4a80-dns-svc\") pod \"dnsmasq-dns-56768c9697-24rfp\" (UID: \"afeb803e-6e7b-42f8-a146-7fbc986e4a80\") " pod="openstack/dnsmasq-dns-56768c9697-24rfp" Jan 29 08:07:36 crc kubenswrapper[5017]: I0129 08:07:36.850229 5017 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/afeb803e-6e7b-42f8-a146-7fbc986e4a80-ovsdbserver-sb\") pod \"dnsmasq-dns-56768c9697-24rfp\" (UID: \"afeb803e-6e7b-42f8-a146-7fbc986e4a80\") " pod="openstack/dnsmasq-dns-56768c9697-24rfp" Jan 29 08:07:36 crc kubenswrapper[5017]: I0129 08:07:36.850409 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afeb803e-6e7b-42f8-a146-7fbc986e4a80-config\") pod \"dnsmasq-dns-56768c9697-24rfp\" (UID: \"afeb803e-6e7b-42f8-a146-7fbc986e4a80\") " pod="openstack/dnsmasq-dns-56768c9697-24rfp" Jan 29 08:07:36 crc kubenswrapper[5017]: I0129 08:07:36.851054 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/afeb803e-6e7b-42f8-a146-7fbc986e4a80-dns-svc\") pod \"dnsmasq-dns-56768c9697-24rfp\" (UID: \"afeb803e-6e7b-42f8-a146-7fbc986e4a80\") " pod="openstack/dnsmasq-dns-56768c9697-24rfp" Jan 29 08:07:36 crc kubenswrapper[5017]: I0129 08:07:36.854371 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afeb803e-6e7b-42f8-a146-7fbc986e4a80-config\") pod \"dnsmasq-dns-56768c9697-24rfp\" (UID: \"afeb803e-6e7b-42f8-a146-7fbc986e4a80\") " pod="openstack/dnsmasq-dns-56768c9697-24rfp" Jan 29 08:07:36 crc kubenswrapper[5017]: I0129 08:07:36.856667 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/afeb803e-6e7b-42f8-a146-7fbc986e4a80-ovsdbserver-sb\") pod \"dnsmasq-dns-56768c9697-24rfp\" (UID: \"afeb803e-6e7b-42f8-a146-7fbc986e4a80\") " pod="openstack/dnsmasq-dns-56768c9697-24rfp" Jan 29 08:07:36 crc kubenswrapper[5017]: I0129 08:07:36.860455 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/afeb803e-6e7b-42f8-a146-7fbc986e4a80-ovsdbserver-nb\") pod \"dnsmasq-dns-56768c9697-24rfp\" (UID: \"afeb803e-6e7b-42f8-a146-7fbc986e4a80\") " pod="openstack/dnsmasq-dns-56768c9697-24rfp" Jan 29 08:07:36 crc kubenswrapper[5017]: I0129 08:07:36.902169 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2s5xk\" (UniqueName: \"kubernetes.io/projected/afeb803e-6e7b-42f8-a146-7fbc986e4a80-kube-api-access-2s5xk\") pod \"dnsmasq-dns-56768c9697-24rfp\" (UID: \"afeb803e-6e7b-42f8-a146-7fbc986e4a80\") " pod="openstack/dnsmasq-dns-56768c9697-24rfp" Jan 29 08:07:37 crc kubenswrapper[5017]: I0129 08:07:37.020913 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56768c9697-24rfp" Jan 29 08:07:37 crc kubenswrapper[5017]: I0129 08:07:37.114058 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qlngd" Jan 29 08:07:37 crc kubenswrapper[5017]: I0129 08:07:37.263530 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/449c691d-7745-4598-947d-0db73acce3fa-utilities" (OuterVolumeSpecName: "utilities") pod "449c691d-7745-4598-947d-0db73acce3fa" (UID: "449c691d-7745-4598-947d-0db73acce3fa"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:07:37 crc kubenswrapper[5017]: I0129 08:07:37.261507 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/449c691d-7745-4598-947d-0db73acce3fa-utilities\") pod \"449c691d-7745-4598-947d-0db73acce3fa\" (UID: \"449c691d-7745-4598-947d-0db73acce3fa\") " Jan 29 08:07:37 crc kubenswrapper[5017]: I0129 08:07:37.264441 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/449c691d-7745-4598-947d-0db73acce3fa-catalog-content\") pod \"449c691d-7745-4598-947d-0db73acce3fa\" (UID: \"449c691d-7745-4598-947d-0db73acce3fa\") " Jan 29 08:07:37 crc kubenswrapper[5017]: I0129 08:07:37.265610 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7wzbj\" (UniqueName: \"kubernetes.io/projected/449c691d-7745-4598-947d-0db73acce3fa-kube-api-access-7wzbj\") pod \"449c691d-7745-4598-947d-0db73acce3fa\" (UID: \"449c691d-7745-4598-947d-0db73acce3fa\") " Jan 29 08:07:37 crc kubenswrapper[5017]: I0129 08:07:37.266639 5017 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/449c691d-7745-4598-947d-0db73acce3fa-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 08:07:37 crc kubenswrapper[5017]: I0129 08:07:37.279849 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/449c691d-7745-4598-947d-0db73acce3fa-kube-api-access-7wzbj" (OuterVolumeSpecName: "kube-api-access-7wzbj") pod "449c691d-7745-4598-947d-0db73acce3fa" (UID: "449c691d-7745-4598-947d-0db73acce3fa"). InnerVolumeSpecName "kube-api-access-7wzbj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:07:37 crc kubenswrapper[5017]: I0129 08:07:37.343859 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/449c691d-7745-4598-947d-0db73acce3fa-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "449c691d-7745-4598-947d-0db73acce3fa" (UID: "449c691d-7745-4598-947d-0db73acce3fa"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:07:37 crc kubenswrapper[5017]: I0129 08:07:37.369068 5017 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/449c691d-7745-4598-947d-0db73acce3fa-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 08:07:37 crc kubenswrapper[5017]: I0129 08:07:37.369110 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7wzbj\" (UniqueName: \"kubernetes.io/projected/449c691d-7745-4598-947d-0db73acce3fa-kube-api-access-7wzbj\") on node \"crc\" DevicePath \"\"" Jan 29 08:07:37 crc kubenswrapper[5017]: I0129 08:07:37.444778 5017 generic.go:334] "Generic (PLEG): container finished" podID="449c691d-7745-4598-947d-0db73acce3fa" containerID="b0b4a9005b41b1c5c048ae94236be197109ee181f6d982c43c4bd517cd09c892" exitCode=0 Jan 29 08:07:37 crc kubenswrapper[5017]: I0129 08:07:37.445573 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qlngd" Jan 29 08:07:37 crc kubenswrapper[5017]: I0129 08:07:37.446128 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qlngd" event={"ID":"449c691d-7745-4598-947d-0db73acce3fa","Type":"ContainerDied","Data":"b0b4a9005b41b1c5c048ae94236be197109ee181f6d982c43c4bd517cd09c892"} Jan 29 08:07:37 crc kubenswrapper[5017]: I0129 08:07:37.446173 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qlngd" event={"ID":"449c691d-7745-4598-947d-0db73acce3fa","Type":"ContainerDied","Data":"f28a06a7e7a18986916ae000d7ba1924ae17380bb81bb783142586895791c1bd"} Jan 29 08:07:37 crc kubenswrapper[5017]: I0129 08:07:37.446197 5017 scope.go:117] "RemoveContainer" containerID="b0b4a9005b41b1c5c048ae94236be197109ee181f6d982c43c4bd517cd09c892" Jan 29 08:07:37 crc kubenswrapper[5017]: I0129 08:07:37.498185 5017 scope.go:117] "RemoveContainer" containerID="b8b9387b6ff87b6a22bc20fcfab69c2ca29bcb4014305417e13c4bd501b4055d" Jan 29 08:07:37 crc kubenswrapper[5017]: I0129 08:07:37.558891 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qlngd"] Jan 29 08:07:37 crc kubenswrapper[5017]: I0129 08:07:37.578257 5017 scope.go:117] "RemoveContainer" containerID="362a40b3132d4805e9185dce72bf1f6cc2f318043b6ee9fb6f371db809e978aa" Jan 29 08:07:37 crc kubenswrapper[5017]: I0129 08:07:37.607511 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-qlngd"] Jan 29 08:07:37 crc kubenswrapper[5017]: I0129 08:07:37.691510 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56768c9697-24rfp"] Jan 29 08:07:37 crc kubenswrapper[5017]: I0129 08:07:37.712105 5017 scope.go:117] "RemoveContainer" containerID="b0b4a9005b41b1c5c048ae94236be197109ee181f6d982c43c4bd517cd09c892" Jan 29 08:07:37 crc kubenswrapper[5017]: E0129 08:07:37.717463 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0b4a9005b41b1c5c048ae94236be197109ee181f6d982c43c4bd517cd09c892\": container with ID starting with b0b4a9005b41b1c5c048ae94236be197109ee181f6d982c43c4bd517cd09c892 not found: ID does not exist" containerID="b0b4a9005b41b1c5c048ae94236be197109ee181f6d982c43c4bd517cd09c892" Jan 29 08:07:37 crc kubenswrapper[5017]: I0129 08:07:37.717524 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0b4a9005b41b1c5c048ae94236be197109ee181f6d982c43c4bd517cd09c892"} err="failed to get container status \"b0b4a9005b41b1c5c048ae94236be197109ee181f6d982c43c4bd517cd09c892\": rpc error: code = NotFound desc = could not find container \"b0b4a9005b41b1c5c048ae94236be197109ee181f6d982c43c4bd517cd09c892\": container with ID starting with b0b4a9005b41b1c5c048ae94236be197109ee181f6d982c43c4bd517cd09c892 not found: ID does not exist" Jan 29 08:07:37 crc kubenswrapper[5017]: I0129 08:07:37.717556 5017 scope.go:117] "RemoveContainer" containerID="b8b9387b6ff87b6a22bc20fcfab69c2ca29bcb4014305417e13c4bd501b4055d" Jan 29 08:07:37 crc kubenswrapper[5017]: E0129 08:07:37.728242 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8b9387b6ff87b6a22bc20fcfab69c2ca29bcb4014305417e13c4bd501b4055d\": container with ID starting with b8b9387b6ff87b6a22bc20fcfab69c2ca29bcb4014305417e13c4bd501b4055d not found: ID 
does not exist" containerID="b8b9387b6ff87b6a22bc20fcfab69c2ca29bcb4014305417e13c4bd501b4055d" Jan 29 08:07:37 crc kubenswrapper[5017]: I0129 08:07:37.728309 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8b9387b6ff87b6a22bc20fcfab69c2ca29bcb4014305417e13c4bd501b4055d"} err="failed to get container status \"b8b9387b6ff87b6a22bc20fcfab69c2ca29bcb4014305417e13c4bd501b4055d\": rpc error: code = NotFound desc = could not find container \"b8b9387b6ff87b6a22bc20fcfab69c2ca29bcb4014305417e13c4bd501b4055d\": container with ID starting with b8b9387b6ff87b6a22bc20fcfab69c2ca29bcb4014305417e13c4bd501b4055d not found: ID does not exist" Jan 29 08:07:37 crc kubenswrapper[5017]: I0129 08:07:37.728353 5017 scope.go:117] "RemoveContainer" containerID="362a40b3132d4805e9185dce72bf1f6cc2f318043b6ee9fb6f371db809e978aa" Jan 29 08:07:37 crc kubenswrapper[5017]: E0129 08:07:37.741311 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"362a40b3132d4805e9185dce72bf1f6cc2f318043b6ee9fb6f371db809e978aa\": container with ID starting with 362a40b3132d4805e9185dce72bf1f6cc2f318043b6ee9fb6f371db809e978aa not found: ID does not exist" containerID="362a40b3132d4805e9185dce72bf1f6cc2f318043b6ee9fb6f371db809e978aa" Jan 29 08:07:37 crc kubenswrapper[5017]: I0129 08:07:37.741375 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"362a40b3132d4805e9185dce72bf1f6cc2f318043b6ee9fb6f371db809e978aa"} err="failed to get container status \"362a40b3132d4805e9185dce72bf1f6cc2f318043b6ee9fb6f371db809e978aa\": rpc error: code = NotFound desc = could not find container \"362a40b3132d4805e9185dce72bf1f6cc2f318043b6ee9fb6f371db809e978aa\": container with ID starting with 362a40b3132d4805e9185dce72bf1f6cc2f318043b6ee9fb6f371db809e978aa not found: ID does not exist" Jan 29 08:07:38 crc kubenswrapper[5017]: I0129 08:07:38.327477 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="449c691d-7745-4598-947d-0db73acce3fa" path="/var/lib/kubelet/pods/449c691d-7745-4598-947d-0db73acce3fa/volumes" Jan 29 08:07:38 crc kubenswrapper[5017]: I0129 08:07:38.456910 5017 generic.go:334] "Generic (PLEG): container finished" podID="afeb803e-6e7b-42f8-a146-7fbc986e4a80" containerID="0a13f4e750bcf0a1476748382baf3b1c5ad153d4017a836a98424095c67650ac" exitCode=0 Jan 29 08:07:38 crc kubenswrapper[5017]: I0129 08:07:38.457012 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56768c9697-24rfp" event={"ID":"afeb803e-6e7b-42f8-a146-7fbc986e4a80","Type":"ContainerDied","Data":"0a13f4e750bcf0a1476748382baf3b1c5ad153d4017a836a98424095c67650ac"} Jan 29 08:07:38 crc kubenswrapper[5017]: I0129 08:07:38.457090 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56768c9697-24rfp" event={"ID":"afeb803e-6e7b-42f8-a146-7fbc986e4a80","Type":"ContainerStarted","Data":"3e64e3fe35a614444327dcdeca53296ebb88dde6bd68ca594c0e60e19483a862"} Jan 29 08:07:38 crc kubenswrapper[5017]: I0129 08:07:38.678736 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 29 08:07:38 crc kubenswrapper[5017]: I0129 08:07:38.679283 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 29 08:07:38 crc kubenswrapper[5017]: I0129 08:07:38.683998 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/nova-metadata-0" Jan 29 08:07:38 crc kubenswrapper[5017]: I0129 08:07:38.685002 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 29 08:07:39 crc kubenswrapper[5017]: I0129 08:07:39.469658 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56768c9697-24rfp" event={"ID":"afeb803e-6e7b-42f8-a146-7fbc986e4a80","Type":"ContainerStarted","Data":"0981f3a9b2ed6337208e31dbd910344ee3386643a36bb839d696682480fdeb01"} Jan 29 08:07:39 crc kubenswrapper[5017]: I0129 08:07:39.470184 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-56768c9697-24rfp" Jan 29 08:07:39 crc kubenswrapper[5017]: I0129 08:07:39.490323 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-56768c9697-24rfp" podStartSLOduration=3.490291581 podStartE2EDuration="3.490291581s" podCreationTimestamp="2026-01-29 08:07:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:07:39.48856814 +0000 UTC m=+5545.863015770" watchObservedRunningTime="2026-01-29 08:07:39.490291581 +0000 UTC m=+5545.864739191" Jan 29 08:07:47 crc kubenswrapper[5017]: I0129 08:07:47.024794 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-56768c9697-24rfp" Jan 29 08:07:47 crc kubenswrapper[5017]: I0129 08:07:47.126087 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f46f5c5cf-27wvj"] Jan 29 08:07:47 crc kubenswrapper[5017]: I0129 08:07:47.126339 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6f46f5c5cf-27wvj" podUID="0c5d06cc-0a4b-41cd-bf60-ff61ac5027fa" containerName="dnsmasq-dns" containerID="cri-o://641318345e9528276f2a62e8175252dd2f086dd4ddec6d7aa5b185053721d90e" gracePeriod=10 Jan 29 08:07:47 crc kubenswrapper[5017]: E0129 08:07:47.200405 5017 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0c5d06cc_0a4b_41cd_bf60_ff61ac5027fa.slice/crio-641318345e9528276f2a62e8175252dd2f086dd4ddec6d7aa5b185053721d90e.scope\": RecentStats: unable to find data in memory cache]" Jan 29 08:07:47 crc kubenswrapper[5017]: I0129 08:07:47.567679 5017 generic.go:334] "Generic (PLEG): container finished" podID="0c5d06cc-0a4b-41cd-bf60-ff61ac5027fa" containerID="641318345e9528276f2a62e8175252dd2f086dd4ddec6d7aa5b185053721d90e" exitCode=0 Jan 29 08:07:47 crc kubenswrapper[5017]: I0129 08:07:47.567751 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f46f5c5cf-27wvj" event={"ID":"0c5d06cc-0a4b-41cd-bf60-ff61ac5027fa","Type":"ContainerDied","Data":"641318345e9528276f2a62e8175252dd2f086dd4ddec6d7aa5b185053721d90e"} Jan 29 08:07:47 crc kubenswrapper[5017]: I0129 08:07:47.674208 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6f46f5c5cf-27wvj" Jan 29 08:07:47 crc kubenswrapper[5017]: I0129 08:07:47.684345 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zxshv\" (UniqueName: \"kubernetes.io/projected/0c5d06cc-0a4b-41cd-bf60-ff61ac5027fa-kube-api-access-zxshv\") pod \"0c5d06cc-0a4b-41cd-bf60-ff61ac5027fa\" (UID: \"0c5d06cc-0a4b-41cd-bf60-ff61ac5027fa\") " Jan 29 08:07:47 crc kubenswrapper[5017]: I0129 08:07:47.684471 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0c5d06cc-0a4b-41cd-bf60-ff61ac5027fa-ovsdbserver-sb\") pod \"0c5d06cc-0a4b-41cd-bf60-ff61ac5027fa\" (UID: \"0c5d06cc-0a4b-41cd-bf60-ff61ac5027fa\") " Jan 29 08:07:47 crc kubenswrapper[5017]: I0129 08:07:47.684564 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0c5d06cc-0a4b-41cd-bf60-ff61ac5027fa-dns-svc\") pod \"0c5d06cc-0a4b-41cd-bf60-ff61ac5027fa\" (UID: \"0c5d06cc-0a4b-41cd-bf60-ff61ac5027fa\") " Jan 29 08:07:47 crc kubenswrapper[5017]: I0129 08:07:47.684631 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c5d06cc-0a4b-41cd-bf60-ff61ac5027fa-config\") pod \"0c5d06cc-0a4b-41cd-bf60-ff61ac5027fa\" (UID: \"0c5d06cc-0a4b-41cd-bf60-ff61ac5027fa\") " Jan 29 08:07:47 crc kubenswrapper[5017]: I0129 08:07:47.684693 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0c5d06cc-0a4b-41cd-bf60-ff61ac5027fa-ovsdbserver-nb\") pod \"0c5d06cc-0a4b-41cd-bf60-ff61ac5027fa\" (UID: \"0c5d06cc-0a4b-41cd-bf60-ff61ac5027fa\") " Jan 29 08:07:47 crc kubenswrapper[5017]: I0129 08:07:47.693167 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c5d06cc-0a4b-41cd-bf60-ff61ac5027fa-kube-api-access-zxshv" (OuterVolumeSpecName: "kube-api-access-zxshv") pod "0c5d06cc-0a4b-41cd-bf60-ff61ac5027fa" (UID: "0c5d06cc-0a4b-41cd-bf60-ff61ac5027fa"). InnerVolumeSpecName "kube-api-access-zxshv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:07:47 crc kubenswrapper[5017]: I0129 08:07:47.784792 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c5d06cc-0a4b-41cd-bf60-ff61ac5027fa-config" (OuterVolumeSpecName: "config") pod "0c5d06cc-0a4b-41cd-bf60-ff61ac5027fa" (UID: "0c5d06cc-0a4b-41cd-bf60-ff61ac5027fa"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:07:47 crc kubenswrapper[5017]: I0129 08:07:47.787105 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zxshv\" (UniqueName: \"kubernetes.io/projected/0c5d06cc-0a4b-41cd-bf60-ff61ac5027fa-kube-api-access-zxshv\") on node \"crc\" DevicePath \"\"" Jan 29 08:07:47 crc kubenswrapper[5017]: I0129 08:07:47.787331 5017 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c5d06cc-0a4b-41cd-bf60-ff61ac5027fa-config\") on node \"crc\" DevicePath \"\"" Jan 29 08:07:47 crc kubenswrapper[5017]: I0129 08:07:47.788033 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c5d06cc-0a4b-41cd-bf60-ff61ac5027fa-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0c5d06cc-0a4b-41cd-bf60-ff61ac5027fa" (UID: "0c5d06cc-0a4b-41cd-bf60-ff61ac5027fa"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:07:47 crc kubenswrapper[5017]: I0129 08:07:47.789245 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c5d06cc-0a4b-41cd-bf60-ff61ac5027fa-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0c5d06cc-0a4b-41cd-bf60-ff61ac5027fa" (UID: "0c5d06cc-0a4b-41cd-bf60-ff61ac5027fa"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:07:47 crc kubenswrapper[5017]: I0129 08:07:47.791943 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c5d06cc-0a4b-41cd-bf60-ff61ac5027fa-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0c5d06cc-0a4b-41cd-bf60-ff61ac5027fa" (UID: "0c5d06cc-0a4b-41cd-bf60-ff61ac5027fa"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:07:47 crc kubenswrapper[5017]: I0129 08:07:47.888242 5017 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0c5d06cc-0a4b-41cd-bf60-ff61ac5027fa-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 08:07:47 crc kubenswrapper[5017]: I0129 08:07:47.888282 5017 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0c5d06cc-0a4b-41cd-bf60-ff61ac5027fa-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 08:07:47 crc kubenswrapper[5017]: I0129 08:07:47.888295 5017 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0c5d06cc-0a4b-41cd-bf60-ff61ac5027fa-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 08:07:48 crc kubenswrapper[5017]: I0129 08:07:48.581764 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f46f5c5cf-27wvj" event={"ID":"0c5d06cc-0a4b-41cd-bf60-ff61ac5027fa","Type":"ContainerDied","Data":"fae7c2935e94aa1cf2a3bd2e0cff279432da0dcc06770c9cbe8e42067d670b0f"} Jan 29 08:07:48 crc kubenswrapper[5017]: I0129 08:07:48.586810 5017 scope.go:117] "RemoveContainer" containerID="641318345e9528276f2a62e8175252dd2f086dd4ddec6d7aa5b185053721d90e" Jan 29 08:07:48 crc kubenswrapper[5017]: I0129 08:07:48.581862 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6f46f5c5cf-27wvj" Jan 29 08:07:48 crc kubenswrapper[5017]: I0129 08:07:48.629650 5017 scope.go:117] "RemoveContainer" containerID="533cf16622fbdf3b777c06941708592fcf3f3b4741ffca78a2aec42cb3737f8c" Jan 29 08:07:48 crc kubenswrapper[5017]: I0129 08:07:48.643008 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f46f5c5cf-27wvj"] Jan 29 08:07:48 crc kubenswrapper[5017]: I0129 08:07:48.674079 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6f46f5c5cf-27wvj"] Jan 29 08:07:50 crc kubenswrapper[5017]: I0129 08:07:50.341581 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c5d06cc-0a4b-41cd-bf60-ff61ac5027fa" path="/var/lib/kubelet/pods/0c5d06cc-0a4b-41cd-bf60-ff61ac5027fa/volumes" Jan 29 08:07:51 crc kubenswrapper[5017]: I0129 08:07:51.063596 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-7n55f"] Jan 29 08:07:51 crc kubenswrapper[5017]: E0129 08:07:51.064775 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c5d06cc-0a4b-41cd-bf60-ff61ac5027fa" containerName="dnsmasq-dns" Jan 29 08:07:51 crc kubenswrapper[5017]: I0129 08:07:51.064804 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c5d06cc-0a4b-41cd-bf60-ff61ac5027fa" containerName="dnsmasq-dns" Jan 29 08:07:51 crc kubenswrapper[5017]: E0129 08:07:51.064827 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="449c691d-7745-4598-947d-0db73acce3fa" containerName="extract-content" Jan 29 08:07:51 crc kubenswrapper[5017]: I0129 08:07:51.064835 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="449c691d-7745-4598-947d-0db73acce3fa" containerName="extract-content" Jan 29 08:07:51 crc kubenswrapper[5017]: E0129 08:07:51.064848 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c5d06cc-0a4b-41cd-bf60-ff61ac5027fa" containerName="init" Jan 29 08:07:51 crc kubenswrapper[5017]: I0129 08:07:51.064856 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c5d06cc-0a4b-41cd-bf60-ff61ac5027fa" containerName="init" Jan 29 08:07:51 crc kubenswrapper[5017]: E0129 08:07:51.064866 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="449c691d-7745-4598-947d-0db73acce3fa" containerName="extract-utilities" Jan 29 08:07:51 crc kubenswrapper[5017]: I0129 08:07:51.064872 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="449c691d-7745-4598-947d-0db73acce3fa" containerName="extract-utilities" Jan 29 08:07:51 crc kubenswrapper[5017]: E0129 08:07:51.064883 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="449c691d-7745-4598-947d-0db73acce3fa" containerName="registry-server" Jan 29 08:07:51 crc kubenswrapper[5017]: I0129 08:07:51.064889 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="449c691d-7745-4598-947d-0db73acce3fa" containerName="registry-server" Jan 29 08:07:51 crc kubenswrapper[5017]: I0129 08:07:51.065103 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c5d06cc-0a4b-41cd-bf60-ff61ac5027fa" containerName="dnsmasq-dns" Jan 29 08:07:51 crc kubenswrapper[5017]: I0129 08:07:51.065125 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="449c691d-7745-4598-947d-0db73acce3fa" containerName="registry-server" Jan 29 08:07:51 crc kubenswrapper[5017]: I0129 08:07:51.065890 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-7n55f"
Jan 29 08:07:51 crc kubenswrapper[5017]: I0129 08:07:51.085178 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-7n55f"]
Jan 29 08:07:51 crc kubenswrapper[5017]: I0129 08:07:51.175927 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-522kn\" (UniqueName: \"kubernetes.io/projected/67325a2f-9d88-4646-8f9b-dc89bc6a5bc7-kube-api-access-522kn\") pod \"cinder-db-create-7n55f\" (UID: \"67325a2f-9d88-4646-8f9b-dc89bc6a5bc7\") " pod="openstack/cinder-db-create-7n55f"
Jan 29 08:07:51 crc kubenswrapper[5017]: I0129 08:07:51.179899 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67325a2f-9d88-4646-8f9b-dc89bc6a5bc7-operator-scripts\") pod \"cinder-db-create-7n55f\" (UID: \"67325a2f-9d88-4646-8f9b-dc89bc6a5bc7\") " pod="openstack/cinder-db-create-7n55f"
Jan 29 08:07:51 crc kubenswrapper[5017]: I0129 08:07:51.212091 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-e65f-account-create-update-rxnc4"]
Jan 29 08:07:51 crc kubenswrapper[5017]: I0129 08:07:51.213843 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-e65f-account-create-update-rxnc4"
Jan 29 08:07:51 crc kubenswrapper[5017]: I0129 08:07:51.216593 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret"
Jan 29 08:07:51 crc kubenswrapper[5017]: I0129 08:07:51.221799 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-e65f-account-create-update-rxnc4"]
Jan 29 08:07:51 crc kubenswrapper[5017]: I0129 08:07:51.282773 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/30ad5517-2f39-4db6-8d8a-c5a23e84d1b0-operator-scripts\") pod \"cinder-e65f-account-create-update-rxnc4\" (UID: \"30ad5517-2f39-4db6-8d8a-c5a23e84d1b0\") " pod="openstack/cinder-e65f-account-create-update-rxnc4"
Jan 29 08:07:51 crc kubenswrapper[5017]: I0129 08:07:51.282870 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-522kn\" (UniqueName: \"kubernetes.io/projected/67325a2f-9d88-4646-8f9b-dc89bc6a5bc7-kube-api-access-522kn\") pod \"cinder-db-create-7n55f\" (UID: \"67325a2f-9d88-4646-8f9b-dc89bc6a5bc7\") " pod="openstack/cinder-db-create-7n55f"
Jan 29 08:07:51 crc kubenswrapper[5017]: I0129 08:07:51.282918 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67325a2f-9d88-4646-8f9b-dc89bc6a5bc7-operator-scripts\") pod \"cinder-db-create-7n55f\" (UID: \"67325a2f-9d88-4646-8f9b-dc89bc6a5bc7\") " pod="openstack/cinder-db-create-7n55f"
Jan 29 08:07:51 crc kubenswrapper[5017]: I0129 08:07:51.282990 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qt77c\" (UniqueName: \"kubernetes.io/projected/30ad5517-2f39-4db6-8d8a-c5a23e84d1b0-kube-api-access-qt77c\") pod \"cinder-e65f-account-create-update-rxnc4\" (UID: \"30ad5517-2f39-4db6-8d8a-c5a23e84d1b0\") " pod="openstack/cinder-e65f-account-create-update-rxnc4"
Jan 29 08:07:51 crc kubenswrapper[5017]: I0129 08:07:51.284073 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67325a2f-9d88-4646-8f9b-dc89bc6a5bc7-operator-scripts\") pod \"cinder-db-create-7n55f\" (UID: \"67325a2f-9d88-4646-8f9b-dc89bc6a5bc7\") " pod="openstack/cinder-db-create-7n55f"
Jan 29 08:07:51 crc kubenswrapper[5017]: I0129 08:07:51.304233 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-522kn\" (UniqueName: \"kubernetes.io/projected/67325a2f-9d88-4646-8f9b-dc89bc6a5bc7-kube-api-access-522kn\") pod \"cinder-db-create-7n55f\" (UID: \"67325a2f-9d88-4646-8f9b-dc89bc6a5bc7\") " pod="openstack/cinder-db-create-7n55f"
Jan 29 08:07:51 crc kubenswrapper[5017]: I0129 08:07:51.386047 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qt77c\" (UniqueName: \"kubernetes.io/projected/30ad5517-2f39-4db6-8d8a-c5a23e84d1b0-kube-api-access-qt77c\") pod \"cinder-e65f-account-create-update-rxnc4\" (UID: \"30ad5517-2f39-4db6-8d8a-c5a23e84d1b0\") " pod="openstack/cinder-e65f-account-create-update-rxnc4"
Jan 29 08:07:51 crc kubenswrapper[5017]: I0129 08:07:51.386275 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/30ad5517-2f39-4db6-8d8a-c5a23e84d1b0-operator-scripts\") pod \"cinder-e65f-account-create-update-rxnc4\" (UID: \"30ad5517-2f39-4db6-8d8a-c5a23e84d1b0\") " pod="openstack/cinder-e65f-account-create-update-rxnc4"
Jan 29 08:07:51 crc kubenswrapper[5017]: I0129 08:07:51.387386 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/30ad5517-2f39-4db6-8d8a-c5a23e84d1b0-operator-scripts\") pod \"cinder-e65f-account-create-update-rxnc4\" (UID: \"30ad5517-2f39-4db6-8d8a-c5a23e84d1b0\") " pod="openstack/cinder-e65f-account-create-update-rxnc4"
Jan 29 08:07:51 crc kubenswrapper[5017]: I0129 08:07:51.406394 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qt77c\" (UniqueName: \"kubernetes.io/projected/30ad5517-2f39-4db6-8d8a-c5a23e84d1b0-kube-api-access-qt77c\") pod \"cinder-e65f-account-create-update-rxnc4\" (UID: \"30ad5517-2f39-4db6-8d8a-c5a23e84d1b0\") " pod="openstack/cinder-e65f-account-create-update-rxnc4"
Jan 29 08:07:51 crc kubenswrapper[5017]: I0129 08:07:51.448630 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-7n55f"
Jan 29 08:07:51 crc kubenswrapper[5017]: I0129 08:07:51.535468 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-e65f-account-create-update-rxnc4"
Jan 29 08:07:51 crc kubenswrapper[5017]: I0129 08:07:51.946098 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-7n55f"]
Jan 29 08:07:51 crc kubenswrapper[5017]: W0129 08:07:51.948032 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod67325a2f_9d88_4646_8f9b_dc89bc6a5bc7.slice/crio-e71ff2ce5ed804dd34c4279d277745a5d46d0d30938e1adfb352b38deda3a3e9 WatchSource:0}: Error finding container e71ff2ce5ed804dd34c4279d277745a5d46d0d30938e1adfb352b38deda3a3e9: Status 404 returned error can't find the container with id e71ff2ce5ed804dd34c4279d277745a5d46d0d30938e1adfb352b38deda3a3e9
Jan 29 08:07:52 crc kubenswrapper[5017]: I0129 08:07:52.044897 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-e65f-account-create-update-rxnc4"]
Jan 29 08:07:52 crc kubenswrapper[5017]: I0129 08:07:52.636846 5017 generic.go:334] "Generic (PLEG): container finished" podID="30ad5517-2f39-4db6-8d8a-c5a23e84d1b0" containerID="28c777e7a6deb5259673a98329cccb5c4fe3fc6b317c4c64fb390fb4eed9a135" exitCode=0
Jan 29 08:07:52 crc kubenswrapper[5017]: I0129 08:07:52.636981 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-e65f-account-create-update-rxnc4" event={"ID":"30ad5517-2f39-4db6-8d8a-c5a23e84d1b0","Type":"ContainerDied","Data":"28c777e7a6deb5259673a98329cccb5c4fe3fc6b317c4c64fb390fb4eed9a135"}
Jan 29 08:07:52 crc kubenswrapper[5017]: I0129 08:07:52.637028 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-e65f-account-create-update-rxnc4" event={"ID":"30ad5517-2f39-4db6-8d8a-c5a23e84d1b0","Type":"ContainerStarted","Data":"31fcc891e097de013e289cc870b525b37dcea61b36cee922e98634cfd6ebd5a2"}
Jan 29 08:07:52 crc kubenswrapper[5017]: I0129 08:07:52.640090 5017 generic.go:334] "Generic (PLEG): container finished" podID="67325a2f-9d88-4646-8f9b-dc89bc6a5bc7" containerID="cd74e50eb115f3784d30a55d5e24398991e4db24d37dba35cb5a3e374c5bb4ef" exitCode=0
Jan 29 08:07:52 crc kubenswrapper[5017]: I0129 08:07:52.640131 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-7n55f" event={"ID":"67325a2f-9d88-4646-8f9b-dc89bc6a5bc7","Type":"ContainerDied","Data":"cd74e50eb115f3784d30a55d5e24398991e4db24d37dba35cb5a3e374c5bb4ef"}
Jan 29 08:07:52 crc kubenswrapper[5017]: I0129 08:07:52.640157 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-7n55f" event={"ID":"67325a2f-9d88-4646-8f9b-dc89bc6a5bc7","Type":"ContainerStarted","Data":"e71ff2ce5ed804dd34c4279d277745a5d46d0d30938e1adfb352b38deda3a3e9"}
Jan 29 08:07:52 crc kubenswrapper[5017]: I0129 08:07:52.880374 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-c88cz"]
Jan 29 08:07:52 crc kubenswrapper[5017]: I0129 08:07:52.882566 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-c88cz"
Jan 29 08:07:52 crc kubenswrapper[5017]: I0129 08:07:52.900190 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-c88cz"]
Jan 29 08:07:53 crc kubenswrapper[5017]: I0129 08:07:53.027554 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6v8jm\" (UniqueName: \"kubernetes.io/projected/70b68d00-f08d-4361-8f41-4d14d9fabe0a-kube-api-access-6v8jm\") pod \"certified-operators-c88cz\" (UID: \"70b68d00-f08d-4361-8f41-4d14d9fabe0a\") " pod="openshift-marketplace/certified-operators-c88cz"
Jan 29 08:07:53 crc kubenswrapper[5017]: I0129 08:07:53.027666 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70b68d00-f08d-4361-8f41-4d14d9fabe0a-catalog-content\") pod \"certified-operators-c88cz\" (UID: \"70b68d00-f08d-4361-8f41-4d14d9fabe0a\") " pod="openshift-marketplace/certified-operators-c88cz"
Jan 29 08:07:53 crc kubenswrapper[5017]: I0129 08:07:53.027799 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70b68d00-f08d-4361-8f41-4d14d9fabe0a-utilities\") pod \"certified-operators-c88cz\" (UID: \"70b68d00-f08d-4361-8f41-4d14d9fabe0a\") " pod="openshift-marketplace/certified-operators-c88cz"
Jan 29 08:07:53 crc kubenswrapper[5017]: I0129 08:07:53.130494 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6v8jm\" (UniqueName: \"kubernetes.io/projected/70b68d00-f08d-4361-8f41-4d14d9fabe0a-kube-api-access-6v8jm\") pod \"certified-operators-c88cz\" (UID: \"70b68d00-f08d-4361-8f41-4d14d9fabe0a\") " pod="openshift-marketplace/certified-operators-c88cz"
Jan 29 08:07:53 crc kubenswrapper[5017]: I0129 08:07:53.130560 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70b68d00-f08d-4361-8f41-4d14d9fabe0a-catalog-content\") pod \"certified-operators-c88cz\" (UID: \"70b68d00-f08d-4361-8f41-4d14d9fabe0a\") " pod="openshift-marketplace/certified-operators-c88cz"
Jan 29 08:07:53 crc kubenswrapper[5017]: I0129 08:07:53.130616 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70b68d00-f08d-4361-8f41-4d14d9fabe0a-utilities\") pod \"certified-operators-c88cz\" (UID: \"70b68d00-f08d-4361-8f41-4d14d9fabe0a\") " pod="openshift-marketplace/certified-operators-c88cz"
Jan 29 08:07:53 crc kubenswrapper[5017]: I0129 08:07:53.131302 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70b68d00-f08d-4361-8f41-4d14d9fabe0a-utilities\") pod \"certified-operators-c88cz\" (UID: \"70b68d00-f08d-4361-8f41-4d14d9fabe0a\") " pod="openshift-marketplace/certified-operators-c88cz"
Jan 29 08:07:53 crc kubenswrapper[5017]: I0129 08:07:53.131445 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70b68d00-f08d-4361-8f41-4d14d9fabe0a-catalog-content\") pod \"certified-operators-c88cz\" (UID: \"70b68d00-f08d-4361-8f41-4d14d9fabe0a\") " pod="openshift-marketplace/certified-operators-c88cz"
Jan 29 08:07:53 crc kubenswrapper[5017]: I0129 08:07:53.155288 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6v8jm\" (UniqueName: \"kubernetes.io/projected/70b68d00-f08d-4361-8f41-4d14d9fabe0a-kube-api-access-6v8jm\") pod \"certified-operators-c88cz\" (UID: \"70b68d00-f08d-4361-8f41-4d14d9fabe0a\") " pod="openshift-marketplace/certified-operators-c88cz"
Jan 29 08:07:53 crc kubenswrapper[5017]: I0129 08:07:53.210819 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-c88cz"
Jan 29 08:07:53 crc kubenswrapper[5017]: I0129 08:07:53.761344 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-c88cz"]
Jan 29 08:07:53 crc kubenswrapper[5017]: W0129 08:07:53.779640 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod70b68d00_f08d_4361_8f41_4d14d9fabe0a.slice/crio-b4ac042795e932af8df5afe27922a91e0a126a6cb7b9e5cfc72f2d18d0c01ceb WatchSource:0}: Error finding container b4ac042795e932af8df5afe27922a91e0a126a6cb7b9e5cfc72f2d18d0c01ceb: Status 404 returned error can't find the container with id b4ac042795e932af8df5afe27922a91e0a126a6cb7b9e5cfc72f2d18d0c01ceb
Jan 29 08:07:54 crc kubenswrapper[5017]: I0129 08:07:54.234502 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-e65f-account-create-update-rxnc4"
Jan 29 08:07:54 crc kubenswrapper[5017]: I0129 08:07:54.240991 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-7n55f"
Jan 29 08:07:54 crc kubenswrapper[5017]: I0129 08:07:54.377380 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qt77c\" (UniqueName: \"kubernetes.io/projected/30ad5517-2f39-4db6-8d8a-c5a23e84d1b0-kube-api-access-qt77c\") pod \"30ad5517-2f39-4db6-8d8a-c5a23e84d1b0\" (UID: \"30ad5517-2f39-4db6-8d8a-c5a23e84d1b0\") "
Jan 29 08:07:54 crc kubenswrapper[5017]: I0129 08:07:54.377605 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67325a2f-9d88-4646-8f9b-dc89bc6a5bc7-operator-scripts\") pod \"67325a2f-9d88-4646-8f9b-dc89bc6a5bc7\" (UID: \"67325a2f-9d88-4646-8f9b-dc89bc6a5bc7\") "
Jan 29 08:07:54 crc kubenswrapper[5017]: I0129 08:07:54.377888 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/30ad5517-2f39-4db6-8d8a-c5a23e84d1b0-operator-scripts\") pod \"30ad5517-2f39-4db6-8d8a-c5a23e84d1b0\" (UID: \"30ad5517-2f39-4db6-8d8a-c5a23e84d1b0\") "
Jan 29 08:07:54 crc kubenswrapper[5017]: I0129 08:07:54.377933 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-522kn\" (UniqueName: \"kubernetes.io/projected/67325a2f-9d88-4646-8f9b-dc89bc6a5bc7-kube-api-access-522kn\") pod \"67325a2f-9d88-4646-8f9b-dc89bc6a5bc7\" (UID: \"67325a2f-9d88-4646-8f9b-dc89bc6a5bc7\") "
Jan 29 08:07:54 crc kubenswrapper[5017]: I0129 08:07:54.379323 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67325a2f-9d88-4646-8f9b-dc89bc6a5bc7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "67325a2f-9d88-4646-8f9b-dc89bc6a5bc7" (UID: "67325a2f-9d88-4646-8f9b-dc89bc6a5bc7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 08:07:54 crc kubenswrapper[5017]: I0129 08:07:54.379335 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30ad5517-2f39-4db6-8d8a-c5a23e84d1b0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "30ad5517-2f39-4db6-8d8a-c5a23e84d1b0" (UID: "30ad5517-2f39-4db6-8d8a-c5a23e84d1b0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 08:07:54 crc kubenswrapper[5017]: I0129 08:07:54.386053 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67325a2f-9d88-4646-8f9b-dc89bc6a5bc7-kube-api-access-522kn" (OuterVolumeSpecName: "kube-api-access-522kn") pod "67325a2f-9d88-4646-8f9b-dc89bc6a5bc7" (UID: "67325a2f-9d88-4646-8f9b-dc89bc6a5bc7"). InnerVolumeSpecName "kube-api-access-522kn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 08:07:54 crc kubenswrapper[5017]: I0129 08:07:54.388461 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30ad5517-2f39-4db6-8d8a-c5a23e84d1b0-kube-api-access-qt77c" (OuterVolumeSpecName: "kube-api-access-qt77c") pod "30ad5517-2f39-4db6-8d8a-c5a23e84d1b0" (UID: "30ad5517-2f39-4db6-8d8a-c5a23e84d1b0"). InnerVolumeSpecName "kube-api-access-qt77c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 08:07:54 crc kubenswrapper[5017]: I0129 08:07:54.480729 5017 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/30ad5517-2f39-4db6-8d8a-c5a23e84d1b0-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 29 08:07:54 crc kubenswrapper[5017]: I0129 08:07:54.480776 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-522kn\" (UniqueName: \"kubernetes.io/projected/67325a2f-9d88-4646-8f9b-dc89bc6a5bc7-kube-api-access-522kn\") on node \"crc\" DevicePath \"\""
Jan 29 08:07:54 crc kubenswrapper[5017]: I0129 08:07:54.480797 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qt77c\" (UniqueName: \"kubernetes.io/projected/30ad5517-2f39-4db6-8d8a-c5a23e84d1b0-kube-api-access-qt77c\") on node \"crc\" DevicePath \"\""
Jan 29 08:07:54 crc kubenswrapper[5017]: I0129 08:07:54.480810 5017 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67325a2f-9d88-4646-8f9b-dc89bc6a5bc7-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 29 08:07:54 crc kubenswrapper[5017]: I0129 08:07:54.676378 5017 generic.go:334] "Generic (PLEG): container finished" podID="70b68d00-f08d-4361-8f41-4d14d9fabe0a" containerID="a4225c7ddb41f5a8274c04b34ddae754a6fb95c531660d9abb937e547c317c66" exitCode=0
Jan 29 08:07:54 crc kubenswrapper[5017]: I0129 08:07:54.676472 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c88cz" event={"ID":"70b68d00-f08d-4361-8f41-4d14d9fabe0a","Type":"ContainerDied","Data":"a4225c7ddb41f5a8274c04b34ddae754a6fb95c531660d9abb937e547c317c66"}
Jan 29 08:07:54 crc kubenswrapper[5017]: I0129 08:07:54.676560 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c88cz" event={"ID":"70b68d00-f08d-4361-8f41-4d14d9fabe0a","Type":"ContainerStarted","Data":"b4ac042795e932af8df5afe27922a91e0a126a6cb7b9e5cfc72f2d18d0c01ceb"}
Jan 29 08:07:54 crc kubenswrapper[5017]: I0129 08:07:54.680067 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-e65f-account-create-update-rxnc4" event={"ID":"30ad5517-2f39-4db6-8d8a-c5a23e84d1b0","Type":"ContainerDied","Data":"31fcc891e097de013e289cc870b525b37dcea61b36cee922e98634cfd6ebd5a2"}
Jan 29 08:07:54 crc kubenswrapper[5017]: I0129 08:07:54.680115 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="31fcc891e097de013e289cc870b525b37dcea61b36cee922e98634cfd6ebd5a2"
Jan 29 08:07:54 crc kubenswrapper[5017]: I0129 08:07:54.680215 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-e65f-account-create-update-rxnc4"
Jan 29 08:07:54 crc kubenswrapper[5017]: I0129 08:07:54.686346 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-7n55f" event={"ID":"67325a2f-9d88-4646-8f9b-dc89bc6a5bc7","Type":"ContainerDied","Data":"e71ff2ce5ed804dd34c4279d277745a5d46d0d30938e1adfb352b38deda3a3e9"}
Jan 29 08:07:54 crc kubenswrapper[5017]: I0129 08:07:54.686398 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e71ff2ce5ed804dd34c4279d277745a5d46d0d30938e1adfb352b38deda3a3e9"
Jan 29 08:07:54 crc kubenswrapper[5017]: I0129 08:07:54.686487 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-7n55f"
Jan 29 08:07:55 crc kubenswrapper[5017]: I0129 08:07:55.703304 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c88cz" event={"ID":"70b68d00-f08d-4361-8f41-4d14d9fabe0a","Type":"ContainerStarted","Data":"d10296d9351a6bdb0429102ce70aaf9c31c6b4c155287a83f13a33efc6b15981"}
Jan 29 08:07:56 crc kubenswrapper[5017]: I0129 08:07:56.475180 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-tbbs6"]
Jan 29 08:07:56 crc kubenswrapper[5017]: E0129 08:07:56.476677 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67325a2f-9d88-4646-8f9b-dc89bc6a5bc7" containerName="mariadb-database-create"
Jan 29 08:07:56 crc kubenswrapper[5017]: I0129 08:07:56.476785 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="67325a2f-9d88-4646-8f9b-dc89bc6a5bc7" containerName="mariadb-database-create"
Jan 29 08:07:56 crc kubenswrapper[5017]: E0129 08:07:56.476899 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30ad5517-2f39-4db6-8d8a-c5a23e84d1b0" containerName="mariadb-account-create-update"
Jan 29 08:07:56 crc kubenswrapper[5017]: I0129 08:07:56.477003 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="30ad5517-2f39-4db6-8d8a-c5a23e84d1b0" containerName="mariadb-account-create-update"
Jan 29 08:07:56 crc kubenswrapper[5017]: I0129 08:07:56.477402 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="30ad5517-2f39-4db6-8d8a-c5a23e84d1b0" containerName="mariadb-account-create-update"
Jan 29 08:07:56 crc kubenswrapper[5017]: I0129 08:07:56.477504 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="67325a2f-9d88-4646-8f9b-dc89bc6a5bc7" containerName="mariadb-database-create"
Jan 29 08:07:56 crc kubenswrapper[5017]: I0129 08:07:56.478510 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-tbbs6"
Jan 29 08:07:56 crc kubenswrapper[5017]: I0129 08:07:56.481450 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts"
Jan 29 08:07:56 crc kubenswrapper[5017]: I0129 08:07:56.481544 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data"
Jan 29 08:07:56 crc kubenswrapper[5017]: I0129 08:07:56.481687 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-4fmk9"
Jan 29 08:07:56 crc kubenswrapper[5017]: I0129 08:07:56.491000 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-tbbs6"]
Jan 29 08:07:56 crc kubenswrapper[5017]: I0129 08:07:56.629858 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ff65104-5204-4b4f-9251-cfd45cfe7b71-config-data\") pod \"cinder-db-sync-tbbs6\" (UID: \"1ff65104-5204-4b4f-9251-cfd45cfe7b71\") " pod="openstack/cinder-db-sync-tbbs6"
Jan 29 08:07:56 crc kubenswrapper[5017]: I0129 08:07:56.629914 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8twcr\" (UniqueName: \"kubernetes.io/projected/1ff65104-5204-4b4f-9251-cfd45cfe7b71-kube-api-access-8twcr\") pod \"cinder-db-sync-tbbs6\" (UID: \"1ff65104-5204-4b4f-9251-cfd45cfe7b71\") " pod="openstack/cinder-db-sync-tbbs6"
Jan 29 08:07:56 crc kubenswrapper[5017]: I0129 08:07:56.629950 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1ff65104-5204-4b4f-9251-cfd45cfe7b71-db-sync-config-data\") pod \"cinder-db-sync-tbbs6\" (UID: \"1ff65104-5204-4b4f-9251-cfd45cfe7b71\") " pod="openstack/cinder-db-sync-tbbs6"
Jan 29 08:07:56 crc kubenswrapper[5017]: I0129 08:07:56.630065 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1ff65104-5204-4b4f-9251-cfd45cfe7b71-etc-machine-id\") pod \"cinder-db-sync-tbbs6\" (UID: \"1ff65104-5204-4b4f-9251-cfd45cfe7b71\") " pod="openstack/cinder-db-sync-tbbs6"
Jan 29 08:07:56 crc kubenswrapper[5017]: I0129 08:07:56.630117 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ff65104-5204-4b4f-9251-cfd45cfe7b71-scripts\") pod \"cinder-db-sync-tbbs6\" (UID: \"1ff65104-5204-4b4f-9251-cfd45cfe7b71\") " pod="openstack/cinder-db-sync-tbbs6"
Jan 29 08:07:56 crc kubenswrapper[5017]: I0129 08:07:56.630206 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ff65104-5204-4b4f-9251-cfd45cfe7b71-combined-ca-bundle\") pod \"cinder-db-sync-tbbs6\" (UID: \"1ff65104-5204-4b4f-9251-cfd45cfe7b71\") " pod="openstack/cinder-db-sync-tbbs6"
Jan 29 08:07:56 crc kubenswrapper[5017]: I0129 08:07:56.716530 5017 generic.go:334] "Generic (PLEG): container finished" podID="70b68d00-f08d-4361-8f41-4d14d9fabe0a" containerID="d10296d9351a6bdb0429102ce70aaf9c31c6b4c155287a83f13a33efc6b15981" exitCode=0
Jan 29 08:07:56 crc kubenswrapper[5017]: I0129 08:07:56.716671 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c88cz" event={"ID":"70b68d00-f08d-4361-8f41-4d14d9fabe0a","Type":"ContainerDied","Data":"d10296d9351a6bdb0429102ce70aaf9c31c6b4c155287a83f13a33efc6b15981"}
Jan 29 08:07:56 crc kubenswrapper[5017]: I0129 08:07:56.733072 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ff65104-5204-4b4f-9251-cfd45cfe7b71-combined-ca-bundle\") pod \"cinder-db-sync-tbbs6\" (UID: \"1ff65104-5204-4b4f-9251-cfd45cfe7b71\") " pod="openstack/cinder-db-sync-tbbs6"
Jan 29 08:07:56 crc kubenswrapper[5017]: I0129 08:07:56.733202 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ff65104-5204-4b4f-9251-cfd45cfe7b71-config-data\") pod \"cinder-db-sync-tbbs6\" (UID: \"1ff65104-5204-4b4f-9251-cfd45cfe7b71\") " pod="openstack/cinder-db-sync-tbbs6"
Jan 29 08:07:56 crc kubenswrapper[5017]: I0129 08:07:56.733252 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8twcr\" (UniqueName: \"kubernetes.io/projected/1ff65104-5204-4b4f-9251-cfd45cfe7b71-kube-api-access-8twcr\") pod \"cinder-db-sync-tbbs6\" (UID: \"1ff65104-5204-4b4f-9251-cfd45cfe7b71\") " pod="openstack/cinder-db-sync-tbbs6"
Jan 29 08:07:56 crc kubenswrapper[5017]: I0129 08:07:56.733295 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1ff65104-5204-4b4f-9251-cfd45cfe7b71-db-sync-config-data\") pod \"cinder-db-sync-tbbs6\" (UID: \"1ff65104-5204-4b4f-9251-cfd45cfe7b71\") " pod="openstack/cinder-db-sync-tbbs6"
Jan 29 08:07:56 crc kubenswrapper[5017]: I0129 08:07:56.733347 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1ff65104-5204-4b4f-9251-cfd45cfe7b71-etc-machine-id\") pod \"cinder-db-sync-tbbs6\" (UID: \"1ff65104-5204-4b4f-9251-cfd45cfe7b71\") " pod="openstack/cinder-db-sync-tbbs6"
Jan 29 08:07:56 crc kubenswrapper[5017]: I0129 08:07:56.733398 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ff65104-5204-4b4f-9251-cfd45cfe7b71-scripts\") pod \"cinder-db-sync-tbbs6\" (UID: \"1ff65104-5204-4b4f-9251-cfd45cfe7b71\") " pod="openstack/cinder-db-sync-tbbs6"
Jan 29 08:07:56 crc kubenswrapper[5017]: I0129 08:07:56.733589 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1ff65104-5204-4b4f-9251-cfd45cfe7b71-etc-machine-id\") pod \"cinder-db-sync-tbbs6\" (UID: \"1ff65104-5204-4b4f-9251-cfd45cfe7b71\") " pod="openstack/cinder-db-sync-tbbs6"
Jan 29 08:07:56 crc kubenswrapper[5017]: I0129 08:07:56.740259 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1ff65104-5204-4b4f-9251-cfd45cfe7b71-db-sync-config-data\") pod \"cinder-db-sync-tbbs6\" (UID: \"1ff65104-5204-4b4f-9251-cfd45cfe7b71\") " pod="openstack/cinder-db-sync-tbbs6"
Jan 29 08:07:56 crc kubenswrapper[5017]: I0129 08:07:56.740267 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ff65104-5204-4b4f-9251-cfd45cfe7b71-combined-ca-bundle\") pod \"cinder-db-sync-tbbs6\" (UID: \"1ff65104-5204-4b4f-9251-cfd45cfe7b71\") " pod="openstack/cinder-db-sync-tbbs6"
Jan 29 08:07:56 crc kubenswrapper[5017]: I0129 08:07:56.740379 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ff65104-5204-4b4f-9251-cfd45cfe7b71-config-data\") pod \"cinder-db-sync-tbbs6\" (UID: \"1ff65104-5204-4b4f-9251-cfd45cfe7b71\") " pod="openstack/cinder-db-sync-tbbs6"
Jan 29 08:07:56 crc kubenswrapper[5017]: I0129 08:07:56.751461 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ff65104-5204-4b4f-9251-cfd45cfe7b71-scripts\") pod \"cinder-db-sync-tbbs6\" (UID: \"1ff65104-5204-4b4f-9251-cfd45cfe7b71\") " pod="openstack/cinder-db-sync-tbbs6"
Jan 29 08:07:56 crc kubenswrapper[5017]: I0129 08:07:56.754352 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8twcr\" (UniqueName: \"kubernetes.io/projected/1ff65104-5204-4b4f-9251-cfd45cfe7b71-kube-api-access-8twcr\") pod \"cinder-db-sync-tbbs6\" (UID: \"1ff65104-5204-4b4f-9251-cfd45cfe7b71\") " pod="openstack/cinder-db-sync-tbbs6"
Jan 29 08:07:56 crc kubenswrapper[5017]: I0129 08:07:56.796781 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-tbbs6"
Jan 29 08:07:57 crc kubenswrapper[5017]: I0129 08:07:57.355830 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-tbbs6"]
Jan 29 08:07:57 crc kubenswrapper[5017]: W0129 08:07:57.364525 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1ff65104_5204_4b4f_9251_cfd45cfe7b71.slice/crio-646649db27f21e52179f1412144bce185ff5e69e76a6c761c8c4ea73105e0985 WatchSource:0}: Error finding container 646649db27f21e52179f1412144bce185ff5e69e76a6c761c8c4ea73105e0985: Status 404 returned error can't find the container with id 646649db27f21e52179f1412144bce185ff5e69e76a6c761c8c4ea73105e0985
Jan 29 08:07:57 crc kubenswrapper[5017]: I0129 08:07:57.742726 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c88cz" event={"ID":"70b68d00-f08d-4361-8f41-4d14d9fabe0a","Type":"ContainerStarted","Data":"37630de230186da647e6694a6698ae7bc37100a1e2530a8a71318dcb81d18737"}
Jan 29 08:07:57 crc kubenswrapper[5017]: I0129 08:07:57.746613 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-tbbs6" event={"ID":"1ff65104-5204-4b4f-9251-cfd45cfe7b71","Type":"ContainerStarted","Data":"646649db27f21e52179f1412144bce185ff5e69e76a6c761c8c4ea73105e0985"}
Jan 29 08:07:57 crc kubenswrapper[5017]: I0129 08:07:57.770682 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-c88cz" podStartSLOduration=3.289670342 podStartE2EDuration="5.770651708s" podCreationTimestamp="2026-01-29 08:07:52 +0000 UTC" firstStartedPulling="2026-01-29 08:07:54.679318669 +0000 UTC m=+5561.053766279" lastFinishedPulling="2026-01-29 08:07:57.160300035 +0000 UTC m=+5563.534747645" observedRunningTime="2026-01-29 08:07:57.763424795 +0000 UTC m=+5564.137872405" watchObservedRunningTime="2026-01-29 08:07:57.770651708 +0000 UTC m=+5564.145099328"
Jan 29 08:07:58 crc kubenswrapper[5017]: I0129 08:07:58.764243 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-tbbs6" event={"ID":"1ff65104-5204-4b4f-9251-cfd45cfe7b71","Type":"ContainerStarted","Data":"c31fabd799b8d668223c98e2dd65324b76482fcf60a22f80ae1c538731afd27b"}
Jan 29 08:07:58 crc kubenswrapper[5017]: I0129 08:07:58.788709 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-tbbs6" podStartSLOduration=2.78868318 podStartE2EDuration="2.78868318s" podCreationTimestamp="2026-01-29 08:07:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:07:58.784434458 +0000 UTC m=+5565.158882068" watchObservedRunningTime="2026-01-29 08:07:58.78868318 +0000 UTC m=+5565.163130800"
Jan 29 08:08:01 crc kubenswrapper[5017]: I0129 08:08:01.801159 5017 generic.go:334] "Generic (PLEG): container finished" podID="1ff65104-5204-4b4f-9251-cfd45cfe7b71" containerID="c31fabd799b8d668223c98e2dd65324b76482fcf60a22f80ae1c538731afd27b" exitCode=0
Jan 29 08:08:01 crc kubenswrapper[5017]: I0129 08:08:01.801270 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-tbbs6" event={"ID":"1ff65104-5204-4b4f-9251-cfd45cfe7b71","Type":"ContainerDied","Data":"c31fabd799b8d668223c98e2dd65324b76482fcf60a22f80ae1c538731afd27b"}
Jan 29 08:08:03 crc kubenswrapper[5017]: I0129 08:08:03.152718 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-tbbs6"
Jan 29 08:08:03 crc kubenswrapper[5017]: I0129 08:08:03.212028 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-c88cz"
Jan 29 08:08:03 crc kubenswrapper[5017]: I0129 08:08:03.213314 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-c88cz"
Jan 29 08:08:03 crc kubenswrapper[5017]: I0129 08:08:03.270926 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-c88cz"
Jan 29 08:08:03 crc kubenswrapper[5017]: I0129 08:08:03.327380 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1ff65104-5204-4b4f-9251-cfd45cfe7b71-etc-machine-id\") pod \"1ff65104-5204-4b4f-9251-cfd45cfe7b71\" (UID: \"1ff65104-5204-4b4f-9251-cfd45cfe7b71\") "
Jan 29 08:08:03 crc kubenswrapper[5017]: I0129 08:08:03.327540 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1ff65104-5204-4b4f-9251-cfd45cfe7b71-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "1ff65104-5204-4b4f-9251-cfd45cfe7b71" (UID: "1ff65104-5204-4b4f-9251-cfd45cfe7b71"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 08:08:03 crc kubenswrapper[5017]: I0129 08:08:03.327698 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8twcr\" (UniqueName: \"kubernetes.io/projected/1ff65104-5204-4b4f-9251-cfd45cfe7b71-kube-api-access-8twcr\") pod \"1ff65104-5204-4b4f-9251-cfd45cfe7b71\" (UID: \"1ff65104-5204-4b4f-9251-cfd45cfe7b71\") "
Jan 29 08:08:03 crc kubenswrapper[5017]: I0129 08:08:03.327831 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ff65104-5204-4b4f-9251-cfd45cfe7b71-config-data\") pod \"1ff65104-5204-4b4f-9251-cfd45cfe7b71\" (UID: \"1ff65104-5204-4b4f-9251-cfd45cfe7b71\") "
Jan 29 08:08:03 crc kubenswrapper[5017]: I0129 08:08:03.327853 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ff65104-5204-4b4f-9251-cfd45cfe7b71-scripts\") pod \"1ff65104-5204-4b4f-9251-cfd45cfe7b71\" (UID: \"1ff65104-5204-4b4f-9251-cfd45cfe7b71\") "
Jan 29 08:08:03 crc kubenswrapper[5017]: I0129 08:08:03.327891 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1ff65104-5204-4b4f-9251-cfd45cfe7b71-db-sync-config-data\") pod \"1ff65104-5204-4b4f-9251-cfd45cfe7b71\" (UID: \"1ff65104-5204-4b4f-9251-cfd45cfe7b71\") "
Jan 29 08:08:03 crc kubenswrapper[5017]: I0129 08:08:03.327950 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ff65104-5204-4b4f-9251-cfd45cfe7b71-combined-ca-bundle\") pod \"1ff65104-5204-4b4f-9251-cfd45cfe7b71\" (UID: \"1ff65104-5204-4b4f-9251-cfd45cfe7b71\") "
Jan 29 08:08:03 crc kubenswrapper[5017]: I0129 08:08:03.328512 5017 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1ff65104-5204-4b4f-9251-cfd45cfe7b71-etc-machine-id\") on node \"crc\" DevicePath \"\""
Jan 29 08:08:03 crc kubenswrapper[5017]: I0129 08:08:03.335293 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ff65104-5204-4b4f-9251-cfd45cfe7b71-kube-api-access-8twcr" (OuterVolumeSpecName: "kube-api-access-8twcr") pod "1ff65104-5204-4b4f-9251-cfd45cfe7b71" (UID: "1ff65104-5204-4b4f-9251-cfd45cfe7b71"). InnerVolumeSpecName "kube-api-access-8twcr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 08:08:03 crc kubenswrapper[5017]: I0129 08:08:03.335643 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ff65104-5204-4b4f-9251-cfd45cfe7b71-scripts" (OuterVolumeSpecName: "scripts") pod "1ff65104-5204-4b4f-9251-cfd45cfe7b71" (UID: "1ff65104-5204-4b4f-9251-cfd45cfe7b71"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 08:08:03 crc kubenswrapper[5017]: I0129 08:08:03.335813 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ff65104-5204-4b4f-9251-cfd45cfe7b71-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "1ff65104-5204-4b4f-9251-cfd45cfe7b71" (UID: "1ff65104-5204-4b4f-9251-cfd45cfe7b71"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 08:08:03 crc kubenswrapper[5017]: I0129 08:08:03.377067 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ff65104-5204-4b4f-9251-cfd45cfe7b71-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1ff65104-5204-4b4f-9251-cfd45cfe7b71" (UID: "1ff65104-5204-4b4f-9251-cfd45cfe7b71"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 08:08:03 crc kubenswrapper[5017]: I0129 08:08:03.381704 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ff65104-5204-4b4f-9251-cfd45cfe7b71-config-data" (OuterVolumeSpecName: "config-data") pod "1ff65104-5204-4b4f-9251-cfd45cfe7b71" (UID: "1ff65104-5204-4b4f-9251-cfd45cfe7b71"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 08:08:03 crc kubenswrapper[5017]: I0129 08:08:03.430497 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8twcr\" (UniqueName: \"kubernetes.io/projected/1ff65104-5204-4b4f-9251-cfd45cfe7b71-kube-api-access-8twcr\") on node \"crc\" DevicePath \"\""
Jan 29 08:08:03 crc kubenswrapper[5017]: I0129 08:08:03.430915 5017 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ff65104-5204-4b4f-9251-cfd45cfe7b71-config-data\") on node \"crc\" DevicePath \"\""
Jan 29 08:08:03 crc kubenswrapper[5017]: I0129 08:08:03.430932 5017 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ff65104-5204-4b4f-9251-cfd45cfe7b71-scripts\") on node \"crc\" DevicePath \"\""
Jan 29 08:08:03 crc kubenswrapper[5017]: I0129 08:08:03.430973 5017 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1ff65104-5204-4b4f-9251-cfd45cfe7b71-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Jan 29 08:08:03 crc kubenswrapper[5017]: I0129 08:08:03.431006 5017 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ff65104-5204-4b4f-9251-cfd45cfe7b71-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 08:08:03 crc kubenswrapper[5017]: I0129 08:08:03.826076 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-tbbs6"
Jan 29 08:08:03 crc kubenswrapper[5017]: I0129 08:08:03.826342 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-tbbs6" event={"ID":"1ff65104-5204-4b4f-9251-cfd45cfe7b71","Type":"ContainerDied","Data":"646649db27f21e52179f1412144bce185ff5e69e76a6c761c8c4ea73105e0985"}
Jan 29 08:08:03 crc kubenswrapper[5017]: I0129 08:08:03.826433 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="646649db27f21e52179f1412144bce185ff5e69e76a6c761c8c4ea73105e0985"
Jan 29 08:08:03 crc kubenswrapper[5017]: I0129 08:08:03.900538 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-c88cz"
Jan 29 08:08:03 crc kubenswrapper[5017]: I0129 08:08:03.960712 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-c88cz"]
Jan 29 08:08:04 crc kubenswrapper[5017]: I0129 08:08:04.305723 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6575ddf6cf-qhgvn"]
Jan 29 08:08:04 crc kubenswrapper[5017]: E0129 08:08:04.306413 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ff65104-5204-4b4f-9251-cfd45cfe7b71" containerName="cinder-db-sync"
Jan 29 08:08:04 crc kubenswrapper[5017]: I0129 08:08:04.306437 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ff65104-5204-4b4f-9251-cfd45cfe7b71" containerName="cinder-db-sync"
Jan 29 08:08:04 crc kubenswrapper[5017]: I0129 08:08:04.306716 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ff65104-5204-4b4f-9251-cfd45cfe7b71" containerName="cinder-db-sync"
Jan 29 08:08:04 crc kubenswrapper[5017]: I0129 08:08:04.308192 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6575ddf6cf-qhgvn"
Jan 29 08:08:04 crc kubenswrapper[5017]: I0129 08:08:04.351096 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6575ddf6cf-qhgvn"]
Jan 29 08:08:04 crc kubenswrapper[5017]: I0129 08:08:04.433106 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"]
Jan 29 08:08:04 crc kubenswrapper[5017]: I0129 08:08:04.434863 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Jan 29 08:08:04 crc kubenswrapper[5017]: I0129 08:08:04.438439 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts"
Jan 29 08:08:04 crc kubenswrapper[5017]: I0129 08:08:04.441196 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data"
Jan 29 08:08:04 crc kubenswrapper[5017]: I0129 08:08:04.441430 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-4fmk9"
Jan 29 08:08:04 crc kubenswrapper[5017]: I0129 08:08:04.441505 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data"
Jan 29 08:08:04 crc kubenswrapper[5017]: I0129 08:08:04.452100 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Jan 29 08:08:04 crc kubenswrapper[5017]: I0129 08:08:04.460628 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9a2bc012-7119-4c7b-b236-e508f10b47c1-ovsdbserver-sb\") pod \"dnsmasq-dns-6575ddf6cf-qhgvn\" (UID: \"9a2bc012-7119-4c7b-b236-e508f10b47c1\") " pod="openstack/dnsmasq-dns-6575ddf6cf-qhgvn"
Jan 29 08:08:04 crc kubenswrapper[5017]: I0129 08:08:04.460706 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9a2bc012-7119-4c7b-b236-e508f10b47c1-dns-svc\") pod \"dnsmasq-dns-6575ddf6cf-qhgvn\" (UID: \"9a2bc012-7119-4c7b-b236-e508f10b47c1\") " pod="openstack/dnsmasq-dns-6575ddf6cf-qhgvn"
Jan 29 08:08:04 crc kubenswrapper[5017]: I0129 08:08:04.460739 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a2bc012-7119-4c7b-b236-e508f10b47c1-config\") pod \"dnsmasq-dns-6575ddf6cf-qhgvn\" (UID: \"9a2bc012-7119-4c7b-b236-e508f10b47c1\") " pod="openstack/dnsmasq-dns-6575ddf6cf-qhgvn"
Jan 29 08:08:04 crc kubenswrapper[5017]: I0129 08:08:04.460864 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9a2bc012-7119-4c7b-b236-e508f10b47c1-ovsdbserver-nb\") pod \"dnsmasq-dns-6575ddf6cf-qhgvn\" (UID: \"9a2bc012-7119-4c7b-b236-e508f10b47c1\") " pod="openstack/dnsmasq-dns-6575ddf6cf-qhgvn"
Jan 29 08:08:04 crc kubenswrapper[5017]: I0129 08:08:04.460919 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtdxb\" (UniqueName: \"kubernetes.io/projected/9a2bc012-7119-4c7b-b236-e508f10b47c1-kube-api-access-wtdxb\") pod \"dnsmasq-dns-6575ddf6cf-qhgvn\" (UID: \"9a2bc012-7119-4c7b-b236-e508f10b47c1\") " pod="openstack/dnsmasq-dns-6575ddf6cf-qhgvn"
Jan 29 08:08:04 crc kubenswrapper[5017]: I0129 08:08:04.563269 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80191ba1-9557-4150-b887-1262b7541638-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"80191ba1-9557-4150-b887-1262b7541638\") " pod="openstack/cinder-api-0"
Jan 29 08:08:04 crc kubenswrapper[5017]: I0129 08:08:04.563351 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/80191ba1-9557-4150-b887-1262b7541638-etc-machine-id\") pod \"cinder-api-0\" (UID: \"80191ba1-9557-4150-b887-1262b7541638\") " pod="openstack/cinder-api-0"
Jan 29 08:08:04 crc kubenswrapper[5017]: I0129 08:08:04.563396 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80191ba1-9557-4150-b887-1262b7541638-config-data\") pod \"cinder-api-0\" (UID: \"80191ba1-9557-4150-b887-1262b7541638\") " pod="openstack/cinder-api-0"
Jan 29 08:08:04 crc kubenswrapper[5017]: I0129 08:08:04.563441 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/80191ba1-9557-4150-b887-1262b7541638-scripts\") pod \"cinder-api-0\" (UID: \"80191ba1-9557-4150-b887-1262b7541638\") " pod="openstack/cinder-api-0"
Jan 29 08:08:04 crc kubenswrapper[5017]: I0129 08:08:04.563481 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9a2bc012-7119-4c7b-b236-e508f10b47c1-ovsdbserver-sb\") pod \"dnsmasq-dns-6575ddf6cf-qhgvn\" (UID: \"9a2bc012-7119-4c7b-b236-e508f10b47c1\") " pod="openstack/dnsmasq-dns-6575ddf6cf-qhgvn"
Jan 29 08:08:04 crc kubenswrapper[5017]: I0129 08:08:04.563504 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/80191ba1-9557-4150-b887-1262b7541638-logs\") pod \"cinder-api-0\" (UID: \"80191ba1-9557-4150-b887-1262b7541638\") " pod="openstack/cinder-api-0"
Jan 29 08:08:04 crc kubenswrapper[5017]: I0129 08:08:04.563543 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9a2bc012-7119-4c7b-b236-e508f10b47c1-dns-svc\") pod \"dnsmasq-dns-6575ddf6cf-qhgvn\" (UID: \"9a2bc012-7119-4c7b-b236-e508f10b47c1\") " pod="openstack/dnsmasq-dns-6575ddf6cf-qhgvn"
Jan 29 08:08:04 crc kubenswrapper[5017]: I0129 08:08:04.563568 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a2bc012-7119-4c7b-b236-e508f10b47c1-config\") pod \"dnsmasq-dns-6575ddf6cf-qhgvn\" (UID: \"9a2bc012-7119-4c7b-b236-e508f10b47c1\") " pod="openstack/dnsmasq-dns-6575ddf6cf-qhgvn"
Jan 29 08:08:04 crc kubenswrapper[5017]: I0129 08:08:04.563594 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tv56n\" (UniqueName: \"kubernetes.io/projected/80191ba1-9557-4150-b887-1262b7541638-kube-api-access-tv56n\") pod \"cinder-api-0\" (UID: \"80191ba1-9557-4150-b887-1262b7541638\") " pod="openstack/cinder-api-0"
Jan 29 08:08:04 crc kubenswrapper[5017]: I0129 08:08:04.563618 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/80191ba1-9557-4150-b887-1262b7541638-config-data-custom\") pod \"cinder-api-0\" (UID: \"80191ba1-9557-4150-b887-1262b7541638\") " pod="openstack/cinder-api-0"
Jan 29 08:08:04 crc kubenswrapper[5017]: I0129 08:08:04.563645 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9a2bc012-7119-4c7b-b236-e508f10b47c1-ovsdbserver-nb\") pod \"dnsmasq-dns-6575ddf6cf-qhgvn\" (UID: \"9a2bc012-7119-4c7b-b236-e508f10b47c1\") " pod="openstack/dnsmasq-dns-6575ddf6cf-qhgvn"
Jan 29 08:08:04 crc kubenswrapper[5017]: I0129 08:08:04.563664 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wtdxb\" (UniqueName: \"kubernetes.io/projected/9a2bc012-7119-4c7b-b236-e508f10b47c1-kube-api-access-wtdxb\") pod \"dnsmasq-dns-6575ddf6cf-qhgvn\" (UID: \"9a2bc012-7119-4c7b-b236-e508f10b47c1\") " pod="openstack/dnsmasq-dns-6575ddf6cf-qhgvn"
Jan 29 08:08:04 crc kubenswrapper[5017]: I0129 08:08:04.564764 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9a2bc012-7119-4c7b-b236-e508f10b47c1-ovsdbserver-nb\") pod \"dnsmasq-dns-6575ddf6cf-qhgvn\" (UID: \"9a2bc012-7119-4c7b-b236-e508f10b47c1\") " pod="openstack/dnsmasq-dns-6575ddf6cf-qhgvn"
Jan 29 08:08:04 crc kubenswrapper[5017]: I0129 08:08:04.564761 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9a2bc012-7119-4c7b-b236-e508f10b47c1-ovsdbserver-sb\") pod \"dnsmasq-dns-6575ddf6cf-qhgvn\" (UID: \"9a2bc012-7119-4c7b-b236-e508f10b47c1\") " pod="openstack/dnsmasq-dns-6575ddf6cf-qhgvn"
Jan 29 08:08:04 crc kubenswrapper[5017]: I0129 08:08:04.564834 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9a2bc012-7119-4c7b-b236-e508f10b47c1-dns-svc\") pod \"dnsmasq-dns-6575ddf6cf-qhgvn\" (UID: \"9a2bc012-7119-4c7b-b236-e508f10b47c1\") " pod="openstack/dnsmasq-dns-6575ddf6cf-qhgvn"
Jan 29 08:08:04 crc kubenswrapper[5017]: I0129 08:08:04.565255 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a2bc012-7119-4c7b-b236-e508f10b47c1-config\") pod \"dnsmasq-dns-6575ddf6cf-qhgvn\" (UID: \"9a2bc012-7119-4c7b-b236-e508f10b47c1\") " pod="openstack/dnsmasq-dns-6575ddf6cf-qhgvn"
Jan 29 08:08:04 crc kubenswrapper[5017]: I0129 08:08:04.587075 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtdxb\" (UniqueName: \"kubernetes.io/projected/9a2bc012-7119-4c7b-b236-e508f10b47c1-kube-api-access-wtdxb\") pod \"dnsmasq-dns-6575ddf6cf-qhgvn\" (UID: \"9a2bc012-7119-4c7b-b236-e508f10b47c1\") " pod="openstack/dnsmasq-dns-6575ddf6cf-qhgvn"
Jan 29 08:08:04 crc kubenswrapper[5017]: I0129 08:08:04.645854 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6575ddf6cf-qhgvn"
Jan 29 08:08:04 crc kubenswrapper[5017]: I0129 08:08:04.670488 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/80191ba1-9557-4150-b887-1262b7541638-scripts\") pod \"cinder-api-0\" (UID: \"80191ba1-9557-4150-b887-1262b7541638\") " pod="openstack/cinder-api-0"
Jan 29 08:08:04 crc kubenswrapper[5017]: I0129 08:08:04.670573 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/80191ba1-9557-4150-b887-1262b7541638-logs\") pod \"cinder-api-0\" (UID: \"80191ba1-9557-4150-b887-1262b7541638\") " pod="openstack/cinder-api-0"
Jan 29 08:08:04 crc kubenswrapper[5017]: I0129 08:08:04.670645 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tv56n\" (UniqueName: \"kubernetes.io/projected/80191ba1-9557-4150-b887-1262b7541638-kube-api-access-tv56n\") pod \"cinder-api-0\" (UID: \"80191ba1-9557-4150-b887-1262b7541638\") " pod="openstack/cinder-api-0"
Jan 29 08:08:04 crc kubenswrapper[5017]: I0129 08:08:04.670676 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/80191ba1-9557-4150-b887-1262b7541638-config-data-custom\") pod \"cinder-api-0\" (UID: \"80191ba1-9557-4150-b887-1262b7541638\") " pod="openstack/cinder-api-0"
Jan 29 08:08:04 crc kubenswrapper[5017]: I0129 08:08:04.670713 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80191ba1-9557-4150-b887-1262b7541638-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"80191ba1-9557-4150-b887-1262b7541638\") " pod="openstack/cinder-api-0"
Jan 29 08:08:04 crc kubenswrapper[5017]: I0129 08:08:04.670753 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/80191ba1-9557-4150-b887-1262b7541638-etc-machine-id\") pod \"cinder-api-0\" (UID: \"80191ba1-9557-4150-b887-1262b7541638\") " pod="openstack/cinder-api-0"
Jan 29 08:08:04 crc kubenswrapper[5017]: I0129 08:08:04.670795 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80191ba1-9557-4150-b887-1262b7541638-config-data\") pod \"cinder-api-0\" (UID: \"80191ba1-9557-4150-b887-1262b7541638\") " pod="openstack/cinder-api-0"
Jan 29 08:08:04 crc kubenswrapper[5017]: I0129 08:08:04.671054 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/80191ba1-9557-4150-b887-1262b7541638-etc-machine-id\") pod \"cinder-api-0\" (UID: \"80191ba1-9557-4150-b887-1262b7541638\") " pod="openstack/cinder-api-0"
Jan 29 08:08:04 crc kubenswrapper[5017]: I0129 08:08:04.671729 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/80191ba1-9557-4150-b887-1262b7541638-logs\") pod \"cinder-api-0\" (UID: \"80191ba1-9557-4150-b887-1262b7541638\") " pod="openstack/cinder-api-0"
Jan 29 08:08:04 crc kubenswrapper[5017]: I0129 08:08:04.682026 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/80191ba1-9557-4150-b887-1262b7541638-scripts\") pod \"cinder-api-0\" (UID: \"80191ba1-9557-4150-b887-1262b7541638\") " pod="openstack/cinder-api-0"
Jan 29 08:08:04 crc kubenswrapper[5017]: I0129 08:08:04.682243 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80191ba1-9557-4150-b887-1262b7541638-config-data\") pod \"cinder-api-0\" (UID: \"80191ba1-9557-4150-b887-1262b7541638\") " pod="openstack/cinder-api-0"
Jan 29 08:08:04 crc kubenswrapper[5017]: I0129 08:08:04.691809 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/80191ba1-9557-4150-b887-1262b7541638-config-data-custom\") pod \"cinder-api-0\" (UID: \"80191ba1-9557-4150-b887-1262b7541638\") " pod="openstack/cinder-api-0"
Jan 29 08:08:04 crc kubenswrapper[5017]: I0129 08:08:04.691816 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80191ba1-9557-4150-b887-1262b7541638-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"80191ba1-9557-4150-b887-1262b7541638\") " pod="openstack/cinder-api-0"
Jan 29 08:08:04 crc kubenswrapper[5017]: I0129 08:08:04.714067 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tv56n\" (UniqueName: \"kubernetes.io/projected/80191ba1-9557-4150-b887-1262b7541638-kube-api-access-tv56n\") pod \"cinder-api-0\" (UID: \"80191ba1-9557-4150-b887-1262b7541638\") " pod="openstack/cinder-api-0"
Jan 29 08:08:04 crc kubenswrapper[5017]: I0129 08:08:04.761149 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Jan 29 08:08:05 crc kubenswrapper[5017]: I0129 08:08:05.293722 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6575ddf6cf-qhgvn"]
Jan 29 08:08:05 crc kubenswrapper[5017]: W0129 08:08:05.370719 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod80191ba1_9557_4150_b887_1262b7541638.slice/crio-c84a182c6241313bf2726b2ede775aba9f3ac006a4bd0f414607a3a6e78ca216 WatchSource:0}: Error finding container c84a182c6241313bf2726b2ede775aba9f3ac006a4bd0f414607a3a6e78ca216: Status 404 returned error can't find the container with id c84a182c6241313bf2726b2ede775aba9f3ac006a4bd0f414607a3a6e78ca216
Jan 29 08:08:05 crc kubenswrapper[5017]: I0129 08:08:05.371313 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Jan 29 08:08:05 crc kubenswrapper[5017]: I0129 08:08:05.882086 5017 generic.go:334] "Generic (PLEG): container finished" podID="9a2bc012-7119-4c7b-b236-e508f10b47c1" containerID="2c6f03ed0fc119c6c701baeee3dfdd5481b8f99b716f6bdedeba6631de12d96b" exitCode=0
Jan 29 08:08:05 crc kubenswrapper[5017]: I0129 08:08:05.882197 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6575ddf6cf-qhgvn" event={"ID":"9a2bc012-7119-4c7b-b236-e508f10b47c1","Type":"ContainerDied","Data":"2c6f03ed0fc119c6c701baeee3dfdd5481b8f99b716f6bdedeba6631de12d96b"}
Jan 29 08:08:05 crc kubenswrapper[5017]: I0129 08:08:05.882455 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6575ddf6cf-qhgvn" event={"ID":"9a2bc012-7119-4c7b-b236-e508f10b47c1","Type":"ContainerStarted","Data":"4b170ace5d82b750333e634c5a47191494ecf455ef99370743b9da97a60ebd5e"}
Jan 29 08:08:05 crc kubenswrapper[5017]: I0129 08:08:05.884426 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"80191ba1-9557-4150-b887-1262b7541638","Type":"ContainerStarted","Data":"c84a182c6241313bf2726b2ede775aba9f3ac006a4bd0f414607a3a6e78ca216"}
Jan 29 08:08:05 crc kubenswrapper[5017]: I0129 08:08:05.884491 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-c88cz" podUID="70b68d00-f08d-4361-8f41-4d14d9fabe0a" containerName="registry-server" containerID="cri-o://37630de230186da647e6694a6698ae7bc37100a1e2530a8a71318dcb81d18737" gracePeriod=2
Jan 29 08:08:06 crc kubenswrapper[5017]: I0129 08:08:06.419999 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-c88cz"
Jan 29 08:08:06 crc kubenswrapper[5017]: I0129 08:08:06.513397 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70b68d00-f08d-4361-8f41-4d14d9fabe0a-utilities\") pod \"70b68d00-f08d-4361-8f41-4d14d9fabe0a\" (UID: \"70b68d00-f08d-4361-8f41-4d14d9fabe0a\") "
Jan 29 08:08:06 crc kubenswrapper[5017]: I0129 08:08:06.516795 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70b68d00-f08d-4361-8f41-4d14d9fabe0a-utilities" (OuterVolumeSpecName: "utilities") pod "70b68d00-f08d-4361-8f41-4d14d9fabe0a" (UID: "70b68d00-f08d-4361-8f41-4d14d9fabe0a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 08:08:06 crc kubenswrapper[5017]: I0129 08:08:06.517269 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70b68d00-f08d-4361-8f41-4d14d9fabe0a-catalog-content\") pod \"70b68d00-f08d-4361-8f41-4d14d9fabe0a\" (UID: \"70b68d00-f08d-4361-8f41-4d14d9fabe0a\") "
Jan 29 08:08:06 crc kubenswrapper[5017]: I0129 08:08:06.526617 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6v8jm\" (UniqueName: \"kubernetes.io/projected/70b68d00-f08d-4361-8f41-4d14d9fabe0a-kube-api-access-6v8jm\") pod \"70b68d00-f08d-4361-8f41-4d14d9fabe0a\" (UID: \"70b68d00-f08d-4361-8f41-4d14d9fabe0a\") "
Jan 29 08:08:06 crc kubenswrapper[5017]: I0129 08:08:06.528458 5017 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70b68d00-f08d-4361-8f41-4d14d9fabe0a-utilities\") on node \"crc\" DevicePath \"\""
Jan 29 08:08:06 crc kubenswrapper[5017]: I0129 08:08:06.530864 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70b68d00-f08d-4361-8f41-4d14d9fabe0a-kube-api-access-6v8jm" (OuterVolumeSpecName: "kube-api-access-6v8jm") pod "70b68d00-f08d-4361-8f41-4d14d9fabe0a" (UID: "70b68d00-f08d-4361-8f41-4d14d9fabe0a"). InnerVolumeSpecName "kube-api-access-6v8jm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 08:08:06 crc kubenswrapper[5017]: I0129 08:08:06.576251 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70b68d00-f08d-4361-8f41-4d14d9fabe0a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "70b68d00-f08d-4361-8f41-4d14d9fabe0a" (UID: "70b68d00-f08d-4361-8f41-4d14d9fabe0a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 08:08:06 crc kubenswrapper[5017]: I0129 08:08:06.633639 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6v8jm\" (UniqueName: \"kubernetes.io/projected/70b68d00-f08d-4361-8f41-4d14d9fabe0a-kube-api-access-6v8jm\") on node \"crc\" DevicePath \"\""
Jan 29 08:08:06 crc kubenswrapper[5017]: I0129 08:08:06.633841 5017 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70b68d00-f08d-4361-8f41-4d14d9fabe0a-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 29 08:08:06 crc kubenswrapper[5017]: I0129 08:08:06.899055 5017 generic.go:334] "Generic (PLEG): container finished" podID="70b68d00-f08d-4361-8f41-4d14d9fabe0a" containerID="37630de230186da647e6694a6698ae7bc37100a1e2530a8a71318dcb81d18737" exitCode=0
Jan 29 08:08:06 crc kubenswrapper[5017]: I0129 08:08:06.899126 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c88cz" event={"ID":"70b68d00-f08d-4361-8f41-4d14d9fabe0a","Type":"ContainerDied","Data":"37630de230186da647e6694a6698ae7bc37100a1e2530a8a71318dcb81d18737"}
Jan 29 08:08:06 crc kubenswrapper[5017]: I0129 08:08:06.899617 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c88cz" event={"ID":"70b68d00-f08d-4361-8f41-4d14d9fabe0a","Type":"ContainerDied","Data":"b4ac042795e932af8df5afe27922a91e0a126a6cb7b9e5cfc72f2d18d0c01ceb"}
Jan 29 08:08:06 crc kubenswrapper[5017]: I0129 08:08:06.899175 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-c88cz"
Jan 29 08:08:06 crc kubenswrapper[5017]: I0129 08:08:06.899645 5017 scope.go:117] "RemoveContainer" containerID="37630de230186da647e6694a6698ae7bc37100a1e2530a8a71318dcb81d18737"
Jan 29 08:08:06 crc kubenswrapper[5017]: I0129 08:08:06.903295 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"80191ba1-9557-4150-b887-1262b7541638","Type":"ContainerStarted","Data":"96475002857cb29bf1bce145fc27c2a6b5f6203507419ba17c55b71218a0ebf5"}
Jan 29 08:08:06 crc kubenswrapper[5017]: I0129 08:08:06.906391 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6575ddf6cf-qhgvn" event={"ID":"9a2bc012-7119-4c7b-b236-e508f10b47c1","Type":"ContainerStarted","Data":"33971f1d23770547aef8e44e1d4d1e650df2ed634a94345a34d99c429fc96a58"}
Jan 29 08:08:06 crc kubenswrapper[5017]: I0129 08:08:06.906602 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6575ddf6cf-qhgvn"
Jan 29 08:08:06 crc kubenswrapper[5017]: I0129 08:08:06.933087 5017 scope.go:117] "RemoveContainer" containerID="d10296d9351a6bdb0429102ce70aaf9c31c6b4c155287a83f13a33efc6b15981"
Jan 29 08:08:06 crc kubenswrapper[5017]: I0129 08:08:06.957500 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6575ddf6cf-qhgvn" podStartSLOduration=2.957472891 podStartE2EDuration="2.957472891s" podCreationTimestamp="2026-01-29 08:08:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:08:06.932901412 +0000 UTC m=+5573.307349032" watchObservedRunningTime="2026-01-29 08:08:06.957472891 +0000 UTC m=+5573.331920501"
Jan 29 08:08:06 crc kubenswrapper[5017]: I0129 08:08:06.971481 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-c88cz"] Jan 29 08:08:06 crc kubenswrapper[5017]: I0129 08:08:06.977590 5017 scope.go:117] "RemoveContainer" containerID="a4225c7ddb41f5a8274c04b34ddae754a6fb95c531660d9abb937e547c317c66" Jan 29 08:08:06 crc kubenswrapper[5017]: I0129 08:08:06.981238 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-c88cz"] Jan 29 08:08:07 crc kubenswrapper[5017]: I0129 08:08:07.042917 5017 scope.go:117] "RemoveContainer" containerID="37630de230186da647e6694a6698ae7bc37100a1e2530a8a71318dcb81d18737" Jan 29 08:08:07 crc kubenswrapper[5017]: E0129 08:08:07.043659 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37630de230186da647e6694a6698ae7bc37100a1e2530a8a71318dcb81d18737\": container with ID starting with 37630de230186da647e6694a6698ae7bc37100a1e2530a8a71318dcb81d18737 not found: ID does not exist" containerID="37630de230186da647e6694a6698ae7bc37100a1e2530a8a71318dcb81d18737" Jan 29 08:08:07 crc kubenswrapper[5017]: I0129 08:08:07.043729 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37630de230186da647e6694a6698ae7bc37100a1e2530a8a71318dcb81d18737"} err="failed to get container status \"37630de230186da647e6694a6698ae7bc37100a1e2530a8a71318dcb81d18737\": rpc error: code = NotFound desc = could not find container \"37630de230186da647e6694a6698ae7bc37100a1e2530a8a71318dcb81d18737\": container with ID starting with 37630de230186da647e6694a6698ae7bc37100a1e2530a8a71318dcb81d18737 not found: ID does not exist" Jan 29 08:08:07 crc kubenswrapper[5017]: I0129 08:08:07.043769 5017 scope.go:117] "RemoveContainer" containerID="d10296d9351a6bdb0429102ce70aaf9c31c6b4c155287a83f13a33efc6b15981" Jan 29 08:08:07 crc kubenswrapper[5017]: E0129 08:08:07.044581 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d10296d9351a6bdb0429102ce70aaf9c31c6b4c155287a83f13a33efc6b15981\": container with ID starting with d10296d9351a6bdb0429102ce70aaf9c31c6b4c155287a83f13a33efc6b15981 not found: ID does not exist" containerID="d10296d9351a6bdb0429102ce70aaf9c31c6b4c155287a83f13a33efc6b15981" Jan 29 08:08:07 crc kubenswrapper[5017]: I0129 08:08:07.044630 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d10296d9351a6bdb0429102ce70aaf9c31c6b4c155287a83f13a33efc6b15981"} err="failed to get container status \"d10296d9351a6bdb0429102ce70aaf9c31c6b4c155287a83f13a33efc6b15981\": rpc error: code = NotFound desc = could not find container \"d10296d9351a6bdb0429102ce70aaf9c31c6b4c155287a83f13a33efc6b15981\": container with ID starting with d10296d9351a6bdb0429102ce70aaf9c31c6b4c155287a83f13a33efc6b15981 not found: ID does not exist" Jan 29 08:08:07 crc kubenswrapper[5017]: I0129 08:08:07.044654 5017 scope.go:117] "RemoveContainer" containerID="a4225c7ddb41f5a8274c04b34ddae754a6fb95c531660d9abb937e547c317c66" Jan 29 08:08:07 crc kubenswrapper[5017]: E0129 08:08:07.045301 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4225c7ddb41f5a8274c04b34ddae754a6fb95c531660d9abb937e547c317c66\": container with ID starting with a4225c7ddb41f5a8274c04b34ddae754a6fb95c531660d9abb937e547c317c66 not found: ID does not exist" containerID="a4225c7ddb41f5a8274c04b34ddae754a6fb95c531660d9abb937e547c317c66" Jan 29 08:08:07 crc 
kubenswrapper[5017]: I0129 08:08:07.045327 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4225c7ddb41f5a8274c04b34ddae754a6fb95c531660d9abb937e547c317c66"} err="failed to get container status \"a4225c7ddb41f5a8274c04b34ddae754a6fb95c531660d9abb937e547c317c66\": rpc error: code = NotFound desc = could not find container \"a4225c7ddb41f5a8274c04b34ddae754a6fb95c531660d9abb937e547c317c66\": container with ID starting with a4225c7ddb41f5a8274c04b34ddae754a6fb95c531660d9abb937e547c317c66 not found: ID does not exist" Jan 29 08:08:07 crc kubenswrapper[5017]: I0129 08:08:07.923398 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"80191ba1-9557-4150-b887-1262b7541638","Type":"ContainerStarted","Data":"6695a6cf07f027bca5039813c42d405f98075e69255b726af24f9f0237b5b9fe"} Jan 29 08:08:07 crc kubenswrapper[5017]: I0129 08:08:07.924207 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 29 08:08:07 crc kubenswrapper[5017]: I0129 08:08:07.951864 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.951840276 podStartE2EDuration="3.951840276s" podCreationTimestamp="2026-01-29 08:08:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:08:07.9507858 +0000 UTC m=+5574.325233430" watchObservedRunningTime="2026-01-29 08:08:07.951840276 +0000 UTC m=+5574.326287886" Jan 29 08:08:08 crc kubenswrapper[5017]: I0129 08:08:08.328083 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70b68d00-f08d-4361-8f41-4d14d9fabe0a" path="/var/lib/kubelet/pods/70b68d00-f08d-4361-8f41-4d14d9fabe0a/volumes" Jan 29 08:08:14 crc kubenswrapper[5017]: I0129 08:08:14.647265 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6575ddf6cf-qhgvn" Jan 29 08:08:14 crc kubenswrapper[5017]: I0129 08:08:14.718919 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56768c9697-24rfp"] Jan 29 08:08:14 crc kubenswrapper[5017]: I0129 08:08:14.719783 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-56768c9697-24rfp" podUID="afeb803e-6e7b-42f8-a146-7fbc986e4a80" containerName="dnsmasq-dns" containerID="cri-o://0981f3a9b2ed6337208e31dbd910344ee3386643a36bb839d696682480fdeb01" gracePeriod=10 Jan 29 08:08:15 crc kubenswrapper[5017]: I0129 08:08:15.014618 5017 generic.go:334] "Generic (PLEG): container finished" podID="afeb803e-6e7b-42f8-a146-7fbc986e4a80" containerID="0981f3a9b2ed6337208e31dbd910344ee3386643a36bb839d696682480fdeb01" exitCode=0 Jan 29 08:08:15 crc kubenswrapper[5017]: I0129 08:08:15.014700 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56768c9697-24rfp" event={"ID":"afeb803e-6e7b-42f8-a146-7fbc986e4a80","Type":"ContainerDied","Data":"0981f3a9b2ed6337208e31dbd910344ee3386643a36bb839d696682480fdeb01"} Jan 29 08:08:15 crc kubenswrapper[5017]: I0129 08:08:15.343797 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56768c9697-24rfp" Jan 29 08:08:15 crc kubenswrapper[5017]: I0129 08:08:15.461319 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/afeb803e-6e7b-42f8-a146-7fbc986e4a80-dns-svc\") pod \"afeb803e-6e7b-42f8-a146-7fbc986e4a80\" (UID: \"afeb803e-6e7b-42f8-a146-7fbc986e4a80\") " Jan 29 08:08:15 crc kubenswrapper[5017]: I0129 08:08:15.461431 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2s5xk\" (UniqueName: \"kubernetes.io/projected/afeb803e-6e7b-42f8-a146-7fbc986e4a80-kube-api-access-2s5xk\") pod \"afeb803e-6e7b-42f8-a146-7fbc986e4a80\" (UID: \"afeb803e-6e7b-42f8-a146-7fbc986e4a80\") " Jan 29 08:08:15 crc kubenswrapper[5017]: I0129 08:08:15.461670 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/afeb803e-6e7b-42f8-a146-7fbc986e4a80-ovsdbserver-nb\") pod \"afeb803e-6e7b-42f8-a146-7fbc986e4a80\" (UID: \"afeb803e-6e7b-42f8-a146-7fbc986e4a80\") " Jan 29 08:08:15 crc kubenswrapper[5017]: I0129 08:08:15.461704 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afeb803e-6e7b-42f8-a146-7fbc986e4a80-config\") pod \"afeb803e-6e7b-42f8-a146-7fbc986e4a80\" (UID: \"afeb803e-6e7b-42f8-a146-7fbc986e4a80\") " Jan 29 08:08:15 crc kubenswrapper[5017]: I0129 08:08:15.461750 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/afeb803e-6e7b-42f8-a146-7fbc986e4a80-ovsdbserver-sb\") pod \"afeb803e-6e7b-42f8-a146-7fbc986e4a80\" (UID: \"afeb803e-6e7b-42f8-a146-7fbc986e4a80\") " Jan 29 08:08:15 crc kubenswrapper[5017]: I0129 08:08:15.470429 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afeb803e-6e7b-42f8-a146-7fbc986e4a80-kube-api-access-2s5xk" (OuterVolumeSpecName: "kube-api-access-2s5xk") pod "afeb803e-6e7b-42f8-a146-7fbc986e4a80" (UID: "afeb803e-6e7b-42f8-a146-7fbc986e4a80"). InnerVolumeSpecName "kube-api-access-2s5xk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:08:15 crc kubenswrapper[5017]: I0129 08:08:15.515253 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afeb803e-6e7b-42f8-a146-7fbc986e4a80-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "afeb803e-6e7b-42f8-a146-7fbc986e4a80" (UID: "afeb803e-6e7b-42f8-a146-7fbc986e4a80"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:08:15 crc kubenswrapper[5017]: I0129 08:08:15.542030 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afeb803e-6e7b-42f8-a146-7fbc986e4a80-config" (OuterVolumeSpecName: "config") pod "afeb803e-6e7b-42f8-a146-7fbc986e4a80" (UID: "afeb803e-6e7b-42f8-a146-7fbc986e4a80"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:08:15 crc kubenswrapper[5017]: I0129 08:08:15.551899 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afeb803e-6e7b-42f8-a146-7fbc986e4a80-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "afeb803e-6e7b-42f8-a146-7fbc986e4a80" (UID: "afeb803e-6e7b-42f8-a146-7fbc986e4a80"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:08:15 crc kubenswrapper[5017]: I0129 08:08:15.563734 5017 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/afeb803e-6e7b-42f8-a146-7fbc986e4a80-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 08:08:15 crc kubenswrapper[5017]: I0129 08:08:15.563781 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2s5xk\" (UniqueName: \"kubernetes.io/projected/afeb803e-6e7b-42f8-a146-7fbc986e4a80-kube-api-access-2s5xk\") on node \"crc\" DevicePath \"\"" Jan 29 08:08:15 crc kubenswrapper[5017]: I0129 08:08:15.563797 5017 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afeb803e-6e7b-42f8-a146-7fbc986e4a80-config\") on node \"crc\" DevicePath \"\"" Jan 29 08:08:15 crc kubenswrapper[5017]: I0129 08:08:15.563812 5017 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/afeb803e-6e7b-42f8-a146-7fbc986e4a80-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 08:08:15 crc kubenswrapper[5017]: I0129 08:08:15.566510 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afeb803e-6e7b-42f8-a146-7fbc986e4a80-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "afeb803e-6e7b-42f8-a146-7fbc986e4a80" (UID: "afeb803e-6e7b-42f8-a146-7fbc986e4a80"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:08:15 crc kubenswrapper[5017]: I0129 08:08:15.665625 5017 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/afeb803e-6e7b-42f8-a146-7fbc986e4a80-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 08:08:16 crc kubenswrapper[5017]: I0129 08:08:16.025796 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56768c9697-24rfp" event={"ID":"afeb803e-6e7b-42f8-a146-7fbc986e4a80","Type":"ContainerDied","Data":"3e64e3fe35a614444327dcdeca53296ebb88dde6bd68ca594c0e60e19483a862"} Jan 29 08:08:16 crc kubenswrapper[5017]: I0129 08:08:16.026309 5017 scope.go:117] "RemoveContainer" containerID="0981f3a9b2ed6337208e31dbd910344ee3386643a36bb839d696682480fdeb01" Jan 29 08:08:16 crc kubenswrapper[5017]: I0129 08:08:16.025880 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56768c9697-24rfp" Jan 29 08:08:16 crc kubenswrapper[5017]: I0129 08:08:16.056518 5017 scope.go:117] "RemoveContainer" containerID="0a13f4e750bcf0a1476748382baf3b1c5ad153d4017a836a98424095c67650ac" Jan 29 08:08:16 crc kubenswrapper[5017]: I0129 08:08:16.068721 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56768c9697-24rfp"] Jan 29 08:08:16 crc kubenswrapper[5017]: I0129 08:08:16.078456 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-56768c9697-24rfp"] Jan 29 08:08:16 crc kubenswrapper[5017]: I0129 08:08:16.333257 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="afeb803e-6e7b-42f8-a146-7fbc986e4a80" path="/var/lib/kubelet/pods/afeb803e-6e7b-42f8-a146-7fbc986e4a80/volumes" Jan 29 08:08:16 crc kubenswrapper[5017]: I0129 08:08:16.344060 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 29 08:08:16 crc kubenswrapper[5017]: I0129 08:08:16.383733 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="89895d2a-1660-449b-8bf2-ea704cb93dd1" containerName="nova-api-log" containerID="cri-o://282b0d36dfa465cecc6e292600dda239b9da90c42c0091e35a8c4e10eab4296c" gracePeriod=30 Jan 29 08:08:16 crc kubenswrapper[5017]: I0129 08:08:16.383973 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="89895d2a-1660-449b-8bf2-ea704cb93dd1" containerName="nova-api-api" containerID="cri-o://f2269f40906caac53176c413f7b85fd16c81dcd3c9ac2beae85ded2767b45b89" gracePeriod=30 Jan 29 08:08:16 crc kubenswrapper[5017]: I0129 08:08:16.409896 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 29 08:08:16 crc kubenswrapper[5017]: I0129 08:08:16.410424 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="6aa9495b-d470-41ce-b861-c410fc4e8aaf" containerName="nova-cell0-conductor-conductor" containerID="cri-o://d74781fe0dbda69dd5aeefa4658fea02c1b713d2f990afdde736a5ec00217985" gracePeriod=30 Jan 29 08:08:16 crc kubenswrapper[5017]: I0129 08:08:16.445673 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 08:08:16 crc kubenswrapper[5017]: I0129 08:08:16.446330 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="570006eb-aed7-44e3-89f0-61483bfa5fc3" containerName="nova-metadata-log" containerID="cri-o://4365d3d9f73fb118fa6d5b7d6dd4f02f76ed3b0ef373e8fbec3379bc9fa32db8" gracePeriod=30 Jan 29 08:08:16 crc kubenswrapper[5017]: I0129 08:08:16.446884 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="570006eb-aed7-44e3-89f0-61483bfa5fc3" containerName="nova-metadata-metadata" containerID="cri-o://dcd0b9896f21e9a2551116172d6a8327bc3b1ab3cee5858973770899221a43d8" gracePeriod=30 Jan 29 08:08:16 crc kubenswrapper[5017]: I0129 08:08:16.473728 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 08:08:16 crc kubenswrapper[5017]: I0129 08:08:16.474138 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="0bd61008-f300-4b3d-afee-3c9c00f3bb43" containerName="nova-scheduler-scheduler" containerID="cri-o://84c3a1d751bd88266e5d7628c6c3f95f08ae7f2c22d3a011b12bdd9731c34d7d" 
gracePeriod=30 Jan 29 08:08:16 crc kubenswrapper[5017]: I0129 08:08:16.491229 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 08:08:16 crc kubenswrapper[5017]: I0129 08:08:16.491555 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="6e102c94-f461-4486-b5d9-a304b48eaad2" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://7a02b75bf67f65f549a9f021ddcec2c267b304bed16acc56dac38ad783b3f5c2" gracePeriod=30 Jan 29 08:08:16 crc kubenswrapper[5017]: I0129 08:08:16.549807 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 29 08:08:16 crc kubenswrapper[5017]: I0129 08:08:16.550457 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-conductor-0" podUID="32079be6-0441-460f-b3fa-d05533ee59f5" containerName="nova-cell1-conductor-conductor" containerID="cri-o://45f280b96f2e8e8f89505d66ac245b618faa9b3234db33973195c9dc73c5b869" gracePeriod=30 Jan 29 08:08:17 crc kubenswrapper[5017]: I0129 08:08:17.024709 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 29 08:08:17 crc kubenswrapper[5017]: I0129 08:08:17.038508 5017 generic.go:334] "Generic (PLEG): container finished" podID="89895d2a-1660-449b-8bf2-ea704cb93dd1" containerID="282b0d36dfa465cecc6e292600dda239b9da90c42c0091e35a8c4e10eab4296c" exitCode=143 Jan 29 08:08:17 crc kubenswrapper[5017]: I0129 08:08:17.038595 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"89895d2a-1660-449b-8bf2-ea704cb93dd1","Type":"ContainerDied","Data":"282b0d36dfa465cecc6e292600dda239b9da90c42c0091e35a8c4e10eab4296c"} Jan 29 08:08:17 crc kubenswrapper[5017]: I0129 08:08:17.047493 5017 generic.go:334] "Generic (PLEG): container finished" podID="570006eb-aed7-44e3-89f0-61483bfa5fc3" containerID="4365d3d9f73fb118fa6d5b7d6dd4f02f76ed3b0ef373e8fbec3379bc9fa32db8" exitCode=143 Jan 29 08:08:17 crc kubenswrapper[5017]: I0129 08:08:17.047598 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"570006eb-aed7-44e3-89f0-61483bfa5fc3","Type":"ContainerDied","Data":"4365d3d9f73fb118fa6d5b7d6dd4f02f76ed3b0ef373e8fbec3379bc9fa32db8"} Jan 29 08:08:17 crc kubenswrapper[5017]: E0129 08:08:17.153216 5017 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="45f280b96f2e8e8f89505d66ac245b618faa9b3234db33973195c9dc73c5b869" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 29 08:08:17 crc kubenswrapper[5017]: E0129 08:08:17.155160 5017 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="45f280b96f2e8e8f89505d66ac245b618faa9b3234db33973195c9dc73c5b869" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 29 08:08:17 crc kubenswrapper[5017]: E0129 08:08:17.162846 5017 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="45f280b96f2e8e8f89505d66ac245b618faa9b3234db33973195c9dc73c5b869" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 29 08:08:17 crc 
kubenswrapper[5017]: E0129 08:08:17.162920 5017 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell1-conductor-0" podUID="32079be6-0441-460f-b3fa-d05533ee59f5" containerName="nova-cell1-conductor-conductor" Jan 29 08:08:17 crc kubenswrapper[5017]: E0129 08:08:17.663428 5017 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d74781fe0dbda69dd5aeefa4658fea02c1b713d2f990afdde736a5ec00217985" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 29 08:08:17 crc kubenswrapper[5017]: E0129 08:08:17.666231 5017 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d74781fe0dbda69dd5aeefa4658fea02c1b713d2f990afdde736a5ec00217985" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 29 08:08:17 crc kubenswrapper[5017]: E0129 08:08:17.668146 5017 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d74781fe0dbda69dd5aeefa4658fea02c1b713d2f990afdde736a5ec00217985" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 29 08:08:17 crc kubenswrapper[5017]: E0129 08:08:17.668206 5017 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="6aa9495b-d470-41ce-b861-c410fc4e8aaf" containerName="nova-cell0-conductor-conductor" Jan 29 08:08:17 crc kubenswrapper[5017]: I0129 08:08:17.820387 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 29 08:08:17 crc kubenswrapper[5017]: I0129 08:08:17.930800 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kmj6h\" (UniqueName: \"kubernetes.io/projected/6e102c94-f461-4486-b5d9-a304b48eaad2-kube-api-access-kmj6h\") pod \"6e102c94-f461-4486-b5d9-a304b48eaad2\" (UID: \"6e102c94-f461-4486-b5d9-a304b48eaad2\") " Jan 29 08:08:17 crc kubenswrapper[5017]: I0129 08:08:17.932142 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e102c94-f461-4486-b5d9-a304b48eaad2-combined-ca-bundle\") pod \"6e102c94-f461-4486-b5d9-a304b48eaad2\" (UID: \"6e102c94-f461-4486-b5d9-a304b48eaad2\") " Jan 29 08:08:17 crc kubenswrapper[5017]: I0129 08:08:17.932455 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e102c94-f461-4486-b5d9-a304b48eaad2-config-data\") pod \"6e102c94-f461-4486-b5d9-a304b48eaad2\" (UID: \"6e102c94-f461-4486-b5d9-a304b48eaad2\") " Jan 29 08:08:17 crc kubenswrapper[5017]: I0129 08:08:17.951214 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e102c94-f461-4486-b5d9-a304b48eaad2-kube-api-access-kmj6h" (OuterVolumeSpecName: "kube-api-access-kmj6h") pod "6e102c94-f461-4486-b5d9-a304b48eaad2" (UID: "6e102c94-f461-4486-b5d9-a304b48eaad2"). InnerVolumeSpecName "kube-api-access-kmj6h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:08:17 crc kubenswrapper[5017]: I0129 08:08:17.984611 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e102c94-f461-4486-b5d9-a304b48eaad2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6e102c94-f461-4486-b5d9-a304b48eaad2" (UID: "6e102c94-f461-4486-b5d9-a304b48eaad2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:08:17 crc kubenswrapper[5017]: I0129 08:08:17.985620 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e102c94-f461-4486-b5d9-a304b48eaad2-config-data" (OuterVolumeSpecName: "config-data") pod "6e102c94-f461-4486-b5d9-a304b48eaad2" (UID: "6e102c94-f461-4486-b5d9-a304b48eaad2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:08:18 crc kubenswrapper[5017]: I0129 08:08:18.034846 5017 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e102c94-f461-4486-b5d9-a304b48eaad2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:08:18 crc kubenswrapper[5017]: I0129 08:08:18.034909 5017 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e102c94-f461-4486-b5d9-a304b48eaad2-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 08:08:18 crc kubenswrapper[5017]: I0129 08:08:18.034922 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kmj6h\" (UniqueName: \"kubernetes.io/projected/6e102c94-f461-4486-b5d9-a304b48eaad2-kube-api-access-kmj6h\") on node \"crc\" DevicePath \"\"" Jan 29 08:08:18 crc kubenswrapper[5017]: I0129 08:08:18.064878 5017 generic.go:334] "Generic (PLEG): container finished" podID="6e102c94-f461-4486-b5d9-a304b48eaad2" containerID="7a02b75bf67f65f549a9f021ddcec2c267b304bed16acc56dac38ad783b3f5c2" exitCode=0 Jan 29 08:08:18 crc kubenswrapper[5017]: I0129 08:08:18.064939 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"6e102c94-f461-4486-b5d9-a304b48eaad2","Type":"ContainerDied","Data":"7a02b75bf67f65f549a9f021ddcec2c267b304bed16acc56dac38ad783b3f5c2"} Jan 29 08:08:18 crc kubenswrapper[5017]: I0129 08:08:18.064992 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"6e102c94-f461-4486-b5d9-a304b48eaad2","Type":"ContainerDied","Data":"fc079f775234845fdbfb31042ef04e5a6cc738548431de3444504ad35ef433b4"} Jan 29 08:08:18 crc kubenswrapper[5017]: I0129 08:08:18.065017 5017 scope.go:117] "RemoveContainer" containerID="7a02b75bf67f65f549a9f021ddcec2c267b304bed16acc56dac38ad783b3f5c2" Jan 29 08:08:18 crc kubenswrapper[5017]: I0129 08:08:18.065181 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 29 08:08:18 crc kubenswrapper[5017]: I0129 08:08:18.100808 5017 scope.go:117] "RemoveContainer" containerID="7a02b75bf67f65f549a9f021ddcec2c267b304bed16acc56dac38ad783b3f5c2" Jan 29 08:08:18 crc kubenswrapper[5017]: E0129 08:08:18.110723 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a02b75bf67f65f549a9f021ddcec2c267b304bed16acc56dac38ad783b3f5c2\": container with ID starting with 7a02b75bf67f65f549a9f021ddcec2c267b304bed16acc56dac38ad783b3f5c2 not found: ID does not exist" containerID="7a02b75bf67f65f549a9f021ddcec2c267b304bed16acc56dac38ad783b3f5c2" Jan 29 08:08:18 crc kubenswrapper[5017]: I0129 08:08:18.110802 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a02b75bf67f65f549a9f021ddcec2c267b304bed16acc56dac38ad783b3f5c2"} err="failed to get container status \"7a02b75bf67f65f549a9f021ddcec2c267b304bed16acc56dac38ad783b3f5c2\": rpc error: code = NotFound desc = could not find container \"7a02b75bf67f65f549a9f021ddcec2c267b304bed16acc56dac38ad783b3f5c2\": container with ID starting with 7a02b75bf67f65f549a9f021ddcec2c267b304bed16acc56dac38ad783b3f5c2 not found: ID does not exist" Jan 29 08:08:18 crc kubenswrapper[5017]: I0129 08:08:18.119495 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 08:08:18 crc kubenswrapper[5017]: I0129 08:08:18.137037 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 08:08:18 crc kubenswrapper[5017]: I0129 08:08:18.151116 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 08:08:18 crc kubenswrapper[5017]: E0129 08:08:18.151778 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70b68d00-f08d-4361-8f41-4d14d9fabe0a" containerName="extract-content" Jan 29 08:08:18 crc kubenswrapper[5017]: I0129 08:08:18.151808 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="70b68d00-f08d-4361-8f41-4d14d9fabe0a" containerName="extract-content" Jan 29 08:08:18 crc kubenswrapper[5017]: E0129 08:08:18.151827 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afeb803e-6e7b-42f8-a146-7fbc986e4a80" containerName="init" Jan 29 08:08:18 crc kubenswrapper[5017]: I0129 08:08:18.151836 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="afeb803e-6e7b-42f8-a146-7fbc986e4a80" containerName="init" Jan 29 08:08:18 crc kubenswrapper[5017]: E0129 08:08:18.151849 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afeb803e-6e7b-42f8-a146-7fbc986e4a80" containerName="dnsmasq-dns" Jan 29 08:08:18 crc kubenswrapper[5017]: I0129 08:08:18.151857 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="afeb803e-6e7b-42f8-a146-7fbc986e4a80" containerName="dnsmasq-dns" Jan 29 08:08:18 crc kubenswrapper[5017]: E0129 08:08:18.151881 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e102c94-f461-4486-b5d9-a304b48eaad2" containerName="nova-cell1-novncproxy-novncproxy" Jan 29 08:08:18 crc kubenswrapper[5017]: I0129 08:08:18.151902 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e102c94-f461-4486-b5d9-a304b48eaad2" containerName="nova-cell1-novncproxy-novncproxy" Jan 29 08:08:18 crc kubenswrapper[5017]: E0129 08:08:18.151912 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70b68d00-f08d-4361-8f41-4d14d9fabe0a" 
containerName="extract-utilities" Jan 29 08:08:18 crc kubenswrapper[5017]: I0129 08:08:18.151919 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="70b68d00-f08d-4361-8f41-4d14d9fabe0a" containerName="extract-utilities" Jan 29 08:08:18 crc kubenswrapper[5017]: E0129 08:08:18.151935 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70b68d00-f08d-4361-8f41-4d14d9fabe0a" containerName="registry-server" Jan 29 08:08:18 crc kubenswrapper[5017]: I0129 08:08:18.151944 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="70b68d00-f08d-4361-8f41-4d14d9fabe0a" containerName="registry-server" Jan 29 08:08:18 crc kubenswrapper[5017]: I0129 08:08:18.152244 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e102c94-f461-4486-b5d9-a304b48eaad2" containerName="nova-cell1-novncproxy-novncproxy" Jan 29 08:08:18 crc kubenswrapper[5017]: I0129 08:08:18.152263 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="afeb803e-6e7b-42f8-a146-7fbc986e4a80" containerName="dnsmasq-dns" Jan 29 08:08:18 crc kubenswrapper[5017]: I0129 08:08:18.152276 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="70b68d00-f08d-4361-8f41-4d14d9fabe0a" containerName="registry-server" Jan 29 08:08:18 crc kubenswrapper[5017]: I0129 08:08:18.153213 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 29 08:08:18 crc kubenswrapper[5017]: I0129 08:08:18.155362 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 29 08:08:18 crc kubenswrapper[5017]: I0129 08:08:18.160598 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 08:08:18 crc kubenswrapper[5017]: I0129 08:08:18.239349 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/451d181b-38b4-40f2-a648-d9b0df76fdc5-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"451d181b-38b4-40f2-a648-d9b0df76fdc5\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 08:08:18 crc kubenswrapper[5017]: I0129 08:08:18.239725 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/451d181b-38b4-40f2-a648-d9b0df76fdc5-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"451d181b-38b4-40f2-a648-d9b0df76fdc5\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 08:08:18 crc kubenswrapper[5017]: I0129 08:08:18.239871 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlx47\" (UniqueName: \"kubernetes.io/projected/451d181b-38b4-40f2-a648-d9b0df76fdc5-kube-api-access-tlx47\") pod \"nova-cell1-novncproxy-0\" (UID: \"451d181b-38b4-40f2-a648-d9b0df76fdc5\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 08:08:18 crc kubenswrapper[5017]: I0129 08:08:18.333239 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e102c94-f461-4486-b5d9-a304b48eaad2" path="/var/lib/kubelet/pods/6e102c94-f461-4486-b5d9-a304b48eaad2/volumes" Jan 29 08:08:18 crc kubenswrapper[5017]: I0129 08:08:18.344558 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tlx47\" (UniqueName: \"kubernetes.io/projected/451d181b-38b4-40f2-a648-d9b0df76fdc5-kube-api-access-tlx47\") pod \"nova-cell1-novncproxy-0\" (UID: 
\"451d181b-38b4-40f2-a648-d9b0df76fdc5\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 08:08:18 crc kubenswrapper[5017]: I0129 08:08:18.344679 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/451d181b-38b4-40f2-a648-d9b0df76fdc5-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"451d181b-38b4-40f2-a648-d9b0df76fdc5\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 08:08:18 crc kubenswrapper[5017]: I0129 08:08:18.344714 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/451d181b-38b4-40f2-a648-d9b0df76fdc5-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"451d181b-38b4-40f2-a648-d9b0df76fdc5\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 08:08:18 crc kubenswrapper[5017]: I0129 08:08:18.356353 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/451d181b-38b4-40f2-a648-d9b0df76fdc5-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"451d181b-38b4-40f2-a648-d9b0df76fdc5\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 08:08:18 crc kubenswrapper[5017]: I0129 08:08:18.356768 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/451d181b-38b4-40f2-a648-d9b0df76fdc5-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"451d181b-38b4-40f2-a648-d9b0df76fdc5\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 08:08:18 crc kubenswrapper[5017]: I0129 08:08:18.384066 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tlx47\" (UniqueName: \"kubernetes.io/projected/451d181b-38b4-40f2-a648-d9b0df76fdc5-kube-api-access-tlx47\") pod \"nova-cell1-novncproxy-0\" (UID: \"451d181b-38b4-40f2-a648-d9b0df76fdc5\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 08:08:18 crc kubenswrapper[5017]: I0129 08:08:18.560185 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 29 08:08:18 crc kubenswrapper[5017]: I0129 08:08:18.606928 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 08:08:18 crc kubenswrapper[5017]: I0129 08:08:18.768634 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bd61008-f300-4b3d-afee-3c9c00f3bb43-combined-ca-bundle\") pod \"0bd61008-f300-4b3d-afee-3c9c00f3bb43\" (UID: \"0bd61008-f300-4b3d-afee-3c9c00f3bb43\") " Jan 29 08:08:18 crc kubenswrapper[5017]: I0129 08:08:18.769143 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bd61008-f300-4b3d-afee-3c9c00f3bb43-config-data\") pod \"0bd61008-f300-4b3d-afee-3c9c00f3bb43\" (UID: \"0bd61008-f300-4b3d-afee-3c9c00f3bb43\") " Jan 29 08:08:18 crc kubenswrapper[5017]: I0129 08:08:18.769258 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zv6db\" (UniqueName: \"kubernetes.io/projected/0bd61008-f300-4b3d-afee-3c9c00f3bb43-kube-api-access-zv6db\") pod \"0bd61008-f300-4b3d-afee-3c9c00f3bb43\" (UID: \"0bd61008-f300-4b3d-afee-3c9c00f3bb43\") " Jan 29 08:08:18 crc kubenswrapper[5017]: I0129 08:08:18.808321 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bd61008-f300-4b3d-afee-3c9c00f3bb43-kube-api-access-zv6db" (OuterVolumeSpecName: "kube-api-access-zv6db") pod "0bd61008-f300-4b3d-afee-3c9c00f3bb43" (UID: "0bd61008-f300-4b3d-afee-3c9c00f3bb43"). InnerVolumeSpecName "kube-api-access-zv6db". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:08:18 crc kubenswrapper[5017]: I0129 08:08:18.873149 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bd61008-f300-4b3d-afee-3c9c00f3bb43-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0bd61008-f300-4b3d-afee-3c9c00f3bb43" (UID: "0bd61008-f300-4b3d-afee-3c9c00f3bb43"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:08:18 crc kubenswrapper[5017]: I0129 08:08:18.874729 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zv6db\" (UniqueName: \"kubernetes.io/projected/0bd61008-f300-4b3d-afee-3c9c00f3bb43-kube-api-access-zv6db\") on node \"crc\" DevicePath \"\"" Jan 29 08:08:18 crc kubenswrapper[5017]: I0129 08:08:18.874764 5017 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bd61008-f300-4b3d-afee-3c9c00f3bb43-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:08:18 crc kubenswrapper[5017]: I0129 08:08:18.895402 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bd61008-f300-4b3d-afee-3c9c00f3bb43-config-data" (OuterVolumeSpecName: "config-data") pod "0bd61008-f300-4b3d-afee-3c9c00f3bb43" (UID: "0bd61008-f300-4b3d-afee-3c9c00f3bb43"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:08:18 crc kubenswrapper[5017]: I0129 08:08:18.977369 5017 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bd61008-f300-4b3d-afee-3c9c00f3bb43-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 08:08:19 crc kubenswrapper[5017]: I0129 08:08:19.080294 5017 generic.go:334] "Generic (PLEG): container finished" podID="0bd61008-f300-4b3d-afee-3c9c00f3bb43" containerID="84c3a1d751bd88266e5d7628c6c3f95f08ae7f2c22d3a011b12bdd9731c34d7d" exitCode=0 Jan 29 08:08:19 crc kubenswrapper[5017]: I0129 08:08:19.080359 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0bd61008-f300-4b3d-afee-3c9c00f3bb43","Type":"ContainerDied","Data":"84c3a1d751bd88266e5d7628c6c3f95f08ae7f2c22d3a011b12bdd9731c34d7d"} Jan 29 08:08:19 crc kubenswrapper[5017]: I0129 08:08:19.080449 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0bd61008-f300-4b3d-afee-3c9c00f3bb43","Type":"ContainerDied","Data":"cec999a3f2ea500e5fc42996a2fb478417b5cad3c073a3c0ca255438a22ec885"} Jan 29 08:08:19 crc kubenswrapper[5017]: I0129 08:08:19.080505 5017 scope.go:117] "RemoveContainer" containerID="84c3a1d751bd88266e5d7628c6c3f95f08ae7f2c22d3a011b12bdd9731c34d7d" Jan 29 08:08:19 crc kubenswrapper[5017]: I0129 08:08:19.080382 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 08:08:19 crc kubenswrapper[5017]: I0129 08:08:19.123722 5017 scope.go:117] "RemoveContainer" containerID="84c3a1d751bd88266e5d7628c6c3f95f08ae7f2c22d3a011b12bdd9731c34d7d" Jan 29 08:08:19 crc kubenswrapper[5017]: E0129 08:08:19.124405 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84c3a1d751bd88266e5d7628c6c3f95f08ae7f2c22d3a011b12bdd9731c34d7d\": container with ID starting with 84c3a1d751bd88266e5d7628c6c3f95f08ae7f2c22d3a011b12bdd9731c34d7d not found: ID does not exist" containerID="84c3a1d751bd88266e5d7628c6c3f95f08ae7f2c22d3a011b12bdd9731c34d7d" Jan 29 08:08:19 crc kubenswrapper[5017]: I0129 08:08:19.124472 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84c3a1d751bd88266e5d7628c6c3f95f08ae7f2c22d3a011b12bdd9731c34d7d"} err="failed to get container status \"84c3a1d751bd88266e5d7628c6c3f95f08ae7f2c22d3a011b12bdd9731c34d7d\": rpc error: code = NotFound desc = could not find container \"84c3a1d751bd88266e5d7628c6c3f95f08ae7f2c22d3a011b12bdd9731c34d7d\": container with ID starting with 84c3a1d751bd88266e5d7628c6c3f95f08ae7f2c22d3a011b12bdd9731c34d7d not found: ID does not exist" Jan 29 08:08:19 crc kubenswrapper[5017]: I0129 08:08:19.129714 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 08:08:19 crc kubenswrapper[5017]: I0129 08:08:19.141237 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 08:08:19 crc kubenswrapper[5017]: I0129 08:08:19.152007 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 08:08:19 crc kubenswrapper[5017]: I0129 08:08:19.174915 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 08:08:19 crc kubenswrapper[5017]: E0129 08:08:19.175648 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bd61008-f300-4b3d-afee-3c9c00f3bb43" 
containerName="nova-scheduler-scheduler" Jan 29 08:08:19 crc kubenswrapper[5017]: I0129 08:08:19.175675 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bd61008-f300-4b3d-afee-3c9c00f3bb43" containerName="nova-scheduler-scheduler" Jan 29 08:08:19 crc kubenswrapper[5017]: I0129 08:08:19.175900 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bd61008-f300-4b3d-afee-3c9c00f3bb43" containerName="nova-scheduler-scheduler" Jan 29 08:08:19 crc kubenswrapper[5017]: I0129 08:08:19.176845 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 08:08:19 crc kubenswrapper[5017]: I0129 08:08:19.191545 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 29 08:08:19 crc kubenswrapper[5017]: I0129 08:08:19.238488 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 08:08:19 crc kubenswrapper[5017]: I0129 08:08:19.285341 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98b35242-2511-4cfc-9e84-1ad56cae8e44-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"98b35242-2511-4cfc-9e84-1ad56cae8e44\") " pod="openstack/nova-scheduler-0" Jan 29 08:08:19 crc kubenswrapper[5017]: I0129 08:08:19.285481 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98b35242-2511-4cfc-9e84-1ad56cae8e44-config-data\") pod \"nova-scheduler-0\" (UID: \"98b35242-2511-4cfc-9e84-1ad56cae8e44\") " pod="openstack/nova-scheduler-0" Jan 29 08:08:19 crc kubenswrapper[5017]: I0129 08:08:19.285600 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmd95\" (UniqueName: \"kubernetes.io/projected/98b35242-2511-4cfc-9e84-1ad56cae8e44-kube-api-access-cmd95\") pod \"nova-scheduler-0\" (UID: \"98b35242-2511-4cfc-9e84-1ad56cae8e44\") " pod="openstack/nova-scheduler-0" Jan 29 08:08:19 crc kubenswrapper[5017]: I0129 08:08:19.387239 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmd95\" (UniqueName: \"kubernetes.io/projected/98b35242-2511-4cfc-9e84-1ad56cae8e44-kube-api-access-cmd95\") pod \"nova-scheduler-0\" (UID: \"98b35242-2511-4cfc-9e84-1ad56cae8e44\") " pod="openstack/nova-scheduler-0" Jan 29 08:08:19 crc kubenswrapper[5017]: I0129 08:08:19.387345 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98b35242-2511-4cfc-9e84-1ad56cae8e44-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"98b35242-2511-4cfc-9e84-1ad56cae8e44\") " pod="openstack/nova-scheduler-0" Jan 29 08:08:19 crc kubenswrapper[5017]: I0129 08:08:19.387421 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98b35242-2511-4cfc-9e84-1ad56cae8e44-config-data\") pod \"nova-scheduler-0\" (UID: \"98b35242-2511-4cfc-9e84-1ad56cae8e44\") " pod="openstack/nova-scheduler-0" Jan 29 08:08:19 crc kubenswrapper[5017]: I0129 08:08:19.395331 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98b35242-2511-4cfc-9e84-1ad56cae8e44-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"98b35242-2511-4cfc-9e84-1ad56cae8e44\") " 
pod="openstack/nova-scheduler-0" Jan 29 08:08:19 crc kubenswrapper[5017]: I0129 08:08:19.395782 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98b35242-2511-4cfc-9e84-1ad56cae8e44-config-data\") pod \"nova-scheduler-0\" (UID: \"98b35242-2511-4cfc-9e84-1ad56cae8e44\") " pod="openstack/nova-scheduler-0" Jan 29 08:08:19 crc kubenswrapper[5017]: I0129 08:08:19.420824 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmd95\" (UniqueName: \"kubernetes.io/projected/98b35242-2511-4cfc-9e84-1ad56cae8e44-kube-api-access-cmd95\") pod \"nova-scheduler-0\" (UID: \"98b35242-2511-4cfc-9e84-1ad56cae8e44\") " pod="openstack/nova-scheduler-0" Jan 29 08:08:19 crc kubenswrapper[5017]: I0129 08:08:19.502788 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 08:08:19 crc kubenswrapper[5017]: I0129 08:08:19.645461 5017 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="570006eb-aed7-44e3-89f0-61483bfa5fc3" containerName="nova-metadata-log" probeResult="failure" output="Get \"http://10.217.1.70:8775/\": read tcp 10.217.0.2:34346->10.217.1.70:8775: read: connection reset by peer" Jan 29 08:08:19 crc kubenswrapper[5017]: I0129 08:08:19.646364 5017 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="570006eb-aed7-44e3-89f0-61483bfa5fc3" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"http://10.217.1.70:8775/\": read tcp 10.217.0.2:34352->10.217.1.70:8775: read: connection reset by peer" Jan 29 08:08:20 crc kubenswrapper[5017]: I0129 08:08:20.046286 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 08:08:20 crc kubenswrapper[5017]: W0129 08:08:20.069609 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod98b35242_2511_4cfc_9e84_1ad56cae8e44.slice/crio-9bb97df468372d172c9bbb51028c97e82b6a1deedd64e731c0126b6a90ee1e89 WatchSource:0}: Error finding container 9bb97df468372d172c9bbb51028c97e82b6a1deedd64e731c0126b6a90ee1e89: Status 404 returned error can't find the container with id 9bb97df468372d172c9bbb51028c97e82b6a1deedd64e731c0126b6a90ee1e89 Jan 29 08:08:20 crc kubenswrapper[5017]: I0129 08:08:20.108031 5017 generic.go:334] "Generic (PLEG): container finished" podID="570006eb-aed7-44e3-89f0-61483bfa5fc3" containerID="dcd0b9896f21e9a2551116172d6a8327bc3b1ab3cee5858973770899221a43d8" exitCode=0 Jan 29 08:08:20 crc kubenswrapper[5017]: I0129 08:08:20.109478 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"570006eb-aed7-44e3-89f0-61483bfa5fc3","Type":"ContainerDied","Data":"dcd0b9896f21e9a2551116172d6a8327bc3b1ab3cee5858973770899221a43d8"} Jan 29 08:08:20 crc kubenswrapper[5017]: I0129 08:08:20.120051 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"451d181b-38b4-40f2-a648-d9b0df76fdc5","Type":"ContainerStarted","Data":"fca141491133306bde559781d5127ab613be93307809ab3701f1d8aae97900a2"} Jan 29 08:08:20 crc kubenswrapper[5017]: I0129 08:08:20.120339 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"451d181b-38b4-40f2-a648-d9b0df76fdc5","Type":"ContainerStarted","Data":"335b809da002f1245969f7f658b1b835cb5cc4eefdfd8b311c9df344687e06a2"} Jan 29 
08:08:20 crc kubenswrapper[5017]: I0129 08:08:20.127980 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"98b35242-2511-4cfc-9e84-1ad56cae8e44","Type":"ContainerStarted","Data":"9bb97df468372d172c9bbb51028c97e82b6a1deedd64e731c0126b6a90ee1e89"} Jan 29 08:08:20 crc kubenswrapper[5017]: I0129 08:08:20.147228 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.147200251 podStartE2EDuration="2.147200251s" podCreationTimestamp="2026-01-29 08:08:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:08:20.13883977 +0000 UTC m=+5586.513287370" watchObservedRunningTime="2026-01-29 08:08:20.147200251 +0000 UTC m=+5586.521647851" Jan 29 08:08:20 crc kubenswrapper[5017]: I0129 08:08:20.149258 5017 generic.go:334] "Generic (PLEG): container finished" podID="89895d2a-1660-449b-8bf2-ea704cb93dd1" containerID="f2269f40906caac53176c413f7b85fd16c81dcd3c9ac2beae85ded2767b45b89" exitCode=0 Jan 29 08:08:20 crc kubenswrapper[5017]: I0129 08:08:20.149396 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"89895d2a-1660-449b-8bf2-ea704cb93dd1","Type":"ContainerDied","Data":"f2269f40906caac53176c413f7b85fd16c81dcd3c9ac2beae85ded2767b45b89"} Jan 29 08:08:20 crc kubenswrapper[5017]: I0129 08:08:20.149661 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 08:08:20 crc kubenswrapper[5017]: I0129 08:08:20.330571 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2cfds\" (UniqueName: \"kubernetes.io/projected/570006eb-aed7-44e3-89f0-61483bfa5fc3-kube-api-access-2cfds\") pod \"570006eb-aed7-44e3-89f0-61483bfa5fc3\" (UID: \"570006eb-aed7-44e3-89f0-61483bfa5fc3\") " Jan 29 08:08:20 crc kubenswrapper[5017]: I0129 08:08:20.330974 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/570006eb-aed7-44e3-89f0-61483bfa5fc3-combined-ca-bundle\") pod \"570006eb-aed7-44e3-89f0-61483bfa5fc3\" (UID: \"570006eb-aed7-44e3-89f0-61483bfa5fc3\") " Jan 29 08:08:20 crc kubenswrapper[5017]: I0129 08:08:20.331099 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/570006eb-aed7-44e3-89f0-61483bfa5fc3-config-data\") pod \"570006eb-aed7-44e3-89f0-61483bfa5fc3\" (UID: \"570006eb-aed7-44e3-89f0-61483bfa5fc3\") " Jan 29 08:08:20 crc kubenswrapper[5017]: I0129 08:08:20.331269 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/570006eb-aed7-44e3-89f0-61483bfa5fc3-logs\") pod \"570006eb-aed7-44e3-89f0-61483bfa5fc3\" (UID: \"570006eb-aed7-44e3-89f0-61483bfa5fc3\") " Jan 29 08:08:20 crc kubenswrapper[5017]: I0129 08:08:20.332427 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/570006eb-aed7-44e3-89f0-61483bfa5fc3-logs" (OuterVolumeSpecName: "logs") pod "570006eb-aed7-44e3-89f0-61483bfa5fc3" (UID: "570006eb-aed7-44e3-89f0-61483bfa5fc3"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:08:20 crc kubenswrapper[5017]: I0129 08:08:20.336034 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0bd61008-f300-4b3d-afee-3c9c00f3bb43" path="/var/lib/kubelet/pods/0bd61008-f300-4b3d-afee-3c9c00f3bb43/volumes" Jan 29 08:08:20 crc kubenswrapper[5017]: I0129 08:08:20.355148 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/570006eb-aed7-44e3-89f0-61483bfa5fc3-kube-api-access-2cfds" (OuterVolumeSpecName: "kube-api-access-2cfds") pod "570006eb-aed7-44e3-89f0-61483bfa5fc3" (UID: "570006eb-aed7-44e3-89f0-61483bfa5fc3"). InnerVolumeSpecName "kube-api-access-2cfds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:08:20 crc kubenswrapper[5017]: I0129 08:08:20.370042 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/570006eb-aed7-44e3-89f0-61483bfa5fc3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "570006eb-aed7-44e3-89f0-61483bfa5fc3" (UID: "570006eb-aed7-44e3-89f0-61483bfa5fc3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:08:20 crc kubenswrapper[5017]: I0129 08:08:20.377386 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/570006eb-aed7-44e3-89f0-61483bfa5fc3-config-data" (OuterVolumeSpecName: "config-data") pod "570006eb-aed7-44e3-89f0-61483bfa5fc3" (UID: "570006eb-aed7-44e3-89f0-61483bfa5fc3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:08:20 crc kubenswrapper[5017]: I0129 08:08:20.433533 5017 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/570006eb-aed7-44e3-89f0-61483bfa5fc3-logs\") on node \"crc\" DevicePath \"\"" Jan 29 08:08:20 crc kubenswrapper[5017]: I0129 08:08:20.433575 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2cfds\" (UniqueName: \"kubernetes.io/projected/570006eb-aed7-44e3-89f0-61483bfa5fc3-kube-api-access-2cfds\") on node \"crc\" DevicePath \"\"" Jan 29 08:08:20 crc kubenswrapper[5017]: I0129 08:08:20.433588 5017 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/570006eb-aed7-44e3-89f0-61483bfa5fc3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:08:20 crc kubenswrapper[5017]: I0129 08:08:20.433598 5017 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/570006eb-aed7-44e3-89f0-61483bfa5fc3-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 08:08:20 crc kubenswrapper[5017]: I0129 08:08:20.458902 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 08:08:20 crc kubenswrapper[5017]: I0129 08:08:20.638508 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89895d2a-1660-449b-8bf2-ea704cb93dd1-logs" (OuterVolumeSpecName: "logs") pod "89895d2a-1660-449b-8bf2-ea704cb93dd1" (UID: "89895d2a-1660-449b-8bf2-ea704cb93dd1"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:08:20 crc kubenswrapper[5017]: I0129 08:08:20.638024 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89895d2a-1660-449b-8bf2-ea704cb93dd1-logs\") pod \"89895d2a-1660-449b-8bf2-ea704cb93dd1\" (UID: \"89895d2a-1660-449b-8bf2-ea704cb93dd1\") " Jan 29 08:08:20 crc kubenswrapper[5017]: I0129 08:08:20.638783 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89895d2a-1660-449b-8bf2-ea704cb93dd1-config-data\") pod \"89895d2a-1660-449b-8bf2-ea704cb93dd1\" (UID: \"89895d2a-1660-449b-8bf2-ea704cb93dd1\") " Jan 29 08:08:20 crc kubenswrapper[5017]: I0129 08:08:20.639232 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mbqkm\" (UniqueName: \"kubernetes.io/projected/89895d2a-1660-449b-8bf2-ea704cb93dd1-kube-api-access-mbqkm\") pod \"89895d2a-1660-449b-8bf2-ea704cb93dd1\" (UID: \"89895d2a-1660-449b-8bf2-ea704cb93dd1\") " Jan 29 08:08:20 crc kubenswrapper[5017]: I0129 08:08:20.639413 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89895d2a-1660-449b-8bf2-ea704cb93dd1-combined-ca-bundle\") pod \"89895d2a-1660-449b-8bf2-ea704cb93dd1\" (UID: \"89895d2a-1660-449b-8bf2-ea704cb93dd1\") " Jan 29 08:08:20 crc kubenswrapper[5017]: I0129 08:08:20.640262 5017 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89895d2a-1660-449b-8bf2-ea704cb93dd1-logs\") on node \"crc\" DevicePath \"\"" Jan 29 08:08:20 crc kubenswrapper[5017]: I0129 08:08:20.654421 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89895d2a-1660-449b-8bf2-ea704cb93dd1-kube-api-access-mbqkm" (OuterVolumeSpecName: "kube-api-access-mbqkm") pod "89895d2a-1660-449b-8bf2-ea704cb93dd1" (UID: "89895d2a-1660-449b-8bf2-ea704cb93dd1"). InnerVolumeSpecName "kube-api-access-mbqkm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:08:20 crc kubenswrapper[5017]: I0129 08:08:20.716352 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89895d2a-1660-449b-8bf2-ea704cb93dd1-config-data" (OuterVolumeSpecName: "config-data") pod "89895d2a-1660-449b-8bf2-ea704cb93dd1" (UID: "89895d2a-1660-449b-8bf2-ea704cb93dd1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:08:20 crc kubenswrapper[5017]: I0129 08:08:20.737235 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89895d2a-1660-449b-8bf2-ea704cb93dd1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "89895d2a-1660-449b-8bf2-ea704cb93dd1" (UID: "89895d2a-1660-449b-8bf2-ea704cb93dd1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:08:20 crc kubenswrapper[5017]: I0129 08:08:20.742440 5017 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89895d2a-1660-449b-8bf2-ea704cb93dd1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:08:20 crc kubenswrapper[5017]: I0129 08:08:20.742576 5017 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89895d2a-1660-449b-8bf2-ea704cb93dd1-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 08:08:20 crc kubenswrapper[5017]: I0129 08:08:20.742676 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mbqkm\" (UniqueName: \"kubernetes.io/projected/89895d2a-1660-449b-8bf2-ea704cb93dd1-kube-api-access-mbqkm\") on node \"crc\" DevicePath \"\"" Jan 29 08:08:21 crc kubenswrapper[5017]: I0129 08:08:21.162720 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"89895d2a-1660-449b-8bf2-ea704cb93dd1","Type":"ContainerDied","Data":"c5afa3e6ceab2ae4266e354e7c7544c10e3bf5e5ed1050a254e6d53e6ddc52c1"} Jan 29 08:08:21 crc kubenswrapper[5017]: I0129 08:08:21.163393 5017 scope.go:117] "RemoveContainer" containerID="f2269f40906caac53176c413f7b85fd16c81dcd3c9ac2beae85ded2767b45b89" Jan 29 08:08:21 crc kubenswrapper[5017]: I0129 08:08:21.162728 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 08:08:21 crc kubenswrapper[5017]: I0129 08:08:21.167059 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"570006eb-aed7-44e3-89f0-61483bfa5fc3","Type":"ContainerDied","Data":"c459eaf9c95dd5a8a0a1c621945ebad32cf4b222371aac8493c32443641680d2"} Jan 29 08:08:21 crc kubenswrapper[5017]: I0129 08:08:21.167111 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 08:08:21 crc kubenswrapper[5017]: I0129 08:08:21.175971 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"98b35242-2511-4cfc-9e84-1ad56cae8e44","Type":"ContainerStarted","Data":"77130bd3da419a8add4c59ea5cf4fed12c1bfca8c5dc5f55bc5171c907b91abc"} Jan 29 08:08:21 crc kubenswrapper[5017]: I0129 08:08:21.201461 5017 scope.go:117] "RemoveContainer" containerID="282b0d36dfa465cecc6e292600dda239b9da90c42c0091e35a8c4e10eab4296c" Jan 29 08:08:21 crc kubenswrapper[5017]: I0129 08:08:21.210534 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.210498529 podStartE2EDuration="2.210498529s" podCreationTimestamp="2026-01-29 08:08:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:08:21.19760541 +0000 UTC m=+5587.572053020" watchObservedRunningTime="2026-01-29 08:08:21.210498529 +0000 UTC m=+5587.584946169" Jan 29 08:08:21 crc kubenswrapper[5017]: I0129 08:08:21.241842 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 08:08:21 crc kubenswrapper[5017]: I0129 08:08:21.266345 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 08:08:21 crc kubenswrapper[5017]: I0129 08:08:21.277845 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 29 08:08:21 crc kubenswrapper[5017]: I0129 08:08:21.288434 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 29 08:08:21 crc kubenswrapper[5017]: I0129 08:08:21.319914 5017 scope.go:117] "RemoveContainer" containerID="dcd0b9896f21e9a2551116172d6a8327bc3b1ab3cee5858973770899221a43d8" Jan 29 08:08:21 crc kubenswrapper[5017]: I0129 08:08:21.365140 5017 scope.go:117] "RemoveContainer" containerID="4365d3d9f73fb118fa6d5b7d6dd4f02f76ed3b0ef373e8fbec3379bc9fa32db8" Jan 29 08:08:21 crc kubenswrapper[5017]: I0129 08:08:21.370055 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 29 08:08:21 crc kubenswrapper[5017]: E0129 08:08:21.371397 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="570006eb-aed7-44e3-89f0-61483bfa5fc3" containerName="nova-metadata-metadata" Jan 29 08:08:21 crc kubenswrapper[5017]: I0129 08:08:21.371428 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="570006eb-aed7-44e3-89f0-61483bfa5fc3" containerName="nova-metadata-metadata" Jan 29 08:08:21 crc kubenswrapper[5017]: E0129 08:08:21.371455 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89895d2a-1660-449b-8bf2-ea704cb93dd1" containerName="nova-api-log" Jan 29 08:08:21 crc kubenswrapper[5017]: I0129 08:08:21.371470 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="89895d2a-1660-449b-8bf2-ea704cb93dd1" containerName="nova-api-log" Jan 29 08:08:21 crc kubenswrapper[5017]: E0129 08:08:21.371536 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="570006eb-aed7-44e3-89f0-61483bfa5fc3" containerName="nova-metadata-log" Jan 29 08:08:21 crc kubenswrapper[5017]: I0129 08:08:21.371547 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="570006eb-aed7-44e3-89f0-61483bfa5fc3" containerName="nova-metadata-log" Jan 29 08:08:21 crc kubenswrapper[5017]: E0129 08:08:21.371564 5017 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="89895d2a-1660-449b-8bf2-ea704cb93dd1" containerName="nova-api-api" Jan 29 08:08:21 crc kubenswrapper[5017]: I0129 08:08:21.371573 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="89895d2a-1660-449b-8bf2-ea704cb93dd1" containerName="nova-api-api" Jan 29 08:08:21 crc kubenswrapper[5017]: I0129 08:08:21.372068 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="89895d2a-1660-449b-8bf2-ea704cb93dd1" containerName="nova-api-log" Jan 29 08:08:21 crc kubenswrapper[5017]: I0129 08:08:21.372115 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="570006eb-aed7-44e3-89f0-61483bfa5fc3" containerName="nova-metadata-metadata" Jan 29 08:08:21 crc kubenswrapper[5017]: I0129 08:08:21.372137 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="89895d2a-1660-449b-8bf2-ea704cb93dd1" containerName="nova-api-api" Jan 29 08:08:21 crc kubenswrapper[5017]: I0129 08:08:21.372162 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="570006eb-aed7-44e3-89f0-61483bfa5fc3" containerName="nova-metadata-log" Jan 29 08:08:21 crc kubenswrapper[5017]: I0129 08:08:21.378471 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 08:08:21 crc kubenswrapper[5017]: I0129 08:08:21.400340 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 29 08:08:21 crc kubenswrapper[5017]: I0129 08:08:21.434694 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 29 08:08:21 crc kubenswrapper[5017]: I0129 08:08:21.437489 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 08:08:21 crc kubenswrapper[5017]: I0129 08:08:21.440377 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 29 08:08:21 crc kubenswrapper[5017]: I0129 08:08:21.446935 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 08:08:21 crc kubenswrapper[5017]: I0129 08:08:21.471154 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 29 08:08:21 crc kubenswrapper[5017]: I0129 08:08:21.481726 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86c87\" (UniqueName: \"kubernetes.io/projected/44ee44e9-325e-4aaa-9523-163177d2f47c-kube-api-access-86c87\") pod \"nova-metadata-0\" (UID: \"44ee44e9-325e-4aaa-9523-163177d2f47c\") " pod="openstack/nova-metadata-0" Jan 29 08:08:21 crc kubenswrapper[5017]: I0129 08:08:21.481807 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/44ee44e9-325e-4aaa-9523-163177d2f47c-logs\") pod \"nova-metadata-0\" (UID: \"44ee44e9-325e-4aaa-9523-163177d2f47c\") " pod="openstack/nova-metadata-0" Jan 29 08:08:21 crc kubenswrapper[5017]: I0129 08:08:21.481835 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44ee44e9-325e-4aaa-9523-163177d2f47c-config-data\") pod \"nova-metadata-0\" (UID: \"44ee44e9-325e-4aaa-9523-163177d2f47c\") " pod="openstack/nova-metadata-0" Jan 29 08:08:21 crc kubenswrapper[5017]: I0129 08:08:21.481895 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/44ee44e9-325e-4aaa-9523-163177d2f47c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"44ee44e9-325e-4aaa-9523-163177d2f47c\") " pod="openstack/nova-metadata-0" Jan 29 08:08:21 crc kubenswrapper[5017]: I0129 08:08:21.585350 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44ee44e9-325e-4aaa-9523-163177d2f47c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"44ee44e9-325e-4aaa-9523-163177d2f47c\") " pod="openstack/nova-metadata-0" Jan 29 08:08:21 crc kubenswrapper[5017]: I0129 08:08:21.585434 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8fb7390-d44e-4c07-9eec-ee2d0856adc3-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a8fb7390-d44e-4c07-9eec-ee2d0856adc3\") " pod="openstack/nova-api-0" Jan 29 08:08:21 crc kubenswrapper[5017]: I0129 08:08:21.585527 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a8fb7390-d44e-4c07-9eec-ee2d0856adc3-logs\") pod \"nova-api-0\" (UID: \"a8fb7390-d44e-4c07-9eec-ee2d0856adc3\") " pod="openstack/nova-api-0" Jan 29 08:08:21 crc kubenswrapper[5017]: I0129 08:08:21.585615 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86c87\" (UniqueName: \"kubernetes.io/projected/44ee44e9-325e-4aaa-9523-163177d2f47c-kube-api-access-86c87\") pod \"nova-metadata-0\" (UID: \"44ee44e9-325e-4aaa-9523-163177d2f47c\") " pod="openstack/nova-metadata-0" Jan 29 08:08:21 crc kubenswrapper[5017]: I0129 08:08:21.585687 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2kb5\" (UniqueName: \"kubernetes.io/projected/a8fb7390-d44e-4c07-9eec-ee2d0856adc3-kube-api-access-l2kb5\") pod \"nova-api-0\" (UID: \"a8fb7390-d44e-4c07-9eec-ee2d0856adc3\") " pod="openstack/nova-api-0" Jan 29 08:08:21 crc kubenswrapper[5017]: I0129 08:08:21.585757 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/44ee44e9-325e-4aaa-9523-163177d2f47c-logs\") pod \"nova-metadata-0\" (UID: \"44ee44e9-325e-4aaa-9523-163177d2f47c\") " pod="openstack/nova-metadata-0" Jan 29 08:08:21 crc kubenswrapper[5017]: I0129 08:08:21.585781 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8fb7390-d44e-4c07-9eec-ee2d0856adc3-config-data\") pod \"nova-api-0\" (UID: \"a8fb7390-d44e-4c07-9eec-ee2d0856adc3\") " pod="openstack/nova-api-0" Jan 29 08:08:21 crc kubenswrapper[5017]: I0129 08:08:21.585800 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44ee44e9-325e-4aaa-9523-163177d2f47c-config-data\") pod \"nova-metadata-0\" (UID: \"44ee44e9-325e-4aaa-9523-163177d2f47c\") " pod="openstack/nova-metadata-0" Jan 29 08:08:21 crc kubenswrapper[5017]: I0129 08:08:21.588654 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/44ee44e9-325e-4aaa-9523-163177d2f47c-logs\") pod \"nova-metadata-0\" (UID: \"44ee44e9-325e-4aaa-9523-163177d2f47c\") " pod="openstack/nova-metadata-0" Jan 29 08:08:21 crc kubenswrapper[5017]: I0129 08:08:21.598852 5017 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44ee44e9-325e-4aaa-9523-163177d2f47c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"44ee44e9-325e-4aaa-9523-163177d2f47c\") " pod="openstack/nova-metadata-0" Jan 29 08:08:21 crc kubenswrapper[5017]: I0129 08:08:21.599693 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44ee44e9-325e-4aaa-9523-163177d2f47c-config-data\") pod \"nova-metadata-0\" (UID: \"44ee44e9-325e-4aaa-9523-163177d2f47c\") " pod="openstack/nova-metadata-0" Jan 29 08:08:21 crc kubenswrapper[5017]: I0129 08:08:21.614687 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86c87\" (UniqueName: \"kubernetes.io/projected/44ee44e9-325e-4aaa-9523-163177d2f47c-kube-api-access-86c87\") pod \"nova-metadata-0\" (UID: \"44ee44e9-325e-4aaa-9523-163177d2f47c\") " pod="openstack/nova-metadata-0" Jan 29 08:08:21 crc kubenswrapper[5017]: I0129 08:08:21.687819 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8fb7390-d44e-4c07-9eec-ee2d0856adc3-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a8fb7390-d44e-4c07-9eec-ee2d0856adc3\") " pod="openstack/nova-api-0" Jan 29 08:08:21 crc kubenswrapper[5017]: I0129 08:08:21.689231 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a8fb7390-d44e-4c07-9eec-ee2d0856adc3-logs\") pod \"nova-api-0\" (UID: \"a8fb7390-d44e-4c07-9eec-ee2d0856adc3\") " pod="openstack/nova-api-0" Jan 29 08:08:21 crc kubenswrapper[5017]: I0129 08:08:21.689680 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a8fb7390-d44e-4c07-9eec-ee2d0856adc3-logs\") pod \"nova-api-0\" (UID: \"a8fb7390-d44e-4c07-9eec-ee2d0856adc3\") " pod="openstack/nova-api-0" Jan 29 08:08:21 crc kubenswrapper[5017]: I0129 08:08:21.690336 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2kb5\" (UniqueName: \"kubernetes.io/projected/a8fb7390-d44e-4c07-9eec-ee2d0856adc3-kube-api-access-l2kb5\") pod \"nova-api-0\" (UID: \"a8fb7390-d44e-4c07-9eec-ee2d0856adc3\") " pod="openstack/nova-api-0" Jan 29 08:08:21 crc kubenswrapper[5017]: I0129 08:08:21.690385 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8fb7390-d44e-4c07-9eec-ee2d0856adc3-config-data\") pod \"nova-api-0\" (UID: \"a8fb7390-d44e-4c07-9eec-ee2d0856adc3\") " pod="openstack/nova-api-0" Jan 29 08:08:21 crc kubenswrapper[5017]: I0129 08:08:21.695336 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8fb7390-d44e-4c07-9eec-ee2d0856adc3-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a8fb7390-d44e-4c07-9eec-ee2d0856adc3\") " pod="openstack/nova-api-0" Jan 29 08:08:21 crc kubenswrapper[5017]: I0129 08:08:21.696171 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8fb7390-d44e-4c07-9eec-ee2d0856adc3-config-data\") pod \"nova-api-0\" (UID: \"a8fb7390-d44e-4c07-9eec-ee2d0856adc3\") " pod="openstack/nova-api-0" Jan 29 08:08:21 crc kubenswrapper[5017]: I0129 08:08:21.708178 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2kb5\" 
(UniqueName: \"kubernetes.io/projected/a8fb7390-d44e-4c07-9eec-ee2d0856adc3-kube-api-access-l2kb5\") pod \"nova-api-0\" (UID: \"a8fb7390-d44e-4c07-9eec-ee2d0856adc3\") " pod="openstack/nova-api-0" Jan 29 08:08:21 crc kubenswrapper[5017]: I0129 08:08:21.768133 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 08:08:21 crc kubenswrapper[5017]: I0129 08:08:21.782665 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 08:08:22 crc kubenswrapper[5017]: I0129 08:08:22.135035 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 29 08:08:22 crc kubenswrapper[5017]: E0129 08:08:22.175615 5017 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="45f280b96f2e8e8f89505d66ac245b618faa9b3234db33973195c9dc73c5b869" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 29 08:08:22 crc kubenswrapper[5017]: E0129 08:08:22.184714 5017 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 45f280b96f2e8e8f89505d66ac245b618faa9b3234db33973195c9dc73c5b869 is running failed: container process not found" containerID="45f280b96f2e8e8f89505d66ac245b618faa9b3234db33973195c9dc73c5b869" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 29 08:08:22 crc kubenswrapper[5017]: E0129 08:08:22.185516 5017 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 45f280b96f2e8e8f89505d66ac245b618faa9b3234db33973195c9dc73c5b869 is running failed: container process not found" containerID="45f280b96f2e8e8f89505d66ac245b618faa9b3234db33973195c9dc73c5b869" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 29 08:08:22 crc kubenswrapper[5017]: E0129 08:08:22.185566 5017 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 45f280b96f2e8e8f89505d66ac245b618faa9b3234db33973195c9dc73c5b869 is running failed: container process not found" probeType="Readiness" pod="openstack/nova-cell1-conductor-0" podUID="32079be6-0441-460f-b3fa-d05533ee59f5" containerName="nova-cell1-conductor-conductor" Jan 29 08:08:22 crc kubenswrapper[5017]: I0129 08:08:22.209075 5017 generic.go:334] "Generic (PLEG): container finished" podID="6aa9495b-d470-41ce-b861-c410fc4e8aaf" containerID="d74781fe0dbda69dd5aeefa4658fea02c1b713d2f990afdde736a5ec00217985" exitCode=0 Jan 29 08:08:22 crc kubenswrapper[5017]: I0129 08:08:22.209163 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"6aa9495b-d470-41ce-b861-c410fc4e8aaf","Type":"ContainerDied","Data":"d74781fe0dbda69dd5aeefa4658fea02c1b713d2f990afdde736a5ec00217985"} Jan 29 08:08:22 crc kubenswrapper[5017]: I0129 08:08:22.209214 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"6aa9495b-d470-41ce-b861-c410fc4e8aaf","Type":"ContainerDied","Data":"88105743287daaeaf51858372699da35148395ea781beeacd8d1c1fe5e829d6a"} Jan 29 08:08:22 crc kubenswrapper[5017]: I0129 08:08:22.209234 5017 scope.go:117] "RemoveContainer" containerID="d74781fe0dbda69dd5aeefa4658fea02c1b713d2f990afdde736a5ec00217985" Jan 29 
08:08:22 crc kubenswrapper[5017]: I0129 08:08:22.209356 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 29 08:08:22 crc kubenswrapper[5017]: I0129 08:08:22.220788 5017 generic.go:334] "Generic (PLEG): container finished" podID="32079be6-0441-460f-b3fa-d05533ee59f5" containerID="45f280b96f2e8e8f89505d66ac245b618faa9b3234db33973195c9dc73c5b869" exitCode=0 Jan 29 08:08:22 crc kubenswrapper[5017]: I0129 08:08:22.220892 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"32079be6-0441-460f-b3fa-d05533ee59f5","Type":"ContainerDied","Data":"45f280b96f2e8e8f89505d66ac245b618faa9b3234db33973195c9dc73c5b869"} Jan 29 08:08:22 crc kubenswrapper[5017]: I0129 08:08:22.317580 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dqpkz\" (UniqueName: \"kubernetes.io/projected/6aa9495b-d470-41ce-b861-c410fc4e8aaf-kube-api-access-dqpkz\") pod \"6aa9495b-d470-41ce-b861-c410fc4e8aaf\" (UID: \"6aa9495b-d470-41ce-b861-c410fc4e8aaf\") " Jan 29 08:08:22 crc kubenswrapper[5017]: I0129 08:08:22.317675 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6aa9495b-d470-41ce-b861-c410fc4e8aaf-combined-ca-bundle\") pod \"6aa9495b-d470-41ce-b861-c410fc4e8aaf\" (UID: \"6aa9495b-d470-41ce-b861-c410fc4e8aaf\") " Jan 29 08:08:22 crc kubenswrapper[5017]: I0129 08:08:22.317804 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6aa9495b-d470-41ce-b861-c410fc4e8aaf-config-data\") pod \"6aa9495b-d470-41ce-b861-c410fc4e8aaf\" (UID: \"6aa9495b-d470-41ce-b861-c410fc4e8aaf\") " Jan 29 08:08:22 crc kubenswrapper[5017]: I0129 08:08:22.331319 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6aa9495b-d470-41ce-b861-c410fc4e8aaf-kube-api-access-dqpkz" (OuterVolumeSpecName: "kube-api-access-dqpkz") pod "6aa9495b-d470-41ce-b861-c410fc4e8aaf" (UID: "6aa9495b-d470-41ce-b861-c410fc4e8aaf"). InnerVolumeSpecName "kube-api-access-dqpkz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:08:22 crc kubenswrapper[5017]: I0129 08:08:22.332795 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="570006eb-aed7-44e3-89f0-61483bfa5fc3" path="/var/lib/kubelet/pods/570006eb-aed7-44e3-89f0-61483bfa5fc3/volumes" Jan 29 08:08:22 crc kubenswrapper[5017]: I0129 08:08:22.334006 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89895d2a-1660-449b-8bf2-ea704cb93dd1" path="/var/lib/kubelet/pods/89895d2a-1660-449b-8bf2-ea704cb93dd1/volumes" Jan 29 08:08:22 crc kubenswrapper[5017]: I0129 08:08:22.336189 5017 scope.go:117] "RemoveContainer" containerID="d74781fe0dbda69dd5aeefa4658fea02c1b713d2f990afdde736a5ec00217985" Jan 29 08:08:22 crc kubenswrapper[5017]: E0129 08:08:22.338545 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d74781fe0dbda69dd5aeefa4658fea02c1b713d2f990afdde736a5ec00217985\": container with ID starting with d74781fe0dbda69dd5aeefa4658fea02c1b713d2f990afdde736a5ec00217985 not found: ID does not exist" containerID="d74781fe0dbda69dd5aeefa4658fea02c1b713d2f990afdde736a5ec00217985" Jan 29 08:08:22 crc kubenswrapper[5017]: I0129 08:08:22.338602 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d74781fe0dbda69dd5aeefa4658fea02c1b713d2f990afdde736a5ec00217985"} err="failed to get container status \"d74781fe0dbda69dd5aeefa4658fea02c1b713d2f990afdde736a5ec00217985\": rpc error: code = NotFound desc = could not find container \"d74781fe0dbda69dd5aeefa4658fea02c1b713d2f990afdde736a5ec00217985\": container with ID starting with d74781fe0dbda69dd5aeefa4658fea02c1b713d2f990afdde736a5ec00217985 not found: ID does not exist" Jan 29 08:08:22 crc kubenswrapper[5017]: I0129 08:08:22.393807 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6aa9495b-d470-41ce-b861-c410fc4e8aaf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6aa9495b-d470-41ce-b861-c410fc4e8aaf" (UID: "6aa9495b-d470-41ce-b861-c410fc4e8aaf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:08:22 crc kubenswrapper[5017]: I0129 08:08:22.402466 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6aa9495b-d470-41ce-b861-c410fc4e8aaf-config-data" (OuterVolumeSpecName: "config-data") pod "6aa9495b-d470-41ce-b861-c410fc4e8aaf" (UID: "6aa9495b-d470-41ce-b861-c410fc4e8aaf"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:08:22 crc kubenswrapper[5017]: I0129 08:08:22.420731 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dqpkz\" (UniqueName: \"kubernetes.io/projected/6aa9495b-d470-41ce-b861-c410fc4e8aaf-kube-api-access-dqpkz\") on node \"crc\" DevicePath \"\"" Jan 29 08:08:22 crc kubenswrapper[5017]: I0129 08:08:22.420773 5017 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6aa9495b-d470-41ce-b861-c410fc4e8aaf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:08:22 crc kubenswrapper[5017]: I0129 08:08:22.420805 5017 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6aa9495b-d470-41ce-b861-c410fc4e8aaf-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 08:08:22 crc kubenswrapper[5017]: I0129 08:08:22.424727 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 08:08:22 crc kubenswrapper[5017]: I0129 08:08:22.447008 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 29 08:08:22 crc kubenswrapper[5017]: I0129 08:08:22.611406 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 29 08:08:22 crc kubenswrapper[5017]: I0129 08:08:22.622080 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 29 08:08:22 crc kubenswrapper[5017]: I0129 08:08:22.632294 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 29 08:08:22 crc kubenswrapper[5017]: E0129 08:08:22.633819 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6aa9495b-d470-41ce-b861-c410fc4e8aaf" containerName="nova-cell0-conductor-conductor" Jan 29 08:08:22 crc kubenswrapper[5017]: I0129 08:08:22.633843 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="6aa9495b-d470-41ce-b861-c410fc4e8aaf" containerName="nova-cell0-conductor-conductor" Jan 29 08:08:22 crc kubenswrapper[5017]: I0129 08:08:22.634089 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="6aa9495b-d470-41ce-b861-c410fc4e8aaf" containerName="nova-cell0-conductor-conductor" Jan 29 08:08:22 crc kubenswrapper[5017]: I0129 08:08:22.634821 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 29 08:08:22 crc kubenswrapper[5017]: I0129 08:08:22.641431 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 29 08:08:22 crc kubenswrapper[5017]: I0129 08:08:22.646975 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 29 08:08:22 crc kubenswrapper[5017]: I0129 08:08:22.722435 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 29 08:08:22 crc kubenswrapper[5017]: I0129 08:08:22.827686 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32079be6-0441-460f-b3fa-d05533ee59f5-config-data\") pod \"32079be6-0441-460f-b3fa-d05533ee59f5\" (UID: \"32079be6-0441-460f-b3fa-d05533ee59f5\") " Jan 29 08:08:22 crc kubenswrapper[5017]: I0129 08:08:22.828144 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32079be6-0441-460f-b3fa-d05533ee59f5-combined-ca-bundle\") pod \"32079be6-0441-460f-b3fa-d05533ee59f5\" (UID: \"32079be6-0441-460f-b3fa-d05533ee59f5\") " Jan 29 08:08:22 crc kubenswrapper[5017]: I0129 08:08:22.828298 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hr7pt\" (UniqueName: \"kubernetes.io/projected/32079be6-0441-460f-b3fa-d05533ee59f5-kube-api-access-hr7pt\") pod \"32079be6-0441-460f-b3fa-d05533ee59f5\" (UID: \"32079be6-0441-460f-b3fa-d05533ee59f5\") " Jan 29 08:08:22 crc kubenswrapper[5017]: I0129 08:08:22.828798 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vv9h\" (UniqueName: \"kubernetes.io/projected/be5ab308-5352-4f70-8c87-7dece924618f-kube-api-access-8vv9h\") pod \"nova-cell0-conductor-0\" (UID: \"be5ab308-5352-4f70-8c87-7dece924618f\") " pod="openstack/nova-cell0-conductor-0" Jan 29 08:08:22 crc kubenswrapper[5017]: I0129 08:08:22.828847 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be5ab308-5352-4f70-8c87-7dece924618f-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"be5ab308-5352-4f70-8c87-7dece924618f\") " pod="openstack/nova-cell0-conductor-0" Jan 29 08:08:22 crc kubenswrapper[5017]: I0129 08:08:22.828888 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be5ab308-5352-4f70-8c87-7dece924618f-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"be5ab308-5352-4f70-8c87-7dece924618f\") " pod="openstack/nova-cell0-conductor-0" Jan 29 08:08:22 crc kubenswrapper[5017]: I0129 08:08:22.833885 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32079be6-0441-460f-b3fa-d05533ee59f5-kube-api-access-hr7pt" (OuterVolumeSpecName: "kube-api-access-hr7pt") pod "32079be6-0441-460f-b3fa-d05533ee59f5" (UID: "32079be6-0441-460f-b3fa-d05533ee59f5"). InnerVolumeSpecName "kube-api-access-hr7pt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:08:22 crc kubenswrapper[5017]: I0129 08:08:22.855551 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32079be6-0441-460f-b3fa-d05533ee59f5-config-data" (OuterVolumeSpecName: "config-data") pod "32079be6-0441-460f-b3fa-d05533ee59f5" (UID: "32079be6-0441-460f-b3fa-d05533ee59f5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:08:22 crc kubenswrapper[5017]: I0129 08:08:22.872895 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32079be6-0441-460f-b3fa-d05533ee59f5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "32079be6-0441-460f-b3fa-d05533ee59f5" (UID: "32079be6-0441-460f-b3fa-d05533ee59f5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:08:22 crc kubenswrapper[5017]: I0129 08:08:22.930948 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vv9h\" (UniqueName: \"kubernetes.io/projected/be5ab308-5352-4f70-8c87-7dece924618f-kube-api-access-8vv9h\") pod \"nova-cell0-conductor-0\" (UID: \"be5ab308-5352-4f70-8c87-7dece924618f\") " pod="openstack/nova-cell0-conductor-0" Jan 29 08:08:22 crc kubenswrapper[5017]: I0129 08:08:22.931022 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be5ab308-5352-4f70-8c87-7dece924618f-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"be5ab308-5352-4f70-8c87-7dece924618f\") " pod="openstack/nova-cell0-conductor-0" Jan 29 08:08:22 crc kubenswrapper[5017]: I0129 08:08:22.931061 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be5ab308-5352-4f70-8c87-7dece924618f-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"be5ab308-5352-4f70-8c87-7dece924618f\") " pod="openstack/nova-cell0-conductor-0" Jan 29 08:08:22 crc kubenswrapper[5017]: I0129 08:08:22.931134 5017 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32079be6-0441-460f-b3fa-d05533ee59f5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:08:22 crc kubenswrapper[5017]: I0129 08:08:22.931146 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hr7pt\" (UniqueName: \"kubernetes.io/projected/32079be6-0441-460f-b3fa-d05533ee59f5-kube-api-access-hr7pt\") on node \"crc\" DevicePath \"\"" Jan 29 08:08:22 crc kubenswrapper[5017]: I0129 08:08:22.931160 5017 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32079be6-0441-460f-b3fa-d05533ee59f5-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 08:08:22 crc kubenswrapper[5017]: I0129 08:08:22.941057 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be5ab308-5352-4f70-8c87-7dece924618f-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"be5ab308-5352-4f70-8c87-7dece924618f\") " pod="openstack/nova-cell0-conductor-0" Jan 29 08:08:22 crc kubenswrapper[5017]: I0129 08:08:22.943407 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be5ab308-5352-4f70-8c87-7dece924618f-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"be5ab308-5352-4f70-8c87-7dece924618f\") " pod="openstack/nova-cell0-conductor-0" Jan 29 08:08:22 crc kubenswrapper[5017]: I0129 08:08:22.953733 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vv9h\" (UniqueName: \"kubernetes.io/projected/be5ab308-5352-4f70-8c87-7dece924618f-kube-api-access-8vv9h\") pod \"nova-cell0-conductor-0\" (UID: \"be5ab308-5352-4f70-8c87-7dece924618f\") " 
pod="openstack/nova-cell0-conductor-0" Jan 29 08:08:23 crc kubenswrapper[5017]: I0129 08:08:23.039798 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 29 08:08:23 crc kubenswrapper[5017]: I0129 08:08:23.258864 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 29 08:08:23 crc kubenswrapper[5017]: I0129 08:08:23.261315 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"32079be6-0441-460f-b3fa-d05533ee59f5","Type":"ContainerDied","Data":"23d3f649d484204e1ad6e9196e5a77554606d0e6cce3a967a05c2c3b7572fb61"} Jan 29 08:08:23 crc kubenswrapper[5017]: I0129 08:08:23.261384 5017 scope.go:117] "RemoveContainer" containerID="45f280b96f2e8e8f89505d66ac245b618faa9b3234db33973195c9dc73c5b869" Jan 29 08:08:23 crc kubenswrapper[5017]: I0129 08:08:23.270682 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a8fb7390-d44e-4c07-9eec-ee2d0856adc3","Type":"ContainerStarted","Data":"f9cb25d9f6edfcea1a4eff8a548393e1c011616a63c78f33d0ab0e8a39705a5d"} Jan 29 08:08:23 crc kubenswrapper[5017]: I0129 08:08:23.270744 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a8fb7390-d44e-4c07-9eec-ee2d0856adc3","Type":"ContainerStarted","Data":"74028403b788423eeb82aaef2e40e3f5c13304354441c1d7911f3c8239495d6d"} Jan 29 08:08:23 crc kubenswrapper[5017]: I0129 08:08:23.270770 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a8fb7390-d44e-4c07-9eec-ee2d0856adc3","Type":"ContainerStarted","Data":"38719233cfca48eee199abf5d8cf436793d0c062b97db05cf9391fa787fdbdae"} Jan 29 08:08:23 crc kubenswrapper[5017]: I0129 08:08:23.277232 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"44ee44e9-325e-4aaa-9523-163177d2f47c","Type":"ContainerStarted","Data":"f1ce74e3bc5840259a8cf6016b9ad7fdd995830cf20979001b9c4a01b2ae5abb"} Jan 29 08:08:23 crc kubenswrapper[5017]: I0129 08:08:23.277290 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"44ee44e9-325e-4aaa-9523-163177d2f47c","Type":"ContainerStarted","Data":"c59b78bdafc8f3cae07251dc61dd074534d7d3e9570a1035f1eb1181e2a04336"} Jan 29 08:08:23 crc kubenswrapper[5017]: I0129 08:08:23.277304 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"44ee44e9-325e-4aaa-9523-163177d2f47c","Type":"ContainerStarted","Data":"ba76f05c9b4d6762c30674fb761c6b606a20074b8d78fd41635e19966048bae2"} Jan 29 08:08:23 crc kubenswrapper[5017]: I0129 08:08:23.313864 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.313819887 podStartE2EDuration="2.313819887s" podCreationTimestamp="2026-01-29 08:08:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:08:23.292327341 +0000 UTC m=+5589.666774951" watchObservedRunningTime="2026-01-29 08:08:23.313819887 +0000 UTC m=+5589.688267507" Jan 29 08:08:23 crc kubenswrapper[5017]: I0129 08:08:23.339062 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.339037082 podStartE2EDuration="2.339037082s" podCreationTimestamp="2026-01-29 08:08:21 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:08:23.32682783 +0000 UTC m=+5589.701275450" watchObservedRunningTime="2026-01-29 08:08:23.339037082 +0000 UTC m=+5589.713484712" Jan 29 08:08:23 crc kubenswrapper[5017]: I0129 08:08:23.357772 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 29 08:08:23 crc kubenswrapper[5017]: I0129 08:08:23.385703 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 29 08:08:23 crc kubenswrapper[5017]: I0129 08:08:23.398834 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 29 08:08:23 crc kubenswrapper[5017]: E0129 08:08:23.399429 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32079be6-0441-460f-b3fa-d05533ee59f5" containerName="nova-cell1-conductor-conductor" Jan 29 08:08:23 crc kubenswrapper[5017]: I0129 08:08:23.399453 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="32079be6-0441-460f-b3fa-d05533ee59f5" containerName="nova-cell1-conductor-conductor" Jan 29 08:08:23 crc kubenswrapper[5017]: I0129 08:08:23.399650 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="32079be6-0441-460f-b3fa-d05533ee59f5" containerName="nova-cell1-conductor-conductor" Jan 29 08:08:23 crc kubenswrapper[5017]: I0129 08:08:23.400479 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 29 08:08:23 crc kubenswrapper[5017]: I0129 08:08:23.407158 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 29 08:08:23 crc kubenswrapper[5017]: I0129 08:08:23.407884 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 29 08:08:23 crc kubenswrapper[5017]: I0129 08:08:23.552044 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxxrd\" (UniqueName: \"kubernetes.io/projected/22e1fbe8-6cda-4d37-a0d4-2a2aa6ce84ce-kube-api-access-wxxrd\") pod \"nova-cell1-conductor-0\" (UID: \"22e1fbe8-6cda-4d37-a0d4-2a2aa6ce84ce\") " pod="openstack/nova-cell1-conductor-0" Jan 29 08:08:23 crc kubenswrapper[5017]: I0129 08:08:23.552237 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22e1fbe8-6cda-4d37-a0d4-2a2aa6ce84ce-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"22e1fbe8-6cda-4d37-a0d4-2a2aa6ce84ce\") " pod="openstack/nova-cell1-conductor-0" Jan 29 08:08:23 crc kubenswrapper[5017]: I0129 08:08:23.552377 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22e1fbe8-6cda-4d37-a0d4-2a2aa6ce84ce-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"22e1fbe8-6cda-4d37-a0d4-2a2aa6ce84ce\") " pod="openstack/nova-cell1-conductor-0" Jan 29 08:08:23 crc kubenswrapper[5017]: I0129 08:08:23.560363 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 29 08:08:23 crc kubenswrapper[5017]: I0129 08:08:23.685075 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxxrd\" (UniqueName: \"kubernetes.io/projected/22e1fbe8-6cda-4d37-a0d4-2a2aa6ce84ce-kube-api-access-wxxrd\") pod \"nova-cell1-conductor-0\" (UID: 
\"22e1fbe8-6cda-4d37-a0d4-2a2aa6ce84ce\") " pod="openstack/nova-cell1-conductor-0" Jan 29 08:08:23 crc kubenswrapper[5017]: I0129 08:08:23.691983 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22e1fbe8-6cda-4d37-a0d4-2a2aa6ce84ce-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"22e1fbe8-6cda-4d37-a0d4-2a2aa6ce84ce\") " pod="openstack/nova-cell1-conductor-0" Jan 29 08:08:23 crc kubenswrapper[5017]: I0129 08:08:23.692200 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22e1fbe8-6cda-4d37-a0d4-2a2aa6ce84ce-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"22e1fbe8-6cda-4d37-a0d4-2a2aa6ce84ce\") " pod="openstack/nova-cell1-conductor-0" Jan 29 08:08:23 crc kubenswrapper[5017]: I0129 08:08:23.707440 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxxrd\" (UniqueName: \"kubernetes.io/projected/22e1fbe8-6cda-4d37-a0d4-2a2aa6ce84ce-kube-api-access-wxxrd\") pod \"nova-cell1-conductor-0\" (UID: \"22e1fbe8-6cda-4d37-a0d4-2a2aa6ce84ce\") " pod="openstack/nova-cell1-conductor-0" Jan 29 08:08:23 crc kubenswrapper[5017]: I0129 08:08:23.711843 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22e1fbe8-6cda-4d37-a0d4-2a2aa6ce84ce-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"22e1fbe8-6cda-4d37-a0d4-2a2aa6ce84ce\") " pod="openstack/nova-cell1-conductor-0" Jan 29 08:08:23 crc kubenswrapper[5017]: I0129 08:08:23.715106 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22e1fbe8-6cda-4d37-a0d4-2a2aa6ce84ce-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"22e1fbe8-6cda-4d37-a0d4-2a2aa6ce84ce\") " pod="openstack/nova-cell1-conductor-0" Jan 29 08:08:23 crc kubenswrapper[5017]: I0129 08:08:23.722366 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 29 08:08:23 crc kubenswrapper[5017]: I0129 08:08:23.727246 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 29 08:08:24 crc kubenswrapper[5017]: I0129 08:08:24.229812 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 29 08:08:24 crc kubenswrapper[5017]: I0129 08:08:24.293521 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"be5ab308-5352-4f70-8c87-7dece924618f","Type":"ContainerStarted","Data":"8b588cbb0334ba81e6fe0e0e4c78263bb8221274b52433a5ba0535e999597f28"} Jan 29 08:08:24 crc kubenswrapper[5017]: I0129 08:08:24.293587 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"be5ab308-5352-4f70-8c87-7dece924618f","Type":"ContainerStarted","Data":"22c879e8e0eb4dd72afe1dc07bf62ccecaf8f6f3879e0aa765587004dd68e337"} Jan 29 08:08:24 crc kubenswrapper[5017]: I0129 08:08:24.294822 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 29 08:08:24 crc kubenswrapper[5017]: I0129 08:08:24.302995 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"22e1fbe8-6cda-4d37-a0d4-2a2aa6ce84ce","Type":"ContainerStarted","Data":"8d93a78d4f5fba93eb3e2ad0c646ace6692c8cc47157aca4ba78a0219e989d97"} Jan 29 08:08:24 crc kubenswrapper[5017]: I0129 08:08:24.356231 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32079be6-0441-460f-b3fa-d05533ee59f5" path="/var/lib/kubelet/pods/32079be6-0441-460f-b3fa-d05533ee59f5/volumes" Jan 29 08:08:24 crc kubenswrapper[5017]: I0129 08:08:24.357084 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6aa9495b-d470-41ce-b861-c410fc4e8aaf" path="/var/lib/kubelet/pods/6aa9495b-d470-41ce-b861-c410fc4e8aaf/volumes" Jan 29 08:08:24 crc kubenswrapper[5017]: I0129 08:08:24.369183 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.369157134 podStartE2EDuration="2.369157134s" podCreationTimestamp="2026-01-29 08:08:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:08:24.32566737 +0000 UTC m=+5590.700114980" watchObservedRunningTime="2026-01-29 08:08:24.369157134 +0000 UTC m=+5590.743604744" Jan 29 08:08:24 crc kubenswrapper[5017]: I0129 08:08:24.504235 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 29 08:08:25 crc kubenswrapper[5017]: I0129 08:08:25.322877 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"22e1fbe8-6cda-4d37-a0d4-2a2aa6ce84ce","Type":"ContainerStarted","Data":"4b2ed3de2925bf17d9113e2ffdc2c5d3e331b1db042eeee482d4934d578bd01b"} Jan 29 08:08:25 crc kubenswrapper[5017]: I0129 08:08:25.326412 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 29 08:08:25 crc kubenswrapper[5017]: I0129 08:08:25.373421 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.373388314 podStartE2EDuration="2.373388314s" podCreationTimestamp="2026-01-29 08:08:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:08:25.347607186 +0000 UTC m=+5591.722054796" watchObservedRunningTime="2026-01-29 
08:08:25.373388314 +0000 UTC m=+5591.747835924" Jan 29 08:08:26 crc kubenswrapper[5017]: I0129 08:08:26.769776 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 29 08:08:26 crc kubenswrapper[5017]: I0129 08:08:26.770941 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 29 08:08:28 crc kubenswrapper[5017]: I0129 08:08:28.071809 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 29 08:08:28 crc kubenswrapper[5017]: I0129 08:08:28.560500 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 29 08:08:28 crc kubenswrapper[5017]: I0129 08:08:28.574241 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 29 08:08:29 crc kubenswrapper[5017]: I0129 08:08:29.380111 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 29 08:08:29 crc kubenswrapper[5017]: I0129 08:08:29.503434 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 29 08:08:29 crc kubenswrapper[5017]: I0129 08:08:29.542070 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 29 08:08:30 crc kubenswrapper[5017]: I0129 08:08:30.429007 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 29 08:08:31 crc kubenswrapper[5017]: I0129 08:08:31.769604 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 29 08:08:31 crc kubenswrapper[5017]: I0129 08:08:31.770045 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 29 08:08:31 crc kubenswrapper[5017]: I0129 08:08:31.783201 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 29 08:08:31 crc kubenswrapper[5017]: I0129 08:08:31.783266 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 29 08:08:32 crc kubenswrapper[5017]: I0129 08:08:32.934202 5017 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a8fb7390-d44e-4c07-9eec-ee2d0856adc3" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.83:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 08:08:32 crc kubenswrapper[5017]: I0129 08:08:32.934214 5017 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="44ee44e9-325e-4aaa-9523-163177d2f47c" containerName="nova-metadata-log" probeResult="failure" output="Get \"http://10.217.1.82:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 08:08:32 crc kubenswrapper[5017]: I0129 08:08:32.934280 5017 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="44ee44e9-325e-4aaa-9523-163177d2f47c" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"http://10.217.1.82:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 08:08:32 crc kubenswrapper[5017]: I0129 08:08:32.934410 5017 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" 
podUID="a8fb7390-d44e-4c07-9eec-ee2d0856adc3" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.83:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 08:08:33 crc kubenswrapper[5017]: I0129 08:08:33.764538 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 29 08:08:36 crc kubenswrapper[5017]: I0129 08:08:36.717246 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 08:08:36 crc kubenswrapper[5017]: I0129 08:08:36.721132 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 29 08:08:36 crc kubenswrapper[5017]: I0129 08:08:36.725128 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 29 08:08:36 crc kubenswrapper[5017]: I0129 08:08:36.735820 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 08:08:36 crc kubenswrapper[5017]: I0129 08:08:36.873403 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bt4t2\" (UniqueName: \"kubernetes.io/projected/38d086d2-dbe2-4417-bf5e-0936e22b0eae-kube-api-access-bt4t2\") pod \"cinder-scheduler-0\" (UID: \"38d086d2-dbe2-4417-bf5e-0936e22b0eae\") " pod="openstack/cinder-scheduler-0" Jan 29 08:08:36 crc kubenswrapper[5017]: I0129 08:08:36.873601 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38d086d2-dbe2-4417-bf5e-0936e22b0eae-scripts\") pod \"cinder-scheduler-0\" (UID: \"38d086d2-dbe2-4417-bf5e-0936e22b0eae\") " pod="openstack/cinder-scheduler-0" Jan 29 08:08:36 crc kubenswrapper[5017]: I0129 08:08:36.873913 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/38d086d2-dbe2-4417-bf5e-0936e22b0eae-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"38d086d2-dbe2-4417-bf5e-0936e22b0eae\") " pod="openstack/cinder-scheduler-0" Jan 29 08:08:36 crc kubenswrapper[5017]: I0129 08:08:36.874038 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38d086d2-dbe2-4417-bf5e-0936e22b0eae-config-data\") pod \"cinder-scheduler-0\" (UID: \"38d086d2-dbe2-4417-bf5e-0936e22b0eae\") " pod="openstack/cinder-scheduler-0" Jan 29 08:08:36 crc kubenswrapper[5017]: I0129 08:08:36.874102 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/38d086d2-dbe2-4417-bf5e-0936e22b0eae-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"38d086d2-dbe2-4417-bf5e-0936e22b0eae\") " pod="openstack/cinder-scheduler-0" Jan 29 08:08:36 crc kubenswrapper[5017]: I0129 08:08:36.874141 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38d086d2-dbe2-4417-bf5e-0936e22b0eae-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"38d086d2-dbe2-4417-bf5e-0936e22b0eae\") " pod="openstack/cinder-scheduler-0" Jan 29 08:08:36 crc kubenswrapper[5017]: I0129 08:08:36.977371 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/38d086d2-dbe2-4417-bf5e-0936e22b0eae-config-data\") pod \"cinder-scheduler-0\" (UID: \"38d086d2-dbe2-4417-bf5e-0936e22b0eae\") " pod="openstack/cinder-scheduler-0" Jan 29 08:08:36 crc kubenswrapper[5017]: I0129 08:08:36.977474 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/38d086d2-dbe2-4417-bf5e-0936e22b0eae-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"38d086d2-dbe2-4417-bf5e-0936e22b0eae\") " pod="openstack/cinder-scheduler-0" Jan 29 08:08:36 crc kubenswrapper[5017]: I0129 08:08:36.977522 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38d086d2-dbe2-4417-bf5e-0936e22b0eae-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"38d086d2-dbe2-4417-bf5e-0936e22b0eae\") " pod="openstack/cinder-scheduler-0" Jan 29 08:08:36 crc kubenswrapper[5017]: I0129 08:08:36.977647 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/38d086d2-dbe2-4417-bf5e-0936e22b0eae-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"38d086d2-dbe2-4417-bf5e-0936e22b0eae\") " pod="openstack/cinder-scheduler-0" Jan 29 08:08:36 crc kubenswrapper[5017]: I0129 08:08:36.977717 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bt4t2\" (UniqueName: \"kubernetes.io/projected/38d086d2-dbe2-4417-bf5e-0936e22b0eae-kube-api-access-bt4t2\") pod \"cinder-scheduler-0\" (UID: \"38d086d2-dbe2-4417-bf5e-0936e22b0eae\") " pod="openstack/cinder-scheduler-0" Jan 29 08:08:36 crc kubenswrapper[5017]: I0129 08:08:36.978290 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38d086d2-dbe2-4417-bf5e-0936e22b0eae-scripts\") pod \"cinder-scheduler-0\" (UID: \"38d086d2-dbe2-4417-bf5e-0936e22b0eae\") " pod="openstack/cinder-scheduler-0" Jan 29 08:08:36 crc kubenswrapper[5017]: I0129 08:08:36.978944 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/38d086d2-dbe2-4417-bf5e-0936e22b0eae-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"38d086d2-dbe2-4417-bf5e-0936e22b0eae\") " pod="openstack/cinder-scheduler-0" Jan 29 08:08:36 crc kubenswrapper[5017]: I0129 08:08:36.985556 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38d086d2-dbe2-4417-bf5e-0936e22b0eae-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"38d086d2-dbe2-4417-bf5e-0936e22b0eae\") " pod="openstack/cinder-scheduler-0" Jan 29 08:08:36 crc kubenswrapper[5017]: I0129 08:08:36.985675 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/38d086d2-dbe2-4417-bf5e-0936e22b0eae-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"38d086d2-dbe2-4417-bf5e-0936e22b0eae\") " pod="openstack/cinder-scheduler-0" Jan 29 08:08:36 crc kubenswrapper[5017]: I0129 08:08:36.985829 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38d086d2-dbe2-4417-bf5e-0936e22b0eae-scripts\") pod \"cinder-scheduler-0\" (UID: \"38d086d2-dbe2-4417-bf5e-0936e22b0eae\") " pod="openstack/cinder-scheduler-0" Jan 29 08:08:36 crc kubenswrapper[5017]: I0129 08:08:36.986721 5017 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38d086d2-dbe2-4417-bf5e-0936e22b0eae-config-data\") pod \"cinder-scheduler-0\" (UID: \"38d086d2-dbe2-4417-bf5e-0936e22b0eae\") " pod="openstack/cinder-scheduler-0" Jan 29 08:08:37 crc kubenswrapper[5017]: I0129 08:08:37.001236 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bt4t2\" (UniqueName: \"kubernetes.io/projected/38d086d2-dbe2-4417-bf5e-0936e22b0eae-kube-api-access-bt4t2\") pod \"cinder-scheduler-0\" (UID: \"38d086d2-dbe2-4417-bf5e-0936e22b0eae\") " pod="openstack/cinder-scheduler-0" Jan 29 08:08:37 crc kubenswrapper[5017]: I0129 08:08:37.049383 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 29 08:08:37 crc kubenswrapper[5017]: I0129 08:08:37.536329 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 08:08:38 crc kubenswrapper[5017]: I0129 08:08:38.240517 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 29 08:08:38 crc kubenswrapper[5017]: I0129 08:08:38.241328 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="80191ba1-9557-4150-b887-1262b7541638" containerName="cinder-api-log" containerID="cri-o://96475002857cb29bf1bce145fc27c2a6b5f6203507419ba17c55b71218a0ebf5" gracePeriod=30 Jan 29 08:08:38 crc kubenswrapper[5017]: I0129 08:08:38.241450 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="80191ba1-9557-4150-b887-1262b7541638" containerName="cinder-api" containerID="cri-o://6695a6cf07f027bca5039813c42d405f98075e69255b726af24f9f0237b5b9fe" gracePeriod=30 Jan 29 08:08:38 crc kubenswrapper[5017]: I0129 08:08:38.479930 5017 generic.go:334] "Generic (PLEG): container finished" podID="80191ba1-9557-4150-b887-1262b7541638" containerID="96475002857cb29bf1bce145fc27c2a6b5f6203507419ba17c55b71218a0ebf5" exitCode=143 Jan 29 08:08:38 crc kubenswrapper[5017]: I0129 08:08:38.480013 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"80191ba1-9557-4150-b887-1262b7541638","Type":"ContainerDied","Data":"96475002857cb29bf1bce145fc27c2a6b5f6203507419ba17c55b71218a0ebf5"} Jan 29 08:08:38 crc kubenswrapper[5017]: I0129 08:08:38.481986 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"38d086d2-dbe2-4417-bf5e-0936e22b0eae","Type":"ContainerStarted","Data":"9071a7bdbae37ddd54f3c88aaa03a3b41463502d43d56e42f80a665a0de16f87"} Jan 29 08:08:38 crc kubenswrapper[5017]: I0129 08:08:38.482023 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"38d086d2-dbe2-4417-bf5e-0936e22b0eae","Type":"ContainerStarted","Data":"e84d4644c24874f2af0d199a3adf81370b9ca2f978ce7cee96ea1e8d2ec43b6e"} Jan 29 08:08:38 crc kubenswrapper[5017]: I0129 08:08:38.862591 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-volume-volume1-0"] Jan 29 08:08:38 crc kubenswrapper[5017]: I0129 08:08:38.868761 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-volume-volume1-0" Jan 29 08:08:38 crc kubenswrapper[5017]: I0129 08:08:38.872214 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-volume1-config-data" Jan 29 08:08:38 crc kubenswrapper[5017]: I0129 08:08:38.879169 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"] Jan 29 08:08:39 crc kubenswrapper[5017]: I0129 08:08:39.031334 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/17e2c7f2-bf28-4d9c-a65a-6da99c84034b-sys\") pod \"cinder-volume-volume1-0\" (UID: \"17e2c7f2-bf28-4d9c-a65a-6da99c84034b\") " pod="openstack/cinder-volume-volume1-0" Jan 29 08:08:39 crc kubenswrapper[5017]: I0129 08:08:39.031974 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/17e2c7f2-bf28-4d9c-a65a-6da99c84034b-run\") pod \"cinder-volume-volume1-0\" (UID: \"17e2c7f2-bf28-4d9c-a65a-6da99c84034b\") " pod="openstack/cinder-volume-volume1-0" Jan 29 08:08:39 crc kubenswrapper[5017]: I0129 08:08:39.032021 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/17e2c7f2-bf28-4d9c-a65a-6da99c84034b-dev\") pod \"cinder-volume-volume1-0\" (UID: \"17e2c7f2-bf28-4d9c-a65a-6da99c84034b\") " pod="openstack/cinder-volume-volume1-0" Jan 29 08:08:39 crc kubenswrapper[5017]: I0129 08:08:39.032054 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/17e2c7f2-bf28-4d9c-a65a-6da99c84034b-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"17e2c7f2-bf28-4d9c-a65a-6da99c84034b\") " pod="openstack/cinder-volume-volume1-0" Jan 29 08:08:39 crc kubenswrapper[5017]: I0129 08:08:39.032095 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlpvh\" (UniqueName: \"kubernetes.io/projected/17e2c7f2-bf28-4d9c-a65a-6da99c84034b-kube-api-access-qlpvh\") pod \"cinder-volume-volume1-0\" (UID: \"17e2c7f2-bf28-4d9c-a65a-6da99c84034b\") " pod="openstack/cinder-volume-volume1-0" Jan 29 08:08:39 crc kubenswrapper[5017]: I0129 08:08:39.032136 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17e2c7f2-bf28-4d9c-a65a-6da99c84034b-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"17e2c7f2-bf28-4d9c-a65a-6da99c84034b\") " pod="openstack/cinder-volume-volume1-0" Jan 29 08:08:39 crc kubenswrapper[5017]: I0129 08:08:39.032201 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/17e2c7f2-bf28-4d9c-a65a-6da99c84034b-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"17e2c7f2-bf28-4d9c-a65a-6da99c84034b\") " pod="openstack/cinder-volume-volume1-0" Jan 29 08:08:39 crc kubenswrapper[5017]: I0129 08:08:39.032243 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/17e2c7f2-bf28-4d9c-a65a-6da99c84034b-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"17e2c7f2-bf28-4d9c-a65a-6da99c84034b\") " pod="openstack/cinder-volume-volume1-0" Jan 29 08:08:39 crc kubenswrapper[5017]: I0129 
08:08:39.032266 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/17e2c7f2-bf28-4d9c-a65a-6da99c84034b-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"17e2c7f2-bf28-4d9c-a65a-6da99c84034b\") " pod="openstack/cinder-volume-volume1-0" Jan 29 08:08:39 crc kubenswrapper[5017]: I0129 08:08:39.032345 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/17e2c7f2-bf28-4d9c-a65a-6da99c84034b-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"17e2c7f2-bf28-4d9c-a65a-6da99c84034b\") " pod="openstack/cinder-volume-volume1-0" Jan 29 08:08:39 crc kubenswrapper[5017]: I0129 08:08:39.032394 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/17e2c7f2-bf28-4d9c-a65a-6da99c84034b-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"17e2c7f2-bf28-4d9c-a65a-6da99c84034b\") " pod="openstack/cinder-volume-volume1-0" Jan 29 08:08:39 crc kubenswrapper[5017]: I0129 08:08:39.032426 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/17e2c7f2-bf28-4d9c-a65a-6da99c84034b-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"17e2c7f2-bf28-4d9c-a65a-6da99c84034b\") " pod="openstack/cinder-volume-volume1-0" Jan 29 08:08:39 crc kubenswrapper[5017]: I0129 08:08:39.032477 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/17e2c7f2-bf28-4d9c-a65a-6da99c84034b-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"17e2c7f2-bf28-4d9c-a65a-6da99c84034b\") " pod="openstack/cinder-volume-volume1-0" Jan 29 08:08:39 crc kubenswrapper[5017]: I0129 08:08:39.032509 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17e2c7f2-bf28-4d9c-a65a-6da99c84034b-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"17e2c7f2-bf28-4d9c-a65a-6da99c84034b\") " pod="openstack/cinder-volume-volume1-0" Jan 29 08:08:39 crc kubenswrapper[5017]: I0129 08:08:39.032534 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/17e2c7f2-bf28-4d9c-a65a-6da99c84034b-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"17e2c7f2-bf28-4d9c-a65a-6da99c84034b\") " pod="openstack/cinder-volume-volume1-0" Jan 29 08:08:39 crc kubenswrapper[5017]: I0129 08:08:39.032588 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/17e2c7f2-bf28-4d9c-a65a-6da99c84034b-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"17e2c7f2-bf28-4d9c-a65a-6da99c84034b\") " pod="openstack/cinder-volume-volume1-0" Jan 29 08:08:39 crc kubenswrapper[5017]: I0129 08:08:39.134767 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/17e2c7f2-bf28-4d9c-a65a-6da99c84034b-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"17e2c7f2-bf28-4d9c-a65a-6da99c84034b\") " pod="openstack/cinder-volume-volume1-0" Jan 29 08:08:39 crc kubenswrapper[5017]: I0129 08:08:39.134830 5017 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/17e2c7f2-bf28-4d9c-a65a-6da99c84034b-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"17e2c7f2-bf28-4d9c-a65a-6da99c84034b\") " pod="openstack/cinder-volume-volume1-0" Jan 29 08:08:39 crc kubenswrapper[5017]: I0129 08:08:39.134852 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/17e2c7f2-bf28-4d9c-a65a-6da99c84034b-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"17e2c7f2-bf28-4d9c-a65a-6da99c84034b\") " pod="openstack/cinder-volume-volume1-0" Jan 29 08:08:39 crc kubenswrapper[5017]: I0129 08:08:39.134900 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/17e2c7f2-bf28-4d9c-a65a-6da99c84034b-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"17e2c7f2-bf28-4d9c-a65a-6da99c84034b\") " pod="openstack/cinder-volume-volume1-0" Jan 29 08:08:39 crc kubenswrapper[5017]: I0129 08:08:39.134932 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/17e2c7f2-bf28-4d9c-a65a-6da99c84034b-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"17e2c7f2-bf28-4d9c-a65a-6da99c84034b\") " pod="openstack/cinder-volume-volume1-0" Jan 29 08:08:39 crc kubenswrapper[5017]: I0129 08:08:39.134947 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/17e2c7f2-bf28-4d9c-a65a-6da99c84034b-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"17e2c7f2-bf28-4d9c-a65a-6da99c84034b\") " pod="openstack/cinder-volume-volume1-0" Jan 29 08:08:39 crc kubenswrapper[5017]: I0129 08:08:39.134990 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/17e2c7f2-bf28-4d9c-a65a-6da99c84034b-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"17e2c7f2-bf28-4d9c-a65a-6da99c84034b\") " pod="openstack/cinder-volume-volume1-0" Jan 29 08:08:39 crc kubenswrapper[5017]: I0129 08:08:39.135011 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17e2c7f2-bf28-4d9c-a65a-6da99c84034b-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"17e2c7f2-bf28-4d9c-a65a-6da99c84034b\") " pod="openstack/cinder-volume-volume1-0" Jan 29 08:08:39 crc kubenswrapper[5017]: I0129 08:08:39.135030 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/17e2c7f2-bf28-4d9c-a65a-6da99c84034b-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"17e2c7f2-bf28-4d9c-a65a-6da99c84034b\") " pod="openstack/cinder-volume-volume1-0" Jan 29 08:08:39 crc kubenswrapper[5017]: I0129 08:08:39.135064 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/17e2c7f2-bf28-4d9c-a65a-6da99c84034b-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"17e2c7f2-bf28-4d9c-a65a-6da99c84034b\") " pod="openstack/cinder-volume-volume1-0" Jan 29 08:08:39 crc kubenswrapper[5017]: I0129 08:08:39.135101 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/17e2c7f2-bf28-4d9c-a65a-6da99c84034b-sys\") pod 
\"cinder-volume-volume1-0\" (UID: \"17e2c7f2-bf28-4d9c-a65a-6da99c84034b\") " pod="openstack/cinder-volume-volume1-0" Jan 29 08:08:39 crc kubenswrapper[5017]: I0129 08:08:39.135128 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/17e2c7f2-bf28-4d9c-a65a-6da99c84034b-run\") pod \"cinder-volume-volume1-0\" (UID: \"17e2c7f2-bf28-4d9c-a65a-6da99c84034b\") " pod="openstack/cinder-volume-volume1-0" Jan 29 08:08:39 crc kubenswrapper[5017]: I0129 08:08:39.135152 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/17e2c7f2-bf28-4d9c-a65a-6da99c84034b-dev\") pod \"cinder-volume-volume1-0\" (UID: \"17e2c7f2-bf28-4d9c-a65a-6da99c84034b\") " pod="openstack/cinder-volume-volume1-0" Jan 29 08:08:39 crc kubenswrapper[5017]: I0129 08:08:39.135169 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/17e2c7f2-bf28-4d9c-a65a-6da99c84034b-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"17e2c7f2-bf28-4d9c-a65a-6da99c84034b\") " pod="openstack/cinder-volume-volume1-0" Jan 29 08:08:39 crc kubenswrapper[5017]: I0129 08:08:39.135193 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qlpvh\" (UniqueName: \"kubernetes.io/projected/17e2c7f2-bf28-4d9c-a65a-6da99c84034b-kube-api-access-qlpvh\") pod \"cinder-volume-volume1-0\" (UID: \"17e2c7f2-bf28-4d9c-a65a-6da99c84034b\") " pod="openstack/cinder-volume-volume1-0" Jan 29 08:08:39 crc kubenswrapper[5017]: I0129 08:08:39.135210 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17e2c7f2-bf28-4d9c-a65a-6da99c84034b-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"17e2c7f2-bf28-4d9c-a65a-6da99c84034b\") " pod="openstack/cinder-volume-volume1-0" Jan 29 08:08:39 crc kubenswrapper[5017]: I0129 08:08:39.136144 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/17e2c7f2-bf28-4d9c-a65a-6da99c84034b-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"17e2c7f2-bf28-4d9c-a65a-6da99c84034b\") " pod="openstack/cinder-volume-volume1-0" Jan 29 08:08:39 crc kubenswrapper[5017]: I0129 08:08:39.136223 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/17e2c7f2-bf28-4d9c-a65a-6da99c84034b-dev\") pod \"cinder-volume-volume1-0\" (UID: \"17e2c7f2-bf28-4d9c-a65a-6da99c84034b\") " pod="openstack/cinder-volume-volume1-0" Jan 29 08:08:39 crc kubenswrapper[5017]: I0129 08:08:39.136259 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/17e2c7f2-bf28-4d9c-a65a-6da99c84034b-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"17e2c7f2-bf28-4d9c-a65a-6da99c84034b\") " pod="openstack/cinder-volume-volume1-0" Jan 29 08:08:39 crc kubenswrapper[5017]: I0129 08:08:39.136160 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/17e2c7f2-bf28-4d9c-a65a-6da99c84034b-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"17e2c7f2-bf28-4d9c-a65a-6da99c84034b\") " pod="openstack/cinder-volume-volume1-0" Jan 29 08:08:39 crc kubenswrapper[5017]: I0129 08:08:39.136327 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" 
(UniqueName: \"kubernetes.io/host-path/17e2c7f2-bf28-4d9c-a65a-6da99c84034b-sys\") pod \"cinder-volume-volume1-0\" (UID: \"17e2c7f2-bf28-4d9c-a65a-6da99c84034b\") " pod="openstack/cinder-volume-volume1-0" Jan 29 08:08:39 crc kubenswrapper[5017]: I0129 08:08:39.136366 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/17e2c7f2-bf28-4d9c-a65a-6da99c84034b-run\") pod \"cinder-volume-volume1-0\" (UID: \"17e2c7f2-bf28-4d9c-a65a-6da99c84034b\") " pod="openstack/cinder-volume-volume1-0" Jan 29 08:08:39 crc kubenswrapper[5017]: I0129 08:08:39.136405 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/17e2c7f2-bf28-4d9c-a65a-6da99c84034b-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"17e2c7f2-bf28-4d9c-a65a-6da99c84034b\") " pod="openstack/cinder-volume-volume1-0" Jan 29 08:08:39 crc kubenswrapper[5017]: I0129 08:08:39.136466 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/17e2c7f2-bf28-4d9c-a65a-6da99c84034b-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"17e2c7f2-bf28-4d9c-a65a-6da99c84034b\") " pod="openstack/cinder-volume-volume1-0" Jan 29 08:08:39 crc kubenswrapper[5017]: I0129 08:08:39.136515 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/17e2c7f2-bf28-4d9c-a65a-6da99c84034b-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"17e2c7f2-bf28-4d9c-a65a-6da99c84034b\") " pod="openstack/cinder-volume-volume1-0" Jan 29 08:08:39 crc kubenswrapper[5017]: I0129 08:08:39.136575 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/17e2c7f2-bf28-4d9c-a65a-6da99c84034b-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"17e2c7f2-bf28-4d9c-a65a-6da99c84034b\") " pod="openstack/cinder-volume-volume1-0" Jan 29 08:08:39 crc kubenswrapper[5017]: I0129 08:08:39.142120 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17e2c7f2-bf28-4d9c-a65a-6da99c84034b-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"17e2c7f2-bf28-4d9c-a65a-6da99c84034b\") " pod="openstack/cinder-volume-volume1-0" Jan 29 08:08:39 crc kubenswrapper[5017]: I0129 08:08:39.144853 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/17e2c7f2-bf28-4d9c-a65a-6da99c84034b-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"17e2c7f2-bf28-4d9c-a65a-6da99c84034b\") " pod="openstack/cinder-volume-volume1-0" Jan 29 08:08:39 crc kubenswrapper[5017]: I0129 08:08:39.145639 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17e2c7f2-bf28-4d9c-a65a-6da99c84034b-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"17e2c7f2-bf28-4d9c-a65a-6da99c84034b\") " pod="openstack/cinder-volume-volume1-0" Jan 29 08:08:39 crc kubenswrapper[5017]: I0129 08:08:39.147255 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/17e2c7f2-bf28-4d9c-a65a-6da99c84034b-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"17e2c7f2-bf28-4d9c-a65a-6da99c84034b\") " pod="openstack/cinder-volume-volume1-0" Jan 29 08:08:39 crc kubenswrapper[5017]: I0129 08:08:39.154007 5017 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/17e2c7f2-bf28-4d9c-a65a-6da99c84034b-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"17e2c7f2-bf28-4d9c-a65a-6da99c84034b\") " pod="openstack/cinder-volume-volume1-0" Jan 29 08:08:39 crc kubenswrapper[5017]: I0129 08:08:39.156180 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qlpvh\" (UniqueName: \"kubernetes.io/projected/17e2c7f2-bf28-4d9c-a65a-6da99c84034b-kube-api-access-qlpvh\") pod \"cinder-volume-volume1-0\" (UID: \"17e2c7f2-bf28-4d9c-a65a-6da99c84034b\") " pod="openstack/cinder-volume-volume1-0" Jan 29 08:08:39 crc kubenswrapper[5017]: I0129 08:08:39.215039 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-volume1-0" Jan 29 08:08:39 crc kubenswrapper[5017]: I0129 08:08:39.501931 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"38d086d2-dbe2-4417-bf5e-0936e22b0eae","Type":"ContainerStarted","Data":"9b22d3a61f7e70031f333fa6fa11e1361c37f9d4781727e10e56ee70896d55cc"} Jan 29 08:08:39 crc kubenswrapper[5017]: I0129 08:08:39.600464 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.600429986 podStartE2EDuration="3.600429986s" podCreationTimestamp="2026-01-29 08:08:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:08:39.547644009 +0000 UTC m=+5605.922091620" watchObservedRunningTime="2026-01-29 08:08:39.600429986 +0000 UTC m=+5605.974877596" Jan 29 08:08:39 crc kubenswrapper[5017]: I0129 08:08:39.606157 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-backup-0"] Jan 29 08:08:39 crc kubenswrapper[5017]: I0129 08:08:39.608151 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-backup-0" Jan 29 08:08:39 crc kubenswrapper[5017]: I0129 08:08:39.610746 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-backup-config-data" Jan 29 08:08:39 crc kubenswrapper[5017]: I0129 08:08:39.629824 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Jan 29 08:08:39 crc kubenswrapper[5017]: I0129 08:08:39.683619 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05271cbb-1748-4309-9e77-023689c72e35-scripts\") pod \"cinder-backup-0\" (UID: \"05271cbb-1748-4309-9e77-023689c72e35\") " pod="openstack/cinder-backup-0" Jan 29 08:08:39 crc kubenswrapper[5017]: I0129 08:08:39.683667 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/05271cbb-1748-4309-9e77-023689c72e35-ceph\") pod \"cinder-backup-0\" (UID: \"05271cbb-1748-4309-9e77-023689c72e35\") " pod="openstack/cinder-backup-0" Jan 29 08:08:39 crc kubenswrapper[5017]: I0129 08:08:39.683691 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/05271cbb-1748-4309-9e77-023689c72e35-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"05271cbb-1748-4309-9e77-023689c72e35\") " pod="openstack/cinder-backup-0" Jan 29 08:08:39 crc kubenswrapper[5017]: I0129 08:08:39.683983 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/05271cbb-1748-4309-9e77-023689c72e35-etc-nvme\") pod \"cinder-backup-0\" (UID: \"05271cbb-1748-4309-9e77-023689c72e35\") " pod="openstack/cinder-backup-0" Jan 29 08:08:39 crc kubenswrapper[5017]: I0129 08:08:39.684029 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/05271cbb-1748-4309-9e77-023689c72e35-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"05271cbb-1748-4309-9e77-023689c72e35\") " pod="openstack/cinder-backup-0" Jan 29 08:08:39 crc kubenswrapper[5017]: I0129 08:08:39.684064 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/05271cbb-1748-4309-9e77-023689c72e35-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"05271cbb-1748-4309-9e77-023689c72e35\") " pod="openstack/cinder-backup-0" Jan 29 08:08:39 crc kubenswrapper[5017]: I0129 08:08:39.684095 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/05271cbb-1748-4309-9e77-023689c72e35-lib-modules\") pod \"cinder-backup-0\" (UID: \"05271cbb-1748-4309-9e77-023689c72e35\") " pod="openstack/cinder-backup-0" Jan 29 08:08:39 crc kubenswrapper[5017]: I0129 08:08:39.684155 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/05271cbb-1748-4309-9e77-023689c72e35-sys\") pod \"cinder-backup-0\" (UID: \"05271cbb-1748-4309-9e77-023689c72e35\") " pod="openstack/cinder-backup-0" Jan 29 08:08:39 crc kubenswrapper[5017]: I0129 08:08:39.684362 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmgmg\" (UniqueName: 
\"kubernetes.io/projected/05271cbb-1748-4309-9e77-023689c72e35-kube-api-access-cmgmg\") pod \"cinder-backup-0\" (UID: \"05271cbb-1748-4309-9e77-023689c72e35\") " pod="openstack/cinder-backup-0" Jan 29 08:08:39 crc kubenswrapper[5017]: I0129 08:08:39.684588 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/05271cbb-1748-4309-9e77-023689c72e35-dev\") pod \"cinder-backup-0\" (UID: \"05271cbb-1748-4309-9e77-023689c72e35\") " pod="openstack/cinder-backup-0" Jan 29 08:08:39 crc kubenswrapper[5017]: I0129 08:08:39.684630 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05271cbb-1748-4309-9e77-023689c72e35-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"05271cbb-1748-4309-9e77-023689c72e35\") " pod="openstack/cinder-backup-0" Jan 29 08:08:39 crc kubenswrapper[5017]: I0129 08:08:39.684781 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/05271cbb-1748-4309-9e77-023689c72e35-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"05271cbb-1748-4309-9e77-023689c72e35\") " pod="openstack/cinder-backup-0" Jan 29 08:08:39 crc kubenswrapper[5017]: I0129 08:08:39.685032 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/05271cbb-1748-4309-9e77-023689c72e35-config-data-custom\") pod \"cinder-backup-0\" (UID: \"05271cbb-1748-4309-9e77-023689c72e35\") " pod="openstack/cinder-backup-0" Jan 29 08:08:39 crc kubenswrapper[5017]: I0129 08:08:39.685084 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/05271cbb-1748-4309-9e77-023689c72e35-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"05271cbb-1748-4309-9e77-023689c72e35\") " pod="openstack/cinder-backup-0" Jan 29 08:08:39 crc kubenswrapper[5017]: I0129 08:08:39.685312 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/05271cbb-1748-4309-9e77-023689c72e35-run\") pod \"cinder-backup-0\" (UID: \"05271cbb-1748-4309-9e77-023689c72e35\") " pod="openstack/cinder-backup-0" Jan 29 08:08:39 crc kubenswrapper[5017]: I0129 08:08:39.685395 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05271cbb-1748-4309-9e77-023689c72e35-config-data\") pod \"cinder-backup-0\" (UID: \"05271cbb-1748-4309-9e77-023689c72e35\") " pod="openstack/cinder-backup-0" Jan 29 08:08:39 crc kubenswrapper[5017]: I0129 08:08:39.786727 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05271cbb-1748-4309-9e77-023689c72e35-scripts\") pod \"cinder-backup-0\" (UID: \"05271cbb-1748-4309-9e77-023689c72e35\") " pod="openstack/cinder-backup-0" Jan 29 08:08:39 crc kubenswrapper[5017]: I0129 08:08:39.786797 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/05271cbb-1748-4309-9e77-023689c72e35-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"05271cbb-1748-4309-9e77-023689c72e35\") " pod="openstack/cinder-backup-0" Jan 29 
08:08:39 crc kubenswrapper[5017]: I0129 08:08:39.786824 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/05271cbb-1748-4309-9e77-023689c72e35-ceph\") pod \"cinder-backup-0\" (UID: \"05271cbb-1748-4309-9e77-023689c72e35\") " pod="openstack/cinder-backup-0" Jan 29 08:08:39 crc kubenswrapper[5017]: I0129 08:08:39.786894 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/05271cbb-1748-4309-9e77-023689c72e35-etc-nvme\") pod \"cinder-backup-0\" (UID: \"05271cbb-1748-4309-9e77-023689c72e35\") " pod="openstack/cinder-backup-0" Jan 29 08:08:39 crc kubenswrapper[5017]: I0129 08:08:39.786923 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/05271cbb-1748-4309-9e77-023689c72e35-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"05271cbb-1748-4309-9e77-023689c72e35\") " pod="openstack/cinder-backup-0" Jan 29 08:08:39 crc kubenswrapper[5017]: I0129 08:08:39.786945 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/05271cbb-1748-4309-9e77-023689c72e35-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"05271cbb-1748-4309-9e77-023689c72e35\") " pod="openstack/cinder-backup-0" Jan 29 08:08:39 crc kubenswrapper[5017]: I0129 08:08:39.786995 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/05271cbb-1748-4309-9e77-023689c72e35-lib-modules\") pod \"cinder-backup-0\" (UID: \"05271cbb-1748-4309-9e77-023689c72e35\") " pod="openstack/cinder-backup-0" Jan 29 08:08:39 crc kubenswrapper[5017]: I0129 08:08:39.787022 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/05271cbb-1748-4309-9e77-023689c72e35-sys\") pod \"cinder-backup-0\" (UID: \"05271cbb-1748-4309-9e77-023689c72e35\") " pod="openstack/cinder-backup-0" Jan 29 08:08:39 crc kubenswrapper[5017]: I0129 08:08:39.787054 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmgmg\" (UniqueName: \"kubernetes.io/projected/05271cbb-1748-4309-9e77-023689c72e35-kube-api-access-cmgmg\") pod \"cinder-backup-0\" (UID: \"05271cbb-1748-4309-9e77-023689c72e35\") " pod="openstack/cinder-backup-0" Jan 29 08:08:39 crc kubenswrapper[5017]: I0129 08:08:39.787090 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/05271cbb-1748-4309-9e77-023689c72e35-dev\") pod \"cinder-backup-0\" (UID: \"05271cbb-1748-4309-9e77-023689c72e35\") " pod="openstack/cinder-backup-0" Jan 29 08:08:39 crc kubenswrapper[5017]: I0129 08:08:39.787111 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05271cbb-1748-4309-9e77-023689c72e35-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"05271cbb-1748-4309-9e77-023689c72e35\") " pod="openstack/cinder-backup-0" Jan 29 08:08:39 crc kubenswrapper[5017]: I0129 08:08:39.787134 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/05271cbb-1748-4309-9e77-023689c72e35-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"05271cbb-1748-4309-9e77-023689c72e35\") " pod="openstack/cinder-backup-0" 
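
The cinder-backup-0 block above shows the kubelet volume reconciler's usual three-phase sequence per volume: operationExecutor.VerifyControllerAttachedVolume, then operationExecutor.MountVolume started, then MountVolume.SetUp succeeded. The same pattern appears earlier in this capture for cinder-scheduler-0 and cinder-volume-volume1-0. One way to sanity-check a capture like this is to pair each VerifyControllerAttachedVolume entry with a later SetUp success for the same pod and volume. The following is a minimal Python sketch of that audit; it assumes klog-formatted kubelet journal text on stdin, and its regexes are inferred from the lines above rather than from any stable kubelet interface.

#!/usr/bin/env python3
"""Audit kubelet volume-reconciler entries: every volume that logs
"VerifyControllerAttachedVolume started" for a pod should eventually log
"MountVolume.SetUp succeeded". The regexes below are inferred from the
klog lines in this journal capture; they are assumptions, not a stable
kubelet interface."""
import re
import sys
from collections import defaultdict

# In the raw journal text the volume name is quoted as \"name\" (a literal
# backslash plus double quote), so the patterns match that escaping.
# re.DOTALL lets a match span line breaks, since entries in a capture like
# this one can be wrapped across physical lines.
VERIFY = re.compile(
    r'VerifyControllerAttachedVolume started for volume \\"(?P<vol>[^\\"]+)\\"'
    r'.*?pod="(?P<pod>[^"]+)"', re.DOTALL)
SETUP_OK = re.compile(
    r'MountVolume\.SetUp succeeded for volume \\"(?P<vol>[^\\"]+)\\"'
    r'.*?pod="(?P<pod>[^"]+)"', re.DOTALL)

def audit(text):
    # pod -> set of volumes still waiting for a SetUp success entry.
    # Only the Verify -> SetUp pairing is checked; the intermediate
    # "MountVolume started" entries are informational here.
    pending = defaultdict(set)
    for m in VERIFY.finditer(text):
        pending[m.group("pod")].add(m.group("vol"))
    for m in SETUP_OK.finditer(text):
        pending[m.group("pod")].discard(m.group("vol"))
    return {pod: sorted(vols) for pod, vols in pending.items() if vols}

if __name__ == "__main__":
    leftovers = audit(sys.stdin.read())
    for pod, vols in sorted(leftovers.items()):
        print(f"{pod}: no SetUp success seen for {vols}")
    sys.exit(1 if leftovers else 0)

Feeding this node's journal through it, for example via journalctl -u kubelet piped to python3 mount_audit.py (the unit name and script name are illustrative), should report nothing for the cinder pods above, since every volume that starts verification also logs a SetUp success; a non-empty report would typically correspond to a pod stuck waiting on its volumes.
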
Jan 29 08:08:39 crc kubenswrapper[5017]: I0129 08:08:39.787179 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/05271cbb-1748-4309-9e77-023689c72e35-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"05271cbb-1748-4309-9e77-023689c72e35\") " pod="openstack/cinder-backup-0" Jan 29 08:08:39 crc kubenswrapper[5017]: I0129 08:08:39.787200 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/05271cbb-1748-4309-9e77-023689c72e35-config-data-custom\") pod \"cinder-backup-0\" (UID: \"05271cbb-1748-4309-9e77-023689c72e35\") " pod="openstack/cinder-backup-0" Jan 29 08:08:39 crc kubenswrapper[5017]: I0129 08:08:39.787235 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/05271cbb-1748-4309-9e77-023689c72e35-run\") pod \"cinder-backup-0\" (UID: \"05271cbb-1748-4309-9e77-023689c72e35\") " pod="openstack/cinder-backup-0" Jan 29 08:08:39 crc kubenswrapper[5017]: I0129 08:08:39.787261 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05271cbb-1748-4309-9e77-023689c72e35-config-data\") pod \"cinder-backup-0\" (UID: \"05271cbb-1748-4309-9e77-023689c72e35\") " pod="openstack/cinder-backup-0" Jan 29 08:08:39 crc kubenswrapper[5017]: I0129 08:08:39.787431 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/05271cbb-1748-4309-9e77-023689c72e35-sys\") pod \"cinder-backup-0\" (UID: \"05271cbb-1748-4309-9e77-023689c72e35\") " pod="openstack/cinder-backup-0" Jan 29 08:08:39 crc kubenswrapper[5017]: I0129 08:08:39.791216 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/05271cbb-1748-4309-9e77-023689c72e35-etc-nvme\") pod \"cinder-backup-0\" (UID: \"05271cbb-1748-4309-9e77-023689c72e35\") " pod="openstack/cinder-backup-0" Jan 29 08:08:39 crc kubenswrapper[5017]: I0129 08:08:39.791324 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/05271cbb-1748-4309-9e77-023689c72e35-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"05271cbb-1748-4309-9e77-023689c72e35\") " pod="openstack/cinder-backup-0" Jan 29 08:08:39 crc kubenswrapper[5017]: I0129 08:08:39.796623 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/05271cbb-1748-4309-9e77-023689c72e35-ceph\") pod \"cinder-backup-0\" (UID: \"05271cbb-1748-4309-9e77-023689c72e35\") " pod="openstack/cinder-backup-0" Jan 29 08:08:39 crc kubenswrapper[5017]: I0129 08:08:39.796788 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/05271cbb-1748-4309-9e77-023689c72e35-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"05271cbb-1748-4309-9e77-023689c72e35\") " pod="openstack/cinder-backup-0" Jan 29 08:08:39 crc kubenswrapper[5017]: I0129 08:08:39.797227 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/05271cbb-1748-4309-9e77-023689c72e35-dev\") pod \"cinder-backup-0\" (UID: \"05271cbb-1748-4309-9e77-023689c72e35\") " pod="openstack/cinder-backup-0" Jan 29 08:08:39 crc kubenswrapper[5017]: I0129 08:08:39.801576 5017 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05271cbb-1748-4309-9e77-023689c72e35-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"05271cbb-1748-4309-9e77-023689c72e35\") " pod="openstack/cinder-backup-0" Jan 29 08:08:39 crc kubenswrapper[5017]: I0129 08:08:39.801693 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/05271cbb-1748-4309-9e77-023689c72e35-run\") pod \"cinder-backup-0\" (UID: \"05271cbb-1748-4309-9e77-023689c72e35\") " pod="openstack/cinder-backup-0" Jan 29 08:08:39 crc kubenswrapper[5017]: I0129 08:08:39.801735 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/05271cbb-1748-4309-9e77-023689c72e35-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"05271cbb-1748-4309-9e77-023689c72e35\") " pod="openstack/cinder-backup-0" Jan 29 08:08:39 crc kubenswrapper[5017]: I0129 08:08:39.803578 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/05271cbb-1748-4309-9e77-023689c72e35-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"05271cbb-1748-4309-9e77-023689c72e35\") " pod="openstack/cinder-backup-0" Jan 29 08:08:39 crc kubenswrapper[5017]: I0129 08:08:39.806546 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/05271cbb-1748-4309-9e77-023689c72e35-lib-modules\") pod \"cinder-backup-0\" (UID: \"05271cbb-1748-4309-9e77-023689c72e35\") " pod="openstack/cinder-backup-0" Jan 29 08:08:39 crc kubenswrapper[5017]: I0129 08:08:39.809300 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/05271cbb-1748-4309-9e77-023689c72e35-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"05271cbb-1748-4309-9e77-023689c72e35\") " pod="openstack/cinder-backup-0" Jan 29 08:08:39 crc kubenswrapper[5017]: I0129 08:08:39.810837 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/05271cbb-1748-4309-9e77-023689c72e35-config-data-custom\") pod \"cinder-backup-0\" (UID: \"05271cbb-1748-4309-9e77-023689c72e35\") " pod="openstack/cinder-backup-0" Jan 29 08:08:39 crc kubenswrapper[5017]: I0129 08:08:39.813929 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05271cbb-1748-4309-9e77-023689c72e35-config-data\") pod \"cinder-backup-0\" (UID: \"05271cbb-1748-4309-9e77-023689c72e35\") " pod="openstack/cinder-backup-0" Jan 29 08:08:39 crc kubenswrapper[5017]: I0129 08:08:39.833844 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05271cbb-1748-4309-9e77-023689c72e35-scripts\") pod \"cinder-backup-0\" (UID: \"05271cbb-1748-4309-9e77-023689c72e35\") " pod="openstack/cinder-backup-0" Jan 29 08:08:39 crc kubenswrapper[5017]: I0129 08:08:39.838657 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmgmg\" (UniqueName: \"kubernetes.io/projected/05271cbb-1748-4309-9e77-023689c72e35-kube-api-access-cmgmg\") pod \"cinder-backup-0\" (UID: \"05271cbb-1748-4309-9e77-023689c72e35\") " pod="openstack/cinder-backup-0" Jan 29 08:08:39 crc kubenswrapper[5017]: I0129 08:08:39.889279 5017 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/cinder-volume-volume1-0"] Jan 29 08:08:39 crc kubenswrapper[5017]: W0129 08:08:39.907410 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod17e2c7f2_bf28_4d9c_a65a_6da99c84034b.slice/crio-6913cb960b621f81d6319d4dade14b6e9c05132664c289c0860b372d676d36ae WatchSource:0}: Error finding container 6913cb960b621f81d6319d4dade14b6e9c05132664c289c0860b372d676d36ae: Status 404 returned error can't find the container with id 6913cb960b621f81d6319d4dade14b6e9c05132664c289c0860b372d676d36ae Jan 29 08:08:39 crc kubenswrapper[5017]: I0129 08:08:39.950215 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0" Jan 29 08:08:40 crc kubenswrapper[5017]: I0129 08:08:40.511756 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"17e2c7f2-bf28-4d9c-a65a-6da99c84034b","Type":"ContainerStarted","Data":"6913cb960b621f81d6319d4dade14b6e9c05132664c289c0860b372d676d36ae"} Jan 29 08:08:40 crc kubenswrapper[5017]: I0129 08:08:40.626045 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Jan 29 08:08:41 crc kubenswrapper[5017]: I0129 08:08:41.405576 5017 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="80191ba1-9557-4150-b887-1262b7541638" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.1.79:8776/healthcheck\": read tcp 10.217.0.2:54702->10.217.1.79:8776: read: connection reset by peer" Jan 29 08:08:41 crc kubenswrapper[5017]: I0129 08:08:41.530137 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"17e2c7f2-bf28-4d9c-a65a-6da99c84034b","Type":"ContainerStarted","Data":"e4a88286d358f0e411581c93d2fa15d5d4ab3602e59b7b6813e9119562739091"} Jan 29 08:08:41 crc kubenswrapper[5017]: I0129 08:08:41.530199 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"17e2c7f2-bf28-4d9c-a65a-6da99c84034b","Type":"ContainerStarted","Data":"a8cf04baca9ede86fe0e8ac7085a2c66100c8d3263829888aeff8b1be302050f"} Jan 29 08:08:41 crc kubenswrapper[5017]: I0129 08:08:41.532727 5017 generic.go:334] "Generic (PLEG): container finished" podID="80191ba1-9557-4150-b887-1262b7541638" containerID="6695a6cf07f027bca5039813c42d405f98075e69255b726af24f9f0237b5b9fe" exitCode=0 Jan 29 08:08:41 crc kubenswrapper[5017]: I0129 08:08:41.532786 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"80191ba1-9557-4150-b887-1262b7541638","Type":"ContainerDied","Data":"6695a6cf07f027bca5039813c42d405f98075e69255b726af24f9f0237b5b9fe"} Jan 29 08:08:41 crc kubenswrapper[5017]: I0129 08:08:41.535657 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"05271cbb-1748-4309-9e77-023689c72e35","Type":"ContainerStarted","Data":"8838d3a299bfced71715360273d230fc298fe3b7bc9bc6081fd12746243259ac"} Jan 29 08:08:41 crc kubenswrapper[5017]: I0129 08:08:41.568337 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-volume-volume1-0" podStartSLOduration=2.764037722 podStartE2EDuration="3.568290822s" podCreationTimestamp="2026-01-29 08:08:38 +0000 UTC" firstStartedPulling="2026-01-29 08:08:39.909516587 +0000 UTC m=+5606.283964197" lastFinishedPulling="2026-01-29 08:08:40.713769677 +0000 UTC m=+5607.088217297" 
observedRunningTime="2026-01-29 08:08:41.559521442 +0000 UTC m=+5607.933969052" watchObservedRunningTime="2026-01-29 08:08:41.568290822 +0000 UTC m=+5607.942738432" Jan 29 08:08:41 crc kubenswrapper[5017]: I0129 08:08:41.778149 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 29 08:08:41 crc kubenswrapper[5017]: I0129 08:08:41.778749 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 29 08:08:41 crc kubenswrapper[5017]: I0129 08:08:41.787868 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 29 08:08:41 crc kubenswrapper[5017]: I0129 08:08:41.788080 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 29 08:08:41 crc kubenswrapper[5017]: I0129 08:08:41.789037 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 29 08:08:41 crc kubenswrapper[5017]: I0129 08:08:41.790380 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 29 08:08:41 crc kubenswrapper[5017]: I0129 08:08:41.793144 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 29 08:08:41 crc kubenswrapper[5017]: I0129 08:08:41.819365 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 29 08:08:41 crc kubenswrapper[5017]: I0129 08:08:41.980106 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 29 08:08:42 crc kubenswrapper[5017]: I0129 08:08:42.055884 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 29 08:08:42 crc kubenswrapper[5017]: I0129 08:08:42.149038 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/80191ba1-9557-4150-b887-1262b7541638-etc-machine-id\") pod \"80191ba1-9557-4150-b887-1262b7541638\" (UID: \"80191ba1-9557-4150-b887-1262b7541638\") " Jan 29 08:08:42 crc kubenswrapper[5017]: I0129 08:08:42.149688 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/80191ba1-9557-4150-b887-1262b7541638-scripts\") pod \"80191ba1-9557-4150-b887-1262b7541638\" (UID: \"80191ba1-9557-4150-b887-1262b7541638\") " Jan 29 08:08:42 crc kubenswrapper[5017]: I0129 08:08:42.149726 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80191ba1-9557-4150-b887-1262b7541638-config-data\") pod \"80191ba1-9557-4150-b887-1262b7541638\" (UID: \"80191ba1-9557-4150-b887-1262b7541638\") " Jan 29 08:08:42 crc kubenswrapper[5017]: I0129 08:08:42.149388 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/80191ba1-9557-4150-b887-1262b7541638-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "80191ba1-9557-4150-b887-1262b7541638" (UID: "80191ba1-9557-4150-b887-1262b7541638"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 08:08:42 crc kubenswrapper[5017]: I0129 08:08:42.149992 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tv56n\" (UniqueName: \"kubernetes.io/projected/80191ba1-9557-4150-b887-1262b7541638-kube-api-access-tv56n\") pod \"80191ba1-9557-4150-b887-1262b7541638\" (UID: \"80191ba1-9557-4150-b887-1262b7541638\") " Jan 29 08:08:42 crc kubenswrapper[5017]: I0129 08:08:42.150064 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80191ba1-9557-4150-b887-1262b7541638-combined-ca-bundle\") pod \"80191ba1-9557-4150-b887-1262b7541638\" (UID: \"80191ba1-9557-4150-b887-1262b7541638\") " Jan 29 08:08:42 crc kubenswrapper[5017]: I0129 08:08:42.150094 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/80191ba1-9557-4150-b887-1262b7541638-logs\") pod \"80191ba1-9557-4150-b887-1262b7541638\" (UID: \"80191ba1-9557-4150-b887-1262b7541638\") " Jan 29 08:08:42 crc kubenswrapper[5017]: I0129 08:08:42.150201 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/80191ba1-9557-4150-b887-1262b7541638-config-data-custom\") pod \"80191ba1-9557-4150-b887-1262b7541638\" (UID: \"80191ba1-9557-4150-b887-1262b7541638\") " Jan 29 08:08:42 crc kubenswrapper[5017]: I0129 08:08:42.150868 5017 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/80191ba1-9557-4150-b887-1262b7541638-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 29 08:08:42 crc kubenswrapper[5017]: I0129 08:08:42.151950 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/80191ba1-9557-4150-b887-1262b7541638-logs" (OuterVolumeSpecName: "logs") pod "80191ba1-9557-4150-b887-1262b7541638" (UID: "80191ba1-9557-4150-b887-1262b7541638"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:08:42 crc kubenswrapper[5017]: I0129 08:08:42.160112 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80191ba1-9557-4150-b887-1262b7541638-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "80191ba1-9557-4150-b887-1262b7541638" (UID: "80191ba1-9557-4150-b887-1262b7541638"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:08:42 crc kubenswrapper[5017]: I0129 08:08:42.161804 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80191ba1-9557-4150-b887-1262b7541638-kube-api-access-tv56n" (OuterVolumeSpecName: "kube-api-access-tv56n") pod "80191ba1-9557-4150-b887-1262b7541638" (UID: "80191ba1-9557-4150-b887-1262b7541638"). InnerVolumeSpecName "kube-api-access-tv56n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:08:42 crc kubenswrapper[5017]: I0129 08:08:42.162054 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80191ba1-9557-4150-b887-1262b7541638-scripts" (OuterVolumeSpecName: "scripts") pod "80191ba1-9557-4150-b887-1262b7541638" (UID: "80191ba1-9557-4150-b887-1262b7541638"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:08:42 crc kubenswrapper[5017]: I0129 08:08:42.252986 5017 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/80191ba1-9557-4150-b887-1262b7541638-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 08:08:42 crc kubenswrapper[5017]: I0129 08:08:42.253021 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tv56n\" (UniqueName: \"kubernetes.io/projected/80191ba1-9557-4150-b887-1262b7541638-kube-api-access-tv56n\") on node \"crc\" DevicePath \"\"" Jan 29 08:08:42 crc kubenswrapper[5017]: I0129 08:08:42.253037 5017 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/80191ba1-9557-4150-b887-1262b7541638-logs\") on node \"crc\" DevicePath \"\"" Jan 29 08:08:42 crc kubenswrapper[5017]: I0129 08:08:42.253045 5017 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/80191ba1-9557-4150-b887-1262b7541638-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 29 08:08:42 crc kubenswrapper[5017]: I0129 08:08:42.275161 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80191ba1-9557-4150-b887-1262b7541638-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "80191ba1-9557-4150-b887-1262b7541638" (UID: "80191ba1-9557-4150-b887-1262b7541638"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:08:42 crc kubenswrapper[5017]: I0129 08:08:42.298442 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80191ba1-9557-4150-b887-1262b7541638-config-data" (OuterVolumeSpecName: "config-data") pod "80191ba1-9557-4150-b887-1262b7541638" (UID: "80191ba1-9557-4150-b887-1262b7541638"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:08:42 crc kubenswrapper[5017]: I0129 08:08:42.355633 5017 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80191ba1-9557-4150-b887-1262b7541638-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:08:42 crc kubenswrapper[5017]: I0129 08:08:42.355674 5017 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80191ba1-9557-4150-b887-1262b7541638-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 08:08:42 crc kubenswrapper[5017]: I0129 08:08:42.547899 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"80191ba1-9557-4150-b887-1262b7541638","Type":"ContainerDied","Data":"c84a182c6241313bf2726b2ede775aba9f3ac006a4bd0f414607a3a6e78ca216"} Jan 29 08:08:42 crc kubenswrapper[5017]: I0129 08:08:42.548028 5017 scope.go:117] "RemoveContainer" containerID="6695a6cf07f027bca5039813c42d405f98075e69255b726af24f9f0237b5b9fe" Jan 29 08:08:42 crc kubenswrapper[5017]: I0129 08:08:42.549857 5017 util.go:48] "No ready sandbox for pod can be found. 
Jan 29 08:08:42 crc kubenswrapper[5017]: I0129 08:08:42.549857 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 29 08:08:42 crc kubenswrapper[5017]: I0129 08:08:42.550494 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"05271cbb-1748-4309-9e77-023689c72e35","Type":"ContainerStarted","Data":"7f0fa89fb2b1a77bac8516a5386c3ae6541ab10f4d834478fe0230c842ba7df3"} Jan 29 08:08:42 crc kubenswrapper[5017]: I0129 08:08:42.550553 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"05271cbb-1748-4309-9e77-023689c72e35","Type":"ContainerStarted","Data":"dcbe4aaea485604b430b3f71866a097643037fbea6c4a72a91e2ce5d6f375465"} Jan 29 08:08:42 crc kubenswrapper[5017]: I0129 08:08:42.551540 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 29 08:08:42 crc kubenswrapper[5017]: I0129 08:08:42.581346 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 29 08:08:42 crc kubenswrapper[5017]: I0129 08:08:42.597635 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 29 08:08:42 crc kubenswrapper[5017]: I0129 08:08:42.632045 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 29 08:08:42 crc kubenswrapper[5017]: I0129 08:08:42.655035 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 29 08:08:42 crc kubenswrapper[5017]: I0129 08:08:42.655030 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-backup-0" podStartSLOduration=2.763966621 podStartE2EDuration="3.654996663s" podCreationTimestamp="2026-01-29 08:08:39 +0000 UTC" firstStartedPulling="2026-01-29 08:08:40.709671298 +0000 UTC m=+5607.084118918" lastFinishedPulling="2026-01-29 08:08:41.60070135 +0000 UTC m=+5607.975148960" observedRunningTime="2026-01-29 08:08:42.608774413 +0000 UTC m=+5608.983222023" watchObservedRunningTime="2026-01-29 08:08:42.654996663 +0000 UTC m=+5609.029444273" Jan 29 08:08:42 crc kubenswrapper[5017]: E0129 08:08:42.655571 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80191ba1-9557-4150-b887-1262b7541638" containerName="cinder-api" Jan 29 08:08:42 crc kubenswrapper[5017]: I0129 08:08:42.655590 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="80191ba1-9557-4150-b887-1262b7541638" containerName="cinder-api" Jan 29 08:08:42 crc kubenswrapper[5017]: E0129 08:08:42.655608 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80191ba1-9557-4150-b887-1262b7541638" containerName="cinder-api-log" Jan 29 08:08:42 crc kubenswrapper[5017]: I0129 08:08:42.655614 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="80191ba1-9557-4150-b887-1262b7541638" containerName="cinder-api-log" Jan 29 08:08:42 crc kubenswrapper[5017]: I0129 08:08:42.655812 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="80191ba1-9557-4150-b887-1262b7541638" containerName="cinder-api" Jan 29 08:08:42 crc kubenswrapper[5017]: I0129 08:08:42.655836 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="80191ba1-9557-4150-b887-1262b7541638" containerName="cinder-api-log" Jan 29 08:08:42 crc kubenswrapper[5017]: I0129 08:08:42.656949 5017 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/cinder-api-0" Jan 29 08:08:42 crc kubenswrapper[5017]: I0129 08:08:42.659971 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 29 08:08:42 crc kubenswrapper[5017]: I0129 08:08:42.665774 5017 scope.go:117] "RemoveContainer" containerID="96475002857cb29bf1bce145fc27c2a6b5f6203507419ba17c55b71218a0ebf5" Jan 29 08:08:42 crc kubenswrapper[5017]: I0129 08:08:42.727484 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 29 08:08:42 crc kubenswrapper[5017]: I0129 08:08:42.764383 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bd59e70a-ff9a-4fc6-a1c5-f939a17d1afb-etc-machine-id\") pod \"cinder-api-0\" (UID: \"bd59e70a-ff9a-4fc6-a1c5-f939a17d1afb\") " pod="openstack/cinder-api-0" Jan 29 08:08:42 crc kubenswrapper[5017]: I0129 08:08:42.764443 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd59e70a-ff9a-4fc6-a1c5-f939a17d1afb-logs\") pod \"cinder-api-0\" (UID: \"bd59e70a-ff9a-4fc6-a1c5-f939a17d1afb\") " pod="openstack/cinder-api-0" Jan 29 08:08:42 crc kubenswrapper[5017]: I0129 08:08:42.764495 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swbj8\" (UniqueName: \"kubernetes.io/projected/bd59e70a-ff9a-4fc6-a1c5-f939a17d1afb-kube-api-access-swbj8\") pod \"cinder-api-0\" (UID: \"bd59e70a-ff9a-4fc6-a1c5-f939a17d1afb\") " pod="openstack/cinder-api-0" Jan 29 08:08:42 crc kubenswrapper[5017]: I0129 08:08:42.764542 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd59e70a-ff9a-4fc6-a1c5-f939a17d1afb-config-data\") pod \"cinder-api-0\" (UID: \"bd59e70a-ff9a-4fc6-a1c5-f939a17d1afb\") " pod="openstack/cinder-api-0" Jan 29 08:08:42 crc kubenswrapper[5017]: I0129 08:08:42.764576 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bd59e70a-ff9a-4fc6-a1c5-f939a17d1afb-config-data-custom\") pod \"cinder-api-0\" (UID: \"bd59e70a-ff9a-4fc6-a1c5-f939a17d1afb\") " pod="openstack/cinder-api-0" Jan 29 08:08:42 crc kubenswrapper[5017]: I0129 08:08:42.764607 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd59e70a-ff9a-4fc6-a1c5-f939a17d1afb-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"bd59e70a-ff9a-4fc6-a1c5-f939a17d1afb\") " pod="openstack/cinder-api-0" Jan 29 08:08:42 crc kubenswrapper[5017]: I0129 08:08:42.764645 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd59e70a-ff9a-4fc6-a1c5-f939a17d1afb-scripts\") pod \"cinder-api-0\" (UID: \"bd59e70a-ff9a-4fc6-a1c5-f939a17d1afb\") " pod="openstack/cinder-api-0" Jan 29 08:08:42 crc kubenswrapper[5017]: I0129 08:08:42.867598 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bd59e70a-ff9a-4fc6-a1c5-f939a17d1afb-config-data-custom\") pod \"cinder-api-0\" (UID: \"bd59e70a-ff9a-4fc6-a1c5-f939a17d1afb\") " pod="openstack/cinder-api-0" Jan 29 08:08:42 crc 
kubenswrapper[5017]: I0129 08:08:42.868120 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd59e70a-ff9a-4fc6-a1c5-f939a17d1afb-config-data\") pod \"cinder-api-0\" (UID: \"bd59e70a-ff9a-4fc6-a1c5-f939a17d1afb\") " pod="openstack/cinder-api-0" Jan 29 08:08:42 crc kubenswrapper[5017]: I0129 08:08:42.868169 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd59e70a-ff9a-4fc6-a1c5-f939a17d1afb-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"bd59e70a-ff9a-4fc6-a1c5-f939a17d1afb\") " pod="openstack/cinder-api-0" Jan 29 08:08:42 crc kubenswrapper[5017]: I0129 08:08:42.868227 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd59e70a-ff9a-4fc6-a1c5-f939a17d1afb-scripts\") pod \"cinder-api-0\" (UID: \"bd59e70a-ff9a-4fc6-a1c5-f939a17d1afb\") " pod="openstack/cinder-api-0" Jan 29 08:08:42 crc kubenswrapper[5017]: I0129 08:08:42.868298 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bd59e70a-ff9a-4fc6-a1c5-f939a17d1afb-etc-machine-id\") pod \"cinder-api-0\" (UID: \"bd59e70a-ff9a-4fc6-a1c5-f939a17d1afb\") " pod="openstack/cinder-api-0" Jan 29 08:08:42 crc kubenswrapper[5017]: I0129 08:08:42.868333 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd59e70a-ff9a-4fc6-a1c5-f939a17d1afb-logs\") pod \"cinder-api-0\" (UID: \"bd59e70a-ff9a-4fc6-a1c5-f939a17d1afb\") " pod="openstack/cinder-api-0" Jan 29 08:08:42 crc kubenswrapper[5017]: I0129 08:08:42.868376 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swbj8\" (UniqueName: \"kubernetes.io/projected/bd59e70a-ff9a-4fc6-a1c5-f939a17d1afb-kube-api-access-swbj8\") pod \"cinder-api-0\" (UID: \"bd59e70a-ff9a-4fc6-a1c5-f939a17d1afb\") " pod="openstack/cinder-api-0" Jan 29 08:08:42 crc kubenswrapper[5017]: I0129 08:08:42.872390 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bd59e70a-ff9a-4fc6-a1c5-f939a17d1afb-etc-machine-id\") pod \"cinder-api-0\" (UID: \"bd59e70a-ff9a-4fc6-a1c5-f939a17d1afb\") " pod="openstack/cinder-api-0" Jan 29 08:08:42 crc kubenswrapper[5017]: I0129 08:08:42.872784 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd59e70a-ff9a-4fc6-a1c5-f939a17d1afb-logs\") pod \"cinder-api-0\" (UID: \"bd59e70a-ff9a-4fc6-a1c5-f939a17d1afb\") " pod="openstack/cinder-api-0" Jan 29 08:08:42 crc kubenswrapper[5017]: I0129 08:08:42.878939 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd59e70a-ff9a-4fc6-a1c5-f939a17d1afb-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"bd59e70a-ff9a-4fc6-a1c5-f939a17d1afb\") " pod="openstack/cinder-api-0" Jan 29 08:08:42 crc kubenswrapper[5017]: I0129 08:08:42.880768 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd59e70a-ff9a-4fc6-a1c5-f939a17d1afb-config-data\") pod \"cinder-api-0\" (UID: \"bd59e70a-ff9a-4fc6-a1c5-f939a17d1afb\") " pod="openstack/cinder-api-0" Jan 29 08:08:42 crc kubenswrapper[5017]: I0129 08:08:42.888623 5017 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bd59e70a-ff9a-4fc6-a1c5-f939a17d1afb-config-data-custom\") pod \"cinder-api-0\" (UID: \"bd59e70a-ff9a-4fc6-a1c5-f939a17d1afb\") " pod="openstack/cinder-api-0" Jan 29 08:08:42 crc kubenswrapper[5017]: I0129 08:08:42.892747 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd59e70a-ff9a-4fc6-a1c5-f939a17d1afb-scripts\") pod \"cinder-api-0\" (UID: \"bd59e70a-ff9a-4fc6-a1c5-f939a17d1afb\") " pod="openstack/cinder-api-0" Jan 29 08:08:42 crc kubenswrapper[5017]: I0129 08:08:42.896633 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swbj8\" (UniqueName: \"kubernetes.io/projected/bd59e70a-ff9a-4fc6-a1c5-f939a17d1afb-kube-api-access-swbj8\") pod \"cinder-api-0\" (UID: \"bd59e70a-ff9a-4fc6-a1c5-f939a17d1afb\") " pod="openstack/cinder-api-0" Jan 29 08:08:43 crc kubenswrapper[5017]: I0129 08:08:43.019754 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 29 08:08:43 crc kubenswrapper[5017]: I0129 08:08:43.575709 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 29 08:08:44 crc kubenswrapper[5017]: I0129 08:08:44.218469 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-volume1-0" Jan 29 08:08:44 crc kubenswrapper[5017]: I0129 08:08:44.339263 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80191ba1-9557-4150-b887-1262b7541638" path="/var/lib/kubelet/pods/80191ba1-9557-4150-b887-1262b7541638/volumes" Jan 29 08:08:44 crc kubenswrapper[5017]: I0129 08:08:44.616183 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"bd59e70a-ff9a-4fc6-a1c5-f939a17d1afb","Type":"ContainerStarted","Data":"0c748e4b194e8bea5847a77d302a85476bbc17a1dc7019a001d0fc36936354bd"} Jan 29 08:08:44 crc kubenswrapper[5017]: I0129 08:08:44.616246 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"bd59e70a-ff9a-4fc6-a1c5-f939a17d1afb","Type":"ContainerStarted","Data":"f0eddb05e68674a65b23e3839977041b37368b6054c7d5ec3ebe8327e6be6d82"} Jan 29 08:08:44 crc kubenswrapper[5017]: I0129 08:08:44.951046 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-backup-0" Jan 29 08:08:45 crc kubenswrapper[5017]: I0129 08:08:45.629107 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"bd59e70a-ff9a-4fc6-a1c5-f939a17d1afb","Type":"ContainerStarted","Data":"f82c3157920457e13e6d2ad30d208c4a5738d6897d77dabc252f9417f98ec368"} Jan 29 08:08:45 crc kubenswrapper[5017]: I0129 08:08:45.629631 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 29 08:08:45 crc kubenswrapper[5017]: I0129 08:08:45.646138 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.646119085 podStartE2EDuration="3.646119085s" podCreationTimestamp="2026-01-29 08:08:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:08:45.644914967 +0000 UTC m=+5612.019362577" watchObservedRunningTime="2026-01-29 08:08:45.646119085 +0000 UTC m=+5612.020566695"
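The pod_startup_latency_tracker.go:104 entries encode a simple relationship: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that figure minus the time spent pulling images (zero pull timestamps, as for cinder-api-0 above, mean no pull was needed, so SLO equals E2E). A sketch of the arithmetic using the cinder-backup-0 numbers from earlier; results are approximate because the tracker works from monotonic-clock readings, so the last digits differ:

```go
package main

import (
	"fmt"
	"time"
)

// startupDurations mirrors the relationship visible in the
// pod_startup_latency_tracker lines: E2E is observed-running minus
// creation; the SLO duration additionally excludes image-pull time.
func startupDurations(created, running, firstPull, lastPull time.Time) (slo, e2e time.Duration) {
	e2e = running.Sub(created)
	slo = e2e
	if !firstPull.IsZero() { // zero pull timestamps: image already present
		slo -= lastPull.Sub(firstPull)
	}
	return slo, e2e
}

func main() {
	parse := func(s string) time.Time {
		t, _ := time.Parse(time.RFC3339Nano, s)
		return t
	}
	// Timestamps from the cinder-backup-0 entry above.
	created := parse("2026-01-29T08:08:39Z")
	running := parse("2026-01-29T08:08:42.654996663Z")
	firstPull := parse("2026-01-29T08:08:40.709671298Z")
	lastPull := parse("2026-01-29T08:08:41.60070135Z")

	slo, e2e := startupDurations(created, running, firstPull, lastPull)
	fmt.Println(slo, e2e) // ~2.764s and 3.654996663s, matching the logged values
}
```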
Jan 29 08:08:47 crc kubenswrapper[5017]: I0129 08:08:47.272754 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 29 08:08:47 crc kubenswrapper[5017]: I0129 08:08:47.343675 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 08:08:47 crc kubenswrapper[5017]: I0129 08:08:47.665050 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="38d086d2-dbe2-4417-bf5e-0936e22b0eae" containerName="probe" containerID="cri-o://9b22d3a61f7e70031f333fa6fa11e1361c37f9d4781727e10e56ee70896d55cc" gracePeriod=30 Jan 29 08:08:47 crc kubenswrapper[5017]: I0129 08:08:47.665851 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="38d086d2-dbe2-4417-bf5e-0936e22b0eae" containerName="cinder-scheduler" containerID="cri-o://9071a7bdbae37ddd54f3c88aaa03a3b41463502d43d56e42f80a665a0de16f87" gracePeriod=30 Jan 29 08:08:48 crc kubenswrapper[5017]: I0129 08:08:48.709839 5017 generic.go:334] "Generic (PLEG): container finished" podID="38d086d2-dbe2-4417-bf5e-0936e22b0eae" containerID="9b22d3a61f7e70031f333fa6fa11e1361c37f9d4781727e10e56ee70896d55cc" exitCode=0 Jan 29 08:08:48 crc kubenswrapper[5017]: I0129 08:08:48.710217 5017 generic.go:334] "Generic (PLEG): container finished" podID="38d086d2-dbe2-4417-bf5e-0936e22b0eae" containerID="9071a7bdbae37ddd54f3c88aaa03a3b41463502d43d56e42f80a665a0de16f87" exitCode=0 Jan 29 08:08:48 crc kubenswrapper[5017]: I0129 08:08:48.709891 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"38d086d2-dbe2-4417-bf5e-0936e22b0eae","Type":"ContainerDied","Data":"9b22d3a61f7e70031f333fa6fa11e1361c37f9d4781727e10e56ee70896d55cc"} Jan 29 08:08:48 crc kubenswrapper[5017]: I0129 08:08:48.710257 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"38d086d2-dbe2-4417-bf5e-0936e22b0eae","Type":"ContainerDied","Data":"9071a7bdbae37ddd54f3c88aaa03a3b41463502d43d56e42f80a665a0de16f87"} Jan 29 08:08:48 crc kubenswrapper[5017]: I0129 08:08:48.952416 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 29 08:08:49 crc kubenswrapper[5017]: I0129 08:08:49.042320 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/38d086d2-dbe2-4417-bf5e-0936e22b0eae-etc-machine-id\") pod \"38d086d2-dbe2-4417-bf5e-0936e22b0eae\" (UID: \"38d086d2-dbe2-4417-bf5e-0936e22b0eae\") " Jan 29 08:08:49 crc kubenswrapper[5017]: I0129 08:08:49.042443 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bt4t2\" (UniqueName: \"kubernetes.io/projected/38d086d2-dbe2-4417-bf5e-0936e22b0eae-kube-api-access-bt4t2\") pod \"38d086d2-dbe2-4417-bf5e-0936e22b0eae\" (UID: \"38d086d2-dbe2-4417-bf5e-0936e22b0eae\") " Jan 29 08:08:49 crc kubenswrapper[5017]: I0129 08:08:49.042453 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/38d086d2-dbe2-4417-bf5e-0936e22b0eae-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "38d086d2-dbe2-4417-bf5e-0936e22b0eae" (UID: "38d086d2-dbe2-4417-bf5e-0936e22b0eae"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
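The kuberuntime_container.go:808 entries above show graceful termination: the runtime delivers SIGTERM and only escalates to SIGKILL if the container outlives its grace period (30s here; both cinder-scheduler-0 containers exit cleanly within it, hence exitCode=0). A rough Go sketch of the pattern; the kubelet actually drives this through the CRI (cri-o on this node) rather than signalling processes directly:

```go
package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

// killWithGrace mirrors the "Killing container with a grace period"
// pattern: SIGTERM first, SIGKILL only if the process is still alive
// when the grace period expires. Unix-only sketch.
func killWithGrace(cmd *exec.Cmd, grace time.Duration) error {
	if err := cmd.Process.Signal(syscall.SIGTERM); err != nil {
		return err
	}
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()
	select {
	case err := <-done:
		return err // exited within the grace period (exitCode=0 above)
	case <-time.After(grace):
		return cmd.Process.Kill() // grace period exhausted, hard kill
	}
}

func main() {
	cmd := exec.Command("sleep", "300")
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	fmt.Println(killWithGrace(cmd, 30*time.Second))
}
```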
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 08:08:49 crc kubenswrapper[5017]: I0129 08:08:49.042563 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38d086d2-dbe2-4417-bf5e-0936e22b0eae-config-data\") pod \"38d086d2-dbe2-4417-bf5e-0936e22b0eae\" (UID: \"38d086d2-dbe2-4417-bf5e-0936e22b0eae\") " Jan 29 08:08:49 crc kubenswrapper[5017]: I0129 08:08:49.042685 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/38d086d2-dbe2-4417-bf5e-0936e22b0eae-config-data-custom\") pod \"38d086d2-dbe2-4417-bf5e-0936e22b0eae\" (UID: \"38d086d2-dbe2-4417-bf5e-0936e22b0eae\") " Jan 29 08:08:49 crc kubenswrapper[5017]: I0129 08:08:49.042730 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38d086d2-dbe2-4417-bf5e-0936e22b0eae-combined-ca-bundle\") pod \"38d086d2-dbe2-4417-bf5e-0936e22b0eae\" (UID: \"38d086d2-dbe2-4417-bf5e-0936e22b0eae\") " Jan 29 08:08:49 crc kubenswrapper[5017]: I0129 08:08:49.042791 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38d086d2-dbe2-4417-bf5e-0936e22b0eae-scripts\") pod \"38d086d2-dbe2-4417-bf5e-0936e22b0eae\" (UID: \"38d086d2-dbe2-4417-bf5e-0936e22b0eae\") " Jan 29 08:08:49 crc kubenswrapper[5017]: I0129 08:08:49.043197 5017 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/38d086d2-dbe2-4417-bf5e-0936e22b0eae-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 29 08:08:49 crc kubenswrapper[5017]: I0129 08:08:49.049362 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38d086d2-dbe2-4417-bf5e-0936e22b0eae-kube-api-access-bt4t2" (OuterVolumeSpecName: "kube-api-access-bt4t2") pod "38d086d2-dbe2-4417-bf5e-0936e22b0eae" (UID: "38d086d2-dbe2-4417-bf5e-0936e22b0eae"). InnerVolumeSpecName "kube-api-access-bt4t2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:08:49 crc kubenswrapper[5017]: I0129 08:08:49.049383 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38d086d2-dbe2-4417-bf5e-0936e22b0eae-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "38d086d2-dbe2-4417-bf5e-0936e22b0eae" (UID: "38d086d2-dbe2-4417-bf5e-0936e22b0eae"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:08:49 crc kubenswrapper[5017]: I0129 08:08:49.049482 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38d086d2-dbe2-4417-bf5e-0936e22b0eae-scripts" (OuterVolumeSpecName: "scripts") pod "38d086d2-dbe2-4417-bf5e-0936e22b0eae" (UID: "38d086d2-dbe2-4417-bf5e-0936e22b0eae"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:08:49 crc kubenswrapper[5017]: I0129 08:08:49.099045 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38d086d2-dbe2-4417-bf5e-0936e22b0eae-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "38d086d2-dbe2-4417-bf5e-0936e22b0eae" (UID: "38d086d2-dbe2-4417-bf5e-0936e22b0eae"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:08:49 crc kubenswrapper[5017]: I0129 08:08:49.145433 5017 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/38d086d2-dbe2-4417-bf5e-0936e22b0eae-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 29 08:08:49 crc kubenswrapper[5017]: I0129 08:08:49.145473 5017 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38d086d2-dbe2-4417-bf5e-0936e22b0eae-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:08:49 crc kubenswrapper[5017]: I0129 08:08:49.145486 5017 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38d086d2-dbe2-4417-bf5e-0936e22b0eae-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 08:08:49 crc kubenswrapper[5017]: I0129 08:08:49.145498 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bt4t2\" (UniqueName: \"kubernetes.io/projected/38d086d2-dbe2-4417-bf5e-0936e22b0eae-kube-api-access-bt4t2\") on node \"crc\" DevicePath \"\"" Jan 29 08:08:49 crc kubenswrapper[5017]: I0129 08:08:49.176989 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38d086d2-dbe2-4417-bf5e-0936e22b0eae-config-data" (OuterVolumeSpecName: "config-data") pod "38d086d2-dbe2-4417-bf5e-0936e22b0eae" (UID: "38d086d2-dbe2-4417-bf5e-0936e22b0eae"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:08:49 crc kubenswrapper[5017]: I0129 08:08:49.247086 5017 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38d086d2-dbe2-4417-bf5e-0936e22b0eae-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 08:08:49 crc kubenswrapper[5017]: I0129 08:08:49.470880 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-volume-volume1-0" Jan 29 08:08:49 crc kubenswrapper[5017]: I0129 08:08:49.723068 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"38d086d2-dbe2-4417-bf5e-0936e22b0eae","Type":"ContainerDied","Data":"e84d4644c24874f2af0d199a3adf81370b9ca2f978ce7cee96ea1e8d2ec43b6e"} Jan 29 08:08:49 crc kubenswrapper[5017]: I0129 08:08:49.723143 5017 scope.go:117] "RemoveContainer" containerID="9b22d3a61f7e70031f333fa6fa11e1361c37f9d4781727e10e56ee70896d55cc" Jan 29 08:08:49 crc kubenswrapper[5017]: I0129 08:08:49.723301 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 29 08:08:49 crc kubenswrapper[5017]: I0129 08:08:49.766682 5017 scope.go:117] "RemoveContainer" containerID="9071a7bdbae37ddd54f3c88aaa03a3b41463502d43d56e42f80a665a0de16f87" Jan 29 08:08:49 crc kubenswrapper[5017]: I0129 08:08:49.780944 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 08:08:49 crc kubenswrapper[5017]: I0129 08:08:49.805132 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 08:08:49 crc kubenswrapper[5017]: I0129 08:08:49.825923 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 08:08:49 crc kubenswrapper[5017]: E0129 08:08:49.826443 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38d086d2-dbe2-4417-bf5e-0936e22b0eae" containerName="probe" Jan 29 08:08:49 crc kubenswrapper[5017]: I0129 08:08:49.826476 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="38d086d2-dbe2-4417-bf5e-0936e22b0eae" containerName="probe" Jan 29 08:08:49 crc kubenswrapper[5017]: E0129 08:08:49.826522 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38d086d2-dbe2-4417-bf5e-0936e22b0eae" containerName="cinder-scheduler" Jan 29 08:08:49 crc kubenswrapper[5017]: I0129 08:08:49.826533 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="38d086d2-dbe2-4417-bf5e-0936e22b0eae" containerName="cinder-scheduler" Jan 29 08:08:49 crc kubenswrapper[5017]: I0129 08:08:49.826773 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="38d086d2-dbe2-4417-bf5e-0936e22b0eae" containerName="cinder-scheduler" Jan 29 08:08:49 crc kubenswrapper[5017]: I0129 08:08:49.826817 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="38d086d2-dbe2-4417-bf5e-0936e22b0eae" containerName="probe" Jan 29 08:08:49 crc kubenswrapper[5017]: I0129 08:08:49.828011 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 08:08:49 crc kubenswrapper[5017]: I0129 08:08:49.828139 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 29 08:08:49 crc kubenswrapper[5017]: I0129 08:08:49.831365 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 29 08:08:49 crc kubenswrapper[5017]: I0129 08:08:49.965753 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acef4159-bdb8-462c-92ea-e663e9ab5c0d-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"acef4159-bdb8-462c-92ea-e663e9ab5c0d\") " pod="openstack/cinder-scheduler-0" Jan 29 08:08:49 crc kubenswrapper[5017]: I0129 08:08:49.965928 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/acef4159-bdb8-462c-92ea-e663e9ab5c0d-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"acef4159-bdb8-462c-92ea-e663e9ab5c0d\") " pod="openstack/cinder-scheduler-0" Jan 29 08:08:49 crc kubenswrapper[5017]: I0129 08:08:49.966088 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4d8g\" (UniqueName: \"kubernetes.io/projected/acef4159-bdb8-462c-92ea-e663e9ab5c0d-kube-api-access-s4d8g\") pod \"cinder-scheduler-0\" (UID: \"acef4159-bdb8-462c-92ea-e663e9ab5c0d\") " pod="openstack/cinder-scheduler-0" Jan 29 08:08:49 crc kubenswrapper[5017]: I0129 08:08:49.966161 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/acef4159-bdb8-462c-92ea-e663e9ab5c0d-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"acef4159-bdb8-462c-92ea-e663e9ab5c0d\") " pod="openstack/cinder-scheduler-0" Jan 29 08:08:49 crc kubenswrapper[5017]: I0129 08:08:49.966346 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/acef4159-bdb8-462c-92ea-e663e9ab5c0d-scripts\") pod \"cinder-scheduler-0\" (UID: \"acef4159-bdb8-462c-92ea-e663e9ab5c0d\") " pod="openstack/cinder-scheduler-0" Jan 29 08:08:49 crc kubenswrapper[5017]: I0129 08:08:49.966507 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/acef4159-bdb8-462c-92ea-e663e9ab5c0d-config-data\") pod \"cinder-scheduler-0\" (UID: \"acef4159-bdb8-462c-92ea-e663e9ab5c0d\") " pod="openstack/cinder-scheduler-0" Jan 29 08:08:50 crc kubenswrapper[5017]: I0129 08:08:50.068398 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/acef4159-bdb8-462c-92ea-e663e9ab5c0d-scripts\") pod \"cinder-scheduler-0\" (UID: \"acef4159-bdb8-462c-92ea-e663e9ab5c0d\") " pod="openstack/cinder-scheduler-0" Jan 29 08:08:50 crc kubenswrapper[5017]: I0129 08:08:50.068467 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/acef4159-bdb8-462c-92ea-e663e9ab5c0d-config-data\") pod \"cinder-scheduler-0\" (UID: \"acef4159-bdb8-462c-92ea-e663e9ab5c0d\") " pod="openstack/cinder-scheduler-0" Jan 29 08:08:50 crc kubenswrapper[5017]: I0129 08:08:50.068606 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acef4159-bdb8-462c-92ea-e663e9ab5c0d-combined-ca-bundle\") pod 
\"cinder-scheduler-0\" (UID: \"acef4159-bdb8-462c-92ea-e663e9ab5c0d\") " pod="openstack/cinder-scheduler-0" Jan 29 08:08:50 crc kubenswrapper[5017]: I0129 08:08:50.068669 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/acef4159-bdb8-462c-92ea-e663e9ab5c0d-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"acef4159-bdb8-462c-92ea-e663e9ab5c0d\") " pod="openstack/cinder-scheduler-0" Jan 29 08:08:50 crc kubenswrapper[5017]: I0129 08:08:50.068706 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4d8g\" (UniqueName: \"kubernetes.io/projected/acef4159-bdb8-462c-92ea-e663e9ab5c0d-kube-api-access-s4d8g\") pod \"cinder-scheduler-0\" (UID: \"acef4159-bdb8-462c-92ea-e663e9ab5c0d\") " pod="openstack/cinder-scheduler-0" Jan 29 08:08:50 crc kubenswrapper[5017]: I0129 08:08:50.068734 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/acef4159-bdb8-462c-92ea-e663e9ab5c0d-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"acef4159-bdb8-462c-92ea-e663e9ab5c0d\") " pod="openstack/cinder-scheduler-0" Jan 29 08:08:50 crc kubenswrapper[5017]: I0129 08:08:50.068848 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/acef4159-bdb8-462c-92ea-e663e9ab5c0d-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"acef4159-bdb8-462c-92ea-e663e9ab5c0d\") " pod="openstack/cinder-scheduler-0" Jan 29 08:08:50 crc kubenswrapper[5017]: I0129 08:08:50.074516 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/acef4159-bdb8-462c-92ea-e663e9ab5c0d-scripts\") pod \"cinder-scheduler-0\" (UID: \"acef4159-bdb8-462c-92ea-e663e9ab5c0d\") " pod="openstack/cinder-scheduler-0" Jan 29 08:08:50 crc kubenswrapper[5017]: I0129 08:08:50.074908 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acef4159-bdb8-462c-92ea-e663e9ab5c0d-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"acef4159-bdb8-462c-92ea-e663e9ab5c0d\") " pod="openstack/cinder-scheduler-0" Jan 29 08:08:50 crc kubenswrapper[5017]: I0129 08:08:50.075034 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/acef4159-bdb8-462c-92ea-e663e9ab5c0d-config-data\") pod \"cinder-scheduler-0\" (UID: \"acef4159-bdb8-462c-92ea-e663e9ab5c0d\") " pod="openstack/cinder-scheduler-0" Jan 29 08:08:50 crc kubenswrapper[5017]: I0129 08:08:50.089988 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/acef4159-bdb8-462c-92ea-e663e9ab5c0d-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"acef4159-bdb8-462c-92ea-e663e9ab5c0d\") " pod="openstack/cinder-scheduler-0" Jan 29 08:08:50 crc kubenswrapper[5017]: I0129 08:08:50.090055 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4d8g\" (UniqueName: \"kubernetes.io/projected/acef4159-bdb8-462c-92ea-e663e9ab5c0d-kube-api-access-s4d8g\") pod \"cinder-scheduler-0\" (UID: \"acef4159-bdb8-462c-92ea-e663e9ab5c0d\") " pod="openstack/cinder-scheduler-0" Jan 29 08:08:50 crc kubenswrapper[5017]: I0129 08:08:50.165348 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 29 08:08:50 crc kubenswrapper[5017]: I0129 08:08:50.190343 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-backup-0" Jan 29 08:08:50 crc kubenswrapper[5017]: I0129 08:08:50.334220 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38d086d2-dbe2-4417-bf5e-0936e22b0eae" path="/var/lib/kubelet/pods/38d086d2-dbe2-4417-bf5e-0936e22b0eae/volumes" Jan 29 08:08:50 crc kubenswrapper[5017]: I0129 08:08:50.696842 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 08:08:50 crc kubenswrapper[5017]: I0129 08:08:50.743742 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"acef4159-bdb8-462c-92ea-e663e9ab5c0d","Type":"ContainerStarted","Data":"d001734599557dff88f24808eef7b0d1843294169fc9c87a7089dc2af0ae7744"} Jan 29 08:08:51 crc kubenswrapper[5017]: I0129 08:08:51.774806 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"acef4159-bdb8-462c-92ea-e663e9ab5c0d","Type":"ContainerStarted","Data":"e37b81cacbe9bf6d4ca645ddd0351b5802316b316e9e2550a9e9a679f4ff5dd9"} Jan 29 08:08:52 crc kubenswrapper[5017]: I0129 08:08:52.794332 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"acef4159-bdb8-462c-92ea-e663e9ab5c0d","Type":"ContainerStarted","Data":"7aabb0058bb03917532ab1c8ff4ffca8f267672c027e1a1eceaaeaae1edf92d4"} Jan 29 08:08:52 crc kubenswrapper[5017]: I0129 08:08:52.829682 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.829650283 podStartE2EDuration="3.829650283s" podCreationTimestamp="2026-01-29 08:08:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:08:52.821595589 +0000 UTC m=+5619.196043199" watchObservedRunningTime="2026-01-29 08:08:52.829650283 +0000 UTC m=+5619.204097893" Jan 29 08:08:55 crc kubenswrapper[5017]: I0129 08:08:55.112047 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 29 08:08:55 crc kubenswrapper[5017]: I0129 08:08:55.166205 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 29 08:08:56 crc kubenswrapper[5017]: I0129 08:08:56.539770 5017 patch_prober.go:28] interesting pod/machine-config-daemon-895pl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 08:08:56 crc kubenswrapper[5017]: I0129 08:08:56.540334 5017 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 08:09:00 crc kubenswrapper[5017]: I0129 08:09:00.366147 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 29 08:09:26 crc kubenswrapper[5017]: I0129 08:09:26.539659 5017 patch_prober.go:28] interesting pod/machine-config-daemon-895pl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
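The patch_prober/prober entries are HTTP liveness probes against http://127.0.0.1:8798/health, with a refused connection counting as a failure. The failures arrive 30 seconds apart (08:08:56 and 08:09:26 above, 08:09:56 below), consistent with a 30s probe period, and the container is only killed after the third, consistent with the default failureThreshold of 3. A minimal probe check in Go; the 1s timeout is an assumption, not taken from the log:

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// probeOnce approximates a single HTTP liveness probe: GET the endpoint
// and treat any transport error or non-2xx status as a failure.
func probeOnce(url string) error {
	client := &http.Client{Timeout: 1 * time.Second}
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. "connect: connection refused", as logged above
	}
	defer resp.Body.Close()
	if resp.StatusCode < 200 || resp.StatusCode >= 300 {
		return fmt.Errorf("unhealthy: %s", resp.Status)
	}
	return nil
}

func main() {
	// The endpoint from the prober output above.
	fmt.Println(probeOnce("http://127.0.0.1:8798/health"))
}
```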
Jan 29 08:09:26 crc kubenswrapper[5017]: I0129 08:09:26.540591 5017 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 08:09:56 crc kubenswrapper[5017]: I0129 08:09:56.539599 5017 patch_prober.go:28] interesting pod/machine-config-daemon-895pl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 08:09:56 crc kubenswrapper[5017]: I0129 08:09:56.540341 5017 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 08:09:56 crc kubenswrapper[5017]: I0129 08:09:56.540395 5017 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-895pl" Jan 29 08:09:56 crc kubenswrapper[5017]: I0129 08:09:56.541509 5017 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"45e4c349dc67bf5910f42f2a3df0c872cf61c8acd759f4f9b895eefc7afaf645"} pod="openshift-machine-config-operator/machine-config-daemon-895pl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 08:09:56 crc kubenswrapper[5017]: I0129 08:09:56.541591 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" containerID="cri-o://45e4c349dc67bf5910f42f2a3df0c872cf61c8acd759f4f9b895eefc7afaf645" gracePeriod=600 Jan 29 08:09:56 crc kubenswrapper[5017]: E0129 08:09:56.670420 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:09:57 crc kubenswrapper[5017]: I0129 08:09:57.483938 5017 generic.go:334] "Generic (PLEG): container finished" podID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerID="45e4c349dc67bf5910f42f2a3df0c872cf61c8acd759f4f9b895eefc7afaf645" exitCode=0 Jan 29 08:09:57 crc kubenswrapper[5017]: I0129 08:09:57.484000 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-895pl" event={"ID":"2672ef63-7861-4c3d-a1b4-03cc9d18f8e2","Type":"ContainerDied","Data":"45e4c349dc67bf5910f42f2a3df0c872cf61c8acd759f4f9b895eefc7afaf645"} Jan 29 08:09:57 crc kubenswrapper[5017]: I0129 08:09:57.484490 5017 scope.go:117] "RemoveContainer" containerID="654247425c9be1ad39bf4a420f5d59cd286d394ddb9732f299ef1d927e039684"
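Once the container is down, restarts are rate-limited by CrashLoopBackOff: the delay doubles per crash until it reaches the cap the pod_workers.go:1301 errors report ("back-off 5m0s"), and each sync attempt below is refused until the back-off window expires. A sketch of the doubling schedule, assuming the kubelet's usual 10s base delay (an assumption; the log only shows the 5m cap):

```go
package main

import (
	"fmt"
	"time"
)

// backoff returns the CrashLoopBackOff delay after a given number of
// restarts: start at an assumed 10s base, double per restart, cap at
// the 5m0s the log reports.
func backoff(restarts int) time.Duration {
	d := 10 * time.Second
	for i := 0; i < restarts; i++ {
		d *= 2
		if d >= 5*time.Minute {
			return 5 * time.Minute
		}
	}
	return d
}

func main() {
	for r := 0; r <= 6; r++ {
		fmt.Println(r, backoff(r)) // 10s 20s 40s 1m20s 2m40s 5m0s 5m0s
	}
}
```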
containerID="654247425c9be1ad39bf4a420f5d59cd286d394ddb9732f299ef1d927e039684" Jan 29 08:09:57 crc kubenswrapper[5017]: I0129 08:09:57.485505 5017 scope.go:117] "RemoveContainer" containerID="45e4c349dc67bf5910f42f2a3df0c872cf61c8acd759f4f9b895eefc7afaf645" Jan 29 08:09:57 crc kubenswrapper[5017]: E0129 08:09:57.485817 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:10:09 crc kubenswrapper[5017]: I0129 08:10:09.317119 5017 scope.go:117] "RemoveContainer" containerID="45e4c349dc67bf5910f42f2a3df0c872cf61c8acd759f4f9b895eefc7afaf645" Jan 29 08:10:09 crc kubenswrapper[5017]: E0129 08:10:09.318129 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:10:23 crc kubenswrapper[5017]: I0129 08:10:23.056126 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-87eb-account-create-update-tkbkw"] Jan 29 08:10:23 crc kubenswrapper[5017]: I0129 08:10:23.065977 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-6ndsf"] Jan 29 08:10:23 crc kubenswrapper[5017]: I0129 08:10:23.080680 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-87eb-account-create-update-tkbkw"] Jan 29 08:10:23 crc kubenswrapper[5017]: I0129 08:10:23.093677 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-6ndsf"] Jan 29 08:10:24 crc kubenswrapper[5017]: I0129 08:10:24.317849 5017 scope.go:117] "RemoveContainer" containerID="45e4c349dc67bf5910f42f2a3df0c872cf61c8acd759f4f9b895eefc7afaf645" Jan 29 08:10:24 crc kubenswrapper[5017]: E0129 08:10:24.318240 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:10:24 crc kubenswrapper[5017]: I0129 08:10:24.334733 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c49b28c-c970-454a-b44b-bce67b8315aa" path="/var/lib/kubelet/pods/3c49b28c-c970-454a-b44b-bce67b8315aa/volumes" Jan 29 08:10:24 crc kubenswrapper[5017]: I0129 08:10:24.335530 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a84b8b13-ac28-4baf-aeae-6e977d8b2654" path="/var/lib/kubelet/pods/a84b8b13-ac28-4baf-aeae-6e977d8b2654/volumes" Jan 29 08:10:24 crc kubenswrapper[5017]: I0129 08:10:24.550283 5017 scope.go:117] "RemoveContainer" containerID="38d8048df02847d1d4200651c39c86e12a0550f6fd5637f63a9942d76824b6d4" Jan 29 08:10:24 crc kubenswrapper[5017]: I0129 08:10:24.583303 5017 scope.go:117] "RemoveContainer" 
containerID="210dcdb05f65c0dc4340b1405c5e856656d2944d4ed9fb4c23b0eb7b5f135499" Jan 29 08:10:29 crc kubenswrapper[5017]: I0129 08:10:29.044293 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-gdn7c"] Jan 29 08:10:29 crc kubenswrapper[5017]: I0129 08:10:29.054470 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-gdn7c"] Jan 29 08:10:30 crc kubenswrapper[5017]: I0129 08:10:30.328477 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6b40d75-4d17-4903-9d35-5f4ccb411b25" path="/var/lib/kubelet/pods/e6b40d75-4d17-4903-9d35-5f4ccb411b25/volumes" Jan 29 08:10:39 crc kubenswrapper[5017]: I0129 08:10:39.217365 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-w64dv"] Jan 29 08:10:39 crc kubenswrapper[5017]: I0129 08:10:39.221174 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-w64dv" Jan 29 08:10:39 crc kubenswrapper[5017]: I0129 08:10:39.223699 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-4mx2z" Jan 29 08:10:39 crc kubenswrapper[5017]: I0129 08:10:39.224599 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 29 08:10:39 crc kubenswrapper[5017]: I0129 08:10:39.228327 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-5mc4s"] Jan 29 08:10:39 crc kubenswrapper[5017]: I0129 08:10:39.235509 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-5mc4s" Jan 29 08:10:39 crc kubenswrapper[5017]: I0129 08:10:39.241926 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-w64dv"] Jan 29 08:10:39 crc kubenswrapper[5017]: I0129 08:10:39.256660 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-5mc4s"] Jan 29 08:10:39 crc kubenswrapper[5017]: I0129 08:10:39.271772 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/2b894593-0963-4476-8329-daea9c22707a-etc-ovs\") pod \"ovn-controller-ovs-5mc4s\" (UID: \"2b894593-0963-4476-8329-daea9c22707a\") " pod="openstack/ovn-controller-ovs-5mc4s" Jan 29 08:10:39 crc kubenswrapper[5017]: I0129 08:10:39.271886 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/2b894593-0963-4476-8329-daea9c22707a-var-lib\") pod \"ovn-controller-ovs-5mc4s\" (UID: \"2b894593-0963-4476-8329-daea9c22707a\") " pod="openstack/ovn-controller-ovs-5mc4s" Jan 29 08:10:39 crc kubenswrapper[5017]: I0129 08:10:39.271912 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/2b894593-0963-4476-8329-daea9c22707a-var-log\") pod \"ovn-controller-ovs-5mc4s\" (UID: \"2b894593-0963-4476-8329-daea9c22707a\") " pod="openstack/ovn-controller-ovs-5mc4s" Jan 29 08:10:39 crc kubenswrapper[5017]: I0129 08:10:39.271932 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bstsf\" (UniqueName: \"kubernetes.io/projected/2b894593-0963-4476-8329-daea9c22707a-kube-api-access-bstsf\") pod \"ovn-controller-ovs-5mc4s\" (UID: \"2b894593-0963-4476-8329-daea9c22707a\") " 
pod="openstack/ovn-controller-ovs-5mc4s" Jan 29 08:10:39 crc kubenswrapper[5017]: I0129 08:10:39.272011 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1000feb0-a866-42c2-974e-cd95329589e2-scripts\") pod \"ovn-controller-w64dv\" (UID: \"1000feb0-a866-42c2-974e-cd95329589e2\") " pod="openstack/ovn-controller-w64dv" Jan 29 08:10:39 crc kubenswrapper[5017]: I0129 08:10:39.272040 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cg8mg\" (UniqueName: \"kubernetes.io/projected/1000feb0-a866-42c2-974e-cd95329589e2-kube-api-access-cg8mg\") pod \"ovn-controller-w64dv\" (UID: \"1000feb0-a866-42c2-974e-cd95329589e2\") " pod="openstack/ovn-controller-w64dv" Jan 29 08:10:39 crc kubenswrapper[5017]: I0129 08:10:39.272087 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/1000feb0-a866-42c2-974e-cd95329589e2-var-log-ovn\") pod \"ovn-controller-w64dv\" (UID: \"1000feb0-a866-42c2-974e-cd95329589e2\") " pod="openstack/ovn-controller-w64dv" Jan 29 08:10:39 crc kubenswrapper[5017]: I0129 08:10:39.272132 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2b894593-0963-4476-8329-daea9c22707a-var-run\") pod \"ovn-controller-ovs-5mc4s\" (UID: \"2b894593-0963-4476-8329-daea9c22707a\") " pod="openstack/ovn-controller-ovs-5mc4s" Jan 29 08:10:39 crc kubenswrapper[5017]: I0129 08:10:39.272164 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2b894593-0963-4476-8329-daea9c22707a-scripts\") pod \"ovn-controller-ovs-5mc4s\" (UID: \"2b894593-0963-4476-8329-daea9c22707a\") " pod="openstack/ovn-controller-ovs-5mc4s" Jan 29 08:10:39 crc kubenswrapper[5017]: I0129 08:10:39.272192 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1000feb0-a866-42c2-974e-cd95329589e2-var-run\") pod \"ovn-controller-w64dv\" (UID: \"1000feb0-a866-42c2-974e-cd95329589e2\") " pod="openstack/ovn-controller-w64dv" Jan 29 08:10:39 crc kubenswrapper[5017]: I0129 08:10:39.272221 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/1000feb0-a866-42c2-974e-cd95329589e2-var-run-ovn\") pod \"ovn-controller-w64dv\" (UID: \"1000feb0-a866-42c2-974e-cd95329589e2\") " pod="openstack/ovn-controller-w64dv" Jan 29 08:10:39 crc kubenswrapper[5017]: I0129 08:10:39.317282 5017 scope.go:117] "RemoveContainer" containerID="45e4c349dc67bf5910f42f2a3df0c872cf61c8acd759f4f9b895eefc7afaf645" Jan 29 08:10:39 crc kubenswrapper[5017]: E0129 08:10:39.317574 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:10:39 crc kubenswrapper[5017]: I0129 08:10:39.374521 5017 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/1000feb0-a866-42c2-974e-cd95329589e2-var-run-ovn\") pod \"ovn-controller-w64dv\" (UID: \"1000feb0-a866-42c2-974e-cd95329589e2\") " pod="openstack/ovn-controller-w64dv" Jan 29 08:10:39 crc kubenswrapper[5017]: I0129 08:10:39.374891 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/2b894593-0963-4476-8329-daea9c22707a-etc-ovs\") pod \"ovn-controller-ovs-5mc4s\" (UID: \"2b894593-0963-4476-8329-daea9c22707a\") " pod="openstack/ovn-controller-ovs-5mc4s" Jan 29 08:10:39 crc kubenswrapper[5017]: I0129 08:10:39.375052 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/2b894593-0963-4476-8329-daea9c22707a-var-lib\") pod \"ovn-controller-ovs-5mc4s\" (UID: \"2b894593-0963-4476-8329-daea9c22707a\") " pod="openstack/ovn-controller-ovs-5mc4s" Jan 29 08:10:39 crc kubenswrapper[5017]: I0129 08:10:39.375170 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/2b894593-0963-4476-8329-daea9c22707a-var-log\") pod \"ovn-controller-ovs-5mc4s\" (UID: \"2b894593-0963-4476-8329-daea9c22707a\") " pod="openstack/ovn-controller-ovs-5mc4s" Jan 29 08:10:39 crc kubenswrapper[5017]: I0129 08:10:39.375260 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bstsf\" (UniqueName: \"kubernetes.io/projected/2b894593-0963-4476-8329-daea9c22707a-kube-api-access-bstsf\") pod \"ovn-controller-ovs-5mc4s\" (UID: \"2b894593-0963-4476-8329-daea9c22707a\") " pod="openstack/ovn-controller-ovs-5mc4s" Jan 29 08:10:39 crc kubenswrapper[5017]: I0129 08:10:39.375410 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1000feb0-a866-42c2-974e-cd95329589e2-scripts\") pod \"ovn-controller-w64dv\" (UID: \"1000feb0-a866-42c2-974e-cd95329589e2\") " pod="openstack/ovn-controller-w64dv" Jan 29 08:10:39 crc kubenswrapper[5017]: I0129 08:10:39.375548 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cg8mg\" (UniqueName: \"kubernetes.io/projected/1000feb0-a866-42c2-974e-cd95329589e2-kube-api-access-cg8mg\") pod \"ovn-controller-w64dv\" (UID: \"1000feb0-a866-42c2-974e-cd95329589e2\") " pod="openstack/ovn-controller-w64dv" Jan 29 08:10:39 crc kubenswrapper[5017]: I0129 08:10:39.375671 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/1000feb0-a866-42c2-974e-cd95329589e2-var-log-ovn\") pod \"ovn-controller-w64dv\" (UID: \"1000feb0-a866-42c2-974e-cd95329589e2\") " pod="openstack/ovn-controller-w64dv" Jan 29 08:10:39 crc kubenswrapper[5017]: I0129 08:10:39.375822 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2b894593-0963-4476-8329-daea9c22707a-var-run\") pod \"ovn-controller-ovs-5mc4s\" (UID: \"2b894593-0963-4476-8329-daea9c22707a\") " pod="openstack/ovn-controller-ovs-5mc4s" Jan 29 08:10:39 crc kubenswrapper[5017]: I0129 08:10:39.375930 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2b894593-0963-4476-8329-daea9c22707a-scripts\") pod \"ovn-controller-ovs-5mc4s\" (UID: \"2b894593-0963-4476-8329-daea9c22707a\") " 
pod="openstack/ovn-controller-ovs-5mc4s" Jan 29 08:10:39 crc kubenswrapper[5017]: I0129 08:10:39.376108 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1000feb0-a866-42c2-974e-cd95329589e2-var-run\") pod \"ovn-controller-w64dv\" (UID: \"1000feb0-a866-42c2-974e-cd95329589e2\") " pod="openstack/ovn-controller-w64dv" Jan 29 08:10:39 crc kubenswrapper[5017]: I0129 08:10:39.376673 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1000feb0-a866-42c2-974e-cd95329589e2-var-run\") pod \"ovn-controller-w64dv\" (UID: \"1000feb0-a866-42c2-974e-cd95329589e2\") " pod="openstack/ovn-controller-w64dv" Jan 29 08:10:39 crc kubenswrapper[5017]: I0129 08:10:39.376913 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/1000feb0-a866-42c2-974e-cd95329589e2-var-run-ovn\") pod \"ovn-controller-w64dv\" (UID: \"1000feb0-a866-42c2-974e-cd95329589e2\") " pod="openstack/ovn-controller-w64dv" Jan 29 08:10:39 crc kubenswrapper[5017]: I0129 08:10:39.377059 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/2b894593-0963-4476-8329-daea9c22707a-etc-ovs\") pod \"ovn-controller-ovs-5mc4s\" (UID: \"2b894593-0963-4476-8329-daea9c22707a\") " pod="openstack/ovn-controller-ovs-5mc4s" Jan 29 08:10:39 crc kubenswrapper[5017]: I0129 08:10:39.377245 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/1000feb0-a866-42c2-974e-cd95329589e2-var-log-ovn\") pod \"ovn-controller-w64dv\" (UID: \"1000feb0-a866-42c2-974e-cd95329589e2\") " pod="openstack/ovn-controller-w64dv" Jan 29 08:10:39 crc kubenswrapper[5017]: I0129 08:10:39.377269 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/2b894593-0963-4476-8329-daea9c22707a-var-log\") pod \"ovn-controller-ovs-5mc4s\" (UID: \"2b894593-0963-4476-8329-daea9c22707a\") " pod="openstack/ovn-controller-ovs-5mc4s" Jan 29 08:10:39 crc kubenswrapper[5017]: I0129 08:10:39.377322 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/2b894593-0963-4476-8329-daea9c22707a-var-lib\") pod \"ovn-controller-ovs-5mc4s\" (UID: \"2b894593-0963-4476-8329-daea9c22707a\") " pod="openstack/ovn-controller-ovs-5mc4s" Jan 29 08:10:39 crc kubenswrapper[5017]: I0129 08:10:39.377935 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2b894593-0963-4476-8329-daea9c22707a-var-run\") pod \"ovn-controller-ovs-5mc4s\" (UID: \"2b894593-0963-4476-8329-daea9c22707a\") " pod="openstack/ovn-controller-ovs-5mc4s" Jan 29 08:10:39 crc kubenswrapper[5017]: I0129 08:10:39.379456 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1000feb0-a866-42c2-974e-cd95329589e2-scripts\") pod \"ovn-controller-w64dv\" (UID: \"1000feb0-a866-42c2-974e-cd95329589e2\") " pod="openstack/ovn-controller-w64dv" Jan 29 08:10:39 crc kubenswrapper[5017]: I0129 08:10:39.379758 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2b894593-0963-4476-8329-daea9c22707a-scripts\") pod \"ovn-controller-ovs-5mc4s\" (UID: 
\"2b894593-0963-4476-8329-daea9c22707a\") " pod="openstack/ovn-controller-ovs-5mc4s" Jan 29 08:10:39 crc kubenswrapper[5017]: I0129 08:10:39.400228 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bstsf\" (UniqueName: \"kubernetes.io/projected/2b894593-0963-4476-8329-daea9c22707a-kube-api-access-bstsf\") pod \"ovn-controller-ovs-5mc4s\" (UID: \"2b894593-0963-4476-8329-daea9c22707a\") " pod="openstack/ovn-controller-ovs-5mc4s" Jan 29 08:10:39 crc kubenswrapper[5017]: I0129 08:10:39.400943 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cg8mg\" (UniqueName: \"kubernetes.io/projected/1000feb0-a866-42c2-974e-cd95329589e2-kube-api-access-cg8mg\") pod \"ovn-controller-w64dv\" (UID: \"1000feb0-a866-42c2-974e-cd95329589e2\") " pod="openstack/ovn-controller-w64dv" Jan 29 08:10:39 crc kubenswrapper[5017]: I0129 08:10:39.548487 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-w64dv" Jan 29 08:10:39 crc kubenswrapper[5017]: I0129 08:10:39.571584 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-5mc4s" Jan 29 08:10:40 crc kubenswrapper[5017]: I0129 08:10:40.101781 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-w64dv"] Jan 29 08:10:40 crc kubenswrapper[5017]: I0129 08:10:40.545026 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-5mc4s"] Jan 29 08:10:40 crc kubenswrapper[5017]: I0129 08:10:40.985482 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-5mc4s" event={"ID":"2b894593-0963-4476-8329-daea9c22707a","Type":"ContainerStarted","Data":"bd4aa749685be39af959e72c1044f25d360c244a83a3e35016e58d32851d26d1"} Jan 29 08:10:40 crc kubenswrapper[5017]: I0129 08:10:40.986052 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-5mc4s" event={"ID":"2b894593-0963-4476-8329-daea9c22707a","Type":"ContainerStarted","Data":"ea3b231115812fa5dfc400051692c49118250845c36ef566bc376bfa40661422"} Jan 29 08:10:40 crc kubenswrapper[5017]: I0129 08:10:40.999004 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-w64dv" event={"ID":"1000feb0-a866-42c2-974e-cd95329589e2","Type":"ContainerStarted","Data":"288e2da0bd3d20846ee741df3e98a014b5999ba23ecf2faa700fea4ff0678ce3"} Jan 29 08:10:40 crc kubenswrapper[5017]: I0129 08:10:40.999065 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-w64dv" event={"ID":"1000feb0-a866-42c2-974e-cd95329589e2","Type":"ContainerStarted","Data":"9e98bb4aaf96539229a0498302bb4467ec48c623000ca4eacef827484b499d29"} Jan 29 08:10:40 crc kubenswrapper[5017]: I0129 08:10:40.999183 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-w64dv" Jan 29 08:10:41 crc kubenswrapper[5017]: I0129 08:10:41.032603 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-w64dv" podStartSLOduration=2.032568836 podStartE2EDuration="2.032568836s" podCreationTimestamp="2026-01-29 08:10:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:10:41.024671906 +0000 UTC m=+5727.399119516" watchObservedRunningTime="2026-01-29 08:10:41.032568836 +0000 UTC m=+5727.407016446" Jan 29 08:10:41 crc kubenswrapper[5017]: I0129 08:10:41.874759 
5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-6xjl6"] Jan 29 08:10:41 crc kubenswrapper[5017]: I0129 08:10:41.880834 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-6xjl6" Jan 29 08:10:41 crc kubenswrapper[5017]: I0129 08:10:41.883201 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 29 08:10:41 crc kubenswrapper[5017]: I0129 08:10:41.933634 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-6xjl6"] Jan 29 08:10:41 crc kubenswrapper[5017]: I0129 08:10:41.941367 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/fec6292f-122b-4f11-a3c0-a4d0bdb0303f-ovs-rundir\") pod \"ovn-controller-metrics-6xjl6\" (UID: \"fec6292f-122b-4f11-a3c0-a4d0bdb0303f\") " pod="openstack/ovn-controller-metrics-6xjl6" Jan 29 08:10:41 crc kubenswrapper[5017]: I0129 08:10:41.941432 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/fec6292f-122b-4f11-a3c0-a4d0bdb0303f-ovn-rundir\") pod \"ovn-controller-metrics-6xjl6\" (UID: \"fec6292f-122b-4f11-a3c0-a4d0bdb0303f\") " pod="openstack/ovn-controller-metrics-6xjl6" Jan 29 08:10:41 crc kubenswrapper[5017]: I0129 08:10:41.941496 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xct5d\" (UniqueName: \"kubernetes.io/projected/fec6292f-122b-4f11-a3c0-a4d0bdb0303f-kube-api-access-xct5d\") pod \"ovn-controller-metrics-6xjl6\" (UID: \"fec6292f-122b-4f11-a3c0-a4d0bdb0303f\") " pod="openstack/ovn-controller-metrics-6xjl6" Jan 29 08:10:41 crc kubenswrapper[5017]: I0129 08:10:41.941578 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fec6292f-122b-4f11-a3c0-a4d0bdb0303f-config\") pod \"ovn-controller-metrics-6xjl6\" (UID: \"fec6292f-122b-4f11-a3c0-a4d0bdb0303f\") " pod="openstack/ovn-controller-metrics-6xjl6" Jan 29 08:10:42 crc kubenswrapper[5017]: I0129 08:10:42.017511 5017 generic.go:334] "Generic (PLEG): container finished" podID="2b894593-0963-4476-8329-daea9c22707a" containerID="bd4aa749685be39af959e72c1044f25d360c244a83a3e35016e58d32851d26d1" exitCode=0 Jan 29 08:10:42 crc kubenswrapper[5017]: I0129 08:10:42.017693 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-5mc4s" event={"ID":"2b894593-0963-4476-8329-daea9c22707a","Type":"ContainerDied","Data":"bd4aa749685be39af959e72c1044f25d360c244a83a3e35016e58d32851d26d1"} Jan 29 08:10:42 crc kubenswrapper[5017]: I0129 08:10:42.046401 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/fec6292f-122b-4f11-a3c0-a4d0bdb0303f-ovs-rundir\") pod \"ovn-controller-metrics-6xjl6\" (UID: \"fec6292f-122b-4f11-a3c0-a4d0bdb0303f\") " pod="openstack/ovn-controller-metrics-6xjl6" Jan 29 08:10:42 crc kubenswrapper[5017]: I0129 08:10:42.046482 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/fec6292f-122b-4f11-a3c0-a4d0bdb0303f-ovn-rundir\") pod \"ovn-controller-metrics-6xjl6\" (UID: \"fec6292f-122b-4f11-a3c0-a4d0bdb0303f\") " 
pod="openstack/ovn-controller-metrics-6xjl6" Jan 29 08:10:42 crc kubenswrapper[5017]: I0129 08:10:42.046556 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xct5d\" (UniqueName: \"kubernetes.io/projected/fec6292f-122b-4f11-a3c0-a4d0bdb0303f-kube-api-access-xct5d\") pod \"ovn-controller-metrics-6xjl6\" (UID: \"fec6292f-122b-4f11-a3c0-a4d0bdb0303f\") " pod="openstack/ovn-controller-metrics-6xjl6" Jan 29 08:10:42 crc kubenswrapper[5017]: I0129 08:10:42.046644 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fec6292f-122b-4f11-a3c0-a4d0bdb0303f-config\") pod \"ovn-controller-metrics-6xjl6\" (UID: \"fec6292f-122b-4f11-a3c0-a4d0bdb0303f\") " pod="openstack/ovn-controller-metrics-6xjl6" Jan 29 08:10:42 crc kubenswrapper[5017]: I0129 08:10:42.047335 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/fec6292f-122b-4f11-a3c0-a4d0bdb0303f-ovs-rundir\") pod \"ovn-controller-metrics-6xjl6\" (UID: \"fec6292f-122b-4f11-a3c0-a4d0bdb0303f\") " pod="openstack/ovn-controller-metrics-6xjl6" Jan 29 08:10:42 crc kubenswrapper[5017]: I0129 08:10:42.047348 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/fec6292f-122b-4f11-a3c0-a4d0bdb0303f-ovn-rundir\") pod \"ovn-controller-metrics-6xjl6\" (UID: \"fec6292f-122b-4f11-a3c0-a4d0bdb0303f\") " pod="openstack/ovn-controller-metrics-6xjl6" Jan 29 08:10:42 crc kubenswrapper[5017]: I0129 08:10:42.047763 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fec6292f-122b-4f11-a3c0-a4d0bdb0303f-config\") pod \"ovn-controller-metrics-6xjl6\" (UID: \"fec6292f-122b-4f11-a3c0-a4d0bdb0303f\") " pod="openstack/ovn-controller-metrics-6xjl6" Jan 29 08:10:42 crc kubenswrapper[5017]: I0129 08:10:42.071247 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xct5d\" (UniqueName: \"kubernetes.io/projected/fec6292f-122b-4f11-a3c0-a4d0bdb0303f-kube-api-access-xct5d\") pod \"ovn-controller-metrics-6xjl6\" (UID: \"fec6292f-122b-4f11-a3c0-a4d0bdb0303f\") " pod="openstack/ovn-controller-metrics-6xjl6" Jan 29 08:10:42 crc kubenswrapper[5017]: I0129 08:10:42.269850 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-6xjl6" Jan 29 08:10:42 crc kubenswrapper[5017]: I0129 08:10:42.748485 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-6xjl6"] Jan 29 08:10:42 crc kubenswrapper[5017]: W0129 08:10:42.749626 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfec6292f_122b_4f11_a3c0_a4d0bdb0303f.slice/crio-79f23afbdb93fd16d2f6694bfc321bf20858fd9c9bb8d77f442123a9f02c0fbe WatchSource:0}: Error finding container 79f23afbdb93fd16d2f6694bfc321bf20858fd9c9bb8d77f442123a9f02c0fbe: Status 404 returned error can't find the container with id 79f23afbdb93fd16d2f6694bfc321bf20858fd9c9bb8d77f442123a9f02c0fbe Jan 29 08:10:43 crc kubenswrapper[5017]: I0129 08:10:43.035934 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-6xjl6" event={"ID":"fec6292f-122b-4f11-a3c0-a4d0bdb0303f","Type":"ContainerStarted","Data":"8ee64d47ed1ed54f64a8a3c9fce5810c05b02b4ed28de4d95a56e7b7ed6dbd86"} Jan 29 08:10:43 crc kubenswrapper[5017]: I0129 08:10:43.036034 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-6xjl6" event={"ID":"fec6292f-122b-4f11-a3c0-a4d0bdb0303f","Type":"ContainerStarted","Data":"79f23afbdb93fd16d2f6694bfc321bf20858fd9c9bb8d77f442123a9f02c0fbe"} Jan 29 08:10:43 crc kubenswrapper[5017]: I0129 08:10:43.042242 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-5mc4s" event={"ID":"2b894593-0963-4476-8329-daea9c22707a","Type":"ContainerStarted","Data":"f39c593d2c370928f46d973c79423ac97b3d5497e7b8768839cc128d32da003b"} Jan 29 08:10:43 crc kubenswrapper[5017]: I0129 08:10:43.042286 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-5mc4s" event={"ID":"2b894593-0963-4476-8329-daea9c22707a","Type":"ContainerStarted","Data":"36facba5110eed8031aa48c75085b5a94057263e2fdbe86e6d09ad22ae937b91"} Jan 29 08:10:43 crc kubenswrapper[5017]: I0129 08:10:43.042914 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-5mc4s" Jan 29 08:10:43 crc kubenswrapper[5017]: I0129 08:10:43.042947 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-5mc4s" Jan 29 08:10:43 crc kubenswrapper[5017]: I0129 08:10:43.098620 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-6xjl6" podStartSLOduration=2.098589078 podStartE2EDuration="2.098589078s" podCreationTimestamp="2026-01-29 08:10:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:10:43.05743073 +0000 UTC m=+5729.431878350" watchObservedRunningTime="2026-01-29 08:10:43.098589078 +0000 UTC m=+5729.473036698" Jan 29 08:10:43 crc kubenswrapper[5017]: I0129 08:10:43.101276 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-5mc4s" podStartSLOduration=4.101261672 podStartE2EDuration="4.101261672s" podCreationTimestamp="2026-01-29 08:10:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:10:43.085821641 +0000 UTC m=+5729.460269261" watchObservedRunningTime="2026-01-29 08:10:43.101261672 +0000 UTC m=+5729.475709292" Jan 29 08:10:44 crc kubenswrapper[5017]: I0129 
08:10:44.038667 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-x9j8w"] Jan 29 08:10:44 crc kubenswrapper[5017]: I0129 08:10:44.054366 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-x9j8w"] Jan 29 08:10:44 crc kubenswrapper[5017]: I0129 08:10:44.329885 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d74e8c79-a281-42a3-b709-3045966eea64" path="/var/lib/kubelet/pods/d74e8c79-a281-42a3-b709-3045966eea64/volumes" Jan 29 08:10:51 crc kubenswrapper[5017]: I0129 08:10:51.317776 5017 scope.go:117] "RemoveContainer" containerID="45e4c349dc67bf5910f42f2a3df0c872cf61c8acd759f4f9b895eefc7afaf645" Jan 29 08:10:51 crc kubenswrapper[5017]: E0129 08:10:51.319107 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:10:51 crc kubenswrapper[5017]: I0129 08:10:51.375563 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-db-create-prg9k"] Jan 29 08:10:51 crc kubenswrapper[5017]: I0129 08:10:51.377115 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-create-prg9k" Jan 29 08:10:51 crc kubenswrapper[5017]: I0129 08:10:51.387864 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-db-create-prg9k"] Jan 29 08:10:51 crc kubenswrapper[5017]: I0129 08:10:51.455031 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac0f87d4-8e6f-4b13-a018-66f3317394b1-operator-scripts\") pod \"octavia-db-create-prg9k\" (UID: \"ac0f87d4-8e6f-4b13-a018-66f3317394b1\") " pod="openstack/octavia-db-create-prg9k" Jan 29 08:10:51 crc kubenswrapper[5017]: I0129 08:10:51.455591 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ch2c\" (UniqueName: \"kubernetes.io/projected/ac0f87d4-8e6f-4b13-a018-66f3317394b1-kube-api-access-9ch2c\") pod \"octavia-db-create-prg9k\" (UID: \"ac0f87d4-8e6f-4b13-a018-66f3317394b1\") " pod="openstack/octavia-db-create-prg9k" Jan 29 08:10:51 crc kubenswrapper[5017]: I0129 08:10:51.557034 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ch2c\" (UniqueName: \"kubernetes.io/projected/ac0f87d4-8e6f-4b13-a018-66f3317394b1-kube-api-access-9ch2c\") pod \"octavia-db-create-prg9k\" (UID: \"ac0f87d4-8e6f-4b13-a018-66f3317394b1\") " pod="openstack/octavia-db-create-prg9k" Jan 29 08:10:51 crc kubenswrapper[5017]: I0129 08:10:51.557154 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac0f87d4-8e6f-4b13-a018-66f3317394b1-operator-scripts\") pod \"octavia-db-create-prg9k\" (UID: \"ac0f87d4-8e6f-4b13-a018-66f3317394b1\") " pod="openstack/octavia-db-create-prg9k" Jan 29 08:10:51 crc kubenswrapper[5017]: I0129 08:10:51.558066 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac0f87d4-8e6f-4b13-a018-66f3317394b1-operator-scripts\") pod 
\"octavia-db-create-prg9k\" (UID: \"ac0f87d4-8e6f-4b13-a018-66f3317394b1\") " pod="openstack/octavia-db-create-prg9k" Jan 29 08:10:51 crc kubenswrapper[5017]: I0129 08:10:51.596576 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ch2c\" (UniqueName: \"kubernetes.io/projected/ac0f87d4-8e6f-4b13-a018-66f3317394b1-kube-api-access-9ch2c\") pod \"octavia-db-create-prg9k\" (UID: \"ac0f87d4-8e6f-4b13-a018-66f3317394b1\") " pod="openstack/octavia-db-create-prg9k" Jan 29 08:10:51 crc kubenswrapper[5017]: I0129 08:10:51.722399 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-create-prg9k" Jan 29 08:10:52 crc kubenswrapper[5017]: I0129 08:10:52.251003 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-db-create-prg9k"] Jan 29 08:10:52 crc kubenswrapper[5017]: I0129 08:10:52.735371 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-7614-account-create-update-7jpwv"] Jan 29 08:10:52 crc kubenswrapper[5017]: I0129 08:10:52.737378 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-7614-account-create-update-7jpwv" Jan 29 08:10:52 crc kubenswrapper[5017]: I0129 08:10:52.741842 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-db-secret" Jan 29 08:10:52 crc kubenswrapper[5017]: I0129 08:10:52.752643 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-7614-account-create-update-7jpwv"] Jan 29 08:10:52 crc kubenswrapper[5017]: I0129 08:10:52.889842 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6ddccbfd-933b-453c-9c4c-091c2404f994-operator-scripts\") pod \"octavia-7614-account-create-update-7jpwv\" (UID: \"6ddccbfd-933b-453c-9c4c-091c2404f994\") " pod="openstack/octavia-7614-account-create-update-7jpwv" Jan 29 08:10:52 crc kubenswrapper[5017]: I0129 08:10:52.889901 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbxc2\" (UniqueName: \"kubernetes.io/projected/6ddccbfd-933b-453c-9c4c-091c2404f994-kube-api-access-vbxc2\") pod \"octavia-7614-account-create-update-7jpwv\" (UID: \"6ddccbfd-933b-453c-9c4c-091c2404f994\") " pod="openstack/octavia-7614-account-create-update-7jpwv" Jan 29 08:10:52 crc kubenswrapper[5017]: I0129 08:10:52.992373 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6ddccbfd-933b-453c-9c4c-091c2404f994-operator-scripts\") pod \"octavia-7614-account-create-update-7jpwv\" (UID: \"6ddccbfd-933b-453c-9c4c-091c2404f994\") " pod="openstack/octavia-7614-account-create-update-7jpwv" Jan 29 08:10:52 crc kubenswrapper[5017]: I0129 08:10:52.992424 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbxc2\" (UniqueName: \"kubernetes.io/projected/6ddccbfd-933b-453c-9c4c-091c2404f994-kube-api-access-vbxc2\") pod \"octavia-7614-account-create-update-7jpwv\" (UID: \"6ddccbfd-933b-453c-9c4c-091c2404f994\") " pod="openstack/octavia-7614-account-create-update-7jpwv" Jan 29 08:10:52 crc kubenswrapper[5017]: I0129 08:10:52.993564 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6ddccbfd-933b-453c-9c4c-091c2404f994-operator-scripts\") pod 
\"octavia-7614-account-create-update-7jpwv\" (UID: \"6ddccbfd-933b-453c-9c4c-091c2404f994\") " pod="openstack/octavia-7614-account-create-update-7jpwv" Jan 29 08:10:53 crc kubenswrapper[5017]: I0129 08:10:53.017908 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbxc2\" (UniqueName: \"kubernetes.io/projected/6ddccbfd-933b-453c-9c4c-091c2404f994-kube-api-access-vbxc2\") pod \"octavia-7614-account-create-update-7jpwv\" (UID: \"6ddccbfd-933b-453c-9c4c-091c2404f994\") " pod="openstack/octavia-7614-account-create-update-7jpwv" Jan 29 08:10:53 crc kubenswrapper[5017]: I0129 08:10:53.057526 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-7614-account-create-update-7jpwv" Jan 29 08:10:53 crc kubenswrapper[5017]: I0129 08:10:53.143663 5017 generic.go:334] "Generic (PLEG): container finished" podID="ac0f87d4-8e6f-4b13-a018-66f3317394b1" containerID="0284053e0bcbf9c733af4caf2f2bcefcef7b91bfe8689ea43a82bb38362bd296" exitCode=0 Jan 29 08:10:53 crc kubenswrapper[5017]: I0129 08:10:53.143761 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-create-prg9k" event={"ID":"ac0f87d4-8e6f-4b13-a018-66f3317394b1","Type":"ContainerDied","Data":"0284053e0bcbf9c733af4caf2f2bcefcef7b91bfe8689ea43a82bb38362bd296"} Jan 29 08:10:53 crc kubenswrapper[5017]: I0129 08:10:53.143805 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-create-prg9k" event={"ID":"ac0f87d4-8e6f-4b13-a018-66f3317394b1","Type":"ContainerStarted","Data":"10572dd6ab35342362bc50e6fd4d7d04aee90cb6dd5bc36203ae07535b404000"} Jan 29 08:10:53 crc kubenswrapper[5017]: I0129 08:10:53.502600 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-7614-account-create-update-7jpwv"] Jan 29 08:10:54 crc kubenswrapper[5017]: I0129 08:10:54.163169 5017 generic.go:334] "Generic (PLEG): container finished" podID="6ddccbfd-933b-453c-9c4c-091c2404f994" containerID="c4d2e5b9e1727f5c04e990b568b63b31e5b187b33b553df593fd911de1babc42" exitCode=0 Jan 29 08:10:54 crc kubenswrapper[5017]: I0129 08:10:54.163242 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-7614-account-create-update-7jpwv" event={"ID":"6ddccbfd-933b-453c-9c4c-091c2404f994","Type":"ContainerDied","Data":"c4d2e5b9e1727f5c04e990b568b63b31e5b187b33b553df593fd911de1babc42"} Jan 29 08:10:54 crc kubenswrapper[5017]: I0129 08:10:54.163669 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-7614-account-create-update-7jpwv" event={"ID":"6ddccbfd-933b-453c-9c4c-091c2404f994","Type":"ContainerStarted","Data":"c36c6378e240a5f9eca900e0eec68d26f8dc26a422d6d43001113dca2daa7fc7"} Jan 29 08:10:54 crc kubenswrapper[5017]: I0129 08:10:54.569252 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-db-create-prg9k" Jan 29 08:10:54 crc kubenswrapper[5017]: I0129 08:10:54.734048 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9ch2c\" (UniqueName: \"kubernetes.io/projected/ac0f87d4-8e6f-4b13-a018-66f3317394b1-kube-api-access-9ch2c\") pod \"ac0f87d4-8e6f-4b13-a018-66f3317394b1\" (UID: \"ac0f87d4-8e6f-4b13-a018-66f3317394b1\") " Jan 29 08:10:54 crc kubenswrapper[5017]: I0129 08:10:54.734761 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac0f87d4-8e6f-4b13-a018-66f3317394b1-operator-scripts\") pod \"ac0f87d4-8e6f-4b13-a018-66f3317394b1\" (UID: \"ac0f87d4-8e6f-4b13-a018-66f3317394b1\") " Jan 29 08:10:54 crc kubenswrapper[5017]: I0129 08:10:54.735291 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac0f87d4-8e6f-4b13-a018-66f3317394b1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ac0f87d4-8e6f-4b13-a018-66f3317394b1" (UID: "ac0f87d4-8e6f-4b13-a018-66f3317394b1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:10:54 crc kubenswrapper[5017]: I0129 08:10:54.735554 5017 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac0f87d4-8e6f-4b13-a018-66f3317394b1-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 08:10:54 crc kubenswrapper[5017]: I0129 08:10:54.740390 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac0f87d4-8e6f-4b13-a018-66f3317394b1-kube-api-access-9ch2c" (OuterVolumeSpecName: "kube-api-access-9ch2c") pod "ac0f87d4-8e6f-4b13-a018-66f3317394b1" (UID: "ac0f87d4-8e6f-4b13-a018-66f3317394b1"). InnerVolumeSpecName "kube-api-access-9ch2c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:10:54 crc kubenswrapper[5017]: I0129 08:10:54.838272 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9ch2c\" (UniqueName: \"kubernetes.io/projected/ac0f87d4-8e6f-4b13-a018-66f3317394b1-kube-api-access-9ch2c\") on node \"crc\" DevicePath \"\"" Jan 29 08:10:55 crc kubenswrapper[5017]: I0129 08:10:55.175403 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-create-prg9k" event={"ID":"ac0f87d4-8e6f-4b13-a018-66f3317394b1","Type":"ContainerDied","Data":"10572dd6ab35342362bc50e6fd4d7d04aee90cb6dd5bc36203ae07535b404000"} Jan 29 08:10:55 crc kubenswrapper[5017]: I0129 08:10:55.175485 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="10572dd6ab35342362bc50e6fd4d7d04aee90cb6dd5bc36203ae07535b404000" Jan 29 08:10:55 crc kubenswrapper[5017]: I0129 08:10:55.175424 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-create-prg9k" Jan 29 08:10:55 crc kubenswrapper[5017]: I0129 08:10:55.583664 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-7614-account-create-update-7jpwv" Jan 29 08:10:55 crc kubenswrapper[5017]: I0129 08:10:55.658687 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vbxc2\" (UniqueName: \"kubernetes.io/projected/6ddccbfd-933b-453c-9c4c-091c2404f994-kube-api-access-vbxc2\") pod \"6ddccbfd-933b-453c-9c4c-091c2404f994\" (UID: \"6ddccbfd-933b-453c-9c4c-091c2404f994\") " Jan 29 08:10:55 crc kubenswrapper[5017]: I0129 08:10:55.658814 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6ddccbfd-933b-453c-9c4c-091c2404f994-operator-scripts\") pod \"6ddccbfd-933b-453c-9c4c-091c2404f994\" (UID: \"6ddccbfd-933b-453c-9c4c-091c2404f994\") " Jan 29 08:10:55 crc kubenswrapper[5017]: I0129 08:10:55.660192 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ddccbfd-933b-453c-9c4c-091c2404f994-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6ddccbfd-933b-453c-9c4c-091c2404f994" (UID: "6ddccbfd-933b-453c-9c4c-091c2404f994"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:10:55 crc kubenswrapper[5017]: I0129 08:10:55.665299 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ddccbfd-933b-453c-9c4c-091c2404f994-kube-api-access-vbxc2" (OuterVolumeSpecName: "kube-api-access-vbxc2") pod "6ddccbfd-933b-453c-9c4c-091c2404f994" (UID: "6ddccbfd-933b-453c-9c4c-091c2404f994"). InnerVolumeSpecName "kube-api-access-vbxc2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:10:55 crc kubenswrapper[5017]: I0129 08:10:55.760391 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vbxc2\" (UniqueName: \"kubernetes.io/projected/6ddccbfd-933b-453c-9c4c-091c2404f994-kube-api-access-vbxc2\") on node \"crc\" DevicePath \"\"" Jan 29 08:10:55 crc kubenswrapper[5017]: I0129 08:10:55.760432 5017 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6ddccbfd-933b-453c-9c4c-091c2404f994-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 08:10:56 crc kubenswrapper[5017]: I0129 08:10:56.188007 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-7614-account-create-update-7jpwv" event={"ID":"6ddccbfd-933b-453c-9c4c-091c2404f994","Type":"ContainerDied","Data":"c36c6378e240a5f9eca900e0eec68d26f8dc26a422d6d43001113dca2daa7fc7"} Jan 29 08:10:56 crc kubenswrapper[5017]: I0129 08:10:56.188064 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c36c6378e240a5f9eca900e0eec68d26f8dc26a422d6d43001113dca2daa7fc7" Jan 29 08:10:56 crc kubenswrapper[5017]: I0129 08:10:56.188087 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-7614-account-create-update-7jpwv" Jan 29 08:10:57 crc kubenswrapper[5017]: I0129 08:10:57.954507 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-persistence-db-create-hztpq"] Jan 29 08:10:57 crc kubenswrapper[5017]: E0129 08:10:57.955532 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac0f87d4-8e6f-4b13-a018-66f3317394b1" containerName="mariadb-database-create" Jan 29 08:10:57 crc kubenswrapper[5017]: I0129 08:10:57.955551 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac0f87d4-8e6f-4b13-a018-66f3317394b1" containerName="mariadb-database-create" Jan 29 08:10:57 crc kubenswrapper[5017]: E0129 08:10:57.955567 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ddccbfd-933b-453c-9c4c-091c2404f994" containerName="mariadb-account-create-update" Jan 29 08:10:57 crc kubenswrapper[5017]: I0129 08:10:57.955575 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ddccbfd-933b-453c-9c4c-091c2404f994" containerName="mariadb-account-create-update" Jan 29 08:10:57 crc kubenswrapper[5017]: I0129 08:10:57.955767 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac0f87d4-8e6f-4b13-a018-66f3317394b1" containerName="mariadb-database-create" Jan 29 08:10:57 crc kubenswrapper[5017]: I0129 08:10:57.955788 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ddccbfd-933b-453c-9c4c-091c2404f994" containerName="mariadb-account-create-update" Jan 29 08:10:57 crc kubenswrapper[5017]: I0129 08:10:57.956593 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-persistence-db-create-hztpq" Jan 29 08:10:57 crc kubenswrapper[5017]: I0129 08:10:57.965200 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-persistence-db-create-hztpq"] Jan 29 08:10:58 crc kubenswrapper[5017]: I0129 08:10:58.109079 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjjwc\" (UniqueName: \"kubernetes.io/projected/58e53154-f239-4c66-a96b-0c32a3304e57-kube-api-access-sjjwc\") pod \"octavia-persistence-db-create-hztpq\" (UID: \"58e53154-f239-4c66-a96b-0c32a3304e57\") " pod="openstack/octavia-persistence-db-create-hztpq" Jan 29 08:10:58 crc kubenswrapper[5017]: I0129 08:10:58.110136 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/58e53154-f239-4c66-a96b-0c32a3304e57-operator-scripts\") pod \"octavia-persistence-db-create-hztpq\" (UID: \"58e53154-f239-4c66-a96b-0c32a3304e57\") " pod="openstack/octavia-persistence-db-create-hztpq" Jan 29 08:10:58 crc kubenswrapper[5017]: I0129 08:10:58.212104 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/58e53154-f239-4c66-a96b-0c32a3304e57-operator-scripts\") pod \"octavia-persistence-db-create-hztpq\" (UID: \"58e53154-f239-4c66-a96b-0c32a3304e57\") " pod="openstack/octavia-persistence-db-create-hztpq" Jan 29 08:10:58 crc kubenswrapper[5017]: I0129 08:10:58.212265 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjjwc\" (UniqueName: \"kubernetes.io/projected/58e53154-f239-4c66-a96b-0c32a3304e57-kube-api-access-sjjwc\") pod \"octavia-persistence-db-create-hztpq\" (UID: \"58e53154-f239-4c66-a96b-0c32a3304e57\") " 
pod="openstack/octavia-persistence-db-create-hztpq" Jan 29 08:10:58 crc kubenswrapper[5017]: I0129 08:10:58.213102 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/58e53154-f239-4c66-a96b-0c32a3304e57-operator-scripts\") pod \"octavia-persistence-db-create-hztpq\" (UID: \"58e53154-f239-4c66-a96b-0c32a3304e57\") " pod="openstack/octavia-persistence-db-create-hztpq" Jan 29 08:10:58 crc kubenswrapper[5017]: I0129 08:10:58.232144 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjjwc\" (UniqueName: \"kubernetes.io/projected/58e53154-f239-4c66-a96b-0c32a3304e57-kube-api-access-sjjwc\") pod \"octavia-persistence-db-create-hztpq\" (UID: \"58e53154-f239-4c66-a96b-0c32a3304e57\") " pod="openstack/octavia-persistence-db-create-hztpq" Jan 29 08:10:58 crc kubenswrapper[5017]: I0129 08:10:58.288340 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-persistence-db-create-hztpq" Jan 29 08:10:58 crc kubenswrapper[5017]: I0129 08:10:58.753004 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-persistence-db-create-hztpq"] Jan 29 08:10:59 crc kubenswrapper[5017]: I0129 08:10:59.215505 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-59ae-account-create-update-hwgd7"] Jan 29 08:10:59 crc kubenswrapper[5017]: I0129 08:10:59.217091 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-59ae-account-create-update-hwgd7" Jan 29 08:10:59 crc kubenswrapper[5017]: I0129 08:10:59.220003 5017 generic.go:334] "Generic (PLEG): container finished" podID="58e53154-f239-4c66-a96b-0c32a3304e57" containerID="208ab7815c4c5ec09b9043332779e1feeb5d9179bb36663f6afdbc2801f3a1b5" exitCode=0 Jan 29 08:10:59 crc kubenswrapper[5017]: I0129 08:10:59.220067 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-persistence-db-create-hztpq" event={"ID":"58e53154-f239-4c66-a96b-0c32a3304e57","Type":"ContainerDied","Data":"208ab7815c4c5ec09b9043332779e1feeb5d9179bb36663f6afdbc2801f3a1b5"} Jan 29 08:10:59 crc kubenswrapper[5017]: I0129 08:10:59.220201 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-persistence-db-create-hztpq" event={"ID":"58e53154-f239-4c66-a96b-0c32a3304e57","Type":"ContainerStarted","Data":"7a3703bd666edb2c05f383335f449ff3c71abed7b5dcffdcb4b7878673982421"} Jan 29 08:10:59 crc kubenswrapper[5017]: I0129 08:10:59.223764 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-persistence-db-secret" Jan 29 08:10:59 crc kubenswrapper[5017]: I0129 08:10:59.229312 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-59ae-account-create-update-hwgd7"] Jan 29 08:10:59 crc kubenswrapper[5017]: I0129 08:10:59.336476 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzx6k\" (UniqueName: \"kubernetes.io/projected/664cf5ff-9de5-45be-8778-fa2ac737b9a8-kube-api-access-hzx6k\") pod \"octavia-59ae-account-create-update-hwgd7\" (UID: \"664cf5ff-9de5-45be-8778-fa2ac737b9a8\") " pod="openstack/octavia-59ae-account-create-update-hwgd7" Jan 29 08:10:59 crc kubenswrapper[5017]: I0129 08:10:59.336832 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/664cf5ff-9de5-45be-8778-fa2ac737b9a8-operator-scripts\") pod \"octavia-59ae-account-create-update-hwgd7\" (UID: \"664cf5ff-9de5-45be-8778-fa2ac737b9a8\") " pod="openstack/octavia-59ae-account-create-update-hwgd7" Jan 29 08:10:59 crc kubenswrapper[5017]: I0129 08:10:59.440335 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzx6k\" (UniqueName: \"kubernetes.io/projected/664cf5ff-9de5-45be-8778-fa2ac737b9a8-kube-api-access-hzx6k\") pod \"octavia-59ae-account-create-update-hwgd7\" (UID: \"664cf5ff-9de5-45be-8778-fa2ac737b9a8\") " pod="openstack/octavia-59ae-account-create-update-hwgd7" Jan 29 08:10:59 crc kubenswrapper[5017]: I0129 08:10:59.440785 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/664cf5ff-9de5-45be-8778-fa2ac737b9a8-operator-scripts\") pod \"octavia-59ae-account-create-update-hwgd7\" (UID: \"664cf5ff-9de5-45be-8778-fa2ac737b9a8\") " pod="openstack/octavia-59ae-account-create-update-hwgd7" Jan 29 08:10:59 crc kubenswrapper[5017]: I0129 08:10:59.441614 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/664cf5ff-9de5-45be-8778-fa2ac737b9a8-operator-scripts\") pod \"octavia-59ae-account-create-update-hwgd7\" (UID: \"664cf5ff-9de5-45be-8778-fa2ac737b9a8\") " pod="openstack/octavia-59ae-account-create-update-hwgd7" Jan 29 08:10:59 crc kubenswrapper[5017]: I0129 08:10:59.483200 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzx6k\" (UniqueName: \"kubernetes.io/projected/664cf5ff-9de5-45be-8778-fa2ac737b9a8-kube-api-access-hzx6k\") pod \"octavia-59ae-account-create-update-hwgd7\" (UID: \"664cf5ff-9de5-45be-8778-fa2ac737b9a8\") " pod="openstack/octavia-59ae-account-create-update-hwgd7" Jan 29 08:10:59 crc kubenswrapper[5017]: I0129 08:10:59.542901 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-59ae-account-create-update-hwgd7" Jan 29 08:11:00 crc kubenswrapper[5017]: I0129 08:11:00.090172 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-59ae-account-create-update-hwgd7"] Jan 29 08:11:00 crc kubenswrapper[5017]: I0129 08:11:00.229989 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-59ae-account-create-update-hwgd7" event={"ID":"664cf5ff-9de5-45be-8778-fa2ac737b9a8","Type":"ContainerStarted","Data":"ba85c29e63f692b8cfec2430b0ce309a0b24a5140f9660324abc4258d1bc057e"} Jan 29 08:11:00 crc kubenswrapper[5017]: I0129 08:11:00.461709 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-persistence-db-create-hztpq" Jan 29 08:11:00 crc kubenswrapper[5017]: I0129 08:11:00.572903 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/58e53154-f239-4c66-a96b-0c32a3304e57-operator-scripts\") pod \"58e53154-f239-4c66-a96b-0c32a3304e57\" (UID: \"58e53154-f239-4c66-a96b-0c32a3304e57\") " Jan 29 08:11:00 crc kubenswrapper[5017]: I0129 08:11:00.573019 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sjjwc\" (UniqueName: \"kubernetes.io/projected/58e53154-f239-4c66-a96b-0c32a3304e57-kube-api-access-sjjwc\") pod \"58e53154-f239-4c66-a96b-0c32a3304e57\" (UID: \"58e53154-f239-4c66-a96b-0c32a3304e57\") " Jan 29 08:11:00 crc kubenswrapper[5017]: I0129 08:11:00.573893 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58e53154-f239-4c66-a96b-0c32a3304e57-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "58e53154-f239-4c66-a96b-0c32a3304e57" (UID: "58e53154-f239-4c66-a96b-0c32a3304e57"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:11:00 crc kubenswrapper[5017]: I0129 08:11:00.583339 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58e53154-f239-4c66-a96b-0c32a3304e57-kube-api-access-sjjwc" (OuterVolumeSpecName: "kube-api-access-sjjwc") pod "58e53154-f239-4c66-a96b-0c32a3304e57" (UID: "58e53154-f239-4c66-a96b-0c32a3304e57"). InnerVolumeSpecName "kube-api-access-sjjwc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:11:00 crc kubenswrapper[5017]: I0129 08:11:00.675658 5017 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/58e53154-f239-4c66-a96b-0c32a3304e57-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 08:11:00 crc kubenswrapper[5017]: I0129 08:11:00.675694 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sjjwc\" (UniqueName: \"kubernetes.io/projected/58e53154-f239-4c66-a96b-0c32a3304e57-kube-api-access-sjjwc\") on node \"crc\" DevicePath \"\"" Jan 29 08:11:01 crc kubenswrapper[5017]: I0129 08:11:01.242859 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-persistence-db-create-hztpq" Jan 29 08:11:01 crc kubenswrapper[5017]: I0129 08:11:01.242913 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-persistence-db-create-hztpq" event={"ID":"58e53154-f239-4c66-a96b-0c32a3304e57","Type":"ContainerDied","Data":"7a3703bd666edb2c05f383335f449ff3c71abed7b5dcffdcb4b7878673982421"} Jan 29 08:11:01 crc kubenswrapper[5017]: I0129 08:11:01.243542 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a3703bd666edb2c05f383335f449ff3c71abed7b5dcffdcb4b7878673982421" Jan 29 08:11:01 crc kubenswrapper[5017]: I0129 08:11:01.247704 5017 generic.go:334] "Generic (PLEG): container finished" podID="664cf5ff-9de5-45be-8778-fa2ac737b9a8" containerID="abb35b3863af14179d46b4fe45c3d612a5f0a84e52da36ccad47176957057857" exitCode=0 Jan 29 08:11:01 crc kubenswrapper[5017]: I0129 08:11:01.247768 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-59ae-account-create-update-hwgd7" event={"ID":"664cf5ff-9de5-45be-8778-fa2ac737b9a8","Type":"ContainerDied","Data":"abb35b3863af14179d46b4fe45c3d612a5f0a84e52da36ccad47176957057857"} Jan 29 08:11:02 crc kubenswrapper[5017]: I0129 08:11:02.641303 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-59ae-account-create-update-hwgd7" Jan 29 08:11:02 crc kubenswrapper[5017]: I0129 08:11:02.823817 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hzx6k\" (UniqueName: \"kubernetes.io/projected/664cf5ff-9de5-45be-8778-fa2ac737b9a8-kube-api-access-hzx6k\") pod \"664cf5ff-9de5-45be-8778-fa2ac737b9a8\" (UID: \"664cf5ff-9de5-45be-8778-fa2ac737b9a8\") " Jan 29 08:11:02 crc kubenswrapper[5017]: I0129 08:11:02.823985 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/664cf5ff-9de5-45be-8778-fa2ac737b9a8-operator-scripts\") pod \"664cf5ff-9de5-45be-8778-fa2ac737b9a8\" (UID: \"664cf5ff-9de5-45be-8778-fa2ac737b9a8\") " Jan 29 08:11:02 crc kubenswrapper[5017]: I0129 08:11:02.825207 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/664cf5ff-9de5-45be-8778-fa2ac737b9a8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "664cf5ff-9de5-45be-8778-fa2ac737b9a8" (UID: "664cf5ff-9de5-45be-8778-fa2ac737b9a8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:11:02 crc kubenswrapper[5017]: I0129 08:11:02.830488 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/664cf5ff-9de5-45be-8778-fa2ac737b9a8-kube-api-access-hzx6k" (OuterVolumeSpecName: "kube-api-access-hzx6k") pod "664cf5ff-9de5-45be-8778-fa2ac737b9a8" (UID: "664cf5ff-9de5-45be-8778-fa2ac737b9a8"). InnerVolumeSpecName "kube-api-access-hzx6k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:11:02 crc kubenswrapper[5017]: I0129 08:11:02.926696 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hzx6k\" (UniqueName: \"kubernetes.io/projected/664cf5ff-9de5-45be-8778-fa2ac737b9a8-kube-api-access-hzx6k\") on node \"crc\" DevicePath \"\"" Jan 29 08:11:02 crc kubenswrapper[5017]: I0129 08:11:02.926741 5017 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/664cf5ff-9de5-45be-8778-fa2ac737b9a8-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 08:11:03 crc kubenswrapper[5017]: I0129 08:11:03.270258 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-59ae-account-create-update-hwgd7" event={"ID":"664cf5ff-9de5-45be-8778-fa2ac737b9a8","Type":"ContainerDied","Data":"ba85c29e63f692b8cfec2430b0ce309a0b24a5140f9660324abc4258d1bc057e"} Jan 29 08:11:03 crc kubenswrapper[5017]: I0129 08:11:03.270793 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ba85c29e63f692b8cfec2430b0ce309a0b24a5140f9660324abc4258d1bc057e" Jan 29 08:11:03 crc kubenswrapper[5017]: I0129 08:11:03.270331 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-59ae-account-create-update-hwgd7" Jan 29 08:11:04 crc kubenswrapper[5017]: I0129 08:11:04.903024 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-api-86dc7d4d88-5x6wk"] Jan 29 08:11:04 crc kubenswrapper[5017]: E0129 08:11:04.903552 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58e53154-f239-4c66-a96b-0c32a3304e57" containerName="mariadb-database-create" Jan 29 08:11:04 crc kubenswrapper[5017]: I0129 08:11:04.903571 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="58e53154-f239-4c66-a96b-0c32a3304e57" containerName="mariadb-database-create" Jan 29 08:11:04 crc kubenswrapper[5017]: E0129 08:11:04.903607 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="664cf5ff-9de5-45be-8778-fa2ac737b9a8" containerName="mariadb-account-create-update" Jan 29 08:11:04 crc kubenswrapper[5017]: I0129 08:11:04.903614 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="664cf5ff-9de5-45be-8778-fa2ac737b9a8" containerName="mariadb-account-create-update" Jan 29 08:11:04 crc kubenswrapper[5017]: I0129 08:11:04.903862 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="58e53154-f239-4c66-a96b-0c32a3304e57" containerName="mariadb-database-create" Jan 29 08:11:04 crc kubenswrapper[5017]: I0129 08:11:04.903881 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="664cf5ff-9de5-45be-8778-fa2ac737b9a8" containerName="mariadb-account-create-update" Jan 29 08:11:04 crc kubenswrapper[5017]: I0129 08:11:04.905445 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-api-86dc7d4d88-5x6wk" Jan 29 08:11:04 crc kubenswrapper[5017]: I0129 08:11:04.908601 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-api-scripts" Jan 29 08:11:04 crc kubenswrapper[5017]: I0129 08:11:04.909877 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-api-config-data" Jan 29 08:11:04 crc kubenswrapper[5017]: I0129 08:11:04.910135 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-octavia-dockercfg-2882h" Jan 29 08:11:04 crc kubenswrapper[5017]: I0129 08:11:04.924080 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-api-86dc7d4d88-5x6wk"] Jan 29 08:11:05 crc kubenswrapper[5017]: I0129 08:11:05.076542 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/056489f1-b498-496c-87dc-478bc8df163d-config-data\") pod \"octavia-api-86dc7d4d88-5x6wk\" (UID: \"056489f1-b498-496c-87dc-478bc8df163d\") " pod="openstack/octavia-api-86dc7d4d88-5x6wk" Jan 29 08:11:05 crc kubenswrapper[5017]: I0129 08:11:05.077107 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/056489f1-b498-496c-87dc-478bc8df163d-combined-ca-bundle\") pod \"octavia-api-86dc7d4d88-5x6wk\" (UID: \"056489f1-b498-496c-87dc-478bc8df163d\") " pod="openstack/octavia-api-86dc7d4d88-5x6wk" Jan 29 08:11:05 crc kubenswrapper[5017]: I0129 08:11:05.077324 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/056489f1-b498-496c-87dc-478bc8df163d-config-data-merged\") pod \"octavia-api-86dc7d4d88-5x6wk\" (UID: \"056489f1-b498-496c-87dc-478bc8df163d\") " pod="openstack/octavia-api-86dc7d4d88-5x6wk" Jan 29 08:11:05 crc kubenswrapper[5017]: I0129 08:11:05.077383 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/056489f1-b498-496c-87dc-478bc8df163d-octavia-run\") pod \"octavia-api-86dc7d4d88-5x6wk\" (UID: \"056489f1-b498-496c-87dc-478bc8df163d\") " pod="openstack/octavia-api-86dc7d4d88-5x6wk" Jan 29 08:11:05 crc kubenswrapper[5017]: I0129 08:11:05.077421 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/056489f1-b498-496c-87dc-478bc8df163d-scripts\") pod \"octavia-api-86dc7d4d88-5x6wk\" (UID: \"056489f1-b498-496c-87dc-478bc8df163d\") " pod="openstack/octavia-api-86dc7d4d88-5x6wk" Jan 29 08:11:05 crc kubenswrapper[5017]: I0129 08:11:05.178550 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/056489f1-b498-496c-87dc-478bc8df163d-config-data-merged\") pod \"octavia-api-86dc7d4d88-5x6wk\" (UID: \"056489f1-b498-496c-87dc-478bc8df163d\") " pod="openstack/octavia-api-86dc7d4d88-5x6wk" Jan 29 08:11:05 crc kubenswrapper[5017]: I0129 08:11:05.178629 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/056489f1-b498-496c-87dc-478bc8df163d-octavia-run\") pod \"octavia-api-86dc7d4d88-5x6wk\" (UID: \"056489f1-b498-496c-87dc-478bc8df163d\") " pod="openstack/octavia-api-86dc7d4d88-5x6wk" Jan 29 
08:11:05 crc kubenswrapper[5017]: I0129 08:11:05.178682 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/056489f1-b498-496c-87dc-478bc8df163d-scripts\") pod \"octavia-api-86dc7d4d88-5x6wk\" (UID: \"056489f1-b498-496c-87dc-478bc8df163d\") " pod="openstack/octavia-api-86dc7d4d88-5x6wk" Jan 29 08:11:05 crc kubenswrapper[5017]: I0129 08:11:05.178730 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/056489f1-b498-496c-87dc-478bc8df163d-config-data\") pod \"octavia-api-86dc7d4d88-5x6wk\" (UID: \"056489f1-b498-496c-87dc-478bc8df163d\") " pod="openstack/octavia-api-86dc7d4d88-5x6wk" Jan 29 08:11:05 crc kubenswrapper[5017]: I0129 08:11:05.178754 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/056489f1-b498-496c-87dc-478bc8df163d-combined-ca-bundle\") pod \"octavia-api-86dc7d4d88-5x6wk\" (UID: \"056489f1-b498-496c-87dc-478bc8df163d\") " pod="openstack/octavia-api-86dc7d4d88-5x6wk" Jan 29 08:11:05 crc kubenswrapper[5017]: I0129 08:11:05.179615 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/056489f1-b498-496c-87dc-478bc8df163d-octavia-run\") pod \"octavia-api-86dc7d4d88-5x6wk\" (UID: \"056489f1-b498-496c-87dc-478bc8df163d\") " pod="openstack/octavia-api-86dc7d4d88-5x6wk" Jan 29 08:11:05 crc kubenswrapper[5017]: I0129 08:11:05.179784 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/056489f1-b498-496c-87dc-478bc8df163d-config-data-merged\") pod \"octavia-api-86dc7d4d88-5x6wk\" (UID: \"056489f1-b498-496c-87dc-478bc8df163d\") " pod="openstack/octavia-api-86dc7d4d88-5x6wk" Jan 29 08:11:05 crc kubenswrapper[5017]: I0129 08:11:05.188898 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/056489f1-b498-496c-87dc-478bc8df163d-combined-ca-bundle\") pod \"octavia-api-86dc7d4d88-5x6wk\" (UID: \"056489f1-b498-496c-87dc-478bc8df163d\") " pod="openstack/octavia-api-86dc7d4d88-5x6wk" Jan 29 08:11:05 crc kubenswrapper[5017]: I0129 08:11:05.190296 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/056489f1-b498-496c-87dc-478bc8df163d-config-data\") pod \"octavia-api-86dc7d4d88-5x6wk\" (UID: \"056489f1-b498-496c-87dc-478bc8df163d\") " pod="openstack/octavia-api-86dc7d4d88-5x6wk" Jan 29 08:11:05 crc kubenswrapper[5017]: I0129 08:11:05.193724 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/056489f1-b498-496c-87dc-478bc8df163d-scripts\") pod \"octavia-api-86dc7d4d88-5x6wk\" (UID: \"056489f1-b498-496c-87dc-478bc8df163d\") " pod="openstack/octavia-api-86dc7d4d88-5x6wk" Jan 29 08:11:05 crc kubenswrapper[5017]: I0129 08:11:05.225348 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-api-86dc7d4d88-5x6wk" Jan 29 08:11:05 crc kubenswrapper[5017]: I0129 08:11:05.317062 5017 scope.go:117] "RemoveContainer" containerID="45e4c349dc67bf5910f42f2a3df0c872cf61c8acd759f4f9b895eefc7afaf645" Jan 29 08:11:05 crc kubenswrapper[5017]: E0129 08:11:05.317518 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:11:05 crc kubenswrapper[5017]: I0129 08:11:05.768041 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-api-86dc7d4d88-5x6wk"] Jan 29 08:11:05 crc kubenswrapper[5017]: I0129 08:11:05.786411 5017 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 08:11:06 crc kubenswrapper[5017]: I0129 08:11:06.306269 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-86dc7d4d88-5x6wk" event={"ID":"056489f1-b498-496c-87dc-478bc8df163d","Type":"ContainerStarted","Data":"e3e69b31b59224619541eec54cfcbf449ced85a6ebf0fc097128e90ba99cf96d"} Jan 29 08:11:14 crc kubenswrapper[5017]: I0129 08:11:14.602441 5017 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-w64dv" podUID="1000feb0-a866-42c2-974e-cd95329589e2" containerName="ovn-controller" probeResult="failure" output=< Jan 29 08:11:14 crc kubenswrapper[5017]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 29 08:11:14 crc kubenswrapper[5017]: > Jan 29 08:11:14 crc kubenswrapper[5017]: I0129 08:11:14.624795 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-5mc4s" Jan 29 08:11:14 crc kubenswrapper[5017]: I0129 08:11:14.628179 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-5mc4s" Jan 29 08:11:14 crc kubenswrapper[5017]: I0129 08:11:14.763704 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-w64dv-config-sp6cb"] Jan 29 08:11:14 crc kubenswrapper[5017]: I0129 08:11:14.775069 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-w64dv-config-sp6cb" Jan 29 08:11:14 crc kubenswrapper[5017]: I0129 08:11:14.779042 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 29 08:11:14 crc kubenswrapper[5017]: I0129 08:11:14.785310 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-w64dv-config-sp6cb"] Jan 29 08:11:14 crc kubenswrapper[5017]: I0129 08:11:14.819431 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0f8e6c1c-9146-46d1-9d17-cc554ebb2157-var-run\") pod \"ovn-controller-w64dv-config-sp6cb\" (UID: \"0f8e6c1c-9146-46d1-9d17-cc554ebb2157\") " pod="openstack/ovn-controller-w64dv-config-sp6cb" Jan 29 08:11:14 crc kubenswrapper[5017]: I0129 08:11:14.819547 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0f8e6c1c-9146-46d1-9d17-cc554ebb2157-scripts\") pod \"ovn-controller-w64dv-config-sp6cb\" (UID: \"0f8e6c1c-9146-46d1-9d17-cc554ebb2157\") " pod="openstack/ovn-controller-w64dv-config-sp6cb" Jan 29 08:11:14 crc kubenswrapper[5017]: I0129 08:11:14.819582 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0f8e6c1c-9146-46d1-9d17-cc554ebb2157-var-run-ovn\") pod \"ovn-controller-w64dv-config-sp6cb\" (UID: \"0f8e6c1c-9146-46d1-9d17-cc554ebb2157\") " pod="openstack/ovn-controller-w64dv-config-sp6cb" Jan 29 08:11:14 crc kubenswrapper[5017]: I0129 08:11:14.819618 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cdz5\" (UniqueName: \"kubernetes.io/projected/0f8e6c1c-9146-46d1-9d17-cc554ebb2157-kube-api-access-6cdz5\") pod \"ovn-controller-w64dv-config-sp6cb\" (UID: \"0f8e6c1c-9146-46d1-9d17-cc554ebb2157\") " pod="openstack/ovn-controller-w64dv-config-sp6cb" Jan 29 08:11:14 crc kubenswrapper[5017]: I0129 08:11:14.819718 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/0f8e6c1c-9146-46d1-9d17-cc554ebb2157-additional-scripts\") pod \"ovn-controller-w64dv-config-sp6cb\" (UID: \"0f8e6c1c-9146-46d1-9d17-cc554ebb2157\") " pod="openstack/ovn-controller-w64dv-config-sp6cb" Jan 29 08:11:14 crc kubenswrapper[5017]: I0129 08:11:14.819817 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0f8e6c1c-9146-46d1-9d17-cc554ebb2157-var-log-ovn\") pod \"ovn-controller-w64dv-config-sp6cb\" (UID: \"0f8e6c1c-9146-46d1-9d17-cc554ebb2157\") " pod="openstack/ovn-controller-w64dv-config-sp6cb" Jan 29 08:11:14 crc kubenswrapper[5017]: I0129 08:11:14.922376 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/0f8e6c1c-9146-46d1-9d17-cc554ebb2157-additional-scripts\") pod \"ovn-controller-w64dv-config-sp6cb\" (UID: \"0f8e6c1c-9146-46d1-9d17-cc554ebb2157\") " pod="openstack/ovn-controller-w64dv-config-sp6cb" Jan 29 08:11:14 crc kubenswrapper[5017]: I0129 08:11:14.922488 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: 
\"kubernetes.io/host-path/0f8e6c1c-9146-46d1-9d17-cc554ebb2157-var-log-ovn\") pod \"ovn-controller-w64dv-config-sp6cb\" (UID: \"0f8e6c1c-9146-46d1-9d17-cc554ebb2157\") " pod="openstack/ovn-controller-w64dv-config-sp6cb" Jan 29 08:11:14 crc kubenswrapper[5017]: I0129 08:11:14.922528 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0f8e6c1c-9146-46d1-9d17-cc554ebb2157-var-run\") pod \"ovn-controller-w64dv-config-sp6cb\" (UID: \"0f8e6c1c-9146-46d1-9d17-cc554ebb2157\") " pod="openstack/ovn-controller-w64dv-config-sp6cb" Jan 29 08:11:14 crc kubenswrapper[5017]: I0129 08:11:14.922567 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0f8e6c1c-9146-46d1-9d17-cc554ebb2157-scripts\") pod \"ovn-controller-w64dv-config-sp6cb\" (UID: \"0f8e6c1c-9146-46d1-9d17-cc554ebb2157\") " pod="openstack/ovn-controller-w64dv-config-sp6cb" Jan 29 08:11:14 crc kubenswrapper[5017]: I0129 08:11:14.922588 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0f8e6c1c-9146-46d1-9d17-cc554ebb2157-var-run-ovn\") pod \"ovn-controller-w64dv-config-sp6cb\" (UID: \"0f8e6c1c-9146-46d1-9d17-cc554ebb2157\") " pod="openstack/ovn-controller-w64dv-config-sp6cb" Jan 29 08:11:14 crc kubenswrapper[5017]: I0129 08:11:14.922608 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6cdz5\" (UniqueName: \"kubernetes.io/projected/0f8e6c1c-9146-46d1-9d17-cc554ebb2157-kube-api-access-6cdz5\") pod \"ovn-controller-w64dv-config-sp6cb\" (UID: \"0f8e6c1c-9146-46d1-9d17-cc554ebb2157\") " pod="openstack/ovn-controller-w64dv-config-sp6cb" Jan 29 08:11:14 crc kubenswrapper[5017]: I0129 08:11:14.924087 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0f8e6c1c-9146-46d1-9d17-cc554ebb2157-var-log-ovn\") pod \"ovn-controller-w64dv-config-sp6cb\" (UID: \"0f8e6c1c-9146-46d1-9d17-cc554ebb2157\") " pod="openstack/ovn-controller-w64dv-config-sp6cb" Jan 29 08:11:14 crc kubenswrapper[5017]: I0129 08:11:14.924298 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/0f8e6c1c-9146-46d1-9d17-cc554ebb2157-additional-scripts\") pod \"ovn-controller-w64dv-config-sp6cb\" (UID: \"0f8e6c1c-9146-46d1-9d17-cc554ebb2157\") " pod="openstack/ovn-controller-w64dv-config-sp6cb" Jan 29 08:11:14 crc kubenswrapper[5017]: I0129 08:11:14.924383 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0f8e6c1c-9146-46d1-9d17-cc554ebb2157-var-run-ovn\") pod \"ovn-controller-w64dv-config-sp6cb\" (UID: \"0f8e6c1c-9146-46d1-9d17-cc554ebb2157\") " pod="openstack/ovn-controller-w64dv-config-sp6cb" Jan 29 08:11:14 crc kubenswrapper[5017]: I0129 08:11:14.923945 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0f8e6c1c-9146-46d1-9d17-cc554ebb2157-var-run\") pod \"ovn-controller-w64dv-config-sp6cb\" (UID: \"0f8e6c1c-9146-46d1-9d17-cc554ebb2157\") " pod="openstack/ovn-controller-w64dv-config-sp6cb" Jan 29 08:11:14 crc kubenswrapper[5017]: I0129 08:11:14.926479 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/0f8e6c1c-9146-46d1-9d17-cc554ebb2157-scripts\") pod \"ovn-controller-w64dv-config-sp6cb\" (UID: \"0f8e6c1c-9146-46d1-9d17-cc554ebb2157\") " pod="openstack/ovn-controller-w64dv-config-sp6cb" Jan 29 08:11:14 crc kubenswrapper[5017]: I0129 08:11:14.945388 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6cdz5\" (UniqueName: \"kubernetes.io/projected/0f8e6c1c-9146-46d1-9d17-cc554ebb2157-kube-api-access-6cdz5\") pod \"ovn-controller-w64dv-config-sp6cb\" (UID: \"0f8e6c1c-9146-46d1-9d17-cc554ebb2157\") " pod="openstack/ovn-controller-w64dv-config-sp6cb" Jan 29 08:11:15 crc kubenswrapper[5017]: I0129 08:11:15.119923 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-w64dv-config-sp6cb" Jan 29 08:11:15 crc kubenswrapper[5017]: I0129 08:11:15.403814 5017 generic.go:334] "Generic (PLEG): container finished" podID="056489f1-b498-496c-87dc-478bc8df163d" containerID="ca574bbe9ca4e32ad3d917e2fd49818c0c67c7ff189ae102866a5f52a85de228" exitCode=0 Jan 29 08:11:15 crc kubenswrapper[5017]: I0129 08:11:15.404018 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-86dc7d4d88-5x6wk" event={"ID":"056489f1-b498-496c-87dc-478bc8df163d","Type":"ContainerDied","Data":"ca574bbe9ca4e32ad3d917e2fd49818c0c67c7ff189ae102866a5f52a85de228"} Jan 29 08:11:15 crc kubenswrapper[5017]: I0129 08:11:15.652710 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-w64dv-config-sp6cb"] Jan 29 08:11:15 crc kubenswrapper[5017]: W0129 08:11:15.663537 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0f8e6c1c_9146_46d1_9d17_cc554ebb2157.slice/crio-fcbda4663e482f2de36efe9136cfac2fb5f359eb1f3e96c0210c3d778df61a65 WatchSource:0}: Error finding container fcbda4663e482f2de36efe9136cfac2fb5f359eb1f3e96c0210c3d778df61a65: Status 404 returned error can't find the container with id fcbda4663e482f2de36efe9136cfac2fb5f359eb1f3e96c0210c3d778df61a65 Jan 29 08:11:16 crc kubenswrapper[5017]: I0129 08:11:16.415448 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-w64dv-config-sp6cb" event={"ID":"0f8e6c1c-9146-46d1-9d17-cc554ebb2157","Type":"ContainerStarted","Data":"4ed07e67425ba675e32f83ef1f025dc546aeaa3feba3ad2510ccf9a02f16c6c0"} Jan 29 08:11:16 crc kubenswrapper[5017]: I0129 08:11:16.416197 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-w64dv-config-sp6cb" event={"ID":"0f8e6c1c-9146-46d1-9d17-cc554ebb2157","Type":"ContainerStarted","Data":"fcbda4663e482f2de36efe9136cfac2fb5f359eb1f3e96c0210c3d778df61a65"} Jan 29 08:11:16 crc kubenswrapper[5017]: I0129 08:11:16.418865 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-86dc7d4d88-5x6wk" event={"ID":"056489f1-b498-496c-87dc-478bc8df163d","Type":"ContainerStarted","Data":"aec85868711f4d8d06a32993515134bb64b42fa7cdf6a9f38ec7ff80d3987784"} Jan 29 08:11:16 crc kubenswrapper[5017]: I0129 08:11:16.418919 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-86dc7d4d88-5x6wk" event={"ID":"056489f1-b498-496c-87dc-478bc8df163d","Type":"ContainerStarted","Data":"2fe8bfe0b4a5d55457291426fb31af07ca51685af9f78f5de06a510bfcc7b24a"} Jan 29 08:11:16 crc kubenswrapper[5017]: I0129 08:11:16.419667 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-api-86dc7d4d88-5x6wk" Jan 29 08:11:16 crc 
Jan 29 08:11:16 crc kubenswrapper[5017]: I0129 08:11:16.419703 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-api-86dc7d4d88-5x6wk" Jan 29 08:11:16 crc kubenswrapper[5017]: I0129 08:11:16.439192 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-w64dv-config-sp6cb" podStartSLOduration=2.439162672 podStartE2EDuration="2.439162672s" podCreationTimestamp="2026-01-29 08:11:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:11:16.434534191 +0000 UTC m=+5762.808981801" watchObservedRunningTime="2026-01-29 08:11:16.439162672 +0000 UTC m=+5762.813610282" Jan 29 08:11:16 crc kubenswrapper[5017]: I0129 08:11:16.462540 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-api-86dc7d4d88-5x6wk" podStartSLOduration=3.375460095 podStartE2EDuration="12.462517533s" podCreationTimestamp="2026-01-29 08:11:04 +0000 UTC" firstStartedPulling="2026-01-29 08:11:05.786124546 +0000 UTC m=+5752.160572156" lastFinishedPulling="2026-01-29 08:11:14.873181994 +0000 UTC m=+5761.247629594" observedRunningTime="2026-01-29 08:11:16.45742507 +0000 UTC m=+5762.831872680" watchObservedRunningTime="2026-01-29 08:11:16.462517533 +0000 UTC m=+5762.836965143" Jan 29 08:11:17 crc kubenswrapper[5017]: I0129 08:11:17.433409 5017 generic.go:334] "Generic (PLEG): container finished" podID="0f8e6c1c-9146-46d1-9d17-cc554ebb2157" containerID="4ed07e67425ba675e32f83ef1f025dc546aeaa3feba3ad2510ccf9a02f16c6c0" exitCode=0 Jan 29 08:11:17 crc kubenswrapper[5017]: I0129 08:11:17.434029 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-w64dv-config-sp6cb" event={"ID":"0f8e6c1c-9146-46d1-9d17-cc554ebb2157","Type":"ContainerDied","Data":"4ed07e67425ba675e32f83ef1f025dc546aeaa3feba3ad2510ccf9a02f16c6c0"} Jan 29 08:11:18 crc kubenswrapper[5017]: I0129 08:11:18.322143 5017 scope.go:117] "RemoveContainer" containerID="45e4c349dc67bf5910f42f2a3df0c872cf61c8acd759f4f9b895eefc7afaf645" Jan 29 08:11:18 crc kubenswrapper[5017]: E0129 08:11:18.322416 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:11:18 crc kubenswrapper[5017]: I0129 08:11:18.795906 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-w64dv-config-sp6cb"
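The two "Observed pod startup duration" entries above show how pod_startup_latency_tracker splits startup time: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that figure minus the image-pull window (lastFinishedPulling - firstStartedPulling), so a pod that pulled nothing (the zeroed 0001-01-01 timestamps for ovn-controller-w64dv-config-sp6cb) reports the two values as equal. Re-computing the octavia-api numbers with the timestamps copied from the log:

package main

import (
	"fmt"
	"time"
)

// mustParse reads a timestamp in the exact format the log prints.
func mustParse(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	// Values copied from the octavia-api-86dc7d4d88-5x6wk entry above.
	created := mustParse("2026-01-29 08:11:04 +0000 UTC")
	firstPull := mustParse("2026-01-29 08:11:05.786124546 +0000 UTC")
	lastPull := mustParse("2026-01-29 08:11:14.873181994 +0000 UTC")
	running := mustParse("2026-01-29 08:11:16.462517533 +0000 UTC")

	e2e := running.Sub(created)        // podStartE2EDuration: 12.462517533s
	pulling := lastPull.Sub(firstPull) // image-pull window: 9.087057448s
	slo := e2e - pulling               // podStartSLOduration

	fmt.Println("E2E:", e2e)
	fmt.Println("pull:", pulling)
	// Prints ~3.375460085s; the logged 3.375460095 differs in the last
	// digits because the tracker subtracts the monotonic m=+... readings
	// rather than the wall-clock stamps used here.
	fmt.Println("SLO:", slo)
}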
Need to start a new one" pod="openstack/ovn-controller-w64dv-config-sp6cb" Jan 29 08:11:18 crc kubenswrapper[5017]: I0129 08:11:18.819013 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0f8e6c1c-9146-46d1-9d17-cc554ebb2157-var-log-ovn\") pod \"0f8e6c1c-9146-46d1-9d17-cc554ebb2157\" (UID: \"0f8e6c1c-9146-46d1-9d17-cc554ebb2157\") " Jan 29 08:11:18 crc kubenswrapper[5017]: I0129 08:11:18.819263 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0f8e6c1c-9146-46d1-9d17-cc554ebb2157-scripts\") pod \"0f8e6c1c-9146-46d1-9d17-cc554ebb2157\" (UID: \"0f8e6c1c-9146-46d1-9d17-cc554ebb2157\") " Jan 29 08:11:18 crc kubenswrapper[5017]: I0129 08:11:18.819323 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/0f8e6c1c-9146-46d1-9d17-cc554ebb2157-additional-scripts\") pod \"0f8e6c1c-9146-46d1-9d17-cc554ebb2157\" (UID: \"0f8e6c1c-9146-46d1-9d17-cc554ebb2157\") " Jan 29 08:11:18 crc kubenswrapper[5017]: I0129 08:11:18.819384 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0f8e6c1c-9146-46d1-9d17-cc554ebb2157-var-run\") pod \"0f8e6c1c-9146-46d1-9d17-cc554ebb2157\" (UID: \"0f8e6c1c-9146-46d1-9d17-cc554ebb2157\") " Jan 29 08:11:18 crc kubenswrapper[5017]: I0129 08:11:18.819483 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6cdz5\" (UniqueName: \"kubernetes.io/projected/0f8e6c1c-9146-46d1-9d17-cc554ebb2157-kube-api-access-6cdz5\") pod \"0f8e6c1c-9146-46d1-9d17-cc554ebb2157\" (UID: \"0f8e6c1c-9146-46d1-9d17-cc554ebb2157\") " Jan 29 08:11:18 crc kubenswrapper[5017]: I0129 08:11:18.819508 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0f8e6c1c-9146-46d1-9d17-cc554ebb2157-var-run-ovn\") pod \"0f8e6c1c-9146-46d1-9d17-cc554ebb2157\" (UID: \"0f8e6c1c-9146-46d1-9d17-cc554ebb2157\") " Jan 29 08:11:18 crc kubenswrapper[5017]: I0129 08:11:18.820252 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0f8e6c1c-9146-46d1-9d17-cc554ebb2157-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "0f8e6c1c-9146-46d1-9d17-cc554ebb2157" (UID: "0f8e6c1c-9146-46d1-9d17-cc554ebb2157"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 08:11:18 crc kubenswrapper[5017]: I0129 08:11:18.820302 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0f8e6c1c-9146-46d1-9d17-cc554ebb2157-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "0f8e6c1c-9146-46d1-9d17-cc554ebb2157" (UID: "0f8e6c1c-9146-46d1-9d17-cc554ebb2157"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 08:11:18 crc kubenswrapper[5017]: I0129 08:11:18.821801 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0f8e6c1c-9146-46d1-9d17-cc554ebb2157-scripts" (OuterVolumeSpecName: "scripts") pod "0f8e6c1c-9146-46d1-9d17-cc554ebb2157" (UID: "0f8e6c1c-9146-46d1-9d17-cc554ebb2157"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:11:18 crc kubenswrapper[5017]: I0129 08:11:18.821843 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0f8e6c1c-9146-46d1-9d17-cc554ebb2157-var-run" (OuterVolumeSpecName: "var-run") pod "0f8e6c1c-9146-46d1-9d17-cc554ebb2157" (UID: "0f8e6c1c-9146-46d1-9d17-cc554ebb2157"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 08:11:18 crc kubenswrapper[5017]: I0129 08:11:18.822640 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0f8e6c1c-9146-46d1-9d17-cc554ebb2157-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "0f8e6c1c-9146-46d1-9d17-cc554ebb2157" (UID: "0f8e6c1c-9146-46d1-9d17-cc554ebb2157"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:11:18 crc kubenswrapper[5017]: I0129 08:11:18.829468 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f8e6c1c-9146-46d1-9d17-cc554ebb2157-kube-api-access-6cdz5" (OuterVolumeSpecName: "kube-api-access-6cdz5") pod "0f8e6c1c-9146-46d1-9d17-cc554ebb2157" (UID: "0f8e6c1c-9146-46d1-9d17-cc554ebb2157"). InnerVolumeSpecName "kube-api-access-6cdz5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:11:18 crc kubenswrapper[5017]: I0129 08:11:18.921473 5017 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0f8e6c1c-9146-46d1-9d17-cc554ebb2157-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 08:11:18 crc kubenswrapper[5017]: I0129 08:11:18.921515 5017 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/0f8e6c1c-9146-46d1-9d17-cc554ebb2157-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 08:11:18 crc kubenswrapper[5017]: I0129 08:11:18.921527 5017 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0f8e6c1c-9146-46d1-9d17-cc554ebb2157-var-run\") on node \"crc\" DevicePath \"\"" Jan 29 08:11:18 crc kubenswrapper[5017]: I0129 08:11:18.921538 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6cdz5\" (UniqueName: \"kubernetes.io/projected/0f8e6c1c-9146-46d1-9d17-cc554ebb2157-kube-api-access-6cdz5\") on node \"crc\" DevicePath \"\"" Jan 29 08:11:18 crc kubenswrapper[5017]: I0129 08:11:18.921548 5017 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0f8e6c1c-9146-46d1-9d17-cc554ebb2157-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 29 08:11:18 crc kubenswrapper[5017]: I0129 08:11:18.921556 5017 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0f8e6c1c-9146-46d1-9d17-cc554ebb2157-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 29 08:11:19 crc kubenswrapper[5017]: I0129 08:11:19.456707 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-w64dv-config-sp6cb" event={"ID":"0f8e6c1c-9146-46d1-9d17-cc554ebb2157","Type":"ContainerDied","Data":"fcbda4663e482f2de36efe9136cfac2fb5f359eb1f3e96c0210c3d778df61a65"} Jan 29 08:11:19 crc kubenswrapper[5017]: I0129 08:11:19.456758 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fcbda4663e482f2de36efe9136cfac2fb5f359eb1f3e96c0210c3d778df61a65" Jan 29 08:11:19 crc 
kubenswrapper[5017]: I0129 08:11:19.456782 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-w64dv-config-sp6cb" Jan 29 08:11:19 crc kubenswrapper[5017]: I0129 08:11:19.615180 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-w64dv-config-sp6cb"] Jan 29 08:11:19 crc kubenswrapper[5017]: I0129 08:11:19.628514 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-w64dv-config-sp6cb"] Jan 29 08:11:19 crc kubenswrapper[5017]: I0129 08:11:19.732981 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-w64dv" Jan 29 08:11:20 crc kubenswrapper[5017]: I0129 08:11:20.331760 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f8e6c1c-9146-46d1-9d17-cc554ebb2157" path="/var/lib/kubelet/pods/0f8e6c1c-9146-46d1-9d17-cc554ebb2157/volumes" Jan 29 08:11:24 crc kubenswrapper[5017]: I0129 08:11:24.662645 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-api-86dc7d4d88-5x6wk" Jan 29 08:11:24 crc kubenswrapper[5017]: I0129 08:11:24.672615 5017 scope.go:117] "RemoveContainer" containerID="c75d6eb225024f79242044e5a63a39d57c546d3a4e9edc7e4d920fc4a410fc6f" Jan 29 08:11:24 crc kubenswrapper[5017]: I0129 08:11:24.703417 5017 scope.go:117] "RemoveContainer" containerID="d5b952f26db971c66a61d7c380e6115a17242a0283b9efd05331830e338ca682" Jan 29 08:11:24 crc kubenswrapper[5017]: I0129 08:11:24.729098 5017 scope.go:117] "RemoveContainer" containerID="69f46fc963ea4a6f70f0336bf2676a4c11aaef5d461c0a295a7a74ea9aefb83b" Jan 29 08:11:24 crc kubenswrapper[5017]: I0129 08:11:24.822217 5017 scope.go:117] "RemoveContainer" containerID="4194c55d0ded9ac52934efd1495dfab80bf21021761c09a619c693339f745a46" Jan 29 08:11:24 crc kubenswrapper[5017]: I0129 08:11:24.984447 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-api-86dc7d4d88-5x6wk" Jan 29 08:11:26 crc kubenswrapper[5017]: I0129 08:11:26.335828 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-rsyslog-rnfsn"] Jan 29 08:11:26 crc kubenswrapper[5017]: E0129 08:11:26.336809 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f8e6c1c-9146-46d1-9d17-cc554ebb2157" containerName="ovn-config" Jan 29 08:11:26 crc kubenswrapper[5017]: I0129 08:11:26.336827 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f8e6c1c-9146-46d1-9d17-cc554ebb2157" containerName="ovn-config" Jan 29 08:11:26 crc kubenswrapper[5017]: I0129 08:11:26.337116 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f8e6c1c-9146-46d1-9d17-cc554ebb2157" containerName="ovn-config" Jan 29 08:11:26 crc kubenswrapper[5017]: I0129 08:11:26.338193 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-rsyslog-rnfsn" Jan 29 08:11:26 crc kubenswrapper[5017]: I0129 08:11:26.342516 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-rsyslog-config-data" Jan 29 08:11:26 crc kubenswrapper[5017]: I0129 08:11:26.342567 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-rsyslog-scripts" Jan 29 08:11:26 crc kubenswrapper[5017]: I0129 08:11:26.342838 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"octavia-hmport-map" Jan 29 08:11:26 crc kubenswrapper[5017]: I0129 08:11:26.363335 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-rsyslog-rnfsn"] Jan 29 08:11:26 crc kubenswrapper[5017]: I0129 08:11:26.399915 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11936bb3-0e5d-4dd4-af14-04753f575b6e-config-data\") pod \"octavia-rsyslog-rnfsn\" (UID: \"11936bb3-0e5d-4dd4-af14-04753f575b6e\") " pod="openstack/octavia-rsyslog-rnfsn" Jan 29 08:11:26 crc kubenswrapper[5017]: I0129 08:11:26.400106 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/11936bb3-0e5d-4dd4-af14-04753f575b6e-scripts\") pod \"octavia-rsyslog-rnfsn\" (UID: \"11936bb3-0e5d-4dd4-af14-04753f575b6e\") " pod="openstack/octavia-rsyslog-rnfsn" Jan 29 08:11:26 crc kubenswrapper[5017]: I0129 08:11:26.400165 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/11936bb3-0e5d-4dd4-af14-04753f575b6e-config-data-merged\") pod \"octavia-rsyslog-rnfsn\" (UID: \"11936bb3-0e5d-4dd4-af14-04753f575b6e\") " pod="openstack/octavia-rsyslog-rnfsn" Jan 29 08:11:26 crc kubenswrapper[5017]: I0129 08:11:26.400195 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/11936bb3-0e5d-4dd4-af14-04753f575b6e-hm-ports\") pod \"octavia-rsyslog-rnfsn\" (UID: \"11936bb3-0e5d-4dd4-af14-04753f575b6e\") " pod="openstack/octavia-rsyslog-rnfsn" Jan 29 08:11:26 crc kubenswrapper[5017]: I0129 08:11:26.502346 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/11936bb3-0e5d-4dd4-af14-04753f575b6e-scripts\") pod \"octavia-rsyslog-rnfsn\" (UID: \"11936bb3-0e5d-4dd4-af14-04753f575b6e\") " pod="openstack/octavia-rsyslog-rnfsn" Jan 29 08:11:26 crc kubenswrapper[5017]: I0129 08:11:26.502421 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/11936bb3-0e5d-4dd4-af14-04753f575b6e-config-data-merged\") pod \"octavia-rsyslog-rnfsn\" (UID: \"11936bb3-0e5d-4dd4-af14-04753f575b6e\") " pod="openstack/octavia-rsyslog-rnfsn" Jan 29 08:11:26 crc kubenswrapper[5017]: I0129 08:11:26.502462 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/11936bb3-0e5d-4dd4-af14-04753f575b6e-hm-ports\") pod \"octavia-rsyslog-rnfsn\" (UID: \"11936bb3-0e5d-4dd4-af14-04753f575b6e\") " pod="openstack/octavia-rsyslog-rnfsn" Jan 29 08:11:26 crc kubenswrapper[5017]: I0129 08:11:26.502569 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/11936bb3-0e5d-4dd4-af14-04753f575b6e-config-data\") pod \"octavia-rsyslog-rnfsn\" (UID: \"11936bb3-0e5d-4dd4-af14-04753f575b6e\") " pod="openstack/octavia-rsyslog-rnfsn" Jan 29 08:11:26 crc kubenswrapper[5017]: I0129 08:11:26.503125 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/11936bb3-0e5d-4dd4-af14-04753f575b6e-config-data-merged\") pod \"octavia-rsyslog-rnfsn\" (UID: \"11936bb3-0e5d-4dd4-af14-04753f575b6e\") " pod="openstack/octavia-rsyslog-rnfsn" Jan 29 08:11:26 crc kubenswrapper[5017]: I0129 08:11:26.503792 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/11936bb3-0e5d-4dd4-af14-04753f575b6e-hm-ports\") pod \"octavia-rsyslog-rnfsn\" (UID: \"11936bb3-0e5d-4dd4-af14-04753f575b6e\") " pod="openstack/octavia-rsyslog-rnfsn" Jan 29 08:11:26 crc kubenswrapper[5017]: I0129 08:11:26.510019 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/11936bb3-0e5d-4dd4-af14-04753f575b6e-scripts\") pod \"octavia-rsyslog-rnfsn\" (UID: \"11936bb3-0e5d-4dd4-af14-04753f575b6e\") " pod="openstack/octavia-rsyslog-rnfsn" Jan 29 08:11:26 crc kubenswrapper[5017]: I0129 08:11:26.510454 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11936bb3-0e5d-4dd4-af14-04753f575b6e-config-data\") pod \"octavia-rsyslog-rnfsn\" (UID: \"11936bb3-0e5d-4dd4-af14-04753f575b6e\") " pod="openstack/octavia-rsyslog-rnfsn" Jan 29 08:11:26 crc kubenswrapper[5017]: I0129 08:11:26.658698 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-rsyslog-rnfsn" Jan 29 08:11:26 crc kubenswrapper[5017]: I0129 08:11:26.971700 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-image-upload-65dd99cb46-q6znx"] Jan 29 08:11:26 crc kubenswrapper[5017]: I0129 08:11:26.975177 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-image-upload-65dd99cb46-q6znx" Jan 29 08:11:26 crc kubenswrapper[5017]: I0129 08:11:26.981751 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-config-data" Jan 29 08:11:27 crc kubenswrapper[5017]: I0129 08:11:27.003409 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-image-upload-65dd99cb46-q6znx"] Jan 29 08:11:27 crc kubenswrapper[5017]: I0129 08:11:27.016784 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/f2480eb1-510d-4a5a-b350-25d72d600e03-amphora-image\") pod \"octavia-image-upload-65dd99cb46-q6znx\" (UID: \"f2480eb1-510d-4a5a-b350-25d72d600e03\") " pod="openstack/octavia-image-upload-65dd99cb46-q6znx" Jan 29 08:11:27 crc kubenswrapper[5017]: I0129 08:11:27.016873 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f2480eb1-510d-4a5a-b350-25d72d600e03-httpd-config\") pod \"octavia-image-upload-65dd99cb46-q6znx\" (UID: \"f2480eb1-510d-4a5a-b350-25d72d600e03\") " pod="openstack/octavia-image-upload-65dd99cb46-q6znx" Jan 29 08:11:27 crc kubenswrapper[5017]: I0129 08:11:27.118875 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/f2480eb1-510d-4a5a-b350-25d72d600e03-amphora-image\") pod \"octavia-image-upload-65dd99cb46-q6znx\" (UID: \"f2480eb1-510d-4a5a-b350-25d72d600e03\") " pod="openstack/octavia-image-upload-65dd99cb46-q6znx" Jan 29 08:11:27 crc kubenswrapper[5017]: I0129 08:11:27.118988 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f2480eb1-510d-4a5a-b350-25d72d600e03-httpd-config\") pod \"octavia-image-upload-65dd99cb46-q6znx\" (UID: \"f2480eb1-510d-4a5a-b350-25d72d600e03\") " pod="openstack/octavia-image-upload-65dd99cb46-q6znx" Jan 29 08:11:27 crc kubenswrapper[5017]: I0129 08:11:27.119916 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/f2480eb1-510d-4a5a-b350-25d72d600e03-amphora-image\") pod \"octavia-image-upload-65dd99cb46-q6znx\" (UID: \"f2480eb1-510d-4a5a-b350-25d72d600e03\") " pod="openstack/octavia-image-upload-65dd99cb46-q6znx" Jan 29 08:11:27 crc kubenswrapper[5017]: I0129 08:11:27.126324 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f2480eb1-510d-4a5a-b350-25d72d600e03-httpd-config\") pod \"octavia-image-upload-65dd99cb46-q6znx\" (UID: \"f2480eb1-510d-4a5a-b350-25d72d600e03\") " pod="openstack/octavia-image-upload-65dd99cb46-q6znx" Jan 29 08:11:27 crc kubenswrapper[5017]: I0129 08:11:27.268393 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-rsyslog-rnfsn"] Jan 29 08:11:27 crc kubenswrapper[5017]: I0129 08:11:27.310654 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-image-upload-65dd99cb46-q6znx" Jan 29 08:11:27 crc kubenswrapper[5017]: I0129 08:11:27.424651 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-rsyslog-rnfsn"] Jan 29 08:11:27 crc kubenswrapper[5017]: I0129 08:11:27.546686 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-rnfsn" event={"ID":"11936bb3-0e5d-4dd4-af14-04753f575b6e","Type":"ContainerStarted","Data":"93c55283d847b9867bbe40e04607e646d98c77f57b5dbd89d289ec0e225b8043"} Jan 29 08:11:27 crc kubenswrapper[5017]: I0129 08:11:27.806212 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-image-upload-65dd99cb46-q6znx"] Jan 29 08:11:28 crc kubenswrapper[5017]: I0129 08:11:28.370574 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-db-sync-zk2s6"] Jan 29 08:11:28 crc kubenswrapper[5017]: I0129 08:11:28.372509 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-sync-zk2s6" Jan 29 08:11:28 crc kubenswrapper[5017]: I0129 08:11:28.375214 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-scripts" Jan 29 08:11:28 crc kubenswrapper[5017]: I0129 08:11:28.383971 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-db-sync-zk2s6"] Jan 29 08:11:28 crc kubenswrapper[5017]: I0129 08:11:28.447259 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4abeeca4-8ab6-41ad-9aeb-9c00d087db86-config-data\") pod \"octavia-db-sync-zk2s6\" (UID: \"4abeeca4-8ab6-41ad-9aeb-9c00d087db86\") " pod="openstack/octavia-db-sync-zk2s6" Jan 29 08:11:28 crc kubenswrapper[5017]: I0129 08:11:28.447856 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4abeeca4-8ab6-41ad-9aeb-9c00d087db86-combined-ca-bundle\") pod \"octavia-db-sync-zk2s6\" (UID: \"4abeeca4-8ab6-41ad-9aeb-9c00d087db86\") " pod="openstack/octavia-db-sync-zk2s6" Jan 29 08:11:28 crc kubenswrapper[5017]: I0129 08:11:28.447917 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/4abeeca4-8ab6-41ad-9aeb-9c00d087db86-config-data-merged\") pod \"octavia-db-sync-zk2s6\" (UID: \"4abeeca4-8ab6-41ad-9aeb-9c00d087db86\") " pod="openstack/octavia-db-sync-zk2s6" Jan 29 08:11:28 crc kubenswrapper[5017]: I0129 08:11:28.447979 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4abeeca4-8ab6-41ad-9aeb-9c00d087db86-scripts\") pod \"octavia-db-sync-zk2s6\" (UID: \"4abeeca4-8ab6-41ad-9aeb-9c00d087db86\") " pod="openstack/octavia-db-sync-zk2s6" Jan 29 08:11:28 crc kubenswrapper[5017]: I0129 08:11:28.550650 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4abeeca4-8ab6-41ad-9aeb-9c00d087db86-config-data\") pod \"octavia-db-sync-zk2s6\" (UID: \"4abeeca4-8ab6-41ad-9aeb-9c00d087db86\") " pod="openstack/octavia-db-sync-zk2s6" Jan 29 08:11:28 crc kubenswrapper[5017]: I0129 08:11:28.550799 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/4abeeca4-8ab6-41ad-9aeb-9c00d087db86-combined-ca-bundle\") pod \"octavia-db-sync-zk2s6\" (UID: \"4abeeca4-8ab6-41ad-9aeb-9c00d087db86\") " pod="openstack/octavia-db-sync-zk2s6" Jan 29 08:11:28 crc kubenswrapper[5017]: I0129 08:11:28.550876 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/4abeeca4-8ab6-41ad-9aeb-9c00d087db86-config-data-merged\") pod \"octavia-db-sync-zk2s6\" (UID: \"4abeeca4-8ab6-41ad-9aeb-9c00d087db86\") " pod="openstack/octavia-db-sync-zk2s6" Jan 29 08:11:28 crc kubenswrapper[5017]: I0129 08:11:28.550922 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4abeeca4-8ab6-41ad-9aeb-9c00d087db86-scripts\") pod \"octavia-db-sync-zk2s6\" (UID: \"4abeeca4-8ab6-41ad-9aeb-9c00d087db86\") " pod="openstack/octavia-db-sync-zk2s6" Jan 29 08:11:28 crc kubenswrapper[5017]: I0129 08:11:28.552053 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/4abeeca4-8ab6-41ad-9aeb-9c00d087db86-config-data-merged\") pod \"octavia-db-sync-zk2s6\" (UID: \"4abeeca4-8ab6-41ad-9aeb-9c00d087db86\") " pod="openstack/octavia-db-sync-zk2s6" Jan 29 08:11:28 crc kubenswrapper[5017]: I0129 08:11:28.561366 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4abeeca4-8ab6-41ad-9aeb-9c00d087db86-config-data\") pod \"octavia-db-sync-zk2s6\" (UID: \"4abeeca4-8ab6-41ad-9aeb-9c00d087db86\") " pod="openstack/octavia-db-sync-zk2s6" Jan 29 08:11:28 crc kubenswrapper[5017]: I0129 08:11:28.563818 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4abeeca4-8ab6-41ad-9aeb-9c00d087db86-combined-ca-bundle\") pod \"octavia-db-sync-zk2s6\" (UID: \"4abeeca4-8ab6-41ad-9aeb-9c00d087db86\") " pod="openstack/octavia-db-sync-zk2s6" Jan 29 08:11:28 crc kubenswrapper[5017]: I0129 08:11:28.572988 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4abeeca4-8ab6-41ad-9aeb-9c00d087db86-scripts\") pod \"octavia-db-sync-zk2s6\" (UID: \"4abeeca4-8ab6-41ad-9aeb-9c00d087db86\") " pod="openstack/octavia-db-sync-zk2s6" Jan 29 08:11:28 crc kubenswrapper[5017]: I0129 08:11:28.586604 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-65dd99cb46-q6znx" event={"ID":"f2480eb1-510d-4a5a-b350-25d72d600e03","Type":"ContainerStarted","Data":"e606f69ac414eea208536dbab6505a0f960297a0abd5b5190d9dbbd735225c4a"} Jan 29 08:11:28 crc kubenswrapper[5017]: I0129 08:11:28.714591 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-db-sync-zk2s6" Jan 29 08:11:29 crc kubenswrapper[5017]: I0129 08:11:29.286081 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-db-sync-zk2s6"] Jan 29 08:11:29 crc kubenswrapper[5017]: I0129 08:11:29.316585 5017 scope.go:117] "RemoveContainer" containerID="45e4c349dc67bf5910f42f2a3df0c872cf61c8acd759f4f9b895eefc7afaf645" Jan 29 08:11:29 crc kubenswrapper[5017]: E0129 08:11:29.316862 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:11:29 crc kubenswrapper[5017]: I0129 08:11:29.623094 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-rnfsn" event={"ID":"11936bb3-0e5d-4dd4-af14-04753f575b6e","Type":"ContainerStarted","Data":"2f06fd4b6fecc7a114d6a2a9ffd373cbf6ba2093c36f372107d2ad84ed29af74"} Jan 29 08:11:29 crc kubenswrapper[5017]: W0129 08:11:29.935071 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4abeeca4_8ab6_41ad_9aeb_9c00d087db86.slice/crio-8e985ea9ded25f1384a11e51566780ee9c667e6236f3ed45fd8c227250983740 WatchSource:0}: Error finding container 8e985ea9ded25f1384a11e51566780ee9c667e6236f3ed45fd8c227250983740: Status 404 returned error can't find the container with id 8e985ea9ded25f1384a11e51566780ee9c667e6236f3ed45fd8c227250983740 Jan 29 08:11:30 crc kubenswrapper[5017]: I0129 08:11:30.638532 5017 generic.go:334] "Generic (PLEG): container finished" podID="4abeeca4-8ab6-41ad-9aeb-9c00d087db86" containerID="a21279174b1094ccf35eb8f890251820e04277d63e591bc68d15fefc0bdffe74" exitCode=0 Jan 29 08:11:30 crc kubenswrapper[5017]: I0129 08:11:30.638666 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-zk2s6" event={"ID":"4abeeca4-8ab6-41ad-9aeb-9c00d087db86","Type":"ContainerDied","Data":"a21279174b1094ccf35eb8f890251820e04277d63e591bc68d15fefc0bdffe74"} Jan 29 08:11:30 crc kubenswrapper[5017]: I0129 08:11:30.639512 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-zk2s6" event={"ID":"4abeeca4-8ab6-41ad-9aeb-9c00d087db86","Type":"ContainerStarted","Data":"8e985ea9ded25f1384a11e51566780ee9c667e6236f3ed45fd8c227250983740"} Jan 29 08:11:31 crc kubenswrapper[5017]: I0129 08:11:31.326897 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-healthmanager-kn7gl"] Jan 29 08:11:31 crc kubenswrapper[5017]: I0129 08:11:31.329212 5017 util.go:30] "No sandbox for pod can be found. 
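The octavia-db-sync-zk2s6 pod above follows the init-container choreography the PLEG reports for most pods in this log: an init step (container a21279...) runs and exits 0, logged as "container finished" with exitCode=0 plus a ContainerDied event, and only then does the main container's ContainerStarted follow (for this pod at 08:11:31 further down). A non-zero exit would instead re-run the init step under back-off and hold the pod in Init. A toy Go sequencer for that ordering (container and startPod are stand-ins, not kubelet types):

package main

import (
	"errors"
	"fmt"
)

// container is a stand-in for one entry in a pod spec.
type container struct {
	name string
	run  func() int // returns the exit code
}

// startPod mirrors the ordering visible in the PLEG events above:
// every init container must finish with exit code 0 before any main
// container is started.
func startPod(inits, mains []container) error {
	for _, c := range inits {
		fmt.Printf("ContainerStarted %s\n", c.name)
		if code := c.run(); code != 0 {
			// The kubelet would retry this under CrashLoopBackOff instead.
			return errors.New(c.name + " failed, pod stays in Init")
		}
		fmt.Printf("ContainerDied %s (exitCode=0)\n", c.name)
	}
	for _, c := range mains {
		fmt.Printf("ContainerStarted %s\n", c.name)
	}
	return nil
}

func main() {
	_ = startPod(
		[]container{{"init", func() int { return 0 }}},
		[]container{{"octavia-db-sync", func() int { return 0 }}},
	)
}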
Need to start a new one" pod="openstack/octavia-healthmanager-kn7gl" Jan 29 08:11:31 crc kubenswrapper[5017]: I0129 08:11:31.331570 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-healthmanager-scripts" Jan 29 08:11:31 crc kubenswrapper[5017]: I0129 08:11:31.332184 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-healthmanager-config-data" Jan 29 08:11:31 crc kubenswrapper[5017]: I0129 08:11:31.332544 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-certs-secret" Jan 29 08:11:31 crc kubenswrapper[5017]: I0129 08:11:31.367458 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-healthmanager-kn7gl"] Jan 29 08:11:31 crc kubenswrapper[5017]: I0129 08:11:31.430298 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/e3c0438e-2e9a-44d8-ac24-805d286c6256-amphora-certs\") pod \"octavia-healthmanager-kn7gl\" (UID: \"e3c0438e-2e9a-44d8-ac24-805d286c6256\") " pod="openstack/octavia-healthmanager-kn7gl" Jan 29 08:11:31 crc kubenswrapper[5017]: I0129 08:11:31.430757 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3c0438e-2e9a-44d8-ac24-805d286c6256-scripts\") pod \"octavia-healthmanager-kn7gl\" (UID: \"e3c0438e-2e9a-44d8-ac24-805d286c6256\") " pod="openstack/octavia-healthmanager-kn7gl" Jan 29 08:11:31 crc kubenswrapper[5017]: I0129 08:11:31.430789 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/e3c0438e-2e9a-44d8-ac24-805d286c6256-config-data-merged\") pod \"octavia-healthmanager-kn7gl\" (UID: \"e3c0438e-2e9a-44d8-ac24-805d286c6256\") " pod="openstack/octavia-healthmanager-kn7gl" Jan 29 08:11:31 crc kubenswrapper[5017]: I0129 08:11:31.430813 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3c0438e-2e9a-44d8-ac24-805d286c6256-combined-ca-bundle\") pod \"octavia-healthmanager-kn7gl\" (UID: \"e3c0438e-2e9a-44d8-ac24-805d286c6256\") " pod="openstack/octavia-healthmanager-kn7gl" Jan 29 08:11:31 crc kubenswrapper[5017]: I0129 08:11:31.430930 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/e3c0438e-2e9a-44d8-ac24-805d286c6256-hm-ports\") pod \"octavia-healthmanager-kn7gl\" (UID: \"e3c0438e-2e9a-44d8-ac24-805d286c6256\") " pod="openstack/octavia-healthmanager-kn7gl" Jan 29 08:11:31 crc kubenswrapper[5017]: I0129 08:11:31.431010 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3c0438e-2e9a-44d8-ac24-805d286c6256-config-data\") pod \"octavia-healthmanager-kn7gl\" (UID: \"e3c0438e-2e9a-44d8-ac24-805d286c6256\") " pod="openstack/octavia-healthmanager-kn7gl" Jan 29 08:11:31 crc kubenswrapper[5017]: I0129 08:11:31.533402 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/e3c0438e-2e9a-44d8-ac24-805d286c6256-hm-ports\") pod \"octavia-healthmanager-kn7gl\" (UID: \"e3c0438e-2e9a-44d8-ac24-805d286c6256\") " pod="openstack/octavia-healthmanager-kn7gl" Jan 29 08:11:31 crc 
kubenswrapper[5017]: I0129 08:11:31.534853 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/e3c0438e-2e9a-44d8-ac24-805d286c6256-hm-ports\") pod \"octavia-healthmanager-kn7gl\" (UID: \"e3c0438e-2e9a-44d8-ac24-805d286c6256\") " pod="openstack/octavia-healthmanager-kn7gl" Jan 29 08:11:31 crc kubenswrapper[5017]: I0129 08:11:31.535059 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3c0438e-2e9a-44d8-ac24-805d286c6256-config-data\") pod \"octavia-healthmanager-kn7gl\" (UID: \"e3c0438e-2e9a-44d8-ac24-805d286c6256\") " pod="openstack/octavia-healthmanager-kn7gl" Jan 29 08:11:31 crc kubenswrapper[5017]: I0129 08:11:31.535224 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/e3c0438e-2e9a-44d8-ac24-805d286c6256-amphora-certs\") pod \"octavia-healthmanager-kn7gl\" (UID: \"e3c0438e-2e9a-44d8-ac24-805d286c6256\") " pod="openstack/octavia-healthmanager-kn7gl" Jan 29 08:11:31 crc kubenswrapper[5017]: I0129 08:11:31.535253 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3c0438e-2e9a-44d8-ac24-805d286c6256-scripts\") pod \"octavia-healthmanager-kn7gl\" (UID: \"e3c0438e-2e9a-44d8-ac24-805d286c6256\") " pod="openstack/octavia-healthmanager-kn7gl" Jan 29 08:11:31 crc kubenswrapper[5017]: I0129 08:11:31.536624 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/e3c0438e-2e9a-44d8-ac24-805d286c6256-config-data-merged\") pod \"octavia-healthmanager-kn7gl\" (UID: \"e3c0438e-2e9a-44d8-ac24-805d286c6256\") " pod="openstack/octavia-healthmanager-kn7gl" Jan 29 08:11:31 crc kubenswrapper[5017]: I0129 08:11:31.536721 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3c0438e-2e9a-44d8-ac24-805d286c6256-combined-ca-bundle\") pod \"octavia-healthmanager-kn7gl\" (UID: \"e3c0438e-2e9a-44d8-ac24-805d286c6256\") " pod="openstack/octavia-healthmanager-kn7gl" Jan 29 08:11:31 crc kubenswrapper[5017]: I0129 08:11:31.537136 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/e3c0438e-2e9a-44d8-ac24-805d286c6256-config-data-merged\") pod \"octavia-healthmanager-kn7gl\" (UID: \"e3c0438e-2e9a-44d8-ac24-805d286c6256\") " pod="openstack/octavia-healthmanager-kn7gl" Jan 29 08:11:31 crc kubenswrapper[5017]: I0129 08:11:31.542934 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/e3c0438e-2e9a-44d8-ac24-805d286c6256-amphora-certs\") pod \"octavia-healthmanager-kn7gl\" (UID: \"e3c0438e-2e9a-44d8-ac24-805d286c6256\") " pod="openstack/octavia-healthmanager-kn7gl" Jan 29 08:11:31 crc kubenswrapper[5017]: I0129 08:11:31.544799 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3c0438e-2e9a-44d8-ac24-805d286c6256-config-data\") pod \"octavia-healthmanager-kn7gl\" (UID: \"e3c0438e-2e9a-44d8-ac24-805d286c6256\") " pod="openstack/octavia-healthmanager-kn7gl" Jan 29 08:11:31 crc kubenswrapper[5017]: I0129 08:11:31.546621 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/e3c0438e-2e9a-44d8-ac24-805d286c6256-combined-ca-bundle\") pod \"octavia-healthmanager-kn7gl\" (UID: \"e3c0438e-2e9a-44d8-ac24-805d286c6256\") " pod="openstack/octavia-healthmanager-kn7gl" Jan 29 08:11:31 crc kubenswrapper[5017]: I0129 08:11:31.562884 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3c0438e-2e9a-44d8-ac24-805d286c6256-scripts\") pod \"octavia-healthmanager-kn7gl\" (UID: \"e3c0438e-2e9a-44d8-ac24-805d286c6256\") " pod="openstack/octavia-healthmanager-kn7gl" Jan 29 08:11:31 crc kubenswrapper[5017]: I0129 08:11:31.651802 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-healthmanager-kn7gl" Jan 29 08:11:31 crc kubenswrapper[5017]: I0129 08:11:31.671033 5017 generic.go:334] "Generic (PLEG): container finished" podID="11936bb3-0e5d-4dd4-af14-04753f575b6e" containerID="2f06fd4b6fecc7a114d6a2a9ffd373cbf6ba2093c36f372107d2ad84ed29af74" exitCode=0 Jan 29 08:11:31 crc kubenswrapper[5017]: I0129 08:11:31.671118 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-rnfsn" event={"ID":"11936bb3-0e5d-4dd4-af14-04753f575b6e","Type":"ContainerDied","Data":"2f06fd4b6fecc7a114d6a2a9ffd373cbf6ba2093c36f372107d2ad84ed29af74"} Jan 29 08:11:31 crc kubenswrapper[5017]: I0129 08:11:31.678200 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-zk2s6" event={"ID":"4abeeca4-8ab6-41ad-9aeb-9c00d087db86","Type":"ContainerStarted","Data":"26760db5256f1972aa7a76449a9b3d494acc63a7cbd9fabb56d6c933f52d009e"} Jan 29 08:11:31 crc kubenswrapper[5017]: I0129 08:11:31.719873 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-db-sync-zk2s6" podStartSLOduration=3.71984332 podStartE2EDuration="3.71984332s" podCreationTimestamp="2026-01-29 08:11:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:11:31.713825165 +0000 UTC m=+5778.088272775" watchObservedRunningTime="2026-01-29 08:11:31.71984332 +0000 UTC m=+5778.094290940" Jan 29 08:11:32 crc kubenswrapper[5017]: W0129 08:11:32.234713 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode3c0438e_2e9a_44d8_ac24_805d286c6256.slice/crio-48ce260a2ea4844e195ee92733c3e23103cb4946e79695c1d5764a73de5cdc35 WatchSource:0}: Error finding container 48ce260a2ea4844e195ee92733c3e23103cb4946e79695c1d5764a73de5cdc35: Status 404 returned error can't find the container with id 48ce260a2ea4844e195ee92733c3e23103cb4946e79695c1d5764a73de5cdc35 Jan 29 08:11:32 crc kubenswrapper[5017]: I0129 08:11:32.234744 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-healthmanager-kn7gl"] Jan 29 08:11:32 crc kubenswrapper[5017]: I0129 08:11:32.479553 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-housekeeping-9jn9h"] Jan 29 08:11:32 crc kubenswrapper[5017]: I0129 08:11:32.482333 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-housekeeping-9jn9h" Jan 29 08:11:32 crc kubenswrapper[5017]: I0129 08:11:32.485888 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-housekeeping-scripts" Jan 29 08:11:32 crc kubenswrapper[5017]: I0129 08:11:32.486587 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-housekeeping-config-data" Jan 29 08:11:32 crc kubenswrapper[5017]: I0129 08:11:32.498501 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-housekeeping-9jn9h"] Jan 29 08:11:32 crc kubenswrapper[5017]: I0129 08:11:32.562015 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0389f7bb-3e21-4689-8189-71761db6d516-combined-ca-bundle\") pod \"octavia-housekeeping-9jn9h\" (UID: \"0389f7bb-3e21-4689-8189-71761db6d516\") " pod="openstack/octavia-housekeeping-9jn9h" Jan 29 08:11:32 crc kubenswrapper[5017]: I0129 08:11:32.562182 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/0389f7bb-3e21-4689-8189-71761db6d516-amphora-certs\") pod \"octavia-housekeeping-9jn9h\" (UID: \"0389f7bb-3e21-4689-8189-71761db6d516\") " pod="openstack/octavia-housekeeping-9jn9h" Jan 29 08:11:32 crc kubenswrapper[5017]: I0129 08:11:32.562246 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0389f7bb-3e21-4689-8189-71761db6d516-scripts\") pod \"octavia-housekeeping-9jn9h\" (UID: \"0389f7bb-3e21-4689-8189-71761db6d516\") " pod="openstack/octavia-housekeeping-9jn9h" Jan 29 08:11:32 crc kubenswrapper[5017]: I0129 08:11:32.562285 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/0389f7bb-3e21-4689-8189-71761db6d516-hm-ports\") pod \"octavia-housekeeping-9jn9h\" (UID: \"0389f7bb-3e21-4689-8189-71761db6d516\") " pod="openstack/octavia-housekeeping-9jn9h" Jan 29 08:11:32 crc kubenswrapper[5017]: I0129 08:11:32.562537 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0389f7bb-3e21-4689-8189-71761db6d516-config-data\") pod \"octavia-housekeeping-9jn9h\" (UID: \"0389f7bb-3e21-4689-8189-71761db6d516\") " pod="openstack/octavia-housekeeping-9jn9h" Jan 29 08:11:32 crc kubenswrapper[5017]: I0129 08:11:32.562951 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/0389f7bb-3e21-4689-8189-71761db6d516-config-data-merged\") pod \"octavia-housekeeping-9jn9h\" (UID: \"0389f7bb-3e21-4689-8189-71761db6d516\") " pod="openstack/octavia-housekeeping-9jn9h" Jan 29 08:11:32 crc kubenswrapper[5017]: I0129 08:11:32.666285 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/0389f7bb-3e21-4689-8189-71761db6d516-amphora-certs\") pod \"octavia-housekeeping-9jn9h\" (UID: \"0389f7bb-3e21-4689-8189-71761db6d516\") " pod="openstack/octavia-housekeeping-9jn9h" Jan 29 08:11:32 crc kubenswrapper[5017]: I0129 08:11:32.666370 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/0389f7bb-3e21-4689-8189-71761db6d516-scripts\") pod \"octavia-housekeeping-9jn9h\" (UID: \"0389f7bb-3e21-4689-8189-71761db6d516\") " pod="openstack/octavia-housekeeping-9jn9h" Jan 29 08:11:32 crc kubenswrapper[5017]: I0129 08:11:32.666417 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/0389f7bb-3e21-4689-8189-71761db6d516-hm-ports\") pod \"octavia-housekeeping-9jn9h\" (UID: \"0389f7bb-3e21-4689-8189-71761db6d516\") " pod="openstack/octavia-housekeeping-9jn9h" Jan 29 08:11:32 crc kubenswrapper[5017]: I0129 08:11:32.666506 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0389f7bb-3e21-4689-8189-71761db6d516-config-data\") pod \"octavia-housekeeping-9jn9h\" (UID: \"0389f7bb-3e21-4689-8189-71761db6d516\") " pod="openstack/octavia-housekeeping-9jn9h" Jan 29 08:11:32 crc kubenswrapper[5017]: I0129 08:11:32.666551 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/0389f7bb-3e21-4689-8189-71761db6d516-config-data-merged\") pod \"octavia-housekeeping-9jn9h\" (UID: \"0389f7bb-3e21-4689-8189-71761db6d516\") " pod="openstack/octavia-housekeeping-9jn9h" Jan 29 08:11:32 crc kubenswrapper[5017]: I0129 08:11:32.666690 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0389f7bb-3e21-4689-8189-71761db6d516-combined-ca-bundle\") pod \"octavia-housekeeping-9jn9h\" (UID: \"0389f7bb-3e21-4689-8189-71761db6d516\") " pod="openstack/octavia-housekeeping-9jn9h" Jan 29 08:11:32 crc kubenswrapper[5017]: I0129 08:11:32.667419 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/0389f7bb-3e21-4689-8189-71761db6d516-config-data-merged\") pod \"octavia-housekeeping-9jn9h\" (UID: \"0389f7bb-3e21-4689-8189-71761db6d516\") " pod="openstack/octavia-housekeeping-9jn9h" Jan 29 08:11:32 crc kubenswrapper[5017]: I0129 08:11:32.668035 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/0389f7bb-3e21-4689-8189-71761db6d516-hm-ports\") pod \"octavia-housekeeping-9jn9h\" (UID: \"0389f7bb-3e21-4689-8189-71761db6d516\") " pod="openstack/octavia-housekeeping-9jn9h" Jan 29 08:11:32 crc kubenswrapper[5017]: I0129 08:11:32.677650 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0389f7bb-3e21-4689-8189-71761db6d516-scripts\") pod \"octavia-housekeeping-9jn9h\" (UID: \"0389f7bb-3e21-4689-8189-71761db6d516\") " pod="openstack/octavia-housekeeping-9jn9h" Jan 29 08:11:32 crc kubenswrapper[5017]: I0129 08:11:32.677646 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/0389f7bb-3e21-4689-8189-71761db6d516-amphora-certs\") pod \"octavia-housekeeping-9jn9h\" (UID: \"0389f7bb-3e21-4689-8189-71761db6d516\") " pod="openstack/octavia-housekeeping-9jn9h" Jan 29 08:11:32 crc kubenswrapper[5017]: I0129 08:11:32.677712 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0389f7bb-3e21-4689-8189-71761db6d516-config-data\") pod \"octavia-housekeeping-9jn9h\" (UID: \"0389f7bb-3e21-4689-8189-71761db6d516\") " 
pod="openstack/octavia-housekeeping-9jn9h" Jan 29 08:11:32 crc kubenswrapper[5017]: I0129 08:11:32.702300 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-kn7gl" event={"ID":"e3c0438e-2e9a-44d8-ac24-805d286c6256","Type":"ContainerStarted","Data":"48ce260a2ea4844e195ee92733c3e23103cb4946e79695c1d5764a73de5cdc35"} Jan 29 08:11:32 crc kubenswrapper[5017]: I0129 08:11:32.765713 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0389f7bb-3e21-4689-8189-71761db6d516-combined-ca-bundle\") pod \"octavia-housekeeping-9jn9h\" (UID: \"0389f7bb-3e21-4689-8189-71761db6d516\") " pod="openstack/octavia-housekeeping-9jn9h" Jan 29 08:11:32 crc kubenswrapper[5017]: I0129 08:11:32.811825 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-housekeeping-9jn9h" Jan 29 08:11:33 crc kubenswrapper[5017]: I0129 08:11:33.723601 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-rnfsn" event={"ID":"11936bb3-0e5d-4dd4-af14-04753f575b6e","Type":"ContainerStarted","Data":"c70b7a14d3e2683bcab22358564cad39bfbd977354441b637a97327c9938cbea"} Jan 29 08:11:33 crc kubenswrapper[5017]: I0129 08:11:33.724843 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-rsyslog-rnfsn" Jan 29 08:11:33 crc kubenswrapper[5017]: I0129 08:11:33.742671 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-kn7gl" event={"ID":"e3c0438e-2e9a-44d8-ac24-805d286c6256","Type":"ContainerStarted","Data":"4d24671d4592751aaae5b3e0df39fc7182ba482a788a21d0af6c5597b7b3c63c"} Jan 29 08:11:33 crc kubenswrapper[5017]: I0129 08:11:33.761413 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-rsyslog-rnfsn" podStartSLOduration=1.707259133 podStartE2EDuration="7.761388525s" podCreationTimestamp="2026-01-29 08:11:26 +0000 UTC" firstStartedPulling="2026-01-29 08:11:27.28143533 +0000 UTC m=+5773.655882930" lastFinishedPulling="2026-01-29 08:11:33.335564712 +0000 UTC m=+5779.710012322" observedRunningTime="2026-01-29 08:11:33.751632101 +0000 UTC m=+5780.126079711" watchObservedRunningTime="2026-01-29 08:11:33.761388525 +0000 UTC m=+5780.135836135" Jan 29 08:11:33 crc kubenswrapper[5017]: W0129 08:11:33.795764 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0389f7bb_3e21_4689_8189_71761db6d516.slice/crio-37f57d8bb3659fdc43c1f39c805caecdc3b431f2fcfec1d93357fa3bd5405c5c WatchSource:0}: Error finding container 37f57d8bb3659fdc43c1f39c805caecdc3b431f2fcfec1d93357fa3bd5405c5c: Status 404 returned error can't find the container with id 37f57d8bb3659fdc43c1f39c805caecdc3b431f2fcfec1d93357fa3bd5405c5c Jan 29 08:11:33 crc kubenswrapper[5017]: I0129 08:11:33.831448 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-housekeeping-9jn9h"] Jan 29 08:11:34 crc kubenswrapper[5017]: I0129 08:11:34.757783 5017 generic.go:334] "Generic (PLEG): container finished" podID="4abeeca4-8ab6-41ad-9aeb-9c00d087db86" containerID="26760db5256f1972aa7a76449a9b3d494acc63a7cbd9fabb56d6c933f52d009e" exitCode=0 Jan 29 08:11:34 crc kubenswrapper[5017]: I0129 08:11:34.757888 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-zk2s6" 
event={"ID":"4abeeca4-8ab6-41ad-9aeb-9c00d087db86","Type":"ContainerDied","Data":"26760db5256f1972aa7a76449a9b3d494acc63a7cbd9fabb56d6c933f52d009e"} Jan 29 08:11:34 crc kubenswrapper[5017]: I0129 08:11:34.760322 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-9jn9h" event={"ID":"0389f7bb-3e21-4689-8189-71761db6d516","Type":"ContainerStarted","Data":"37f57d8bb3659fdc43c1f39c805caecdc3b431f2fcfec1d93357fa3bd5405c5c"} Jan 29 08:11:34 crc kubenswrapper[5017]: I0129 08:11:34.929659 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-healthmanager-kn7gl"] Jan 29 08:11:35 crc kubenswrapper[5017]: I0129 08:11:35.565823 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-worker-k6hjh"] Jan 29 08:11:35 crc kubenswrapper[5017]: I0129 08:11:35.568319 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-worker-k6hjh" Jan 29 08:11:35 crc kubenswrapper[5017]: I0129 08:11:35.570916 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-worker-scripts" Jan 29 08:11:35 crc kubenswrapper[5017]: I0129 08:11:35.575371 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-worker-k6hjh"] Jan 29 08:11:35 crc kubenswrapper[5017]: I0129 08:11:35.576228 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-worker-config-data" Jan 29 08:11:35 crc kubenswrapper[5017]: I0129 08:11:35.660391 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36eed554-1fc4-4d35-a541-9a46e00e727d-combined-ca-bundle\") pod \"octavia-worker-k6hjh\" (UID: \"36eed554-1fc4-4d35-a541-9a46e00e727d\") " pod="openstack/octavia-worker-k6hjh" Jan 29 08:11:35 crc kubenswrapper[5017]: I0129 08:11:35.660479 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36eed554-1fc4-4d35-a541-9a46e00e727d-scripts\") pod \"octavia-worker-k6hjh\" (UID: \"36eed554-1fc4-4d35-a541-9a46e00e727d\") " pod="openstack/octavia-worker-k6hjh" Jan 29 08:11:35 crc kubenswrapper[5017]: I0129 08:11:35.660506 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36eed554-1fc4-4d35-a541-9a46e00e727d-config-data\") pod \"octavia-worker-k6hjh\" (UID: \"36eed554-1fc4-4d35-a541-9a46e00e727d\") " pod="openstack/octavia-worker-k6hjh" Jan 29 08:11:35 crc kubenswrapper[5017]: I0129 08:11:35.660713 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/36eed554-1fc4-4d35-a541-9a46e00e727d-config-data-merged\") pod \"octavia-worker-k6hjh\" (UID: \"36eed554-1fc4-4d35-a541-9a46e00e727d\") " pod="openstack/octavia-worker-k6hjh" Jan 29 08:11:35 crc kubenswrapper[5017]: I0129 08:11:35.660903 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/36eed554-1fc4-4d35-a541-9a46e00e727d-hm-ports\") pod \"octavia-worker-k6hjh\" (UID: \"36eed554-1fc4-4d35-a541-9a46e00e727d\") " pod="openstack/octavia-worker-k6hjh" Jan 29 08:11:35 crc kubenswrapper[5017]: I0129 08:11:35.660993 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"amphora-certs\" (UniqueName: \"kubernetes.io/secret/36eed554-1fc4-4d35-a541-9a46e00e727d-amphora-certs\") pod \"octavia-worker-k6hjh\" (UID: \"36eed554-1fc4-4d35-a541-9a46e00e727d\") " pod="openstack/octavia-worker-k6hjh" Jan 29 08:11:35 crc kubenswrapper[5017]: I0129 08:11:35.762803 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36eed554-1fc4-4d35-a541-9a46e00e727d-combined-ca-bundle\") pod \"octavia-worker-k6hjh\" (UID: \"36eed554-1fc4-4d35-a541-9a46e00e727d\") " pod="openstack/octavia-worker-k6hjh" Jan 29 08:11:35 crc kubenswrapper[5017]: I0129 08:11:35.762877 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36eed554-1fc4-4d35-a541-9a46e00e727d-scripts\") pod \"octavia-worker-k6hjh\" (UID: \"36eed554-1fc4-4d35-a541-9a46e00e727d\") " pod="openstack/octavia-worker-k6hjh" Jan 29 08:11:35 crc kubenswrapper[5017]: I0129 08:11:35.762902 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36eed554-1fc4-4d35-a541-9a46e00e727d-config-data\") pod \"octavia-worker-k6hjh\" (UID: \"36eed554-1fc4-4d35-a541-9a46e00e727d\") " pod="openstack/octavia-worker-k6hjh" Jan 29 08:11:35 crc kubenswrapper[5017]: I0129 08:11:35.762938 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/36eed554-1fc4-4d35-a541-9a46e00e727d-config-data-merged\") pod \"octavia-worker-k6hjh\" (UID: \"36eed554-1fc4-4d35-a541-9a46e00e727d\") " pod="openstack/octavia-worker-k6hjh" Jan 29 08:11:35 crc kubenswrapper[5017]: I0129 08:11:35.762977 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/36eed554-1fc4-4d35-a541-9a46e00e727d-hm-ports\") pod \"octavia-worker-k6hjh\" (UID: \"36eed554-1fc4-4d35-a541-9a46e00e727d\") " pod="openstack/octavia-worker-k6hjh" Jan 29 08:11:35 crc kubenswrapper[5017]: I0129 08:11:35.763018 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/36eed554-1fc4-4d35-a541-9a46e00e727d-amphora-certs\") pod \"octavia-worker-k6hjh\" (UID: \"36eed554-1fc4-4d35-a541-9a46e00e727d\") " pod="openstack/octavia-worker-k6hjh" Jan 29 08:11:35 crc kubenswrapper[5017]: I0129 08:11:35.769222 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/36eed554-1fc4-4d35-a541-9a46e00e727d-config-data-merged\") pod \"octavia-worker-k6hjh\" (UID: \"36eed554-1fc4-4d35-a541-9a46e00e727d\") " pod="openstack/octavia-worker-k6hjh" Jan 29 08:11:35 crc kubenswrapper[5017]: I0129 08:11:35.769458 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/36eed554-1fc4-4d35-a541-9a46e00e727d-hm-ports\") pod \"octavia-worker-k6hjh\" (UID: \"36eed554-1fc4-4d35-a541-9a46e00e727d\") " pod="openstack/octavia-worker-k6hjh" Jan 29 08:11:35 crc kubenswrapper[5017]: I0129 08:11:35.770904 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36eed554-1fc4-4d35-a541-9a46e00e727d-config-data\") pod \"octavia-worker-k6hjh\" (UID: \"36eed554-1fc4-4d35-a541-9a46e00e727d\") " pod="openstack/octavia-worker-k6hjh" Jan 29 08:11:35 crc kubenswrapper[5017]: 
I0129 08:11:35.771791 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36eed554-1fc4-4d35-a541-9a46e00e727d-combined-ca-bundle\") pod \"octavia-worker-k6hjh\" (UID: \"36eed554-1fc4-4d35-a541-9a46e00e727d\") " pod="openstack/octavia-worker-k6hjh" Jan 29 08:11:35 crc kubenswrapper[5017]: I0129 08:11:35.791021 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/36eed554-1fc4-4d35-a541-9a46e00e727d-amphora-certs\") pod \"octavia-worker-k6hjh\" (UID: \"36eed554-1fc4-4d35-a541-9a46e00e727d\") " pod="openstack/octavia-worker-k6hjh" Jan 29 08:11:35 crc kubenswrapper[5017]: I0129 08:11:35.791130 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36eed554-1fc4-4d35-a541-9a46e00e727d-scripts\") pod \"octavia-worker-k6hjh\" (UID: \"36eed554-1fc4-4d35-a541-9a46e00e727d\") " pod="openstack/octavia-worker-k6hjh" Jan 29 08:11:35 crc kubenswrapper[5017]: I0129 08:11:35.852729 5017 generic.go:334] "Generic (PLEG): container finished" podID="e3c0438e-2e9a-44d8-ac24-805d286c6256" containerID="4d24671d4592751aaae5b3e0df39fc7182ba482a788a21d0af6c5597b7b3c63c" exitCode=0 Jan 29 08:11:35 crc kubenswrapper[5017]: I0129 08:11:35.853194 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-kn7gl" event={"ID":"e3c0438e-2e9a-44d8-ac24-805d286c6256","Type":"ContainerDied","Data":"4d24671d4592751aaae5b3e0df39fc7182ba482a788a21d0af6c5597b7b3c63c"} Jan 29 08:11:35 crc kubenswrapper[5017]: I0129 08:11:35.903440 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-worker-k6hjh" Jan 29 08:11:37 crc kubenswrapper[5017]: I0129 08:11:37.880385 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-zk2s6" event={"ID":"4abeeca4-8ab6-41ad-9aeb-9c00d087db86","Type":"ContainerDied","Data":"8e985ea9ded25f1384a11e51566780ee9c667e6236f3ed45fd8c227250983740"} Jan 29 08:11:37 crc kubenswrapper[5017]: I0129 08:11:37.881423 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e985ea9ded25f1384a11e51566780ee9c667e6236f3ed45fd8c227250983740" Jan 29 08:11:37 crc kubenswrapper[5017]: I0129 08:11:37.954724 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-db-sync-zk2s6" Jan 29 08:11:38 crc kubenswrapper[5017]: I0129 08:11:38.016535 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4abeeca4-8ab6-41ad-9aeb-9c00d087db86-combined-ca-bundle\") pod \"4abeeca4-8ab6-41ad-9aeb-9c00d087db86\" (UID: \"4abeeca4-8ab6-41ad-9aeb-9c00d087db86\") " Jan 29 08:11:38 crc kubenswrapper[5017]: I0129 08:11:38.016733 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4abeeca4-8ab6-41ad-9aeb-9c00d087db86-scripts\") pod \"4abeeca4-8ab6-41ad-9aeb-9c00d087db86\" (UID: \"4abeeca4-8ab6-41ad-9aeb-9c00d087db86\") " Jan 29 08:11:38 crc kubenswrapper[5017]: I0129 08:11:38.016756 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4abeeca4-8ab6-41ad-9aeb-9c00d087db86-config-data\") pod \"4abeeca4-8ab6-41ad-9aeb-9c00d087db86\" (UID: \"4abeeca4-8ab6-41ad-9aeb-9c00d087db86\") " Jan 29 08:11:38 crc kubenswrapper[5017]: I0129 08:11:38.016936 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/4abeeca4-8ab6-41ad-9aeb-9c00d087db86-config-data-merged\") pod \"4abeeca4-8ab6-41ad-9aeb-9c00d087db86\" (UID: \"4abeeca4-8ab6-41ad-9aeb-9c00d087db86\") " Jan 29 08:11:38 crc kubenswrapper[5017]: I0129 08:11:38.024611 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4abeeca4-8ab6-41ad-9aeb-9c00d087db86-config-data" (OuterVolumeSpecName: "config-data") pod "4abeeca4-8ab6-41ad-9aeb-9c00d087db86" (UID: "4abeeca4-8ab6-41ad-9aeb-9c00d087db86"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:11:38 crc kubenswrapper[5017]: I0129 08:11:38.026119 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4abeeca4-8ab6-41ad-9aeb-9c00d087db86-scripts" (OuterVolumeSpecName: "scripts") pod "4abeeca4-8ab6-41ad-9aeb-9c00d087db86" (UID: "4abeeca4-8ab6-41ad-9aeb-9c00d087db86"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:11:38 crc kubenswrapper[5017]: I0129 08:11:38.042414 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4abeeca4-8ab6-41ad-9aeb-9c00d087db86-config-data-merged" (OuterVolumeSpecName: "config-data-merged") pod "4abeeca4-8ab6-41ad-9aeb-9c00d087db86" (UID: "4abeeca4-8ab6-41ad-9aeb-9c00d087db86"). InnerVolumeSpecName "config-data-merged". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:11:38 crc kubenswrapper[5017]: I0129 08:11:38.056753 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4abeeca4-8ab6-41ad-9aeb-9c00d087db86-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4abeeca4-8ab6-41ad-9aeb-9c00d087db86" (UID: "4abeeca4-8ab6-41ad-9aeb-9c00d087db86"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:11:38 crc kubenswrapper[5017]: I0129 08:11:38.120293 5017 reconciler_common.go:293] "Volume detached for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/4abeeca4-8ab6-41ad-9aeb-9c00d087db86-config-data-merged\") on node \"crc\" DevicePath \"\"" Jan 29 08:11:38 crc kubenswrapper[5017]: I0129 08:11:38.120332 5017 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4abeeca4-8ab6-41ad-9aeb-9c00d087db86-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:11:38 crc kubenswrapper[5017]: I0129 08:11:38.120346 5017 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4abeeca4-8ab6-41ad-9aeb-9c00d087db86-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 08:11:38 crc kubenswrapper[5017]: I0129 08:11:38.120358 5017 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4abeeca4-8ab6-41ad-9aeb-9c00d087db86-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 08:11:38 crc kubenswrapper[5017]: I0129 08:11:38.890756 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-sync-zk2s6" Jan 29 08:11:39 crc kubenswrapper[5017]: I0129 08:11:39.552613 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-worker-k6hjh"] Jan 29 08:11:39 crc kubenswrapper[5017]: I0129 08:11:39.907557 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-worker-k6hjh" event={"ID":"36eed554-1fc4-4d35-a541-9a46e00e727d","Type":"ContainerStarted","Data":"d6401d0544574d65cb4bab4bbae261060efda6e8bba51889a78ee285e21286a2"} Jan 29 08:11:40 crc kubenswrapper[5017]: I0129 08:11:40.937344 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-9jn9h" event={"ID":"0389f7bb-3e21-4689-8189-71761db6d516","Type":"ContainerStarted","Data":"5717fff4936a3c655385c781207f00d1e1d3de21fb989c6ec86d4e56987627d7"} Jan 29 08:11:40 crc kubenswrapper[5017]: I0129 08:11:40.949041 5017 generic.go:334] "Generic (PLEG): container finished" podID="f2480eb1-510d-4a5a-b350-25d72d600e03" containerID="d1fa96f677f5701c04d5462894cbb0ac0be22effe05a00c5f1ba27a0bbd77eec" exitCode=0 Jan 29 08:11:40 crc kubenswrapper[5017]: I0129 08:11:40.949152 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-65dd99cb46-q6znx" event={"ID":"f2480eb1-510d-4a5a-b350-25d72d600e03","Type":"ContainerDied","Data":"d1fa96f677f5701c04d5462894cbb0ac0be22effe05a00c5f1ba27a0bbd77eec"} Jan 29 08:11:40 crc kubenswrapper[5017]: I0129 08:11:40.955711 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-healthmanager-kn7gl" Jan 29 08:11:40 crc kubenswrapper[5017]: I0129 08:11:40.956422 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-kn7gl" event={"ID":"e3c0438e-2e9a-44d8-ac24-805d286c6256","Type":"ContainerStarted","Data":"dce3b5a9002cbc513802aedab4d7fddebed0b466c2c5e8f852b26fabe2fe2a69"} Jan 29 08:11:41 crc kubenswrapper[5017]: I0129 08:11:41.036171 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-healthmanager-kn7gl" podStartSLOduration=10.036148392 podStartE2EDuration="10.036148392s" podCreationTimestamp="2026-01-29 08:11:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-29 08:11:41.021310166 +0000 UTC m=+5787.395757776" watchObservedRunningTime="2026-01-29 08:11:41.036148392 +0000 UTC m=+5787.410596002" Jan 29 08:11:41 crc kubenswrapper[5017]: I0129 08:11:41.693594 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-rsyslog-rnfsn" Jan 29 08:11:42 crc kubenswrapper[5017]: I0129 08:11:42.322395 5017 scope.go:117] "RemoveContainer" containerID="45e4c349dc67bf5910f42f2a3df0c872cf61c8acd759f4f9b895eefc7afaf645" Jan 29 08:11:42 crc kubenswrapper[5017]: E0129 08:11:42.322611 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:11:43 crc kubenswrapper[5017]: I0129 08:11:43.985475 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-65dd99cb46-q6znx" event={"ID":"f2480eb1-510d-4a5a-b350-25d72d600e03","Type":"ContainerStarted","Data":"693c136ebab65b99dbbf4a426932894ea584ae18f9eacf43afb80bb64cae65fe"} Jan 29 08:11:44 crc kubenswrapper[5017]: I0129 08:11:44.999243 5017 generic.go:334] "Generic (PLEG): container finished" podID="0389f7bb-3e21-4689-8189-71761db6d516" containerID="5717fff4936a3c655385c781207f00d1e1d3de21fb989c6ec86d4e56987627d7" exitCode=0 Jan 29 08:11:44 crc kubenswrapper[5017]: I0129 08:11:44.999388 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-9jn9h" event={"ID":"0389f7bb-3e21-4689-8189-71761db6d516","Type":"ContainerDied","Data":"5717fff4936a3c655385c781207f00d1e1d3de21fb989c6ec86d4e56987627d7"} Jan 29 08:11:45 crc kubenswrapper[5017]: I0129 08:11:45.029240 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-image-upload-65dd99cb46-q6znx" podStartSLOduration=7.147601388 podStartE2EDuration="19.02920669s" podCreationTimestamp="2026-01-29 08:11:26 +0000 UTC" firstStartedPulling="2026-01-29 08:11:27.821578227 +0000 UTC m=+5774.196025827" lastFinishedPulling="2026-01-29 08:11:39.703183519 +0000 UTC m=+5786.077631129" observedRunningTime="2026-01-29 08:11:45.019274761 +0000 UTC m=+5791.393722371" watchObservedRunningTime="2026-01-29 08:11:45.02920669 +0000 UTC m=+5791.403654300" Jan 29 08:11:46 crc kubenswrapper[5017]: I0129 08:11:46.012814 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-worker-k6hjh" event={"ID":"36eed554-1fc4-4d35-a541-9a46e00e727d","Type":"ContainerStarted","Data":"a5cd5d08a822952a6389b5636163aa04fa874a603e949533046d3531c2e24789"} Jan 29 08:11:46 crc kubenswrapper[5017]: I0129 08:11:46.015369 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-9jn9h" event={"ID":"0389f7bb-3e21-4689-8189-71761db6d516","Type":"ContainerStarted","Data":"cb856a21fd5e9b81b132254c0bc4364f1f4c8e0115eca236e08072b88e1b5703"} Jan 29 08:11:46 crc kubenswrapper[5017]: I0129 08:11:46.016548 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-housekeeping-9jn9h" Jan 29 08:11:46 crc kubenswrapper[5017]: I0129 08:11:46.068061 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-housekeeping-9jn9h" podStartSLOduration=8.163733366 
podStartE2EDuration="14.06802109s" podCreationTimestamp="2026-01-29 08:11:32 +0000 UTC" firstStartedPulling="2026-01-29 08:11:33.805382061 +0000 UTC m=+5780.179829671" lastFinishedPulling="2026-01-29 08:11:39.709669785 +0000 UTC m=+5786.084117395" observedRunningTime="2026-01-29 08:11:46.06382907 +0000 UTC m=+5792.438276680" watchObservedRunningTime="2026-01-29 08:11:46.06802109 +0000 UTC m=+5792.442468700" Jan 29 08:11:46 crc kubenswrapper[5017]: I0129 08:11:46.683236 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-healthmanager-kn7gl" Jan 29 08:11:47 crc kubenswrapper[5017]: I0129 08:11:47.034085 5017 generic.go:334] "Generic (PLEG): container finished" podID="36eed554-1fc4-4d35-a541-9a46e00e727d" containerID="a5cd5d08a822952a6389b5636163aa04fa874a603e949533046d3531c2e24789" exitCode=0 Jan 29 08:11:47 crc kubenswrapper[5017]: I0129 08:11:47.034202 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-worker-k6hjh" event={"ID":"36eed554-1fc4-4d35-a541-9a46e00e727d","Type":"ContainerDied","Data":"a5cd5d08a822952a6389b5636163aa04fa874a603e949533046d3531c2e24789"} Jan 29 08:11:48 crc kubenswrapper[5017]: I0129 08:11:48.065452 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-worker-k6hjh" event={"ID":"36eed554-1fc4-4d35-a541-9a46e00e727d","Type":"ContainerStarted","Data":"0dd9f3f43d7c00cb8d8422b944e34c2dbae494808bbb7154b5f5b07cfe203f13"} Jan 29 08:11:48 crc kubenswrapper[5017]: I0129 08:11:48.065928 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-worker-k6hjh" Jan 29 08:11:48 crc kubenswrapper[5017]: I0129 08:11:48.097079 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-worker-k6hjh" podStartSLOduration=7.872265575 podStartE2EDuration="13.097051434s" podCreationTimestamp="2026-01-29 08:11:35 +0000 UTC" firstStartedPulling="2026-01-29 08:11:39.611088109 +0000 UTC m=+5785.985535719" lastFinishedPulling="2026-01-29 08:11:44.835873968 +0000 UTC m=+5791.210321578" observedRunningTime="2026-01-29 08:11:48.092621658 +0000 UTC m=+5794.467069268" watchObservedRunningTime="2026-01-29 08:11:48.097051434 +0000 UTC m=+5794.471499044" Jan 29 08:11:54 crc kubenswrapper[5017]: I0129 08:11:54.326550 5017 scope.go:117] "RemoveContainer" containerID="45e4c349dc67bf5910f42f2a3df0c872cf61c8acd759f4f9b895eefc7afaf645" Jan 29 08:11:54 crc kubenswrapper[5017]: E0129 08:11:54.327770 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:12:02 crc kubenswrapper[5017]: I0129 08:12:02.903922 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-housekeeping-9jn9h" Jan 29 08:12:05 crc kubenswrapper[5017]: I0129 08:12:05.934391 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-worker-k6hjh" Jan 29 08:12:06 crc kubenswrapper[5017]: I0129 08:12:06.316608 5017 scope.go:117] "RemoveContainer" containerID="45e4c349dc67bf5910f42f2a3df0c872cf61c8acd759f4f9b895eefc7afaf645" Jan 29 08:12:06 crc kubenswrapper[5017]: E0129 08:12:06.316890 5017 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:12:07 crc kubenswrapper[5017]: I0129 08:12:07.713187 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-image-upload-65dd99cb46-q6znx"] Jan 29 08:12:07 crc kubenswrapper[5017]: I0129 08:12:07.714149 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/octavia-image-upload-65dd99cb46-q6znx" podUID="f2480eb1-510d-4a5a-b350-25d72d600e03" containerName="octavia-amphora-httpd" containerID="cri-o://693c136ebab65b99dbbf4a426932894ea584ae18f9eacf43afb80bb64cae65fe" gracePeriod=30 Jan 29 08:12:08 crc kubenswrapper[5017]: I0129 08:12:08.264834 5017 generic.go:334] "Generic (PLEG): container finished" podID="f2480eb1-510d-4a5a-b350-25d72d600e03" containerID="693c136ebab65b99dbbf4a426932894ea584ae18f9eacf43afb80bb64cae65fe" exitCode=0 Jan 29 08:12:08 crc kubenswrapper[5017]: I0129 08:12:08.264997 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-65dd99cb46-q6znx" event={"ID":"f2480eb1-510d-4a5a-b350-25d72d600e03","Type":"ContainerDied","Data":"693c136ebab65b99dbbf4a426932894ea584ae18f9eacf43afb80bb64cae65fe"} Jan 29 08:12:08 crc kubenswrapper[5017]: I0129 08:12:08.265420 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-65dd99cb46-q6znx" event={"ID":"f2480eb1-510d-4a5a-b350-25d72d600e03","Type":"ContainerDied","Data":"e606f69ac414eea208536dbab6505a0f960297a0abd5b5190d9dbbd735225c4a"} Jan 29 08:12:08 crc kubenswrapper[5017]: I0129 08:12:08.265445 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e606f69ac414eea208536dbab6505a0f960297a0abd5b5190d9dbbd735225c4a" Jan 29 08:12:08 crc kubenswrapper[5017]: I0129 08:12:08.308400 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-image-upload-65dd99cb46-q6znx" Jan 29 08:12:08 crc kubenswrapper[5017]: I0129 08:12:08.486798 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f2480eb1-510d-4a5a-b350-25d72d600e03-httpd-config\") pod \"f2480eb1-510d-4a5a-b350-25d72d600e03\" (UID: \"f2480eb1-510d-4a5a-b350-25d72d600e03\") " Jan 29 08:12:08 crc kubenswrapper[5017]: I0129 08:12:08.487046 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/f2480eb1-510d-4a5a-b350-25d72d600e03-amphora-image\") pod \"f2480eb1-510d-4a5a-b350-25d72d600e03\" (UID: \"f2480eb1-510d-4a5a-b350-25d72d600e03\") " Jan 29 08:12:08 crc kubenswrapper[5017]: I0129 08:12:08.521802 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2480eb1-510d-4a5a-b350-25d72d600e03-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "f2480eb1-510d-4a5a-b350-25d72d600e03" (UID: "f2480eb1-510d-4a5a-b350-25d72d600e03"). InnerVolumeSpecName "httpd-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:12:08 crc kubenswrapper[5017]: I0129 08:12:08.577546 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2480eb1-510d-4a5a-b350-25d72d600e03-amphora-image" (OuterVolumeSpecName: "amphora-image") pod "f2480eb1-510d-4a5a-b350-25d72d600e03" (UID: "f2480eb1-510d-4a5a-b350-25d72d600e03"). InnerVolumeSpecName "amphora-image". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:12:08 crc kubenswrapper[5017]: I0129 08:12:08.591075 5017 reconciler_common.go:293] "Volume detached for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/f2480eb1-510d-4a5a-b350-25d72d600e03-amphora-image\") on node \"crc\" DevicePath \"\"" Jan 29 08:12:08 crc kubenswrapper[5017]: I0129 08:12:08.591111 5017 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f2480eb1-510d-4a5a-b350-25d72d600e03-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 29 08:12:09 crc kubenswrapper[5017]: I0129 08:12:09.275045 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-image-upload-65dd99cb46-q6znx" Jan 29 08:12:09 crc kubenswrapper[5017]: I0129 08:12:09.321598 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-image-upload-65dd99cb46-q6znx"] Jan 29 08:12:09 crc kubenswrapper[5017]: I0129 08:12:09.331816 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-image-upload-65dd99cb46-q6znx"] Jan 29 08:12:10 crc kubenswrapper[5017]: I0129 08:12:10.327816 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2480eb1-510d-4a5a-b350-25d72d600e03" path="/var/lib/kubelet/pods/f2480eb1-510d-4a5a-b350-25d72d600e03/volumes" Jan 29 08:12:11 crc kubenswrapper[5017]: I0129 08:12:11.982154 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-c45fcbd9c-jkwb8"] Jan 29 08:12:11 crc kubenswrapper[5017]: E0129 08:12:11.983216 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4abeeca4-8ab6-41ad-9aeb-9c00d087db86" containerName="init" Jan 29 08:12:11 crc kubenswrapper[5017]: I0129 08:12:11.983231 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="4abeeca4-8ab6-41ad-9aeb-9c00d087db86" containerName="init" Jan 29 08:12:11 crc kubenswrapper[5017]: E0129 08:12:11.983251 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4abeeca4-8ab6-41ad-9aeb-9c00d087db86" containerName="octavia-db-sync" Jan 29 08:12:11 crc kubenswrapper[5017]: I0129 08:12:11.983257 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="4abeeca4-8ab6-41ad-9aeb-9c00d087db86" containerName="octavia-db-sync" Jan 29 08:12:11 crc kubenswrapper[5017]: E0129 08:12:11.983276 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2480eb1-510d-4a5a-b350-25d72d600e03" containerName="octavia-amphora-httpd" Jan 29 08:12:11 crc kubenswrapper[5017]: I0129 08:12:11.983285 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2480eb1-510d-4a5a-b350-25d72d600e03" containerName="octavia-amphora-httpd" Jan 29 08:12:11 crc kubenswrapper[5017]: E0129 08:12:11.983299 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2480eb1-510d-4a5a-b350-25d72d600e03" containerName="init" Jan 29 08:12:11 crc kubenswrapper[5017]: I0129 08:12:11.983305 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2480eb1-510d-4a5a-b350-25d72d600e03" containerName="init" Jan 29 08:12:11 crc 
Jan 29 08:12:11 crc kubenswrapper[5017]: I0129 08:12:11.985222 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-c45fcbd9c-jkwb8"
Jan 29 08:12:11 crc kubenswrapper[5017]: I0129 08:12:11.992279 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data"
Jan 29 08:12:11 crc kubenswrapper[5017]: I0129 08:12:11.992485 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts"
Jan 29 08:12:11 crc kubenswrapper[5017]: I0129 08:12:11.993623 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon"
Jan 29 08:12:11 crc kubenswrapper[5017]: I0129 08:12:11.994346 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-g4vn5"
Jan 29 08:12:12 crc kubenswrapper[5017]: I0129 08:12:12.000332 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-c45fcbd9c-jkwb8"]
Jan 29 08:12:12 crc kubenswrapper[5017]: I0129 08:12:12.019536 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 29 08:12:12 crc kubenswrapper[5017]: I0129 08:12:12.019912 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="8fd04424-aea6-47bb-b4bf-833ab5c9ea57" containerName="glance-log" containerID="cri-o://c9ff0ee3329df41941bcea7c2383ccb1819a04fd5d49040732f48194c10e5e80" gracePeriod=30
Jan 29 08:12:12 crc kubenswrapper[5017]: I0129 08:12:12.020587 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="8fd04424-aea6-47bb-b4bf-833ab5c9ea57" containerName="glance-httpd" containerID="cri-o://8b6cbcde06dbdad04287b85fb2bd8ca25cdf080f9864141836bb63fa032ef620" gracePeriod=30
Jan 29 08:12:12 crc kubenswrapper[5017]: I0129 08:12:12.076036 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-67957468c7-bwnjw"]
Jan 29 08:12:12 crc kubenswrapper[5017]: I0129 08:12:12.078002 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-67957468c7-bwnjw"
Jan 29 08:12:12 crc kubenswrapper[5017]: I0129 08:12:12.103988 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-67957468c7-bwnjw"]
Jan 29 08:12:12 crc kubenswrapper[5017]: I0129 08:12:12.116935 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 29 08:12:12 crc kubenswrapper[5017]: I0129 08:12:12.117651 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="35a7800b-fba3-4d8a-99d3-c8bc57cf37b4" containerName="glance-httpd" containerID="cri-o://208feea7a15c43c3b401c6c2979853e03b1d20050ae08cd5bbcde48673fcf10e" gracePeriod=30
Jan 29 08:12:12 crc kubenswrapper[5017]: I0129 08:12:12.117677 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="35a7800b-fba3-4d8a-99d3-c8bc57cf37b4" containerName="glance-log" containerID="cri-o://a6a970e35eb8877153f15baa7bb62816a72fe62277d358f01aa92447687f1807" gracePeriod=30
Jan 29 08:12:12 crc kubenswrapper[5017]: I0129 08:12:12.168520 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3c535b30-9dff-483b-a2d4-2c278dfba773-horizon-secret-key\") pod \"horizon-67957468c7-bwnjw\" (UID: \"3c535b30-9dff-483b-a2d4-2c278dfba773\") " pod="openstack/horizon-67957468c7-bwnjw"
Jan 29 08:12:12 crc kubenswrapper[5017]: I0129 08:12:12.169039 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4075638f-fbb3-488a-83b6-a7a0321ca8ff-config-data\") pod \"horizon-c45fcbd9c-jkwb8\" (UID: \"4075638f-fbb3-488a-83b6-a7a0321ca8ff\") " pod="openstack/horizon-c45fcbd9c-jkwb8"
Jan 29 08:12:12 crc kubenswrapper[5017]: I0129 08:12:12.169095 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c535b30-9dff-483b-a2d4-2c278dfba773-logs\") pod \"horizon-67957468c7-bwnjw\" (UID: \"3c535b30-9dff-483b-a2d4-2c278dfba773\") " pod="openstack/horizon-67957468c7-bwnjw"
Jan 29 08:12:12 crc kubenswrapper[5017]: I0129 08:12:12.169132 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2fxh\" (UniqueName: \"kubernetes.io/projected/4075638f-fbb3-488a-83b6-a7a0321ca8ff-kube-api-access-q2fxh\") pod \"horizon-c45fcbd9c-jkwb8\" (UID: \"4075638f-fbb3-488a-83b6-a7a0321ca8ff\") " pod="openstack/horizon-c45fcbd9c-jkwb8"
Jan 29 08:12:12 crc kubenswrapper[5017]: I0129 08:12:12.169184 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4075638f-fbb3-488a-83b6-a7a0321ca8ff-logs\") pod \"horizon-c45fcbd9c-jkwb8\" (UID: \"4075638f-fbb3-488a-83b6-a7a0321ca8ff\") " pod="openstack/horizon-c45fcbd9c-jkwb8"
Jan 29 08:12:12 crc kubenswrapper[5017]: I0129 08:12:12.169208 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3c535b30-9dff-483b-a2d4-2c278dfba773-scripts\") pod \"horizon-67957468c7-bwnjw\" (UID: \"3c535b30-9dff-483b-a2d4-2c278dfba773\") " pod="openstack/horizon-67957468c7-bwnjw"
Jan 29 08:12:12 crc kubenswrapper[5017]: I0129 08:12:12.169316 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3c535b30-9dff-483b-a2d4-2c278dfba773-config-data\") pod \"horizon-67957468c7-bwnjw\" (UID: \"3c535b30-9dff-483b-a2d4-2c278dfba773\") " pod="openstack/horizon-67957468c7-bwnjw"
Jan 29 08:12:12 crc kubenswrapper[5017]: I0129 08:12:12.169430 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4075638f-fbb3-488a-83b6-a7a0321ca8ff-scripts\") pod \"horizon-c45fcbd9c-jkwb8\" (UID: \"4075638f-fbb3-488a-83b6-a7a0321ca8ff\") " pod="openstack/horizon-c45fcbd9c-jkwb8"
Jan 29 08:12:12 crc kubenswrapper[5017]: I0129 08:12:12.169458 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4075638f-fbb3-488a-83b6-a7a0321ca8ff-horizon-secret-key\") pod \"horizon-c45fcbd9c-jkwb8\" (UID: \"4075638f-fbb3-488a-83b6-a7a0321ca8ff\") " pod="openstack/horizon-c45fcbd9c-jkwb8"
Jan 29 08:12:12 crc kubenswrapper[5017]: I0129 08:12:12.169477 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p745n\" (UniqueName: \"kubernetes.io/projected/3c535b30-9dff-483b-a2d4-2c278dfba773-kube-api-access-p745n\") pod \"horizon-67957468c7-bwnjw\" (UID: \"3c535b30-9dff-483b-a2d4-2c278dfba773\") " pod="openstack/horizon-67957468c7-bwnjw"
Jan 29 08:12:12 crc kubenswrapper[5017]: I0129 08:12:12.272118 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3c535b30-9dff-483b-a2d4-2c278dfba773-config-data\") pod \"horizon-67957468c7-bwnjw\" (UID: \"3c535b30-9dff-483b-a2d4-2c278dfba773\") " pod="openstack/horizon-67957468c7-bwnjw"
Jan 29 08:12:12 crc kubenswrapper[5017]: I0129 08:12:12.272210 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4075638f-fbb3-488a-83b6-a7a0321ca8ff-scripts\") pod \"horizon-c45fcbd9c-jkwb8\" (UID: \"4075638f-fbb3-488a-83b6-a7a0321ca8ff\") " pod="openstack/horizon-c45fcbd9c-jkwb8"
Jan 29 08:12:12 crc kubenswrapper[5017]: I0129 08:12:12.272242 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4075638f-fbb3-488a-83b6-a7a0321ca8ff-horizon-secret-key\") pod \"horizon-c45fcbd9c-jkwb8\" (UID: \"4075638f-fbb3-488a-83b6-a7a0321ca8ff\") " pod="openstack/horizon-c45fcbd9c-jkwb8"
Jan 29 08:12:12 crc kubenswrapper[5017]: I0129 08:12:12.272265 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p745n\" (UniqueName: \"kubernetes.io/projected/3c535b30-9dff-483b-a2d4-2c278dfba773-kube-api-access-p745n\") pod \"horizon-67957468c7-bwnjw\" (UID: \"3c535b30-9dff-483b-a2d4-2c278dfba773\") " pod="openstack/horizon-67957468c7-bwnjw"
Jan 29 08:12:12 crc kubenswrapper[5017]: I0129 08:12:12.272304 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3c535b30-9dff-483b-a2d4-2c278dfba773-horizon-secret-key\") pod \"horizon-67957468c7-bwnjw\" (UID: \"3c535b30-9dff-483b-a2d4-2c278dfba773\") " pod="openstack/horizon-67957468c7-bwnjw"
Jan 29 08:12:12 crc kubenswrapper[5017]: I0129 08:12:12.272333 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4075638f-fbb3-488a-83b6-a7a0321ca8ff-config-data\") pod \"horizon-c45fcbd9c-jkwb8\" (UID: \"4075638f-fbb3-488a-83b6-a7a0321ca8ff\") " pod="openstack/horizon-c45fcbd9c-jkwb8"
Jan 29 08:12:12 crc kubenswrapper[5017]: I0129 08:12:12.272377 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c535b30-9dff-483b-a2d4-2c278dfba773-logs\") pod \"horizon-67957468c7-bwnjw\" (UID: \"3c535b30-9dff-483b-a2d4-2c278dfba773\") " pod="openstack/horizon-67957468c7-bwnjw"
Jan 29 08:12:12 crc kubenswrapper[5017]: I0129 08:12:12.272402 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2fxh\" (UniqueName: \"kubernetes.io/projected/4075638f-fbb3-488a-83b6-a7a0321ca8ff-kube-api-access-q2fxh\") pod \"horizon-c45fcbd9c-jkwb8\" (UID: \"4075638f-fbb3-488a-83b6-a7a0321ca8ff\") " pod="openstack/horizon-c45fcbd9c-jkwb8"
Jan 29 08:12:12 crc kubenswrapper[5017]: I0129 08:12:12.272459 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4075638f-fbb3-488a-83b6-a7a0321ca8ff-logs\") pod \"horizon-c45fcbd9c-jkwb8\" (UID: \"4075638f-fbb3-488a-83b6-a7a0321ca8ff\") " pod="openstack/horizon-c45fcbd9c-jkwb8"
Jan 29 08:12:12 crc kubenswrapper[5017]: I0129 08:12:12.272481 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3c535b30-9dff-483b-a2d4-2c278dfba773-scripts\") pod \"horizon-67957468c7-bwnjw\" (UID: \"3c535b30-9dff-483b-a2d4-2c278dfba773\") " pod="openstack/horizon-67957468c7-bwnjw"
Jan 29 08:12:12 crc kubenswrapper[5017]: I0129 08:12:12.273499 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4075638f-fbb3-488a-83b6-a7a0321ca8ff-logs\") pod \"horizon-c45fcbd9c-jkwb8\" (UID: \"4075638f-fbb3-488a-83b6-a7a0321ca8ff\") " pod="openstack/horizon-c45fcbd9c-jkwb8"
Jan 29 08:12:12 crc kubenswrapper[5017]: I0129 08:12:12.273757 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c535b30-9dff-483b-a2d4-2c278dfba773-logs\") pod \"horizon-67957468c7-bwnjw\" (UID: \"3c535b30-9dff-483b-a2d4-2c278dfba773\") " pod="openstack/horizon-67957468c7-bwnjw"
Jan 29 08:12:12 crc kubenswrapper[5017]: I0129 08:12:12.273890 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4075638f-fbb3-488a-83b6-a7a0321ca8ff-scripts\") pod \"horizon-c45fcbd9c-jkwb8\" (UID: \"4075638f-fbb3-488a-83b6-a7a0321ca8ff\") " pod="openstack/horizon-c45fcbd9c-jkwb8"
Jan 29 08:12:12 crc kubenswrapper[5017]: I0129 08:12:12.274265 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3c535b30-9dff-483b-a2d4-2c278dfba773-scripts\") pod \"horizon-67957468c7-bwnjw\" (UID: \"3c535b30-9dff-483b-a2d4-2c278dfba773\") " pod="openstack/horizon-67957468c7-bwnjw"
Jan 29 08:12:12 crc kubenswrapper[5017]: I0129 08:12:12.274714 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3c535b30-9dff-483b-a2d4-2c278dfba773-config-data\") pod \"horizon-67957468c7-bwnjw\" (UID: \"3c535b30-9dff-483b-a2d4-2c278dfba773\") " pod="openstack/horizon-67957468c7-bwnjw"
Jan 29 08:12:12 crc kubenswrapper[5017]: I0129 08:12:12.275170 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4075638f-fbb3-488a-83b6-a7a0321ca8ff-config-data\") pod \"horizon-c45fcbd9c-jkwb8\" (UID: \"4075638f-fbb3-488a-83b6-a7a0321ca8ff\") " pod="openstack/horizon-c45fcbd9c-jkwb8"
Jan 29 08:12:12 crc kubenswrapper[5017]: I0129 08:12:12.278741 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4075638f-fbb3-488a-83b6-a7a0321ca8ff-horizon-secret-key\") pod \"horizon-c45fcbd9c-jkwb8\" (UID: \"4075638f-fbb3-488a-83b6-a7a0321ca8ff\") " pod="openstack/horizon-c45fcbd9c-jkwb8"
Jan 29 08:12:12 crc kubenswrapper[5017]: I0129 08:12:12.278822 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3c535b30-9dff-483b-a2d4-2c278dfba773-horizon-secret-key\") pod \"horizon-67957468c7-bwnjw\" (UID: \"3c535b30-9dff-483b-a2d4-2c278dfba773\") " pod="openstack/horizon-67957468c7-bwnjw"
Jan 29 08:12:12 crc kubenswrapper[5017]: I0129 08:12:12.293000 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2fxh\" (UniqueName: \"kubernetes.io/projected/4075638f-fbb3-488a-83b6-a7a0321ca8ff-kube-api-access-q2fxh\") pod \"horizon-c45fcbd9c-jkwb8\" (UID: \"4075638f-fbb3-488a-83b6-a7a0321ca8ff\") " pod="openstack/horizon-c45fcbd9c-jkwb8"
Jan 29 08:12:12 crc kubenswrapper[5017]: I0129 08:12:12.293218 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p745n\" (UniqueName: \"kubernetes.io/projected/3c535b30-9dff-483b-a2d4-2c278dfba773-kube-api-access-p745n\") pod \"horizon-67957468c7-bwnjw\" (UID: \"3c535b30-9dff-483b-a2d4-2c278dfba773\") " pod="openstack/horizon-67957468c7-bwnjw"
Jan 29 08:12:12 crc kubenswrapper[5017]: I0129 08:12:12.313382 5017 generic.go:334] "Generic (PLEG): container finished" podID="35a7800b-fba3-4d8a-99d3-c8bc57cf37b4" containerID="a6a970e35eb8877153f15baa7bb62816a72fe62277d358f01aa92447687f1807" exitCode=143
Jan 29 08:12:12 crc kubenswrapper[5017]: I0129 08:12:12.313490 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"35a7800b-fba3-4d8a-99d3-c8bc57cf37b4","Type":"ContainerDied","Data":"a6a970e35eb8877153f15baa7bb62816a72fe62277d358f01aa92447687f1807"}
Jan 29 08:12:12 crc kubenswrapper[5017]: I0129 08:12:12.322421 5017 generic.go:334] "Generic (PLEG): container finished" podID="8fd04424-aea6-47bb-b4bf-833ab5c9ea57" containerID="c9ff0ee3329df41941bcea7c2383ccb1819a04fd5d49040732f48194c10e5e80" exitCode=143
Jan 29 08:12:12 crc kubenswrapper[5017]: I0129 08:12:12.328730 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-c45fcbd9c-jkwb8"
Jan 29 08:12:12 crc kubenswrapper[5017]: I0129 08:12:12.335745 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8fd04424-aea6-47bb-b4bf-833ab5c9ea57","Type":"ContainerDied","Data":"c9ff0ee3329df41941bcea7c2383ccb1819a04fd5d49040732f48194c10e5e80"}
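The two exitCode=143 results above decode as 128+15, i.e. death by SIGTERM: the "Killing container with a grace period" entries at 08:12:12 delivered SIGTERM to the glance-log containers, and SIGKILL would only follow a container that outlived gracePeriod=30 (the glance-httpd containers instead finish their shutdown and report exitCode=0 at 08:12:15 below). The pattern in miniature (a sketch, not kubelet code; Unix-only):

    package main

    import (
        "os/exec"
        "syscall"
        "time"
    )

    func main() {
        cmd := exec.Command("sleep", "60") // stand-in for the container process
        if err := cmd.Start(); err != nil {
            panic(err)
        }
        done := make(chan error, 1)
        go func() { done <- cmd.Wait() }()

        _ = cmd.Process.Signal(syscall.SIGTERM) // graceful stop; dying here reports 128+15=143
        select {
        case <-done: // exited within the grace period (the exitCode=0 path)
        case <-time.After(30 * time.Second): // gracePeriod=30 in the log
            _ = cmd.Process.Kill() // SIGKILL after the deadline
        }
    }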
Need to start a new one" pod="openstack/horizon-67957468c7-bwnjw" Jan 29 08:12:12 crc kubenswrapper[5017]: I0129 08:12:12.733387 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-c45fcbd9c-jkwb8"] Jan 29 08:12:12 crc kubenswrapper[5017]: I0129 08:12:12.784190 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-68bd76c56f-gz2sk"] Jan 29 08:12:12 crc kubenswrapper[5017]: I0129 08:12:12.787528 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-68bd76c56f-gz2sk" Jan 29 08:12:12 crc kubenswrapper[5017]: I0129 08:12:12.818950 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-68bd76c56f-gz2sk"] Jan 29 08:12:12 crc kubenswrapper[5017]: I0129 08:12:12.835285 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-c45fcbd9c-jkwb8"] Jan 29 08:12:12 crc kubenswrapper[5017]: I0129 08:12:12.898556 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd7d0aff-5a86-446e-a1d8-228c27e71a18-logs\") pod \"horizon-68bd76c56f-gz2sk\" (UID: \"bd7d0aff-5a86-446e-a1d8-228c27e71a18\") " pod="openstack/horizon-68bd76c56f-gz2sk" Jan 29 08:12:12 crc kubenswrapper[5017]: I0129 08:12:12.899032 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/bd7d0aff-5a86-446e-a1d8-228c27e71a18-horizon-secret-key\") pod \"horizon-68bd76c56f-gz2sk\" (UID: \"bd7d0aff-5a86-446e-a1d8-228c27e71a18\") " pod="openstack/horizon-68bd76c56f-gz2sk" Jan 29 08:12:12 crc kubenswrapper[5017]: I0129 08:12:12.899083 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bd7d0aff-5a86-446e-a1d8-228c27e71a18-config-data\") pod \"horizon-68bd76c56f-gz2sk\" (UID: \"bd7d0aff-5a86-446e-a1d8-228c27e71a18\") " pod="openstack/horizon-68bd76c56f-gz2sk" Jan 29 08:12:12 crc kubenswrapper[5017]: I0129 08:12:12.899135 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bd7d0aff-5a86-446e-a1d8-228c27e71a18-scripts\") pod \"horizon-68bd76c56f-gz2sk\" (UID: \"bd7d0aff-5a86-446e-a1d8-228c27e71a18\") " pod="openstack/horizon-68bd76c56f-gz2sk" Jan 29 08:12:12 crc kubenswrapper[5017]: I0129 08:12:12.899183 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xfst\" (UniqueName: \"kubernetes.io/projected/bd7d0aff-5a86-446e-a1d8-228c27e71a18-kube-api-access-7xfst\") pod \"horizon-68bd76c56f-gz2sk\" (UID: \"bd7d0aff-5a86-446e-a1d8-228c27e71a18\") " pod="openstack/horizon-68bd76c56f-gz2sk" Jan 29 08:12:13 crc kubenswrapper[5017]: I0129 08:12:13.001561 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bd7d0aff-5a86-446e-a1d8-228c27e71a18-config-data\") pod \"horizon-68bd76c56f-gz2sk\" (UID: \"bd7d0aff-5a86-446e-a1d8-228c27e71a18\") " pod="openstack/horizon-68bd76c56f-gz2sk" Jan 29 08:12:13 crc kubenswrapper[5017]: I0129 08:12:13.001655 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bd7d0aff-5a86-446e-a1d8-228c27e71a18-scripts\") pod \"horizon-68bd76c56f-gz2sk\" (UID: 
\"bd7d0aff-5a86-446e-a1d8-228c27e71a18\") " pod="openstack/horizon-68bd76c56f-gz2sk" Jan 29 08:12:13 crc kubenswrapper[5017]: I0129 08:12:13.001710 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xfst\" (UniqueName: \"kubernetes.io/projected/bd7d0aff-5a86-446e-a1d8-228c27e71a18-kube-api-access-7xfst\") pod \"horizon-68bd76c56f-gz2sk\" (UID: \"bd7d0aff-5a86-446e-a1d8-228c27e71a18\") " pod="openstack/horizon-68bd76c56f-gz2sk" Jan 29 08:12:13 crc kubenswrapper[5017]: I0129 08:12:13.001782 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd7d0aff-5a86-446e-a1d8-228c27e71a18-logs\") pod \"horizon-68bd76c56f-gz2sk\" (UID: \"bd7d0aff-5a86-446e-a1d8-228c27e71a18\") " pod="openstack/horizon-68bd76c56f-gz2sk" Jan 29 08:12:13 crc kubenswrapper[5017]: I0129 08:12:13.001824 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/bd7d0aff-5a86-446e-a1d8-228c27e71a18-horizon-secret-key\") pod \"horizon-68bd76c56f-gz2sk\" (UID: \"bd7d0aff-5a86-446e-a1d8-228c27e71a18\") " pod="openstack/horizon-68bd76c56f-gz2sk" Jan 29 08:12:13 crc kubenswrapper[5017]: I0129 08:12:13.003639 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bd7d0aff-5a86-446e-a1d8-228c27e71a18-config-data\") pod \"horizon-68bd76c56f-gz2sk\" (UID: \"bd7d0aff-5a86-446e-a1d8-228c27e71a18\") " pod="openstack/horizon-68bd76c56f-gz2sk" Jan 29 08:12:13 crc kubenswrapper[5017]: I0129 08:12:13.004035 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bd7d0aff-5a86-446e-a1d8-228c27e71a18-scripts\") pod \"horizon-68bd76c56f-gz2sk\" (UID: \"bd7d0aff-5a86-446e-a1d8-228c27e71a18\") " pod="openstack/horizon-68bd76c56f-gz2sk" Jan 29 08:12:13 crc kubenswrapper[5017]: I0129 08:12:13.004217 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd7d0aff-5a86-446e-a1d8-228c27e71a18-logs\") pod \"horizon-68bd76c56f-gz2sk\" (UID: \"bd7d0aff-5a86-446e-a1d8-228c27e71a18\") " pod="openstack/horizon-68bd76c56f-gz2sk" Jan 29 08:12:13 crc kubenswrapper[5017]: I0129 08:12:13.009639 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/bd7d0aff-5a86-446e-a1d8-228c27e71a18-horizon-secret-key\") pod \"horizon-68bd76c56f-gz2sk\" (UID: \"bd7d0aff-5a86-446e-a1d8-228c27e71a18\") " pod="openstack/horizon-68bd76c56f-gz2sk" Jan 29 08:12:13 crc kubenswrapper[5017]: I0129 08:12:13.026766 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-67957468c7-bwnjw"] Jan 29 08:12:13 crc kubenswrapper[5017]: I0129 08:12:13.032325 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xfst\" (UniqueName: \"kubernetes.io/projected/bd7d0aff-5a86-446e-a1d8-228c27e71a18-kube-api-access-7xfst\") pod \"horizon-68bd76c56f-gz2sk\" (UID: \"bd7d0aff-5a86-446e-a1d8-228c27e71a18\") " pod="openstack/horizon-68bd76c56f-gz2sk" Jan 29 08:12:13 crc kubenswrapper[5017]: I0129 08:12:13.128195 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-68bd76c56f-gz2sk" Jan 29 08:12:13 crc kubenswrapper[5017]: I0129 08:12:13.347320 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-67957468c7-bwnjw" event={"ID":"3c535b30-9dff-483b-a2d4-2c278dfba773","Type":"ContainerStarted","Data":"656e1bee6df2bacb6e7f8652552e6b0191f209e44f2c1bac01a0cfae4c46e492"} Jan 29 08:12:13 crc kubenswrapper[5017]: I0129 08:12:13.349969 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-c45fcbd9c-jkwb8" event={"ID":"4075638f-fbb3-488a-83b6-a7a0321ca8ff","Type":"ContainerStarted","Data":"16cb74d44120a8110c037e309a11d7d1b327db1dd1277f50d463869210f20cae"} Jan 29 08:12:13 crc kubenswrapper[5017]: I0129 08:12:13.606120 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-68bd76c56f-gz2sk"] Jan 29 08:12:13 crc kubenswrapper[5017]: W0129 08:12:13.621382 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbd7d0aff_5a86_446e_a1d8_228c27e71a18.slice/crio-43ab365bcc697de397dd193d8c0d92b6a6d7e1979b3278cc503f0e516ea54fb2 WatchSource:0}: Error finding container 43ab365bcc697de397dd193d8c0d92b6a6d7e1979b3278cc503f0e516ea54fb2: Status 404 returned error can't find the container with id 43ab365bcc697de397dd193d8c0d92b6a6d7e1979b3278cc503f0e516ea54fb2 Jan 29 08:12:14 crc kubenswrapper[5017]: I0129 08:12:14.368192 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-68bd76c56f-gz2sk" event={"ID":"bd7d0aff-5a86-446e-a1d8-228c27e71a18","Type":"ContainerStarted","Data":"43ab365bcc697de397dd193d8c0d92b6a6d7e1979b3278cc503f0e516ea54fb2"} Jan 29 08:12:15 crc kubenswrapper[5017]: I0129 08:12:15.388550 5017 generic.go:334] "Generic (PLEG): container finished" podID="8fd04424-aea6-47bb-b4bf-833ab5c9ea57" containerID="8b6cbcde06dbdad04287b85fb2bd8ca25cdf080f9864141836bb63fa032ef620" exitCode=0 Jan 29 08:12:15 crc kubenswrapper[5017]: I0129 08:12:15.388640 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8fd04424-aea6-47bb-b4bf-833ab5c9ea57","Type":"ContainerDied","Data":"8b6cbcde06dbdad04287b85fb2bd8ca25cdf080f9864141836bb63fa032ef620"} Jan 29 08:12:15 crc kubenswrapper[5017]: I0129 08:12:15.393063 5017 generic.go:334] "Generic (PLEG): container finished" podID="35a7800b-fba3-4d8a-99d3-c8bc57cf37b4" containerID="208feea7a15c43c3b401c6c2979853e03b1d20050ae08cd5bbcde48673fcf10e" exitCode=0 Jan 29 08:12:15 crc kubenswrapper[5017]: I0129 08:12:15.393126 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"35a7800b-fba3-4d8a-99d3-c8bc57cf37b4","Type":"ContainerDied","Data":"208feea7a15c43c3b401c6c2979853e03b1d20050ae08cd5bbcde48673fcf10e"} Jan 29 08:12:15 crc kubenswrapper[5017]: I0129 08:12:15.823211 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 08:12:15 crc kubenswrapper[5017]: I0129 08:12:15.888891 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fd04424-aea6-47bb-b4bf-833ab5c9ea57-combined-ca-bundle\") pod \"8fd04424-aea6-47bb-b4bf-833ab5c9ea57\" (UID: \"8fd04424-aea6-47bb-b4bf-833ab5c9ea57\") " Jan 29 08:12:15 crc kubenswrapper[5017]: I0129 08:12:15.888937 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8fd04424-aea6-47bb-b4bf-833ab5c9ea57-logs\") pod \"8fd04424-aea6-47bb-b4bf-833ab5c9ea57\" (UID: \"8fd04424-aea6-47bb-b4bf-833ab5c9ea57\") " Jan 29 08:12:15 crc kubenswrapper[5017]: I0129 08:12:15.888998 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8fd04424-aea6-47bb-b4bf-833ab5c9ea57-httpd-run\") pod \"8fd04424-aea6-47bb-b4bf-833ab5c9ea57\" (UID: \"8fd04424-aea6-47bb-b4bf-833ab5c9ea57\") " Jan 29 08:12:15 crc kubenswrapper[5017]: I0129 08:12:15.889024 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8fd04424-aea6-47bb-b4bf-833ab5c9ea57-scripts\") pod \"8fd04424-aea6-47bb-b4bf-833ab5c9ea57\" (UID: \"8fd04424-aea6-47bb-b4bf-833ab5c9ea57\") " Jan 29 08:12:15 crc kubenswrapper[5017]: I0129 08:12:15.889054 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fd04424-aea6-47bb-b4bf-833ab5c9ea57-config-data\") pod \"8fd04424-aea6-47bb-b4bf-833ab5c9ea57\" (UID: \"8fd04424-aea6-47bb-b4bf-833ab5c9ea57\") " Jan 29 08:12:15 crc kubenswrapper[5017]: I0129 08:12:15.889111 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pwmzx\" (UniqueName: \"kubernetes.io/projected/8fd04424-aea6-47bb-b4bf-833ab5c9ea57-kube-api-access-pwmzx\") pod \"8fd04424-aea6-47bb-b4bf-833ab5c9ea57\" (UID: \"8fd04424-aea6-47bb-b4bf-833ab5c9ea57\") " Jan 29 08:12:15 crc kubenswrapper[5017]: I0129 08:12:15.889142 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/8fd04424-aea6-47bb-b4bf-833ab5c9ea57-ceph\") pod \"8fd04424-aea6-47bb-b4bf-833ab5c9ea57\" (UID: \"8fd04424-aea6-47bb-b4bf-833ab5c9ea57\") " Jan 29 08:12:15 crc kubenswrapper[5017]: I0129 08:12:15.890116 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8fd04424-aea6-47bb-b4bf-833ab5c9ea57-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "8fd04424-aea6-47bb-b4bf-833ab5c9ea57" (UID: "8fd04424-aea6-47bb-b4bf-833ab5c9ea57"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:12:15 crc kubenswrapper[5017]: I0129 08:12:15.890337 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8fd04424-aea6-47bb-b4bf-833ab5c9ea57-logs" (OuterVolumeSpecName: "logs") pod "8fd04424-aea6-47bb-b4bf-833ab5c9ea57" (UID: "8fd04424-aea6-47bb-b4bf-833ab5c9ea57"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:12:15 crc kubenswrapper[5017]: I0129 08:12:15.891923 5017 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8fd04424-aea6-47bb-b4bf-833ab5c9ea57-logs\") on node \"crc\" DevicePath \"\"" Jan 29 08:12:15 crc kubenswrapper[5017]: I0129 08:12:15.891947 5017 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8fd04424-aea6-47bb-b4bf-833ab5c9ea57-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 29 08:12:15 crc kubenswrapper[5017]: I0129 08:12:15.897256 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8fd04424-aea6-47bb-b4bf-833ab5c9ea57-kube-api-access-pwmzx" (OuterVolumeSpecName: "kube-api-access-pwmzx") pod "8fd04424-aea6-47bb-b4bf-833ab5c9ea57" (UID: "8fd04424-aea6-47bb-b4bf-833ab5c9ea57"). InnerVolumeSpecName "kube-api-access-pwmzx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:12:15 crc kubenswrapper[5017]: I0129 08:12:15.897383 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fd04424-aea6-47bb-b4bf-833ab5c9ea57-scripts" (OuterVolumeSpecName: "scripts") pod "8fd04424-aea6-47bb-b4bf-833ab5c9ea57" (UID: "8fd04424-aea6-47bb-b4bf-833ab5c9ea57"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:12:15 crc kubenswrapper[5017]: I0129 08:12:15.899208 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8fd04424-aea6-47bb-b4bf-833ab5c9ea57-ceph" (OuterVolumeSpecName: "ceph") pod "8fd04424-aea6-47bb-b4bf-833ab5c9ea57" (UID: "8fd04424-aea6-47bb-b4bf-833ab5c9ea57"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:12:15 crc kubenswrapper[5017]: I0129 08:12:15.917750 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 08:12:15 crc kubenswrapper[5017]: I0129 08:12:15.938187 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fd04424-aea6-47bb-b4bf-833ab5c9ea57-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8fd04424-aea6-47bb-b4bf-833ab5c9ea57" (UID: "8fd04424-aea6-47bb-b4bf-833ab5c9ea57"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:12:15 crc kubenswrapper[5017]: I0129 08:12:15.965716 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fd04424-aea6-47bb-b4bf-833ab5c9ea57-config-data" (OuterVolumeSpecName: "config-data") pod "8fd04424-aea6-47bb-b4bf-833ab5c9ea57" (UID: "8fd04424-aea6-47bb-b4bf-833ab5c9ea57"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:12:15 crc kubenswrapper[5017]: I0129 08:12:15.994491 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/35a7800b-fba3-4d8a-99d3-c8bc57cf37b4-ceph\") pod \"35a7800b-fba3-4d8a-99d3-c8bc57cf37b4\" (UID: \"35a7800b-fba3-4d8a-99d3-c8bc57cf37b4\") " Jan 29 08:12:15 crc kubenswrapper[5017]: I0129 08:12:15.994765 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/35a7800b-fba3-4d8a-99d3-c8bc57cf37b4-httpd-run\") pod \"35a7800b-fba3-4d8a-99d3-c8bc57cf37b4\" (UID: \"35a7800b-fba3-4d8a-99d3-c8bc57cf37b4\") " Jan 29 08:12:15 crc kubenswrapper[5017]: I0129 08:12:15.994981 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/35a7800b-fba3-4d8a-99d3-c8bc57cf37b4-logs\") pod \"35a7800b-fba3-4d8a-99d3-c8bc57cf37b4\" (UID: \"35a7800b-fba3-4d8a-99d3-c8bc57cf37b4\") " Jan 29 08:12:15 crc kubenswrapper[5017]: I0129 08:12:15.995032 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/35a7800b-fba3-4d8a-99d3-c8bc57cf37b4-scripts\") pod \"35a7800b-fba3-4d8a-99d3-c8bc57cf37b4\" (UID: \"35a7800b-fba3-4d8a-99d3-c8bc57cf37b4\") " Jan 29 08:12:15 crc kubenswrapper[5017]: I0129 08:12:15.995080 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35a7800b-fba3-4d8a-99d3-c8bc57cf37b4-combined-ca-bundle\") pod \"35a7800b-fba3-4d8a-99d3-c8bc57cf37b4\" (UID: \"35a7800b-fba3-4d8a-99d3-c8bc57cf37b4\") " Jan 29 08:12:15 crc kubenswrapper[5017]: I0129 08:12:15.995142 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35a7800b-fba3-4d8a-99d3-c8bc57cf37b4-config-data\") pod \"35a7800b-fba3-4d8a-99d3-c8bc57cf37b4\" (UID: \"35a7800b-fba3-4d8a-99d3-c8bc57cf37b4\") " Jan 29 08:12:15 crc kubenswrapper[5017]: I0129 08:12:15.995254 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mdvgx\" (UniqueName: \"kubernetes.io/projected/35a7800b-fba3-4d8a-99d3-c8bc57cf37b4-kube-api-access-mdvgx\") pod \"35a7800b-fba3-4d8a-99d3-c8bc57cf37b4\" (UID: \"35a7800b-fba3-4d8a-99d3-c8bc57cf37b4\") " Jan 29 08:12:15 crc kubenswrapper[5017]: I0129 08:12:15.995751 5017 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fd04424-aea6-47bb-b4bf-833ab5c9ea57-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:12:15 crc kubenswrapper[5017]: I0129 08:12:15.995766 5017 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8fd04424-aea6-47bb-b4bf-833ab5c9ea57-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 08:12:15 crc kubenswrapper[5017]: I0129 08:12:15.995777 5017 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fd04424-aea6-47bb-b4bf-833ab5c9ea57-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 08:12:15 crc kubenswrapper[5017]: I0129 08:12:15.995787 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pwmzx\" (UniqueName: \"kubernetes.io/projected/8fd04424-aea6-47bb-b4bf-833ab5c9ea57-kube-api-access-pwmzx\") on node \"crc\" DevicePath \"\"" Jan 29 08:12:15 
crc kubenswrapper[5017]: I0129 08:12:15.995799 5017 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/8fd04424-aea6-47bb-b4bf-833ab5c9ea57-ceph\") on node \"crc\" DevicePath \"\"" Jan 29 08:12:15 crc kubenswrapper[5017]: I0129 08:12:15.997225 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35a7800b-fba3-4d8a-99d3-c8bc57cf37b4-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "35a7800b-fba3-4d8a-99d3-c8bc57cf37b4" (UID: "35a7800b-fba3-4d8a-99d3-c8bc57cf37b4"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:12:15 crc kubenswrapper[5017]: I0129 08:12:15.997238 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35a7800b-fba3-4d8a-99d3-c8bc57cf37b4-logs" (OuterVolumeSpecName: "logs") pod "35a7800b-fba3-4d8a-99d3-c8bc57cf37b4" (UID: "35a7800b-fba3-4d8a-99d3-c8bc57cf37b4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:12:16 crc kubenswrapper[5017]: I0129 08:12:16.000132 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35a7800b-fba3-4d8a-99d3-c8bc57cf37b4-ceph" (OuterVolumeSpecName: "ceph") pod "35a7800b-fba3-4d8a-99d3-c8bc57cf37b4" (UID: "35a7800b-fba3-4d8a-99d3-c8bc57cf37b4"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:12:16 crc kubenswrapper[5017]: I0129 08:12:16.003654 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35a7800b-fba3-4d8a-99d3-c8bc57cf37b4-scripts" (OuterVolumeSpecName: "scripts") pod "35a7800b-fba3-4d8a-99d3-c8bc57cf37b4" (UID: "35a7800b-fba3-4d8a-99d3-c8bc57cf37b4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:12:16 crc kubenswrapper[5017]: I0129 08:12:16.006325 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35a7800b-fba3-4d8a-99d3-c8bc57cf37b4-kube-api-access-mdvgx" (OuterVolumeSpecName: "kube-api-access-mdvgx") pod "35a7800b-fba3-4d8a-99d3-c8bc57cf37b4" (UID: "35a7800b-fba3-4d8a-99d3-c8bc57cf37b4"). InnerVolumeSpecName "kube-api-access-mdvgx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:12:16 crc kubenswrapper[5017]: I0129 08:12:16.044706 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35a7800b-fba3-4d8a-99d3-c8bc57cf37b4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "35a7800b-fba3-4d8a-99d3-c8bc57cf37b4" (UID: "35a7800b-fba3-4d8a-99d3-c8bc57cf37b4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:12:16 crc kubenswrapper[5017]: I0129 08:12:16.050230 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35a7800b-fba3-4d8a-99d3-c8bc57cf37b4-config-data" (OuterVolumeSpecName: "config-data") pod "35a7800b-fba3-4d8a-99d3-c8bc57cf37b4" (UID: "35a7800b-fba3-4d8a-99d3-c8bc57cf37b4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:12:16 crc kubenswrapper[5017]: I0129 08:12:16.097519 5017 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/35a7800b-fba3-4d8a-99d3-c8bc57cf37b4-ceph\") on node \"crc\" DevicePath \"\"" Jan 29 08:12:16 crc kubenswrapper[5017]: I0129 08:12:16.097565 5017 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/35a7800b-fba3-4d8a-99d3-c8bc57cf37b4-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 29 08:12:16 crc kubenswrapper[5017]: I0129 08:12:16.097576 5017 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/35a7800b-fba3-4d8a-99d3-c8bc57cf37b4-logs\") on node \"crc\" DevicePath \"\"" Jan 29 08:12:16 crc kubenswrapper[5017]: I0129 08:12:16.097586 5017 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/35a7800b-fba3-4d8a-99d3-c8bc57cf37b4-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 08:12:16 crc kubenswrapper[5017]: I0129 08:12:16.097596 5017 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35a7800b-fba3-4d8a-99d3-c8bc57cf37b4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:12:16 crc kubenswrapper[5017]: I0129 08:12:16.097608 5017 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35a7800b-fba3-4d8a-99d3-c8bc57cf37b4-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 08:12:16 crc kubenswrapper[5017]: I0129 08:12:16.097621 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mdvgx\" (UniqueName: \"kubernetes.io/projected/35a7800b-fba3-4d8a-99d3-c8bc57cf37b4-kube-api-access-mdvgx\") on node \"crc\" DevicePath \"\"" Jan 29 08:12:16 crc kubenswrapper[5017]: I0129 08:12:16.410511 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 08:12:16 crc kubenswrapper[5017]: I0129 08:12:16.410552 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"35a7800b-fba3-4d8a-99d3-c8bc57cf37b4","Type":"ContainerDied","Data":"c168e409a143e93b33beb7b59e4a8c9d96c335d8bfc5b9e6815832048263250e"} Jan 29 08:12:16 crc kubenswrapper[5017]: I0129 08:12:16.410713 5017 scope.go:117] "RemoveContainer" containerID="208feea7a15c43c3b401c6c2979853e03b1d20050ae08cd5bbcde48673fcf10e" Jan 29 08:12:16 crc kubenswrapper[5017]: I0129 08:12:16.415811 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8fd04424-aea6-47bb-b4bf-833ab5c9ea57","Type":"ContainerDied","Data":"97f3d8e4960cdaf0522fd3bcf4d0e2915715a355746d8f2f6b9dfe75919e0fe4"} Jan 29 08:12:16 crc kubenswrapper[5017]: I0129 08:12:16.415896 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 08:12:16 crc kubenswrapper[5017]: I0129 08:12:16.458149 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 08:12:16 crc kubenswrapper[5017]: I0129 08:12:16.472459 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 08:12:16 crc kubenswrapper[5017]: I0129 08:12:16.492041 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 08:12:16 crc kubenswrapper[5017]: E0129 08:12:16.492618 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35a7800b-fba3-4d8a-99d3-c8bc57cf37b4" containerName="glance-httpd" Jan 29 08:12:16 crc kubenswrapper[5017]: I0129 08:12:16.492643 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="35a7800b-fba3-4d8a-99d3-c8bc57cf37b4" containerName="glance-httpd" Jan 29 08:12:16 crc kubenswrapper[5017]: E0129 08:12:16.492664 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35a7800b-fba3-4d8a-99d3-c8bc57cf37b4" containerName="glance-log" Jan 29 08:12:16 crc kubenswrapper[5017]: I0129 08:12:16.492671 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="35a7800b-fba3-4d8a-99d3-c8bc57cf37b4" containerName="glance-log" Jan 29 08:12:16 crc kubenswrapper[5017]: E0129 08:12:16.492685 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fd04424-aea6-47bb-b4bf-833ab5c9ea57" containerName="glance-log" Jan 29 08:12:16 crc kubenswrapper[5017]: I0129 08:12:16.492692 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fd04424-aea6-47bb-b4bf-833ab5c9ea57" containerName="glance-log" Jan 29 08:12:16 crc kubenswrapper[5017]: E0129 08:12:16.492700 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fd04424-aea6-47bb-b4bf-833ab5c9ea57" containerName="glance-httpd" Jan 29 08:12:16 crc kubenswrapper[5017]: I0129 08:12:16.492708 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fd04424-aea6-47bb-b4bf-833ab5c9ea57" containerName="glance-httpd" Jan 29 08:12:16 crc kubenswrapper[5017]: I0129 08:12:16.492899 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fd04424-aea6-47bb-b4bf-833ab5c9ea57" containerName="glance-httpd" Jan 29 08:12:16 crc kubenswrapper[5017]: I0129 08:12:16.492918 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fd04424-aea6-47bb-b4bf-833ab5c9ea57" containerName="glance-log" Jan 29 08:12:16 crc kubenswrapper[5017]: I0129 08:12:16.492931 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="35a7800b-fba3-4d8a-99d3-c8bc57cf37b4" containerName="glance-httpd" Jan 29 08:12:16 crc kubenswrapper[5017]: I0129 08:12:16.492945 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="35a7800b-fba3-4d8a-99d3-c8bc57cf37b4" containerName="glance-log" Jan 29 08:12:16 crc kubenswrapper[5017]: I0129 08:12:16.494104 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 08:12:16 crc kubenswrapper[5017]: I0129 08:12:16.503630 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-k9zb7" Jan 29 08:12:16 crc kubenswrapper[5017]: I0129 08:12:16.503907 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 29 08:12:16 crc kubenswrapper[5017]: I0129 08:12:16.504088 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 29 08:12:16 crc kubenswrapper[5017]: I0129 08:12:16.505257 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdqqp\" (UniqueName: \"kubernetes.io/projected/1b16f455-e4ba-484f-96fc-78de5180d8c5-kube-api-access-qdqqp\") pod \"glance-default-internal-api-0\" (UID: \"1b16f455-e4ba-484f-96fc-78de5180d8c5\") " pod="openstack/glance-default-internal-api-0" Jan 29 08:12:16 crc kubenswrapper[5017]: I0129 08:12:16.505321 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b16f455-e4ba-484f-96fc-78de5180d8c5-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"1b16f455-e4ba-484f-96fc-78de5180d8c5\") " pod="openstack/glance-default-internal-api-0" Jan 29 08:12:16 crc kubenswrapper[5017]: I0129 08:12:16.505357 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b16f455-e4ba-484f-96fc-78de5180d8c5-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1b16f455-e4ba-484f-96fc-78de5180d8c5\") " pod="openstack/glance-default-internal-api-0" Jan 29 08:12:16 crc kubenswrapper[5017]: I0129 08:12:16.505377 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/1b16f455-e4ba-484f-96fc-78de5180d8c5-ceph\") pod \"glance-default-internal-api-0\" (UID: \"1b16f455-e4ba-484f-96fc-78de5180d8c5\") " pod="openstack/glance-default-internal-api-0" Jan 29 08:12:16 crc kubenswrapper[5017]: I0129 08:12:16.505457 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b16f455-e4ba-484f-96fc-78de5180d8c5-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1b16f455-e4ba-484f-96fc-78de5180d8c5\") " pod="openstack/glance-default-internal-api-0" Jan 29 08:12:16 crc kubenswrapper[5017]: I0129 08:12:16.505476 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b16f455-e4ba-484f-96fc-78de5180d8c5-logs\") pod \"glance-default-internal-api-0\" (UID: \"1b16f455-e4ba-484f-96fc-78de5180d8c5\") " pod="openstack/glance-default-internal-api-0" Jan 29 08:12:16 crc kubenswrapper[5017]: I0129 08:12:16.505514 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1b16f455-e4ba-484f-96fc-78de5180d8c5-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1b16f455-e4ba-484f-96fc-78de5180d8c5\") " pod="openstack/glance-default-internal-api-0" Jan 29 08:12:16 crc kubenswrapper[5017]: I0129 08:12:16.516765 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/glance-default-external-api-0"] Jan 29 08:12:16 crc kubenswrapper[5017]: I0129 08:12:16.531307 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 08:12:16 crc kubenswrapper[5017]: I0129 08:12:16.557210 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 08:12:16 crc kubenswrapper[5017]: I0129 08:12:16.572541 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 08:12:16 crc kubenswrapper[5017]: I0129 08:12:16.575378 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 08:12:16 crc kubenswrapper[5017]: I0129 08:12:16.577979 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 29 08:12:16 crc kubenswrapper[5017]: I0129 08:12:16.586329 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 08:12:16 crc kubenswrapper[5017]: I0129 08:12:16.606814 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1b16f455-e4ba-484f-96fc-78de5180d8c5-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1b16f455-e4ba-484f-96fc-78de5180d8c5\") " pod="openstack/glance-default-internal-api-0" Jan 29 08:12:16 crc kubenswrapper[5017]: I0129 08:12:16.606877 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6707e9d7-0585-4491-8e72-6203f49f9e14-logs\") pod \"glance-default-external-api-0\" (UID: \"6707e9d7-0585-4491-8e72-6203f49f9e14\") " pod="openstack/glance-default-external-api-0" Jan 29 08:12:16 crc kubenswrapper[5017]: I0129 08:12:16.606927 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6707e9d7-0585-4491-8e72-6203f49f9e14-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"6707e9d7-0585-4491-8e72-6203f49f9e14\") " pod="openstack/glance-default-external-api-0" Jan 29 08:12:16 crc kubenswrapper[5017]: I0129 08:12:16.606954 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/6707e9d7-0585-4491-8e72-6203f49f9e14-ceph\") pod \"glance-default-external-api-0\" (UID: \"6707e9d7-0585-4491-8e72-6203f49f9e14\") " pod="openstack/glance-default-external-api-0" Jan 29 08:12:16 crc kubenswrapper[5017]: I0129 08:12:16.606989 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdqqp\" (UniqueName: \"kubernetes.io/projected/1b16f455-e4ba-484f-96fc-78de5180d8c5-kube-api-access-qdqqp\") pod \"glance-default-internal-api-0\" (UID: \"1b16f455-e4ba-484f-96fc-78de5180d8c5\") " pod="openstack/glance-default-internal-api-0" Jan 29 08:12:16 crc kubenswrapper[5017]: I0129 08:12:16.607015 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b16f455-e4ba-484f-96fc-78de5180d8c5-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"1b16f455-e4ba-484f-96fc-78de5180d8c5\") " pod="openstack/glance-default-internal-api-0" Jan 29 08:12:16 crc kubenswrapper[5017]: I0129 08:12:16.607046 5017 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b16f455-e4ba-484f-96fc-78de5180d8c5-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1b16f455-e4ba-484f-96fc-78de5180d8c5\") " pod="openstack/glance-default-internal-api-0" Jan 29 08:12:16 crc kubenswrapper[5017]: I0129 08:12:16.607064 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/1b16f455-e4ba-484f-96fc-78de5180d8c5-ceph\") pod \"glance-default-internal-api-0\" (UID: \"1b16f455-e4ba-484f-96fc-78de5180d8c5\") " pod="openstack/glance-default-internal-api-0" Jan 29 08:12:16 crc kubenswrapper[5017]: I0129 08:12:16.607108 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6707e9d7-0585-4491-8e72-6203f49f9e14-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"6707e9d7-0585-4491-8e72-6203f49f9e14\") " pod="openstack/glance-default-external-api-0" Jan 29 08:12:16 crc kubenswrapper[5017]: I0129 08:12:16.607139 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnq79\" (UniqueName: \"kubernetes.io/projected/6707e9d7-0585-4491-8e72-6203f49f9e14-kube-api-access-dnq79\") pod \"glance-default-external-api-0\" (UID: \"6707e9d7-0585-4491-8e72-6203f49f9e14\") " pod="openstack/glance-default-external-api-0" Jan 29 08:12:16 crc kubenswrapper[5017]: I0129 08:12:16.607187 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b16f455-e4ba-484f-96fc-78de5180d8c5-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1b16f455-e4ba-484f-96fc-78de5180d8c5\") " pod="openstack/glance-default-internal-api-0" Jan 29 08:12:16 crc kubenswrapper[5017]: I0129 08:12:16.607204 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b16f455-e4ba-484f-96fc-78de5180d8c5-logs\") pod \"glance-default-internal-api-0\" (UID: \"1b16f455-e4ba-484f-96fc-78de5180d8c5\") " pod="openstack/glance-default-internal-api-0" Jan 29 08:12:16 crc kubenswrapper[5017]: I0129 08:12:16.607232 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6707e9d7-0585-4491-8e72-6203f49f9e14-scripts\") pod \"glance-default-external-api-0\" (UID: \"6707e9d7-0585-4491-8e72-6203f49f9e14\") " pod="openstack/glance-default-external-api-0" Jan 29 08:12:16 crc kubenswrapper[5017]: I0129 08:12:16.607252 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6707e9d7-0585-4491-8e72-6203f49f9e14-config-data\") pod \"glance-default-external-api-0\" (UID: \"6707e9d7-0585-4491-8e72-6203f49f9e14\") " pod="openstack/glance-default-external-api-0" Jan 29 08:12:16 crc kubenswrapper[5017]: I0129 08:12:16.626566 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1b16f455-e4ba-484f-96fc-78de5180d8c5-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1b16f455-e4ba-484f-96fc-78de5180d8c5\") " pod="openstack/glance-default-internal-api-0" Jan 29 08:12:16 crc kubenswrapper[5017]: I0129 08:12:16.626926 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/1b16f455-e4ba-484f-96fc-78de5180d8c5-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1b16f455-e4ba-484f-96fc-78de5180d8c5\") " pod="openstack/glance-default-internal-api-0" Jan 29 08:12:16 crc kubenswrapper[5017]: I0129 08:12:16.630042 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b16f455-e4ba-484f-96fc-78de5180d8c5-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"1b16f455-e4ba-484f-96fc-78de5180d8c5\") " pod="openstack/glance-default-internal-api-0" Jan 29 08:12:16 crc kubenswrapper[5017]: I0129 08:12:16.630376 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b16f455-e4ba-484f-96fc-78de5180d8c5-logs\") pod \"glance-default-internal-api-0\" (UID: \"1b16f455-e4ba-484f-96fc-78de5180d8c5\") " pod="openstack/glance-default-internal-api-0" Jan 29 08:12:16 crc kubenswrapper[5017]: I0129 08:12:16.631097 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b16f455-e4ba-484f-96fc-78de5180d8c5-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1b16f455-e4ba-484f-96fc-78de5180d8c5\") " pod="openstack/glance-default-internal-api-0" Jan 29 08:12:16 crc kubenswrapper[5017]: I0129 08:12:16.631844 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/1b16f455-e4ba-484f-96fc-78de5180d8c5-ceph\") pod \"glance-default-internal-api-0\" (UID: \"1b16f455-e4ba-484f-96fc-78de5180d8c5\") " pod="openstack/glance-default-internal-api-0" Jan 29 08:12:16 crc kubenswrapper[5017]: I0129 08:12:16.661010 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdqqp\" (UniqueName: \"kubernetes.io/projected/1b16f455-e4ba-484f-96fc-78de5180d8c5-kube-api-access-qdqqp\") pod \"glance-default-internal-api-0\" (UID: \"1b16f455-e4ba-484f-96fc-78de5180d8c5\") " pod="openstack/glance-default-internal-api-0" Jan 29 08:12:16 crc kubenswrapper[5017]: I0129 08:12:16.709998 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6707e9d7-0585-4491-8e72-6203f49f9e14-scripts\") pod \"glance-default-external-api-0\" (UID: \"6707e9d7-0585-4491-8e72-6203f49f9e14\") " pod="openstack/glance-default-external-api-0" Jan 29 08:12:16 crc kubenswrapper[5017]: I0129 08:12:16.710422 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6707e9d7-0585-4491-8e72-6203f49f9e14-config-data\") pod \"glance-default-external-api-0\" (UID: \"6707e9d7-0585-4491-8e72-6203f49f9e14\") " pod="openstack/glance-default-external-api-0" Jan 29 08:12:16 crc kubenswrapper[5017]: I0129 08:12:16.710475 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6707e9d7-0585-4491-8e72-6203f49f9e14-logs\") pod \"glance-default-external-api-0\" (UID: \"6707e9d7-0585-4491-8e72-6203f49f9e14\") " pod="openstack/glance-default-external-api-0" Jan 29 08:12:16 crc kubenswrapper[5017]: I0129 08:12:16.710537 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6707e9d7-0585-4491-8e72-6203f49f9e14-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: 
\"6707e9d7-0585-4491-8e72-6203f49f9e14\") " pod="openstack/glance-default-external-api-0" Jan 29 08:12:16 crc kubenswrapper[5017]: I0129 08:12:16.710572 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/6707e9d7-0585-4491-8e72-6203f49f9e14-ceph\") pod \"glance-default-external-api-0\" (UID: \"6707e9d7-0585-4491-8e72-6203f49f9e14\") " pod="openstack/glance-default-external-api-0" Jan 29 08:12:16 crc kubenswrapper[5017]: I0129 08:12:16.710656 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6707e9d7-0585-4491-8e72-6203f49f9e14-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"6707e9d7-0585-4491-8e72-6203f49f9e14\") " pod="openstack/glance-default-external-api-0" Jan 29 08:12:16 crc kubenswrapper[5017]: I0129 08:12:16.710692 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnq79\" (UniqueName: \"kubernetes.io/projected/6707e9d7-0585-4491-8e72-6203f49f9e14-kube-api-access-dnq79\") pod \"glance-default-external-api-0\" (UID: \"6707e9d7-0585-4491-8e72-6203f49f9e14\") " pod="openstack/glance-default-external-api-0" Jan 29 08:12:16 crc kubenswrapper[5017]: I0129 08:12:16.711422 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6707e9d7-0585-4491-8e72-6203f49f9e14-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"6707e9d7-0585-4491-8e72-6203f49f9e14\") " pod="openstack/glance-default-external-api-0" Jan 29 08:12:16 crc kubenswrapper[5017]: I0129 08:12:16.711807 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6707e9d7-0585-4491-8e72-6203f49f9e14-logs\") pod \"glance-default-external-api-0\" (UID: \"6707e9d7-0585-4491-8e72-6203f49f9e14\") " pod="openstack/glance-default-external-api-0" Jan 29 08:12:16 crc kubenswrapper[5017]: I0129 08:12:16.715529 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6707e9d7-0585-4491-8e72-6203f49f9e14-scripts\") pod \"glance-default-external-api-0\" (UID: \"6707e9d7-0585-4491-8e72-6203f49f9e14\") " pod="openstack/glance-default-external-api-0" Jan 29 08:12:16 crc kubenswrapper[5017]: I0129 08:12:16.716473 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6707e9d7-0585-4491-8e72-6203f49f9e14-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"6707e9d7-0585-4491-8e72-6203f49f9e14\") " pod="openstack/glance-default-external-api-0" Jan 29 08:12:16 crc kubenswrapper[5017]: I0129 08:12:16.718034 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6707e9d7-0585-4491-8e72-6203f49f9e14-config-data\") pod \"glance-default-external-api-0\" (UID: \"6707e9d7-0585-4491-8e72-6203f49f9e14\") " pod="openstack/glance-default-external-api-0" Jan 29 08:12:16 crc kubenswrapper[5017]: I0129 08:12:16.718690 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/6707e9d7-0585-4491-8e72-6203f49f9e14-ceph\") pod \"glance-default-external-api-0\" (UID: \"6707e9d7-0585-4491-8e72-6203f49f9e14\") " pod="openstack/glance-default-external-api-0" Jan 29 08:12:16 crc kubenswrapper[5017]: I0129 08:12:16.731718 5017 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnq79\" (UniqueName: \"kubernetes.io/projected/6707e9d7-0585-4491-8e72-6203f49f9e14-kube-api-access-dnq79\") pod \"glance-default-external-api-0\" (UID: \"6707e9d7-0585-4491-8e72-6203f49f9e14\") " pod="openstack/glance-default-external-api-0" Jan 29 08:12:16 crc kubenswrapper[5017]: I0129 08:12:16.819658 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 08:12:16 crc kubenswrapper[5017]: I0129 08:12:16.900743 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 08:12:18 crc kubenswrapper[5017]: I0129 08:12:18.329361 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35a7800b-fba3-4d8a-99d3-c8bc57cf37b4" path="/var/lib/kubelet/pods/35a7800b-fba3-4d8a-99d3-c8bc57cf37b4/volumes" Jan 29 08:12:18 crc kubenswrapper[5017]: I0129 08:12:18.330657 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8fd04424-aea6-47bb-b4bf-833ab5c9ea57" path="/var/lib/kubelet/pods/8fd04424-aea6-47bb-b4bf-833ab5c9ea57/volumes" Jan 29 08:12:20 crc kubenswrapper[5017]: I0129 08:12:20.854088 5017 scope.go:117] "RemoveContainer" containerID="a6a970e35eb8877153f15baa7bb62816a72fe62277d358f01aa92447687f1807" Jan 29 08:12:20 crc kubenswrapper[5017]: I0129 08:12:20.965809 5017 scope.go:117] "RemoveContainer" containerID="8b6cbcde06dbdad04287b85fb2bd8ca25cdf080f9864141836bb63fa032ef620" Jan 29 08:12:21 crc kubenswrapper[5017]: I0129 08:12:21.129632 5017 scope.go:117] "RemoveContainer" containerID="c9ff0ee3329df41941bcea7c2383ccb1819a04fd5d49040732f48194c10e5e80" Jan 29 08:12:21 crc kubenswrapper[5017]: I0129 08:12:21.316506 5017 scope.go:117] "RemoveContainer" containerID="45e4c349dc67bf5910f42f2a3df0c872cf61c8acd759f4f9b895eefc7afaf645" Jan 29 08:12:21 crc kubenswrapper[5017]: E0129 08:12:21.321104 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:12:21 crc kubenswrapper[5017]: I0129 08:12:21.479626 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-c45fcbd9c-jkwb8" event={"ID":"4075638f-fbb3-488a-83b6-a7a0321ca8ff","Type":"ContainerStarted","Data":"c409e93040ed14d338c6e54ee4f7806afc6e6fd0f227853c577c5abb0e48c2e2"} Jan 29 08:12:21 crc kubenswrapper[5017]: I0129 08:12:21.481334 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-68bd76c56f-gz2sk" event={"ID":"bd7d0aff-5a86-446e-a1d8-228c27e71a18","Type":"ContainerStarted","Data":"6f0910ff19d91422ef36dc105ddf48f0cde96d83ab489b4185785fbf9233bde6"} Jan 29 08:12:21 crc kubenswrapper[5017]: I0129 08:12:21.482597 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-67957468c7-bwnjw" event={"ID":"3c535b30-9dff-483b-a2d4-2c278dfba773","Type":"ContainerStarted","Data":"3aad72d811e64531c6b825799f95e70e7b6fb7500ea341e54c9048a2b7cf40a5"} Jan 29 08:12:21 crc kubenswrapper[5017]: I0129 08:12:21.483452 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 08:12:21 crc kubenswrapper[5017]: I0129 
08:12:21.636862 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 08:12:22 crc kubenswrapper[5017]: I0129 08:12:22.518040 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6707e9d7-0585-4491-8e72-6203f49f9e14","Type":"ContainerStarted","Data":"573c828fd519854bff904b48840968eb96c97610cc2f2259a8363ad45cd95f12"} Jan 29 08:12:22 crc kubenswrapper[5017]: I0129 08:12:22.518628 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6707e9d7-0585-4491-8e72-6203f49f9e14","Type":"ContainerStarted","Data":"d925cf133e6227e9858a67689540d38bbf296afc632468f1d463cafdf033b1b6"} Jan 29 08:12:22 crc kubenswrapper[5017]: I0129 08:12:22.522123 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-c45fcbd9c-jkwb8" event={"ID":"4075638f-fbb3-488a-83b6-a7a0321ca8ff","Type":"ContainerStarted","Data":"e2d43711073d1ddb622684d810635ed10bac3766ee8d2fcf416450201f5ae4e4"} Jan 29 08:12:22 crc kubenswrapper[5017]: I0129 08:12:22.522282 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-c45fcbd9c-jkwb8" podUID="4075638f-fbb3-488a-83b6-a7a0321ca8ff" containerName="horizon-log" containerID="cri-o://c409e93040ed14d338c6e54ee4f7806afc6e6fd0f227853c577c5abb0e48c2e2" gracePeriod=30 Jan 29 08:12:22 crc kubenswrapper[5017]: I0129 08:12:22.524516 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-c45fcbd9c-jkwb8" podUID="4075638f-fbb3-488a-83b6-a7a0321ca8ff" containerName="horizon" containerID="cri-o://e2d43711073d1ddb622684d810635ed10bac3766ee8d2fcf416450201f5ae4e4" gracePeriod=30 Jan 29 08:12:22 crc kubenswrapper[5017]: I0129 08:12:22.534018 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-68bd76c56f-gz2sk" event={"ID":"bd7d0aff-5a86-446e-a1d8-228c27e71a18","Type":"ContainerStarted","Data":"dfc3572ec15b1d39fe0e8738d18745138fbd4e56923d84fb5814f69d0493fb66"} Jan 29 08:12:22 crc kubenswrapper[5017]: I0129 08:12:22.543203 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-c45fcbd9c-jkwb8" podStartSLOduration=3.388793104 podStartE2EDuration="11.54317765s" podCreationTimestamp="2026-01-29 08:12:11 +0000 UTC" firstStartedPulling="2026-01-29 08:12:12.840694467 +0000 UTC m=+5819.215142077" lastFinishedPulling="2026-01-29 08:12:20.995079013 +0000 UTC m=+5827.369526623" observedRunningTime="2026-01-29 08:12:22.541267844 +0000 UTC m=+5828.915715454" watchObservedRunningTime="2026-01-29 08:12:22.54317765 +0000 UTC m=+5828.917625270" Jan 29 08:12:22 crc kubenswrapper[5017]: I0129 08:12:22.548104 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-67957468c7-bwnjw" event={"ID":"3c535b30-9dff-483b-a2d4-2c278dfba773","Type":"ContainerStarted","Data":"0fdfa3c17a6578b759ae683b8d7702bd36abcf726fc7b1de9257c8e0c8873dc8"} Jan 29 08:12:22 crc kubenswrapper[5017]: I0129 08:12:22.560868 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1b16f455-e4ba-484f-96fc-78de5180d8c5","Type":"ContainerStarted","Data":"9985aa25eba47bf38ee123eb8b56f11d8dbba50ece2da4ffe3362e65b9e98c22"} Jan 29 08:12:22 crc kubenswrapper[5017]: I0129 08:12:22.561452 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"1b16f455-e4ba-484f-96fc-78de5180d8c5","Type":"ContainerStarted","Data":"0316e2eda1b50ff3c17516f69c15ca65bcb51cc1d4100e2e13169c1b28174768"} Jan 29 08:12:22 crc kubenswrapper[5017]: I0129 08:12:22.568813 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-68bd76c56f-gz2sk" podStartSLOduration=3.188150986 podStartE2EDuration="10.568793576s" podCreationTimestamp="2026-01-29 08:12:12 +0000 UTC" firstStartedPulling="2026-01-29 08:12:13.624495504 +0000 UTC m=+5819.998943114" lastFinishedPulling="2026-01-29 08:12:21.005138094 +0000 UTC m=+5827.379585704" observedRunningTime="2026-01-29 08:12:22.560365683 +0000 UTC m=+5828.934813293" watchObservedRunningTime="2026-01-29 08:12:22.568793576 +0000 UTC m=+5828.943241186" Jan 29 08:12:22 crc kubenswrapper[5017]: I0129 08:12:22.618105 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-67957468c7-bwnjw" podStartSLOduration=2.612027115 podStartE2EDuration="10.618069719s" podCreationTimestamp="2026-01-29 08:12:12 +0000 UTC" firstStartedPulling="2026-01-29 08:12:13.025617017 +0000 UTC m=+5819.400064627" lastFinishedPulling="2026-01-29 08:12:21.031659621 +0000 UTC m=+5827.406107231" observedRunningTime="2026-01-29 08:12:22.604747208 +0000 UTC m=+5828.979194818" watchObservedRunningTime="2026-01-29 08:12:22.618069719 +0000 UTC m=+5828.992517339" Jan 29 08:12:23 crc kubenswrapper[5017]: I0129 08:12:23.128704 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-68bd76c56f-gz2sk" Jan 29 08:12:23 crc kubenswrapper[5017]: I0129 08:12:23.129185 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-68bd76c56f-gz2sk" Jan 29 08:12:23 crc kubenswrapper[5017]: I0129 08:12:23.585772 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1b16f455-e4ba-484f-96fc-78de5180d8c5","Type":"ContainerStarted","Data":"3ee340357fe0e00266814a773e44e1db238a27ea276a434eebed4e6c3a63558b"} Jan 29 08:12:23 crc kubenswrapper[5017]: I0129 08:12:23.590288 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6707e9d7-0585-4491-8e72-6203f49f9e14","Type":"ContainerStarted","Data":"b18ad7cbf33eb2152f0f8e8e84d47224a5a4be6afaa882990fc0cc25a69e734a"} Jan 29 08:12:23 crc kubenswrapper[5017]: I0129 08:12:23.633123 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=7.633095428 podStartE2EDuration="7.633095428s" podCreationTimestamp="2026-01-29 08:12:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:12:23.611849028 +0000 UTC m=+5829.986296638" watchObservedRunningTime="2026-01-29 08:12:23.633095428 +0000 UTC m=+5830.007543048" Jan 29 08:12:25 crc kubenswrapper[5017]: I0129 08:12:25.303877 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=9.30385168 podStartE2EDuration="9.30385168s" podCreationTimestamp="2026-01-29 08:12:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:12:23.644042901 +0000 UTC m=+5830.018490521" watchObservedRunningTime="2026-01-29 08:12:25.30385168 +0000 UTC m=+5831.678299290" Jan 29 08:12:25 crc 
kubenswrapper[5017]: I0129 08:12:25.323539 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hll7k"] Jan 29 08:12:25 crc kubenswrapper[5017]: I0129 08:12:25.327220 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hll7k" Jan 29 08:12:25 crc kubenswrapper[5017]: I0129 08:12:25.367831 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hll7k"] Jan 29 08:12:25 crc kubenswrapper[5017]: I0129 08:12:25.418287 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24ca248f-ec66-4853-925c-0bba88b0d7d4-catalog-content\") pod \"redhat-marketplace-hll7k\" (UID: \"24ca248f-ec66-4853-925c-0bba88b0d7d4\") " pod="openshift-marketplace/redhat-marketplace-hll7k" Jan 29 08:12:25 crc kubenswrapper[5017]: I0129 08:12:25.418409 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24ca248f-ec66-4853-925c-0bba88b0d7d4-utilities\") pod \"redhat-marketplace-hll7k\" (UID: \"24ca248f-ec66-4853-925c-0bba88b0d7d4\") " pod="openshift-marketplace/redhat-marketplace-hll7k" Jan 29 08:12:25 crc kubenswrapper[5017]: I0129 08:12:25.418448 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rf2fm\" (UniqueName: \"kubernetes.io/projected/24ca248f-ec66-4853-925c-0bba88b0d7d4-kube-api-access-rf2fm\") pod \"redhat-marketplace-hll7k\" (UID: \"24ca248f-ec66-4853-925c-0bba88b0d7d4\") " pod="openshift-marketplace/redhat-marketplace-hll7k" Jan 29 08:12:25 crc kubenswrapper[5017]: I0129 08:12:25.521126 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24ca248f-ec66-4853-925c-0bba88b0d7d4-utilities\") pod \"redhat-marketplace-hll7k\" (UID: \"24ca248f-ec66-4853-925c-0bba88b0d7d4\") " pod="openshift-marketplace/redhat-marketplace-hll7k" Jan 29 08:12:25 crc kubenswrapper[5017]: I0129 08:12:25.521336 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rf2fm\" (UniqueName: \"kubernetes.io/projected/24ca248f-ec66-4853-925c-0bba88b0d7d4-kube-api-access-rf2fm\") pod \"redhat-marketplace-hll7k\" (UID: \"24ca248f-ec66-4853-925c-0bba88b0d7d4\") " pod="openshift-marketplace/redhat-marketplace-hll7k" Jan 29 08:12:25 crc kubenswrapper[5017]: I0129 08:12:25.521555 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24ca248f-ec66-4853-925c-0bba88b0d7d4-catalog-content\") pod \"redhat-marketplace-hll7k\" (UID: \"24ca248f-ec66-4853-925c-0bba88b0d7d4\") " pod="openshift-marketplace/redhat-marketplace-hll7k" Jan 29 08:12:25 crc kubenswrapper[5017]: I0129 08:12:25.521969 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24ca248f-ec66-4853-925c-0bba88b0d7d4-utilities\") pod \"redhat-marketplace-hll7k\" (UID: \"24ca248f-ec66-4853-925c-0bba88b0d7d4\") " pod="openshift-marketplace/redhat-marketplace-hll7k" Jan 29 08:12:25 crc kubenswrapper[5017]: I0129 08:12:25.522195 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/24ca248f-ec66-4853-925c-0bba88b0d7d4-catalog-content\") pod \"redhat-marketplace-hll7k\" (UID: \"24ca248f-ec66-4853-925c-0bba88b0d7d4\") " pod="openshift-marketplace/redhat-marketplace-hll7k" Jan 29 08:12:25 crc kubenswrapper[5017]: I0129 08:12:25.549224 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rf2fm\" (UniqueName: \"kubernetes.io/projected/24ca248f-ec66-4853-925c-0bba88b0d7d4-kube-api-access-rf2fm\") pod \"redhat-marketplace-hll7k\" (UID: \"24ca248f-ec66-4853-925c-0bba88b0d7d4\") " pod="openshift-marketplace/redhat-marketplace-hll7k" Jan 29 08:12:25 crc kubenswrapper[5017]: I0129 08:12:25.675241 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hll7k" Jan 29 08:12:26 crc kubenswrapper[5017]: I0129 08:12:26.304799 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hll7k"] Jan 29 08:12:26 crc kubenswrapper[5017]: I0129 08:12:26.649195 5017 generic.go:334] "Generic (PLEG): container finished" podID="24ca248f-ec66-4853-925c-0bba88b0d7d4" containerID="a211280f2719167139c8e91f49b7969a864fb6a0b0f4e12cc0d95624d293ffca" exitCode=0 Jan 29 08:12:26 crc kubenswrapper[5017]: I0129 08:12:26.649316 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hll7k" event={"ID":"24ca248f-ec66-4853-925c-0bba88b0d7d4","Type":"ContainerDied","Data":"a211280f2719167139c8e91f49b7969a864fb6a0b0f4e12cc0d95624d293ffca"} Jan 29 08:12:26 crc kubenswrapper[5017]: I0129 08:12:26.649570 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hll7k" event={"ID":"24ca248f-ec66-4853-925c-0bba88b0d7d4","Type":"ContainerStarted","Data":"7e4dc0cce7d1e5e2de284467e3cddddd4ff55a1d546f2179c10e81595a263bb9"} Jan 29 08:12:26 crc kubenswrapper[5017]: I0129 08:12:26.819882 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 29 08:12:26 crc kubenswrapper[5017]: I0129 08:12:26.820386 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 29 08:12:26 crc kubenswrapper[5017]: I0129 08:12:26.856798 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 29 08:12:26 crc kubenswrapper[5017]: I0129 08:12:26.869663 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 29 08:12:26 crc kubenswrapper[5017]: I0129 08:12:26.901635 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 29 08:12:26 crc kubenswrapper[5017]: I0129 08:12:26.902019 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 29 08:12:26 crc kubenswrapper[5017]: I0129 08:12:26.943860 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 29 08:12:26 crc kubenswrapper[5017]: I0129 08:12:26.953353 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 29 08:12:27 crc kubenswrapper[5017]: I0129 08:12:27.665246 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 29 08:12:27 crc 
kubenswrapper[5017]: I0129 08:12:27.665331 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 29 08:12:27 crc kubenswrapper[5017]: I0129 08:12:27.665351 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 29 08:12:27 crc kubenswrapper[5017]: I0129 08:12:27.665363 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 29 08:12:28 crc kubenswrapper[5017]: I0129 08:12:28.680352 5017 generic.go:334] "Generic (PLEG): container finished" podID="24ca248f-ec66-4853-925c-0bba88b0d7d4" containerID="8a190671368dea048c444628144cf1df7262578e2da2e846c35ba3b8c9390291" exitCode=0 Jan 29 08:12:28 crc kubenswrapper[5017]: I0129 08:12:28.682848 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hll7k" event={"ID":"24ca248f-ec66-4853-925c-0bba88b0d7d4","Type":"ContainerDied","Data":"8a190671368dea048c444628144cf1df7262578e2da2e846c35ba3b8c9390291"} Jan 29 08:12:29 crc kubenswrapper[5017]: I0129 08:12:29.702932 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hll7k" event={"ID":"24ca248f-ec66-4853-925c-0bba88b0d7d4","Type":"ContainerStarted","Data":"396a7f2b2c2fd9d3b1377551fc97a5779778bc7f471ee636ec93a1f9cfc7d23a"} Jan 29 08:12:29 crc kubenswrapper[5017]: I0129 08:12:29.839766 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 29 08:12:29 crc kubenswrapper[5017]: I0129 08:12:29.840950 5017 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 08:12:29 crc kubenswrapper[5017]: I0129 08:12:29.869756 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hll7k" podStartSLOduration=2.410457637 podStartE2EDuration="4.869728111s" podCreationTimestamp="2026-01-29 08:12:25 +0000 UTC" firstStartedPulling="2026-01-29 08:12:26.65125975 +0000 UTC m=+5833.025707360" lastFinishedPulling="2026-01-29 08:12:29.110530224 +0000 UTC m=+5835.484977834" observedRunningTime="2026-01-29 08:12:29.735312264 +0000 UTC m=+5836.109759864" watchObservedRunningTime="2026-01-29 08:12:29.869728111 +0000 UTC m=+5836.244175721" Jan 29 08:12:29 crc kubenswrapper[5017]: I0129 08:12:29.870269 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 29 08:12:29 crc kubenswrapper[5017]: I0129 08:12:29.870916 5017 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 08:12:29 crc kubenswrapper[5017]: I0129 08:12:29.879322 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 29 08:12:30 crc kubenswrapper[5017]: I0129 08:12:30.074472 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 29 08:12:32 crc kubenswrapper[5017]: I0129 08:12:32.332667 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-c45fcbd9c-jkwb8" Jan 29 08:12:32 crc kubenswrapper[5017]: I0129 08:12:32.416006 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-67957468c7-bwnjw" Jan 29 08:12:32 crc kubenswrapper[5017]: I0129 08:12:32.416466 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/horizon-67957468c7-bwnjw" Jan 29 08:12:32 crc kubenswrapper[5017]: I0129 08:12:32.418920 5017 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-67957468c7-bwnjw" podUID="3c535b30-9dff-483b-a2d4-2c278dfba773" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.107:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.107:8080: connect: connection refused" Jan 29 08:12:33 crc kubenswrapper[5017]: I0129 08:12:33.131367 5017 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-68bd76c56f-gz2sk" podUID="bd7d0aff-5a86-446e-a1d8-228c27e71a18" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.108:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.108:8080: connect: connection refused" Jan 29 08:12:35 crc kubenswrapper[5017]: I0129 08:12:35.316664 5017 scope.go:117] "RemoveContainer" containerID="45e4c349dc67bf5910f42f2a3df0c872cf61c8acd759f4f9b895eefc7afaf645" Jan 29 08:12:35 crc kubenswrapper[5017]: E0129 08:12:35.317530 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:12:35 crc kubenswrapper[5017]: I0129 08:12:35.675787 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hll7k" Jan 29 08:12:35 crc kubenswrapper[5017]: I0129 08:12:35.675985 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hll7k" Jan 29 08:12:35 crc kubenswrapper[5017]: I0129 08:12:35.725052 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hll7k" Jan 29 08:12:35 crc kubenswrapper[5017]: I0129 08:12:35.898831 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hll7k" Jan 29 08:12:39 crc kubenswrapper[5017]: I0129 08:12:39.290271 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hll7k"] Jan 29 08:12:39 crc kubenswrapper[5017]: I0129 08:12:39.291930 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-hll7k" podUID="24ca248f-ec66-4853-925c-0bba88b0d7d4" containerName="registry-server" containerID="cri-o://396a7f2b2c2fd9d3b1377551fc97a5779778bc7f471ee636ec93a1f9cfc7d23a" gracePeriod=2 Jan 29 08:12:39 crc kubenswrapper[5017]: I0129 08:12:39.876318 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hll7k" Jan 29 08:12:39 crc kubenswrapper[5017]: I0129 08:12:39.936219 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rf2fm\" (UniqueName: \"kubernetes.io/projected/24ca248f-ec66-4853-925c-0bba88b0d7d4-kube-api-access-rf2fm\") pod \"24ca248f-ec66-4853-925c-0bba88b0d7d4\" (UID: \"24ca248f-ec66-4853-925c-0bba88b0d7d4\") " Jan 29 08:12:39 crc kubenswrapper[5017]: I0129 08:12:39.936303 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24ca248f-ec66-4853-925c-0bba88b0d7d4-catalog-content\") pod \"24ca248f-ec66-4853-925c-0bba88b0d7d4\" (UID: \"24ca248f-ec66-4853-925c-0bba88b0d7d4\") " Jan 29 08:12:39 crc kubenswrapper[5017]: I0129 08:12:39.936553 5017 generic.go:334] "Generic (PLEG): container finished" podID="24ca248f-ec66-4853-925c-0bba88b0d7d4" containerID="396a7f2b2c2fd9d3b1377551fc97a5779778bc7f471ee636ec93a1f9cfc7d23a" exitCode=0 Jan 29 08:12:39 crc kubenswrapper[5017]: I0129 08:12:39.936591 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24ca248f-ec66-4853-925c-0bba88b0d7d4-utilities\") pod \"24ca248f-ec66-4853-925c-0bba88b0d7d4\" (UID: \"24ca248f-ec66-4853-925c-0bba88b0d7d4\") " Jan 29 08:12:39 crc kubenswrapper[5017]: I0129 08:12:39.936620 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hll7k" event={"ID":"24ca248f-ec66-4853-925c-0bba88b0d7d4","Type":"ContainerDied","Data":"396a7f2b2c2fd9d3b1377551fc97a5779778bc7f471ee636ec93a1f9cfc7d23a"} Jan 29 08:12:39 crc kubenswrapper[5017]: I0129 08:12:39.936670 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hll7k" event={"ID":"24ca248f-ec66-4853-925c-0bba88b0d7d4","Type":"ContainerDied","Data":"7e4dc0cce7d1e5e2de284467e3cddddd4ff55a1d546f2179c10e81595a263bb9"} Jan 29 08:12:39 crc kubenswrapper[5017]: I0129 08:12:39.936693 5017 scope.go:117] "RemoveContainer" containerID="396a7f2b2c2fd9d3b1377551fc97a5779778bc7f471ee636ec93a1f9cfc7d23a" Jan 29 08:12:39 crc kubenswrapper[5017]: I0129 08:12:39.937191 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hll7k" Jan 29 08:12:39 crc kubenswrapper[5017]: I0129 08:12:39.938199 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24ca248f-ec66-4853-925c-0bba88b0d7d4-utilities" (OuterVolumeSpecName: "utilities") pod "24ca248f-ec66-4853-925c-0bba88b0d7d4" (UID: "24ca248f-ec66-4853-925c-0bba88b0d7d4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:12:39 crc kubenswrapper[5017]: I0129 08:12:39.939774 5017 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24ca248f-ec66-4853-925c-0bba88b0d7d4-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 08:12:39 crc kubenswrapper[5017]: I0129 08:12:39.948093 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24ca248f-ec66-4853-925c-0bba88b0d7d4-kube-api-access-rf2fm" (OuterVolumeSpecName: "kube-api-access-rf2fm") pod "24ca248f-ec66-4853-925c-0bba88b0d7d4" (UID: "24ca248f-ec66-4853-925c-0bba88b0d7d4"). InnerVolumeSpecName "kube-api-access-rf2fm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:12:39 crc kubenswrapper[5017]: I0129 08:12:39.962013 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24ca248f-ec66-4853-925c-0bba88b0d7d4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "24ca248f-ec66-4853-925c-0bba88b0d7d4" (UID: "24ca248f-ec66-4853-925c-0bba88b0d7d4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:12:40 crc kubenswrapper[5017]: I0129 08:12:40.029200 5017 scope.go:117] "RemoveContainer" containerID="8a190671368dea048c444628144cf1df7262578e2da2e846c35ba3b8c9390291" Jan 29 08:12:40 crc kubenswrapper[5017]: I0129 08:12:40.042501 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rf2fm\" (UniqueName: \"kubernetes.io/projected/24ca248f-ec66-4853-925c-0bba88b0d7d4-kube-api-access-rf2fm\") on node \"crc\" DevicePath \"\"" Jan 29 08:12:40 crc kubenswrapper[5017]: I0129 08:12:40.042846 5017 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24ca248f-ec66-4853-925c-0bba88b0d7d4-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 08:12:40 crc kubenswrapper[5017]: I0129 08:12:40.071251 5017 scope.go:117] "RemoveContainer" containerID="a211280f2719167139c8e91f49b7969a864fb6a0b0f4e12cc0d95624d293ffca" Jan 29 08:12:40 crc kubenswrapper[5017]: I0129 08:12:40.112397 5017 scope.go:117] "RemoveContainer" containerID="396a7f2b2c2fd9d3b1377551fc97a5779778bc7f471ee636ec93a1f9cfc7d23a" Jan 29 08:12:40 crc kubenswrapper[5017]: E0129 08:12:40.113519 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"396a7f2b2c2fd9d3b1377551fc97a5779778bc7f471ee636ec93a1f9cfc7d23a\": container with ID starting with 396a7f2b2c2fd9d3b1377551fc97a5779778bc7f471ee636ec93a1f9cfc7d23a not found: ID does not exist" containerID="396a7f2b2c2fd9d3b1377551fc97a5779778bc7f471ee636ec93a1f9cfc7d23a" Jan 29 08:12:40 crc kubenswrapper[5017]: I0129 08:12:40.113667 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"396a7f2b2c2fd9d3b1377551fc97a5779778bc7f471ee636ec93a1f9cfc7d23a"} err="failed to get container status \"396a7f2b2c2fd9d3b1377551fc97a5779778bc7f471ee636ec93a1f9cfc7d23a\": rpc error: code = NotFound desc = could not find container \"396a7f2b2c2fd9d3b1377551fc97a5779778bc7f471ee636ec93a1f9cfc7d23a\": container with ID starting with 396a7f2b2c2fd9d3b1377551fc97a5779778bc7f471ee636ec93a1f9cfc7d23a not found: ID does not exist" Jan 29 08:12:40 crc kubenswrapper[5017]: I0129 08:12:40.113777 5017 scope.go:117] "RemoveContainer" containerID="8a190671368dea048c444628144cf1df7262578e2da2e846c35ba3b8c9390291" Jan 29 08:12:40 crc kubenswrapper[5017]: E0129 08:12:40.114618 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a190671368dea048c444628144cf1df7262578e2da2e846c35ba3b8c9390291\": container with ID starting with 8a190671368dea048c444628144cf1df7262578e2da2e846c35ba3b8c9390291 not found: ID does not exist" containerID="8a190671368dea048c444628144cf1df7262578e2da2e846c35ba3b8c9390291" Jan 29 08:12:40 crc kubenswrapper[5017]: I0129 08:12:40.114701 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a190671368dea048c444628144cf1df7262578e2da2e846c35ba3b8c9390291"} err="failed to get container 
status \"8a190671368dea048c444628144cf1df7262578e2da2e846c35ba3b8c9390291\": rpc error: code = NotFound desc = could not find container \"8a190671368dea048c444628144cf1df7262578e2da2e846c35ba3b8c9390291\": container with ID starting with 8a190671368dea048c444628144cf1df7262578e2da2e846c35ba3b8c9390291 not found: ID does not exist" Jan 29 08:12:40 crc kubenswrapper[5017]: I0129 08:12:40.114726 5017 scope.go:117] "RemoveContainer" containerID="a211280f2719167139c8e91f49b7969a864fb6a0b0f4e12cc0d95624d293ffca" Jan 29 08:12:40 crc kubenswrapper[5017]: E0129 08:12:40.115149 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a211280f2719167139c8e91f49b7969a864fb6a0b0f4e12cc0d95624d293ffca\": container with ID starting with a211280f2719167139c8e91f49b7969a864fb6a0b0f4e12cc0d95624d293ffca not found: ID does not exist" containerID="a211280f2719167139c8e91f49b7969a864fb6a0b0f4e12cc0d95624d293ffca" Jan 29 08:12:40 crc kubenswrapper[5017]: I0129 08:12:40.115188 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a211280f2719167139c8e91f49b7969a864fb6a0b0f4e12cc0d95624d293ffca"} err="failed to get container status \"a211280f2719167139c8e91f49b7969a864fb6a0b0f4e12cc0d95624d293ffca\": rpc error: code = NotFound desc = could not find container \"a211280f2719167139c8e91f49b7969a864fb6a0b0f4e12cc0d95624d293ffca\": container with ID starting with a211280f2719167139c8e91f49b7969a864fb6a0b0f4e12cc0d95624d293ffca not found: ID does not exist" Jan 29 08:12:40 crc kubenswrapper[5017]: I0129 08:12:40.278788 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hll7k"] Jan 29 08:12:40 crc kubenswrapper[5017]: I0129 08:12:40.291004 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-hll7k"] Jan 29 08:12:40 crc kubenswrapper[5017]: I0129 08:12:40.328153 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24ca248f-ec66-4853-925c-0bba88b0d7d4" path="/var/lib/kubelet/pods/24ca248f-ec66-4853-925c-0bba88b0d7d4/volumes" Jan 29 08:12:42 crc kubenswrapper[5017]: I0129 08:12:42.416348 5017 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-67957468c7-bwnjw" podUID="3c535b30-9dff-483b-a2d4-2c278dfba773" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.107:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.107:8080: connect: connection refused" Jan 29 08:12:43 crc kubenswrapper[5017]: I0129 08:12:43.129205 5017 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-68bd76c56f-gz2sk" podUID="bd7d0aff-5a86-446e-a1d8-228c27e71a18" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.108:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.108:8080: connect: connection refused" Jan 29 08:12:49 crc kubenswrapper[5017]: I0129 08:12:49.317376 5017 scope.go:117] "RemoveContainer" containerID="45e4c349dc67bf5910f42f2a3df0c872cf61c8acd759f4f9b895eefc7afaf645" Jan 29 08:12:49 crc kubenswrapper[5017]: E0129 08:12:49.318591 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:12:53 crc kubenswrapper[5017]: I0129 08:12:53.040497 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-c45fcbd9c-jkwb8" Jan 29 08:12:53 crc kubenswrapper[5017]: I0129 08:12:53.128163 5017 generic.go:334] "Generic (PLEG): container finished" podID="4075638f-fbb3-488a-83b6-a7a0321ca8ff" containerID="e2d43711073d1ddb622684d810635ed10bac3766ee8d2fcf416450201f5ae4e4" exitCode=137 Jan 29 08:12:53 crc kubenswrapper[5017]: I0129 08:12:53.128203 5017 generic.go:334] "Generic (PLEG): container finished" podID="4075638f-fbb3-488a-83b6-a7a0321ca8ff" containerID="c409e93040ed14d338c6e54ee4f7806afc6e6fd0f227853c577c5abb0e48c2e2" exitCode=137 Jan 29 08:12:53 crc kubenswrapper[5017]: I0129 08:12:53.128234 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-c45fcbd9c-jkwb8" event={"ID":"4075638f-fbb3-488a-83b6-a7a0321ca8ff","Type":"ContainerDied","Data":"e2d43711073d1ddb622684d810635ed10bac3766ee8d2fcf416450201f5ae4e4"} Jan 29 08:12:53 crc kubenswrapper[5017]: I0129 08:12:53.128272 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-c45fcbd9c-jkwb8" event={"ID":"4075638f-fbb3-488a-83b6-a7a0321ca8ff","Type":"ContainerDied","Data":"c409e93040ed14d338c6e54ee4f7806afc6e6fd0f227853c577c5abb0e48c2e2"} Jan 29 08:12:53 crc kubenswrapper[5017]: I0129 08:12:53.128284 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-c45fcbd9c-jkwb8" event={"ID":"4075638f-fbb3-488a-83b6-a7a0321ca8ff","Type":"ContainerDied","Data":"16cb74d44120a8110c037e309a11d7d1b327db1dd1277f50d463869210f20cae"} Jan 29 08:12:53 crc kubenswrapper[5017]: I0129 08:12:53.128302 5017 scope.go:117] "RemoveContainer" containerID="e2d43711073d1ddb622684d810635ed10bac3766ee8d2fcf416450201f5ae4e4" Jan 29 08:12:53 crc kubenswrapper[5017]: I0129 08:12:53.128536 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-c45fcbd9c-jkwb8" Jan 29 08:12:53 crc kubenswrapper[5017]: I0129 08:12:53.154558 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4075638f-fbb3-488a-83b6-a7a0321ca8ff-logs\") pod \"4075638f-fbb3-488a-83b6-a7a0321ca8ff\" (UID: \"4075638f-fbb3-488a-83b6-a7a0321ca8ff\") " Jan 29 08:12:53 crc kubenswrapper[5017]: I0129 08:12:53.154736 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q2fxh\" (UniqueName: \"kubernetes.io/projected/4075638f-fbb3-488a-83b6-a7a0321ca8ff-kube-api-access-q2fxh\") pod \"4075638f-fbb3-488a-83b6-a7a0321ca8ff\" (UID: \"4075638f-fbb3-488a-83b6-a7a0321ca8ff\") " Jan 29 08:12:53 crc kubenswrapper[5017]: I0129 08:12:53.154781 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4075638f-fbb3-488a-83b6-a7a0321ca8ff-horizon-secret-key\") pod \"4075638f-fbb3-488a-83b6-a7a0321ca8ff\" (UID: \"4075638f-fbb3-488a-83b6-a7a0321ca8ff\") " Jan 29 08:12:53 crc kubenswrapper[5017]: I0129 08:12:53.155127 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4075638f-fbb3-488a-83b6-a7a0321ca8ff-scripts\") pod \"4075638f-fbb3-488a-83b6-a7a0321ca8ff\" (UID: \"4075638f-fbb3-488a-83b6-a7a0321ca8ff\") " Jan 29 08:12:53 crc kubenswrapper[5017]: I0129 08:12:53.155152 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4075638f-fbb3-488a-83b6-a7a0321ca8ff-logs" (OuterVolumeSpecName: "logs") pod "4075638f-fbb3-488a-83b6-a7a0321ca8ff" (UID: "4075638f-fbb3-488a-83b6-a7a0321ca8ff"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:12:53 crc kubenswrapper[5017]: I0129 08:12:53.155223 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4075638f-fbb3-488a-83b6-a7a0321ca8ff-config-data\") pod \"4075638f-fbb3-488a-83b6-a7a0321ca8ff\" (UID: \"4075638f-fbb3-488a-83b6-a7a0321ca8ff\") " Jan 29 08:12:53 crc kubenswrapper[5017]: I0129 08:12:53.156027 5017 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4075638f-fbb3-488a-83b6-a7a0321ca8ff-logs\") on node \"crc\" DevicePath \"\"" Jan 29 08:12:53 crc kubenswrapper[5017]: I0129 08:12:53.169555 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4075638f-fbb3-488a-83b6-a7a0321ca8ff-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "4075638f-fbb3-488a-83b6-a7a0321ca8ff" (UID: "4075638f-fbb3-488a-83b6-a7a0321ca8ff"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:12:53 crc kubenswrapper[5017]: I0129 08:12:53.179668 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4075638f-fbb3-488a-83b6-a7a0321ca8ff-kube-api-access-q2fxh" (OuterVolumeSpecName: "kube-api-access-q2fxh") pod "4075638f-fbb3-488a-83b6-a7a0321ca8ff" (UID: "4075638f-fbb3-488a-83b6-a7a0321ca8ff"). InnerVolumeSpecName "kube-api-access-q2fxh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:12:53 crc kubenswrapper[5017]: I0129 08:12:53.197891 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4075638f-fbb3-488a-83b6-a7a0321ca8ff-config-data" (OuterVolumeSpecName: "config-data") pod "4075638f-fbb3-488a-83b6-a7a0321ca8ff" (UID: "4075638f-fbb3-488a-83b6-a7a0321ca8ff"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:12:53 crc kubenswrapper[5017]: I0129 08:12:53.206348 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4075638f-fbb3-488a-83b6-a7a0321ca8ff-scripts" (OuterVolumeSpecName: "scripts") pod "4075638f-fbb3-488a-83b6-a7a0321ca8ff" (UID: "4075638f-fbb3-488a-83b6-a7a0321ca8ff"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:12:53 crc kubenswrapper[5017]: I0129 08:12:53.258454 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q2fxh\" (UniqueName: \"kubernetes.io/projected/4075638f-fbb3-488a-83b6-a7a0321ca8ff-kube-api-access-q2fxh\") on node \"crc\" DevicePath \"\"" Jan 29 08:12:53 crc kubenswrapper[5017]: I0129 08:12:53.258502 5017 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4075638f-fbb3-488a-83b6-a7a0321ca8ff-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 29 08:12:53 crc kubenswrapper[5017]: I0129 08:12:53.258518 5017 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4075638f-fbb3-488a-83b6-a7a0321ca8ff-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 08:12:53 crc kubenswrapper[5017]: I0129 08:12:53.258528 5017 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4075638f-fbb3-488a-83b6-a7a0321ca8ff-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 08:12:53 crc kubenswrapper[5017]: I0129 08:12:53.328030 5017 scope.go:117] "RemoveContainer" containerID="c409e93040ed14d338c6e54ee4f7806afc6e6fd0f227853c577c5abb0e48c2e2" Jan 29 08:12:53 crc kubenswrapper[5017]: I0129 08:12:53.373654 5017 scope.go:117] "RemoveContainer" containerID="e2d43711073d1ddb622684d810635ed10bac3766ee8d2fcf416450201f5ae4e4" Jan 29 08:12:53 crc kubenswrapper[5017]: E0129 08:12:53.374403 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2d43711073d1ddb622684d810635ed10bac3766ee8d2fcf416450201f5ae4e4\": container with ID starting with e2d43711073d1ddb622684d810635ed10bac3766ee8d2fcf416450201f5ae4e4 not found: ID does not exist" containerID="e2d43711073d1ddb622684d810635ed10bac3766ee8d2fcf416450201f5ae4e4" Jan 29 08:12:53 crc kubenswrapper[5017]: I0129 08:12:53.374473 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2d43711073d1ddb622684d810635ed10bac3766ee8d2fcf416450201f5ae4e4"} err="failed to get container status \"e2d43711073d1ddb622684d810635ed10bac3766ee8d2fcf416450201f5ae4e4\": rpc error: code = NotFound desc = could not find container \"e2d43711073d1ddb622684d810635ed10bac3766ee8d2fcf416450201f5ae4e4\": container with ID starting with e2d43711073d1ddb622684d810635ed10bac3766ee8d2fcf416450201f5ae4e4 not found: ID does not exist" Jan 29 08:12:53 crc kubenswrapper[5017]: I0129 08:12:53.374505 5017 scope.go:117] "RemoveContainer" 
containerID="c409e93040ed14d338c6e54ee4f7806afc6e6fd0f227853c577c5abb0e48c2e2" Jan 29 08:12:53 crc kubenswrapper[5017]: E0129 08:12:53.375017 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c409e93040ed14d338c6e54ee4f7806afc6e6fd0f227853c577c5abb0e48c2e2\": container with ID starting with c409e93040ed14d338c6e54ee4f7806afc6e6fd0f227853c577c5abb0e48c2e2 not found: ID does not exist" containerID="c409e93040ed14d338c6e54ee4f7806afc6e6fd0f227853c577c5abb0e48c2e2" Jan 29 08:12:53 crc kubenswrapper[5017]: I0129 08:12:53.375120 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c409e93040ed14d338c6e54ee4f7806afc6e6fd0f227853c577c5abb0e48c2e2"} err="failed to get container status \"c409e93040ed14d338c6e54ee4f7806afc6e6fd0f227853c577c5abb0e48c2e2\": rpc error: code = NotFound desc = could not find container \"c409e93040ed14d338c6e54ee4f7806afc6e6fd0f227853c577c5abb0e48c2e2\": container with ID starting with c409e93040ed14d338c6e54ee4f7806afc6e6fd0f227853c577c5abb0e48c2e2 not found: ID does not exist" Jan 29 08:12:53 crc kubenswrapper[5017]: I0129 08:12:53.375155 5017 scope.go:117] "RemoveContainer" containerID="e2d43711073d1ddb622684d810635ed10bac3766ee8d2fcf416450201f5ae4e4" Jan 29 08:12:53 crc kubenswrapper[5017]: I0129 08:12:53.375566 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2d43711073d1ddb622684d810635ed10bac3766ee8d2fcf416450201f5ae4e4"} err="failed to get container status \"e2d43711073d1ddb622684d810635ed10bac3766ee8d2fcf416450201f5ae4e4\": rpc error: code = NotFound desc = could not find container \"e2d43711073d1ddb622684d810635ed10bac3766ee8d2fcf416450201f5ae4e4\": container with ID starting with e2d43711073d1ddb622684d810635ed10bac3766ee8d2fcf416450201f5ae4e4 not found: ID does not exist" Jan 29 08:12:53 crc kubenswrapper[5017]: I0129 08:12:53.375613 5017 scope.go:117] "RemoveContainer" containerID="c409e93040ed14d338c6e54ee4f7806afc6e6fd0f227853c577c5abb0e48c2e2" Jan 29 08:12:53 crc kubenswrapper[5017]: I0129 08:12:53.375899 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c409e93040ed14d338c6e54ee4f7806afc6e6fd0f227853c577c5abb0e48c2e2"} err="failed to get container status \"c409e93040ed14d338c6e54ee4f7806afc6e6fd0f227853c577c5abb0e48c2e2\": rpc error: code = NotFound desc = could not find container \"c409e93040ed14d338c6e54ee4f7806afc6e6fd0f227853c577c5abb0e48c2e2\": container with ID starting with c409e93040ed14d338c6e54ee4f7806afc6e6fd0f227853c577c5abb0e48c2e2 not found: ID does not exist" Jan 29 08:12:53 crc kubenswrapper[5017]: I0129 08:12:53.475697 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-c45fcbd9c-jkwb8"] Jan 29 08:12:53 crc kubenswrapper[5017]: I0129 08:12:53.489319 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-c45fcbd9c-jkwb8"] Jan 29 08:12:54 crc kubenswrapper[5017]: I0129 08:12:54.334202 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4075638f-fbb3-488a-83b6-a7a0321ca8ff" path="/var/lib/kubelet/pods/4075638f-fbb3-488a-83b6-a7a0321ca8ff/volumes" Jan 29 08:12:54 crc kubenswrapper[5017]: I0129 08:12:54.353350 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-67957468c7-bwnjw" Jan 29 08:12:54 crc kubenswrapper[5017]: I0129 08:12:54.986333 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openstack/horizon-68bd76c56f-gz2sk" Jan 29 08:12:56 crc kubenswrapper[5017]: I0129 08:12:56.344071 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-67957468c7-bwnjw" Jan 29 08:12:56 crc kubenswrapper[5017]: I0129 08:12:56.992145 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-68bd76c56f-gz2sk" Jan 29 08:12:57 crc kubenswrapper[5017]: I0129 08:12:57.078045 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-67957468c7-bwnjw"] Jan 29 08:12:57 crc kubenswrapper[5017]: I0129 08:12:57.189305 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-67957468c7-bwnjw" podUID="3c535b30-9dff-483b-a2d4-2c278dfba773" containerName="horizon-log" containerID="cri-o://3aad72d811e64531c6b825799f95e70e7b6fb7500ea341e54c9048a2b7cf40a5" gracePeriod=30 Jan 29 08:12:57 crc kubenswrapper[5017]: I0129 08:12:57.189372 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-67957468c7-bwnjw" podUID="3c535b30-9dff-483b-a2d4-2c278dfba773" containerName="horizon" containerID="cri-o://0fdfa3c17a6578b759ae683b8d7702bd36abcf726fc7b1de9257c8e0c8873dc8" gracePeriod=30 Jan 29 08:13:01 crc kubenswrapper[5017]: I0129 08:13:01.230118 5017 generic.go:334] "Generic (PLEG): container finished" podID="3c535b30-9dff-483b-a2d4-2c278dfba773" containerID="0fdfa3c17a6578b759ae683b8d7702bd36abcf726fc7b1de9257c8e0c8873dc8" exitCode=0 Jan 29 08:13:01 crc kubenswrapper[5017]: I0129 08:13:01.230343 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-67957468c7-bwnjw" event={"ID":"3c535b30-9dff-483b-a2d4-2c278dfba773","Type":"ContainerDied","Data":"0fdfa3c17a6578b759ae683b8d7702bd36abcf726fc7b1de9257c8e0c8873dc8"} Jan 29 08:13:02 crc kubenswrapper[5017]: I0129 08:13:02.416913 5017 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-67957468c7-bwnjw" podUID="3c535b30-9dff-483b-a2d4-2c278dfba773" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.107:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.107:8080: connect: connection refused" Jan 29 08:13:03 crc kubenswrapper[5017]: I0129 08:13:03.561758 5017 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-4pmt6" podUID="6bd9b034-cfec-4194-9b45-318ed8625994" containerName="hostpath-provisioner" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 08:13:04 crc kubenswrapper[5017]: I0129 08:13:04.199677 5017 trace.go:236] Trace[1877112740]: "Calculate volume metrics of ovndbcluster-nb-etc-ovn for pod openstack/ovsdbserver-nb-2" (29-Jan-2026 08:13:02.576) (total time: 1622ms): Jan 29 08:13:04 crc kubenswrapper[5017]: Trace[1877112740]: [1.622898294s] [1.622898294s] END Jan 29 08:13:04 crc kubenswrapper[5017]: I0129 08:13:04.324332 5017 scope.go:117] "RemoveContainer" containerID="45e4c349dc67bf5910f42f2a3df0c872cf61c8acd759f4f9b895eefc7afaf645" Jan 29 08:13:04 crc kubenswrapper[5017]: E0129 08:13:04.324908 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" 
podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:13:06 crc kubenswrapper[5017]: I0129 08:13:06.749288 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5fb5dbb6f-r76xr"] Jan 29 08:13:06 crc kubenswrapper[5017]: E0129 08:13:06.750788 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24ca248f-ec66-4853-925c-0bba88b0d7d4" containerName="registry-server" Jan 29 08:13:06 crc kubenswrapper[5017]: I0129 08:13:06.750807 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="24ca248f-ec66-4853-925c-0bba88b0d7d4" containerName="registry-server" Jan 29 08:13:06 crc kubenswrapper[5017]: E0129 08:13:06.750826 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4075638f-fbb3-488a-83b6-a7a0321ca8ff" containerName="horizon" Jan 29 08:13:06 crc kubenswrapper[5017]: I0129 08:13:06.750833 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="4075638f-fbb3-488a-83b6-a7a0321ca8ff" containerName="horizon" Jan 29 08:13:06 crc kubenswrapper[5017]: E0129 08:13:06.750858 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4075638f-fbb3-488a-83b6-a7a0321ca8ff" containerName="horizon-log" Jan 29 08:13:06 crc kubenswrapper[5017]: I0129 08:13:06.750865 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="4075638f-fbb3-488a-83b6-a7a0321ca8ff" containerName="horizon-log" Jan 29 08:13:06 crc kubenswrapper[5017]: E0129 08:13:06.750877 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24ca248f-ec66-4853-925c-0bba88b0d7d4" containerName="extract-utilities" Jan 29 08:13:06 crc kubenswrapper[5017]: I0129 08:13:06.750883 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="24ca248f-ec66-4853-925c-0bba88b0d7d4" containerName="extract-utilities" Jan 29 08:13:06 crc kubenswrapper[5017]: E0129 08:13:06.750905 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24ca248f-ec66-4853-925c-0bba88b0d7d4" containerName="extract-content" Jan 29 08:13:06 crc kubenswrapper[5017]: I0129 08:13:06.750911 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="24ca248f-ec66-4853-925c-0bba88b0d7d4" containerName="extract-content" Jan 29 08:13:06 crc kubenswrapper[5017]: I0129 08:13:06.751189 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="4075638f-fbb3-488a-83b6-a7a0321ca8ff" containerName="horizon-log" Jan 29 08:13:06 crc kubenswrapper[5017]: I0129 08:13:06.751211 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="24ca248f-ec66-4853-925c-0bba88b0d7d4" containerName="registry-server" Jan 29 08:13:06 crc kubenswrapper[5017]: I0129 08:13:06.751221 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="4075638f-fbb3-488a-83b6-a7a0321ca8ff" containerName="horizon" Jan 29 08:13:06 crc kubenswrapper[5017]: I0129 08:13:06.752432 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5fb5dbb6f-r76xr" Jan 29 08:13:06 crc kubenswrapper[5017]: I0129 08:13:06.770107 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5fb5dbb6f-r76xr"] Jan 29 08:13:06 crc kubenswrapper[5017]: I0129 08:13:06.807437 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1e5184f9-0919-464f-927e-2fd42d651b76-config-data\") pod \"horizon-5fb5dbb6f-r76xr\" (UID: \"1e5184f9-0919-464f-927e-2fd42d651b76\") " pod="openstack/horizon-5fb5dbb6f-r76xr" Jan 29 08:13:06 crc kubenswrapper[5017]: I0129 08:13:06.807509 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1e5184f9-0919-464f-927e-2fd42d651b76-scripts\") pod \"horizon-5fb5dbb6f-r76xr\" (UID: \"1e5184f9-0919-464f-927e-2fd42d651b76\") " pod="openstack/horizon-5fb5dbb6f-r76xr" Jan 29 08:13:06 crc kubenswrapper[5017]: I0129 08:13:06.807649 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvsgv\" (UniqueName: \"kubernetes.io/projected/1e5184f9-0919-464f-927e-2fd42d651b76-kube-api-access-pvsgv\") pod \"horizon-5fb5dbb6f-r76xr\" (UID: \"1e5184f9-0919-464f-927e-2fd42d651b76\") " pod="openstack/horizon-5fb5dbb6f-r76xr" Jan 29 08:13:06 crc kubenswrapper[5017]: I0129 08:13:06.807710 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e5184f9-0919-464f-927e-2fd42d651b76-logs\") pod \"horizon-5fb5dbb6f-r76xr\" (UID: \"1e5184f9-0919-464f-927e-2fd42d651b76\") " pod="openstack/horizon-5fb5dbb6f-r76xr" Jan 29 08:13:06 crc kubenswrapper[5017]: I0129 08:13:06.807735 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1e5184f9-0919-464f-927e-2fd42d651b76-horizon-secret-key\") pod \"horizon-5fb5dbb6f-r76xr\" (UID: \"1e5184f9-0919-464f-927e-2fd42d651b76\") " pod="openstack/horizon-5fb5dbb6f-r76xr" Jan 29 08:13:06 crc kubenswrapper[5017]: I0129 08:13:06.910105 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1e5184f9-0919-464f-927e-2fd42d651b76-config-data\") pod \"horizon-5fb5dbb6f-r76xr\" (UID: \"1e5184f9-0919-464f-927e-2fd42d651b76\") " pod="openstack/horizon-5fb5dbb6f-r76xr" Jan 29 08:13:06 crc kubenswrapper[5017]: I0129 08:13:06.910200 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1e5184f9-0919-464f-927e-2fd42d651b76-scripts\") pod \"horizon-5fb5dbb6f-r76xr\" (UID: \"1e5184f9-0919-464f-927e-2fd42d651b76\") " pod="openstack/horizon-5fb5dbb6f-r76xr" Jan 29 08:13:06 crc kubenswrapper[5017]: I0129 08:13:06.911111 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1e5184f9-0919-464f-927e-2fd42d651b76-scripts\") pod \"horizon-5fb5dbb6f-r76xr\" (UID: \"1e5184f9-0919-464f-927e-2fd42d651b76\") " pod="openstack/horizon-5fb5dbb6f-r76xr" Jan 29 08:13:06 crc kubenswrapper[5017]: I0129 08:13:06.911311 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvsgv\" (UniqueName: 
\"kubernetes.io/projected/1e5184f9-0919-464f-927e-2fd42d651b76-kube-api-access-pvsgv\") pod \"horizon-5fb5dbb6f-r76xr\" (UID: \"1e5184f9-0919-464f-927e-2fd42d651b76\") " pod="openstack/horizon-5fb5dbb6f-r76xr" Jan 29 08:13:06 crc kubenswrapper[5017]: I0129 08:13:06.911482 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e5184f9-0919-464f-927e-2fd42d651b76-logs\") pod \"horizon-5fb5dbb6f-r76xr\" (UID: \"1e5184f9-0919-464f-927e-2fd42d651b76\") " pod="openstack/horizon-5fb5dbb6f-r76xr" Jan 29 08:13:06 crc kubenswrapper[5017]: I0129 08:13:06.911517 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1e5184f9-0919-464f-927e-2fd42d651b76-horizon-secret-key\") pod \"horizon-5fb5dbb6f-r76xr\" (UID: \"1e5184f9-0919-464f-927e-2fd42d651b76\") " pod="openstack/horizon-5fb5dbb6f-r76xr" Jan 29 08:13:06 crc kubenswrapper[5017]: I0129 08:13:06.911581 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1e5184f9-0919-464f-927e-2fd42d651b76-config-data\") pod \"horizon-5fb5dbb6f-r76xr\" (UID: \"1e5184f9-0919-464f-927e-2fd42d651b76\") " pod="openstack/horizon-5fb5dbb6f-r76xr" Jan 29 08:13:06 crc kubenswrapper[5017]: I0129 08:13:06.912172 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e5184f9-0919-464f-927e-2fd42d651b76-logs\") pod \"horizon-5fb5dbb6f-r76xr\" (UID: \"1e5184f9-0919-464f-927e-2fd42d651b76\") " pod="openstack/horizon-5fb5dbb6f-r76xr" Jan 29 08:13:06 crc kubenswrapper[5017]: I0129 08:13:06.935558 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1e5184f9-0919-464f-927e-2fd42d651b76-horizon-secret-key\") pod \"horizon-5fb5dbb6f-r76xr\" (UID: \"1e5184f9-0919-464f-927e-2fd42d651b76\") " pod="openstack/horizon-5fb5dbb6f-r76xr" Jan 29 08:13:06 crc kubenswrapper[5017]: I0129 08:13:06.935704 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvsgv\" (UniqueName: \"kubernetes.io/projected/1e5184f9-0919-464f-927e-2fd42d651b76-kube-api-access-pvsgv\") pod \"horizon-5fb5dbb6f-r76xr\" (UID: \"1e5184f9-0919-464f-927e-2fd42d651b76\") " pod="openstack/horizon-5fb5dbb6f-r76xr" Jan 29 08:13:07 crc kubenswrapper[5017]: I0129 08:13:07.083400 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5fb5dbb6f-r76xr" Jan 29 08:13:07 crc kubenswrapper[5017]: I0129 08:13:07.669563 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5fb5dbb6f-r76xr"] Jan 29 08:13:08 crc kubenswrapper[5017]: I0129 08:13:08.199942 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-create-dxdg2"] Jan 29 08:13:08 crc kubenswrapper[5017]: I0129 08:13:08.205299 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-dxdg2" Jan 29 08:13:08 crc kubenswrapper[5017]: I0129 08:13:08.215295 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-dxdg2"] Jan 29 08:13:08 crc kubenswrapper[5017]: I0129 08:13:08.220048 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5fb5dbb6f-r76xr" event={"ID":"1e5184f9-0919-464f-927e-2fd42d651b76","Type":"ContainerStarted","Data":"e835acb64626dc4a694918ec91de5d7bb25a3d36677b3fc066b3d4c09bd1c8dd"} Jan 29 08:13:08 crc kubenswrapper[5017]: I0129 08:13:08.220111 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5fb5dbb6f-r76xr" event={"ID":"1e5184f9-0919-464f-927e-2fd42d651b76","Type":"ContainerStarted","Data":"fac5a53f24b99af3ec6c4f88e18ebd29b8bd400901861061b3d394c274557902"} Jan 29 08:13:08 crc kubenswrapper[5017]: I0129 08:13:08.220130 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5fb5dbb6f-r76xr" event={"ID":"1e5184f9-0919-464f-927e-2fd42d651b76","Type":"ContainerStarted","Data":"c82959c0fa17a5bac0d6dac357b65ac43f349204accc2f6ad7cedadb1e916dc4"} Jan 29 08:13:08 crc kubenswrapper[5017]: I0129 08:13:08.251699 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkxj4\" (UniqueName: \"kubernetes.io/projected/e912e972-f106-4132-b64c-ef779807fe93-kube-api-access-wkxj4\") pod \"heat-db-create-dxdg2\" (UID: \"e912e972-f106-4132-b64c-ef779807fe93\") " pod="openstack/heat-db-create-dxdg2" Jan 29 08:13:08 crc kubenswrapper[5017]: I0129 08:13:08.252223 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e912e972-f106-4132-b64c-ef779807fe93-operator-scripts\") pod \"heat-db-create-dxdg2\" (UID: \"e912e972-f106-4132-b64c-ef779807fe93\") " pod="openstack/heat-db-create-dxdg2" Jan 29 08:13:08 crc kubenswrapper[5017]: I0129 08:13:08.305166 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-5fb5dbb6f-r76xr" podStartSLOduration=2.305133573 podStartE2EDuration="2.305133573s" podCreationTimestamp="2026-01-29 08:13:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:13:08.283548805 +0000 UTC m=+5874.657996425" watchObservedRunningTime="2026-01-29 08:13:08.305133573 +0000 UTC m=+5874.679581183" Jan 29 08:13:08 crc kubenswrapper[5017]: I0129 08:13:08.354477 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wkxj4\" (UniqueName: \"kubernetes.io/projected/e912e972-f106-4132-b64c-ef779807fe93-kube-api-access-wkxj4\") pod \"heat-db-create-dxdg2\" (UID: \"e912e972-f106-4132-b64c-ef779807fe93\") " pod="openstack/heat-db-create-dxdg2" Jan 29 08:13:08 crc kubenswrapper[5017]: I0129 08:13:08.354798 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e912e972-f106-4132-b64c-ef779807fe93-operator-scripts\") pod \"heat-db-create-dxdg2\" (UID: \"e912e972-f106-4132-b64c-ef779807fe93\") " pod="openstack/heat-db-create-dxdg2" Jan 29 08:13:08 crc kubenswrapper[5017]: I0129 08:13:08.356144 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e912e972-f106-4132-b64c-ef779807fe93-operator-scripts\") pod 
\"heat-db-create-dxdg2\" (UID: \"e912e972-f106-4132-b64c-ef779807fe93\") " pod="openstack/heat-db-create-dxdg2" Jan 29 08:13:08 crc kubenswrapper[5017]: I0129 08:13:08.380029 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wkxj4\" (UniqueName: \"kubernetes.io/projected/e912e972-f106-4132-b64c-ef779807fe93-kube-api-access-wkxj4\") pod \"heat-db-create-dxdg2\" (UID: \"e912e972-f106-4132-b64c-ef779807fe93\") " pod="openstack/heat-db-create-dxdg2" Jan 29 08:13:08 crc kubenswrapper[5017]: I0129 08:13:08.407036 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-a5b8-account-create-update-lp9xd"] Jan 29 08:13:08 crc kubenswrapper[5017]: I0129 08:13:08.408804 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-a5b8-account-create-update-lp9xd" Jan 29 08:13:08 crc kubenswrapper[5017]: I0129 08:13:08.411484 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret" Jan 29 08:13:08 crc kubenswrapper[5017]: I0129 08:13:08.425229 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-a5b8-account-create-update-lp9xd"] Jan 29 08:13:08 crc kubenswrapper[5017]: I0129 08:13:08.457331 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8d9j\" (UniqueName: \"kubernetes.io/projected/eb1c4bc9-2a12-45c6-9006-a7c1fef92eb5-kube-api-access-n8d9j\") pod \"heat-a5b8-account-create-update-lp9xd\" (UID: \"eb1c4bc9-2a12-45c6-9006-a7c1fef92eb5\") " pod="openstack/heat-a5b8-account-create-update-lp9xd" Jan 29 08:13:08 crc kubenswrapper[5017]: I0129 08:13:08.457514 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb1c4bc9-2a12-45c6-9006-a7c1fef92eb5-operator-scripts\") pod \"heat-a5b8-account-create-update-lp9xd\" (UID: \"eb1c4bc9-2a12-45c6-9006-a7c1fef92eb5\") " pod="openstack/heat-a5b8-account-create-update-lp9xd" Jan 29 08:13:08 crc kubenswrapper[5017]: I0129 08:13:08.530773 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-dxdg2" Jan 29 08:13:08 crc kubenswrapper[5017]: I0129 08:13:08.559527 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb1c4bc9-2a12-45c6-9006-a7c1fef92eb5-operator-scripts\") pod \"heat-a5b8-account-create-update-lp9xd\" (UID: \"eb1c4bc9-2a12-45c6-9006-a7c1fef92eb5\") " pod="openstack/heat-a5b8-account-create-update-lp9xd" Jan 29 08:13:08 crc kubenswrapper[5017]: I0129 08:13:08.559674 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8d9j\" (UniqueName: \"kubernetes.io/projected/eb1c4bc9-2a12-45c6-9006-a7c1fef92eb5-kube-api-access-n8d9j\") pod \"heat-a5b8-account-create-update-lp9xd\" (UID: \"eb1c4bc9-2a12-45c6-9006-a7c1fef92eb5\") " pod="openstack/heat-a5b8-account-create-update-lp9xd" Jan 29 08:13:08 crc kubenswrapper[5017]: I0129 08:13:08.560890 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb1c4bc9-2a12-45c6-9006-a7c1fef92eb5-operator-scripts\") pod \"heat-a5b8-account-create-update-lp9xd\" (UID: \"eb1c4bc9-2a12-45c6-9006-a7c1fef92eb5\") " pod="openstack/heat-a5b8-account-create-update-lp9xd" Jan 29 08:13:08 crc kubenswrapper[5017]: I0129 08:13:08.581345 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8d9j\" (UniqueName: \"kubernetes.io/projected/eb1c4bc9-2a12-45c6-9006-a7c1fef92eb5-kube-api-access-n8d9j\") pod \"heat-a5b8-account-create-update-lp9xd\" (UID: \"eb1c4bc9-2a12-45c6-9006-a7c1fef92eb5\") " pod="openstack/heat-a5b8-account-create-update-lp9xd" Jan 29 08:13:08 crc kubenswrapper[5017]: I0129 08:13:08.749802 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-a5b8-account-create-update-lp9xd" Jan 29 08:13:09 crc kubenswrapper[5017]: I0129 08:13:09.104682 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-dxdg2"] Jan 29 08:13:09 crc kubenswrapper[5017]: I0129 08:13:09.239095 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-dxdg2" event={"ID":"e912e972-f106-4132-b64c-ef779807fe93","Type":"ContainerStarted","Data":"75be839a9f26ce15f73a0a504c265f2eeeb56548b15e3a9cd396d5a67241ac8a"} Jan 29 08:13:09 crc kubenswrapper[5017]: I0129 08:13:09.405064 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-a5b8-account-create-update-lp9xd"] Jan 29 08:13:10 crc kubenswrapper[5017]: I0129 08:13:10.098260 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-rd2dv"] Jan 29 08:13:10 crc kubenswrapper[5017]: I0129 08:13:10.107329 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-8266-account-create-update-dzrm4"] Jan 29 08:13:10 crc kubenswrapper[5017]: I0129 08:13:10.116716 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-rd2dv"] Jan 29 08:13:10 crc kubenswrapper[5017]: I0129 08:13:10.126413 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-8266-account-create-update-dzrm4"] Jan 29 08:13:10 crc kubenswrapper[5017]: I0129 08:13:10.249842 5017 generic.go:334] "Generic (PLEG): container finished" podID="e912e972-f106-4132-b64c-ef779807fe93" containerID="7457ea8d56578ac81359989230e730e6ac34800d0fc802aed6aad64548b636fd" exitCode=0 Jan 29 08:13:10 crc kubenswrapper[5017]: I0129 08:13:10.249943 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-dxdg2" event={"ID":"e912e972-f106-4132-b64c-ef779807fe93","Type":"ContainerDied","Data":"7457ea8d56578ac81359989230e730e6ac34800d0fc802aed6aad64548b636fd"} Jan 29 08:13:10 crc kubenswrapper[5017]: I0129 08:13:10.253433 5017 generic.go:334] "Generic (PLEG): container finished" podID="eb1c4bc9-2a12-45c6-9006-a7c1fef92eb5" containerID="f54af1d9f040a524032cf7e43f1f957baad75b9647a4d38733efafc121d209b8" exitCode=0 Jan 29 08:13:10 crc kubenswrapper[5017]: I0129 08:13:10.253482 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-a5b8-account-create-update-lp9xd" event={"ID":"eb1c4bc9-2a12-45c6-9006-a7c1fef92eb5","Type":"ContainerDied","Data":"f54af1d9f040a524032cf7e43f1f957baad75b9647a4d38733efafc121d209b8"} Jan 29 08:13:10 crc kubenswrapper[5017]: I0129 08:13:10.253578 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-a5b8-account-create-update-lp9xd" event={"ID":"eb1c4bc9-2a12-45c6-9006-a7c1fef92eb5","Type":"ContainerStarted","Data":"67fbd89b67798272ba0af09a460d6f4d3e2ebcf8c72fab921394a2864c9586f0"} Jan 29 08:13:10 crc kubenswrapper[5017]: I0129 08:13:10.335146 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="203ccd24-c1b1-4e3a-8b76-e47f88f21791" path="/var/lib/kubelet/pods/203ccd24-c1b1-4e3a-8b76-e47f88f21791/volumes" Jan 29 08:13:10 crc kubenswrapper[5017]: I0129 08:13:10.338101 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f48f0b82-d5fe-4687-956c-779a52a0bf67" path="/var/lib/kubelet/pods/f48f0b82-d5fe-4687-956c-779a52a0bf67/volumes" Jan 29 08:13:11 crc kubenswrapper[5017]: I0129 08:13:11.882122 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-dxdg2" Jan 29 08:13:11 crc kubenswrapper[5017]: I0129 08:13:11.896399 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-a5b8-account-create-update-lp9xd" Jan 29 08:13:11 crc kubenswrapper[5017]: I0129 08:13:11.975039 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb1c4bc9-2a12-45c6-9006-a7c1fef92eb5-operator-scripts\") pod \"eb1c4bc9-2a12-45c6-9006-a7c1fef92eb5\" (UID: \"eb1c4bc9-2a12-45c6-9006-a7c1fef92eb5\") " Jan 29 08:13:11 crc kubenswrapper[5017]: I0129 08:13:11.975298 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e912e972-f106-4132-b64c-ef779807fe93-operator-scripts\") pod \"e912e972-f106-4132-b64c-ef779807fe93\" (UID: \"e912e972-f106-4132-b64c-ef779807fe93\") " Jan 29 08:13:11 crc kubenswrapper[5017]: I0129 08:13:11.975410 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wkxj4\" (UniqueName: \"kubernetes.io/projected/e912e972-f106-4132-b64c-ef779807fe93-kube-api-access-wkxj4\") pod \"e912e972-f106-4132-b64c-ef779807fe93\" (UID: \"e912e972-f106-4132-b64c-ef779807fe93\") " Jan 29 08:13:11 crc kubenswrapper[5017]: I0129 08:13:11.975436 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n8d9j\" (UniqueName: \"kubernetes.io/projected/eb1c4bc9-2a12-45c6-9006-a7c1fef92eb5-kube-api-access-n8d9j\") pod \"eb1c4bc9-2a12-45c6-9006-a7c1fef92eb5\" (UID: \"eb1c4bc9-2a12-45c6-9006-a7c1fef92eb5\") " Jan 29 08:13:11 crc kubenswrapper[5017]: I0129 08:13:11.976113 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e912e972-f106-4132-b64c-ef779807fe93-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e912e972-f106-4132-b64c-ef779807fe93" (UID: "e912e972-f106-4132-b64c-ef779807fe93"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:13:11 crc kubenswrapper[5017]: I0129 08:13:11.976206 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb1c4bc9-2a12-45c6-9006-a7c1fef92eb5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "eb1c4bc9-2a12-45c6-9006-a7c1fef92eb5" (UID: "eb1c4bc9-2a12-45c6-9006-a7c1fef92eb5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:13:11 crc kubenswrapper[5017]: I0129 08:13:11.983750 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb1c4bc9-2a12-45c6-9006-a7c1fef92eb5-kube-api-access-n8d9j" (OuterVolumeSpecName: "kube-api-access-n8d9j") pod "eb1c4bc9-2a12-45c6-9006-a7c1fef92eb5" (UID: "eb1c4bc9-2a12-45c6-9006-a7c1fef92eb5"). InnerVolumeSpecName "kube-api-access-n8d9j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:13:11 crc kubenswrapper[5017]: I0129 08:13:11.984408 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e912e972-f106-4132-b64c-ef779807fe93-kube-api-access-wkxj4" (OuterVolumeSpecName: "kube-api-access-wkxj4") pod "e912e972-f106-4132-b64c-ef779807fe93" (UID: "e912e972-f106-4132-b64c-ef779807fe93"). InnerVolumeSpecName "kube-api-access-wkxj4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:13:12 crc kubenswrapper[5017]: I0129 08:13:12.078428 5017 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb1c4bc9-2a12-45c6-9006-a7c1fef92eb5-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 08:13:12 crc kubenswrapper[5017]: I0129 08:13:12.078798 5017 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e912e972-f106-4132-b64c-ef779807fe93-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 08:13:12 crc kubenswrapper[5017]: I0129 08:13:12.078812 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wkxj4\" (UniqueName: \"kubernetes.io/projected/e912e972-f106-4132-b64c-ef779807fe93-kube-api-access-wkxj4\") on node \"crc\" DevicePath \"\"" Jan 29 08:13:12 crc kubenswrapper[5017]: I0129 08:13:12.078826 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n8d9j\" (UniqueName: \"kubernetes.io/projected/eb1c4bc9-2a12-45c6-9006-a7c1fef92eb5-kube-api-access-n8d9j\") on node \"crc\" DevicePath \"\"" Jan 29 08:13:12 crc kubenswrapper[5017]: I0129 08:13:12.277710 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-a5b8-account-create-update-lp9xd" event={"ID":"eb1c4bc9-2a12-45c6-9006-a7c1fef92eb5","Type":"ContainerDied","Data":"67fbd89b67798272ba0af09a460d6f4d3e2ebcf8c72fab921394a2864c9586f0"} Jan 29 08:13:12 crc kubenswrapper[5017]: I0129 08:13:12.277772 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="67fbd89b67798272ba0af09a460d6f4d3e2ebcf8c72fab921394a2864c9586f0" Jan 29 08:13:12 crc kubenswrapper[5017]: I0129 08:13:12.277806 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-a5b8-account-create-update-lp9xd" Jan 29 08:13:12 crc kubenswrapper[5017]: I0129 08:13:12.279912 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-dxdg2" event={"ID":"e912e972-f106-4132-b64c-ef779807fe93","Type":"ContainerDied","Data":"75be839a9f26ce15f73a0a504c265f2eeeb56548b15e3a9cd396d5a67241ac8a"} Jan 29 08:13:12 crc kubenswrapper[5017]: I0129 08:13:12.279985 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="75be839a9f26ce15f73a0a504c265f2eeeb56548b15e3a9cd396d5a67241ac8a" Jan 29 08:13:12 crc kubenswrapper[5017]: I0129 08:13:12.280003 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-dxdg2" Jan 29 08:13:12 crc kubenswrapper[5017]: I0129 08:13:12.416182 5017 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-67957468c7-bwnjw" podUID="3c535b30-9dff-483b-a2d4-2c278dfba773" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.107:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.107:8080: connect: connection refused" Jan 29 08:13:13 crc kubenswrapper[5017]: I0129 08:13:13.466264 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-jh74k"] Jan 29 08:13:13 crc kubenswrapper[5017]: E0129 08:13:13.466864 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e912e972-f106-4132-b64c-ef779807fe93" containerName="mariadb-database-create" Jan 29 08:13:13 crc kubenswrapper[5017]: I0129 08:13:13.466884 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="e912e972-f106-4132-b64c-ef779807fe93" containerName="mariadb-database-create" Jan 29 08:13:13 crc kubenswrapper[5017]: E0129 08:13:13.466902 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb1c4bc9-2a12-45c6-9006-a7c1fef92eb5" containerName="mariadb-account-create-update" Jan 29 08:13:13 crc kubenswrapper[5017]: I0129 08:13:13.466911 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb1c4bc9-2a12-45c6-9006-a7c1fef92eb5" containerName="mariadb-account-create-update" Jan 29 08:13:13 crc kubenswrapper[5017]: I0129 08:13:13.467176 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="e912e972-f106-4132-b64c-ef779807fe93" containerName="mariadb-database-create" Jan 29 08:13:13 crc kubenswrapper[5017]: I0129 08:13:13.467196 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb1c4bc9-2a12-45c6-9006-a7c1fef92eb5" containerName="mariadb-account-create-update" Jan 29 08:13:13 crc kubenswrapper[5017]: I0129 08:13:13.468226 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-jh74k" Jan 29 08:13:13 crc kubenswrapper[5017]: I0129 08:13:13.472277 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Jan 29 08:13:13 crc kubenswrapper[5017]: I0129 08:13:13.479020 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-xttxz" Jan 29 08:13:13 crc kubenswrapper[5017]: I0129 08:13:13.486593 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-jh74k"] Jan 29 08:13:13 crc kubenswrapper[5017]: I0129 08:13:13.517930 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpmrf\" (UniqueName: \"kubernetes.io/projected/d0efccca-8bbf-4612-8c12-1508bdb868cc-kube-api-access-lpmrf\") pod \"heat-db-sync-jh74k\" (UID: \"d0efccca-8bbf-4612-8c12-1508bdb868cc\") " pod="openstack/heat-db-sync-jh74k" Jan 29 08:13:13 crc kubenswrapper[5017]: I0129 08:13:13.518075 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0efccca-8bbf-4612-8c12-1508bdb868cc-config-data\") pod \"heat-db-sync-jh74k\" (UID: \"d0efccca-8bbf-4612-8c12-1508bdb868cc\") " pod="openstack/heat-db-sync-jh74k" Jan 29 08:13:13 crc kubenswrapper[5017]: I0129 08:13:13.518247 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0efccca-8bbf-4612-8c12-1508bdb868cc-combined-ca-bundle\") pod \"heat-db-sync-jh74k\" (UID: \"d0efccca-8bbf-4612-8c12-1508bdb868cc\") " pod="openstack/heat-db-sync-jh74k" Jan 29 08:13:13 crc kubenswrapper[5017]: I0129 08:13:13.620236 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0efccca-8bbf-4612-8c12-1508bdb868cc-config-data\") pod \"heat-db-sync-jh74k\" (UID: \"d0efccca-8bbf-4612-8c12-1508bdb868cc\") " pod="openstack/heat-db-sync-jh74k" Jan 29 08:13:13 crc kubenswrapper[5017]: I0129 08:13:13.620347 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0efccca-8bbf-4612-8c12-1508bdb868cc-combined-ca-bundle\") pod \"heat-db-sync-jh74k\" (UID: \"d0efccca-8bbf-4612-8c12-1508bdb868cc\") " pod="openstack/heat-db-sync-jh74k" Jan 29 08:13:13 crc kubenswrapper[5017]: I0129 08:13:13.620492 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpmrf\" (UniqueName: \"kubernetes.io/projected/d0efccca-8bbf-4612-8c12-1508bdb868cc-kube-api-access-lpmrf\") pod \"heat-db-sync-jh74k\" (UID: \"d0efccca-8bbf-4612-8c12-1508bdb868cc\") " pod="openstack/heat-db-sync-jh74k" Jan 29 08:13:13 crc kubenswrapper[5017]: I0129 08:13:13.627419 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0efccca-8bbf-4612-8c12-1508bdb868cc-config-data\") pod \"heat-db-sync-jh74k\" (UID: \"d0efccca-8bbf-4612-8c12-1508bdb868cc\") " pod="openstack/heat-db-sync-jh74k" Jan 29 08:13:13 crc kubenswrapper[5017]: I0129 08:13:13.628509 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0efccca-8bbf-4612-8c12-1508bdb868cc-combined-ca-bundle\") pod \"heat-db-sync-jh74k\" (UID: \"d0efccca-8bbf-4612-8c12-1508bdb868cc\") " pod="openstack/heat-db-sync-jh74k" 
Jan 29 08:13:13 crc kubenswrapper[5017]: I0129 08:13:13.636824 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpmrf\" (UniqueName: \"kubernetes.io/projected/d0efccca-8bbf-4612-8c12-1508bdb868cc-kube-api-access-lpmrf\") pod \"heat-db-sync-jh74k\" (UID: \"d0efccca-8bbf-4612-8c12-1508bdb868cc\") " pod="openstack/heat-db-sync-jh74k" Jan 29 08:13:13 crc kubenswrapper[5017]: I0129 08:13:13.800646 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-jh74k" Jan 29 08:13:14 crc kubenswrapper[5017]: I0129 08:13:14.442152 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-jh74k"] Jan 29 08:13:15 crc kubenswrapper[5017]: I0129 08:13:15.326667 5017 scope.go:117] "RemoveContainer" containerID="45e4c349dc67bf5910f42f2a3df0c872cf61c8acd759f4f9b895eefc7afaf645" Jan 29 08:13:15 crc kubenswrapper[5017]: E0129 08:13:15.327291 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:13:15 crc kubenswrapper[5017]: I0129 08:13:15.370446 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-jh74k" event={"ID":"d0efccca-8bbf-4612-8c12-1508bdb868cc","Type":"ContainerStarted","Data":"3e1b50339730e2265be59dee2bb0342a697cd68aaa8f247fbb861af764dcfd80"} Jan 29 08:13:16 crc kubenswrapper[5017]: I0129 08:13:16.048851 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-5jcwd"] Jan 29 08:13:16 crc kubenswrapper[5017]: I0129 08:13:16.062900 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-5jcwd"] Jan 29 08:13:16 crc kubenswrapper[5017]: I0129 08:13:16.332422 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b4ffe21-5562-4339-b707-08b117ecce8f" path="/var/lib/kubelet/pods/1b4ffe21-5562-4339-b707-08b117ecce8f/volumes" Jan 29 08:13:17 crc kubenswrapper[5017]: I0129 08:13:17.084266 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5fb5dbb6f-r76xr" Jan 29 08:13:17 crc kubenswrapper[5017]: I0129 08:13:17.086001 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5fb5dbb6f-r76xr" Jan 29 08:13:22 crc kubenswrapper[5017]: I0129 08:13:22.416797 5017 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-67957468c7-bwnjw" podUID="3c535b30-9dff-483b-a2d4-2c278dfba773" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.107:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.107:8080: connect: connection refused" Jan 29 08:13:22 crc kubenswrapper[5017]: I0129 08:13:22.417926 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-67957468c7-bwnjw" Jan 29 08:13:22 crc kubenswrapper[5017]: I0129 08:13:22.458763 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-jh74k" event={"ID":"d0efccca-8bbf-4612-8c12-1508bdb868cc","Type":"ContainerStarted","Data":"4d18d43381e36b6613d4c6d3faa3255024bad0401fc40ea0052af544b28ea5f9"} Jan 29 08:13:22 crc kubenswrapper[5017]: I0129 08:13:22.487547 5017 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-jh74k" podStartSLOduration=1.806506604 podStartE2EDuration="9.487444213s" podCreationTimestamp="2026-01-29 08:13:13 +0000 UTC" firstStartedPulling="2026-01-29 08:13:14.459141514 +0000 UTC m=+5880.833589124" lastFinishedPulling="2026-01-29 08:13:22.140079113 +0000 UTC m=+5888.514526733" observedRunningTime="2026-01-29 08:13:22.475638449 +0000 UTC m=+5888.850086089" watchObservedRunningTime="2026-01-29 08:13:22.487444213 +0000 UTC m=+5888.861891823" Jan 29 08:13:25 crc kubenswrapper[5017]: I0129 08:13:25.037982 5017 scope.go:117] "RemoveContainer" containerID="c51e66d68d04613b9c87ff35a3adeb0a430f02851c79256577987e405af5d776" Jan 29 08:13:25 crc kubenswrapper[5017]: I0129 08:13:25.064996 5017 scope.go:117] "RemoveContainer" containerID="7e9b1392f50de03b869456cb2e23a005dca93670d2e82d9698a8dd0df427434a" Jan 29 08:13:25 crc kubenswrapper[5017]: I0129 08:13:25.117876 5017 scope.go:117] "RemoveContainer" containerID="a573821c78ef9509c0618e4fc6bc30da548231b880713b842998e78f7d2db170" Jan 29 08:13:25 crc kubenswrapper[5017]: I0129 08:13:25.512805 5017 generic.go:334] "Generic (PLEG): container finished" podID="d0efccca-8bbf-4612-8c12-1508bdb868cc" containerID="4d18d43381e36b6613d4c6d3faa3255024bad0401fc40ea0052af544b28ea5f9" exitCode=0 Jan 29 08:13:25 crc kubenswrapper[5017]: I0129 08:13:25.512875 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-jh74k" event={"ID":"d0efccca-8bbf-4612-8c12-1508bdb868cc","Type":"ContainerDied","Data":"4d18d43381e36b6613d4c6d3faa3255024bad0401fc40ea0052af544b28ea5f9"} Jan 29 08:13:26 crc kubenswrapper[5017]: I0129 08:13:26.932523 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-jh74k" Jan 29 08:13:27 crc kubenswrapper[5017]: I0129 08:13:27.086620 5017 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5fb5dbb6f-r76xr" podUID="1e5184f9-0919-464f-927e-2fd42d651b76" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.112:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.112:8080: connect: connection refused" Jan 29 08:13:27 crc kubenswrapper[5017]: I0129 08:13:27.102022 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0efccca-8bbf-4612-8c12-1508bdb868cc-config-data\") pod \"d0efccca-8bbf-4612-8c12-1508bdb868cc\" (UID: \"d0efccca-8bbf-4612-8c12-1508bdb868cc\") " Jan 29 08:13:27 crc kubenswrapper[5017]: I0129 08:13:27.102212 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lpmrf\" (UniqueName: \"kubernetes.io/projected/d0efccca-8bbf-4612-8c12-1508bdb868cc-kube-api-access-lpmrf\") pod \"d0efccca-8bbf-4612-8c12-1508bdb868cc\" (UID: \"d0efccca-8bbf-4612-8c12-1508bdb868cc\") " Jan 29 08:13:27 crc kubenswrapper[5017]: I0129 08:13:27.102252 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0efccca-8bbf-4612-8c12-1508bdb868cc-combined-ca-bundle\") pod \"d0efccca-8bbf-4612-8c12-1508bdb868cc\" (UID: \"d0efccca-8bbf-4612-8c12-1508bdb868cc\") " Jan 29 08:13:27 crc kubenswrapper[5017]: I0129 08:13:27.109211 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0efccca-8bbf-4612-8c12-1508bdb868cc-kube-api-access-lpmrf" (OuterVolumeSpecName: "kube-api-access-lpmrf") 
pod "d0efccca-8bbf-4612-8c12-1508bdb868cc" (UID: "d0efccca-8bbf-4612-8c12-1508bdb868cc"). InnerVolumeSpecName "kube-api-access-lpmrf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:13:27 crc kubenswrapper[5017]: I0129 08:13:27.135759 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0efccca-8bbf-4612-8c12-1508bdb868cc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d0efccca-8bbf-4612-8c12-1508bdb868cc" (UID: "d0efccca-8bbf-4612-8c12-1508bdb868cc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:13:27 crc kubenswrapper[5017]: I0129 08:13:27.194679 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0efccca-8bbf-4612-8c12-1508bdb868cc-config-data" (OuterVolumeSpecName: "config-data") pod "d0efccca-8bbf-4612-8c12-1508bdb868cc" (UID: "d0efccca-8bbf-4612-8c12-1508bdb868cc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:13:27 crc kubenswrapper[5017]: I0129 08:13:27.205791 5017 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0efccca-8bbf-4612-8c12-1508bdb868cc-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 08:13:27 crc kubenswrapper[5017]: I0129 08:13:27.206500 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lpmrf\" (UniqueName: \"kubernetes.io/projected/d0efccca-8bbf-4612-8c12-1508bdb868cc-kube-api-access-lpmrf\") on node \"crc\" DevicePath \"\"" Jan 29 08:13:27 crc kubenswrapper[5017]: I0129 08:13:27.206530 5017 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0efccca-8bbf-4612-8c12-1508bdb868cc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:13:27 crc kubenswrapper[5017]: I0129 08:13:27.537703 5017 generic.go:334] "Generic (PLEG): container finished" podID="3c535b30-9dff-483b-a2d4-2c278dfba773" containerID="3aad72d811e64531c6b825799f95e70e7b6fb7500ea341e54c9048a2b7cf40a5" exitCode=137 Jan 29 08:13:27 crc kubenswrapper[5017]: I0129 08:13:27.537842 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-67957468c7-bwnjw" event={"ID":"3c535b30-9dff-483b-a2d4-2c278dfba773","Type":"ContainerDied","Data":"3aad72d811e64531c6b825799f95e70e7b6fb7500ea341e54c9048a2b7cf40a5"} Jan 29 08:13:27 crc kubenswrapper[5017]: I0129 08:13:27.539943 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-jh74k" event={"ID":"d0efccca-8bbf-4612-8c12-1508bdb868cc","Type":"ContainerDied","Data":"3e1b50339730e2265be59dee2bb0342a697cd68aaa8f247fbb861af764dcfd80"} Jan 29 08:13:27 crc kubenswrapper[5017]: I0129 08:13:27.539995 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e1b50339730e2265be59dee2bb0342a697cd68aaa8f247fbb861af764dcfd80" Jan 29 08:13:27 crc kubenswrapper[5017]: I0129 08:13:27.540076 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-jh74k"
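At this point the horizon container of horizon-67957468c7-bwnjw finishes with exitCode=137 (128 + 9, i.e. SIGKILL): the container was still running when the kill arrived during teardown. The readiness failures logged just before it (connection refused) only mark the pod unready; a readiness probe never kills a container. A hedged sketch of a probe spec that would emit the prober.go lines seen throughout this log; the HTTP path and port are copied from the logged probe URL, while the timing fields are assumptions the log does not expose:

```go
// Sketch of a readiness probe matching the logged failures:
//   Get "http://<podIP>:8080/dashboard/auth/login/?next=/dashboard/"
// Path and port come from the log; the timing values are assumed.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func horizonReadinessProbe() *corev1.Probe {
	return &corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{
			HTTPGet: &corev1.HTTPGetAction{
				Path: "/dashboard/auth/login/?next=/dashboard/",
				Port: intstr.FromInt(8080),
			},
		},
		PeriodSeconds:    10, // assumed
		FailureThreshold: 3,  // assumed
	}
}

func main() {
	fmt.Println(horizonReadinessProbe().HTTPGet.Path)
}
```

The grace-period mechanics show up explicitly further down for the sibling replica horizon-68bd76c56f-gz2sk ("Killing container with a grace period" ... gracePeriod=30): that container manages to exit 0 inside its grace window, whereas exit code 137 here means this one did not.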
Jan 29 08:13:27 crc kubenswrapper[5017]: I0129 08:13:27.569449 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-67957468c7-bwnjw" Jan 29 08:13:27 crc kubenswrapper[5017]: I0129 08:13:27.717507 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3c535b30-9dff-483b-a2d4-2c278dfba773-scripts\") pod \"3c535b30-9dff-483b-a2d4-2c278dfba773\" (UID: \"3c535b30-9dff-483b-a2d4-2c278dfba773\") " Jan 29 08:13:27 crc kubenswrapper[5017]: I0129 08:13:27.717635 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3c535b30-9dff-483b-a2d4-2c278dfba773-horizon-secret-key\") pod \"3c535b30-9dff-483b-a2d4-2c278dfba773\" (UID: \"3c535b30-9dff-483b-a2d4-2c278dfba773\") " Jan 29 08:13:27 crc kubenswrapper[5017]: I0129 08:13:27.717769 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3c535b30-9dff-483b-a2d4-2c278dfba773-config-data\") pod \"3c535b30-9dff-483b-a2d4-2c278dfba773\" (UID: \"3c535b30-9dff-483b-a2d4-2c278dfba773\") " Jan 29 08:13:27 crc kubenswrapper[5017]: I0129 08:13:27.717820 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c535b30-9dff-483b-a2d4-2c278dfba773-logs\") pod \"3c535b30-9dff-483b-a2d4-2c278dfba773\" (UID: \"3c535b30-9dff-483b-a2d4-2c278dfba773\") " Jan 29 08:13:27 crc kubenswrapper[5017]: I0129 08:13:27.717895 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p745n\" (UniqueName: \"kubernetes.io/projected/3c535b30-9dff-483b-a2d4-2c278dfba773-kube-api-access-p745n\") pod \"3c535b30-9dff-483b-a2d4-2c278dfba773\" (UID: \"3c535b30-9dff-483b-a2d4-2c278dfba773\") " Jan 29 08:13:27 crc kubenswrapper[5017]: I0129 08:13:27.719408 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c535b30-9dff-483b-a2d4-2c278dfba773-logs" (OuterVolumeSpecName: "logs") pod "3c535b30-9dff-483b-a2d4-2c278dfba773" (UID: "3c535b30-9dff-483b-a2d4-2c278dfba773"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:13:27 crc kubenswrapper[5017]: I0129 08:13:27.724574 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c535b30-9dff-483b-a2d4-2c278dfba773-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "3c535b30-9dff-483b-a2d4-2c278dfba773" (UID: "3c535b30-9dff-483b-a2d4-2c278dfba773"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:13:27 crc kubenswrapper[5017]: I0129 08:13:27.724736 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c535b30-9dff-483b-a2d4-2c278dfba773-kube-api-access-p745n" (OuterVolumeSpecName: "kube-api-access-p745n") pod "3c535b30-9dff-483b-a2d4-2c278dfba773" (UID: "3c535b30-9dff-483b-a2d4-2c278dfba773"). InnerVolumeSpecName "kube-api-access-p745n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:13:27 crc kubenswrapper[5017]: I0129 08:13:27.745098 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c535b30-9dff-483b-a2d4-2c278dfba773-config-data" (OuterVolumeSpecName: "config-data") pod "3c535b30-9dff-483b-a2d4-2c278dfba773" (UID: "3c535b30-9dff-483b-a2d4-2c278dfba773"). InnerVolumeSpecName "config-data".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:13:27 crc kubenswrapper[5017]: I0129 08:13:27.753158 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c535b30-9dff-483b-a2d4-2c278dfba773-scripts" (OuterVolumeSpecName: "scripts") pod "3c535b30-9dff-483b-a2d4-2c278dfba773" (UID: "3c535b30-9dff-483b-a2d4-2c278dfba773"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:13:27 crc kubenswrapper[5017]: I0129 08:13:27.820578 5017 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3c535b30-9dff-483b-a2d4-2c278dfba773-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 08:13:27 crc kubenswrapper[5017]: I0129 08:13:27.821110 5017 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3c535b30-9dff-483b-a2d4-2c278dfba773-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 29 08:13:27 crc kubenswrapper[5017]: I0129 08:13:27.821124 5017 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3c535b30-9dff-483b-a2d4-2c278dfba773-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 08:13:27 crc kubenswrapper[5017]: I0129 08:13:27.821134 5017 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c535b30-9dff-483b-a2d4-2c278dfba773-logs\") on node \"crc\" DevicePath \"\"" Jan 29 08:13:27 crc kubenswrapper[5017]: I0129 08:13:27.821148 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p745n\" (UniqueName: \"kubernetes.io/projected/3c535b30-9dff-483b-a2d4-2c278dfba773-kube-api-access-p745n\") on node \"crc\" DevicePath \"\"" Jan 29 08:13:28 crc kubenswrapper[5017]: I0129 08:13:28.330893 5017 scope.go:117] "RemoveContainer" containerID="45e4c349dc67bf5910f42f2a3df0c872cf61c8acd759f4f9b895eefc7afaf645" Jan 29 08:13:28 crc kubenswrapper[5017]: E0129 08:13:28.331437 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:13:28 crc kubenswrapper[5017]: I0129 08:13:28.542228 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-69874cd655-hjsnn"] Jan 29 08:13:28 crc kubenswrapper[5017]: E0129 08:13:28.542820 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c535b30-9dff-483b-a2d4-2c278dfba773" containerName="horizon-log" Jan 29 08:13:28 crc kubenswrapper[5017]: I0129 08:13:28.542847 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c535b30-9dff-483b-a2d4-2c278dfba773" containerName="horizon-log" Jan 29 08:13:28 crc kubenswrapper[5017]: E0129 08:13:28.542896 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0efccca-8bbf-4612-8c12-1508bdb868cc" containerName="heat-db-sync" Jan 29 08:13:28 crc kubenswrapper[5017]: I0129 08:13:28.542907 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0efccca-8bbf-4612-8c12-1508bdb868cc" containerName="heat-db-sync" Jan 29 08:13:28 crc kubenswrapper[5017]: E0129 08:13:28.542928 5017 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="3c535b30-9dff-483b-a2d4-2c278dfba773" containerName="horizon" Jan 29 08:13:28 crc kubenswrapper[5017]: I0129 08:13:28.542937 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c535b30-9dff-483b-a2d4-2c278dfba773" containerName="horizon" Jan 29 08:13:28 crc kubenswrapper[5017]: I0129 08:13:28.543204 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c535b30-9dff-483b-a2d4-2c278dfba773" containerName="horizon-log" Jan 29 08:13:28 crc kubenswrapper[5017]: I0129 08:13:28.543235 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c535b30-9dff-483b-a2d4-2c278dfba773" containerName="horizon" Jan 29 08:13:28 crc kubenswrapper[5017]: I0129 08:13:28.543249 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0efccca-8bbf-4612-8c12-1508bdb868cc" containerName="heat-db-sync" Jan 29 08:13:28 crc kubenswrapper[5017]: I0129 08:13:28.544222 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-69874cd655-hjsnn" Jan 29 08:13:28 crc kubenswrapper[5017]: I0129 08:13:28.548804 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-xttxz" Jan 29 08:13:28 crc kubenswrapper[5017]: I0129 08:13:28.549107 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Jan 29 08:13:28 crc kubenswrapper[5017]: I0129 08:13:28.549258 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data" Jan 29 08:13:28 crc kubenswrapper[5017]: I0129 08:13:28.554512 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-67957468c7-bwnjw" event={"ID":"3c535b30-9dff-483b-a2d4-2c278dfba773","Type":"ContainerDied","Data":"656e1bee6df2bacb6e7f8652552e6b0191f209e44f2c1bac01a0cfae4c46e492"} Jan 29 08:13:28 crc kubenswrapper[5017]: I0129 08:13:28.554589 5017 scope.go:117] "RemoveContainer" containerID="0fdfa3c17a6578b759ae683b8d7702bd36abcf726fc7b1de9257c8e0c8873dc8" Jan 29 08:13:28 crc kubenswrapper[5017]: I0129 08:13:28.554821 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-67957468c7-bwnjw" Jan 29 08:13:28 crc kubenswrapper[5017]: I0129 08:13:28.588866 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-69874cd655-hjsnn"] Jan 29 08:13:28 crc kubenswrapper[5017]: I0129 08:13:28.631073 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-67957468c7-bwnjw"] Jan 29 08:13:28 crc kubenswrapper[5017]: I0129 08:13:28.646834 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbrtc\" (UniqueName: \"kubernetes.io/projected/e851a31c-da52-4ac3-877c-c7c62d9f09f9-kube-api-access-zbrtc\") pod \"heat-engine-69874cd655-hjsnn\" (UID: \"e851a31c-da52-4ac3-877c-c7c62d9f09f9\") " pod="openstack/heat-engine-69874cd655-hjsnn" Jan 29 08:13:28 crc kubenswrapper[5017]: I0129 08:13:28.650397 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e851a31c-da52-4ac3-877c-c7c62d9f09f9-combined-ca-bundle\") pod \"heat-engine-69874cd655-hjsnn\" (UID: \"e851a31c-da52-4ac3-877c-c7c62d9f09f9\") " pod="openstack/heat-engine-69874cd655-hjsnn" Jan 29 08:13:28 crc kubenswrapper[5017]: I0129 08:13:28.650597 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e851a31c-da52-4ac3-877c-c7c62d9f09f9-config-data-custom\") pod \"heat-engine-69874cd655-hjsnn\" (UID: \"e851a31c-da52-4ac3-877c-c7c62d9f09f9\") " pod="openstack/heat-engine-69874cd655-hjsnn" Jan 29 08:13:28 crc kubenswrapper[5017]: I0129 08:13:28.650900 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e851a31c-da52-4ac3-877c-c7c62d9f09f9-config-data\") pod \"heat-engine-69874cd655-hjsnn\" (UID: \"e851a31c-da52-4ac3-877c-c7c62d9f09f9\") " pod="openstack/heat-engine-69874cd655-hjsnn" Jan 29 08:13:28 crc kubenswrapper[5017]: I0129 08:13:28.667268 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-67957468c7-bwnjw"] Jan 29 08:13:28 crc kubenswrapper[5017]: I0129 08:13:28.706443 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-94c6ddf5-x7c8s"] Jan 29 08:13:28 crc kubenswrapper[5017]: I0129 08:13:28.708250 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-94c6ddf5-x7c8s" Jan 29 08:13:28 crc kubenswrapper[5017]: I0129 08:13:28.711137 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data" Jan 29 08:13:28 crc kubenswrapper[5017]: I0129 08:13:28.731805 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-94c6ddf5-x7c8s"] Jan 29 08:13:28 crc kubenswrapper[5017]: I0129 08:13:28.753095 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbrtc\" (UniqueName: \"kubernetes.io/projected/e851a31c-da52-4ac3-877c-c7c62d9f09f9-kube-api-access-zbrtc\") pod \"heat-engine-69874cd655-hjsnn\" (UID: \"e851a31c-da52-4ac3-877c-c7c62d9f09f9\") " pod="openstack/heat-engine-69874cd655-hjsnn" Jan 29 08:13:28 crc kubenswrapper[5017]: I0129 08:13:28.753653 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e851a31c-da52-4ac3-877c-c7c62d9f09f9-combined-ca-bundle\") pod \"heat-engine-69874cd655-hjsnn\" (UID: \"e851a31c-da52-4ac3-877c-c7c62d9f09f9\") " pod="openstack/heat-engine-69874cd655-hjsnn" Jan 29 08:13:28 crc kubenswrapper[5017]: I0129 08:13:28.753719 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e851a31c-da52-4ac3-877c-c7c62d9f09f9-config-data-custom\") pod \"heat-engine-69874cd655-hjsnn\" (UID: \"e851a31c-da52-4ac3-877c-c7c62d9f09f9\") " pod="openstack/heat-engine-69874cd655-hjsnn" Jan 29 08:13:28 crc kubenswrapper[5017]: I0129 08:13:28.753815 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e851a31c-da52-4ac3-877c-c7c62d9f09f9-config-data\") pod \"heat-engine-69874cd655-hjsnn\" (UID: \"e851a31c-da52-4ac3-877c-c7c62d9f09f9\") " pod="openstack/heat-engine-69874cd655-hjsnn" Jan 29 08:13:28 crc kubenswrapper[5017]: I0129 08:13:28.770144 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e851a31c-da52-4ac3-877c-c7c62d9f09f9-config-data\") pod \"heat-engine-69874cd655-hjsnn\" (UID: \"e851a31c-da52-4ac3-877c-c7c62d9f09f9\") " pod="openstack/heat-engine-69874cd655-hjsnn" Jan 29 08:13:28 crc kubenswrapper[5017]: I0129 08:13:28.780602 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e851a31c-da52-4ac3-877c-c7c62d9f09f9-combined-ca-bundle\") pod \"heat-engine-69874cd655-hjsnn\" (UID: \"e851a31c-da52-4ac3-877c-c7c62d9f09f9\") " pod="openstack/heat-engine-69874cd655-hjsnn" Jan 29 08:13:28 crc kubenswrapper[5017]: I0129 08:13:28.782003 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e851a31c-da52-4ac3-877c-c7c62d9f09f9-config-data-custom\") pod \"heat-engine-69874cd655-hjsnn\" (UID: \"e851a31c-da52-4ac3-877c-c7c62d9f09f9\") " pod="openstack/heat-engine-69874cd655-hjsnn" Jan 29 08:13:28 crc kubenswrapper[5017]: I0129 08:13:28.797196 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-69769694fd-j796t"] Jan 29 08:13:28 crc kubenswrapper[5017]: I0129 08:13:28.798691 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbrtc\" (UniqueName: \"kubernetes.io/projected/e851a31c-da52-4ac3-877c-c7c62d9f09f9-kube-api-access-zbrtc\") pod 
\"heat-engine-69874cd655-hjsnn\" (UID: \"e851a31c-da52-4ac3-877c-c7c62d9f09f9\") " pod="openstack/heat-engine-69874cd655-hjsnn" Jan 29 08:13:28 crc kubenswrapper[5017]: I0129 08:13:28.799206 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-69769694fd-j796t" Jan 29 08:13:28 crc kubenswrapper[5017]: I0129 08:13:28.808889 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data" Jan 29 08:13:28 crc kubenswrapper[5017]: I0129 08:13:28.848346 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-69769694fd-j796t"] Jan 29 08:13:28 crc kubenswrapper[5017]: I0129 08:13:28.859349 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79df1dcf-9dd7-41cb-8543-3bee7ab44bf2-config-data\") pod \"heat-cfnapi-94c6ddf5-x7c8s\" (UID: \"79df1dcf-9dd7-41cb-8543-3bee7ab44bf2\") " pod="openstack/heat-cfnapi-94c6ddf5-x7c8s" Jan 29 08:13:28 crc kubenswrapper[5017]: I0129 08:13:28.859499 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcxrt\" (UniqueName: \"kubernetes.io/projected/79df1dcf-9dd7-41cb-8543-3bee7ab44bf2-kube-api-access-hcxrt\") pod \"heat-cfnapi-94c6ddf5-x7c8s\" (UID: \"79df1dcf-9dd7-41cb-8543-3bee7ab44bf2\") " pod="openstack/heat-cfnapi-94c6ddf5-x7c8s" Jan 29 08:13:28 crc kubenswrapper[5017]: I0129 08:13:28.860010 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/79df1dcf-9dd7-41cb-8543-3bee7ab44bf2-config-data-custom\") pod \"heat-cfnapi-94c6ddf5-x7c8s\" (UID: \"79df1dcf-9dd7-41cb-8543-3bee7ab44bf2\") " pod="openstack/heat-cfnapi-94c6ddf5-x7c8s" Jan 29 08:13:28 crc kubenswrapper[5017]: I0129 08:13:28.860523 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79df1dcf-9dd7-41cb-8543-3bee7ab44bf2-combined-ca-bundle\") pod \"heat-cfnapi-94c6ddf5-x7c8s\" (UID: \"79df1dcf-9dd7-41cb-8543-3bee7ab44bf2\") " pod="openstack/heat-cfnapi-94c6ddf5-x7c8s" Jan 29 08:13:28 crc kubenswrapper[5017]: I0129 08:13:28.872937 5017 scope.go:117] "RemoveContainer" containerID="3aad72d811e64531c6b825799f95e70e7b6fb7500ea341e54c9048a2b7cf40a5" Jan 29 08:13:28 crc kubenswrapper[5017]: I0129 08:13:28.883734 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-69874cd655-hjsnn" Jan 29 08:13:28 crc kubenswrapper[5017]: I0129 08:13:28.962274 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79df1dcf-9dd7-41cb-8543-3bee7ab44bf2-config-data\") pod \"heat-cfnapi-94c6ddf5-x7c8s\" (UID: \"79df1dcf-9dd7-41cb-8543-3bee7ab44bf2\") " pod="openstack/heat-cfnapi-94c6ddf5-x7c8s" Jan 29 08:13:28 crc kubenswrapper[5017]: I0129 08:13:28.962351 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad3aa567-ba75-42f4-967b-95147fc35f5a-config-data\") pod \"heat-api-69769694fd-j796t\" (UID: \"ad3aa567-ba75-42f4-967b-95147fc35f5a\") " pod="openstack/heat-api-69769694fd-j796t" Jan 29 08:13:28 crc kubenswrapper[5017]: I0129 08:13:28.962383 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad3aa567-ba75-42f4-967b-95147fc35f5a-combined-ca-bundle\") pod \"heat-api-69769694fd-j796t\" (UID: \"ad3aa567-ba75-42f4-967b-95147fc35f5a\") " pod="openstack/heat-api-69769694fd-j796t" Jan 29 08:13:28 crc kubenswrapper[5017]: I0129 08:13:28.962411 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hcxrt\" (UniqueName: \"kubernetes.io/projected/79df1dcf-9dd7-41cb-8543-3bee7ab44bf2-kube-api-access-hcxrt\") pod \"heat-cfnapi-94c6ddf5-x7c8s\" (UID: \"79df1dcf-9dd7-41cb-8543-3bee7ab44bf2\") " pod="openstack/heat-cfnapi-94c6ddf5-x7c8s" Jan 29 08:13:28 crc kubenswrapper[5017]: I0129 08:13:28.962626 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/79df1dcf-9dd7-41cb-8543-3bee7ab44bf2-config-data-custom\") pod \"heat-cfnapi-94c6ddf5-x7c8s\" (UID: \"79df1dcf-9dd7-41cb-8543-3bee7ab44bf2\") " pod="openstack/heat-cfnapi-94c6ddf5-x7c8s" Jan 29 08:13:28 crc kubenswrapper[5017]: I0129 08:13:28.962698 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jn978\" (UniqueName: \"kubernetes.io/projected/ad3aa567-ba75-42f4-967b-95147fc35f5a-kube-api-access-jn978\") pod \"heat-api-69769694fd-j796t\" (UID: \"ad3aa567-ba75-42f4-967b-95147fc35f5a\") " pod="openstack/heat-api-69769694fd-j796t" Jan 29 08:13:28 crc kubenswrapper[5017]: I0129 08:13:28.962853 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79df1dcf-9dd7-41cb-8543-3bee7ab44bf2-combined-ca-bundle\") pod \"heat-cfnapi-94c6ddf5-x7c8s\" (UID: \"79df1dcf-9dd7-41cb-8543-3bee7ab44bf2\") " pod="openstack/heat-cfnapi-94c6ddf5-x7c8s" Jan 29 08:13:28 crc kubenswrapper[5017]: I0129 08:13:28.962875 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ad3aa567-ba75-42f4-967b-95147fc35f5a-config-data-custom\") pod \"heat-api-69769694fd-j796t\" (UID: \"ad3aa567-ba75-42f4-967b-95147fc35f5a\") " pod="openstack/heat-api-69769694fd-j796t" Jan 29 08:13:28 crc kubenswrapper[5017]: I0129 08:13:28.968254 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79df1dcf-9dd7-41cb-8543-3bee7ab44bf2-config-data\") pod \"heat-cfnapi-94c6ddf5-x7c8s\" (UID: 
\"79df1dcf-9dd7-41cb-8543-3bee7ab44bf2\") " pod="openstack/heat-cfnapi-94c6ddf5-x7c8s" Jan 29 08:13:28 crc kubenswrapper[5017]: I0129 08:13:28.973459 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/79df1dcf-9dd7-41cb-8543-3bee7ab44bf2-config-data-custom\") pod \"heat-cfnapi-94c6ddf5-x7c8s\" (UID: \"79df1dcf-9dd7-41cb-8543-3bee7ab44bf2\") " pod="openstack/heat-cfnapi-94c6ddf5-x7c8s" Jan 29 08:13:28 crc kubenswrapper[5017]: I0129 08:13:28.974420 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79df1dcf-9dd7-41cb-8543-3bee7ab44bf2-combined-ca-bundle\") pod \"heat-cfnapi-94c6ddf5-x7c8s\" (UID: \"79df1dcf-9dd7-41cb-8543-3bee7ab44bf2\") " pod="openstack/heat-cfnapi-94c6ddf5-x7c8s" Jan 29 08:13:28 crc kubenswrapper[5017]: I0129 08:13:28.984584 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hcxrt\" (UniqueName: \"kubernetes.io/projected/79df1dcf-9dd7-41cb-8543-3bee7ab44bf2-kube-api-access-hcxrt\") pod \"heat-cfnapi-94c6ddf5-x7c8s\" (UID: \"79df1dcf-9dd7-41cb-8543-3bee7ab44bf2\") " pod="openstack/heat-cfnapi-94c6ddf5-x7c8s" Jan 29 08:13:29 crc kubenswrapper[5017]: I0129 08:13:29.042751 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-94c6ddf5-x7c8s" Jan 29 08:13:29 crc kubenswrapper[5017]: I0129 08:13:29.069091 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad3aa567-ba75-42f4-967b-95147fc35f5a-combined-ca-bundle\") pod \"heat-api-69769694fd-j796t\" (UID: \"ad3aa567-ba75-42f4-967b-95147fc35f5a\") " pod="openstack/heat-api-69769694fd-j796t" Jan 29 08:13:29 crc kubenswrapper[5017]: I0129 08:13:29.069527 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jn978\" (UniqueName: \"kubernetes.io/projected/ad3aa567-ba75-42f4-967b-95147fc35f5a-kube-api-access-jn978\") pod \"heat-api-69769694fd-j796t\" (UID: \"ad3aa567-ba75-42f4-967b-95147fc35f5a\") " pod="openstack/heat-api-69769694fd-j796t" Jan 29 08:13:29 crc kubenswrapper[5017]: I0129 08:13:29.069928 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ad3aa567-ba75-42f4-967b-95147fc35f5a-config-data-custom\") pod \"heat-api-69769694fd-j796t\" (UID: \"ad3aa567-ba75-42f4-967b-95147fc35f5a\") " pod="openstack/heat-api-69769694fd-j796t" Jan 29 08:13:29 crc kubenswrapper[5017]: I0129 08:13:29.070436 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad3aa567-ba75-42f4-967b-95147fc35f5a-config-data\") pod \"heat-api-69769694fd-j796t\" (UID: \"ad3aa567-ba75-42f4-967b-95147fc35f5a\") " pod="openstack/heat-api-69769694fd-j796t" Jan 29 08:13:29 crc kubenswrapper[5017]: I0129 08:13:29.073942 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad3aa567-ba75-42f4-967b-95147fc35f5a-combined-ca-bundle\") pod \"heat-api-69769694fd-j796t\" (UID: \"ad3aa567-ba75-42f4-967b-95147fc35f5a\") " pod="openstack/heat-api-69769694fd-j796t" Jan 29 08:13:29 crc kubenswrapper[5017]: I0129 08:13:29.078251 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/ad3aa567-ba75-42f4-967b-95147fc35f5a-config-data-custom\") pod \"heat-api-69769694fd-j796t\" (UID: \"ad3aa567-ba75-42f4-967b-95147fc35f5a\") " pod="openstack/heat-api-69769694fd-j796t" Jan 29 08:13:29 crc kubenswrapper[5017]: I0129 08:13:29.083385 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad3aa567-ba75-42f4-967b-95147fc35f5a-config-data\") pod \"heat-api-69769694fd-j796t\" (UID: \"ad3aa567-ba75-42f4-967b-95147fc35f5a\") " pod="openstack/heat-api-69769694fd-j796t" Jan 29 08:13:29 crc kubenswrapper[5017]: I0129 08:13:29.099842 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jn978\" (UniqueName: \"kubernetes.io/projected/ad3aa567-ba75-42f4-967b-95147fc35f5a-kube-api-access-jn978\") pod \"heat-api-69769694fd-j796t\" (UID: \"ad3aa567-ba75-42f4-967b-95147fc35f5a\") " pod="openstack/heat-api-69769694fd-j796t" Jan 29 08:13:29 crc kubenswrapper[5017]: I0129 08:13:29.168862 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-69769694fd-j796t" Jan 29 08:13:29 crc kubenswrapper[5017]: I0129 08:13:29.524075 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-69874cd655-hjsnn"] Jan 29 08:13:29 crc kubenswrapper[5017]: W0129 08:13:29.531071 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode851a31c_da52_4ac3_877c_c7c62d9f09f9.slice/crio-77c623ae47b3c04e3762fd6fc4b31798ced7742967c4c852dadda21b652c355a WatchSource:0}: Error finding container 77c623ae47b3c04e3762fd6fc4b31798ced7742967c4c852dadda21b652c355a: Status 404 returned error can't find the container with id 77c623ae47b3c04e3762fd6fc4b31798ced7742967c4c852dadda21b652c355a Jan 29 08:13:29 crc kubenswrapper[5017]: I0129 08:13:29.579526 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-69874cd655-hjsnn" event={"ID":"e851a31c-da52-4ac3-877c-c7c62d9f09f9","Type":"ContainerStarted","Data":"77c623ae47b3c04e3762fd6fc4b31798ced7742967c4c852dadda21b652c355a"} Jan 29 08:13:29 crc kubenswrapper[5017]: I0129 08:13:29.700858 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-69769694fd-j796t"] Jan 29 08:13:29 crc kubenswrapper[5017]: W0129 08:13:29.720111 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podad3aa567_ba75_42f4_967b_95147fc35f5a.slice/crio-6108813918bf3d4e611c9fb64e3cc2f7b9a69acc6417377fbef02a56e36e0e8e WatchSource:0}: Error finding container 6108813918bf3d4e611c9fb64e3cc2f7b9a69acc6417377fbef02a56e36e0e8e: Status 404 returned error can't find the container with id 6108813918bf3d4e611c9fb64e3cc2f7b9a69acc6417377fbef02a56e36e0e8e Jan 29 08:13:29 crc kubenswrapper[5017]: I0129 08:13:29.724152 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-94c6ddf5-x7c8s"] Jan 29 08:13:29 crc kubenswrapper[5017]: W0129 08:13:29.726779 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod79df1dcf_9dd7_41cb_8543_3bee7ab44bf2.slice/crio-4181ff965549de7071345157a0af2ab209af75c6d498ee8e3afa625cc6c84ab1 WatchSource:0}: Error finding container 4181ff965549de7071345157a0af2ab209af75c6d498ee8e3afa625cc6c84ab1: Status 404 returned error can't find the container with id 
4181ff965549de7071345157a0af2ab209af75c6d498ee8e3afa625cc6c84ab1 Jan 29 08:13:30 crc kubenswrapper[5017]: I0129 08:13:30.333286 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c535b30-9dff-483b-a2d4-2c278dfba773" path="/var/lib/kubelet/pods/3c535b30-9dff-483b-a2d4-2c278dfba773/volumes" Jan 29 08:13:30 crc kubenswrapper[5017]: I0129 08:13:30.606278 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-69874cd655-hjsnn" event={"ID":"e851a31c-da52-4ac3-877c-c7c62d9f09f9","Type":"ContainerStarted","Data":"dd394b0e97b365b52ccec7d224883454182d920ec460e602b1f169ace9ee92bc"} Jan 29 08:13:30 crc kubenswrapper[5017]: I0129 08:13:30.608637 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-69769694fd-j796t" event={"ID":"ad3aa567-ba75-42f4-967b-95147fc35f5a","Type":"ContainerStarted","Data":"6108813918bf3d4e611c9fb64e3cc2f7b9a69acc6417377fbef02a56e36e0e8e"} Jan 29 08:13:30 crc kubenswrapper[5017]: I0129 08:13:30.612320 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-94c6ddf5-x7c8s" event={"ID":"79df1dcf-9dd7-41cb-8543-3bee7ab44bf2","Type":"ContainerStarted","Data":"4181ff965549de7071345157a0af2ab209af75c6d498ee8e3afa625cc6c84ab1"} Jan 29 08:13:30 crc kubenswrapper[5017]: I0129 08:13:30.628819 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-69874cd655-hjsnn" podStartSLOduration=2.628792986 podStartE2EDuration="2.628792986s" podCreationTimestamp="2026-01-29 08:13:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:13:30.62189944 +0000 UTC m=+5896.996347080" watchObservedRunningTime="2026-01-29 08:13:30.628792986 +0000 UTC m=+5897.003240596" Jan 29 08:13:31 crc kubenswrapper[5017]: I0129 08:13:31.637702 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-69874cd655-hjsnn" Jan 29 08:13:32 crc kubenswrapper[5017]: I0129 08:13:32.649428 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-69769694fd-j796t" event={"ID":"ad3aa567-ba75-42f4-967b-95147fc35f5a","Type":"ContainerStarted","Data":"07f24ed60a5c968ba09bb2b989e3eb5e02cade39d038ba0bd382967de4979b16"} Jan 29 08:13:32 crc kubenswrapper[5017]: I0129 08:13:32.650032 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-69769694fd-j796t" Jan 29 08:13:32 crc kubenswrapper[5017]: I0129 08:13:32.653571 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-94c6ddf5-x7c8s" event={"ID":"79df1dcf-9dd7-41cb-8543-3bee7ab44bf2","Type":"ContainerStarted","Data":"7ae50483b6442c5d3e53fc711fb1a1103fa7c397aa2a3f68caa448a459458409"} Jan 29 08:13:32 crc kubenswrapper[5017]: I0129 08:13:32.666534 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-69769694fd-j796t" podStartSLOduration=2.890245013 podStartE2EDuration="4.666495058s" podCreationTimestamp="2026-01-29 08:13:28 +0000 UTC" firstStartedPulling="2026-01-29 08:13:29.725399747 +0000 UTC m=+5896.099847357" lastFinishedPulling="2026-01-29 08:13:31.501649792 +0000 UTC m=+5897.876097402" observedRunningTime="2026-01-29 08:13:32.664439179 +0000 UTC m=+5899.038886799" watchObservedRunningTime="2026-01-29 08:13:32.666495058 +0000 UTC m=+5899.040942668"
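The "Observed pod startup duration" entries in this stretch are internally consistent: to within the rounding of the printed values, podStartSLOduration = podStartE2EDuration − (lastFinishedPulling − firstStartedPulling), i.e. the SLO figure is end-to-end startup with image-pull time deducted; for heat-engine, whose pull timestamps are the zero time, the two durations coincide. A quick check of the heat-api entry above, with the timestamps copied from the log (error handling elided for brevity):

```go
// Verifies the arithmetic of the "Observed pod startup duration" entries:
// podStartSLOduration = podStartE2EDuration - (lastFinishedPulling - firstStartedPulling).
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

	// Values from the heat-api-69769694fd-j796t entry above.
	firstStartedPulling, _ := time.Parse(layout, "2026-01-29 08:13:29.725399747 +0000 UTC")
	lastFinishedPulling, _ := time.Parse(layout, "2026-01-29 08:13:31.501649792 +0000 UTC")
	e2e, _ := time.ParseDuration("4.666495058s") // podStartE2EDuration

	pull := lastFinishedPulling.Sub(firstStartedPulling) // 1.776250045s spent pulling the image
	fmt.Println(e2e - pull)                              // prints 2.890245013s = podStartSLOduration
}
```

The heat-cfnapi entry that follows checks out the same way: 4.695395442s − 1.761220465s of pulling = 2.934174977s.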
Jan 29 08:13:32 crc kubenswrapper[5017]: I0129 08:13:32.695423 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-94c6ddf5-x7c8s" podStartSLOduration=2.934174977 podStartE2EDuration="4.695395442s" podCreationTimestamp="2026-01-29 08:13:28 +0000 UTC" firstStartedPulling="2026-01-29 08:13:29.739498765 +0000 UTC m=+5896.113946375" lastFinishedPulling="2026-01-29 08:13:31.50071923 +0000 UTC m=+5897.875166840" observedRunningTime="2026-01-29 08:13:32.684860939 +0000 UTC m=+5899.059308559" watchObservedRunningTime="2026-01-29 08:13:32.695395442 +0000 UTC m=+5899.069843052" Jan 29 08:13:33 crc kubenswrapper[5017]: I0129 08:13:33.663842 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-94c6ddf5-x7c8s" Jan 29 08:13:39 crc kubenswrapper[5017]: I0129 08:13:39.114439 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-5fb5dbb6f-r76xr" Jan 29 08:13:40 crc kubenswrapper[5017]: I0129 08:13:40.316939 5017 scope.go:117] "RemoveContainer" containerID="45e4c349dc67bf5910f42f2a3df0c872cf61c8acd759f4f9b895eefc7afaf645" Jan 29 08:13:40 crc kubenswrapper[5017]: E0129 08:13:40.317919 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:13:40 crc kubenswrapper[5017]: I0129 08:13:40.418112 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-94c6ddf5-x7c8s" Jan 29 08:13:40 crc kubenswrapper[5017]: I0129 08:13:40.656225 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-69769694fd-j796t" Jan 29 08:13:41 crc kubenswrapper[5017]: I0129 08:13:41.028166 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-5fb5dbb6f-r76xr" Jan 29 08:13:41 crc kubenswrapper[5017]: I0129 08:13:41.093679 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-68bd76c56f-gz2sk"] Jan 29 08:13:41 crc kubenswrapper[5017]: I0129 08:13:41.096740 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-68bd76c56f-gz2sk" podUID="bd7d0aff-5a86-446e-a1d8-228c27e71a18" containerName="horizon-log" containerID="cri-o://6f0910ff19d91422ef36dc105ddf48f0cde96d83ab489b4185785fbf9233bde6" gracePeriod=30 Jan 29 08:13:41 crc kubenswrapper[5017]: I0129 08:13:41.096995 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-68bd76c56f-gz2sk" podUID="bd7d0aff-5a86-446e-a1d8-228c27e71a18" containerName="horizon" containerID="cri-o://dfc3572ec15b1d39fe0e8738d18745138fbd4e56923d84fb5814f69d0493fb66" gracePeriod=30 Jan 29 08:13:44 crc kubenswrapper[5017]: I0129 08:13:44.239125 5017 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-68bd76c56f-gz2sk" podUID="bd7d0aff-5a86-446e-a1d8-228c27e71a18" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.108:8080/dashboard/auth/login/?next=/dashboard/\": read tcp 10.217.0.2:47600->10.217.1.108:8080: read: connection reset by peer" Jan 29 08:13:44 crc kubenswrapper[5017]: I0129 08:13:44.782265 5017 generic.go:334] "Generic (PLEG): container finished" podID="bd7d0aff-5a86-446e-a1d8-228c27e71a18" 
containerID="dfc3572ec15b1d39fe0e8738d18745138fbd4e56923d84fb5814f69d0493fb66" exitCode=0 Jan 29 08:13:44 crc kubenswrapper[5017]: I0129 08:13:44.782448 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-68bd76c56f-gz2sk" event={"ID":"bd7d0aff-5a86-446e-a1d8-228c27e71a18","Type":"ContainerDied","Data":"dfc3572ec15b1d39fe0e8738d18745138fbd4e56923d84fb5814f69d0493fb66"} Jan 29 08:13:45 crc kubenswrapper[5017]: I0129 08:13:45.054428 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-8c0b-account-create-update-475f9"] Jan 29 08:13:45 crc kubenswrapper[5017]: I0129 08:13:45.071207 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-jzhwl"] Jan 29 08:13:45 crc kubenswrapper[5017]: I0129 08:13:45.085116 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-jzhwl"] Jan 29 08:13:45 crc kubenswrapper[5017]: I0129 08:13:45.098425 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-8c0b-account-create-update-475f9"] Jan 29 08:13:46 crc kubenswrapper[5017]: I0129 08:13:46.329447 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e153971-7167-42ca-b001-2cf231b13310" path="/var/lib/kubelet/pods/4e153971-7167-42ca-b001-2cf231b13310/volumes" Jan 29 08:13:46 crc kubenswrapper[5017]: I0129 08:13:46.330885 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88436924-55af-421f-8da0-ba80f463a5e7" path="/var/lib/kubelet/pods/88436924-55af-421f-8da0-ba80f463a5e7/volumes" Jan 29 08:13:48 crc kubenswrapper[5017]: I0129 08:13:48.928086 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-69874cd655-hjsnn" Jan 29 08:13:53 crc kubenswrapper[5017]: I0129 08:13:53.129618 5017 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-68bd76c56f-gz2sk" podUID="bd7d0aff-5a86-446e-a1d8-228c27e71a18" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.108:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.108:8080: connect: connection refused" Jan 29 08:13:54 crc kubenswrapper[5017]: I0129 08:13:54.324205 5017 scope.go:117] "RemoveContainer" containerID="45e4c349dc67bf5910f42f2a3df0c872cf61c8acd759f4f9b895eefc7afaf645" Jan 29 08:13:54 crc kubenswrapper[5017]: E0129 08:13:54.324452 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:13:55 crc kubenswrapper[5017]: I0129 08:13:55.038728 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-qwfkb"] Jan 29 08:13:55 crc kubenswrapper[5017]: I0129 08:13:55.055573 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-qwfkb"] Jan 29 08:13:56 crc kubenswrapper[5017]: I0129 08:13:56.334470 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="075ffa52-6c07-4cd7-9c9d-3fec46d4c6a4" path="/var/lib/kubelet/pods/075ffa52-6c07-4cd7-9c9d-3fec46d4c6a4/volumes" Jan 29 08:13:58 crc kubenswrapper[5017]: I0129 08:13:58.407531 5017 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wwq4p"] Jan 29 08:13:58 crc kubenswrapper[5017]: I0129 08:13:58.410573 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wwq4p" Jan 29 08:13:58 crc kubenswrapper[5017]: I0129 08:13:58.413062 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 29 08:13:58 crc kubenswrapper[5017]: I0129 08:13:58.427009 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wwq4p"] Jan 29 08:13:58 crc kubenswrapper[5017]: I0129 08:13:58.443621 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/43e5f03c-e3bd-4aa9-a2eb-2d2549511c9a-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wwq4p\" (UID: \"43e5f03c-e3bd-4aa9-a2eb-2d2549511c9a\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wwq4p" Jan 29 08:13:58 crc kubenswrapper[5017]: I0129 08:13:58.443810 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/43e5f03c-e3bd-4aa9-a2eb-2d2549511c9a-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wwq4p\" (UID: \"43e5f03c-e3bd-4aa9-a2eb-2d2549511c9a\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wwq4p" Jan 29 08:13:58 crc kubenswrapper[5017]: I0129 08:13:58.443853 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pq2pj\" (UniqueName: \"kubernetes.io/projected/43e5f03c-e3bd-4aa9-a2eb-2d2549511c9a-kube-api-access-pq2pj\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wwq4p\" (UID: \"43e5f03c-e3bd-4aa9-a2eb-2d2549511c9a\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wwq4p" Jan 29 08:13:58 crc kubenswrapper[5017]: I0129 08:13:58.546100 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/43e5f03c-e3bd-4aa9-a2eb-2d2549511c9a-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wwq4p\" (UID: \"43e5f03c-e3bd-4aa9-a2eb-2d2549511c9a\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wwq4p" Jan 29 08:13:58 crc kubenswrapper[5017]: I0129 08:13:58.546298 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/43e5f03c-e3bd-4aa9-a2eb-2d2549511c9a-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wwq4p\" (UID: \"43e5f03c-e3bd-4aa9-a2eb-2d2549511c9a\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wwq4p" Jan 29 08:13:58 crc kubenswrapper[5017]: I0129 08:13:58.546349 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pq2pj\" (UniqueName: \"kubernetes.io/projected/43e5f03c-e3bd-4aa9-a2eb-2d2549511c9a-kube-api-access-pq2pj\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wwq4p\" (UID: \"43e5f03c-e3bd-4aa9-a2eb-2d2549511c9a\") " 
pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wwq4p" Jan 29 08:13:58 crc kubenswrapper[5017]: I0129 08:13:58.546924 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/43e5f03c-e3bd-4aa9-a2eb-2d2549511c9a-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wwq4p\" (UID: \"43e5f03c-e3bd-4aa9-a2eb-2d2549511c9a\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wwq4p" Jan 29 08:13:58 crc kubenswrapper[5017]: I0129 08:13:58.546933 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/43e5f03c-e3bd-4aa9-a2eb-2d2549511c9a-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wwq4p\" (UID: \"43e5f03c-e3bd-4aa9-a2eb-2d2549511c9a\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wwq4p" Jan 29 08:13:58 crc kubenswrapper[5017]: I0129 08:13:58.571912 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pq2pj\" (UniqueName: \"kubernetes.io/projected/43e5f03c-e3bd-4aa9-a2eb-2d2549511c9a-kube-api-access-pq2pj\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wwq4p\" (UID: \"43e5f03c-e3bd-4aa9-a2eb-2d2549511c9a\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wwq4p" Jan 29 08:13:58 crc kubenswrapper[5017]: I0129 08:13:58.737883 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wwq4p" Jan 29 08:13:59 crc kubenswrapper[5017]: I0129 08:13:59.227414 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wwq4p"] Jan 29 08:13:59 crc kubenswrapper[5017]: I0129 08:13:59.950501 5017 generic.go:334] "Generic (PLEG): container finished" podID="43e5f03c-e3bd-4aa9-a2eb-2d2549511c9a" containerID="96213a63dc1cf27cfd027fc0be616e1cd26b234b5bd7c8247973e0f8bb5f29c7" exitCode=0 Jan 29 08:13:59 crc kubenswrapper[5017]: I0129 08:13:59.950626 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wwq4p" event={"ID":"43e5f03c-e3bd-4aa9-a2eb-2d2549511c9a","Type":"ContainerDied","Data":"96213a63dc1cf27cfd027fc0be616e1cd26b234b5bd7c8247973e0f8bb5f29c7"} Jan 29 08:13:59 crc kubenswrapper[5017]: I0129 08:13:59.951204 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wwq4p" event={"ID":"43e5f03c-e3bd-4aa9-a2eb-2d2549511c9a","Type":"ContainerStarted","Data":"b7c5e0473d05cfa929be17a91d1a3bd7ba940af020af087dac43272c224fa537"} Jan 29 08:14:01 crc kubenswrapper[5017]: I0129 08:14:01.973631 5017 generic.go:334] "Generic (PLEG): container finished" podID="43e5f03c-e3bd-4aa9-a2eb-2d2549511c9a" containerID="9d3ea655d701bd529f5785b5ae7e2a0aca0228a8c2045b309e0ed4b83a0abd0b" exitCode=0 Jan 29 08:14:01 crc kubenswrapper[5017]: I0129 08:14:01.973702 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wwq4p" event={"ID":"43e5f03c-e3bd-4aa9-a2eb-2d2549511c9a","Type":"ContainerDied","Data":"9d3ea655d701bd529f5785b5ae7e2a0aca0228a8c2045b309e0ed4b83a0abd0b"} Jan 29 08:14:02 crc kubenswrapper[5017]: I0129 
08:14:02.987343 5017 generic.go:334] "Generic (PLEG): container finished" podID="43e5f03c-e3bd-4aa9-a2eb-2d2549511c9a" containerID="7145d8b0e4cca0e5fa807e3fb10a93208ffd212c143851ba16a3e2d59afb9207" exitCode=0 Jan 29 08:14:02 crc kubenswrapper[5017]: I0129 08:14:02.987421 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wwq4p" event={"ID":"43e5f03c-e3bd-4aa9-a2eb-2d2549511c9a","Type":"ContainerDied","Data":"7145d8b0e4cca0e5fa807e3fb10a93208ffd212c143851ba16a3e2d59afb9207"} Jan 29 08:14:03 crc kubenswrapper[5017]: I0129 08:14:03.129635 5017 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-68bd76c56f-gz2sk" podUID="bd7d0aff-5a86-446e-a1d8-228c27e71a18" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.108:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.108:8080: connect: connection refused" Jan 29 08:14:03 crc kubenswrapper[5017]: I0129 08:14:03.130192 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-68bd76c56f-gz2sk" Jan 29 08:14:04 crc kubenswrapper[5017]: I0129 08:14:04.407639 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wwq4p" Jan 29 08:14:04 crc kubenswrapper[5017]: I0129 08:14:04.578415 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/43e5f03c-e3bd-4aa9-a2eb-2d2549511c9a-bundle\") pod \"43e5f03c-e3bd-4aa9-a2eb-2d2549511c9a\" (UID: \"43e5f03c-e3bd-4aa9-a2eb-2d2549511c9a\") " Jan 29 08:14:04 crc kubenswrapper[5017]: I0129 08:14:04.578555 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pq2pj\" (UniqueName: \"kubernetes.io/projected/43e5f03c-e3bd-4aa9-a2eb-2d2549511c9a-kube-api-access-pq2pj\") pod \"43e5f03c-e3bd-4aa9-a2eb-2d2549511c9a\" (UID: \"43e5f03c-e3bd-4aa9-a2eb-2d2549511c9a\") " Jan 29 08:14:04 crc kubenswrapper[5017]: I0129 08:14:04.580220 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/43e5f03c-e3bd-4aa9-a2eb-2d2549511c9a-util\") pod \"43e5f03c-e3bd-4aa9-a2eb-2d2549511c9a\" (UID: \"43e5f03c-e3bd-4aa9-a2eb-2d2549511c9a\") " Jan 29 08:14:04 crc kubenswrapper[5017]: I0129 08:14:04.581369 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/43e5f03c-e3bd-4aa9-a2eb-2d2549511c9a-bundle" (OuterVolumeSpecName: "bundle") pod "43e5f03c-e3bd-4aa9-a2eb-2d2549511c9a" (UID: "43e5f03c-e3bd-4aa9-a2eb-2d2549511c9a"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:14:04 crc kubenswrapper[5017]: I0129 08:14:04.586645 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43e5f03c-e3bd-4aa9-a2eb-2d2549511c9a-kube-api-access-pq2pj" (OuterVolumeSpecName: "kube-api-access-pq2pj") pod "43e5f03c-e3bd-4aa9-a2eb-2d2549511c9a" (UID: "43e5f03c-e3bd-4aa9-a2eb-2d2549511c9a"). InnerVolumeSpecName "kube-api-access-pq2pj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:14:04 crc kubenswrapper[5017]: I0129 08:14:04.594500 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/43e5f03c-e3bd-4aa9-a2eb-2d2549511c9a-util" (OuterVolumeSpecName: "util") pod "43e5f03c-e3bd-4aa9-a2eb-2d2549511c9a" (UID: "43e5f03c-e3bd-4aa9-a2eb-2d2549511c9a"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:14:04 crc kubenswrapper[5017]: I0129 08:14:04.683044 5017 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/43e5f03c-e3bd-4aa9-a2eb-2d2549511c9a-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:14:04 crc kubenswrapper[5017]: I0129 08:14:04.683089 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pq2pj\" (UniqueName: \"kubernetes.io/projected/43e5f03c-e3bd-4aa9-a2eb-2d2549511c9a-kube-api-access-pq2pj\") on node \"crc\" DevicePath \"\"" Jan 29 08:14:04 crc kubenswrapper[5017]: I0129 08:14:04.683101 5017 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/43e5f03c-e3bd-4aa9-a2eb-2d2549511c9a-util\") on node \"crc\" DevicePath \"\"" Jan 29 08:14:05 crc kubenswrapper[5017]: I0129 08:14:05.011089 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wwq4p" event={"ID":"43e5f03c-e3bd-4aa9-a2eb-2d2549511c9a","Type":"ContainerDied","Data":"b7c5e0473d05cfa929be17a91d1a3bd7ba940af020af087dac43272c224fa537"} Jan 29 08:14:05 crc kubenswrapper[5017]: I0129 08:14:05.011553 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b7c5e0473d05cfa929be17a91d1a3bd7ba940af020af087dac43272c224fa537" Jan 29 08:14:05 crc kubenswrapper[5017]: I0129 08:14:05.011237 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wwq4p" Jan 29 08:14:07 crc kubenswrapper[5017]: I0129 08:14:07.316471 5017 scope.go:117] "RemoveContainer" containerID="45e4c349dc67bf5910f42f2a3df0c872cf61c8acd759f4f9b895eefc7afaf645" Jan 29 08:14:07 crc kubenswrapper[5017]: E0129 08:14:07.317411 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:14:11 crc kubenswrapper[5017]: I0129 08:14:11.708449 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-68bd76c56f-gz2sk" Jan 29 08:14:11 crc kubenswrapper[5017]: I0129 08:14:11.852830 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bd7d0aff-5a86-446e-a1d8-228c27e71a18-scripts\") pod \"bd7d0aff-5a86-446e-a1d8-228c27e71a18\" (UID: \"bd7d0aff-5a86-446e-a1d8-228c27e71a18\") " Jan 29 08:14:11 crc kubenswrapper[5017]: I0129 08:14:11.852901 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bd7d0aff-5a86-446e-a1d8-228c27e71a18-config-data\") pod \"bd7d0aff-5a86-446e-a1d8-228c27e71a18\" (UID: \"bd7d0aff-5a86-446e-a1d8-228c27e71a18\") " Jan 29 08:14:11 crc kubenswrapper[5017]: I0129 08:14:11.852977 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/bd7d0aff-5a86-446e-a1d8-228c27e71a18-horizon-secret-key\") pod \"bd7d0aff-5a86-446e-a1d8-228c27e71a18\" (UID: \"bd7d0aff-5a86-446e-a1d8-228c27e71a18\") " Jan 29 08:14:11 crc kubenswrapper[5017]: I0129 08:14:11.853026 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd7d0aff-5a86-446e-a1d8-228c27e71a18-logs\") pod \"bd7d0aff-5a86-446e-a1d8-228c27e71a18\" (UID: \"bd7d0aff-5a86-446e-a1d8-228c27e71a18\") " Jan 29 08:14:11 crc kubenswrapper[5017]: I0129 08:14:11.853125 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7xfst\" (UniqueName: \"kubernetes.io/projected/bd7d0aff-5a86-446e-a1d8-228c27e71a18-kube-api-access-7xfst\") pod \"bd7d0aff-5a86-446e-a1d8-228c27e71a18\" (UID: \"bd7d0aff-5a86-446e-a1d8-228c27e71a18\") " Jan 29 08:14:11 crc kubenswrapper[5017]: I0129 08:14:11.854073 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd7d0aff-5a86-446e-a1d8-228c27e71a18-logs" (OuterVolumeSpecName: "logs") pod "bd7d0aff-5a86-446e-a1d8-228c27e71a18" (UID: "bd7d0aff-5a86-446e-a1d8-228c27e71a18"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:14:11 crc kubenswrapper[5017]: I0129 08:14:11.864387 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd7d0aff-5a86-446e-a1d8-228c27e71a18-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "bd7d0aff-5a86-446e-a1d8-228c27e71a18" (UID: "bd7d0aff-5a86-446e-a1d8-228c27e71a18"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:14:11 crc kubenswrapper[5017]: I0129 08:14:11.878222 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd7d0aff-5a86-446e-a1d8-228c27e71a18-kube-api-access-7xfst" (OuterVolumeSpecName: "kube-api-access-7xfst") pod "bd7d0aff-5a86-446e-a1d8-228c27e71a18" (UID: "bd7d0aff-5a86-446e-a1d8-228c27e71a18"). InnerVolumeSpecName "kube-api-access-7xfst". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:14:11 crc kubenswrapper[5017]: I0129 08:14:11.897293 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd7d0aff-5a86-446e-a1d8-228c27e71a18-config-data" (OuterVolumeSpecName: "config-data") pod "bd7d0aff-5a86-446e-a1d8-228c27e71a18" (UID: "bd7d0aff-5a86-446e-a1d8-228c27e71a18"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:14:11 crc kubenswrapper[5017]: I0129 08:14:11.957405 5017 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bd7d0aff-5a86-446e-a1d8-228c27e71a18-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 08:14:11 crc kubenswrapper[5017]: I0129 08:14:11.957449 5017 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/bd7d0aff-5a86-446e-a1d8-228c27e71a18-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 29 08:14:11 crc kubenswrapper[5017]: I0129 08:14:11.957465 5017 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd7d0aff-5a86-446e-a1d8-228c27e71a18-logs\") on node \"crc\" DevicePath \"\"" Jan 29 08:14:11 crc kubenswrapper[5017]: I0129 08:14:11.957476 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7xfst\" (UniqueName: \"kubernetes.io/projected/bd7d0aff-5a86-446e-a1d8-228c27e71a18-kube-api-access-7xfst\") on node \"crc\" DevicePath \"\"" Jan 29 08:14:11 crc kubenswrapper[5017]: I0129 08:14:11.974415 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd7d0aff-5a86-446e-a1d8-228c27e71a18-scripts" (OuterVolumeSpecName: "scripts") pod "bd7d0aff-5a86-446e-a1d8-228c27e71a18" (UID: "bd7d0aff-5a86-446e-a1d8-228c27e71a18"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:14:12 crc kubenswrapper[5017]: I0129 08:14:12.059990 5017 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bd7d0aff-5a86-446e-a1d8-228c27e71a18-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 08:14:12 crc kubenswrapper[5017]: I0129 08:14:12.099074 5017 generic.go:334] "Generic (PLEG): container finished" podID="bd7d0aff-5a86-446e-a1d8-228c27e71a18" containerID="6f0910ff19d91422ef36dc105ddf48f0cde96d83ab489b4185785fbf9233bde6" exitCode=137 Jan 29 08:14:12 crc kubenswrapper[5017]: I0129 08:14:12.099126 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-68bd76c56f-gz2sk" event={"ID":"bd7d0aff-5a86-446e-a1d8-228c27e71a18","Type":"ContainerDied","Data":"6f0910ff19d91422ef36dc105ddf48f0cde96d83ab489b4185785fbf9233bde6"} Jan 29 08:14:12 crc kubenswrapper[5017]: I0129 08:14:12.099161 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-68bd76c56f-gz2sk" event={"ID":"bd7d0aff-5a86-446e-a1d8-228c27e71a18","Type":"ContainerDied","Data":"43ab365bcc697de397dd193d8c0d92b6a6d7e1979b3278cc503f0e516ea54fb2"} Jan 29 08:14:12 crc kubenswrapper[5017]: I0129 08:14:12.099180 5017 scope.go:117] "RemoveContainer" containerID="dfc3572ec15b1d39fe0e8738d18745138fbd4e56923d84fb5814f69d0493fb66" Jan 29 08:14:12 crc kubenswrapper[5017]: I0129 08:14:12.099328 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-68bd76c56f-gz2sk" Jan 29 08:14:12 crc kubenswrapper[5017]: I0129 08:14:12.144582 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-68bd76c56f-gz2sk"] Jan 29 08:14:12 crc kubenswrapper[5017]: I0129 08:14:12.156900 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-68bd76c56f-gz2sk"] Jan 29 08:14:12 crc kubenswrapper[5017]: I0129 08:14:12.290469 5017 scope.go:117] "RemoveContainer" containerID="6f0910ff19d91422ef36dc105ddf48f0cde96d83ab489b4185785fbf9233bde6" Jan 29 08:14:12 crc kubenswrapper[5017]: I0129 08:14:12.336477 5017 scope.go:117] "RemoveContainer" containerID="dfc3572ec15b1d39fe0e8738d18745138fbd4e56923d84fb5814f69d0493fb66" Jan 29 08:14:12 crc kubenswrapper[5017]: E0129 08:14:12.337166 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dfc3572ec15b1d39fe0e8738d18745138fbd4e56923d84fb5814f69d0493fb66\": container with ID starting with dfc3572ec15b1d39fe0e8738d18745138fbd4e56923d84fb5814f69d0493fb66 not found: ID does not exist" containerID="dfc3572ec15b1d39fe0e8738d18745138fbd4e56923d84fb5814f69d0493fb66" Jan 29 08:14:12 crc kubenswrapper[5017]: I0129 08:14:12.337199 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dfc3572ec15b1d39fe0e8738d18745138fbd4e56923d84fb5814f69d0493fb66"} err="failed to get container status \"dfc3572ec15b1d39fe0e8738d18745138fbd4e56923d84fb5814f69d0493fb66\": rpc error: code = NotFound desc = could not find container \"dfc3572ec15b1d39fe0e8738d18745138fbd4e56923d84fb5814f69d0493fb66\": container with ID starting with dfc3572ec15b1d39fe0e8738d18745138fbd4e56923d84fb5814f69d0493fb66 not found: ID does not exist" Jan 29 08:14:12 crc kubenswrapper[5017]: I0129 08:14:12.337226 5017 scope.go:117] "RemoveContainer" containerID="6f0910ff19d91422ef36dc105ddf48f0cde96d83ab489b4185785fbf9233bde6" Jan 29 08:14:12 crc kubenswrapper[5017]: E0129 08:14:12.337589 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f0910ff19d91422ef36dc105ddf48f0cde96d83ab489b4185785fbf9233bde6\": container with ID starting with 6f0910ff19d91422ef36dc105ddf48f0cde96d83ab489b4185785fbf9233bde6 not found: ID does not exist" containerID="6f0910ff19d91422ef36dc105ddf48f0cde96d83ab489b4185785fbf9233bde6" Jan 29 08:14:12 crc kubenswrapper[5017]: I0129 08:14:12.337611 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f0910ff19d91422ef36dc105ddf48f0cde96d83ab489b4185785fbf9233bde6"} err="failed to get container status \"6f0910ff19d91422ef36dc105ddf48f0cde96d83ab489b4185785fbf9233bde6\": rpc error: code = NotFound desc = could not find container \"6f0910ff19d91422ef36dc105ddf48f0cde96d83ab489b4185785fbf9233bde6\": container with ID starting with 6f0910ff19d91422ef36dc105ddf48f0cde96d83ab489b4185785fbf9233bde6 not found: ID does not exist" Jan 29 08:14:12 crc kubenswrapper[5017]: I0129 08:14:12.339803 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd7d0aff-5a86-446e-a1d8-228c27e71a18" path="/var/lib/kubelet/pods/bd7d0aff-5a86-446e-a1d8-228c27e71a18/volumes" Jan 29 08:14:17 crc kubenswrapper[5017]: I0129 08:14:17.256203 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-84bts"] Jan 29 08:14:17 crc kubenswrapper[5017]: E0129 08:14:17.257674 5017 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43e5f03c-e3bd-4aa9-a2eb-2d2549511c9a" containerName="extract" Jan 29 08:14:17 crc kubenswrapper[5017]: I0129 08:14:17.257695 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="43e5f03c-e3bd-4aa9-a2eb-2d2549511c9a" containerName="extract" Jan 29 08:14:17 crc kubenswrapper[5017]: E0129 08:14:17.257746 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43e5f03c-e3bd-4aa9-a2eb-2d2549511c9a" containerName="util" Jan 29 08:14:17 crc kubenswrapper[5017]: I0129 08:14:17.257756 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="43e5f03c-e3bd-4aa9-a2eb-2d2549511c9a" containerName="util" Jan 29 08:14:17 crc kubenswrapper[5017]: E0129 08:14:17.257832 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd7d0aff-5a86-446e-a1d8-228c27e71a18" containerName="horizon-log" Jan 29 08:14:17 crc kubenswrapper[5017]: I0129 08:14:17.257841 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd7d0aff-5a86-446e-a1d8-228c27e71a18" containerName="horizon-log" Jan 29 08:14:17 crc kubenswrapper[5017]: E0129 08:14:17.257855 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43e5f03c-e3bd-4aa9-a2eb-2d2549511c9a" containerName="pull" Jan 29 08:14:17 crc kubenswrapper[5017]: I0129 08:14:17.257860 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="43e5f03c-e3bd-4aa9-a2eb-2d2549511c9a" containerName="pull" Jan 29 08:14:17 crc kubenswrapper[5017]: E0129 08:14:17.257871 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd7d0aff-5a86-446e-a1d8-228c27e71a18" containerName="horizon" Jan 29 08:14:17 crc kubenswrapper[5017]: I0129 08:14:17.257878 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd7d0aff-5a86-446e-a1d8-228c27e71a18" containerName="horizon" Jan 29 08:14:17 crc kubenswrapper[5017]: I0129 08:14:17.258061 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="43e5f03c-e3bd-4aa9-a2eb-2d2549511c9a" containerName="extract" Jan 29 08:14:17 crc kubenswrapper[5017]: I0129 08:14:17.258078 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd7d0aff-5a86-446e-a1d8-228c27e71a18" containerName="horizon-log" Jan 29 08:14:17 crc kubenswrapper[5017]: I0129 08:14:17.258105 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd7d0aff-5a86-446e-a1d8-228c27e71a18" containerName="horizon" Jan 29 08:14:17 crc kubenswrapper[5017]: I0129 08:14:17.259040 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-84bts" Jan 29 08:14:17 crc kubenswrapper[5017]: I0129 08:14:17.261532 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-bt6vj" Jan 29 08:14:17 crc kubenswrapper[5017]: I0129 08:14:17.261836 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Jan 29 08:14:17 crc kubenswrapper[5017]: I0129 08:14:17.264103 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Jan 29 08:14:17 crc kubenswrapper[5017]: I0129 08:14:17.279545 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-84bts"] Jan 29 08:14:17 crc kubenswrapper[5017]: I0129 08:14:17.294713 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7f7867b468-8qncr"] Jan 29 08:14:17 crc kubenswrapper[5017]: I0129 08:14:17.296470 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f7867b468-8qncr" Jan 29 08:14:17 crc kubenswrapper[5017]: I0129 08:14:17.299814 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-c6p4k" Jan 29 08:14:17 crc kubenswrapper[5017]: I0129 08:14:17.300686 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Jan 29 08:14:17 crc kubenswrapper[5017]: I0129 08:14:17.321847 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7f7867b468-jcftm"] Jan 29 08:14:17 crc kubenswrapper[5017]: I0129 08:14:17.323815 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f7867b468-jcftm" Jan 29 08:14:17 crc kubenswrapper[5017]: I0129 08:14:17.336919 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxsz7\" (UniqueName: \"kubernetes.io/projected/dced126a-1d49-4fe1-a610-32145372c814-kube-api-access-hxsz7\") pod \"obo-prometheus-operator-68bc856cb9-84bts\" (UID: \"dced126a-1d49-4fe1-a610-32145372c814\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-84bts" Jan 29 08:14:17 crc kubenswrapper[5017]: I0129 08:14:17.337557 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7f7867b468-8qncr"] Jan 29 08:14:17 crc kubenswrapper[5017]: I0129 08:14:17.366939 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7f7867b468-jcftm"] Jan 29 08:14:17 crc kubenswrapper[5017]: I0129 08:14:17.439520 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b5cd374b-6395-40e3-80fb-2ce7f3f9c001-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7f7867b468-8qncr\" (UID: \"b5cd374b-6395-40e3-80fb-2ce7f3f9c001\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f7867b468-8qncr" Jan 29 08:14:17 crc kubenswrapper[5017]: I0129 08:14:17.439901 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3dc25106-c3d9-46c1-9d93-3407ca7dedbd-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7f7867b468-jcftm\" (UID: \"3dc25106-c3d9-46c1-9d93-3407ca7dedbd\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f7867b468-jcftm" Jan 29 08:14:17 crc kubenswrapper[5017]: I0129 08:14:17.440087 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxsz7\" (UniqueName: \"kubernetes.io/projected/dced126a-1d49-4fe1-a610-32145372c814-kube-api-access-hxsz7\") pod \"obo-prometheus-operator-68bc856cb9-84bts\" (UID: \"dced126a-1d49-4fe1-a610-32145372c814\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-84bts" Jan 29 08:14:17 crc kubenswrapper[5017]: I0129 08:14:17.440206 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3dc25106-c3d9-46c1-9d93-3407ca7dedbd-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7f7867b468-jcftm\" (UID: \"3dc25106-c3d9-46c1-9d93-3407ca7dedbd\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f7867b468-jcftm" Jan 29 08:14:17 crc kubenswrapper[5017]: I0129 08:14:17.440313 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b5cd374b-6395-40e3-80fb-2ce7f3f9c001-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7f7867b468-8qncr\" (UID: \"b5cd374b-6395-40e3-80fb-2ce7f3f9c001\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f7867b468-8qncr" Jan 29 08:14:17 crc kubenswrapper[5017]: I0129 08:14:17.466675 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxsz7\" (UniqueName: 
\"kubernetes.io/projected/dced126a-1d49-4fe1-a610-32145372c814-kube-api-access-hxsz7\") pod \"obo-prometheus-operator-68bc856cb9-84bts\" (UID: \"dced126a-1d49-4fe1-a610-32145372c814\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-84bts" Jan 29 08:14:17 crc kubenswrapper[5017]: I0129 08:14:17.470303 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-drf5r"] Jan 29 08:14:17 crc kubenswrapper[5017]: I0129 08:14:17.477764 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-drf5r" Jan 29 08:14:17 crc kubenswrapper[5017]: I0129 08:14:17.486015 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-kxhzw" Jan 29 08:14:17 crc kubenswrapper[5017]: I0129 08:14:17.486456 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Jan 29 08:14:17 crc kubenswrapper[5017]: I0129 08:14:17.560230 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b5cd374b-6395-40e3-80fb-2ce7f3f9c001-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7f7867b468-8qncr\" (UID: \"b5cd374b-6395-40e3-80fb-2ce7f3f9c001\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f7867b468-8qncr" Jan 29 08:14:17 crc kubenswrapper[5017]: I0129 08:14:17.560421 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3dc25106-c3d9-46c1-9d93-3407ca7dedbd-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7f7867b468-jcftm\" (UID: \"3dc25106-c3d9-46c1-9d93-3407ca7dedbd\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f7867b468-jcftm" Jan 29 08:14:17 crc kubenswrapper[5017]: I0129 08:14:17.560592 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3dc25106-c3d9-46c1-9d93-3407ca7dedbd-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7f7867b468-jcftm\" (UID: \"3dc25106-c3d9-46c1-9d93-3407ca7dedbd\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f7867b468-jcftm" Jan 29 08:14:17 crc kubenswrapper[5017]: I0129 08:14:17.560674 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b5cd374b-6395-40e3-80fb-2ce7f3f9c001-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7f7867b468-8qncr\" (UID: \"b5cd374b-6395-40e3-80fb-2ce7f3f9c001\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f7867b468-8qncr" Jan 29 08:14:17 crc kubenswrapper[5017]: I0129 08:14:17.578707 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3dc25106-c3d9-46c1-9d93-3407ca7dedbd-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7f7867b468-jcftm\" (UID: \"3dc25106-c3d9-46c1-9d93-3407ca7dedbd\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f7867b468-jcftm" Jan 29 08:14:17 crc kubenswrapper[5017]: I0129 08:14:17.579606 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b5cd374b-6395-40e3-80fb-2ce7f3f9c001-apiservice-cert\") pod 
\"obo-prometheus-operator-admission-webhook-7f7867b468-8qncr\" (UID: \"b5cd374b-6395-40e3-80fb-2ce7f3f9c001\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f7867b468-8qncr" Jan 29 08:14:17 crc kubenswrapper[5017]: I0129 08:14:17.573065 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-drf5r"] Jan 29 08:14:17 crc kubenswrapper[5017]: I0129 08:14:17.593134 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3dc25106-c3d9-46c1-9d93-3407ca7dedbd-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7f7867b468-jcftm\" (UID: \"3dc25106-c3d9-46c1-9d93-3407ca7dedbd\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f7867b468-jcftm" Jan 29 08:14:17 crc kubenswrapper[5017]: I0129 08:14:17.605310 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b5cd374b-6395-40e3-80fb-2ce7f3f9c001-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7f7867b468-8qncr\" (UID: \"b5cd374b-6395-40e3-80fb-2ce7f3f9c001\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f7867b468-8qncr" Jan 29 08:14:17 crc kubenswrapper[5017]: I0129 08:14:17.635814 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-84bts" Jan 29 08:14:17 crc kubenswrapper[5017]: I0129 08:14:17.654112 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f7867b468-jcftm" Jan 29 08:14:17 crc kubenswrapper[5017]: I0129 08:14:17.663529 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vccw\" (UniqueName: \"kubernetes.io/projected/c4ed50de-cf5f-4bc5-9e0c-8d696c49fe16-kube-api-access-8vccw\") pod \"observability-operator-59bdc8b94-drf5r\" (UID: \"c4ed50de-cf5f-4bc5-9e0c-8d696c49fe16\") " pod="openshift-operators/observability-operator-59bdc8b94-drf5r" Jan 29 08:14:17 crc kubenswrapper[5017]: I0129 08:14:17.663694 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/c4ed50de-cf5f-4bc5-9e0c-8d696c49fe16-observability-operator-tls\") pod \"observability-operator-59bdc8b94-drf5r\" (UID: \"c4ed50de-cf5f-4bc5-9e0c-8d696c49fe16\") " pod="openshift-operators/observability-operator-59bdc8b94-drf5r" Jan 29 08:14:17 crc kubenswrapper[5017]: I0129 08:14:17.665099 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f7867b468-8qncr" Jan 29 08:14:17 crc kubenswrapper[5017]: I0129 08:14:17.766703 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vccw\" (UniqueName: \"kubernetes.io/projected/c4ed50de-cf5f-4bc5-9e0c-8d696c49fe16-kube-api-access-8vccw\") pod \"observability-operator-59bdc8b94-drf5r\" (UID: \"c4ed50de-cf5f-4bc5-9e0c-8d696c49fe16\") " pod="openshift-operators/observability-operator-59bdc8b94-drf5r" Jan 29 08:14:17 crc kubenswrapper[5017]: I0129 08:14:17.766892 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/c4ed50de-cf5f-4bc5-9e0c-8d696c49fe16-observability-operator-tls\") pod \"observability-operator-59bdc8b94-drf5r\" (UID: \"c4ed50de-cf5f-4bc5-9e0c-8d696c49fe16\") " pod="openshift-operators/observability-operator-59bdc8b94-drf5r" Jan 29 08:14:17 crc kubenswrapper[5017]: I0129 08:14:17.786777 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/c4ed50de-cf5f-4bc5-9e0c-8d696c49fe16-observability-operator-tls\") pod \"observability-operator-59bdc8b94-drf5r\" (UID: \"c4ed50de-cf5f-4bc5-9e0c-8d696c49fe16\") " pod="openshift-operators/observability-operator-59bdc8b94-drf5r" Jan 29 08:14:17 crc kubenswrapper[5017]: I0129 08:14:17.817149 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-rvsnq"] Jan 29 08:14:17 crc kubenswrapper[5017]: I0129 08:14:17.826753 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vccw\" (UniqueName: \"kubernetes.io/projected/c4ed50de-cf5f-4bc5-9e0c-8d696c49fe16-kube-api-access-8vccw\") pod \"observability-operator-59bdc8b94-drf5r\" (UID: \"c4ed50de-cf5f-4bc5-9e0c-8d696c49fe16\") " pod="openshift-operators/observability-operator-59bdc8b94-drf5r" Jan 29 08:14:17 crc kubenswrapper[5017]: I0129 08:14:17.838941 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-rvsnq" Jan 29 08:14:17 crc kubenswrapper[5017]: I0129 08:14:17.849394 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-8dlwk" Jan 29 08:14:17 crc kubenswrapper[5017]: I0129 08:14:17.855761 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-rvsnq"] Jan 29 08:14:17 crc kubenswrapper[5017]: I0129 08:14:17.882742 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-drf5r" Jan 29 08:14:17 crc kubenswrapper[5017]: I0129 08:14:17.979517 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkm8j\" (UniqueName: \"kubernetes.io/projected/cecf03f2-56cf-41cd-a5e5-0a99d4c0784f-kube-api-access-kkm8j\") pod \"perses-operator-5bf474d74f-rvsnq\" (UID: \"cecf03f2-56cf-41cd-a5e5-0a99d4c0784f\") " pod="openshift-operators/perses-operator-5bf474d74f-rvsnq" Jan 29 08:14:17 crc kubenswrapper[5017]: I0129 08:14:17.979911 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/cecf03f2-56cf-41cd-a5e5-0a99d4c0784f-openshift-service-ca\") pod \"perses-operator-5bf474d74f-rvsnq\" (UID: \"cecf03f2-56cf-41cd-a5e5-0a99d4c0784f\") " pod="openshift-operators/perses-operator-5bf474d74f-rvsnq" Jan 29 08:14:18 crc kubenswrapper[5017]: I0129 08:14:18.083511 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/cecf03f2-56cf-41cd-a5e5-0a99d4c0784f-openshift-service-ca\") pod \"perses-operator-5bf474d74f-rvsnq\" (UID: \"cecf03f2-56cf-41cd-a5e5-0a99d4c0784f\") " pod="openshift-operators/perses-operator-5bf474d74f-rvsnq" Jan 29 08:14:18 crc kubenswrapper[5017]: I0129 08:14:18.084172 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kkm8j\" (UniqueName: \"kubernetes.io/projected/cecf03f2-56cf-41cd-a5e5-0a99d4c0784f-kube-api-access-kkm8j\") pod \"perses-operator-5bf474d74f-rvsnq\" (UID: \"cecf03f2-56cf-41cd-a5e5-0a99d4c0784f\") " pod="openshift-operators/perses-operator-5bf474d74f-rvsnq" Jan 29 08:14:18 crc kubenswrapper[5017]: I0129 08:14:18.087674 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/cecf03f2-56cf-41cd-a5e5-0a99d4c0784f-openshift-service-ca\") pod \"perses-operator-5bf474d74f-rvsnq\" (UID: \"cecf03f2-56cf-41cd-a5e5-0a99d4c0784f\") " pod="openshift-operators/perses-operator-5bf474d74f-rvsnq" Jan 29 08:14:18 crc kubenswrapper[5017]: I0129 08:14:18.139809 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kkm8j\" (UniqueName: \"kubernetes.io/projected/cecf03f2-56cf-41cd-a5e5-0a99d4c0784f-kube-api-access-kkm8j\") pod \"perses-operator-5bf474d74f-rvsnq\" (UID: \"cecf03f2-56cf-41cd-a5e5-0a99d4c0784f\") " pod="openshift-operators/perses-operator-5bf474d74f-rvsnq" Jan 29 08:14:18 crc kubenswrapper[5017]: I0129 08:14:18.279590 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-rvsnq" Jan 29 08:14:18 crc kubenswrapper[5017]: I0129 08:14:18.639418 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7f7867b468-jcftm"] Jan 29 08:14:18 crc kubenswrapper[5017]: I0129 08:14:18.884724 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7f7867b468-8qncr"] Jan 29 08:14:18 crc kubenswrapper[5017]: I0129 08:14:18.939902 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-84bts"] Jan 29 08:14:19 crc kubenswrapper[5017]: I0129 08:14:19.081779 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-drf5r"] Jan 29 08:14:19 crc kubenswrapper[5017]: W0129 08:14:19.085950 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc4ed50de_cf5f_4bc5_9e0c_8d696c49fe16.slice/crio-25d559245e702ea183999db7e51b055d6f09a3aca885c74847f9a9eb155a1181 WatchSource:0}: Error finding container 25d559245e702ea183999db7e51b055d6f09a3aca885c74847f9a9eb155a1181: Status 404 returned error can't find the container with id 25d559245e702ea183999db7e51b055d6f09a3aca885c74847f9a9eb155a1181 Jan 29 08:14:19 crc kubenswrapper[5017]: W0129 08:14:19.112440 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcecf03f2_56cf_41cd_a5e5_0a99d4c0784f.slice/crio-e0d223639b81d556dcef802155c22543a67e9c764c1a8fc1f38a18ec69ee04a9 WatchSource:0}: Error finding container e0d223639b81d556dcef802155c22543a67e9c764c1a8fc1f38a18ec69ee04a9: Status 404 returned error can't find the container with id e0d223639b81d556dcef802155c22543a67e9c764c1a8fc1f38a18ec69ee04a9 Jan 29 08:14:19 crc kubenswrapper[5017]: I0129 08:14:19.121173 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-rvsnq"] Jan 29 08:14:19 crc kubenswrapper[5017]: I0129 08:14:19.190621 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-84bts" event={"ID":"dced126a-1d49-4fe1-a610-32145372c814","Type":"ContainerStarted","Data":"7ebe9e78b4aa055e8a66440a77a90ea76af5540b057b82bf1ba6a700492cab8f"} Jan 29 08:14:19 crc kubenswrapper[5017]: I0129 08:14:19.192371 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-rvsnq" event={"ID":"cecf03f2-56cf-41cd-a5e5-0a99d4c0784f","Type":"ContainerStarted","Data":"e0d223639b81d556dcef802155c22543a67e9c764c1a8fc1f38a18ec69ee04a9"} Jan 29 08:14:19 crc kubenswrapper[5017]: I0129 08:14:19.194152 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-drf5r" event={"ID":"c4ed50de-cf5f-4bc5-9e0c-8d696c49fe16","Type":"ContainerStarted","Data":"25d559245e702ea183999db7e51b055d6f09a3aca885c74847f9a9eb155a1181"} Jan 29 08:14:19 crc kubenswrapper[5017]: I0129 08:14:19.195613 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f7867b468-jcftm" event={"ID":"3dc25106-c3d9-46c1-9d93-3407ca7dedbd","Type":"ContainerStarted","Data":"562d27a3d9fc24311de721f3ceb5e92460e2a5f0c01746d73f8c883bf78bc40d"} Jan 29 08:14:19 crc kubenswrapper[5017]: I0129 08:14:19.196997 5017 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f7867b468-8qncr" event={"ID":"b5cd374b-6395-40e3-80fb-2ce7f3f9c001","Type":"ContainerStarted","Data":"e0df4e734a9c2101c9654269dbb9c9fc3b9425a46f29565ab8a9367b01bf4f22"} Jan 29 08:14:20 crc kubenswrapper[5017]: I0129 08:14:20.316389 5017 scope.go:117] "RemoveContainer" containerID="45e4c349dc67bf5910f42f2a3df0c872cf61c8acd759f4f9b895eefc7afaf645" Jan 29 08:14:20 crc kubenswrapper[5017]: E0129 08:14:20.317149 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:14:25 crc kubenswrapper[5017]: I0129 08:14:25.295766 5017 scope.go:117] "RemoveContainer" containerID="c041202c0a44d442a6d91b733dda5b71b6fcb3cbbd02753674830b13dc6ecc70" Jan 29 08:14:31 crc kubenswrapper[5017]: I0129 08:14:31.940920 5017 scope.go:117] "RemoveContainer" containerID="a09c25f576a8313a628ebe6bdd676faab91b69281b091517a762bc6ad8c45c0b" Jan 29 08:14:32 crc kubenswrapper[5017]: I0129 08:14:32.033087 5017 scope.go:117] "RemoveContainer" containerID="4e51714d7510f6d84cfd2d03f866c97c8b20ddfed7b753c95a9f8602689949e7" Jan 29 08:14:32 crc kubenswrapper[5017]: I0129 08:14:32.446010 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-rvsnq" event={"ID":"cecf03f2-56cf-41cd-a5e5-0a99d4c0784f","Type":"ContainerStarted","Data":"b53152c20f0589d7d30a277fea346817472d2296eb73e41f6477ca81d13f853e"} Jan 29 08:14:32 crc kubenswrapper[5017]: I0129 08:14:32.448106 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-rvsnq" Jan 29 08:14:32 crc kubenswrapper[5017]: I0129 08:14:32.515330 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-rvsnq" podStartSLOduration=2.600921273 podStartE2EDuration="15.515304589s" podCreationTimestamp="2026-01-29 08:14:17 +0000 UTC" firstStartedPulling="2026-01-29 08:14:19.122003435 +0000 UTC m=+5945.496451045" lastFinishedPulling="2026-01-29 08:14:32.036386751 +0000 UTC m=+5958.410834361" observedRunningTime="2026-01-29 08:14:32.496290903 +0000 UTC m=+5958.870738513" watchObservedRunningTime="2026-01-29 08:14:32.515304589 +0000 UTC m=+5958.889752199" Jan 29 08:14:33 crc kubenswrapper[5017]: I0129 08:14:33.317154 5017 scope.go:117] "RemoveContainer" containerID="45e4c349dc67bf5910f42f2a3df0c872cf61c8acd759f4f9b895eefc7afaf645" Jan 29 08:14:33 crc kubenswrapper[5017]: E0129 08:14:33.318220 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:14:33 crc kubenswrapper[5017]: I0129 08:14:33.459091 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-84bts" 
event={"ID":"dced126a-1d49-4fe1-a610-32145372c814","Type":"ContainerStarted","Data":"005d84f27b18e3dc60a6e962527f6e8b6055ba83734b00e5b2870539a304796b"} Jan 29 08:14:33 crc kubenswrapper[5017]: I0129 08:14:33.462094 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-drf5r" event={"ID":"c4ed50de-cf5f-4bc5-9e0c-8d696c49fe16","Type":"ContainerStarted","Data":"0c03e9cbed4280add74f768e145294122dd8e937e85aa3fb37bf939f0c17a366"} Jan 29 08:14:33 crc kubenswrapper[5017]: I0129 08:14:33.462494 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-drf5r" Jan 29 08:14:33 crc kubenswrapper[5017]: I0129 08:14:33.467257 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f7867b468-jcftm" event={"ID":"3dc25106-c3d9-46c1-9d93-3407ca7dedbd","Type":"ContainerStarted","Data":"a573b4c59948361b9d9df146212efc6b0c49daeaa421178d768abca66fda91a0"} Jan 29 08:14:33 crc kubenswrapper[5017]: I0129 08:14:33.469671 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f7867b468-8qncr" event={"ID":"b5cd374b-6395-40e3-80fb-2ce7f3f9c001","Type":"ContainerStarted","Data":"fc140527ddd2bfe81b2dcd614371b0ab9e4a1bb4322a6a50a1539f15221e8b39"} Jan 29 08:14:33 crc kubenswrapper[5017]: I0129 08:14:33.487470 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-drf5r" Jan 29 08:14:33 crc kubenswrapper[5017]: I0129 08:14:33.488896 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-84bts" podStartSLOduration=3.415532551 podStartE2EDuration="16.488860804s" podCreationTimestamp="2026-01-29 08:14:17 +0000 UTC" firstStartedPulling="2026-01-29 08:14:18.961252685 +0000 UTC m=+5945.335700295" lastFinishedPulling="2026-01-29 08:14:32.034580938 +0000 UTC m=+5958.409028548" observedRunningTime="2026-01-29 08:14:33.47873941 +0000 UTC m=+5959.853187020" watchObservedRunningTime="2026-01-29 08:14:33.488860804 +0000 UTC m=+5959.863308414" Jan 29 08:14:33 crc kubenswrapper[5017]: I0129 08:14:33.509195 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f7867b468-8qncr" podStartSLOduration=3.377349604 podStartE2EDuration="16.509166491s" podCreationTimestamp="2026-01-29 08:14:17 +0000 UTC" firstStartedPulling="2026-01-29 08:14:18.915159049 +0000 UTC m=+5945.289606659" lastFinishedPulling="2026-01-29 08:14:32.046975936 +0000 UTC m=+5958.421423546" observedRunningTime="2026-01-29 08:14:33.501581529 +0000 UTC m=+5959.876029149" watchObservedRunningTime="2026-01-29 08:14:33.509166491 +0000 UTC m=+5959.883614111" Jan 29 08:14:33 crc kubenswrapper[5017]: I0129 08:14:33.554262 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-drf5r" podStartSLOduration=3.608280627 podStartE2EDuration="16.554223682s" podCreationTimestamp="2026-01-29 08:14:17 +0000 UTC" firstStartedPulling="2026-01-29 08:14:19.089633777 +0000 UTC m=+5945.464081387" lastFinishedPulling="2026-01-29 08:14:32.035576832 +0000 UTC m=+5958.410024442" observedRunningTime="2026-01-29 08:14:33.545661067 +0000 UTC m=+5959.920108677" watchObservedRunningTime="2026-01-29 08:14:33.554223682 +0000 UTC 
m=+5959.928671292" Jan 29 08:14:33 crc kubenswrapper[5017]: I0129 08:14:33.626249 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f7867b468-jcftm" podStartSLOduration=3.286939014 podStartE2EDuration="16.626224772s" podCreationTimestamp="2026-01-29 08:14:17 +0000 UTC" firstStartedPulling="2026-01-29 08:14:18.694359717 +0000 UTC m=+5945.068807327" lastFinishedPulling="2026-01-29 08:14:32.033645475 +0000 UTC m=+5958.408093085" observedRunningTime="2026-01-29 08:14:33.585072983 +0000 UTC m=+5959.959520613" watchObservedRunningTime="2026-01-29 08:14:33.626224772 +0000 UTC m=+5960.000672382" Jan 29 08:14:36 crc kubenswrapper[5017]: I0129 08:14:36.034817 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-pscn6"] Jan 29 08:14:36 crc kubenswrapper[5017]: I0129 08:14:36.051084 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-3494-account-create-update-j4dxd"] Jan 29 08:14:36 crc kubenswrapper[5017]: I0129 08:14:36.066076 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-3494-account-create-update-j4dxd"] Jan 29 08:14:36 crc kubenswrapper[5017]: I0129 08:14:36.083727 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-pscn6"] Jan 29 08:14:36 crc kubenswrapper[5017]: I0129 08:14:36.337978 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e77bce8-2efe-4f18-b9df-6cd6e7d36e31" path="/var/lib/kubelet/pods/6e77bce8-2efe-4f18-b9df-6cd6e7d36e31/volumes" Jan 29 08:14:36 crc kubenswrapper[5017]: I0129 08:14:36.340638 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4f7ce1c-2e5b-4d84-a3bd-8e6ffdbda792" path="/var/lib/kubelet/pods/b4f7ce1c-2e5b-4d84-a3bd-8e6ffdbda792/volumes" Jan 29 08:14:38 crc kubenswrapper[5017]: I0129 08:14:38.283534 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-rvsnq" Jan 29 08:14:41 crc kubenswrapper[5017]: I0129 08:14:41.619981 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Jan 29 08:14:41 crc kubenswrapper[5017]: I0129 08:14:41.621130 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="953f3dbb-a423-4244-a833-a876051cb0d2" containerName="openstackclient" containerID="cri-o://c2aaec687ce597b2dbbb430cd9a948a22c08d1fa849e8ae385cf3b9741599908" gracePeriod=2 Jan 29 08:14:41 crc kubenswrapper[5017]: I0129 08:14:41.630269 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Jan 29 08:14:41 crc kubenswrapper[5017]: I0129 08:14:41.689354 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 29 08:14:41 crc kubenswrapper[5017]: E0129 08:14:41.689905 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="953f3dbb-a423-4244-a833-a876051cb0d2" containerName="openstackclient" Jan 29 08:14:41 crc kubenswrapper[5017]: I0129 08:14:41.689943 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="953f3dbb-a423-4244-a833-a876051cb0d2" containerName="openstackclient" Jan 29 08:14:41 crc kubenswrapper[5017]: I0129 08:14:41.690197 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="953f3dbb-a423-4244-a833-a876051cb0d2" containerName="openstackclient" Jan 29 08:14:41 crc kubenswrapper[5017]: I0129 08:14:41.690973 5017 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/openstackclient" Jan 29 08:14:41 crc kubenswrapper[5017]: I0129 08:14:41.744047 5017 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="953f3dbb-a423-4244-a833-a876051cb0d2" podUID="caee0a70-c87e-4b2d-b9ca-8f949b81540e" Jan 29 08:14:41 crc kubenswrapper[5017]: I0129 08:14:41.762218 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 29 08:14:41 crc kubenswrapper[5017]: I0129 08:14:41.864970 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/caee0a70-c87e-4b2d-b9ca-8f949b81540e-openstack-config-secret\") pod \"openstackclient\" (UID: \"caee0a70-c87e-4b2d-b9ca-8f949b81540e\") " pod="openstack/openstackclient" Jan 29 08:14:41 crc kubenswrapper[5017]: I0129 08:14:41.865055 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/caee0a70-c87e-4b2d-b9ca-8f949b81540e-openstack-config\") pod \"openstackclient\" (UID: \"caee0a70-c87e-4b2d-b9ca-8f949b81540e\") " pod="openstack/openstackclient" Jan 29 08:14:41 crc kubenswrapper[5017]: I0129 08:14:41.865093 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ws77v\" (UniqueName: \"kubernetes.io/projected/caee0a70-c87e-4b2d-b9ca-8f949b81540e-kube-api-access-ws77v\") pod \"openstackclient\" (UID: \"caee0a70-c87e-4b2d-b9ca-8f949b81540e\") " pod="openstack/openstackclient" Jan 29 08:14:41 crc kubenswrapper[5017]: I0129 08:14:41.967774 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/caee0a70-c87e-4b2d-b9ca-8f949b81540e-openstack-config-secret\") pod \"openstackclient\" (UID: \"caee0a70-c87e-4b2d-b9ca-8f949b81540e\") " pod="openstack/openstackclient" Jan 29 08:14:41 crc kubenswrapper[5017]: I0129 08:14:41.967839 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/caee0a70-c87e-4b2d-b9ca-8f949b81540e-openstack-config\") pod \"openstackclient\" (UID: \"caee0a70-c87e-4b2d-b9ca-8f949b81540e\") " pod="openstack/openstackclient" Jan 29 08:14:41 crc kubenswrapper[5017]: I0129 08:14:41.967865 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ws77v\" (UniqueName: \"kubernetes.io/projected/caee0a70-c87e-4b2d-b9ca-8f949b81540e-kube-api-access-ws77v\") pod \"openstackclient\" (UID: \"caee0a70-c87e-4b2d-b9ca-8f949b81540e\") " pod="openstack/openstackclient" Jan 29 08:14:41 crc kubenswrapper[5017]: I0129 08:14:41.969129 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/caee0a70-c87e-4b2d-b9ca-8f949b81540e-openstack-config\") pod \"openstackclient\" (UID: \"caee0a70-c87e-4b2d-b9ca-8f949b81540e\") " pod="openstack/openstackclient" Jan 29 08:14:41 crc kubenswrapper[5017]: I0129 08:14:41.988706 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/caee0a70-c87e-4b2d-b9ca-8f949b81540e-openstack-config-secret\") pod \"openstackclient\" (UID: \"caee0a70-c87e-4b2d-b9ca-8f949b81540e\") " pod="openstack/openstackclient" Jan 29 08:14:42 crc 
kubenswrapper[5017]: I0129 08:14:42.038731 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ws77v\" (UniqueName: \"kubernetes.io/projected/caee0a70-c87e-4b2d-b9ca-8f949b81540e-kube-api-access-ws77v\") pod \"openstackclient\" (UID: \"caee0a70-c87e-4b2d-b9ca-8f949b81540e\") " pod="openstack/openstackclient" Jan 29 08:14:42 crc kubenswrapper[5017]: I0129 08:14:42.046716 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 08:14:42 crc kubenswrapper[5017]: I0129 08:14:42.048248 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 29 08:14:42 crc kubenswrapper[5017]: I0129 08:14:42.054460 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-phgvs" Jan 29 08:14:42 crc kubenswrapper[5017]: I0129 08:14:42.073879 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 29 08:14:42 crc kubenswrapper[5017]: I0129 08:14:42.113021 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 08:14:42 crc kubenswrapper[5017]: I0129 08:14:42.172593 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djd4p\" (UniqueName: \"kubernetes.io/projected/f9ca9b2d-948b-412b-acf5-c98bd249d35c-kube-api-access-djd4p\") pod \"kube-state-metrics-0\" (UID: \"f9ca9b2d-948b-412b-acf5-c98bd249d35c\") " pod="openstack/kube-state-metrics-0" Jan 29 08:14:42 crc kubenswrapper[5017]: I0129 08:14:42.275320 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djd4p\" (UniqueName: \"kubernetes.io/projected/f9ca9b2d-948b-412b-acf5-c98bd249d35c-kube-api-access-djd4p\") pod \"kube-state-metrics-0\" (UID: \"f9ca9b2d-948b-412b-acf5-c98bd249d35c\") " pod="openstack/kube-state-metrics-0" Jan 29 08:14:42 crc kubenswrapper[5017]: I0129 08:14:42.386099 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djd4p\" (UniqueName: \"kubernetes.io/projected/f9ca9b2d-948b-412b-acf5-c98bd249d35c-kube-api-access-djd4p\") pod \"kube-state-metrics-0\" (UID: \"f9ca9b2d-948b-412b-acf5-c98bd249d35c\") " pod="openstack/kube-state-metrics-0" Jan 29 08:14:42 crc kubenswrapper[5017]: I0129 08:14:42.421584 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 29 08:14:43 crc kubenswrapper[5017]: I0129 08:14:43.207548 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/alertmanager-metric-storage-0"] Jan 29 08:14:43 crc kubenswrapper[5017]: I0129 08:14:43.210881 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Jan 29 08:14:43 crc kubenswrapper[5017]: I0129 08:14:43.227131 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-tls-assets-0" Jan 29 08:14:43 crc kubenswrapper[5017]: I0129 08:14:43.227306 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-cluster-tls-config" Jan 29 08:14:43 crc kubenswrapper[5017]: I0129 08:14:43.227495 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-alertmanager-dockercfg-whhjn" Jan 29 08:14:43 crc kubenswrapper[5017]: I0129 08:14:43.227599 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-web-config" Jan 29 08:14:43 crc kubenswrapper[5017]: I0129 08:14:43.227722 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-generated" Jan 29 08:14:43 crc kubenswrapper[5017]: I0129 08:14:43.261314 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Jan 29 08:14:43 crc kubenswrapper[5017]: I0129 08:14:43.359826 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/094002d7-d2d3-486f-af00-22a69e977e40-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"094002d7-d2d3-486f-af00-22a69e977e40\") " pod="openstack/alertmanager-metric-storage-0" Jan 29 08:14:43 crc kubenswrapper[5017]: I0129 08:14:43.359886 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/094002d7-d2d3-486f-af00-22a69e977e40-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"094002d7-d2d3-486f-af00-22a69e977e40\") " pod="openstack/alertmanager-metric-storage-0" Jan 29 08:14:43 crc kubenswrapper[5017]: I0129 08:14:43.359906 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/094002d7-d2d3-486f-af00-22a69e977e40-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"094002d7-d2d3-486f-af00-22a69e977e40\") " pod="openstack/alertmanager-metric-storage-0" Jan 29 08:14:43 crc kubenswrapper[5017]: I0129 08:14:43.359942 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jdx6\" (UniqueName: \"kubernetes.io/projected/094002d7-d2d3-486f-af00-22a69e977e40-kube-api-access-9jdx6\") pod \"alertmanager-metric-storage-0\" (UID: \"094002d7-d2d3-486f-af00-22a69e977e40\") " pod="openstack/alertmanager-metric-storage-0" Jan 29 08:14:43 crc kubenswrapper[5017]: I0129 08:14:43.360088 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/094002d7-d2d3-486f-af00-22a69e977e40-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"094002d7-d2d3-486f-af00-22a69e977e40\") " pod="openstack/alertmanager-metric-storage-0" Jan 29 08:14:43 crc kubenswrapper[5017]: I0129 08:14:43.360138 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/094002d7-d2d3-486f-af00-22a69e977e40-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" 
(UID: \"094002d7-d2d3-486f-af00-22a69e977e40\") " pod="openstack/alertmanager-metric-storage-0" Jan 29 08:14:43 crc kubenswrapper[5017]: I0129 08:14:43.360163 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/094002d7-d2d3-486f-af00-22a69e977e40-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"094002d7-d2d3-486f-af00-22a69e977e40\") " pod="openstack/alertmanager-metric-storage-0" Jan 29 08:14:43 crc kubenswrapper[5017]: I0129 08:14:43.462233 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/094002d7-d2d3-486f-af00-22a69e977e40-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"094002d7-d2d3-486f-af00-22a69e977e40\") " pod="openstack/alertmanager-metric-storage-0" Jan 29 08:14:43 crc kubenswrapper[5017]: I0129 08:14:43.463582 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/094002d7-d2d3-486f-af00-22a69e977e40-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"094002d7-d2d3-486f-af00-22a69e977e40\") " pod="openstack/alertmanager-metric-storage-0" Jan 29 08:14:43 crc kubenswrapper[5017]: I0129 08:14:43.463609 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/094002d7-d2d3-486f-af00-22a69e977e40-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"094002d7-d2d3-486f-af00-22a69e977e40\") " pod="openstack/alertmanager-metric-storage-0" Jan 29 08:14:43 crc kubenswrapper[5017]: I0129 08:14:43.463673 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jdx6\" (UniqueName: \"kubernetes.io/projected/094002d7-d2d3-486f-af00-22a69e977e40-kube-api-access-9jdx6\") pod \"alertmanager-metric-storage-0\" (UID: \"094002d7-d2d3-486f-af00-22a69e977e40\") " pod="openstack/alertmanager-metric-storage-0" Jan 29 08:14:43 crc kubenswrapper[5017]: I0129 08:14:43.463981 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/094002d7-d2d3-486f-af00-22a69e977e40-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"094002d7-d2d3-486f-af00-22a69e977e40\") " pod="openstack/alertmanager-metric-storage-0" Jan 29 08:14:43 crc kubenswrapper[5017]: I0129 08:14:43.464101 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/094002d7-d2d3-486f-af00-22a69e977e40-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"094002d7-d2d3-486f-af00-22a69e977e40\") " pod="openstack/alertmanager-metric-storage-0" Jan 29 08:14:43 crc kubenswrapper[5017]: I0129 08:14:43.464149 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/094002d7-d2d3-486f-af00-22a69e977e40-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"094002d7-d2d3-486f-af00-22a69e977e40\") " pod="openstack/alertmanager-metric-storage-0" Jan 29 08:14:43 crc kubenswrapper[5017]: I0129 08:14:43.465456 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/094002d7-d2d3-486f-af00-22a69e977e40-alertmanager-metric-storage-db\") pod 
\"alertmanager-metric-storage-0\" (UID: \"094002d7-d2d3-486f-af00-22a69e977e40\") " pod="openstack/alertmanager-metric-storage-0" Jan 29 08:14:43 crc kubenswrapper[5017]: I0129 08:14:43.490258 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/094002d7-d2d3-486f-af00-22a69e977e40-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"094002d7-d2d3-486f-af00-22a69e977e40\") " pod="openstack/alertmanager-metric-storage-0" Jan 29 08:14:43 crc kubenswrapper[5017]: I0129 08:14:43.490804 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/094002d7-d2d3-486f-af00-22a69e977e40-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"094002d7-d2d3-486f-af00-22a69e977e40\") " pod="openstack/alertmanager-metric-storage-0" Jan 29 08:14:43 crc kubenswrapper[5017]: I0129 08:14:43.494522 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/094002d7-d2d3-486f-af00-22a69e977e40-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"094002d7-d2d3-486f-af00-22a69e977e40\") " pod="openstack/alertmanager-metric-storage-0" Jan 29 08:14:43 crc kubenswrapper[5017]: I0129 08:14:43.532772 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/094002d7-d2d3-486f-af00-22a69e977e40-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"094002d7-d2d3-486f-af00-22a69e977e40\") " pod="openstack/alertmanager-metric-storage-0" Jan 29 08:14:43 crc kubenswrapper[5017]: I0129 08:14:43.544870 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/094002d7-d2d3-486f-af00-22a69e977e40-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"094002d7-d2d3-486f-af00-22a69e977e40\") " pod="openstack/alertmanager-metric-storage-0" Jan 29 08:14:43 crc kubenswrapper[5017]: I0129 08:14:43.550873 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jdx6\" (UniqueName: \"kubernetes.io/projected/094002d7-d2d3-486f-af00-22a69e977e40-kube-api-access-9jdx6\") pod \"alertmanager-metric-storage-0\" (UID: \"094002d7-d2d3-486f-af00-22a69e977e40\") " pod="openstack/alertmanager-metric-storage-0" Jan 29 08:14:43 crc kubenswrapper[5017]: I0129 08:14:43.587127 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Jan 29 08:14:43 crc kubenswrapper[5017]: W0129 08:14:43.714512 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcaee0a70_c87e_4b2d_b9ca_8f949b81540e.slice/crio-9b6e539640556ed154f472256e98f333f357659f3684c269b7abd28810cc6ee6 WatchSource:0}: Error finding container 9b6e539640556ed154f472256e98f333f357659f3684c269b7abd28810cc6ee6: Status 404 returned error can't find the container with id 9b6e539640556ed154f472256e98f333f357659f3684c269b7abd28810cc6ee6 Jan 29 08:14:43 crc kubenswrapper[5017]: I0129 08:14:43.731150 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 29 08:14:43 crc kubenswrapper[5017]: I0129 08:14:43.746989 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 29 08:14:43 crc kubenswrapper[5017]: I0129 08:14:43.753155 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 29 08:14:43 crc kubenswrapper[5017]: I0129 08:14:43.761046 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 29 08:14:43 crc kubenswrapper[5017]: I0129 08:14:43.762532 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Jan 29 08:14:43 crc kubenswrapper[5017]: I0129 08:14:43.762686 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Jan 29 08:14:43 crc kubenswrapper[5017]: I0129 08:14:43.762990 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Jan 29 08:14:43 crc kubenswrapper[5017]: I0129 08:14:43.763038 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Jan 29 08:14:43 crc kubenswrapper[5017]: I0129 08:14:43.763143 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Jan 29 08:14:43 crc kubenswrapper[5017]: I0129 08:14:43.763223 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Jan 29 08:14:43 crc kubenswrapper[5017]: I0129 08:14:43.763377 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Jan 29 08:14:43 crc kubenswrapper[5017]: I0129 08:14:43.763502 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-5l2hr" Jan 29 08:14:43 crc kubenswrapper[5017]: I0129 08:14:43.794356 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 08:14:43 crc kubenswrapper[5017]: I0129 08:14:43.808838 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-14bcb761-bdc4-4a25-81a0-5ed4625950e3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-14bcb761-bdc4-4a25-81a0-5ed4625950e3\") pod \"prometheus-metric-storage-0\" (UID: \"c02aa22b-1d85-4478-85b3-1b929165d41c\") " pod="openstack/prometheus-metric-storage-0" Jan 29 08:14:43 crc kubenswrapper[5017]: I0129 08:14:43.815142 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/c02aa22b-1d85-4478-85b3-1b929165d41c-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"c02aa22b-1d85-4478-85b3-1b929165d41c\") " pod="openstack/prometheus-metric-storage-0" Jan 29 08:14:43 crc kubenswrapper[5017]: I0129 08:14:43.815205 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/c02aa22b-1d85-4478-85b3-1b929165d41c-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"c02aa22b-1d85-4478-85b3-1b929165d41c\") " pod="openstack/prometheus-metric-storage-0" Jan 29 08:14:43 crc kubenswrapper[5017]: I0129 08:14:43.815339 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/c02aa22b-1d85-4478-85b3-1b929165d41c-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"c02aa22b-1d85-4478-85b3-1b929165d41c\") " pod="openstack/prometheus-metric-storage-0" Jan 29 08:14:43 crc kubenswrapper[5017]: I0129 08:14:43.815474 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c02aa22b-1d85-4478-85b3-1b929165d41c-config\") pod \"prometheus-metric-storage-0\" (UID: \"c02aa22b-1d85-4478-85b3-1b929165d41c\") " pod="openstack/prometheus-metric-storage-0" Jan 29 08:14:43 crc kubenswrapper[5017]: I0129 08:14:43.815523 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c02aa22b-1d85-4478-85b3-1b929165d41c-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"c02aa22b-1d85-4478-85b3-1b929165d41c\") " pod="openstack/prometheus-metric-storage-0" Jan 29 08:14:43 crc kubenswrapper[5017]: I0129 08:14:43.815588 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c02aa22b-1d85-4478-85b3-1b929165d41c-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"c02aa22b-1d85-4478-85b3-1b929165d41c\") " pod="openstack/prometheus-metric-storage-0" Jan 29 08:14:43 crc kubenswrapper[5017]: I0129 08:14:43.815625 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nfpm\" (UniqueName: \"kubernetes.io/projected/c02aa22b-1d85-4478-85b3-1b929165d41c-kube-api-access-6nfpm\") pod \"prometheus-metric-storage-0\" (UID: \"c02aa22b-1d85-4478-85b3-1b929165d41c\") " pod="openstack/prometheus-metric-storage-0" Jan 29 08:14:43 crc kubenswrapper[5017]: I0129 08:14:43.815949 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c02aa22b-1d85-4478-85b3-1b929165d41c-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"c02aa22b-1d85-4478-85b3-1b929165d41c\") " pod="openstack/prometheus-metric-storage-0" Jan 29 08:14:43 crc kubenswrapper[5017]: I0129 08:14:43.816119 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/c02aa22b-1d85-4478-85b3-1b929165d41c-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: 
\"c02aa22b-1d85-4478-85b3-1b929165d41c\") " pod="openstack/prometheus-metric-storage-0" Jan 29 08:14:43 crc kubenswrapper[5017]: I0129 08:14:43.917803 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/c02aa22b-1d85-4478-85b3-1b929165d41c-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"c02aa22b-1d85-4478-85b3-1b929165d41c\") " pod="openstack/prometheus-metric-storage-0" Jan 29 08:14:43 crc kubenswrapper[5017]: I0129 08:14:43.918368 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c02aa22b-1d85-4478-85b3-1b929165d41c-config\") pod \"prometheus-metric-storage-0\" (UID: \"c02aa22b-1d85-4478-85b3-1b929165d41c\") " pod="openstack/prometheus-metric-storage-0" Jan 29 08:14:43 crc kubenswrapper[5017]: I0129 08:14:43.918399 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c02aa22b-1d85-4478-85b3-1b929165d41c-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"c02aa22b-1d85-4478-85b3-1b929165d41c\") " pod="openstack/prometheus-metric-storage-0" Jan 29 08:14:43 crc kubenswrapper[5017]: I0129 08:14:43.918427 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c02aa22b-1d85-4478-85b3-1b929165d41c-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"c02aa22b-1d85-4478-85b3-1b929165d41c\") " pod="openstack/prometheus-metric-storage-0" Jan 29 08:14:43 crc kubenswrapper[5017]: I0129 08:14:43.918450 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6nfpm\" (UniqueName: \"kubernetes.io/projected/c02aa22b-1d85-4478-85b3-1b929165d41c-kube-api-access-6nfpm\") pod \"prometheus-metric-storage-0\" (UID: \"c02aa22b-1d85-4478-85b3-1b929165d41c\") " pod="openstack/prometheus-metric-storage-0" Jan 29 08:14:43 crc kubenswrapper[5017]: I0129 08:14:43.918509 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c02aa22b-1d85-4478-85b3-1b929165d41c-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"c02aa22b-1d85-4478-85b3-1b929165d41c\") " pod="openstack/prometheus-metric-storage-0" Jan 29 08:14:43 crc kubenswrapper[5017]: I0129 08:14:43.918533 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/c02aa22b-1d85-4478-85b3-1b929165d41c-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"c02aa22b-1d85-4478-85b3-1b929165d41c\") " pod="openstack/prometheus-metric-storage-0" Jan 29 08:14:43 crc kubenswrapper[5017]: I0129 08:14:43.918598 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-14bcb761-bdc4-4a25-81a0-5ed4625950e3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-14bcb761-bdc4-4a25-81a0-5ed4625950e3\") pod \"prometheus-metric-storage-0\" (UID: \"c02aa22b-1d85-4478-85b3-1b929165d41c\") " pod="openstack/prometheus-metric-storage-0" Jan 29 08:14:43 crc kubenswrapper[5017]: I0129 08:14:43.918626 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: 
\"kubernetes.io/configmap/c02aa22b-1d85-4478-85b3-1b929165d41c-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"c02aa22b-1d85-4478-85b3-1b929165d41c\") " pod="openstack/prometheus-metric-storage-0" Jan 29 08:14:43 crc kubenswrapper[5017]: I0129 08:14:43.918647 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/c02aa22b-1d85-4478-85b3-1b929165d41c-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"c02aa22b-1d85-4478-85b3-1b929165d41c\") " pod="openstack/prometheus-metric-storage-0" Jan 29 08:14:43 crc kubenswrapper[5017]: I0129 08:14:43.921411 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/c02aa22b-1d85-4478-85b3-1b929165d41c-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"c02aa22b-1d85-4478-85b3-1b929165d41c\") " pod="openstack/prometheus-metric-storage-0" Jan 29 08:14:43 crc kubenswrapper[5017]: I0129 08:14:43.921817 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/c02aa22b-1d85-4478-85b3-1b929165d41c-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"c02aa22b-1d85-4478-85b3-1b929165d41c\") " pod="openstack/prometheus-metric-storage-0" Jan 29 08:14:43 crc kubenswrapper[5017]: I0129 08:14:43.922396 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/c02aa22b-1d85-4478-85b3-1b929165d41c-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"c02aa22b-1d85-4478-85b3-1b929165d41c\") " pod="openstack/prometheus-metric-storage-0" Jan 29 08:14:43 crc kubenswrapper[5017]: I0129 08:14:43.933798 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/c02aa22b-1d85-4478-85b3-1b929165d41c-config\") pod \"prometheus-metric-storage-0\" (UID: \"c02aa22b-1d85-4478-85b3-1b929165d41c\") " pod="openstack/prometheus-metric-storage-0" Jan 29 08:14:43 crc kubenswrapper[5017]: I0129 08:14:43.934363 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c02aa22b-1d85-4478-85b3-1b929165d41c-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"c02aa22b-1d85-4478-85b3-1b929165d41c\") " pod="openstack/prometheus-metric-storage-0" Jan 29 08:14:43 crc kubenswrapper[5017]: I0129 08:14:43.934400 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c02aa22b-1d85-4478-85b3-1b929165d41c-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"c02aa22b-1d85-4478-85b3-1b929165d41c\") " pod="openstack/prometheus-metric-storage-0" Jan 29 08:14:43 crc kubenswrapper[5017]: I0129 08:14:43.935894 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c02aa22b-1d85-4478-85b3-1b929165d41c-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"c02aa22b-1d85-4478-85b3-1b929165d41c\") " pod="openstack/prometheus-metric-storage-0" Jan 29 08:14:43 crc kubenswrapper[5017]: I0129 08:14:43.940245 5017 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice 
STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 29 08:14:43 crc kubenswrapper[5017]: I0129 08:14:43.940301 5017 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-14bcb761-bdc4-4a25-81a0-5ed4625950e3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-14bcb761-bdc4-4a25-81a0-5ed4625950e3\") pod \"prometheus-metric-storage-0\" (UID: \"c02aa22b-1d85-4478-85b3-1b929165d41c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/fd2f38dcf0821026ee193af68a57548da59ce8424aee8934d6ae0c3d7c0cee08/globalmount\"" pod="openstack/prometheus-metric-storage-0" Jan 29 08:14:43 crc kubenswrapper[5017]: I0129 08:14:43.948594 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/c02aa22b-1d85-4478-85b3-1b929165d41c-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"c02aa22b-1d85-4478-85b3-1b929165d41c\") " pod="openstack/prometheus-metric-storage-0" Jan 29 08:14:43 crc kubenswrapper[5017]: I0129 08:14:43.971323 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6nfpm\" (UniqueName: \"kubernetes.io/projected/c02aa22b-1d85-4478-85b3-1b929165d41c-kube-api-access-6nfpm\") pod \"prometheus-metric-storage-0\" (UID: \"c02aa22b-1d85-4478-85b3-1b929165d41c\") " pod="openstack/prometheus-metric-storage-0" Jan 29 08:14:44 crc kubenswrapper[5017]: I0129 08:14:44.058704 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-14bcb761-bdc4-4a25-81a0-5ed4625950e3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-14bcb761-bdc4-4a25-81a0-5ed4625950e3\") pod \"prometheus-metric-storage-0\" (UID: \"c02aa22b-1d85-4478-85b3-1b929165d41c\") " pod="openstack/prometheus-metric-storage-0" Jan 29 08:14:44 crc kubenswrapper[5017]: I0129 08:14:44.131972 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 29 08:14:44 crc kubenswrapper[5017]: I0129 08:14:44.308880 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Jan 29 08:14:44 crc kubenswrapper[5017]: I0129 08:14:44.674037 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 29 08:14:44 crc kubenswrapper[5017]: I0129 08:14:44.680504 5017 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="953f3dbb-a423-4244-a833-a876051cb0d2" podUID="caee0a70-c87e-4b2d-b9ca-8f949b81540e" Jan 29 08:14:44 crc kubenswrapper[5017]: I0129 08:14:44.745235 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/953f3dbb-a423-4244-a833-a876051cb0d2-openstack-config-secret\") pod \"953f3dbb-a423-4244-a833-a876051cb0d2\" (UID: \"953f3dbb-a423-4244-a833-a876051cb0d2\") " Jan 29 08:14:44 crc kubenswrapper[5017]: I0129 08:14:44.746498 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s44dh\" (UniqueName: \"kubernetes.io/projected/953f3dbb-a423-4244-a833-a876051cb0d2-kube-api-access-s44dh\") pod \"953f3dbb-a423-4244-a833-a876051cb0d2\" (UID: \"953f3dbb-a423-4244-a833-a876051cb0d2\") " Jan 29 08:14:44 crc kubenswrapper[5017]: I0129 08:14:44.746772 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/953f3dbb-a423-4244-a833-a876051cb0d2-openstack-config\") pod \"953f3dbb-a423-4244-a833-a876051cb0d2\" (UID: \"953f3dbb-a423-4244-a833-a876051cb0d2\") " Jan 29 08:14:44 crc kubenswrapper[5017]: I0129 08:14:44.747723 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"caee0a70-c87e-4b2d-b9ca-8f949b81540e","Type":"ContainerStarted","Data":"07c2453b366cf197f68e9ec417ad461b21764fb371dd72d34774c3edaa110dd7"} Jan 29 08:14:44 crc kubenswrapper[5017]: I0129 08:14:44.747772 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"caee0a70-c87e-4b2d-b9ca-8f949b81540e","Type":"ContainerStarted","Data":"9b6e539640556ed154f472256e98f333f357659f3684c269b7abd28810cc6ee6"} Jan 29 08:14:44 crc kubenswrapper[5017]: I0129 08:14:44.754023 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"f9ca9b2d-948b-412b-acf5-c98bd249d35c","Type":"ContainerStarted","Data":"4f7091d033c253a813a83f395f129e0987d74e60638122b515fd3a861f10fc38"} Jan 29 08:14:44 crc kubenswrapper[5017]: I0129 08:14:44.754080 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"f9ca9b2d-948b-412b-acf5-c98bd249d35c","Type":"ContainerStarted","Data":"d942dde90f51c64540dc27d1a519ba39d800a4061bd62f42fcd91e1cf3fa59ac"} Jan 29 08:14:44 crc kubenswrapper[5017]: I0129 08:14:44.754452 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 29 08:14:44 crc kubenswrapper[5017]: I0129 08:14:44.755349 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/953f3dbb-a423-4244-a833-a876051cb0d2-kube-api-access-s44dh" (OuterVolumeSpecName: "kube-api-access-s44dh") pod "953f3dbb-a423-4244-a833-a876051cb0d2" (UID: "953f3dbb-a423-4244-a833-a876051cb0d2"). InnerVolumeSpecName "kube-api-access-s44dh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:14:44 crc kubenswrapper[5017]: I0129 08:14:44.765098 5017 generic.go:334] "Generic (PLEG): container finished" podID="953f3dbb-a423-4244-a833-a876051cb0d2" containerID="c2aaec687ce597b2dbbb430cd9a948a22c08d1fa849e8ae385cf3b9741599908" exitCode=137 Jan 29 08:14:44 crc kubenswrapper[5017]: I0129 08:14:44.765214 5017 scope.go:117] "RemoveContainer" containerID="c2aaec687ce597b2dbbb430cd9a948a22c08d1fa849e8ae385cf3b9741599908" Jan 29 08:14:44 crc kubenswrapper[5017]: I0129 08:14:44.765369 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 29 08:14:44 crc kubenswrapper[5017]: I0129 08:14:44.772336 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"094002d7-d2d3-486f-af00-22a69e977e40","Type":"ContainerStarted","Data":"d9ff0d16db40c72ca84cfb249356881cae1dfb833570eb3c9b7f4ca0315f7f51"} Jan 29 08:14:44 crc kubenswrapper[5017]: I0129 08:14:44.777990 5017 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="953f3dbb-a423-4244-a833-a876051cb0d2" podUID="caee0a70-c87e-4b2d-b9ca-8f949b81540e" Jan 29 08:14:44 crc kubenswrapper[5017]: I0129 08:14:44.793929 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=3.793902623 podStartE2EDuration="3.793902623s" podCreationTimestamp="2026-01-29 08:14:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:14:44.772901578 +0000 UTC m=+5971.147349208" watchObservedRunningTime="2026-01-29 08:14:44.793902623 +0000 UTC m=+5971.168350233" Jan 29 08:14:44 crc kubenswrapper[5017]: I0129 08:14:44.811633 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=3.288420216 podStartE2EDuration="3.811379042s" podCreationTimestamp="2026-01-29 08:14:41 +0000 UTC" firstStartedPulling="2026-01-29 08:14:43.825442751 +0000 UTC m=+5970.199890361" lastFinishedPulling="2026-01-29 08:14:44.348401577 +0000 UTC m=+5970.722849187" observedRunningTime="2026-01-29 08:14:44.79377922 +0000 UTC m=+5971.168226840" watchObservedRunningTime="2026-01-29 08:14:44.811379042 +0000 UTC m=+5971.185826652" Jan 29 08:14:44 crc kubenswrapper[5017]: I0129 08:14:44.837530 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/953f3dbb-a423-4244-a833-a876051cb0d2-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "953f3dbb-a423-4244-a833-a876051cb0d2" (UID: "953f3dbb-a423-4244-a833-a876051cb0d2"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:14:44 crc kubenswrapper[5017]: I0129 08:14:44.845430 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/953f3dbb-a423-4244-a833-a876051cb0d2-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "953f3dbb-a423-4244-a833-a876051cb0d2" (UID: "953f3dbb-a423-4244-a833-a876051cb0d2"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:14:44 crc kubenswrapper[5017]: I0129 08:14:44.851633 5017 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/953f3dbb-a423-4244-a833-a876051cb0d2-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 29 08:14:44 crc kubenswrapper[5017]: I0129 08:14:44.851678 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s44dh\" (UniqueName: \"kubernetes.io/projected/953f3dbb-a423-4244-a833-a876051cb0d2-kube-api-access-s44dh\") on node \"crc\" DevicePath \"\"" Jan 29 08:14:44 crc kubenswrapper[5017]: I0129 08:14:44.851688 5017 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/953f3dbb-a423-4244-a833-a876051cb0d2-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 29 08:14:44 crc kubenswrapper[5017]: I0129 08:14:44.853105 5017 scope.go:117] "RemoveContainer" containerID="c2aaec687ce597b2dbbb430cd9a948a22c08d1fa849e8ae385cf3b9741599908" Jan 29 08:14:44 crc kubenswrapper[5017]: I0129 08:14:44.873797 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 29 08:14:44 crc kubenswrapper[5017]: E0129 08:14:44.875980 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2aaec687ce597b2dbbb430cd9a948a22c08d1fa849e8ae385cf3b9741599908\": container with ID starting with c2aaec687ce597b2dbbb430cd9a948a22c08d1fa849e8ae385cf3b9741599908 not found: ID does not exist" containerID="c2aaec687ce597b2dbbb430cd9a948a22c08d1fa849e8ae385cf3b9741599908" Jan 29 08:14:44 crc kubenswrapper[5017]: I0129 08:14:44.879915 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2aaec687ce597b2dbbb430cd9a948a22c08d1fa849e8ae385cf3b9741599908"} err="failed to get container status \"c2aaec687ce597b2dbbb430cd9a948a22c08d1fa849e8ae385cf3b9741599908\": rpc error: code = NotFound desc = could not find container \"c2aaec687ce597b2dbbb430cd9a948a22c08d1fa849e8ae385cf3b9741599908\": container with ID starting with c2aaec687ce597b2dbbb430cd9a948a22c08d1fa849e8ae385cf3b9741599908 not found: ID does not exist" Jan 29 08:14:44 crc kubenswrapper[5017]: W0129 08:14:44.892524 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc02aa22b_1d85_4478_85b3_1b929165d41c.slice/crio-adf12dd67ff0b4e77c54a503de50a85bd23869a5b22d775708e54b49a740ea42 WatchSource:0}: Error finding container adf12dd67ff0b4e77c54a503de50a85bd23869a5b22d775708e54b49a740ea42: Status 404 returned error can't find the container with id adf12dd67ff0b4e77c54a503de50a85bd23869a5b22d775708e54b49a740ea42 Jan 29 08:14:45 crc kubenswrapper[5017]: I0129 08:14:45.041405 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-gmv6z"] Jan 29 08:14:45 crc kubenswrapper[5017]: I0129 08:14:45.057691 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-gmv6z"] Jan 29 08:14:45 crc kubenswrapper[5017]: I0129 08:14:45.083330 5017 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="953f3dbb-a423-4244-a833-a876051cb0d2" podUID="caee0a70-c87e-4b2d-b9ca-8f949b81540e" Jan 29 08:14:45 crc kubenswrapper[5017]: I0129 08:14:45.855581 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/prometheus-metric-storage-0" event={"ID":"c02aa22b-1d85-4478-85b3-1b929165d41c","Type":"ContainerStarted","Data":"adf12dd67ff0b4e77c54a503de50a85bd23869a5b22d775708e54b49a740ea42"} Jan 29 08:14:46 crc kubenswrapper[5017]: I0129 08:14:46.339980 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="953f3dbb-a423-4244-a833-a876051cb0d2" path="/var/lib/kubelet/pods/953f3dbb-a423-4244-a833-a876051cb0d2/volumes" Jan 29 08:14:46 crc kubenswrapper[5017]: I0129 08:14:46.341360 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b19bcfb0-b10d-4b27-a6d9-8ca701b3d251" path="/var/lib/kubelet/pods/b19bcfb0-b10d-4b27-a6d9-8ca701b3d251/volumes" Jan 29 08:14:47 crc kubenswrapper[5017]: I0129 08:14:47.317304 5017 scope.go:117] "RemoveContainer" containerID="45e4c349dc67bf5910f42f2a3df0c872cf61c8acd759f4f9b895eefc7afaf645" Jan 29 08:14:47 crc kubenswrapper[5017]: E0129 08:14:47.319888 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:14:51 crc kubenswrapper[5017]: I0129 08:14:51.928684 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c02aa22b-1d85-4478-85b3-1b929165d41c","Type":"ContainerStarted","Data":"a89e7235d2673a93726978335c24d7c254087f47877ef1435f7664ae8961ac82"} Jan 29 08:14:51 crc kubenswrapper[5017]: I0129 08:14:51.933843 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"094002d7-d2d3-486f-af00-22a69e977e40","Type":"ContainerStarted","Data":"a1b72ece5775dfe90baab3c0cc82a5b3aa9b1ecee6be93db0c4cf78e0b0a728c"} Jan 29 08:14:52 crc kubenswrapper[5017]: I0129 08:14:52.427290 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 29 08:14:57 crc kubenswrapper[5017]: I0129 08:14:57.999122 5017 generic.go:334] "Generic (PLEG): container finished" podID="c02aa22b-1d85-4478-85b3-1b929165d41c" containerID="a89e7235d2673a93726978335c24d7c254087f47877ef1435f7664ae8961ac82" exitCode=0 Jan 29 08:14:58 crc kubenswrapper[5017]: I0129 08:14:57.999218 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c02aa22b-1d85-4478-85b3-1b929165d41c","Type":"ContainerDied","Data":"a89e7235d2673a93726978335c24d7c254087f47877ef1435f7664ae8961ac82"} Jan 29 08:14:58 crc kubenswrapper[5017]: I0129 08:14:58.002406 5017 generic.go:334] "Generic (PLEG): container finished" podID="094002d7-d2d3-486f-af00-22a69e977e40" containerID="a1b72ece5775dfe90baab3c0cc82a5b3aa9b1ecee6be93db0c4cf78e0b0a728c" exitCode=0 Jan 29 08:14:58 crc kubenswrapper[5017]: I0129 08:14:58.002441 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"094002d7-d2d3-486f-af00-22a69e977e40","Type":"ContainerDied","Data":"a1b72ece5775dfe90baab3c0cc82a5b3aa9b1ecee6be93db0c4cf78e0b0a728c"} Jan 29 08:15:00 crc kubenswrapper[5017]: I0129 08:15:00.160807 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494575-d7sv6"] Jan 29 08:15:00 crc kubenswrapper[5017]: I0129 
08:15:00.164034 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494575-d7sv6" Jan 29 08:15:00 crc kubenswrapper[5017]: I0129 08:15:00.166770 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 29 08:15:00 crc kubenswrapper[5017]: I0129 08:15:00.167928 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 29 08:15:00 crc kubenswrapper[5017]: I0129 08:15:00.173490 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494575-d7sv6"] Jan 29 08:15:00 crc kubenswrapper[5017]: I0129 08:15:00.253440 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/70b4245b-8fec-40d1-bfca-d395a35a56e0-secret-volume\") pod \"collect-profiles-29494575-d7sv6\" (UID: \"70b4245b-8fec-40d1-bfca-d395a35a56e0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494575-d7sv6" Jan 29 08:15:00 crc kubenswrapper[5017]: I0129 08:15:00.253493 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvcg4\" (UniqueName: \"kubernetes.io/projected/70b4245b-8fec-40d1-bfca-d395a35a56e0-kube-api-access-dvcg4\") pod \"collect-profiles-29494575-d7sv6\" (UID: \"70b4245b-8fec-40d1-bfca-d395a35a56e0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494575-d7sv6" Jan 29 08:15:00 crc kubenswrapper[5017]: I0129 08:15:00.253579 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/70b4245b-8fec-40d1-bfca-d395a35a56e0-config-volume\") pod \"collect-profiles-29494575-d7sv6\" (UID: \"70b4245b-8fec-40d1-bfca-d395a35a56e0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494575-d7sv6" Jan 29 08:15:00 crc kubenswrapper[5017]: I0129 08:15:00.355675 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/70b4245b-8fec-40d1-bfca-d395a35a56e0-secret-volume\") pod \"collect-profiles-29494575-d7sv6\" (UID: \"70b4245b-8fec-40d1-bfca-d395a35a56e0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494575-d7sv6" Jan 29 08:15:00 crc kubenswrapper[5017]: I0129 08:15:00.355745 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvcg4\" (UniqueName: \"kubernetes.io/projected/70b4245b-8fec-40d1-bfca-d395a35a56e0-kube-api-access-dvcg4\") pod \"collect-profiles-29494575-d7sv6\" (UID: \"70b4245b-8fec-40d1-bfca-d395a35a56e0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494575-d7sv6" Jan 29 08:15:00 crc kubenswrapper[5017]: I0129 08:15:00.355820 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/70b4245b-8fec-40d1-bfca-d395a35a56e0-config-volume\") pod \"collect-profiles-29494575-d7sv6\" (UID: \"70b4245b-8fec-40d1-bfca-d395a35a56e0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494575-d7sv6" Jan 29 08:15:00 crc kubenswrapper[5017]: I0129 08:15:00.357098 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/70b4245b-8fec-40d1-bfca-d395a35a56e0-config-volume\") pod \"collect-profiles-29494575-d7sv6\" (UID: \"70b4245b-8fec-40d1-bfca-d395a35a56e0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494575-d7sv6" Jan 29 08:15:00 crc kubenswrapper[5017]: I0129 08:15:00.364751 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/70b4245b-8fec-40d1-bfca-d395a35a56e0-secret-volume\") pod \"collect-profiles-29494575-d7sv6\" (UID: \"70b4245b-8fec-40d1-bfca-d395a35a56e0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494575-d7sv6" Jan 29 08:15:00 crc kubenswrapper[5017]: I0129 08:15:00.375990 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvcg4\" (UniqueName: \"kubernetes.io/projected/70b4245b-8fec-40d1-bfca-d395a35a56e0-kube-api-access-dvcg4\") pod \"collect-profiles-29494575-d7sv6\" (UID: \"70b4245b-8fec-40d1-bfca-d395a35a56e0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494575-d7sv6" Jan 29 08:15:00 crc kubenswrapper[5017]: I0129 08:15:00.511608 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494575-d7sv6" Jan 29 08:15:01 crc kubenswrapper[5017]: I0129 08:15:01.405231 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494575-d7sv6"] Jan 29 08:15:01 crc kubenswrapper[5017]: W0129 08:15:01.416104 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod70b4245b_8fec_40d1_bfca_d395a35a56e0.slice/crio-8fd6d7fe14b866ac2f9784e1451f3832d22e9f7c080ec76c41ab070be3fa630e WatchSource:0}: Error finding container 8fd6d7fe14b866ac2f9784e1451f3832d22e9f7c080ec76c41ab070be3fa630e: Status 404 returned error can't find the container with id 8fd6d7fe14b866ac2f9784e1451f3832d22e9f7c080ec76c41ab070be3fa630e Jan 29 08:15:02 crc kubenswrapper[5017]: I0129 08:15:02.071099 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494575-d7sv6" event={"ID":"70b4245b-8fec-40d1-bfca-d395a35a56e0","Type":"ContainerStarted","Data":"df3f82fb78012465c08d9a51ec6beaa6acb99d576c87ae5f3bb625f8d4eae19e"} Jan 29 08:15:02 crc kubenswrapper[5017]: I0129 08:15:02.072092 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494575-d7sv6" event={"ID":"70b4245b-8fec-40d1-bfca-d395a35a56e0","Type":"ContainerStarted","Data":"8fd6d7fe14b866ac2f9784e1451f3832d22e9f7c080ec76c41ab070be3fa630e"} Jan 29 08:15:02 crc kubenswrapper[5017]: I0129 08:15:02.074671 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"094002d7-d2d3-486f-af00-22a69e977e40","Type":"ContainerStarted","Data":"90149874e0262b68e34fc3375cef6be64c4da2a756a6fa2046509d54b9d3ae59"} Jan 29 08:15:02 crc kubenswrapper[5017]: I0129 08:15:02.097076 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29494575-d7sv6" podStartSLOduration=2.097047748 podStartE2EDuration="2.097047748s" podCreationTimestamp="2026-01-29 08:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:15:02.089210849 +0000 UTC m=+5988.463658479" 
watchObservedRunningTime="2026-01-29 08:15:02.097047748 +0000 UTC m=+5988.471495358" Jan 29 08:15:02 crc kubenswrapper[5017]: I0129 08:15:02.316506 5017 scope.go:117] "RemoveContainer" containerID="45e4c349dc67bf5910f42f2a3df0c872cf61c8acd759f4f9b895eefc7afaf645" Jan 29 08:15:03 crc kubenswrapper[5017]: I0129 08:15:03.090949 5017 generic.go:334] "Generic (PLEG): container finished" podID="70b4245b-8fec-40d1-bfca-d395a35a56e0" containerID="df3f82fb78012465c08d9a51ec6beaa6acb99d576c87ae5f3bb625f8d4eae19e" exitCode=0 Jan 29 08:15:03 crc kubenswrapper[5017]: I0129 08:15:03.091078 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494575-d7sv6" event={"ID":"70b4245b-8fec-40d1-bfca-d395a35a56e0","Type":"ContainerDied","Data":"df3f82fb78012465c08d9a51ec6beaa6acb99d576c87ae5f3bb625f8d4eae19e"} Jan 29 08:15:05 crc kubenswrapper[5017]: I0129 08:15:05.131938 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"094002d7-d2d3-486f-af00-22a69e977e40","Type":"ContainerStarted","Data":"4c58078b96d53226646b3a4c9720fbef83c733bce3ef3217fdc3bbddd652606f"} Jan 29 08:15:05 crc kubenswrapper[5017]: I0129 08:15:05.133467 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/alertmanager-metric-storage-0" Jan 29 08:15:05 crc kubenswrapper[5017]: I0129 08:15:05.141342 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/alertmanager-metric-storage-0" Jan 29 08:15:05 crc kubenswrapper[5017]: I0129 08:15:05.170865 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/alertmanager-metric-storage-0" podStartSLOduration=5.5844531790000005 podStartE2EDuration="22.170836645s" podCreationTimestamp="2026-01-29 08:14:43 +0000 UTC" firstStartedPulling="2026-01-29 08:14:44.326156333 +0000 UTC m=+5970.700603943" lastFinishedPulling="2026-01-29 08:15:00.912539809 +0000 UTC m=+5987.286987409" observedRunningTime="2026-01-29 08:15:05.163202222 +0000 UTC m=+5991.537649852" watchObservedRunningTime="2026-01-29 08:15:05.170836645 +0000 UTC m=+5991.545284265" Jan 29 08:15:06 crc kubenswrapper[5017]: I0129 08:15:06.458788 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494575-d7sv6" Jan 29 08:15:06 crc kubenswrapper[5017]: I0129 08:15:06.607830 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/70b4245b-8fec-40d1-bfca-d395a35a56e0-secret-volume\") pod \"70b4245b-8fec-40d1-bfca-d395a35a56e0\" (UID: \"70b4245b-8fec-40d1-bfca-d395a35a56e0\") " Jan 29 08:15:06 crc kubenswrapper[5017]: I0129 08:15:06.608178 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dvcg4\" (UniqueName: \"kubernetes.io/projected/70b4245b-8fec-40d1-bfca-d395a35a56e0-kube-api-access-dvcg4\") pod \"70b4245b-8fec-40d1-bfca-d395a35a56e0\" (UID: \"70b4245b-8fec-40d1-bfca-d395a35a56e0\") " Jan 29 08:15:06 crc kubenswrapper[5017]: I0129 08:15:06.608295 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/70b4245b-8fec-40d1-bfca-d395a35a56e0-config-volume\") pod \"70b4245b-8fec-40d1-bfca-d395a35a56e0\" (UID: \"70b4245b-8fec-40d1-bfca-d395a35a56e0\") " Jan 29 08:15:06 crc kubenswrapper[5017]: I0129 08:15:06.609251 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70b4245b-8fec-40d1-bfca-d395a35a56e0-config-volume" (OuterVolumeSpecName: "config-volume") pod "70b4245b-8fec-40d1-bfca-d395a35a56e0" (UID: "70b4245b-8fec-40d1-bfca-d395a35a56e0"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:15:06 crc kubenswrapper[5017]: I0129 08:15:06.615231 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70b4245b-8fec-40d1-bfca-d395a35a56e0-kube-api-access-dvcg4" (OuterVolumeSpecName: "kube-api-access-dvcg4") pod "70b4245b-8fec-40d1-bfca-d395a35a56e0" (UID: "70b4245b-8fec-40d1-bfca-d395a35a56e0"). InnerVolumeSpecName "kube-api-access-dvcg4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:15:06 crc kubenswrapper[5017]: I0129 08:15:06.621395 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70b4245b-8fec-40d1-bfca-d395a35a56e0-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "70b4245b-8fec-40d1-bfca-d395a35a56e0" (UID: "70b4245b-8fec-40d1-bfca-d395a35a56e0"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:15:06 crc kubenswrapper[5017]: I0129 08:15:06.711342 5017 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/70b4245b-8fec-40d1-bfca-d395a35a56e0-config-volume\") on node \"crc\" DevicePath \"\"" Jan 29 08:15:06 crc kubenswrapper[5017]: I0129 08:15:06.711396 5017 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/70b4245b-8fec-40d1-bfca-d395a35a56e0-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 29 08:15:06 crc kubenswrapper[5017]: I0129 08:15:06.711413 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dvcg4\" (UniqueName: \"kubernetes.io/projected/70b4245b-8fec-40d1-bfca-d395a35a56e0-kube-api-access-dvcg4\") on node \"crc\" DevicePath \"\"" Jan 29 08:15:07 crc kubenswrapper[5017]: I0129 08:15:07.153451 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494575-d7sv6" event={"ID":"70b4245b-8fec-40d1-bfca-d395a35a56e0","Type":"ContainerDied","Data":"8fd6d7fe14b866ac2f9784e1451f3832d22e9f7c080ec76c41ab070be3fa630e"} Jan 29 08:15:07 crc kubenswrapper[5017]: I0129 08:15:07.153863 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8fd6d7fe14b866ac2f9784e1451f3832d22e9f7c080ec76c41ab070be3fa630e" Jan 29 08:15:07 crc kubenswrapper[5017]: I0129 08:15:07.153586 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494575-d7sv6" Jan 29 08:15:07 crc kubenswrapper[5017]: I0129 08:15:07.156097 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c02aa22b-1d85-4478-85b3-1b929165d41c","Type":"ContainerStarted","Data":"edacb9c10157f75a269a5ca969cb22af4d1ffd413a5ef1d7fc54ac5f003ed6ab"} Jan 29 08:15:07 crc kubenswrapper[5017]: I0129 08:15:07.182785 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-895pl" event={"ID":"2672ef63-7861-4c3d-a1b4-03cc9d18f8e2","Type":"ContainerStarted","Data":"b8a6fb445ff6fc4c22fdd66d5c1512d5020f037de980ce43bbd2b23413814ebe"} Jan 29 08:15:07 crc kubenswrapper[5017]: I0129 08:15:07.543734 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494530-9czx5"] Jan 29 08:15:07 crc kubenswrapper[5017]: I0129 08:15:07.553429 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494530-9czx5"] Jan 29 08:15:08 crc kubenswrapper[5017]: I0129 08:15:08.329280 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f138d5aa-7b32-4869-b1b4-a65a12f430fc" path="/var/lib/kubelet/pods/f138d5aa-7b32-4869-b1b4-a65a12f430fc/volumes" Jan 29 08:15:11 crc kubenswrapper[5017]: I0129 08:15:11.264799 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c02aa22b-1d85-4478-85b3-1b929165d41c","Type":"ContainerStarted","Data":"354f0ba258f95ea5df99e6779701bfdecf9eb8d471cd96978b99db7df6c62af6"} Jan 29 08:15:14 crc kubenswrapper[5017]: I0129 08:15:14.298930 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" 
event={"ID":"c02aa22b-1d85-4478-85b3-1b929165d41c","Type":"ContainerStarted","Data":"9443f678cb366cf286bc46551df819295cce8e87e2abdce8703a9d946dc9693e"} Jan 29 08:15:14 crc kubenswrapper[5017]: I0129 08:15:14.344244 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=3.567465374 podStartE2EDuration="32.344212985s" podCreationTimestamp="2026-01-29 08:14:42 +0000 UTC" firstStartedPulling="2026-01-29 08:14:44.896826903 +0000 UTC m=+5971.271274513" lastFinishedPulling="2026-01-29 08:15:13.673574504 +0000 UTC m=+6000.048022124" observedRunningTime="2026-01-29 08:15:14.331225994 +0000 UTC m=+6000.705673604" watchObservedRunningTime="2026-01-29 08:15:14.344212985 +0000 UTC m=+6000.718660595" Jan 29 08:15:19 crc kubenswrapper[5017]: I0129 08:15:19.036409 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-7m4xr"] Jan 29 08:15:19 crc kubenswrapper[5017]: I0129 08:15:19.045617 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-375b-account-create-update-jplg6"] Jan 29 08:15:19 crc kubenswrapper[5017]: I0129 08:15:19.055232 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-7m4xr"] Jan 29 08:15:19 crc kubenswrapper[5017]: I0129 08:15:19.066517 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-375b-account-create-update-jplg6"] Jan 29 08:15:19 crc kubenswrapper[5017]: I0129 08:15:19.133045 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Jan 29 08:15:20 crc kubenswrapper[5017]: I0129 08:15:20.332292 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c7eebcf-3d82-45d6-9975-75a81fc8dad8" path="/var/lib/kubelet/pods/4c7eebcf-3d82-45d6-9975-75a81fc8dad8/volumes" Jan 29 08:15:20 crc kubenswrapper[5017]: I0129 08:15:20.334216 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a79553a3-6dd6-42d0-a988-dfec53583ae2" path="/var/lib/kubelet/pods/a79553a3-6dd6-42d0-a988-dfec53583ae2/volumes" Jan 29 08:15:20 crc kubenswrapper[5017]: I0129 08:15:20.574639 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 29 08:15:20 crc kubenswrapper[5017]: E0129 08:15:20.575169 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70b4245b-8fec-40d1-bfca-d395a35a56e0" containerName="collect-profiles" Jan 29 08:15:20 crc kubenswrapper[5017]: I0129 08:15:20.575190 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="70b4245b-8fec-40d1-bfca-d395a35a56e0" containerName="collect-profiles" Jan 29 08:15:20 crc kubenswrapper[5017]: I0129 08:15:20.575463 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="70b4245b-8fec-40d1-bfca-d395a35a56e0" containerName="collect-profiles" Jan 29 08:15:20 crc kubenswrapper[5017]: I0129 08:15:20.577489 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 08:15:20 crc kubenswrapper[5017]: I0129 08:15:20.582309 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 29 08:15:20 crc kubenswrapper[5017]: I0129 08:15:20.582797 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 29 08:15:20 crc kubenswrapper[5017]: I0129 08:15:20.591687 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 08:15:20 crc kubenswrapper[5017]: I0129 08:15:20.671988 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20db9b5e-d8e6-4f73-ae02-fb92246d5557-config-data\") pod \"ceilometer-0\" (UID: \"20db9b5e-d8e6-4f73-ae02-fb92246d5557\") " pod="openstack/ceilometer-0" Jan 29 08:15:20 crc kubenswrapper[5017]: I0129 08:15:20.672072 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/20db9b5e-d8e6-4f73-ae02-fb92246d5557-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"20db9b5e-d8e6-4f73-ae02-fb92246d5557\") " pod="openstack/ceilometer-0" Jan 29 08:15:20 crc kubenswrapper[5017]: I0129 08:15:20.672166 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rb6x\" (UniqueName: \"kubernetes.io/projected/20db9b5e-d8e6-4f73-ae02-fb92246d5557-kube-api-access-2rb6x\") pod \"ceilometer-0\" (UID: \"20db9b5e-d8e6-4f73-ae02-fb92246d5557\") " pod="openstack/ceilometer-0" Jan 29 08:15:20 crc kubenswrapper[5017]: I0129 08:15:20.672233 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20db9b5e-d8e6-4f73-ae02-fb92246d5557-log-httpd\") pod \"ceilometer-0\" (UID: \"20db9b5e-d8e6-4f73-ae02-fb92246d5557\") " pod="openstack/ceilometer-0" Jan 29 08:15:20 crc kubenswrapper[5017]: I0129 08:15:20.672323 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20db9b5e-d8e6-4f73-ae02-fb92246d5557-run-httpd\") pod \"ceilometer-0\" (UID: \"20db9b5e-d8e6-4f73-ae02-fb92246d5557\") " pod="openstack/ceilometer-0" Jan 29 08:15:20 crc kubenswrapper[5017]: I0129 08:15:20.672393 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20db9b5e-d8e6-4f73-ae02-fb92246d5557-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"20db9b5e-d8e6-4f73-ae02-fb92246d5557\") " pod="openstack/ceilometer-0" Jan 29 08:15:20 crc kubenswrapper[5017]: I0129 08:15:20.672427 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20db9b5e-d8e6-4f73-ae02-fb92246d5557-scripts\") pod \"ceilometer-0\" (UID: \"20db9b5e-d8e6-4f73-ae02-fb92246d5557\") " pod="openstack/ceilometer-0" Jan 29 08:15:20 crc kubenswrapper[5017]: I0129 08:15:20.774581 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20db9b5e-d8e6-4f73-ae02-fb92246d5557-log-httpd\") pod \"ceilometer-0\" (UID: \"20db9b5e-d8e6-4f73-ae02-fb92246d5557\") " pod="openstack/ceilometer-0" Jan 29 08:15:20 crc kubenswrapper[5017]: I0129 08:15:20.774663 5017 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20db9b5e-d8e6-4f73-ae02-fb92246d5557-run-httpd\") pod \"ceilometer-0\" (UID: \"20db9b5e-d8e6-4f73-ae02-fb92246d5557\") " pod="openstack/ceilometer-0" Jan 29 08:15:20 crc kubenswrapper[5017]: I0129 08:15:20.774718 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20db9b5e-d8e6-4f73-ae02-fb92246d5557-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"20db9b5e-d8e6-4f73-ae02-fb92246d5557\") " pod="openstack/ceilometer-0" Jan 29 08:15:20 crc kubenswrapper[5017]: I0129 08:15:20.774744 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20db9b5e-d8e6-4f73-ae02-fb92246d5557-scripts\") pod \"ceilometer-0\" (UID: \"20db9b5e-d8e6-4f73-ae02-fb92246d5557\") " pod="openstack/ceilometer-0" Jan 29 08:15:20 crc kubenswrapper[5017]: I0129 08:15:20.774791 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20db9b5e-d8e6-4f73-ae02-fb92246d5557-config-data\") pod \"ceilometer-0\" (UID: \"20db9b5e-d8e6-4f73-ae02-fb92246d5557\") " pod="openstack/ceilometer-0" Jan 29 08:15:20 crc kubenswrapper[5017]: I0129 08:15:20.774818 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/20db9b5e-d8e6-4f73-ae02-fb92246d5557-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"20db9b5e-d8e6-4f73-ae02-fb92246d5557\") " pod="openstack/ceilometer-0" Jan 29 08:15:20 crc kubenswrapper[5017]: I0129 08:15:20.774877 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rb6x\" (UniqueName: \"kubernetes.io/projected/20db9b5e-d8e6-4f73-ae02-fb92246d5557-kube-api-access-2rb6x\") pod \"ceilometer-0\" (UID: \"20db9b5e-d8e6-4f73-ae02-fb92246d5557\") " pod="openstack/ceilometer-0" Jan 29 08:15:20 crc kubenswrapper[5017]: I0129 08:15:20.775160 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20db9b5e-d8e6-4f73-ae02-fb92246d5557-log-httpd\") pod \"ceilometer-0\" (UID: \"20db9b5e-d8e6-4f73-ae02-fb92246d5557\") " pod="openstack/ceilometer-0" Jan 29 08:15:20 crc kubenswrapper[5017]: I0129 08:15:20.775520 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20db9b5e-d8e6-4f73-ae02-fb92246d5557-run-httpd\") pod \"ceilometer-0\" (UID: \"20db9b5e-d8e6-4f73-ae02-fb92246d5557\") " pod="openstack/ceilometer-0" Jan 29 08:15:20 crc kubenswrapper[5017]: I0129 08:15:20.783454 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/20db9b5e-d8e6-4f73-ae02-fb92246d5557-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"20db9b5e-d8e6-4f73-ae02-fb92246d5557\") " pod="openstack/ceilometer-0" Jan 29 08:15:20 crc kubenswrapper[5017]: I0129 08:15:20.784606 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20db9b5e-d8e6-4f73-ae02-fb92246d5557-config-data\") pod \"ceilometer-0\" (UID: \"20db9b5e-d8e6-4f73-ae02-fb92246d5557\") " pod="openstack/ceilometer-0" Jan 29 08:15:20 crc kubenswrapper[5017]: I0129 08:15:20.788605 5017 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20db9b5e-d8e6-4f73-ae02-fb92246d5557-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"20db9b5e-d8e6-4f73-ae02-fb92246d5557\") " pod="openstack/ceilometer-0" Jan 29 08:15:20 crc kubenswrapper[5017]: I0129 08:15:20.803011 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20db9b5e-d8e6-4f73-ae02-fb92246d5557-scripts\") pod \"ceilometer-0\" (UID: \"20db9b5e-d8e6-4f73-ae02-fb92246d5557\") " pod="openstack/ceilometer-0" Jan 29 08:15:20 crc kubenswrapper[5017]: I0129 08:15:20.817100 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rb6x\" (UniqueName: \"kubernetes.io/projected/20db9b5e-d8e6-4f73-ae02-fb92246d5557-kube-api-access-2rb6x\") pod \"ceilometer-0\" (UID: \"20db9b5e-d8e6-4f73-ae02-fb92246d5557\") " pod="openstack/ceilometer-0" Jan 29 08:15:20 crc kubenswrapper[5017]: I0129 08:15:20.912722 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 08:15:21 crc kubenswrapper[5017]: I0129 08:15:21.438170 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 08:15:21 crc kubenswrapper[5017]: W0129 08:15:21.441714 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod20db9b5e_d8e6_4f73_ae02_fb92246d5557.slice/crio-997c9435a75a61346742e692c9114e4b8d2f168389b64b334593f1ccfc794b2b WatchSource:0}: Error finding container 997c9435a75a61346742e692c9114e4b8d2f168389b64b334593f1ccfc794b2b: Status 404 returned error can't find the container with id 997c9435a75a61346742e692c9114e4b8d2f168389b64b334593f1ccfc794b2b Jan 29 08:15:22 crc kubenswrapper[5017]: I0129 08:15:22.388435 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20db9b5e-d8e6-4f73-ae02-fb92246d5557","Type":"ContainerStarted","Data":"b82cb67be6bec889160fa2b8129bd6a9c417365d8550b868bcdb1f6db78d3db3"} Jan 29 08:15:22 crc kubenswrapper[5017]: I0129 08:15:22.389432 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20db9b5e-d8e6-4f73-ae02-fb92246d5557","Type":"ContainerStarted","Data":"997c9435a75a61346742e692c9114e4b8d2f168389b64b334593f1ccfc794b2b"} Jan 29 08:15:24 crc kubenswrapper[5017]: I0129 08:15:24.416339 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20db9b5e-d8e6-4f73-ae02-fb92246d5557","Type":"ContainerStarted","Data":"d2234a93bc9e780c2bac7459ef93fbb138dd0808cdbc1b86bc6edbd054d5805e"} Jan 29 08:15:25 crc kubenswrapper[5017]: I0129 08:15:25.445665 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20db9b5e-d8e6-4f73-ae02-fb92246d5557","Type":"ContainerStarted","Data":"4907f0f78605d4f47a3efae37c87a8d3a93cda17cb3859f31e0efc763111f4c5"} Jan 29 08:15:26 crc kubenswrapper[5017]: I0129 08:15:26.040637 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-jtptj"] Jan 29 08:15:26 crc kubenswrapper[5017]: I0129 08:15:26.056204 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-jtptj"] Jan 29 08:15:26 crc kubenswrapper[5017]: I0129 08:15:26.332912 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06ff737a-ae98-4539-a127-3543f2b5e31e" path="/var/lib/kubelet/pods/06ff737a-ae98-4539-a127-3543f2b5e31e/volumes" Jan 29 
Jan 29 08:15:27 crc kubenswrapper[5017]: I0129 08:15:27.476936 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20db9b5e-d8e6-4f73-ae02-fb92246d5557","Type":"ContainerStarted","Data":"88fbcdd0d4fd31e6084148cf570fdd70e443bfb3efea72fb5dca6a9aed489536"}
Jan 29 08:15:27 crc kubenswrapper[5017]: I0129 08:15:27.478121 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 29 08:15:27 crc kubenswrapper[5017]: I0129 08:15:27.503998 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.511534841 podStartE2EDuration="7.503947184s" podCreationTimestamp="2026-01-29 08:15:20 +0000 UTC" firstStartedPulling="2026-01-29 08:15:21.445720413 +0000 UTC m=+6007.820168023" lastFinishedPulling="2026-01-29 08:15:26.438132756 +0000 UTC m=+6012.812580366" observedRunningTime="2026-01-29 08:15:27.499906466 +0000 UTC m=+6013.874354096" watchObservedRunningTime="2026-01-29 08:15:27.503947184 +0000 UTC m=+6013.878394794"
Jan 29 08:15:29 crc kubenswrapper[5017]: I0129 08:15:29.133042 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0"
Jan 29 08:15:29 crc kubenswrapper[5017]: I0129 08:15:29.136905 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0"
Jan 29 08:15:29 crc kubenswrapper[5017]: I0129 08:15:29.500600 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0"
Jan 29 08:15:32 crc kubenswrapper[5017]: I0129 08:15:32.386344 5017 scope.go:117] "RemoveContainer" containerID="08eb5048b0e3d7a0705b53b6a50ecf6bf794d5dfb65fd1e1a1abded4ec4d6f40"
Jan 29 08:15:32 crc kubenswrapper[5017]: I0129 08:15:32.420642 5017 scope.go:117] "RemoveContainer" containerID="440a13c69bdd26fba6ace80ffe30f3168b17ae7aeba4259a9d5cc462d363690a"
Jan 29 08:15:32 crc kubenswrapper[5017]: I0129 08:15:32.500030 5017 scope.go:117] "RemoveContainer" containerID="5dda04fed68552f36aa1b3bd678efb7946e5f6f0dffd99a2de8f064e40a65e89"
Jan 29 08:15:32 crc kubenswrapper[5017]: I0129 08:15:32.546842 5017 scope.go:117] "RemoveContainer" containerID="9f918704b5310072bfa2c0af3192198402a4a13ffd89cfc0b731b5f1a509bc57"
Jan 29 08:15:32 crc kubenswrapper[5017]: I0129 08:15:32.631622 5017 scope.go:117] "RemoveContainer" containerID="78338d2b9917439c27accb752beb4fb33ecc7e2a21b40e4fe9dea74b772e3b7a"
Jan 29 08:15:32 crc kubenswrapper[5017]: I0129 08:15:32.685526 5017 scope.go:117] "RemoveContainer" containerID="bb9e5f071bfe8a3c4fa32e0b0b39f9da2fb48c01145525ae4fb5e1d5ba3c5cdb"
Jan 29 08:15:32 crc kubenswrapper[5017]: I0129 08:15:32.734882 5017 scope.go:117] "RemoveContainer" containerID="be442bb7f7caa1d142e1e0c0c6980c86f751d92c460445419a08c6de49061f8e"
Jan 29 08:15:32 crc kubenswrapper[5017]: I0129 08:15:32.750290 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-create-nb7cn"]
Need to start a new one" pod="openstack/aodh-db-create-nb7cn" Jan 29 08:15:32 crc kubenswrapper[5017]: I0129 08:15:32.773246 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-nb7cn"] Jan 29 08:15:32 crc kubenswrapper[5017]: I0129 08:15:32.850669 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-5f20-account-create-update-967mb"] Jan 29 08:15:32 crc kubenswrapper[5017]: I0129 08:15:32.852379 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-5f20-account-create-update-967mb" Jan 29 08:15:32 crc kubenswrapper[5017]: I0129 08:15:32.855222 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-db-secret" Jan 29 08:15:32 crc kubenswrapper[5017]: I0129 08:15:32.863678 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-5f20-account-create-update-967mb"] Jan 29 08:15:32 crc kubenswrapper[5017]: I0129 08:15:32.912075 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-km28t\" (UniqueName: \"kubernetes.io/projected/13184a95-812b-4284-af60-4bd58429a08a-kube-api-access-km28t\") pod \"aodh-db-create-nb7cn\" (UID: \"13184a95-812b-4284-af60-4bd58429a08a\") " pod="openstack/aodh-db-create-nb7cn" Jan 29 08:15:32 crc kubenswrapper[5017]: I0129 08:15:32.912145 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/13184a95-812b-4284-af60-4bd58429a08a-operator-scripts\") pod \"aodh-db-create-nb7cn\" (UID: \"13184a95-812b-4284-af60-4bd58429a08a\") " pod="openstack/aodh-db-create-nb7cn" Jan 29 08:15:33 crc kubenswrapper[5017]: I0129 08:15:33.014872 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-km28t\" (UniqueName: \"kubernetes.io/projected/13184a95-812b-4284-af60-4bd58429a08a-kube-api-access-km28t\") pod \"aodh-db-create-nb7cn\" (UID: \"13184a95-812b-4284-af60-4bd58429a08a\") " pod="openstack/aodh-db-create-nb7cn" Jan 29 08:15:33 crc kubenswrapper[5017]: I0129 08:15:33.015094 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/13184a95-812b-4284-af60-4bd58429a08a-operator-scripts\") pod \"aodh-db-create-nb7cn\" (UID: \"13184a95-812b-4284-af60-4bd58429a08a\") " pod="openstack/aodh-db-create-nb7cn" Jan 29 08:15:33 crc kubenswrapper[5017]: I0129 08:15:33.015198 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4113968e-d27d-4a51-841d-7721ffb477ad-operator-scripts\") pod \"aodh-5f20-account-create-update-967mb\" (UID: \"4113968e-d27d-4a51-841d-7721ffb477ad\") " pod="openstack/aodh-5f20-account-create-update-967mb" Jan 29 08:15:33 crc kubenswrapper[5017]: I0129 08:15:33.015320 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9jwp\" (UniqueName: \"kubernetes.io/projected/4113968e-d27d-4a51-841d-7721ffb477ad-kube-api-access-l9jwp\") pod \"aodh-5f20-account-create-update-967mb\" (UID: \"4113968e-d27d-4a51-841d-7721ffb477ad\") " pod="openstack/aodh-5f20-account-create-update-967mb" Jan 29 08:15:33 crc kubenswrapper[5017]: I0129 08:15:33.017305 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/13184a95-812b-4284-af60-4bd58429a08a-operator-scripts\") pod \"aodh-db-create-nb7cn\" (UID: \"13184a95-812b-4284-af60-4bd58429a08a\") " pod="openstack/aodh-db-create-nb7cn" Jan 29 08:15:33 crc kubenswrapper[5017]: I0129 08:15:33.035985 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-km28t\" (UniqueName: \"kubernetes.io/projected/13184a95-812b-4284-af60-4bd58429a08a-kube-api-access-km28t\") pod \"aodh-db-create-nb7cn\" (UID: \"13184a95-812b-4284-af60-4bd58429a08a\") " pod="openstack/aodh-db-create-nb7cn" Jan 29 08:15:33 crc kubenswrapper[5017]: I0129 08:15:33.073303 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-nb7cn" Jan 29 08:15:33 crc kubenswrapper[5017]: I0129 08:15:33.117390 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4113968e-d27d-4a51-841d-7721ffb477ad-operator-scripts\") pod \"aodh-5f20-account-create-update-967mb\" (UID: \"4113968e-d27d-4a51-841d-7721ffb477ad\") " pod="openstack/aodh-5f20-account-create-update-967mb" Jan 29 08:15:33 crc kubenswrapper[5017]: I0129 08:15:33.117483 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9jwp\" (UniqueName: \"kubernetes.io/projected/4113968e-d27d-4a51-841d-7721ffb477ad-kube-api-access-l9jwp\") pod \"aodh-5f20-account-create-update-967mb\" (UID: \"4113968e-d27d-4a51-841d-7721ffb477ad\") " pod="openstack/aodh-5f20-account-create-update-967mb" Jan 29 08:15:33 crc kubenswrapper[5017]: I0129 08:15:33.118517 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4113968e-d27d-4a51-841d-7721ffb477ad-operator-scripts\") pod \"aodh-5f20-account-create-update-967mb\" (UID: \"4113968e-d27d-4a51-841d-7721ffb477ad\") " pod="openstack/aodh-5f20-account-create-update-967mb" Jan 29 08:15:33 crc kubenswrapper[5017]: I0129 08:15:33.139784 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9jwp\" (UniqueName: \"kubernetes.io/projected/4113968e-d27d-4a51-841d-7721ffb477ad-kube-api-access-l9jwp\") pod \"aodh-5f20-account-create-update-967mb\" (UID: \"4113968e-d27d-4a51-841d-7721ffb477ad\") " pod="openstack/aodh-5f20-account-create-update-967mb" Jan 29 08:15:33 crc kubenswrapper[5017]: I0129 08:15:33.192260 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-5f20-account-create-update-967mb" Jan 29 08:15:34 crc kubenswrapper[5017]: I0129 08:15:34.219528 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-nb7cn"] Jan 29 08:15:34 crc kubenswrapper[5017]: I0129 08:15:34.234334 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-5f20-account-create-update-967mb"] Jan 29 08:15:34 crc kubenswrapper[5017]: I0129 08:15:34.553971 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-nb7cn" event={"ID":"13184a95-812b-4284-af60-4bd58429a08a","Type":"ContainerStarted","Data":"4bee9d54ae6f75e0c11ab77de6406971c610a4dd31a2b630a2cff754b42fc773"} Jan 29 08:15:34 crc kubenswrapper[5017]: I0129 08:15:34.554517 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-nb7cn" event={"ID":"13184a95-812b-4284-af60-4bd58429a08a","Type":"ContainerStarted","Data":"12a3e1afe3936e08709ea56867da5bbb63cfaeec6271192a10e80084178d05c2"} Jan 29 08:15:34 crc kubenswrapper[5017]: I0129 08:15:34.558593 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-5f20-account-create-update-967mb" event={"ID":"4113968e-d27d-4a51-841d-7721ffb477ad","Type":"ContainerStarted","Data":"29f6fd19ddfccd3db9376b909ba749e4d4c301d50bfe9d13daaa4ab181411f86"} Jan 29 08:15:34 crc kubenswrapper[5017]: I0129 08:15:34.558661 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-5f20-account-create-update-967mb" event={"ID":"4113968e-d27d-4a51-841d-7721ffb477ad","Type":"ContainerStarted","Data":"9462643dd74cc905b9bc9db03dcad66b695f426fb7a6ec197bde330a522445c9"} Jan 29 08:15:34 crc kubenswrapper[5017]: I0129 08:15:34.580572 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-create-nb7cn" podStartSLOduration=2.580544213 podStartE2EDuration="2.580544213s" podCreationTimestamp="2026-01-29 08:15:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:15:34.57256823 +0000 UTC m=+6020.947015850" watchObservedRunningTime="2026-01-29 08:15:34.580544213 +0000 UTC m=+6020.954991823" Jan 29 08:15:34 crc kubenswrapper[5017]: I0129 08:15:34.593779 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-5f20-account-create-update-967mb" podStartSLOduration=2.5937524720000003 podStartE2EDuration="2.593752472s" podCreationTimestamp="2026-01-29 08:15:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:15:34.590842121 +0000 UTC m=+6020.965289721" watchObservedRunningTime="2026-01-29 08:15:34.593752472 +0000 UTC m=+6020.968200082" Jan 29 08:15:35 crc kubenswrapper[5017]: I0129 08:15:35.570859 5017 generic.go:334] "Generic (PLEG): container finished" podID="13184a95-812b-4284-af60-4bd58429a08a" containerID="4bee9d54ae6f75e0c11ab77de6406971c610a4dd31a2b630a2cff754b42fc773" exitCode=0 Jan 29 08:15:35 crc kubenswrapper[5017]: I0129 08:15:35.570935 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-nb7cn" event={"ID":"13184a95-812b-4284-af60-4bd58429a08a","Type":"ContainerDied","Data":"4bee9d54ae6f75e0c11ab77de6406971c610a4dd31a2b630a2cff754b42fc773"} Jan 29 08:15:35 crc kubenswrapper[5017]: I0129 08:15:35.574449 5017 generic.go:334] "Generic (PLEG): container finished" podID="4113968e-d27d-4a51-841d-7721ffb477ad" 
containerID="29f6fd19ddfccd3db9376b909ba749e4d4c301d50bfe9d13daaa4ab181411f86" exitCode=0 Jan 29 08:15:35 crc kubenswrapper[5017]: I0129 08:15:35.574529 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-5f20-account-create-update-967mb" event={"ID":"4113968e-d27d-4a51-841d-7721ffb477ad","Type":"ContainerDied","Data":"29f6fd19ddfccd3db9376b909ba749e4d4c301d50bfe9d13daaa4ab181411f86"} Jan 29 08:15:37 crc kubenswrapper[5017]: I0129 08:15:37.074846 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-5f20-account-create-update-967mb" Jan 29 08:15:37 crc kubenswrapper[5017]: I0129 08:15:37.082838 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-nb7cn" Jan 29 08:15:37 crc kubenswrapper[5017]: I0129 08:15:37.228431 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/13184a95-812b-4284-af60-4bd58429a08a-operator-scripts\") pod \"13184a95-812b-4284-af60-4bd58429a08a\" (UID: \"13184a95-812b-4284-af60-4bd58429a08a\") " Jan 29 08:15:37 crc kubenswrapper[5017]: I0129 08:15:37.229115 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9jwp\" (UniqueName: \"kubernetes.io/projected/4113968e-d27d-4a51-841d-7721ffb477ad-kube-api-access-l9jwp\") pod \"4113968e-d27d-4a51-841d-7721ffb477ad\" (UID: \"4113968e-d27d-4a51-841d-7721ffb477ad\") " Jan 29 08:15:37 crc kubenswrapper[5017]: I0129 08:15:37.229269 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4113968e-d27d-4a51-841d-7721ffb477ad-operator-scripts\") pod \"4113968e-d27d-4a51-841d-7721ffb477ad\" (UID: \"4113968e-d27d-4a51-841d-7721ffb477ad\") " Jan 29 08:15:37 crc kubenswrapper[5017]: I0129 08:15:37.229906 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13184a95-812b-4284-af60-4bd58429a08a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "13184a95-812b-4284-af60-4bd58429a08a" (UID: "13184a95-812b-4284-af60-4bd58429a08a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:15:37 crc kubenswrapper[5017]: I0129 08:15:37.229935 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4113968e-d27d-4a51-841d-7721ffb477ad-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4113968e-d27d-4a51-841d-7721ffb477ad" (UID: "4113968e-d27d-4a51-841d-7721ffb477ad"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:15:37 crc kubenswrapper[5017]: I0129 08:15:37.230094 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-km28t\" (UniqueName: \"kubernetes.io/projected/13184a95-812b-4284-af60-4bd58429a08a-kube-api-access-km28t\") pod \"13184a95-812b-4284-af60-4bd58429a08a\" (UID: \"13184a95-812b-4284-af60-4bd58429a08a\") " Jan 29 08:15:37 crc kubenswrapper[5017]: I0129 08:15:37.232430 5017 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4113968e-d27d-4a51-841d-7721ffb477ad-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 08:15:37 crc kubenswrapper[5017]: I0129 08:15:37.232461 5017 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/13184a95-812b-4284-af60-4bd58429a08a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 08:15:37 crc kubenswrapper[5017]: I0129 08:15:37.235564 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4113968e-d27d-4a51-841d-7721ffb477ad-kube-api-access-l9jwp" (OuterVolumeSpecName: "kube-api-access-l9jwp") pod "4113968e-d27d-4a51-841d-7721ffb477ad" (UID: "4113968e-d27d-4a51-841d-7721ffb477ad"). InnerVolumeSpecName "kube-api-access-l9jwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:15:37 crc kubenswrapper[5017]: I0129 08:15:37.235967 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13184a95-812b-4284-af60-4bd58429a08a-kube-api-access-km28t" (OuterVolumeSpecName: "kube-api-access-km28t") pod "13184a95-812b-4284-af60-4bd58429a08a" (UID: "13184a95-812b-4284-af60-4bd58429a08a"). InnerVolumeSpecName "kube-api-access-km28t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:15:37 crc kubenswrapper[5017]: I0129 08:15:37.334518 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l9jwp\" (UniqueName: \"kubernetes.io/projected/4113968e-d27d-4a51-841d-7721ffb477ad-kube-api-access-l9jwp\") on node \"crc\" DevicePath \"\"" Jan 29 08:15:37 crc kubenswrapper[5017]: I0129 08:15:37.334557 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-km28t\" (UniqueName: \"kubernetes.io/projected/13184a95-812b-4284-af60-4bd58429a08a-kube-api-access-km28t\") on node \"crc\" DevicePath \"\"" Jan 29 08:15:37 crc kubenswrapper[5017]: I0129 08:15:37.608550 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-nb7cn" event={"ID":"13184a95-812b-4284-af60-4bd58429a08a","Type":"ContainerDied","Data":"12a3e1afe3936e08709ea56867da5bbb63cfaeec6271192a10e80084178d05c2"} Jan 29 08:15:37 crc kubenswrapper[5017]: I0129 08:15:37.608612 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="12a3e1afe3936e08709ea56867da5bbb63cfaeec6271192a10e80084178d05c2" Jan 29 08:15:37 crc kubenswrapper[5017]: I0129 08:15:37.608839 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-nb7cn" Jan 29 08:15:37 crc kubenswrapper[5017]: I0129 08:15:37.610148 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-5f20-account-create-update-967mb" event={"ID":"4113968e-d27d-4a51-841d-7721ffb477ad","Type":"ContainerDied","Data":"9462643dd74cc905b9bc9db03dcad66b695f426fb7a6ec197bde330a522445c9"} Jan 29 08:15:37 crc kubenswrapper[5017]: I0129 08:15:37.610181 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-5f20-account-create-update-967mb" Jan 29 08:15:37 crc kubenswrapper[5017]: I0129 08:15:37.610184 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9462643dd74cc905b9bc9db03dcad66b695f426fb7a6ec197bde330a522445c9" Jan 29 08:15:38 crc kubenswrapper[5017]: I0129 08:15:38.306499 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-b4hhk"] Jan 29 08:15:38 crc kubenswrapper[5017]: E0129 08:15:38.307510 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4113968e-d27d-4a51-841d-7721ffb477ad" containerName="mariadb-account-create-update" Jan 29 08:15:38 crc kubenswrapper[5017]: I0129 08:15:38.307529 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="4113968e-d27d-4a51-841d-7721ffb477ad" containerName="mariadb-account-create-update" Jan 29 08:15:38 crc kubenswrapper[5017]: E0129 08:15:38.307764 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13184a95-812b-4284-af60-4bd58429a08a" containerName="mariadb-database-create" Jan 29 08:15:38 crc kubenswrapper[5017]: I0129 08:15:38.307771 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="13184a95-812b-4284-af60-4bd58429a08a" containerName="mariadb-database-create" Jan 29 08:15:38 crc kubenswrapper[5017]: I0129 08:15:38.307986 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="4113968e-d27d-4a51-841d-7721ffb477ad" containerName="mariadb-account-create-update" Jan 29 08:15:38 crc kubenswrapper[5017]: I0129 08:15:38.308019 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="13184a95-812b-4284-af60-4bd58429a08a" containerName="mariadb-database-create" Jan 29 08:15:38 crc kubenswrapper[5017]: I0129 08:15:38.309464 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-b4hhk" Jan 29 08:15:38 crc kubenswrapper[5017]: I0129 08:15:38.320719 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Jan 29 08:15:38 crc kubenswrapper[5017]: I0129 08:15:38.320782 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 29 08:15:38 crc kubenswrapper[5017]: I0129 08:15:38.321056 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-rv242" Jan 29 08:15:38 crc kubenswrapper[5017]: I0129 08:15:38.321400 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Jan 29 08:15:38 crc kubenswrapper[5017]: I0129 08:15:38.342140 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-b4hhk"] Jan 29 08:15:38 crc kubenswrapper[5017]: I0129 08:15:38.462561 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eaf64798-e554-4c94-b5f1-2ee6a88852f4-combined-ca-bundle\") pod \"aodh-db-sync-b4hhk\" (UID: \"eaf64798-e554-4c94-b5f1-2ee6a88852f4\") " pod="openstack/aodh-db-sync-b4hhk" Jan 29 08:15:38 crc kubenswrapper[5017]: I0129 08:15:38.462680 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eaf64798-e554-4c94-b5f1-2ee6a88852f4-scripts\") pod \"aodh-db-sync-b4hhk\" (UID: \"eaf64798-e554-4c94-b5f1-2ee6a88852f4\") " pod="openstack/aodh-db-sync-b4hhk" Jan 29 08:15:38 crc kubenswrapper[5017]: I0129 08:15:38.462869 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eaf64798-e554-4c94-b5f1-2ee6a88852f4-config-data\") pod \"aodh-db-sync-b4hhk\" (UID: \"eaf64798-e554-4c94-b5f1-2ee6a88852f4\") " pod="openstack/aodh-db-sync-b4hhk" Jan 29 08:15:38 crc kubenswrapper[5017]: I0129 08:15:38.462906 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h25kg\" (UniqueName: \"kubernetes.io/projected/eaf64798-e554-4c94-b5f1-2ee6a88852f4-kube-api-access-h25kg\") pod \"aodh-db-sync-b4hhk\" (UID: \"eaf64798-e554-4c94-b5f1-2ee6a88852f4\") " pod="openstack/aodh-db-sync-b4hhk" Jan 29 08:15:38 crc kubenswrapper[5017]: I0129 08:15:38.565301 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eaf64798-e554-4c94-b5f1-2ee6a88852f4-config-data\") pod \"aodh-db-sync-b4hhk\" (UID: \"eaf64798-e554-4c94-b5f1-2ee6a88852f4\") " pod="openstack/aodh-db-sync-b4hhk" Jan 29 08:15:38 crc kubenswrapper[5017]: I0129 08:15:38.565387 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h25kg\" (UniqueName: \"kubernetes.io/projected/eaf64798-e554-4c94-b5f1-2ee6a88852f4-kube-api-access-h25kg\") pod \"aodh-db-sync-b4hhk\" (UID: \"eaf64798-e554-4c94-b5f1-2ee6a88852f4\") " pod="openstack/aodh-db-sync-b4hhk" Jan 29 08:15:38 crc kubenswrapper[5017]: I0129 08:15:38.565523 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eaf64798-e554-4c94-b5f1-2ee6a88852f4-combined-ca-bundle\") pod \"aodh-db-sync-b4hhk\" (UID: \"eaf64798-e554-4c94-b5f1-2ee6a88852f4\") " pod="openstack/aodh-db-sync-b4hhk" Jan 29 08:15:38 crc 
Jan 29 08:15:38 crc kubenswrapper[5017]: I0129 08:15:38.565565 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eaf64798-e554-4c94-b5f1-2ee6a88852f4-scripts\") pod \"aodh-db-sync-b4hhk\" (UID: \"eaf64798-e554-4c94-b5f1-2ee6a88852f4\") " pod="openstack/aodh-db-sync-b4hhk"
Jan 29 08:15:38 crc kubenswrapper[5017]: I0129 08:15:38.570361 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eaf64798-e554-4c94-b5f1-2ee6a88852f4-scripts\") pod \"aodh-db-sync-b4hhk\" (UID: \"eaf64798-e554-4c94-b5f1-2ee6a88852f4\") " pod="openstack/aodh-db-sync-b4hhk"
Jan 29 08:15:38 crc kubenswrapper[5017]: I0129 08:15:38.573672 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eaf64798-e554-4c94-b5f1-2ee6a88852f4-config-data\") pod \"aodh-db-sync-b4hhk\" (UID: \"eaf64798-e554-4c94-b5f1-2ee6a88852f4\") " pod="openstack/aodh-db-sync-b4hhk"
Jan 29 08:15:38 crc kubenswrapper[5017]: I0129 08:15:38.577671 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eaf64798-e554-4c94-b5f1-2ee6a88852f4-combined-ca-bundle\") pod \"aodh-db-sync-b4hhk\" (UID: \"eaf64798-e554-4c94-b5f1-2ee6a88852f4\") " pod="openstack/aodh-db-sync-b4hhk"
Jan 29 08:15:38 crc kubenswrapper[5017]: I0129 08:15:38.587287 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h25kg\" (UniqueName: \"kubernetes.io/projected/eaf64798-e554-4c94-b5f1-2ee6a88852f4-kube-api-access-h25kg\") pod \"aodh-db-sync-b4hhk\" (UID: \"eaf64798-e554-4c94-b5f1-2ee6a88852f4\") " pod="openstack/aodh-db-sync-b4hhk"
Need to start a new one" pod="openstack/aodh-db-sync-b4hhk" Jan 29 08:15:39 crc kubenswrapper[5017]: I0129 08:15:39.189690 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-b4hhk"] Jan 29 08:15:39 crc kubenswrapper[5017]: I0129 08:15:39.658801 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-b4hhk" event={"ID":"eaf64798-e554-4c94-b5f1-2ee6a88852f4","Type":"ContainerStarted","Data":"0b6195cf91f7e07e2621a9886e1af61b37486abe7271e85a76b9680874521299"} Jan 29 08:15:43 crc kubenswrapper[5017]: I0129 08:15:43.707586 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-b4hhk" event={"ID":"eaf64798-e554-4c94-b5f1-2ee6a88852f4","Type":"ContainerStarted","Data":"cdc4141ac3a037eb465be46f0647da13cbb1802b07676fbf6247cc3b19a4fd5b"} Jan 29 08:15:43 crc kubenswrapper[5017]: I0129 08:15:43.733644 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-b4hhk" podStartSLOduration=1.543355515 podStartE2EDuration="5.733624925s" podCreationTimestamp="2026-01-29 08:15:38 +0000 UTC" firstStartedPulling="2026-01-29 08:15:39.205068353 +0000 UTC m=+6025.579515963" lastFinishedPulling="2026-01-29 08:15:43.395337763 +0000 UTC m=+6029.769785373" observedRunningTime="2026-01-29 08:15:43.724909095 +0000 UTC m=+6030.099356705" watchObservedRunningTime="2026-01-29 08:15:43.733624925 +0000 UTC m=+6030.108072535" Jan 29 08:15:45 crc kubenswrapper[5017]: I0129 08:15:45.734357 5017 generic.go:334] "Generic (PLEG): container finished" podID="eaf64798-e554-4c94-b5f1-2ee6a88852f4" containerID="cdc4141ac3a037eb465be46f0647da13cbb1802b07676fbf6247cc3b19a4fd5b" exitCode=0 Jan 29 08:15:45 crc kubenswrapper[5017]: I0129 08:15:45.734449 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-b4hhk" event={"ID":"eaf64798-e554-4c94-b5f1-2ee6a88852f4","Type":"ContainerDied","Data":"cdc4141ac3a037eb465be46f0647da13cbb1802b07676fbf6247cc3b19a4fd5b"} Jan 29 08:15:47 crc kubenswrapper[5017]: I0129 08:15:47.177143 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-b4hhk" Jan 29 08:15:47 crc kubenswrapper[5017]: I0129 08:15:47.336259 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eaf64798-e554-4c94-b5f1-2ee6a88852f4-combined-ca-bundle\") pod \"eaf64798-e554-4c94-b5f1-2ee6a88852f4\" (UID: \"eaf64798-e554-4c94-b5f1-2ee6a88852f4\") " Jan 29 08:15:47 crc kubenswrapper[5017]: I0129 08:15:47.336579 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eaf64798-e554-4c94-b5f1-2ee6a88852f4-scripts\") pod \"eaf64798-e554-4c94-b5f1-2ee6a88852f4\" (UID: \"eaf64798-e554-4c94-b5f1-2ee6a88852f4\") " Jan 29 08:15:47 crc kubenswrapper[5017]: I0129 08:15:47.336772 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h25kg\" (UniqueName: \"kubernetes.io/projected/eaf64798-e554-4c94-b5f1-2ee6a88852f4-kube-api-access-h25kg\") pod \"eaf64798-e554-4c94-b5f1-2ee6a88852f4\" (UID: \"eaf64798-e554-4c94-b5f1-2ee6a88852f4\") " Jan 29 08:15:47 crc kubenswrapper[5017]: I0129 08:15:47.336799 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eaf64798-e554-4c94-b5f1-2ee6a88852f4-config-data\") pod \"eaf64798-e554-4c94-b5f1-2ee6a88852f4\" (UID: \"eaf64798-e554-4c94-b5f1-2ee6a88852f4\") " Jan 29 08:15:47 crc kubenswrapper[5017]: I0129 08:15:47.342631 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eaf64798-e554-4c94-b5f1-2ee6a88852f4-kube-api-access-h25kg" (OuterVolumeSpecName: "kube-api-access-h25kg") pod "eaf64798-e554-4c94-b5f1-2ee6a88852f4" (UID: "eaf64798-e554-4c94-b5f1-2ee6a88852f4"). InnerVolumeSpecName "kube-api-access-h25kg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:15:47 crc kubenswrapper[5017]: I0129 08:15:47.343482 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eaf64798-e554-4c94-b5f1-2ee6a88852f4-scripts" (OuterVolumeSpecName: "scripts") pod "eaf64798-e554-4c94-b5f1-2ee6a88852f4" (UID: "eaf64798-e554-4c94-b5f1-2ee6a88852f4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:15:47 crc kubenswrapper[5017]: I0129 08:15:47.365017 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eaf64798-e554-4c94-b5f1-2ee6a88852f4-config-data" (OuterVolumeSpecName: "config-data") pod "eaf64798-e554-4c94-b5f1-2ee6a88852f4" (UID: "eaf64798-e554-4c94-b5f1-2ee6a88852f4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:15:47 crc kubenswrapper[5017]: I0129 08:15:47.383121 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eaf64798-e554-4c94-b5f1-2ee6a88852f4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eaf64798-e554-4c94-b5f1-2ee6a88852f4" (UID: "eaf64798-e554-4c94-b5f1-2ee6a88852f4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:15:47 crc kubenswrapper[5017]: I0129 08:15:47.441556 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h25kg\" (UniqueName: \"kubernetes.io/projected/eaf64798-e554-4c94-b5f1-2ee6a88852f4-kube-api-access-h25kg\") on node \"crc\" DevicePath \"\"" Jan 29 08:15:47 crc kubenswrapper[5017]: I0129 08:15:47.441597 5017 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eaf64798-e554-4c94-b5f1-2ee6a88852f4-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 08:15:47 crc kubenswrapper[5017]: I0129 08:15:47.441609 5017 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eaf64798-e554-4c94-b5f1-2ee6a88852f4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:15:47 crc kubenswrapper[5017]: I0129 08:15:47.441623 5017 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eaf64798-e554-4c94-b5f1-2ee6a88852f4-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 08:15:47 crc kubenswrapper[5017]: I0129 08:15:47.761586 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-b4hhk" event={"ID":"eaf64798-e554-4c94-b5f1-2ee6a88852f4","Type":"ContainerDied","Data":"0b6195cf91f7e07e2621a9886e1af61b37486abe7271e85a76b9680874521299"} Jan 29 08:15:47 crc kubenswrapper[5017]: I0129 08:15:47.761638 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0b6195cf91f7e07e2621a9886e1af61b37486abe7271e85a76b9680874521299" Jan 29 08:15:47 crc kubenswrapper[5017]: I0129 08:15:47.761649 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-b4hhk" Jan 29 08:15:50 crc kubenswrapper[5017]: I0129 08:15:50.922976 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 29 08:15:52 crc kubenswrapper[5017]: I0129 08:15:52.751838 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Jan 29 08:15:52 crc kubenswrapper[5017]: E0129 08:15:52.753355 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eaf64798-e554-4c94-b5f1-2ee6a88852f4" containerName="aodh-db-sync" Jan 29 08:15:52 crc kubenswrapper[5017]: I0129 08:15:52.753372 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="eaf64798-e554-4c94-b5f1-2ee6a88852f4" containerName="aodh-db-sync" Jan 29 08:15:52 crc kubenswrapper[5017]: I0129 08:15:52.753596 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="eaf64798-e554-4c94-b5f1-2ee6a88852f4" containerName="aodh-db-sync" Jan 29 08:15:52 crc kubenswrapper[5017]: I0129 08:15:52.756269 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Jan 29 08:15:52 crc kubenswrapper[5017]: I0129 08:15:52.758484 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Jan 29 08:15:52 crc kubenswrapper[5017]: I0129 08:15:52.758870 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Jan 29 08:15:52 crc kubenswrapper[5017]: I0129 08:15:52.759658 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-rv242" Jan 29 08:15:52 crc kubenswrapper[5017]: I0129 08:15:52.768878 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Jan 29 08:15:52 crc kubenswrapper[5017]: I0129 08:15:52.772480 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d689c40e-dc8c-4868-846f-5327f7e755a7-config-data\") pod \"aodh-0\" (UID: \"d689c40e-dc8c-4868-846f-5327f7e755a7\") " pod="openstack/aodh-0" Jan 29 08:15:52 crc kubenswrapper[5017]: I0129 08:15:52.772562 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d689c40e-dc8c-4868-846f-5327f7e755a7-scripts\") pod \"aodh-0\" (UID: \"d689c40e-dc8c-4868-846f-5327f7e755a7\") " pod="openstack/aodh-0" Jan 29 08:15:52 crc kubenswrapper[5017]: I0129 08:15:52.772775 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hdw5\" (UniqueName: \"kubernetes.io/projected/d689c40e-dc8c-4868-846f-5327f7e755a7-kube-api-access-7hdw5\") pod \"aodh-0\" (UID: \"d689c40e-dc8c-4868-846f-5327f7e755a7\") " pod="openstack/aodh-0" Jan 29 08:15:52 crc kubenswrapper[5017]: I0129 08:15:52.772992 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d689c40e-dc8c-4868-846f-5327f7e755a7-combined-ca-bundle\") pod \"aodh-0\" (UID: \"d689c40e-dc8c-4868-846f-5327f7e755a7\") " pod="openstack/aodh-0" Jan 29 08:15:52 crc kubenswrapper[5017]: I0129 08:15:52.875661 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d689c40e-dc8c-4868-846f-5327f7e755a7-config-data\") pod \"aodh-0\" (UID: \"d689c40e-dc8c-4868-846f-5327f7e755a7\") " pod="openstack/aodh-0" Jan 29 08:15:52 crc kubenswrapper[5017]: I0129 08:15:52.875741 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d689c40e-dc8c-4868-846f-5327f7e755a7-scripts\") pod \"aodh-0\" (UID: \"d689c40e-dc8c-4868-846f-5327f7e755a7\") " pod="openstack/aodh-0" Jan 29 08:15:52 crc kubenswrapper[5017]: I0129 08:15:52.875816 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hdw5\" (UniqueName: \"kubernetes.io/projected/d689c40e-dc8c-4868-846f-5327f7e755a7-kube-api-access-7hdw5\") pod \"aodh-0\" (UID: \"d689c40e-dc8c-4868-846f-5327f7e755a7\") " pod="openstack/aodh-0" Jan 29 08:15:52 crc kubenswrapper[5017]: I0129 08:15:52.875930 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d689c40e-dc8c-4868-846f-5327f7e755a7-combined-ca-bundle\") pod \"aodh-0\" (UID: \"d689c40e-dc8c-4868-846f-5327f7e755a7\") " pod="openstack/aodh-0" Jan 29 08:15:52 crc kubenswrapper[5017]: 
Jan 29 08:15:52 crc kubenswrapper[5017]: I0129 08:15:52.887656 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d689c40e-dc8c-4868-846f-5327f7e755a7-combined-ca-bundle\") pod \"aodh-0\" (UID: \"d689c40e-dc8c-4868-846f-5327f7e755a7\") " pod="openstack/aodh-0"
Jan 29 08:15:52 crc kubenswrapper[5017]: I0129 08:15:52.887652 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d689c40e-dc8c-4868-846f-5327f7e755a7-scripts\") pod \"aodh-0\" (UID: \"d689c40e-dc8c-4868-846f-5327f7e755a7\") " pod="openstack/aodh-0"
Jan 29 08:15:52 crc kubenswrapper[5017]: I0129 08:15:52.887771 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d689c40e-dc8c-4868-846f-5327f7e755a7-config-data\") pod \"aodh-0\" (UID: \"d689c40e-dc8c-4868-846f-5327f7e755a7\") " pod="openstack/aodh-0"
Jan 29 08:15:52 crc kubenswrapper[5017]: I0129 08:15:52.898033 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hdw5\" (UniqueName: \"kubernetes.io/projected/d689c40e-dc8c-4868-846f-5327f7e755a7-kube-api-access-7hdw5\") pod \"aodh-0\" (UID: \"d689c40e-dc8c-4868-846f-5327f7e755a7\") " pod="openstack/aodh-0"
Jan 29 08:15:53 crc kubenswrapper[5017]: I0129 08:15:53.079934 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0"
Jan 29 08:15:53 crc kubenswrapper[5017]: I0129 08:15:53.598385 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"]
Jan 29 08:15:53 crc kubenswrapper[5017]: I0129 08:15:53.831498 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"d689c40e-dc8c-4868-846f-5327f7e755a7","Type":"ContainerStarted","Data":"fbaa41221a6b23c55c47011e4c064147b28ab8266620b3954a577a829c9a7d71"}
Jan 29 08:15:54 crc kubenswrapper[5017]: I0129 08:15:54.846763 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"d689c40e-dc8c-4868-846f-5327f7e755a7","Type":"ContainerStarted","Data":"f009e29d09e790b1d1a88b3893b30363df76e835a397de69102f18bc6c1104d0"}
Jan 29 08:15:55 crc kubenswrapper[5017]: I0129 08:15:55.081821 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 29 08:15:55 crc kubenswrapper[5017]: I0129 08:15:55.082199 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="20db9b5e-d8e6-4f73-ae02-fb92246d5557" containerName="ceilometer-central-agent" containerID="cri-o://b82cb67be6bec889160fa2b8129bd6a9c417365d8550b868bcdb1f6db78d3db3" gracePeriod=30
Jan 29 08:15:55 crc kubenswrapper[5017]: I0129 08:15:55.082824 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="20db9b5e-d8e6-4f73-ae02-fb92246d5557" containerName="proxy-httpd" containerID="cri-o://88fbcdd0d4fd31e6084148cf570fdd70e443bfb3efea72fb5dca6a9aed489536" gracePeriod=30
Jan 29 08:15:55 crc kubenswrapper[5017]: I0129 08:15:55.082882 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="20db9b5e-d8e6-4f73-ae02-fb92246d5557" containerName="sg-core" containerID="cri-o://4907f0f78605d4f47a3efae37c87a8d3a93cda17cb3859f31e0efc763111f4c5" gracePeriod=30
podUID="20db9b5e-d8e6-4f73-ae02-fb92246d5557" containerName="ceilometer-notification-agent" containerID="cri-o://d2234a93bc9e780c2bac7459ef93fbb138dd0808cdbc1b86bc6edbd054d5805e" gracePeriod=30 Jan 29 08:15:55 crc kubenswrapper[5017]: I0129 08:15:55.862671 5017 generic.go:334] "Generic (PLEG): container finished" podID="20db9b5e-d8e6-4f73-ae02-fb92246d5557" containerID="88fbcdd0d4fd31e6084148cf570fdd70e443bfb3efea72fb5dca6a9aed489536" exitCode=0 Jan 29 08:15:55 crc kubenswrapper[5017]: I0129 08:15:55.863138 5017 generic.go:334] "Generic (PLEG): container finished" podID="20db9b5e-d8e6-4f73-ae02-fb92246d5557" containerID="4907f0f78605d4f47a3efae37c87a8d3a93cda17cb3859f31e0efc763111f4c5" exitCode=2 Jan 29 08:15:55 crc kubenswrapper[5017]: I0129 08:15:55.862759 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20db9b5e-d8e6-4f73-ae02-fb92246d5557","Type":"ContainerDied","Data":"88fbcdd0d4fd31e6084148cf570fdd70e443bfb3efea72fb5dca6a9aed489536"} Jan 29 08:15:55 crc kubenswrapper[5017]: I0129 08:15:55.863208 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20db9b5e-d8e6-4f73-ae02-fb92246d5557","Type":"ContainerDied","Data":"4907f0f78605d4f47a3efae37c87a8d3a93cda17cb3859f31e0efc763111f4c5"} Jan 29 08:15:55 crc kubenswrapper[5017]: I0129 08:15:55.863228 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20db9b5e-d8e6-4f73-ae02-fb92246d5557","Type":"ContainerDied","Data":"b82cb67be6bec889160fa2b8129bd6a9c417365d8550b868bcdb1f6db78d3db3"} Jan 29 08:15:55 crc kubenswrapper[5017]: I0129 08:15:55.863153 5017 generic.go:334] "Generic (PLEG): container finished" podID="20db9b5e-d8e6-4f73-ae02-fb92246d5557" containerID="b82cb67be6bec889160fa2b8129bd6a9c417365d8550b868bcdb1f6db78d3db3" exitCode=0 Jan 29 08:15:56 crc kubenswrapper[5017]: I0129 08:15:56.886408 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"d689c40e-dc8c-4868-846f-5327f7e755a7","Type":"ContainerStarted","Data":"33fbd8a0095ad9175a519ecbeef3205a6937f81bf0d44d5295be9991781ceb46"} Jan 29 08:15:57 crc kubenswrapper[5017]: I0129 08:15:57.908906 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"d689c40e-dc8c-4868-846f-5327f7e755a7","Type":"ContainerStarted","Data":"f3cf7312ad3595c0a68c9bf1e5cab128edf10d245276e80da6cd282fa0cac4b6"} Jan 29 08:15:59 crc kubenswrapper[5017]: I0129 08:15:59.933237 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"d689c40e-dc8c-4868-846f-5327f7e755a7","Type":"ContainerStarted","Data":"89fa7eff41b523b8b2fcd7d2cc448236fdb281a86e4b221e821bc48cb828fed2"} Jan 29 08:15:59 crc kubenswrapper[5017]: I0129 08:15:59.967984 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.698265863 podStartE2EDuration="7.967937269s" podCreationTimestamp="2026-01-29 08:15:52 +0000 UTC" firstStartedPulling="2026-01-29 08:15:53.610016063 +0000 UTC m=+6039.984463673" lastFinishedPulling="2026-01-29 08:15:58.879687469 +0000 UTC m=+6045.254135079" observedRunningTime="2026-01-29 08:15:59.958272886 +0000 UTC m=+6046.332720516" watchObservedRunningTime="2026-01-29 08:15:59.967937269 +0000 UTC m=+6046.342384889" Jan 29 08:16:00 crc kubenswrapper[5017]: I0129 08:16:00.956043 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 08:16:00 crc kubenswrapper[5017]: I0129 08:16:00.957488 5017 generic.go:334] "Generic (PLEG): container finished" podID="20db9b5e-d8e6-4f73-ae02-fb92246d5557" containerID="d2234a93bc9e780c2bac7459ef93fbb138dd0808cdbc1b86bc6edbd054d5805e" exitCode=0 Jan 29 08:16:00 crc kubenswrapper[5017]: I0129 08:16:00.959893 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20db9b5e-d8e6-4f73-ae02-fb92246d5557","Type":"ContainerDied","Data":"d2234a93bc9e780c2bac7459ef93fbb138dd0808cdbc1b86bc6edbd054d5805e"} Jan 29 08:16:00 crc kubenswrapper[5017]: I0129 08:16:00.959936 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20db9b5e-d8e6-4f73-ae02-fb92246d5557","Type":"ContainerDied","Data":"997c9435a75a61346742e692c9114e4b8d2f168389b64b334593f1ccfc794b2b"} Jan 29 08:16:00 crc kubenswrapper[5017]: I0129 08:16:00.959998 5017 scope.go:117] "RemoveContainer" containerID="88fbcdd0d4fd31e6084148cf570fdd70e443bfb3efea72fb5dca6a9aed489536" Jan 29 08:16:00 crc kubenswrapper[5017]: I0129 08:16:00.964061 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/20db9b5e-d8e6-4f73-ae02-fb92246d5557-sg-core-conf-yaml\") pod \"20db9b5e-d8e6-4f73-ae02-fb92246d5557\" (UID: \"20db9b5e-d8e6-4f73-ae02-fb92246d5557\") " Jan 29 08:16:00 crc kubenswrapper[5017]: I0129 08:16:00.964184 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20db9b5e-d8e6-4f73-ae02-fb92246d5557-run-httpd\") pod \"20db9b5e-d8e6-4f73-ae02-fb92246d5557\" (UID: \"20db9b5e-d8e6-4f73-ae02-fb92246d5557\") " Jan 29 08:16:00 crc kubenswrapper[5017]: I0129 08:16:00.964314 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20db9b5e-d8e6-4f73-ae02-fb92246d5557-scripts\") pod \"20db9b5e-d8e6-4f73-ae02-fb92246d5557\" (UID: \"20db9b5e-d8e6-4f73-ae02-fb92246d5557\") " Jan 29 08:16:00 crc kubenswrapper[5017]: I0129 08:16:00.964372 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20db9b5e-d8e6-4f73-ae02-fb92246d5557-combined-ca-bundle\") pod \"20db9b5e-d8e6-4f73-ae02-fb92246d5557\" (UID: \"20db9b5e-d8e6-4f73-ae02-fb92246d5557\") " Jan 29 08:16:00 crc kubenswrapper[5017]: I0129 08:16:00.964420 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20db9b5e-d8e6-4f73-ae02-fb92246d5557-log-httpd\") pod \"20db9b5e-d8e6-4f73-ae02-fb92246d5557\" (UID: \"20db9b5e-d8e6-4f73-ae02-fb92246d5557\") " Jan 29 08:16:00 crc kubenswrapper[5017]: I0129 08:16:00.964473 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2rb6x\" (UniqueName: \"kubernetes.io/projected/20db9b5e-d8e6-4f73-ae02-fb92246d5557-kube-api-access-2rb6x\") pod \"20db9b5e-d8e6-4f73-ae02-fb92246d5557\" (UID: \"20db9b5e-d8e6-4f73-ae02-fb92246d5557\") " Jan 29 08:16:00 crc kubenswrapper[5017]: I0129 08:16:00.964515 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20db9b5e-d8e6-4f73-ae02-fb92246d5557-config-data\") pod \"20db9b5e-d8e6-4f73-ae02-fb92246d5557\" (UID: \"20db9b5e-d8e6-4f73-ae02-fb92246d5557\") " Jan 29 08:16:00 crc 
Jan 29 08:16:00 crc kubenswrapper[5017]: I0129 08:16:00.966303 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20db9b5e-d8e6-4f73-ae02-fb92246d5557-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "20db9b5e-d8e6-4f73-ae02-fb92246d5557" (UID: "20db9b5e-d8e6-4f73-ae02-fb92246d5557"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 08:16:00 crc kubenswrapper[5017]: I0129 08:16:00.966780 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20db9b5e-d8e6-4f73-ae02-fb92246d5557-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "20db9b5e-d8e6-4f73-ae02-fb92246d5557" (UID: "20db9b5e-d8e6-4f73-ae02-fb92246d5557"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 08:16:01 crc kubenswrapper[5017]: I0129 08:16:01.011391 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20db9b5e-d8e6-4f73-ae02-fb92246d5557-scripts" (OuterVolumeSpecName: "scripts") pod "20db9b5e-d8e6-4f73-ae02-fb92246d5557" (UID: "20db9b5e-d8e6-4f73-ae02-fb92246d5557"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 08:16:01 crc kubenswrapper[5017]: I0129 08:16:01.033742 5017 scope.go:117] "RemoveContainer" containerID="4907f0f78605d4f47a3efae37c87a8d3a93cda17cb3859f31e0efc763111f4c5"
Jan 29 08:16:01 crc kubenswrapper[5017]: I0129 08:16:01.037310 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20db9b5e-d8e6-4f73-ae02-fb92246d5557-kube-api-access-2rb6x" (OuterVolumeSpecName: "kube-api-access-2rb6x") pod "20db9b5e-d8e6-4f73-ae02-fb92246d5557" (UID: "20db9b5e-d8e6-4f73-ae02-fb92246d5557"). InnerVolumeSpecName "kube-api-access-2rb6x". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:16:01 crc kubenswrapper[5017]: I0129 08:16:01.066946 5017 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20db9b5e-d8e6-4f73-ae02-fb92246d5557-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 08:16:01 crc kubenswrapper[5017]: I0129 08:16:01.066997 5017 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20db9b5e-d8e6-4f73-ae02-fb92246d5557-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 08:16:01 crc kubenswrapper[5017]: I0129 08:16:01.067009 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2rb6x\" (UniqueName: \"kubernetes.io/projected/20db9b5e-d8e6-4f73-ae02-fb92246d5557-kube-api-access-2rb6x\") on node \"crc\" DevicePath \"\"" Jan 29 08:16:01 crc kubenswrapper[5017]: I0129 08:16:01.067019 5017 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/20db9b5e-d8e6-4f73-ae02-fb92246d5557-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 29 08:16:01 crc kubenswrapper[5017]: I0129 08:16:01.067027 5017 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20db9b5e-d8e6-4f73-ae02-fb92246d5557-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 08:16:01 crc kubenswrapper[5017]: I0129 08:16:01.145192 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20db9b5e-d8e6-4f73-ae02-fb92246d5557-config-data" (OuterVolumeSpecName: "config-data") pod "20db9b5e-d8e6-4f73-ae02-fb92246d5557" (UID: "20db9b5e-d8e6-4f73-ae02-fb92246d5557"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:16:01 crc kubenswrapper[5017]: I0129 08:16:01.146223 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20db9b5e-d8e6-4f73-ae02-fb92246d5557-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "20db9b5e-d8e6-4f73-ae02-fb92246d5557" (UID: "20db9b5e-d8e6-4f73-ae02-fb92246d5557"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:16:01 crc kubenswrapper[5017]: I0129 08:16:01.155904 5017 scope.go:117] "RemoveContainer" containerID="d2234a93bc9e780c2bac7459ef93fbb138dd0808cdbc1b86bc6edbd054d5805e" Jan 29 08:16:01 crc kubenswrapper[5017]: I0129 08:16:01.168535 5017 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20db9b5e-d8e6-4f73-ae02-fb92246d5557-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 08:16:01 crc kubenswrapper[5017]: I0129 08:16:01.168575 5017 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20db9b5e-d8e6-4f73-ae02-fb92246d5557-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:16:01 crc kubenswrapper[5017]: I0129 08:16:01.190452 5017 scope.go:117] "RemoveContainer" containerID="b82cb67be6bec889160fa2b8129bd6a9c417365d8550b868bcdb1f6db78d3db3" Jan 29 08:16:01 crc kubenswrapper[5017]: I0129 08:16:01.212426 5017 scope.go:117] "RemoveContainer" containerID="88fbcdd0d4fd31e6084148cf570fdd70e443bfb3efea72fb5dca6a9aed489536" Jan 29 08:16:01 crc kubenswrapper[5017]: E0129 08:16:01.212911 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88fbcdd0d4fd31e6084148cf570fdd70e443bfb3efea72fb5dca6a9aed489536\": container with ID starting with 88fbcdd0d4fd31e6084148cf570fdd70e443bfb3efea72fb5dca6a9aed489536 not found: ID does not exist" containerID="88fbcdd0d4fd31e6084148cf570fdd70e443bfb3efea72fb5dca6a9aed489536" Jan 29 08:16:01 crc kubenswrapper[5017]: I0129 08:16:01.212951 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88fbcdd0d4fd31e6084148cf570fdd70e443bfb3efea72fb5dca6a9aed489536"} err="failed to get container status \"88fbcdd0d4fd31e6084148cf570fdd70e443bfb3efea72fb5dca6a9aed489536\": rpc error: code = NotFound desc = could not find container \"88fbcdd0d4fd31e6084148cf570fdd70e443bfb3efea72fb5dca6a9aed489536\": container with ID starting with 88fbcdd0d4fd31e6084148cf570fdd70e443bfb3efea72fb5dca6a9aed489536 not found: ID does not exist" Jan 29 08:16:01 crc kubenswrapper[5017]: I0129 08:16:01.212987 5017 scope.go:117] "RemoveContainer" containerID="4907f0f78605d4f47a3efae37c87a8d3a93cda17cb3859f31e0efc763111f4c5" Jan 29 08:16:01 crc kubenswrapper[5017]: E0129 08:16:01.213418 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4907f0f78605d4f47a3efae37c87a8d3a93cda17cb3859f31e0efc763111f4c5\": container with ID starting with 4907f0f78605d4f47a3efae37c87a8d3a93cda17cb3859f31e0efc763111f4c5 not found: ID does not exist" containerID="4907f0f78605d4f47a3efae37c87a8d3a93cda17cb3859f31e0efc763111f4c5" Jan 29 08:16:01 crc kubenswrapper[5017]: I0129 08:16:01.213463 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4907f0f78605d4f47a3efae37c87a8d3a93cda17cb3859f31e0efc763111f4c5"} err="failed to get container status \"4907f0f78605d4f47a3efae37c87a8d3a93cda17cb3859f31e0efc763111f4c5\": rpc error: code = NotFound desc = could not find container \"4907f0f78605d4f47a3efae37c87a8d3a93cda17cb3859f31e0efc763111f4c5\": container with ID starting with 4907f0f78605d4f47a3efae37c87a8d3a93cda17cb3859f31e0efc763111f4c5 not found: ID does not exist" Jan 29 08:16:01 crc kubenswrapper[5017]: I0129 08:16:01.213479 5017 scope.go:117] "RemoveContainer" 
containerID="d2234a93bc9e780c2bac7459ef93fbb138dd0808cdbc1b86bc6edbd054d5805e" Jan 29 08:16:01 crc kubenswrapper[5017]: E0129 08:16:01.213999 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d2234a93bc9e780c2bac7459ef93fbb138dd0808cdbc1b86bc6edbd054d5805e\": container with ID starting with d2234a93bc9e780c2bac7459ef93fbb138dd0808cdbc1b86bc6edbd054d5805e not found: ID does not exist" containerID="d2234a93bc9e780c2bac7459ef93fbb138dd0808cdbc1b86bc6edbd054d5805e" Jan 29 08:16:01 crc kubenswrapper[5017]: I0129 08:16:01.214026 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2234a93bc9e780c2bac7459ef93fbb138dd0808cdbc1b86bc6edbd054d5805e"} err="failed to get container status \"d2234a93bc9e780c2bac7459ef93fbb138dd0808cdbc1b86bc6edbd054d5805e\": rpc error: code = NotFound desc = could not find container \"d2234a93bc9e780c2bac7459ef93fbb138dd0808cdbc1b86bc6edbd054d5805e\": container with ID starting with d2234a93bc9e780c2bac7459ef93fbb138dd0808cdbc1b86bc6edbd054d5805e not found: ID does not exist" Jan 29 08:16:01 crc kubenswrapper[5017]: I0129 08:16:01.214075 5017 scope.go:117] "RemoveContainer" containerID="b82cb67be6bec889160fa2b8129bd6a9c417365d8550b868bcdb1f6db78d3db3" Jan 29 08:16:01 crc kubenswrapper[5017]: E0129 08:16:01.214398 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b82cb67be6bec889160fa2b8129bd6a9c417365d8550b868bcdb1f6db78d3db3\": container with ID starting with b82cb67be6bec889160fa2b8129bd6a9c417365d8550b868bcdb1f6db78d3db3 not found: ID does not exist" containerID="b82cb67be6bec889160fa2b8129bd6a9c417365d8550b868bcdb1f6db78d3db3" Jan 29 08:16:01 crc kubenswrapper[5017]: I0129 08:16:01.214442 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b82cb67be6bec889160fa2b8129bd6a9c417365d8550b868bcdb1f6db78d3db3"} err="failed to get container status \"b82cb67be6bec889160fa2b8129bd6a9c417365d8550b868bcdb1f6db78d3db3\": rpc error: code = NotFound desc = could not find container \"b82cb67be6bec889160fa2b8129bd6a9c417365d8550b868bcdb1f6db78d3db3\": container with ID starting with b82cb67be6bec889160fa2b8129bd6a9c417365d8550b868bcdb1f6db78d3db3 not found: ID does not exist" Jan 29 08:16:01 crc kubenswrapper[5017]: I0129 08:16:01.969213 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 08:16:02 crc kubenswrapper[5017]: I0129 08:16:02.038744 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 08:16:02 crc kubenswrapper[5017]: I0129 08:16:02.078306 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 29 08:16:02 crc kubenswrapper[5017]: I0129 08:16:02.135050 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 29 08:16:02 crc kubenswrapper[5017]: E0129 08:16:02.135649 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20db9b5e-d8e6-4f73-ae02-fb92246d5557" containerName="sg-core" Jan 29 08:16:02 crc kubenswrapper[5017]: I0129 08:16:02.135674 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="20db9b5e-d8e6-4f73-ae02-fb92246d5557" containerName="sg-core" Jan 29 08:16:02 crc kubenswrapper[5017]: E0129 08:16:02.135707 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20db9b5e-d8e6-4f73-ae02-fb92246d5557" containerName="proxy-httpd" Jan 29 08:16:02 crc kubenswrapper[5017]: I0129 08:16:02.135717 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="20db9b5e-d8e6-4f73-ae02-fb92246d5557" containerName="proxy-httpd" Jan 29 08:16:02 crc kubenswrapper[5017]: E0129 08:16:02.135740 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20db9b5e-d8e6-4f73-ae02-fb92246d5557" containerName="ceilometer-notification-agent" Jan 29 08:16:02 crc kubenswrapper[5017]: I0129 08:16:02.135749 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="20db9b5e-d8e6-4f73-ae02-fb92246d5557" containerName="ceilometer-notification-agent" Jan 29 08:16:02 crc kubenswrapper[5017]: E0129 08:16:02.135770 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20db9b5e-d8e6-4f73-ae02-fb92246d5557" containerName="ceilometer-central-agent" Jan 29 08:16:02 crc kubenswrapper[5017]: I0129 08:16:02.135778 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="20db9b5e-d8e6-4f73-ae02-fb92246d5557" containerName="ceilometer-central-agent" Jan 29 08:16:02 crc kubenswrapper[5017]: I0129 08:16:02.136065 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="20db9b5e-d8e6-4f73-ae02-fb92246d5557" containerName="ceilometer-notification-agent" Jan 29 08:16:02 crc kubenswrapper[5017]: I0129 08:16:02.136109 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="20db9b5e-d8e6-4f73-ae02-fb92246d5557" containerName="sg-core" Jan 29 08:16:02 crc kubenswrapper[5017]: I0129 08:16:02.136124 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="20db9b5e-d8e6-4f73-ae02-fb92246d5557" containerName="proxy-httpd" Jan 29 08:16:02 crc kubenswrapper[5017]: I0129 08:16:02.136148 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="20db9b5e-d8e6-4f73-ae02-fb92246d5557" containerName="ceilometer-central-agent" Jan 29 08:16:02 crc kubenswrapper[5017]: I0129 08:16:02.138350 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 08:16:02 crc kubenswrapper[5017]: I0129 08:16:02.148605 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 29 08:16:02 crc kubenswrapper[5017]: I0129 08:16:02.149035 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 29 08:16:02 crc kubenswrapper[5017]: I0129 08:16:02.151221 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 08:16:02 crc kubenswrapper[5017]: I0129 08:16:02.215627 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/07f0862a-a973-4343-a72a-ebe0f68b68ef-run-httpd\") pod \"ceilometer-0\" (UID: \"07f0862a-a973-4343-a72a-ebe0f68b68ef\") " pod="openstack/ceilometer-0" Jan 29 08:16:02 crc kubenswrapper[5017]: I0129 08:16:02.215684 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07f0862a-a973-4343-a72a-ebe0f68b68ef-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"07f0862a-a973-4343-a72a-ebe0f68b68ef\") " pod="openstack/ceilometer-0" Jan 29 08:16:02 crc kubenswrapper[5017]: I0129 08:16:02.215725 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07f0862a-a973-4343-a72a-ebe0f68b68ef-scripts\") pod \"ceilometer-0\" (UID: \"07f0862a-a973-4343-a72a-ebe0f68b68ef\") " pod="openstack/ceilometer-0" Jan 29 08:16:02 crc kubenswrapper[5017]: I0129 08:16:02.215759 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/07f0862a-a973-4343-a72a-ebe0f68b68ef-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"07f0862a-a973-4343-a72a-ebe0f68b68ef\") " pod="openstack/ceilometer-0" Jan 29 08:16:02 crc kubenswrapper[5017]: I0129 08:16:02.215778 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/07f0862a-a973-4343-a72a-ebe0f68b68ef-log-httpd\") pod \"ceilometer-0\" (UID: \"07f0862a-a973-4343-a72a-ebe0f68b68ef\") " pod="openstack/ceilometer-0" Jan 29 08:16:02 crc kubenswrapper[5017]: I0129 08:16:02.215821 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07f0862a-a973-4343-a72a-ebe0f68b68ef-config-data\") pod \"ceilometer-0\" (UID: \"07f0862a-a973-4343-a72a-ebe0f68b68ef\") " pod="openstack/ceilometer-0" Jan 29 08:16:02 crc kubenswrapper[5017]: I0129 08:16:02.215896 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncvlg\" (UniqueName: \"kubernetes.io/projected/07f0862a-a973-4343-a72a-ebe0f68b68ef-kube-api-access-ncvlg\") pod \"ceilometer-0\" (UID: \"07f0862a-a973-4343-a72a-ebe0f68b68ef\") " pod="openstack/ceilometer-0" Jan 29 08:16:02 crc kubenswrapper[5017]: I0129 08:16:02.318304 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncvlg\" (UniqueName: \"kubernetes.io/projected/07f0862a-a973-4343-a72a-ebe0f68b68ef-kube-api-access-ncvlg\") pod \"ceilometer-0\" (UID: \"07f0862a-a973-4343-a72a-ebe0f68b68ef\") " pod="openstack/ceilometer-0" Jan 29 08:16:02 crc kubenswrapper[5017]: 
I0129 08:16:02.318358 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/07f0862a-a973-4343-a72a-ebe0f68b68ef-run-httpd\") pod \"ceilometer-0\" (UID: \"07f0862a-a973-4343-a72a-ebe0f68b68ef\") " pod="openstack/ceilometer-0" Jan 29 08:16:02 crc kubenswrapper[5017]: I0129 08:16:02.318409 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07f0862a-a973-4343-a72a-ebe0f68b68ef-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"07f0862a-a973-4343-a72a-ebe0f68b68ef\") " pod="openstack/ceilometer-0" Jan 29 08:16:02 crc kubenswrapper[5017]: I0129 08:16:02.318484 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07f0862a-a973-4343-a72a-ebe0f68b68ef-scripts\") pod \"ceilometer-0\" (UID: \"07f0862a-a973-4343-a72a-ebe0f68b68ef\") " pod="openstack/ceilometer-0" Jan 29 08:16:02 crc kubenswrapper[5017]: I0129 08:16:02.318541 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/07f0862a-a973-4343-a72a-ebe0f68b68ef-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"07f0862a-a973-4343-a72a-ebe0f68b68ef\") " pod="openstack/ceilometer-0" Jan 29 08:16:02 crc kubenswrapper[5017]: I0129 08:16:02.318575 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/07f0862a-a973-4343-a72a-ebe0f68b68ef-log-httpd\") pod \"ceilometer-0\" (UID: \"07f0862a-a973-4343-a72a-ebe0f68b68ef\") " pod="openstack/ceilometer-0" Jan 29 08:16:02 crc kubenswrapper[5017]: I0129 08:16:02.318672 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07f0862a-a973-4343-a72a-ebe0f68b68ef-config-data\") pod \"ceilometer-0\" (UID: \"07f0862a-a973-4343-a72a-ebe0f68b68ef\") " pod="openstack/ceilometer-0" Jan 29 08:16:02 crc kubenswrapper[5017]: I0129 08:16:02.319594 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/07f0862a-a973-4343-a72a-ebe0f68b68ef-run-httpd\") pod \"ceilometer-0\" (UID: \"07f0862a-a973-4343-a72a-ebe0f68b68ef\") " pod="openstack/ceilometer-0" Jan 29 08:16:02 crc kubenswrapper[5017]: I0129 08:16:02.319808 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/07f0862a-a973-4343-a72a-ebe0f68b68ef-log-httpd\") pod \"ceilometer-0\" (UID: \"07f0862a-a973-4343-a72a-ebe0f68b68ef\") " pod="openstack/ceilometer-0" Jan 29 08:16:02 crc kubenswrapper[5017]: I0129 08:16:02.325117 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07f0862a-a973-4343-a72a-ebe0f68b68ef-scripts\") pod \"ceilometer-0\" (UID: \"07f0862a-a973-4343-a72a-ebe0f68b68ef\") " pod="openstack/ceilometer-0" Jan 29 08:16:02 crc kubenswrapper[5017]: I0129 08:16:02.326232 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07f0862a-a973-4343-a72a-ebe0f68b68ef-config-data\") pod \"ceilometer-0\" (UID: \"07f0862a-a973-4343-a72a-ebe0f68b68ef\") " pod="openstack/ceilometer-0" Jan 29 08:16:02 crc kubenswrapper[5017]: I0129 08:16:02.327320 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/07f0862a-a973-4343-a72a-ebe0f68b68ef-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"07f0862a-a973-4343-a72a-ebe0f68b68ef\") " pod="openstack/ceilometer-0" Jan 29 08:16:02 crc kubenswrapper[5017]: I0129 08:16:02.333174 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20db9b5e-d8e6-4f73-ae02-fb92246d5557" path="/var/lib/kubelet/pods/20db9b5e-d8e6-4f73-ae02-fb92246d5557/volumes" Jan 29 08:16:02 crc kubenswrapper[5017]: I0129 08:16:02.334383 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07f0862a-a973-4343-a72a-ebe0f68b68ef-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"07f0862a-a973-4343-a72a-ebe0f68b68ef\") " pod="openstack/ceilometer-0" Jan 29 08:16:02 crc kubenswrapper[5017]: I0129 08:16:02.339924 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ncvlg\" (UniqueName: \"kubernetes.io/projected/07f0862a-a973-4343-a72a-ebe0f68b68ef-kube-api-access-ncvlg\") pod \"ceilometer-0\" (UID: \"07f0862a-a973-4343-a72a-ebe0f68b68ef\") " pod="openstack/ceilometer-0" Jan 29 08:16:02 crc kubenswrapper[5017]: I0129 08:16:02.467482 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 08:16:03 crc kubenswrapper[5017]: I0129 08:16:03.044585 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 08:16:03 crc kubenswrapper[5017]: W0129 08:16:03.187837 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod07f0862a_a973_4343_a72a_ebe0f68b68ef.slice/crio-697446214fc677e9404e5cfec3d0c11cd9b172028b5e28fe9ebeb7da6665fae8 WatchSource:0}: Error finding container 697446214fc677e9404e5cfec3d0c11cd9b172028b5e28fe9ebeb7da6665fae8: Status 404 returned error can't find the container with id 697446214fc677e9404e5cfec3d0c11cd9b172028b5e28fe9ebeb7da6665fae8 Jan 29 08:16:03 crc kubenswrapper[5017]: I0129 08:16:03.990492 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"07f0862a-a973-4343-a72a-ebe0f68b68ef","Type":"ContainerStarted","Data":"697446214fc677e9404e5cfec3d0c11cd9b172028b5e28fe9ebeb7da6665fae8"} Jan 29 08:16:05 crc kubenswrapper[5017]: I0129 08:16:05.003261 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"07f0862a-a973-4343-a72a-ebe0f68b68ef","Type":"ContainerStarted","Data":"5df8560d227e99c9fdfd2e024f2a65784256d65ef93de805a2b53b38a603037d"} Jan 29 08:16:05 crc kubenswrapper[5017]: I0129 08:16:05.005048 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"07f0862a-a973-4343-a72a-ebe0f68b68ef","Type":"ContainerStarted","Data":"a892477101947df504151c66842518c691071f2f85d045e4b1277e98a87d1412"} Jan 29 08:16:06 crc kubenswrapper[5017]: I0129 08:16:06.014791 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"07f0862a-a973-4343-a72a-ebe0f68b68ef","Type":"ContainerStarted","Data":"faa594ed063f8ce0d042573f797cca2dd285c819fe14a650f3a6e0b3f9c577c0"} Jan 29 08:16:06 crc kubenswrapper[5017]: I0129 08:16:06.777256 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-db-create-flh7f"] Jan 29 08:16:06 crc kubenswrapper[5017]: I0129 08:16:06.780042 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-create-flh7f" Jan 29 08:16:06 crc kubenswrapper[5017]: I0129 08:16:06.790743 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-create-flh7f"] Jan 29 08:16:06 crc kubenswrapper[5017]: I0129 08:16:06.845046 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dh2jh\" (UniqueName: \"kubernetes.io/projected/b6d4fe95-8d0c-4b4f-84ea-b2b1338f5bf2-kube-api-access-dh2jh\") pod \"manila-db-create-flh7f\" (UID: \"b6d4fe95-8d0c-4b4f-84ea-b2b1338f5bf2\") " pod="openstack/manila-db-create-flh7f" Jan 29 08:16:06 crc kubenswrapper[5017]: I0129 08:16:06.845128 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b6d4fe95-8d0c-4b4f-84ea-b2b1338f5bf2-operator-scripts\") pod \"manila-db-create-flh7f\" (UID: \"b6d4fe95-8d0c-4b4f-84ea-b2b1338f5bf2\") " pod="openstack/manila-db-create-flh7f" Jan 29 08:16:06 crc kubenswrapper[5017]: I0129 08:16:06.865930 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-424zb"] Jan 29 08:16:06 crc kubenswrapper[5017]: I0129 08:16:06.882194 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-424zb" Jan 29 08:16:06 crc kubenswrapper[5017]: I0129 08:16:06.903883 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-424zb"] Jan 29 08:16:06 crc kubenswrapper[5017]: I0129 08:16:06.920851 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-24d0-account-create-update-2tprl"] Jan 29 08:16:06 crc kubenswrapper[5017]: I0129 08:16:06.922475 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-24d0-account-create-update-2tprl" Jan 29 08:16:06 crc kubenswrapper[5017]: I0129 08:16:06.931062 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-db-secret" Jan 29 08:16:06 crc kubenswrapper[5017]: I0129 08:16:06.938356 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-24d0-account-create-update-2tprl"] Jan 29 08:16:06 crc kubenswrapper[5017]: I0129 08:16:06.947599 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dh2jh\" (UniqueName: \"kubernetes.io/projected/b6d4fe95-8d0c-4b4f-84ea-b2b1338f5bf2-kube-api-access-dh2jh\") pod \"manila-db-create-flh7f\" (UID: \"b6d4fe95-8d0c-4b4f-84ea-b2b1338f5bf2\") " pod="openstack/manila-db-create-flh7f" Jan 29 08:16:06 crc kubenswrapper[5017]: I0129 08:16:06.947653 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b6d4fe95-8d0c-4b4f-84ea-b2b1338f5bf2-operator-scripts\") pod \"manila-db-create-flh7f\" (UID: \"b6d4fe95-8d0c-4b4f-84ea-b2b1338f5bf2\") " pod="openstack/manila-db-create-flh7f" Jan 29 08:16:06 crc kubenswrapper[5017]: I0129 08:16:06.949083 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b6d4fe95-8d0c-4b4f-84ea-b2b1338f5bf2-operator-scripts\") pod \"manila-db-create-flh7f\" (UID: \"b6d4fe95-8d0c-4b4f-84ea-b2b1338f5bf2\") " pod="openstack/manila-db-create-flh7f" Jan 29 08:16:06 crc kubenswrapper[5017]: I0129 08:16:06.977661 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dh2jh\" (UniqueName: \"kubernetes.io/projected/b6d4fe95-8d0c-4b4f-84ea-b2b1338f5bf2-kube-api-access-dh2jh\") pod \"manila-db-create-flh7f\" (UID: \"b6d4fe95-8d0c-4b4f-84ea-b2b1338f5bf2\") " pod="openstack/manila-db-create-flh7f" Jan 29 08:16:07 crc kubenswrapper[5017]: I0129 08:16:07.050444 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/429ded21-89c8-40cf-b233-90403c09606f-operator-scripts\") pod \"manila-24d0-account-create-update-2tprl\" (UID: \"429ded21-89c8-40cf-b233-90403c09606f\") " pod="openstack/manila-24d0-account-create-update-2tprl" Jan 29 08:16:07 crc kubenswrapper[5017]: I0129 08:16:07.050546 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjkt2\" (UniqueName: \"kubernetes.io/projected/c0ba96a1-7a56-4351-b801-b4c7885e0445-kube-api-access-sjkt2\") pod \"redhat-operators-424zb\" (UID: \"c0ba96a1-7a56-4351-b801-b4c7885e0445\") " pod="openshift-marketplace/redhat-operators-424zb" Jan 29 08:16:07 crc kubenswrapper[5017]: I0129 08:16:07.050592 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0ba96a1-7a56-4351-b801-b4c7885e0445-catalog-content\") pod \"redhat-operators-424zb\" (UID: \"c0ba96a1-7a56-4351-b801-b4c7885e0445\") " pod="openshift-marketplace/redhat-operators-424zb" Jan 29 08:16:07 crc kubenswrapper[5017]: I0129 08:16:07.050714 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvlv2\" (UniqueName: \"kubernetes.io/projected/429ded21-89c8-40cf-b233-90403c09606f-kube-api-access-pvlv2\") pod \"manila-24d0-account-create-update-2tprl\" (UID: 
\"429ded21-89c8-40cf-b233-90403c09606f\") " pod="openstack/manila-24d0-account-create-update-2tprl" Jan 29 08:16:07 crc kubenswrapper[5017]: I0129 08:16:07.050738 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0ba96a1-7a56-4351-b801-b4c7885e0445-utilities\") pod \"redhat-operators-424zb\" (UID: \"c0ba96a1-7a56-4351-b801-b4c7885e0445\") " pod="openshift-marketplace/redhat-operators-424zb" Jan 29 08:16:07 crc kubenswrapper[5017]: I0129 08:16:07.106885 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-flh7f" Jan 29 08:16:07 crc kubenswrapper[5017]: I0129 08:16:07.152669 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0ba96a1-7a56-4351-b801-b4c7885e0445-catalog-content\") pod \"redhat-operators-424zb\" (UID: \"c0ba96a1-7a56-4351-b801-b4c7885e0445\") " pod="openshift-marketplace/redhat-operators-424zb" Jan 29 08:16:07 crc kubenswrapper[5017]: I0129 08:16:07.152916 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvlv2\" (UniqueName: \"kubernetes.io/projected/429ded21-89c8-40cf-b233-90403c09606f-kube-api-access-pvlv2\") pod \"manila-24d0-account-create-update-2tprl\" (UID: \"429ded21-89c8-40cf-b233-90403c09606f\") " pod="openstack/manila-24d0-account-create-update-2tprl" Jan 29 08:16:07 crc kubenswrapper[5017]: I0129 08:16:07.153354 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0ba96a1-7a56-4351-b801-b4c7885e0445-utilities\") pod \"redhat-operators-424zb\" (UID: \"c0ba96a1-7a56-4351-b801-b4c7885e0445\") " pod="openshift-marketplace/redhat-operators-424zb" Jan 29 08:16:07 crc kubenswrapper[5017]: I0129 08:16:07.153398 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0ba96a1-7a56-4351-b801-b4c7885e0445-catalog-content\") pod \"redhat-operators-424zb\" (UID: \"c0ba96a1-7a56-4351-b801-b4c7885e0445\") " pod="openshift-marketplace/redhat-operators-424zb" Jan 29 08:16:07 crc kubenswrapper[5017]: I0129 08:16:07.153676 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0ba96a1-7a56-4351-b801-b4c7885e0445-utilities\") pod \"redhat-operators-424zb\" (UID: \"c0ba96a1-7a56-4351-b801-b4c7885e0445\") " pod="openshift-marketplace/redhat-operators-424zb" Jan 29 08:16:07 crc kubenswrapper[5017]: I0129 08:16:07.153789 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/429ded21-89c8-40cf-b233-90403c09606f-operator-scripts\") pod \"manila-24d0-account-create-update-2tprl\" (UID: \"429ded21-89c8-40cf-b233-90403c09606f\") " pod="openstack/manila-24d0-account-create-update-2tprl" Jan 29 08:16:07 crc kubenswrapper[5017]: I0129 08:16:07.153852 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjkt2\" (UniqueName: \"kubernetes.io/projected/c0ba96a1-7a56-4351-b801-b4c7885e0445-kube-api-access-sjkt2\") pod \"redhat-operators-424zb\" (UID: \"c0ba96a1-7a56-4351-b801-b4c7885e0445\") " pod="openshift-marketplace/redhat-operators-424zb" Jan 29 08:16:07 crc kubenswrapper[5017]: I0129 08:16:07.154728 5017 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/429ded21-89c8-40cf-b233-90403c09606f-operator-scripts\") pod \"manila-24d0-account-create-update-2tprl\" (UID: \"429ded21-89c8-40cf-b233-90403c09606f\") " pod="openstack/manila-24d0-account-create-update-2tprl" Jan 29 08:16:07 crc kubenswrapper[5017]: I0129 08:16:07.180988 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvlv2\" (UniqueName: \"kubernetes.io/projected/429ded21-89c8-40cf-b233-90403c09606f-kube-api-access-pvlv2\") pod \"manila-24d0-account-create-update-2tprl\" (UID: \"429ded21-89c8-40cf-b233-90403c09606f\") " pod="openstack/manila-24d0-account-create-update-2tprl" Jan 29 08:16:07 crc kubenswrapper[5017]: I0129 08:16:07.181356 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjkt2\" (UniqueName: \"kubernetes.io/projected/c0ba96a1-7a56-4351-b801-b4c7885e0445-kube-api-access-sjkt2\") pod \"redhat-operators-424zb\" (UID: \"c0ba96a1-7a56-4351-b801-b4c7885e0445\") " pod="openshift-marketplace/redhat-operators-424zb" Jan 29 08:16:07 crc kubenswrapper[5017]: I0129 08:16:07.221443 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-424zb" Jan 29 08:16:07 crc kubenswrapper[5017]: I0129 08:16:07.263840 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-24d0-account-create-update-2tprl" Jan 29 08:16:07 crc kubenswrapper[5017]: I0129 08:16:07.927233 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-create-flh7f"] Jan 29 08:16:08 crc kubenswrapper[5017]: I0129 08:16:08.025516 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-24d0-account-create-update-2tprl"] Jan 29 08:16:08 crc kubenswrapper[5017]: W0129 08:16:08.039426 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc0ba96a1_7a56_4351_b801_b4c7885e0445.slice/crio-e8b615ce433820523ba677f2cbaedfdcba7e6260cdefdde4ecbf9394361627dd WatchSource:0}: Error finding container e8b615ce433820523ba677f2cbaedfdcba7e6260cdefdde4ecbf9394361627dd: Status 404 returned error can't find the container with id e8b615ce433820523ba677f2cbaedfdcba7e6260cdefdde4ecbf9394361627dd Jan 29 08:16:08 crc kubenswrapper[5017]: I0129 08:16:08.041348 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-424zb"] Jan 29 08:16:08 crc kubenswrapper[5017]: I0129 08:16:08.060248 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-flh7f" event={"ID":"b6d4fe95-8d0c-4b4f-84ea-b2b1338f5bf2","Type":"ContainerStarted","Data":"3bf01358ff363763e8f96d36b14285116e3c96e8773fb0d794f185c3b36c115d"} Jan 29 08:16:08 crc kubenswrapper[5017]: I0129 08:16:08.063296 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"07f0862a-a973-4343-a72a-ebe0f68b68ef","Type":"ContainerStarted","Data":"9f3c6ae6defb8d8e0150229446e0749c0ec6196d91be532573a2f2847a5c4508"} Jan 29 08:16:08 crc kubenswrapper[5017]: I0129 08:16:08.063501 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 29 08:16:08 crc kubenswrapper[5017]: I0129 08:16:08.092277 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.178630104 podStartE2EDuration="6.092256428s" 
podCreationTimestamp="2026-01-29 08:16:02 +0000 UTC" firstStartedPulling="2026-01-29 08:16:03.191231969 +0000 UTC m=+6049.565679579" lastFinishedPulling="2026-01-29 08:16:07.104858303 +0000 UTC m=+6053.479305903" observedRunningTime="2026-01-29 08:16:08.089359638 +0000 UTC m=+6054.463807248" watchObservedRunningTime="2026-01-29 08:16:08.092256428 +0000 UTC m=+6054.466704038" Jan 29 08:16:09 crc kubenswrapper[5017]: I0129 08:16:09.095195 5017 generic.go:334] "Generic (PLEG): container finished" podID="c0ba96a1-7a56-4351-b801-b4c7885e0445" containerID="8b404138aaea5eda11a9f57fcef73f53220e363f933c9f801c28032d0acd2e21" exitCode=0 Jan 29 08:16:09 crc kubenswrapper[5017]: I0129 08:16:09.096503 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-424zb" event={"ID":"c0ba96a1-7a56-4351-b801-b4c7885e0445","Type":"ContainerDied","Data":"8b404138aaea5eda11a9f57fcef73f53220e363f933c9f801c28032d0acd2e21"} Jan 29 08:16:09 crc kubenswrapper[5017]: I0129 08:16:09.096572 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-424zb" event={"ID":"c0ba96a1-7a56-4351-b801-b4c7885e0445","Type":"ContainerStarted","Data":"e8b615ce433820523ba677f2cbaedfdcba7e6260cdefdde4ecbf9394361627dd"} Jan 29 08:16:09 crc kubenswrapper[5017]: I0129 08:16:09.105854 5017 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 08:16:09 crc kubenswrapper[5017]: I0129 08:16:09.108436 5017 generic.go:334] "Generic (PLEG): container finished" podID="b6d4fe95-8d0c-4b4f-84ea-b2b1338f5bf2" containerID="c0f7ba883627784e07548bb496bc6ecfb460337d088844aa82b0d76757539040" exitCode=0 Jan 29 08:16:09 crc kubenswrapper[5017]: I0129 08:16:09.108662 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-flh7f" event={"ID":"b6d4fe95-8d0c-4b4f-84ea-b2b1338f5bf2","Type":"ContainerDied","Data":"c0f7ba883627784e07548bb496bc6ecfb460337d088844aa82b0d76757539040"} Jan 29 08:16:09 crc kubenswrapper[5017]: I0129 08:16:09.126424 5017 generic.go:334] "Generic (PLEG): container finished" podID="429ded21-89c8-40cf-b233-90403c09606f" containerID="6ff2ca958022faa8333093925ca0c47c007210b4c567512b1614211a6292cdb7" exitCode=0 Jan 29 08:16:09 crc kubenswrapper[5017]: I0129 08:16:09.127578 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-24d0-account-create-update-2tprl" event={"ID":"429ded21-89c8-40cf-b233-90403c09606f","Type":"ContainerDied","Data":"6ff2ca958022faa8333093925ca0c47c007210b4c567512b1614211a6292cdb7"} Jan 29 08:16:09 crc kubenswrapper[5017]: I0129 08:16:09.127681 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-24d0-account-create-update-2tprl" event={"ID":"429ded21-89c8-40cf-b233-90403c09606f","Type":"ContainerStarted","Data":"0a2b0609a3057da2f01e319544074c155ac858bf85bc5c6ffe217648ecb084a3"} Jan 29 08:16:10 crc kubenswrapper[5017]: I0129 08:16:10.140550 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-424zb" event={"ID":"c0ba96a1-7a56-4351-b801-b4c7885e0445","Type":"ContainerStarted","Data":"44abedb5ce56603b580ba6406d2a1098fa596affd62ca19a2ee10a937cc014d7"} Jan 29 08:16:10 crc kubenswrapper[5017]: I0129 08:16:10.514619 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-create-flh7f" Jan 29 08:16:10 crc kubenswrapper[5017]: I0129 08:16:10.572518 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dh2jh\" (UniqueName: \"kubernetes.io/projected/b6d4fe95-8d0c-4b4f-84ea-b2b1338f5bf2-kube-api-access-dh2jh\") pod \"b6d4fe95-8d0c-4b4f-84ea-b2b1338f5bf2\" (UID: \"b6d4fe95-8d0c-4b4f-84ea-b2b1338f5bf2\") " Jan 29 08:16:10 crc kubenswrapper[5017]: I0129 08:16:10.572719 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b6d4fe95-8d0c-4b4f-84ea-b2b1338f5bf2-operator-scripts\") pod \"b6d4fe95-8d0c-4b4f-84ea-b2b1338f5bf2\" (UID: \"b6d4fe95-8d0c-4b4f-84ea-b2b1338f5bf2\") " Jan 29 08:16:10 crc kubenswrapper[5017]: I0129 08:16:10.573928 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6d4fe95-8d0c-4b4f-84ea-b2b1338f5bf2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b6d4fe95-8d0c-4b4f-84ea-b2b1338f5bf2" (UID: "b6d4fe95-8d0c-4b4f-84ea-b2b1338f5bf2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:16:10 crc kubenswrapper[5017]: I0129 08:16:10.581516 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6d4fe95-8d0c-4b4f-84ea-b2b1338f5bf2-kube-api-access-dh2jh" (OuterVolumeSpecName: "kube-api-access-dh2jh") pod "b6d4fe95-8d0c-4b4f-84ea-b2b1338f5bf2" (UID: "b6d4fe95-8d0c-4b4f-84ea-b2b1338f5bf2"). InnerVolumeSpecName "kube-api-access-dh2jh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:16:10 crc kubenswrapper[5017]: I0129 08:16:10.633818 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-24d0-account-create-update-2tprl" Jan 29 08:16:10 crc kubenswrapper[5017]: I0129 08:16:10.674943 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/429ded21-89c8-40cf-b233-90403c09606f-operator-scripts\") pod \"429ded21-89c8-40cf-b233-90403c09606f\" (UID: \"429ded21-89c8-40cf-b233-90403c09606f\") " Jan 29 08:16:10 crc kubenswrapper[5017]: I0129 08:16:10.675206 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pvlv2\" (UniqueName: \"kubernetes.io/projected/429ded21-89c8-40cf-b233-90403c09606f-kube-api-access-pvlv2\") pod \"429ded21-89c8-40cf-b233-90403c09606f\" (UID: \"429ded21-89c8-40cf-b233-90403c09606f\") " Jan 29 08:16:10 crc kubenswrapper[5017]: I0129 08:16:10.675489 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/429ded21-89c8-40cf-b233-90403c09606f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "429ded21-89c8-40cf-b233-90403c09606f" (UID: "429ded21-89c8-40cf-b233-90403c09606f"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:16:10 crc kubenswrapper[5017]: I0129 08:16:10.675692 5017 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/429ded21-89c8-40cf-b233-90403c09606f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 08:16:10 crc kubenswrapper[5017]: I0129 08:16:10.675706 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dh2jh\" (UniqueName: \"kubernetes.io/projected/b6d4fe95-8d0c-4b4f-84ea-b2b1338f5bf2-kube-api-access-dh2jh\") on node \"crc\" DevicePath \"\"" Jan 29 08:16:10 crc kubenswrapper[5017]: I0129 08:16:10.675718 5017 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b6d4fe95-8d0c-4b4f-84ea-b2b1338f5bf2-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 08:16:10 crc kubenswrapper[5017]: I0129 08:16:10.679752 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/429ded21-89c8-40cf-b233-90403c09606f-kube-api-access-pvlv2" (OuterVolumeSpecName: "kube-api-access-pvlv2") pod "429ded21-89c8-40cf-b233-90403c09606f" (UID: "429ded21-89c8-40cf-b233-90403c09606f"). InnerVolumeSpecName "kube-api-access-pvlv2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:16:10 crc kubenswrapper[5017]: I0129 08:16:10.780094 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pvlv2\" (UniqueName: \"kubernetes.io/projected/429ded21-89c8-40cf-b233-90403c09606f-kube-api-access-pvlv2\") on node \"crc\" DevicePath \"\"" Jan 29 08:16:11 crc kubenswrapper[5017]: I0129 08:16:11.153888 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-24d0-account-create-update-2tprl" event={"ID":"429ded21-89c8-40cf-b233-90403c09606f","Type":"ContainerDied","Data":"0a2b0609a3057da2f01e319544074c155ac858bf85bc5c6ffe217648ecb084a3"} Jan 29 08:16:11 crc kubenswrapper[5017]: I0129 08:16:11.154450 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a2b0609a3057da2f01e319544074c155ac858bf85bc5c6ffe217648ecb084a3" Jan 29 08:16:11 crc kubenswrapper[5017]: I0129 08:16:11.154395 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-24d0-account-create-update-2tprl" Jan 29 08:16:11 crc kubenswrapper[5017]: I0129 08:16:11.157793 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-flh7f" event={"ID":"b6d4fe95-8d0c-4b4f-84ea-b2b1338f5bf2","Type":"ContainerDied","Data":"3bf01358ff363763e8f96d36b14285116e3c96e8773fb0d794f185c3b36c115d"} Jan 29 08:16:11 crc kubenswrapper[5017]: I0129 08:16:11.157815 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3bf01358ff363763e8f96d36b14285116e3c96e8773fb0d794f185c3b36c115d" Jan 29 08:16:11 crc kubenswrapper[5017]: I0129 08:16:11.157935 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-create-flh7f" Jan 29 08:16:11 crc kubenswrapper[5017]: E0129 08:16:11.423908 5017 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod429ded21_89c8_40cf_b233_90403c09606f.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod429ded21_89c8_40cf_b233_90403c09606f.slice/crio-0a2b0609a3057da2f01e319544074c155ac858bf85bc5c6ffe217648ecb084a3\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb6d4fe95_8d0c_4b4f_84ea_b2b1338f5bf2.slice/crio-3bf01358ff363763e8f96d36b14285116e3c96e8773fb0d794f185c3b36c115d\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb6d4fe95_8d0c_4b4f_84ea_b2b1338f5bf2.slice\": RecentStats: unable to find data in memory cache]" Jan 29 08:16:12 crc kubenswrapper[5017]: I0129 08:16:12.406159 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-db-sync-jkbkt"] Jan 29 08:16:12 crc kubenswrapper[5017]: E0129 08:16:12.407253 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="429ded21-89c8-40cf-b233-90403c09606f" containerName="mariadb-account-create-update" Jan 29 08:16:12 crc kubenswrapper[5017]: I0129 08:16:12.407275 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="429ded21-89c8-40cf-b233-90403c09606f" containerName="mariadb-account-create-update" Jan 29 08:16:12 crc kubenswrapper[5017]: E0129 08:16:12.407334 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6d4fe95-8d0c-4b4f-84ea-b2b1338f5bf2" containerName="mariadb-database-create" Jan 29 08:16:12 crc kubenswrapper[5017]: I0129 08:16:12.407343 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6d4fe95-8d0c-4b4f-84ea-b2b1338f5bf2" containerName="mariadb-database-create" Jan 29 08:16:12 crc kubenswrapper[5017]: I0129 08:16:12.407589 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6d4fe95-8d0c-4b4f-84ea-b2b1338f5bf2" containerName="mariadb-database-create" Jan 29 08:16:12 crc kubenswrapper[5017]: I0129 08:16:12.407619 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="429ded21-89c8-40cf-b233-90403c09606f" containerName="mariadb-account-create-update" Jan 29 08:16:12 crc kubenswrapper[5017]: I0129 08:16:12.408709 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-sync-jkbkt" Jan 29 08:16:12 crc kubenswrapper[5017]: I0129 08:16:12.411176 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data" Jan 29 08:16:12 crc kubenswrapper[5017]: I0129 08:16:12.411323 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-7sbcd" Jan 29 08:16:12 crc kubenswrapper[5017]: I0129 08:16:12.423834 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-sync-jkbkt"] Jan 29 08:16:12 crc kubenswrapper[5017]: I0129 08:16:12.424794 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/8eb5a3e4-977b-403a-962e-ab8e0178dca1-job-config-data\") pod \"manila-db-sync-jkbkt\" (UID: \"8eb5a3e4-977b-403a-962e-ab8e0178dca1\") " pod="openstack/manila-db-sync-jkbkt" Jan 29 08:16:12 crc kubenswrapper[5017]: I0129 08:16:12.427717 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5s6g\" (UniqueName: \"kubernetes.io/projected/8eb5a3e4-977b-403a-962e-ab8e0178dca1-kube-api-access-v5s6g\") pod \"manila-db-sync-jkbkt\" (UID: \"8eb5a3e4-977b-403a-962e-ab8e0178dca1\") " pod="openstack/manila-db-sync-jkbkt" Jan 29 08:16:12 crc kubenswrapper[5017]: I0129 08:16:12.428083 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8eb5a3e4-977b-403a-962e-ab8e0178dca1-config-data\") pod \"manila-db-sync-jkbkt\" (UID: \"8eb5a3e4-977b-403a-962e-ab8e0178dca1\") " pod="openstack/manila-db-sync-jkbkt" Jan 29 08:16:12 crc kubenswrapper[5017]: I0129 08:16:12.428293 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8eb5a3e4-977b-403a-962e-ab8e0178dca1-combined-ca-bundle\") pod \"manila-db-sync-jkbkt\" (UID: \"8eb5a3e4-977b-403a-962e-ab8e0178dca1\") " pod="openstack/manila-db-sync-jkbkt" Jan 29 08:16:12 crc kubenswrapper[5017]: I0129 08:16:12.530611 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8eb5a3e4-977b-403a-962e-ab8e0178dca1-combined-ca-bundle\") pod \"manila-db-sync-jkbkt\" (UID: \"8eb5a3e4-977b-403a-962e-ab8e0178dca1\") " pod="openstack/manila-db-sync-jkbkt" Jan 29 08:16:12 crc kubenswrapper[5017]: I0129 08:16:12.530773 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/8eb5a3e4-977b-403a-962e-ab8e0178dca1-job-config-data\") pod \"manila-db-sync-jkbkt\" (UID: \"8eb5a3e4-977b-403a-962e-ab8e0178dca1\") " pod="openstack/manila-db-sync-jkbkt" Jan 29 08:16:12 crc kubenswrapper[5017]: I0129 08:16:12.530831 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5s6g\" (UniqueName: \"kubernetes.io/projected/8eb5a3e4-977b-403a-962e-ab8e0178dca1-kube-api-access-v5s6g\") pod \"manila-db-sync-jkbkt\" (UID: \"8eb5a3e4-977b-403a-962e-ab8e0178dca1\") " pod="openstack/manila-db-sync-jkbkt" Jan 29 08:16:12 crc kubenswrapper[5017]: I0129 08:16:12.530918 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8eb5a3e4-977b-403a-962e-ab8e0178dca1-config-data\") pod \"manila-db-sync-jkbkt\" (UID: 
\"8eb5a3e4-977b-403a-962e-ab8e0178dca1\") " pod="openstack/manila-db-sync-jkbkt" Jan 29 08:16:12 crc kubenswrapper[5017]: I0129 08:16:12.538930 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/8eb5a3e4-977b-403a-962e-ab8e0178dca1-job-config-data\") pod \"manila-db-sync-jkbkt\" (UID: \"8eb5a3e4-977b-403a-962e-ab8e0178dca1\") " pod="openstack/manila-db-sync-jkbkt" Jan 29 08:16:12 crc kubenswrapper[5017]: I0129 08:16:12.539508 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8eb5a3e4-977b-403a-962e-ab8e0178dca1-combined-ca-bundle\") pod \"manila-db-sync-jkbkt\" (UID: \"8eb5a3e4-977b-403a-962e-ab8e0178dca1\") " pod="openstack/manila-db-sync-jkbkt" Jan 29 08:16:12 crc kubenswrapper[5017]: I0129 08:16:12.539517 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8eb5a3e4-977b-403a-962e-ab8e0178dca1-config-data\") pod \"manila-db-sync-jkbkt\" (UID: \"8eb5a3e4-977b-403a-962e-ab8e0178dca1\") " pod="openstack/manila-db-sync-jkbkt" Jan 29 08:16:12 crc kubenswrapper[5017]: I0129 08:16:12.549262 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5s6g\" (UniqueName: \"kubernetes.io/projected/8eb5a3e4-977b-403a-962e-ab8e0178dca1-kube-api-access-v5s6g\") pod \"manila-db-sync-jkbkt\" (UID: \"8eb5a3e4-977b-403a-962e-ab8e0178dca1\") " pod="openstack/manila-db-sync-jkbkt" Jan 29 08:16:12 crc kubenswrapper[5017]: I0129 08:16:12.747787 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-jkbkt" Jan 29 08:16:14 crc kubenswrapper[5017]: I0129 08:16:14.179271 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-sync-jkbkt"] Jan 29 08:16:14 crc kubenswrapper[5017]: W0129 08:16:14.185845 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8eb5a3e4_977b_403a_962e_ab8e0178dca1.slice/crio-7e0564f14b65829c29da84e7902c168a4cf2c791c96d38c6d72491d64be6c6c5 WatchSource:0}: Error finding container 7e0564f14b65829c29da84e7902c168a4cf2c791c96d38c6d72491d64be6c6c5: Status 404 returned error can't find the container with id 7e0564f14b65829c29da84e7902c168a4cf2c791c96d38c6d72491d64be6c6c5 Jan 29 08:16:14 crc kubenswrapper[5017]: I0129 08:16:14.215662 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-jkbkt" event={"ID":"8eb5a3e4-977b-403a-962e-ab8e0178dca1","Type":"ContainerStarted","Data":"7e0564f14b65829c29da84e7902c168a4cf2c791c96d38c6d72491d64be6c6c5"} Jan 29 08:16:16 crc kubenswrapper[5017]: I0129 08:16:16.239545 5017 generic.go:334] "Generic (PLEG): container finished" podID="c0ba96a1-7a56-4351-b801-b4c7885e0445" containerID="44abedb5ce56603b580ba6406d2a1098fa596affd62ca19a2ee10a937cc014d7" exitCode=0 Jan 29 08:16:16 crc kubenswrapper[5017]: I0129 08:16:16.239651 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-424zb" event={"ID":"c0ba96a1-7a56-4351-b801-b4c7885e0445","Type":"ContainerDied","Data":"44abedb5ce56603b580ba6406d2a1098fa596affd62ca19a2ee10a937cc014d7"} Jan 29 08:16:20 crc kubenswrapper[5017]: I0129 08:16:20.290022 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-424zb" 
event={"ID":"c0ba96a1-7a56-4351-b801-b4c7885e0445","Type":"ContainerStarted","Data":"750291f05b236c614e498bc9ee282dcb1dced4feb0a65ad23f1895d92289231f"} Jan 29 08:16:20 crc kubenswrapper[5017]: I0129 08:16:20.292432 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-jkbkt" event={"ID":"8eb5a3e4-977b-403a-962e-ab8e0178dca1","Type":"ContainerStarted","Data":"125368aa90d5fb8e2cc2937041e9d92ce6737cffdc1e24011a472523bf80fc9b"} Jan 29 08:16:20 crc kubenswrapper[5017]: I0129 08:16:20.340699 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-424zb" podStartSLOduration=4.096026167 podStartE2EDuration="14.340677409s" podCreationTimestamp="2026-01-29 08:16:06 +0000 UTC" firstStartedPulling="2026-01-29 08:16:09.103072528 +0000 UTC m=+6055.477520138" lastFinishedPulling="2026-01-29 08:16:19.34772377 +0000 UTC m=+6065.722171380" observedRunningTime="2026-01-29 08:16:20.313761148 +0000 UTC m=+6066.688208758" watchObservedRunningTime="2026-01-29 08:16:20.340677409 +0000 UTC m=+6066.715125019" Jan 29 08:16:20 crc kubenswrapper[5017]: I0129 08:16:20.348855 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-db-sync-jkbkt" podStartSLOduration=3.241944321 podStartE2EDuration="8.348836156s" podCreationTimestamp="2026-01-29 08:16:12 +0000 UTC" firstStartedPulling="2026-01-29 08:16:14.188168884 +0000 UTC m=+6060.562616494" lastFinishedPulling="2026-01-29 08:16:19.295060719 +0000 UTC m=+6065.669508329" observedRunningTime="2026-01-29 08:16:20.336481288 +0000 UTC m=+6066.710928898" watchObservedRunningTime="2026-01-29 08:16:20.348836156 +0000 UTC m=+6066.723283766" Jan 29 08:16:22 crc kubenswrapper[5017]: I0129 08:16:22.323037 5017 generic.go:334] "Generic (PLEG): container finished" podID="8eb5a3e4-977b-403a-962e-ab8e0178dca1" containerID="125368aa90d5fb8e2cc2937041e9d92ce6737cffdc1e24011a472523bf80fc9b" exitCode=0 Jan 29 08:16:22 crc kubenswrapper[5017]: I0129 08:16:22.329916 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-jkbkt" event={"ID":"8eb5a3e4-977b-403a-962e-ab8e0178dca1","Type":"ContainerDied","Data":"125368aa90d5fb8e2cc2937041e9d92ce6737cffdc1e24011a472523bf80fc9b"} Jan 29 08:16:23 crc kubenswrapper[5017]: I0129 08:16:23.813074 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-sync-jkbkt" Jan 29 08:16:23 crc kubenswrapper[5017]: I0129 08:16:23.923732 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8eb5a3e4-977b-403a-962e-ab8e0178dca1-config-data\") pod \"8eb5a3e4-977b-403a-962e-ab8e0178dca1\" (UID: \"8eb5a3e4-977b-403a-962e-ab8e0178dca1\") " Jan 29 08:16:23 crc kubenswrapper[5017]: I0129 08:16:23.923854 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8eb5a3e4-977b-403a-962e-ab8e0178dca1-combined-ca-bundle\") pod \"8eb5a3e4-977b-403a-962e-ab8e0178dca1\" (UID: \"8eb5a3e4-977b-403a-962e-ab8e0178dca1\") " Jan 29 08:16:23 crc kubenswrapper[5017]: I0129 08:16:23.923941 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/8eb5a3e4-977b-403a-962e-ab8e0178dca1-job-config-data\") pod \"8eb5a3e4-977b-403a-962e-ab8e0178dca1\" (UID: \"8eb5a3e4-977b-403a-962e-ab8e0178dca1\") " Jan 29 08:16:23 crc kubenswrapper[5017]: I0129 08:16:23.924087 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v5s6g\" (UniqueName: \"kubernetes.io/projected/8eb5a3e4-977b-403a-962e-ab8e0178dca1-kube-api-access-v5s6g\") pod \"8eb5a3e4-977b-403a-962e-ab8e0178dca1\" (UID: \"8eb5a3e4-977b-403a-962e-ab8e0178dca1\") " Jan 29 08:16:23 crc kubenswrapper[5017]: I0129 08:16:23.931175 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8eb5a3e4-977b-403a-962e-ab8e0178dca1-job-config-data" (OuterVolumeSpecName: "job-config-data") pod "8eb5a3e4-977b-403a-962e-ab8e0178dca1" (UID: "8eb5a3e4-977b-403a-962e-ab8e0178dca1"). InnerVolumeSpecName "job-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:16:23 crc kubenswrapper[5017]: I0129 08:16:23.931688 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8eb5a3e4-977b-403a-962e-ab8e0178dca1-kube-api-access-v5s6g" (OuterVolumeSpecName: "kube-api-access-v5s6g") pod "8eb5a3e4-977b-403a-962e-ab8e0178dca1" (UID: "8eb5a3e4-977b-403a-962e-ab8e0178dca1"). InnerVolumeSpecName "kube-api-access-v5s6g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:16:23 crc kubenswrapper[5017]: I0129 08:16:23.936064 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8eb5a3e4-977b-403a-962e-ab8e0178dca1-config-data" (OuterVolumeSpecName: "config-data") pod "8eb5a3e4-977b-403a-962e-ab8e0178dca1" (UID: "8eb5a3e4-977b-403a-962e-ab8e0178dca1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:16:23 crc kubenswrapper[5017]: I0129 08:16:23.962824 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8eb5a3e4-977b-403a-962e-ab8e0178dca1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8eb5a3e4-977b-403a-962e-ab8e0178dca1" (UID: "8eb5a3e4-977b-403a-962e-ab8e0178dca1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:16:24 crc kubenswrapper[5017]: I0129 08:16:24.026835 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v5s6g\" (UniqueName: \"kubernetes.io/projected/8eb5a3e4-977b-403a-962e-ab8e0178dca1-kube-api-access-v5s6g\") on node \"crc\" DevicePath \"\"" Jan 29 08:16:24 crc kubenswrapper[5017]: I0129 08:16:24.026893 5017 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8eb5a3e4-977b-403a-962e-ab8e0178dca1-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 08:16:24 crc kubenswrapper[5017]: I0129 08:16:24.026910 5017 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8eb5a3e4-977b-403a-962e-ab8e0178dca1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:16:24 crc kubenswrapper[5017]: I0129 08:16:24.026926 5017 reconciler_common.go:293] "Volume detached for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/8eb5a3e4-977b-403a-962e-ab8e0178dca1-job-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 08:16:24 crc kubenswrapper[5017]: I0129 08:16:24.356586 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-jkbkt" event={"ID":"8eb5a3e4-977b-403a-962e-ab8e0178dca1","Type":"ContainerDied","Data":"7e0564f14b65829c29da84e7902c168a4cf2c791c96d38c6d72491d64be6c6c5"} Jan 29 08:16:24 crc kubenswrapper[5017]: I0129 08:16:24.356635 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e0564f14b65829c29da84e7902c168a4cf2c791c96d38c6d72491d64be6c6c5" Jan 29 08:16:24 crc kubenswrapper[5017]: I0129 08:16:24.356692 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-jkbkt" Jan 29 08:16:25 crc kubenswrapper[5017]: I0129 08:16:25.359029 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-share-share1-0"] Jan 29 08:16:25 crc kubenswrapper[5017]: E0129 08:16:25.361248 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8eb5a3e4-977b-403a-962e-ab8e0178dca1" containerName="manila-db-sync" Jan 29 08:16:25 crc kubenswrapper[5017]: I0129 08:16:25.361361 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="8eb5a3e4-977b-403a-962e-ab8e0178dca1" containerName="manila-db-sync" Jan 29 08:16:25 crc kubenswrapper[5017]: I0129 08:16:25.361712 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="8eb5a3e4-977b-403a-962e-ab8e0178dca1" containerName="manila-db-sync" Jan 29 08:16:25 crc kubenswrapper[5017]: I0129 08:16:25.363207 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Jan 29 08:16:25 crc kubenswrapper[5017]: I0129 08:16:25.372118 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-7sbcd" Jan 29 08:16:25 crc kubenswrapper[5017]: I0129 08:16:25.372427 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scripts" Jan 29 08:16:25 crc kubenswrapper[5017]: I0129 08:16:25.374772 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data" Jan 29 08:16:25 crc kubenswrapper[5017]: I0129 08:16:25.375142 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-share-share1-config-data" Jan 29 08:16:25 crc kubenswrapper[5017]: I0129 08:16:25.396847 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Jan 29 08:16:25 crc kubenswrapper[5017]: I0129 08:16:25.432182 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-scheduler-0"] Jan 29 08:16:25 crc kubenswrapper[5017]: I0129 08:16:25.434691 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Jan 29 08:16:25 crc kubenswrapper[5017]: I0129 08:16:25.455323 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scheduler-config-data" Jan 29 08:16:25 crc kubenswrapper[5017]: I0129 08:16:25.487799 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/d1d4579f-eefb-4043-b0a1-e3326d19bd24-ceph\") pod \"manila-share-share1-0\" (UID: \"d1d4579f-eefb-4043-b0a1-e3326d19bd24\") " pod="openstack/manila-share-share1-0" Jan 29 08:16:25 crc kubenswrapper[5017]: I0129 08:16:25.487921 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/d1d4579f-eefb-4043-b0a1-e3326d19bd24-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"d1d4579f-eefb-4043-b0a1-e3326d19bd24\") " pod="openstack/manila-share-share1-0" Jan 29 08:16:25 crc kubenswrapper[5017]: I0129 08:16:25.488066 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1d4579f-eefb-4043-b0a1-e3326d19bd24-config-data\") pod \"manila-share-share1-0\" (UID: \"d1d4579f-eefb-4043-b0a1-e3326d19bd24\") " pod="openstack/manila-share-share1-0" Jan 29 08:16:25 crc kubenswrapper[5017]: I0129 08:16:25.488162 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1d4579f-eefb-4043-b0a1-e3326d19bd24-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"d1d4579f-eefb-4043-b0a1-e3326d19bd24\") " pod="openstack/manila-share-share1-0" Jan 29 08:16:25 crc kubenswrapper[5017]: I0129 08:16:25.488375 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Jan 29 08:16:25 crc kubenswrapper[5017]: I0129 08:16:25.488436 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d1d4579f-eefb-4043-b0a1-e3326d19bd24-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"d1d4579f-eefb-4043-b0a1-e3326d19bd24\") " pod="openstack/manila-share-share1-0" Jan 29 08:16:25 crc kubenswrapper[5017]: I0129 
08:16:25.488897 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fn2q\" (UniqueName: \"kubernetes.io/projected/d1d4579f-eefb-4043-b0a1-e3326d19bd24-kube-api-access-2fn2q\") pod \"manila-share-share1-0\" (UID: \"d1d4579f-eefb-4043-b0a1-e3326d19bd24\") " pod="openstack/manila-share-share1-0" Jan 29 08:16:25 crc kubenswrapper[5017]: I0129 08:16:25.489645 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d1d4579f-eefb-4043-b0a1-e3326d19bd24-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"d1d4579f-eefb-4043-b0a1-e3326d19bd24\") " pod="openstack/manila-share-share1-0" Jan 29 08:16:25 crc kubenswrapper[5017]: I0129 08:16:25.489691 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1d4579f-eefb-4043-b0a1-e3326d19bd24-scripts\") pod \"manila-share-share1-0\" (UID: \"d1d4579f-eefb-4043-b0a1-e3326d19bd24\") " pod="openstack/manila-share-share1-0" Jan 29 08:16:25 crc kubenswrapper[5017]: I0129 08:16:25.557988 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-bdbdd675-6qdkz"] Jan 29 08:16:25 crc kubenswrapper[5017]: I0129 08:16:25.560492 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bdbdd675-6qdkz" Jan 29 08:16:25 crc kubenswrapper[5017]: I0129 08:16:25.592114 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d1d4579f-eefb-4043-b0a1-e3326d19bd24-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"d1d4579f-eefb-4043-b0a1-e3326d19bd24\") " pod="openstack/manila-share-share1-0" Jan 29 08:16:25 crc kubenswrapper[5017]: I0129 08:16:25.592177 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1d4579f-eefb-4043-b0a1-e3326d19bd24-scripts\") pod \"manila-share-share1-0\" (UID: \"d1d4579f-eefb-4043-b0a1-e3326d19bd24\") " pod="openstack/manila-share-share1-0" Jan 29 08:16:25 crc kubenswrapper[5017]: I0129 08:16:25.592232 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7dnp\" (UniqueName: \"kubernetes.io/projected/c25c0058-8e1c-428b-8955-21f70c22b5e5-kube-api-access-r7dnp\") pod \"manila-scheduler-0\" (UID: \"c25c0058-8e1c-428b-8955-21f70c22b5e5\") " pod="openstack/manila-scheduler-0" Jan 29 08:16:25 crc kubenswrapper[5017]: I0129 08:16:25.592293 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/d1d4579f-eefb-4043-b0a1-e3326d19bd24-ceph\") pod \"manila-share-share1-0\" (UID: \"d1d4579f-eefb-4043-b0a1-e3326d19bd24\") " pod="openstack/manila-share-share1-0" Jan 29 08:16:25 crc kubenswrapper[5017]: I0129 08:16:25.592330 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/d1d4579f-eefb-4043-b0a1-e3326d19bd24-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"d1d4579f-eefb-4043-b0a1-e3326d19bd24\") " pod="openstack/manila-share-share1-0" Jan 29 08:16:25 crc kubenswrapper[5017]: I0129 08:16:25.592369 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/c25c0058-8e1c-428b-8955-21f70c22b5e5-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"c25c0058-8e1c-428b-8955-21f70c22b5e5\") " pod="openstack/manila-scheduler-0" Jan 29 08:16:25 crc kubenswrapper[5017]: I0129 08:16:25.592401 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1d4579f-eefb-4043-b0a1-e3326d19bd24-config-data\") pod \"manila-share-share1-0\" (UID: \"d1d4579f-eefb-4043-b0a1-e3326d19bd24\") " pod="openstack/manila-share-share1-0" Jan 29 08:16:25 crc kubenswrapper[5017]: I0129 08:16:25.592440 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c25c0058-8e1c-428b-8955-21f70c22b5e5-scripts\") pod \"manila-scheduler-0\" (UID: \"c25c0058-8e1c-428b-8955-21f70c22b5e5\") " pod="openstack/manila-scheduler-0" Jan 29 08:16:25 crc kubenswrapper[5017]: I0129 08:16:25.592468 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1d4579f-eefb-4043-b0a1-e3326d19bd24-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"d1d4579f-eefb-4043-b0a1-e3326d19bd24\") " pod="openstack/manila-share-share1-0" Jan 29 08:16:25 crc kubenswrapper[5017]: I0129 08:16:25.592497 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c25c0058-8e1c-428b-8955-21f70c22b5e5-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"c25c0058-8e1c-428b-8955-21f70c22b5e5\") " pod="openstack/manila-scheduler-0" Jan 29 08:16:25 crc kubenswrapper[5017]: I0129 08:16:25.592573 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d1d4579f-eefb-4043-b0a1-e3326d19bd24-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"d1d4579f-eefb-4043-b0a1-e3326d19bd24\") " pod="openstack/manila-share-share1-0" Jan 29 08:16:25 crc kubenswrapper[5017]: I0129 08:16:25.592614 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fn2q\" (UniqueName: \"kubernetes.io/projected/d1d4579f-eefb-4043-b0a1-e3326d19bd24-kube-api-access-2fn2q\") pod \"manila-share-share1-0\" (UID: \"d1d4579f-eefb-4043-b0a1-e3326d19bd24\") " pod="openstack/manila-share-share1-0" Jan 29 08:16:25 crc kubenswrapper[5017]: I0129 08:16:25.592656 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c25c0058-8e1c-428b-8955-21f70c22b5e5-config-data\") pod \"manila-scheduler-0\" (UID: \"c25c0058-8e1c-428b-8955-21f70c22b5e5\") " pod="openstack/manila-scheduler-0" Jan 29 08:16:25 crc kubenswrapper[5017]: I0129 08:16:25.592684 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c25c0058-8e1c-428b-8955-21f70c22b5e5-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"c25c0058-8e1c-428b-8955-21f70c22b5e5\") " pod="openstack/manila-scheduler-0" Jan 29 08:16:25 crc kubenswrapper[5017]: I0129 08:16:25.596201 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d1d4579f-eefb-4043-b0a1-e3326d19bd24-etc-machine-id\") pod \"manila-share-share1-0\" (UID: 
\"d1d4579f-eefb-4043-b0a1-e3326d19bd24\") " pod="openstack/manila-share-share1-0" Jan 29 08:16:25 crc kubenswrapper[5017]: I0129 08:16:25.596343 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/d1d4579f-eefb-4043-b0a1-e3326d19bd24-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"d1d4579f-eefb-4043-b0a1-e3326d19bd24\") " pod="openstack/manila-share-share1-0" Jan 29 08:16:25 crc kubenswrapper[5017]: I0129 08:16:25.603145 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1d4579f-eefb-4043-b0a1-e3326d19bd24-scripts\") pod \"manila-share-share1-0\" (UID: \"d1d4579f-eefb-4043-b0a1-e3326d19bd24\") " pod="openstack/manila-share-share1-0" Jan 29 08:16:25 crc kubenswrapper[5017]: I0129 08:16:25.603539 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/d1d4579f-eefb-4043-b0a1-e3326d19bd24-ceph\") pod \"manila-share-share1-0\" (UID: \"d1d4579f-eefb-4043-b0a1-e3326d19bd24\") " pod="openstack/manila-share-share1-0" Jan 29 08:16:25 crc kubenswrapper[5017]: I0129 08:16:25.604605 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1d4579f-eefb-4043-b0a1-e3326d19bd24-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"d1d4579f-eefb-4043-b0a1-e3326d19bd24\") " pod="openstack/manila-share-share1-0" Jan 29 08:16:25 crc kubenswrapper[5017]: I0129 08:16:25.607369 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bdbdd675-6qdkz"] Jan 29 08:16:25 crc kubenswrapper[5017]: I0129 08:16:25.617740 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1d4579f-eefb-4043-b0a1-e3326d19bd24-config-data\") pod \"manila-share-share1-0\" (UID: \"d1d4579f-eefb-4043-b0a1-e3326d19bd24\") " pod="openstack/manila-share-share1-0" Jan 29 08:16:25 crc kubenswrapper[5017]: I0129 08:16:25.648244 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d1d4579f-eefb-4043-b0a1-e3326d19bd24-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"d1d4579f-eefb-4043-b0a1-e3326d19bd24\") " pod="openstack/manila-share-share1-0" Jan 29 08:16:25 crc kubenswrapper[5017]: I0129 08:16:25.655277 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fn2q\" (UniqueName: \"kubernetes.io/projected/d1d4579f-eefb-4043-b0a1-e3326d19bd24-kube-api-access-2fn2q\") pod \"manila-share-share1-0\" (UID: \"d1d4579f-eefb-4043-b0a1-e3326d19bd24\") " pod="openstack/manila-share-share1-0" Jan 29 08:16:25 crc kubenswrapper[5017]: I0129 08:16:25.710416 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c25c0058-8e1c-428b-8955-21f70c22b5e5-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"c25c0058-8e1c-428b-8955-21f70c22b5e5\") " pod="openstack/manila-scheduler-0" Jan 29 08:16:25 crc kubenswrapper[5017]: I0129 08:16:25.710484 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/44794862-f9c4-403b-bc6e-f3dd9c42cd85-dns-svc\") pod \"dnsmasq-dns-bdbdd675-6qdkz\" (UID: \"44794862-f9c4-403b-bc6e-f3dd9c42cd85\") " pod="openstack/dnsmasq-dns-bdbdd675-6qdkz" Jan 
Jan 29 08:16:25 crc kubenswrapper[5017]: I0129 08:16:25.710519 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c25c0058-8e1c-428b-8955-21f70c22b5e5-scripts\") pod \"manila-scheduler-0\" (UID: \"c25c0058-8e1c-428b-8955-21f70c22b5e5\") " pod="openstack/manila-scheduler-0"
Jan 29 08:16:25 crc kubenswrapper[5017]: I0129 08:16:25.710548 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2z5x7\" (UniqueName: \"kubernetes.io/projected/44794862-f9c4-403b-bc6e-f3dd9c42cd85-kube-api-access-2z5x7\") pod \"dnsmasq-dns-bdbdd675-6qdkz\" (UID: \"44794862-f9c4-403b-bc6e-f3dd9c42cd85\") " pod="openstack/dnsmasq-dns-bdbdd675-6qdkz"
Jan 29 08:16:25 crc kubenswrapper[5017]: I0129 08:16:25.710574 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c25c0058-8e1c-428b-8955-21f70c22b5e5-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"c25c0058-8e1c-428b-8955-21f70c22b5e5\") " pod="openstack/manila-scheduler-0"
Jan 29 08:16:25 crc kubenswrapper[5017]: I0129 08:16:25.710678 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c25c0058-8e1c-428b-8955-21f70c22b5e5-config-data\") pod \"manila-scheduler-0\" (UID: \"c25c0058-8e1c-428b-8955-21f70c22b5e5\") " pod="openstack/manila-scheduler-0"
Jan 29 08:16:25 crc kubenswrapper[5017]: I0129 08:16:25.710705 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c25c0058-8e1c-428b-8955-21f70c22b5e5-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"c25c0058-8e1c-428b-8955-21f70c22b5e5\") " pod="openstack/manila-scheduler-0"
Jan 29 08:16:25 crc kubenswrapper[5017]: I0129 08:16:25.710744 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/44794862-f9c4-403b-bc6e-f3dd9c42cd85-ovsdbserver-sb\") pod \"dnsmasq-dns-bdbdd675-6qdkz\" (UID: \"44794862-f9c4-403b-bc6e-f3dd9c42cd85\") " pod="openstack/dnsmasq-dns-bdbdd675-6qdkz"
Jan 29 08:16:25 crc kubenswrapper[5017]: I0129 08:16:25.710791 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/44794862-f9c4-403b-bc6e-f3dd9c42cd85-ovsdbserver-nb\") pod \"dnsmasq-dns-bdbdd675-6qdkz\" (UID: \"44794862-f9c4-403b-bc6e-f3dd9c42cd85\") " pod="openstack/dnsmasq-dns-bdbdd675-6qdkz"
Jan 29 08:16:25 crc kubenswrapper[5017]: I0129 08:16:25.710814 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44794862-f9c4-403b-bc6e-f3dd9c42cd85-config\") pod \"dnsmasq-dns-bdbdd675-6qdkz\" (UID: \"44794862-f9c4-403b-bc6e-f3dd9c42cd85\") " pod="openstack/dnsmasq-dns-bdbdd675-6qdkz"
Jan 29 08:16:25 crc kubenswrapper[5017]: I0129 08:16:25.710836 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7dnp\" (UniqueName: \"kubernetes.io/projected/c25c0058-8e1c-428b-8955-21f70c22b5e5-kube-api-access-r7dnp\") pod \"manila-scheduler-0\" (UID: \"c25c0058-8e1c-428b-8955-21f70c22b5e5\") " pod="openstack/manila-scheduler-0"
Jan 29 08:16:25 crc kubenswrapper[5017]: I0129 08:16:25.711305 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c25c0058-8e1c-428b-8955-21f70c22b5e5-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"c25c0058-8e1c-428b-8955-21f70c22b5e5\") " pod="openstack/manila-scheduler-0"
Jan 29 08:16:25 crc kubenswrapper[5017]: I0129 08:16:25.727554 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c25c0058-8e1c-428b-8955-21f70c22b5e5-scripts\") pod \"manila-scheduler-0\" (UID: \"c25c0058-8e1c-428b-8955-21f70c22b5e5\") " pod="openstack/manila-scheduler-0"
Jan 29 08:16:25 crc kubenswrapper[5017]: I0129 08:16:25.739583 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c25c0058-8e1c-428b-8955-21f70c22b5e5-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"c25c0058-8e1c-428b-8955-21f70c22b5e5\") " pod="openstack/manila-scheduler-0"
Jan 29 08:16:25 crc kubenswrapper[5017]: I0129 08:16:25.746991 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c25c0058-8e1c-428b-8955-21f70c22b5e5-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"c25c0058-8e1c-428b-8955-21f70c22b5e5\") " pod="openstack/manila-scheduler-0"
Jan 29 08:16:25 crc kubenswrapper[5017]: I0129 08:16:25.755062 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c25c0058-8e1c-428b-8955-21f70c22b5e5-config-data\") pod \"manila-scheduler-0\" (UID: \"c25c0058-8e1c-428b-8955-21f70c22b5e5\") " pod="openstack/manila-scheduler-0"
Jan 29 08:16:25 crc kubenswrapper[5017]: I0129 08:16:25.755544 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7dnp\" (UniqueName: \"kubernetes.io/projected/c25c0058-8e1c-428b-8955-21f70c22b5e5-kube-api-access-r7dnp\") pod \"manila-scheduler-0\" (UID: \"c25c0058-8e1c-428b-8955-21f70c22b5e5\") " pod="openstack/manila-scheduler-0"
Jan 29 08:16:25 crc kubenswrapper[5017]: I0129 08:16:25.756835 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0"
Jan 29 08:16:25 crc kubenswrapper[5017]: I0129 08:16:25.794653 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0"
Jan 29 08:16:25 crc kubenswrapper[5017]: I0129 08:16:25.812627 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/44794862-f9c4-403b-bc6e-f3dd9c42cd85-ovsdbserver-sb\") pod \"dnsmasq-dns-bdbdd675-6qdkz\" (UID: \"44794862-f9c4-403b-bc6e-f3dd9c42cd85\") " pod="openstack/dnsmasq-dns-bdbdd675-6qdkz"
Jan 29 08:16:25 crc kubenswrapper[5017]: I0129 08:16:25.812695 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/44794862-f9c4-403b-bc6e-f3dd9c42cd85-ovsdbserver-nb\") pod \"dnsmasq-dns-bdbdd675-6qdkz\" (UID: \"44794862-f9c4-403b-bc6e-f3dd9c42cd85\") " pod="openstack/dnsmasq-dns-bdbdd675-6qdkz"
Jan 29 08:16:25 crc kubenswrapper[5017]: I0129 08:16:25.812720 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44794862-f9c4-403b-bc6e-f3dd9c42cd85-config\") pod \"dnsmasq-dns-bdbdd675-6qdkz\" (UID: \"44794862-f9c4-403b-bc6e-f3dd9c42cd85\") " pod="openstack/dnsmasq-dns-bdbdd675-6qdkz"
Jan 29 08:16:25 crc kubenswrapper[5017]: I0129 08:16:25.812792 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/44794862-f9c4-403b-bc6e-f3dd9c42cd85-dns-svc\") pod \"dnsmasq-dns-bdbdd675-6qdkz\" (UID: \"44794862-f9c4-403b-bc6e-f3dd9c42cd85\") " pod="openstack/dnsmasq-dns-bdbdd675-6qdkz"
Jan 29 08:16:25 crc kubenswrapper[5017]: I0129 08:16:25.812831 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2z5x7\" (UniqueName: \"kubernetes.io/projected/44794862-f9c4-403b-bc6e-f3dd9c42cd85-kube-api-access-2z5x7\") pod \"dnsmasq-dns-bdbdd675-6qdkz\" (UID: \"44794862-f9c4-403b-bc6e-f3dd9c42cd85\") " pod="openstack/dnsmasq-dns-bdbdd675-6qdkz"
Jan 29 08:16:25 crc kubenswrapper[5017]: I0129 08:16:25.815539 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/44794862-f9c4-403b-bc6e-f3dd9c42cd85-ovsdbserver-sb\") pod \"dnsmasq-dns-bdbdd675-6qdkz\" (UID: \"44794862-f9c4-403b-bc6e-f3dd9c42cd85\") " pod="openstack/dnsmasq-dns-bdbdd675-6qdkz"
Jan 29 08:16:25 crc kubenswrapper[5017]: I0129 08:16:25.815865 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44794862-f9c4-403b-bc6e-f3dd9c42cd85-config\") pod \"dnsmasq-dns-bdbdd675-6qdkz\" (UID: \"44794862-f9c4-403b-bc6e-f3dd9c42cd85\") " pod="openstack/dnsmasq-dns-bdbdd675-6qdkz"
Jan 29 08:16:25 crc kubenswrapper[5017]: I0129 08:16:25.816420 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/44794862-f9c4-403b-bc6e-f3dd9c42cd85-ovsdbserver-nb\") pod \"dnsmasq-dns-bdbdd675-6qdkz\" (UID: \"44794862-f9c4-403b-bc6e-f3dd9c42cd85\") " pod="openstack/dnsmasq-dns-bdbdd675-6qdkz"
Jan 29 08:16:25 crc kubenswrapper[5017]: I0129 08:16:25.816467 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-api-0"]
Jan 29 08:16:25 crc kubenswrapper[5017]: I0129 08:16:25.818614 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0"
Jan 29 08:16:25 crc kubenswrapper[5017]: I0129 08:16:25.833348 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-api-config-data"
Jan 29 08:16:25 crc kubenswrapper[5017]: I0129 08:16:25.837504 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/44794862-f9c4-403b-bc6e-f3dd9c42cd85-dns-svc\") pod \"dnsmasq-dns-bdbdd675-6qdkz\" (UID: \"44794862-f9c4-403b-bc6e-f3dd9c42cd85\") " pod="openstack/dnsmasq-dns-bdbdd675-6qdkz"
Jan 29 08:16:25 crc kubenswrapper[5017]: I0129 08:16:25.860114 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2z5x7\" (UniqueName: \"kubernetes.io/projected/44794862-f9c4-403b-bc6e-f3dd9c42cd85-kube-api-access-2z5x7\") pod \"dnsmasq-dns-bdbdd675-6qdkz\" (UID: \"44794862-f9c4-403b-bc6e-f3dd9c42cd85\") " pod="openstack/dnsmasq-dns-bdbdd675-6qdkz"
Jan 29 08:16:25 crc kubenswrapper[5017]: I0129 08:16:25.867813 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"]
Jan 29 08:16:25 crc kubenswrapper[5017]: I0129 08:16:25.892053 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bdbdd675-6qdkz"
Jan 29 08:16:25 crc kubenswrapper[5017]: I0129 08:16:25.924138 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3a429f99-e77e-4a19-9293-1fcb5f49aa80-etc-machine-id\") pod \"manila-api-0\" (UID: \"3a429f99-e77e-4a19-9293-1fcb5f49aa80\") " pod="openstack/manila-api-0"
Jan 29 08:16:25 crc kubenswrapper[5017]: I0129 08:16:25.924411 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3a429f99-e77e-4a19-9293-1fcb5f49aa80-config-data-custom\") pod \"manila-api-0\" (UID: \"3a429f99-e77e-4a19-9293-1fcb5f49aa80\") " pod="openstack/manila-api-0"
Jan 29 08:16:25 crc kubenswrapper[5017]: I0129 08:16:25.924440 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a429f99-e77e-4a19-9293-1fcb5f49aa80-scripts\") pod \"manila-api-0\" (UID: \"3a429f99-e77e-4a19-9293-1fcb5f49aa80\") " pod="openstack/manila-api-0"
Jan 29 08:16:25 crc kubenswrapper[5017]: I0129 08:16:25.924855 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a429f99-e77e-4a19-9293-1fcb5f49aa80-config-data\") pod \"manila-api-0\" (UID: \"3a429f99-e77e-4a19-9293-1fcb5f49aa80\") " pod="openstack/manila-api-0"
Jan 29 08:16:25 crc kubenswrapper[5017]: I0129 08:16:25.924906 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a429f99-e77e-4a19-9293-1fcb5f49aa80-logs\") pod \"manila-api-0\" (UID: \"3a429f99-e77e-4a19-9293-1fcb5f49aa80\") " pod="openstack/manila-api-0"
Jan 29 08:16:25 crc kubenswrapper[5017]: I0129 08:16:25.925004 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a429f99-e77e-4a19-9293-1fcb5f49aa80-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"3a429f99-e77e-4a19-9293-1fcb5f49aa80\") " pod="openstack/manila-api-0"
Jan 29 08:16:25 crc kubenswrapper[5017]: I0129 08:16:25.925054 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvh86\" (UniqueName: \"kubernetes.io/projected/3a429f99-e77e-4a19-9293-1fcb5f49aa80-kube-api-access-xvh86\") pod \"manila-api-0\" (UID: \"3a429f99-e77e-4a19-9293-1fcb5f49aa80\") " pod="openstack/manila-api-0"
Jan 29 08:16:26 crc kubenswrapper[5017]: I0129 08:16:26.029423 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a429f99-e77e-4a19-9293-1fcb5f49aa80-config-data\") pod \"manila-api-0\" (UID: \"3a429f99-e77e-4a19-9293-1fcb5f49aa80\") " pod="openstack/manila-api-0"
Jan 29 08:16:26 crc kubenswrapper[5017]: I0129 08:16:26.029492 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a429f99-e77e-4a19-9293-1fcb5f49aa80-logs\") pod \"manila-api-0\" (UID: \"3a429f99-e77e-4a19-9293-1fcb5f49aa80\") " pod="openstack/manila-api-0"
Jan 29 08:16:26 crc kubenswrapper[5017]: I0129 08:16:26.029544 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a429f99-e77e-4a19-9293-1fcb5f49aa80-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"3a429f99-e77e-4a19-9293-1fcb5f49aa80\") " pod="openstack/manila-api-0"
Jan 29 08:16:26 crc kubenswrapper[5017]: I0129 08:16:26.029585 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvh86\" (UniqueName: \"kubernetes.io/projected/3a429f99-e77e-4a19-9293-1fcb5f49aa80-kube-api-access-xvh86\") pod \"manila-api-0\" (UID: \"3a429f99-e77e-4a19-9293-1fcb5f49aa80\") " pod="openstack/manila-api-0"
Jan 29 08:16:26 crc kubenswrapper[5017]: I0129 08:16:26.029644 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3a429f99-e77e-4a19-9293-1fcb5f49aa80-etc-machine-id\") pod \"manila-api-0\" (UID: \"3a429f99-e77e-4a19-9293-1fcb5f49aa80\") " pod="openstack/manila-api-0"
Jan 29 08:16:26 crc kubenswrapper[5017]: I0129 08:16:26.029740 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3a429f99-e77e-4a19-9293-1fcb5f49aa80-config-data-custom\") pod \"manila-api-0\" (UID: \"3a429f99-e77e-4a19-9293-1fcb5f49aa80\") " pod="openstack/manila-api-0"
Jan 29 08:16:26 crc kubenswrapper[5017]: I0129 08:16:26.029757 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a429f99-e77e-4a19-9293-1fcb5f49aa80-scripts\") pod \"manila-api-0\" (UID: \"3a429f99-e77e-4a19-9293-1fcb5f49aa80\") " pod="openstack/manila-api-0"
Jan 29 08:16:26 crc kubenswrapper[5017]: I0129 08:16:26.034923 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a429f99-e77e-4a19-9293-1fcb5f49aa80-logs\") pod \"manila-api-0\" (UID: \"3a429f99-e77e-4a19-9293-1fcb5f49aa80\") " pod="openstack/manila-api-0"
Jan 29 08:16:26 crc kubenswrapper[5017]: I0129 08:16:26.036380 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3a429f99-e77e-4a19-9293-1fcb5f49aa80-etc-machine-id\") pod \"manila-api-0\" (UID: \"3a429f99-e77e-4a19-9293-1fcb5f49aa80\") " pod="openstack/manila-api-0"
Jan 29 08:16:26 crc kubenswrapper[5017]: I0129 08:16:26.047017 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a429f99-e77e-4a19-9293-1fcb5f49aa80-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"3a429f99-e77e-4a19-9293-1fcb5f49aa80\") " pod="openstack/manila-api-0"
Jan 29 08:16:26 crc kubenswrapper[5017]: I0129 08:16:26.047742 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3a429f99-e77e-4a19-9293-1fcb5f49aa80-config-data-custom\") pod \"manila-api-0\" (UID: \"3a429f99-e77e-4a19-9293-1fcb5f49aa80\") " pod="openstack/manila-api-0"
Jan 29 08:16:26 crc kubenswrapper[5017]: I0129 08:16:26.053677 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a429f99-e77e-4a19-9293-1fcb5f49aa80-scripts\") pod \"manila-api-0\" (UID: \"3a429f99-e77e-4a19-9293-1fcb5f49aa80\") " pod="openstack/manila-api-0"
Jan 29 08:16:26 crc kubenswrapper[5017]: I0129 08:16:26.055024 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a429f99-e77e-4a19-9293-1fcb5f49aa80-config-data\") pod \"manila-api-0\" (UID: \"3a429f99-e77e-4a19-9293-1fcb5f49aa80\") " pod="openstack/manila-api-0"
Jan 29 08:16:26 crc kubenswrapper[5017]: I0129 08:16:26.057143 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvh86\" (UniqueName: \"kubernetes.io/projected/3a429f99-e77e-4a19-9293-1fcb5f49aa80-kube-api-access-xvh86\") pod \"manila-api-0\" (UID: \"3a429f99-e77e-4a19-9293-1fcb5f49aa80\") " pod="openstack/manila-api-0"
Jan 29 08:16:26 crc kubenswrapper[5017]: I0129 08:16:26.222909 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0"
Jan 29 08:16:26 crc kubenswrapper[5017]: I0129 08:16:26.603366 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"]
Jan 29 08:16:26 crc kubenswrapper[5017]: I0129 08:16:26.634125 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"]
Jan 29 08:16:26 crc kubenswrapper[5017]: W0129 08:16:26.702374 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod44794862_f9c4_403b_bc6e_f3dd9c42cd85.slice/crio-dbdda98008ea7f3824a77219aa6ad7c156929fbcfd3e7afa6df54efaf758d137 WatchSource:0}: Error finding container dbdda98008ea7f3824a77219aa6ad7c156929fbcfd3e7afa6df54efaf758d137: Status 404 returned error can't find the container with id dbdda98008ea7f3824a77219aa6ad7c156929fbcfd3e7afa6df54efaf758d137
Jan 29 08:16:26 crc kubenswrapper[5017]: I0129 08:16:26.703915 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bdbdd675-6qdkz"]
Jan 29 08:16:26 crc kubenswrapper[5017]: I0129 08:16:26.964821 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"]
Jan 29 08:16:26 crc kubenswrapper[5017]: W0129 08:16:26.971633 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3a429f99_e77e_4a19_9293_1fcb5f49aa80.slice/crio-6303997badf3d5104e9f0fc01b1a5566a5b67a8a6e8668ffa2b5e008c306fdef WatchSource:0}: Error finding container 6303997badf3d5104e9f0fc01b1a5566a5b67a8a6e8668ffa2b5e008c306fdef: Status 404 returned error can't find the container with id 6303997badf3d5104e9f0fc01b1a5566a5b67a8a6e8668ffa2b5e008c306fdef
Jan 29 08:16:27 crc kubenswrapper[5017]: I0129 08:16:27.055299 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-9099-account-create-update-xsb2t"]
Jan 29 08:16:27 crc kubenswrapper[5017]: I0129 08:16:27.068696 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-68ed-account-create-update-flqjd"]
Jan 29 08:16:27 crc kubenswrapper[5017]: I0129 08:16:27.080058 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-7qnrn"]
Jan 29 08:16:27 crc kubenswrapper[5017]: I0129 08:16:27.092615 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-0b9e-account-create-update-qccg5"]
Jan 29 08:16:27 crc kubenswrapper[5017]: I0129 08:16:27.113448 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-c9sv7"]
Jan 29 08:16:27 crc kubenswrapper[5017]: I0129 08:16:27.132871 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-v4l25"]
Jan 29 08:16:27 crc kubenswrapper[5017]: I0129 08:16:27.150857 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-9099-account-create-update-xsb2t"]
Jan 29 08:16:27 crc kubenswrapper[5017]: I0129 08:16:27.165522 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-0b9e-account-create-update-qccg5"]
Jan 29 08:16:27 crc kubenswrapper[5017]: I0129 08:16:27.186542 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-68ed-account-create-update-flqjd"]
Jan 29 08:16:27 crc kubenswrapper[5017]: I0129 08:16:27.202989 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-v4l25"]
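The two W-level "Failed to process watch event ... Status 404" warnings above are a benign startup race: the cgroup watcher sees the freshly created crio-<id> cgroup before the runtime has registered the container, and both container IDs (dbdda980..., 6303997b...) duly appear in "ContainerStarted" PLEG events moments later. The cgroup path embeds the pod UID and the container ID, so a small helper along these lines (a log-reading aid written for illustration, not kubelet code) can correlate the warning with those later events:

    package main

    import (
        "fmt"
        "strings"
    )

    // parseScope extracts the pod UID and container ID from a cgroup path of
    // the form .../kubepods-besteffort-pod<uid>.slice/crio-<container-id>.
    func parseScope(path string) (podUID, containerID string) {
        for _, part := range strings.Split(path, "/") {
            if strings.HasPrefix(part, "kubepods-besteffort-pod") {
                uid := strings.TrimSuffix(strings.TrimPrefix(part, "kubepods-besteffort-pod"), ".slice")
                podUID = strings.ReplaceAll(uid, "_", "-")
            }
            if strings.HasPrefix(part, "crio-") {
                containerID = strings.TrimPrefix(part, "crio-")
            }
        }
        return
    }

    func main() {
        // Prints 44794862-f9c4-403b-bc6e-f3dd9c42cd85 and the container ID seen
        // in the later ContainerStarted event for dnsmasq-dns-bdbdd675-6qdkz.
        fmt.Println(parseScope("/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod44794862_f9c4_403b_bc6e_f3dd9c42cd85.slice/crio-dbdda98008ea7f3824a77219aa6ad7c156929fbcfd3e7afa6df54efaf758d137"))
    }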
Jan 29 08:16:27 crc kubenswrapper[5017]: I0129 08:16:27.222183 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-424zb"
Jan 29 08:16:27 crc kubenswrapper[5017]: I0129 08:16:27.222256 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-424zb"
Jan 29 08:16:27 crc kubenswrapper[5017]: I0129 08:16:27.231632 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-7qnrn"]
Jan 29 08:16:27 crc kubenswrapper[5017]: I0129 08:16:27.243613 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-c9sv7"]
Jan 29 08:16:27 crc kubenswrapper[5017]: I0129 08:16:27.422852 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"d1d4579f-eefb-4043-b0a1-e3326d19bd24","Type":"ContainerStarted","Data":"d169073e3c6590139e731ab908e07121ef22cb186a70d52189d89b4562c48b43"}
Jan 29 08:16:27 crc kubenswrapper[5017]: I0129 08:16:27.425506 5017 generic.go:334] "Generic (PLEG): container finished" podID="44794862-f9c4-403b-bc6e-f3dd9c42cd85" containerID="8f5c8f9e02de79670b29a75ae774a9f9a4367b07ed6b22bf4bdb14f7a508623f" exitCode=0
Jan 29 08:16:27 crc kubenswrapper[5017]: I0129 08:16:27.425742 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bdbdd675-6qdkz" event={"ID":"44794862-f9c4-403b-bc6e-f3dd9c42cd85","Type":"ContainerDied","Data":"8f5c8f9e02de79670b29a75ae774a9f9a4367b07ed6b22bf4bdb14f7a508623f"}
Jan 29 08:16:27 crc kubenswrapper[5017]: I0129 08:16:27.425777 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bdbdd675-6qdkz" event={"ID":"44794862-f9c4-403b-bc6e-f3dd9c42cd85","Type":"ContainerStarted","Data":"dbdda98008ea7f3824a77219aa6ad7c156929fbcfd3e7afa6df54efaf758d137"}
Jan 29 08:16:27 crc kubenswrapper[5017]: I0129 08:16:27.428392 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"c25c0058-8e1c-428b-8955-21f70c22b5e5","Type":"ContainerStarted","Data":"ed9aaa66d79a497d772c979b5494f20385f22d452cdeda46d55b06ab4033b89e"}
Jan 29 08:16:27 crc kubenswrapper[5017]: I0129 08:16:27.431934 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"3a429f99-e77e-4a19-9293-1fcb5f49aa80","Type":"ContainerStarted","Data":"6303997badf3d5104e9f0fc01b1a5566a5b67a8a6e8668ffa2b5e008c306fdef"}
Jan 29 08:16:28 crc kubenswrapper[5017]: I0129 08:16:28.312434 5017 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-424zb" podUID="c0ba96a1-7a56-4351-b801-b4c7885e0445" containerName="registry-server" probeResult="failure" output=<
Jan 29 08:16:28 crc kubenswrapper[5017]: timeout: failed to connect service ":50051" within 1s
Jan 29 08:16:28 crc kubenswrapper[5017]: >
Jan 29 08:16:28 crc kubenswrapper[5017]: I0129 08:16:28.340285 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2347b806-7dec-460c-b4bc-aa0d3610d919" path="/var/lib/kubelet/pods/2347b806-7dec-460c-b4bc-aa0d3610d919/volumes"
Jan 29 08:16:28 crc kubenswrapper[5017]: I0129 08:16:28.342151 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36fd9878-0286-43b5-a979-72c8c4a4ef4a" path="/var/lib/kubelet/pods/36fd9878-0286-43b5-a979-72c8c4a4ef4a/volumes"
Jan 29 08:16:28 crc kubenswrapper[5017]: I0129 08:16:28.342906 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51de0e5f-ded1-478a-aeeb-ace024d4e989" path="/var/lib/kubelet/pods/51de0e5f-ded1-478a-aeeb-ace024d4e989/volumes"
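The "Probe failed" block above is the startup probe for the registry-server container in redhat-operators-424zb timing out against its gRPC port; the identical failure recurs at 08:16:38 below, which just means the catalog server was not listening yet. A minimal hand reproduction of the same connect-within-1s check, assuming an illustrative pod IP (a sketch, not the kubelet's prober):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // The pod IP here is an assumption for illustration; substitute the real one.
        const addr = "10.217.0.10:50051"
        conn, err := net.DialTimeout("tcp", addr, 1*time.Second)
        if err != nil {
            // Mirrors the probe output: the port is not accepting connections yet.
            fmt.Printf("timeout: failed to connect service %q within 1s (%v)\n", addr, err)
            return
        }
        defer conn.Close()
        fmt.Println("connected; the startup probe would pass")
    }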
path="/var/lib/kubelet/pods/51de0e5f-ded1-478a-aeeb-ace024d4e989/volumes" Jan 29 08:16:28 crc kubenswrapper[5017]: I0129 08:16:28.343903 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9387069d-a63d-4c3a-8eca-ec58392dbc4f" path="/var/lib/kubelet/pods/9387069d-a63d-4c3a-8eca-ec58392dbc4f/volumes" Jan 29 08:16:28 crc kubenswrapper[5017]: I0129 08:16:28.345372 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf9739d1-fc25-47b0-ab03-fe97aa0f4450" path="/var/lib/kubelet/pods/cf9739d1-fc25-47b0-ab03-fe97aa0f4450/volumes" Jan 29 08:16:28 crc kubenswrapper[5017]: I0129 08:16:28.346072 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6489a54-ddc7-4aec-9a44-1230030f9481" path="/var/lib/kubelet/pods/d6489a54-ddc7-4aec-9a44-1230030f9481/volumes" Jan 29 08:16:28 crc kubenswrapper[5017]: I0129 08:16:28.463286 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"c25c0058-8e1c-428b-8955-21f70c22b5e5","Type":"ContainerStarted","Data":"4040aa2d78ce5a9c213f166bfc52adc9fab03dd8f336c272b5460462326452d5"} Jan 29 08:16:28 crc kubenswrapper[5017]: I0129 08:16:28.473640 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"3a429f99-e77e-4a19-9293-1fcb5f49aa80","Type":"ContainerStarted","Data":"3949c1c357012720d961d1c0a25e45ae4172b883d19f8dfd0ef4b58e71993838"} Jan 29 08:16:28 crc kubenswrapper[5017]: I0129 08:16:28.473708 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"3a429f99-e77e-4a19-9293-1fcb5f49aa80","Type":"ContainerStarted","Data":"8a46575b15d5479c945b3125cf9b7d4fd91a217f168c25158b777642ea02b0d1"} Jan 29 08:16:28 crc kubenswrapper[5017]: I0129 08:16:28.475062 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/manila-api-0" Jan 29 08:16:28 crc kubenswrapper[5017]: I0129 08:16:28.483775 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bdbdd675-6qdkz" event={"ID":"44794862-f9c4-403b-bc6e-f3dd9c42cd85","Type":"ContainerStarted","Data":"a58b1211e06163f8b04c4540430e8ffc16c5b0709805dfb163719aaf3a69ca0a"} Jan 29 08:16:28 crc kubenswrapper[5017]: I0129 08:16:28.484149 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-bdbdd675-6qdkz" Jan 29 08:16:28 crc kubenswrapper[5017]: I0129 08:16:28.524104 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-api-0" podStartSLOduration=3.524072226 podStartE2EDuration="3.524072226s" podCreationTimestamp="2026-01-29 08:16:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:16:28.490805523 +0000 UTC m=+6074.865253143" watchObservedRunningTime="2026-01-29 08:16:28.524072226 +0000 UTC m=+6074.898519846" Jan 29 08:16:28 crc kubenswrapper[5017]: I0129 08:16:28.528217 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-bdbdd675-6qdkz" podStartSLOduration=3.528196705 podStartE2EDuration="3.528196705s" podCreationTimestamp="2026-01-29 08:16:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:16:28.518202744 +0000 UTC m=+6074.892650354" watchObservedRunningTime="2026-01-29 08:16:28.528196705 +0000 UTC m=+6074.902644315" Jan 29 08:16:29 crc kubenswrapper[5017]: I0129 
08:16:29.512883 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"c25c0058-8e1c-428b-8955-21f70c22b5e5","Type":"ContainerStarted","Data":"60d217d2bace9e2d239ac6db57a187c0f27c742fa6f0e67a896b4e6ff5152637"} Jan 29 08:16:29 crc kubenswrapper[5017]: I0129 08:16:29.548034 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-scheduler-0" podStartSLOduration=3.534989051 podStartE2EDuration="4.548009383s" podCreationTimestamp="2026-01-29 08:16:25 +0000 UTC" firstStartedPulling="2026-01-29 08:16:26.623527202 +0000 UTC m=+6072.997974812" lastFinishedPulling="2026-01-29 08:16:27.636547534 +0000 UTC m=+6074.010995144" observedRunningTime="2026-01-29 08:16:29.536495544 +0000 UTC m=+6075.910943154" watchObservedRunningTime="2026-01-29 08:16:29.548009383 +0000 UTC m=+6075.922456993" Jan 29 08:16:32 crc kubenswrapper[5017]: I0129 08:16:32.595599 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 29 08:16:33 crc kubenswrapper[5017]: I0129 08:16:33.024419 5017 scope.go:117] "RemoveContainer" containerID="b5c24ed1eb0847aa7b57d86e669db3e50b586126b0755812de3bf05fbfc9166b" Jan 29 08:16:34 crc kubenswrapper[5017]: I0129 08:16:34.403538 5017 scope.go:117] "RemoveContainer" containerID="4d34f4a24875a6ba96c5b2f047a613fbe8db3198d78ec6bde6d0f867ba063ef5" Jan 29 08:16:34 crc kubenswrapper[5017]: I0129 08:16:34.463613 5017 scope.go:117] "RemoveContainer" containerID="de5885b812adb71f54e4a354a41a99de98154a421b3ca4af7edd43cafa3b610a" Jan 29 08:16:34 crc kubenswrapper[5017]: I0129 08:16:34.610044 5017 scope.go:117] "RemoveContainer" containerID="e158ce8600919676fcf3d9f970a0145bfa2d4d34d1217e6614f74c1436018d7a" Jan 29 08:16:34 crc kubenswrapper[5017]: I0129 08:16:34.709053 5017 scope.go:117] "RemoveContainer" containerID="963278024b402400ecac71127df61e12f1cbb3581f17e9ec8143518a976b103e" Jan 29 08:16:34 crc kubenswrapper[5017]: I0129 08:16:34.749771 5017 scope.go:117] "RemoveContainer" containerID="0afd2fc41daf290e3ee6037a685a4991454a4c5ab6cafb1cd0eafda127f6ecd1" Jan 29 08:16:35 crc kubenswrapper[5017]: I0129 08:16:35.602667 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"d1d4579f-eefb-4043-b0a1-e3326d19bd24","Type":"ContainerStarted","Data":"cb22628f44c244342e78b1f1aeff7a3e3af9ef48e08d7b76f4ed76900a66e2fc"} Jan 29 08:16:35 crc kubenswrapper[5017]: I0129 08:16:35.603138 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"d1d4579f-eefb-4043-b0a1-e3326d19bd24","Type":"ContainerStarted","Data":"1fc6a438256fb44bffd06656047a8c561438ac7240e1f1aa93f36e742f09416c"} Jan 29 08:16:35 crc kubenswrapper[5017]: I0129 08:16:35.630776 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-share-share1-0" podStartSLOduration=2.78119449 podStartE2EDuration="10.630751322s" podCreationTimestamp="2026-01-29 08:16:25 +0000 UTC" firstStartedPulling="2026-01-29 08:16:26.622315933 +0000 UTC m=+6072.996763543" lastFinishedPulling="2026-01-29 08:16:34.471872765 +0000 UTC m=+6080.846320375" observedRunningTime="2026-01-29 08:16:35.61995031 +0000 UTC m=+6081.994397940" watchObservedRunningTime="2026-01-29 08:16:35.630751322 +0000 UTC m=+6082.005198932" Jan 29 08:16:35 crc kubenswrapper[5017]: I0129 08:16:35.758396 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0" Jan 29 08:16:35 crc 
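The "Observed pod startup duration" entries above report two figures: podStartE2EDuration, from podCreationTimestamp to observedRunningTime, and podStartSLOduration, which subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling); for pods that pulled nothing, such as manila-api-0 and the dnsmasq pod, the two coincide. The arithmetic checks out against the entries above, as this small verification shows (illustrative Go, rounded to the millisecond):

    package main

    import (
        "fmt"
        "time"
    )

    // sloDuration reproduces the relationship visible in the log entries:
    // podStartSLOduration = podStartE2EDuration - (lastFinishedPulling - firstStartedPulling).
    func sloDuration(e2e, pull time.Duration) time.Duration { return e2e - pull }

    func main() {
        // manila-scheduler-0: E2E 4.548s, pulling 08:16:26.6235 -> 08:16:27.6365 (~1.013s).
        fmt.Println(sloDuration(4548*time.Millisecond, 1013*time.Millisecond)) // 3.535s, matching 3.534989051
        // manila-share-share1-0: E2E 10.631s, pulling 08:16:26.6223 -> 08:16:34.4719 (~7.850s).
        fmt.Println(sloDuration(10631*time.Millisecond, 7850*time.Millisecond)) // 2.781s, matching 2.78119449
    }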
Jan 29 08:16:35 crc kubenswrapper[5017]: I0129 08:16:35.796323 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-scheduler-0"
Jan 29 08:16:35 crc kubenswrapper[5017]: I0129 08:16:35.894995 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-bdbdd675-6qdkz"
Jan 29 08:16:35 crc kubenswrapper[5017]: I0129 08:16:35.999414 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6575ddf6cf-qhgvn"]
Jan 29 08:16:36 crc kubenswrapper[5017]: I0129 08:16:36.000945 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6575ddf6cf-qhgvn" podUID="9a2bc012-7119-4c7b-b236-e508f10b47c1" containerName="dnsmasq-dns" containerID="cri-o://33971f1d23770547aef8e44e1d4d1e650df2ed634a94345a34d99c429fc96a58" gracePeriod=10
Jan 29 08:16:36 crc kubenswrapper[5017]: I0129 08:16:36.632003 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6575ddf6cf-qhgvn"
Jan 29 08:16:36 crc kubenswrapper[5017]: I0129 08:16:36.633051 5017 generic.go:334] "Generic (PLEG): container finished" podID="9a2bc012-7119-4c7b-b236-e508f10b47c1" containerID="33971f1d23770547aef8e44e1d4d1e650df2ed634a94345a34d99c429fc96a58" exitCode=0
Jan 29 08:16:36 crc kubenswrapper[5017]: I0129 08:16:36.633100 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6575ddf6cf-qhgvn" event={"ID":"9a2bc012-7119-4c7b-b236-e508f10b47c1","Type":"ContainerDied","Data":"33971f1d23770547aef8e44e1d4d1e650df2ed634a94345a34d99c429fc96a58"}
Jan 29 08:16:36 crc kubenswrapper[5017]: I0129 08:16:36.633152 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6575ddf6cf-qhgvn" event={"ID":"9a2bc012-7119-4c7b-b236-e508f10b47c1","Type":"ContainerDied","Data":"4b170ace5d82b750333e634c5a47191494ecf455ef99370743b9da97a60ebd5e"}
Jan 29 08:16:36 crc kubenswrapper[5017]: I0129 08:16:36.633171 5017 scope.go:117] "RemoveContainer" containerID="33971f1d23770547aef8e44e1d4d1e650df2ed634a94345a34d99c429fc96a58"
Jan 29 08:16:36 crc kubenswrapper[5017]: I0129 08:16:36.719904 5017 scope.go:117] "RemoveContainer" containerID="2c6f03ed0fc119c6c701baeee3dfdd5481b8f99b716f6bdedeba6631de12d96b"
Jan 29 08:16:36 crc kubenswrapper[5017]: I0129 08:16:36.727149 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9a2bc012-7119-4c7b-b236-e508f10b47c1-ovsdbserver-sb\") pod \"9a2bc012-7119-4c7b-b236-e508f10b47c1\" (UID: \"9a2bc012-7119-4c7b-b236-e508f10b47c1\") "
Jan 29 08:16:36 crc kubenswrapper[5017]: I0129 08:16:36.727195 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9a2bc012-7119-4c7b-b236-e508f10b47c1-ovsdbserver-nb\") pod \"9a2bc012-7119-4c7b-b236-e508f10b47c1\" (UID: \"9a2bc012-7119-4c7b-b236-e508f10b47c1\") "
Jan 29 08:16:36 crc kubenswrapper[5017]: I0129 08:16:36.727262 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a2bc012-7119-4c7b-b236-e508f10b47c1-config\") pod \"9a2bc012-7119-4c7b-b236-e508f10b47c1\" (UID: \"9a2bc012-7119-4c7b-b236-e508f10b47c1\") "
Jan 29 08:16:36 crc kubenswrapper[5017]: I0129 08:16:36.727686 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wtdxb\" (UniqueName: \"kubernetes.io/projected/9a2bc012-7119-4c7b-b236-e508f10b47c1-kube-api-access-wtdxb\") pod \"9a2bc012-7119-4c7b-b236-e508f10b47c1\" (UID: \"9a2bc012-7119-4c7b-b236-e508f10b47c1\") "
Jan 29 08:16:36 crc kubenswrapper[5017]: I0129 08:16:36.727737 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9a2bc012-7119-4c7b-b236-e508f10b47c1-dns-svc\") pod \"9a2bc012-7119-4c7b-b236-e508f10b47c1\" (UID: \"9a2bc012-7119-4c7b-b236-e508f10b47c1\") "
Jan 29 08:16:36 crc kubenswrapper[5017]: I0129 08:16:36.740700 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a2bc012-7119-4c7b-b236-e508f10b47c1-kube-api-access-wtdxb" (OuterVolumeSpecName: "kube-api-access-wtdxb") pod "9a2bc012-7119-4c7b-b236-e508f10b47c1" (UID: "9a2bc012-7119-4c7b-b236-e508f10b47c1"). InnerVolumeSpecName "kube-api-access-wtdxb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 08:16:36 crc kubenswrapper[5017]: I0129 08:16:36.797195 5017 scope.go:117] "RemoveContainer" containerID="33971f1d23770547aef8e44e1d4d1e650df2ed634a94345a34d99c429fc96a58"
Jan 29 08:16:36 crc kubenswrapper[5017]: E0129 08:16:36.798563 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33971f1d23770547aef8e44e1d4d1e650df2ed634a94345a34d99c429fc96a58\": container with ID starting with 33971f1d23770547aef8e44e1d4d1e650df2ed634a94345a34d99c429fc96a58 not found: ID does not exist" containerID="33971f1d23770547aef8e44e1d4d1e650df2ed634a94345a34d99c429fc96a58"
Jan 29 08:16:36 crc kubenswrapper[5017]: I0129 08:16:36.798613 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33971f1d23770547aef8e44e1d4d1e650df2ed634a94345a34d99c429fc96a58"} err="failed to get container status \"33971f1d23770547aef8e44e1d4d1e650df2ed634a94345a34d99c429fc96a58\": rpc error: code = NotFound desc = could not find container \"33971f1d23770547aef8e44e1d4d1e650df2ed634a94345a34d99c429fc96a58\": container with ID starting with 33971f1d23770547aef8e44e1d4d1e650df2ed634a94345a34d99c429fc96a58 not found: ID does not exist"
Jan 29 08:16:36 crc kubenswrapper[5017]: I0129 08:16:36.798637 5017 scope.go:117] "RemoveContainer" containerID="2c6f03ed0fc119c6c701baeee3dfdd5481b8f99b716f6bdedeba6631de12d96b"
Jan 29 08:16:36 crc kubenswrapper[5017]: E0129 08:16:36.798925 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c6f03ed0fc119c6c701baeee3dfdd5481b8f99b716f6bdedeba6631de12d96b\": container with ID starting with 2c6f03ed0fc119c6c701baeee3dfdd5481b8f99b716f6bdedeba6631de12d96b not found: ID does not exist" containerID="2c6f03ed0fc119c6c701baeee3dfdd5481b8f99b716f6bdedeba6631de12d96b"
Jan 29 08:16:36 crc kubenswrapper[5017]: I0129 08:16:36.798951 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c6f03ed0fc119c6c701baeee3dfdd5481b8f99b716f6bdedeba6631de12d96b"} err="failed to get container status \"2c6f03ed0fc119c6c701baeee3dfdd5481b8f99b716f6bdedeba6631de12d96b\": rpc error: code = NotFound desc = could not find container \"2c6f03ed0fc119c6c701baeee3dfdd5481b8f99b716f6bdedeba6631de12d96b\": container with ID starting with 2c6f03ed0fc119c6c701baeee3dfdd5481b8f99b716f6bdedeba6631de12d96b not found: ID does not exist"
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a2bc012-7119-4c7b-b236-e508f10b47c1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9a2bc012-7119-4c7b-b236-e508f10b47c1" (UID: "9a2bc012-7119-4c7b-b236-e508f10b47c1"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:16:36 crc kubenswrapper[5017]: I0129 08:16:36.831010 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wtdxb\" (UniqueName: \"kubernetes.io/projected/9a2bc012-7119-4c7b-b236-e508f10b47c1-kube-api-access-wtdxb\") on node \"crc\" DevicePath \"\"" Jan 29 08:16:36 crc kubenswrapper[5017]: I0129 08:16:36.831041 5017 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9a2bc012-7119-4c7b-b236-e508f10b47c1-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 08:16:36 crc kubenswrapper[5017]: I0129 08:16:36.909825 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a2bc012-7119-4c7b-b236-e508f10b47c1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9a2bc012-7119-4c7b-b236-e508f10b47c1" (UID: "9a2bc012-7119-4c7b-b236-e508f10b47c1"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:16:36 crc kubenswrapper[5017]: I0129 08:16:36.910645 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a2bc012-7119-4c7b-b236-e508f10b47c1-config" (OuterVolumeSpecName: "config") pod "9a2bc012-7119-4c7b-b236-e508f10b47c1" (UID: "9a2bc012-7119-4c7b-b236-e508f10b47c1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:16:36 crc kubenswrapper[5017]: I0129 08:16:36.933692 5017 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9a2bc012-7119-4c7b-b236-e508f10b47c1-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 08:16:36 crc kubenswrapper[5017]: I0129 08:16:36.934013 5017 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a2bc012-7119-4c7b-b236-e508f10b47c1-config\") on node \"crc\" DevicePath \"\"" Jan 29 08:16:36 crc kubenswrapper[5017]: I0129 08:16:36.958974 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a2bc012-7119-4c7b-b236-e508f10b47c1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9a2bc012-7119-4c7b-b236-e508f10b47c1" (UID: "9a2bc012-7119-4c7b-b236-e508f10b47c1"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:16:37 crc kubenswrapper[5017]: I0129 08:16:37.033233 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-gstsq"] Jan 29 08:16:37 crc kubenswrapper[5017]: I0129 08:16:37.035876 5017 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9a2bc012-7119-4c7b-b236-e508f10b47c1-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 08:16:37 crc kubenswrapper[5017]: I0129 08:16:37.046510 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-gstsq"] Jan 29 08:16:37 crc kubenswrapper[5017]: I0129 08:16:37.646334 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6575ddf6cf-qhgvn" Jan 29 08:16:37 crc kubenswrapper[5017]: I0129 08:16:37.690439 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6575ddf6cf-qhgvn"] Jan 29 08:16:37 crc kubenswrapper[5017]: I0129 08:16:37.706074 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6575ddf6cf-qhgvn"] Jan 29 08:16:38 crc kubenswrapper[5017]: I0129 08:16:38.288503 5017 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-424zb" podUID="c0ba96a1-7a56-4351-b801-b4c7885e0445" containerName="registry-server" probeResult="failure" output=< Jan 29 08:16:38 crc kubenswrapper[5017]: timeout: failed to connect service ":50051" within 1s Jan 29 08:16:38 crc kubenswrapper[5017]: > Jan 29 08:16:38 crc kubenswrapper[5017]: I0129 08:16:38.335401 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4209978a-668e-40a2-80da-5dbd9b790e94" path="/var/lib/kubelet/pods/4209978a-668e-40a2-80da-5dbd9b790e94/volumes" Jan 29 08:16:38 crc kubenswrapper[5017]: I0129 08:16:38.336769 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a2bc012-7119-4c7b-b236-e508f10b47c1" path="/var/lib/kubelet/pods/9a2bc012-7119-4c7b-b236-e508f10b47c1/volumes" Jan 29 08:16:38 crc kubenswrapper[5017]: I0129 08:16:38.788341 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 08:16:38 crc kubenswrapper[5017]: I0129 08:16:38.788704 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="07f0862a-a973-4343-a72a-ebe0f68b68ef" containerName="ceilometer-central-agent" containerID="cri-o://a892477101947df504151c66842518c691071f2f85d045e4b1277e98a87d1412" gracePeriod=30 Jan 29 08:16:38 crc kubenswrapper[5017]: I0129 08:16:38.789276 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="07f0862a-a973-4343-a72a-ebe0f68b68ef" containerName="proxy-httpd" containerID="cri-o://9f3c6ae6defb8d8e0150229446e0749c0ec6196d91be532573a2f2847a5c4508" gracePeriod=30 Jan 29 08:16:38 crc kubenswrapper[5017]: I0129 08:16:38.789332 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="07f0862a-a973-4343-a72a-ebe0f68b68ef" containerName="sg-core" containerID="cri-o://faa594ed063f8ce0d042573f797cca2dd285c819fe14a650f3a6e0b3f9c577c0" gracePeriod=30 Jan 29 08:16:38 crc kubenswrapper[5017]: I0129 08:16:38.789373 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="07f0862a-a973-4343-a72a-ebe0f68b68ef" containerName="ceilometer-notification-agent" containerID="cri-o://5df8560d227e99c9fdfd2e024f2a65784256d65ef93de805a2b53b38a603037d" gracePeriod=30 Jan 29 08:16:39 crc kubenswrapper[5017]: I0129 08:16:39.671084 5017 generic.go:334] "Generic (PLEG): container finished" podID="07f0862a-a973-4343-a72a-ebe0f68b68ef" containerID="9f3c6ae6defb8d8e0150229446e0749c0ec6196d91be532573a2f2847a5c4508" exitCode=0 Jan 29 08:16:39 crc kubenswrapper[5017]: I0129 08:16:39.671530 5017 generic.go:334] "Generic (PLEG): container finished" podID="07f0862a-a973-4343-a72a-ebe0f68b68ef" containerID="faa594ed063f8ce0d042573f797cca2dd285c819fe14a650f3a6e0b3f9c577c0" exitCode=2 Jan 29 08:16:39 crc kubenswrapper[5017]: I0129 08:16:39.671548 5017 generic.go:334] "Generic (PLEG): container finished" podID="07f0862a-a973-4343-a72a-ebe0f68b68ef" 
containerID="a892477101947df504151c66842518c691071f2f85d045e4b1277e98a87d1412" exitCode=0 Jan 29 08:16:39 crc kubenswrapper[5017]: I0129 08:16:39.671301 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"07f0862a-a973-4343-a72a-ebe0f68b68ef","Type":"ContainerDied","Data":"9f3c6ae6defb8d8e0150229446e0749c0ec6196d91be532573a2f2847a5c4508"} Jan 29 08:16:39 crc kubenswrapper[5017]: I0129 08:16:39.671593 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"07f0862a-a973-4343-a72a-ebe0f68b68ef","Type":"ContainerDied","Data":"faa594ed063f8ce0d042573f797cca2dd285c819fe14a650f3a6e0b3f9c577c0"} Jan 29 08:16:39 crc kubenswrapper[5017]: I0129 08:16:39.671608 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"07f0862a-a973-4343-a72a-ebe0f68b68ef","Type":"ContainerDied","Data":"a892477101947df504151c66842518c691071f2f85d045e4b1277e98a87d1412"} Jan 29 08:16:42 crc kubenswrapper[5017]: I0129 08:16:42.689907 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 08:16:42 crc kubenswrapper[5017]: I0129 08:16:42.708613 5017 generic.go:334] "Generic (PLEG): container finished" podID="07f0862a-a973-4343-a72a-ebe0f68b68ef" containerID="5df8560d227e99c9fdfd2e024f2a65784256d65ef93de805a2b53b38a603037d" exitCode=0 Jan 29 08:16:42 crc kubenswrapper[5017]: I0129 08:16:42.708669 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"07f0862a-a973-4343-a72a-ebe0f68b68ef","Type":"ContainerDied","Data":"5df8560d227e99c9fdfd2e024f2a65784256d65ef93de805a2b53b38a603037d"} Jan 29 08:16:42 crc kubenswrapper[5017]: I0129 08:16:42.708703 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"07f0862a-a973-4343-a72a-ebe0f68b68ef","Type":"ContainerDied","Data":"697446214fc677e9404e5cfec3d0c11cd9b172028b5e28fe9ebeb7da6665fae8"} Jan 29 08:16:42 crc kubenswrapper[5017]: I0129 08:16:42.708728 5017 scope.go:117] "RemoveContainer" containerID="9f3c6ae6defb8d8e0150229446e0749c0ec6196d91be532573a2f2847a5c4508" Jan 29 08:16:42 crc kubenswrapper[5017]: I0129 08:16:42.708898 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 08:16:42 crc kubenswrapper[5017]: I0129 08:16:42.744137 5017 scope.go:117] "RemoveContainer" containerID="faa594ed063f8ce0d042573f797cca2dd285c819fe14a650f3a6e0b3f9c577c0" Jan 29 08:16:42 crc kubenswrapper[5017]: I0129 08:16:42.775735 5017 scope.go:117] "RemoveContainer" containerID="5df8560d227e99c9fdfd2e024f2a65784256d65ef93de805a2b53b38a603037d" Jan 29 08:16:42 crc kubenswrapper[5017]: I0129 08:16:42.788496 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ncvlg\" (UniqueName: \"kubernetes.io/projected/07f0862a-a973-4343-a72a-ebe0f68b68ef-kube-api-access-ncvlg\") pod \"07f0862a-a973-4343-a72a-ebe0f68b68ef\" (UID: \"07f0862a-a973-4343-a72a-ebe0f68b68ef\") " Jan 29 08:16:42 crc kubenswrapper[5017]: I0129 08:16:42.788540 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07f0862a-a973-4343-a72a-ebe0f68b68ef-config-data\") pod \"07f0862a-a973-4343-a72a-ebe0f68b68ef\" (UID: \"07f0862a-a973-4343-a72a-ebe0f68b68ef\") " Jan 29 08:16:42 crc kubenswrapper[5017]: I0129 08:16:42.788714 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/07f0862a-a973-4343-a72a-ebe0f68b68ef-sg-core-conf-yaml\") pod \"07f0862a-a973-4343-a72a-ebe0f68b68ef\" (UID: \"07f0862a-a973-4343-a72a-ebe0f68b68ef\") " Jan 29 08:16:42 crc kubenswrapper[5017]: I0129 08:16:42.788834 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/07f0862a-a973-4343-a72a-ebe0f68b68ef-log-httpd\") pod \"07f0862a-a973-4343-a72a-ebe0f68b68ef\" (UID: \"07f0862a-a973-4343-a72a-ebe0f68b68ef\") " Jan 29 08:16:42 crc kubenswrapper[5017]: I0129 08:16:42.788873 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07f0862a-a973-4343-a72a-ebe0f68b68ef-scripts\") pod \"07f0862a-a973-4343-a72a-ebe0f68b68ef\" (UID: \"07f0862a-a973-4343-a72a-ebe0f68b68ef\") " Jan 29 08:16:42 crc kubenswrapper[5017]: I0129 08:16:42.788919 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07f0862a-a973-4343-a72a-ebe0f68b68ef-combined-ca-bundle\") pod \"07f0862a-a973-4343-a72a-ebe0f68b68ef\" (UID: \"07f0862a-a973-4343-a72a-ebe0f68b68ef\") " Jan 29 08:16:42 crc kubenswrapper[5017]: I0129 08:16:42.789011 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/07f0862a-a973-4343-a72a-ebe0f68b68ef-run-httpd\") pod \"07f0862a-a973-4343-a72a-ebe0f68b68ef\" (UID: \"07f0862a-a973-4343-a72a-ebe0f68b68ef\") " Jan 29 08:16:42 crc kubenswrapper[5017]: I0129 08:16:42.790295 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07f0862a-a973-4343-a72a-ebe0f68b68ef-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "07f0862a-a973-4343-a72a-ebe0f68b68ef" (UID: "07f0862a-a973-4343-a72a-ebe0f68b68ef"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:16:42 crc kubenswrapper[5017]: I0129 08:16:42.790557 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07f0862a-a973-4343-a72a-ebe0f68b68ef-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "07f0862a-a973-4343-a72a-ebe0f68b68ef" (UID: "07f0862a-a973-4343-a72a-ebe0f68b68ef"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:16:42 crc kubenswrapper[5017]: I0129 08:16:42.801120 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07f0862a-a973-4343-a72a-ebe0f68b68ef-scripts" (OuterVolumeSpecName: "scripts") pod "07f0862a-a973-4343-a72a-ebe0f68b68ef" (UID: "07f0862a-a973-4343-a72a-ebe0f68b68ef"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:16:42 crc kubenswrapper[5017]: I0129 08:16:42.821104 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07f0862a-a973-4343-a72a-ebe0f68b68ef-kube-api-access-ncvlg" (OuterVolumeSpecName: "kube-api-access-ncvlg") pod "07f0862a-a973-4343-a72a-ebe0f68b68ef" (UID: "07f0862a-a973-4343-a72a-ebe0f68b68ef"). InnerVolumeSpecName "kube-api-access-ncvlg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:16:42 crc kubenswrapper[5017]: I0129 08:16:42.840666 5017 scope.go:117] "RemoveContainer" containerID="a892477101947df504151c66842518c691071f2f85d045e4b1277e98a87d1412" Jan 29 08:16:42 crc kubenswrapper[5017]: I0129 08:16:42.858663 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07f0862a-a973-4343-a72a-ebe0f68b68ef-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "07f0862a-a973-4343-a72a-ebe0f68b68ef" (UID: "07f0862a-a973-4343-a72a-ebe0f68b68ef"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:16:42 crc kubenswrapper[5017]: I0129 08:16:42.892087 5017 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/07f0862a-a973-4343-a72a-ebe0f68b68ef-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 08:16:42 crc kubenswrapper[5017]: I0129 08:16:42.892131 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ncvlg\" (UniqueName: \"kubernetes.io/projected/07f0862a-a973-4343-a72a-ebe0f68b68ef-kube-api-access-ncvlg\") on node \"crc\" DevicePath \"\"" Jan 29 08:16:42 crc kubenswrapper[5017]: I0129 08:16:42.892147 5017 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/07f0862a-a973-4343-a72a-ebe0f68b68ef-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 29 08:16:42 crc kubenswrapper[5017]: I0129 08:16:42.892156 5017 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/07f0862a-a973-4343-a72a-ebe0f68b68ef-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 08:16:42 crc kubenswrapper[5017]: I0129 08:16:42.892165 5017 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07f0862a-a973-4343-a72a-ebe0f68b68ef-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 08:16:42 crc kubenswrapper[5017]: I0129 08:16:42.924408 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07f0862a-a973-4343-a72a-ebe0f68b68ef-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "07f0862a-a973-4343-a72a-ebe0f68b68ef" (UID: "07f0862a-a973-4343-a72a-ebe0f68b68ef"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:16:42 crc kubenswrapper[5017]: I0129 08:16:42.938133 5017 scope.go:117] "RemoveContainer" containerID="9f3c6ae6defb8d8e0150229446e0749c0ec6196d91be532573a2f2847a5c4508" Jan 29 08:16:42 crc kubenswrapper[5017]: E0129 08:16:42.938868 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f3c6ae6defb8d8e0150229446e0749c0ec6196d91be532573a2f2847a5c4508\": container with ID starting with 9f3c6ae6defb8d8e0150229446e0749c0ec6196d91be532573a2f2847a5c4508 not found: ID does not exist" containerID="9f3c6ae6defb8d8e0150229446e0749c0ec6196d91be532573a2f2847a5c4508" Jan 29 08:16:42 crc kubenswrapper[5017]: I0129 08:16:42.938899 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f3c6ae6defb8d8e0150229446e0749c0ec6196d91be532573a2f2847a5c4508"} err="failed to get container status \"9f3c6ae6defb8d8e0150229446e0749c0ec6196d91be532573a2f2847a5c4508\": rpc error: code = NotFound desc = could not find container \"9f3c6ae6defb8d8e0150229446e0749c0ec6196d91be532573a2f2847a5c4508\": container with ID starting with 9f3c6ae6defb8d8e0150229446e0749c0ec6196d91be532573a2f2847a5c4508 not found: ID does not exist" Jan 29 08:16:42 crc kubenswrapper[5017]: I0129 08:16:42.938927 5017 scope.go:117] "RemoveContainer" containerID="faa594ed063f8ce0d042573f797cca2dd285c819fe14a650f3a6e0b3f9c577c0" Jan 29 08:16:42 crc kubenswrapper[5017]: E0129 08:16:42.939914 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"faa594ed063f8ce0d042573f797cca2dd285c819fe14a650f3a6e0b3f9c577c0\": container with ID starting with 
Jan 29 08:16:42 crc kubenswrapper[5017]: I0129 08:16:42.939947 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"faa594ed063f8ce0d042573f797cca2dd285c819fe14a650f3a6e0b3f9c577c0"} err="failed to get container status \"faa594ed063f8ce0d042573f797cca2dd285c819fe14a650f3a6e0b3f9c577c0\": rpc error: code = NotFound desc = could not find container \"faa594ed063f8ce0d042573f797cca2dd285c819fe14a650f3a6e0b3f9c577c0\": container with ID starting with faa594ed063f8ce0d042573f797cca2dd285c819fe14a650f3a6e0b3f9c577c0 not found: ID does not exist"
Jan 29 08:16:42 crc kubenswrapper[5017]: I0129 08:16:42.939991 5017 scope.go:117] "RemoveContainer" containerID="5df8560d227e99c9fdfd2e024f2a65784256d65ef93de805a2b53b38a603037d"
Jan 29 08:16:42 crc kubenswrapper[5017]: E0129 08:16:42.940459 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5df8560d227e99c9fdfd2e024f2a65784256d65ef93de805a2b53b38a603037d\": container with ID starting with 5df8560d227e99c9fdfd2e024f2a65784256d65ef93de805a2b53b38a603037d not found: ID does not exist" containerID="5df8560d227e99c9fdfd2e024f2a65784256d65ef93de805a2b53b38a603037d"
Jan 29 08:16:42 crc kubenswrapper[5017]: I0129 08:16:42.940562 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5df8560d227e99c9fdfd2e024f2a65784256d65ef93de805a2b53b38a603037d"} err="failed to get container status \"5df8560d227e99c9fdfd2e024f2a65784256d65ef93de805a2b53b38a603037d\": rpc error: code = NotFound desc = could not find container \"5df8560d227e99c9fdfd2e024f2a65784256d65ef93de805a2b53b38a603037d\": container with ID starting with 5df8560d227e99c9fdfd2e024f2a65784256d65ef93de805a2b53b38a603037d not found: ID does not exist"
Jan 29 08:16:42 crc kubenswrapper[5017]: I0129 08:16:42.940649 5017 scope.go:117] "RemoveContainer" containerID="a892477101947df504151c66842518c691071f2f85d045e4b1277e98a87d1412"
Jan 29 08:16:42 crc kubenswrapper[5017]: I0129 08:16:42.942025 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07f0862a-a973-4343-a72a-ebe0f68b68ef-config-data" (OuterVolumeSpecName: "config-data") pod "07f0862a-a973-4343-a72a-ebe0f68b68ef" (UID: "07f0862a-a973-4343-a72a-ebe0f68b68ef"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 08:16:42 crc kubenswrapper[5017]: E0129 08:16:42.942117 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a892477101947df504151c66842518c691071f2f85d045e4b1277e98a87d1412\": container with ID starting with a892477101947df504151c66842518c691071f2f85d045e4b1277e98a87d1412 not found: ID does not exist" containerID="a892477101947df504151c66842518c691071f2f85d045e4b1277e98a87d1412"
Jan 29 08:16:42 crc kubenswrapper[5017]: I0129 08:16:42.942203 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a892477101947df504151c66842518c691071f2f85d045e4b1277e98a87d1412"} err="failed to get container status \"a892477101947df504151c66842518c691071f2f85d045e4b1277e98a87d1412\": rpc error: code = NotFound desc = could not find container \"a892477101947df504151c66842518c691071f2f85d045e4b1277e98a87d1412\": container with ID starting with a892477101947df504151c66842518c691071f2f85d045e4b1277e98a87d1412 not found: ID does not exist"
Jan 29 08:16:42 crc kubenswrapper[5017]: I0129 08:16:42.994368 5017 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07f0862a-a973-4343-a72a-ebe0f68b68ef-config-data\") on node \"crc\" DevicePath \"\""
Jan 29 08:16:42 crc kubenswrapper[5017]: I0129 08:16:42.994555 5017 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07f0862a-a973-4343-a72a-ebe0f68b68ef-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 08:16:43 crc kubenswrapper[5017]: I0129 08:16:43.055039 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 29 08:16:43 crc kubenswrapper[5017]: I0129 08:16:43.072317 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 29 08:16:43 crc kubenswrapper[5017]: I0129 08:16:43.083738 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 29 08:16:43 crc kubenswrapper[5017]: E0129 08:16:43.084288 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07f0862a-a973-4343-a72a-ebe0f68b68ef" containerName="ceilometer-central-agent"
Jan 29 08:16:43 crc kubenswrapper[5017]: I0129 08:16:43.084311 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="07f0862a-a973-4343-a72a-ebe0f68b68ef" containerName="ceilometer-central-agent"
Jan 29 08:16:43 crc kubenswrapper[5017]: E0129 08:16:43.084335 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07f0862a-a973-4343-a72a-ebe0f68b68ef" containerName="proxy-httpd"
Jan 29 08:16:43 crc kubenswrapper[5017]: I0129 08:16:43.084343 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="07f0862a-a973-4343-a72a-ebe0f68b68ef" containerName="proxy-httpd"
Jan 29 08:16:43 crc kubenswrapper[5017]: E0129 08:16:43.084353 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a2bc012-7119-4c7b-b236-e508f10b47c1" containerName="dnsmasq-dns"
Jan 29 08:16:43 crc kubenswrapper[5017]: I0129 08:16:43.084362 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a2bc012-7119-4c7b-b236-e508f10b47c1" containerName="dnsmasq-dns"
Jan 29 08:16:43 crc kubenswrapper[5017]: E0129 08:16:43.084373 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07f0862a-a973-4343-a72a-ebe0f68b68ef" containerName="sg-core"
Jan 29 08:16:43 crc kubenswrapper[5017]: I0129 08:16:43.084381 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="07f0862a-a973-4343-a72a-ebe0f68b68ef" containerName="sg-core"
state_mem.go:107] "Deleted CPUSet assignment" podUID="07f0862a-a973-4343-a72a-ebe0f68b68ef" containerName="sg-core" Jan 29 08:16:43 crc kubenswrapper[5017]: E0129 08:16:43.084416 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07f0862a-a973-4343-a72a-ebe0f68b68ef" containerName="ceilometer-notification-agent" Jan 29 08:16:43 crc kubenswrapper[5017]: I0129 08:16:43.084423 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="07f0862a-a973-4343-a72a-ebe0f68b68ef" containerName="ceilometer-notification-agent" Jan 29 08:16:43 crc kubenswrapper[5017]: E0129 08:16:43.084434 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a2bc012-7119-4c7b-b236-e508f10b47c1" containerName="init" Jan 29 08:16:43 crc kubenswrapper[5017]: I0129 08:16:43.084439 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a2bc012-7119-4c7b-b236-e508f10b47c1" containerName="init" Jan 29 08:16:43 crc kubenswrapper[5017]: I0129 08:16:43.084634 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="07f0862a-a973-4343-a72a-ebe0f68b68ef" containerName="proxy-httpd" Jan 29 08:16:43 crc kubenswrapper[5017]: I0129 08:16:43.084650 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="07f0862a-a973-4343-a72a-ebe0f68b68ef" containerName="sg-core" Jan 29 08:16:43 crc kubenswrapper[5017]: I0129 08:16:43.084665 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a2bc012-7119-4c7b-b236-e508f10b47c1" containerName="dnsmasq-dns" Jan 29 08:16:43 crc kubenswrapper[5017]: I0129 08:16:43.084673 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="07f0862a-a973-4343-a72a-ebe0f68b68ef" containerName="ceilometer-notification-agent" Jan 29 08:16:43 crc kubenswrapper[5017]: I0129 08:16:43.084686 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="07f0862a-a973-4343-a72a-ebe0f68b68ef" containerName="ceilometer-central-agent" Jan 29 08:16:43 crc kubenswrapper[5017]: I0129 08:16:43.087054 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 08:16:43 crc kubenswrapper[5017]: I0129 08:16:43.090220 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 29 08:16:43 crc kubenswrapper[5017]: I0129 08:16:43.090543 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 29 08:16:43 crc kubenswrapper[5017]: I0129 08:16:43.099879 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 08:16:43 crc kubenswrapper[5017]: I0129 08:16:43.198666 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/589870c6-6dab-437b-8fa5-9bbc3106a94d-config-data\") pod \"ceilometer-0\" (UID: \"589870c6-6dab-437b-8fa5-9bbc3106a94d\") " pod="openstack/ceilometer-0" Jan 29 08:16:43 crc kubenswrapper[5017]: I0129 08:16:43.198774 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/589870c6-6dab-437b-8fa5-9bbc3106a94d-scripts\") pod \"ceilometer-0\" (UID: \"589870c6-6dab-437b-8fa5-9bbc3106a94d\") " pod="openstack/ceilometer-0" Jan 29 08:16:43 crc kubenswrapper[5017]: I0129 08:16:43.199441 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/589870c6-6dab-437b-8fa5-9bbc3106a94d-log-httpd\") pod \"ceilometer-0\" (UID: \"589870c6-6dab-437b-8fa5-9bbc3106a94d\") " pod="openstack/ceilometer-0" Jan 29 08:16:43 crc kubenswrapper[5017]: I0129 08:16:43.199537 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/589870c6-6dab-437b-8fa5-9bbc3106a94d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"589870c6-6dab-437b-8fa5-9bbc3106a94d\") " pod="openstack/ceilometer-0" Jan 29 08:16:43 crc kubenswrapper[5017]: I0129 08:16:43.199637 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/589870c6-6dab-437b-8fa5-9bbc3106a94d-run-httpd\") pod \"ceilometer-0\" (UID: \"589870c6-6dab-437b-8fa5-9bbc3106a94d\") " pod="openstack/ceilometer-0" Jan 29 08:16:43 crc kubenswrapper[5017]: I0129 08:16:43.199680 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/589870c6-6dab-437b-8fa5-9bbc3106a94d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"589870c6-6dab-437b-8fa5-9bbc3106a94d\") " pod="openstack/ceilometer-0" Jan 29 08:16:43 crc kubenswrapper[5017]: I0129 08:16:43.199733 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69vhj\" (UniqueName: \"kubernetes.io/projected/589870c6-6dab-437b-8fa5-9bbc3106a94d-kube-api-access-69vhj\") pod \"ceilometer-0\" (UID: \"589870c6-6dab-437b-8fa5-9bbc3106a94d\") " pod="openstack/ceilometer-0" Jan 29 08:16:43 crc kubenswrapper[5017]: I0129 08:16:43.302178 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/589870c6-6dab-437b-8fa5-9bbc3106a94d-log-httpd\") pod \"ceilometer-0\" (UID: \"589870c6-6dab-437b-8fa5-9bbc3106a94d\") " pod="openstack/ceilometer-0" Jan 29 08:16:43 crc kubenswrapper[5017]: I0129 08:16:43.302724 5017 
Jan 29 08:16:43 crc kubenswrapper[5017]: I0129 08:16:43.302661 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/589870c6-6dab-437b-8fa5-9bbc3106a94d-log-httpd\") pod \"ceilometer-0\" (UID: \"589870c6-6dab-437b-8fa5-9bbc3106a94d\") " pod="openstack/ceilometer-0"
Jan 29 08:16:43 crc kubenswrapper[5017]: I0129 08:16:43.303398 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/589870c6-6dab-437b-8fa5-9bbc3106a94d-run-httpd\") pod \"ceilometer-0\" (UID: \"589870c6-6dab-437b-8fa5-9bbc3106a94d\") " pod="openstack/ceilometer-0"
Jan 29 08:16:43 crc kubenswrapper[5017]: I0129 08:16:43.303492 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/589870c6-6dab-437b-8fa5-9bbc3106a94d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"589870c6-6dab-437b-8fa5-9bbc3106a94d\") " pod="openstack/ceilometer-0"
Jan 29 08:16:43 crc kubenswrapper[5017]: I0129 08:16:43.303535 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69vhj\" (UniqueName: \"kubernetes.io/projected/589870c6-6dab-437b-8fa5-9bbc3106a94d-kube-api-access-69vhj\") pod \"ceilometer-0\" (UID: \"589870c6-6dab-437b-8fa5-9bbc3106a94d\") " pod="openstack/ceilometer-0"
Jan 29 08:16:43 crc kubenswrapper[5017]: I0129 08:16:43.303560 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/589870c6-6dab-437b-8fa5-9bbc3106a94d-config-data\") pod \"ceilometer-0\" (UID: \"589870c6-6dab-437b-8fa5-9bbc3106a94d\") " pod="openstack/ceilometer-0"
Jan 29 08:16:43 crc kubenswrapper[5017]: I0129 08:16:43.303603 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/589870c6-6dab-437b-8fa5-9bbc3106a94d-scripts\") pod \"ceilometer-0\" (UID: \"589870c6-6dab-437b-8fa5-9bbc3106a94d\") " pod="openstack/ceilometer-0"
Jan 29 08:16:43 crc kubenswrapper[5017]: I0129 08:16:43.303673 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/589870c6-6dab-437b-8fa5-9bbc3106a94d-run-httpd\") pod \"ceilometer-0\" (UID: \"589870c6-6dab-437b-8fa5-9bbc3106a94d\") " pod="openstack/ceilometer-0"
Jan 29 08:16:43 crc kubenswrapper[5017]: I0129 08:16:43.306548 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/589870c6-6dab-437b-8fa5-9bbc3106a94d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"589870c6-6dab-437b-8fa5-9bbc3106a94d\") " pod="openstack/ceilometer-0"
Jan 29 08:16:43 crc kubenswrapper[5017]: I0129 08:16:43.307500 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/589870c6-6dab-437b-8fa5-9bbc3106a94d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"589870c6-6dab-437b-8fa5-9bbc3106a94d\") " pod="openstack/ceilometer-0"
Jan 29 08:16:43 crc kubenswrapper[5017]: I0129 08:16:43.310797 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/589870c6-6dab-437b-8fa5-9bbc3106a94d-config-data\") pod \"ceilometer-0\" (UID: \"589870c6-6dab-437b-8fa5-9bbc3106a94d\") " pod="openstack/ceilometer-0"
Jan 29 08:16:43 crc kubenswrapper[5017]: I0129 08:16:43.313405 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/589870c6-6dab-437b-8fa5-9bbc3106a94d-scripts\") pod \"ceilometer-0\" (UID: \"589870c6-6dab-437b-8fa5-9bbc3106a94d\") " pod="openstack/ceilometer-0"
Jan 29 08:16:43 crc kubenswrapper[5017]: I0129 08:16:43.324540 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69vhj\" (UniqueName: \"kubernetes.io/projected/589870c6-6dab-437b-8fa5-9bbc3106a94d-kube-api-access-69vhj\") pod \"ceilometer-0\" (UID: \"589870c6-6dab-437b-8fa5-9bbc3106a94d\") " pod="openstack/ceilometer-0"
Jan 29 08:16:43 crc kubenswrapper[5017]: I0129 08:16:43.464341 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 29 08:16:44 crc kubenswrapper[5017]: I0129 08:16:44.030584 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 29 08:16:44 crc kubenswrapper[5017]: I0129 08:16:44.331325 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07f0862a-a973-4343-a72a-ebe0f68b68ef" path="/var/lib/kubelet/pods/07f0862a-a973-4343-a72a-ebe0f68b68ef/volumes"
Jan 29 08:16:44 crc kubenswrapper[5017]: I0129 08:16:44.775259 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"589870c6-6dab-437b-8fa5-9bbc3106a94d","Type":"ContainerStarted","Data":"74499e1bdacf412e70c480a9338525bb681ce246e74950792706586ff8e657fe"}
Jan 29 08:16:45 crc kubenswrapper[5017]: I0129 08:16:45.801654 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"589870c6-6dab-437b-8fa5-9bbc3106a94d","Type":"ContainerStarted","Data":"7be4a0504a1fd3d242a540d80cde70c98e4ae1f21750be58d3a522b782670f36"}
Jan 29 08:16:46 crc kubenswrapper[5017]: I0129 08:16:46.816145 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"589870c6-6dab-437b-8fa5-9bbc3106a94d","Type":"ContainerStarted","Data":"5aa1495ee35ab32c02cd26fc608498de34c8a3f1619a8c3236fb1d946a6b4bdd"}
Jan 29 08:16:46 crc kubenswrapper[5017]: I0129 08:16:46.817142 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"589870c6-6dab-437b-8fa5-9bbc3106a94d","Type":"ContainerStarted","Data":"07c19f380382edeaf6e1353a159998ef7b3f1598beee04454f0db7684e9ea39b"}
Jan 29 08:16:47 crc kubenswrapper[5017]: I0129 08:16:47.291943 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-424zb"
Jan 29 08:16:47 crc kubenswrapper[5017]: I0129 08:16:47.376204 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-424zb"
Jan 29 08:16:47 crc kubenswrapper[5017]: I0129 08:16:47.535876 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-424zb"]
Jan 29 08:16:47 crc kubenswrapper[5017]: I0129 08:16:47.704988 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-share-share1-0"
Jan 29 08:16:48 crc kubenswrapper[5017]: I0129 08:16:48.061358 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-scheduler-0"
status="started" pod="openstack/manila-scheduler-0" Jan 29 08:16:48 crc kubenswrapper[5017]: I0129 08:16:48.117964 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/manila-api-0" Jan 29 08:16:48 crc kubenswrapper[5017]: I0129 08:16:48.840887 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-424zb" podUID="c0ba96a1-7a56-4351-b801-b4c7885e0445" containerName="registry-server" containerID="cri-o://750291f05b236c614e498bc9ee282dcb1dced4feb0a65ad23f1895d92289231f" gracePeriod=2 Jan 29 08:16:49 crc kubenswrapper[5017]: I0129 08:16:49.402690 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-424zb" Jan 29 08:16:49 crc kubenswrapper[5017]: I0129 08:16:49.474010 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0ba96a1-7a56-4351-b801-b4c7885e0445-utilities\") pod \"c0ba96a1-7a56-4351-b801-b4c7885e0445\" (UID: \"c0ba96a1-7a56-4351-b801-b4c7885e0445\") " Jan 29 08:16:49 crc kubenswrapper[5017]: I0129 08:16:49.474441 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sjkt2\" (UniqueName: \"kubernetes.io/projected/c0ba96a1-7a56-4351-b801-b4c7885e0445-kube-api-access-sjkt2\") pod \"c0ba96a1-7a56-4351-b801-b4c7885e0445\" (UID: \"c0ba96a1-7a56-4351-b801-b4c7885e0445\") " Jan 29 08:16:49 crc kubenswrapper[5017]: I0129 08:16:49.474507 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0ba96a1-7a56-4351-b801-b4c7885e0445-catalog-content\") pod \"c0ba96a1-7a56-4351-b801-b4c7885e0445\" (UID: \"c0ba96a1-7a56-4351-b801-b4c7885e0445\") " Jan 29 08:16:49 crc kubenswrapper[5017]: I0129 08:16:49.475498 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0ba96a1-7a56-4351-b801-b4c7885e0445-utilities" (OuterVolumeSpecName: "utilities") pod "c0ba96a1-7a56-4351-b801-b4c7885e0445" (UID: "c0ba96a1-7a56-4351-b801-b4c7885e0445"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:16:49 crc kubenswrapper[5017]: I0129 08:16:49.482138 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0ba96a1-7a56-4351-b801-b4c7885e0445-kube-api-access-sjkt2" (OuterVolumeSpecName: "kube-api-access-sjkt2") pod "c0ba96a1-7a56-4351-b801-b4c7885e0445" (UID: "c0ba96a1-7a56-4351-b801-b4c7885e0445"). InnerVolumeSpecName "kube-api-access-sjkt2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:16:49 crc kubenswrapper[5017]: I0129 08:16:49.577445 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sjkt2\" (UniqueName: \"kubernetes.io/projected/c0ba96a1-7a56-4351-b801-b4c7885e0445-kube-api-access-sjkt2\") on node \"crc\" DevicePath \"\"" Jan 29 08:16:49 crc kubenswrapper[5017]: I0129 08:16:49.577477 5017 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0ba96a1-7a56-4351-b801-b4c7885e0445-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 08:16:49 crc kubenswrapper[5017]: I0129 08:16:49.633987 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0ba96a1-7a56-4351-b801-b4c7885e0445-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c0ba96a1-7a56-4351-b801-b4c7885e0445" (UID: "c0ba96a1-7a56-4351-b801-b4c7885e0445"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:16:49 crc kubenswrapper[5017]: I0129 08:16:49.679704 5017 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0ba96a1-7a56-4351-b801-b4c7885e0445-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 08:16:49 crc kubenswrapper[5017]: I0129 08:16:49.855785 5017 generic.go:334] "Generic (PLEG): container finished" podID="c0ba96a1-7a56-4351-b801-b4c7885e0445" containerID="750291f05b236c614e498bc9ee282dcb1dced4feb0a65ad23f1895d92289231f" exitCode=0 Jan 29 08:16:49 crc kubenswrapper[5017]: I0129 08:16:49.855850 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-424zb" event={"ID":"c0ba96a1-7a56-4351-b801-b4c7885e0445","Type":"ContainerDied","Data":"750291f05b236c614e498bc9ee282dcb1dced4feb0a65ad23f1895d92289231f"} Jan 29 08:16:49 crc kubenswrapper[5017]: I0129 08:16:49.855920 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-424zb" event={"ID":"c0ba96a1-7a56-4351-b801-b4c7885e0445","Type":"ContainerDied","Data":"e8b615ce433820523ba677f2cbaedfdcba7e6260cdefdde4ecbf9394361627dd"} Jan 29 08:16:49 crc kubenswrapper[5017]: I0129 08:16:49.855943 5017 scope.go:117] "RemoveContainer" containerID="750291f05b236c614e498bc9ee282dcb1dced4feb0a65ad23f1895d92289231f" Jan 29 08:16:49 crc kubenswrapper[5017]: I0129 08:16:49.856354 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-424zb" Jan 29 08:16:49 crc kubenswrapper[5017]: I0129 08:16:49.859552 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"589870c6-6dab-437b-8fa5-9bbc3106a94d","Type":"ContainerStarted","Data":"c1a3687d218086ab01aeac8b27162f3f08eb9021fb26fdbc512edbc73a53d55b"} Jan 29 08:16:49 crc kubenswrapper[5017]: I0129 08:16:49.859807 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 29 08:16:49 crc kubenswrapper[5017]: I0129 08:16:49.886240 5017 scope.go:117] "RemoveContainer" containerID="44abedb5ce56603b580ba6406d2a1098fa596affd62ca19a2ee10a937cc014d7" Jan 29 08:16:49 crc kubenswrapper[5017]: I0129 08:16:49.902560 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.369628767 podStartE2EDuration="6.902534022s" podCreationTimestamp="2026-01-29 08:16:43 +0000 UTC" firstStartedPulling="2026-01-29 08:16:44.035091267 +0000 UTC m=+6090.409538877" lastFinishedPulling="2026-01-29 08:16:48.567996522 +0000 UTC m=+6094.942444132" observedRunningTime="2026-01-29 08:16:49.890329847 +0000 UTC m=+6096.264777457" watchObservedRunningTime="2026-01-29 08:16:49.902534022 +0000 UTC m=+6096.276981642" Jan 29 08:16:49 crc kubenswrapper[5017]: I0129 08:16:49.930520 5017 scope.go:117] "RemoveContainer" containerID="8b404138aaea5eda11a9f57fcef73f53220e363f933c9f801c28032d0acd2e21" Jan 29 08:16:49 crc kubenswrapper[5017]: I0129 08:16:49.936450 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-424zb"] Jan 29 08:16:49 crc kubenswrapper[5017]: I0129 08:16:49.952197 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-424zb"] Jan 29 08:16:49 crc kubenswrapper[5017]: I0129 08:16:49.984831 5017 scope.go:117] "RemoveContainer" containerID="750291f05b236c614e498bc9ee282dcb1dced4feb0a65ad23f1895d92289231f" Jan 29 08:16:49 crc kubenswrapper[5017]: E0129 08:16:49.992610 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"750291f05b236c614e498bc9ee282dcb1dced4feb0a65ad23f1895d92289231f\": container with ID starting with 750291f05b236c614e498bc9ee282dcb1dced4feb0a65ad23f1895d92289231f not found: ID does not exist" containerID="750291f05b236c614e498bc9ee282dcb1dced4feb0a65ad23f1895d92289231f" Jan 29 08:16:49 crc kubenswrapper[5017]: I0129 08:16:49.992664 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"750291f05b236c614e498bc9ee282dcb1dced4feb0a65ad23f1895d92289231f"} err="failed to get container status \"750291f05b236c614e498bc9ee282dcb1dced4feb0a65ad23f1895d92289231f\": rpc error: code = NotFound desc = could not find container \"750291f05b236c614e498bc9ee282dcb1dced4feb0a65ad23f1895d92289231f\": container with ID starting with 750291f05b236c614e498bc9ee282dcb1dced4feb0a65ad23f1895d92289231f not found: ID does not exist" Jan 29 08:16:49 crc kubenswrapper[5017]: I0129 08:16:49.992699 5017 scope.go:117] "RemoveContainer" containerID="44abedb5ce56603b580ba6406d2a1098fa596affd62ca19a2ee10a937cc014d7" Jan 29 08:16:49 crc kubenswrapper[5017]: E0129 08:16:49.993207 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44abedb5ce56603b580ba6406d2a1098fa596affd62ca19a2ee10a937cc014d7\": container with ID starting with 
44abedb5ce56603b580ba6406d2a1098fa596affd62ca19a2ee10a937cc014d7 not found: ID does not exist" containerID="44abedb5ce56603b580ba6406d2a1098fa596affd62ca19a2ee10a937cc014d7" Jan 29 08:16:49 crc kubenswrapper[5017]: I0129 08:16:49.993250 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44abedb5ce56603b580ba6406d2a1098fa596affd62ca19a2ee10a937cc014d7"} err="failed to get container status \"44abedb5ce56603b580ba6406d2a1098fa596affd62ca19a2ee10a937cc014d7\": rpc error: code = NotFound desc = could not find container \"44abedb5ce56603b580ba6406d2a1098fa596affd62ca19a2ee10a937cc014d7\": container with ID starting with 44abedb5ce56603b580ba6406d2a1098fa596affd62ca19a2ee10a937cc014d7 not found: ID does not exist" Jan 29 08:16:49 crc kubenswrapper[5017]: I0129 08:16:49.993285 5017 scope.go:117] "RemoveContainer" containerID="8b404138aaea5eda11a9f57fcef73f53220e363f933c9f801c28032d0acd2e21" Jan 29 08:16:49 crc kubenswrapper[5017]: E0129 08:16:49.993846 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b404138aaea5eda11a9f57fcef73f53220e363f933c9f801c28032d0acd2e21\": container with ID starting with 8b404138aaea5eda11a9f57fcef73f53220e363f933c9f801c28032d0acd2e21 not found: ID does not exist" containerID="8b404138aaea5eda11a9f57fcef73f53220e363f933c9f801c28032d0acd2e21" Jan 29 08:16:49 crc kubenswrapper[5017]: I0129 08:16:49.993916 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b404138aaea5eda11a9f57fcef73f53220e363f933c9f801c28032d0acd2e21"} err="failed to get container status \"8b404138aaea5eda11a9f57fcef73f53220e363f933c9f801c28032d0acd2e21\": rpc error: code = NotFound desc = could not find container \"8b404138aaea5eda11a9f57fcef73f53220e363f933c9f801c28032d0acd2e21\": container with ID starting with 8b404138aaea5eda11a9f57fcef73f53220e363f933c9f801c28032d0acd2e21 not found: ID does not exist" Jan 29 08:16:50 crc kubenswrapper[5017]: I0129 08:16:50.344703 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0ba96a1-7a56-4351-b801-b4c7885e0445" path="/var/lib/kubelet/pods/c0ba96a1-7a56-4351-b801-b4c7885e0445/volumes" Jan 29 08:16:56 crc kubenswrapper[5017]: I0129 08:16:56.047431 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-fbgqf"] Jan 29 08:16:56 crc kubenswrapper[5017]: I0129 08:16:56.066215 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-fbgqf"] Jan 29 08:16:56 crc kubenswrapper[5017]: I0129 08:16:56.331533 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ec216ce-93c5-4f31-8b8d-c05cfe023664" path="/var/lib/kubelet/pods/4ec216ce-93c5-4f31-8b8d-c05cfe023664/volumes" Jan 29 08:16:58 crc kubenswrapper[5017]: I0129 08:16:58.045172 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-pld5z"] Jan 29 08:16:58 crc kubenswrapper[5017]: I0129 08:16:58.057418 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-pld5z"] Jan 29 08:16:58 crc kubenswrapper[5017]: I0129 08:16:58.332674 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04718292-6393-48d0-9428-127033978d5a" path="/var/lib/kubelet/pods/04718292-6393-48d0-9428-127033978d5a/volumes" Jan 29 08:17:13 crc kubenswrapper[5017]: I0129 08:17:13.042692 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/nova-cell1-cell-mapping-ds8jd"] Jan 29 08:17:13 crc kubenswrapper[5017]: I0129 08:17:13.057566 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-ds8jd"] Jan 29 08:17:13 crc kubenswrapper[5017]: I0129 08:17:13.472429 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 29 08:17:14 crc kubenswrapper[5017]: I0129 08:17:14.333325 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="adf35922-c42a-4ea4-9d61-670469b4512a" path="/var/lib/kubelet/pods/adf35922-c42a-4ea4-9d61-670469b4512a/volumes" Jan 29 08:17:26 crc kubenswrapper[5017]: I0129 08:17:26.539387 5017 patch_prober.go:28] interesting pod/machine-config-daemon-895pl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 08:17:26 crc kubenswrapper[5017]: I0129 08:17:26.540414 5017 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 08:17:33 crc kubenswrapper[5017]: I0129 08:17:33.478439 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-688d97fbb5-hz7fq"] Jan 29 08:17:33 crc kubenswrapper[5017]: E0129 08:17:33.480222 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0ba96a1-7a56-4351-b801-b4c7885e0445" containerName="extract-content" Jan 29 08:17:33 crc kubenswrapper[5017]: I0129 08:17:33.480245 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0ba96a1-7a56-4351-b801-b4c7885e0445" containerName="extract-content" Jan 29 08:17:33 crc kubenswrapper[5017]: E0129 08:17:33.480280 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0ba96a1-7a56-4351-b801-b4c7885e0445" containerName="registry-server" Jan 29 08:17:33 crc kubenswrapper[5017]: I0129 08:17:33.480288 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0ba96a1-7a56-4351-b801-b4c7885e0445" containerName="registry-server" Jan 29 08:17:33 crc kubenswrapper[5017]: E0129 08:17:33.480318 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0ba96a1-7a56-4351-b801-b4c7885e0445" containerName="extract-utilities" Jan 29 08:17:33 crc kubenswrapper[5017]: I0129 08:17:33.480327 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0ba96a1-7a56-4351-b801-b4c7885e0445" containerName="extract-utilities" Jan 29 08:17:33 crc kubenswrapper[5017]: I0129 08:17:33.480612 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0ba96a1-7a56-4351-b801-b4c7885e0445" containerName="registry-server" Jan 29 08:17:33 crc kubenswrapper[5017]: I0129 08:17:33.482532 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-688d97fbb5-hz7fq" Jan 29 08:17:33 crc kubenswrapper[5017]: I0129 08:17:33.490626 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1" Jan 29 08:17:33 crc kubenswrapper[5017]: I0129 08:17:33.522775 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-688d97fbb5-hz7fq"] Jan 29 08:17:33 crc kubenswrapper[5017]: I0129 08:17:33.671119 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/3c1be082-d2ef-47c9-8287-c09eb9995524-openstack-cell1\") pod \"dnsmasq-dns-688d97fbb5-hz7fq\" (UID: \"3c1be082-d2ef-47c9-8287-c09eb9995524\") " pod="openstack/dnsmasq-dns-688d97fbb5-hz7fq" Jan 29 08:17:33 crc kubenswrapper[5017]: I0129 08:17:33.671648 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3c1be082-d2ef-47c9-8287-c09eb9995524-dns-svc\") pod \"dnsmasq-dns-688d97fbb5-hz7fq\" (UID: \"3c1be082-d2ef-47c9-8287-c09eb9995524\") " pod="openstack/dnsmasq-dns-688d97fbb5-hz7fq" Jan 29 08:17:33 crc kubenswrapper[5017]: I0129 08:17:33.671702 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3c1be082-d2ef-47c9-8287-c09eb9995524-ovsdbserver-nb\") pod \"dnsmasq-dns-688d97fbb5-hz7fq\" (UID: \"3c1be082-d2ef-47c9-8287-c09eb9995524\") " pod="openstack/dnsmasq-dns-688d97fbb5-hz7fq" Jan 29 08:17:33 crc kubenswrapper[5017]: I0129 08:17:33.671742 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c1be082-d2ef-47c9-8287-c09eb9995524-config\") pod \"dnsmasq-dns-688d97fbb5-hz7fq\" (UID: \"3c1be082-d2ef-47c9-8287-c09eb9995524\") " pod="openstack/dnsmasq-dns-688d97fbb5-hz7fq" Jan 29 08:17:33 crc kubenswrapper[5017]: I0129 08:17:33.672250 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tpxg\" (UniqueName: \"kubernetes.io/projected/3c1be082-d2ef-47c9-8287-c09eb9995524-kube-api-access-8tpxg\") pod \"dnsmasq-dns-688d97fbb5-hz7fq\" (UID: \"3c1be082-d2ef-47c9-8287-c09eb9995524\") " pod="openstack/dnsmasq-dns-688d97fbb5-hz7fq" Jan 29 08:17:33 crc kubenswrapper[5017]: I0129 08:17:33.672555 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3c1be082-d2ef-47c9-8287-c09eb9995524-ovsdbserver-sb\") pod \"dnsmasq-dns-688d97fbb5-hz7fq\" (UID: \"3c1be082-d2ef-47c9-8287-c09eb9995524\") " pod="openstack/dnsmasq-dns-688d97fbb5-hz7fq" Jan 29 08:17:33 crc kubenswrapper[5017]: I0129 08:17:33.775490 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8tpxg\" (UniqueName: \"kubernetes.io/projected/3c1be082-d2ef-47c9-8287-c09eb9995524-kube-api-access-8tpxg\") pod \"dnsmasq-dns-688d97fbb5-hz7fq\" (UID: \"3c1be082-d2ef-47c9-8287-c09eb9995524\") " pod="openstack/dnsmasq-dns-688d97fbb5-hz7fq" Jan 29 08:17:33 crc kubenswrapper[5017]: I0129 08:17:33.775576 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3c1be082-d2ef-47c9-8287-c09eb9995524-ovsdbserver-sb\") pod \"dnsmasq-dns-688d97fbb5-hz7fq\" (UID: 
\"3c1be082-d2ef-47c9-8287-c09eb9995524\") " pod="openstack/dnsmasq-dns-688d97fbb5-hz7fq" Jan 29 08:17:33 crc kubenswrapper[5017]: I0129 08:17:33.775630 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/3c1be082-d2ef-47c9-8287-c09eb9995524-openstack-cell1\") pod \"dnsmasq-dns-688d97fbb5-hz7fq\" (UID: \"3c1be082-d2ef-47c9-8287-c09eb9995524\") " pod="openstack/dnsmasq-dns-688d97fbb5-hz7fq" Jan 29 08:17:33 crc kubenswrapper[5017]: I0129 08:17:33.775699 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3c1be082-d2ef-47c9-8287-c09eb9995524-dns-svc\") pod \"dnsmasq-dns-688d97fbb5-hz7fq\" (UID: \"3c1be082-d2ef-47c9-8287-c09eb9995524\") " pod="openstack/dnsmasq-dns-688d97fbb5-hz7fq" Jan 29 08:17:33 crc kubenswrapper[5017]: I0129 08:17:33.775749 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3c1be082-d2ef-47c9-8287-c09eb9995524-ovsdbserver-nb\") pod \"dnsmasq-dns-688d97fbb5-hz7fq\" (UID: \"3c1be082-d2ef-47c9-8287-c09eb9995524\") " pod="openstack/dnsmasq-dns-688d97fbb5-hz7fq" Jan 29 08:17:33 crc kubenswrapper[5017]: I0129 08:17:33.775793 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c1be082-d2ef-47c9-8287-c09eb9995524-config\") pod \"dnsmasq-dns-688d97fbb5-hz7fq\" (UID: \"3c1be082-d2ef-47c9-8287-c09eb9995524\") " pod="openstack/dnsmasq-dns-688d97fbb5-hz7fq" Jan 29 08:17:33 crc kubenswrapper[5017]: I0129 08:17:33.777185 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/3c1be082-d2ef-47c9-8287-c09eb9995524-openstack-cell1\") pod \"dnsmasq-dns-688d97fbb5-hz7fq\" (UID: \"3c1be082-d2ef-47c9-8287-c09eb9995524\") " pod="openstack/dnsmasq-dns-688d97fbb5-hz7fq" Jan 29 08:17:33 crc kubenswrapper[5017]: I0129 08:17:33.777654 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3c1be082-d2ef-47c9-8287-c09eb9995524-ovsdbserver-nb\") pod \"dnsmasq-dns-688d97fbb5-hz7fq\" (UID: \"3c1be082-d2ef-47c9-8287-c09eb9995524\") " pod="openstack/dnsmasq-dns-688d97fbb5-hz7fq" Jan 29 08:17:33 crc kubenswrapper[5017]: I0129 08:17:33.777913 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3c1be082-d2ef-47c9-8287-c09eb9995524-ovsdbserver-sb\") pod \"dnsmasq-dns-688d97fbb5-hz7fq\" (UID: \"3c1be082-d2ef-47c9-8287-c09eb9995524\") " pod="openstack/dnsmasq-dns-688d97fbb5-hz7fq" Jan 29 08:17:33 crc kubenswrapper[5017]: I0129 08:17:33.779976 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3c1be082-d2ef-47c9-8287-c09eb9995524-dns-svc\") pod \"dnsmasq-dns-688d97fbb5-hz7fq\" (UID: \"3c1be082-d2ef-47c9-8287-c09eb9995524\") " pod="openstack/dnsmasq-dns-688d97fbb5-hz7fq" Jan 29 08:17:33 crc kubenswrapper[5017]: I0129 08:17:33.781227 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c1be082-d2ef-47c9-8287-c09eb9995524-config\") pod \"dnsmasq-dns-688d97fbb5-hz7fq\" (UID: \"3c1be082-d2ef-47c9-8287-c09eb9995524\") " pod="openstack/dnsmasq-dns-688d97fbb5-hz7fq" Jan 29 08:17:33 crc kubenswrapper[5017]: I0129 
08:17:33.812810 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8tpxg\" (UniqueName: \"kubernetes.io/projected/3c1be082-d2ef-47c9-8287-c09eb9995524-kube-api-access-8tpxg\") pod \"dnsmasq-dns-688d97fbb5-hz7fq\" (UID: \"3c1be082-d2ef-47c9-8287-c09eb9995524\") " pod="openstack/dnsmasq-dns-688d97fbb5-hz7fq" Jan 29 08:17:34 crc kubenswrapper[5017]: I0129 08:17:34.113476 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-688d97fbb5-hz7fq" Jan 29 08:17:34 crc kubenswrapper[5017]: I0129 08:17:34.702783 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-688d97fbb5-hz7fq"] Jan 29 08:17:34 crc kubenswrapper[5017]: I0129 08:17:34.981710 5017 scope.go:117] "RemoveContainer" containerID="0f5621f628677a551cda6250355e03dfb283863c189f11b417fe65d8d99430f4" Jan 29 08:17:35 crc kubenswrapper[5017]: I0129 08:17:35.031921 5017 scope.go:117] "RemoveContainer" containerID="e4258c445c5125c41dc2ccb7d359954856dc2e770320b3a9017df8ea7caaacf4" Jan 29 08:17:35 crc kubenswrapper[5017]: I0129 08:17:35.160707 5017 scope.go:117] "RemoveContainer" containerID="99ae773943c31d04b6d790a18468c00cc81e246935cdbe6dc6c0698fa9f56f20" Jan 29 08:17:35 crc kubenswrapper[5017]: I0129 08:17:35.211561 5017 scope.go:117] "RemoveContainer" containerID="4ed07e67425ba675e32f83ef1f025dc546aeaa3feba3ad2510ccf9a02f16c6c0" Jan 29 08:17:35 crc kubenswrapper[5017]: I0129 08:17:35.255946 5017 scope.go:117] "RemoveContainer" containerID="43fcc94a563999101b63549864c3c447ba0b9c05460eb5b950f5bb3200509006" Jan 29 08:17:35 crc kubenswrapper[5017]: I0129 08:17:35.415265 5017 generic.go:334] "Generic (PLEG): container finished" podID="3c1be082-d2ef-47c9-8287-c09eb9995524" containerID="f256a6ee479ede5524214147dbe0a67ae6d75157b22aa728db83c10ba3952186" exitCode=0 Jan 29 08:17:35 crc kubenswrapper[5017]: I0129 08:17:35.415363 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688d97fbb5-hz7fq" event={"ID":"3c1be082-d2ef-47c9-8287-c09eb9995524","Type":"ContainerDied","Data":"f256a6ee479ede5524214147dbe0a67ae6d75157b22aa728db83c10ba3952186"} Jan 29 08:17:35 crc kubenswrapper[5017]: I0129 08:17:35.415503 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688d97fbb5-hz7fq" event={"ID":"3c1be082-d2ef-47c9-8287-c09eb9995524","Type":"ContainerStarted","Data":"406065c7fee6178a18fea1f222815d625ce764807d2e29eebb167e372358f2db"} Jan 29 08:17:36 crc kubenswrapper[5017]: I0129 08:17:36.431553 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688d97fbb5-hz7fq" event={"ID":"3c1be082-d2ef-47c9-8287-c09eb9995524","Type":"ContainerStarted","Data":"0fdf3051faa472fa69343c1912ca1ef214c15d6eb1d755d67144361ad38e8045"} Jan 29 08:17:36 crc kubenswrapper[5017]: I0129 08:17:36.432569 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-688d97fbb5-hz7fq" Jan 29 08:17:36 crc kubenswrapper[5017]: I0129 08:17:36.463651 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-688d97fbb5-hz7fq" podStartSLOduration=3.463614856 podStartE2EDuration="3.463614856s" podCreationTimestamp="2026-01-29 08:17:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:17:36.455273955 +0000 UTC m=+6142.829721585" watchObservedRunningTime="2026-01-29 08:17:36.463614856 +0000 UTC m=+6142.838062466" Jan 29 08:17:44 
Jan 29 08:17:44 crc kubenswrapper[5017]: I0129 08:17:44.196235 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bdbdd675-6qdkz"]
Jan 29 08:17:44 crc kubenswrapper[5017]: I0129 08:17:44.196522 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-bdbdd675-6qdkz" podUID="44794862-f9c4-403b-bc6e-f3dd9c42cd85" containerName="dnsmasq-dns" containerID="cri-o://a58b1211e06163f8b04c4540430e8ffc16c5b0709805dfb163719aaf3a69ca0a" gracePeriod=10
Jan 29 08:17:44 crc kubenswrapper[5017]: I0129 08:17:44.330649 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8dfb7fbdc-cwkhx"]
Jan 29 08:17:44 crc kubenswrapper[5017]: I0129 08:17:44.335111 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8dfb7fbdc-cwkhx"
Jan 29 08:17:44 crc kubenswrapper[5017]: I0129 08:17:44.350550 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8dfb7fbdc-cwkhx"]
Jan 29 08:17:44 crc kubenswrapper[5017]: I0129 08:17:44.393766 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqdbf\" (UniqueName: \"kubernetes.io/projected/c82d19ec-3af4-4ed8-b801-99b119fcfa53-kube-api-access-vqdbf\") pod \"dnsmasq-dns-8dfb7fbdc-cwkhx\" (UID: \"c82d19ec-3af4-4ed8-b801-99b119fcfa53\") " pod="openstack/dnsmasq-dns-8dfb7fbdc-cwkhx"
Jan 29 08:17:44 crc kubenswrapper[5017]: I0129 08:17:44.393939 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/c82d19ec-3af4-4ed8-b801-99b119fcfa53-openstack-cell1\") pod \"dnsmasq-dns-8dfb7fbdc-cwkhx\" (UID: \"c82d19ec-3af4-4ed8-b801-99b119fcfa53\") " pod="openstack/dnsmasq-dns-8dfb7fbdc-cwkhx"
Jan 29 08:17:44 crc kubenswrapper[5017]: I0129 08:17:44.395317 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c82d19ec-3af4-4ed8-b801-99b119fcfa53-dns-svc\") pod \"dnsmasq-dns-8dfb7fbdc-cwkhx\" (UID: \"c82d19ec-3af4-4ed8-b801-99b119fcfa53\") " pod="openstack/dnsmasq-dns-8dfb7fbdc-cwkhx"
Jan 29 08:17:44 crc kubenswrapper[5017]: I0129 08:17:44.395375 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c82d19ec-3af4-4ed8-b801-99b119fcfa53-ovsdbserver-sb\") pod \"dnsmasq-dns-8dfb7fbdc-cwkhx\" (UID: \"c82d19ec-3af4-4ed8-b801-99b119fcfa53\") " pod="openstack/dnsmasq-dns-8dfb7fbdc-cwkhx"
Jan 29 08:17:44 crc kubenswrapper[5017]: I0129 08:17:44.395406 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c82d19ec-3af4-4ed8-b801-99b119fcfa53-config\") pod \"dnsmasq-dns-8dfb7fbdc-cwkhx\" (UID: \"c82d19ec-3af4-4ed8-b801-99b119fcfa53\") " pod="openstack/dnsmasq-dns-8dfb7fbdc-cwkhx"
Jan 29 08:17:44 crc kubenswrapper[5017]: I0129 08:17:44.395424 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c82d19ec-3af4-4ed8-b801-99b119fcfa53-ovsdbserver-nb\") pod \"dnsmasq-dns-8dfb7fbdc-cwkhx\" (UID: \"c82d19ec-3af4-4ed8-b801-99b119fcfa53\") " pod="openstack/dnsmasq-dns-8dfb7fbdc-cwkhx"
Jan 29 08:17:44 crc kubenswrapper[5017]: I0129 08:17:44.498196 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqdbf\" (UniqueName: \"kubernetes.io/projected/c82d19ec-3af4-4ed8-b801-99b119fcfa53-kube-api-access-vqdbf\") pod \"dnsmasq-dns-8dfb7fbdc-cwkhx\" (UID: \"c82d19ec-3af4-4ed8-b801-99b119fcfa53\") " pod="openstack/dnsmasq-dns-8dfb7fbdc-cwkhx"
Jan 29 08:17:44 crc kubenswrapper[5017]: I0129 08:17:44.503231 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/c82d19ec-3af4-4ed8-b801-99b119fcfa53-openstack-cell1\") pod \"dnsmasq-dns-8dfb7fbdc-cwkhx\" (UID: \"c82d19ec-3af4-4ed8-b801-99b119fcfa53\") " pod="openstack/dnsmasq-dns-8dfb7fbdc-cwkhx"
Jan 29 08:17:44 crc kubenswrapper[5017]: I0129 08:17:44.503352 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c82d19ec-3af4-4ed8-b801-99b119fcfa53-dns-svc\") pod \"dnsmasq-dns-8dfb7fbdc-cwkhx\" (UID: \"c82d19ec-3af4-4ed8-b801-99b119fcfa53\") " pod="openstack/dnsmasq-dns-8dfb7fbdc-cwkhx"
Jan 29 08:17:44 crc kubenswrapper[5017]: I0129 08:17:44.503418 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c82d19ec-3af4-4ed8-b801-99b119fcfa53-ovsdbserver-sb\") pod \"dnsmasq-dns-8dfb7fbdc-cwkhx\" (UID: \"c82d19ec-3af4-4ed8-b801-99b119fcfa53\") " pod="openstack/dnsmasq-dns-8dfb7fbdc-cwkhx"
Jan 29 08:17:44 crc kubenswrapper[5017]: I0129 08:17:44.503444 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c82d19ec-3af4-4ed8-b801-99b119fcfa53-config\") pod \"dnsmasq-dns-8dfb7fbdc-cwkhx\" (UID: \"c82d19ec-3af4-4ed8-b801-99b119fcfa53\") " pod="openstack/dnsmasq-dns-8dfb7fbdc-cwkhx"
Jan 29 08:17:44 crc kubenswrapper[5017]: I0129 08:17:44.503461 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c82d19ec-3af4-4ed8-b801-99b119fcfa53-ovsdbserver-nb\") pod \"dnsmasq-dns-8dfb7fbdc-cwkhx\" (UID: \"c82d19ec-3af4-4ed8-b801-99b119fcfa53\") " pod="openstack/dnsmasq-dns-8dfb7fbdc-cwkhx"
Jan 29 08:17:44 crc kubenswrapper[5017]: I0129 08:17:44.504275 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/c82d19ec-3af4-4ed8-b801-99b119fcfa53-openstack-cell1\") pod \"dnsmasq-dns-8dfb7fbdc-cwkhx\" (UID: \"c82d19ec-3af4-4ed8-b801-99b119fcfa53\") " pod="openstack/dnsmasq-dns-8dfb7fbdc-cwkhx"
Jan 29 08:17:44 crc kubenswrapper[5017]: I0129 08:17:44.505100 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c82d19ec-3af4-4ed8-b801-99b119fcfa53-ovsdbserver-nb\") pod \"dnsmasq-dns-8dfb7fbdc-cwkhx\" (UID: \"c82d19ec-3af4-4ed8-b801-99b119fcfa53\") " pod="openstack/dnsmasq-dns-8dfb7fbdc-cwkhx"
Jan 29 08:17:44 crc kubenswrapper[5017]: I0129 08:17:44.505727 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c82d19ec-3af4-4ed8-b801-99b119fcfa53-config\") pod \"dnsmasq-dns-8dfb7fbdc-cwkhx\" (UID: \"c82d19ec-3af4-4ed8-b801-99b119fcfa53\") " pod="openstack/dnsmasq-dns-8dfb7fbdc-cwkhx"
Jan 29 08:17:44 crc kubenswrapper[5017]: I0129 08:17:44.506716 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c82d19ec-3af4-4ed8-b801-99b119fcfa53-dns-svc\") pod \"dnsmasq-dns-8dfb7fbdc-cwkhx\" (UID: \"c82d19ec-3af4-4ed8-b801-99b119fcfa53\") " pod="openstack/dnsmasq-dns-8dfb7fbdc-cwkhx"
Jan 29 08:17:44 crc kubenswrapper[5017]: I0129 08:17:44.517021 5017 generic.go:334] "Generic (PLEG): container finished" podID="44794862-f9c4-403b-bc6e-f3dd9c42cd85" containerID="a58b1211e06163f8b04c4540430e8ffc16c5b0709805dfb163719aaf3a69ca0a" exitCode=0
Jan 29 08:17:44 crc kubenswrapper[5017]: I0129 08:17:44.517082 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bdbdd675-6qdkz" event={"ID":"44794862-f9c4-403b-bc6e-f3dd9c42cd85","Type":"ContainerDied","Data":"a58b1211e06163f8b04c4540430e8ffc16c5b0709805dfb163719aaf3a69ca0a"}
Jan 29 08:17:44 crc kubenswrapper[5017]: I0129 08:17:44.519307 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c82d19ec-3af4-4ed8-b801-99b119fcfa53-ovsdbserver-sb\") pod \"dnsmasq-dns-8dfb7fbdc-cwkhx\" (UID: \"c82d19ec-3af4-4ed8-b801-99b119fcfa53\") " pod="openstack/dnsmasq-dns-8dfb7fbdc-cwkhx"
Jan 29 08:17:44 crc kubenswrapper[5017]: I0129 08:17:44.525918 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqdbf\" (UniqueName: \"kubernetes.io/projected/c82d19ec-3af4-4ed8-b801-99b119fcfa53-kube-api-access-vqdbf\") pod \"dnsmasq-dns-8dfb7fbdc-cwkhx\" (UID: \"c82d19ec-3af4-4ed8-b801-99b119fcfa53\") " pod="openstack/dnsmasq-dns-8dfb7fbdc-cwkhx"
Jan 29 08:17:44 crc kubenswrapper[5017]: I0129 08:17:44.729106 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8dfb7fbdc-cwkhx"
Jan 29 08:17:44 crc kubenswrapper[5017]: I0129 08:17:44.868720 5017 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/dnsmasq-dns-bdbdd675-6qdkz" Jan 29 08:17:44 crc kubenswrapper[5017]: I0129 08:17:44.914917 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/44794862-f9c4-403b-bc6e-f3dd9c42cd85-ovsdbserver-sb\") pod \"44794862-f9c4-403b-bc6e-f3dd9c42cd85\" (UID: \"44794862-f9c4-403b-bc6e-f3dd9c42cd85\") " Jan 29 08:17:44 crc kubenswrapper[5017]: I0129 08:17:44.915459 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44794862-f9c4-403b-bc6e-f3dd9c42cd85-config\") pod \"44794862-f9c4-403b-bc6e-f3dd9c42cd85\" (UID: \"44794862-f9c4-403b-bc6e-f3dd9c42cd85\") " Jan 29 08:17:44 crc kubenswrapper[5017]: I0129 08:17:44.915548 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/44794862-f9c4-403b-bc6e-f3dd9c42cd85-ovsdbserver-nb\") pod \"44794862-f9c4-403b-bc6e-f3dd9c42cd85\" (UID: \"44794862-f9c4-403b-bc6e-f3dd9c42cd85\") " Jan 29 08:17:44 crc kubenswrapper[5017]: I0129 08:17:44.915564 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/44794862-f9c4-403b-bc6e-f3dd9c42cd85-dns-svc\") pod \"44794862-f9c4-403b-bc6e-f3dd9c42cd85\" (UID: \"44794862-f9c4-403b-bc6e-f3dd9c42cd85\") " Jan 29 08:17:44 crc kubenswrapper[5017]: I0129 08:17:44.915592 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2z5x7\" (UniqueName: \"kubernetes.io/projected/44794862-f9c4-403b-bc6e-f3dd9c42cd85-kube-api-access-2z5x7\") pod \"44794862-f9c4-403b-bc6e-f3dd9c42cd85\" (UID: \"44794862-f9c4-403b-bc6e-f3dd9c42cd85\") " Jan 29 08:17:44 crc kubenswrapper[5017]: I0129 08:17:44.923290 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44794862-f9c4-403b-bc6e-f3dd9c42cd85-kube-api-access-2z5x7" (OuterVolumeSpecName: "kube-api-access-2z5x7") pod "44794862-f9c4-403b-bc6e-f3dd9c42cd85" (UID: "44794862-f9c4-403b-bc6e-f3dd9c42cd85"). InnerVolumeSpecName "kube-api-access-2z5x7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:17:44 crc kubenswrapper[5017]: I0129 08:17:44.977058 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44794862-f9c4-403b-bc6e-f3dd9c42cd85-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "44794862-f9c4-403b-bc6e-f3dd9c42cd85" (UID: "44794862-f9c4-403b-bc6e-f3dd9c42cd85"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:17:45 crc kubenswrapper[5017]: I0129 08:17:45.009222 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44794862-f9c4-403b-bc6e-f3dd9c42cd85-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "44794862-f9c4-403b-bc6e-f3dd9c42cd85" (UID: "44794862-f9c4-403b-bc6e-f3dd9c42cd85"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:17:45 crc kubenswrapper[5017]: I0129 08:17:45.022915 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44794862-f9c4-403b-bc6e-f3dd9c42cd85-config" (OuterVolumeSpecName: "config") pod "44794862-f9c4-403b-bc6e-f3dd9c42cd85" (UID: "44794862-f9c4-403b-bc6e-f3dd9c42cd85"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:17:45 crc kubenswrapper[5017]: I0129 08:17:45.029351 5017 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/44794862-f9c4-403b-bc6e-f3dd9c42cd85-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 08:17:45 crc kubenswrapper[5017]: I0129 08:17:45.029650 5017 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44794862-f9c4-403b-bc6e-f3dd9c42cd85-config\") on node \"crc\" DevicePath \"\"" Jan 29 08:17:45 crc kubenswrapper[5017]: I0129 08:17:45.029728 5017 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/44794862-f9c4-403b-bc6e-f3dd9c42cd85-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 08:17:45 crc kubenswrapper[5017]: I0129 08:17:45.029796 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2z5x7\" (UniqueName: \"kubernetes.io/projected/44794862-f9c4-403b-bc6e-f3dd9c42cd85-kube-api-access-2z5x7\") on node \"crc\" DevicePath \"\"" Jan 29 08:17:45 crc kubenswrapper[5017]: I0129 08:17:45.036308 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44794862-f9c4-403b-bc6e-f3dd9c42cd85-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "44794862-f9c4-403b-bc6e-f3dd9c42cd85" (UID: "44794862-f9c4-403b-bc6e-f3dd9c42cd85"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:17:45 crc kubenswrapper[5017]: I0129 08:17:45.132216 5017 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/44794862-f9c4-403b-bc6e-f3dd9c42cd85-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 08:17:45 crc kubenswrapper[5017]: I0129 08:17:45.285281 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8dfb7fbdc-cwkhx"] Jan 29 08:17:45 crc kubenswrapper[5017]: I0129 08:17:45.529512 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8dfb7fbdc-cwkhx" event={"ID":"c82d19ec-3af4-4ed8-b801-99b119fcfa53","Type":"ContainerStarted","Data":"6a4d21fb59fc6065cb25955d13cb3f513b819407817cab96f39da75678f53492"} Jan 29 08:17:45 crc kubenswrapper[5017]: I0129 08:17:45.532602 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bdbdd675-6qdkz" event={"ID":"44794862-f9c4-403b-bc6e-f3dd9c42cd85","Type":"ContainerDied","Data":"dbdda98008ea7f3824a77219aa6ad7c156929fbcfd3e7afa6df54efaf758d137"} Jan 29 08:17:45 crc kubenswrapper[5017]: I0129 08:17:45.532668 5017 scope.go:117] "RemoveContainer" containerID="a58b1211e06163f8b04c4540430e8ffc16c5b0709805dfb163719aaf3a69ca0a" Jan 29 08:17:45 crc kubenswrapper[5017]: I0129 08:17:45.532691 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bdbdd675-6qdkz" Jan 29 08:17:45 crc kubenswrapper[5017]: I0129 08:17:45.563126 5017 scope.go:117] "RemoveContainer" containerID="8f5c8f9e02de79670b29a75ae774a9f9a4367b07ed6b22bf4bdb14f7a508623f" Jan 29 08:17:45 crc kubenswrapper[5017]: I0129 08:17:45.583774 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bdbdd675-6qdkz"] Jan 29 08:17:45 crc kubenswrapper[5017]: I0129 08:17:45.593262 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-bdbdd675-6qdkz"] Jan 29 08:17:45 crc kubenswrapper[5017]: I0129 08:17:45.956358 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gg6jz"] Jan 29 08:17:45 crc kubenswrapper[5017]: E0129 08:17:45.957439 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44794862-f9c4-403b-bc6e-f3dd9c42cd85" containerName="init" Jan 29 08:17:45 crc kubenswrapper[5017]: I0129 08:17:45.957468 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="44794862-f9c4-403b-bc6e-f3dd9c42cd85" containerName="init" Jan 29 08:17:45 crc kubenswrapper[5017]: E0129 08:17:45.957493 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44794862-f9c4-403b-bc6e-f3dd9c42cd85" containerName="dnsmasq-dns" Jan 29 08:17:45 crc kubenswrapper[5017]: I0129 08:17:45.957501 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="44794862-f9c4-403b-bc6e-f3dd9c42cd85" containerName="dnsmasq-dns" Jan 29 08:17:45 crc kubenswrapper[5017]: I0129 08:17:45.957749 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="44794862-f9c4-403b-bc6e-f3dd9c42cd85" containerName="dnsmasq-dns" Jan 29 08:17:45 crc kubenswrapper[5017]: I0129 08:17:45.959637 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gg6jz" Jan 29 08:17:45 crc kubenswrapper[5017]: I0129 08:17:45.997823 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gg6jz"] Jan 29 08:17:46 crc kubenswrapper[5017]: I0129 08:17:46.095236 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r65gt\" (UniqueName: \"kubernetes.io/projected/6f7ca3a2-165a-4d12-b5c2-eabe09f9c43c-kube-api-access-r65gt\") pod \"community-operators-gg6jz\" (UID: \"6f7ca3a2-165a-4d12-b5c2-eabe09f9c43c\") " pod="openshift-marketplace/community-operators-gg6jz" Jan 29 08:17:46 crc kubenswrapper[5017]: I0129 08:17:46.095324 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f7ca3a2-165a-4d12-b5c2-eabe09f9c43c-catalog-content\") pod \"community-operators-gg6jz\" (UID: \"6f7ca3a2-165a-4d12-b5c2-eabe09f9c43c\") " pod="openshift-marketplace/community-operators-gg6jz" Jan 29 08:17:46 crc kubenswrapper[5017]: I0129 08:17:46.095363 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f7ca3a2-165a-4d12-b5c2-eabe09f9c43c-utilities\") pod \"community-operators-gg6jz\" (UID: \"6f7ca3a2-165a-4d12-b5c2-eabe09f9c43c\") " pod="openshift-marketplace/community-operators-gg6jz" Jan 29 08:17:46 crc kubenswrapper[5017]: I0129 08:17:46.198055 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f7ca3a2-165a-4d12-b5c2-eabe09f9c43c-catalog-content\") pod \"community-operators-gg6jz\" (UID: \"6f7ca3a2-165a-4d12-b5c2-eabe09f9c43c\") " pod="openshift-marketplace/community-operators-gg6jz" Jan 29 08:17:46 crc kubenswrapper[5017]: I0129 08:17:46.198139 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f7ca3a2-165a-4d12-b5c2-eabe09f9c43c-utilities\") pod \"community-operators-gg6jz\" (UID: \"6f7ca3a2-165a-4d12-b5c2-eabe09f9c43c\") " pod="openshift-marketplace/community-operators-gg6jz" Jan 29 08:17:46 crc kubenswrapper[5017]: I0129 08:17:46.198300 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r65gt\" (UniqueName: \"kubernetes.io/projected/6f7ca3a2-165a-4d12-b5c2-eabe09f9c43c-kube-api-access-r65gt\") pod \"community-operators-gg6jz\" (UID: \"6f7ca3a2-165a-4d12-b5c2-eabe09f9c43c\") " pod="openshift-marketplace/community-operators-gg6jz" Jan 29 08:17:46 crc kubenswrapper[5017]: I0129 08:17:46.198743 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f7ca3a2-165a-4d12-b5c2-eabe09f9c43c-catalog-content\") pod \"community-operators-gg6jz\" (UID: \"6f7ca3a2-165a-4d12-b5c2-eabe09f9c43c\") " pod="openshift-marketplace/community-operators-gg6jz" Jan 29 08:17:46 crc kubenswrapper[5017]: I0129 08:17:46.198852 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f7ca3a2-165a-4d12-b5c2-eabe09f9c43c-utilities\") pod \"community-operators-gg6jz\" (UID: \"6f7ca3a2-165a-4d12-b5c2-eabe09f9c43c\") " pod="openshift-marketplace/community-operators-gg6jz" Jan 29 08:17:46 crc kubenswrapper[5017]: I0129 08:17:46.230255 5017 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-r65gt\" (UniqueName: \"kubernetes.io/projected/6f7ca3a2-165a-4d12-b5c2-eabe09f9c43c-kube-api-access-r65gt\") pod \"community-operators-gg6jz\" (UID: \"6f7ca3a2-165a-4d12-b5c2-eabe09f9c43c\") " pod="openshift-marketplace/community-operators-gg6jz" Jan 29 08:17:46 crc kubenswrapper[5017]: I0129 08:17:46.291967 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gg6jz" Jan 29 08:17:46 crc kubenswrapper[5017]: I0129 08:17:46.338908 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44794862-f9c4-403b-bc6e-f3dd9c42cd85" path="/var/lib/kubelet/pods/44794862-f9c4-403b-bc6e-f3dd9c42cd85/volumes" Jan 29 08:17:46 crc kubenswrapper[5017]: I0129 08:17:46.563417 5017 generic.go:334] "Generic (PLEG): container finished" podID="c82d19ec-3af4-4ed8-b801-99b119fcfa53" containerID="3b0ee3b39a40f16ab117dc160bc37ec449c9c0e09e5d0497bac558f5be6c3a07" exitCode=0 Jan 29 08:17:46 crc kubenswrapper[5017]: I0129 08:17:46.564939 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8dfb7fbdc-cwkhx" event={"ID":"c82d19ec-3af4-4ed8-b801-99b119fcfa53","Type":"ContainerDied","Data":"3b0ee3b39a40f16ab117dc160bc37ec449c9c0e09e5d0497bac558f5be6c3a07"} Jan 29 08:17:46 crc kubenswrapper[5017]: I0129 08:17:46.755000 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gg6jz"] Jan 29 08:17:46 crc kubenswrapper[5017]: W0129 08:17:46.845185 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6f7ca3a2_165a_4d12_b5c2_eabe09f9c43c.slice/crio-da6e7cf20a95b25561b43c03126bf5905f838a2ae50a156b09981a54ae99995c WatchSource:0}: Error finding container da6e7cf20a95b25561b43c03126bf5905f838a2ae50a156b09981a54ae99995c: Status 404 returned error can't find the container with id da6e7cf20a95b25561b43c03126bf5905f838a2ae50a156b09981a54ae99995c Jan 29 08:17:47 crc kubenswrapper[5017]: I0129 08:17:47.592176 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8dfb7fbdc-cwkhx" event={"ID":"c82d19ec-3af4-4ed8-b801-99b119fcfa53","Type":"ContainerStarted","Data":"6c12d2860688f994c31c19b0a74fd6b579cfbb4812ae6af58339a9065f5429dd"} Jan 29 08:17:47 crc kubenswrapper[5017]: I0129 08:17:47.592650 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8dfb7fbdc-cwkhx" Jan 29 08:17:47 crc kubenswrapper[5017]: I0129 08:17:47.602564 5017 generic.go:334] "Generic (PLEG): container finished" podID="6f7ca3a2-165a-4d12-b5c2-eabe09f9c43c" containerID="936f62e3a748aa0c58b2a01d4c69755baa15f20858f639794f3f65e301cec61a" exitCode=0 Jan 29 08:17:47 crc kubenswrapper[5017]: I0129 08:17:47.602621 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gg6jz" event={"ID":"6f7ca3a2-165a-4d12-b5c2-eabe09f9c43c","Type":"ContainerDied","Data":"936f62e3a748aa0c58b2a01d4c69755baa15f20858f639794f3f65e301cec61a"} Jan 29 08:17:47 crc kubenswrapper[5017]: I0129 08:17:47.602652 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gg6jz" event={"ID":"6f7ca3a2-165a-4d12-b5c2-eabe09f9c43c","Type":"ContainerStarted","Data":"da6e7cf20a95b25561b43c03126bf5905f838a2ae50a156b09981a54ae99995c"} Jan 29 08:17:47 crc kubenswrapper[5017]: I0129 08:17:47.634574 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/dnsmasq-dns-8dfb7fbdc-cwkhx" podStartSLOduration=3.634538696 podStartE2EDuration="3.634538696s" podCreationTimestamp="2026-01-29 08:17:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:17:47.627253621 +0000 UTC m=+6154.001701231" watchObservedRunningTime="2026-01-29 08:17:47.634538696 +0000 UTC m=+6154.008986316" Jan 29 08:17:48 crc kubenswrapper[5017]: I0129 08:17:48.621710 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gg6jz" event={"ID":"6f7ca3a2-165a-4d12-b5c2-eabe09f9c43c","Type":"ContainerStarted","Data":"815fcbf254c9e4676ff2cd12e964489a00e93dde743e40d313a91df3068b3edf"} Jan 29 08:17:50 crc kubenswrapper[5017]: I0129 08:17:50.650356 5017 generic.go:334] "Generic (PLEG): container finished" podID="6f7ca3a2-165a-4d12-b5c2-eabe09f9c43c" containerID="815fcbf254c9e4676ff2cd12e964489a00e93dde743e40d313a91df3068b3edf" exitCode=0 Jan 29 08:17:50 crc kubenswrapper[5017]: I0129 08:17:50.650468 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gg6jz" event={"ID":"6f7ca3a2-165a-4d12-b5c2-eabe09f9c43c","Type":"ContainerDied","Data":"815fcbf254c9e4676ff2cd12e964489a00e93dde743e40d313a91df3068b3edf"} Jan 29 08:17:51 crc kubenswrapper[5017]: I0129 08:17:51.667048 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gg6jz" event={"ID":"6f7ca3a2-165a-4d12-b5c2-eabe09f9c43c","Type":"ContainerStarted","Data":"b9bb283ca1875421d7f65422197bc5f5e98d163f21ca061fac9e789413bc1915"} Jan 29 08:17:51 crc kubenswrapper[5017]: I0129 08:17:51.695933 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gg6jz" podStartSLOduration=3.243195391 podStartE2EDuration="6.695906022s" podCreationTimestamp="2026-01-29 08:17:45 +0000 UTC" firstStartedPulling="2026-01-29 08:17:47.605083765 +0000 UTC m=+6153.979531375" lastFinishedPulling="2026-01-29 08:17:51.057794406 +0000 UTC m=+6157.432242006" observedRunningTime="2026-01-29 08:17:51.69208269 +0000 UTC m=+6158.066530310" watchObservedRunningTime="2026-01-29 08:17:51.695906022 +0000 UTC m=+6158.070353632" Jan 29 08:17:54 crc kubenswrapper[5017]: I0129 08:17:54.731560 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8dfb7fbdc-cwkhx" Jan 29 08:17:54 crc kubenswrapper[5017]: I0129 08:17:54.822861 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-688d97fbb5-hz7fq"] Jan 29 08:17:54 crc kubenswrapper[5017]: I0129 08:17:54.823496 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-688d97fbb5-hz7fq" podUID="3c1be082-d2ef-47c9-8287-c09eb9995524" containerName="dnsmasq-dns" containerID="cri-o://0fdf3051faa472fa69343c1912ca1ef214c15d6eb1d755d67144361ad38e8045" gracePeriod=10 Jan 29 08:17:55 crc kubenswrapper[5017]: I0129 08:17:55.096194 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-e65f-account-create-update-rxnc4"] Jan 29 08:17:55 crc kubenswrapper[5017]: I0129 08:17:55.127380 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-7n55f"] Jan 29 08:17:55 crc kubenswrapper[5017]: I0129 08:17:55.154289 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-e65f-account-create-update-rxnc4"] Jan 29 08:17:55 crc kubenswrapper[5017]: I0129 
08:17:55.168735 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-7n55f"] Jan 29 08:17:55 crc kubenswrapper[5017]: I0129 08:17:55.464860 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-688d97fbb5-hz7fq" Jan 29 08:17:55 crc kubenswrapper[5017]: I0129 08:17:55.571731 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/3c1be082-d2ef-47c9-8287-c09eb9995524-openstack-cell1\") pod \"3c1be082-d2ef-47c9-8287-c09eb9995524\" (UID: \"3c1be082-d2ef-47c9-8287-c09eb9995524\") " Jan 29 08:17:55 crc kubenswrapper[5017]: I0129 08:17:55.571825 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tpxg\" (UniqueName: \"kubernetes.io/projected/3c1be082-d2ef-47c9-8287-c09eb9995524-kube-api-access-8tpxg\") pod \"3c1be082-d2ef-47c9-8287-c09eb9995524\" (UID: \"3c1be082-d2ef-47c9-8287-c09eb9995524\") " Jan 29 08:17:55 crc kubenswrapper[5017]: I0129 08:17:55.571883 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3c1be082-d2ef-47c9-8287-c09eb9995524-ovsdbserver-sb\") pod \"3c1be082-d2ef-47c9-8287-c09eb9995524\" (UID: \"3c1be082-d2ef-47c9-8287-c09eb9995524\") " Jan 29 08:17:55 crc kubenswrapper[5017]: I0129 08:17:55.571999 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3c1be082-d2ef-47c9-8287-c09eb9995524-dns-svc\") pod \"3c1be082-d2ef-47c9-8287-c09eb9995524\" (UID: \"3c1be082-d2ef-47c9-8287-c09eb9995524\") " Jan 29 08:17:55 crc kubenswrapper[5017]: I0129 08:17:55.572126 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3c1be082-d2ef-47c9-8287-c09eb9995524-ovsdbserver-nb\") pod \"3c1be082-d2ef-47c9-8287-c09eb9995524\" (UID: \"3c1be082-d2ef-47c9-8287-c09eb9995524\") " Jan 29 08:17:55 crc kubenswrapper[5017]: I0129 08:17:55.572297 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c1be082-d2ef-47c9-8287-c09eb9995524-config\") pod \"3c1be082-d2ef-47c9-8287-c09eb9995524\" (UID: \"3c1be082-d2ef-47c9-8287-c09eb9995524\") " Jan 29 08:17:55 crc kubenswrapper[5017]: I0129 08:17:55.580461 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c1be082-d2ef-47c9-8287-c09eb9995524-kube-api-access-8tpxg" (OuterVolumeSpecName: "kube-api-access-8tpxg") pod "3c1be082-d2ef-47c9-8287-c09eb9995524" (UID: "3c1be082-d2ef-47c9-8287-c09eb9995524"). InnerVolumeSpecName "kube-api-access-8tpxg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:17:55 crc kubenswrapper[5017]: I0129 08:17:55.651146 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c1be082-d2ef-47c9-8287-c09eb9995524-openstack-cell1" (OuterVolumeSpecName: "openstack-cell1") pod "3c1be082-d2ef-47c9-8287-c09eb9995524" (UID: "3c1be082-d2ef-47c9-8287-c09eb9995524"). InnerVolumeSpecName "openstack-cell1". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:17:55 crc kubenswrapper[5017]: I0129 08:17:55.651500 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c1be082-d2ef-47c9-8287-c09eb9995524-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3c1be082-d2ef-47c9-8287-c09eb9995524" (UID: "3c1be082-d2ef-47c9-8287-c09eb9995524"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:17:55 crc kubenswrapper[5017]: I0129 08:17:55.653581 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c1be082-d2ef-47c9-8287-c09eb9995524-config" (OuterVolumeSpecName: "config") pod "3c1be082-d2ef-47c9-8287-c09eb9995524" (UID: "3c1be082-d2ef-47c9-8287-c09eb9995524"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:17:55 crc kubenswrapper[5017]: I0129 08:17:55.655020 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c1be082-d2ef-47c9-8287-c09eb9995524-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3c1be082-d2ef-47c9-8287-c09eb9995524" (UID: "3c1be082-d2ef-47c9-8287-c09eb9995524"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:17:55 crc kubenswrapper[5017]: I0129 08:17:55.676593 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c1be082-d2ef-47c9-8287-c09eb9995524-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3c1be082-d2ef-47c9-8287-c09eb9995524" (UID: "3c1be082-d2ef-47c9-8287-c09eb9995524"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:17:55 crc kubenswrapper[5017]: I0129 08:17:55.676743 5017 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3c1be082-d2ef-47c9-8287-c09eb9995524-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 08:17:55 crc kubenswrapper[5017]: I0129 08:17:55.676772 5017 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3c1be082-d2ef-47c9-8287-c09eb9995524-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 08:17:55 crc kubenswrapper[5017]: I0129 08:17:55.676802 5017 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c1be082-d2ef-47c9-8287-c09eb9995524-config\") on node \"crc\" DevicePath \"\"" Jan 29 08:17:55 crc kubenswrapper[5017]: I0129 08:17:55.676815 5017 reconciler_common.go:293] "Volume detached for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/3c1be082-d2ef-47c9-8287-c09eb9995524-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 29 08:17:55 crc kubenswrapper[5017]: I0129 08:17:55.676826 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tpxg\" (UniqueName: \"kubernetes.io/projected/3c1be082-d2ef-47c9-8287-c09eb9995524-kube-api-access-8tpxg\") on node \"crc\" DevicePath \"\"" Jan 29 08:17:55 crc kubenswrapper[5017]: I0129 08:17:55.711917 5017 generic.go:334] "Generic (PLEG): container finished" podID="3c1be082-d2ef-47c9-8287-c09eb9995524" containerID="0fdf3051faa472fa69343c1912ca1ef214c15d6eb1d755d67144361ad38e8045" exitCode=0 Jan 29 08:17:55 crc kubenswrapper[5017]: I0129 08:17:55.712026 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688d97fbb5-hz7fq" 
event={"ID":"3c1be082-d2ef-47c9-8287-c09eb9995524","Type":"ContainerDied","Data":"0fdf3051faa472fa69343c1912ca1ef214c15d6eb1d755d67144361ad38e8045"} Jan 29 08:17:55 crc kubenswrapper[5017]: I0129 08:17:55.712104 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688d97fbb5-hz7fq" event={"ID":"3c1be082-d2ef-47c9-8287-c09eb9995524","Type":"ContainerDied","Data":"406065c7fee6178a18fea1f222815d625ce764807d2e29eebb167e372358f2db"} Jan 29 08:17:55 crc kubenswrapper[5017]: I0129 08:17:55.712103 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-688d97fbb5-hz7fq" Jan 29 08:17:55 crc kubenswrapper[5017]: I0129 08:17:55.712130 5017 scope.go:117] "RemoveContainer" containerID="0fdf3051faa472fa69343c1912ca1ef214c15d6eb1d755d67144361ad38e8045" Jan 29 08:17:55 crc kubenswrapper[5017]: I0129 08:17:55.766195 5017 scope.go:117] "RemoveContainer" containerID="f256a6ee479ede5524214147dbe0a67ae6d75157b22aa728db83c10ba3952186" Jan 29 08:17:55 crc kubenswrapper[5017]: I0129 08:17:55.784906 5017 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3c1be082-d2ef-47c9-8287-c09eb9995524-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 08:17:55 crc kubenswrapper[5017]: I0129 08:17:55.803655 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-688d97fbb5-hz7fq"] Jan 29 08:17:55 crc kubenswrapper[5017]: I0129 08:17:55.811992 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-688d97fbb5-hz7fq"] Jan 29 08:17:55 crc kubenswrapper[5017]: I0129 08:17:55.824125 5017 scope.go:117] "RemoveContainer" containerID="0fdf3051faa472fa69343c1912ca1ef214c15d6eb1d755d67144361ad38e8045" Jan 29 08:17:55 crc kubenswrapper[5017]: E0129 08:17:55.829543 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0fdf3051faa472fa69343c1912ca1ef214c15d6eb1d755d67144361ad38e8045\": container with ID starting with 0fdf3051faa472fa69343c1912ca1ef214c15d6eb1d755d67144361ad38e8045 not found: ID does not exist" containerID="0fdf3051faa472fa69343c1912ca1ef214c15d6eb1d755d67144361ad38e8045" Jan 29 08:17:55 crc kubenswrapper[5017]: I0129 08:17:55.829598 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0fdf3051faa472fa69343c1912ca1ef214c15d6eb1d755d67144361ad38e8045"} err="failed to get container status \"0fdf3051faa472fa69343c1912ca1ef214c15d6eb1d755d67144361ad38e8045\": rpc error: code = NotFound desc = could not find container \"0fdf3051faa472fa69343c1912ca1ef214c15d6eb1d755d67144361ad38e8045\": container with ID starting with 0fdf3051faa472fa69343c1912ca1ef214c15d6eb1d755d67144361ad38e8045 not found: ID does not exist" Jan 29 08:17:55 crc kubenswrapper[5017]: I0129 08:17:55.829636 5017 scope.go:117] "RemoveContainer" containerID="f256a6ee479ede5524214147dbe0a67ae6d75157b22aa728db83c10ba3952186" Jan 29 08:17:55 crc kubenswrapper[5017]: E0129 08:17:55.830215 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f256a6ee479ede5524214147dbe0a67ae6d75157b22aa728db83c10ba3952186\": container with ID starting with f256a6ee479ede5524214147dbe0a67ae6d75157b22aa728db83c10ba3952186 not found: ID does not exist" containerID="f256a6ee479ede5524214147dbe0a67ae6d75157b22aa728db83c10ba3952186" Jan 29 08:17:55 crc kubenswrapper[5017]: I0129 08:17:55.830243 5017 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f256a6ee479ede5524214147dbe0a67ae6d75157b22aa728db83c10ba3952186"} err="failed to get container status \"f256a6ee479ede5524214147dbe0a67ae6d75157b22aa728db83c10ba3952186\": rpc error: code = NotFound desc = could not find container \"f256a6ee479ede5524214147dbe0a67ae6d75157b22aa728db83c10ba3952186\": container with ID starting with f256a6ee479ede5524214147dbe0a67ae6d75157b22aa728db83c10ba3952186 not found: ID does not exist" Jan 29 08:17:56 crc kubenswrapper[5017]: I0129 08:17:56.292972 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gg6jz" Jan 29 08:17:56 crc kubenswrapper[5017]: I0129 08:17:56.293564 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gg6jz" Jan 29 08:17:56 crc kubenswrapper[5017]: I0129 08:17:56.333255 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30ad5517-2f39-4db6-8d8a-c5a23e84d1b0" path="/var/lib/kubelet/pods/30ad5517-2f39-4db6-8d8a-c5a23e84d1b0/volumes" Jan 29 08:17:56 crc kubenswrapper[5017]: I0129 08:17:56.335360 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c1be082-d2ef-47c9-8287-c09eb9995524" path="/var/lib/kubelet/pods/3c1be082-d2ef-47c9-8287-c09eb9995524/volumes" Jan 29 08:17:56 crc kubenswrapper[5017]: I0129 08:17:56.336250 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67325a2f-9d88-4646-8f9b-dc89bc6a5bc7" path="/var/lib/kubelet/pods/67325a2f-9d88-4646-8f9b-dc89bc6a5bc7/volumes" Jan 29 08:17:56 crc kubenswrapper[5017]: I0129 08:17:56.364766 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gg6jz" Jan 29 08:17:56 crc kubenswrapper[5017]: I0129 08:17:56.538996 5017 patch_prober.go:28] interesting pod/machine-config-daemon-895pl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 08:17:56 crc kubenswrapper[5017]: I0129 08:17:56.539094 5017 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 08:17:56 crc kubenswrapper[5017]: I0129 08:17:56.781122 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gg6jz" Jan 29 08:17:56 crc kubenswrapper[5017]: I0129 08:17:56.836681 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gg6jz"] Jan 29 08:17:58 crc kubenswrapper[5017]: I0129 08:17:58.746503 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-gg6jz" podUID="6f7ca3a2-165a-4d12-b5c2-eabe09f9c43c" containerName="registry-server" containerID="cri-o://b9bb283ca1875421d7f65422197bc5f5e98d163f21ca061fac9e789413bc1915" gracePeriod=2 Jan 29 08:17:59 crc kubenswrapper[5017]: I0129 08:17:59.244990 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gg6jz" Jan 29 08:17:59 crc kubenswrapper[5017]: I0129 08:17:59.406305 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r65gt\" (UniqueName: \"kubernetes.io/projected/6f7ca3a2-165a-4d12-b5c2-eabe09f9c43c-kube-api-access-r65gt\") pod \"6f7ca3a2-165a-4d12-b5c2-eabe09f9c43c\" (UID: \"6f7ca3a2-165a-4d12-b5c2-eabe09f9c43c\") " Jan 29 08:17:59 crc kubenswrapper[5017]: I0129 08:17:59.406645 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f7ca3a2-165a-4d12-b5c2-eabe09f9c43c-catalog-content\") pod \"6f7ca3a2-165a-4d12-b5c2-eabe09f9c43c\" (UID: \"6f7ca3a2-165a-4d12-b5c2-eabe09f9c43c\") " Jan 29 08:17:59 crc kubenswrapper[5017]: I0129 08:17:59.406690 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f7ca3a2-165a-4d12-b5c2-eabe09f9c43c-utilities\") pod \"6f7ca3a2-165a-4d12-b5c2-eabe09f9c43c\" (UID: \"6f7ca3a2-165a-4d12-b5c2-eabe09f9c43c\") " Jan 29 08:17:59 crc kubenswrapper[5017]: I0129 08:17:59.408324 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f7ca3a2-165a-4d12-b5c2-eabe09f9c43c-utilities" (OuterVolumeSpecName: "utilities") pod "6f7ca3a2-165a-4d12-b5c2-eabe09f9c43c" (UID: "6f7ca3a2-165a-4d12-b5c2-eabe09f9c43c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:17:59 crc kubenswrapper[5017]: I0129 08:17:59.414380 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f7ca3a2-165a-4d12-b5c2-eabe09f9c43c-kube-api-access-r65gt" (OuterVolumeSpecName: "kube-api-access-r65gt") pod "6f7ca3a2-165a-4d12-b5c2-eabe09f9c43c" (UID: "6f7ca3a2-165a-4d12-b5c2-eabe09f9c43c"). InnerVolumeSpecName "kube-api-access-r65gt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:17:59 crc kubenswrapper[5017]: I0129 08:17:59.474848 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f7ca3a2-165a-4d12-b5c2-eabe09f9c43c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6f7ca3a2-165a-4d12-b5c2-eabe09f9c43c" (UID: "6f7ca3a2-165a-4d12-b5c2-eabe09f9c43c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:17:59 crc kubenswrapper[5017]: I0129 08:17:59.510224 5017 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f7ca3a2-165a-4d12-b5c2-eabe09f9c43c-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 08:17:59 crc kubenswrapper[5017]: I0129 08:17:59.510459 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r65gt\" (UniqueName: \"kubernetes.io/projected/6f7ca3a2-165a-4d12-b5c2-eabe09f9c43c-kube-api-access-r65gt\") on node \"crc\" DevicePath \"\"" Jan 29 08:17:59 crc kubenswrapper[5017]: I0129 08:17:59.510518 5017 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f7ca3a2-165a-4d12-b5c2-eabe09f9c43c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 08:17:59 crc kubenswrapper[5017]: I0129 08:17:59.758739 5017 generic.go:334] "Generic (PLEG): container finished" podID="6f7ca3a2-165a-4d12-b5c2-eabe09f9c43c" containerID="b9bb283ca1875421d7f65422197bc5f5e98d163f21ca061fac9e789413bc1915" exitCode=0 Jan 29 08:17:59 crc kubenswrapper[5017]: I0129 08:17:59.758789 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gg6jz" event={"ID":"6f7ca3a2-165a-4d12-b5c2-eabe09f9c43c","Type":"ContainerDied","Data":"b9bb283ca1875421d7f65422197bc5f5e98d163f21ca061fac9e789413bc1915"} Jan 29 08:17:59 crc kubenswrapper[5017]: I0129 08:17:59.758824 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gg6jz" event={"ID":"6f7ca3a2-165a-4d12-b5c2-eabe09f9c43c","Type":"ContainerDied","Data":"da6e7cf20a95b25561b43c03126bf5905f838a2ae50a156b09981a54ae99995c"} Jan 29 08:17:59 crc kubenswrapper[5017]: I0129 08:17:59.758844 5017 scope.go:117] "RemoveContainer" containerID="b9bb283ca1875421d7f65422197bc5f5e98d163f21ca061fac9e789413bc1915" Jan 29 08:17:59 crc kubenswrapper[5017]: I0129 08:17:59.759047 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gg6jz" Jan 29 08:17:59 crc kubenswrapper[5017]: I0129 08:17:59.791385 5017 scope.go:117] "RemoveContainer" containerID="815fcbf254c9e4676ff2cd12e964489a00e93dde743e40d313a91df3068b3edf" Jan 29 08:17:59 crc kubenswrapper[5017]: I0129 08:17:59.801926 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gg6jz"] Jan 29 08:17:59 crc kubenswrapper[5017]: I0129 08:17:59.810942 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-gg6jz"] Jan 29 08:17:59 crc kubenswrapper[5017]: I0129 08:17:59.819124 5017 scope.go:117] "RemoveContainer" containerID="936f62e3a748aa0c58b2a01d4c69755baa15f20858f639794f3f65e301cec61a" Jan 29 08:17:59 crc kubenswrapper[5017]: I0129 08:17:59.884679 5017 scope.go:117] "RemoveContainer" containerID="b9bb283ca1875421d7f65422197bc5f5e98d163f21ca061fac9e789413bc1915" Jan 29 08:17:59 crc kubenswrapper[5017]: E0129 08:17:59.885195 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9bb283ca1875421d7f65422197bc5f5e98d163f21ca061fac9e789413bc1915\": container with ID starting with b9bb283ca1875421d7f65422197bc5f5e98d163f21ca061fac9e789413bc1915 not found: ID does not exist" containerID="b9bb283ca1875421d7f65422197bc5f5e98d163f21ca061fac9e789413bc1915" Jan 29 08:17:59 crc kubenswrapper[5017]: I0129 08:17:59.885231 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9bb283ca1875421d7f65422197bc5f5e98d163f21ca061fac9e789413bc1915"} err="failed to get container status \"b9bb283ca1875421d7f65422197bc5f5e98d163f21ca061fac9e789413bc1915\": rpc error: code = NotFound desc = could not find container \"b9bb283ca1875421d7f65422197bc5f5e98d163f21ca061fac9e789413bc1915\": container with ID starting with b9bb283ca1875421d7f65422197bc5f5e98d163f21ca061fac9e789413bc1915 not found: ID does not exist" Jan 29 08:17:59 crc kubenswrapper[5017]: I0129 08:17:59.885256 5017 scope.go:117] "RemoveContainer" containerID="815fcbf254c9e4676ff2cd12e964489a00e93dde743e40d313a91df3068b3edf" Jan 29 08:17:59 crc kubenswrapper[5017]: E0129 08:17:59.885507 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"815fcbf254c9e4676ff2cd12e964489a00e93dde743e40d313a91df3068b3edf\": container with ID starting with 815fcbf254c9e4676ff2cd12e964489a00e93dde743e40d313a91df3068b3edf not found: ID does not exist" containerID="815fcbf254c9e4676ff2cd12e964489a00e93dde743e40d313a91df3068b3edf" Jan 29 08:17:59 crc kubenswrapper[5017]: I0129 08:17:59.885564 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"815fcbf254c9e4676ff2cd12e964489a00e93dde743e40d313a91df3068b3edf"} err="failed to get container status \"815fcbf254c9e4676ff2cd12e964489a00e93dde743e40d313a91df3068b3edf\": rpc error: code = NotFound desc = could not find container \"815fcbf254c9e4676ff2cd12e964489a00e93dde743e40d313a91df3068b3edf\": container with ID starting with 815fcbf254c9e4676ff2cd12e964489a00e93dde743e40d313a91df3068b3edf not found: ID does not exist" Jan 29 08:17:59 crc kubenswrapper[5017]: I0129 08:17:59.885604 5017 scope.go:117] "RemoveContainer" containerID="936f62e3a748aa0c58b2a01d4c69755baa15f20858f639794f3f65e301cec61a" Jan 29 08:17:59 crc kubenswrapper[5017]: E0129 08:17:59.885939 5017 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"936f62e3a748aa0c58b2a01d4c69755baa15f20858f639794f3f65e301cec61a\": container with ID starting with 936f62e3a748aa0c58b2a01d4c69755baa15f20858f639794f3f65e301cec61a not found: ID does not exist" containerID="936f62e3a748aa0c58b2a01d4c69755baa15f20858f639794f3f65e301cec61a" Jan 29 08:17:59 crc kubenswrapper[5017]: I0129 08:17:59.886038 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"936f62e3a748aa0c58b2a01d4c69755baa15f20858f639794f3f65e301cec61a"} err="failed to get container status \"936f62e3a748aa0c58b2a01d4c69755baa15f20858f639794f3f65e301cec61a\": rpc error: code = NotFound desc = could not find container \"936f62e3a748aa0c58b2a01d4c69755baa15f20858f639794f3f65e301cec61a\": container with ID starting with 936f62e3a748aa0c58b2a01d4c69755baa15f20858f639794f3f65e301cec61a not found: ID does not exist" Jan 29 08:18:00 crc kubenswrapper[5017]: I0129 08:18:00.330262 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f7ca3a2-165a-4d12-b5c2-eabe09f9c43c" path="/var/lib/kubelet/pods/6f7ca3a2-165a-4d12-b5c2-eabe09f9c43c/volumes" Jan 29 08:18:03 crc kubenswrapper[5017]: I0129 08:18:03.054290 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-tbbs6"] Jan 29 08:18:03 crc kubenswrapper[5017]: I0129 08:18:03.068824 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-tbbs6"] Jan 29 08:18:04 crc kubenswrapper[5017]: I0129 08:18:04.331300 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ff65104-5204-4b4f-9251-cfd45cfe7b71" path="/var/lib/kubelet/pods/1ff65104-5204-4b4f-9251-cfd45cfe7b71/volumes" Jan 29 08:18:05 crc kubenswrapper[5017]: I0129 08:18:05.366834 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cdb8qw"] Jan 29 08:18:05 crc kubenswrapper[5017]: E0129 08:18:05.367783 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f7ca3a2-165a-4d12-b5c2-eabe09f9c43c" containerName="extract-content" Jan 29 08:18:05 crc kubenswrapper[5017]: I0129 08:18:05.367798 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f7ca3a2-165a-4d12-b5c2-eabe09f9c43c" containerName="extract-content" Jan 29 08:18:05 crc kubenswrapper[5017]: E0129 08:18:05.367828 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f7ca3a2-165a-4d12-b5c2-eabe09f9c43c" containerName="extract-utilities" Jan 29 08:18:05 crc kubenswrapper[5017]: I0129 08:18:05.367834 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f7ca3a2-165a-4d12-b5c2-eabe09f9c43c" containerName="extract-utilities" Jan 29 08:18:05 crc kubenswrapper[5017]: E0129 08:18:05.367844 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c1be082-d2ef-47c9-8287-c09eb9995524" containerName="init" Jan 29 08:18:05 crc kubenswrapper[5017]: I0129 08:18:05.367850 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c1be082-d2ef-47c9-8287-c09eb9995524" containerName="init" Jan 29 08:18:05 crc kubenswrapper[5017]: E0129 08:18:05.367876 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f7ca3a2-165a-4d12-b5c2-eabe09f9c43c" containerName="registry-server" Jan 29 08:18:05 crc kubenswrapper[5017]: I0129 08:18:05.367882 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f7ca3a2-165a-4d12-b5c2-eabe09f9c43c" containerName="registry-server" Jan 29 08:18:05 crc 
kubenswrapper[5017]: E0129 08:18:05.367891 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c1be082-d2ef-47c9-8287-c09eb9995524" containerName="dnsmasq-dns" Jan 29 08:18:05 crc kubenswrapper[5017]: I0129 08:18:05.367900 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c1be082-d2ef-47c9-8287-c09eb9995524" containerName="dnsmasq-dns" Jan 29 08:18:05 crc kubenswrapper[5017]: I0129 08:18:05.369121 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f7ca3a2-165a-4d12-b5c2-eabe09f9c43c" containerName="registry-server" Jan 29 08:18:05 crc kubenswrapper[5017]: I0129 08:18:05.369148 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c1be082-d2ef-47c9-8287-c09eb9995524" containerName="dnsmasq-dns" Jan 29 08:18:05 crc kubenswrapper[5017]: I0129 08:18:05.369991 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cdb8qw" Jan 29 08:18:05 crc kubenswrapper[5017]: I0129 08:18:05.372037 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-8fskm" Jan 29 08:18:05 crc kubenswrapper[5017]: I0129 08:18:05.373884 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Jan 29 08:18:05 crc kubenswrapper[5017]: I0129 08:18:05.374107 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Jan 29 08:18:05 crc kubenswrapper[5017]: I0129 08:18:05.380944 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cdb8qw"] Jan 29 08:18:05 crc kubenswrapper[5017]: I0129 08:18:05.385420 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 08:18:05 crc kubenswrapper[5017]: I0129 08:18:05.562984 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0bba6639-7539-4a6f-b045-7cbf1679c047-ceph\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cdb8qw\" (UID: \"0bba6639-7539-4a6f-b045-7cbf1679c047\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cdb8qw" Jan 29 08:18:05 crc kubenswrapper[5017]: I0129 08:18:05.563161 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bba6639-7539-4a6f-b045-7cbf1679c047-pre-adoption-validation-combined-ca-bundle\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cdb8qw\" (UID: \"0bba6639-7539-4a6f-b045-7cbf1679c047\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cdb8qw" Jan 29 08:18:05 crc kubenswrapper[5017]: I0129 08:18:05.563235 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8cxk\" (UniqueName: \"kubernetes.io/projected/0bba6639-7539-4a6f-b045-7cbf1679c047-kube-api-access-g8cxk\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cdb8qw\" (UID: \"0bba6639-7539-4a6f-b045-7cbf1679c047\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cdb8qw" Jan 29 08:18:05 crc kubenswrapper[5017]: I0129 08:18:05.563348 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/0bba6639-7539-4a6f-b045-7cbf1679c047-inventory\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cdb8qw\" (UID: \"0bba6639-7539-4a6f-b045-7cbf1679c047\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cdb8qw" Jan 29 08:18:05 crc kubenswrapper[5017]: I0129 08:18:05.563413 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/0bba6639-7539-4a6f-b045-7cbf1679c047-ssh-key-openstack-cell1\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cdb8qw\" (UID: \"0bba6639-7539-4a6f-b045-7cbf1679c047\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cdb8qw" Jan 29 08:18:05 crc kubenswrapper[5017]: I0129 08:18:05.665590 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0bba6639-7539-4a6f-b045-7cbf1679c047-inventory\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cdb8qw\" (UID: \"0bba6639-7539-4a6f-b045-7cbf1679c047\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cdb8qw" Jan 29 08:18:05 crc kubenswrapper[5017]: I0129 08:18:05.665676 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/0bba6639-7539-4a6f-b045-7cbf1679c047-ssh-key-openstack-cell1\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cdb8qw\" (UID: \"0bba6639-7539-4a6f-b045-7cbf1679c047\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cdb8qw" Jan 29 08:18:05 crc kubenswrapper[5017]: I0129 08:18:05.665827 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0bba6639-7539-4a6f-b045-7cbf1679c047-ceph\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cdb8qw\" (UID: \"0bba6639-7539-4a6f-b045-7cbf1679c047\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cdb8qw" Jan 29 08:18:05 crc kubenswrapper[5017]: I0129 08:18:05.665908 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bba6639-7539-4a6f-b045-7cbf1679c047-pre-adoption-validation-combined-ca-bundle\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cdb8qw\" (UID: \"0bba6639-7539-4a6f-b045-7cbf1679c047\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cdb8qw" Jan 29 08:18:05 crc kubenswrapper[5017]: I0129 08:18:05.666000 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8cxk\" (UniqueName: \"kubernetes.io/projected/0bba6639-7539-4a6f-b045-7cbf1679c047-kube-api-access-g8cxk\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cdb8qw\" (UID: \"0bba6639-7539-4a6f-b045-7cbf1679c047\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cdb8qw" Jan 29 08:18:05 crc kubenswrapper[5017]: I0129 08:18:05.677775 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0bba6639-7539-4a6f-b045-7cbf1679c047-inventory\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cdb8qw\" (UID: \"0bba6639-7539-4a6f-b045-7cbf1679c047\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cdb8qw" Jan 29 
08:18:05 crc kubenswrapper[5017]: I0129 08:18:05.678036 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0bba6639-7539-4a6f-b045-7cbf1679c047-ceph\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cdb8qw\" (UID: \"0bba6639-7539-4a6f-b045-7cbf1679c047\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cdb8qw" Jan 29 08:18:05 crc kubenswrapper[5017]: I0129 08:18:05.678073 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/0bba6639-7539-4a6f-b045-7cbf1679c047-ssh-key-openstack-cell1\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cdb8qw\" (UID: \"0bba6639-7539-4a6f-b045-7cbf1679c047\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cdb8qw" Jan 29 08:18:05 crc kubenswrapper[5017]: I0129 08:18:05.678223 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bba6639-7539-4a6f-b045-7cbf1679c047-pre-adoption-validation-combined-ca-bundle\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cdb8qw\" (UID: \"0bba6639-7539-4a6f-b045-7cbf1679c047\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cdb8qw" Jan 29 08:18:05 crc kubenswrapper[5017]: I0129 08:18:05.682897 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8cxk\" (UniqueName: \"kubernetes.io/projected/0bba6639-7539-4a6f-b045-7cbf1679c047-kube-api-access-g8cxk\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cdb8qw\" (UID: \"0bba6639-7539-4a6f-b045-7cbf1679c047\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cdb8qw" Jan 29 08:18:05 crc kubenswrapper[5017]: I0129 08:18:05.698322 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cdb8qw" Jan 29 08:18:06 crc kubenswrapper[5017]: I0129 08:18:06.278007 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cdb8qw"] Jan 29 08:18:06 crc kubenswrapper[5017]: I0129 08:18:06.886493 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cdb8qw" event={"ID":"0bba6639-7539-4a6f-b045-7cbf1679c047","Type":"ContainerStarted","Data":"e94faa240d5074145331bcca1bddd02e51bd4278140c77fbb856f5f374e7085c"} Jan 29 08:18:15 crc kubenswrapper[5017]: I0129 08:18:15.644800 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 08:18:15 crc kubenswrapper[5017]: I0129 08:18:15.981600 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cdb8qw" event={"ID":"0bba6639-7539-4a6f-b045-7cbf1679c047","Type":"ContainerStarted","Data":"75a1f7e5d86079feb2a2f24b6b95f66846cc8028de0c19fd87b22dfd682cf188"} Jan 29 08:18:16 crc kubenswrapper[5017]: I0129 08:18:16.003492 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cdb8qw" podStartSLOduration=1.6466309670000001 podStartE2EDuration="11.00346963s" podCreationTimestamp="2026-01-29 08:18:05 +0000 UTC" firstStartedPulling="2026-01-29 08:18:06.284639092 +0000 UTC m=+6172.659086702" lastFinishedPulling="2026-01-29 08:18:15.641477755 +0000 UTC m=+6182.015925365" observedRunningTime="2026-01-29 08:18:15.999725839 +0000 UTC m=+6182.374173449" watchObservedRunningTime="2026-01-29 08:18:16.00346963 +0000 UTC m=+6182.377917240" Jan 29 08:18:26 crc kubenswrapper[5017]: I0129 08:18:26.539697 5017 patch_prober.go:28] interesting pod/machine-config-daemon-895pl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 08:18:26 crc kubenswrapper[5017]: I0129 08:18:26.540762 5017 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 08:18:26 crc kubenswrapper[5017]: I0129 08:18:26.540819 5017 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-895pl" Jan 29 08:18:26 crc kubenswrapper[5017]: I0129 08:18:26.542040 5017 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b8a6fb445ff6fc4c22fdd66d5c1512d5020f037de980ce43bbd2b23413814ebe"} pod="openshift-machine-config-operator/machine-config-daemon-895pl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 08:18:26 crc kubenswrapper[5017]: I0129 08:18:26.542104 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" 
containerID="cri-o://b8a6fb445ff6fc4c22fdd66d5c1512d5020f037de980ce43bbd2b23413814ebe" gracePeriod=600 Jan 29 08:18:27 crc kubenswrapper[5017]: I0129 08:18:27.103928 5017 generic.go:334] "Generic (PLEG): container finished" podID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerID="b8a6fb445ff6fc4c22fdd66d5c1512d5020f037de980ce43bbd2b23413814ebe" exitCode=0 Jan 29 08:18:27 crc kubenswrapper[5017]: I0129 08:18:27.103996 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-895pl" event={"ID":"2672ef63-7861-4c3d-a1b4-03cc9d18f8e2","Type":"ContainerDied","Data":"b8a6fb445ff6fc4c22fdd66d5c1512d5020f037de980ce43bbd2b23413814ebe"} Jan 29 08:18:27 crc kubenswrapper[5017]: I0129 08:18:27.104840 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-895pl" event={"ID":"2672ef63-7861-4c3d-a1b4-03cc9d18f8e2","Type":"ContainerStarted","Data":"b3f6d683b9ddd8211c77a02801929dca4ce4b5bac12993877b84ee19e179bb46"} Jan 29 08:18:27 crc kubenswrapper[5017]: I0129 08:18:27.104878 5017 scope.go:117] "RemoveContainer" containerID="45e4c349dc67bf5910f42f2a3df0c872cf61c8acd759f4f9b895eefc7afaf645" Jan 29 08:18:29 crc kubenswrapper[5017]: I0129 08:18:29.134808 5017 generic.go:334] "Generic (PLEG): container finished" podID="0bba6639-7539-4a6f-b045-7cbf1679c047" containerID="75a1f7e5d86079feb2a2f24b6b95f66846cc8028de0c19fd87b22dfd682cf188" exitCode=0 Jan 29 08:18:29 crc kubenswrapper[5017]: I0129 08:18:29.134898 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cdb8qw" event={"ID":"0bba6639-7539-4a6f-b045-7cbf1679c047","Type":"ContainerDied","Data":"75a1f7e5d86079feb2a2f24b6b95f66846cc8028de0c19fd87b22dfd682cf188"} Jan 29 08:18:30 crc kubenswrapper[5017]: I0129 08:18:30.637458 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cdb8qw" Jan 29 08:18:30 crc kubenswrapper[5017]: I0129 08:18:30.725771 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bba6639-7539-4a6f-b045-7cbf1679c047-pre-adoption-validation-combined-ca-bundle\") pod \"0bba6639-7539-4a6f-b045-7cbf1679c047\" (UID: \"0bba6639-7539-4a6f-b045-7cbf1679c047\") " Jan 29 08:18:30 crc kubenswrapper[5017]: I0129 08:18:30.725865 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/0bba6639-7539-4a6f-b045-7cbf1679c047-ssh-key-openstack-cell1\") pod \"0bba6639-7539-4a6f-b045-7cbf1679c047\" (UID: \"0bba6639-7539-4a6f-b045-7cbf1679c047\") " Jan 29 08:18:30 crc kubenswrapper[5017]: I0129 08:18:30.726039 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0bba6639-7539-4a6f-b045-7cbf1679c047-ceph\") pod \"0bba6639-7539-4a6f-b045-7cbf1679c047\" (UID: \"0bba6639-7539-4a6f-b045-7cbf1679c047\") " Jan 29 08:18:30 crc kubenswrapper[5017]: I0129 08:18:30.726178 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g8cxk\" (UniqueName: \"kubernetes.io/projected/0bba6639-7539-4a6f-b045-7cbf1679c047-kube-api-access-g8cxk\") pod \"0bba6639-7539-4a6f-b045-7cbf1679c047\" (UID: \"0bba6639-7539-4a6f-b045-7cbf1679c047\") " Jan 29 08:18:30 crc kubenswrapper[5017]: I0129 08:18:30.727503 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0bba6639-7539-4a6f-b045-7cbf1679c047-inventory\") pod \"0bba6639-7539-4a6f-b045-7cbf1679c047\" (UID: \"0bba6639-7539-4a6f-b045-7cbf1679c047\") " Jan 29 08:18:30 crc kubenswrapper[5017]: I0129 08:18:30.733669 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bba6639-7539-4a6f-b045-7cbf1679c047-pre-adoption-validation-combined-ca-bundle" (OuterVolumeSpecName: "pre-adoption-validation-combined-ca-bundle") pod "0bba6639-7539-4a6f-b045-7cbf1679c047" (UID: "0bba6639-7539-4a6f-b045-7cbf1679c047"). InnerVolumeSpecName "pre-adoption-validation-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:18:30 crc kubenswrapper[5017]: I0129 08:18:30.735623 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bba6639-7539-4a6f-b045-7cbf1679c047-kube-api-access-g8cxk" (OuterVolumeSpecName: "kube-api-access-g8cxk") pod "0bba6639-7539-4a6f-b045-7cbf1679c047" (UID: "0bba6639-7539-4a6f-b045-7cbf1679c047"). InnerVolumeSpecName "kube-api-access-g8cxk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:18:30 crc kubenswrapper[5017]: I0129 08:18:30.737856 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bba6639-7539-4a6f-b045-7cbf1679c047-ceph" (OuterVolumeSpecName: "ceph") pod "0bba6639-7539-4a6f-b045-7cbf1679c047" (UID: "0bba6639-7539-4a6f-b045-7cbf1679c047"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:18:30 crc kubenswrapper[5017]: I0129 08:18:30.769123 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bba6639-7539-4a6f-b045-7cbf1679c047-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "0bba6639-7539-4a6f-b045-7cbf1679c047" (UID: "0bba6639-7539-4a6f-b045-7cbf1679c047"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:18:30 crc kubenswrapper[5017]: I0129 08:18:30.769575 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bba6639-7539-4a6f-b045-7cbf1679c047-inventory" (OuterVolumeSpecName: "inventory") pod "0bba6639-7539-4a6f-b045-7cbf1679c047" (UID: "0bba6639-7539-4a6f-b045-7cbf1679c047"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:18:30 crc kubenswrapper[5017]: I0129 08:18:30.830991 5017 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0bba6639-7539-4a6f-b045-7cbf1679c047-inventory\") on node \"crc\" DevicePath \"\"" Jan 29 08:18:30 crc kubenswrapper[5017]: I0129 08:18:30.831032 5017 reconciler_common.go:293] "Volume detached for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bba6639-7539-4a6f-b045-7cbf1679c047-pre-adoption-validation-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:18:30 crc kubenswrapper[5017]: I0129 08:18:30.831046 5017 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/0bba6639-7539-4a6f-b045-7cbf1679c047-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 29 08:18:30 crc kubenswrapper[5017]: I0129 08:18:30.831057 5017 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0bba6639-7539-4a6f-b045-7cbf1679c047-ceph\") on node \"crc\" DevicePath \"\"" Jan 29 08:18:30 crc kubenswrapper[5017]: I0129 08:18:30.831065 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g8cxk\" (UniqueName: \"kubernetes.io/projected/0bba6639-7539-4a6f-b045-7cbf1679c047-kube-api-access-g8cxk\") on node \"crc\" DevicePath \"\"" Jan 29 08:18:31 crc kubenswrapper[5017]: I0129 08:18:31.155697 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cdb8qw" event={"ID":"0bba6639-7539-4a6f-b045-7cbf1679c047","Type":"ContainerDied","Data":"e94faa240d5074145331bcca1bddd02e51bd4278140c77fbb856f5f374e7085c"} Jan 29 08:18:31 crc kubenswrapper[5017]: I0129 08:18:31.156399 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e94faa240d5074145331bcca1bddd02e51bd4278140c77fbb856f5f374e7085c" Jan 29 08:18:31 crc kubenswrapper[5017]: I0129 08:18:31.155755 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cdb8qw" Jan 29 08:18:35 crc kubenswrapper[5017]: I0129 08:18:35.505545 5017 scope.go:117] "RemoveContainer" containerID="c31fabd799b8d668223c98e2dd65324b76482fcf60a22f80ae1c538731afd27b" Jan 29 08:18:35 crc kubenswrapper[5017]: I0129 08:18:35.558294 5017 scope.go:117] "RemoveContainer" containerID="cd74e50eb115f3784d30a55d5e24398991e4db24d37dba35cb5a3e374c5bb4ef" Jan 29 08:18:35 crc kubenswrapper[5017]: I0129 08:18:35.594313 5017 scope.go:117] "RemoveContainer" containerID="d1fa96f677f5701c04d5462894cbb0ac0be22effe05a00c5f1ba27a0bbd77eec" Jan 29 08:18:35 crc kubenswrapper[5017]: I0129 08:18:35.648363 5017 scope.go:117] "RemoveContainer" containerID="28c777e7a6deb5259673a98329cccb5c4fe3fc6b317c4c64fb390fb4eed9a135" Jan 29 08:18:35 crc kubenswrapper[5017]: I0129 08:18:35.700373 5017 scope.go:117] "RemoveContainer" containerID="693c136ebab65b99dbbf4a426932894ea584ae18f9eacf43afb80bb64cae65fe" Jan 29 08:18:38 crc kubenswrapper[5017]: I0129 08:18:38.294529 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-l7pkb"] Jan 29 08:18:38 crc kubenswrapper[5017]: E0129 08:18:38.295858 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bba6639-7539-4a6f-b045-7cbf1679c047" containerName="pre-adoption-validation-openstack-pre-adoption-openstack-cell1" Jan 29 08:18:38 crc kubenswrapper[5017]: I0129 08:18:38.295881 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bba6639-7539-4a6f-b045-7cbf1679c047" containerName="pre-adoption-validation-openstack-pre-adoption-openstack-cell1" Jan 29 08:18:38 crc kubenswrapper[5017]: I0129 08:18:38.296351 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bba6639-7539-4a6f-b045-7cbf1679c047" containerName="pre-adoption-validation-openstack-pre-adoption-openstack-cell1" Jan 29 08:18:38 crc kubenswrapper[5017]: I0129 08:18:38.297586 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-l7pkb" Jan 29 08:18:38 crc kubenswrapper[5017]: I0129 08:18:38.301025 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 08:18:38 crc kubenswrapper[5017]: I0129 08:18:38.302613 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Jan 29 08:18:38 crc kubenswrapper[5017]: I0129 08:18:38.302795 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Jan 29 08:18:38 crc kubenswrapper[5017]: I0129 08:18:38.303049 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-8fskm" Jan 29 08:18:38 crc kubenswrapper[5017]: I0129 08:18:38.316488 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-l7pkb"] Jan 29 08:18:38 crc kubenswrapper[5017]: I0129 08:18:38.347468 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/1c67db27-194c-43dd-ab29-0461e44ba417-ssh-key-openstack-cell1\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-l7pkb\" (UID: \"1c67db27-194c-43dd-ab29-0461e44ba417\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-l7pkb" Jan 29 08:18:38 crc kubenswrapper[5017]: I0129 08:18:38.347559 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c67db27-194c-43dd-ab29-0461e44ba417-tripleo-cleanup-combined-ca-bundle\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-l7pkb\" (UID: \"1c67db27-194c-43dd-ab29-0461e44ba417\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-l7pkb" Jan 29 08:18:38 crc kubenswrapper[5017]: I0129 08:18:38.348602 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1c67db27-194c-43dd-ab29-0461e44ba417-ceph\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-l7pkb\" (UID: \"1c67db27-194c-43dd-ab29-0461e44ba417\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-l7pkb" Jan 29 08:18:38 crc kubenswrapper[5017]: I0129 08:18:38.348723 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1c67db27-194c-43dd-ab29-0461e44ba417-inventory\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-l7pkb\" (UID: \"1c67db27-194c-43dd-ab29-0461e44ba417\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-l7pkb" Jan 29 08:18:38 crc kubenswrapper[5017]: I0129 08:18:38.348856 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qng4n\" (UniqueName: \"kubernetes.io/projected/1c67db27-194c-43dd-ab29-0461e44ba417-kube-api-access-qng4n\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-l7pkb\" (UID: \"1c67db27-194c-43dd-ab29-0461e44ba417\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-l7pkb" Jan 29 08:18:38 crc kubenswrapper[5017]: I0129 08:18:38.451109 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1c67db27-194c-43dd-ab29-0461e44ba417-ceph\") pod 
\"tripleo-cleanup-tripleo-cleanup-openstack-cell1-l7pkb\" (UID: \"1c67db27-194c-43dd-ab29-0461e44ba417\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-l7pkb" Jan 29 08:18:38 crc kubenswrapper[5017]: I0129 08:18:38.451182 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1c67db27-194c-43dd-ab29-0461e44ba417-inventory\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-l7pkb\" (UID: \"1c67db27-194c-43dd-ab29-0461e44ba417\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-l7pkb" Jan 29 08:18:38 crc kubenswrapper[5017]: I0129 08:18:38.451232 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qng4n\" (UniqueName: \"kubernetes.io/projected/1c67db27-194c-43dd-ab29-0461e44ba417-kube-api-access-qng4n\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-l7pkb\" (UID: \"1c67db27-194c-43dd-ab29-0461e44ba417\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-l7pkb" Jan 29 08:18:38 crc kubenswrapper[5017]: I0129 08:18:38.451284 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/1c67db27-194c-43dd-ab29-0461e44ba417-ssh-key-openstack-cell1\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-l7pkb\" (UID: \"1c67db27-194c-43dd-ab29-0461e44ba417\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-l7pkb" Jan 29 08:18:38 crc kubenswrapper[5017]: I0129 08:18:38.451308 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c67db27-194c-43dd-ab29-0461e44ba417-tripleo-cleanup-combined-ca-bundle\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-l7pkb\" (UID: \"1c67db27-194c-43dd-ab29-0461e44ba417\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-l7pkb" Jan 29 08:18:38 crc kubenswrapper[5017]: I0129 08:18:38.458824 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1c67db27-194c-43dd-ab29-0461e44ba417-inventory\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-l7pkb\" (UID: \"1c67db27-194c-43dd-ab29-0461e44ba417\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-l7pkb" Jan 29 08:18:38 crc kubenswrapper[5017]: I0129 08:18:38.459045 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c67db27-194c-43dd-ab29-0461e44ba417-tripleo-cleanup-combined-ca-bundle\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-l7pkb\" (UID: \"1c67db27-194c-43dd-ab29-0461e44ba417\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-l7pkb" Jan 29 08:18:38 crc kubenswrapper[5017]: I0129 08:18:38.459496 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1c67db27-194c-43dd-ab29-0461e44ba417-ceph\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-l7pkb\" (UID: \"1c67db27-194c-43dd-ab29-0461e44ba417\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-l7pkb" Jan 29 08:18:38 crc kubenswrapper[5017]: I0129 08:18:38.462790 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/1c67db27-194c-43dd-ab29-0461e44ba417-ssh-key-openstack-cell1\") pod 
\"tripleo-cleanup-tripleo-cleanup-openstack-cell1-l7pkb\" (UID: \"1c67db27-194c-43dd-ab29-0461e44ba417\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-l7pkb" Jan 29 08:18:38 crc kubenswrapper[5017]: I0129 08:18:38.485044 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qng4n\" (UniqueName: \"kubernetes.io/projected/1c67db27-194c-43dd-ab29-0461e44ba417-kube-api-access-qng4n\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-l7pkb\" (UID: \"1c67db27-194c-43dd-ab29-0461e44ba417\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-l7pkb" Jan 29 08:18:38 crc kubenswrapper[5017]: I0129 08:18:38.628640 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-l7pkb" Jan 29 08:18:39 crc kubenswrapper[5017]: I0129 08:18:39.210165 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-l7pkb"] Jan 29 08:18:39 crc kubenswrapper[5017]: I0129 08:18:39.275733 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-l7pkb" event={"ID":"1c67db27-194c-43dd-ab29-0461e44ba417","Type":"ContainerStarted","Data":"f4c956b4ae433e1bba64c9560ac03d8ffa8aa8043b790e6b8b933923faf8ccac"} Jan 29 08:18:40 crc kubenswrapper[5017]: I0129 08:18:40.287878 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-l7pkb" event={"ID":"1c67db27-194c-43dd-ab29-0461e44ba417","Type":"ContainerStarted","Data":"06650dd66c5480d487412e48f2909ed03d8b28b3315ededdecf387d02f058735"} Jan 29 08:18:40 crc kubenswrapper[5017]: I0129 08:18:40.327884 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-l7pkb" podStartSLOduration=1.893843519 podStartE2EDuration="2.327851183s" podCreationTimestamp="2026-01-29 08:18:38 +0000 UTC" firstStartedPulling="2026-01-29 08:18:39.219646492 +0000 UTC m=+6205.594094102" lastFinishedPulling="2026-01-29 08:18:39.653654156 +0000 UTC m=+6206.028101766" observedRunningTime="2026-01-29 08:18:40.313123578 +0000 UTC m=+6206.687571198" watchObservedRunningTime="2026-01-29 08:18:40.327851183 +0000 UTC m=+6206.702298793" Jan 29 08:18:47 crc kubenswrapper[5017]: I0129 08:18:47.399720 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-tdnmw"] Jan 29 08:18:47 crc kubenswrapper[5017]: I0129 08:18:47.403503 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tdnmw" Jan 29 08:18:47 crc kubenswrapper[5017]: I0129 08:18:47.427400 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tdnmw"] Jan 29 08:18:47 crc kubenswrapper[5017]: I0129 08:18:47.497366 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06bfa0d1-0c55-4150-9eb5-38a9fbf57ae2-catalog-content\") pod \"certified-operators-tdnmw\" (UID: \"06bfa0d1-0c55-4150-9eb5-38a9fbf57ae2\") " pod="openshift-marketplace/certified-operators-tdnmw" Jan 29 08:18:47 crc kubenswrapper[5017]: I0129 08:18:47.497431 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpcvf\" (UniqueName: \"kubernetes.io/projected/06bfa0d1-0c55-4150-9eb5-38a9fbf57ae2-kube-api-access-vpcvf\") pod \"certified-operators-tdnmw\" (UID: \"06bfa0d1-0c55-4150-9eb5-38a9fbf57ae2\") " pod="openshift-marketplace/certified-operators-tdnmw" Jan 29 08:18:47 crc kubenswrapper[5017]: I0129 08:18:47.497503 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06bfa0d1-0c55-4150-9eb5-38a9fbf57ae2-utilities\") pod \"certified-operators-tdnmw\" (UID: \"06bfa0d1-0c55-4150-9eb5-38a9fbf57ae2\") " pod="openshift-marketplace/certified-operators-tdnmw" Jan 29 08:18:47 crc kubenswrapper[5017]: I0129 08:18:47.600136 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06bfa0d1-0c55-4150-9eb5-38a9fbf57ae2-catalog-content\") pod \"certified-operators-tdnmw\" (UID: \"06bfa0d1-0c55-4150-9eb5-38a9fbf57ae2\") " pod="openshift-marketplace/certified-operators-tdnmw" Jan 29 08:18:47 crc kubenswrapper[5017]: I0129 08:18:47.600246 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vpcvf\" (UniqueName: \"kubernetes.io/projected/06bfa0d1-0c55-4150-9eb5-38a9fbf57ae2-kube-api-access-vpcvf\") pod \"certified-operators-tdnmw\" (UID: \"06bfa0d1-0c55-4150-9eb5-38a9fbf57ae2\") " pod="openshift-marketplace/certified-operators-tdnmw" Jan 29 08:18:47 crc kubenswrapper[5017]: I0129 08:18:47.600286 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06bfa0d1-0c55-4150-9eb5-38a9fbf57ae2-utilities\") pod \"certified-operators-tdnmw\" (UID: \"06bfa0d1-0c55-4150-9eb5-38a9fbf57ae2\") " pod="openshift-marketplace/certified-operators-tdnmw" Jan 29 08:18:47 crc kubenswrapper[5017]: I0129 08:18:47.600997 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06bfa0d1-0c55-4150-9eb5-38a9fbf57ae2-utilities\") pod \"certified-operators-tdnmw\" (UID: \"06bfa0d1-0c55-4150-9eb5-38a9fbf57ae2\") " pod="openshift-marketplace/certified-operators-tdnmw" Jan 29 08:18:47 crc kubenswrapper[5017]: I0129 08:18:47.600935 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06bfa0d1-0c55-4150-9eb5-38a9fbf57ae2-catalog-content\") pod \"certified-operators-tdnmw\" (UID: \"06bfa0d1-0c55-4150-9eb5-38a9fbf57ae2\") " pod="openshift-marketplace/certified-operators-tdnmw" Jan 29 08:18:47 crc kubenswrapper[5017]: I0129 08:18:47.634317 5017 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-vpcvf\" (UniqueName: \"kubernetes.io/projected/06bfa0d1-0c55-4150-9eb5-38a9fbf57ae2-kube-api-access-vpcvf\") pod \"certified-operators-tdnmw\" (UID: \"06bfa0d1-0c55-4150-9eb5-38a9fbf57ae2\") " pod="openshift-marketplace/certified-operators-tdnmw" Jan 29 08:18:47 crc kubenswrapper[5017]: I0129 08:18:47.727066 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tdnmw" Jan 29 08:18:48 crc kubenswrapper[5017]: I0129 08:18:48.273927 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tdnmw"] Jan 29 08:18:48 crc kubenswrapper[5017]: W0129 08:18:48.277822 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod06bfa0d1_0c55_4150_9eb5_38a9fbf57ae2.slice/crio-102614d5ac0a2ae14874e5e750491eddc40a91962e4aeb313d978e2115ef12dc WatchSource:0}: Error finding container 102614d5ac0a2ae14874e5e750491eddc40a91962e4aeb313d978e2115ef12dc: Status 404 returned error can't find the container with id 102614d5ac0a2ae14874e5e750491eddc40a91962e4aeb313d978e2115ef12dc Jan 29 08:18:48 crc kubenswrapper[5017]: I0129 08:18:48.384301 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tdnmw" event={"ID":"06bfa0d1-0c55-4150-9eb5-38a9fbf57ae2","Type":"ContainerStarted","Data":"102614d5ac0a2ae14874e5e750491eddc40a91962e4aeb313d978e2115ef12dc"} Jan 29 08:18:49 crc kubenswrapper[5017]: I0129 08:18:49.396305 5017 generic.go:334] "Generic (PLEG): container finished" podID="06bfa0d1-0c55-4150-9eb5-38a9fbf57ae2" containerID="77a6e945d0abf97c6a87fb82d0a81822acffe0d0074d27043c756080704f2225" exitCode=0 Jan 29 08:18:49 crc kubenswrapper[5017]: I0129 08:18:49.396413 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tdnmw" event={"ID":"06bfa0d1-0c55-4150-9eb5-38a9fbf57ae2","Type":"ContainerDied","Data":"77a6e945d0abf97c6a87fb82d0a81822acffe0d0074d27043c756080704f2225"} Jan 29 08:18:50 crc kubenswrapper[5017]: I0129 08:18:50.424004 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tdnmw" event={"ID":"06bfa0d1-0c55-4150-9eb5-38a9fbf57ae2","Type":"ContainerStarted","Data":"48cf5250f9d34d3034508f120c14efcb7044b4dec2f4cb40c4b129f54432c9c8"} Jan 29 08:18:52 crc kubenswrapper[5017]: I0129 08:18:52.443810 5017 generic.go:334] "Generic (PLEG): container finished" podID="06bfa0d1-0c55-4150-9eb5-38a9fbf57ae2" containerID="48cf5250f9d34d3034508f120c14efcb7044b4dec2f4cb40c4b129f54432c9c8" exitCode=0 Jan 29 08:18:52 crc kubenswrapper[5017]: I0129 08:18:52.443921 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tdnmw" event={"ID":"06bfa0d1-0c55-4150-9eb5-38a9fbf57ae2","Type":"ContainerDied","Data":"48cf5250f9d34d3034508f120c14efcb7044b4dec2f4cb40c4b129f54432c9c8"} Jan 29 08:18:53 crc kubenswrapper[5017]: I0129 08:18:53.472432 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tdnmw" event={"ID":"06bfa0d1-0c55-4150-9eb5-38a9fbf57ae2","Type":"ContainerStarted","Data":"e4fb89f1cfb5b412b2fab90ad9a9a631da47688803fb295e4e47202a0a254167"} Jan 29 08:18:53 crc kubenswrapper[5017]: I0129 08:18:53.503242 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-tdnmw" 
podStartSLOduration=3.034635843 podStartE2EDuration="6.503209218s" podCreationTimestamp="2026-01-29 08:18:47 +0000 UTC" firstStartedPulling="2026-01-29 08:18:49.398738811 +0000 UTC m=+6215.773186421" lastFinishedPulling="2026-01-29 08:18:52.867312196 +0000 UTC m=+6219.241759796" observedRunningTime="2026-01-29 08:18:53.496282701 +0000 UTC m=+6219.870730311" watchObservedRunningTime="2026-01-29 08:18:53.503209218 +0000 UTC m=+6219.877656828" Jan 29 08:18:57 crc kubenswrapper[5017]: I0129 08:18:57.728174 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-tdnmw" Jan 29 08:18:57 crc kubenswrapper[5017]: I0129 08:18:57.728985 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-tdnmw" Jan 29 08:18:57 crc kubenswrapper[5017]: I0129 08:18:57.796324 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-tdnmw" Jan 29 08:18:58 crc kubenswrapper[5017]: I0129 08:18:58.584500 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-tdnmw" Jan 29 08:18:58 crc kubenswrapper[5017]: I0129 08:18:58.664065 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tdnmw"] Jan 29 08:19:00 crc kubenswrapper[5017]: I0129 08:19:00.544423 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-tdnmw" podUID="06bfa0d1-0c55-4150-9eb5-38a9fbf57ae2" containerName="registry-server" containerID="cri-o://e4fb89f1cfb5b412b2fab90ad9a9a631da47688803fb295e4e47202a0a254167" gracePeriod=2 Jan 29 08:19:01 crc kubenswrapper[5017]: I0129 08:19:01.041092 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tdnmw" Jan 29 08:19:01 crc kubenswrapper[5017]: I0129 08:19:01.233328 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06bfa0d1-0c55-4150-9eb5-38a9fbf57ae2-catalog-content\") pod \"06bfa0d1-0c55-4150-9eb5-38a9fbf57ae2\" (UID: \"06bfa0d1-0c55-4150-9eb5-38a9fbf57ae2\") " Jan 29 08:19:01 crc kubenswrapper[5017]: I0129 08:19:01.233519 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06bfa0d1-0c55-4150-9eb5-38a9fbf57ae2-utilities\") pod \"06bfa0d1-0c55-4150-9eb5-38a9fbf57ae2\" (UID: \"06bfa0d1-0c55-4150-9eb5-38a9fbf57ae2\") " Jan 29 08:19:01 crc kubenswrapper[5017]: I0129 08:19:01.234863 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06bfa0d1-0c55-4150-9eb5-38a9fbf57ae2-utilities" (OuterVolumeSpecName: "utilities") pod "06bfa0d1-0c55-4150-9eb5-38a9fbf57ae2" (UID: "06bfa0d1-0c55-4150-9eb5-38a9fbf57ae2"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:19:01 crc kubenswrapper[5017]: I0129 08:19:01.235580 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vpcvf\" (UniqueName: \"kubernetes.io/projected/06bfa0d1-0c55-4150-9eb5-38a9fbf57ae2-kube-api-access-vpcvf\") pod \"06bfa0d1-0c55-4150-9eb5-38a9fbf57ae2\" (UID: \"06bfa0d1-0c55-4150-9eb5-38a9fbf57ae2\") " Jan 29 08:19:01 crc kubenswrapper[5017]: I0129 08:19:01.236709 5017 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06bfa0d1-0c55-4150-9eb5-38a9fbf57ae2-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 08:19:01 crc kubenswrapper[5017]: I0129 08:19:01.245392 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06bfa0d1-0c55-4150-9eb5-38a9fbf57ae2-kube-api-access-vpcvf" (OuterVolumeSpecName: "kube-api-access-vpcvf") pod "06bfa0d1-0c55-4150-9eb5-38a9fbf57ae2" (UID: "06bfa0d1-0c55-4150-9eb5-38a9fbf57ae2"). InnerVolumeSpecName "kube-api-access-vpcvf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:19:01 crc kubenswrapper[5017]: I0129 08:19:01.286825 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06bfa0d1-0c55-4150-9eb5-38a9fbf57ae2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "06bfa0d1-0c55-4150-9eb5-38a9fbf57ae2" (UID: "06bfa0d1-0c55-4150-9eb5-38a9fbf57ae2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:19:01 crc kubenswrapper[5017]: I0129 08:19:01.337816 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vpcvf\" (UniqueName: \"kubernetes.io/projected/06bfa0d1-0c55-4150-9eb5-38a9fbf57ae2-kube-api-access-vpcvf\") on node \"crc\" DevicePath \"\"" Jan 29 08:19:01 crc kubenswrapper[5017]: I0129 08:19:01.338498 5017 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06bfa0d1-0c55-4150-9eb5-38a9fbf57ae2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 08:19:01 crc kubenswrapper[5017]: I0129 08:19:01.557369 5017 generic.go:334] "Generic (PLEG): container finished" podID="06bfa0d1-0c55-4150-9eb5-38a9fbf57ae2" containerID="e4fb89f1cfb5b412b2fab90ad9a9a631da47688803fb295e4e47202a0a254167" exitCode=0 Jan 29 08:19:01 crc kubenswrapper[5017]: I0129 08:19:01.557429 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tdnmw" event={"ID":"06bfa0d1-0c55-4150-9eb5-38a9fbf57ae2","Type":"ContainerDied","Data":"e4fb89f1cfb5b412b2fab90ad9a9a631da47688803fb295e4e47202a0a254167"} Jan 29 08:19:01 crc kubenswrapper[5017]: I0129 08:19:01.557474 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tdnmw" event={"ID":"06bfa0d1-0c55-4150-9eb5-38a9fbf57ae2","Type":"ContainerDied","Data":"102614d5ac0a2ae14874e5e750491eddc40a91962e4aeb313d978e2115ef12dc"} Jan 29 08:19:01 crc kubenswrapper[5017]: I0129 08:19:01.557497 5017 scope.go:117] "RemoveContainer" containerID="e4fb89f1cfb5b412b2fab90ad9a9a631da47688803fb295e4e47202a0a254167" Jan 29 08:19:01 crc kubenswrapper[5017]: I0129 08:19:01.558062 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tdnmw" Jan 29 08:19:01 crc kubenswrapper[5017]: I0129 08:19:01.592088 5017 scope.go:117] "RemoveContainer" containerID="48cf5250f9d34d3034508f120c14efcb7044b4dec2f4cb40c4b129f54432c9c8" Jan 29 08:19:01 crc kubenswrapper[5017]: I0129 08:19:01.602370 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tdnmw"] Jan 29 08:19:01 crc kubenswrapper[5017]: I0129 08:19:01.614837 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-tdnmw"] Jan 29 08:19:01 crc kubenswrapper[5017]: I0129 08:19:01.615669 5017 scope.go:117] "RemoveContainer" containerID="77a6e945d0abf97c6a87fb82d0a81822acffe0d0074d27043c756080704f2225" Jan 29 08:19:01 crc kubenswrapper[5017]: I0129 08:19:01.667877 5017 scope.go:117] "RemoveContainer" containerID="e4fb89f1cfb5b412b2fab90ad9a9a631da47688803fb295e4e47202a0a254167" Jan 29 08:19:01 crc kubenswrapper[5017]: E0129 08:19:01.678403 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4fb89f1cfb5b412b2fab90ad9a9a631da47688803fb295e4e47202a0a254167\": container with ID starting with e4fb89f1cfb5b412b2fab90ad9a9a631da47688803fb295e4e47202a0a254167 not found: ID does not exist" containerID="e4fb89f1cfb5b412b2fab90ad9a9a631da47688803fb295e4e47202a0a254167" Jan 29 08:19:01 crc kubenswrapper[5017]: I0129 08:19:01.678459 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4fb89f1cfb5b412b2fab90ad9a9a631da47688803fb295e4e47202a0a254167"} err="failed to get container status \"e4fb89f1cfb5b412b2fab90ad9a9a631da47688803fb295e4e47202a0a254167\": rpc error: code = NotFound desc = could not find container \"e4fb89f1cfb5b412b2fab90ad9a9a631da47688803fb295e4e47202a0a254167\": container with ID starting with e4fb89f1cfb5b412b2fab90ad9a9a631da47688803fb295e4e47202a0a254167 not found: ID does not exist" Jan 29 08:19:01 crc kubenswrapper[5017]: I0129 08:19:01.678498 5017 scope.go:117] "RemoveContainer" containerID="48cf5250f9d34d3034508f120c14efcb7044b4dec2f4cb40c4b129f54432c9c8" Jan 29 08:19:01 crc kubenswrapper[5017]: E0129 08:19:01.679420 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48cf5250f9d34d3034508f120c14efcb7044b4dec2f4cb40c4b129f54432c9c8\": container with ID starting with 48cf5250f9d34d3034508f120c14efcb7044b4dec2f4cb40c4b129f54432c9c8 not found: ID does not exist" containerID="48cf5250f9d34d3034508f120c14efcb7044b4dec2f4cb40c4b129f54432c9c8" Jan 29 08:19:01 crc kubenswrapper[5017]: I0129 08:19:01.679447 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48cf5250f9d34d3034508f120c14efcb7044b4dec2f4cb40c4b129f54432c9c8"} err="failed to get container status \"48cf5250f9d34d3034508f120c14efcb7044b4dec2f4cb40c4b129f54432c9c8\": rpc error: code = NotFound desc = could not find container \"48cf5250f9d34d3034508f120c14efcb7044b4dec2f4cb40c4b129f54432c9c8\": container with ID starting with 48cf5250f9d34d3034508f120c14efcb7044b4dec2f4cb40c4b129f54432c9c8 not found: ID does not exist" Jan 29 08:19:01 crc kubenswrapper[5017]: I0129 08:19:01.679465 5017 scope.go:117] "RemoveContainer" containerID="77a6e945d0abf97c6a87fb82d0a81822acffe0d0074d27043c756080704f2225" Jan 29 08:19:01 crc kubenswrapper[5017]: E0129 08:19:01.679879 5017 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"77a6e945d0abf97c6a87fb82d0a81822acffe0d0074d27043c756080704f2225\": container with ID starting with 77a6e945d0abf97c6a87fb82d0a81822acffe0d0074d27043c756080704f2225 not found: ID does not exist" containerID="77a6e945d0abf97c6a87fb82d0a81822acffe0d0074d27043c756080704f2225" Jan 29 08:19:01 crc kubenswrapper[5017]: I0129 08:19:01.679934 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77a6e945d0abf97c6a87fb82d0a81822acffe0d0074d27043c756080704f2225"} err="failed to get container status \"77a6e945d0abf97c6a87fb82d0a81822acffe0d0074d27043c756080704f2225\": rpc error: code = NotFound desc = could not find container \"77a6e945d0abf97c6a87fb82d0a81822acffe0d0074d27043c756080704f2225\": container with ID starting with 77a6e945d0abf97c6a87fb82d0a81822acffe0d0074d27043c756080704f2225 not found: ID does not exist" Jan 29 08:19:02 crc kubenswrapper[5017]: I0129 08:19:02.508044 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06bfa0d1-0c55-4150-9eb5-38a9fbf57ae2" path="/var/lib/kubelet/pods/06bfa0d1-0c55-4150-9eb5-38a9fbf57ae2/volumes" Jan 29 08:20:26 crc kubenswrapper[5017]: I0129 08:20:26.542572 5017 patch_prober.go:28] interesting pod/machine-config-daemon-895pl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 08:20:26 crc kubenswrapper[5017]: I0129 08:20:26.543693 5017 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 08:20:55 crc kubenswrapper[5017]: I0129 08:20:55.042326 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-db-create-prg9k"] Jan 29 08:20:55 crc kubenswrapper[5017]: I0129 08:20:55.053567 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-db-create-prg9k"] Jan 29 08:20:56 crc kubenswrapper[5017]: I0129 08:20:56.025779 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-7614-account-create-update-7jpwv"] Jan 29 08:20:56 crc kubenswrapper[5017]: I0129 08:20:56.034118 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-7614-account-create-update-7jpwv"] Jan 29 08:20:56 crc kubenswrapper[5017]: I0129 08:20:56.328901 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ddccbfd-933b-453c-9c4c-091c2404f994" path="/var/lib/kubelet/pods/6ddccbfd-933b-453c-9c4c-091c2404f994/volumes" Jan 29 08:20:56 crc kubenswrapper[5017]: I0129 08:20:56.329748 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac0f87d4-8e6f-4b13-a018-66f3317394b1" path="/var/lib/kubelet/pods/ac0f87d4-8e6f-4b13-a018-66f3317394b1/volumes" Jan 29 08:20:56 crc kubenswrapper[5017]: I0129 08:20:56.539506 5017 patch_prober.go:28] interesting pod/machine-config-daemon-895pl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 08:20:56 crc kubenswrapper[5017]: I0129 08:20:56.539590 5017 prober.go:107] "Probe 
failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 08:21:01 crc kubenswrapper[5017]: I0129 08:21:01.039362 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-persistence-db-create-hztpq"] Jan 29 08:21:01 crc kubenswrapper[5017]: I0129 08:21:01.050682 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-persistence-db-create-hztpq"] Jan 29 08:21:02 crc kubenswrapper[5017]: I0129 08:21:02.332177 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58e53154-f239-4c66-a96b-0c32a3304e57" path="/var/lib/kubelet/pods/58e53154-f239-4c66-a96b-0c32a3304e57/volumes" Jan 29 08:21:03 crc kubenswrapper[5017]: I0129 08:21:03.036822 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-59ae-account-create-update-hwgd7"] Jan 29 08:21:03 crc kubenswrapper[5017]: I0129 08:21:03.046737 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-59ae-account-create-update-hwgd7"] Jan 29 08:21:04 crc kubenswrapper[5017]: I0129 08:21:04.332461 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="664cf5ff-9de5-45be-8778-fa2ac737b9a8" path="/var/lib/kubelet/pods/664cf5ff-9de5-45be-8778-fa2ac737b9a8/volumes" Jan 29 08:21:26 crc kubenswrapper[5017]: I0129 08:21:26.539125 5017 patch_prober.go:28] interesting pod/machine-config-daemon-895pl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 08:21:26 crc kubenswrapper[5017]: I0129 08:21:26.540066 5017 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 08:21:26 crc kubenswrapper[5017]: I0129 08:21:26.540129 5017 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-895pl" Jan 29 08:21:26 crc kubenswrapper[5017]: I0129 08:21:26.541428 5017 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b3f6d683b9ddd8211c77a02801929dca4ce4b5bac12993877b84ee19e179bb46"} pod="openshift-machine-config-operator/machine-config-daemon-895pl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 08:21:26 crc kubenswrapper[5017]: I0129 08:21:26.541480 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" containerID="cri-o://b3f6d683b9ddd8211c77a02801929dca4ce4b5bac12993877b84ee19e179bb46" gracePeriod=600 Jan 29 08:21:26 crc kubenswrapper[5017]: E0129 08:21:26.670537 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:21:27 crc kubenswrapper[5017]: I0129 08:21:27.191340 5017 generic.go:334] "Generic (PLEG): container finished" podID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerID="b3f6d683b9ddd8211c77a02801929dca4ce4b5bac12993877b84ee19e179bb46" exitCode=0 Jan 29 08:21:27 crc kubenswrapper[5017]: I0129 08:21:27.191423 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-895pl" event={"ID":"2672ef63-7861-4c3d-a1b4-03cc9d18f8e2","Type":"ContainerDied","Data":"b3f6d683b9ddd8211c77a02801929dca4ce4b5bac12993877b84ee19e179bb46"} Jan 29 08:21:27 crc kubenswrapper[5017]: I0129 08:21:27.191525 5017 scope.go:117] "RemoveContainer" containerID="b8a6fb445ff6fc4c22fdd66d5c1512d5020f037de980ce43bbd2b23413814ebe" Jan 29 08:21:27 crc kubenswrapper[5017]: I0129 08:21:27.192419 5017 scope.go:117] "RemoveContainer" containerID="b3f6d683b9ddd8211c77a02801929dca4ce4b5bac12993877b84ee19e179bb46" Jan 29 08:21:27 crc kubenswrapper[5017]: E0129 08:21:27.192841 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:21:35 crc kubenswrapper[5017]: I0129 08:21:35.967102 5017 scope.go:117] "RemoveContainer" containerID="0284053e0bcbf9c733af4caf2f2bcefcef7b91bfe8689ea43a82bb38362bd296" Jan 29 08:21:36 crc kubenswrapper[5017]: I0129 08:21:36.008591 5017 scope.go:117] "RemoveContainer" containerID="c4d2e5b9e1727f5c04e990b568b63b31e5b187b33b553df593fd911de1babc42" Jan 29 08:21:36 crc kubenswrapper[5017]: I0129 08:21:36.059180 5017 scope.go:117] "RemoveContainer" containerID="abb35b3863af14179d46b4fe45c3d612a5f0a84e52da36ccad47176957057857" Jan 29 08:21:36 crc kubenswrapper[5017]: I0129 08:21:36.121889 5017 scope.go:117] "RemoveContainer" containerID="208ab7815c4c5ec09b9043332779e1feeb5d9179bb36663f6afdbc2801f3a1b5" Jan 29 08:21:38 crc kubenswrapper[5017]: I0129 08:21:38.043691 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-db-sync-zk2s6"] Jan 29 08:21:38 crc kubenswrapper[5017]: I0129 08:21:38.055063 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-db-sync-zk2s6"] Jan 29 08:21:38 crc kubenswrapper[5017]: I0129 08:21:38.328113 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4abeeca4-8ab6-41ad-9aeb-9c00d087db86" path="/var/lib/kubelet/pods/4abeeca4-8ab6-41ad-9aeb-9c00d087db86/volumes" Jan 29 08:21:41 crc kubenswrapper[5017]: I0129 08:21:41.316312 5017 scope.go:117] "RemoveContainer" containerID="b3f6d683b9ddd8211c77a02801929dca4ce4b5bac12993877b84ee19e179bb46" Jan 29 08:21:41 crc kubenswrapper[5017]: E0129 08:21:41.317242 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:21:53 crc kubenswrapper[5017]: I0129 08:21:53.317049 5017 scope.go:117] "RemoveContainer" containerID="b3f6d683b9ddd8211c77a02801929dca4ce4b5bac12993877b84ee19e179bb46" Jan 29 08:21:53 crc kubenswrapper[5017]: E0129 08:21:53.318409 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:22:06 crc kubenswrapper[5017]: I0129 08:22:06.317365 5017 scope.go:117] "RemoveContainer" containerID="b3f6d683b9ddd8211c77a02801929dca4ce4b5bac12993877b84ee19e179bb46" Jan 29 08:22:06 crc kubenswrapper[5017]: E0129 08:22:06.318250 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:22:18 crc kubenswrapper[5017]: I0129 08:22:18.316912 5017 scope.go:117] "RemoveContainer" containerID="b3f6d683b9ddd8211c77a02801929dca4ce4b5bac12993877b84ee19e179bb46" Jan 29 08:22:18 crc kubenswrapper[5017]: E0129 08:22:18.318568 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:22:33 crc kubenswrapper[5017]: I0129 08:22:33.316800 5017 scope.go:117] "RemoveContainer" containerID="b3f6d683b9ddd8211c77a02801929dca4ce4b5bac12993877b84ee19e179bb46" Jan 29 08:22:33 crc kubenswrapper[5017]: E0129 08:22:33.318022 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:22:36 crc kubenswrapper[5017]: I0129 08:22:36.261770 5017 scope.go:117] "RemoveContainer" containerID="a21279174b1094ccf35eb8f890251820e04277d63e591bc68d15fefc0bdffe74" Jan 29 08:22:36 crc kubenswrapper[5017]: I0129 08:22:36.291044 5017 scope.go:117] "RemoveContainer" containerID="26760db5256f1972aa7a76449a9b3d494acc63a7cbd9fabb56d6c933f52d009e" Jan 29 08:22:37 crc kubenswrapper[5017]: I0129 08:22:37.275444 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mnps5"] Jan 29 08:22:37 crc kubenswrapper[5017]: E0129 08:22:37.277388 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06bfa0d1-0c55-4150-9eb5-38a9fbf57ae2" containerName="extract-utilities" Jan 29 08:22:37 
crc kubenswrapper[5017]: I0129 08:22:37.277420 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="06bfa0d1-0c55-4150-9eb5-38a9fbf57ae2" containerName="extract-utilities" Jan 29 08:22:37 crc kubenswrapper[5017]: E0129 08:22:37.277457 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06bfa0d1-0c55-4150-9eb5-38a9fbf57ae2" containerName="registry-server" Jan 29 08:22:37 crc kubenswrapper[5017]: I0129 08:22:37.277468 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="06bfa0d1-0c55-4150-9eb5-38a9fbf57ae2" containerName="registry-server" Jan 29 08:22:37 crc kubenswrapper[5017]: E0129 08:22:37.277492 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06bfa0d1-0c55-4150-9eb5-38a9fbf57ae2" containerName="extract-content" Jan 29 08:22:37 crc kubenswrapper[5017]: I0129 08:22:37.277500 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="06bfa0d1-0c55-4150-9eb5-38a9fbf57ae2" containerName="extract-content" Jan 29 08:22:37 crc kubenswrapper[5017]: I0129 08:22:37.277801 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="06bfa0d1-0c55-4150-9eb5-38a9fbf57ae2" containerName="registry-server" Jan 29 08:22:37 crc kubenswrapper[5017]: I0129 08:22:37.279928 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mnps5" Jan 29 08:22:37 crc kubenswrapper[5017]: I0129 08:22:37.290575 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mnps5"] Jan 29 08:22:37 crc kubenswrapper[5017]: I0129 08:22:37.346806 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpgqg\" (UniqueName: \"kubernetes.io/projected/51350b53-3655-496a-a862-ebbc165c92e4-kube-api-access-bpgqg\") pod \"redhat-marketplace-mnps5\" (UID: \"51350b53-3655-496a-a862-ebbc165c92e4\") " pod="openshift-marketplace/redhat-marketplace-mnps5" Jan 29 08:22:37 crc kubenswrapper[5017]: I0129 08:22:37.346928 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51350b53-3655-496a-a862-ebbc165c92e4-catalog-content\") pod \"redhat-marketplace-mnps5\" (UID: \"51350b53-3655-496a-a862-ebbc165c92e4\") " pod="openshift-marketplace/redhat-marketplace-mnps5" Jan 29 08:22:37 crc kubenswrapper[5017]: I0129 08:22:37.347144 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51350b53-3655-496a-a862-ebbc165c92e4-utilities\") pod \"redhat-marketplace-mnps5\" (UID: \"51350b53-3655-496a-a862-ebbc165c92e4\") " pod="openshift-marketplace/redhat-marketplace-mnps5" Jan 29 08:22:37 crc kubenswrapper[5017]: I0129 08:22:37.449337 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpgqg\" (UniqueName: \"kubernetes.io/projected/51350b53-3655-496a-a862-ebbc165c92e4-kube-api-access-bpgqg\") pod \"redhat-marketplace-mnps5\" (UID: \"51350b53-3655-496a-a862-ebbc165c92e4\") " pod="openshift-marketplace/redhat-marketplace-mnps5" Jan 29 08:22:37 crc kubenswrapper[5017]: I0129 08:22:37.449471 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51350b53-3655-496a-a862-ebbc165c92e4-catalog-content\") pod \"redhat-marketplace-mnps5\" (UID: \"51350b53-3655-496a-a862-ebbc165c92e4\") " 
pod="openshift-marketplace/redhat-marketplace-mnps5" Jan 29 08:22:37 crc kubenswrapper[5017]: I0129 08:22:37.449551 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51350b53-3655-496a-a862-ebbc165c92e4-utilities\") pod \"redhat-marketplace-mnps5\" (UID: \"51350b53-3655-496a-a862-ebbc165c92e4\") " pod="openshift-marketplace/redhat-marketplace-mnps5" Jan 29 08:22:37 crc kubenswrapper[5017]: I0129 08:22:37.450281 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51350b53-3655-496a-a862-ebbc165c92e4-catalog-content\") pod \"redhat-marketplace-mnps5\" (UID: \"51350b53-3655-496a-a862-ebbc165c92e4\") " pod="openshift-marketplace/redhat-marketplace-mnps5" Jan 29 08:22:37 crc kubenswrapper[5017]: I0129 08:22:37.451315 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51350b53-3655-496a-a862-ebbc165c92e4-utilities\") pod \"redhat-marketplace-mnps5\" (UID: \"51350b53-3655-496a-a862-ebbc165c92e4\") " pod="openshift-marketplace/redhat-marketplace-mnps5" Jan 29 08:22:37 crc kubenswrapper[5017]: I0129 08:22:37.470164 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpgqg\" (UniqueName: \"kubernetes.io/projected/51350b53-3655-496a-a862-ebbc165c92e4-kube-api-access-bpgqg\") pod \"redhat-marketplace-mnps5\" (UID: \"51350b53-3655-496a-a862-ebbc165c92e4\") " pod="openshift-marketplace/redhat-marketplace-mnps5" Jan 29 08:22:37 crc kubenswrapper[5017]: I0129 08:22:37.599399 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mnps5" Jan 29 08:22:38 crc kubenswrapper[5017]: I0129 08:22:38.102534 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mnps5"] Jan 29 08:22:38 crc kubenswrapper[5017]: I0129 08:22:38.951180 5017 generic.go:334] "Generic (PLEG): container finished" podID="51350b53-3655-496a-a862-ebbc165c92e4" containerID="4546123ed1a30d5f539178e54983ca497d5a843598cde9a541bdce834dbaa7bf" exitCode=0 Jan 29 08:22:38 crc kubenswrapper[5017]: I0129 08:22:38.951273 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mnps5" event={"ID":"51350b53-3655-496a-a862-ebbc165c92e4","Type":"ContainerDied","Data":"4546123ed1a30d5f539178e54983ca497d5a843598cde9a541bdce834dbaa7bf"} Jan 29 08:22:38 crc kubenswrapper[5017]: I0129 08:22:38.952053 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mnps5" event={"ID":"51350b53-3655-496a-a862-ebbc165c92e4","Type":"ContainerStarted","Data":"054f620eb39824431a329a23cc178659a5a376e3082ff77a7390b348bbd69667"} Jan 29 08:22:38 crc kubenswrapper[5017]: I0129 08:22:38.954140 5017 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 08:22:39 crc kubenswrapper[5017]: I0129 08:22:39.965170 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mnps5" event={"ID":"51350b53-3655-496a-a862-ebbc165c92e4","Type":"ContainerStarted","Data":"c75a535cb0692f959f135490ac7d1e079f228c13dda6f2042f3a1ddfd902f69c"} Jan 29 08:22:40 crc kubenswrapper[5017]: I0129 08:22:40.977452 5017 generic.go:334] "Generic (PLEG): container finished" podID="51350b53-3655-496a-a862-ebbc165c92e4" 
containerID="c75a535cb0692f959f135490ac7d1e079f228c13dda6f2042f3a1ddfd902f69c" exitCode=0 Jan 29 08:22:40 crc kubenswrapper[5017]: I0129 08:22:40.977556 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mnps5" event={"ID":"51350b53-3655-496a-a862-ebbc165c92e4","Type":"ContainerDied","Data":"c75a535cb0692f959f135490ac7d1e079f228c13dda6f2042f3a1ddfd902f69c"} Jan 29 08:22:41 crc kubenswrapper[5017]: I0129 08:22:41.991453 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mnps5" event={"ID":"51350b53-3655-496a-a862-ebbc165c92e4","Type":"ContainerStarted","Data":"a2f13f2fd7a21643791c6d2aea909bb6b6bafadbeb214c634751aa708a546bf9"} Jan 29 08:22:47 crc kubenswrapper[5017]: I0129 08:22:47.599732 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mnps5" Jan 29 08:22:47 crc kubenswrapper[5017]: I0129 08:22:47.600739 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mnps5" Jan 29 08:22:47 crc kubenswrapper[5017]: I0129 08:22:47.651468 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mnps5" Jan 29 08:22:47 crc kubenswrapper[5017]: I0129 08:22:47.680147 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mnps5" podStartSLOduration=8.264582883 podStartE2EDuration="10.680112208s" podCreationTimestamp="2026-01-29 08:22:37 +0000 UTC" firstStartedPulling="2026-01-29 08:22:38.953928249 +0000 UTC m=+6445.328375859" lastFinishedPulling="2026-01-29 08:22:41.369457574 +0000 UTC m=+6447.743905184" observedRunningTime="2026-01-29 08:22:42.009132387 +0000 UTC m=+6448.383579997" watchObservedRunningTime="2026-01-29 08:22:47.680112208 +0000 UTC m=+6454.054559818" Jan 29 08:22:48 crc kubenswrapper[5017]: I0129 08:22:48.130587 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mnps5" Jan 29 08:22:48 crc kubenswrapper[5017]: I0129 08:22:48.233573 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mnps5"] Jan 29 08:22:48 crc kubenswrapper[5017]: I0129 08:22:48.317243 5017 scope.go:117] "RemoveContainer" containerID="b3f6d683b9ddd8211c77a02801929dca4ce4b5bac12993877b84ee19e179bb46" Jan 29 08:22:48 crc kubenswrapper[5017]: E0129 08:22:48.317591 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:22:50 crc kubenswrapper[5017]: I0129 08:22:50.075899 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-mnps5" podUID="51350b53-3655-496a-a862-ebbc165c92e4" containerName="registry-server" containerID="cri-o://a2f13f2fd7a21643791c6d2aea909bb6b6bafadbeb214c634751aa708a546bf9" gracePeriod=2 Jan 29 08:22:50 crc kubenswrapper[5017]: I0129 08:22:50.565827 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mnps5" Jan 29 08:22:50 crc kubenswrapper[5017]: I0129 08:22:50.686930 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bpgqg\" (UniqueName: \"kubernetes.io/projected/51350b53-3655-496a-a862-ebbc165c92e4-kube-api-access-bpgqg\") pod \"51350b53-3655-496a-a862-ebbc165c92e4\" (UID: \"51350b53-3655-496a-a862-ebbc165c92e4\") " Jan 29 08:22:50 crc kubenswrapper[5017]: I0129 08:22:50.687175 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51350b53-3655-496a-a862-ebbc165c92e4-catalog-content\") pod \"51350b53-3655-496a-a862-ebbc165c92e4\" (UID: \"51350b53-3655-496a-a862-ebbc165c92e4\") " Jan 29 08:22:50 crc kubenswrapper[5017]: I0129 08:22:50.687308 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51350b53-3655-496a-a862-ebbc165c92e4-utilities\") pod \"51350b53-3655-496a-a862-ebbc165c92e4\" (UID: \"51350b53-3655-496a-a862-ebbc165c92e4\") " Jan 29 08:22:50 crc kubenswrapper[5017]: I0129 08:22:50.688511 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51350b53-3655-496a-a862-ebbc165c92e4-utilities" (OuterVolumeSpecName: "utilities") pod "51350b53-3655-496a-a862-ebbc165c92e4" (UID: "51350b53-3655-496a-a862-ebbc165c92e4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:22:50 crc kubenswrapper[5017]: I0129 08:22:50.694893 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51350b53-3655-496a-a862-ebbc165c92e4-kube-api-access-bpgqg" (OuterVolumeSpecName: "kube-api-access-bpgqg") pod "51350b53-3655-496a-a862-ebbc165c92e4" (UID: "51350b53-3655-496a-a862-ebbc165c92e4"). InnerVolumeSpecName "kube-api-access-bpgqg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:22:50 crc kubenswrapper[5017]: I0129 08:22:50.710964 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51350b53-3655-496a-a862-ebbc165c92e4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "51350b53-3655-496a-a862-ebbc165c92e4" (UID: "51350b53-3655-496a-a862-ebbc165c92e4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:22:50 crc kubenswrapper[5017]: I0129 08:22:50.790191 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bpgqg\" (UniqueName: \"kubernetes.io/projected/51350b53-3655-496a-a862-ebbc165c92e4-kube-api-access-bpgqg\") on node \"crc\" DevicePath \"\"" Jan 29 08:22:50 crc kubenswrapper[5017]: I0129 08:22:50.790225 5017 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51350b53-3655-496a-a862-ebbc165c92e4-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 08:22:50 crc kubenswrapper[5017]: I0129 08:22:50.790234 5017 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51350b53-3655-496a-a862-ebbc165c92e4-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 08:22:51 crc kubenswrapper[5017]: I0129 08:22:51.090006 5017 generic.go:334] "Generic (PLEG): container finished" podID="51350b53-3655-496a-a862-ebbc165c92e4" containerID="a2f13f2fd7a21643791c6d2aea909bb6b6bafadbeb214c634751aa708a546bf9" exitCode=0 Jan 29 08:22:51 crc kubenswrapper[5017]: I0129 08:22:51.090066 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mnps5" event={"ID":"51350b53-3655-496a-a862-ebbc165c92e4","Type":"ContainerDied","Data":"a2f13f2fd7a21643791c6d2aea909bb6b6bafadbeb214c634751aa708a546bf9"} Jan 29 08:22:51 crc kubenswrapper[5017]: I0129 08:22:51.090083 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mnps5" Jan 29 08:22:51 crc kubenswrapper[5017]: I0129 08:22:51.090106 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mnps5" event={"ID":"51350b53-3655-496a-a862-ebbc165c92e4","Type":"ContainerDied","Data":"054f620eb39824431a329a23cc178659a5a376e3082ff77a7390b348bbd69667"} Jan 29 08:22:51 crc kubenswrapper[5017]: I0129 08:22:51.090127 5017 scope.go:117] "RemoveContainer" containerID="a2f13f2fd7a21643791c6d2aea909bb6b6bafadbeb214c634751aa708a546bf9" Jan 29 08:22:51 crc kubenswrapper[5017]: I0129 08:22:51.126664 5017 scope.go:117] "RemoveContainer" containerID="c75a535cb0692f959f135490ac7d1e079f228c13dda6f2042f3a1ddfd902f69c" Jan 29 08:22:51 crc kubenswrapper[5017]: I0129 08:22:51.130205 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mnps5"] Jan 29 08:22:51 crc kubenswrapper[5017]: I0129 08:22:51.139842 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mnps5"] Jan 29 08:22:51 crc kubenswrapper[5017]: I0129 08:22:51.153571 5017 scope.go:117] "RemoveContainer" containerID="4546123ed1a30d5f539178e54983ca497d5a843598cde9a541bdce834dbaa7bf" Jan 29 08:22:51 crc kubenswrapper[5017]: I0129 08:22:51.211450 5017 scope.go:117] "RemoveContainer" containerID="a2f13f2fd7a21643791c6d2aea909bb6b6bafadbeb214c634751aa708a546bf9" Jan 29 08:22:51 crc kubenswrapper[5017]: E0129 08:22:51.212035 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2f13f2fd7a21643791c6d2aea909bb6b6bafadbeb214c634751aa708a546bf9\": container with ID starting with a2f13f2fd7a21643791c6d2aea909bb6b6bafadbeb214c634751aa708a546bf9 not found: ID does not exist" containerID="a2f13f2fd7a21643791c6d2aea909bb6b6bafadbeb214c634751aa708a546bf9" Jan 29 08:22:51 crc kubenswrapper[5017]: I0129 08:22:51.212097 5017 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2f13f2fd7a21643791c6d2aea909bb6b6bafadbeb214c634751aa708a546bf9"} err="failed to get container status \"a2f13f2fd7a21643791c6d2aea909bb6b6bafadbeb214c634751aa708a546bf9\": rpc error: code = NotFound desc = could not find container \"a2f13f2fd7a21643791c6d2aea909bb6b6bafadbeb214c634751aa708a546bf9\": container with ID starting with a2f13f2fd7a21643791c6d2aea909bb6b6bafadbeb214c634751aa708a546bf9 not found: ID does not exist" Jan 29 08:22:51 crc kubenswrapper[5017]: I0129 08:22:51.212138 5017 scope.go:117] "RemoveContainer" containerID="c75a535cb0692f959f135490ac7d1e079f228c13dda6f2042f3a1ddfd902f69c" Jan 29 08:22:51 crc kubenswrapper[5017]: E0129 08:22:51.212719 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c75a535cb0692f959f135490ac7d1e079f228c13dda6f2042f3a1ddfd902f69c\": container with ID starting with c75a535cb0692f959f135490ac7d1e079f228c13dda6f2042f3a1ddfd902f69c not found: ID does not exist" containerID="c75a535cb0692f959f135490ac7d1e079f228c13dda6f2042f3a1ddfd902f69c" Jan 29 08:22:51 crc kubenswrapper[5017]: I0129 08:22:51.212756 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c75a535cb0692f959f135490ac7d1e079f228c13dda6f2042f3a1ddfd902f69c"} err="failed to get container status \"c75a535cb0692f959f135490ac7d1e079f228c13dda6f2042f3a1ddfd902f69c\": rpc error: code = NotFound desc = could not find container \"c75a535cb0692f959f135490ac7d1e079f228c13dda6f2042f3a1ddfd902f69c\": container with ID starting with c75a535cb0692f959f135490ac7d1e079f228c13dda6f2042f3a1ddfd902f69c not found: ID does not exist" Jan 29 08:22:51 crc kubenswrapper[5017]: I0129 08:22:51.212778 5017 scope.go:117] "RemoveContainer" containerID="4546123ed1a30d5f539178e54983ca497d5a843598cde9a541bdce834dbaa7bf" Jan 29 08:22:51 crc kubenswrapper[5017]: E0129 08:22:51.213058 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4546123ed1a30d5f539178e54983ca497d5a843598cde9a541bdce834dbaa7bf\": container with ID starting with 4546123ed1a30d5f539178e54983ca497d5a843598cde9a541bdce834dbaa7bf not found: ID does not exist" containerID="4546123ed1a30d5f539178e54983ca497d5a843598cde9a541bdce834dbaa7bf" Jan 29 08:22:51 crc kubenswrapper[5017]: I0129 08:22:51.213080 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4546123ed1a30d5f539178e54983ca497d5a843598cde9a541bdce834dbaa7bf"} err="failed to get container status \"4546123ed1a30d5f539178e54983ca497d5a843598cde9a541bdce834dbaa7bf\": rpc error: code = NotFound desc = could not find container \"4546123ed1a30d5f539178e54983ca497d5a843598cde9a541bdce834dbaa7bf\": container with ID starting with 4546123ed1a30d5f539178e54983ca497d5a843598cde9a541bdce834dbaa7bf not found: ID does not exist" Jan 29 08:22:52 crc kubenswrapper[5017]: I0129 08:22:52.331572 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51350b53-3655-496a-a862-ebbc165c92e4" path="/var/lib/kubelet/pods/51350b53-3655-496a-a862-ebbc165c92e4/volumes" Jan 29 08:23:00 crc kubenswrapper[5017]: I0129 08:23:00.316770 5017 scope.go:117] "RemoveContainer" containerID="b3f6d683b9ddd8211c77a02801929dca4ce4b5bac12993877b84ee19e179bb46" Jan 29 08:23:00 crc kubenswrapper[5017]: E0129 08:23:00.318005 5017 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:23:11 crc kubenswrapper[5017]: I0129 08:23:11.316291 5017 scope.go:117] "RemoveContainer" containerID="b3f6d683b9ddd8211c77a02801929dca4ce4b5bac12993877b84ee19e179bb46" Jan 29 08:23:11 crc kubenswrapper[5017]: E0129 08:23:11.317269 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:23:12 crc kubenswrapper[5017]: I0129 08:23:12.047603 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-a5b8-account-create-update-lp9xd"] Jan 29 08:23:12 crc kubenswrapper[5017]: I0129 08:23:12.058656 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-create-dxdg2"] Jan 29 08:23:12 crc kubenswrapper[5017]: I0129 08:23:12.067768 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-create-dxdg2"] Jan 29 08:23:12 crc kubenswrapper[5017]: I0129 08:23:12.076552 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-a5b8-account-create-update-lp9xd"] Jan 29 08:23:12 crc kubenswrapper[5017]: I0129 08:23:12.333997 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e912e972-f106-4132-b64c-ef779807fe93" path="/var/lib/kubelet/pods/e912e972-f106-4132-b64c-ef779807fe93/volumes" Jan 29 08:23:12 crc kubenswrapper[5017]: I0129 08:23:12.334796 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb1c4bc9-2a12-45c6-9006-a7c1fef92eb5" path="/var/lib/kubelet/pods/eb1c4bc9-2a12-45c6-9006-a7c1fef92eb5/volumes" Jan 29 08:23:25 crc kubenswrapper[5017]: I0129 08:23:25.317734 5017 scope.go:117] "RemoveContainer" containerID="b3f6d683b9ddd8211c77a02801929dca4ce4b5bac12993877b84ee19e179bb46" Jan 29 08:23:25 crc kubenswrapper[5017]: E0129 08:23:25.318707 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:23:27 crc kubenswrapper[5017]: I0129 08:23:27.070851 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-jh74k"] Jan 29 08:23:27 crc kubenswrapper[5017]: I0129 08:23:27.081426 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-jh74k"] Jan 29 08:23:28 crc kubenswrapper[5017]: I0129 08:23:28.332594 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0efccca-8bbf-4612-8c12-1508bdb868cc" path="/var/lib/kubelet/pods/d0efccca-8bbf-4612-8c12-1508bdb868cc/volumes" Jan 29 08:23:36 crc kubenswrapper[5017]: I0129 08:23:36.415091 5017 scope.go:117] "RemoveContainer" 
containerID="7457ea8d56578ac81359989230e730e6ac34800d0fc802aed6aad64548b636fd" Jan 29 08:23:36 crc kubenswrapper[5017]: I0129 08:23:36.456547 5017 scope.go:117] "RemoveContainer" containerID="4d18d43381e36b6613d4c6d3faa3255024bad0401fc40ea0052af544b28ea5f9" Jan 29 08:23:36 crc kubenswrapper[5017]: I0129 08:23:36.531107 5017 scope.go:117] "RemoveContainer" containerID="f54af1d9f040a524032cf7e43f1f957baad75b9647a4d38733efafc121d209b8" Jan 29 08:23:37 crc kubenswrapper[5017]: I0129 08:23:37.316919 5017 scope.go:117] "RemoveContainer" containerID="b3f6d683b9ddd8211c77a02801929dca4ce4b5bac12993877b84ee19e179bb46" Jan 29 08:23:37 crc kubenswrapper[5017]: E0129 08:23:37.317710 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:23:51 crc kubenswrapper[5017]: I0129 08:23:51.316766 5017 scope.go:117] "RemoveContainer" containerID="b3f6d683b9ddd8211c77a02801929dca4ce4b5bac12993877b84ee19e179bb46" Jan 29 08:23:51 crc kubenswrapper[5017]: E0129 08:23:51.317791 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:24:05 crc kubenswrapper[5017]: I0129 08:24:05.316715 5017 scope.go:117] "RemoveContainer" containerID="b3f6d683b9ddd8211c77a02801929dca4ce4b5bac12993877b84ee19e179bb46" Jan 29 08:24:05 crc kubenswrapper[5017]: E0129 08:24:05.319945 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:24:17 crc kubenswrapper[5017]: I0129 08:24:17.316066 5017 scope.go:117] "RemoveContainer" containerID="b3f6d683b9ddd8211c77a02801929dca4ce4b5bac12993877b84ee19e179bb46" Jan 29 08:24:17 crc kubenswrapper[5017]: E0129 08:24:17.316802 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:24:32 crc kubenswrapper[5017]: I0129 08:24:32.316517 5017 scope.go:117] "RemoveContainer" containerID="b3f6d683b9ddd8211c77a02801929dca4ce4b5bac12993877b84ee19e179bb46" Jan 29 08:24:32 crc kubenswrapper[5017]: E0129 08:24:32.317684 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:24:43 crc kubenswrapper[5017]: I0129 08:24:43.316546 5017 scope.go:117] "RemoveContainer" containerID="b3f6d683b9ddd8211c77a02801929dca4ce4b5bac12993877b84ee19e179bb46" Jan 29 08:24:43 crc kubenswrapper[5017]: E0129 08:24:43.317723 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:24:54 crc kubenswrapper[5017]: I0129 08:24:54.330400 5017 scope.go:117] "RemoveContainer" containerID="b3f6d683b9ddd8211c77a02801929dca4ce4b5bac12993877b84ee19e179bb46" Jan 29 08:24:54 crc kubenswrapper[5017]: E0129 08:24:54.333068 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:25:07 crc kubenswrapper[5017]: I0129 08:25:07.319068 5017 scope.go:117] "RemoveContainer" containerID="b3f6d683b9ddd8211c77a02801929dca4ce4b5bac12993877b84ee19e179bb46" Jan 29 08:25:07 crc kubenswrapper[5017]: E0129 08:25:07.320369 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:25:19 crc kubenswrapper[5017]: I0129 08:25:19.318304 5017 scope.go:117] "RemoveContainer" containerID="b3f6d683b9ddd8211c77a02801929dca4ce4b5bac12993877b84ee19e179bb46" Jan 29 08:25:19 crc kubenswrapper[5017]: E0129 08:25:19.319933 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:25:30 crc kubenswrapper[5017]: I0129 08:25:30.316908 5017 scope.go:117] "RemoveContainer" containerID="b3f6d683b9ddd8211c77a02801929dca4ce4b5bac12993877b84ee19e179bb46" Jan 29 08:25:30 crc kubenswrapper[5017]: E0129 08:25:30.318370 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:25:37 crc kubenswrapper[5017]: I0129 08:25:37.049289 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-5f20-account-create-update-967mb"] Jan 29 08:25:37 crc kubenswrapper[5017]: I0129 08:25:37.060314 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-create-nb7cn"] Jan 29 08:25:37 crc kubenswrapper[5017]: I0129 08:25:37.072439 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-5f20-account-create-update-967mb"] Jan 29 08:25:37 crc kubenswrapper[5017]: I0129 08:25:37.081611 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-create-nb7cn"] Jan 29 08:25:38 crc kubenswrapper[5017]: I0129 08:25:38.333697 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13184a95-812b-4284-af60-4bd58429a08a" path="/var/lib/kubelet/pods/13184a95-812b-4284-af60-4bd58429a08a/volumes" Jan 29 08:25:38 crc kubenswrapper[5017]: I0129 08:25:38.334931 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4113968e-d27d-4a51-841d-7721ffb477ad" path="/var/lib/kubelet/pods/4113968e-d27d-4a51-841d-7721ffb477ad/volumes" Jan 29 08:25:45 crc kubenswrapper[5017]: I0129 08:25:45.317098 5017 scope.go:117] "RemoveContainer" containerID="b3f6d683b9ddd8211c77a02801929dca4ce4b5bac12993877b84ee19e179bb46" Jan 29 08:25:45 crc kubenswrapper[5017]: E0129 08:25:45.319576 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:25:47 crc kubenswrapper[5017]: I0129 08:25:47.031337 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-b4hhk"] Jan 29 08:25:47 crc kubenswrapper[5017]: I0129 08:25:47.043294 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-b4hhk"] Jan 29 08:25:48 crc kubenswrapper[5017]: I0129 08:25:48.350106 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eaf64798-e554-4c94-b5f1-2ee6a88852f4" path="/var/lib/kubelet/pods/eaf64798-e554-4c94-b5f1-2ee6a88852f4/volumes" Jan 29 08:25:57 crc kubenswrapper[5017]: I0129 08:25:57.316701 5017 scope.go:117] "RemoveContainer" containerID="b3f6d683b9ddd8211c77a02801929dca4ce4b5bac12993877b84ee19e179bb46" Jan 29 08:25:57 crc kubenswrapper[5017]: E0129 08:25:57.317975 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:26:09 crc kubenswrapper[5017]: I0129 08:26:09.316309 5017 scope.go:117] "RemoveContainer" containerID="b3f6d683b9ddd8211c77a02801929dca4ce4b5bac12993877b84ee19e179bb46" Jan 29 08:26:09 crc kubenswrapper[5017]: E0129 08:26:09.317416 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:26:11 crc kubenswrapper[5017]: I0129 08:26:11.048716 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-24d0-account-create-update-2tprl"] Jan 29 08:26:11 crc kubenswrapper[5017]: I0129 08:26:11.064272 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-db-create-flh7f"] Jan 29 08:26:11 crc kubenswrapper[5017]: I0129 08:26:11.074310 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-24d0-account-create-update-2tprl"] Jan 29 08:26:11 crc kubenswrapper[5017]: I0129 08:26:11.084033 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-db-create-flh7f"] Jan 29 08:26:12 crc kubenswrapper[5017]: I0129 08:26:12.368044 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="429ded21-89c8-40cf-b233-90403c09606f" path="/var/lib/kubelet/pods/429ded21-89c8-40cf-b233-90403c09606f/volumes" Jan 29 08:26:12 crc kubenswrapper[5017]: I0129 08:26:12.369542 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6d4fe95-8d0c-4b4f-84ea-b2b1338f5bf2" path="/var/lib/kubelet/pods/b6d4fe95-8d0c-4b4f-84ea-b2b1338f5bf2/volumes" Jan 29 08:26:24 crc kubenswrapper[5017]: I0129 08:26:24.035835 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-db-sync-jkbkt"] Jan 29 08:26:24 crc kubenswrapper[5017]: I0129 08:26:24.048047 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-db-sync-jkbkt"] Jan 29 08:26:24 crc kubenswrapper[5017]: I0129 08:26:24.323671 5017 scope.go:117] "RemoveContainer" containerID="b3f6d683b9ddd8211c77a02801929dca4ce4b5bac12993877b84ee19e179bb46" Jan 29 08:26:24 crc kubenswrapper[5017]: E0129 08:26:24.324130 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:26:24 crc kubenswrapper[5017]: I0129 08:26:24.330582 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8eb5a3e4-977b-403a-962e-ab8e0178dca1" path="/var/lib/kubelet/pods/8eb5a3e4-977b-403a-962e-ab8e0178dca1/volumes" Jan 29 08:26:36 crc kubenswrapper[5017]: I0129 08:26:36.705291 5017 scope.go:117] "RemoveContainer" containerID="4bee9d54ae6f75e0c11ab77de6406971c610a4dd31a2b630a2cff754b42fc773" Jan 29 08:26:36 crc kubenswrapper[5017]: I0129 08:26:36.742738 5017 scope.go:117] "RemoveContainer" containerID="125368aa90d5fb8e2cc2937041e9d92ce6737cffdc1e24011a472523bf80fc9b" Jan 29 08:26:36 crc kubenswrapper[5017]: I0129 08:26:36.802760 5017 scope.go:117] "RemoveContainer" containerID="6ff2ca958022faa8333093925ca0c47c007210b4c567512b1614211a6292cdb7" Jan 29 08:26:36 crc kubenswrapper[5017]: I0129 08:26:36.835113 5017 scope.go:117] "RemoveContainer" containerID="c0f7ba883627784e07548bb496bc6ecfb460337d088844aa82b0d76757539040" Jan 29 08:26:36 crc kubenswrapper[5017]: I0129 08:26:36.881216 5017 scope.go:117] "RemoveContainer" 
containerID="29f6fd19ddfccd3db9376b909ba749e4d4c301d50bfe9d13daaa4ab181411f86" Jan 29 08:26:36 crc kubenswrapper[5017]: I0129 08:26:36.934355 5017 scope.go:117] "RemoveContainer" containerID="cdc4141ac3a037eb465be46f0647da13cbb1802b07676fbf6247cc3b19a4fd5b" Jan 29 08:26:37 crc kubenswrapper[5017]: I0129 08:26:37.317418 5017 scope.go:117] "RemoveContainer" containerID="b3f6d683b9ddd8211c77a02801929dca4ce4b5bac12993877b84ee19e179bb46" Jan 29 08:26:37 crc kubenswrapper[5017]: I0129 08:26:37.504707 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-895pl" event={"ID":"2672ef63-7861-4c3d-a1b4-03cc9d18f8e2","Type":"ContainerStarted","Data":"d088b7157d2d30237d05e92bd39f814bd550db374b4ca0b407df76504036dcce"} Jan 29 08:26:53 crc kubenswrapper[5017]: I0129 08:26:53.994701 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hwm6f"] Jan 29 08:26:53 crc kubenswrapper[5017]: E0129 08:26:53.997143 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51350b53-3655-496a-a862-ebbc165c92e4" containerName="extract-utilities" Jan 29 08:26:53 crc kubenswrapper[5017]: I0129 08:26:53.997169 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="51350b53-3655-496a-a862-ebbc165c92e4" containerName="extract-utilities" Jan 29 08:26:53 crc kubenswrapper[5017]: E0129 08:26:53.997202 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51350b53-3655-496a-a862-ebbc165c92e4" containerName="registry-server" Jan 29 08:26:53 crc kubenswrapper[5017]: I0129 08:26:53.997209 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="51350b53-3655-496a-a862-ebbc165c92e4" containerName="registry-server" Jan 29 08:26:53 crc kubenswrapper[5017]: E0129 08:26:53.997220 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51350b53-3655-496a-a862-ebbc165c92e4" containerName="extract-content" Jan 29 08:26:53 crc kubenswrapper[5017]: I0129 08:26:53.997226 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="51350b53-3655-496a-a862-ebbc165c92e4" containerName="extract-content" Jan 29 08:26:53 crc kubenswrapper[5017]: I0129 08:26:53.997456 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="51350b53-3655-496a-a862-ebbc165c92e4" containerName="registry-server" Jan 29 08:26:54 crc kubenswrapper[5017]: I0129 08:26:54.005320 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hwm6f" Jan 29 08:26:54 crc kubenswrapper[5017]: I0129 08:26:54.051253 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hwm6f"] Jan 29 08:26:54 crc kubenswrapper[5017]: I0129 08:26:54.088991 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7cdee7d3-6192-40f7-a787-8c5242f4975d-catalog-content\") pod \"redhat-operators-hwm6f\" (UID: \"7cdee7d3-6192-40f7-a787-8c5242f4975d\") " pod="openshift-marketplace/redhat-operators-hwm6f" Jan 29 08:26:54 crc kubenswrapper[5017]: I0129 08:26:54.089113 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cx4ps\" (UniqueName: \"kubernetes.io/projected/7cdee7d3-6192-40f7-a787-8c5242f4975d-kube-api-access-cx4ps\") pod \"redhat-operators-hwm6f\" (UID: \"7cdee7d3-6192-40f7-a787-8c5242f4975d\") " pod="openshift-marketplace/redhat-operators-hwm6f" Jan 29 08:26:54 crc kubenswrapper[5017]: I0129 08:26:54.089306 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7cdee7d3-6192-40f7-a787-8c5242f4975d-utilities\") pod \"redhat-operators-hwm6f\" (UID: \"7cdee7d3-6192-40f7-a787-8c5242f4975d\") " pod="openshift-marketplace/redhat-operators-hwm6f" Jan 29 08:26:54 crc kubenswrapper[5017]: I0129 08:26:54.191708 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7cdee7d3-6192-40f7-a787-8c5242f4975d-utilities\") pod \"redhat-operators-hwm6f\" (UID: \"7cdee7d3-6192-40f7-a787-8c5242f4975d\") " pod="openshift-marketplace/redhat-operators-hwm6f" Jan 29 08:26:54 crc kubenswrapper[5017]: I0129 08:26:54.191828 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7cdee7d3-6192-40f7-a787-8c5242f4975d-catalog-content\") pod \"redhat-operators-hwm6f\" (UID: \"7cdee7d3-6192-40f7-a787-8c5242f4975d\") " pod="openshift-marketplace/redhat-operators-hwm6f" Jan 29 08:26:54 crc kubenswrapper[5017]: I0129 08:26:54.191881 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cx4ps\" (UniqueName: \"kubernetes.io/projected/7cdee7d3-6192-40f7-a787-8c5242f4975d-kube-api-access-cx4ps\") pod \"redhat-operators-hwm6f\" (UID: \"7cdee7d3-6192-40f7-a787-8c5242f4975d\") " pod="openshift-marketplace/redhat-operators-hwm6f" Jan 29 08:26:54 crc kubenswrapper[5017]: I0129 08:26:54.192376 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7cdee7d3-6192-40f7-a787-8c5242f4975d-utilities\") pod \"redhat-operators-hwm6f\" (UID: \"7cdee7d3-6192-40f7-a787-8c5242f4975d\") " pod="openshift-marketplace/redhat-operators-hwm6f" Jan 29 08:26:54 crc kubenswrapper[5017]: I0129 08:26:54.192498 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7cdee7d3-6192-40f7-a787-8c5242f4975d-catalog-content\") pod \"redhat-operators-hwm6f\" (UID: \"7cdee7d3-6192-40f7-a787-8c5242f4975d\") " pod="openshift-marketplace/redhat-operators-hwm6f" Jan 29 08:26:54 crc kubenswrapper[5017]: I0129 08:26:54.215114 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-cx4ps\" (UniqueName: \"kubernetes.io/projected/7cdee7d3-6192-40f7-a787-8c5242f4975d-kube-api-access-cx4ps\") pod \"redhat-operators-hwm6f\" (UID: \"7cdee7d3-6192-40f7-a787-8c5242f4975d\") " pod="openshift-marketplace/redhat-operators-hwm6f" Jan 29 08:26:54 crc kubenswrapper[5017]: I0129 08:26:54.362472 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hwm6f" Jan 29 08:26:54 crc kubenswrapper[5017]: I0129 08:26:54.880565 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hwm6f"] Jan 29 08:26:54 crc kubenswrapper[5017]: W0129 08:26:54.939868 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7cdee7d3_6192_40f7_a787_8c5242f4975d.slice/crio-fb94d653adeed14c180042bbe461857758a13bde85692b2cfe3bdb58d3de74d4 WatchSource:0}: Error finding container fb94d653adeed14c180042bbe461857758a13bde85692b2cfe3bdb58d3de74d4: Status 404 returned error can't find the container with id fb94d653adeed14c180042bbe461857758a13bde85692b2cfe3bdb58d3de74d4 Jan 29 08:26:55 crc kubenswrapper[5017]: I0129 08:26:55.705924 5017 generic.go:334] "Generic (PLEG): container finished" podID="7cdee7d3-6192-40f7-a787-8c5242f4975d" containerID="6930a7b13487410900d5f42b38a615b0a0263924c15d032a95217444dd1df937" exitCode=0 Jan 29 08:26:55 crc kubenswrapper[5017]: I0129 08:26:55.706018 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hwm6f" event={"ID":"7cdee7d3-6192-40f7-a787-8c5242f4975d","Type":"ContainerDied","Data":"6930a7b13487410900d5f42b38a615b0a0263924c15d032a95217444dd1df937"} Jan 29 08:26:55 crc kubenswrapper[5017]: I0129 08:26:55.706810 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hwm6f" event={"ID":"7cdee7d3-6192-40f7-a787-8c5242f4975d","Type":"ContainerStarted","Data":"fb94d653adeed14c180042bbe461857758a13bde85692b2cfe3bdb58d3de74d4"} Jan 29 08:26:56 crc kubenswrapper[5017]: I0129 08:26:56.719130 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hwm6f" event={"ID":"7cdee7d3-6192-40f7-a787-8c5242f4975d","Type":"ContainerStarted","Data":"02cfa274ac8002cbb71dc4d0e119823e1ad2f79aea773a613550e2e72ad9a029"} Jan 29 08:27:02 crc kubenswrapper[5017]: I0129 08:27:02.783338 5017 generic.go:334] "Generic (PLEG): container finished" podID="7cdee7d3-6192-40f7-a787-8c5242f4975d" containerID="02cfa274ac8002cbb71dc4d0e119823e1ad2f79aea773a613550e2e72ad9a029" exitCode=0 Jan 29 08:27:02 crc kubenswrapper[5017]: I0129 08:27:02.783428 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hwm6f" event={"ID":"7cdee7d3-6192-40f7-a787-8c5242f4975d","Type":"ContainerDied","Data":"02cfa274ac8002cbb71dc4d0e119823e1ad2f79aea773a613550e2e72ad9a029"} Jan 29 08:27:03 crc kubenswrapper[5017]: I0129 08:27:03.797228 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hwm6f" event={"ID":"7cdee7d3-6192-40f7-a787-8c5242f4975d","Type":"ContainerStarted","Data":"a19e4ca596936d59e9a31a121b7f6adda22b3dd7c52968d760815d511f4b5644"} Jan 29 08:27:03 crc kubenswrapper[5017]: I0129 08:27:03.835076 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hwm6f" podStartSLOduration=3.363419007 podStartE2EDuration="10.835043089s" 
podCreationTimestamp="2026-01-29 08:26:53 +0000 UTC" firstStartedPulling="2026-01-29 08:26:55.709301095 +0000 UTC m=+6702.083748705" lastFinishedPulling="2026-01-29 08:27:03.180925177 +0000 UTC m=+6709.555372787" observedRunningTime="2026-01-29 08:27:03.81849163 +0000 UTC m=+6710.192939250" watchObservedRunningTime="2026-01-29 08:27:03.835043089 +0000 UTC m=+6710.209490699" Jan 29 08:27:04 crc kubenswrapper[5017]: I0129 08:27:04.362837 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hwm6f" Jan 29 08:27:04 crc kubenswrapper[5017]: I0129 08:27:04.363372 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hwm6f" Jan 29 08:27:05 crc kubenswrapper[5017]: I0129 08:27:05.414941 5017 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hwm6f" podUID="7cdee7d3-6192-40f7-a787-8c5242f4975d" containerName="registry-server" probeResult="failure" output=< Jan 29 08:27:05 crc kubenswrapper[5017]: timeout: failed to connect service ":50051" within 1s Jan 29 08:27:05 crc kubenswrapper[5017]: > Jan 29 08:27:14 crc kubenswrapper[5017]: I0129 08:27:14.419175 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hwm6f" Jan 29 08:27:14 crc kubenswrapper[5017]: I0129 08:27:14.468779 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hwm6f" Jan 29 08:27:14 crc kubenswrapper[5017]: I0129 08:27:14.657004 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hwm6f"] Jan 29 08:27:15 crc kubenswrapper[5017]: I0129 08:27:15.946556 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-hwm6f" podUID="7cdee7d3-6192-40f7-a787-8c5242f4975d" containerName="registry-server" containerID="cri-o://a19e4ca596936d59e9a31a121b7f6adda22b3dd7c52968d760815d511f4b5644" gracePeriod=2 Jan 29 08:27:16 crc kubenswrapper[5017]: I0129 08:27:16.427586 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hwm6f" Jan 29 08:27:16 crc kubenswrapper[5017]: I0129 08:27:16.583837 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7cdee7d3-6192-40f7-a787-8c5242f4975d-utilities\") pod \"7cdee7d3-6192-40f7-a787-8c5242f4975d\" (UID: \"7cdee7d3-6192-40f7-a787-8c5242f4975d\") " Jan 29 08:27:16 crc kubenswrapper[5017]: I0129 08:27:16.584109 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cx4ps\" (UniqueName: \"kubernetes.io/projected/7cdee7d3-6192-40f7-a787-8c5242f4975d-kube-api-access-cx4ps\") pod \"7cdee7d3-6192-40f7-a787-8c5242f4975d\" (UID: \"7cdee7d3-6192-40f7-a787-8c5242f4975d\") " Jan 29 08:27:16 crc kubenswrapper[5017]: I0129 08:27:16.584203 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7cdee7d3-6192-40f7-a787-8c5242f4975d-catalog-content\") pod \"7cdee7d3-6192-40f7-a787-8c5242f4975d\" (UID: \"7cdee7d3-6192-40f7-a787-8c5242f4975d\") " Jan 29 08:27:16 crc kubenswrapper[5017]: I0129 08:27:16.584985 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7cdee7d3-6192-40f7-a787-8c5242f4975d-utilities" (OuterVolumeSpecName: "utilities") pod "7cdee7d3-6192-40f7-a787-8c5242f4975d" (UID: "7cdee7d3-6192-40f7-a787-8c5242f4975d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:27:16 crc kubenswrapper[5017]: I0129 08:27:16.590323 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cdee7d3-6192-40f7-a787-8c5242f4975d-kube-api-access-cx4ps" (OuterVolumeSpecName: "kube-api-access-cx4ps") pod "7cdee7d3-6192-40f7-a787-8c5242f4975d" (UID: "7cdee7d3-6192-40f7-a787-8c5242f4975d"). InnerVolumeSpecName "kube-api-access-cx4ps". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:27:16 crc kubenswrapper[5017]: I0129 08:27:16.686925 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cx4ps\" (UniqueName: \"kubernetes.io/projected/7cdee7d3-6192-40f7-a787-8c5242f4975d-kube-api-access-cx4ps\") on node \"crc\" DevicePath \"\"" Jan 29 08:27:16 crc kubenswrapper[5017]: I0129 08:27:16.686992 5017 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7cdee7d3-6192-40f7-a787-8c5242f4975d-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 08:27:16 crc kubenswrapper[5017]: I0129 08:27:16.716108 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7cdee7d3-6192-40f7-a787-8c5242f4975d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7cdee7d3-6192-40f7-a787-8c5242f4975d" (UID: "7cdee7d3-6192-40f7-a787-8c5242f4975d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:27:16 crc kubenswrapper[5017]: I0129 08:27:16.789622 5017 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7cdee7d3-6192-40f7-a787-8c5242f4975d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 08:27:16 crc kubenswrapper[5017]: I0129 08:27:16.958199 5017 generic.go:334] "Generic (PLEG): container finished" podID="7cdee7d3-6192-40f7-a787-8c5242f4975d" containerID="a19e4ca596936d59e9a31a121b7f6adda22b3dd7c52968d760815d511f4b5644" exitCode=0 Jan 29 08:27:16 crc kubenswrapper[5017]: I0129 08:27:16.958290 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hwm6f" Jan 29 08:27:16 crc kubenswrapper[5017]: I0129 08:27:16.958295 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hwm6f" event={"ID":"7cdee7d3-6192-40f7-a787-8c5242f4975d","Type":"ContainerDied","Data":"a19e4ca596936d59e9a31a121b7f6adda22b3dd7c52968d760815d511f4b5644"} Jan 29 08:27:16 crc kubenswrapper[5017]: I0129 08:27:16.960844 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hwm6f" event={"ID":"7cdee7d3-6192-40f7-a787-8c5242f4975d","Type":"ContainerDied","Data":"fb94d653adeed14c180042bbe461857758a13bde85692b2cfe3bdb58d3de74d4"} Jan 29 08:27:16 crc kubenswrapper[5017]: I0129 08:27:16.960969 5017 scope.go:117] "RemoveContainer" containerID="a19e4ca596936d59e9a31a121b7f6adda22b3dd7c52968d760815d511f4b5644" Jan 29 08:27:16 crc kubenswrapper[5017]: I0129 08:27:16.998157 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hwm6f"] Jan 29 08:27:16 crc kubenswrapper[5017]: I0129 08:27:16.998164 5017 scope.go:117] "RemoveContainer" containerID="02cfa274ac8002cbb71dc4d0e119823e1ad2f79aea773a613550e2e72ad9a029" Jan 29 08:27:17 crc kubenswrapper[5017]: I0129 08:27:17.014369 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-hwm6f"] Jan 29 08:27:17 crc kubenswrapper[5017]: I0129 08:27:17.031464 5017 scope.go:117] "RemoveContainer" containerID="6930a7b13487410900d5f42b38a615b0a0263924c15d032a95217444dd1df937" Jan 29 08:27:17 crc kubenswrapper[5017]: I0129 08:27:17.090853 5017 scope.go:117] "RemoveContainer" containerID="a19e4ca596936d59e9a31a121b7f6adda22b3dd7c52968d760815d511f4b5644" Jan 29 08:27:17 crc kubenswrapper[5017]: E0129 08:27:17.091541 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a19e4ca596936d59e9a31a121b7f6adda22b3dd7c52968d760815d511f4b5644\": container with ID starting with a19e4ca596936d59e9a31a121b7f6adda22b3dd7c52968d760815d511f4b5644 not found: ID does not exist" containerID="a19e4ca596936d59e9a31a121b7f6adda22b3dd7c52968d760815d511f4b5644" Jan 29 08:27:17 crc kubenswrapper[5017]: I0129 08:27:17.091585 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a19e4ca596936d59e9a31a121b7f6adda22b3dd7c52968d760815d511f4b5644"} err="failed to get container status \"a19e4ca596936d59e9a31a121b7f6adda22b3dd7c52968d760815d511f4b5644\": rpc error: code = NotFound desc = could not find container \"a19e4ca596936d59e9a31a121b7f6adda22b3dd7c52968d760815d511f4b5644\": container with ID starting with a19e4ca596936d59e9a31a121b7f6adda22b3dd7c52968d760815d511f4b5644 not found: ID does not exist" Jan 29 08:27:17 crc 
kubenswrapper[5017]: I0129 08:27:17.091621 5017 scope.go:117] "RemoveContainer" containerID="02cfa274ac8002cbb71dc4d0e119823e1ad2f79aea773a613550e2e72ad9a029" Jan 29 08:27:17 crc kubenswrapper[5017]: E0129 08:27:17.092438 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02cfa274ac8002cbb71dc4d0e119823e1ad2f79aea773a613550e2e72ad9a029\": container with ID starting with 02cfa274ac8002cbb71dc4d0e119823e1ad2f79aea773a613550e2e72ad9a029 not found: ID does not exist" containerID="02cfa274ac8002cbb71dc4d0e119823e1ad2f79aea773a613550e2e72ad9a029" Jan 29 08:27:17 crc kubenswrapper[5017]: I0129 08:27:17.092475 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02cfa274ac8002cbb71dc4d0e119823e1ad2f79aea773a613550e2e72ad9a029"} err="failed to get container status \"02cfa274ac8002cbb71dc4d0e119823e1ad2f79aea773a613550e2e72ad9a029\": rpc error: code = NotFound desc = could not find container \"02cfa274ac8002cbb71dc4d0e119823e1ad2f79aea773a613550e2e72ad9a029\": container with ID starting with 02cfa274ac8002cbb71dc4d0e119823e1ad2f79aea773a613550e2e72ad9a029 not found: ID does not exist" Jan 29 08:27:17 crc kubenswrapper[5017]: I0129 08:27:17.092497 5017 scope.go:117] "RemoveContainer" containerID="6930a7b13487410900d5f42b38a615b0a0263924c15d032a95217444dd1df937" Jan 29 08:27:17 crc kubenswrapper[5017]: E0129 08:27:17.092848 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6930a7b13487410900d5f42b38a615b0a0263924c15d032a95217444dd1df937\": container with ID starting with 6930a7b13487410900d5f42b38a615b0a0263924c15d032a95217444dd1df937 not found: ID does not exist" containerID="6930a7b13487410900d5f42b38a615b0a0263924c15d032a95217444dd1df937" Jan 29 08:27:17 crc kubenswrapper[5017]: I0129 08:27:17.092874 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6930a7b13487410900d5f42b38a615b0a0263924c15d032a95217444dd1df937"} err="failed to get container status \"6930a7b13487410900d5f42b38a615b0a0263924c15d032a95217444dd1df937\": rpc error: code = NotFound desc = could not find container \"6930a7b13487410900d5f42b38a615b0a0263924c15d032a95217444dd1df937\": container with ID starting with 6930a7b13487410900d5f42b38a615b0a0263924c15d032a95217444dd1df937 not found: ID does not exist" Jan 29 08:27:18 crc kubenswrapper[5017]: I0129 08:27:18.329021 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7cdee7d3-6192-40f7-a787-8c5242f4975d" path="/var/lib/kubelet/pods/7cdee7d3-6192-40f7-a787-8c5242f4975d/volumes" Jan 29 08:27:56 crc kubenswrapper[5017]: I0129 08:27:56.849170 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-p56p8"] Jan 29 08:27:56 crc kubenswrapper[5017]: E0129 08:27:56.850756 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cdee7d3-6192-40f7-a787-8c5242f4975d" containerName="registry-server" Jan 29 08:27:56 crc kubenswrapper[5017]: I0129 08:27:56.850788 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cdee7d3-6192-40f7-a787-8c5242f4975d" containerName="registry-server" Jan 29 08:27:56 crc kubenswrapper[5017]: E0129 08:27:56.850801 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cdee7d3-6192-40f7-a787-8c5242f4975d" containerName="extract-utilities" Jan 29 08:27:56 crc kubenswrapper[5017]: I0129 08:27:56.850809 5017 
state_mem.go:107] "Deleted CPUSet assignment" podUID="7cdee7d3-6192-40f7-a787-8c5242f4975d" containerName="extract-utilities" Jan 29 08:27:56 crc kubenswrapper[5017]: E0129 08:27:56.850854 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cdee7d3-6192-40f7-a787-8c5242f4975d" containerName="extract-content" Jan 29 08:27:56 crc kubenswrapper[5017]: I0129 08:27:56.850863 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cdee7d3-6192-40f7-a787-8c5242f4975d" containerName="extract-content" Jan 29 08:27:56 crc kubenswrapper[5017]: I0129 08:27:56.851160 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="7cdee7d3-6192-40f7-a787-8c5242f4975d" containerName="registry-server" Jan 29 08:27:56 crc kubenswrapper[5017]: I0129 08:27:56.853459 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p56p8" Jan 29 08:27:56 crc kubenswrapper[5017]: I0129 08:27:56.859495 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-p56p8"] Jan 29 08:27:56 crc kubenswrapper[5017]: I0129 08:27:56.953480 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b03b1ca6-dbd1-4bd5-b87d-14f667dac621-catalog-content\") pod \"community-operators-p56p8\" (UID: \"b03b1ca6-dbd1-4bd5-b87d-14f667dac621\") " pod="openshift-marketplace/community-operators-p56p8" Jan 29 08:27:56 crc kubenswrapper[5017]: I0129 08:27:56.953817 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b03b1ca6-dbd1-4bd5-b87d-14f667dac621-utilities\") pod \"community-operators-p56p8\" (UID: \"b03b1ca6-dbd1-4bd5-b87d-14f667dac621\") " pod="openshift-marketplace/community-operators-p56p8" Jan 29 08:27:56 crc kubenswrapper[5017]: I0129 08:27:56.954088 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4q4r\" (UniqueName: \"kubernetes.io/projected/b03b1ca6-dbd1-4bd5-b87d-14f667dac621-kube-api-access-l4q4r\") pod \"community-operators-p56p8\" (UID: \"b03b1ca6-dbd1-4bd5-b87d-14f667dac621\") " pod="openshift-marketplace/community-operators-p56p8" Jan 29 08:27:57 crc kubenswrapper[5017]: I0129 08:27:57.055813 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b03b1ca6-dbd1-4bd5-b87d-14f667dac621-utilities\") pod \"community-operators-p56p8\" (UID: \"b03b1ca6-dbd1-4bd5-b87d-14f667dac621\") " pod="openshift-marketplace/community-operators-p56p8" Jan 29 08:27:57 crc kubenswrapper[5017]: I0129 08:27:57.056012 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4q4r\" (UniqueName: \"kubernetes.io/projected/b03b1ca6-dbd1-4bd5-b87d-14f667dac621-kube-api-access-l4q4r\") pod \"community-operators-p56p8\" (UID: \"b03b1ca6-dbd1-4bd5-b87d-14f667dac621\") " pod="openshift-marketplace/community-operators-p56p8" Jan 29 08:27:57 crc kubenswrapper[5017]: I0129 08:27:57.056056 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b03b1ca6-dbd1-4bd5-b87d-14f667dac621-catalog-content\") pod \"community-operators-p56p8\" (UID: \"b03b1ca6-dbd1-4bd5-b87d-14f667dac621\") " pod="openshift-marketplace/community-operators-p56p8" Jan 29 08:27:57 crc 
kubenswrapper[5017]: I0129 08:27:57.056456 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b03b1ca6-dbd1-4bd5-b87d-14f667dac621-utilities\") pod \"community-operators-p56p8\" (UID: \"b03b1ca6-dbd1-4bd5-b87d-14f667dac621\") " pod="openshift-marketplace/community-operators-p56p8"
Jan 29 08:27:57 crc kubenswrapper[5017]: I0129 08:27:57.056599 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b03b1ca6-dbd1-4bd5-b87d-14f667dac621-catalog-content\") pod \"community-operators-p56p8\" (UID: \"b03b1ca6-dbd1-4bd5-b87d-14f667dac621\") " pod="openshift-marketplace/community-operators-p56p8"
Jan 29 08:27:57 crc kubenswrapper[5017]: I0129 08:27:57.080070 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4q4r\" (UniqueName: \"kubernetes.io/projected/b03b1ca6-dbd1-4bd5-b87d-14f667dac621-kube-api-access-l4q4r\") pod \"community-operators-p56p8\" (UID: \"b03b1ca6-dbd1-4bd5-b87d-14f667dac621\") " pod="openshift-marketplace/community-operators-p56p8"
Jan 29 08:27:57 crc kubenswrapper[5017]: I0129 08:27:57.218543 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p56p8"
Jan 29 08:27:57 crc kubenswrapper[5017]: I0129 08:27:57.748845 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-p56p8"]
Jan 29 08:27:58 crc kubenswrapper[5017]: I0129 08:27:58.374188 5017 generic.go:334] "Generic (PLEG): container finished" podID="b03b1ca6-dbd1-4bd5-b87d-14f667dac621" containerID="3376e69acfe1bb50151cd16fc605c19d733d98d1072fa196d27c8dfcb73e549b" exitCode=0
Jan 29 08:27:58 crc kubenswrapper[5017]: I0129 08:27:58.374320 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p56p8" event={"ID":"b03b1ca6-dbd1-4bd5-b87d-14f667dac621","Type":"ContainerDied","Data":"3376e69acfe1bb50151cd16fc605c19d733d98d1072fa196d27c8dfcb73e549b"}
Jan 29 08:27:58 crc kubenswrapper[5017]: I0129 08:27:58.374629 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p56p8" event={"ID":"b03b1ca6-dbd1-4bd5-b87d-14f667dac621","Type":"ContainerStarted","Data":"504eb98e70373bace88e07ac57bcc6894ef86acdea0631d48471d0aa96b6fd6f"}
Jan 29 08:27:58 crc kubenswrapper[5017]: I0129 08:27:58.377401 5017 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 29 08:27:59 crc kubenswrapper[5017]: I0129 08:27:59.389687 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p56p8" event={"ID":"b03b1ca6-dbd1-4bd5-b87d-14f667dac621","Type":"ContainerStarted","Data":"bb535d0fb80d5abb8990adbab9158049166700b4629211ff1bb31df93872dc7b"}
Jan 29 08:28:00 crc kubenswrapper[5017]: I0129 08:28:00.402065 5017 generic.go:334] "Generic (PLEG): container finished" podID="b03b1ca6-dbd1-4bd5-b87d-14f667dac621" containerID="bb535d0fb80d5abb8990adbab9158049166700b4629211ff1bb31df93872dc7b" exitCode=0
Jan 29 08:28:00 crc kubenswrapper[5017]: I0129 08:28:00.402163 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p56p8" event={"ID":"b03b1ca6-dbd1-4bd5-b87d-14f667dac621","Type":"ContainerDied","Data":"bb535d0fb80d5abb8990adbab9158049166700b4629211ff1bb31df93872dc7b"}
Jan 29 08:28:01 crc kubenswrapper[5017]: I0129 08:28:01.418397 5017
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p56p8" event={"ID":"b03b1ca6-dbd1-4bd5-b87d-14f667dac621","Type":"ContainerStarted","Data":"2dfe774f1eaf868fb52b9b9e4a49f0f44e06b0861c96009634a8680144c7791d"}
Jan 29 08:28:01 crc kubenswrapper[5017]: I0129 08:28:01.445449 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-p56p8" podStartSLOduration=2.9480715010000003 podStartE2EDuration="5.445414453s" podCreationTimestamp="2026-01-29 08:27:56 +0000 UTC" firstStartedPulling="2026-01-29 08:27:58.377056337 +0000 UTC m=+6764.751503957" lastFinishedPulling="2026-01-29 08:28:00.874399299 +0000 UTC m=+6767.248846909" observedRunningTime="2026-01-29 08:28:01.435995506 +0000 UTC m=+6767.810443116" watchObservedRunningTime="2026-01-29 08:28:01.445414453 +0000 UTC m=+6767.819862063"
Jan 29 08:28:07 crc kubenswrapper[5017]: I0129 08:28:07.220500 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-p56p8"
Jan 29 08:28:07 crc kubenswrapper[5017]: I0129 08:28:07.221496 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-p56p8"
Jan 29 08:28:07 crc kubenswrapper[5017]: I0129 08:28:07.271817 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-p56p8"
Jan 29 08:28:07 crc kubenswrapper[5017]: I0129 08:28:07.518383 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-p56p8"
Jan 29 08:28:07 crc kubenswrapper[5017]: I0129 08:28:07.565392 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-p56p8"]
Jan 29 08:28:09 crc kubenswrapper[5017]: I0129 08:28:09.487868 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-p56p8" podUID="b03b1ca6-dbd1-4bd5-b87d-14f667dac621" containerName="registry-server" containerID="cri-o://2dfe774f1eaf868fb52b9b9e4a49f0f44e06b0861c96009634a8680144c7791d" gracePeriod=2
Jan 29 08:28:09 crc kubenswrapper[5017]: I0129 08:28:09.988870 5017 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/community-operators-p56p8"
Jan 29 08:28:10 crc kubenswrapper[5017]: I0129 08:28:10.077014 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b03b1ca6-dbd1-4bd5-b87d-14f667dac621-utilities\") pod \"b03b1ca6-dbd1-4bd5-b87d-14f667dac621\" (UID: \"b03b1ca6-dbd1-4bd5-b87d-14f667dac621\") "
Jan 29 08:28:10 crc kubenswrapper[5017]: I0129 08:28:10.077412 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b03b1ca6-dbd1-4bd5-b87d-14f667dac621-catalog-content\") pod \"b03b1ca6-dbd1-4bd5-b87d-14f667dac621\" (UID: \"b03b1ca6-dbd1-4bd5-b87d-14f667dac621\") "
Jan 29 08:28:10 crc kubenswrapper[5017]: I0129 08:28:10.077559 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l4q4r\" (UniqueName: \"kubernetes.io/projected/b03b1ca6-dbd1-4bd5-b87d-14f667dac621-kube-api-access-l4q4r\") pod \"b03b1ca6-dbd1-4bd5-b87d-14f667dac621\" (UID: \"b03b1ca6-dbd1-4bd5-b87d-14f667dac621\") "
Jan 29 08:28:10 crc kubenswrapper[5017]: I0129 08:28:10.078359 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b03b1ca6-dbd1-4bd5-b87d-14f667dac621-utilities" (OuterVolumeSpecName: "utilities") pod "b03b1ca6-dbd1-4bd5-b87d-14f667dac621" (UID: "b03b1ca6-dbd1-4bd5-b87d-14f667dac621"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 08:28:10 crc kubenswrapper[5017]: I0129 08:28:10.079501 5017 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b03b1ca6-dbd1-4bd5-b87d-14f667dac621-utilities\") on node \"crc\" DevicePath \"\""
Jan 29 08:28:10 crc kubenswrapper[5017]: I0129 08:28:10.088245 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b03b1ca6-dbd1-4bd5-b87d-14f667dac621-kube-api-access-l4q4r" (OuterVolumeSpecName: "kube-api-access-l4q4r") pod "b03b1ca6-dbd1-4bd5-b87d-14f667dac621" (UID: "b03b1ca6-dbd1-4bd5-b87d-14f667dac621"). InnerVolumeSpecName "kube-api-access-l4q4r".
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:28:10 crc kubenswrapper[5017]: I0129 08:28:10.181942 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l4q4r\" (UniqueName: \"kubernetes.io/projected/b03b1ca6-dbd1-4bd5-b87d-14f667dac621-kube-api-access-l4q4r\") on node \"crc\" DevicePath \"\"" Jan 29 08:28:10 crc kubenswrapper[5017]: I0129 08:28:10.499731 5017 generic.go:334] "Generic (PLEG): container finished" podID="b03b1ca6-dbd1-4bd5-b87d-14f667dac621" containerID="2dfe774f1eaf868fb52b9b9e4a49f0f44e06b0861c96009634a8680144c7791d" exitCode=0 Jan 29 08:28:10 crc kubenswrapper[5017]: I0129 08:28:10.499801 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p56p8" event={"ID":"b03b1ca6-dbd1-4bd5-b87d-14f667dac621","Type":"ContainerDied","Data":"2dfe774f1eaf868fb52b9b9e4a49f0f44e06b0861c96009634a8680144c7791d"} Jan 29 08:28:10 crc kubenswrapper[5017]: I0129 08:28:10.499842 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p56p8" event={"ID":"b03b1ca6-dbd1-4bd5-b87d-14f667dac621","Type":"ContainerDied","Data":"504eb98e70373bace88e07ac57bcc6894ef86acdea0631d48471d0aa96b6fd6f"} Jan 29 08:28:10 crc kubenswrapper[5017]: I0129 08:28:10.499871 5017 scope.go:117] "RemoveContainer" containerID="2dfe774f1eaf868fb52b9b9e4a49f0f44e06b0861c96009634a8680144c7791d" Jan 29 08:28:10 crc kubenswrapper[5017]: I0129 08:28:10.500093 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p56p8" Jan 29 08:28:10 crc kubenswrapper[5017]: I0129 08:28:10.528245 5017 scope.go:117] "RemoveContainer" containerID="bb535d0fb80d5abb8990adbab9158049166700b4629211ff1bb31df93872dc7b" Jan 29 08:28:10 crc kubenswrapper[5017]: I0129 08:28:10.557648 5017 scope.go:117] "RemoveContainer" containerID="3376e69acfe1bb50151cd16fc605c19d733d98d1072fa196d27c8dfcb73e549b" Jan 29 08:28:10 crc kubenswrapper[5017]: I0129 08:28:10.598629 5017 scope.go:117] "RemoveContainer" containerID="2dfe774f1eaf868fb52b9b9e4a49f0f44e06b0861c96009634a8680144c7791d" Jan 29 08:28:10 crc kubenswrapper[5017]: E0129 08:28:10.599126 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2dfe774f1eaf868fb52b9b9e4a49f0f44e06b0861c96009634a8680144c7791d\": container with ID starting with 2dfe774f1eaf868fb52b9b9e4a49f0f44e06b0861c96009634a8680144c7791d not found: ID does not exist" containerID="2dfe774f1eaf868fb52b9b9e4a49f0f44e06b0861c96009634a8680144c7791d" Jan 29 08:28:10 crc kubenswrapper[5017]: I0129 08:28:10.599179 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2dfe774f1eaf868fb52b9b9e4a49f0f44e06b0861c96009634a8680144c7791d"} err="failed to get container status \"2dfe774f1eaf868fb52b9b9e4a49f0f44e06b0861c96009634a8680144c7791d\": rpc error: code = NotFound desc = could not find container \"2dfe774f1eaf868fb52b9b9e4a49f0f44e06b0861c96009634a8680144c7791d\": container with ID starting with 2dfe774f1eaf868fb52b9b9e4a49f0f44e06b0861c96009634a8680144c7791d not found: ID does not exist" Jan 29 08:28:10 crc kubenswrapper[5017]: I0129 08:28:10.599210 5017 scope.go:117] "RemoveContainer" containerID="bb535d0fb80d5abb8990adbab9158049166700b4629211ff1bb31df93872dc7b" Jan 29 08:28:10 crc kubenswrapper[5017]: E0129 08:28:10.599711 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = 
could not find container \"bb535d0fb80d5abb8990adbab9158049166700b4629211ff1bb31df93872dc7b\": container with ID starting with bb535d0fb80d5abb8990adbab9158049166700b4629211ff1bb31df93872dc7b not found: ID does not exist" containerID="bb535d0fb80d5abb8990adbab9158049166700b4629211ff1bb31df93872dc7b"
Jan 29 08:28:10 crc kubenswrapper[5017]: I0129 08:28:10.599738 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb535d0fb80d5abb8990adbab9158049166700b4629211ff1bb31df93872dc7b"} err="failed to get container status \"bb535d0fb80d5abb8990adbab9158049166700b4629211ff1bb31df93872dc7b\": rpc error: code = NotFound desc = could not find container \"bb535d0fb80d5abb8990adbab9158049166700b4629211ff1bb31df93872dc7b\": container with ID starting with bb535d0fb80d5abb8990adbab9158049166700b4629211ff1bb31df93872dc7b not found: ID does not exist"
Jan 29 08:28:10 crc kubenswrapper[5017]: I0129 08:28:10.599755 5017 scope.go:117] "RemoveContainer" containerID="3376e69acfe1bb50151cd16fc605c19d733d98d1072fa196d27c8dfcb73e549b"
Jan 29 08:28:10 crc kubenswrapper[5017]: E0129 08:28:10.600018 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3376e69acfe1bb50151cd16fc605c19d733d98d1072fa196d27c8dfcb73e549b\": container with ID starting with 3376e69acfe1bb50151cd16fc605c19d733d98d1072fa196d27c8dfcb73e549b not found: ID does not exist" containerID="3376e69acfe1bb50151cd16fc605c19d733d98d1072fa196d27c8dfcb73e549b"
Jan 29 08:28:10 crc kubenswrapper[5017]: I0129 08:28:10.600044 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3376e69acfe1bb50151cd16fc605c19d733d98d1072fa196d27c8dfcb73e549b"} err="failed to get container status \"3376e69acfe1bb50151cd16fc605c19d733d98d1072fa196d27c8dfcb73e549b\": rpc error: code = NotFound desc = could not find container \"3376e69acfe1bb50151cd16fc605c19d733d98d1072fa196d27c8dfcb73e549b\": container with ID starting with 3376e69acfe1bb50151cd16fc605c19d733d98d1072fa196d27c8dfcb73e549b not found: ID does not exist"
Jan 29 08:28:10 crc kubenswrapper[5017]: I0129 08:28:10.605313 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b03b1ca6-dbd1-4bd5-b87d-14f667dac621-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b03b1ca6-dbd1-4bd5-b87d-14f667dac621" (UID: "b03b1ca6-dbd1-4bd5-b87d-14f667dac621"). InnerVolumeSpecName "catalog-content".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:28:10 crc kubenswrapper[5017]: I0129 08:28:10.695891 5017 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b03b1ca6-dbd1-4bd5-b87d-14f667dac621-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 08:28:10 crc kubenswrapper[5017]: I0129 08:28:10.843813 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-p56p8"] Jan 29 08:28:10 crc kubenswrapper[5017]: I0129 08:28:10.853889 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-p56p8"] Jan 29 08:28:12 crc kubenswrapper[5017]: I0129 08:28:12.337700 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b03b1ca6-dbd1-4bd5-b87d-14f667dac621" path="/var/lib/kubelet/pods/b03b1ca6-dbd1-4bd5-b87d-14f667dac621/volumes" Jan 29 08:28:56 crc kubenswrapper[5017]: I0129 08:28:56.539201 5017 patch_prober.go:28] interesting pod/machine-config-daemon-895pl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 08:28:56 crc kubenswrapper[5017]: I0129 08:28:56.540440 5017 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 08:29:04 crc kubenswrapper[5017]: I0129 08:29:04.299791 5017 generic.go:334] "Generic (PLEG): container finished" podID="1c67db27-194c-43dd-ab29-0461e44ba417" containerID="06650dd66c5480d487412e48f2909ed03d8b28b3315ededdecf387d02f058735" exitCode=0 Jan 29 08:29:04 crc kubenswrapper[5017]: I0129 08:29:04.299882 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-l7pkb" event={"ID":"1c67db27-194c-43dd-ab29-0461e44ba417","Type":"ContainerDied","Data":"06650dd66c5480d487412e48f2909ed03d8b28b3315ededdecf387d02f058735"} Jan 29 08:29:05 crc kubenswrapper[5017]: I0129 08:29:05.823270 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-l7pkb"
Jan 29 08:29:05 crc kubenswrapper[5017]: I0129 08:29:05.894586 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/1c67db27-194c-43dd-ab29-0461e44ba417-ssh-key-openstack-cell1\") pod \"1c67db27-194c-43dd-ab29-0461e44ba417\" (UID: \"1c67db27-194c-43dd-ab29-0461e44ba417\") "
Jan 29 08:29:05 crc kubenswrapper[5017]: I0129 08:29:05.894689 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qng4n\" (UniqueName: \"kubernetes.io/projected/1c67db27-194c-43dd-ab29-0461e44ba417-kube-api-access-qng4n\") pod \"1c67db27-194c-43dd-ab29-0461e44ba417\" (UID: \"1c67db27-194c-43dd-ab29-0461e44ba417\") "
Jan 29 08:29:05 crc kubenswrapper[5017]: I0129 08:29:05.894774 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1c67db27-194c-43dd-ab29-0461e44ba417-ceph\") pod \"1c67db27-194c-43dd-ab29-0461e44ba417\" (UID: \"1c67db27-194c-43dd-ab29-0461e44ba417\") "
Jan 29 08:29:05 crc kubenswrapper[5017]: I0129 08:29:05.894937 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c67db27-194c-43dd-ab29-0461e44ba417-tripleo-cleanup-combined-ca-bundle\") pod \"1c67db27-194c-43dd-ab29-0461e44ba417\" (UID: \"1c67db27-194c-43dd-ab29-0461e44ba417\") "
Jan 29 08:29:05 crc kubenswrapper[5017]: I0129 08:29:05.895551 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1c67db27-194c-43dd-ab29-0461e44ba417-inventory\") pod \"1c67db27-194c-43dd-ab29-0461e44ba417\" (UID: \"1c67db27-194c-43dd-ab29-0461e44ba417\") "
Jan 29 08:29:05 crc kubenswrapper[5017]: I0129 08:29:05.903146 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c67db27-194c-43dd-ab29-0461e44ba417-kube-api-access-qng4n" (OuterVolumeSpecName: "kube-api-access-qng4n") pod "1c67db27-194c-43dd-ab29-0461e44ba417" (UID: "1c67db27-194c-43dd-ab29-0461e44ba417"). InnerVolumeSpecName "kube-api-access-qng4n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 08:29:05 crc kubenswrapper[5017]: I0129 08:29:05.903165 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c67db27-194c-43dd-ab29-0461e44ba417-tripleo-cleanup-combined-ca-bundle" (OuterVolumeSpecName: "tripleo-cleanup-combined-ca-bundle") pod "1c67db27-194c-43dd-ab29-0461e44ba417" (UID: "1c67db27-194c-43dd-ab29-0461e44ba417"). InnerVolumeSpecName "tripleo-cleanup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 08:29:05 crc kubenswrapper[5017]: I0129 08:29:05.903249 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c67db27-194c-43dd-ab29-0461e44ba417-ceph" (OuterVolumeSpecName: "ceph") pod "1c67db27-194c-43dd-ab29-0461e44ba417" (UID: "1c67db27-194c-43dd-ab29-0461e44ba417"). InnerVolumeSpecName "ceph".
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:29:05 crc kubenswrapper[5017]: I0129 08:29:05.935368 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c67db27-194c-43dd-ab29-0461e44ba417-inventory" (OuterVolumeSpecName: "inventory") pod "1c67db27-194c-43dd-ab29-0461e44ba417" (UID: "1c67db27-194c-43dd-ab29-0461e44ba417"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:29:05 crc kubenswrapper[5017]: I0129 08:29:05.937200 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c67db27-194c-43dd-ab29-0461e44ba417-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "1c67db27-194c-43dd-ab29-0461e44ba417" (UID: "1c67db27-194c-43dd-ab29-0461e44ba417"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:29:05 crc kubenswrapper[5017]: I0129 08:29:05.998441 5017 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/1c67db27-194c-43dd-ab29-0461e44ba417-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 29 08:29:05 crc kubenswrapper[5017]: I0129 08:29:05.998481 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qng4n\" (UniqueName: \"kubernetes.io/projected/1c67db27-194c-43dd-ab29-0461e44ba417-kube-api-access-qng4n\") on node \"crc\" DevicePath \"\"" Jan 29 08:29:05 crc kubenswrapper[5017]: I0129 08:29:05.998494 5017 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1c67db27-194c-43dd-ab29-0461e44ba417-ceph\") on node \"crc\" DevicePath \"\"" Jan 29 08:29:05 crc kubenswrapper[5017]: I0129 08:29:05.998506 5017 reconciler_common.go:293] "Volume detached for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c67db27-194c-43dd-ab29-0461e44ba417-tripleo-cleanup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:29:05 crc kubenswrapper[5017]: I0129 08:29:05.998529 5017 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1c67db27-194c-43dd-ab29-0461e44ba417-inventory\") on node \"crc\" DevicePath \"\"" Jan 29 08:29:06 crc kubenswrapper[5017]: I0129 08:29:06.324943 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-l7pkb"
Jan 29 08:29:06 crc kubenswrapper[5017]: I0129 08:29:06.337620 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-l7pkb" event={"ID":"1c67db27-194c-43dd-ab29-0461e44ba417","Type":"ContainerDied","Data":"f4c956b4ae433e1bba64c9560ac03d8ffa8aa8043b790e6b8b933923faf8ccac"}
Jan 29 08:29:06 crc kubenswrapper[5017]: I0129 08:29:06.337704 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f4c956b4ae433e1bba64c9560ac03d8ffa8aa8043b790e6b8b933923faf8ccac"
Jan 29 08:29:07 crc kubenswrapper[5017]: I0129 08:29:07.424136 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-openstack-openstack-cell1-wth7v"]
Jan 29 08:29:07 crc kubenswrapper[5017]: E0129 08:29:07.425080 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c67db27-194c-43dd-ab29-0461e44ba417" containerName="tripleo-cleanup-tripleo-cleanup-openstack-cell1"
Jan 29 08:29:07 crc kubenswrapper[5017]: I0129 08:29:07.425098 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c67db27-194c-43dd-ab29-0461e44ba417" containerName="tripleo-cleanup-tripleo-cleanup-openstack-cell1"
Jan 29 08:29:07 crc kubenswrapper[5017]: E0129 08:29:07.425110 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b03b1ca6-dbd1-4bd5-b87d-14f667dac621" containerName="extract-utilities"
Jan 29 08:29:07 crc kubenswrapper[5017]: I0129 08:29:07.425119 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="b03b1ca6-dbd1-4bd5-b87d-14f667dac621" containerName="extract-utilities"
Jan 29 08:29:07 crc kubenswrapper[5017]: E0129 08:29:07.425153 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b03b1ca6-dbd1-4bd5-b87d-14f667dac621" containerName="extract-content"
Jan 29 08:29:07 crc kubenswrapper[5017]: I0129 08:29:07.425160 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="b03b1ca6-dbd1-4bd5-b87d-14f667dac621" containerName="extract-content"
Jan 29 08:29:07 crc kubenswrapper[5017]: E0129 08:29:07.425201 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b03b1ca6-dbd1-4bd5-b87d-14f667dac621" containerName="registry-server"
Jan 29 08:29:07 crc kubenswrapper[5017]: I0129 08:29:07.425207 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="b03b1ca6-dbd1-4bd5-b87d-14f667dac621" containerName="registry-server"
Jan 29 08:29:07 crc kubenswrapper[5017]: I0129 08:29:07.425413 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c67db27-194c-43dd-ab29-0461e44ba417" containerName="tripleo-cleanup-tripleo-cleanup-openstack-cell1"
Jan 29 08:29:07 crc kubenswrapper[5017]: I0129 08:29:07.425428 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="b03b1ca6-dbd1-4bd5-b87d-14f667dac621" containerName="registry-server"
Jan 29 08:29:07 crc kubenswrapper[5017]: I0129 08:29:07.426397 5017 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/bootstrap-openstack-openstack-cell1-wth7v"
Jan 29 08:29:07 crc kubenswrapper[5017]: I0129 08:29:07.433001 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1"
Jan 29 08:29:07 crc kubenswrapper[5017]: I0129 08:29:07.433758 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 29 08:29:07 crc kubenswrapper[5017]: I0129 08:29:07.433784 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret"
Jan 29 08:29:07 crc kubenswrapper[5017]: I0129 08:29:07.438615 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-8fskm"
Jan 29 08:29:07 crc kubenswrapper[5017]: I0129 08:29:07.441518 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-openstack-openstack-cell1-wth7v"]
Jan 29 08:29:07 crc kubenswrapper[5017]: I0129 08:29:07.533496 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0b818ab-1e3f-47cd-b7b2-0953e0effa22-bootstrap-combined-ca-bundle\") pod \"bootstrap-openstack-openstack-cell1-wth7v\" (UID: \"a0b818ab-1e3f-47cd-b7b2-0953e0effa22\") " pod="openstack/bootstrap-openstack-openstack-cell1-wth7v"
Jan 29 08:29:07 crc kubenswrapper[5017]: I0129 08:29:07.533622 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dz62\" (UniqueName: \"kubernetes.io/projected/a0b818ab-1e3f-47cd-b7b2-0953e0effa22-kube-api-access-2dz62\") pod \"bootstrap-openstack-openstack-cell1-wth7v\" (UID: \"a0b818ab-1e3f-47cd-b7b2-0953e0effa22\") " pod="openstack/bootstrap-openstack-openstack-cell1-wth7v"
Jan 29 08:29:07 crc kubenswrapper[5017]: I0129 08:29:07.533666 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a0b818ab-1e3f-47cd-b7b2-0953e0effa22-inventory\") pod \"bootstrap-openstack-openstack-cell1-wth7v\" (UID: \"a0b818ab-1e3f-47cd-b7b2-0953e0effa22\") " pod="openstack/bootstrap-openstack-openstack-cell1-wth7v"
Jan 29 08:29:07 crc kubenswrapper[5017]: I0129 08:29:07.533715 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a0b818ab-1e3f-47cd-b7b2-0953e0effa22-ceph\") pod \"bootstrap-openstack-openstack-cell1-wth7v\" (UID: \"a0b818ab-1e3f-47cd-b7b2-0953e0effa22\") " pod="openstack/bootstrap-openstack-openstack-cell1-wth7v"
Jan 29 08:29:07 crc kubenswrapper[5017]: I0129 08:29:07.533807 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/a0b818ab-1e3f-47cd-b7b2-0953e0effa22-ssh-key-openstack-cell1\") pod \"bootstrap-openstack-openstack-cell1-wth7v\" (UID: \"a0b818ab-1e3f-47cd-b7b2-0953e0effa22\") " pod="openstack/bootstrap-openstack-openstack-cell1-wth7v"
Jan 29 08:29:07 crc kubenswrapper[5017]: I0129 08:29:07.635103 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0b818ab-1e3f-47cd-b7b2-0953e0effa22-bootstrap-combined-ca-bundle\") pod \"bootstrap-openstack-openstack-cell1-wth7v\" (UID: \"a0b818ab-1e3f-47cd-b7b2-0953e0effa22\") "
pod="openstack/bootstrap-openstack-openstack-cell1-wth7v" Jan 29 08:29:07 crc kubenswrapper[5017]: I0129 08:29:07.635219 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dz62\" (UniqueName: \"kubernetes.io/projected/a0b818ab-1e3f-47cd-b7b2-0953e0effa22-kube-api-access-2dz62\") pod \"bootstrap-openstack-openstack-cell1-wth7v\" (UID: \"a0b818ab-1e3f-47cd-b7b2-0953e0effa22\") " pod="openstack/bootstrap-openstack-openstack-cell1-wth7v" Jan 29 08:29:07 crc kubenswrapper[5017]: I0129 08:29:07.635256 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a0b818ab-1e3f-47cd-b7b2-0953e0effa22-inventory\") pod \"bootstrap-openstack-openstack-cell1-wth7v\" (UID: \"a0b818ab-1e3f-47cd-b7b2-0953e0effa22\") " pod="openstack/bootstrap-openstack-openstack-cell1-wth7v" Jan 29 08:29:07 crc kubenswrapper[5017]: I0129 08:29:07.635305 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a0b818ab-1e3f-47cd-b7b2-0953e0effa22-ceph\") pod \"bootstrap-openstack-openstack-cell1-wth7v\" (UID: \"a0b818ab-1e3f-47cd-b7b2-0953e0effa22\") " pod="openstack/bootstrap-openstack-openstack-cell1-wth7v" Jan 29 08:29:07 crc kubenswrapper[5017]: I0129 08:29:07.635331 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/a0b818ab-1e3f-47cd-b7b2-0953e0effa22-ssh-key-openstack-cell1\") pod \"bootstrap-openstack-openstack-cell1-wth7v\" (UID: \"a0b818ab-1e3f-47cd-b7b2-0953e0effa22\") " pod="openstack/bootstrap-openstack-openstack-cell1-wth7v" Jan 29 08:29:07 crc kubenswrapper[5017]: I0129 08:29:07.652241 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a0b818ab-1e3f-47cd-b7b2-0953e0effa22-inventory\") pod \"bootstrap-openstack-openstack-cell1-wth7v\" (UID: \"a0b818ab-1e3f-47cd-b7b2-0953e0effa22\") " pod="openstack/bootstrap-openstack-openstack-cell1-wth7v" Jan 29 08:29:07 crc kubenswrapper[5017]: I0129 08:29:07.652341 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0b818ab-1e3f-47cd-b7b2-0953e0effa22-bootstrap-combined-ca-bundle\") pod \"bootstrap-openstack-openstack-cell1-wth7v\" (UID: \"a0b818ab-1e3f-47cd-b7b2-0953e0effa22\") " pod="openstack/bootstrap-openstack-openstack-cell1-wth7v" Jan 29 08:29:07 crc kubenswrapper[5017]: I0129 08:29:07.652367 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a0b818ab-1e3f-47cd-b7b2-0953e0effa22-ceph\") pod \"bootstrap-openstack-openstack-cell1-wth7v\" (UID: \"a0b818ab-1e3f-47cd-b7b2-0953e0effa22\") " pod="openstack/bootstrap-openstack-openstack-cell1-wth7v" Jan 29 08:29:07 crc kubenswrapper[5017]: I0129 08:29:07.652264 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/a0b818ab-1e3f-47cd-b7b2-0953e0effa22-ssh-key-openstack-cell1\") pod \"bootstrap-openstack-openstack-cell1-wth7v\" (UID: \"a0b818ab-1e3f-47cd-b7b2-0953e0effa22\") " pod="openstack/bootstrap-openstack-openstack-cell1-wth7v" Jan 29 08:29:07 crc kubenswrapper[5017]: I0129 08:29:07.665926 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dz62\" (UniqueName: 
\"kubernetes.io/projected/a0b818ab-1e3f-47cd-b7b2-0953e0effa22-kube-api-access-2dz62\") pod \"bootstrap-openstack-openstack-cell1-wth7v\" (UID: \"a0b818ab-1e3f-47cd-b7b2-0953e0effa22\") " pod="openstack/bootstrap-openstack-openstack-cell1-wth7v" Jan 29 08:29:07 crc kubenswrapper[5017]: I0129 08:29:07.750465 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-openstack-openstack-cell1-wth7v" Jan 29 08:29:08 crc kubenswrapper[5017]: I0129 08:29:08.303684 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-openstack-openstack-cell1-wth7v"] Jan 29 08:29:08 crc kubenswrapper[5017]: I0129 08:29:08.348874 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-openstack-openstack-cell1-wth7v" event={"ID":"a0b818ab-1e3f-47cd-b7b2-0953e0effa22","Type":"ContainerStarted","Data":"5ab73c7c645089eadaf7af3f9415d52e4136f8feb9143ae0865242f36a4a9921"} Jan 29 08:29:09 crc kubenswrapper[5017]: I0129 08:29:09.361825 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-openstack-openstack-cell1-wth7v" event={"ID":"a0b818ab-1e3f-47cd-b7b2-0953e0effa22","Type":"ContainerStarted","Data":"dc7f275831a20ca92e6d92b0ad087f0325869cf646cb2fdc6ea57e96bfc353ce"} Jan 29 08:29:09 crc kubenswrapper[5017]: I0129 08:29:09.391716 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-openstack-openstack-cell1-wth7v" podStartSLOduration=1.892159468 podStartE2EDuration="2.391684186s" podCreationTimestamp="2026-01-29 08:29:07 +0000 UTC" firstStartedPulling="2026-01-29 08:29:08.301805296 +0000 UTC m=+6834.676252906" lastFinishedPulling="2026-01-29 08:29:08.801330014 +0000 UTC m=+6835.175777624" observedRunningTime="2026-01-29 08:29:09.377368139 +0000 UTC m=+6835.751815769" watchObservedRunningTime="2026-01-29 08:29:09.391684186 +0000 UTC m=+6835.766131796" Jan 29 08:29:10 crc kubenswrapper[5017]: I0129 08:29:10.547644 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jcslc"] Jan 29 08:29:10 crc kubenswrapper[5017]: I0129 08:29:10.551093 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jcslc"
Jan 29 08:29:10 crc kubenswrapper[5017]: I0129 08:29:10.561292 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jcslc"]
Jan 29 08:29:10 crc kubenswrapper[5017]: I0129 08:29:10.603852 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39f2f938-1280-4c12-a4d5-fdeb074e7073-utilities\") pod \"certified-operators-jcslc\" (UID: \"39f2f938-1280-4c12-a4d5-fdeb074e7073\") " pod="openshift-marketplace/certified-operators-jcslc"
Jan 29 08:29:10 crc kubenswrapper[5017]: I0129 08:29:10.604058 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39f2f938-1280-4c12-a4d5-fdeb074e7073-catalog-content\") pod \"certified-operators-jcslc\" (UID: \"39f2f938-1280-4c12-a4d5-fdeb074e7073\") " pod="openshift-marketplace/certified-operators-jcslc"
Jan 29 08:29:10 crc kubenswrapper[5017]: I0129 08:29:10.604177 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6vb2\" (UniqueName: \"kubernetes.io/projected/39f2f938-1280-4c12-a4d5-fdeb074e7073-kube-api-access-t6vb2\") pod \"certified-operators-jcslc\" (UID: \"39f2f938-1280-4c12-a4d5-fdeb074e7073\") " pod="openshift-marketplace/certified-operators-jcslc"
Jan 29 08:29:10 crc kubenswrapper[5017]: I0129 08:29:10.706842 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39f2f938-1280-4c12-a4d5-fdeb074e7073-utilities\") pod \"certified-operators-jcslc\" (UID: \"39f2f938-1280-4c12-a4d5-fdeb074e7073\") " pod="openshift-marketplace/certified-operators-jcslc"
Jan 29 08:29:10 crc kubenswrapper[5017]: I0129 08:29:10.707059 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39f2f938-1280-4c12-a4d5-fdeb074e7073-catalog-content\") pod \"certified-operators-jcslc\" (UID: \"39f2f938-1280-4c12-a4d5-fdeb074e7073\") " pod="openshift-marketplace/certified-operators-jcslc"
Jan 29 08:29:10 crc kubenswrapper[5017]: I0129 08:29:10.707221 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6vb2\" (UniqueName: \"kubernetes.io/projected/39f2f938-1280-4c12-a4d5-fdeb074e7073-kube-api-access-t6vb2\") pod \"certified-operators-jcslc\" (UID: \"39f2f938-1280-4c12-a4d5-fdeb074e7073\") " pod="openshift-marketplace/certified-operators-jcslc"
Jan 29 08:29:10 crc kubenswrapper[5017]: I0129 08:29:10.707558 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39f2f938-1280-4c12-a4d5-fdeb074e7073-utilities\") pod \"certified-operators-jcslc\" (UID: \"39f2f938-1280-4c12-a4d5-fdeb074e7073\") " pod="openshift-marketplace/certified-operators-jcslc"
Jan 29 08:29:10 crc kubenswrapper[5017]: I0129 08:29:10.707731 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39f2f938-1280-4c12-a4d5-fdeb074e7073-catalog-content\") pod \"certified-operators-jcslc\" (UID: \"39f2f938-1280-4c12-a4d5-fdeb074e7073\") " pod="openshift-marketplace/certified-operators-jcslc"
Jan 29 08:29:10 crc kubenswrapper[5017]: I0129 08:29:10.729085 5017 operation_generator.go:637]
"MountVolume.SetUp succeeded for volume \"kube-api-access-t6vb2\" (UniqueName: \"kubernetes.io/projected/39f2f938-1280-4c12-a4d5-fdeb074e7073-kube-api-access-t6vb2\") pod \"certified-operators-jcslc\" (UID: \"39f2f938-1280-4c12-a4d5-fdeb074e7073\") " pod="openshift-marketplace/certified-operators-jcslc" Jan 29 08:29:10 crc kubenswrapper[5017]: I0129 08:29:10.881564 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jcslc" Jan 29 08:29:11 crc kubenswrapper[5017]: I0129 08:29:11.435429 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jcslc"] Jan 29 08:29:12 crc kubenswrapper[5017]: I0129 08:29:12.394898 5017 generic.go:334] "Generic (PLEG): container finished" podID="39f2f938-1280-4c12-a4d5-fdeb074e7073" containerID="4444f8d8bbca27a1a28750c755c7e062688b8161afd1191ce7536f1afc246f54" exitCode=0 Jan 29 08:29:12 crc kubenswrapper[5017]: I0129 08:29:12.394996 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jcslc" event={"ID":"39f2f938-1280-4c12-a4d5-fdeb074e7073","Type":"ContainerDied","Data":"4444f8d8bbca27a1a28750c755c7e062688b8161afd1191ce7536f1afc246f54"} Jan 29 08:29:12 crc kubenswrapper[5017]: I0129 08:29:12.395437 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jcslc" event={"ID":"39f2f938-1280-4c12-a4d5-fdeb074e7073","Type":"ContainerStarted","Data":"4ce70f92a869b839c9a76258be505a46b118c856a50907f076c6a7b325e0eb47"} Jan 29 08:29:13 crc kubenswrapper[5017]: I0129 08:29:13.412207 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jcslc" event={"ID":"39f2f938-1280-4c12-a4d5-fdeb074e7073","Type":"ContainerStarted","Data":"444c529cfdc0c089c5bfc93140583ff5c717968918d2ef30c5ad7f33f8a34d1f"} Jan 29 08:29:14 crc kubenswrapper[5017]: I0129 08:29:14.423614 5017 generic.go:334] "Generic (PLEG): container finished" podID="39f2f938-1280-4c12-a4d5-fdeb074e7073" containerID="444c529cfdc0c089c5bfc93140583ff5c717968918d2ef30c5ad7f33f8a34d1f" exitCode=0 Jan 29 08:29:14 crc kubenswrapper[5017]: I0129 08:29:14.424150 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jcslc" event={"ID":"39f2f938-1280-4c12-a4d5-fdeb074e7073","Type":"ContainerDied","Data":"444c529cfdc0c089c5bfc93140583ff5c717968918d2ef30c5ad7f33f8a34d1f"} Jan 29 08:29:15 crc kubenswrapper[5017]: I0129 08:29:15.436842 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jcslc" event={"ID":"39f2f938-1280-4c12-a4d5-fdeb074e7073","Type":"ContainerStarted","Data":"1e9a34b633fd940af9a3c8fda4153626db515a0df2dad6a72f966ecbd5d879cf"} Jan 29 08:29:15 crc kubenswrapper[5017]: I0129 08:29:15.455210 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jcslc" podStartSLOduration=2.990816943 podStartE2EDuration="5.455188929s" podCreationTimestamp="2026-01-29 08:29:10 +0000 UTC" firstStartedPulling="2026-01-29 08:29:12.397537071 +0000 UTC m=+6838.771984681" lastFinishedPulling="2026-01-29 08:29:14.861909057 +0000 UTC m=+6841.236356667" observedRunningTime="2026-01-29 08:29:15.454716818 +0000 UTC m=+6841.829164448" watchObservedRunningTime="2026-01-29 08:29:15.455188929 +0000 UTC m=+6841.829636539" Jan 29 08:29:20 crc kubenswrapper[5017]: I0129 08:29:20.882233 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/certified-operators-jcslc" Jan 29 08:29:20 crc kubenswrapper[5017]: I0129 08:29:20.883126 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jcslc" Jan 29 08:29:20 crc kubenswrapper[5017]: I0129 08:29:20.940120 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jcslc" Jan 29 08:29:21 crc kubenswrapper[5017]: I0129 08:29:21.559866 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jcslc" Jan 29 08:29:21 crc kubenswrapper[5017]: I0129 08:29:21.612493 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jcslc"] Jan 29 08:29:23 crc kubenswrapper[5017]: I0129 08:29:23.521264 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-jcslc" podUID="39f2f938-1280-4c12-a4d5-fdeb074e7073" containerName="registry-server" containerID="cri-o://1e9a34b633fd940af9a3c8fda4153626db515a0df2dad6a72f966ecbd5d879cf" gracePeriod=2 Jan 29 08:29:23 crc kubenswrapper[5017]: I0129 08:29:23.983102 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jcslc" Jan 29 08:29:24 crc kubenswrapper[5017]: I0129 08:29:24.135004 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t6vb2\" (UniqueName: \"kubernetes.io/projected/39f2f938-1280-4c12-a4d5-fdeb074e7073-kube-api-access-t6vb2\") pod \"39f2f938-1280-4c12-a4d5-fdeb074e7073\" (UID: \"39f2f938-1280-4c12-a4d5-fdeb074e7073\") " Jan 29 08:29:24 crc kubenswrapper[5017]: I0129 08:29:24.135337 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39f2f938-1280-4c12-a4d5-fdeb074e7073-utilities\") pod \"39f2f938-1280-4c12-a4d5-fdeb074e7073\" (UID: \"39f2f938-1280-4c12-a4d5-fdeb074e7073\") " Jan 29 08:29:24 crc kubenswrapper[5017]: I0129 08:29:24.135503 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39f2f938-1280-4c12-a4d5-fdeb074e7073-catalog-content\") pod \"39f2f938-1280-4c12-a4d5-fdeb074e7073\" (UID: \"39f2f938-1280-4c12-a4d5-fdeb074e7073\") " Jan 29 08:29:24 crc kubenswrapper[5017]: I0129 08:29:24.136415 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39f2f938-1280-4c12-a4d5-fdeb074e7073-utilities" (OuterVolumeSpecName: "utilities") pod "39f2f938-1280-4c12-a4d5-fdeb074e7073" (UID: "39f2f938-1280-4c12-a4d5-fdeb074e7073"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:29:24 crc kubenswrapper[5017]: I0129 08:29:24.142927 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39f2f938-1280-4c12-a4d5-fdeb074e7073-kube-api-access-t6vb2" (OuterVolumeSpecName: "kube-api-access-t6vb2") pod "39f2f938-1280-4c12-a4d5-fdeb074e7073" (UID: "39f2f938-1280-4c12-a4d5-fdeb074e7073"). InnerVolumeSpecName "kube-api-access-t6vb2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:29:24 crc kubenswrapper[5017]: I0129 08:29:24.238431 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t6vb2\" (UniqueName: \"kubernetes.io/projected/39f2f938-1280-4c12-a4d5-fdeb074e7073-kube-api-access-t6vb2\") on node \"crc\" DevicePath \"\"" Jan 29 08:29:24 crc kubenswrapper[5017]: I0129 08:29:24.238923 5017 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39f2f938-1280-4c12-a4d5-fdeb074e7073-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 08:29:24 crc kubenswrapper[5017]: I0129 08:29:24.532063 5017 generic.go:334] "Generic (PLEG): container finished" podID="39f2f938-1280-4c12-a4d5-fdeb074e7073" containerID="1e9a34b633fd940af9a3c8fda4153626db515a0df2dad6a72f966ecbd5d879cf" exitCode=0 Jan 29 08:29:24 crc kubenswrapper[5017]: I0129 08:29:24.532120 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jcslc" event={"ID":"39f2f938-1280-4c12-a4d5-fdeb074e7073","Type":"ContainerDied","Data":"1e9a34b633fd940af9a3c8fda4153626db515a0df2dad6a72f966ecbd5d879cf"} Jan 29 08:29:24 crc kubenswrapper[5017]: I0129 08:29:24.532159 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jcslc" event={"ID":"39f2f938-1280-4c12-a4d5-fdeb074e7073","Type":"ContainerDied","Data":"4ce70f92a869b839c9a76258be505a46b118c856a50907f076c6a7b325e0eb47"} Jan 29 08:29:24 crc kubenswrapper[5017]: I0129 08:29:24.532184 5017 scope.go:117] "RemoveContainer" containerID="1e9a34b633fd940af9a3c8fda4153626db515a0df2dad6a72f966ecbd5d879cf" Jan 29 08:29:24 crc kubenswrapper[5017]: I0129 08:29:24.532260 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jcslc" Jan 29 08:29:24 crc kubenswrapper[5017]: I0129 08:29:24.556639 5017 scope.go:117] "RemoveContainer" containerID="444c529cfdc0c089c5bfc93140583ff5c717968918d2ef30c5ad7f33f8a34d1f" Jan 29 08:29:24 crc kubenswrapper[5017]: I0129 08:29:24.582834 5017 scope.go:117] "RemoveContainer" containerID="4444f8d8bbca27a1a28750c755c7e062688b8161afd1191ce7536f1afc246f54" Jan 29 08:29:24 crc kubenswrapper[5017]: I0129 08:29:24.666893 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39f2f938-1280-4c12-a4d5-fdeb074e7073-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "39f2f938-1280-4c12-a4d5-fdeb074e7073" (UID: "39f2f938-1280-4c12-a4d5-fdeb074e7073"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:29:24 crc kubenswrapper[5017]: I0129 08:29:24.676399 5017 scope.go:117] "RemoveContainer" containerID="1e9a34b633fd940af9a3c8fda4153626db515a0df2dad6a72f966ecbd5d879cf" Jan 29 08:29:24 crc kubenswrapper[5017]: E0129 08:29:24.676918 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e9a34b633fd940af9a3c8fda4153626db515a0df2dad6a72f966ecbd5d879cf\": container with ID starting with 1e9a34b633fd940af9a3c8fda4153626db515a0df2dad6a72f966ecbd5d879cf not found: ID does not exist" containerID="1e9a34b633fd940af9a3c8fda4153626db515a0df2dad6a72f966ecbd5d879cf" Jan 29 08:29:24 crc kubenswrapper[5017]: I0129 08:29:24.677003 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e9a34b633fd940af9a3c8fda4153626db515a0df2dad6a72f966ecbd5d879cf"} err="failed to get container status \"1e9a34b633fd940af9a3c8fda4153626db515a0df2dad6a72f966ecbd5d879cf\": rpc error: code = NotFound desc = could not find container \"1e9a34b633fd940af9a3c8fda4153626db515a0df2dad6a72f966ecbd5d879cf\": container with ID starting with 1e9a34b633fd940af9a3c8fda4153626db515a0df2dad6a72f966ecbd5d879cf not found: ID does not exist" Jan 29 08:29:24 crc kubenswrapper[5017]: I0129 08:29:24.677036 5017 scope.go:117] "RemoveContainer" containerID="444c529cfdc0c089c5bfc93140583ff5c717968918d2ef30c5ad7f33f8a34d1f" Jan 29 08:29:24 crc kubenswrapper[5017]: E0129 08:29:24.677380 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"444c529cfdc0c089c5bfc93140583ff5c717968918d2ef30c5ad7f33f8a34d1f\": container with ID starting with 444c529cfdc0c089c5bfc93140583ff5c717968918d2ef30c5ad7f33f8a34d1f not found: ID does not exist" containerID="444c529cfdc0c089c5bfc93140583ff5c717968918d2ef30c5ad7f33f8a34d1f" Jan 29 08:29:24 crc kubenswrapper[5017]: I0129 08:29:24.677408 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"444c529cfdc0c089c5bfc93140583ff5c717968918d2ef30c5ad7f33f8a34d1f"} err="failed to get container status \"444c529cfdc0c089c5bfc93140583ff5c717968918d2ef30c5ad7f33f8a34d1f\": rpc error: code = NotFound desc = could not find container \"444c529cfdc0c089c5bfc93140583ff5c717968918d2ef30c5ad7f33f8a34d1f\": container with ID starting with 444c529cfdc0c089c5bfc93140583ff5c717968918d2ef30c5ad7f33f8a34d1f not found: ID does not exist" Jan 29 08:29:24 crc kubenswrapper[5017]: I0129 08:29:24.677429 5017 scope.go:117] "RemoveContainer" containerID="4444f8d8bbca27a1a28750c755c7e062688b8161afd1191ce7536f1afc246f54" Jan 29 08:29:24 crc kubenswrapper[5017]: E0129 08:29:24.677645 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4444f8d8bbca27a1a28750c755c7e062688b8161afd1191ce7536f1afc246f54\": container with ID starting with 4444f8d8bbca27a1a28750c755c7e062688b8161afd1191ce7536f1afc246f54 not found: ID does not exist" containerID="4444f8d8bbca27a1a28750c755c7e062688b8161afd1191ce7536f1afc246f54" Jan 29 08:29:24 crc kubenswrapper[5017]: I0129 08:29:24.677668 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4444f8d8bbca27a1a28750c755c7e062688b8161afd1191ce7536f1afc246f54"} err="failed to get container status \"4444f8d8bbca27a1a28750c755c7e062688b8161afd1191ce7536f1afc246f54\": rpc error: code = NotFound desc = could not 
find container \"4444f8d8bbca27a1a28750c755c7e062688b8161afd1191ce7536f1afc246f54\": container with ID starting with 4444f8d8bbca27a1a28750c755c7e062688b8161afd1191ce7536f1afc246f54 not found: ID does not exist"
Jan 29 08:29:24 crc kubenswrapper[5017]: I0129 08:29:24.762262 5017 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39f2f938-1280-4c12-a4d5-fdeb074e7073-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 29 08:29:24 crc kubenswrapper[5017]: I0129 08:29:24.893494 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jcslc"]
Jan 29 08:29:24 crc kubenswrapper[5017]: I0129 08:29:24.904164 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-jcslc"]
Jan 29 08:29:26 crc kubenswrapper[5017]: I0129 08:29:26.332802 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39f2f938-1280-4c12-a4d5-fdeb074e7073" path="/var/lib/kubelet/pods/39f2f938-1280-4c12-a4d5-fdeb074e7073/volumes"
Jan 29 08:29:26 crc kubenswrapper[5017]: I0129 08:29:26.539178 5017 patch_prober.go:28] interesting pod/machine-config-daemon-895pl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 29 08:29:26 crc kubenswrapper[5017]: I0129 08:29:26.539256 5017 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 29 08:29:56 crc kubenswrapper[5017]: I0129 08:29:56.539606 5017 patch_prober.go:28] interesting pod/machine-config-daemon-895pl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 29 08:29:56 crc kubenswrapper[5017]: I0129 08:29:56.540527 5017 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 29 08:29:56 crc kubenswrapper[5017]: I0129 08:29:56.540587 5017 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-895pl"
Jan 29 08:29:56 crc kubenswrapper[5017]: I0129 08:29:56.541669 5017 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d088b7157d2d30237d05e92bd39f814bd550db374b4ca0b407df76504036dcce"} pod="openshift-machine-config-operator/machine-config-daemon-895pl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 29 08:29:56 crc kubenswrapper[5017]: I0129 08:29:56.541741 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon"
containerID="cri-o://d088b7157d2d30237d05e92bd39f814bd550db374b4ca0b407df76504036dcce" gracePeriod=600 Jan 29 08:29:56 crc kubenswrapper[5017]: I0129 08:29:56.837049 5017 generic.go:334] "Generic (PLEG): container finished" podID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerID="d088b7157d2d30237d05e92bd39f814bd550db374b4ca0b407df76504036dcce" exitCode=0 Jan 29 08:29:56 crc kubenswrapper[5017]: I0129 08:29:56.837334 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-895pl" event={"ID":"2672ef63-7861-4c3d-a1b4-03cc9d18f8e2","Type":"ContainerDied","Data":"d088b7157d2d30237d05e92bd39f814bd550db374b4ca0b407df76504036dcce"} Jan 29 08:29:56 crc kubenswrapper[5017]: I0129 08:29:56.837508 5017 scope.go:117] "RemoveContainer" containerID="b3f6d683b9ddd8211c77a02801929dca4ce4b5bac12993877b84ee19e179bb46" Jan 29 08:29:57 crc kubenswrapper[5017]: I0129 08:29:57.849377 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-895pl" event={"ID":"2672ef63-7861-4c3d-a1b4-03cc9d18f8e2","Type":"ContainerStarted","Data":"29716ea24128bdf6ee46dfb15f1f1dfd3446061da8e9bec6ed837055b76aede7"} Jan 29 08:30:00 crc kubenswrapper[5017]: I0129 08:30:00.147057 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494590-h5srk"] Jan 29 08:30:00 crc kubenswrapper[5017]: E0129 08:30:00.148631 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39f2f938-1280-4c12-a4d5-fdeb074e7073" containerName="extract-utilities" Jan 29 08:30:00 crc kubenswrapper[5017]: I0129 08:30:00.148651 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="39f2f938-1280-4c12-a4d5-fdeb074e7073" containerName="extract-utilities" Jan 29 08:30:00 crc kubenswrapper[5017]: E0129 08:30:00.148693 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39f2f938-1280-4c12-a4d5-fdeb074e7073" containerName="extract-content" Jan 29 08:30:00 crc kubenswrapper[5017]: I0129 08:30:00.148705 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="39f2f938-1280-4c12-a4d5-fdeb074e7073" containerName="extract-content" Jan 29 08:30:00 crc kubenswrapper[5017]: E0129 08:30:00.148735 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39f2f938-1280-4c12-a4d5-fdeb074e7073" containerName="registry-server" Jan 29 08:30:00 crc kubenswrapper[5017]: I0129 08:30:00.148744 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="39f2f938-1280-4c12-a4d5-fdeb074e7073" containerName="registry-server" Jan 29 08:30:00 crc kubenswrapper[5017]: I0129 08:30:00.149019 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="39f2f938-1280-4c12-a4d5-fdeb074e7073" containerName="registry-server" Jan 29 08:30:00 crc kubenswrapper[5017]: I0129 08:30:00.150086 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494590-h5srk" Jan 29 08:30:00 crc kubenswrapper[5017]: I0129 08:30:00.152634 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 29 08:30:00 crc kubenswrapper[5017]: I0129 08:30:00.154476 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 29 08:30:00 crc kubenswrapper[5017]: I0129 08:30:00.158090 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494590-h5srk"] Jan 29 08:30:00 crc kubenswrapper[5017]: I0129 08:30:00.194335 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/181d8390-97e9-4232-ac5c-03b3e8b2a764-config-volume\") pod \"collect-profiles-29494590-h5srk\" (UID: \"181d8390-97e9-4232-ac5c-03b3e8b2a764\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494590-h5srk" Jan 29 08:30:00 crc kubenswrapper[5017]: I0129 08:30:00.194462 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/181d8390-97e9-4232-ac5c-03b3e8b2a764-secret-volume\") pod \"collect-profiles-29494590-h5srk\" (UID: \"181d8390-97e9-4232-ac5c-03b3e8b2a764\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494590-h5srk" Jan 29 08:30:00 crc kubenswrapper[5017]: I0129 08:30:00.194608 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8ncn\" (UniqueName: \"kubernetes.io/projected/181d8390-97e9-4232-ac5c-03b3e8b2a764-kube-api-access-b8ncn\") pod \"collect-profiles-29494590-h5srk\" (UID: \"181d8390-97e9-4232-ac5c-03b3e8b2a764\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494590-h5srk" Jan 29 08:30:00 crc kubenswrapper[5017]: I0129 08:30:00.296970 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/181d8390-97e9-4232-ac5c-03b3e8b2a764-config-volume\") pod \"collect-profiles-29494590-h5srk\" (UID: \"181d8390-97e9-4232-ac5c-03b3e8b2a764\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494590-h5srk" Jan 29 08:30:00 crc kubenswrapper[5017]: I0129 08:30:00.297043 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/181d8390-97e9-4232-ac5c-03b3e8b2a764-secret-volume\") pod \"collect-profiles-29494590-h5srk\" (UID: \"181d8390-97e9-4232-ac5c-03b3e8b2a764\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494590-h5srk" Jan 29 08:30:00 crc kubenswrapper[5017]: I0129 08:30:00.297129 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8ncn\" (UniqueName: \"kubernetes.io/projected/181d8390-97e9-4232-ac5c-03b3e8b2a764-kube-api-access-b8ncn\") pod \"collect-profiles-29494590-h5srk\" (UID: \"181d8390-97e9-4232-ac5c-03b3e8b2a764\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494590-h5srk" Jan 29 08:30:00 crc kubenswrapper[5017]: I0129 08:30:00.298422 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/181d8390-97e9-4232-ac5c-03b3e8b2a764-config-volume\") pod 
\"collect-profiles-29494590-h5srk\" (UID: \"181d8390-97e9-4232-ac5c-03b3e8b2a764\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494590-h5srk" Jan 29 08:30:00 crc kubenswrapper[5017]: I0129 08:30:00.305235 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/181d8390-97e9-4232-ac5c-03b3e8b2a764-secret-volume\") pod \"collect-profiles-29494590-h5srk\" (UID: \"181d8390-97e9-4232-ac5c-03b3e8b2a764\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494590-h5srk" Jan 29 08:30:00 crc kubenswrapper[5017]: I0129 08:30:00.316406 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8ncn\" (UniqueName: \"kubernetes.io/projected/181d8390-97e9-4232-ac5c-03b3e8b2a764-kube-api-access-b8ncn\") pod \"collect-profiles-29494590-h5srk\" (UID: \"181d8390-97e9-4232-ac5c-03b3e8b2a764\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494590-h5srk" Jan 29 08:30:00 crc kubenswrapper[5017]: I0129 08:30:00.488546 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494590-h5srk" Jan 29 08:30:00 crc kubenswrapper[5017]: I0129 08:30:00.952460 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494590-h5srk"] Jan 29 08:30:00 crc kubenswrapper[5017]: W0129 08:30:00.958048 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod181d8390_97e9_4232_ac5c_03b3e8b2a764.slice/crio-9b50825e1f1099f6a9ddc81f8355232b49da2a14730970cdb87e3832858f8cc6 WatchSource:0}: Error finding container 9b50825e1f1099f6a9ddc81f8355232b49da2a14730970cdb87e3832858f8cc6: Status 404 returned error can't find the container with id 9b50825e1f1099f6a9ddc81f8355232b49da2a14730970cdb87e3832858f8cc6 Jan 29 08:30:01 crc kubenswrapper[5017]: I0129 08:30:01.887191 5017 generic.go:334] "Generic (PLEG): container finished" podID="181d8390-97e9-4232-ac5c-03b3e8b2a764" containerID="786814bc25be5c905f20935240824afd2409c1479e76ac5403a6dd1f0b789a61" exitCode=0 Jan 29 08:30:01 crc kubenswrapper[5017]: I0129 08:30:01.887248 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494590-h5srk" event={"ID":"181d8390-97e9-4232-ac5c-03b3e8b2a764","Type":"ContainerDied","Data":"786814bc25be5c905f20935240824afd2409c1479e76ac5403a6dd1f0b789a61"} Jan 29 08:30:01 crc kubenswrapper[5017]: I0129 08:30:01.887899 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494590-h5srk" event={"ID":"181d8390-97e9-4232-ac5c-03b3e8b2a764","Type":"ContainerStarted","Data":"9b50825e1f1099f6a9ddc81f8355232b49da2a14730970cdb87e3832858f8cc6"} Jan 29 08:30:03 crc kubenswrapper[5017]: I0129 08:30:03.247233 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494590-h5srk" Jan 29 08:30:03 crc kubenswrapper[5017]: I0129 08:30:03.266189 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/181d8390-97e9-4232-ac5c-03b3e8b2a764-config-volume\") pod \"181d8390-97e9-4232-ac5c-03b3e8b2a764\" (UID: \"181d8390-97e9-4232-ac5c-03b3e8b2a764\") " Jan 29 08:30:03 crc kubenswrapper[5017]: I0129 08:30:03.266626 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b8ncn\" (UniqueName: \"kubernetes.io/projected/181d8390-97e9-4232-ac5c-03b3e8b2a764-kube-api-access-b8ncn\") pod \"181d8390-97e9-4232-ac5c-03b3e8b2a764\" (UID: \"181d8390-97e9-4232-ac5c-03b3e8b2a764\") " Jan 29 08:30:03 crc kubenswrapper[5017]: I0129 08:30:03.266666 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/181d8390-97e9-4232-ac5c-03b3e8b2a764-secret-volume\") pod \"181d8390-97e9-4232-ac5c-03b3e8b2a764\" (UID: \"181d8390-97e9-4232-ac5c-03b3e8b2a764\") " Jan 29 08:30:03 crc kubenswrapper[5017]: I0129 08:30:03.267290 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/181d8390-97e9-4232-ac5c-03b3e8b2a764-config-volume" (OuterVolumeSpecName: "config-volume") pod "181d8390-97e9-4232-ac5c-03b3e8b2a764" (UID: "181d8390-97e9-4232-ac5c-03b3e8b2a764"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:30:03 crc kubenswrapper[5017]: I0129 08:30:03.274901 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/181d8390-97e9-4232-ac5c-03b3e8b2a764-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "181d8390-97e9-4232-ac5c-03b3e8b2a764" (UID: "181d8390-97e9-4232-ac5c-03b3e8b2a764"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:30:03 crc kubenswrapper[5017]: I0129 08:30:03.275313 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/181d8390-97e9-4232-ac5c-03b3e8b2a764-kube-api-access-b8ncn" (OuterVolumeSpecName: "kube-api-access-b8ncn") pod "181d8390-97e9-4232-ac5c-03b3e8b2a764" (UID: "181d8390-97e9-4232-ac5c-03b3e8b2a764"). InnerVolumeSpecName "kube-api-access-b8ncn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:30:03 crc kubenswrapper[5017]: I0129 08:30:03.370062 5017 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/181d8390-97e9-4232-ac5c-03b3e8b2a764-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 29 08:30:03 crc kubenswrapper[5017]: I0129 08:30:03.370116 5017 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/181d8390-97e9-4232-ac5c-03b3e8b2a764-config-volume\") on node \"crc\" DevicePath \"\"" Jan 29 08:30:03 crc kubenswrapper[5017]: I0129 08:30:03.370131 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b8ncn\" (UniqueName: \"kubernetes.io/projected/181d8390-97e9-4232-ac5c-03b3e8b2a764-kube-api-access-b8ncn\") on node \"crc\" DevicePath \"\"" Jan 29 08:30:03 crc kubenswrapper[5017]: I0129 08:30:03.906873 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494590-h5srk" event={"ID":"181d8390-97e9-4232-ac5c-03b3e8b2a764","Type":"ContainerDied","Data":"9b50825e1f1099f6a9ddc81f8355232b49da2a14730970cdb87e3832858f8cc6"} Jan 29 08:30:03 crc kubenswrapper[5017]: I0129 08:30:03.907385 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b50825e1f1099f6a9ddc81f8355232b49da2a14730970cdb87e3832858f8cc6" Jan 29 08:30:03 crc kubenswrapper[5017]: I0129 08:30:03.907455 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494590-h5srk" Jan 29 08:30:04 crc kubenswrapper[5017]: I0129 08:30:04.330655 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494545-mhxnb"] Jan 29 08:30:04 crc kubenswrapper[5017]: I0129 08:30:04.337512 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494545-mhxnb"] Jan 29 08:30:06 crc kubenswrapper[5017]: I0129 08:30:06.335987 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3e1fed1-9cd5-4da5-8b48-390307883cff" path="/var/lib/kubelet/pods/d3e1fed1-9cd5-4da5-8b48-390307883cff/volumes" Jan 29 08:30:37 crc kubenswrapper[5017]: I0129 08:30:37.222171 5017 scope.go:117] "RemoveContainer" containerID="c98aee6ff14f228c2ae16b620bccf2f7b7af5686620b8ff627b8ee49dba7b7a5" Jan 29 08:31:56 crc kubenswrapper[5017]: I0129 08:31:56.539170 5017 patch_prober.go:28] interesting pod/machine-config-daemon-895pl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 08:31:56 crc kubenswrapper[5017]: I0129 08:31:56.540274 5017 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 08:32:26 crc kubenswrapper[5017]: I0129 08:32:26.538990 5017 patch_prober.go:28] interesting pod/machine-config-daemon-895pl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Jan 29 08:32:26 crc kubenswrapper[5017]: I0129 08:32:26.539982 5017 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 08:32:29 crc kubenswrapper[5017]: I0129 08:32:29.413390 5017 generic.go:334] "Generic (PLEG): container finished" podID="a0b818ab-1e3f-47cd-b7b2-0953e0effa22" containerID="dc7f275831a20ca92e6d92b0ad087f0325869cf646cb2fdc6ea57e96bfc353ce" exitCode=0 Jan 29 08:32:29 crc kubenswrapper[5017]: I0129 08:32:29.413487 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-openstack-openstack-cell1-wth7v" event={"ID":"a0b818ab-1e3f-47cd-b7b2-0953e0effa22","Type":"ContainerDied","Data":"dc7f275831a20ca92e6d92b0ad087f0325869cf646cb2fdc6ea57e96bfc353ce"} Jan 29 08:32:30 crc kubenswrapper[5017]: I0129 08:32:30.885715 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-openstack-openstack-cell1-wth7v" Jan 29 08:32:30 crc kubenswrapper[5017]: I0129 08:32:30.976187 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0b818ab-1e3f-47cd-b7b2-0953e0effa22-bootstrap-combined-ca-bundle\") pod \"a0b818ab-1e3f-47cd-b7b2-0953e0effa22\" (UID: \"a0b818ab-1e3f-47cd-b7b2-0953e0effa22\") " Jan 29 08:32:30 crc kubenswrapper[5017]: I0129 08:32:30.976264 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a0b818ab-1e3f-47cd-b7b2-0953e0effa22-ceph\") pod \"a0b818ab-1e3f-47cd-b7b2-0953e0effa22\" (UID: \"a0b818ab-1e3f-47cd-b7b2-0953e0effa22\") " Jan 29 08:32:30 crc kubenswrapper[5017]: I0129 08:32:30.976298 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/a0b818ab-1e3f-47cd-b7b2-0953e0effa22-ssh-key-openstack-cell1\") pod \"a0b818ab-1e3f-47cd-b7b2-0953e0effa22\" (UID: \"a0b818ab-1e3f-47cd-b7b2-0953e0effa22\") " Jan 29 08:32:30 crc kubenswrapper[5017]: I0129 08:32:30.976351 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2dz62\" (UniqueName: \"kubernetes.io/projected/a0b818ab-1e3f-47cd-b7b2-0953e0effa22-kube-api-access-2dz62\") pod \"a0b818ab-1e3f-47cd-b7b2-0953e0effa22\" (UID: \"a0b818ab-1e3f-47cd-b7b2-0953e0effa22\") " Jan 29 08:32:30 crc kubenswrapper[5017]: I0129 08:32:30.976591 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a0b818ab-1e3f-47cd-b7b2-0953e0effa22-inventory\") pod \"a0b818ab-1e3f-47cd-b7b2-0953e0effa22\" (UID: \"a0b818ab-1e3f-47cd-b7b2-0953e0effa22\") " Jan 29 08:32:30 crc kubenswrapper[5017]: I0129 08:32:30.984353 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0b818ab-1e3f-47cd-b7b2-0953e0effa22-ceph" (OuterVolumeSpecName: "ceph") pod "a0b818ab-1e3f-47cd-b7b2-0953e0effa22" (UID: "a0b818ab-1e3f-47cd-b7b2-0953e0effa22"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:32:30 crc kubenswrapper[5017]: I0129 08:32:30.984642 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0b818ab-1e3f-47cd-b7b2-0953e0effa22-kube-api-access-2dz62" (OuterVolumeSpecName: "kube-api-access-2dz62") pod "a0b818ab-1e3f-47cd-b7b2-0953e0effa22" (UID: "a0b818ab-1e3f-47cd-b7b2-0953e0effa22"). InnerVolumeSpecName "kube-api-access-2dz62". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:32:30 crc kubenswrapper[5017]: I0129 08:32:30.985135 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0b818ab-1e3f-47cd-b7b2-0953e0effa22-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "a0b818ab-1e3f-47cd-b7b2-0953e0effa22" (UID: "a0b818ab-1e3f-47cd-b7b2-0953e0effa22"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:32:31 crc kubenswrapper[5017]: I0129 08:32:31.012817 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0b818ab-1e3f-47cd-b7b2-0953e0effa22-inventory" (OuterVolumeSpecName: "inventory") pod "a0b818ab-1e3f-47cd-b7b2-0953e0effa22" (UID: "a0b818ab-1e3f-47cd-b7b2-0953e0effa22"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:32:31 crc kubenswrapper[5017]: I0129 08:32:31.014952 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0b818ab-1e3f-47cd-b7b2-0953e0effa22-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "a0b818ab-1e3f-47cd-b7b2-0953e0effa22" (UID: "a0b818ab-1e3f-47cd-b7b2-0953e0effa22"). InnerVolumeSpecName "ssh-key-openstack-cell1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:32:31 crc kubenswrapper[5017]: I0129 08:32:31.080116 5017 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a0b818ab-1e3f-47cd-b7b2-0953e0effa22-inventory\") on node \"crc\" DevicePath \"\"" Jan 29 08:32:31 crc kubenswrapper[5017]: I0129 08:32:31.080282 5017 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0b818ab-1e3f-47cd-b7b2-0953e0effa22-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:32:31 crc kubenswrapper[5017]: I0129 08:32:31.080373 5017 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a0b818ab-1e3f-47cd-b7b2-0953e0effa22-ceph\") on node \"crc\" DevicePath \"\"" Jan 29 08:32:31 crc kubenswrapper[5017]: I0129 08:32:31.080489 5017 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/a0b818ab-1e3f-47cd-b7b2-0953e0effa22-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 29 08:32:31 crc kubenswrapper[5017]: I0129 08:32:31.080573 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2dz62\" (UniqueName: \"kubernetes.io/projected/a0b818ab-1e3f-47cd-b7b2-0953e0effa22-kube-api-access-2dz62\") on node \"crc\" DevicePath \"\"" Jan 29 08:32:31 crc kubenswrapper[5017]: I0129 08:32:31.436285 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-openstack-openstack-cell1-wth7v" event={"ID":"a0b818ab-1e3f-47cd-b7b2-0953e0effa22","Type":"ContainerDied","Data":"5ab73c7c645089eadaf7af3f9415d52e4136f8feb9143ae0865242f36a4a9921"} Jan 29 08:32:31 crc kubenswrapper[5017]: I0129 08:32:31.436341 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ab73c7c645089eadaf7af3f9415d52e4136f8feb9143ae0865242f36a4a9921" Jan 29 08:32:31 crc kubenswrapper[5017]: I0129 08:32:31.436354 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-openstack-openstack-cell1-wth7v" Jan 29 08:32:31 crc kubenswrapper[5017]: I0129 08:32:31.539249 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-openstack-openstack-cell1-t2gng"] Jan 29 08:32:31 crc kubenswrapper[5017]: E0129 08:32:31.539740 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0b818ab-1e3f-47cd-b7b2-0953e0effa22" containerName="bootstrap-openstack-openstack-cell1" Jan 29 08:32:31 crc kubenswrapper[5017]: I0129 08:32:31.539761 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0b818ab-1e3f-47cd-b7b2-0953e0effa22" containerName="bootstrap-openstack-openstack-cell1" Jan 29 08:32:31 crc kubenswrapper[5017]: E0129 08:32:31.539804 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="181d8390-97e9-4232-ac5c-03b3e8b2a764" containerName="collect-profiles" Jan 29 08:32:31 crc kubenswrapper[5017]: I0129 08:32:31.539811 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="181d8390-97e9-4232-ac5c-03b3e8b2a764" containerName="collect-profiles" Jan 29 08:32:31 crc kubenswrapper[5017]: I0129 08:32:31.540080 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0b818ab-1e3f-47cd-b7b2-0953e0effa22" containerName="bootstrap-openstack-openstack-cell1" Jan 29 08:32:31 crc kubenswrapper[5017]: I0129 08:32:31.540099 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="181d8390-97e9-4232-ac5c-03b3e8b2a764" containerName="collect-profiles" Jan 29 08:32:31 crc kubenswrapper[5017]: I0129 08:32:31.540988 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-openstack-openstack-cell1-t2gng" Jan 29 08:32:31 crc kubenswrapper[5017]: I0129 08:32:31.546023 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 08:32:31 crc kubenswrapper[5017]: I0129 08:32:31.546235 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Jan 29 08:32:31 crc kubenswrapper[5017]: I0129 08:32:31.546287 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-8fskm" Jan 29 08:32:31 crc kubenswrapper[5017]: I0129 08:32:31.546519 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Jan 29 08:32:31 crc kubenswrapper[5017]: I0129 08:32:31.553539 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-openstack-openstack-cell1-t2gng"] Jan 29 08:32:31 crc kubenswrapper[5017]: I0129 08:32:31.694624 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgfnp\" (UniqueName: \"kubernetes.io/projected/af57877d-2918-40e4-b104-b1fb93121850-kube-api-access-cgfnp\") pod \"download-cache-openstack-openstack-cell1-t2gng\" (UID: \"af57877d-2918-40e4-b104-b1fb93121850\") " pod="openstack/download-cache-openstack-openstack-cell1-t2gng" Jan 29 08:32:31 crc kubenswrapper[5017]: I0129 08:32:31.695256 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/af57877d-2918-40e4-b104-b1fb93121850-inventory\") pod \"download-cache-openstack-openstack-cell1-t2gng\" (UID: \"af57877d-2918-40e4-b104-b1fb93121850\") " pod="openstack/download-cache-openstack-openstack-cell1-t2gng" Jan 29 08:32:31 crc kubenswrapper[5017]: I0129 
08:32:31.695347 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/af57877d-2918-40e4-b104-b1fb93121850-ssh-key-openstack-cell1\") pod \"download-cache-openstack-openstack-cell1-t2gng\" (UID: \"af57877d-2918-40e4-b104-b1fb93121850\") " pod="openstack/download-cache-openstack-openstack-cell1-t2gng" Jan 29 08:32:31 crc kubenswrapper[5017]: I0129 08:32:31.695415 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/af57877d-2918-40e4-b104-b1fb93121850-ceph\") pod \"download-cache-openstack-openstack-cell1-t2gng\" (UID: \"af57877d-2918-40e4-b104-b1fb93121850\") " pod="openstack/download-cache-openstack-openstack-cell1-t2gng" Jan 29 08:32:31 crc kubenswrapper[5017]: I0129 08:32:31.797074 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/af57877d-2918-40e4-b104-b1fb93121850-ceph\") pod \"download-cache-openstack-openstack-cell1-t2gng\" (UID: \"af57877d-2918-40e4-b104-b1fb93121850\") " pod="openstack/download-cache-openstack-openstack-cell1-t2gng" Jan 29 08:32:31 crc kubenswrapper[5017]: I0129 08:32:31.797222 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgfnp\" (UniqueName: \"kubernetes.io/projected/af57877d-2918-40e4-b104-b1fb93121850-kube-api-access-cgfnp\") pod \"download-cache-openstack-openstack-cell1-t2gng\" (UID: \"af57877d-2918-40e4-b104-b1fb93121850\") " pod="openstack/download-cache-openstack-openstack-cell1-t2gng" Jan 29 08:32:31 crc kubenswrapper[5017]: I0129 08:32:31.797573 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/af57877d-2918-40e4-b104-b1fb93121850-inventory\") pod \"download-cache-openstack-openstack-cell1-t2gng\" (UID: \"af57877d-2918-40e4-b104-b1fb93121850\") " pod="openstack/download-cache-openstack-openstack-cell1-t2gng" Jan 29 08:32:31 crc kubenswrapper[5017]: I0129 08:32:31.797650 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/af57877d-2918-40e4-b104-b1fb93121850-ssh-key-openstack-cell1\") pod \"download-cache-openstack-openstack-cell1-t2gng\" (UID: \"af57877d-2918-40e4-b104-b1fb93121850\") " pod="openstack/download-cache-openstack-openstack-cell1-t2gng" Jan 29 08:32:31 crc kubenswrapper[5017]: I0129 08:32:31.803113 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/af57877d-2918-40e4-b104-b1fb93121850-inventory\") pod \"download-cache-openstack-openstack-cell1-t2gng\" (UID: \"af57877d-2918-40e4-b104-b1fb93121850\") " pod="openstack/download-cache-openstack-openstack-cell1-t2gng" Jan 29 08:32:31 crc kubenswrapper[5017]: I0129 08:32:31.807115 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/af57877d-2918-40e4-b104-b1fb93121850-ceph\") pod \"download-cache-openstack-openstack-cell1-t2gng\" (UID: \"af57877d-2918-40e4-b104-b1fb93121850\") " pod="openstack/download-cache-openstack-openstack-cell1-t2gng" Jan 29 08:32:31 crc kubenswrapper[5017]: I0129 08:32:31.807192 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: 
\"kubernetes.io/secret/af57877d-2918-40e4-b104-b1fb93121850-ssh-key-openstack-cell1\") pod \"download-cache-openstack-openstack-cell1-t2gng\" (UID: \"af57877d-2918-40e4-b104-b1fb93121850\") " pod="openstack/download-cache-openstack-openstack-cell1-t2gng" Jan 29 08:32:31 crc kubenswrapper[5017]: I0129 08:32:31.817652 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgfnp\" (UniqueName: \"kubernetes.io/projected/af57877d-2918-40e4-b104-b1fb93121850-kube-api-access-cgfnp\") pod \"download-cache-openstack-openstack-cell1-t2gng\" (UID: \"af57877d-2918-40e4-b104-b1fb93121850\") " pod="openstack/download-cache-openstack-openstack-cell1-t2gng" Jan 29 08:32:31 crc kubenswrapper[5017]: I0129 08:32:31.863507 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-openstack-openstack-cell1-t2gng" Jan 29 08:32:32 crc kubenswrapper[5017]: I0129 08:32:32.429099 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-openstack-openstack-cell1-t2gng"] Jan 29 08:32:32 crc kubenswrapper[5017]: I0129 08:32:32.447659 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-cell1-t2gng" event={"ID":"af57877d-2918-40e4-b104-b1fb93121850","Type":"ContainerStarted","Data":"5c7638d2493d68af3703bd36fdc15699d99fe3bbd4020f8177b177617d1f4f40"} Jan 29 08:32:33 crc kubenswrapper[5017]: I0129 08:32:33.459842 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-cell1-t2gng" event={"ID":"af57877d-2918-40e4-b104-b1fb93121850","Type":"ContainerStarted","Data":"84bdaa4dd87fd50b11ed46867c35e351c9944524d47d7de40fefa8945221ad4a"} Jan 29 08:32:33 crc kubenswrapper[5017]: I0129 08:32:33.483679 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-openstack-openstack-cell1-t2gng" podStartSLOduration=1.853781592 podStartE2EDuration="2.483649499s" podCreationTimestamp="2026-01-29 08:32:31 +0000 UTC" firstStartedPulling="2026-01-29 08:32:32.434148934 +0000 UTC m=+7038.808596544" lastFinishedPulling="2026-01-29 08:32:33.064016831 +0000 UTC m=+7039.438464451" observedRunningTime="2026-01-29 08:32:33.478787402 +0000 UTC m=+7039.853235012" watchObservedRunningTime="2026-01-29 08:32:33.483649499 +0000 UTC m=+7039.858097109" Jan 29 08:32:56 crc kubenswrapper[5017]: I0129 08:32:56.539289 5017 patch_prober.go:28] interesting pod/machine-config-daemon-895pl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 08:32:56 crc kubenswrapper[5017]: I0129 08:32:56.540161 5017 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 08:32:56 crc kubenswrapper[5017]: I0129 08:32:56.540217 5017 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-895pl" Jan 29 08:32:56 crc kubenswrapper[5017]: I0129 08:32:56.541297 5017 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"29716ea24128bdf6ee46dfb15f1f1dfd3446061da8e9bec6ed837055b76aede7"} pod="openshift-machine-config-operator/machine-config-daemon-895pl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 08:32:56 crc kubenswrapper[5017]: I0129 08:32:56.541365 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" containerID="cri-o://29716ea24128bdf6ee46dfb15f1f1dfd3446061da8e9bec6ed837055b76aede7" gracePeriod=600 Jan 29 08:32:56 crc kubenswrapper[5017]: E0129 08:32:56.685614 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:32:56 crc kubenswrapper[5017]: I0129 08:32:56.731268 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-895pl" event={"ID":"2672ef63-7861-4c3d-a1b4-03cc9d18f8e2","Type":"ContainerDied","Data":"29716ea24128bdf6ee46dfb15f1f1dfd3446061da8e9bec6ed837055b76aede7"} Jan 29 08:32:56 crc kubenswrapper[5017]: I0129 08:32:56.731222 5017 generic.go:334] "Generic (PLEG): container finished" podID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerID="29716ea24128bdf6ee46dfb15f1f1dfd3446061da8e9bec6ed837055b76aede7" exitCode=0 Jan 29 08:32:56 crc kubenswrapper[5017]: I0129 08:32:56.731330 5017 scope.go:117] "RemoveContainer" containerID="d088b7157d2d30237d05e92bd39f814bd550db374b4ca0b407df76504036dcce" Jan 29 08:32:56 crc kubenswrapper[5017]: I0129 08:32:56.732749 5017 scope.go:117] "RemoveContainer" containerID="29716ea24128bdf6ee46dfb15f1f1dfd3446061da8e9bec6ed837055b76aede7" Jan 29 08:32:56 crc kubenswrapper[5017]: E0129 08:32:56.737542 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:33:09 crc kubenswrapper[5017]: I0129 08:33:09.317161 5017 scope.go:117] "RemoveContainer" containerID="29716ea24128bdf6ee46dfb15f1f1dfd3446061da8e9bec6ed837055b76aede7" Jan 29 08:33:09 crc kubenswrapper[5017]: E0129 08:33:09.318320 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:33:22 crc kubenswrapper[5017]: I0129 08:33:22.317152 5017 scope.go:117] "RemoveContainer" containerID="29716ea24128bdf6ee46dfb15f1f1dfd3446061da8e9bec6ed837055b76aede7" Jan 29 08:33:22 crc kubenswrapper[5017]: E0129 08:33:22.318486 5017 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:33:33 crc kubenswrapper[5017]: I0129 08:33:33.316927 5017 scope.go:117] "RemoveContainer" containerID="29716ea24128bdf6ee46dfb15f1f1dfd3446061da8e9bec6ed837055b76aede7" Jan 29 08:33:33 crc kubenswrapper[5017]: E0129 08:33:33.318112 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:33:47 crc kubenswrapper[5017]: I0129 08:33:47.316638 5017 scope.go:117] "RemoveContainer" containerID="29716ea24128bdf6ee46dfb15f1f1dfd3446061da8e9bec6ed837055b76aede7" Jan 29 08:33:47 crc kubenswrapper[5017]: E0129 08:33:47.317736 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:34:01 crc kubenswrapper[5017]: I0129 08:34:01.297252 5017 generic.go:334] "Generic (PLEG): container finished" podID="af57877d-2918-40e4-b104-b1fb93121850" containerID="84bdaa4dd87fd50b11ed46867c35e351c9944524d47d7de40fefa8945221ad4a" exitCode=0 Jan 29 08:34:01 crc kubenswrapper[5017]: I0129 08:34:01.297310 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-cell1-t2gng" event={"ID":"af57877d-2918-40e4-b104-b1fb93121850","Type":"ContainerDied","Data":"84bdaa4dd87fd50b11ed46867c35e351c9944524d47d7de40fefa8945221ad4a"} Jan 29 08:34:01 crc kubenswrapper[5017]: I0129 08:34:01.322630 5017 scope.go:117] "RemoveContainer" containerID="29716ea24128bdf6ee46dfb15f1f1dfd3446061da8e9bec6ed837055b76aede7" Jan 29 08:34:01 crc kubenswrapper[5017]: E0129 08:34:01.322975 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:34:02 crc kubenswrapper[5017]: I0129 08:34:02.781517 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-openstack-openstack-cell1-t2gng" Jan 29 08:34:02 crc kubenswrapper[5017]: I0129 08:34:02.900180 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/af57877d-2918-40e4-b104-b1fb93121850-ceph\") pod \"af57877d-2918-40e4-b104-b1fb93121850\" (UID: \"af57877d-2918-40e4-b104-b1fb93121850\") " Jan 29 08:34:02 crc kubenswrapper[5017]: I0129 08:34:02.900328 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/af57877d-2918-40e4-b104-b1fb93121850-inventory\") pod \"af57877d-2918-40e4-b104-b1fb93121850\" (UID: \"af57877d-2918-40e4-b104-b1fb93121850\") " Jan 29 08:34:02 crc kubenswrapper[5017]: I0129 08:34:02.900493 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/af57877d-2918-40e4-b104-b1fb93121850-ssh-key-openstack-cell1\") pod \"af57877d-2918-40e4-b104-b1fb93121850\" (UID: \"af57877d-2918-40e4-b104-b1fb93121850\") " Jan 29 08:34:02 crc kubenswrapper[5017]: I0129 08:34:02.900604 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cgfnp\" (UniqueName: \"kubernetes.io/projected/af57877d-2918-40e4-b104-b1fb93121850-kube-api-access-cgfnp\") pod \"af57877d-2918-40e4-b104-b1fb93121850\" (UID: \"af57877d-2918-40e4-b104-b1fb93121850\") " Jan 29 08:34:02 crc kubenswrapper[5017]: I0129 08:34:02.908198 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af57877d-2918-40e4-b104-b1fb93121850-ceph" (OuterVolumeSpecName: "ceph") pod "af57877d-2918-40e4-b104-b1fb93121850" (UID: "af57877d-2918-40e4-b104-b1fb93121850"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:34:02 crc kubenswrapper[5017]: I0129 08:34:02.908380 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af57877d-2918-40e4-b104-b1fb93121850-kube-api-access-cgfnp" (OuterVolumeSpecName: "kube-api-access-cgfnp") pod "af57877d-2918-40e4-b104-b1fb93121850" (UID: "af57877d-2918-40e4-b104-b1fb93121850"). InnerVolumeSpecName "kube-api-access-cgfnp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:34:02 crc kubenswrapper[5017]: I0129 08:34:02.937832 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af57877d-2918-40e4-b104-b1fb93121850-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "af57877d-2918-40e4-b104-b1fb93121850" (UID: "af57877d-2918-40e4-b104-b1fb93121850"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:34:02 crc kubenswrapper[5017]: I0129 08:34:02.937900 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af57877d-2918-40e4-b104-b1fb93121850-inventory" (OuterVolumeSpecName: "inventory") pod "af57877d-2918-40e4-b104-b1fb93121850" (UID: "af57877d-2918-40e4-b104-b1fb93121850"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:34:03 crc kubenswrapper[5017]: I0129 08:34:03.006383 5017 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/af57877d-2918-40e4-b104-b1fb93121850-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 29 08:34:03 crc kubenswrapper[5017]: I0129 08:34:03.006441 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cgfnp\" (UniqueName: \"kubernetes.io/projected/af57877d-2918-40e4-b104-b1fb93121850-kube-api-access-cgfnp\") on node \"crc\" DevicePath \"\"" Jan 29 08:34:03 crc kubenswrapper[5017]: I0129 08:34:03.006451 5017 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/af57877d-2918-40e4-b104-b1fb93121850-ceph\") on node \"crc\" DevicePath \"\"" Jan 29 08:34:03 crc kubenswrapper[5017]: I0129 08:34:03.006461 5017 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/af57877d-2918-40e4-b104-b1fb93121850-inventory\") on node \"crc\" DevicePath \"\"" Jan 29 08:34:03 crc kubenswrapper[5017]: I0129 08:34:03.321273 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-cell1-t2gng" event={"ID":"af57877d-2918-40e4-b104-b1fb93121850","Type":"ContainerDied","Data":"5c7638d2493d68af3703bd36fdc15699d99fe3bbd4020f8177b177617d1f4f40"} Jan 29 08:34:03 crc kubenswrapper[5017]: I0129 08:34:03.321325 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c7638d2493d68af3703bd36fdc15699d99fe3bbd4020f8177b177617d1f4f40" Jan 29 08:34:03 crc kubenswrapper[5017]: I0129 08:34:03.321363 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-openstack-openstack-cell1-t2gng" Jan 29 08:34:03 crc kubenswrapper[5017]: I0129 08:34:03.429928 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-openstack-openstack-cell1-kp2gq"] Jan 29 08:34:03 crc kubenswrapper[5017]: E0129 08:34:03.431059 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af57877d-2918-40e4-b104-b1fb93121850" containerName="download-cache-openstack-openstack-cell1" Jan 29 08:34:03 crc kubenswrapper[5017]: I0129 08:34:03.431086 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="af57877d-2918-40e4-b104-b1fb93121850" containerName="download-cache-openstack-openstack-cell1" Jan 29 08:34:03 crc kubenswrapper[5017]: I0129 08:34:03.431409 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="af57877d-2918-40e4-b104-b1fb93121850" containerName="download-cache-openstack-openstack-cell1" Jan 29 08:34:03 crc kubenswrapper[5017]: I0129 08:34:03.432515 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-openstack-openstack-cell1-kp2gq" Jan 29 08:34:03 crc kubenswrapper[5017]: I0129 08:34:03.435777 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-8fskm" Jan 29 08:34:03 crc kubenswrapper[5017]: I0129 08:34:03.436109 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Jan 29 08:34:03 crc kubenswrapper[5017]: I0129 08:34:03.436236 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 08:34:03 crc kubenswrapper[5017]: I0129 08:34:03.436361 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Jan 29 08:34:03 crc kubenswrapper[5017]: I0129 08:34:03.437326 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-openstack-openstack-cell1-kp2gq"] Jan 29 08:34:03 crc kubenswrapper[5017]: I0129 08:34:03.520532 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/fa9d8b30-7463-4b9e-8d40-87b3091a5869-ssh-key-openstack-cell1\") pod \"configure-network-openstack-openstack-cell1-kp2gq\" (UID: \"fa9d8b30-7463-4b9e-8d40-87b3091a5869\") " pod="openstack/configure-network-openstack-openstack-cell1-kp2gq" Jan 29 08:34:03 crc kubenswrapper[5017]: I0129 08:34:03.520607 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tv2c\" (UniqueName: \"kubernetes.io/projected/fa9d8b30-7463-4b9e-8d40-87b3091a5869-kube-api-access-4tv2c\") pod \"configure-network-openstack-openstack-cell1-kp2gq\" (UID: \"fa9d8b30-7463-4b9e-8d40-87b3091a5869\") " pod="openstack/configure-network-openstack-openstack-cell1-kp2gq" Jan 29 08:34:03 crc kubenswrapper[5017]: I0129 08:34:03.520683 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/fa9d8b30-7463-4b9e-8d40-87b3091a5869-ceph\") pod \"configure-network-openstack-openstack-cell1-kp2gq\" (UID: \"fa9d8b30-7463-4b9e-8d40-87b3091a5869\") " pod="openstack/configure-network-openstack-openstack-cell1-kp2gq" Jan 29 08:34:03 crc kubenswrapper[5017]: I0129 08:34:03.520738 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fa9d8b30-7463-4b9e-8d40-87b3091a5869-inventory\") pod \"configure-network-openstack-openstack-cell1-kp2gq\" (UID: \"fa9d8b30-7463-4b9e-8d40-87b3091a5869\") " pod="openstack/configure-network-openstack-openstack-cell1-kp2gq" Jan 29 08:34:03 crc kubenswrapper[5017]: I0129 08:34:03.623441 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/fa9d8b30-7463-4b9e-8d40-87b3091a5869-ceph\") pod \"configure-network-openstack-openstack-cell1-kp2gq\" (UID: \"fa9d8b30-7463-4b9e-8d40-87b3091a5869\") " pod="openstack/configure-network-openstack-openstack-cell1-kp2gq" Jan 29 08:34:03 crc kubenswrapper[5017]: I0129 08:34:03.623560 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fa9d8b30-7463-4b9e-8d40-87b3091a5869-inventory\") pod \"configure-network-openstack-openstack-cell1-kp2gq\" (UID: \"fa9d8b30-7463-4b9e-8d40-87b3091a5869\") " 
pod="openstack/configure-network-openstack-openstack-cell1-kp2gq" Jan 29 08:34:03 crc kubenswrapper[5017]: I0129 08:34:03.623764 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/fa9d8b30-7463-4b9e-8d40-87b3091a5869-ssh-key-openstack-cell1\") pod \"configure-network-openstack-openstack-cell1-kp2gq\" (UID: \"fa9d8b30-7463-4b9e-8d40-87b3091a5869\") " pod="openstack/configure-network-openstack-openstack-cell1-kp2gq" Jan 29 08:34:03 crc kubenswrapper[5017]: I0129 08:34:03.623817 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4tv2c\" (UniqueName: \"kubernetes.io/projected/fa9d8b30-7463-4b9e-8d40-87b3091a5869-kube-api-access-4tv2c\") pod \"configure-network-openstack-openstack-cell1-kp2gq\" (UID: \"fa9d8b30-7463-4b9e-8d40-87b3091a5869\") " pod="openstack/configure-network-openstack-openstack-cell1-kp2gq" Jan 29 08:34:03 crc kubenswrapper[5017]: I0129 08:34:03.628824 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/fa9d8b30-7463-4b9e-8d40-87b3091a5869-ceph\") pod \"configure-network-openstack-openstack-cell1-kp2gq\" (UID: \"fa9d8b30-7463-4b9e-8d40-87b3091a5869\") " pod="openstack/configure-network-openstack-openstack-cell1-kp2gq" Jan 29 08:34:03 crc kubenswrapper[5017]: I0129 08:34:03.629825 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/fa9d8b30-7463-4b9e-8d40-87b3091a5869-ssh-key-openstack-cell1\") pod \"configure-network-openstack-openstack-cell1-kp2gq\" (UID: \"fa9d8b30-7463-4b9e-8d40-87b3091a5869\") " pod="openstack/configure-network-openstack-openstack-cell1-kp2gq" Jan 29 08:34:03 crc kubenswrapper[5017]: I0129 08:34:03.630703 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fa9d8b30-7463-4b9e-8d40-87b3091a5869-inventory\") pod \"configure-network-openstack-openstack-cell1-kp2gq\" (UID: \"fa9d8b30-7463-4b9e-8d40-87b3091a5869\") " pod="openstack/configure-network-openstack-openstack-cell1-kp2gq" Jan 29 08:34:03 crc kubenswrapper[5017]: I0129 08:34:03.643907 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4tv2c\" (UniqueName: \"kubernetes.io/projected/fa9d8b30-7463-4b9e-8d40-87b3091a5869-kube-api-access-4tv2c\") pod \"configure-network-openstack-openstack-cell1-kp2gq\" (UID: \"fa9d8b30-7463-4b9e-8d40-87b3091a5869\") " pod="openstack/configure-network-openstack-openstack-cell1-kp2gq" Jan 29 08:34:03 crc kubenswrapper[5017]: I0129 08:34:03.795726 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-openstack-openstack-cell1-kp2gq" Jan 29 08:34:04 crc kubenswrapper[5017]: I0129 08:34:04.498106 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-openstack-openstack-cell1-kp2gq"] Jan 29 08:34:04 crc kubenswrapper[5017]: I0129 08:34:04.512028 5017 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 08:34:05 crc kubenswrapper[5017]: I0129 08:34:05.367402 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-cell1-kp2gq" event={"ID":"fa9d8b30-7463-4b9e-8d40-87b3091a5869","Type":"ContainerStarted","Data":"8a49e254d2f7011a499c6f0d4f68c671c2cc658924c3ab56d9f1fc559085943c"} Jan 29 08:34:05 crc kubenswrapper[5017]: I0129 08:34:05.630948 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jvdt4"] Jan 29 08:34:05 crc kubenswrapper[5017]: I0129 08:34:05.633430 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jvdt4"] Jan 29 08:34:05 crc kubenswrapper[5017]: I0129 08:34:05.633531 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jvdt4" Jan 29 08:34:05 crc kubenswrapper[5017]: I0129 08:34:05.776401 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7b7af55-32cc-4352-b951-4d6589e83b87-utilities\") pod \"redhat-marketplace-jvdt4\" (UID: \"f7b7af55-32cc-4352-b951-4d6589e83b87\") " pod="openshift-marketplace/redhat-marketplace-jvdt4" Jan 29 08:34:05 crc kubenswrapper[5017]: I0129 08:34:05.776575 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2rdx\" (UniqueName: \"kubernetes.io/projected/f7b7af55-32cc-4352-b951-4d6589e83b87-kube-api-access-r2rdx\") pod \"redhat-marketplace-jvdt4\" (UID: \"f7b7af55-32cc-4352-b951-4d6589e83b87\") " pod="openshift-marketplace/redhat-marketplace-jvdt4" Jan 29 08:34:05 crc kubenswrapper[5017]: I0129 08:34:05.776769 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7b7af55-32cc-4352-b951-4d6589e83b87-catalog-content\") pod \"redhat-marketplace-jvdt4\" (UID: \"f7b7af55-32cc-4352-b951-4d6589e83b87\") " pod="openshift-marketplace/redhat-marketplace-jvdt4" Jan 29 08:34:05 crc kubenswrapper[5017]: I0129 08:34:05.879039 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2rdx\" (UniqueName: \"kubernetes.io/projected/f7b7af55-32cc-4352-b951-4d6589e83b87-kube-api-access-r2rdx\") pod \"redhat-marketplace-jvdt4\" (UID: \"f7b7af55-32cc-4352-b951-4d6589e83b87\") " pod="openshift-marketplace/redhat-marketplace-jvdt4" Jan 29 08:34:05 crc kubenswrapper[5017]: I0129 08:34:05.879748 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7b7af55-32cc-4352-b951-4d6589e83b87-catalog-content\") pod \"redhat-marketplace-jvdt4\" (UID: \"f7b7af55-32cc-4352-b951-4d6589e83b87\") " pod="openshift-marketplace/redhat-marketplace-jvdt4" Jan 29 08:34:05 crc kubenswrapper[5017]: I0129 08:34:05.880021 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/f7b7af55-32cc-4352-b951-4d6589e83b87-utilities\") pod \"redhat-marketplace-jvdt4\" (UID: \"f7b7af55-32cc-4352-b951-4d6589e83b87\") " pod="openshift-marketplace/redhat-marketplace-jvdt4" Jan 29 08:34:05 crc kubenswrapper[5017]: I0129 08:34:05.880519 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7b7af55-32cc-4352-b951-4d6589e83b87-catalog-content\") pod \"redhat-marketplace-jvdt4\" (UID: \"f7b7af55-32cc-4352-b951-4d6589e83b87\") " pod="openshift-marketplace/redhat-marketplace-jvdt4" Jan 29 08:34:05 crc kubenswrapper[5017]: I0129 08:34:05.880518 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7b7af55-32cc-4352-b951-4d6589e83b87-utilities\") pod \"redhat-marketplace-jvdt4\" (UID: \"f7b7af55-32cc-4352-b951-4d6589e83b87\") " pod="openshift-marketplace/redhat-marketplace-jvdt4" Jan 29 08:34:05 crc kubenswrapper[5017]: I0129 08:34:05.900295 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2rdx\" (UniqueName: \"kubernetes.io/projected/f7b7af55-32cc-4352-b951-4d6589e83b87-kube-api-access-r2rdx\") pod \"redhat-marketplace-jvdt4\" (UID: \"f7b7af55-32cc-4352-b951-4d6589e83b87\") " pod="openshift-marketplace/redhat-marketplace-jvdt4" Jan 29 08:34:05 crc kubenswrapper[5017]: I0129 08:34:05.989049 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jvdt4" Jan 29 08:34:06 crc kubenswrapper[5017]: I0129 08:34:06.387228 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-cell1-kp2gq" event={"ID":"fa9d8b30-7463-4b9e-8d40-87b3091a5869","Type":"ContainerStarted","Data":"97b835dbd6fbc1ad9e3abd19f5e37be509e81332e2e065b2ecf49dd6e52c427f"} Jan 29 08:34:06 crc kubenswrapper[5017]: I0129 08:34:06.413074 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-openstack-openstack-cell1-kp2gq" podStartSLOduration=2.746761789 podStartE2EDuration="3.412326291s" podCreationTimestamp="2026-01-29 08:34:03 +0000 UTC" firstStartedPulling="2026-01-29 08:34:04.51164215 +0000 UTC m=+7130.886089760" lastFinishedPulling="2026-01-29 08:34:05.177206652 +0000 UTC m=+7131.551654262" observedRunningTime="2026-01-29 08:34:06.410144008 +0000 UTC m=+7132.784591618" watchObservedRunningTime="2026-01-29 08:34:06.412326291 +0000 UTC m=+7132.786773901" Jan 29 08:34:06 crc kubenswrapper[5017]: I0129 08:34:06.560904 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jvdt4"] Jan 29 08:34:06 crc kubenswrapper[5017]: W0129 08:34:06.563938 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf7b7af55_32cc_4352_b951_4d6589e83b87.slice/crio-aad189a762a281941937af1caa31fc8283bfe22a63d5d670509d8f213472dc47 WatchSource:0}: Error finding container aad189a762a281941937af1caa31fc8283bfe22a63d5d670509d8f213472dc47: Status 404 returned error can't find the container with id aad189a762a281941937af1caa31fc8283bfe22a63d5d670509d8f213472dc47 Jan 29 08:34:07 crc kubenswrapper[5017]: I0129 08:34:07.403300 5017 generic.go:334] "Generic (PLEG): container finished" podID="f7b7af55-32cc-4352-b951-4d6589e83b87" containerID="8924979e17036a52be9ea454d2fdced5a4075b40eada59be387ca9013009d09c" exitCode=0 Jan 29 08:34:07 crc kubenswrapper[5017]: 
I0129 08:34:07.403461 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jvdt4" event={"ID":"f7b7af55-32cc-4352-b951-4d6589e83b87","Type":"ContainerDied","Data":"8924979e17036a52be9ea454d2fdced5a4075b40eada59be387ca9013009d09c"} Jan 29 08:34:07 crc kubenswrapper[5017]: I0129 08:34:07.404192 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jvdt4" event={"ID":"f7b7af55-32cc-4352-b951-4d6589e83b87","Type":"ContainerStarted","Data":"aad189a762a281941937af1caa31fc8283bfe22a63d5d670509d8f213472dc47"} Jan 29 08:34:08 crc kubenswrapper[5017]: I0129 08:34:08.414303 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jvdt4" event={"ID":"f7b7af55-32cc-4352-b951-4d6589e83b87","Type":"ContainerStarted","Data":"5aa0844c16b6120fb1e135e59339c5a45b33de8e6c5698cabc003bdcf7d966ef"} Jan 29 08:34:09 crc kubenswrapper[5017]: I0129 08:34:09.425394 5017 generic.go:334] "Generic (PLEG): container finished" podID="f7b7af55-32cc-4352-b951-4d6589e83b87" containerID="5aa0844c16b6120fb1e135e59339c5a45b33de8e6c5698cabc003bdcf7d966ef" exitCode=0 Jan 29 08:34:09 crc kubenswrapper[5017]: I0129 08:34:09.425448 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jvdt4" event={"ID":"f7b7af55-32cc-4352-b951-4d6589e83b87","Type":"ContainerDied","Data":"5aa0844c16b6120fb1e135e59339c5a45b33de8e6c5698cabc003bdcf7d966ef"} Jan 29 08:34:10 crc kubenswrapper[5017]: I0129 08:34:10.438813 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jvdt4" event={"ID":"f7b7af55-32cc-4352-b951-4d6589e83b87","Type":"ContainerStarted","Data":"8a55ebad1c2cbd252841c01ed4171ea2fde20c9eed097d72eca05917186305d2"} Jan 29 08:34:10 crc kubenswrapper[5017]: I0129 08:34:10.465145 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jvdt4" podStartSLOduration=2.996459439 podStartE2EDuration="5.465113729s" podCreationTimestamp="2026-01-29 08:34:05 +0000 UTC" firstStartedPulling="2026-01-29 08:34:07.406724267 +0000 UTC m=+7133.781171877" lastFinishedPulling="2026-01-29 08:34:09.875378557 +0000 UTC m=+7136.249826167" observedRunningTime="2026-01-29 08:34:10.456384025 +0000 UTC m=+7136.830831635" watchObservedRunningTime="2026-01-29 08:34:10.465113729 +0000 UTC m=+7136.839561339" Jan 29 08:34:13 crc kubenswrapper[5017]: I0129 08:34:13.316237 5017 scope.go:117] "RemoveContainer" containerID="29716ea24128bdf6ee46dfb15f1f1dfd3446061da8e9bec6ed837055b76aede7" Jan 29 08:34:13 crc kubenswrapper[5017]: E0129 08:34:13.316656 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:34:15 crc kubenswrapper[5017]: I0129 08:34:15.989887 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jvdt4" Jan 29 08:34:15 crc kubenswrapper[5017]: I0129 08:34:15.990821 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jvdt4" Jan 29 08:34:16 crc kubenswrapper[5017]: I0129 
08:34:16.038306 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jvdt4" Jan 29 08:34:16 crc kubenswrapper[5017]: I0129 08:34:16.562645 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jvdt4" Jan 29 08:34:16 crc kubenswrapper[5017]: I0129 08:34:16.624461 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jvdt4"] Jan 29 08:34:18 crc kubenswrapper[5017]: I0129 08:34:18.520818 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-jvdt4" podUID="f7b7af55-32cc-4352-b951-4d6589e83b87" containerName="registry-server" containerID="cri-o://8a55ebad1c2cbd252841c01ed4171ea2fde20c9eed097d72eca05917186305d2" gracePeriod=2 Jan 29 08:34:19 crc kubenswrapper[5017]: I0129 08:34:19.039930 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jvdt4" Jan 29 08:34:19 crc kubenswrapper[5017]: I0129 08:34:19.219120 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7b7af55-32cc-4352-b951-4d6589e83b87-utilities\") pod \"f7b7af55-32cc-4352-b951-4d6589e83b87\" (UID: \"f7b7af55-32cc-4352-b951-4d6589e83b87\") " Jan 29 08:34:19 crc kubenswrapper[5017]: I0129 08:34:19.219351 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7b7af55-32cc-4352-b951-4d6589e83b87-catalog-content\") pod \"f7b7af55-32cc-4352-b951-4d6589e83b87\" (UID: \"f7b7af55-32cc-4352-b951-4d6589e83b87\") " Jan 29 08:34:19 crc kubenswrapper[5017]: I0129 08:34:19.219389 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r2rdx\" (UniqueName: \"kubernetes.io/projected/f7b7af55-32cc-4352-b951-4d6589e83b87-kube-api-access-r2rdx\") pod \"f7b7af55-32cc-4352-b951-4d6589e83b87\" (UID: \"f7b7af55-32cc-4352-b951-4d6589e83b87\") " Jan 29 08:34:19 crc kubenswrapper[5017]: I0129 08:34:19.220498 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7b7af55-32cc-4352-b951-4d6589e83b87-utilities" (OuterVolumeSpecName: "utilities") pod "f7b7af55-32cc-4352-b951-4d6589e83b87" (UID: "f7b7af55-32cc-4352-b951-4d6589e83b87"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:34:19 crc kubenswrapper[5017]: I0129 08:34:19.233647 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7b7af55-32cc-4352-b951-4d6589e83b87-kube-api-access-r2rdx" (OuterVolumeSpecName: "kube-api-access-r2rdx") pod "f7b7af55-32cc-4352-b951-4d6589e83b87" (UID: "f7b7af55-32cc-4352-b951-4d6589e83b87"). InnerVolumeSpecName "kube-api-access-r2rdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:34:19 crc kubenswrapper[5017]: I0129 08:34:19.249658 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7b7af55-32cc-4352-b951-4d6589e83b87-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f7b7af55-32cc-4352-b951-4d6589e83b87" (UID: "f7b7af55-32cc-4352-b951-4d6589e83b87"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:34:19 crc kubenswrapper[5017]: I0129 08:34:19.322943 5017 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7b7af55-32cc-4352-b951-4d6589e83b87-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 08:34:19 crc kubenswrapper[5017]: I0129 08:34:19.323083 5017 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7b7af55-32cc-4352-b951-4d6589e83b87-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 08:34:19 crc kubenswrapper[5017]: I0129 08:34:19.323099 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r2rdx\" (UniqueName: \"kubernetes.io/projected/f7b7af55-32cc-4352-b951-4d6589e83b87-kube-api-access-r2rdx\") on node \"crc\" DevicePath \"\"" Jan 29 08:34:19 crc kubenswrapper[5017]: I0129 08:34:19.538255 5017 generic.go:334] "Generic (PLEG): container finished" podID="f7b7af55-32cc-4352-b951-4d6589e83b87" containerID="8a55ebad1c2cbd252841c01ed4171ea2fde20c9eed097d72eca05917186305d2" exitCode=0 Jan 29 08:34:19 crc kubenswrapper[5017]: I0129 08:34:19.538314 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jvdt4" event={"ID":"f7b7af55-32cc-4352-b951-4d6589e83b87","Type":"ContainerDied","Data":"8a55ebad1c2cbd252841c01ed4171ea2fde20c9eed097d72eca05917186305d2"} Jan 29 08:34:19 crc kubenswrapper[5017]: I0129 08:34:19.538350 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jvdt4" event={"ID":"f7b7af55-32cc-4352-b951-4d6589e83b87","Type":"ContainerDied","Data":"aad189a762a281941937af1caa31fc8283bfe22a63d5d670509d8f213472dc47"} Jan 29 08:34:19 crc kubenswrapper[5017]: I0129 08:34:19.538370 5017 scope.go:117] "RemoveContainer" containerID="8a55ebad1c2cbd252841c01ed4171ea2fde20c9eed097d72eca05917186305d2" Jan 29 08:34:19 crc kubenswrapper[5017]: I0129 08:34:19.538532 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jvdt4" Jan 29 08:34:19 crc kubenswrapper[5017]: I0129 08:34:19.577751 5017 scope.go:117] "RemoveContainer" containerID="5aa0844c16b6120fb1e135e59339c5a45b33de8e6c5698cabc003bdcf7d966ef" Jan 29 08:34:19 crc kubenswrapper[5017]: I0129 08:34:19.586617 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jvdt4"] Jan 29 08:34:19 crc kubenswrapper[5017]: I0129 08:34:19.606056 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-jvdt4"] Jan 29 08:34:19 crc kubenswrapper[5017]: I0129 08:34:19.608550 5017 scope.go:117] "RemoveContainer" containerID="8924979e17036a52be9ea454d2fdced5a4075b40eada59be387ca9013009d09c" Jan 29 08:34:19 crc kubenswrapper[5017]: I0129 08:34:19.664816 5017 scope.go:117] "RemoveContainer" containerID="8a55ebad1c2cbd252841c01ed4171ea2fde20c9eed097d72eca05917186305d2" Jan 29 08:34:19 crc kubenswrapper[5017]: E0129 08:34:19.665897 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a55ebad1c2cbd252841c01ed4171ea2fde20c9eed097d72eca05917186305d2\": container with ID starting with 8a55ebad1c2cbd252841c01ed4171ea2fde20c9eed097d72eca05917186305d2 not found: ID does not exist" containerID="8a55ebad1c2cbd252841c01ed4171ea2fde20c9eed097d72eca05917186305d2" Jan 29 08:34:19 crc kubenswrapper[5017]: I0129 08:34:19.665969 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a55ebad1c2cbd252841c01ed4171ea2fde20c9eed097d72eca05917186305d2"} err="failed to get container status \"8a55ebad1c2cbd252841c01ed4171ea2fde20c9eed097d72eca05917186305d2\": rpc error: code = NotFound desc = could not find container \"8a55ebad1c2cbd252841c01ed4171ea2fde20c9eed097d72eca05917186305d2\": container with ID starting with 8a55ebad1c2cbd252841c01ed4171ea2fde20c9eed097d72eca05917186305d2 not found: ID does not exist" Jan 29 08:34:19 crc kubenswrapper[5017]: I0129 08:34:19.666000 5017 scope.go:117] "RemoveContainer" containerID="5aa0844c16b6120fb1e135e59339c5a45b33de8e6c5698cabc003bdcf7d966ef" Jan 29 08:34:19 crc kubenswrapper[5017]: E0129 08:34:19.666339 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5aa0844c16b6120fb1e135e59339c5a45b33de8e6c5698cabc003bdcf7d966ef\": container with ID starting with 5aa0844c16b6120fb1e135e59339c5a45b33de8e6c5698cabc003bdcf7d966ef not found: ID does not exist" containerID="5aa0844c16b6120fb1e135e59339c5a45b33de8e6c5698cabc003bdcf7d966ef" Jan 29 08:34:19 crc kubenswrapper[5017]: I0129 08:34:19.666372 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5aa0844c16b6120fb1e135e59339c5a45b33de8e6c5698cabc003bdcf7d966ef"} err="failed to get container status \"5aa0844c16b6120fb1e135e59339c5a45b33de8e6c5698cabc003bdcf7d966ef\": rpc error: code = NotFound desc = could not find container \"5aa0844c16b6120fb1e135e59339c5a45b33de8e6c5698cabc003bdcf7d966ef\": container with ID starting with 5aa0844c16b6120fb1e135e59339c5a45b33de8e6c5698cabc003bdcf7d966ef not found: ID does not exist" Jan 29 08:34:19 crc kubenswrapper[5017]: I0129 08:34:19.666387 5017 scope.go:117] "RemoveContainer" containerID="8924979e17036a52be9ea454d2fdced5a4075b40eada59be387ca9013009d09c" Jan 29 08:34:19 crc kubenswrapper[5017]: E0129 08:34:19.667084 5017 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"8924979e17036a52be9ea454d2fdced5a4075b40eada59be387ca9013009d09c\": container with ID starting with 8924979e17036a52be9ea454d2fdced5a4075b40eada59be387ca9013009d09c not found: ID does not exist" containerID="8924979e17036a52be9ea454d2fdced5a4075b40eada59be387ca9013009d09c" Jan 29 08:34:19 crc kubenswrapper[5017]: I0129 08:34:19.667173 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8924979e17036a52be9ea454d2fdced5a4075b40eada59be387ca9013009d09c"} err="failed to get container status \"8924979e17036a52be9ea454d2fdced5a4075b40eada59be387ca9013009d09c\": rpc error: code = NotFound desc = could not find container \"8924979e17036a52be9ea454d2fdced5a4075b40eada59be387ca9013009d09c\": container with ID starting with 8924979e17036a52be9ea454d2fdced5a4075b40eada59be387ca9013009d09c not found: ID does not exist" Jan 29 08:34:20 crc kubenswrapper[5017]: I0129 08:34:20.328888 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7b7af55-32cc-4352-b951-4d6589e83b87" path="/var/lib/kubelet/pods/f7b7af55-32cc-4352-b951-4d6589e83b87/volumes" Jan 29 08:34:24 crc kubenswrapper[5017]: I0129 08:34:24.325187 5017 scope.go:117] "RemoveContainer" containerID="29716ea24128bdf6ee46dfb15f1f1dfd3446061da8e9bec6ed837055b76aede7" Jan 29 08:34:24 crc kubenswrapper[5017]: E0129 08:34:24.326499 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:34:39 crc kubenswrapper[5017]: I0129 08:34:39.316403 5017 scope.go:117] "RemoveContainer" containerID="29716ea24128bdf6ee46dfb15f1f1dfd3446061da8e9bec6ed837055b76aede7" Jan 29 08:34:39 crc kubenswrapper[5017]: E0129 08:34:39.317585 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:34:50 crc kubenswrapper[5017]: I0129 08:34:50.316786 5017 scope.go:117] "RemoveContainer" containerID="29716ea24128bdf6ee46dfb15f1f1dfd3446061da8e9bec6ed837055b76aede7" Jan 29 08:34:50 crc kubenswrapper[5017]: E0129 08:34:50.317715 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:35:01 crc kubenswrapper[5017]: I0129 08:35:01.316862 5017 scope.go:117] "RemoveContainer" containerID="29716ea24128bdf6ee46dfb15f1f1dfd3446061da8e9bec6ed837055b76aede7" Jan 29 08:35:01 crc kubenswrapper[5017]: E0129 08:35:01.318040 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:35:12 crc kubenswrapper[5017]: I0129 08:35:12.316424 5017 scope.go:117] "RemoveContainer" containerID="29716ea24128bdf6ee46dfb15f1f1dfd3446061da8e9bec6ed837055b76aede7" Jan 29 08:35:12 crc kubenswrapper[5017]: E0129 08:35:12.318532 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:35:27 crc kubenswrapper[5017]: I0129 08:35:27.316280 5017 scope.go:117] "RemoveContainer" containerID="29716ea24128bdf6ee46dfb15f1f1dfd3446061da8e9bec6ed837055b76aede7" Jan 29 08:35:27 crc kubenswrapper[5017]: E0129 08:35:27.317336 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:35:30 crc kubenswrapper[5017]: I0129 08:35:30.234015 5017 generic.go:334] "Generic (PLEG): container finished" podID="fa9d8b30-7463-4b9e-8d40-87b3091a5869" containerID="97b835dbd6fbc1ad9e3abd19f5e37be509e81332e2e065b2ecf49dd6e52c427f" exitCode=0 Jan 29 08:35:30 crc kubenswrapper[5017]: I0129 08:35:30.234128 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-cell1-kp2gq" event={"ID":"fa9d8b30-7463-4b9e-8d40-87b3091a5869","Type":"ContainerDied","Data":"97b835dbd6fbc1ad9e3abd19f5e37be509e81332e2e065b2ecf49dd6e52c427f"} Jan 29 08:35:31 crc kubenswrapper[5017]: I0129 08:35:31.810196 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-openstack-openstack-cell1-kp2gq" Jan 29 08:35:31 crc kubenswrapper[5017]: I0129 08:35:31.924177 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/fa9d8b30-7463-4b9e-8d40-87b3091a5869-ssh-key-openstack-cell1\") pod \"fa9d8b30-7463-4b9e-8d40-87b3091a5869\" (UID: \"fa9d8b30-7463-4b9e-8d40-87b3091a5869\") " Jan 29 08:35:31 crc kubenswrapper[5017]: I0129 08:35:31.924814 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/fa9d8b30-7463-4b9e-8d40-87b3091a5869-ceph\") pod \"fa9d8b30-7463-4b9e-8d40-87b3091a5869\" (UID: \"fa9d8b30-7463-4b9e-8d40-87b3091a5869\") " Jan 29 08:35:31 crc kubenswrapper[5017]: I0129 08:35:31.924988 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fa9d8b30-7463-4b9e-8d40-87b3091a5869-inventory\") pod \"fa9d8b30-7463-4b9e-8d40-87b3091a5869\" (UID: \"fa9d8b30-7463-4b9e-8d40-87b3091a5869\") " Jan 29 08:35:31 crc kubenswrapper[5017]: I0129 08:35:31.925128 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4tv2c\" (UniqueName: \"kubernetes.io/projected/fa9d8b30-7463-4b9e-8d40-87b3091a5869-kube-api-access-4tv2c\") pod \"fa9d8b30-7463-4b9e-8d40-87b3091a5869\" (UID: \"fa9d8b30-7463-4b9e-8d40-87b3091a5869\") " Jan 29 08:35:31 crc kubenswrapper[5017]: I0129 08:35:31.931722 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa9d8b30-7463-4b9e-8d40-87b3091a5869-ceph" (OuterVolumeSpecName: "ceph") pod "fa9d8b30-7463-4b9e-8d40-87b3091a5869" (UID: "fa9d8b30-7463-4b9e-8d40-87b3091a5869"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:35:31 crc kubenswrapper[5017]: I0129 08:35:31.932703 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa9d8b30-7463-4b9e-8d40-87b3091a5869-kube-api-access-4tv2c" (OuterVolumeSpecName: "kube-api-access-4tv2c") pod "fa9d8b30-7463-4b9e-8d40-87b3091a5869" (UID: "fa9d8b30-7463-4b9e-8d40-87b3091a5869"). InnerVolumeSpecName "kube-api-access-4tv2c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:35:31 crc kubenswrapper[5017]: I0129 08:35:31.959009 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa9d8b30-7463-4b9e-8d40-87b3091a5869-inventory" (OuterVolumeSpecName: "inventory") pod "fa9d8b30-7463-4b9e-8d40-87b3091a5869" (UID: "fa9d8b30-7463-4b9e-8d40-87b3091a5869"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:35:31 crc kubenswrapper[5017]: I0129 08:35:31.960029 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa9d8b30-7463-4b9e-8d40-87b3091a5869-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "fa9d8b30-7463-4b9e-8d40-87b3091a5869" (UID: "fa9d8b30-7463-4b9e-8d40-87b3091a5869"). InnerVolumeSpecName "ssh-key-openstack-cell1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:35:32 crc kubenswrapper[5017]: I0129 08:35:32.028239 5017 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/fa9d8b30-7463-4b9e-8d40-87b3091a5869-ceph\") on node \"crc\" DevicePath \"\"" Jan 29 08:35:32 crc kubenswrapper[5017]: I0129 08:35:32.028275 5017 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fa9d8b30-7463-4b9e-8d40-87b3091a5869-inventory\") on node \"crc\" DevicePath \"\"" Jan 29 08:35:32 crc kubenswrapper[5017]: I0129 08:35:32.028319 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4tv2c\" (UniqueName: \"kubernetes.io/projected/fa9d8b30-7463-4b9e-8d40-87b3091a5869-kube-api-access-4tv2c\") on node \"crc\" DevicePath \"\"" Jan 29 08:35:32 crc kubenswrapper[5017]: I0129 08:35:32.028329 5017 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/fa9d8b30-7463-4b9e-8d40-87b3091a5869-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 29 08:35:32 crc kubenswrapper[5017]: I0129 08:35:32.254996 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-cell1-kp2gq" event={"ID":"fa9d8b30-7463-4b9e-8d40-87b3091a5869","Type":"ContainerDied","Data":"8a49e254d2f7011a499c6f0d4f68c671c2cc658924c3ab56d9f1fc559085943c"} Jan 29 08:35:32 crc kubenswrapper[5017]: I0129 08:35:32.255343 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a49e254d2f7011a499c6f0d4f68c671c2cc658924c3ab56d9f1fc559085943c" Jan 29 08:35:32 crc kubenswrapper[5017]: I0129 08:35:32.255037 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-openstack-openstack-cell1-kp2gq" Jan 29 08:35:32 crc kubenswrapper[5017]: I0129 08:35:32.375885 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-openstack-openstack-cell1-xt4kz"] Jan 29 08:35:32 crc kubenswrapper[5017]: E0129 08:35:32.376498 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7b7af55-32cc-4352-b951-4d6589e83b87" containerName="extract-content" Jan 29 08:35:32 crc kubenswrapper[5017]: I0129 08:35:32.376522 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7b7af55-32cc-4352-b951-4d6589e83b87" containerName="extract-content" Jan 29 08:35:32 crc kubenswrapper[5017]: E0129 08:35:32.376551 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7b7af55-32cc-4352-b951-4d6589e83b87" containerName="registry-server" Jan 29 08:35:32 crc kubenswrapper[5017]: I0129 08:35:32.376558 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7b7af55-32cc-4352-b951-4d6589e83b87" containerName="registry-server" Jan 29 08:35:32 crc kubenswrapper[5017]: E0129 08:35:32.376570 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa9d8b30-7463-4b9e-8d40-87b3091a5869" containerName="configure-network-openstack-openstack-cell1" Jan 29 08:35:32 crc kubenswrapper[5017]: I0129 08:35:32.376580 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa9d8b30-7463-4b9e-8d40-87b3091a5869" containerName="configure-network-openstack-openstack-cell1" Jan 29 08:35:32 crc kubenswrapper[5017]: E0129 08:35:32.376595 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7b7af55-32cc-4352-b951-4d6589e83b87" containerName="extract-utilities" Jan 29 08:35:32 crc kubenswrapper[5017]: I0129 08:35:32.376601 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7b7af55-32cc-4352-b951-4d6589e83b87" containerName="extract-utilities" Jan 29 08:35:32 crc kubenswrapper[5017]: I0129 08:35:32.376844 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7b7af55-32cc-4352-b951-4d6589e83b87" containerName="registry-server" Jan 29 08:35:32 crc kubenswrapper[5017]: I0129 08:35:32.376881 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa9d8b30-7463-4b9e-8d40-87b3091a5869" containerName="configure-network-openstack-openstack-cell1" Jan 29 08:35:32 crc kubenswrapper[5017]: I0129 08:35:32.378028 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-openstack-openstack-cell1-xt4kz" Jan 29 08:35:32 crc kubenswrapper[5017]: I0129 08:35:32.380559 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-8fskm" Jan 29 08:35:32 crc kubenswrapper[5017]: I0129 08:35:32.381180 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Jan 29 08:35:32 crc kubenswrapper[5017]: I0129 08:35:32.382271 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Jan 29 08:35:32 crc kubenswrapper[5017]: I0129 08:35:32.392341 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 08:35:32 crc kubenswrapper[5017]: I0129 08:35:32.404126 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-openstack-openstack-cell1-xt4kz"] Jan 29 08:35:32 crc kubenswrapper[5017]: I0129 08:35:32.446425 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/81bdc4f3-baae-455e-83e0-3dc111b608d2-ceph\") pod \"validate-network-openstack-openstack-cell1-xt4kz\" (UID: \"81bdc4f3-baae-455e-83e0-3dc111b608d2\") " pod="openstack/validate-network-openstack-openstack-cell1-xt4kz" Jan 29 08:35:32 crc kubenswrapper[5017]: I0129 08:35:32.446488 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtq6l\" (UniqueName: \"kubernetes.io/projected/81bdc4f3-baae-455e-83e0-3dc111b608d2-kube-api-access-jtq6l\") pod \"validate-network-openstack-openstack-cell1-xt4kz\" (UID: \"81bdc4f3-baae-455e-83e0-3dc111b608d2\") " pod="openstack/validate-network-openstack-openstack-cell1-xt4kz" Jan 29 08:35:32 crc kubenswrapper[5017]: I0129 08:35:32.446645 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/81bdc4f3-baae-455e-83e0-3dc111b608d2-inventory\") pod \"validate-network-openstack-openstack-cell1-xt4kz\" (UID: \"81bdc4f3-baae-455e-83e0-3dc111b608d2\") " pod="openstack/validate-network-openstack-openstack-cell1-xt4kz" Jan 29 08:35:32 crc kubenswrapper[5017]: I0129 08:35:32.446691 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/81bdc4f3-baae-455e-83e0-3dc111b608d2-ssh-key-openstack-cell1\") pod \"validate-network-openstack-openstack-cell1-xt4kz\" (UID: \"81bdc4f3-baae-455e-83e0-3dc111b608d2\") " pod="openstack/validate-network-openstack-openstack-cell1-xt4kz" Jan 29 08:35:32 crc kubenswrapper[5017]: I0129 08:35:32.549939 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/81bdc4f3-baae-455e-83e0-3dc111b608d2-ceph\") pod \"validate-network-openstack-openstack-cell1-xt4kz\" (UID: \"81bdc4f3-baae-455e-83e0-3dc111b608d2\") " pod="openstack/validate-network-openstack-openstack-cell1-xt4kz" Jan 29 08:35:32 crc kubenswrapper[5017]: I0129 08:35:32.550054 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtq6l\" (UniqueName: \"kubernetes.io/projected/81bdc4f3-baae-455e-83e0-3dc111b608d2-kube-api-access-jtq6l\") pod \"validate-network-openstack-openstack-cell1-xt4kz\" (UID: \"81bdc4f3-baae-455e-83e0-3dc111b608d2\") " 
pod="openstack/validate-network-openstack-openstack-cell1-xt4kz" Jan 29 08:35:32 crc kubenswrapper[5017]: I0129 08:35:32.550160 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/81bdc4f3-baae-455e-83e0-3dc111b608d2-inventory\") pod \"validate-network-openstack-openstack-cell1-xt4kz\" (UID: \"81bdc4f3-baae-455e-83e0-3dc111b608d2\") " pod="openstack/validate-network-openstack-openstack-cell1-xt4kz" Jan 29 08:35:32 crc kubenswrapper[5017]: I0129 08:35:32.550210 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/81bdc4f3-baae-455e-83e0-3dc111b608d2-ssh-key-openstack-cell1\") pod \"validate-network-openstack-openstack-cell1-xt4kz\" (UID: \"81bdc4f3-baae-455e-83e0-3dc111b608d2\") " pod="openstack/validate-network-openstack-openstack-cell1-xt4kz" Jan 29 08:35:32 crc kubenswrapper[5017]: I0129 08:35:32.554763 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/81bdc4f3-baae-455e-83e0-3dc111b608d2-inventory\") pod \"validate-network-openstack-openstack-cell1-xt4kz\" (UID: \"81bdc4f3-baae-455e-83e0-3dc111b608d2\") " pod="openstack/validate-network-openstack-openstack-cell1-xt4kz" Jan 29 08:35:32 crc kubenswrapper[5017]: I0129 08:35:32.554865 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/81bdc4f3-baae-455e-83e0-3dc111b608d2-ceph\") pod \"validate-network-openstack-openstack-cell1-xt4kz\" (UID: \"81bdc4f3-baae-455e-83e0-3dc111b608d2\") " pod="openstack/validate-network-openstack-openstack-cell1-xt4kz" Jan 29 08:35:32 crc kubenswrapper[5017]: I0129 08:35:32.555136 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/81bdc4f3-baae-455e-83e0-3dc111b608d2-ssh-key-openstack-cell1\") pod \"validate-network-openstack-openstack-cell1-xt4kz\" (UID: \"81bdc4f3-baae-455e-83e0-3dc111b608d2\") " pod="openstack/validate-network-openstack-openstack-cell1-xt4kz" Jan 29 08:35:32 crc kubenswrapper[5017]: I0129 08:35:32.570565 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtq6l\" (UniqueName: \"kubernetes.io/projected/81bdc4f3-baae-455e-83e0-3dc111b608d2-kube-api-access-jtq6l\") pod \"validate-network-openstack-openstack-cell1-xt4kz\" (UID: \"81bdc4f3-baae-455e-83e0-3dc111b608d2\") " pod="openstack/validate-network-openstack-openstack-cell1-xt4kz" Jan 29 08:35:32 crc kubenswrapper[5017]: I0129 08:35:32.708903 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-openstack-openstack-cell1-xt4kz" Jan 29 08:35:33 crc kubenswrapper[5017]: I0129 08:35:33.258578 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-openstack-openstack-cell1-xt4kz"] Jan 29 08:35:34 crc kubenswrapper[5017]: I0129 08:35:34.279911 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-openstack-openstack-cell1-xt4kz" event={"ID":"81bdc4f3-baae-455e-83e0-3dc111b608d2","Type":"ContainerStarted","Data":"591c9df8e28c921e625ac99dde8638a02fa6ed89179023399a7f78bfc3cbbaa6"} Jan 29 08:35:34 crc kubenswrapper[5017]: I0129 08:35:34.280516 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-openstack-openstack-cell1-xt4kz" event={"ID":"81bdc4f3-baae-455e-83e0-3dc111b608d2","Type":"ContainerStarted","Data":"260a71fda1947627f5a0022192796ed668ce0f3b942a9889ccd0605302f3c1a8"} Jan 29 08:35:34 crc kubenswrapper[5017]: I0129 08:35:34.302743 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-openstack-openstack-cell1-xt4kz" podStartSLOduration=1.598011186 podStartE2EDuration="2.302719403s" podCreationTimestamp="2026-01-29 08:35:32 +0000 UTC" firstStartedPulling="2026-01-29 08:35:33.263814252 +0000 UTC m=+7219.638261862" lastFinishedPulling="2026-01-29 08:35:33.968522469 +0000 UTC m=+7220.342970079" observedRunningTime="2026-01-29 08:35:34.297657311 +0000 UTC m=+7220.672104921" watchObservedRunningTime="2026-01-29 08:35:34.302719403 +0000 UTC m=+7220.677167013" Jan 29 08:35:41 crc kubenswrapper[5017]: I0129 08:35:41.346792 5017 generic.go:334] "Generic (PLEG): container finished" podID="81bdc4f3-baae-455e-83e0-3dc111b608d2" containerID="591c9df8e28c921e625ac99dde8638a02fa6ed89179023399a7f78bfc3cbbaa6" exitCode=0 Jan 29 08:35:41 crc kubenswrapper[5017]: I0129 08:35:41.346889 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-openstack-openstack-cell1-xt4kz" event={"ID":"81bdc4f3-baae-455e-83e0-3dc111b608d2","Type":"ContainerDied","Data":"591c9df8e28c921e625ac99dde8638a02fa6ed89179023399a7f78bfc3cbbaa6"} Jan 29 08:35:42 crc kubenswrapper[5017]: I0129 08:35:42.317148 5017 scope.go:117] "RemoveContainer" containerID="29716ea24128bdf6ee46dfb15f1f1dfd3446061da8e9bec6ed837055b76aede7" Jan 29 08:35:42 crc kubenswrapper[5017]: E0129 08:35:42.317649 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:35:42 crc kubenswrapper[5017]: I0129 08:35:42.807726 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-openstack-openstack-cell1-xt4kz" Jan 29 08:35:42 crc kubenswrapper[5017]: I0129 08:35:42.903254 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/81bdc4f3-baae-455e-83e0-3dc111b608d2-ceph\") pod \"81bdc4f3-baae-455e-83e0-3dc111b608d2\" (UID: \"81bdc4f3-baae-455e-83e0-3dc111b608d2\") " Jan 29 08:35:42 crc kubenswrapper[5017]: I0129 08:35:42.903755 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/81bdc4f3-baae-455e-83e0-3dc111b608d2-ssh-key-openstack-cell1\") pod \"81bdc4f3-baae-455e-83e0-3dc111b608d2\" (UID: \"81bdc4f3-baae-455e-83e0-3dc111b608d2\") " Jan 29 08:35:42 crc kubenswrapper[5017]: I0129 08:35:42.906017 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/81bdc4f3-baae-455e-83e0-3dc111b608d2-inventory\") pod \"81bdc4f3-baae-455e-83e0-3dc111b608d2\" (UID: \"81bdc4f3-baae-455e-83e0-3dc111b608d2\") " Jan 29 08:35:42 crc kubenswrapper[5017]: I0129 08:35:42.906420 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jtq6l\" (UniqueName: \"kubernetes.io/projected/81bdc4f3-baae-455e-83e0-3dc111b608d2-kube-api-access-jtq6l\") pod \"81bdc4f3-baae-455e-83e0-3dc111b608d2\" (UID: \"81bdc4f3-baae-455e-83e0-3dc111b608d2\") " Jan 29 08:35:42 crc kubenswrapper[5017]: I0129 08:35:42.911341 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81bdc4f3-baae-455e-83e0-3dc111b608d2-ceph" (OuterVolumeSpecName: "ceph") pod "81bdc4f3-baae-455e-83e0-3dc111b608d2" (UID: "81bdc4f3-baae-455e-83e0-3dc111b608d2"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:35:42 crc kubenswrapper[5017]: I0129 08:35:42.911907 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81bdc4f3-baae-455e-83e0-3dc111b608d2-kube-api-access-jtq6l" (OuterVolumeSpecName: "kube-api-access-jtq6l") pod "81bdc4f3-baae-455e-83e0-3dc111b608d2" (UID: "81bdc4f3-baae-455e-83e0-3dc111b608d2"). InnerVolumeSpecName "kube-api-access-jtq6l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:35:42 crc kubenswrapper[5017]: I0129 08:35:42.938903 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81bdc4f3-baae-455e-83e0-3dc111b608d2-inventory" (OuterVolumeSpecName: "inventory") pod "81bdc4f3-baae-455e-83e0-3dc111b608d2" (UID: "81bdc4f3-baae-455e-83e0-3dc111b608d2"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:35:42 crc kubenswrapper[5017]: I0129 08:35:42.951359 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81bdc4f3-baae-455e-83e0-3dc111b608d2-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "81bdc4f3-baae-455e-83e0-3dc111b608d2" (UID: "81bdc4f3-baae-455e-83e0-3dc111b608d2"). InnerVolumeSpecName "ssh-key-openstack-cell1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:35:43 crc kubenswrapper[5017]: I0129 08:35:43.010578 5017 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/81bdc4f3-baae-455e-83e0-3dc111b608d2-inventory\") on node \"crc\" DevicePath \"\"" Jan 29 08:35:43 crc kubenswrapper[5017]: I0129 08:35:43.010626 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jtq6l\" (UniqueName: \"kubernetes.io/projected/81bdc4f3-baae-455e-83e0-3dc111b608d2-kube-api-access-jtq6l\") on node \"crc\" DevicePath \"\"" Jan 29 08:35:43 crc kubenswrapper[5017]: I0129 08:35:43.010641 5017 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/81bdc4f3-baae-455e-83e0-3dc111b608d2-ceph\") on node \"crc\" DevicePath \"\"" Jan 29 08:35:43 crc kubenswrapper[5017]: I0129 08:35:43.010653 5017 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/81bdc4f3-baae-455e-83e0-3dc111b608d2-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 29 08:35:43 crc kubenswrapper[5017]: I0129 08:35:43.368000 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-openstack-openstack-cell1-xt4kz" event={"ID":"81bdc4f3-baae-455e-83e0-3dc111b608d2","Type":"ContainerDied","Data":"260a71fda1947627f5a0022192796ed668ce0f3b942a9889ccd0605302f3c1a8"} Jan 29 08:35:43 crc kubenswrapper[5017]: I0129 08:35:43.368049 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-openstack-openstack-cell1-xt4kz" Jan 29 08:35:43 crc kubenswrapper[5017]: I0129 08:35:43.368059 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="260a71fda1947627f5a0022192796ed668ce0f3b942a9889ccd0605302f3c1a8" Jan 29 08:35:43 crc kubenswrapper[5017]: I0129 08:35:43.438075 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-openstack-openstack-cell1-jnlzx"] Jan 29 08:35:43 crc kubenswrapper[5017]: E0129 08:35:43.438623 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81bdc4f3-baae-455e-83e0-3dc111b608d2" containerName="validate-network-openstack-openstack-cell1" Jan 29 08:35:43 crc kubenswrapper[5017]: I0129 08:35:43.438644 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="81bdc4f3-baae-455e-83e0-3dc111b608d2" containerName="validate-network-openstack-openstack-cell1" Jan 29 08:35:43 crc kubenswrapper[5017]: I0129 08:35:43.438866 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="81bdc4f3-baae-455e-83e0-3dc111b608d2" containerName="validate-network-openstack-openstack-cell1" Jan 29 08:35:43 crc kubenswrapper[5017]: I0129 08:35:43.439738 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-openstack-openstack-cell1-jnlzx" Jan 29 08:35:43 crc kubenswrapper[5017]: I0129 08:35:43.443140 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Jan 29 08:35:43 crc kubenswrapper[5017]: I0129 08:35:43.443383 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-8fskm" Jan 29 08:35:43 crc kubenswrapper[5017]: I0129 08:35:43.443447 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 08:35:43 crc kubenswrapper[5017]: I0129 08:35:43.454680 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Jan 29 08:35:43 crc kubenswrapper[5017]: I0129 08:35:43.463104 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-openstack-openstack-cell1-jnlzx"] Jan 29 08:35:43 crc kubenswrapper[5017]: I0129 08:35:43.522196 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3d0b0e7e-b6ae-444d-bfae-8f7cf997e2c7-inventory\") pod \"install-os-openstack-openstack-cell1-jnlzx\" (UID: \"3d0b0e7e-b6ae-444d-bfae-8f7cf997e2c7\") " pod="openstack/install-os-openstack-openstack-cell1-jnlzx" Jan 29 08:35:43 crc kubenswrapper[5017]: I0129 08:35:43.522274 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3d0b0e7e-b6ae-444d-bfae-8f7cf997e2c7-ceph\") pod \"install-os-openstack-openstack-cell1-jnlzx\" (UID: \"3d0b0e7e-b6ae-444d-bfae-8f7cf997e2c7\") " pod="openstack/install-os-openstack-openstack-cell1-jnlzx" Jan 29 08:35:43 crc kubenswrapper[5017]: I0129 08:35:43.522319 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cr46g\" (UniqueName: \"kubernetes.io/projected/3d0b0e7e-b6ae-444d-bfae-8f7cf997e2c7-kube-api-access-cr46g\") pod \"install-os-openstack-openstack-cell1-jnlzx\" (UID: \"3d0b0e7e-b6ae-444d-bfae-8f7cf997e2c7\") " pod="openstack/install-os-openstack-openstack-cell1-jnlzx" Jan 29 08:35:43 crc kubenswrapper[5017]: I0129 08:35:43.522434 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/3d0b0e7e-b6ae-444d-bfae-8f7cf997e2c7-ssh-key-openstack-cell1\") pod \"install-os-openstack-openstack-cell1-jnlzx\" (UID: \"3d0b0e7e-b6ae-444d-bfae-8f7cf997e2c7\") " pod="openstack/install-os-openstack-openstack-cell1-jnlzx" Jan 29 08:35:43 crc kubenswrapper[5017]: I0129 08:35:43.625389 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3d0b0e7e-b6ae-444d-bfae-8f7cf997e2c7-inventory\") pod \"install-os-openstack-openstack-cell1-jnlzx\" (UID: \"3d0b0e7e-b6ae-444d-bfae-8f7cf997e2c7\") " pod="openstack/install-os-openstack-openstack-cell1-jnlzx" Jan 29 08:35:43 crc kubenswrapper[5017]: I0129 08:35:43.625475 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3d0b0e7e-b6ae-444d-bfae-8f7cf997e2c7-ceph\") pod \"install-os-openstack-openstack-cell1-jnlzx\" (UID: \"3d0b0e7e-b6ae-444d-bfae-8f7cf997e2c7\") " pod="openstack/install-os-openstack-openstack-cell1-jnlzx" Jan 29 08:35:43 crc kubenswrapper[5017]: I0129 
08:35:43.625510 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cr46g\" (UniqueName: \"kubernetes.io/projected/3d0b0e7e-b6ae-444d-bfae-8f7cf997e2c7-kube-api-access-cr46g\") pod \"install-os-openstack-openstack-cell1-jnlzx\" (UID: \"3d0b0e7e-b6ae-444d-bfae-8f7cf997e2c7\") " pod="openstack/install-os-openstack-openstack-cell1-jnlzx" Jan 29 08:35:43 crc kubenswrapper[5017]: I0129 08:35:43.625578 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/3d0b0e7e-b6ae-444d-bfae-8f7cf997e2c7-ssh-key-openstack-cell1\") pod \"install-os-openstack-openstack-cell1-jnlzx\" (UID: \"3d0b0e7e-b6ae-444d-bfae-8f7cf997e2c7\") " pod="openstack/install-os-openstack-openstack-cell1-jnlzx" Jan 29 08:35:43 crc kubenswrapper[5017]: I0129 08:35:43.631828 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/3d0b0e7e-b6ae-444d-bfae-8f7cf997e2c7-ssh-key-openstack-cell1\") pod \"install-os-openstack-openstack-cell1-jnlzx\" (UID: \"3d0b0e7e-b6ae-444d-bfae-8f7cf997e2c7\") " pod="openstack/install-os-openstack-openstack-cell1-jnlzx" Jan 29 08:35:43 crc kubenswrapper[5017]: I0129 08:35:43.642010 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3d0b0e7e-b6ae-444d-bfae-8f7cf997e2c7-ceph\") pod \"install-os-openstack-openstack-cell1-jnlzx\" (UID: \"3d0b0e7e-b6ae-444d-bfae-8f7cf997e2c7\") " pod="openstack/install-os-openstack-openstack-cell1-jnlzx" Jan 29 08:35:43 crc kubenswrapper[5017]: I0129 08:35:43.646838 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cr46g\" (UniqueName: \"kubernetes.io/projected/3d0b0e7e-b6ae-444d-bfae-8f7cf997e2c7-kube-api-access-cr46g\") pod \"install-os-openstack-openstack-cell1-jnlzx\" (UID: \"3d0b0e7e-b6ae-444d-bfae-8f7cf997e2c7\") " pod="openstack/install-os-openstack-openstack-cell1-jnlzx" Jan 29 08:35:43 crc kubenswrapper[5017]: I0129 08:35:43.648017 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3d0b0e7e-b6ae-444d-bfae-8f7cf997e2c7-inventory\") pod \"install-os-openstack-openstack-cell1-jnlzx\" (UID: \"3d0b0e7e-b6ae-444d-bfae-8f7cf997e2c7\") " pod="openstack/install-os-openstack-openstack-cell1-jnlzx" Jan 29 08:35:43 crc kubenswrapper[5017]: I0129 08:35:43.764316 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-openstack-openstack-cell1-jnlzx" Jan 29 08:35:44 crc kubenswrapper[5017]: I0129 08:35:44.355253 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-openstack-openstack-cell1-jnlzx"] Jan 29 08:35:44 crc kubenswrapper[5017]: I0129 08:35:44.380629 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-cell1-jnlzx" event={"ID":"3d0b0e7e-b6ae-444d-bfae-8f7cf997e2c7","Type":"ContainerStarted","Data":"4e202ff20cd4aa4ad3841781c3ab303f6676784bbc12cbd5506b039bbda0ef24"} Jan 29 08:35:46 crc kubenswrapper[5017]: I0129 08:35:46.411775 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-cell1-jnlzx" event={"ID":"3d0b0e7e-b6ae-444d-bfae-8f7cf997e2c7","Type":"ContainerStarted","Data":"1d5b317e278a73a1acebe1ac70ce801aaab05f5dac3074b67a831f0b44cb2638"} Jan 29 08:35:46 crc kubenswrapper[5017]: I0129 08:35:46.463881 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-openstack-openstack-cell1-jnlzx" podStartSLOduration=2.641409221 podStartE2EDuration="3.463857041s" podCreationTimestamp="2026-01-29 08:35:43 +0000 UTC" firstStartedPulling="2026-01-29 08:35:44.351250959 +0000 UTC m=+7230.725698569" lastFinishedPulling="2026-01-29 08:35:45.173698769 +0000 UTC m=+7231.548146389" observedRunningTime="2026-01-29 08:35:46.459608918 +0000 UTC m=+7232.834056528" watchObservedRunningTime="2026-01-29 08:35:46.463857041 +0000 UTC m=+7232.838304651" Jan 29 08:35:57 crc kubenswrapper[5017]: I0129 08:35:57.316213 5017 scope.go:117] "RemoveContainer" containerID="29716ea24128bdf6ee46dfb15f1f1dfd3446061da8e9bec6ed837055b76aede7" Jan 29 08:35:57 crc kubenswrapper[5017]: E0129 08:35:57.317211 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:36:08 crc kubenswrapper[5017]: I0129 08:36:08.316120 5017 scope.go:117] "RemoveContainer" containerID="29716ea24128bdf6ee46dfb15f1f1dfd3446061da8e9bec6ed837055b76aede7" Jan 29 08:36:08 crc kubenswrapper[5017]: E0129 08:36:08.317265 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:36:20 crc kubenswrapper[5017]: I0129 08:36:20.317158 5017 scope.go:117] "RemoveContainer" containerID="29716ea24128bdf6ee46dfb15f1f1dfd3446061da8e9bec6ed837055b76aede7" Jan 29 08:36:20 crc kubenswrapper[5017]: E0129 08:36:20.318464 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" 
podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:36:32 crc kubenswrapper[5017]: I0129 08:36:32.317267 5017 scope.go:117] "RemoveContainer" containerID="29716ea24128bdf6ee46dfb15f1f1dfd3446061da8e9bec6ed837055b76aede7" Jan 29 08:36:32 crc kubenswrapper[5017]: E0129 08:36:32.318378 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:36:32 crc kubenswrapper[5017]: I0129 08:36:32.974811 5017 generic.go:334] "Generic (PLEG): container finished" podID="3d0b0e7e-b6ae-444d-bfae-8f7cf997e2c7" containerID="1d5b317e278a73a1acebe1ac70ce801aaab05f5dac3074b67a831f0b44cb2638" exitCode=0 Jan 29 08:36:32 crc kubenswrapper[5017]: I0129 08:36:32.974895 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-cell1-jnlzx" event={"ID":"3d0b0e7e-b6ae-444d-bfae-8f7cf997e2c7","Type":"ContainerDied","Data":"1d5b317e278a73a1acebe1ac70ce801aaab05f5dac3074b67a831f0b44cb2638"} Jan 29 08:36:34 crc kubenswrapper[5017]: I0129 08:36:34.476774 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-openstack-openstack-cell1-jnlzx" Jan 29 08:36:34 crc kubenswrapper[5017]: I0129 08:36:34.538332 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3d0b0e7e-b6ae-444d-bfae-8f7cf997e2c7-ceph\") pod \"3d0b0e7e-b6ae-444d-bfae-8f7cf997e2c7\" (UID: \"3d0b0e7e-b6ae-444d-bfae-8f7cf997e2c7\") " Jan 29 08:36:34 crc kubenswrapper[5017]: I0129 08:36:34.538477 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/3d0b0e7e-b6ae-444d-bfae-8f7cf997e2c7-ssh-key-openstack-cell1\") pod \"3d0b0e7e-b6ae-444d-bfae-8f7cf997e2c7\" (UID: \"3d0b0e7e-b6ae-444d-bfae-8f7cf997e2c7\") " Jan 29 08:36:34 crc kubenswrapper[5017]: I0129 08:36:34.538550 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3d0b0e7e-b6ae-444d-bfae-8f7cf997e2c7-inventory\") pod \"3d0b0e7e-b6ae-444d-bfae-8f7cf997e2c7\" (UID: \"3d0b0e7e-b6ae-444d-bfae-8f7cf997e2c7\") " Jan 29 08:36:34 crc kubenswrapper[5017]: I0129 08:36:34.538611 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cr46g\" (UniqueName: \"kubernetes.io/projected/3d0b0e7e-b6ae-444d-bfae-8f7cf997e2c7-kube-api-access-cr46g\") pod \"3d0b0e7e-b6ae-444d-bfae-8f7cf997e2c7\" (UID: \"3d0b0e7e-b6ae-444d-bfae-8f7cf997e2c7\") " Jan 29 08:36:34 crc kubenswrapper[5017]: I0129 08:36:34.546018 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d0b0e7e-b6ae-444d-bfae-8f7cf997e2c7-ceph" (OuterVolumeSpecName: "ceph") pod "3d0b0e7e-b6ae-444d-bfae-8f7cf997e2c7" (UID: "3d0b0e7e-b6ae-444d-bfae-8f7cf997e2c7"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:36:34 crc kubenswrapper[5017]: I0129 08:36:34.546217 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d0b0e7e-b6ae-444d-bfae-8f7cf997e2c7-kube-api-access-cr46g" (OuterVolumeSpecName: "kube-api-access-cr46g") pod "3d0b0e7e-b6ae-444d-bfae-8f7cf997e2c7" (UID: "3d0b0e7e-b6ae-444d-bfae-8f7cf997e2c7"). InnerVolumeSpecName "kube-api-access-cr46g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:36:34 crc kubenswrapper[5017]: I0129 08:36:34.572203 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d0b0e7e-b6ae-444d-bfae-8f7cf997e2c7-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "3d0b0e7e-b6ae-444d-bfae-8f7cf997e2c7" (UID: "3d0b0e7e-b6ae-444d-bfae-8f7cf997e2c7"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:36:34 crc kubenswrapper[5017]: I0129 08:36:34.581146 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d0b0e7e-b6ae-444d-bfae-8f7cf997e2c7-inventory" (OuterVolumeSpecName: "inventory") pod "3d0b0e7e-b6ae-444d-bfae-8f7cf997e2c7" (UID: "3d0b0e7e-b6ae-444d-bfae-8f7cf997e2c7"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:36:34 crc kubenswrapper[5017]: I0129 08:36:34.643008 5017 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3d0b0e7e-b6ae-444d-bfae-8f7cf997e2c7-ceph\") on node \"crc\" DevicePath \"\"" Jan 29 08:36:34 crc kubenswrapper[5017]: I0129 08:36:34.643055 5017 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/3d0b0e7e-b6ae-444d-bfae-8f7cf997e2c7-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 29 08:36:34 crc kubenswrapper[5017]: I0129 08:36:34.643072 5017 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3d0b0e7e-b6ae-444d-bfae-8f7cf997e2c7-inventory\") on node \"crc\" DevicePath \"\"" Jan 29 08:36:34 crc kubenswrapper[5017]: I0129 08:36:34.643086 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cr46g\" (UniqueName: \"kubernetes.io/projected/3d0b0e7e-b6ae-444d-bfae-8f7cf997e2c7-kube-api-access-cr46g\") on node \"crc\" DevicePath \"\"" Jan 29 08:36:35 crc kubenswrapper[5017]: I0129 08:36:35.001088 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-cell1-jnlzx" event={"ID":"3d0b0e7e-b6ae-444d-bfae-8f7cf997e2c7","Type":"ContainerDied","Data":"4e202ff20cd4aa4ad3841781c3ab303f6676784bbc12cbd5506b039bbda0ef24"} Jan 29 08:36:35 crc kubenswrapper[5017]: I0129 08:36:35.001148 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4e202ff20cd4aa4ad3841781c3ab303f6676784bbc12cbd5506b039bbda0ef24" Jan 29 08:36:35 crc kubenswrapper[5017]: I0129 08:36:35.001153 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-openstack-openstack-cell1-jnlzx" Jan 29 08:36:35 crc kubenswrapper[5017]: I0129 08:36:35.096127 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-openstack-openstack-cell1-wws2f"] Jan 29 08:36:35 crc kubenswrapper[5017]: E0129 08:36:35.096721 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d0b0e7e-b6ae-444d-bfae-8f7cf997e2c7" containerName="install-os-openstack-openstack-cell1" Jan 29 08:36:35 crc kubenswrapper[5017]: I0129 08:36:35.096745 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d0b0e7e-b6ae-444d-bfae-8f7cf997e2c7" containerName="install-os-openstack-openstack-cell1" Jan 29 08:36:35 crc kubenswrapper[5017]: I0129 08:36:35.097065 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d0b0e7e-b6ae-444d-bfae-8f7cf997e2c7" containerName="install-os-openstack-openstack-cell1" Jan 29 08:36:35 crc kubenswrapper[5017]: I0129 08:36:35.097997 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-wws2f" Jan 29 08:36:35 crc kubenswrapper[5017]: I0129 08:36:35.101367 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Jan 29 08:36:35 crc kubenswrapper[5017]: I0129 08:36:35.101473 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-8fskm" Jan 29 08:36:35 crc kubenswrapper[5017]: I0129 08:36:35.102293 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 08:36:35 crc kubenswrapper[5017]: I0129 08:36:35.104374 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Jan 29 08:36:35 crc kubenswrapper[5017]: I0129 08:36:35.108918 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-openstack-openstack-cell1-wws2f"] Jan 29 08:36:35 crc kubenswrapper[5017]: I0129 08:36:35.154552 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqvzg\" (UniqueName: \"kubernetes.io/projected/abf45f94-a2ef-418f-a292-edee247b11c2-kube-api-access-mqvzg\") pod \"configure-os-openstack-openstack-cell1-wws2f\" (UID: \"abf45f94-a2ef-418f-a292-edee247b11c2\") " pod="openstack/configure-os-openstack-openstack-cell1-wws2f" Jan 29 08:36:35 crc kubenswrapper[5017]: I0129 08:36:35.154797 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/abf45f94-a2ef-418f-a292-edee247b11c2-inventory\") pod \"configure-os-openstack-openstack-cell1-wws2f\" (UID: \"abf45f94-a2ef-418f-a292-edee247b11c2\") " pod="openstack/configure-os-openstack-openstack-cell1-wws2f" Jan 29 08:36:35 crc kubenswrapper[5017]: I0129 08:36:35.155271 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/abf45f94-a2ef-418f-a292-edee247b11c2-ceph\") pod \"configure-os-openstack-openstack-cell1-wws2f\" (UID: \"abf45f94-a2ef-418f-a292-edee247b11c2\") " pod="openstack/configure-os-openstack-openstack-cell1-wws2f" Jan 29 08:36:35 crc kubenswrapper[5017]: I0129 08:36:35.155338 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: 
\"kubernetes.io/secret/abf45f94-a2ef-418f-a292-edee247b11c2-ssh-key-openstack-cell1\") pod \"configure-os-openstack-openstack-cell1-wws2f\" (UID: \"abf45f94-a2ef-418f-a292-edee247b11c2\") " pod="openstack/configure-os-openstack-openstack-cell1-wws2f" Jan 29 08:36:35 crc kubenswrapper[5017]: I0129 08:36:35.257251 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/abf45f94-a2ef-418f-a292-edee247b11c2-ceph\") pod \"configure-os-openstack-openstack-cell1-wws2f\" (UID: \"abf45f94-a2ef-418f-a292-edee247b11c2\") " pod="openstack/configure-os-openstack-openstack-cell1-wws2f" Jan 29 08:36:35 crc kubenswrapper[5017]: I0129 08:36:35.257321 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/abf45f94-a2ef-418f-a292-edee247b11c2-ssh-key-openstack-cell1\") pod \"configure-os-openstack-openstack-cell1-wws2f\" (UID: \"abf45f94-a2ef-418f-a292-edee247b11c2\") " pod="openstack/configure-os-openstack-openstack-cell1-wws2f" Jan 29 08:36:35 crc kubenswrapper[5017]: I0129 08:36:35.257433 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqvzg\" (UniqueName: \"kubernetes.io/projected/abf45f94-a2ef-418f-a292-edee247b11c2-kube-api-access-mqvzg\") pod \"configure-os-openstack-openstack-cell1-wws2f\" (UID: \"abf45f94-a2ef-418f-a292-edee247b11c2\") " pod="openstack/configure-os-openstack-openstack-cell1-wws2f" Jan 29 08:36:35 crc kubenswrapper[5017]: I0129 08:36:35.257546 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/abf45f94-a2ef-418f-a292-edee247b11c2-inventory\") pod \"configure-os-openstack-openstack-cell1-wws2f\" (UID: \"abf45f94-a2ef-418f-a292-edee247b11c2\") " pod="openstack/configure-os-openstack-openstack-cell1-wws2f" Jan 29 08:36:35 crc kubenswrapper[5017]: I0129 08:36:35.261591 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/abf45f94-a2ef-418f-a292-edee247b11c2-ssh-key-openstack-cell1\") pod \"configure-os-openstack-openstack-cell1-wws2f\" (UID: \"abf45f94-a2ef-418f-a292-edee247b11c2\") " pod="openstack/configure-os-openstack-openstack-cell1-wws2f" Jan 29 08:36:35 crc kubenswrapper[5017]: I0129 08:36:35.261627 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/abf45f94-a2ef-418f-a292-edee247b11c2-ceph\") pod \"configure-os-openstack-openstack-cell1-wws2f\" (UID: \"abf45f94-a2ef-418f-a292-edee247b11c2\") " pod="openstack/configure-os-openstack-openstack-cell1-wws2f" Jan 29 08:36:35 crc kubenswrapper[5017]: I0129 08:36:35.262312 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/abf45f94-a2ef-418f-a292-edee247b11c2-inventory\") pod \"configure-os-openstack-openstack-cell1-wws2f\" (UID: \"abf45f94-a2ef-418f-a292-edee247b11c2\") " pod="openstack/configure-os-openstack-openstack-cell1-wws2f" Jan 29 08:36:35 crc kubenswrapper[5017]: I0129 08:36:35.280271 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqvzg\" (UniqueName: \"kubernetes.io/projected/abf45f94-a2ef-418f-a292-edee247b11c2-kube-api-access-mqvzg\") pod \"configure-os-openstack-openstack-cell1-wws2f\" (UID: \"abf45f94-a2ef-418f-a292-edee247b11c2\") " 
pod="openstack/configure-os-openstack-openstack-cell1-wws2f" Jan 29 08:36:35 crc kubenswrapper[5017]: I0129 08:36:35.417317 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-wws2f" Jan 29 08:36:36 crc kubenswrapper[5017]: I0129 08:36:36.017875 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-openstack-openstack-cell1-wws2f"] Jan 29 08:36:37 crc kubenswrapper[5017]: I0129 08:36:37.023845 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-wws2f" event={"ID":"abf45f94-a2ef-418f-a292-edee247b11c2","Type":"ContainerStarted","Data":"26370816c14db6f990ebb4ee23423412491402715e3a6d64652a5cde64c166cd"} Jan 29 08:36:37 crc kubenswrapper[5017]: I0129 08:36:37.024732 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-wws2f" event={"ID":"abf45f94-a2ef-418f-a292-edee247b11c2","Type":"ContainerStarted","Data":"7ce4218b1d5b35d79202f5aa9bc523cc11d9ce26f8f56cf76efb5feb1a11c04f"} Jan 29 08:36:37 crc kubenswrapper[5017]: I0129 08:36:37.042363 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-openstack-openstack-cell1-wws2f" podStartSLOduration=1.359642144 podStartE2EDuration="2.042336254s" podCreationTimestamp="2026-01-29 08:36:35 +0000 UTC" firstStartedPulling="2026-01-29 08:36:36.02793637 +0000 UTC m=+7282.402383980" lastFinishedPulling="2026-01-29 08:36:36.71063048 +0000 UTC m=+7283.085078090" observedRunningTime="2026-01-29 08:36:37.041807892 +0000 UTC m=+7283.416255512" watchObservedRunningTime="2026-01-29 08:36:37.042336254 +0000 UTC m=+7283.416783864" Jan 29 08:36:44 crc kubenswrapper[5017]: I0129 08:36:44.323587 5017 scope.go:117] "RemoveContainer" containerID="29716ea24128bdf6ee46dfb15f1f1dfd3446061da8e9bec6ed837055b76aede7" Jan 29 08:36:44 crc kubenswrapper[5017]: E0129 08:36:44.325021 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:36:55 crc kubenswrapper[5017]: I0129 08:36:55.317178 5017 scope.go:117] "RemoveContainer" containerID="29716ea24128bdf6ee46dfb15f1f1dfd3446061da8e9bec6ed837055b76aede7" Jan 29 08:36:55 crc kubenswrapper[5017]: E0129 08:36:55.318535 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:37:08 crc kubenswrapper[5017]: I0129 08:37:08.316546 5017 scope.go:117] "RemoveContainer" containerID="29716ea24128bdf6ee46dfb15f1f1dfd3446061da8e9bec6ed837055b76aede7" Jan 29 08:37:08 crc kubenswrapper[5017]: E0129 08:37:08.317819 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:37:20 crc kubenswrapper[5017]: I0129 08:37:20.316886 5017 scope.go:117] "RemoveContainer" containerID="29716ea24128bdf6ee46dfb15f1f1dfd3446061da8e9bec6ed837055b76aede7" Jan 29 08:37:20 crc kubenswrapper[5017]: E0129 08:37:20.317867 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:37:22 crc kubenswrapper[5017]: I0129 08:37:22.474304 5017 generic.go:334] "Generic (PLEG): container finished" podID="abf45f94-a2ef-418f-a292-edee247b11c2" containerID="26370816c14db6f990ebb4ee23423412491402715e3a6d64652a5cde64c166cd" exitCode=0 Jan 29 08:37:22 crc kubenswrapper[5017]: I0129 08:37:22.474403 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-wws2f" event={"ID":"abf45f94-a2ef-418f-a292-edee247b11c2","Type":"ContainerDied","Data":"26370816c14db6f990ebb4ee23423412491402715e3a6d64652a5cde64c166cd"} Jan 29 08:37:23 crc kubenswrapper[5017]: I0129 08:37:23.957017 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-wws2f" Jan 29 08:37:24 crc kubenswrapper[5017]: I0129 08:37:24.137849 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mqvzg\" (UniqueName: \"kubernetes.io/projected/abf45f94-a2ef-418f-a292-edee247b11c2-kube-api-access-mqvzg\") pod \"abf45f94-a2ef-418f-a292-edee247b11c2\" (UID: \"abf45f94-a2ef-418f-a292-edee247b11c2\") " Jan 29 08:37:24 crc kubenswrapper[5017]: I0129 08:37:24.138438 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/abf45f94-a2ef-418f-a292-edee247b11c2-ceph\") pod \"abf45f94-a2ef-418f-a292-edee247b11c2\" (UID: \"abf45f94-a2ef-418f-a292-edee247b11c2\") " Jan 29 08:37:24 crc kubenswrapper[5017]: I0129 08:37:24.138521 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/abf45f94-a2ef-418f-a292-edee247b11c2-inventory\") pod \"abf45f94-a2ef-418f-a292-edee247b11c2\" (UID: \"abf45f94-a2ef-418f-a292-edee247b11c2\") " Jan 29 08:37:24 crc kubenswrapper[5017]: I0129 08:37:24.138558 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/abf45f94-a2ef-418f-a292-edee247b11c2-ssh-key-openstack-cell1\") pod \"abf45f94-a2ef-418f-a292-edee247b11c2\" (UID: \"abf45f94-a2ef-418f-a292-edee247b11c2\") " Jan 29 08:37:24 crc kubenswrapper[5017]: I0129 08:37:24.145359 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abf45f94-a2ef-418f-a292-edee247b11c2-ceph" (OuterVolumeSpecName: "ceph") pod "abf45f94-a2ef-418f-a292-edee247b11c2" (UID: "abf45f94-a2ef-418f-a292-edee247b11c2"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:37:24 crc kubenswrapper[5017]: I0129 08:37:24.145525 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abf45f94-a2ef-418f-a292-edee247b11c2-kube-api-access-mqvzg" (OuterVolumeSpecName: "kube-api-access-mqvzg") pod "abf45f94-a2ef-418f-a292-edee247b11c2" (UID: "abf45f94-a2ef-418f-a292-edee247b11c2"). InnerVolumeSpecName "kube-api-access-mqvzg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:37:24 crc kubenswrapper[5017]: I0129 08:37:24.171070 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abf45f94-a2ef-418f-a292-edee247b11c2-inventory" (OuterVolumeSpecName: "inventory") pod "abf45f94-a2ef-418f-a292-edee247b11c2" (UID: "abf45f94-a2ef-418f-a292-edee247b11c2"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:37:24 crc kubenswrapper[5017]: I0129 08:37:24.171737 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abf45f94-a2ef-418f-a292-edee247b11c2-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "abf45f94-a2ef-418f-a292-edee247b11c2" (UID: "abf45f94-a2ef-418f-a292-edee247b11c2"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:37:24 crc kubenswrapper[5017]: I0129 08:37:24.241425 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mqvzg\" (UniqueName: \"kubernetes.io/projected/abf45f94-a2ef-418f-a292-edee247b11c2-kube-api-access-mqvzg\") on node \"crc\" DevicePath \"\"" Jan 29 08:37:24 crc kubenswrapper[5017]: I0129 08:37:24.241589 5017 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/abf45f94-a2ef-418f-a292-edee247b11c2-ceph\") on node \"crc\" DevicePath \"\"" Jan 29 08:37:24 crc kubenswrapper[5017]: I0129 08:37:24.241673 5017 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/abf45f94-a2ef-418f-a292-edee247b11c2-inventory\") on node \"crc\" DevicePath \"\"" Jan 29 08:37:24 crc kubenswrapper[5017]: I0129 08:37:24.241775 5017 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/abf45f94-a2ef-418f-a292-edee247b11c2-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 29 08:37:24 crc kubenswrapper[5017]: I0129 08:37:24.495334 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-wws2f" event={"ID":"abf45f94-a2ef-418f-a292-edee247b11c2","Type":"ContainerDied","Data":"7ce4218b1d5b35d79202f5aa9bc523cc11d9ce26f8f56cf76efb5feb1a11c04f"} Jan 29 08:37:24 crc kubenswrapper[5017]: I0129 08:37:24.495386 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7ce4218b1d5b35d79202f5aa9bc523cc11d9ce26f8f56cf76efb5feb1a11c04f" Jan 29 08:37:24 crc kubenswrapper[5017]: I0129 08:37:24.495405 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-wws2f" Jan 29 08:37:24 crc kubenswrapper[5017]: I0129 08:37:24.651394 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-openstack-9fpln"] Jan 29 08:37:24 crc kubenswrapper[5017]: E0129 08:37:24.652105 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abf45f94-a2ef-418f-a292-edee247b11c2" containerName="configure-os-openstack-openstack-cell1" Jan 29 08:37:24 crc kubenswrapper[5017]: I0129 08:37:24.652133 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="abf45f94-a2ef-418f-a292-edee247b11c2" containerName="configure-os-openstack-openstack-cell1" Jan 29 08:37:24 crc kubenswrapper[5017]: I0129 08:37:24.652467 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="abf45f94-a2ef-418f-a292-edee247b11c2" containerName="configure-os-openstack-openstack-cell1" Jan 29 08:37:24 crc kubenswrapper[5017]: I0129 08:37:24.653661 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-openstack-9fpln" Jan 29 08:37:24 crc kubenswrapper[5017]: I0129 08:37:24.656598 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Jan 29 08:37:24 crc kubenswrapper[5017]: I0129 08:37:24.656918 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 08:37:24 crc kubenswrapper[5017]: I0129 08:37:24.657685 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Jan 29 08:37:24 crc kubenswrapper[5017]: I0129 08:37:24.657866 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-8fskm" Jan 29 08:37:24 crc kubenswrapper[5017]: I0129 08:37:24.664598 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-openstack-9fpln"] Jan 29 08:37:24 crc kubenswrapper[5017]: I0129 08:37:24.856016 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/eef69ab1-52e5-4f13-84fe-f5cc49697fcb-inventory-0\") pod \"ssh-known-hosts-openstack-9fpln\" (UID: \"eef69ab1-52e5-4f13-84fe-f5cc49697fcb\") " pod="openstack/ssh-known-hosts-openstack-9fpln" Jan 29 08:37:24 crc kubenswrapper[5017]: I0129 08:37:24.856078 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lth8k\" (UniqueName: \"kubernetes.io/projected/eef69ab1-52e5-4f13-84fe-f5cc49697fcb-kube-api-access-lth8k\") pod \"ssh-known-hosts-openstack-9fpln\" (UID: \"eef69ab1-52e5-4f13-84fe-f5cc49697fcb\") " pod="openstack/ssh-known-hosts-openstack-9fpln" Jan 29 08:37:24 crc kubenswrapper[5017]: I0129 08:37:24.856126 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/eef69ab1-52e5-4f13-84fe-f5cc49697fcb-ssh-key-openstack-cell1\") pod \"ssh-known-hosts-openstack-9fpln\" (UID: \"eef69ab1-52e5-4f13-84fe-f5cc49697fcb\") " pod="openstack/ssh-known-hosts-openstack-9fpln" Jan 29 08:37:24 crc kubenswrapper[5017]: I0129 08:37:24.856596 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/eef69ab1-52e5-4f13-84fe-f5cc49697fcb-ceph\") pod \"ssh-known-hosts-openstack-9fpln\" (UID: 
\"eef69ab1-52e5-4f13-84fe-f5cc49697fcb\") " pod="openstack/ssh-known-hosts-openstack-9fpln" Jan 29 08:37:24 crc kubenswrapper[5017]: I0129 08:37:24.960359 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/eef69ab1-52e5-4f13-84fe-f5cc49697fcb-ceph\") pod \"ssh-known-hosts-openstack-9fpln\" (UID: \"eef69ab1-52e5-4f13-84fe-f5cc49697fcb\") " pod="openstack/ssh-known-hosts-openstack-9fpln" Jan 29 08:37:24 crc kubenswrapper[5017]: I0129 08:37:24.960702 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/eef69ab1-52e5-4f13-84fe-f5cc49697fcb-inventory-0\") pod \"ssh-known-hosts-openstack-9fpln\" (UID: \"eef69ab1-52e5-4f13-84fe-f5cc49697fcb\") " pod="openstack/ssh-known-hosts-openstack-9fpln" Jan 29 08:37:24 crc kubenswrapper[5017]: I0129 08:37:24.960785 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lth8k\" (UniqueName: \"kubernetes.io/projected/eef69ab1-52e5-4f13-84fe-f5cc49697fcb-kube-api-access-lth8k\") pod \"ssh-known-hosts-openstack-9fpln\" (UID: \"eef69ab1-52e5-4f13-84fe-f5cc49697fcb\") " pod="openstack/ssh-known-hosts-openstack-9fpln" Jan 29 08:37:24 crc kubenswrapper[5017]: I0129 08:37:24.960934 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/eef69ab1-52e5-4f13-84fe-f5cc49697fcb-ssh-key-openstack-cell1\") pod \"ssh-known-hosts-openstack-9fpln\" (UID: \"eef69ab1-52e5-4f13-84fe-f5cc49697fcb\") " pod="openstack/ssh-known-hosts-openstack-9fpln" Jan 29 08:37:24 crc kubenswrapper[5017]: I0129 08:37:24.966087 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/eef69ab1-52e5-4f13-84fe-f5cc49697fcb-ssh-key-openstack-cell1\") pod \"ssh-known-hosts-openstack-9fpln\" (UID: \"eef69ab1-52e5-4f13-84fe-f5cc49697fcb\") " pod="openstack/ssh-known-hosts-openstack-9fpln" Jan 29 08:37:24 crc kubenswrapper[5017]: I0129 08:37:24.966143 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/eef69ab1-52e5-4f13-84fe-f5cc49697fcb-ceph\") pod \"ssh-known-hosts-openstack-9fpln\" (UID: \"eef69ab1-52e5-4f13-84fe-f5cc49697fcb\") " pod="openstack/ssh-known-hosts-openstack-9fpln" Jan 29 08:37:24 crc kubenswrapper[5017]: I0129 08:37:24.966544 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/eef69ab1-52e5-4f13-84fe-f5cc49697fcb-inventory-0\") pod \"ssh-known-hosts-openstack-9fpln\" (UID: \"eef69ab1-52e5-4f13-84fe-f5cc49697fcb\") " pod="openstack/ssh-known-hosts-openstack-9fpln" Jan 29 08:37:24 crc kubenswrapper[5017]: I0129 08:37:24.987363 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lth8k\" (UniqueName: \"kubernetes.io/projected/eef69ab1-52e5-4f13-84fe-f5cc49697fcb-kube-api-access-lth8k\") pod \"ssh-known-hosts-openstack-9fpln\" (UID: \"eef69ab1-52e5-4f13-84fe-f5cc49697fcb\") " pod="openstack/ssh-known-hosts-openstack-9fpln" Jan 29 08:37:25 crc kubenswrapper[5017]: I0129 08:37:25.275299 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-openstack-9fpln" Jan 29 08:37:25 crc kubenswrapper[5017]: I0129 08:37:25.876008 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-openstack-9fpln"] Jan 29 08:37:26 crc kubenswrapper[5017]: I0129 08:37:26.530518 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-openstack-9fpln" event={"ID":"eef69ab1-52e5-4f13-84fe-f5cc49697fcb","Type":"ContainerStarted","Data":"b5604d089459a149c0713db7be3c8012e8fcb0dea053860d2480e0a6314efce4"} Jan 29 08:37:27 crc kubenswrapper[5017]: I0129 08:37:27.544427 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-openstack-9fpln" event={"ID":"eef69ab1-52e5-4f13-84fe-f5cc49697fcb","Type":"ContainerStarted","Data":"7f10c7a6fa04a5a89f198a6f16f6f894aefe01e2fdd2d3f98b2d5b6860527d12"} Jan 29 08:37:33 crc kubenswrapper[5017]: I0129 08:37:33.317148 5017 scope.go:117] "RemoveContainer" containerID="29716ea24128bdf6ee46dfb15f1f1dfd3446061da8e9bec6ed837055b76aede7" Jan 29 08:37:33 crc kubenswrapper[5017]: E0129 08:37:33.317874 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:37:35 crc kubenswrapper[5017]: I0129 08:37:35.634251 5017 generic.go:334] "Generic (PLEG): container finished" podID="eef69ab1-52e5-4f13-84fe-f5cc49697fcb" containerID="7f10c7a6fa04a5a89f198a6f16f6f894aefe01e2fdd2d3f98b2d5b6860527d12" exitCode=0 Jan 29 08:37:35 crc kubenswrapper[5017]: I0129 08:37:35.634340 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-openstack-9fpln" event={"ID":"eef69ab1-52e5-4f13-84fe-f5cc49697fcb","Type":"ContainerDied","Data":"7f10c7a6fa04a5a89f198a6f16f6f894aefe01e2fdd2d3f98b2d5b6860527d12"} Jan 29 08:37:37 crc kubenswrapper[5017]: I0129 08:37:37.121733 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-openstack-9fpln" Jan 29 08:37:37 crc kubenswrapper[5017]: I0129 08:37:37.251447 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lth8k\" (UniqueName: \"kubernetes.io/projected/eef69ab1-52e5-4f13-84fe-f5cc49697fcb-kube-api-access-lth8k\") pod \"eef69ab1-52e5-4f13-84fe-f5cc49697fcb\" (UID: \"eef69ab1-52e5-4f13-84fe-f5cc49697fcb\") " Jan 29 08:37:37 crc kubenswrapper[5017]: I0129 08:37:37.252064 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/eef69ab1-52e5-4f13-84fe-f5cc49697fcb-ssh-key-openstack-cell1\") pod \"eef69ab1-52e5-4f13-84fe-f5cc49697fcb\" (UID: \"eef69ab1-52e5-4f13-84fe-f5cc49697fcb\") " Jan 29 08:37:37 crc kubenswrapper[5017]: I0129 08:37:37.252196 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/eef69ab1-52e5-4f13-84fe-f5cc49697fcb-ceph\") pod \"eef69ab1-52e5-4f13-84fe-f5cc49697fcb\" (UID: \"eef69ab1-52e5-4f13-84fe-f5cc49697fcb\") " Jan 29 08:37:37 crc kubenswrapper[5017]: I0129 08:37:37.252234 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/eef69ab1-52e5-4f13-84fe-f5cc49697fcb-inventory-0\") pod \"eef69ab1-52e5-4f13-84fe-f5cc49697fcb\" (UID: \"eef69ab1-52e5-4f13-84fe-f5cc49697fcb\") " Jan 29 08:37:37 crc kubenswrapper[5017]: I0129 08:37:37.258412 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eef69ab1-52e5-4f13-84fe-f5cc49697fcb-ceph" (OuterVolumeSpecName: "ceph") pod "eef69ab1-52e5-4f13-84fe-f5cc49697fcb" (UID: "eef69ab1-52e5-4f13-84fe-f5cc49697fcb"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:37:37 crc kubenswrapper[5017]: I0129 08:37:37.258540 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eef69ab1-52e5-4f13-84fe-f5cc49697fcb-kube-api-access-lth8k" (OuterVolumeSpecName: "kube-api-access-lth8k") pod "eef69ab1-52e5-4f13-84fe-f5cc49697fcb" (UID: "eef69ab1-52e5-4f13-84fe-f5cc49697fcb"). InnerVolumeSpecName "kube-api-access-lth8k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:37:37 crc kubenswrapper[5017]: I0129 08:37:37.284436 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eef69ab1-52e5-4f13-84fe-f5cc49697fcb-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "eef69ab1-52e5-4f13-84fe-f5cc49697fcb" (UID: "eef69ab1-52e5-4f13-84fe-f5cc49697fcb"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:37:37 crc kubenswrapper[5017]: I0129 08:37:37.291306 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eef69ab1-52e5-4f13-84fe-f5cc49697fcb-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "eef69ab1-52e5-4f13-84fe-f5cc49697fcb" (UID: "eef69ab1-52e5-4f13-84fe-f5cc49697fcb"). InnerVolumeSpecName "inventory-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:37:37 crc kubenswrapper[5017]: I0129 08:37:37.355146 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lth8k\" (UniqueName: \"kubernetes.io/projected/eef69ab1-52e5-4f13-84fe-f5cc49697fcb-kube-api-access-lth8k\") on node \"crc\" DevicePath \"\"" Jan 29 08:37:37 crc kubenswrapper[5017]: I0129 08:37:37.355193 5017 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/eef69ab1-52e5-4f13-84fe-f5cc49697fcb-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 29 08:37:37 crc kubenswrapper[5017]: I0129 08:37:37.355207 5017 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/eef69ab1-52e5-4f13-84fe-f5cc49697fcb-ceph\") on node \"crc\" DevicePath \"\"" Jan 29 08:37:37 crc kubenswrapper[5017]: I0129 08:37:37.355218 5017 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/eef69ab1-52e5-4f13-84fe-f5cc49697fcb-inventory-0\") on node \"crc\" DevicePath \"\"" Jan 29 08:37:37 crc kubenswrapper[5017]: I0129 08:37:37.655252 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-openstack-9fpln" event={"ID":"eef69ab1-52e5-4f13-84fe-f5cc49697fcb","Type":"ContainerDied","Data":"b5604d089459a149c0713db7be3c8012e8fcb0dea053860d2480e0a6314efce4"} Jan 29 08:37:37 crc kubenswrapper[5017]: I0129 08:37:37.655330 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b5604d089459a149c0713db7be3c8012e8fcb0dea053860d2480e0a6314efce4" Jan 29 08:37:37 crc kubenswrapper[5017]: I0129 08:37:37.655398 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-openstack-9fpln" Jan 29 08:37:37 crc kubenswrapper[5017]: I0129 08:37:37.737659 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-openstack-openstack-cell1-99nxk"] Jan 29 08:37:37 crc kubenswrapper[5017]: E0129 08:37:37.738173 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eef69ab1-52e5-4f13-84fe-f5cc49697fcb" containerName="ssh-known-hosts-openstack" Jan 29 08:37:37 crc kubenswrapper[5017]: I0129 08:37:37.738191 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="eef69ab1-52e5-4f13-84fe-f5cc49697fcb" containerName="ssh-known-hosts-openstack" Jan 29 08:37:37 crc kubenswrapper[5017]: I0129 08:37:37.738379 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="eef69ab1-52e5-4f13-84fe-f5cc49697fcb" containerName="ssh-known-hosts-openstack" Jan 29 08:37:37 crc kubenswrapper[5017]: I0129 08:37:37.739152 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-openstack-openstack-cell1-99nxk" Jan 29 08:37:37 crc kubenswrapper[5017]: I0129 08:37:37.742348 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Jan 29 08:37:37 crc kubenswrapper[5017]: I0129 08:37:37.742774 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Jan 29 08:37:37 crc kubenswrapper[5017]: I0129 08:37:37.743096 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-8fskm" Jan 29 08:37:37 crc kubenswrapper[5017]: I0129 08:37:37.743239 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 08:37:37 crc kubenswrapper[5017]: I0129 08:37:37.778022 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-openstack-openstack-cell1-99nxk"] Jan 29 08:37:37 crc kubenswrapper[5017]: I0129 08:37:37.866495 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a6637a8b-efe8-4fa4-995c-1d0023c627ce-ceph\") pod \"run-os-openstack-openstack-cell1-99nxk\" (UID: \"a6637a8b-efe8-4fa4-995c-1d0023c627ce\") " pod="openstack/run-os-openstack-openstack-cell1-99nxk" Jan 29 08:37:37 crc kubenswrapper[5017]: I0129 08:37:37.867037 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/a6637a8b-efe8-4fa4-995c-1d0023c627ce-ssh-key-openstack-cell1\") pod \"run-os-openstack-openstack-cell1-99nxk\" (UID: \"a6637a8b-efe8-4fa4-995c-1d0023c627ce\") " pod="openstack/run-os-openstack-openstack-cell1-99nxk" Jan 29 08:37:37 crc kubenswrapper[5017]: I0129 08:37:37.867170 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwrdn\" (UniqueName: \"kubernetes.io/projected/a6637a8b-efe8-4fa4-995c-1d0023c627ce-kube-api-access-nwrdn\") pod \"run-os-openstack-openstack-cell1-99nxk\" (UID: \"a6637a8b-efe8-4fa4-995c-1d0023c627ce\") " pod="openstack/run-os-openstack-openstack-cell1-99nxk" Jan 29 08:37:37 crc kubenswrapper[5017]: I0129 08:37:37.867210 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a6637a8b-efe8-4fa4-995c-1d0023c627ce-inventory\") pod \"run-os-openstack-openstack-cell1-99nxk\" (UID: \"a6637a8b-efe8-4fa4-995c-1d0023c627ce\") " pod="openstack/run-os-openstack-openstack-cell1-99nxk" Jan 29 08:37:37 crc kubenswrapper[5017]: I0129 08:37:37.968930 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwrdn\" (UniqueName: \"kubernetes.io/projected/a6637a8b-efe8-4fa4-995c-1d0023c627ce-kube-api-access-nwrdn\") pod \"run-os-openstack-openstack-cell1-99nxk\" (UID: \"a6637a8b-efe8-4fa4-995c-1d0023c627ce\") " pod="openstack/run-os-openstack-openstack-cell1-99nxk" Jan 29 08:37:37 crc kubenswrapper[5017]: I0129 08:37:37.969053 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a6637a8b-efe8-4fa4-995c-1d0023c627ce-inventory\") pod \"run-os-openstack-openstack-cell1-99nxk\" (UID: \"a6637a8b-efe8-4fa4-995c-1d0023c627ce\") " pod="openstack/run-os-openstack-openstack-cell1-99nxk" Jan 29 08:37:37 crc kubenswrapper[5017]: I0129 08:37:37.969297 5017 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a6637a8b-efe8-4fa4-995c-1d0023c627ce-ceph\") pod \"run-os-openstack-openstack-cell1-99nxk\" (UID: \"a6637a8b-efe8-4fa4-995c-1d0023c627ce\") " pod="openstack/run-os-openstack-openstack-cell1-99nxk" Jan 29 08:37:37 crc kubenswrapper[5017]: I0129 08:37:37.969335 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/a6637a8b-efe8-4fa4-995c-1d0023c627ce-ssh-key-openstack-cell1\") pod \"run-os-openstack-openstack-cell1-99nxk\" (UID: \"a6637a8b-efe8-4fa4-995c-1d0023c627ce\") " pod="openstack/run-os-openstack-openstack-cell1-99nxk" Jan 29 08:37:37 crc kubenswrapper[5017]: I0129 08:37:37.992677 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/a6637a8b-efe8-4fa4-995c-1d0023c627ce-ssh-key-openstack-cell1\") pod \"run-os-openstack-openstack-cell1-99nxk\" (UID: \"a6637a8b-efe8-4fa4-995c-1d0023c627ce\") " pod="openstack/run-os-openstack-openstack-cell1-99nxk" Jan 29 08:37:37 crc kubenswrapper[5017]: I0129 08:37:37.993579 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwrdn\" (UniqueName: \"kubernetes.io/projected/a6637a8b-efe8-4fa4-995c-1d0023c627ce-kube-api-access-nwrdn\") pod \"run-os-openstack-openstack-cell1-99nxk\" (UID: \"a6637a8b-efe8-4fa4-995c-1d0023c627ce\") " pod="openstack/run-os-openstack-openstack-cell1-99nxk" Jan 29 08:37:37 crc kubenswrapper[5017]: I0129 08:37:37.996406 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a6637a8b-efe8-4fa4-995c-1d0023c627ce-ceph\") pod \"run-os-openstack-openstack-cell1-99nxk\" (UID: \"a6637a8b-efe8-4fa4-995c-1d0023c627ce\") " pod="openstack/run-os-openstack-openstack-cell1-99nxk" Jan 29 08:37:37 crc kubenswrapper[5017]: I0129 08:37:37.999362 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a6637a8b-efe8-4fa4-995c-1d0023c627ce-inventory\") pod \"run-os-openstack-openstack-cell1-99nxk\" (UID: \"a6637a8b-efe8-4fa4-995c-1d0023c627ce\") " pod="openstack/run-os-openstack-openstack-cell1-99nxk" Jan 29 08:37:38 crc kubenswrapper[5017]: I0129 08:37:38.059023 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-openstack-openstack-cell1-99nxk" Jan 29 08:37:38 crc kubenswrapper[5017]: I0129 08:37:38.698919 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-openstack-openstack-cell1-99nxk"] Jan 29 08:37:39 crc kubenswrapper[5017]: I0129 08:37:39.676414 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-openstack-openstack-cell1-99nxk" event={"ID":"a6637a8b-efe8-4fa4-995c-1d0023c627ce","Type":"ContainerStarted","Data":"5724bccb9cf2c0e29f1cf0edab14f02831a217b831307567e166ed08f8974bcb"} Jan 29 08:37:40 crc kubenswrapper[5017]: I0129 08:37:40.689293 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-openstack-openstack-cell1-99nxk" event={"ID":"a6637a8b-efe8-4fa4-995c-1d0023c627ce","Type":"ContainerStarted","Data":"40a0c83f9d77a726c638a0090662e403a61cb021f78cf84e993dcba92407de16"} Jan 29 08:37:47 crc kubenswrapper[5017]: I0129 08:37:47.786112 5017 generic.go:334] "Generic (PLEG): container finished" podID="a6637a8b-efe8-4fa4-995c-1d0023c627ce" containerID="40a0c83f9d77a726c638a0090662e403a61cb021f78cf84e993dcba92407de16" exitCode=0 Jan 29 08:37:47 crc kubenswrapper[5017]: I0129 08:37:47.786207 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-openstack-openstack-cell1-99nxk" event={"ID":"a6637a8b-efe8-4fa4-995c-1d0023c627ce","Type":"ContainerDied","Data":"40a0c83f9d77a726c638a0090662e403a61cb021f78cf84e993dcba92407de16"} Jan 29 08:37:48 crc kubenswrapper[5017]: I0129 08:37:48.317572 5017 scope.go:117] "RemoveContainer" containerID="29716ea24128bdf6ee46dfb15f1f1dfd3446061da8e9bec6ed837055b76aede7" Jan 29 08:37:48 crc kubenswrapper[5017]: E0129 08:37:48.318420 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:37:49 crc kubenswrapper[5017]: I0129 08:37:49.267090 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-openstack-openstack-cell1-99nxk" Jan 29 08:37:49 crc kubenswrapper[5017]: I0129 08:37:49.351502 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a6637a8b-efe8-4fa4-995c-1d0023c627ce-inventory\") pod \"a6637a8b-efe8-4fa4-995c-1d0023c627ce\" (UID: \"a6637a8b-efe8-4fa4-995c-1d0023c627ce\") " Jan 29 08:37:49 crc kubenswrapper[5017]: I0129 08:37:49.351702 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nwrdn\" (UniqueName: \"kubernetes.io/projected/a6637a8b-efe8-4fa4-995c-1d0023c627ce-kube-api-access-nwrdn\") pod \"a6637a8b-efe8-4fa4-995c-1d0023c627ce\" (UID: \"a6637a8b-efe8-4fa4-995c-1d0023c627ce\") " Jan 29 08:37:49 crc kubenswrapper[5017]: I0129 08:37:49.351795 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a6637a8b-efe8-4fa4-995c-1d0023c627ce-ceph\") pod \"a6637a8b-efe8-4fa4-995c-1d0023c627ce\" (UID: \"a6637a8b-efe8-4fa4-995c-1d0023c627ce\") " Jan 29 08:37:49 crc kubenswrapper[5017]: I0129 08:37:49.351871 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/a6637a8b-efe8-4fa4-995c-1d0023c627ce-ssh-key-openstack-cell1\") pod \"a6637a8b-efe8-4fa4-995c-1d0023c627ce\" (UID: \"a6637a8b-efe8-4fa4-995c-1d0023c627ce\") " Jan 29 08:37:49 crc kubenswrapper[5017]: I0129 08:37:49.358909 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6637a8b-efe8-4fa4-995c-1d0023c627ce-ceph" (OuterVolumeSpecName: "ceph") pod "a6637a8b-efe8-4fa4-995c-1d0023c627ce" (UID: "a6637a8b-efe8-4fa4-995c-1d0023c627ce"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:37:49 crc kubenswrapper[5017]: I0129 08:37:49.359333 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6637a8b-efe8-4fa4-995c-1d0023c627ce-kube-api-access-nwrdn" (OuterVolumeSpecName: "kube-api-access-nwrdn") pod "a6637a8b-efe8-4fa4-995c-1d0023c627ce" (UID: "a6637a8b-efe8-4fa4-995c-1d0023c627ce"). InnerVolumeSpecName "kube-api-access-nwrdn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:37:49 crc kubenswrapper[5017]: I0129 08:37:49.382480 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6637a8b-efe8-4fa4-995c-1d0023c627ce-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "a6637a8b-efe8-4fa4-995c-1d0023c627ce" (UID: "a6637a8b-efe8-4fa4-995c-1d0023c627ce"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:37:49 crc kubenswrapper[5017]: I0129 08:37:49.382834 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6637a8b-efe8-4fa4-995c-1d0023c627ce-inventory" (OuterVolumeSpecName: "inventory") pod "a6637a8b-efe8-4fa4-995c-1d0023c627ce" (UID: "a6637a8b-efe8-4fa4-995c-1d0023c627ce"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:37:49 crc kubenswrapper[5017]: I0129 08:37:49.455433 5017 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a6637a8b-efe8-4fa4-995c-1d0023c627ce-inventory\") on node \"crc\" DevicePath \"\"" Jan 29 08:37:49 crc kubenswrapper[5017]: I0129 08:37:49.455476 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nwrdn\" (UniqueName: \"kubernetes.io/projected/a6637a8b-efe8-4fa4-995c-1d0023c627ce-kube-api-access-nwrdn\") on node \"crc\" DevicePath \"\"" Jan 29 08:37:49 crc kubenswrapper[5017]: I0129 08:37:49.455491 5017 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a6637a8b-efe8-4fa4-995c-1d0023c627ce-ceph\") on node \"crc\" DevicePath \"\"" Jan 29 08:37:49 crc kubenswrapper[5017]: I0129 08:37:49.455504 5017 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/a6637a8b-efe8-4fa4-995c-1d0023c627ce-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 29 08:37:49 crc kubenswrapper[5017]: I0129 08:37:49.808032 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-openstack-openstack-cell1-99nxk" event={"ID":"a6637a8b-efe8-4fa4-995c-1d0023c627ce","Type":"ContainerDied","Data":"5724bccb9cf2c0e29f1cf0edab14f02831a217b831307567e166ed08f8974bcb"} Jan 29 08:37:49 crc kubenswrapper[5017]: I0129 08:37:49.808092 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5724bccb9cf2c0e29f1cf0edab14f02831a217b831307567e166ed08f8974bcb" Jan 29 08:37:49 crc kubenswrapper[5017]: I0129 08:37:49.808176 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-openstack-openstack-cell1-99nxk" Jan 29 08:37:49 crc kubenswrapper[5017]: I0129 08:37:49.884736 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-openstack-openstack-cell1-r7qwl"] Jan 29 08:37:49 crc kubenswrapper[5017]: E0129 08:37:49.885260 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6637a8b-efe8-4fa4-995c-1d0023c627ce" containerName="run-os-openstack-openstack-cell1" Jan 29 08:37:49 crc kubenswrapper[5017]: I0129 08:37:49.885279 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6637a8b-efe8-4fa4-995c-1d0023c627ce" containerName="run-os-openstack-openstack-cell1" Jan 29 08:37:49 crc kubenswrapper[5017]: I0129 08:37:49.889860 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6637a8b-efe8-4fa4-995c-1d0023c627ce" containerName="run-os-openstack-openstack-cell1" Jan 29 08:37:49 crc kubenswrapper[5017]: I0129 08:37:49.890853 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-openstack-openstack-cell1-r7qwl" Jan 29 08:37:49 crc kubenswrapper[5017]: I0129 08:37:49.896404 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 08:37:49 crc kubenswrapper[5017]: I0129 08:37:49.897094 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-8fskm" Jan 29 08:37:49 crc kubenswrapper[5017]: I0129 08:37:49.897218 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Jan 29 08:37:49 crc kubenswrapper[5017]: I0129 08:37:49.897325 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Jan 29 08:37:49 crc kubenswrapper[5017]: I0129 08:37:49.898775 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-openstack-openstack-cell1-r7qwl"] Jan 29 08:37:49 crc kubenswrapper[5017]: I0129 08:37:49.967543 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqnnf\" (UniqueName: \"kubernetes.io/projected/2a087e53-8f2b-4f84-a483-80dab07ccfb9-kube-api-access-tqnnf\") pod \"reboot-os-openstack-openstack-cell1-r7qwl\" (UID: \"2a087e53-8f2b-4f84-a483-80dab07ccfb9\") " pod="openstack/reboot-os-openstack-openstack-cell1-r7qwl" Jan 29 08:37:49 crc kubenswrapper[5017]: I0129 08:37:49.967664 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2a087e53-8f2b-4f84-a483-80dab07ccfb9-inventory\") pod \"reboot-os-openstack-openstack-cell1-r7qwl\" (UID: \"2a087e53-8f2b-4f84-a483-80dab07ccfb9\") " pod="openstack/reboot-os-openstack-openstack-cell1-r7qwl" Jan 29 08:37:49 crc kubenswrapper[5017]: I0129 08:37:49.967767 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/2a087e53-8f2b-4f84-a483-80dab07ccfb9-ceph\") pod \"reboot-os-openstack-openstack-cell1-r7qwl\" (UID: \"2a087e53-8f2b-4f84-a483-80dab07ccfb9\") " pod="openstack/reboot-os-openstack-openstack-cell1-r7qwl" Jan 29 08:37:49 crc kubenswrapper[5017]: I0129 08:37:49.967800 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/2a087e53-8f2b-4f84-a483-80dab07ccfb9-ssh-key-openstack-cell1\") pod \"reboot-os-openstack-openstack-cell1-r7qwl\" (UID: \"2a087e53-8f2b-4f84-a483-80dab07ccfb9\") " pod="openstack/reboot-os-openstack-openstack-cell1-r7qwl" Jan 29 08:37:50 crc kubenswrapper[5017]: I0129 08:37:50.069440 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqnnf\" (UniqueName: \"kubernetes.io/projected/2a087e53-8f2b-4f84-a483-80dab07ccfb9-kube-api-access-tqnnf\") pod \"reboot-os-openstack-openstack-cell1-r7qwl\" (UID: \"2a087e53-8f2b-4f84-a483-80dab07ccfb9\") " pod="openstack/reboot-os-openstack-openstack-cell1-r7qwl" Jan 29 08:37:50 crc kubenswrapper[5017]: I0129 08:37:50.069523 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2a087e53-8f2b-4f84-a483-80dab07ccfb9-inventory\") pod \"reboot-os-openstack-openstack-cell1-r7qwl\" (UID: \"2a087e53-8f2b-4f84-a483-80dab07ccfb9\") " pod="openstack/reboot-os-openstack-openstack-cell1-r7qwl" Jan 29 08:37:50 crc 
kubenswrapper[5017]: I0129 08:37:50.069648 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/2a087e53-8f2b-4f84-a483-80dab07ccfb9-ceph\") pod \"reboot-os-openstack-openstack-cell1-r7qwl\" (UID: \"2a087e53-8f2b-4f84-a483-80dab07ccfb9\") " pod="openstack/reboot-os-openstack-openstack-cell1-r7qwl" Jan 29 08:37:50 crc kubenswrapper[5017]: I0129 08:37:50.070507 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/2a087e53-8f2b-4f84-a483-80dab07ccfb9-ssh-key-openstack-cell1\") pod \"reboot-os-openstack-openstack-cell1-r7qwl\" (UID: \"2a087e53-8f2b-4f84-a483-80dab07ccfb9\") " pod="openstack/reboot-os-openstack-openstack-cell1-r7qwl" Jan 29 08:37:50 crc kubenswrapper[5017]: I0129 08:37:50.075662 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2a087e53-8f2b-4f84-a483-80dab07ccfb9-inventory\") pod \"reboot-os-openstack-openstack-cell1-r7qwl\" (UID: \"2a087e53-8f2b-4f84-a483-80dab07ccfb9\") " pod="openstack/reboot-os-openstack-openstack-cell1-r7qwl" Jan 29 08:37:50 crc kubenswrapper[5017]: I0129 08:37:50.075667 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/2a087e53-8f2b-4f84-a483-80dab07ccfb9-ssh-key-openstack-cell1\") pod \"reboot-os-openstack-openstack-cell1-r7qwl\" (UID: \"2a087e53-8f2b-4f84-a483-80dab07ccfb9\") " pod="openstack/reboot-os-openstack-openstack-cell1-r7qwl" Jan 29 08:37:50 crc kubenswrapper[5017]: I0129 08:37:50.087938 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/2a087e53-8f2b-4f84-a483-80dab07ccfb9-ceph\") pod \"reboot-os-openstack-openstack-cell1-r7qwl\" (UID: \"2a087e53-8f2b-4f84-a483-80dab07ccfb9\") " pod="openstack/reboot-os-openstack-openstack-cell1-r7qwl" Jan 29 08:37:50 crc kubenswrapper[5017]: I0129 08:37:50.092623 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqnnf\" (UniqueName: \"kubernetes.io/projected/2a087e53-8f2b-4f84-a483-80dab07ccfb9-kube-api-access-tqnnf\") pod \"reboot-os-openstack-openstack-cell1-r7qwl\" (UID: \"2a087e53-8f2b-4f84-a483-80dab07ccfb9\") " pod="openstack/reboot-os-openstack-openstack-cell1-r7qwl" Jan 29 08:37:50 crc kubenswrapper[5017]: I0129 08:37:50.213631 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-openstack-openstack-cell1-r7qwl" Jan 29 08:37:50 crc kubenswrapper[5017]: I0129 08:37:50.757920 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-openstack-openstack-cell1-r7qwl"] Jan 29 08:37:50 crc kubenswrapper[5017]: I0129 08:37:50.818709 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-openstack-openstack-cell1-r7qwl" event={"ID":"2a087e53-8f2b-4f84-a483-80dab07ccfb9","Type":"ContainerStarted","Data":"b29e1ebcd9b9840eef655a301ff72fdbaa68cc5fa0941d8c26828e1f5ad6db7d"} Jan 29 08:37:51 crc kubenswrapper[5017]: I0129 08:37:51.831864 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-openstack-openstack-cell1-r7qwl" event={"ID":"2a087e53-8f2b-4f84-a483-80dab07ccfb9","Type":"ContainerStarted","Data":"0621a47931873ca21a1819720b9a7524d0d639d95796775a687ff6bcbed0f4e8"} Jan 29 08:37:51 crc kubenswrapper[5017]: I0129 08:37:51.879769 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-openstack-openstack-cell1-r7qwl" podStartSLOduration=2.390543115 podStartE2EDuration="2.879736613s" podCreationTimestamp="2026-01-29 08:37:49 +0000 UTC" firstStartedPulling="2026-01-29 08:37:50.765664547 +0000 UTC m=+7357.140112157" lastFinishedPulling="2026-01-29 08:37:51.254858045 +0000 UTC m=+7357.629305655" observedRunningTime="2026-01-29 08:37:51.853559294 +0000 UTC m=+7358.228006934" watchObservedRunningTime="2026-01-29 08:37:51.879736613 +0000 UTC m=+7358.254184213" Jan 29 08:38:01 crc kubenswrapper[5017]: I0129 08:38:01.316192 5017 scope.go:117] "RemoveContainer" containerID="29716ea24128bdf6ee46dfb15f1f1dfd3446061da8e9bec6ed837055b76aede7" Jan 29 08:38:01 crc kubenswrapper[5017]: I0129 08:38:01.984321 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-895pl" event={"ID":"2672ef63-7861-4c3d-a1b4-03cc9d18f8e2","Type":"ContainerStarted","Data":"529e90843cc8236dc8610886df83447cc84a03878dfc7968f456b6e4cf77b3e9"} Jan 29 08:38:08 crc kubenswrapper[5017]: I0129 08:38:08.047054 5017 generic.go:334] "Generic (PLEG): container finished" podID="2a087e53-8f2b-4f84-a483-80dab07ccfb9" containerID="0621a47931873ca21a1819720b9a7524d0d639d95796775a687ff6bcbed0f4e8" exitCode=0 Jan 29 08:38:08 crc kubenswrapper[5017]: I0129 08:38:08.047858 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-openstack-openstack-cell1-r7qwl" event={"ID":"2a087e53-8f2b-4f84-a483-80dab07ccfb9","Type":"ContainerDied","Data":"0621a47931873ca21a1819720b9a7524d0d639d95796775a687ff6bcbed0f4e8"} Jan 29 08:38:09 crc kubenswrapper[5017]: I0129 08:38:09.534231 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-openstack-openstack-cell1-r7qwl" Jan 29 08:38:09 crc kubenswrapper[5017]: I0129 08:38:09.680497 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/2a087e53-8f2b-4f84-a483-80dab07ccfb9-ceph\") pod \"2a087e53-8f2b-4f84-a483-80dab07ccfb9\" (UID: \"2a087e53-8f2b-4f84-a483-80dab07ccfb9\") " Jan 29 08:38:09 crc kubenswrapper[5017]: I0129 08:38:09.680589 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2a087e53-8f2b-4f84-a483-80dab07ccfb9-inventory\") pod \"2a087e53-8f2b-4f84-a483-80dab07ccfb9\" (UID: \"2a087e53-8f2b-4f84-a483-80dab07ccfb9\") " Jan 29 08:38:09 crc kubenswrapper[5017]: I0129 08:38:09.680788 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/2a087e53-8f2b-4f84-a483-80dab07ccfb9-ssh-key-openstack-cell1\") pod \"2a087e53-8f2b-4f84-a483-80dab07ccfb9\" (UID: \"2a087e53-8f2b-4f84-a483-80dab07ccfb9\") " Jan 29 08:38:09 crc kubenswrapper[5017]: I0129 08:38:09.680820 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tqnnf\" (UniqueName: \"kubernetes.io/projected/2a087e53-8f2b-4f84-a483-80dab07ccfb9-kube-api-access-tqnnf\") pod \"2a087e53-8f2b-4f84-a483-80dab07ccfb9\" (UID: \"2a087e53-8f2b-4f84-a483-80dab07ccfb9\") " Jan 29 08:38:09 crc kubenswrapper[5017]: I0129 08:38:09.691196 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a087e53-8f2b-4f84-a483-80dab07ccfb9-ceph" (OuterVolumeSpecName: "ceph") pod "2a087e53-8f2b-4f84-a483-80dab07ccfb9" (UID: "2a087e53-8f2b-4f84-a483-80dab07ccfb9"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:38:09 crc kubenswrapper[5017]: I0129 08:38:09.691320 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a087e53-8f2b-4f84-a483-80dab07ccfb9-kube-api-access-tqnnf" (OuterVolumeSpecName: "kube-api-access-tqnnf") pod "2a087e53-8f2b-4f84-a483-80dab07ccfb9" (UID: "2a087e53-8f2b-4f84-a483-80dab07ccfb9"). InnerVolumeSpecName "kube-api-access-tqnnf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:38:09 crc kubenswrapper[5017]: I0129 08:38:09.714277 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a087e53-8f2b-4f84-a483-80dab07ccfb9-inventory" (OuterVolumeSpecName: "inventory") pod "2a087e53-8f2b-4f84-a483-80dab07ccfb9" (UID: "2a087e53-8f2b-4f84-a483-80dab07ccfb9"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:38:09 crc kubenswrapper[5017]: I0129 08:38:09.718217 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a087e53-8f2b-4f84-a483-80dab07ccfb9-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "2a087e53-8f2b-4f84-a483-80dab07ccfb9" (UID: "2a087e53-8f2b-4f84-a483-80dab07ccfb9"). InnerVolumeSpecName "ssh-key-openstack-cell1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:38:09 crc kubenswrapper[5017]: I0129 08:38:09.783649 5017 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/2a087e53-8f2b-4f84-a483-80dab07ccfb9-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 29 08:38:09 crc kubenswrapper[5017]: I0129 08:38:09.783936 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tqnnf\" (UniqueName: \"kubernetes.io/projected/2a087e53-8f2b-4f84-a483-80dab07ccfb9-kube-api-access-tqnnf\") on node \"crc\" DevicePath \"\"" Jan 29 08:38:09 crc kubenswrapper[5017]: I0129 08:38:09.784090 5017 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/2a087e53-8f2b-4f84-a483-80dab07ccfb9-ceph\") on node \"crc\" DevicePath \"\"" Jan 29 08:38:09 crc kubenswrapper[5017]: I0129 08:38:09.784189 5017 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2a087e53-8f2b-4f84-a483-80dab07ccfb9-inventory\") on node \"crc\" DevicePath \"\"" Jan 29 08:38:10 crc kubenswrapper[5017]: I0129 08:38:10.067359 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-openstack-openstack-cell1-r7qwl" event={"ID":"2a087e53-8f2b-4f84-a483-80dab07ccfb9","Type":"ContainerDied","Data":"b29e1ebcd9b9840eef655a301ff72fdbaa68cc5fa0941d8c26828e1f5ad6db7d"} Jan 29 08:38:10 crc kubenswrapper[5017]: I0129 08:38:10.067412 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b29e1ebcd9b9840eef655a301ff72fdbaa68cc5fa0941d8c26828e1f5ad6db7d" Jan 29 08:38:10 crc kubenswrapper[5017]: I0129 08:38:10.067478 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-openstack-openstack-cell1-r7qwl" Jan 29 08:38:10 crc kubenswrapper[5017]: I0129 08:38:10.166059 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-openstack-openstack-cell1-d4dqv"] Jan 29 08:38:10 crc kubenswrapper[5017]: E0129 08:38:10.166552 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a087e53-8f2b-4f84-a483-80dab07ccfb9" containerName="reboot-os-openstack-openstack-cell1" Jan 29 08:38:10 crc kubenswrapper[5017]: I0129 08:38:10.166573 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a087e53-8f2b-4f84-a483-80dab07ccfb9" containerName="reboot-os-openstack-openstack-cell1" Jan 29 08:38:10 crc kubenswrapper[5017]: I0129 08:38:10.166817 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a087e53-8f2b-4f84-a483-80dab07ccfb9" containerName="reboot-os-openstack-openstack-cell1" Jan 29 08:38:10 crc kubenswrapper[5017]: I0129 08:38:10.167683 5017 util.go:30] "No sandbox for pod can be found. 
Jan 29 08:38:10 crc kubenswrapper[5017]: I0129 08:38:10.167683 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-openstack-openstack-cell1-d4dqv"
Jan 29 08:38:10 crc kubenswrapper[5017]: I0129 08:38:10.175880 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret"
Jan 29 08:38:10 crc kubenswrapper[5017]: I0129 08:38:10.176190 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1"
Jan 29 08:38:10 crc kubenswrapper[5017]: I0129 08:38:10.176547 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 29 08:38:10 crc kubenswrapper[5017]: I0129 08:38:10.184335 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-8fskm"
Jan 29 08:38:10 crc kubenswrapper[5017]: I0129 08:38:10.194904 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-openstack-openstack-cell1-d4dqv"]
Jan 29 08:38:10 crc kubenswrapper[5017]: I0129 08:38:10.296809 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59dc6dd8-36e1-4d52-84fd-be50d1e1b398-nova-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-d4dqv\" (UID: \"59dc6dd8-36e1-4d52-84fd-be50d1e1b398\") " pod="openstack/install-certs-openstack-openstack-cell1-d4dqv"
Jan 29 08:38:10 crc kubenswrapper[5017]: I0129 08:38:10.296875 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59dc6dd8-36e1-4d52-84fd-be50d1e1b398-neutron-metadata-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-d4dqv\" (UID: \"59dc6dd8-36e1-4d52-84fd-be50d1e1b398\") " pod="openstack/install-certs-openstack-openstack-cell1-d4dqv"
Jan 29 08:38:10 crc kubenswrapper[5017]: I0129 08:38:10.296944 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59dc6dd8-36e1-4d52-84fd-be50d1e1b398-libvirt-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-d4dqv\" (UID: \"59dc6dd8-36e1-4d52-84fd-be50d1e1b398\") " pod="openstack/install-certs-openstack-openstack-cell1-d4dqv"
Jan 29 08:38:10 crc kubenswrapper[5017]: I0129 08:38:10.297005 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59dc6dd8-36e1-4d52-84fd-be50d1e1b398-neutron-sriov-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-d4dqv\" (UID: \"59dc6dd8-36e1-4d52-84fd-be50d1e1b398\") " pod="openstack/install-certs-openstack-openstack-cell1-d4dqv"
Jan 29 08:38:10 crc kubenswrapper[5017]: I0129 08:38:10.297047 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59dc6dd8-36e1-4d52-84fd-be50d1e1b398-bootstrap-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-d4dqv\" (UID: \"59dc6dd8-36e1-4d52-84fd-be50d1e1b398\") " pod="openstack/install-certs-openstack-openstack-cell1-d4dqv"
Jan 29 08:38:10 crc kubenswrapper[5017]: I0129 08:38:10.297068 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4x2b\" (UniqueName: \"kubernetes.io/projected/59dc6dd8-36e1-4d52-84fd-be50d1e1b398-kube-api-access-j4x2b\") pod \"install-certs-openstack-openstack-cell1-d4dqv\" (UID: \"59dc6dd8-36e1-4d52-84fd-be50d1e1b398\") " pod="openstack/install-certs-openstack-openstack-cell1-d4dqv"
Jan 29 08:38:10 crc kubenswrapper[5017]: I0129 08:38:10.297098 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/59dc6dd8-36e1-4d52-84fd-be50d1e1b398-ceph\") pod \"install-certs-openstack-openstack-cell1-d4dqv\" (UID: \"59dc6dd8-36e1-4d52-84fd-be50d1e1b398\") " pod="openstack/install-certs-openstack-openstack-cell1-d4dqv"
Jan 29 08:38:10 crc kubenswrapper[5017]: I0129 08:38:10.297120 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59dc6dd8-36e1-4d52-84fd-be50d1e1b398-neutron-dhcp-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-d4dqv\" (UID: \"59dc6dd8-36e1-4d52-84fd-be50d1e1b398\") " pod="openstack/install-certs-openstack-openstack-cell1-d4dqv"
Jan 29 08:38:10 crc kubenswrapper[5017]: I0129 08:38:10.297147 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59dc6dd8-36e1-4d52-84fd-be50d1e1b398-ovn-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-d4dqv\" (UID: \"59dc6dd8-36e1-4d52-84fd-be50d1e1b398\") " pod="openstack/install-certs-openstack-openstack-cell1-d4dqv"
Jan 29 08:38:10 crc kubenswrapper[5017]: I0129 08:38:10.297181 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/59dc6dd8-36e1-4d52-84fd-be50d1e1b398-ssh-key-openstack-cell1\") pod \"install-certs-openstack-openstack-cell1-d4dqv\" (UID: \"59dc6dd8-36e1-4d52-84fd-be50d1e1b398\") " pod="openstack/install-certs-openstack-openstack-cell1-d4dqv"
Jan 29 08:38:10 crc kubenswrapper[5017]: I0129 08:38:10.297236 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59dc6dd8-36e1-4d52-84fd-be50d1e1b398-telemetry-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-d4dqv\" (UID: \"59dc6dd8-36e1-4d52-84fd-be50d1e1b398\") " pod="openstack/install-certs-openstack-openstack-cell1-d4dqv"
Jan 29 08:38:10 crc kubenswrapper[5017]: I0129 08:38:10.297299 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/59dc6dd8-36e1-4d52-84fd-be50d1e1b398-inventory\") pod \"install-certs-openstack-openstack-cell1-d4dqv\" (UID: \"59dc6dd8-36e1-4d52-84fd-be50d1e1b398\") " pod="openstack/install-certs-openstack-openstack-cell1-d4dqv"
Jan 29 08:38:10 crc kubenswrapper[5017]: I0129 08:38:10.399670 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59dc6dd8-36e1-4d52-84fd-be50d1e1b398-libvirt-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-d4dqv\" (UID: \"59dc6dd8-36e1-4d52-84fd-be50d1e1b398\") " pod="openstack/install-certs-openstack-openstack-cell1-d4dqv"
\"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59dc6dd8-36e1-4d52-84fd-be50d1e1b398-neutron-sriov-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-d4dqv\" (UID: \"59dc6dd8-36e1-4d52-84fd-be50d1e1b398\") " pod="openstack/install-certs-openstack-openstack-cell1-d4dqv" Jan 29 08:38:10 crc kubenswrapper[5017]: I0129 08:38:10.399827 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59dc6dd8-36e1-4d52-84fd-be50d1e1b398-bootstrap-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-d4dqv\" (UID: \"59dc6dd8-36e1-4d52-84fd-be50d1e1b398\") " pod="openstack/install-certs-openstack-openstack-cell1-d4dqv" Jan 29 08:38:10 crc kubenswrapper[5017]: I0129 08:38:10.399871 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4x2b\" (UniqueName: \"kubernetes.io/projected/59dc6dd8-36e1-4d52-84fd-be50d1e1b398-kube-api-access-j4x2b\") pod \"install-certs-openstack-openstack-cell1-d4dqv\" (UID: \"59dc6dd8-36e1-4d52-84fd-be50d1e1b398\") " pod="openstack/install-certs-openstack-openstack-cell1-d4dqv" Jan 29 08:38:10 crc kubenswrapper[5017]: I0129 08:38:10.399911 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/59dc6dd8-36e1-4d52-84fd-be50d1e1b398-ceph\") pod \"install-certs-openstack-openstack-cell1-d4dqv\" (UID: \"59dc6dd8-36e1-4d52-84fd-be50d1e1b398\") " pod="openstack/install-certs-openstack-openstack-cell1-d4dqv" Jan 29 08:38:10 crc kubenswrapper[5017]: I0129 08:38:10.399931 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59dc6dd8-36e1-4d52-84fd-be50d1e1b398-neutron-dhcp-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-d4dqv\" (UID: \"59dc6dd8-36e1-4d52-84fd-be50d1e1b398\") " pod="openstack/install-certs-openstack-openstack-cell1-d4dqv" Jan 29 08:38:10 crc kubenswrapper[5017]: I0129 08:38:10.399979 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59dc6dd8-36e1-4d52-84fd-be50d1e1b398-ovn-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-d4dqv\" (UID: \"59dc6dd8-36e1-4d52-84fd-be50d1e1b398\") " pod="openstack/install-certs-openstack-openstack-cell1-d4dqv" Jan 29 08:38:10 crc kubenswrapper[5017]: I0129 08:38:10.400025 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/59dc6dd8-36e1-4d52-84fd-be50d1e1b398-ssh-key-openstack-cell1\") pod \"install-certs-openstack-openstack-cell1-d4dqv\" (UID: \"59dc6dd8-36e1-4d52-84fd-be50d1e1b398\") " pod="openstack/install-certs-openstack-openstack-cell1-d4dqv" Jan 29 08:38:10 crc kubenswrapper[5017]: I0129 08:38:10.400045 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59dc6dd8-36e1-4d52-84fd-be50d1e1b398-telemetry-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-d4dqv\" (UID: \"59dc6dd8-36e1-4d52-84fd-be50d1e1b398\") " pod="openstack/install-certs-openstack-openstack-cell1-d4dqv" Jan 29 08:38:10 crc kubenswrapper[5017]: I0129 08:38:10.400081 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/59dc6dd8-36e1-4d52-84fd-be50d1e1b398-inventory\") pod \"install-certs-openstack-openstack-cell1-d4dqv\" (UID: \"59dc6dd8-36e1-4d52-84fd-be50d1e1b398\") " pod="openstack/install-certs-openstack-openstack-cell1-d4dqv" Jan 29 08:38:10 crc kubenswrapper[5017]: I0129 08:38:10.400108 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59dc6dd8-36e1-4d52-84fd-be50d1e1b398-nova-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-d4dqv\" (UID: \"59dc6dd8-36e1-4d52-84fd-be50d1e1b398\") " pod="openstack/install-certs-openstack-openstack-cell1-d4dqv" Jan 29 08:38:10 crc kubenswrapper[5017]: I0129 08:38:10.400137 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59dc6dd8-36e1-4d52-84fd-be50d1e1b398-neutron-metadata-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-d4dqv\" (UID: \"59dc6dd8-36e1-4d52-84fd-be50d1e1b398\") " pod="openstack/install-certs-openstack-openstack-cell1-d4dqv" Jan 29 08:38:10 crc kubenswrapper[5017]: I0129 08:38:10.404600 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/59dc6dd8-36e1-4d52-84fd-be50d1e1b398-ceph\") pod \"install-certs-openstack-openstack-cell1-d4dqv\" (UID: \"59dc6dd8-36e1-4d52-84fd-be50d1e1b398\") " pod="openstack/install-certs-openstack-openstack-cell1-d4dqv" Jan 29 08:38:10 crc kubenswrapper[5017]: I0129 08:38:10.405601 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59dc6dd8-36e1-4d52-84fd-be50d1e1b398-neutron-metadata-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-d4dqv\" (UID: \"59dc6dd8-36e1-4d52-84fd-be50d1e1b398\") " pod="openstack/install-certs-openstack-openstack-cell1-d4dqv" Jan 29 08:38:10 crc kubenswrapper[5017]: I0129 08:38:10.405795 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59dc6dd8-36e1-4d52-84fd-be50d1e1b398-bootstrap-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-d4dqv\" (UID: \"59dc6dd8-36e1-4d52-84fd-be50d1e1b398\") " pod="openstack/install-certs-openstack-openstack-cell1-d4dqv" Jan 29 08:38:10 crc kubenswrapper[5017]: I0129 08:38:10.406153 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59dc6dd8-36e1-4d52-84fd-be50d1e1b398-ovn-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-d4dqv\" (UID: \"59dc6dd8-36e1-4d52-84fd-be50d1e1b398\") " pod="openstack/install-certs-openstack-openstack-cell1-d4dqv" Jan 29 08:38:10 crc kubenswrapper[5017]: I0129 08:38:10.406308 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/59dc6dd8-36e1-4d52-84fd-be50d1e1b398-inventory\") pod \"install-certs-openstack-openstack-cell1-d4dqv\" (UID: \"59dc6dd8-36e1-4d52-84fd-be50d1e1b398\") " pod="openstack/install-certs-openstack-openstack-cell1-d4dqv" Jan 29 08:38:10 crc kubenswrapper[5017]: I0129 08:38:10.406500 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59dc6dd8-36e1-4d52-84fd-be50d1e1b398-telemetry-combined-ca-bundle\") pod 
\"install-certs-openstack-openstack-cell1-d4dqv\" (UID: \"59dc6dd8-36e1-4d52-84fd-be50d1e1b398\") " pod="openstack/install-certs-openstack-openstack-cell1-d4dqv" Jan 29 08:38:10 crc kubenswrapper[5017]: I0129 08:38:10.406774 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/59dc6dd8-36e1-4d52-84fd-be50d1e1b398-ssh-key-openstack-cell1\") pod \"install-certs-openstack-openstack-cell1-d4dqv\" (UID: \"59dc6dd8-36e1-4d52-84fd-be50d1e1b398\") " pod="openstack/install-certs-openstack-openstack-cell1-d4dqv" Jan 29 08:38:10 crc kubenswrapper[5017]: I0129 08:38:10.407339 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59dc6dd8-36e1-4d52-84fd-be50d1e1b398-nova-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-d4dqv\" (UID: \"59dc6dd8-36e1-4d52-84fd-be50d1e1b398\") " pod="openstack/install-certs-openstack-openstack-cell1-d4dqv" Jan 29 08:38:10 crc kubenswrapper[5017]: I0129 08:38:10.407543 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59dc6dd8-36e1-4d52-84fd-be50d1e1b398-libvirt-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-d4dqv\" (UID: \"59dc6dd8-36e1-4d52-84fd-be50d1e1b398\") " pod="openstack/install-certs-openstack-openstack-cell1-d4dqv" Jan 29 08:38:10 crc kubenswrapper[5017]: I0129 08:38:10.408129 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59dc6dd8-36e1-4d52-84fd-be50d1e1b398-neutron-sriov-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-d4dqv\" (UID: \"59dc6dd8-36e1-4d52-84fd-be50d1e1b398\") " pod="openstack/install-certs-openstack-openstack-cell1-d4dqv" Jan 29 08:38:10 crc kubenswrapper[5017]: I0129 08:38:10.408305 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59dc6dd8-36e1-4d52-84fd-be50d1e1b398-neutron-dhcp-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-d4dqv\" (UID: \"59dc6dd8-36e1-4d52-84fd-be50d1e1b398\") " pod="openstack/install-certs-openstack-openstack-cell1-d4dqv" Jan 29 08:38:10 crc kubenswrapper[5017]: I0129 08:38:10.420542 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4x2b\" (UniqueName: \"kubernetes.io/projected/59dc6dd8-36e1-4d52-84fd-be50d1e1b398-kube-api-access-j4x2b\") pod \"install-certs-openstack-openstack-cell1-d4dqv\" (UID: \"59dc6dd8-36e1-4d52-84fd-be50d1e1b398\") " pod="openstack/install-certs-openstack-openstack-cell1-d4dqv" Jan 29 08:38:10 crc kubenswrapper[5017]: I0129 08:38:10.552057 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-openstack-openstack-cell1-d4dqv" Jan 29 08:38:11 crc kubenswrapper[5017]: I0129 08:38:11.158327 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-openstack-openstack-cell1-d4dqv"] Jan 29 08:38:11 crc kubenswrapper[5017]: W0129 08:38:11.162377 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod59dc6dd8_36e1_4d52_84fd_be50d1e1b398.slice/crio-e50066bd19ad78b45b333a842631c5bd068371d6605a0b03ab5db9f4ba3eae3d WatchSource:0}: Error finding container e50066bd19ad78b45b333a842631c5bd068371d6605a0b03ab5db9f4ba3eae3d: Status 404 returned error can't find the container with id e50066bd19ad78b45b333a842631c5bd068371d6605a0b03ab5db9f4ba3eae3d Jan 29 08:38:12 crc kubenswrapper[5017]: I0129 08:38:12.087667 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-openstack-openstack-cell1-d4dqv" event={"ID":"59dc6dd8-36e1-4d52-84fd-be50d1e1b398","Type":"ContainerStarted","Data":"de293a1ee319f1802b37c584e8a618e5a18805caab19fe76bc0248b3411746e4"} Jan 29 08:38:12 crc kubenswrapper[5017]: I0129 08:38:12.088526 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-openstack-openstack-cell1-d4dqv" event={"ID":"59dc6dd8-36e1-4d52-84fd-be50d1e1b398","Type":"ContainerStarted","Data":"e50066bd19ad78b45b333a842631c5bd068371d6605a0b03ab5db9f4ba3eae3d"} Jan 29 08:38:12 crc kubenswrapper[5017]: I0129 08:38:12.114936 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-openstack-openstack-cell1-d4dqv" podStartSLOduration=1.629426755 podStartE2EDuration="2.114906122s" podCreationTimestamp="2026-01-29 08:38:10 +0000 UTC" firstStartedPulling="2026-01-29 08:38:11.164452029 +0000 UTC m=+7377.538899639" lastFinishedPulling="2026-01-29 08:38:11.649931406 +0000 UTC m=+7378.024379006" observedRunningTime="2026-01-29 08:38:12.105749929 +0000 UTC m=+7378.480197539" watchObservedRunningTime="2026-01-29 08:38:12.114906122 +0000 UTC m=+7378.489353732" Jan 29 08:38:30 crc kubenswrapper[5017]: I0129 08:38:30.283563 5017 generic.go:334] "Generic (PLEG): container finished" podID="59dc6dd8-36e1-4d52-84fd-be50d1e1b398" containerID="de293a1ee319f1802b37c584e8a618e5a18805caab19fe76bc0248b3411746e4" exitCode=0 Jan 29 08:38:30 crc kubenswrapper[5017]: I0129 08:38:30.283636 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-openstack-openstack-cell1-d4dqv" event={"ID":"59dc6dd8-36e1-4d52-84fd-be50d1e1b398","Type":"ContainerDied","Data":"de293a1ee319f1802b37c584e8a618e5a18805caab19fe76bc0248b3411746e4"} Jan 29 08:38:31 crc kubenswrapper[5017]: I0129 08:38:31.763458 5017 util.go:48] "No ready sandbox for pod can be found. 
Jan 29 08:38:31 crc kubenswrapper[5017]: I0129 08:38:31.763458 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-openstack-openstack-cell1-d4dqv"
Jan 29 08:38:31 crc kubenswrapper[5017]: I0129 08:38:31.841130 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/59dc6dd8-36e1-4d52-84fd-be50d1e1b398-ssh-key-openstack-cell1\") pod \"59dc6dd8-36e1-4d52-84fd-be50d1e1b398\" (UID: \"59dc6dd8-36e1-4d52-84fd-be50d1e1b398\") "
Jan 29 08:38:31 crc kubenswrapper[5017]: I0129 08:38:31.841177 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59dc6dd8-36e1-4d52-84fd-be50d1e1b398-ovn-combined-ca-bundle\") pod \"59dc6dd8-36e1-4d52-84fd-be50d1e1b398\" (UID: \"59dc6dd8-36e1-4d52-84fd-be50d1e1b398\") "
Jan 29 08:38:31 crc kubenswrapper[5017]: I0129 08:38:31.841262 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59dc6dd8-36e1-4d52-84fd-be50d1e1b398-nova-combined-ca-bundle\") pod \"59dc6dd8-36e1-4d52-84fd-be50d1e1b398\" (UID: \"59dc6dd8-36e1-4d52-84fd-be50d1e1b398\") "
Jan 29 08:38:31 crc kubenswrapper[5017]: I0129 08:38:31.841313 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59dc6dd8-36e1-4d52-84fd-be50d1e1b398-bootstrap-combined-ca-bundle\") pod \"59dc6dd8-36e1-4d52-84fd-be50d1e1b398\" (UID: \"59dc6dd8-36e1-4d52-84fd-be50d1e1b398\") "
Jan 29 08:38:31 crc kubenswrapper[5017]: I0129 08:38:31.841384 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j4x2b\" (UniqueName: \"kubernetes.io/projected/59dc6dd8-36e1-4d52-84fd-be50d1e1b398-kube-api-access-j4x2b\") pod \"59dc6dd8-36e1-4d52-84fd-be50d1e1b398\" (UID: \"59dc6dd8-36e1-4d52-84fd-be50d1e1b398\") "
Jan 29 08:38:31 crc kubenswrapper[5017]: I0129 08:38:31.841402 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59dc6dd8-36e1-4d52-84fd-be50d1e1b398-telemetry-combined-ca-bundle\") pod \"59dc6dd8-36e1-4d52-84fd-be50d1e1b398\" (UID: \"59dc6dd8-36e1-4d52-84fd-be50d1e1b398\") "
Jan 29 08:38:31 crc kubenswrapper[5017]: I0129 08:38:31.841528 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59dc6dd8-36e1-4d52-84fd-be50d1e1b398-libvirt-combined-ca-bundle\") pod \"59dc6dd8-36e1-4d52-84fd-be50d1e1b398\" (UID: \"59dc6dd8-36e1-4d52-84fd-be50d1e1b398\") "
Jan 29 08:38:31 crc kubenswrapper[5017]: I0129 08:38:31.841589 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59dc6dd8-36e1-4d52-84fd-be50d1e1b398-neutron-dhcp-combined-ca-bundle\") pod \"59dc6dd8-36e1-4d52-84fd-be50d1e1b398\" (UID: \"59dc6dd8-36e1-4d52-84fd-be50d1e1b398\") "
Jan 29 08:38:31 crc kubenswrapper[5017]: I0129 08:38:31.841655 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/59dc6dd8-36e1-4d52-84fd-be50d1e1b398-inventory\") pod \"59dc6dd8-36e1-4d52-84fd-be50d1e1b398\" (UID: \"59dc6dd8-36e1-4d52-84fd-be50d1e1b398\") "
Jan 29 08:38:31 crc kubenswrapper[5017]: I0129 08:38:31.841680 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59dc6dd8-36e1-4d52-84fd-be50d1e1b398-neutron-metadata-combined-ca-bundle\") pod \"59dc6dd8-36e1-4d52-84fd-be50d1e1b398\" (UID: \"59dc6dd8-36e1-4d52-84fd-be50d1e1b398\") "
Jan 29 08:38:31 crc kubenswrapper[5017]: I0129 08:38:31.841707 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/59dc6dd8-36e1-4d52-84fd-be50d1e1b398-ceph\") pod \"59dc6dd8-36e1-4d52-84fd-be50d1e1b398\" (UID: \"59dc6dd8-36e1-4d52-84fd-be50d1e1b398\") "
Jan 29 08:38:31 crc kubenswrapper[5017]: I0129 08:38:31.841735 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59dc6dd8-36e1-4d52-84fd-be50d1e1b398-neutron-sriov-combined-ca-bundle\") pod \"59dc6dd8-36e1-4d52-84fd-be50d1e1b398\" (UID: \"59dc6dd8-36e1-4d52-84fd-be50d1e1b398\") "
Jan 29 08:38:31 crc kubenswrapper[5017]: I0129 08:38:31.849843 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59dc6dd8-36e1-4d52-84fd-be50d1e1b398-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "59dc6dd8-36e1-4d52-84fd-be50d1e1b398" (UID: "59dc6dd8-36e1-4d52-84fd-be50d1e1b398"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 08:38:31 crc kubenswrapper[5017]: I0129 08:38:31.849858 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59dc6dd8-36e1-4d52-84fd-be50d1e1b398-kube-api-access-j4x2b" (OuterVolumeSpecName: "kube-api-access-j4x2b") pod "59dc6dd8-36e1-4d52-84fd-be50d1e1b398" (UID: "59dc6dd8-36e1-4d52-84fd-be50d1e1b398"). InnerVolumeSpecName "kube-api-access-j4x2b". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 08:38:31 crc kubenswrapper[5017]: I0129 08:38:31.849887 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59dc6dd8-36e1-4d52-84fd-be50d1e1b398-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "59dc6dd8-36e1-4d52-84fd-be50d1e1b398" (UID: "59dc6dd8-36e1-4d52-84fd-be50d1e1b398"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 08:38:31 crc kubenswrapper[5017]: I0129 08:38:31.849980 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59dc6dd8-36e1-4d52-84fd-be50d1e1b398-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "59dc6dd8-36e1-4d52-84fd-be50d1e1b398" (UID: "59dc6dd8-36e1-4d52-84fd-be50d1e1b398"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:38:31 crc kubenswrapper[5017]: I0129 08:38:31.850841 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59dc6dd8-36e1-4d52-84fd-be50d1e1b398-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "59dc6dd8-36e1-4d52-84fd-be50d1e1b398" (UID: "59dc6dd8-36e1-4d52-84fd-be50d1e1b398"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:38:31 crc kubenswrapper[5017]: I0129 08:38:31.851867 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59dc6dd8-36e1-4d52-84fd-be50d1e1b398-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "59dc6dd8-36e1-4d52-84fd-be50d1e1b398" (UID: "59dc6dd8-36e1-4d52-84fd-be50d1e1b398"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:38:31 crc kubenswrapper[5017]: I0129 08:38:31.852039 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59dc6dd8-36e1-4d52-84fd-be50d1e1b398-ceph" (OuterVolumeSpecName: "ceph") pod "59dc6dd8-36e1-4d52-84fd-be50d1e1b398" (UID: "59dc6dd8-36e1-4d52-84fd-be50d1e1b398"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:38:31 crc kubenswrapper[5017]: I0129 08:38:31.853089 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59dc6dd8-36e1-4d52-84fd-be50d1e1b398-neutron-sriov-combined-ca-bundle" (OuterVolumeSpecName: "neutron-sriov-combined-ca-bundle") pod "59dc6dd8-36e1-4d52-84fd-be50d1e1b398" (UID: "59dc6dd8-36e1-4d52-84fd-be50d1e1b398"). InnerVolumeSpecName "neutron-sriov-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:38:31 crc kubenswrapper[5017]: I0129 08:38:31.854283 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59dc6dd8-36e1-4d52-84fd-be50d1e1b398-neutron-dhcp-combined-ca-bundle" (OuterVolumeSpecName: "neutron-dhcp-combined-ca-bundle") pod "59dc6dd8-36e1-4d52-84fd-be50d1e1b398" (UID: "59dc6dd8-36e1-4d52-84fd-be50d1e1b398"). InnerVolumeSpecName "neutron-dhcp-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:38:31 crc kubenswrapper[5017]: I0129 08:38:31.874223 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59dc6dd8-36e1-4d52-84fd-be50d1e1b398-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "59dc6dd8-36e1-4d52-84fd-be50d1e1b398" (UID: "59dc6dd8-36e1-4d52-84fd-be50d1e1b398"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:38:31 crc kubenswrapper[5017]: I0129 08:38:31.876493 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59dc6dd8-36e1-4d52-84fd-be50d1e1b398-inventory" (OuterVolumeSpecName: "inventory") pod "59dc6dd8-36e1-4d52-84fd-be50d1e1b398" (UID: "59dc6dd8-36e1-4d52-84fd-be50d1e1b398"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:38:31 crc kubenswrapper[5017]: I0129 08:38:31.944463 5017 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/59dc6dd8-36e1-4d52-84fd-be50d1e1b398-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 29 08:38:31 crc kubenswrapper[5017]: I0129 08:38:31.944724 5017 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59dc6dd8-36e1-4d52-84fd-be50d1e1b398-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:38:31 crc kubenswrapper[5017]: I0129 08:38:31.944745 5017 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59dc6dd8-36e1-4d52-84fd-be50d1e1b398-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:38:31 crc kubenswrapper[5017]: I0129 08:38:31.944759 5017 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59dc6dd8-36e1-4d52-84fd-be50d1e1b398-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:38:31 crc kubenswrapper[5017]: I0129 08:38:31.944774 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j4x2b\" (UniqueName: \"kubernetes.io/projected/59dc6dd8-36e1-4d52-84fd-be50d1e1b398-kube-api-access-j4x2b\") on node \"crc\" DevicePath \"\"" Jan 29 08:38:31 crc kubenswrapper[5017]: I0129 08:38:31.944789 5017 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59dc6dd8-36e1-4d52-84fd-be50d1e1b398-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:38:31 crc kubenswrapper[5017]: I0129 08:38:31.944801 5017 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59dc6dd8-36e1-4d52-84fd-be50d1e1b398-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:38:31 crc kubenswrapper[5017]: I0129 08:38:31.944816 5017 reconciler_common.go:293] "Volume detached for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59dc6dd8-36e1-4d52-84fd-be50d1e1b398-neutron-dhcp-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:38:31 crc kubenswrapper[5017]: I0129 08:38:31.944829 5017 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/59dc6dd8-36e1-4d52-84fd-be50d1e1b398-inventory\") on node \"crc\" DevicePath \"\"" Jan 29 08:38:31 crc kubenswrapper[5017]: I0129 08:38:31.944842 5017 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59dc6dd8-36e1-4d52-84fd-be50d1e1b398-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:38:31 crc kubenswrapper[5017]: I0129 08:38:31.944853 5017 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/59dc6dd8-36e1-4d52-84fd-be50d1e1b398-ceph\") on node \"crc\" DevicePath \"\"" Jan 29 08:38:31 crc kubenswrapper[5017]: I0129 08:38:31.944864 5017 reconciler_common.go:293] "Volume detached for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59dc6dd8-36e1-4d52-84fd-be50d1e1b398-neutron-sriov-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:38:32 crc kubenswrapper[5017]: I0129 08:38:32.306059 5017 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/install-certs-openstack-openstack-cell1-d4dqv" event={"ID":"59dc6dd8-36e1-4d52-84fd-be50d1e1b398","Type":"ContainerDied","Data":"e50066bd19ad78b45b333a842631c5bd068371d6605a0b03ab5db9f4ba3eae3d"} Jan 29 08:38:32 crc kubenswrapper[5017]: I0129 08:38:32.306120 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e50066bd19ad78b45b333a842631c5bd068371d6605a0b03ab5db9f4ba3eae3d" Jan 29 08:38:32 crc kubenswrapper[5017]: I0129 08:38:32.306150 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-openstack-openstack-cell1-d4dqv" Jan 29 08:38:32 crc kubenswrapper[5017]: I0129 08:38:32.410261 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-client-openstack-openstack-cell1-g4vhf"] Jan 29 08:38:32 crc kubenswrapper[5017]: E0129 08:38:32.410750 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59dc6dd8-36e1-4d52-84fd-be50d1e1b398" containerName="install-certs-openstack-openstack-cell1" Jan 29 08:38:32 crc kubenswrapper[5017]: I0129 08:38:32.410768 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="59dc6dd8-36e1-4d52-84fd-be50d1e1b398" containerName="install-certs-openstack-openstack-cell1" Jan 29 08:38:32 crc kubenswrapper[5017]: I0129 08:38:32.411050 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="59dc6dd8-36e1-4d52-84fd-be50d1e1b398" containerName="install-certs-openstack-openstack-cell1" Jan 29 08:38:32 crc kubenswrapper[5017]: I0129 08:38:32.411819 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-client-openstack-openstack-cell1-g4vhf" Jan 29 08:38:32 crc kubenswrapper[5017]: I0129 08:38:32.413598 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-8fskm" Jan 29 08:38:32 crc kubenswrapper[5017]: I0129 08:38:32.413858 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 08:38:32 crc kubenswrapper[5017]: I0129 08:38:32.414402 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Jan 29 08:38:32 crc kubenswrapper[5017]: I0129 08:38:32.419828 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Jan 29 08:38:32 crc kubenswrapper[5017]: I0129 08:38:32.426826 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-client-openstack-openstack-cell1-g4vhf"] Jan 29 08:38:32 crc kubenswrapper[5017]: I0129 08:38:32.563371 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/f6f1a78c-d351-4293-bb4c-2e89392cce92-ssh-key-openstack-cell1\") pod \"ceph-client-openstack-openstack-cell1-g4vhf\" (UID: \"f6f1a78c-d351-4293-bb4c-2e89392cce92\") " pod="openstack/ceph-client-openstack-openstack-cell1-g4vhf" Jan 29 08:38:32 crc kubenswrapper[5017]: I0129 08:38:32.563529 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f6f1a78c-d351-4293-bb4c-2e89392cce92-inventory\") pod \"ceph-client-openstack-openstack-cell1-g4vhf\" (UID: \"f6f1a78c-d351-4293-bb4c-2e89392cce92\") " pod="openstack/ceph-client-openstack-openstack-cell1-g4vhf" Jan 29 08:38:32 crc kubenswrapper[5017]: I0129 08:38:32.563593 5017 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-st4hp\" (UniqueName: \"kubernetes.io/projected/f6f1a78c-d351-4293-bb4c-2e89392cce92-kube-api-access-st4hp\") pod \"ceph-client-openstack-openstack-cell1-g4vhf\" (UID: \"f6f1a78c-d351-4293-bb4c-2e89392cce92\") " pod="openstack/ceph-client-openstack-openstack-cell1-g4vhf" Jan 29 08:38:32 crc kubenswrapper[5017]: I0129 08:38:32.563640 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f6f1a78c-d351-4293-bb4c-2e89392cce92-ceph\") pod \"ceph-client-openstack-openstack-cell1-g4vhf\" (UID: \"f6f1a78c-d351-4293-bb4c-2e89392cce92\") " pod="openstack/ceph-client-openstack-openstack-cell1-g4vhf" Jan 29 08:38:32 crc kubenswrapper[5017]: I0129 08:38:32.666166 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f6f1a78c-d351-4293-bb4c-2e89392cce92-ceph\") pod \"ceph-client-openstack-openstack-cell1-g4vhf\" (UID: \"f6f1a78c-d351-4293-bb4c-2e89392cce92\") " pod="openstack/ceph-client-openstack-openstack-cell1-g4vhf" Jan 29 08:38:32 crc kubenswrapper[5017]: I0129 08:38:32.666378 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/f6f1a78c-d351-4293-bb4c-2e89392cce92-ssh-key-openstack-cell1\") pod \"ceph-client-openstack-openstack-cell1-g4vhf\" (UID: \"f6f1a78c-d351-4293-bb4c-2e89392cce92\") " pod="openstack/ceph-client-openstack-openstack-cell1-g4vhf" Jan 29 08:38:32 crc kubenswrapper[5017]: I0129 08:38:32.666647 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f6f1a78c-d351-4293-bb4c-2e89392cce92-inventory\") pod \"ceph-client-openstack-openstack-cell1-g4vhf\" (UID: \"f6f1a78c-d351-4293-bb4c-2e89392cce92\") " pod="openstack/ceph-client-openstack-openstack-cell1-g4vhf" Jan 29 08:38:32 crc kubenswrapper[5017]: I0129 08:38:32.666787 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-st4hp\" (UniqueName: \"kubernetes.io/projected/f6f1a78c-d351-4293-bb4c-2e89392cce92-kube-api-access-st4hp\") pod \"ceph-client-openstack-openstack-cell1-g4vhf\" (UID: \"f6f1a78c-d351-4293-bb4c-2e89392cce92\") " pod="openstack/ceph-client-openstack-openstack-cell1-g4vhf" Jan 29 08:38:32 crc kubenswrapper[5017]: I0129 08:38:32.674501 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/f6f1a78c-d351-4293-bb4c-2e89392cce92-ssh-key-openstack-cell1\") pod \"ceph-client-openstack-openstack-cell1-g4vhf\" (UID: \"f6f1a78c-d351-4293-bb4c-2e89392cce92\") " pod="openstack/ceph-client-openstack-openstack-cell1-g4vhf" Jan 29 08:38:32 crc kubenswrapper[5017]: I0129 08:38:32.679054 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f6f1a78c-d351-4293-bb4c-2e89392cce92-inventory\") pod \"ceph-client-openstack-openstack-cell1-g4vhf\" (UID: \"f6f1a78c-d351-4293-bb4c-2e89392cce92\") " pod="openstack/ceph-client-openstack-openstack-cell1-g4vhf" Jan 29 08:38:32 crc kubenswrapper[5017]: I0129 08:38:32.683738 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f6f1a78c-d351-4293-bb4c-2e89392cce92-ceph\") pod 
\"ceph-client-openstack-openstack-cell1-g4vhf\" (UID: \"f6f1a78c-d351-4293-bb4c-2e89392cce92\") " pod="openstack/ceph-client-openstack-openstack-cell1-g4vhf" Jan 29 08:38:32 crc kubenswrapper[5017]: I0129 08:38:32.686497 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-st4hp\" (UniqueName: \"kubernetes.io/projected/f6f1a78c-d351-4293-bb4c-2e89392cce92-kube-api-access-st4hp\") pod \"ceph-client-openstack-openstack-cell1-g4vhf\" (UID: \"f6f1a78c-d351-4293-bb4c-2e89392cce92\") " pod="openstack/ceph-client-openstack-openstack-cell1-g4vhf" Jan 29 08:38:32 crc kubenswrapper[5017]: I0129 08:38:32.734025 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-client-openstack-openstack-cell1-g4vhf" Jan 29 08:38:33 crc kubenswrapper[5017]: I0129 08:38:33.267017 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-client-openstack-openstack-cell1-g4vhf"] Jan 29 08:38:33 crc kubenswrapper[5017]: I0129 08:38:33.318607 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-openstack-openstack-cell1-g4vhf" event={"ID":"f6f1a78c-d351-4293-bb4c-2e89392cce92","Type":"ContainerStarted","Data":"424a8dd5bb5dfd80bd63ac80038d3d2026dfb2657614d1cb2a8757890a39ef78"} Jan 29 08:38:34 crc kubenswrapper[5017]: I0129 08:38:34.334997 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-openstack-openstack-cell1-g4vhf" event={"ID":"f6f1a78c-d351-4293-bb4c-2e89392cce92","Type":"ContainerStarted","Data":"dd13b78582818168e2e7cdf9459ed92f5f503eb3830bc3953f4110fc2c71d8a0"} Jan 29 08:38:34 crc kubenswrapper[5017]: I0129 08:38:34.384206 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-client-openstack-openstack-cell1-g4vhf" podStartSLOduration=1.949736929 podStartE2EDuration="2.38417602s" podCreationTimestamp="2026-01-29 08:38:32 +0000 UTC" firstStartedPulling="2026-01-29 08:38:33.272948834 +0000 UTC m=+7399.647396444" lastFinishedPulling="2026-01-29 08:38:33.707387925 +0000 UTC m=+7400.081835535" observedRunningTime="2026-01-29 08:38:34.374758579 +0000 UTC m=+7400.749206209" watchObservedRunningTime="2026-01-29 08:38:34.38417602 +0000 UTC m=+7400.758623630" Jan 29 08:38:38 crc kubenswrapper[5017]: I0129 08:38:38.342626 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-g6s57"] Jan 29 08:38:38 crc kubenswrapper[5017]: I0129 08:38:38.347842 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-g6s57" Jan 29 08:38:38 crc kubenswrapper[5017]: I0129 08:38:38.357461 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-g6s57"] Jan 29 08:38:38 crc kubenswrapper[5017]: I0129 08:38:38.524459 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5363c6b8-2681-4be1-b024-583058bf95fe-catalog-content\") pod \"community-operators-g6s57\" (UID: \"5363c6b8-2681-4be1-b024-583058bf95fe\") " pod="openshift-marketplace/community-operators-g6s57" Jan 29 08:38:38 crc kubenswrapper[5017]: I0129 08:38:38.525181 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5363c6b8-2681-4be1-b024-583058bf95fe-utilities\") pod \"community-operators-g6s57\" (UID: \"5363c6b8-2681-4be1-b024-583058bf95fe\") " pod="openshift-marketplace/community-operators-g6s57" Jan 29 08:38:38 crc kubenswrapper[5017]: I0129 08:38:38.525230 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcsjp\" (UniqueName: \"kubernetes.io/projected/5363c6b8-2681-4be1-b024-583058bf95fe-kube-api-access-dcsjp\") pod \"community-operators-g6s57\" (UID: \"5363c6b8-2681-4be1-b024-583058bf95fe\") " pod="openshift-marketplace/community-operators-g6s57" Jan 29 08:38:38 crc kubenswrapper[5017]: I0129 08:38:38.627460 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5363c6b8-2681-4be1-b024-583058bf95fe-utilities\") pod \"community-operators-g6s57\" (UID: \"5363c6b8-2681-4be1-b024-583058bf95fe\") " pod="openshift-marketplace/community-operators-g6s57" Jan 29 08:38:38 crc kubenswrapper[5017]: I0129 08:38:38.627847 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dcsjp\" (UniqueName: \"kubernetes.io/projected/5363c6b8-2681-4be1-b024-583058bf95fe-kube-api-access-dcsjp\") pod \"community-operators-g6s57\" (UID: \"5363c6b8-2681-4be1-b024-583058bf95fe\") " pod="openshift-marketplace/community-operators-g6s57" Jan 29 08:38:38 crc kubenswrapper[5017]: I0129 08:38:38.628516 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5363c6b8-2681-4be1-b024-583058bf95fe-catalog-content\") pod \"community-operators-g6s57\" (UID: \"5363c6b8-2681-4be1-b024-583058bf95fe\") " pod="openshift-marketplace/community-operators-g6s57" Jan 29 08:38:38 crc kubenswrapper[5017]: I0129 08:38:38.628253 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5363c6b8-2681-4be1-b024-583058bf95fe-utilities\") pod \"community-operators-g6s57\" (UID: \"5363c6b8-2681-4be1-b024-583058bf95fe\") " pod="openshift-marketplace/community-operators-g6s57" Jan 29 08:38:38 crc kubenswrapper[5017]: I0129 08:38:38.628976 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5363c6b8-2681-4be1-b024-583058bf95fe-catalog-content\") pod \"community-operators-g6s57\" (UID: \"5363c6b8-2681-4be1-b024-583058bf95fe\") " pod="openshift-marketplace/community-operators-g6s57" Jan 29 08:38:38 crc kubenswrapper[5017]: I0129 08:38:38.659348 5017 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-dcsjp\" (UniqueName: \"kubernetes.io/projected/5363c6b8-2681-4be1-b024-583058bf95fe-kube-api-access-dcsjp\") pod \"community-operators-g6s57\" (UID: \"5363c6b8-2681-4be1-b024-583058bf95fe\") " pod="openshift-marketplace/community-operators-g6s57" Jan 29 08:38:38 crc kubenswrapper[5017]: I0129 08:38:38.676692 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-g6s57" Jan 29 08:38:39 crc kubenswrapper[5017]: I0129 08:38:39.226412 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-g6s57"] Jan 29 08:38:39 crc kubenswrapper[5017]: I0129 08:38:39.394351 5017 generic.go:334] "Generic (PLEG): container finished" podID="f6f1a78c-d351-4293-bb4c-2e89392cce92" containerID="dd13b78582818168e2e7cdf9459ed92f5f503eb3830bc3953f4110fc2c71d8a0" exitCode=0 Jan 29 08:38:39 crc kubenswrapper[5017]: I0129 08:38:39.394439 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-openstack-openstack-cell1-g4vhf" event={"ID":"f6f1a78c-d351-4293-bb4c-2e89392cce92","Type":"ContainerDied","Data":"dd13b78582818168e2e7cdf9459ed92f5f503eb3830bc3953f4110fc2c71d8a0"} Jan 29 08:38:39 crc kubenswrapper[5017]: I0129 08:38:39.397105 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g6s57" event={"ID":"5363c6b8-2681-4be1-b024-583058bf95fe","Type":"ContainerStarted","Data":"bd9b28ab714e905787d8eaccf40e002afdf86b04c394f7f9673fc8f30f211390"} Jan 29 08:38:40 crc kubenswrapper[5017]: I0129 08:38:40.410421 5017 generic.go:334] "Generic (PLEG): container finished" podID="5363c6b8-2681-4be1-b024-583058bf95fe" containerID="89be9fc1103c645e047a33dbbe86a0dc457b58805ff67362e24f447cf8429d7d" exitCode=0 Jan 29 08:38:40 crc kubenswrapper[5017]: I0129 08:38:40.411172 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g6s57" event={"ID":"5363c6b8-2681-4be1-b024-583058bf95fe","Type":"ContainerDied","Data":"89be9fc1103c645e047a33dbbe86a0dc457b58805ff67362e24f447cf8429d7d"} Jan 29 08:38:40 crc kubenswrapper[5017]: I0129 08:38:40.869155 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-client-openstack-openstack-cell1-g4vhf" Jan 29 08:38:40 crc kubenswrapper[5017]: I0129 08:38:40.982570 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-st4hp\" (UniqueName: \"kubernetes.io/projected/f6f1a78c-d351-4293-bb4c-2e89392cce92-kube-api-access-st4hp\") pod \"f6f1a78c-d351-4293-bb4c-2e89392cce92\" (UID: \"f6f1a78c-d351-4293-bb4c-2e89392cce92\") " Jan 29 08:38:40 crc kubenswrapper[5017]: I0129 08:38:40.982810 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f6f1a78c-d351-4293-bb4c-2e89392cce92-inventory\") pod \"f6f1a78c-d351-4293-bb4c-2e89392cce92\" (UID: \"f6f1a78c-d351-4293-bb4c-2e89392cce92\") " Jan 29 08:38:40 crc kubenswrapper[5017]: I0129 08:38:40.982916 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/f6f1a78c-d351-4293-bb4c-2e89392cce92-ssh-key-openstack-cell1\") pod \"f6f1a78c-d351-4293-bb4c-2e89392cce92\" (UID: \"f6f1a78c-d351-4293-bb4c-2e89392cce92\") " Jan 29 08:38:40 crc kubenswrapper[5017]: I0129 08:38:40.983054 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f6f1a78c-d351-4293-bb4c-2e89392cce92-ceph\") pod \"f6f1a78c-d351-4293-bb4c-2e89392cce92\" (UID: \"f6f1a78c-d351-4293-bb4c-2e89392cce92\") " Jan 29 08:38:40 crc kubenswrapper[5017]: I0129 08:38:40.989720 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6f1a78c-d351-4293-bb4c-2e89392cce92-ceph" (OuterVolumeSpecName: "ceph") pod "f6f1a78c-d351-4293-bb4c-2e89392cce92" (UID: "f6f1a78c-d351-4293-bb4c-2e89392cce92"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:38:40 crc kubenswrapper[5017]: I0129 08:38:40.990052 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6f1a78c-d351-4293-bb4c-2e89392cce92-kube-api-access-st4hp" (OuterVolumeSpecName: "kube-api-access-st4hp") pod "f6f1a78c-d351-4293-bb4c-2e89392cce92" (UID: "f6f1a78c-d351-4293-bb4c-2e89392cce92"). InnerVolumeSpecName "kube-api-access-st4hp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:38:41 crc kubenswrapper[5017]: I0129 08:38:41.014864 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6f1a78c-d351-4293-bb4c-2e89392cce92-inventory" (OuterVolumeSpecName: "inventory") pod "f6f1a78c-d351-4293-bb4c-2e89392cce92" (UID: "f6f1a78c-d351-4293-bb4c-2e89392cce92"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:38:41 crc kubenswrapper[5017]: I0129 08:38:41.015308 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6f1a78c-d351-4293-bb4c-2e89392cce92-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "f6f1a78c-d351-4293-bb4c-2e89392cce92" (UID: "f6f1a78c-d351-4293-bb4c-2e89392cce92"). InnerVolumeSpecName "ssh-key-openstack-cell1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:38:41 crc kubenswrapper[5017]: I0129 08:38:41.085812 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-st4hp\" (UniqueName: \"kubernetes.io/projected/f6f1a78c-d351-4293-bb4c-2e89392cce92-kube-api-access-st4hp\") on node \"crc\" DevicePath \"\"" Jan 29 08:38:41 crc kubenswrapper[5017]: I0129 08:38:41.085856 5017 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f6f1a78c-d351-4293-bb4c-2e89392cce92-inventory\") on node \"crc\" DevicePath \"\"" Jan 29 08:38:41 crc kubenswrapper[5017]: I0129 08:38:41.085867 5017 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/f6f1a78c-d351-4293-bb4c-2e89392cce92-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 29 08:38:41 crc kubenswrapper[5017]: I0129 08:38:41.085879 5017 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f6f1a78c-d351-4293-bb4c-2e89392cce92-ceph\") on node \"crc\" DevicePath \"\"" Jan 29 08:38:41 crc kubenswrapper[5017]: I0129 08:38:41.422870 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-openstack-openstack-cell1-g4vhf" event={"ID":"f6f1a78c-d351-4293-bb4c-2e89392cce92","Type":"ContainerDied","Data":"424a8dd5bb5dfd80bd63ac80038d3d2026dfb2657614d1cb2a8757890a39ef78"} Jan 29 08:38:41 crc kubenswrapper[5017]: I0129 08:38:41.422925 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="424a8dd5bb5dfd80bd63ac80038d3d2026dfb2657614d1cb2a8757890a39ef78" Jan 29 08:38:41 crc kubenswrapper[5017]: I0129 08:38:41.423021 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-client-openstack-openstack-cell1-g4vhf" Jan 29 08:38:41 crc kubenswrapper[5017]: I0129 08:38:41.498941 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-openstack-openstack-cell1-k57sb"] Jan 29 08:38:41 crc kubenswrapper[5017]: E0129 08:38:41.499993 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6f1a78c-d351-4293-bb4c-2e89392cce92" containerName="ceph-client-openstack-openstack-cell1" Jan 29 08:38:41 crc kubenswrapper[5017]: I0129 08:38:41.500029 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6f1a78c-d351-4293-bb4c-2e89392cce92" containerName="ceph-client-openstack-openstack-cell1" Jan 29 08:38:41 crc kubenswrapper[5017]: I0129 08:38:41.500247 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6f1a78c-d351-4293-bb4c-2e89392cce92" containerName="ceph-client-openstack-openstack-cell1" Jan 29 08:38:41 crc kubenswrapper[5017]: I0129 08:38:41.501228 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-openstack-openstack-cell1-k57sb" Jan 29 08:38:41 crc kubenswrapper[5017]: I0129 08:38:41.503463 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-8fskm" Jan 29 08:38:41 crc kubenswrapper[5017]: I0129 08:38:41.504595 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Jan 29 08:38:41 crc kubenswrapper[5017]: I0129 08:38:41.504668 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Jan 29 08:38:41 crc kubenswrapper[5017]: I0129 08:38:41.504788 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Jan 29 08:38:41 crc kubenswrapper[5017]: I0129 08:38:41.504785 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 08:38:41 crc kubenswrapper[5017]: I0129 08:38:41.510556 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-openstack-openstack-cell1-k57sb"] Jan 29 08:38:41 crc kubenswrapper[5017]: I0129 08:38:41.597077 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d5e0be93-aeff-4f04-b902-e137a19e5585-inventory\") pod \"ovn-openstack-openstack-cell1-k57sb\" (UID: \"d5e0be93-aeff-4f04-b902-e137a19e5585\") " pod="openstack/ovn-openstack-openstack-cell1-k57sb" Jan 29 08:38:41 crc kubenswrapper[5017]: I0129 08:38:41.597176 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86h2k\" (UniqueName: \"kubernetes.io/projected/d5e0be93-aeff-4f04-b902-e137a19e5585-kube-api-access-86h2k\") pod \"ovn-openstack-openstack-cell1-k57sb\" (UID: \"d5e0be93-aeff-4f04-b902-e137a19e5585\") " pod="openstack/ovn-openstack-openstack-cell1-k57sb" Jan 29 08:38:41 crc kubenswrapper[5017]: I0129 08:38:41.597263 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5e0be93-aeff-4f04-b902-e137a19e5585-ovn-combined-ca-bundle\") pod \"ovn-openstack-openstack-cell1-k57sb\" (UID: \"d5e0be93-aeff-4f04-b902-e137a19e5585\") " pod="openstack/ovn-openstack-openstack-cell1-k57sb" Jan 29 08:38:41 crc kubenswrapper[5017]: I0129 08:38:41.597300 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d5e0be93-aeff-4f04-b902-e137a19e5585-ceph\") pod \"ovn-openstack-openstack-cell1-k57sb\" (UID: \"d5e0be93-aeff-4f04-b902-e137a19e5585\") " pod="openstack/ovn-openstack-openstack-cell1-k57sb" Jan 29 08:38:41 crc kubenswrapper[5017]: I0129 08:38:41.597382 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/d5e0be93-aeff-4f04-b902-e137a19e5585-ssh-key-openstack-cell1\") pod \"ovn-openstack-openstack-cell1-k57sb\" (UID: \"d5e0be93-aeff-4f04-b902-e137a19e5585\") " pod="openstack/ovn-openstack-openstack-cell1-k57sb" Jan 29 08:38:41 crc kubenswrapper[5017]: I0129 08:38:41.597508 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/d5e0be93-aeff-4f04-b902-e137a19e5585-ovncontroller-config-0\") pod 
\"ovn-openstack-openstack-cell1-k57sb\" (UID: \"d5e0be93-aeff-4f04-b902-e137a19e5585\") " pod="openstack/ovn-openstack-openstack-cell1-k57sb" Jan 29 08:38:41 crc kubenswrapper[5017]: I0129 08:38:41.700480 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/d5e0be93-aeff-4f04-b902-e137a19e5585-ovncontroller-config-0\") pod \"ovn-openstack-openstack-cell1-k57sb\" (UID: \"d5e0be93-aeff-4f04-b902-e137a19e5585\") " pod="openstack/ovn-openstack-openstack-cell1-k57sb" Jan 29 08:38:41 crc kubenswrapper[5017]: I0129 08:38:41.700638 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d5e0be93-aeff-4f04-b902-e137a19e5585-inventory\") pod \"ovn-openstack-openstack-cell1-k57sb\" (UID: \"d5e0be93-aeff-4f04-b902-e137a19e5585\") " pod="openstack/ovn-openstack-openstack-cell1-k57sb" Jan 29 08:38:41 crc kubenswrapper[5017]: I0129 08:38:41.700690 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86h2k\" (UniqueName: \"kubernetes.io/projected/d5e0be93-aeff-4f04-b902-e137a19e5585-kube-api-access-86h2k\") pod \"ovn-openstack-openstack-cell1-k57sb\" (UID: \"d5e0be93-aeff-4f04-b902-e137a19e5585\") " pod="openstack/ovn-openstack-openstack-cell1-k57sb" Jan 29 08:38:41 crc kubenswrapper[5017]: I0129 08:38:41.700785 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5e0be93-aeff-4f04-b902-e137a19e5585-ovn-combined-ca-bundle\") pod \"ovn-openstack-openstack-cell1-k57sb\" (UID: \"d5e0be93-aeff-4f04-b902-e137a19e5585\") " pod="openstack/ovn-openstack-openstack-cell1-k57sb" Jan 29 08:38:41 crc kubenswrapper[5017]: I0129 08:38:41.700820 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d5e0be93-aeff-4f04-b902-e137a19e5585-ceph\") pod \"ovn-openstack-openstack-cell1-k57sb\" (UID: \"d5e0be93-aeff-4f04-b902-e137a19e5585\") " pod="openstack/ovn-openstack-openstack-cell1-k57sb" Jan 29 08:38:41 crc kubenswrapper[5017]: I0129 08:38:41.700880 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/d5e0be93-aeff-4f04-b902-e137a19e5585-ssh-key-openstack-cell1\") pod \"ovn-openstack-openstack-cell1-k57sb\" (UID: \"d5e0be93-aeff-4f04-b902-e137a19e5585\") " pod="openstack/ovn-openstack-openstack-cell1-k57sb" Jan 29 08:38:41 crc kubenswrapper[5017]: I0129 08:38:41.701758 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/d5e0be93-aeff-4f04-b902-e137a19e5585-ovncontroller-config-0\") pod \"ovn-openstack-openstack-cell1-k57sb\" (UID: \"d5e0be93-aeff-4f04-b902-e137a19e5585\") " pod="openstack/ovn-openstack-openstack-cell1-k57sb" Jan 29 08:38:41 crc kubenswrapper[5017]: I0129 08:38:41.706567 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d5e0be93-aeff-4f04-b902-e137a19e5585-inventory\") pod \"ovn-openstack-openstack-cell1-k57sb\" (UID: \"d5e0be93-aeff-4f04-b902-e137a19e5585\") " pod="openstack/ovn-openstack-openstack-cell1-k57sb" Jan 29 08:38:41 crc kubenswrapper[5017]: I0129 08:38:41.707173 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d5e0be93-aeff-4f04-b902-e137a19e5585-ovn-combined-ca-bundle\") pod \"ovn-openstack-openstack-cell1-k57sb\" (UID: \"d5e0be93-aeff-4f04-b902-e137a19e5585\") " pod="openstack/ovn-openstack-openstack-cell1-k57sb" Jan 29 08:38:41 crc kubenswrapper[5017]: I0129 08:38:41.710163 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/d5e0be93-aeff-4f04-b902-e137a19e5585-ssh-key-openstack-cell1\") pod \"ovn-openstack-openstack-cell1-k57sb\" (UID: \"d5e0be93-aeff-4f04-b902-e137a19e5585\") " pod="openstack/ovn-openstack-openstack-cell1-k57sb" Jan 29 08:38:41 crc kubenswrapper[5017]: I0129 08:38:41.710223 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d5e0be93-aeff-4f04-b902-e137a19e5585-ceph\") pod \"ovn-openstack-openstack-cell1-k57sb\" (UID: \"d5e0be93-aeff-4f04-b902-e137a19e5585\") " pod="openstack/ovn-openstack-openstack-cell1-k57sb" Jan 29 08:38:41 crc kubenswrapper[5017]: I0129 08:38:41.718862 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86h2k\" (UniqueName: \"kubernetes.io/projected/d5e0be93-aeff-4f04-b902-e137a19e5585-kube-api-access-86h2k\") pod \"ovn-openstack-openstack-cell1-k57sb\" (UID: \"d5e0be93-aeff-4f04-b902-e137a19e5585\") " pod="openstack/ovn-openstack-openstack-cell1-k57sb" Jan 29 08:38:41 crc kubenswrapper[5017]: I0129 08:38:41.824801 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-openstack-openstack-cell1-k57sb" Jan 29 08:38:42 crc kubenswrapper[5017]: W0129 08:38:42.403196 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd5e0be93_aeff_4f04_b902_e137a19e5585.slice/crio-e7c1cd101cd1687f4a39f1c0d6339f1d55849d6611472da41b474b2b81749875 WatchSource:0}: Error finding container e7c1cd101cd1687f4a39f1c0d6339f1d55849d6611472da41b474b2b81749875: Status 404 returned error can't find the container with id e7c1cd101cd1687f4a39f1c0d6339f1d55849d6611472da41b474b2b81749875 Jan 29 08:38:42 crc kubenswrapper[5017]: I0129 08:38:42.404208 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-openstack-openstack-cell1-k57sb"] Jan 29 08:38:42 crc kubenswrapper[5017]: I0129 08:38:42.439533 5017 generic.go:334] "Generic (PLEG): container finished" podID="5363c6b8-2681-4be1-b024-583058bf95fe" containerID="caf940540c9ad5b8ecca2be08c88fada05266c5bb5a1b516e1ff6c028e244951" exitCode=0 Jan 29 08:38:42 crc kubenswrapper[5017]: I0129 08:38:42.439611 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g6s57" event={"ID":"5363c6b8-2681-4be1-b024-583058bf95fe","Type":"ContainerDied","Data":"caf940540c9ad5b8ecca2be08c88fada05266c5bb5a1b516e1ff6c028e244951"} Jan 29 08:38:42 crc kubenswrapper[5017]: I0129 08:38:42.443830 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-openstack-openstack-cell1-k57sb" event={"ID":"d5e0be93-aeff-4f04-b902-e137a19e5585","Type":"ContainerStarted","Data":"e7c1cd101cd1687f4a39f1c0d6339f1d55849d6611472da41b474b2b81749875"} Jan 29 08:38:43 crc kubenswrapper[5017]: I0129 08:38:43.460820 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-openstack-openstack-cell1-k57sb" event={"ID":"d5e0be93-aeff-4f04-b902-e137a19e5585","Type":"ContainerStarted","Data":"f2efa5c5f277ccd309d90981efa8133449659c1c7ab1db5bb161675a9c3542ed"} Jan 29 
08:38:43 crc kubenswrapper[5017]: I0129 08:38:43.466300 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g6s57" event={"ID":"5363c6b8-2681-4be1-b024-583058bf95fe","Type":"ContainerStarted","Data":"6447dc31affb2a2621f31d94249a7c8d48d04a1ac3175362eb85d739c89a4ca3"} Jan 29 08:38:43 crc kubenswrapper[5017]: I0129 08:38:43.489198 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-openstack-openstack-cell1-k57sb" podStartSLOduration=1.9206239269999998 podStartE2EDuration="2.489171581s" podCreationTimestamp="2026-01-29 08:38:41 +0000 UTC" firstStartedPulling="2026-01-29 08:38:42.406494361 +0000 UTC m=+7408.780941971" lastFinishedPulling="2026-01-29 08:38:42.975042015 +0000 UTC m=+7409.349489625" observedRunningTime="2026-01-29 08:38:43.477014514 +0000 UTC m=+7409.851462124" watchObservedRunningTime="2026-01-29 08:38:43.489171581 +0000 UTC m=+7409.863619191" Jan 29 08:38:43 crc kubenswrapper[5017]: I0129 08:38:43.509980 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-g6s57" podStartSLOduration=2.866154174 podStartE2EDuration="5.509927748s" podCreationTimestamp="2026-01-29 08:38:38 +0000 UTC" firstStartedPulling="2026-01-29 08:38:40.417164377 +0000 UTC m=+7406.791612007" lastFinishedPulling="2026-01-29 08:38:43.060937971 +0000 UTC m=+7409.435385581" observedRunningTime="2026-01-29 08:38:43.49940078 +0000 UTC m=+7409.873848390" watchObservedRunningTime="2026-01-29 08:38:43.509927748 +0000 UTC m=+7409.884375358" Jan 29 08:38:48 crc kubenswrapper[5017]: I0129 08:38:48.678005 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-g6s57" Jan 29 08:38:48 crc kubenswrapper[5017]: I0129 08:38:48.678900 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-g6s57" Jan 29 08:38:48 crc kubenswrapper[5017]: I0129 08:38:48.729940 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-g6s57" Jan 29 08:38:49 crc kubenswrapper[5017]: I0129 08:38:49.574060 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-g6s57" Jan 29 08:38:49 crc kubenswrapper[5017]: I0129 08:38:49.626524 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-g6s57"] Jan 29 08:38:51 crc kubenswrapper[5017]: I0129 08:38:51.539858 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-g6s57" podUID="5363c6b8-2681-4be1-b024-583058bf95fe" containerName="registry-server" containerID="cri-o://6447dc31affb2a2621f31d94249a7c8d48d04a1ac3175362eb85d739c89a4ca3" gracePeriod=2 Jan 29 08:38:52 crc kubenswrapper[5017]: I0129 08:38:52.054443 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-g6s57" Jan 29 08:38:52 crc kubenswrapper[5017]: I0129 08:38:52.144832 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dcsjp\" (UniqueName: \"kubernetes.io/projected/5363c6b8-2681-4be1-b024-583058bf95fe-kube-api-access-dcsjp\") pod \"5363c6b8-2681-4be1-b024-583058bf95fe\" (UID: \"5363c6b8-2681-4be1-b024-583058bf95fe\") " Jan 29 08:38:52 crc kubenswrapper[5017]: I0129 08:38:52.144989 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5363c6b8-2681-4be1-b024-583058bf95fe-catalog-content\") pod \"5363c6b8-2681-4be1-b024-583058bf95fe\" (UID: \"5363c6b8-2681-4be1-b024-583058bf95fe\") " Jan 29 08:38:52 crc kubenswrapper[5017]: I0129 08:38:52.145050 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5363c6b8-2681-4be1-b024-583058bf95fe-utilities\") pod \"5363c6b8-2681-4be1-b024-583058bf95fe\" (UID: \"5363c6b8-2681-4be1-b024-583058bf95fe\") " Jan 29 08:38:52 crc kubenswrapper[5017]: I0129 08:38:52.146171 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5363c6b8-2681-4be1-b024-583058bf95fe-utilities" (OuterVolumeSpecName: "utilities") pod "5363c6b8-2681-4be1-b024-583058bf95fe" (UID: "5363c6b8-2681-4be1-b024-583058bf95fe"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:38:52 crc kubenswrapper[5017]: I0129 08:38:52.147117 5017 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5363c6b8-2681-4be1-b024-583058bf95fe-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 08:38:52 crc kubenswrapper[5017]: I0129 08:38:52.152344 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5363c6b8-2681-4be1-b024-583058bf95fe-kube-api-access-dcsjp" (OuterVolumeSpecName: "kube-api-access-dcsjp") pod "5363c6b8-2681-4be1-b024-583058bf95fe" (UID: "5363c6b8-2681-4be1-b024-583058bf95fe"). InnerVolumeSpecName "kube-api-access-dcsjp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:38:52 crc kubenswrapper[5017]: I0129 08:38:52.207843 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5363c6b8-2681-4be1-b024-583058bf95fe-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5363c6b8-2681-4be1-b024-583058bf95fe" (UID: "5363c6b8-2681-4be1-b024-583058bf95fe"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:38:52 crc kubenswrapper[5017]: I0129 08:38:52.249382 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dcsjp\" (UniqueName: \"kubernetes.io/projected/5363c6b8-2681-4be1-b024-583058bf95fe-kube-api-access-dcsjp\") on node \"crc\" DevicePath \"\"" Jan 29 08:38:52 crc kubenswrapper[5017]: I0129 08:38:52.249445 5017 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5363c6b8-2681-4be1-b024-583058bf95fe-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 08:38:52 crc kubenswrapper[5017]: I0129 08:38:52.551499 5017 generic.go:334] "Generic (PLEG): container finished" podID="5363c6b8-2681-4be1-b024-583058bf95fe" containerID="6447dc31affb2a2621f31d94249a7c8d48d04a1ac3175362eb85d739c89a4ca3" exitCode=0 Jan 29 08:38:52 crc kubenswrapper[5017]: I0129 08:38:52.551623 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-g6s57" Jan 29 08:38:52 crc kubenswrapper[5017]: I0129 08:38:52.551613 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g6s57" event={"ID":"5363c6b8-2681-4be1-b024-583058bf95fe","Type":"ContainerDied","Data":"6447dc31affb2a2621f31d94249a7c8d48d04a1ac3175362eb85d739c89a4ca3"} Jan 29 08:38:52 crc kubenswrapper[5017]: I0129 08:38:52.553442 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g6s57" event={"ID":"5363c6b8-2681-4be1-b024-583058bf95fe","Type":"ContainerDied","Data":"bd9b28ab714e905787d8eaccf40e002afdf86b04c394f7f9673fc8f30f211390"} Jan 29 08:38:52 crc kubenswrapper[5017]: I0129 08:38:52.553480 5017 scope.go:117] "RemoveContainer" containerID="6447dc31affb2a2621f31d94249a7c8d48d04a1ac3175362eb85d739c89a4ca3" Jan 29 08:38:52 crc kubenswrapper[5017]: I0129 08:38:52.578839 5017 scope.go:117] "RemoveContainer" containerID="caf940540c9ad5b8ecca2be08c88fada05266c5bb5a1b516e1ff6c028e244951" Jan 29 08:38:52 crc kubenswrapper[5017]: I0129 08:38:52.590200 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-g6s57"] Jan 29 08:38:52 crc kubenswrapper[5017]: I0129 08:38:52.603695 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-g6s57"] Jan 29 08:38:52 crc kubenswrapper[5017]: I0129 08:38:52.622940 5017 scope.go:117] "RemoveContainer" containerID="89be9fc1103c645e047a33dbbe86a0dc457b58805ff67362e24f447cf8429d7d" Jan 29 08:38:52 crc kubenswrapper[5017]: I0129 08:38:52.659819 5017 scope.go:117] "RemoveContainer" containerID="6447dc31affb2a2621f31d94249a7c8d48d04a1ac3175362eb85d739c89a4ca3" Jan 29 08:38:52 crc kubenswrapper[5017]: E0129 08:38:52.660522 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6447dc31affb2a2621f31d94249a7c8d48d04a1ac3175362eb85d739c89a4ca3\": container with ID starting with 6447dc31affb2a2621f31d94249a7c8d48d04a1ac3175362eb85d739c89a4ca3 not found: ID does not exist" containerID="6447dc31affb2a2621f31d94249a7c8d48d04a1ac3175362eb85d739c89a4ca3" Jan 29 08:38:52 crc kubenswrapper[5017]: I0129 08:38:52.660560 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6447dc31affb2a2621f31d94249a7c8d48d04a1ac3175362eb85d739c89a4ca3"} err="failed to get container status 
\"6447dc31affb2a2621f31d94249a7c8d48d04a1ac3175362eb85d739c89a4ca3\": rpc error: code = NotFound desc = could not find container \"6447dc31affb2a2621f31d94249a7c8d48d04a1ac3175362eb85d739c89a4ca3\": container with ID starting with 6447dc31affb2a2621f31d94249a7c8d48d04a1ac3175362eb85d739c89a4ca3 not found: ID does not exist" Jan 29 08:38:52 crc kubenswrapper[5017]: I0129 08:38:52.660588 5017 scope.go:117] "RemoveContainer" containerID="caf940540c9ad5b8ecca2be08c88fada05266c5bb5a1b516e1ff6c028e244951" Jan 29 08:38:52 crc kubenswrapper[5017]: E0129 08:38:52.661462 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"caf940540c9ad5b8ecca2be08c88fada05266c5bb5a1b516e1ff6c028e244951\": container with ID starting with caf940540c9ad5b8ecca2be08c88fada05266c5bb5a1b516e1ff6c028e244951 not found: ID does not exist" containerID="caf940540c9ad5b8ecca2be08c88fada05266c5bb5a1b516e1ff6c028e244951" Jan 29 08:38:52 crc kubenswrapper[5017]: I0129 08:38:52.661593 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"caf940540c9ad5b8ecca2be08c88fada05266c5bb5a1b516e1ff6c028e244951"} err="failed to get container status \"caf940540c9ad5b8ecca2be08c88fada05266c5bb5a1b516e1ff6c028e244951\": rpc error: code = NotFound desc = could not find container \"caf940540c9ad5b8ecca2be08c88fada05266c5bb5a1b516e1ff6c028e244951\": container with ID starting with caf940540c9ad5b8ecca2be08c88fada05266c5bb5a1b516e1ff6c028e244951 not found: ID does not exist" Jan 29 08:38:52 crc kubenswrapper[5017]: I0129 08:38:52.661685 5017 scope.go:117] "RemoveContainer" containerID="89be9fc1103c645e047a33dbbe86a0dc457b58805ff67362e24f447cf8429d7d" Jan 29 08:38:52 crc kubenswrapper[5017]: E0129 08:38:52.663195 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89be9fc1103c645e047a33dbbe86a0dc457b58805ff67362e24f447cf8429d7d\": container with ID starting with 89be9fc1103c645e047a33dbbe86a0dc457b58805ff67362e24f447cf8429d7d not found: ID does not exist" containerID="89be9fc1103c645e047a33dbbe86a0dc457b58805ff67362e24f447cf8429d7d" Jan 29 08:38:52 crc kubenswrapper[5017]: I0129 08:38:52.663264 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89be9fc1103c645e047a33dbbe86a0dc457b58805ff67362e24f447cf8429d7d"} err="failed to get container status \"89be9fc1103c645e047a33dbbe86a0dc457b58805ff67362e24f447cf8429d7d\": rpc error: code = NotFound desc = could not find container \"89be9fc1103c645e047a33dbbe86a0dc457b58805ff67362e24f447cf8429d7d\": container with ID starting with 89be9fc1103c645e047a33dbbe86a0dc457b58805ff67362e24f447cf8429d7d not found: ID does not exist" Jan 29 08:38:54 crc kubenswrapper[5017]: I0129 08:38:54.333115 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5363c6b8-2681-4be1-b024-583058bf95fe" path="/var/lib/kubelet/pods/5363c6b8-2681-4be1-b024-583058bf95fe/volumes" Jan 29 08:39:43 crc kubenswrapper[5017]: I0129 08:39:43.117755 5017 generic.go:334] "Generic (PLEG): container finished" podID="d5e0be93-aeff-4f04-b902-e137a19e5585" containerID="f2efa5c5f277ccd309d90981efa8133449659c1c7ab1db5bb161675a9c3542ed" exitCode=0 Jan 29 08:39:43 crc kubenswrapper[5017]: I0129 08:39:43.118181 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-openstack-openstack-cell1-k57sb" 
event={"ID":"d5e0be93-aeff-4f04-b902-e137a19e5585","Type":"ContainerDied","Data":"f2efa5c5f277ccd309d90981efa8133449659c1c7ab1db5bb161675a9c3542ed"} Jan 29 08:39:44 crc kubenswrapper[5017]: I0129 08:39:44.664709 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-openstack-openstack-cell1-k57sb" Jan 29 08:39:44 crc kubenswrapper[5017]: I0129 08:39:44.800780 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d5e0be93-aeff-4f04-b902-e137a19e5585-inventory\") pod \"d5e0be93-aeff-4f04-b902-e137a19e5585\" (UID: \"d5e0be93-aeff-4f04-b902-e137a19e5585\") " Jan 29 08:39:44 crc kubenswrapper[5017]: I0129 08:39:44.802277 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5e0be93-aeff-4f04-b902-e137a19e5585-ovn-combined-ca-bundle\") pod \"d5e0be93-aeff-4f04-b902-e137a19e5585\" (UID: \"d5e0be93-aeff-4f04-b902-e137a19e5585\") " Jan 29 08:39:44 crc kubenswrapper[5017]: I0129 08:39:44.802343 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/d5e0be93-aeff-4f04-b902-e137a19e5585-ovncontroller-config-0\") pod \"d5e0be93-aeff-4f04-b902-e137a19e5585\" (UID: \"d5e0be93-aeff-4f04-b902-e137a19e5585\") " Jan 29 08:39:44 crc kubenswrapper[5017]: I0129 08:39:44.802521 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-86h2k\" (UniqueName: \"kubernetes.io/projected/d5e0be93-aeff-4f04-b902-e137a19e5585-kube-api-access-86h2k\") pod \"d5e0be93-aeff-4f04-b902-e137a19e5585\" (UID: \"d5e0be93-aeff-4f04-b902-e137a19e5585\") " Jan 29 08:39:44 crc kubenswrapper[5017]: I0129 08:39:44.802545 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/d5e0be93-aeff-4f04-b902-e137a19e5585-ssh-key-openstack-cell1\") pod \"d5e0be93-aeff-4f04-b902-e137a19e5585\" (UID: \"d5e0be93-aeff-4f04-b902-e137a19e5585\") " Jan 29 08:39:44 crc kubenswrapper[5017]: I0129 08:39:44.802603 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d5e0be93-aeff-4f04-b902-e137a19e5585-ceph\") pod \"d5e0be93-aeff-4f04-b902-e137a19e5585\" (UID: \"d5e0be93-aeff-4f04-b902-e137a19e5585\") " Jan 29 08:39:44 crc kubenswrapper[5017]: I0129 08:39:44.808931 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5e0be93-aeff-4f04-b902-e137a19e5585-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "d5e0be93-aeff-4f04-b902-e137a19e5585" (UID: "d5e0be93-aeff-4f04-b902-e137a19e5585"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:39:44 crc kubenswrapper[5017]: I0129 08:39:44.810231 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5e0be93-aeff-4f04-b902-e137a19e5585-kube-api-access-86h2k" (OuterVolumeSpecName: "kube-api-access-86h2k") pod "d5e0be93-aeff-4f04-b902-e137a19e5585" (UID: "d5e0be93-aeff-4f04-b902-e137a19e5585"). InnerVolumeSpecName "kube-api-access-86h2k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:39:44 crc kubenswrapper[5017]: I0129 08:39:44.810352 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5e0be93-aeff-4f04-b902-e137a19e5585-ceph" (OuterVolumeSpecName: "ceph") pod "d5e0be93-aeff-4f04-b902-e137a19e5585" (UID: "d5e0be93-aeff-4f04-b902-e137a19e5585"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:39:44 crc kubenswrapper[5017]: I0129 08:39:44.832042 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5e0be93-aeff-4f04-b902-e137a19e5585-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "d5e0be93-aeff-4f04-b902-e137a19e5585" (UID: "d5e0be93-aeff-4f04-b902-e137a19e5585"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:39:44 crc kubenswrapper[5017]: I0129 08:39:44.833544 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5e0be93-aeff-4f04-b902-e137a19e5585-inventory" (OuterVolumeSpecName: "inventory") pod "d5e0be93-aeff-4f04-b902-e137a19e5585" (UID: "d5e0be93-aeff-4f04-b902-e137a19e5585"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:39:44 crc kubenswrapper[5017]: I0129 08:39:44.833573 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5e0be93-aeff-4f04-b902-e137a19e5585-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "d5e0be93-aeff-4f04-b902-e137a19e5585" (UID: "d5e0be93-aeff-4f04-b902-e137a19e5585"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:39:44 crc kubenswrapper[5017]: I0129 08:39:44.906284 5017 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/d5e0be93-aeff-4f04-b902-e137a19e5585-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:44 crc kubenswrapper[5017]: I0129 08:39:44.906327 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-86h2k\" (UniqueName: \"kubernetes.io/projected/d5e0be93-aeff-4f04-b902-e137a19e5585-kube-api-access-86h2k\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:44 crc kubenswrapper[5017]: I0129 08:39:44.906337 5017 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/d5e0be93-aeff-4f04-b902-e137a19e5585-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:44 crc kubenswrapper[5017]: I0129 08:39:44.906373 5017 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d5e0be93-aeff-4f04-b902-e137a19e5585-ceph\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:44 crc kubenswrapper[5017]: I0129 08:39:44.906385 5017 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d5e0be93-aeff-4f04-b902-e137a19e5585-inventory\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:44 crc kubenswrapper[5017]: I0129 08:39:44.906395 5017 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5e0be93-aeff-4f04-b902-e137a19e5585-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:45 crc kubenswrapper[5017]: I0129 08:39:45.140480 5017 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/ovn-openstack-openstack-cell1-k57sb" event={"ID":"d5e0be93-aeff-4f04-b902-e137a19e5585","Type":"ContainerDied","Data":"e7c1cd101cd1687f4a39f1c0d6339f1d55849d6611472da41b474b2b81749875"} Jan 29 08:39:45 crc kubenswrapper[5017]: I0129 08:39:45.140534 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e7c1cd101cd1687f4a39f1c0d6339f1d55849d6611472da41b474b2b81749875" Jan 29 08:39:45 crc kubenswrapper[5017]: I0129 08:39:45.140592 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-openstack-openstack-cell1-k57sb" Jan 29 08:39:45 crc kubenswrapper[5017]: I0129 08:39:45.234708 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-openstack-openstack-cell1-bxsq2"] Jan 29 08:39:45 crc kubenswrapper[5017]: E0129 08:39:45.235299 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5363c6b8-2681-4be1-b024-583058bf95fe" containerName="registry-server" Jan 29 08:39:45 crc kubenswrapper[5017]: I0129 08:39:45.235326 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="5363c6b8-2681-4be1-b024-583058bf95fe" containerName="registry-server" Jan 29 08:39:45 crc kubenswrapper[5017]: E0129 08:39:45.235352 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5363c6b8-2681-4be1-b024-583058bf95fe" containerName="extract-utilities" Jan 29 08:39:45 crc kubenswrapper[5017]: I0129 08:39:45.235362 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="5363c6b8-2681-4be1-b024-583058bf95fe" containerName="extract-utilities" Jan 29 08:39:45 crc kubenswrapper[5017]: E0129 08:39:45.235411 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5e0be93-aeff-4f04-b902-e137a19e5585" containerName="ovn-openstack-openstack-cell1" Jan 29 08:39:45 crc kubenswrapper[5017]: I0129 08:39:45.235423 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5e0be93-aeff-4f04-b902-e137a19e5585" containerName="ovn-openstack-openstack-cell1" Jan 29 08:39:45 crc kubenswrapper[5017]: E0129 08:39:45.235444 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5363c6b8-2681-4be1-b024-583058bf95fe" containerName="extract-content" Jan 29 08:39:45 crc kubenswrapper[5017]: I0129 08:39:45.235453 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="5363c6b8-2681-4be1-b024-583058bf95fe" containerName="extract-content" Jan 29 08:39:45 crc kubenswrapper[5017]: I0129 08:39:45.235707 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="5363c6b8-2681-4be1-b024-583058bf95fe" containerName="registry-server" Jan 29 08:39:45 crc kubenswrapper[5017]: I0129 08:39:45.235740 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5e0be93-aeff-4f04-b902-e137a19e5585" containerName="ovn-openstack-openstack-cell1" Jan 29 08:39:45 crc kubenswrapper[5017]: I0129 08:39:45.236804 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-openstack-openstack-cell1-bxsq2" Jan 29 08:39:45 crc kubenswrapper[5017]: I0129 08:39:45.246038 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Jan 29 08:39:45 crc kubenswrapper[5017]: I0129 08:39:45.246246 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Jan 29 08:39:45 crc kubenswrapper[5017]: I0129 08:39:45.246374 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Jan 29 08:39:45 crc kubenswrapper[5017]: I0129 08:39:45.246466 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-8fskm" Jan 29 08:39:45 crc kubenswrapper[5017]: I0129 08:39:45.246883 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Jan 29 08:39:45 crc kubenswrapper[5017]: I0129 08:39:45.248907 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-openstack-openstack-cell1-bxsq2"] Jan 29 08:39:45 crc kubenswrapper[5017]: I0129 08:39:45.249334 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 08:39:45 crc kubenswrapper[5017]: I0129 08:39:45.315421 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/653c0354-71ae-4f98-87ff-df55efbd5297-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-openstack-openstack-cell1-bxsq2\" (UID: \"653c0354-71ae-4f98-87ff-df55efbd5297\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-bxsq2" Jan 29 08:39:45 crc kubenswrapper[5017]: I0129 08:39:45.315503 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/653c0354-71ae-4f98-87ff-df55efbd5297-nova-metadata-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-bxsq2\" (UID: \"653c0354-71ae-4f98-87ff-df55efbd5297\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-bxsq2" Jan 29 08:39:45 crc kubenswrapper[5017]: I0129 08:39:45.315544 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/653c0354-71ae-4f98-87ff-df55efbd5297-ceph\") pod \"neutron-metadata-openstack-openstack-cell1-bxsq2\" (UID: \"653c0354-71ae-4f98-87ff-df55efbd5297\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-bxsq2" Jan 29 08:39:45 crc kubenswrapper[5017]: I0129 08:39:45.315573 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xtjv\" (UniqueName: \"kubernetes.io/projected/653c0354-71ae-4f98-87ff-df55efbd5297-kube-api-access-9xtjv\") pod \"neutron-metadata-openstack-openstack-cell1-bxsq2\" (UID: \"653c0354-71ae-4f98-87ff-df55efbd5297\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-bxsq2" Jan 29 08:39:45 crc kubenswrapper[5017]: I0129 08:39:45.315728 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/653c0354-71ae-4f98-87ff-df55efbd5297-neutron-ovn-metadata-agent-neutron-config-0\") pod 
\"neutron-metadata-openstack-openstack-cell1-bxsq2\" (UID: \"653c0354-71ae-4f98-87ff-df55efbd5297\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-bxsq2" Jan 29 08:39:45 crc kubenswrapper[5017]: I0129 08:39:45.315791 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/653c0354-71ae-4f98-87ff-df55efbd5297-inventory\") pod \"neutron-metadata-openstack-openstack-cell1-bxsq2\" (UID: \"653c0354-71ae-4f98-87ff-df55efbd5297\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-bxsq2" Jan 29 08:39:45 crc kubenswrapper[5017]: I0129 08:39:45.315873 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/653c0354-71ae-4f98-87ff-df55efbd5297-ssh-key-openstack-cell1\") pod \"neutron-metadata-openstack-openstack-cell1-bxsq2\" (UID: \"653c0354-71ae-4f98-87ff-df55efbd5297\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-bxsq2" Jan 29 08:39:45 crc kubenswrapper[5017]: I0129 08:39:45.417986 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/653c0354-71ae-4f98-87ff-df55efbd5297-inventory\") pod \"neutron-metadata-openstack-openstack-cell1-bxsq2\" (UID: \"653c0354-71ae-4f98-87ff-df55efbd5297\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-bxsq2" Jan 29 08:39:45 crc kubenswrapper[5017]: I0129 08:39:45.418063 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/653c0354-71ae-4f98-87ff-df55efbd5297-ssh-key-openstack-cell1\") pod \"neutron-metadata-openstack-openstack-cell1-bxsq2\" (UID: \"653c0354-71ae-4f98-87ff-df55efbd5297\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-bxsq2" Jan 29 08:39:45 crc kubenswrapper[5017]: I0129 08:39:45.418233 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/653c0354-71ae-4f98-87ff-df55efbd5297-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-openstack-openstack-cell1-bxsq2\" (UID: \"653c0354-71ae-4f98-87ff-df55efbd5297\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-bxsq2" Jan 29 08:39:45 crc kubenswrapper[5017]: I0129 08:39:45.418264 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/653c0354-71ae-4f98-87ff-df55efbd5297-nova-metadata-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-bxsq2\" (UID: \"653c0354-71ae-4f98-87ff-df55efbd5297\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-bxsq2" Jan 29 08:39:45 crc kubenswrapper[5017]: I0129 08:39:45.418292 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/653c0354-71ae-4f98-87ff-df55efbd5297-ceph\") pod \"neutron-metadata-openstack-openstack-cell1-bxsq2\" (UID: \"653c0354-71ae-4f98-87ff-df55efbd5297\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-bxsq2" Jan 29 08:39:45 crc kubenswrapper[5017]: I0129 08:39:45.418321 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xtjv\" (UniqueName: \"kubernetes.io/projected/653c0354-71ae-4f98-87ff-df55efbd5297-kube-api-access-9xtjv\") pod 
\"neutron-metadata-openstack-openstack-cell1-bxsq2\" (UID: \"653c0354-71ae-4f98-87ff-df55efbd5297\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-bxsq2" Jan 29 08:39:45 crc kubenswrapper[5017]: I0129 08:39:45.418356 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/653c0354-71ae-4f98-87ff-df55efbd5297-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-bxsq2\" (UID: \"653c0354-71ae-4f98-87ff-df55efbd5297\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-bxsq2" Jan 29 08:39:45 crc kubenswrapper[5017]: I0129 08:39:45.423130 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/653c0354-71ae-4f98-87ff-df55efbd5297-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-bxsq2\" (UID: \"653c0354-71ae-4f98-87ff-df55efbd5297\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-bxsq2" Jan 29 08:39:45 crc kubenswrapper[5017]: I0129 08:39:45.423228 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/653c0354-71ae-4f98-87ff-df55efbd5297-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-openstack-openstack-cell1-bxsq2\" (UID: \"653c0354-71ae-4f98-87ff-df55efbd5297\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-bxsq2" Jan 29 08:39:45 crc kubenswrapper[5017]: I0129 08:39:45.424188 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/653c0354-71ae-4f98-87ff-df55efbd5297-ssh-key-openstack-cell1\") pod \"neutron-metadata-openstack-openstack-cell1-bxsq2\" (UID: \"653c0354-71ae-4f98-87ff-df55efbd5297\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-bxsq2" Jan 29 08:39:45 crc kubenswrapper[5017]: I0129 08:39:45.424750 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/653c0354-71ae-4f98-87ff-df55efbd5297-nova-metadata-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-bxsq2\" (UID: \"653c0354-71ae-4f98-87ff-df55efbd5297\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-bxsq2" Jan 29 08:39:45 crc kubenswrapper[5017]: I0129 08:39:45.425256 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/653c0354-71ae-4f98-87ff-df55efbd5297-inventory\") pod \"neutron-metadata-openstack-openstack-cell1-bxsq2\" (UID: \"653c0354-71ae-4f98-87ff-df55efbd5297\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-bxsq2" Jan 29 08:39:45 crc kubenswrapper[5017]: I0129 08:39:45.425602 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/653c0354-71ae-4f98-87ff-df55efbd5297-ceph\") pod \"neutron-metadata-openstack-openstack-cell1-bxsq2\" (UID: \"653c0354-71ae-4f98-87ff-df55efbd5297\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-bxsq2" Jan 29 08:39:45 crc kubenswrapper[5017]: I0129 08:39:45.441020 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xtjv\" (UniqueName: \"kubernetes.io/projected/653c0354-71ae-4f98-87ff-df55efbd5297-kube-api-access-9xtjv\") pod 
\"neutron-metadata-openstack-openstack-cell1-bxsq2\" (UID: \"653c0354-71ae-4f98-87ff-df55efbd5297\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-bxsq2" Jan 29 08:39:45 crc kubenswrapper[5017]: I0129 08:39:45.567790 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-openstack-openstack-cell1-bxsq2" Jan 29 08:39:46 crc kubenswrapper[5017]: I0129 08:39:46.130831 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-openstack-openstack-cell1-bxsq2"] Jan 29 08:39:46 crc kubenswrapper[5017]: I0129 08:39:46.143306 5017 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 08:39:46 crc kubenswrapper[5017]: I0129 08:39:46.154858 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-openstack-openstack-cell1-bxsq2" event={"ID":"653c0354-71ae-4f98-87ff-df55efbd5297","Type":"ContainerStarted","Data":"aad2bfe899077ee7d9a697219c325426a2009faac3cef7005c45c6cddb9df3a9"} Jan 29 08:39:47 crc kubenswrapper[5017]: I0129 08:39:47.169151 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-openstack-openstack-cell1-bxsq2" event={"ID":"653c0354-71ae-4f98-87ff-df55efbd5297","Type":"ContainerStarted","Data":"acb758f0bf7b0c6930237bab4407a6e739f872cbc91838430947a76a592cf057"} Jan 29 08:39:47 crc kubenswrapper[5017]: I0129 08:39:47.208945 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-openstack-openstack-cell1-bxsq2" podStartSLOduration=1.51550208 podStartE2EDuration="2.208912212s" podCreationTimestamp="2026-01-29 08:39:45 +0000 UTC" firstStartedPulling="2026-01-29 08:39:46.142937379 +0000 UTC m=+7472.517384989" lastFinishedPulling="2026-01-29 08:39:46.836347511 +0000 UTC m=+7473.210795121" observedRunningTime="2026-01-29 08:39:47.194548931 +0000 UTC m=+7473.568996551" watchObservedRunningTime="2026-01-29 08:39:47.208912212 +0000 UTC m=+7473.583359822" Jan 29 08:40:26 crc kubenswrapper[5017]: I0129 08:40:26.539860 5017 patch_prober.go:28] interesting pod/machine-config-daemon-895pl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 08:40:26 crc kubenswrapper[5017]: I0129 08:40:26.542230 5017 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 08:40:35 crc kubenswrapper[5017]: I0129 08:40:35.684770 5017 generic.go:334] "Generic (PLEG): container finished" podID="653c0354-71ae-4f98-87ff-df55efbd5297" containerID="acb758f0bf7b0c6930237bab4407a6e739f872cbc91838430947a76a592cf057" exitCode=0 Jan 29 08:40:35 crc kubenswrapper[5017]: I0129 08:40:35.684868 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-openstack-openstack-cell1-bxsq2" event={"ID":"653c0354-71ae-4f98-87ff-df55efbd5297","Type":"ContainerDied","Data":"acb758f0bf7b0c6930237bab4407a6e739f872cbc91838430947a76a592cf057"} Jan 29 08:40:37 crc kubenswrapper[5017]: I0129 08:40:37.217348 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-openstack-openstack-cell1-bxsq2" Jan 29 08:40:37 crc kubenswrapper[5017]: I0129 08:40:37.327616 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/653c0354-71ae-4f98-87ff-df55efbd5297-ceph\") pod \"653c0354-71ae-4f98-87ff-df55efbd5297\" (UID: \"653c0354-71ae-4f98-87ff-df55efbd5297\") " Jan 29 08:40:37 crc kubenswrapper[5017]: I0129 08:40:37.327790 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/653c0354-71ae-4f98-87ff-df55efbd5297-ssh-key-openstack-cell1\") pod \"653c0354-71ae-4f98-87ff-df55efbd5297\" (UID: \"653c0354-71ae-4f98-87ff-df55efbd5297\") " Jan 29 08:40:37 crc kubenswrapper[5017]: I0129 08:40:37.327863 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xtjv\" (UniqueName: \"kubernetes.io/projected/653c0354-71ae-4f98-87ff-df55efbd5297-kube-api-access-9xtjv\") pod \"653c0354-71ae-4f98-87ff-df55efbd5297\" (UID: \"653c0354-71ae-4f98-87ff-df55efbd5297\") " Jan 29 08:40:37 crc kubenswrapper[5017]: I0129 08:40:37.328011 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/653c0354-71ae-4f98-87ff-df55efbd5297-nova-metadata-neutron-config-0\") pod \"653c0354-71ae-4f98-87ff-df55efbd5297\" (UID: \"653c0354-71ae-4f98-87ff-df55efbd5297\") " Jan 29 08:40:37 crc kubenswrapper[5017]: I0129 08:40:37.328043 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/653c0354-71ae-4f98-87ff-df55efbd5297-neutron-metadata-combined-ca-bundle\") pod \"653c0354-71ae-4f98-87ff-df55efbd5297\" (UID: \"653c0354-71ae-4f98-87ff-df55efbd5297\") " Jan 29 08:40:37 crc kubenswrapper[5017]: I0129 08:40:37.328125 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/653c0354-71ae-4f98-87ff-df55efbd5297-inventory\") pod \"653c0354-71ae-4f98-87ff-df55efbd5297\" (UID: \"653c0354-71ae-4f98-87ff-df55efbd5297\") " Jan 29 08:40:37 crc kubenswrapper[5017]: I0129 08:40:37.328196 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/653c0354-71ae-4f98-87ff-df55efbd5297-neutron-ovn-metadata-agent-neutron-config-0\") pod \"653c0354-71ae-4f98-87ff-df55efbd5297\" (UID: \"653c0354-71ae-4f98-87ff-df55efbd5297\") " Jan 29 08:40:37 crc kubenswrapper[5017]: I0129 08:40:37.336244 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/653c0354-71ae-4f98-87ff-df55efbd5297-ceph" (OuterVolumeSpecName: "ceph") pod "653c0354-71ae-4f98-87ff-df55efbd5297" (UID: "653c0354-71ae-4f98-87ff-df55efbd5297"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:40:37 crc kubenswrapper[5017]: I0129 08:40:37.336774 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/653c0354-71ae-4f98-87ff-df55efbd5297-kube-api-access-9xtjv" (OuterVolumeSpecName: "kube-api-access-9xtjv") pod "653c0354-71ae-4f98-87ff-df55efbd5297" (UID: "653c0354-71ae-4f98-87ff-df55efbd5297"). InnerVolumeSpecName "kube-api-access-9xtjv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:40:37 crc kubenswrapper[5017]: I0129 08:40:37.337473 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/653c0354-71ae-4f98-87ff-df55efbd5297-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "653c0354-71ae-4f98-87ff-df55efbd5297" (UID: "653c0354-71ae-4f98-87ff-df55efbd5297"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:40:37 crc kubenswrapper[5017]: I0129 08:40:37.369525 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/653c0354-71ae-4f98-87ff-df55efbd5297-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "653c0354-71ae-4f98-87ff-df55efbd5297" (UID: "653c0354-71ae-4f98-87ff-df55efbd5297"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:40:37 crc kubenswrapper[5017]: I0129 08:40:37.371860 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/653c0354-71ae-4f98-87ff-df55efbd5297-inventory" (OuterVolumeSpecName: "inventory") pod "653c0354-71ae-4f98-87ff-df55efbd5297" (UID: "653c0354-71ae-4f98-87ff-df55efbd5297"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:40:37 crc kubenswrapper[5017]: I0129 08:40:37.372201 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/653c0354-71ae-4f98-87ff-df55efbd5297-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "653c0354-71ae-4f98-87ff-df55efbd5297" (UID: "653c0354-71ae-4f98-87ff-df55efbd5297"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:40:37 crc kubenswrapper[5017]: I0129 08:40:37.376628 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/653c0354-71ae-4f98-87ff-df55efbd5297-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "653c0354-71ae-4f98-87ff-df55efbd5297" (UID: "653c0354-71ae-4f98-87ff-df55efbd5297"). InnerVolumeSpecName "nova-metadata-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:40:37 crc kubenswrapper[5017]: I0129 08:40:37.430708 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xtjv\" (UniqueName: \"kubernetes.io/projected/653c0354-71ae-4f98-87ff-df55efbd5297-kube-api-access-9xtjv\") on node \"crc\" DevicePath \"\"" Jan 29 08:40:37 crc kubenswrapper[5017]: I0129 08:40:37.430758 5017 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/653c0354-71ae-4f98-87ff-df55efbd5297-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 29 08:40:37 crc kubenswrapper[5017]: I0129 08:40:37.430772 5017 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/653c0354-71ae-4f98-87ff-df55efbd5297-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:40:37 crc kubenswrapper[5017]: I0129 08:40:37.430787 5017 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/653c0354-71ae-4f98-87ff-df55efbd5297-inventory\") on node \"crc\" DevicePath \"\"" Jan 29 08:40:37 crc kubenswrapper[5017]: I0129 08:40:37.430804 5017 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/653c0354-71ae-4f98-87ff-df55efbd5297-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 29 08:40:37 crc kubenswrapper[5017]: I0129 08:40:37.430817 5017 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/653c0354-71ae-4f98-87ff-df55efbd5297-ceph\") on node \"crc\" DevicePath \"\"" Jan 29 08:40:37 crc kubenswrapper[5017]: I0129 08:40:37.430831 5017 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/653c0354-71ae-4f98-87ff-df55efbd5297-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 29 08:40:37 crc kubenswrapper[5017]: I0129 08:40:37.718179 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-openstack-openstack-cell1-bxsq2" event={"ID":"653c0354-71ae-4f98-87ff-df55efbd5297","Type":"ContainerDied","Data":"aad2bfe899077ee7d9a697219c325426a2009faac3cef7005c45c6cddb9df3a9"} Jan 29 08:40:37 crc kubenswrapper[5017]: I0129 08:40:37.719769 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aad2bfe899077ee7d9a697219c325426a2009faac3cef7005c45c6cddb9df3a9" Jan 29 08:40:37 crc kubenswrapper[5017]: I0129 08:40:37.723124 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-openstack-openstack-cell1-bxsq2" Jan 29 08:40:37 crc kubenswrapper[5017]: I0129 08:40:37.804816 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-openstack-openstack-cell1-2t5ls"] Jan 29 08:40:37 crc kubenswrapper[5017]: E0129 08:40:37.805374 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="653c0354-71ae-4f98-87ff-df55efbd5297" containerName="neutron-metadata-openstack-openstack-cell1" Jan 29 08:40:37 crc kubenswrapper[5017]: I0129 08:40:37.805394 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="653c0354-71ae-4f98-87ff-df55efbd5297" containerName="neutron-metadata-openstack-openstack-cell1" Jan 29 08:40:37 crc kubenswrapper[5017]: I0129 08:40:37.805614 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="653c0354-71ae-4f98-87ff-df55efbd5297" containerName="neutron-metadata-openstack-openstack-cell1" Jan 29 08:40:37 crc kubenswrapper[5017]: I0129 08:40:37.806431 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-openstack-openstack-cell1-2t5ls" Jan 29 08:40:37 crc kubenswrapper[5017]: I0129 08:40:37.812459 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Jan 29 08:40:37 crc kubenswrapper[5017]: I0129 08:40:37.812879 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 08:40:37 crc kubenswrapper[5017]: I0129 08:40:37.813131 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Jan 29 08:40:37 crc kubenswrapper[5017]: I0129 08:40:37.813291 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Jan 29 08:40:37 crc kubenswrapper[5017]: I0129 08:40:37.813438 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-8fskm" Jan 29 08:40:37 crc kubenswrapper[5017]: I0129 08:40:37.823815 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-openstack-openstack-cell1-2t5ls"] Jan 29 08:40:37 crc kubenswrapper[5017]: I0129 08:40:37.945810 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4m8bs\" (UniqueName: \"kubernetes.io/projected/1bcb6248-59f7-4d82-a9f4-ce837e9a9ef4-kube-api-access-4m8bs\") pod \"libvirt-openstack-openstack-cell1-2t5ls\" (UID: \"1bcb6248-59f7-4d82-a9f4-ce837e9a9ef4\") " pod="openstack/libvirt-openstack-openstack-cell1-2t5ls" Jan 29 08:40:37 crc kubenswrapper[5017]: I0129 08:40:37.945869 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bcb6248-59f7-4d82-a9f4-ce837e9a9ef4-libvirt-combined-ca-bundle\") pod \"libvirt-openstack-openstack-cell1-2t5ls\" (UID: \"1bcb6248-59f7-4d82-a9f4-ce837e9a9ef4\") " pod="openstack/libvirt-openstack-openstack-cell1-2t5ls" Jan 29 08:40:37 crc kubenswrapper[5017]: I0129 08:40:37.946025 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1bcb6248-59f7-4d82-a9f4-ce837e9a9ef4-inventory\") pod \"libvirt-openstack-openstack-cell1-2t5ls\" (UID: \"1bcb6248-59f7-4d82-a9f4-ce837e9a9ef4\") " pod="openstack/libvirt-openstack-openstack-cell1-2t5ls" Jan 29 08:40:37 crc kubenswrapper[5017]: I0129 08:40:37.946102 5017 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1bcb6248-59f7-4d82-a9f4-ce837e9a9ef4-ceph\") pod \"libvirt-openstack-openstack-cell1-2t5ls\" (UID: \"1bcb6248-59f7-4d82-a9f4-ce837e9a9ef4\") " pod="openstack/libvirt-openstack-openstack-cell1-2t5ls" Jan 29 08:40:37 crc kubenswrapper[5017]: I0129 08:40:37.946168 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/1bcb6248-59f7-4d82-a9f4-ce837e9a9ef4-libvirt-secret-0\") pod \"libvirt-openstack-openstack-cell1-2t5ls\" (UID: \"1bcb6248-59f7-4d82-a9f4-ce837e9a9ef4\") " pod="openstack/libvirt-openstack-openstack-cell1-2t5ls" Jan 29 08:40:37 crc kubenswrapper[5017]: I0129 08:40:37.946194 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/1bcb6248-59f7-4d82-a9f4-ce837e9a9ef4-ssh-key-openstack-cell1\") pod \"libvirt-openstack-openstack-cell1-2t5ls\" (UID: \"1bcb6248-59f7-4d82-a9f4-ce837e9a9ef4\") " pod="openstack/libvirt-openstack-openstack-cell1-2t5ls" Jan 29 08:40:38 crc kubenswrapper[5017]: I0129 08:40:38.048018 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1bcb6248-59f7-4d82-a9f4-ce837e9a9ef4-ceph\") pod \"libvirt-openstack-openstack-cell1-2t5ls\" (UID: \"1bcb6248-59f7-4d82-a9f4-ce837e9a9ef4\") " pod="openstack/libvirt-openstack-openstack-cell1-2t5ls" Jan 29 08:40:38 crc kubenswrapper[5017]: I0129 08:40:38.048134 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/1bcb6248-59f7-4d82-a9f4-ce837e9a9ef4-libvirt-secret-0\") pod \"libvirt-openstack-openstack-cell1-2t5ls\" (UID: \"1bcb6248-59f7-4d82-a9f4-ce837e9a9ef4\") " pod="openstack/libvirt-openstack-openstack-cell1-2t5ls" Jan 29 08:40:38 crc kubenswrapper[5017]: I0129 08:40:38.048165 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/1bcb6248-59f7-4d82-a9f4-ce837e9a9ef4-ssh-key-openstack-cell1\") pod \"libvirt-openstack-openstack-cell1-2t5ls\" (UID: \"1bcb6248-59f7-4d82-a9f4-ce837e9a9ef4\") " pod="openstack/libvirt-openstack-openstack-cell1-2t5ls" Jan 29 08:40:38 crc kubenswrapper[5017]: I0129 08:40:38.048215 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4m8bs\" (UniqueName: \"kubernetes.io/projected/1bcb6248-59f7-4d82-a9f4-ce837e9a9ef4-kube-api-access-4m8bs\") pod \"libvirt-openstack-openstack-cell1-2t5ls\" (UID: \"1bcb6248-59f7-4d82-a9f4-ce837e9a9ef4\") " pod="openstack/libvirt-openstack-openstack-cell1-2t5ls" Jan 29 08:40:38 crc kubenswrapper[5017]: I0129 08:40:38.048249 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bcb6248-59f7-4d82-a9f4-ce837e9a9ef4-libvirt-combined-ca-bundle\") pod \"libvirt-openstack-openstack-cell1-2t5ls\" (UID: \"1bcb6248-59f7-4d82-a9f4-ce837e9a9ef4\") " pod="openstack/libvirt-openstack-openstack-cell1-2t5ls" Jan 29 08:40:38 crc kubenswrapper[5017]: I0129 08:40:38.048387 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1bcb6248-59f7-4d82-a9f4-ce837e9a9ef4-inventory\") pod 
\"libvirt-openstack-openstack-cell1-2t5ls\" (UID: \"1bcb6248-59f7-4d82-a9f4-ce837e9a9ef4\") " pod="openstack/libvirt-openstack-openstack-cell1-2t5ls" Jan 29 08:40:38 crc kubenswrapper[5017]: I0129 08:40:38.054574 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1bcb6248-59f7-4d82-a9f4-ce837e9a9ef4-inventory\") pod \"libvirt-openstack-openstack-cell1-2t5ls\" (UID: \"1bcb6248-59f7-4d82-a9f4-ce837e9a9ef4\") " pod="openstack/libvirt-openstack-openstack-cell1-2t5ls" Jan 29 08:40:38 crc kubenswrapper[5017]: I0129 08:40:38.054736 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bcb6248-59f7-4d82-a9f4-ce837e9a9ef4-libvirt-combined-ca-bundle\") pod \"libvirt-openstack-openstack-cell1-2t5ls\" (UID: \"1bcb6248-59f7-4d82-a9f4-ce837e9a9ef4\") " pod="openstack/libvirt-openstack-openstack-cell1-2t5ls" Jan 29 08:40:38 crc kubenswrapper[5017]: I0129 08:40:38.055569 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/1bcb6248-59f7-4d82-a9f4-ce837e9a9ef4-ssh-key-openstack-cell1\") pod \"libvirt-openstack-openstack-cell1-2t5ls\" (UID: \"1bcb6248-59f7-4d82-a9f4-ce837e9a9ef4\") " pod="openstack/libvirt-openstack-openstack-cell1-2t5ls" Jan 29 08:40:38 crc kubenswrapper[5017]: I0129 08:40:38.055688 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1bcb6248-59f7-4d82-a9f4-ce837e9a9ef4-ceph\") pod \"libvirt-openstack-openstack-cell1-2t5ls\" (UID: \"1bcb6248-59f7-4d82-a9f4-ce837e9a9ef4\") " pod="openstack/libvirt-openstack-openstack-cell1-2t5ls" Jan 29 08:40:38 crc kubenswrapper[5017]: I0129 08:40:38.061862 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/1bcb6248-59f7-4d82-a9f4-ce837e9a9ef4-libvirt-secret-0\") pod \"libvirt-openstack-openstack-cell1-2t5ls\" (UID: \"1bcb6248-59f7-4d82-a9f4-ce837e9a9ef4\") " pod="openstack/libvirt-openstack-openstack-cell1-2t5ls" Jan 29 08:40:38 crc kubenswrapper[5017]: I0129 08:40:38.066762 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4m8bs\" (UniqueName: \"kubernetes.io/projected/1bcb6248-59f7-4d82-a9f4-ce837e9a9ef4-kube-api-access-4m8bs\") pod \"libvirt-openstack-openstack-cell1-2t5ls\" (UID: \"1bcb6248-59f7-4d82-a9f4-ce837e9a9ef4\") " pod="openstack/libvirt-openstack-openstack-cell1-2t5ls" Jan 29 08:40:38 crc kubenswrapper[5017]: I0129 08:40:38.138438 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-openstack-openstack-cell1-2t5ls" Jan 29 08:40:38 crc kubenswrapper[5017]: I0129 08:40:38.711214 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-openstack-openstack-cell1-2t5ls"] Jan 29 08:40:38 crc kubenswrapper[5017]: I0129 08:40:38.731480 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-openstack-openstack-cell1-2t5ls" event={"ID":"1bcb6248-59f7-4d82-a9f4-ce837e9a9ef4","Type":"ContainerStarted","Data":"9aa05dc5aa19368d4b52c712d4bf9ce3c14015e19285d24a1afc540575c3e6c9"} Jan 29 08:40:39 crc kubenswrapper[5017]: I0129 08:40:39.747059 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-openstack-openstack-cell1-2t5ls" event={"ID":"1bcb6248-59f7-4d82-a9f4-ce837e9a9ef4","Type":"ContainerStarted","Data":"bb990439bd15c0935551577a483864b24f0147972bab21dd279bd2e9a1622428"} Jan 29 08:40:39 crc kubenswrapper[5017]: I0129 08:40:39.781048 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-openstack-openstack-cell1-2t5ls" podStartSLOduration=2.341121764 podStartE2EDuration="2.781024388s" podCreationTimestamp="2026-01-29 08:40:37 +0000 UTC" firstStartedPulling="2026-01-29 08:40:38.718137711 +0000 UTC m=+7525.092585321" lastFinishedPulling="2026-01-29 08:40:39.158040335 +0000 UTC m=+7525.532487945" observedRunningTime="2026-01-29 08:40:39.76877827 +0000 UTC m=+7526.143225890" watchObservedRunningTime="2026-01-29 08:40:39.781024388 +0000 UTC m=+7526.155471998" Jan 29 08:40:56 crc kubenswrapper[5017]: I0129 08:40:56.539091 5017 patch_prober.go:28] interesting pod/machine-config-daemon-895pl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 08:40:56 crc kubenswrapper[5017]: I0129 08:40:56.539834 5017 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 08:41:17 crc kubenswrapper[5017]: I0129 08:41:17.466919 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-brcbq"] Jan 29 08:41:17 crc kubenswrapper[5017]: I0129 08:41:17.470464 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-brcbq" Jan 29 08:41:17 crc kubenswrapper[5017]: I0129 08:41:17.481053 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-brcbq"] Jan 29 08:41:17 crc kubenswrapper[5017]: I0129 08:41:17.579335 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93c4cbb3-924e-4cf1-8022-032989597ad0-utilities\") pod \"certified-operators-brcbq\" (UID: \"93c4cbb3-924e-4cf1-8022-032989597ad0\") " pod="openshift-marketplace/certified-operators-brcbq" Jan 29 08:41:17 crc kubenswrapper[5017]: I0129 08:41:17.579825 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93c4cbb3-924e-4cf1-8022-032989597ad0-catalog-content\") pod \"certified-operators-brcbq\" (UID: \"93c4cbb3-924e-4cf1-8022-032989597ad0\") " pod="openshift-marketplace/certified-operators-brcbq" Jan 29 08:41:17 crc kubenswrapper[5017]: I0129 08:41:17.580063 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pk6f\" (UniqueName: \"kubernetes.io/projected/93c4cbb3-924e-4cf1-8022-032989597ad0-kube-api-access-7pk6f\") pod \"certified-operators-brcbq\" (UID: \"93c4cbb3-924e-4cf1-8022-032989597ad0\") " pod="openshift-marketplace/certified-operators-brcbq" Jan 29 08:41:17 crc kubenswrapper[5017]: I0129 08:41:17.682767 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93c4cbb3-924e-4cf1-8022-032989597ad0-catalog-content\") pod \"certified-operators-brcbq\" (UID: \"93c4cbb3-924e-4cf1-8022-032989597ad0\") " pod="openshift-marketplace/certified-operators-brcbq" Jan 29 08:41:17 crc kubenswrapper[5017]: I0129 08:41:17.682887 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7pk6f\" (UniqueName: \"kubernetes.io/projected/93c4cbb3-924e-4cf1-8022-032989597ad0-kube-api-access-7pk6f\") pod \"certified-operators-brcbq\" (UID: \"93c4cbb3-924e-4cf1-8022-032989597ad0\") " pod="openshift-marketplace/certified-operators-brcbq" Jan 29 08:41:17 crc kubenswrapper[5017]: I0129 08:41:17.683000 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93c4cbb3-924e-4cf1-8022-032989597ad0-utilities\") pod \"certified-operators-brcbq\" (UID: \"93c4cbb3-924e-4cf1-8022-032989597ad0\") " pod="openshift-marketplace/certified-operators-brcbq" Jan 29 08:41:17 crc kubenswrapper[5017]: I0129 08:41:17.683506 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93c4cbb3-924e-4cf1-8022-032989597ad0-catalog-content\") pod \"certified-operators-brcbq\" (UID: \"93c4cbb3-924e-4cf1-8022-032989597ad0\") " pod="openshift-marketplace/certified-operators-brcbq" Jan 29 08:41:17 crc kubenswrapper[5017]: I0129 08:41:17.683643 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93c4cbb3-924e-4cf1-8022-032989597ad0-utilities\") pod \"certified-operators-brcbq\" (UID: \"93c4cbb3-924e-4cf1-8022-032989597ad0\") " pod="openshift-marketplace/certified-operators-brcbq" Jan 29 08:41:17 crc kubenswrapper[5017]: I0129 08:41:17.713852 5017 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-7pk6f\" (UniqueName: \"kubernetes.io/projected/93c4cbb3-924e-4cf1-8022-032989597ad0-kube-api-access-7pk6f\") pod \"certified-operators-brcbq\" (UID: \"93c4cbb3-924e-4cf1-8022-032989597ad0\") " pod="openshift-marketplace/certified-operators-brcbq" Jan 29 08:41:17 crc kubenswrapper[5017]: I0129 08:41:17.796370 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-brcbq" Jan 29 08:41:18 crc kubenswrapper[5017]: I0129 08:41:18.414928 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-brcbq"] Jan 29 08:41:19 crc kubenswrapper[5017]: I0129 08:41:19.186547 5017 generic.go:334] "Generic (PLEG): container finished" podID="93c4cbb3-924e-4cf1-8022-032989597ad0" containerID="c3a8d2fb074e8d78237cba19d1f04888ee9dfb06b8eb145923d1a161b20cef47" exitCode=0 Jan 29 08:41:19 crc kubenswrapper[5017]: I0129 08:41:19.187030 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-brcbq" event={"ID":"93c4cbb3-924e-4cf1-8022-032989597ad0","Type":"ContainerDied","Data":"c3a8d2fb074e8d78237cba19d1f04888ee9dfb06b8eb145923d1a161b20cef47"} Jan 29 08:41:19 crc kubenswrapper[5017]: I0129 08:41:19.187167 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-brcbq" event={"ID":"93c4cbb3-924e-4cf1-8022-032989597ad0","Type":"ContainerStarted","Data":"827401b4cf283ef16bea64db489a2c1647c4db280df6142c14e12fd7f68ccdc6"} Jan 29 08:41:20 crc kubenswrapper[5017]: I0129 08:41:20.463489 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-znqww"] Jan 29 08:41:20 crc kubenswrapper[5017]: I0129 08:41:20.468502 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-znqww" Jan 29 08:41:20 crc kubenswrapper[5017]: I0129 08:41:20.476535 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-znqww"] Jan 29 08:41:20 crc kubenswrapper[5017]: I0129 08:41:20.568158 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlt79\" (UniqueName: \"kubernetes.io/projected/1684eb2c-6ad4-465d-a9dd-1ce71e9f038f-kube-api-access-nlt79\") pod \"redhat-operators-znqww\" (UID: \"1684eb2c-6ad4-465d-a9dd-1ce71e9f038f\") " pod="openshift-marketplace/redhat-operators-znqww" Jan 29 08:41:20 crc kubenswrapper[5017]: I0129 08:41:20.568822 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1684eb2c-6ad4-465d-a9dd-1ce71e9f038f-catalog-content\") pod \"redhat-operators-znqww\" (UID: \"1684eb2c-6ad4-465d-a9dd-1ce71e9f038f\") " pod="openshift-marketplace/redhat-operators-znqww" Jan 29 08:41:20 crc kubenswrapper[5017]: I0129 08:41:20.568944 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1684eb2c-6ad4-465d-a9dd-1ce71e9f038f-utilities\") pod \"redhat-operators-znqww\" (UID: \"1684eb2c-6ad4-465d-a9dd-1ce71e9f038f\") " pod="openshift-marketplace/redhat-operators-znqww" Jan 29 08:41:20 crc kubenswrapper[5017]: I0129 08:41:20.671415 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1684eb2c-6ad4-465d-a9dd-1ce71e9f038f-catalog-content\") pod \"redhat-operators-znqww\" (UID: \"1684eb2c-6ad4-465d-a9dd-1ce71e9f038f\") " pod="openshift-marketplace/redhat-operators-znqww" Jan 29 08:41:20 crc kubenswrapper[5017]: I0129 08:41:20.671484 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1684eb2c-6ad4-465d-a9dd-1ce71e9f038f-utilities\") pod \"redhat-operators-znqww\" (UID: \"1684eb2c-6ad4-465d-a9dd-1ce71e9f038f\") " pod="openshift-marketplace/redhat-operators-znqww" Jan 29 08:41:20 crc kubenswrapper[5017]: I0129 08:41:20.671525 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nlt79\" (UniqueName: \"kubernetes.io/projected/1684eb2c-6ad4-465d-a9dd-1ce71e9f038f-kube-api-access-nlt79\") pod \"redhat-operators-znqww\" (UID: \"1684eb2c-6ad4-465d-a9dd-1ce71e9f038f\") " pod="openshift-marketplace/redhat-operators-znqww" Jan 29 08:41:20 crc kubenswrapper[5017]: I0129 08:41:20.672534 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1684eb2c-6ad4-465d-a9dd-1ce71e9f038f-catalog-content\") pod \"redhat-operators-znqww\" (UID: \"1684eb2c-6ad4-465d-a9dd-1ce71e9f038f\") " pod="openshift-marketplace/redhat-operators-znqww" Jan 29 08:41:20 crc kubenswrapper[5017]: I0129 08:41:20.672539 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1684eb2c-6ad4-465d-a9dd-1ce71e9f038f-utilities\") pod \"redhat-operators-znqww\" (UID: \"1684eb2c-6ad4-465d-a9dd-1ce71e9f038f\") " pod="openshift-marketplace/redhat-operators-znqww" Jan 29 08:41:20 crc kubenswrapper[5017]: I0129 08:41:20.697414 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-nlt79\" (UniqueName: \"kubernetes.io/projected/1684eb2c-6ad4-465d-a9dd-1ce71e9f038f-kube-api-access-nlt79\") pod \"redhat-operators-znqww\" (UID: \"1684eb2c-6ad4-465d-a9dd-1ce71e9f038f\") " pod="openshift-marketplace/redhat-operators-znqww" Jan 29 08:41:20 crc kubenswrapper[5017]: I0129 08:41:20.829714 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-znqww" Jan 29 08:41:21 crc kubenswrapper[5017]: I0129 08:41:21.219865 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-brcbq" event={"ID":"93c4cbb3-924e-4cf1-8022-032989597ad0","Type":"ContainerStarted","Data":"5cf30bc53fd021dea1e27be063b89efd73575889799905e55f452d2db25592f8"} Jan 29 08:41:21 crc kubenswrapper[5017]: I0129 08:41:21.368636 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-znqww"] Jan 29 08:41:22 crc kubenswrapper[5017]: I0129 08:41:22.269098 5017 generic.go:334] "Generic (PLEG): container finished" podID="93c4cbb3-924e-4cf1-8022-032989597ad0" containerID="5cf30bc53fd021dea1e27be063b89efd73575889799905e55f452d2db25592f8" exitCode=0 Jan 29 08:41:22 crc kubenswrapper[5017]: I0129 08:41:22.269739 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-brcbq" event={"ID":"93c4cbb3-924e-4cf1-8022-032989597ad0","Type":"ContainerDied","Data":"5cf30bc53fd021dea1e27be063b89efd73575889799905e55f452d2db25592f8"} Jan 29 08:41:22 crc kubenswrapper[5017]: I0129 08:41:22.280939 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-znqww" event={"ID":"1684eb2c-6ad4-465d-a9dd-1ce71e9f038f","Type":"ContainerStarted","Data":"47cd6d267bb9c99f09cad1351343d2ede90597c189c8595b4d81f5a9d24dcbea"} Jan 29 08:41:23 crc kubenswrapper[5017]: I0129 08:41:23.291950 5017 generic.go:334] "Generic (PLEG): container finished" podID="1684eb2c-6ad4-465d-a9dd-1ce71e9f038f" containerID="7726e1c8a6370eb6cb591f55448161d85653452bf46a8dbe6f53c8264ed55b3a" exitCode=0 Jan 29 08:41:23 crc kubenswrapper[5017]: I0129 08:41:23.292068 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-znqww" event={"ID":"1684eb2c-6ad4-465d-a9dd-1ce71e9f038f","Type":"ContainerDied","Data":"7726e1c8a6370eb6cb591f55448161d85653452bf46a8dbe6f53c8264ed55b3a"} Jan 29 08:41:23 crc kubenswrapper[5017]: I0129 08:41:23.298763 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-brcbq" event={"ID":"93c4cbb3-924e-4cf1-8022-032989597ad0","Type":"ContainerStarted","Data":"f166939d0e2c0bbbfb19373e22da3d8ea0c10a21d5ee8a89526facf6d74f3035"} Jan 29 08:41:23 crc kubenswrapper[5017]: I0129 08:41:23.340251 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-brcbq" podStartSLOduration=2.846470241 podStartE2EDuration="6.340223966s" podCreationTimestamp="2026-01-29 08:41:17 +0000 UTC" firstStartedPulling="2026-01-29 08:41:19.196009938 +0000 UTC m=+7565.570457548" lastFinishedPulling="2026-01-29 08:41:22.689763663 +0000 UTC m=+7569.064211273" observedRunningTime="2026-01-29 08:41:23.33672228 +0000 UTC m=+7569.711169910" watchObservedRunningTime="2026-01-29 08:41:23.340223966 +0000 UTC m=+7569.714671576" Jan 29 08:41:24 crc kubenswrapper[5017]: I0129 08:41:24.310672 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-znqww" 
event={"ID":"1684eb2c-6ad4-465d-a9dd-1ce71e9f038f","Type":"ContainerStarted","Data":"3efd5535eb8c7f5748aaa8ca16e357e1ff8fd2285f56def41f0257d024342ba9"} Jan 29 08:41:26 crc kubenswrapper[5017]: I0129 08:41:26.539469 5017 patch_prober.go:28] interesting pod/machine-config-daemon-895pl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 08:41:26 crc kubenswrapper[5017]: I0129 08:41:26.540346 5017 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 08:41:26 crc kubenswrapper[5017]: I0129 08:41:26.540392 5017 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-895pl" Jan 29 08:41:26 crc kubenswrapper[5017]: I0129 08:41:26.541381 5017 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"529e90843cc8236dc8610886df83447cc84a03878dfc7968f456b6e4cf77b3e9"} pod="openshift-machine-config-operator/machine-config-daemon-895pl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 08:41:26 crc kubenswrapper[5017]: I0129 08:41:26.541439 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" containerID="cri-o://529e90843cc8236dc8610886df83447cc84a03878dfc7968f456b6e4cf77b3e9" gracePeriod=600 Jan 29 08:41:27 crc kubenswrapper[5017]: I0129 08:41:27.343479 5017 generic.go:334] "Generic (PLEG): container finished" podID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerID="529e90843cc8236dc8610886df83447cc84a03878dfc7968f456b6e4cf77b3e9" exitCode=0 Jan 29 08:41:27 crc kubenswrapper[5017]: I0129 08:41:27.343547 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-895pl" event={"ID":"2672ef63-7861-4c3d-a1b4-03cc9d18f8e2","Type":"ContainerDied","Data":"529e90843cc8236dc8610886df83447cc84a03878dfc7968f456b6e4cf77b3e9"} Jan 29 08:41:27 crc kubenswrapper[5017]: I0129 08:41:27.344023 5017 scope.go:117] "RemoveContainer" containerID="29716ea24128bdf6ee46dfb15f1f1dfd3446061da8e9bec6ed837055b76aede7" Jan 29 08:41:27 crc kubenswrapper[5017]: I0129 08:41:27.796950 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-brcbq" Jan 29 08:41:27 crc kubenswrapper[5017]: I0129 08:41:27.797678 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-brcbq" Jan 29 08:41:27 crc kubenswrapper[5017]: I0129 08:41:27.859923 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-brcbq" Jan 29 08:41:28 crc kubenswrapper[5017]: I0129 08:41:28.356250 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-895pl" 
event={"ID":"2672ef63-7861-4c3d-a1b4-03cc9d18f8e2","Type":"ContainerStarted","Data":"9d0a7fd71d9f91170d4639ccbbd832deecb27b1123db941414d97c11d3459063"} Jan 29 08:41:28 crc kubenswrapper[5017]: I0129 08:41:28.412827 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-brcbq" Jan 29 08:41:28 crc kubenswrapper[5017]: I0129 08:41:28.479631 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-brcbq"] Jan 29 08:41:30 crc kubenswrapper[5017]: I0129 08:41:30.381995 5017 generic.go:334] "Generic (PLEG): container finished" podID="1684eb2c-6ad4-465d-a9dd-1ce71e9f038f" containerID="3efd5535eb8c7f5748aaa8ca16e357e1ff8fd2285f56def41f0257d024342ba9" exitCode=0 Jan 29 08:41:30 crc kubenswrapper[5017]: I0129 08:41:30.382053 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-znqww" event={"ID":"1684eb2c-6ad4-465d-a9dd-1ce71e9f038f","Type":"ContainerDied","Data":"3efd5535eb8c7f5748aaa8ca16e357e1ff8fd2285f56def41f0257d024342ba9"} Jan 29 08:41:30 crc kubenswrapper[5017]: I0129 08:41:30.383148 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-brcbq" podUID="93c4cbb3-924e-4cf1-8022-032989597ad0" containerName="registry-server" containerID="cri-o://f166939d0e2c0bbbfb19373e22da3d8ea0c10a21d5ee8a89526facf6d74f3035" gracePeriod=2 Jan 29 08:41:31 crc kubenswrapper[5017]: I0129 08:41:31.067912 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-brcbq" Jan 29 08:41:31 crc kubenswrapper[5017]: I0129 08:41:31.244258 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93c4cbb3-924e-4cf1-8022-032989597ad0-utilities\") pod \"93c4cbb3-924e-4cf1-8022-032989597ad0\" (UID: \"93c4cbb3-924e-4cf1-8022-032989597ad0\") " Jan 29 08:41:31 crc kubenswrapper[5017]: I0129 08:41:31.244420 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7pk6f\" (UniqueName: \"kubernetes.io/projected/93c4cbb3-924e-4cf1-8022-032989597ad0-kube-api-access-7pk6f\") pod \"93c4cbb3-924e-4cf1-8022-032989597ad0\" (UID: \"93c4cbb3-924e-4cf1-8022-032989597ad0\") " Jan 29 08:41:31 crc kubenswrapper[5017]: I0129 08:41:31.244455 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93c4cbb3-924e-4cf1-8022-032989597ad0-catalog-content\") pod \"93c4cbb3-924e-4cf1-8022-032989597ad0\" (UID: \"93c4cbb3-924e-4cf1-8022-032989597ad0\") " Jan 29 08:41:31 crc kubenswrapper[5017]: I0129 08:41:31.244892 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/93c4cbb3-924e-4cf1-8022-032989597ad0-utilities" (OuterVolumeSpecName: "utilities") pod "93c4cbb3-924e-4cf1-8022-032989597ad0" (UID: "93c4cbb3-924e-4cf1-8022-032989597ad0"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:41:31 crc kubenswrapper[5017]: I0129 08:41:31.245356 5017 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93c4cbb3-924e-4cf1-8022-032989597ad0-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:31 crc kubenswrapper[5017]: I0129 08:41:31.265325 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93c4cbb3-924e-4cf1-8022-032989597ad0-kube-api-access-7pk6f" (OuterVolumeSpecName: "kube-api-access-7pk6f") pod "93c4cbb3-924e-4cf1-8022-032989597ad0" (UID: "93c4cbb3-924e-4cf1-8022-032989597ad0"). InnerVolumeSpecName "kube-api-access-7pk6f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:41:31 crc kubenswrapper[5017]: I0129 08:41:31.294895 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/93c4cbb3-924e-4cf1-8022-032989597ad0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "93c4cbb3-924e-4cf1-8022-032989597ad0" (UID: "93c4cbb3-924e-4cf1-8022-032989597ad0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:41:31 crc kubenswrapper[5017]: I0129 08:41:31.347855 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7pk6f\" (UniqueName: \"kubernetes.io/projected/93c4cbb3-924e-4cf1-8022-032989597ad0-kube-api-access-7pk6f\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:31 crc kubenswrapper[5017]: I0129 08:41:31.347909 5017 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93c4cbb3-924e-4cf1-8022-032989597ad0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:31 crc kubenswrapper[5017]: I0129 08:41:31.395187 5017 generic.go:334] "Generic (PLEG): container finished" podID="93c4cbb3-924e-4cf1-8022-032989597ad0" containerID="f166939d0e2c0bbbfb19373e22da3d8ea0c10a21d5ee8a89526facf6d74f3035" exitCode=0 Jan 29 08:41:31 crc kubenswrapper[5017]: I0129 08:41:31.395327 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-brcbq" Jan 29 08:41:31 crc kubenswrapper[5017]: I0129 08:41:31.396515 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-brcbq" event={"ID":"93c4cbb3-924e-4cf1-8022-032989597ad0","Type":"ContainerDied","Data":"f166939d0e2c0bbbfb19373e22da3d8ea0c10a21d5ee8a89526facf6d74f3035"} Jan 29 08:41:31 crc kubenswrapper[5017]: I0129 08:41:31.396610 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-brcbq" event={"ID":"93c4cbb3-924e-4cf1-8022-032989597ad0","Type":"ContainerDied","Data":"827401b4cf283ef16bea64db489a2c1647c4db280df6142c14e12fd7f68ccdc6"} Jan 29 08:41:31 crc kubenswrapper[5017]: I0129 08:41:31.396687 5017 scope.go:117] "RemoveContainer" containerID="f166939d0e2c0bbbfb19373e22da3d8ea0c10a21d5ee8a89526facf6d74f3035" Jan 29 08:41:31 crc kubenswrapper[5017]: I0129 08:41:31.400689 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-znqww" event={"ID":"1684eb2c-6ad4-465d-a9dd-1ce71e9f038f","Type":"ContainerStarted","Data":"fb043ad80e52dcd466a8532634b29e406c17e1bf8d350e58fbe1a1f70e7cab0b"} Jan 29 08:41:31 crc kubenswrapper[5017]: I0129 08:41:31.420828 5017 scope.go:117] "RemoveContainer" containerID="5cf30bc53fd021dea1e27be063b89efd73575889799905e55f452d2db25592f8" Jan 29 08:41:31 crc kubenswrapper[5017]: I0129 08:41:31.425900 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-znqww" podStartSLOduration=3.884272212 podStartE2EDuration="11.425681388s" podCreationTimestamp="2026-01-29 08:41:20 +0000 UTC" firstStartedPulling="2026-01-29 08:41:23.29488631 +0000 UTC m=+7569.669333920" lastFinishedPulling="2026-01-29 08:41:30.836295486 +0000 UTC m=+7577.210743096" observedRunningTime="2026-01-29 08:41:31.423193788 +0000 UTC m=+7577.797641408" watchObservedRunningTime="2026-01-29 08:41:31.425681388 +0000 UTC m=+7577.800128998" Jan 29 08:41:31 crc kubenswrapper[5017]: I0129 08:41:31.462532 5017 scope.go:117] "RemoveContainer" containerID="c3a8d2fb074e8d78237cba19d1f04888ee9dfb06b8eb145923d1a161b20cef47" Jan 29 08:41:31 crc kubenswrapper[5017]: I0129 08:41:31.465043 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-brcbq"] Jan 29 08:41:31 crc kubenswrapper[5017]: I0129 08:41:31.478013 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-brcbq"] Jan 29 08:41:31 crc kubenswrapper[5017]: I0129 08:41:31.523050 5017 scope.go:117] "RemoveContainer" containerID="f166939d0e2c0bbbfb19373e22da3d8ea0c10a21d5ee8a89526facf6d74f3035" Jan 29 08:41:31 crc kubenswrapper[5017]: E0129 08:41:31.524264 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f166939d0e2c0bbbfb19373e22da3d8ea0c10a21d5ee8a89526facf6d74f3035\": container with ID starting with f166939d0e2c0bbbfb19373e22da3d8ea0c10a21d5ee8a89526facf6d74f3035 not found: ID does not exist" containerID="f166939d0e2c0bbbfb19373e22da3d8ea0c10a21d5ee8a89526facf6d74f3035" Jan 29 08:41:31 crc kubenswrapper[5017]: I0129 08:41:31.524349 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f166939d0e2c0bbbfb19373e22da3d8ea0c10a21d5ee8a89526facf6d74f3035"} err="failed to get container status \"f166939d0e2c0bbbfb19373e22da3d8ea0c10a21d5ee8a89526facf6d74f3035\": rpc error: code 
= NotFound desc = could not find container \"f166939d0e2c0bbbfb19373e22da3d8ea0c10a21d5ee8a89526facf6d74f3035\": container with ID starting with f166939d0e2c0bbbfb19373e22da3d8ea0c10a21d5ee8a89526facf6d74f3035 not found: ID does not exist" Jan 29 08:41:31 crc kubenswrapper[5017]: I0129 08:41:31.524400 5017 scope.go:117] "RemoveContainer" containerID="5cf30bc53fd021dea1e27be063b89efd73575889799905e55f452d2db25592f8" Jan 29 08:41:31 crc kubenswrapper[5017]: E0129 08:41:31.525048 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5cf30bc53fd021dea1e27be063b89efd73575889799905e55f452d2db25592f8\": container with ID starting with 5cf30bc53fd021dea1e27be063b89efd73575889799905e55f452d2db25592f8 not found: ID does not exist" containerID="5cf30bc53fd021dea1e27be063b89efd73575889799905e55f452d2db25592f8" Jan 29 08:41:31 crc kubenswrapper[5017]: I0129 08:41:31.525098 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5cf30bc53fd021dea1e27be063b89efd73575889799905e55f452d2db25592f8"} err="failed to get container status \"5cf30bc53fd021dea1e27be063b89efd73575889799905e55f452d2db25592f8\": rpc error: code = NotFound desc = could not find container \"5cf30bc53fd021dea1e27be063b89efd73575889799905e55f452d2db25592f8\": container with ID starting with 5cf30bc53fd021dea1e27be063b89efd73575889799905e55f452d2db25592f8 not found: ID does not exist" Jan 29 08:41:31 crc kubenswrapper[5017]: I0129 08:41:31.525135 5017 scope.go:117] "RemoveContainer" containerID="c3a8d2fb074e8d78237cba19d1f04888ee9dfb06b8eb145923d1a161b20cef47" Jan 29 08:41:31 crc kubenswrapper[5017]: E0129 08:41:31.525501 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3a8d2fb074e8d78237cba19d1f04888ee9dfb06b8eb145923d1a161b20cef47\": container with ID starting with c3a8d2fb074e8d78237cba19d1f04888ee9dfb06b8eb145923d1a161b20cef47 not found: ID does not exist" containerID="c3a8d2fb074e8d78237cba19d1f04888ee9dfb06b8eb145923d1a161b20cef47" Jan 29 08:41:31 crc kubenswrapper[5017]: I0129 08:41:31.525541 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3a8d2fb074e8d78237cba19d1f04888ee9dfb06b8eb145923d1a161b20cef47"} err="failed to get container status \"c3a8d2fb074e8d78237cba19d1f04888ee9dfb06b8eb145923d1a161b20cef47\": rpc error: code = NotFound desc = could not find container \"c3a8d2fb074e8d78237cba19d1f04888ee9dfb06b8eb145923d1a161b20cef47\": container with ID starting with c3a8d2fb074e8d78237cba19d1f04888ee9dfb06b8eb145923d1a161b20cef47 not found: ID does not exist" Jan 29 08:41:32 crc kubenswrapper[5017]: I0129 08:41:32.329453 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93c4cbb3-924e-4cf1-8022-032989597ad0" path="/var/lib/kubelet/pods/93c4cbb3-924e-4cf1-8022-032989597ad0/volumes" Jan 29 08:41:40 crc kubenswrapper[5017]: I0129 08:41:40.830612 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-znqww" Jan 29 08:41:40 crc kubenswrapper[5017]: I0129 08:41:40.832715 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-znqww" Jan 29 08:41:41 crc kubenswrapper[5017]: I0129 08:41:41.888574 5017 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-znqww" podUID="1684eb2c-6ad4-465d-a9dd-1ce71e9f038f" 
containerName="registry-server" probeResult="failure" output=< Jan 29 08:41:41 crc kubenswrapper[5017]: timeout: failed to connect service ":50051" within 1s Jan 29 08:41:41 crc kubenswrapper[5017]: > Jan 29 08:41:50 crc kubenswrapper[5017]: I0129 08:41:50.882407 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-znqww" Jan 29 08:41:50 crc kubenswrapper[5017]: I0129 08:41:50.943042 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-znqww" Jan 29 08:41:51 crc kubenswrapper[5017]: I0129 08:41:51.669508 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-znqww"] Jan 29 08:41:52 crc kubenswrapper[5017]: I0129 08:41:52.616757 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-znqww" podUID="1684eb2c-6ad4-465d-a9dd-1ce71e9f038f" containerName="registry-server" containerID="cri-o://fb043ad80e52dcd466a8532634b29e406c17e1bf8d350e58fbe1a1f70e7cab0b" gracePeriod=2 Jan 29 08:41:53 crc kubenswrapper[5017]: I0129 08:41:53.189124 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-znqww" Jan 29 08:41:53 crc kubenswrapper[5017]: I0129 08:41:53.288269 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1684eb2c-6ad4-465d-a9dd-1ce71e9f038f-utilities\") pod \"1684eb2c-6ad4-465d-a9dd-1ce71e9f038f\" (UID: \"1684eb2c-6ad4-465d-a9dd-1ce71e9f038f\") " Jan 29 08:41:53 crc kubenswrapper[5017]: I0129 08:41:53.288602 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1684eb2c-6ad4-465d-a9dd-1ce71e9f038f-catalog-content\") pod \"1684eb2c-6ad4-465d-a9dd-1ce71e9f038f\" (UID: \"1684eb2c-6ad4-465d-a9dd-1ce71e9f038f\") " Jan 29 08:41:53 crc kubenswrapper[5017]: I0129 08:41:53.288839 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nlt79\" (UniqueName: \"kubernetes.io/projected/1684eb2c-6ad4-465d-a9dd-1ce71e9f038f-kube-api-access-nlt79\") pod \"1684eb2c-6ad4-465d-a9dd-1ce71e9f038f\" (UID: \"1684eb2c-6ad4-465d-a9dd-1ce71e9f038f\") " Jan 29 08:41:53 crc kubenswrapper[5017]: I0129 08:41:53.289372 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1684eb2c-6ad4-465d-a9dd-1ce71e9f038f-utilities" (OuterVolumeSpecName: "utilities") pod "1684eb2c-6ad4-465d-a9dd-1ce71e9f038f" (UID: "1684eb2c-6ad4-465d-a9dd-1ce71e9f038f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:41:53 crc kubenswrapper[5017]: I0129 08:41:53.290390 5017 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1684eb2c-6ad4-465d-a9dd-1ce71e9f038f-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:53 crc kubenswrapper[5017]: I0129 08:41:53.295681 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1684eb2c-6ad4-465d-a9dd-1ce71e9f038f-kube-api-access-nlt79" (OuterVolumeSpecName: "kube-api-access-nlt79") pod "1684eb2c-6ad4-465d-a9dd-1ce71e9f038f" (UID: "1684eb2c-6ad4-465d-a9dd-1ce71e9f038f"). InnerVolumeSpecName "kube-api-access-nlt79". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:41:53 crc kubenswrapper[5017]: I0129 08:41:53.392867 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nlt79\" (UniqueName: \"kubernetes.io/projected/1684eb2c-6ad4-465d-a9dd-1ce71e9f038f-kube-api-access-nlt79\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:53 crc kubenswrapper[5017]: I0129 08:41:53.428640 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1684eb2c-6ad4-465d-a9dd-1ce71e9f038f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1684eb2c-6ad4-465d-a9dd-1ce71e9f038f" (UID: "1684eb2c-6ad4-465d-a9dd-1ce71e9f038f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:41:53 crc kubenswrapper[5017]: I0129 08:41:53.497398 5017 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1684eb2c-6ad4-465d-a9dd-1ce71e9f038f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:53 crc kubenswrapper[5017]: I0129 08:41:53.628646 5017 generic.go:334] "Generic (PLEG): container finished" podID="1684eb2c-6ad4-465d-a9dd-1ce71e9f038f" containerID="fb043ad80e52dcd466a8532634b29e406c17e1bf8d350e58fbe1a1f70e7cab0b" exitCode=0 Jan 29 08:41:53 crc kubenswrapper[5017]: I0129 08:41:53.628709 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-znqww" event={"ID":"1684eb2c-6ad4-465d-a9dd-1ce71e9f038f","Type":"ContainerDied","Data":"fb043ad80e52dcd466a8532634b29e406c17e1bf8d350e58fbe1a1f70e7cab0b"} Jan 29 08:41:53 crc kubenswrapper[5017]: I0129 08:41:53.628746 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-znqww" event={"ID":"1684eb2c-6ad4-465d-a9dd-1ce71e9f038f","Type":"ContainerDied","Data":"47cd6d267bb9c99f09cad1351343d2ede90597c189c8595b4d81f5a9d24dcbea"} Jan 29 08:41:53 crc kubenswrapper[5017]: I0129 08:41:53.628765 5017 scope.go:117] "RemoveContainer" containerID="fb043ad80e52dcd466a8532634b29e406c17e1bf8d350e58fbe1a1f70e7cab0b" Jan 29 08:41:53 crc kubenswrapper[5017]: I0129 08:41:53.628919 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-znqww" Jan 29 08:41:53 crc kubenswrapper[5017]: I0129 08:41:53.659196 5017 scope.go:117] "RemoveContainer" containerID="3efd5535eb8c7f5748aaa8ca16e357e1ff8fd2285f56def41f0257d024342ba9" Jan 29 08:41:53 crc kubenswrapper[5017]: I0129 08:41:53.662087 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-znqww"] Jan 29 08:41:53 crc kubenswrapper[5017]: I0129 08:41:53.672325 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-znqww"] Jan 29 08:41:53 crc kubenswrapper[5017]: I0129 08:41:53.690120 5017 scope.go:117] "RemoveContainer" containerID="7726e1c8a6370eb6cb591f55448161d85653452bf46a8dbe6f53c8264ed55b3a" Jan 29 08:41:53 crc kubenswrapper[5017]: I0129 08:41:53.765779 5017 scope.go:117] "RemoveContainer" containerID="fb043ad80e52dcd466a8532634b29e406c17e1bf8d350e58fbe1a1f70e7cab0b" Jan 29 08:41:53 crc kubenswrapper[5017]: E0129 08:41:53.766654 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb043ad80e52dcd466a8532634b29e406c17e1bf8d350e58fbe1a1f70e7cab0b\": container with ID starting with fb043ad80e52dcd466a8532634b29e406c17e1bf8d350e58fbe1a1f70e7cab0b not found: ID does not exist" containerID="fb043ad80e52dcd466a8532634b29e406c17e1bf8d350e58fbe1a1f70e7cab0b" Jan 29 08:41:53 crc kubenswrapper[5017]: I0129 08:41:53.766685 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb043ad80e52dcd466a8532634b29e406c17e1bf8d350e58fbe1a1f70e7cab0b"} err="failed to get container status \"fb043ad80e52dcd466a8532634b29e406c17e1bf8d350e58fbe1a1f70e7cab0b\": rpc error: code = NotFound desc = could not find container \"fb043ad80e52dcd466a8532634b29e406c17e1bf8d350e58fbe1a1f70e7cab0b\": container with ID starting with fb043ad80e52dcd466a8532634b29e406c17e1bf8d350e58fbe1a1f70e7cab0b not found: ID does not exist" Jan 29 08:41:53 crc kubenswrapper[5017]: I0129 08:41:53.766707 5017 scope.go:117] "RemoveContainer" containerID="3efd5535eb8c7f5748aaa8ca16e357e1ff8fd2285f56def41f0257d024342ba9" Jan 29 08:41:53 crc kubenswrapper[5017]: E0129 08:41:53.767014 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3efd5535eb8c7f5748aaa8ca16e357e1ff8fd2285f56def41f0257d024342ba9\": container with ID starting with 3efd5535eb8c7f5748aaa8ca16e357e1ff8fd2285f56def41f0257d024342ba9 not found: ID does not exist" containerID="3efd5535eb8c7f5748aaa8ca16e357e1ff8fd2285f56def41f0257d024342ba9" Jan 29 08:41:53 crc kubenswrapper[5017]: I0129 08:41:53.767034 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3efd5535eb8c7f5748aaa8ca16e357e1ff8fd2285f56def41f0257d024342ba9"} err="failed to get container status \"3efd5535eb8c7f5748aaa8ca16e357e1ff8fd2285f56def41f0257d024342ba9\": rpc error: code = NotFound desc = could not find container \"3efd5535eb8c7f5748aaa8ca16e357e1ff8fd2285f56def41f0257d024342ba9\": container with ID starting with 3efd5535eb8c7f5748aaa8ca16e357e1ff8fd2285f56def41f0257d024342ba9 not found: ID does not exist" Jan 29 08:41:53 crc kubenswrapper[5017]: I0129 08:41:53.767048 5017 scope.go:117] "RemoveContainer" containerID="7726e1c8a6370eb6cb591f55448161d85653452bf46a8dbe6f53c8264ed55b3a" Jan 29 08:41:53 crc kubenswrapper[5017]: E0129 08:41:53.767395 5017 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"7726e1c8a6370eb6cb591f55448161d85653452bf46a8dbe6f53c8264ed55b3a\": container with ID starting with 7726e1c8a6370eb6cb591f55448161d85653452bf46a8dbe6f53c8264ed55b3a not found: ID does not exist" containerID="7726e1c8a6370eb6cb591f55448161d85653452bf46a8dbe6f53c8264ed55b3a" Jan 29 08:41:53 crc kubenswrapper[5017]: I0129 08:41:53.767442 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7726e1c8a6370eb6cb591f55448161d85653452bf46a8dbe6f53c8264ed55b3a"} err="failed to get container status \"7726e1c8a6370eb6cb591f55448161d85653452bf46a8dbe6f53c8264ed55b3a\": rpc error: code = NotFound desc = could not find container \"7726e1c8a6370eb6cb591f55448161d85653452bf46a8dbe6f53c8264ed55b3a\": container with ID starting with 7726e1c8a6370eb6cb591f55448161d85653452bf46a8dbe6f53c8264ed55b3a not found: ID does not exist" Jan 29 08:41:54 crc kubenswrapper[5017]: I0129 08:41:54.327389 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1684eb2c-6ad4-465d-a9dd-1ce71e9f038f" path="/var/lib/kubelet/pods/1684eb2c-6ad4-465d-a9dd-1ce71e9f038f/volumes" Jan 29 08:43:56 crc kubenswrapper[5017]: I0129 08:43:56.539518 5017 patch_prober.go:28] interesting pod/machine-config-daemon-895pl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 08:43:56 crc kubenswrapper[5017]: I0129 08:43:56.540374 5017 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 08:44:26 crc kubenswrapper[5017]: I0129 08:44:26.538875 5017 patch_prober.go:28] interesting pod/machine-config-daemon-895pl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 08:44:26 crc kubenswrapper[5017]: I0129 08:44:26.539764 5017 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 08:44:56 crc kubenswrapper[5017]: I0129 08:44:56.540212 5017 patch_prober.go:28] interesting pod/machine-config-daemon-895pl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 08:44:56 crc kubenswrapper[5017]: I0129 08:44:56.541125 5017 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 08:44:56 crc kubenswrapper[5017]: I0129 08:44:56.541187 5017 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-895pl" Jan 29 08:44:56 crc kubenswrapper[5017]: I0129 08:44:56.542231 5017 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9d0a7fd71d9f91170d4639ccbbd832deecb27b1123db941414d97c11d3459063"} pod="openshift-machine-config-operator/machine-config-daemon-895pl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 08:44:56 crc kubenswrapper[5017]: I0129 08:44:56.542295 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" containerID="cri-o://9d0a7fd71d9f91170d4639ccbbd832deecb27b1123db941414d97c11d3459063" gracePeriod=600 Jan 29 08:44:56 crc kubenswrapper[5017]: E0129 08:44:56.672821 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:44:56 crc kubenswrapper[5017]: I0129 08:44:56.867550 5017 generic.go:334] "Generic (PLEG): container finished" podID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerID="9d0a7fd71d9f91170d4639ccbbd832deecb27b1123db941414d97c11d3459063" exitCode=0 Jan 29 08:44:56 crc kubenswrapper[5017]: I0129 08:44:56.867616 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-895pl" event={"ID":"2672ef63-7861-4c3d-a1b4-03cc9d18f8e2","Type":"ContainerDied","Data":"9d0a7fd71d9f91170d4639ccbbd832deecb27b1123db941414d97c11d3459063"} Jan 29 08:44:56 crc kubenswrapper[5017]: I0129 08:44:56.867672 5017 scope.go:117] "RemoveContainer" containerID="529e90843cc8236dc8610886df83447cc84a03878dfc7968f456b6e4cf77b3e9" Jan 29 08:44:56 crc kubenswrapper[5017]: I0129 08:44:56.868883 5017 scope.go:117] "RemoveContainer" containerID="9d0a7fd71d9f91170d4639ccbbd832deecb27b1123db941414d97c11d3459063" Jan 29 08:44:56 crc kubenswrapper[5017]: E0129 08:44:56.869343 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:44:59 crc kubenswrapper[5017]: I0129 08:44:59.912164 5017 generic.go:334] "Generic (PLEG): container finished" podID="1bcb6248-59f7-4d82-a9f4-ce837e9a9ef4" containerID="bb990439bd15c0935551577a483864b24f0147972bab21dd279bd2e9a1622428" exitCode=0 Jan 29 08:44:59 crc kubenswrapper[5017]: I0129 08:44:59.912300 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-openstack-openstack-cell1-2t5ls" event={"ID":"1bcb6248-59f7-4d82-a9f4-ce837e9a9ef4","Type":"ContainerDied","Data":"bb990439bd15c0935551577a483864b24f0147972bab21dd279bd2e9a1622428"} Jan 29 08:45:00 crc kubenswrapper[5017]: I0129 08:45:00.173031 5017 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-operator-lifecycle-manager/collect-profiles-29494605-hhjnm"] Jan 29 08:45:00 crc kubenswrapper[5017]: E0129 08:45:00.173620 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1684eb2c-6ad4-465d-a9dd-1ce71e9f038f" containerName="registry-server" Jan 29 08:45:00 crc kubenswrapper[5017]: I0129 08:45:00.173648 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="1684eb2c-6ad4-465d-a9dd-1ce71e9f038f" containerName="registry-server" Jan 29 08:45:00 crc kubenswrapper[5017]: E0129 08:45:00.173669 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93c4cbb3-924e-4cf1-8022-032989597ad0" containerName="extract-utilities" Jan 29 08:45:00 crc kubenswrapper[5017]: I0129 08:45:00.173679 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="93c4cbb3-924e-4cf1-8022-032989597ad0" containerName="extract-utilities" Jan 29 08:45:00 crc kubenswrapper[5017]: E0129 08:45:00.173701 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1684eb2c-6ad4-465d-a9dd-1ce71e9f038f" containerName="extract-content" Jan 29 08:45:00 crc kubenswrapper[5017]: I0129 08:45:00.173709 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="1684eb2c-6ad4-465d-a9dd-1ce71e9f038f" containerName="extract-content" Jan 29 08:45:00 crc kubenswrapper[5017]: E0129 08:45:00.173728 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93c4cbb3-924e-4cf1-8022-032989597ad0" containerName="registry-server" Jan 29 08:45:00 crc kubenswrapper[5017]: I0129 08:45:00.173737 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="93c4cbb3-924e-4cf1-8022-032989597ad0" containerName="registry-server" Jan 29 08:45:00 crc kubenswrapper[5017]: E0129 08:45:00.173773 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93c4cbb3-924e-4cf1-8022-032989597ad0" containerName="extract-content" Jan 29 08:45:00 crc kubenswrapper[5017]: I0129 08:45:00.173782 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="93c4cbb3-924e-4cf1-8022-032989597ad0" containerName="extract-content" Jan 29 08:45:00 crc kubenswrapper[5017]: E0129 08:45:00.173810 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1684eb2c-6ad4-465d-a9dd-1ce71e9f038f" containerName="extract-utilities" Jan 29 08:45:00 crc kubenswrapper[5017]: I0129 08:45:00.173818 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="1684eb2c-6ad4-465d-a9dd-1ce71e9f038f" containerName="extract-utilities" Jan 29 08:45:00 crc kubenswrapper[5017]: I0129 08:45:00.174121 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="1684eb2c-6ad4-465d-a9dd-1ce71e9f038f" containerName="registry-server" Jan 29 08:45:00 crc kubenswrapper[5017]: I0129 08:45:00.174144 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="93c4cbb3-924e-4cf1-8022-032989597ad0" containerName="registry-server" Jan 29 08:45:00 crc kubenswrapper[5017]: I0129 08:45:00.175115 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494605-hhjnm" Jan 29 08:45:00 crc kubenswrapper[5017]: I0129 08:45:00.181519 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 29 08:45:00 crc kubenswrapper[5017]: I0129 08:45:00.181916 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 29 08:45:00 crc kubenswrapper[5017]: I0129 08:45:00.231222 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494605-hhjnm"] Jan 29 08:45:00 crc kubenswrapper[5017]: I0129 08:45:00.269899 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bgrn\" (UniqueName: \"kubernetes.io/projected/c09df49c-f41d-4ceb-a343-4c5dbc63ef1c-kube-api-access-4bgrn\") pod \"collect-profiles-29494605-hhjnm\" (UID: \"c09df49c-f41d-4ceb-a343-4c5dbc63ef1c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494605-hhjnm" Jan 29 08:45:00 crc kubenswrapper[5017]: I0129 08:45:00.269989 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c09df49c-f41d-4ceb-a343-4c5dbc63ef1c-secret-volume\") pod \"collect-profiles-29494605-hhjnm\" (UID: \"c09df49c-f41d-4ceb-a343-4c5dbc63ef1c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494605-hhjnm" Jan 29 08:45:00 crc kubenswrapper[5017]: I0129 08:45:00.270217 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c09df49c-f41d-4ceb-a343-4c5dbc63ef1c-config-volume\") pod \"collect-profiles-29494605-hhjnm\" (UID: \"c09df49c-f41d-4ceb-a343-4c5dbc63ef1c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494605-hhjnm" Jan 29 08:45:00 crc kubenswrapper[5017]: I0129 08:45:00.372365 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c09df49c-f41d-4ceb-a343-4c5dbc63ef1c-config-volume\") pod \"collect-profiles-29494605-hhjnm\" (UID: \"c09df49c-f41d-4ceb-a343-4c5dbc63ef1c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494605-hhjnm" Jan 29 08:45:00 crc kubenswrapper[5017]: I0129 08:45:00.374058 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bgrn\" (UniqueName: \"kubernetes.io/projected/c09df49c-f41d-4ceb-a343-4c5dbc63ef1c-kube-api-access-4bgrn\") pod \"collect-profiles-29494605-hhjnm\" (UID: \"c09df49c-f41d-4ceb-a343-4c5dbc63ef1c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494605-hhjnm" Jan 29 08:45:00 crc kubenswrapper[5017]: I0129 08:45:00.374116 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c09df49c-f41d-4ceb-a343-4c5dbc63ef1c-secret-volume\") pod \"collect-profiles-29494605-hhjnm\" (UID: \"c09df49c-f41d-4ceb-a343-4c5dbc63ef1c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494605-hhjnm" Jan 29 08:45:00 crc kubenswrapper[5017]: I0129 08:45:00.387762 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c09df49c-f41d-4ceb-a343-4c5dbc63ef1c-config-volume\") pod 
\"collect-profiles-29494605-hhjnm\" (UID: \"c09df49c-f41d-4ceb-a343-4c5dbc63ef1c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494605-hhjnm" Jan 29 08:45:00 crc kubenswrapper[5017]: I0129 08:45:00.397156 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c09df49c-f41d-4ceb-a343-4c5dbc63ef1c-secret-volume\") pod \"collect-profiles-29494605-hhjnm\" (UID: \"c09df49c-f41d-4ceb-a343-4c5dbc63ef1c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494605-hhjnm" Jan 29 08:45:00 crc kubenswrapper[5017]: I0129 08:45:00.399736 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bgrn\" (UniqueName: \"kubernetes.io/projected/c09df49c-f41d-4ceb-a343-4c5dbc63ef1c-kube-api-access-4bgrn\") pod \"collect-profiles-29494605-hhjnm\" (UID: \"c09df49c-f41d-4ceb-a343-4c5dbc63ef1c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494605-hhjnm" Jan 29 08:45:00 crc kubenswrapper[5017]: I0129 08:45:00.533481 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494605-hhjnm" Jan 29 08:45:01 crc kubenswrapper[5017]: I0129 08:45:01.067366 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494605-hhjnm"] Jan 29 08:45:01 crc kubenswrapper[5017]: I0129 08:45:01.355510 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-openstack-openstack-cell1-2t5ls" Jan 29 08:45:01 crc kubenswrapper[5017]: I0129 08:45:01.425002 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1bcb6248-59f7-4d82-a9f4-ce837e9a9ef4-inventory\") pod \"1bcb6248-59f7-4d82-a9f4-ce837e9a9ef4\" (UID: \"1bcb6248-59f7-4d82-a9f4-ce837e9a9ef4\") " Jan 29 08:45:01 crc kubenswrapper[5017]: I0129 08:45:01.425257 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4m8bs\" (UniqueName: \"kubernetes.io/projected/1bcb6248-59f7-4d82-a9f4-ce837e9a9ef4-kube-api-access-4m8bs\") pod \"1bcb6248-59f7-4d82-a9f4-ce837e9a9ef4\" (UID: \"1bcb6248-59f7-4d82-a9f4-ce837e9a9ef4\") " Jan 29 08:45:01 crc kubenswrapper[5017]: I0129 08:45:01.425413 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/1bcb6248-59f7-4d82-a9f4-ce837e9a9ef4-libvirt-secret-0\") pod \"1bcb6248-59f7-4d82-a9f4-ce837e9a9ef4\" (UID: \"1bcb6248-59f7-4d82-a9f4-ce837e9a9ef4\") " Jan 29 08:45:01 crc kubenswrapper[5017]: I0129 08:45:01.425509 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/1bcb6248-59f7-4d82-a9f4-ce837e9a9ef4-ssh-key-openstack-cell1\") pod \"1bcb6248-59f7-4d82-a9f4-ce837e9a9ef4\" (UID: \"1bcb6248-59f7-4d82-a9f4-ce837e9a9ef4\") " Jan 29 08:45:01 crc kubenswrapper[5017]: I0129 08:45:01.425573 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bcb6248-59f7-4d82-a9f4-ce837e9a9ef4-libvirt-combined-ca-bundle\") pod \"1bcb6248-59f7-4d82-a9f4-ce837e9a9ef4\" (UID: \"1bcb6248-59f7-4d82-a9f4-ce837e9a9ef4\") " Jan 29 08:45:01 crc kubenswrapper[5017]: I0129 08:45:01.425611 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1bcb6248-59f7-4d82-a9f4-ce837e9a9ef4-ceph\") pod \"1bcb6248-59f7-4d82-a9f4-ce837e9a9ef4\" (UID: \"1bcb6248-59f7-4d82-a9f4-ce837e9a9ef4\") " Jan 29 08:45:01 crc kubenswrapper[5017]: I0129 08:45:01.442367 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bcb6248-59f7-4d82-a9f4-ce837e9a9ef4-ceph" (OuterVolumeSpecName: "ceph") pod "1bcb6248-59f7-4d82-a9f4-ce837e9a9ef4" (UID: "1bcb6248-59f7-4d82-a9f4-ce837e9a9ef4"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:45:01 crc kubenswrapper[5017]: I0129 08:45:01.442896 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bcb6248-59f7-4d82-a9f4-ce837e9a9ef4-kube-api-access-4m8bs" (OuterVolumeSpecName: "kube-api-access-4m8bs") pod "1bcb6248-59f7-4d82-a9f4-ce837e9a9ef4" (UID: "1bcb6248-59f7-4d82-a9f4-ce837e9a9ef4"). InnerVolumeSpecName "kube-api-access-4m8bs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:45:01 crc kubenswrapper[5017]: I0129 08:45:01.443041 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bcb6248-59f7-4d82-a9f4-ce837e9a9ef4-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "1bcb6248-59f7-4d82-a9f4-ce837e9a9ef4" (UID: "1bcb6248-59f7-4d82-a9f4-ce837e9a9ef4"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:45:01 crc kubenswrapper[5017]: I0129 08:45:01.465778 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bcb6248-59f7-4d82-a9f4-ce837e9a9ef4-inventory" (OuterVolumeSpecName: "inventory") pod "1bcb6248-59f7-4d82-a9f4-ce837e9a9ef4" (UID: "1bcb6248-59f7-4d82-a9f4-ce837e9a9ef4"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:45:01 crc kubenswrapper[5017]: I0129 08:45:01.474218 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bcb6248-59f7-4d82-a9f4-ce837e9a9ef4-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "1bcb6248-59f7-4d82-a9f4-ce837e9a9ef4" (UID: "1bcb6248-59f7-4d82-a9f4-ce837e9a9ef4"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:45:01 crc kubenswrapper[5017]: I0129 08:45:01.487441 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bcb6248-59f7-4d82-a9f4-ce837e9a9ef4-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "1bcb6248-59f7-4d82-a9f4-ce837e9a9ef4" (UID: "1bcb6248-59f7-4d82-a9f4-ce837e9a9ef4"). InnerVolumeSpecName "libvirt-secret-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:45:01 crc kubenswrapper[5017]: I0129 08:45:01.530018 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4m8bs\" (UniqueName: \"kubernetes.io/projected/1bcb6248-59f7-4d82-a9f4-ce837e9a9ef4-kube-api-access-4m8bs\") on node \"crc\" DevicePath \"\"" Jan 29 08:45:01 crc kubenswrapper[5017]: I0129 08:45:01.530063 5017 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/1bcb6248-59f7-4d82-a9f4-ce837e9a9ef4-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Jan 29 08:45:01 crc kubenswrapper[5017]: I0129 08:45:01.530077 5017 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/1bcb6248-59f7-4d82-a9f4-ce837e9a9ef4-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 29 08:45:01 crc kubenswrapper[5017]: I0129 08:45:01.530087 5017 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bcb6248-59f7-4d82-a9f4-ce837e9a9ef4-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:45:01 crc kubenswrapper[5017]: I0129 08:45:01.530098 5017 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1bcb6248-59f7-4d82-a9f4-ce837e9a9ef4-ceph\") on node \"crc\" DevicePath \"\"" Jan 29 08:45:01 crc kubenswrapper[5017]: I0129 08:45:01.530109 5017 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1bcb6248-59f7-4d82-a9f4-ce837e9a9ef4-inventory\") on node \"crc\" DevicePath \"\"" Jan 29 08:45:01 crc kubenswrapper[5017]: I0129 08:45:01.931373 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-openstack-openstack-cell1-2t5ls" event={"ID":"1bcb6248-59f7-4d82-a9f4-ce837e9a9ef4","Type":"ContainerDied","Data":"9aa05dc5aa19368d4b52c712d4bf9ce3c14015e19285d24a1afc540575c3e6c9"} Jan 29 08:45:01 crc kubenswrapper[5017]: I0129 08:45:01.931941 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9aa05dc5aa19368d4b52c712d4bf9ce3c14015e19285d24a1afc540575c3e6c9" Jan 29 08:45:01 crc kubenswrapper[5017]: I0129 08:45:01.931626 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-openstack-openstack-cell1-2t5ls" Jan 29 08:45:01 crc kubenswrapper[5017]: I0129 08:45:01.934764 5017 generic.go:334] "Generic (PLEG): container finished" podID="c09df49c-f41d-4ceb-a343-4c5dbc63ef1c" containerID="adbdac5e28e28b17320b409f4d1add41f19f0b718e57425d89b2e9c86a636f5b" exitCode=0 Jan 29 08:45:01 crc kubenswrapper[5017]: I0129 08:45:01.934828 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494605-hhjnm" event={"ID":"c09df49c-f41d-4ceb-a343-4c5dbc63ef1c","Type":"ContainerDied","Data":"adbdac5e28e28b17320b409f4d1add41f19f0b718e57425d89b2e9c86a636f5b"} Jan 29 08:45:01 crc kubenswrapper[5017]: I0129 08:45:01.934904 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494605-hhjnm" event={"ID":"c09df49c-f41d-4ceb-a343-4c5dbc63ef1c","Type":"ContainerStarted","Data":"f792b94a92ae26a3c339670bf4c2a4a1a990cf4eacd48655f548fd511534dd05"} Jan 29 08:45:02 crc kubenswrapper[5017]: I0129 08:45:02.015037 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-openstack-openstack-cell1-lcmvs"] Jan 29 08:45:02 crc kubenswrapper[5017]: E0129 08:45:02.015571 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bcb6248-59f7-4d82-a9f4-ce837e9a9ef4" containerName="libvirt-openstack-openstack-cell1" Jan 29 08:45:02 crc kubenswrapper[5017]: I0129 08:45:02.015590 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bcb6248-59f7-4d82-a9f4-ce837e9a9ef4" containerName="libvirt-openstack-openstack-cell1" Jan 29 08:45:02 crc kubenswrapper[5017]: I0129 08:45:02.015821 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bcb6248-59f7-4d82-a9f4-ce837e9a9ef4" containerName="libvirt-openstack-openstack-cell1" Jan 29 08:45:02 crc kubenswrapper[5017]: I0129 08:45:02.016707 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-openstack-openstack-cell1-lcmvs" Jan 29 08:45:02 crc kubenswrapper[5017]: I0129 08:45:02.020093 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 08:45:02 crc kubenswrapper[5017]: I0129 08:45:02.020728 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-cells-global-config" Jan 29 08:45:02 crc kubenswrapper[5017]: I0129 08:45:02.020942 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Jan 29 08:45:02 crc kubenswrapper[5017]: I0129 08:45:02.021131 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Jan 29 08:45:02 crc kubenswrapper[5017]: I0129 08:45:02.021322 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Jan 29 08:45:02 crc kubenswrapper[5017]: I0129 08:45:02.023985 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-8fskm" Jan 29 08:45:02 crc kubenswrapper[5017]: I0129 08:45:02.024270 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Jan 29 08:45:02 crc kubenswrapper[5017]: I0129 08:45:02.029232 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-openstack-openstack-cell1-lcmvs"] Jan 29 08:45:02 crc kubenswrapper[5017]: I0129 08:45:02.142949 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/b1265fe0-ed65-4320-b3c2-016f53ae3a71-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-openstack-cell1-lcmvs\" (UID: \"b1265fe0-ed65-4320-b3c2-016f53ae3a71\") " pod="openstack/nova-cell1-openstack-openstack-cell1-lcmvs" Jan 29 08:45:02 crc kubenswrapper[5017]: I0129 08:45:02.143066 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/b1265fe0-ed65-4320-b3c2-016f53ae3a71-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-openstack-cell1-lcmvs\" (UID: \"b1265fe0-ed65-4320-b3c2-016f53ae3a71\") " pod="openstack/nova-cell1-openstack-openstack-cell1-lcmvs" Jan 29 08:45:02 crc kubenswrapper[5017]: I0129 08:45:02.143108 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b1265fe0-ed65-4320-b3c2-016f53ae3a71-inventory\") pod \"nova-cell1-openstack-openstack-cell1-lcmvs\" (UID: \"b1265fe0-ed65-4320-b3c2-016f53ae3a71\") " pod="openstack/nova-cell1-openstack-openstack-cell1-lcmvs" Jan 29 08:45:02 crc kubenswrapper[5017]: I0129 08:45:02.143216 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1265fe0-ed65-4320-b3c2-016f53ae3a71-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-openstack-cell1-lcmvs\" (UID: \"b1265fe0-ed65-4320-b3c2-016f53ae3a71\") " pod="openstack/nova-cell1-openstack-openstack-cell1-lcmvs" Jan 29 08:45:02 crc kubenswrapper[5017]: I0129 08:45:02.143247 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: 
\"kubernetes.io/secret/b1265fe0-ed65-4320-b3c2-016f53ae3a71-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-openstack-cell1-lcmvs\" (UID: \"b1265fe0-ed65-4320-b3c2-016f53ae3a71\") " pod="openstack/nova-cell1-openstack-openstack-cell1-lcmvs" Jan 29 08:45:02 crc kubenswrapper[5017]: I0129 08:45:02.143271 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/b1265fe0-ed65-4320-b3c2-016f53ae3a71-nova-cells-global-config-0\") pod \"nova-cell1-openstack-openstack-cell1-lcmvs\" (UID: \"b1265fe0-ed65-4320-b3c2-016f53ae3a71\") " pod="openstack/nova-cell1-openstack-openstack-cell1-lcmvs" Jan 29 08:45:02 crc kubenswrapper[5017]: I0129 08:45:02.143320 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ql927\" (UniqueName: \"kubernetes.io/projected/b1265fe0-ed65-4320-b3c2-016f53ae3a71-kube-api-access-ql927\") pod \"nova-cell1-openstack-openstack-cell1-lcmvs\" (UID: \"b1265fe0-ed65-4320-b3c2-016f53ae3a71\") " pod="openstack/nova-cell1-openstack-openstack-cell1-lcmvs" Jan 29 08:45:02 crc kubenswrapper[5017]: I0129 08:45:02.143362 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cells-global-config-1\" (UniqueName: \"kubernetes.io/configmap/b1265fe0-ed65-4320-b3c2-016f53ae3a71-nova-cells-global-config-1\") pod \"nova-cell1-openstack-openstack-cell1-lcmvs\" (UID: \"b1265fe0-ed65-4320-b3c2-016f53ae3a71\") " pod="openstack/nova-cell1-openstack-openstack-cell1-lcmvs" Jan 29 08:45:02 crc kubenswrapper[5017]: I0129 08:45:02.143410 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/b1265fe0-ed65-4320-b3c2-016f53ae3a71-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-openstack-cell1-lcmvs\" (UID: \"b1265fe0-ed65-4320-b3c2-016f53ae3a71\") " pod="openstack/nova-cell1-openstack-openstack-cell1-lcmvs" Jan 29 08:45:02 crc kubenswrapper[5017]: I0129 08:45:02.143478 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/b1265fe0-ed65-4320-b3c2-016f53ae3a71-ssh-key-openstack-cell1\") pod \"nova-cell1-openstack-openstack-cell1-lcmvs\" (UID: \"b1265fe0-ed65-4320-b3c2-016f53ae3a71\") " pod="openstack/nova-cell1-openstack-openstack-cell1-lcmvs" Jan 29 08:45:02 crc kubenswrapper[5017]: I0129 08:45:02.143521 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b1265fe0-ed65-4320-b3c2-016f53ae3a71-ceph\") pod \"nova-cell1-openstack-openstack-cell1-lcmvs\" (UID: \"b1265fe0-ed65-4320-b3c2-016f53ae3a71\") " pod="openstack/nova-cell1-openstack-openstack-cell1-lcmvs" Jan 29 08:45:02 crc kubenswrapper[5017]: I0129 08:45:02.245461 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cells-global-config-1\" (UniqueName: \"kubernetes.io/configmap/b1265fe0-ed65-4320-b3c2-016f53ae3a71-nova-cells-global-config-1\") pod \"nova-cell1-openstack-openstack-cell1-lcmvs\" (UID: \"b1265fe0-ed65-4320-b3c2-016f53ae3a71\") " pod="openstack/nova-cell1-openstack-openstack-cell1-lcmvs" Jan 29 08:45:02 crc kubenswrapper[5017]: I0129 08:45:02.245534 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/b1265fe0-ed65-4320-b3c2-016f53ae3a71-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-openstack-cell1-lcmvs\" (UID: \"b1265fe0-ed65-4320-b3c2-016f53ae3a71\") " pod="openstack/nova-cell1-openstack-openstack-cell1-lcmvs" Jan 29 08:45:02 crc kubenswrapper[5017]: I0129 08:45:02.245609 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/b1265fe0-ed65-4320-b3c2-016f53ae3a71-ssh-key-openstack-cell1\") pod \"nova-cell1-openstack-openstack-cell1-lcmvs\" (UID: \"b1265fe0-ed65-4320-b3c2-016f53ae3a71\") " pod="openstack/nova-cell1-openstack-openstack-cell1-lcmvs" Jan 29 08:45:02 crc kubenswrapper[5017]: I0129 08:45:02.245668 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b1265fe0-ed65-4320-b3c2-016f53ae3a71-ceph\") pod \"nova-cell1-openstack-openstack-cell1-lcmvs\" (UID: \"b1265fe0-ed65-4320-b3c2-016f53ae3a71\") " pod="openstack/nova-cell1-openstack-openstack-cell1-lcmvs" Jan 29 08:45:02 crc kubenswrapper[5017]: I0129 08:45:02.245717 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/b1265fe0-ed65-4320-b3c2-016f53ae3a71-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-openstack-cell1-lcmvs\" (UID: \"b1265fe0-ed65-4320-b3c2-016f53ae3a71\") " pod="openstack/nova-cell1-openstack-openstack-cell1-lcmvs" Jan 29 08:45:02 crc kubenswrapper[5017]: I0129 08:45:02.245796 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/b1265fe0-ed65-4320-b3c2-016f53ae3a71-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-openstack-cell1-lcmvs\" (UID: \"b1265fe0-ed65-4320-b3c2-016f53ae3a71\") " pod="openstack/nova-cell1-openstack-openstack-cell1-lcmvs" Jan 29 08:45:02 crc kubenswrapper[5017]: I0129 08:45:02.245830 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b1265fe0-ed65-4320-b3c2-016f53ae3a71-inventory\") pod \"nova-cell1-openstack-openstack-cell1-lcmvs\" (UID: \"b1265fe0-ed65-4320-b3c2-016f53ae3a71\") " pod="openstack/nova-cell1-openstack-openstack-cell1-lcmvs" Jan 29 08:45:02 crc kubenswrapper[5017]: I0129 08:45:02.245924 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1265fe0-ed65-4320-b3c2-016f53ae3a71-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-openstack-cell1-lcmvs\" (UID: \"b1265fe0-ed65-4320-b3c2-016f53ae3a71\") " pod="openstack/nova-cell1-openstack-openstack-cell1-lcmvs" Jan 29 08:45:02 crc kubenswrapper[5017]: I0129 08:45:02.245974 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/b1265fe0-ed65-4320-b3c2-016f53ae3a71-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-openstack-cell1-lcmvs\" (UID: \"b1265fe0-ed65-4320-b3c2-016f53ae3a71\") " pod="openstack/nova-cell1-openstack-openstack-cell1-lcmvs" Jan 29 08:45:02 crc kubenswrapper[5017]: I0129 08:45:02.246010 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cells-global-config-0\" (UniqueName: 
\"kubernetes.io/configmap/b1265fe0-ed65-4320-b3c2-016f53ae3a71-nova-cells-global-config-0\") pod \"nova-cell1-openstack-openstack-cell1-lcmvs\" (UID: \"b1265fe0-ed65-4320-b3c2-016f53ae3a71\") " pod="openstack/nova-cell1-openstack-openstack-cell1-lcmvs" Jan 29 08:45:02 crc kubenswrapper[5017]: I0129 08:45:02.246043 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ql927\" (UniqueName: \"kubernetes.io/projected/b1265fe0-ed65-4320-b3c2-016f53ae3a71-kube-api-access-ql927\") pod \"nova-cell1-openstack-openstack-cell1-lcmvs\" (UID: \"b1265fe0-ed65-4320-b3c2-016f53ae3a71\") " pod="openstack/nova-cell1-openstack-openstack-cell1-lcmvs" Jan 29 08:45:02 crc kubenswrapper[5017]: I0129 08:45:02.247004 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cells-global-config-1\" (UniqueName: \"kubernetes.io/configmap/b1265fe0-ed65-4320-b3c2-016f53ae3a71-nova-cells-global-config-1\") pod \"nova-cell1-openstack-openstack-cell1-lcmvs\" (UID: \"b1265fe0-ed65-4320-b3c2-016f53ae3a71\") " pod="openstack/nova-cell1-openstack-openstack-cell1-lcmvs" Jan 29 08:45:02 crc kubenswrapper[5017]: I0129 08:45:02.247725 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/b1265fe0-ed65-4320-b3c2-016f53ae3a71-nova-cells-global-config-0\") pod \"nova-cell1-openstack-openstack-cell1-lcmvs\" (UID: \"b1265fe0-ed65-4320-b3c2-016f53ae3a71\") " pod="openstack/nova-cell1-openstack-openstack-cell1-lcmvs" Jan 29 08:45:02 crc kubenswrapper[5017]: I0129 08:45:02.252367 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/b1265fe0-ed65-4320-b3c2-016f53ae3a71-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-openstack-cell1-lcmvs\" (UID: \"b1265fe0-ed65-4320-b3c2-016f53ae3a71\") " pod="openstack/nova-cell1-openstack-openstack-cell1-lcmvs" Jan 29 08:45:02 crc kubenswrapper[5017]: I0129 08:45:02.252416 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/b1265fe0-ed65-4320-b3c2-016f53ae3a71-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-openstack-cell1-lcmvs\" (UID: \"b1265fe0-ed65-4320-b3c2-016f53ae3a71\") " pod="openstack/nova-cell1-openstack-openstack-cell1-lcmvs" Jan 29 08:45:02 crc kubenswrapper[5017]: I0129 08:45:02.252420 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/b1265fe0-ed65-4320-b3c2-016f53ae3a71-ssh-key-openstack-cell1\") pod \"nova-cell1-openstack-openstack-cell1-lcmvs\" (UID: \"b1265fe0-ed65-4320-b3c2-016f53ae3a71\") " pod="openstack/nova-cell1-openstack-openstack-cell1-lcmvs" Jan 29 08:45:02 crc kubenswrapper[5017]: I0129 08:45:02.252663 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/b1265fe0-ed65-4320-b3c2-016f53ae3a71-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-openstack-cell1-lcmvs\" (UID: \"b1265fe0-ed65-4320-b3c2-016f53ae3a71\") " pod="openstack/nova-cell1-openstack-openstack-cell1-lcmvs" Jan 29 08:45:02 crc kubenswrapper[5017]: I0129 08:45:02.252731 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b1265fe0-ed65-4320-b3c2-016f53ae3a71-inventory\") pod \"nova-cell1-openstack-openstack-cell1-lcmvs\" 
(UID: \"b1265fe0-ed65-4320-b3c2-016f53ae3a71\") " pod="openstack/nova-cell1-openstack-openstack-cell1-lcmvs" Jan 29 08:45:02 crc kubenswrapper[5017]: I0129 08:45:02.253277 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b1265fe0-ed65-4320-b3c2-016f53ae3a71-ceph\") pod \"nova-cell1-openstack-openstack-cell1-lcmvs\" (UID: \"b1265fe0-ed65-4320-b3c2-016f53ae3a71\") " pod="openstack/nova-cell1-openstack-openstack-cell1-lcmvs" Jan 29 08:45:02 crc kubenswrapper[5017]: I0129 08:45:02.255889 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1265fe0-ed65-4320-b3c2-016f53ae3a71-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-openstack-cell1-lcmvs\" (UID: \"b1265fe0-ed65-4320-b3c2-016f53ae3a71\") " pod="openstack/nova-cell1-openstack-openstack-cell1-lcmvs" Jan 29 08:45:02 crc kubenswrapper[5017]: I0129 08:45:02.257678 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/b1265fe0-ed65-4320-b3c2-016f53ae3a71-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-openstack-cell1-lcmvs\" (UID: \"b1265fe0-ed65-4320-b3c2-016f53ae3a71\") " pod="openstack/nova-cell1-openstack-openstack-cell1-lcmvs" Jan 29 08:45:02 crc kubenswrapper[5017]: I0129 08:45:02.265584 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ql927\" (UniqueName: \"kubernetes.io/projected/b1265fe0-ed65-4320-b3c2-016f53ae3a71-kube-api-access-ql927\") pod \"nova-cell1-openstack-openstack-cell1-lcmvs\" (UID: \"b1265fe0-ed65-4320-b3c2-016f53ae3a71\") " pod="openstack/nova-cell1-openstack-openstack-cell1-lcmvs" Jan 29 08:45:02 crc kubenswrapper[5017]: I0129 08:45:02.336404 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-openstack-openstack-cell1-lcmvs" Jan 29 08:45:03 crc kubenswrapper[5017]: I0129 08:45:03.466521 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-openstack-openstack-cell1-lcmvs"] Jan 29 08:45:03 crc kubenswrapper[5017]: W0129 08:45:03.500491 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb1265fe0_ed65_4320_b3c2_016f53ae3a71.slice/crio-dfb806017bc7b4b5544d84066df1884efb9e43d4160fa48cefc437669ac48b18 WatchSource:0}: Error finding container dfb806017bc7b4b5544d84066df1884efb9e43d4160fa48cefc437669ac48b18: Status 404 returned error can't find the container with id dfb806017bc7b4b5544d84066df1884efb9e43d4160fa48cefc437669ac48b18 Jan 29 08:45:03 crc kubenswrapper[5017]: I0129 08:45:03.502718 5017 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 08:45:03 crc kubenswrapper[5017]: I0129 08:45:03.794482 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494605-hhjnm" Jan 29 08:45:03 crc kubenswrapper[5017]: I0129 08:45:03.906563 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4bgrn\" (UniqueName: \"kubernetes.io/projected/c09df49c-f41d-4ceb-a343-4c5dbc63ef1c-kube-api-access-4bgrn\") pod \"c09df49c-f41d-4ceb-a343-4c5dbc63ef1c\" (UID: \"c09df49c-f41d-4ceb-a343-4c5dbc63ef1c\") " Jan 29 08:45:03 crc kubenswrapper[5017]: I0129 08:45:03.906886 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c09df49c-f41d-4ceb-a343-4c5dbc63ef1c-config-volume\") pod \"c09df49c-f41d-4ceb-a343-4c5dbc63ef1c\" (UID: \"c09df49c-f41d-4ceb-a343-4c5dbc63ef1c\") " Jan 29 08:45:03 crc kubenswrapper[5017]: I0129 08:45:03.907166 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c09df49c-f41d-4ceb-a343-4c5dbc63ef1c-secret-volume\") pod \"c09df49c-f41d-4ceb-a343-4c5dbc63ef1c\" (UID: \"c09df49c-f41d-4ceb-a343-4c5dbc63ef1c\") " Jan 29 08:45:03 crc kubenswrapper[5017]: I0129 08:45:03.907916 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c09df49c-f41d-4ceb-a343-4c5dbc63ef1c-config-volume" (OuterVolumeSpecName: "config-volume") pod "c09df49c-f41d-4ceb-a343-4c5dbc63ef1c" (UID: "c09df49c-f41d-4ceb-a343-4c5dbc63ef1c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:45:03 crc kubenswrapper[5017]: I0129 08:45:03.908212 5017 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c09df49c-f41d-4ceb-a343-4c5dbc63ef1c-config-volume\") on node \"crc\" DevicePath \"\"" Jan 29 08:45:03 crc kubenswrapper[5017]: I0129 08:45:03.913506 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c09df49c-f41d-4ceb-a343-4c5dbc63ef1c-kube-api-access-4bgrn" (OuterVolumeSpecName: "kube-api-access-4bgrn") pod "c09df49c-f41d-4ceb-a343-4c5dbc63ef1c" (UID: "c09df49c-f41d-4ceb-a343-4c5dbc63ef1c"). InnerVolumeSpecName "kube-api-access-4bgrn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:45:03 crc kubenswrapper[5017]: I0129 08:45:03.914051 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c09df49c-f41d-4ceb-a343-4c5dbc63ef1c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c09df49c-f41d-4ceb-a343-4c5dbc63ef1c" (UID: "c09df49c-f41d-4ceb-a343-4c5dbc63ef1c"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:45:04 crc kubenswrapper[5017]: I0129 08:45:04.010945 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4bgrn\" (UniqueName: \"kubernetes.io/projected/c09df49c-f41d-4ceb-a343-4c5dbc63ef1c-kube-api-access-4bgrn\") on node \"crc\" DevicePath \"\"" Jan 29 08:45:04 crc kubenswrapper[5017]: I0129 08:45:04.011010 5017 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c09df49c-f41d-4ceb-a343-4c5dbc63ef1c-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 29 08:45:04 crc kubenswrapper[5017]: I0129 08:45:04.413848 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494605-hhjnm" event={"ID":"c09df49c-f41d-4ceb-a343-4c5dbc63ef1c","Type":"ContainerDied","Data":"f792b94a92ae26a3c339670bf4c2a4a1a990cf4eacd48655f548fd511534dd05"} Jan 29 08:45:04 crc kubenswrapper[5017]: I0129 08:45:04.414425 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f792b94a92ae26a3c339670bf4c2a4a1a990cf4eacd48655f548fd511534dd05" Jan 29 08:45:04 crc kubenswrapper[5017]: I0129 08:45:04.413878 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494605-hhjnm" Jan 29 08:45:04 crc kubenswrapper[5017]: I0129 08:45:04.417633 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-openstack-cell1-lcmvs" event={"ID":"b1265fe0-ed65-4320-b3c2-016f53ae3a71","Type":"ContainerStarted","Data":"dfb806017bc7b4b5544d84066df1884efb9e43d4160fa48cefc437669ac48b18"} Jan 29 08:45:04 crc kubenswrapper[5017]: I0129 08:45:04.878275 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494560-z4d8w"] Jan 29 08:45:04 crc kubenswrapper[5017]: I0129 08:45:04.888070 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494560-z4d8w"] Jan 29 08:45:06 crc kubenswrapper[5017]: I0129 08:45:06.331292 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa539280-1219-4242-8b9a-69ef09b61530" path="/var/lib/kubelet/pods/aa539280-1219-4242-8b9a-69ef09b61530/volumes" Jan 29 08:45:09 crc kubenswrapper[5017]: I0129 08:45:09.316493 5017 scope.go:117] "RemoveContainer" containerID="9d0a7fd71d9f91170d4639ccbbd832deecb27b1123db941414d97c11d3459063" Jan 29 08:45:09 crc kubenswrapper[5017]: E0129 08:45:09.317653 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:45:09 crc kubenswrapper[5017]: I0129 08:45:09.475703 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-openstack-cell1-lcmvs" event={"ID":"b1265fe0-ed65-4320-b3c2-016f53ae3a71","Type":"ContainerStarted","Data":"d34378db56cb1e010fbc448454edfc75b21a99aab4c64a74be2a01dd4df77885"} Jan 29 08:45:09 crc kubenswrapper[5017]: I0129 08:45:09.504581 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-openstack-openstack-cell1-lcmvs" 
podStartSLOduration=3.680168516 podStartE2EDuration="8.504549329s" podCreationTimestamp="2026-01-29 08:45:01 +0000 UTC" firstStartedPulling="2026-01-29 08:45:03.502400865 +0000 UTC m=+7789.876848475" lastFinishedPulling="2026-01-29 08:45:08.326781668 +0000 UTC m=+7794.701229288" observedRunningTime="2026-01-29 08:45:09.495401105 +0000 UTC m=+7795.869848715" watchObservedRunningTime="2026-01-29 08:45:09.504549329 +0000 UTC m=+7795.878996939" Jan 29 08:45:23 crc kubenswrapper[5017]: I0129 08:45:23.317947 5017 scope.go:117] "RemoveContainer" containerID="9d0a7fd71d9f91170d4639ccbbd832deecb27b1123db941414d97c11d3459063" Jan 29 08:45:23 crc kubenswrapper[5017]: E0129 08:45:23.319012 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:45:34 crc kubenswrapper[5017]: I0129 08:45:34.323526 5017 scope.go:117] "RemoveContainer" containerID="9d0a7fd71d9f91170d4639ccbbd832deecb27b1123db941414d97c11d3459063" Jan 29 08:45:34 crc kubenswrapper[5017]: E0129 08:45:34.324706 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:45:37 crc kubenswrapper[5017]: I0129 08:45:37.630399 5017 scope.go:117] "RemoveContainer" containerID="72e3b9603cb9c971c389843f1acbd8f5e5d6e2e3ace0a0087a68893c902b9b3b" Jan 29 08:45:49 crc kubenswrapper[5017]: I0129 08:45:49.316648 5017 scope.go:117] "RemoveContainer" containerID="9d0a7fd71d9f91170d4639ccbbd832deecb27b1123db941414d97c11d3459063" Jan 29 08:45:49 crc kubenswrapper[5017]: E0129 08:45:49.317788 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:46:04 crc kubenswrapper[5017]: I0129 08:46:04.324609 5017 scope.go:117] "RemoveContainer" containerID="9d0a7fd71d9f91170d4639ccbbd832deecb27b1123db941414d97c11d3459063" Jan 29 08:46:04 crc kubenswrapper[5017]: E0129 08:46:04.325975 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:46:15 crc kubenswrapper[5017]: I0129 08:46:15.317650 5017 scope.go:117] "RemoveContainer" containerID="9d0a7fd71d9f91170d4639ccbbd832deecb27b1123db941414d97c11d3459063" Jan 29 08:46:15 crc kubenswrapper[5017]: E0129 08:46:15.319157 
5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:46:30 crc kubenswrapper[5017]: I0129 08:46:30.317353 5017 scope.go:117] "RemoveContainer" containerID="9d0a7fd71d9f91170d4639ccbbd832deecb27b1123db941414d97c11d3459063" Jan 29 08:46:30 crc kubenswrapper[5017]: E0129 08:46:30.318928 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:46:42 crc kubenswrapper[5017]: I0129 08:46:42.316614 5017 scope.go:117] "RemoveContainer" containerID="9d0a7fd71d9f91170d4639ccbbd832deecb27b1123db941414d97c11d3459063" Jan 29 08:46:42 crc kubenswrapper[5017]: E0129 08:46:42.317642 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:46:56 crc kubenswrapper[5017]: I0129 08:46:56.317399 5017 scope.go:117] "RemoveContainer" containerID="9d0a7fd71d9f91170d4639ccbbd832deecb27b1123db941414d97c11d3459063" Jan 29 08:46:56 crc kubenswrapper[5017]: E0129 08:46:56.318492 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:47:10 crc kubenswrapper[5017]: I0129 08:47:10.316937 5017 scope.go:117] "RemoveContainer" containerID="9d0a7fd71d9f91170d4639ccbbd832deecb27b1123db941414d97c11d3459063" Jan 29 08:47:10 crc kubenswrapper[5017]: E0129 08:47:10.318343 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:47:25 crc kubenswrapper[5017]: I0129 08:47:25.317130 5017 scope.go:117] "RemoveContainer" containerID="9d0a7fd71d9f91170d4639ccbbd832deecb27b1123db941414d97c11d3459063" Jan 29 08:47:25 crc kubenswrapper[5017]: E0129 08:47:25.317928 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:47:38 crc kubenswrapper[5017]: I0129 08:47:38.317683 5017 scope.go:117] "RemoveContainer" containerID="9d0a7fd71d9f91170d4639ccbbd832deecb27b1123db941414d97c11d3459063" Jan 29 08:47:38 crc kubenswrapper[5017]: E0129 08:47:38.319059 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:47:43 crc kubenswrapper[5017]: I0129 08:47:43.171527 5017 generic.go:334] "Generic (PLEG): container finished" podID="b1265fe0-ed65-4320-b3c2-016f53ae3a71" containerID="d34378db56cb1e010fbc448454edfc75b21a99aab4c64a74be2a01dd4df77885" exitCode=0 Jan 29 08:47:43 crc kubenswrapper[5017]: I0129 08:47:43.171624 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-openstack-cell1-lcmvs" event={"ID":"b1265fe0-ed65-4320-b3c2-016f53ae3a71","Type":"ContainerDied","Data":"d34378db56cb1e010fbc448454edfc75b21a99aab4c64a74be2a01dd4df77885"} Jan 29 08:47:44 crc kubenswrapper[5017]: I0129 08:47:44.611549 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-openstack-openstack-cell1-lcmvs" Jan 29 08:47:44 crc kubenswrapper[5017]: I0129 08:47:44.696513 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b1265fe0-ed65-4320-b3c2-016f53ae3a71-inventory\") pod \"b1265fe0-ed65-4320-b3c2-016f53ae3a71\" (UID: \"b1265fe0-ed65-4320-b3c2-016f53ae3a71\") " Jan 29 08:47:44 crc kubenswrapper[5017]: I0129 08:47:44.696668 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/b1265fe0-ed65-4320-b3c2-016f53ae3a71-ssh-key-openstack-cell1\") pod \"b1265fe0-ed65-4320-b3c2-016f53ae3a71\" (UID: \"b1265fe0-ed65-4320-b3c2-016f53ae3a71\") " Jan 29 08:47:44 crc kubenswrapper[5017]: I0129 08:47:44.696709 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1265fe0-ed65-4320-b3c2-016f53ae3a71-nova-cell1-combined-ca-bundle\") pod \"b1265fe0-ed65-4320-b3c2-016f53ae3a71\" (UID: \"b1265fe0-ed65-4320-b3c2-016f53ae3a71\") " Jan 29 08:47:44 crc kubenswrapper[5017]: I0129 08:47:44.697523 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/b1265fe0-ed65-4320-b3c2-016f53ae3a71-nova-cell1-compute-config-1\") pod \"b1265fe0-ed65-4320-b3c2-016f53ae3a71\" (UID: \"b1265fe0-ed65-4320-b3c2-016f53ae3a71\") " Jan 29 08:47:44 crc kubenswrapper[5017]: I0129 08:47:44.697584 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b1265fe0-ed65-4320-b3c2-016f53ae3a71-ceph\") pod \"b1265fe0-ed65-4320-b3c2-016f53ae3a71\" (UID: \"b1265fe0-ed65-4320-b3c2-016f53ae3a71\") " Jan 29 08:47:44 crc kubenswrapper[5017]: 
I0129 08:47:44.697635 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/b1265fe0-ed65-4320-b3c2-016f53ae3a71-nova-migration-ssh-key-1\") pod \"b1265fe0-ed65-4320-b3c2-016f53ae3a71\" (UID: \"b1265fe0-ed65-4320-b3c2-016f53ae3a71\") " Jan 29 08:47:44 crc kubenswrapper[5017]: I0129 08:47:44.697685 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/b1265fe0-ed65-4320-b3c2-016f53ae3a71-nova-migration-ssh-key-0\") pod \"b1265fe0-ed65-4320-b3c2-016f53ae3a71\" (UID: \"b1265fe0-ed65-4320-b3c2-016f53ae3a71\") " Jan 29 08:47:44 crc kubenswrapper[5017]: I0129 08:47:44.697715 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cells-global-config-1\" (UniqueName: \"kubernetes.io/configmap/b1265fe0-ed65-4320-b3c2-016f53ae3a71-nova-cells-global-config-1\") pod \"b1265fe0-ed65-4320-b3c2-016f53ae3a71\" (UID: \"b1265fe0-ed65-4320-b3c2-016f53ae3a71\") " Jan 29 08:47:44 crc kubenswrapper[5017]: I0129 08:47:44.698211 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ql927\" (UniqueName: \"kubernetes.io/projected/b1265fe0-ed65-4320-b3c2-016f53ae3a71-kube-api-access-ql927\") pod \"b1265fe0-ed65-4320-b3c2-016f53ae3a71\" (UID: \"b1265fe0-ed65-4320-b3c2-016f53ae3a71\") " Jan 29 08:47:44 crc kubenswrapper[5017]: I0129 08:47:44.698262 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/b1265fe0-ed65-4320-b3c2-016f53ae3a71-nova-cells-global-config-0\") pod \"b1265fe0-ed65-4320-b3c2-016f53ae3a71\" (UID: \"b1265fe0-ed65-4320-b3c2-016f53ae3a71\") " Jan 29 08:47:44 crc kubenswrapper[5017]: I0129 08:47:44.698320 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/b1265fe0-ed65-4320-b3c2-016f53ae3a71-nova-cell1-compute-config-0\") pod \"b1265fe0-ed65-4320-b3c2-016f53ae3a71\" (UID: \"b1265fe0-ed65-4320-b3c2-016f53ae3a71\") " Jan 29 08:47:44 crc kubenswrapper[5017]: I0129 08:47:44.704931 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1265fe0-ed65-4320-b3c2-016f53ae3a71-ceph" (OuterVolumeSpecName: "ceph") pod "b1265fe0-ed65-4320-b3c2-016f53ae3a71" (UID: "b1265fe0-ed65-4320-b3c2-016f53ae3a71"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:47:44 crc kubenswrapper[5017]: I0129 08:47:44.705849 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1265fe0-ed65-4320-b3c2-016f53ae3a71-nova-cell1-combined-ca-bundle" (OuterVolumeSpecName: "nova-cell1-combined-ca-bundle") pod "b1265fe0-ed65-4320-b3c2-016f53ae3a71" (UID: "b1265fe0-ed65-4320-b3c2-016f53ae3a71"). InnerVolumeSpecName "nova-cell1-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:47:44 crc kubenswrapper[5017]: I0129 08:47:44.706723 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1265fe0-ed65-4320-b3c2-016f53ae3a71-kube-api-access-ql927" (OuterVolumeSpecName: "kube-api-access-ql927") pod "b1265fe0-ed65-4320-b3c2-016f53ae3a71" (UID: "b1265fe0-ed65-4320-b3c2-016f53ae3a71"). InnerVolumeSpecName "kube-api-access-ql927". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:47:44 crc kubenswrapper[5017]: I0129 08:47:44.729297 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1265fe0-ed65-4320-b3c2-016f53ae3a71-nova-cells-global-config-0" (OuterVolumeSpecName: "nova-cells-global-config-0") pod "b1265fe0-ed65-4320-b3c2-016f53ae3a71" (UID: "b1265fe0-ed65-4320-b3c2-016f53ae3a71"). InnerVolumeSpecName "nova-cells-global-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:47:44 crc kubenswrapper[5017]: I0129 08:47:44.733825 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1265fe0-ed65-4320-b3c2-016f53ae3a71-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "b1265fe0-ed65-4320-b3c2-016f53ae3a71" (UID: "b1265fe0-ed65-4320-b3c2-016f53ae3a71"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:47:44 crc kubenswrapper[5017]: I0129 08:47:44.734332 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1265fe0-ed65-4320-b3c2-016f53ae3a71-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "b1265fe0-ed65-4320-b3c2-016f53ae3a71" (UID: "b1265fe0-ed65-4320-b3c2-016f53ae3a71"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:47:44 crc kubenswrapper[5017]: I0129 08:47:44.740437 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1265fe0-ed65-4320-b3c2-016f53ae3a71-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "b1265fe0-ed65-4320-b3c2-016f53ae3a71" (UID: "b1265fe0-ed65-4320-b3c2-016f53ae3a71"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:47:44 crc kubenswrapper[5017]: I0129 08:47:44.741870 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1265fe0-ed65-4320-b3c2-016f53ae3a71-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "b1265fe0-ed65-4320-b3c2-016f53ae3a71" (UID: "b1265fe0-ed65-4320-b3c2-016f53ae3a71"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:47:44 crc kubenswrapper[5017]: I0129 08:47:44.747662 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1265fe0-ed65-4320-b3c2-016f53ae3a71-nova-cells-global-config-1" (OuterVolumeSpecName: "nova-cells-global-config-1") pod "b1265fe0-ed65-4320-b3c2-016f53ae3a71" (UID: "b1265fe0-ed65-4320-b3c2-016f53ae3a71"). InnerVolumeSpecName "nova-cells-global-config-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:47:44 crc kubenswrapper[5017]: I0129 08:47:44.750346 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1265fe0-ed65-4320-b3c2-016f53ae3a71-inventory" (OuterVolumeSpecName: "inventory") pod "b1265fe0-ed65-4320-b3c2-016f53ae3a71" (UID: "b1265fe0-ed65-4320-b3c2-016f53ae3a71"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:47:44 crc kubenswrapper[5017]: I0129 08:47:44.752670 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1265fe0-ed65-4320-b3c2-016f53ae3a71-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "b1265fe0-ed65-4320-b3c2-016f53ae3a71" (UID: "b1265fe0-ed65-4320-b3c2-016f53ae3a71"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:47:44 crc kubenswrapper[5017]: I0129 08:47:44.801222 5017 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/b1265fe0-ed65-4320-b3c2-016f53ae3a71-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 29 08:47:44 crc kubenswrapper[5017]: I0129 08:47:44.801271 5017 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1265fe0-ed65-4320-b3c2-016f53ae3a71-nova-cell1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:47:44 crc kubenswrapper[5017]: I0129 08:47:44.801282 5017 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/b1265fe0-ed65-4320-b3c2-016f53ae3a71-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Jan 29 08:47:44 crc kubenswrapper[5017]: I0129 08:47:44.801292 5017 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b1265fe0-ed65-4320-b3c2-016f53ae3a71-ceph\") on node \"crc\" DevicePath \"\"" Jan 29 08:47:44 crc kubenswrapper[5017]: I0129 08:47:44.801304 5017 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/b1265fe0-ed65-4320-b3c2-016f53ae3a71-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Jan 29 08:47:44 crc kubenswrapper[5017]: I0129 08:47:44.801315 5017 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/b1265fe0-ed65-4320-b3c2-016f53ae3a71-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Jan 29 08:47:44 crc kubenswrapper[5017]: I0129 08:47:44.801325 5017 reconciler_common.go:293] "Volume detached for volume \"nova-cells-global-config-1\" (UniqueName: \"kubernetes.io/configmap/b1265fe0-ed65-4320-b3c2-016f53ae3a71-nova-cells-global-config-1\") on node \"crc\" DevicePath \"\"" Jan 29 08:47:44 crc kubenswrapper[5017]: I0129 08:47:44.801336 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ql927\" (UniqueName: \"kubernetes.io/projected/b1265fe0-ed65-4320-b3c2-016f53ae3a71-kube-api-access-ql927\") on node \"crc\" DevicePath \"\"" Jan 29 08:47:44 crc kubenswrapper[5017]: I0129 08:47:44.801345 5017 reconciler_common.go:293] "Volume detached for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/b1265fe0-ed65-4320-b3c2-016f53ae3a71-nova-cells-global-config-0\") on node \"crc\" DevicePath \"\"" Jan 29 08:47:44 crc kubenswrapper[5017]: I0129 08:47:44.801353 5017 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/b1265fe0-ed65-4320-b3c2-016f53ae3a71-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Jan 29 08:47:44 crc kubenswrapper[5017]: I0129 08:47:44.801362 5017 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/b1265fe0-ed65-4320-b3c2-016f53ae3a71-inventory\") on node \"crc\" DevicePath \"\"" Jan 29 08:47:45 crc kubenswrapper[5017]: I0129 08:47:45.209614 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-openstack-cell1-lcmvs" event={"ID":"b1265fe0-ed65-4320-b3c2-016f53ae3a71","Type":"ContainerDied","Data":"dfb806017bc7b4b5544d84066df1884efb9e43d4160fa48cefc437669ac48b18"} Jan 29 08:47:45 crc kubenswrapper[5017]: I0129 08:47:45.209691 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dfb806017bc7b4b5544d84066df1884efb9e43d4160fa48cefc437669ac48b18" Jan 29 08:47:45 crc kubenswrapper[5017]: I0129 08:47:45.209718 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-openstack-openstack-cell1-lcmvs" Jan 29 08:47:45 crc kubenswrapper[5017]: I0129 08:47:45.303884 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-openstack-openstack-cell1-tnvqv"] Jan 29 08:47:45 crc kubenswrapper[5017]: E0129 08:47:45.316522 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c09df49c-f41d-4ceb-a343-4c5dbc63ef1c" containerName="collect-profiles" Jan 29 08:47:45 crc kubenswrapper[5017]: I0129 08:47:45.316554 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="c09df49c-f41d-4ceb-a343-4c5dbc63ef1c" containerName="collect-profiles" Jan 29 08:47:45 crc kubenswrapper[5017]: E0129 08:47:45.316581 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1265fe0-ed65-4320-b3c2-016f53ae3a71" containerName="nova-cell1-openstack-openstack-cell1" Jan 29 08:47:45 crc kubenswrapper[5017]: I0129 08:47:45.316589 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1265fe0-ed65-4320-b3c2-016f53ae3a71" containerName="nova-cell1-openstack-openstack-cell1" Jan 29 08:47:45 crc kubenswrapper[5017]: I0129 08:47:45.316893 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1265fe0-ed65-4320-b3c2-016f53ae3a71" containerName="nova-cell1-openstack-openstack-cell1" Jan 29 08:47:45 crc kubenswrapper[5017]: I0129 08:47:45.316918 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="c09df49c-f41d-4ceb-a343-4c5dbc63ef1c" containerName="collect-profiles" Jan 29 08:47:45 crc kubenswrapper[5017]: I0129 08:47:45.317897 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-openstack-openstack-cell1-tnvqv" Jan 29 08:47:45 crc kubenswrapper[5017]: I0129 08:47:45.322377 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-8fskm" Jan 29 08:47:45 crc kubenswrapper[5017]: I0129 08:47:45.322462 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-openstack-openstack-cell1-tnvqv"] Jan 29 08:47:45 crc kubenswrapper[5017]: I0129 08:47:45.322619 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Jan 29 08:47:45 crc kubenswrapper[5017]: I0129 08:47:45.322688 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Jan 29 08:47:45 crc kubenswrapper[5017]: I0129 08:47:45.322639 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 08:47:45 crc kubenswrapper[5017]: I0129 08:47:45.327484 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Jan 29 08:47:45 crc kubenswrapper[5017]: I0129 08:47:45.417342 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/36d2e4dd-7fea-48d6-92f9-93f3e02c0e09-ssh-key-openstack-cell1\") pod \"telemetry-openstack-openstack-cell1-tnvqv\" (UID: \"36d2e4dd-7fea-48d6-92f9-93f3e02c0e09\") " pod="openstack/telemetry-openstack-openstack-cell1-tnvqv" Jan 29 08:47:45 crc kubenswrapper[5017]: I0129 08:47:45.417435 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/36d2e4dd-7fea-48d6-92f9-93f3e02c0e09-ceilometer-compute-config-data-1\") pod \"telemetry-openstack-openstack-cell1-tnvqv\" (UID: \"36d2e4dd-7fea-48d6-92f9-93f3e02c0e09\") " pod="openstack/telemetry-openstack-openstack-cell1-tnvqv" Jan 29 08:47:45 crc kubenswrapper[5017]: I0129 08:47:45.417711 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zcp4\" (UniqueName: \"kubernetes.io/projected/36d2e4dd-7fea-48d6-92f9-93f3e02c0e09-kube-api-access-9zcp4\") pod \"telemetry-openstack-openstack-cell1-tnvqv\" (UID: \"36d2e4dd-7fea-48d6-92f9-93f3e02c0e09\") " pod="openstack/telemetry-openstack-openstack-cell1-tnvqv" Jan 29 08:47:45 crc kubenswrapper[5017]: I0129 08:47:45.417780 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36d2e4dd-7fea-48d6-92f9-93f3e02c0e09-telemetry-combined-ca-bundle\") pod \"telemetry-openstack-openstack-cell1-tnvqv\" (UID: \"36d2e4dd-7fea-48d6-92f9-93f3e02c0e09\") " pod="openstack/telemetry-openstack-openstack-cell1-tnvqv" Jan 29 08:47:45 crc kubenswrapper[5017]: I0129 08:47:45.417865 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/36d2e4dd-7fea-48d6-92f9-93f3e02c0e09-ceilometer-compute-config-data-0\") pod \"telemetry-openstack-openstack-cell1-tnvqv\" (UID: \"36d2e4dd-7fea-48d6-92f9-93f3e02c0e09\") " pod="openstack/telemetry-openstack-openstack-cell1-tnvqv" Jan 29 08:47:45 crc kubenswrapper[5017]: I0129 08:47:45.417897 5017 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/36d2e4dd-7fea-48d6-92f9-93f3e02c0e09-ceph\") pod \"telemetry-openstack-openstack-cell1-tnvqv\" (UID: \"36d2e4dd-7fea-48d6-92f9-93f3e02c0e09\") " pod="openstack/telemetry-openstack-openstack-cell1-tnvqv" Jan 29 08:47:45 crc kubenswrapper[5017]: I0129 08:47:45.417917 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/36d2e4dd-7fea-48d6-92f9-93f3e02c0e09-ceilometer-compute-config-data-2\") pod \"telemetry-openstack-openstack-cell1-tnvqv\" (UID: \"36d2e4dd-7fea-48d6-92f9-93f3e02c0e09\") " pod="openstack/telemetry-openstack-openstack-cell1-tnvqv" Jan 29 08:47:45 crc kubenswrapper[5017]: I0129 08:47:45.417978 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/36d2e4dd-7fea-48d6-92f9-93f3e02c0e09-inventory\") pod \"telemetry-openstack-openstack-cell1-tnvqv\" (UID: \"36d2e4dd-7fea-48d6-92f9-93f3e02c0e09\") " pod="openstack/telemetry-openstack-openstack-cell1-tnvqv" Jan 29 08:47:45 crc kubenswrapper[5017]: I0129 08:47:45.521127 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/36d2e4dd-7fea-48d6-92f9-93f3e02c0e09-ceilometer-compute-config-data-0\") pod \"telemetry-openstack-openstack-cell1-tnvqv\" (UID: \"36d2e4dd-7fea-48d6-92f9-93f3e02c0e09\") " pod="openstack/telemetry-openstack-openstack-cell1-tnvqv" Jan 29 08:47:45 crc kubenswrapper[5017]: I0129 08:47:45.521208 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/36d2e4dd-7fea-48d6-92f9-93f3e02c0e09-ceph\") pod \"telemetry-openstack-openstack-cell1-tnvqv\" (UID: \"36d2e4dd-7fea-48d6-92f9-93f3e02c0e09\") " pod="openstack/telemetry-openstack-openstack-cell1-tnvqv" Jan 29 08:47:45 crc kubenswrapper[5017]: I0129 08:47:45.521237 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/36d2e4dd-7fea-48d6-92f9-93f3e02c0e09-ceilometer-compute-config-data-2\") pod \"telemetry-openstack-openstack-cell1-tnvqv\" (UID: \"36d2e4dd-7fea-48d6-92f9-93f3e02c0e09\") " pod="openstack/telemetry-openstack-openstack-cell1-tnvqv" Jan 29 08:47:45 crc kubenswrapper[5017]: I0129 08:47:45.521287 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/36d2e4dd-7fea-48d6-92f9-93f3e02c0e09-inventory\") pod \"telemetry-openstack-openstack-cell1-tnvqv\" (UID: \"36d2e4dd-7fea-48d6-92f9-93f3e02c0e09\") " pod="openstack/telemetry-openstack-openstack-cell1-tnvqv" Jan 29 08:47:45 crc kubenswrapper[5017]: I0129 08:47:45.521317 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/36d2e4dd-7fea-48d6-92f9-93f3e02c0e09-ssh-key-openstack-cell1\") pod \"telemetry-openstack-openstack-cell1-tnvqv\" (UID: \"36d2e4dd-7fea-48d6-92f9-93f3e02c0e09\") " pod="openstack/telemetry-openstack-openstack-cell1-tnvqv" Jan 29 08:47:45 crc kubenswrapper[5017]: I0129 08:47:45.521363 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: 
\"kubernetes.io/secret/36d2e4dd-7fea-48d6-92f9-93f3e02c0e09-ceilometer-compute-config-data-1\") pod \"telemetry-openstack-openstack-cell1-tnvqv\" (UID: \"36d2e4dd-7fea-48d6-92f9-93f3e02c0e09\") " pod="openstack/telemetry-openstack-openstack-cell1-tnvqv" Jan 29 08:47:45 crc kubenswrapper[5017]: I0129 08:47:45.521432 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zcp4\" (UniqueName: \"kubernetes.io/projected/36d2e4dd-7fea-48d6-92f9-93f3e02c0e09-kube-api-access-9zcp4\") pod \"telemetry-openstack-openstack-cell1-tnvqv\" (UID: \"36d2e4dd-7fea-48d6-92f9-93f3e02c0e09\") " pod="openstack/telemetry-openstack-openstack-cell1-tnvqv" Jan 29 08:47:45 crc kubenswrapper[5017]: I0129 08:47:45.521473 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36d2e4dd-7fea-48d6-92f9-93f3e02c0e09-telemetry-combined-ca-bundle\") pod \"telemetry-openstack-openstack-cell1-tnvqv\" (UID: \"36d2e4dd-7fea-48d6-92f9-93f3e02c0e09\") " pod="openstack/telemetry-openstack-openstack-cell1-tnvqv" Jan 29 08:47:45 crc kubenswrapper[5017]: I0129 08:47:45.526783 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36d2e4dd-7fea-48d6-92f9-93f3e02c0e09-telemetry-combined-ca-bundle\") pod \"telemetry-openstack-openstack-cell1-tnvqv\" (UID: \"36d2e4dd-7fea-48d6-92f9-93f3e02c0e09\") " pod="openstack/telemetry-openstack-openstack-cell1-tnvqv" Jan 29 08:47:45 crc kubenswrapper[5017]: I0129 08:47:45.527499 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/36d2e4dd-7fea-48d6-92f9-93f3e02c0e09-ceilometer-compute-config-data-0\") pod \"telemetry-openstack-openstack-cell1-tnvqv\" (UID: \"36d2e4dd-7fea-48d6-92f9-93f3e02c0e09\") " pod="openstack/telemetry-openstack-openstack-cell1-tnvqv" Jan 29 08:47:45 crc kubenswrapper[5017]: I0129 08:47:45.527744 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/36d2e4dd-7fea-48d6-92f9-93f3e02c0e09-ceilometer-compute-config-data-1\") pod \"telemetry-openstack-openstack-cell1-tnvqv\" (UID: \"36d2e4dd-7fea-48d6-92f9-93f3e02c0e09\") " pod="openstack/telemetry-openstack-openstack-cell1-tnvqv" Jan 29 08:47:45 crc kubenswrapper[5017]: I0129 08:47:45.527926 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/36d2e4dd-7fea-48d6-92f9-93f3e02c0e09-inventory\") pod \"telemetry-openstack-openstack-cell1-tnvqv\" (UID: \"36d2e4dd-7fea-48d6-92f9-93f3e02c0e09\") " pod="openstack/telemetry-openstack-openstack-cell1-tnvqv" Jan 29 08:47:45 crc kubenswrapper[5017]: I0129 08:47:45.528264 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/36d2e4dd-7fea-48d6-92f9-93f3e02c0e09-ssh-key-openstack-cell1\") pod \"telemetry-openstack-openstack-cell1-tnvqv\" (UID: \"36d2e4dd-7fea-48d6-92f9-93f3e02c0e09\") " pod="openstack/telemetry-openstack-openstack-cell1-tnvqv" Jan 29 08:47:45 crc kubenswrapper[5017]: I0129 08:47:45.528282 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/36d2e4dd-7fea-48d6-92f9-93f3e02c0e09-ceilometer-compute-config-data-2\") pod 
\"telemetry-openstack-openstack-cell1-tnvqv\" (UID: \"36d2e4dd-7fea-48d6-92f9-93f3e02c0e09\") " pod="openstack/telemetry-openstack-openstack-cell1-tnvqv" Jan 29 08:47:45 crc kubenswrapper[5017]: I0129 08:47:45.545838 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/36d2e4dd-7fea-48d6-92f9-93f3e02c0e09-ceph\") pod \"telemetry-openstack-openstack-cell1-tnvqv\" (UID: \"36d2e4dd-7fea-48d6-92f9-93f3e02c0e09\") " pod="openstack/telemetry-openstack-openstack-cell1-tnvqv" Jan 29 08:47:45 crc kubenswrapper[5017]: I0129 08:47:45.548401 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zcp4\" (UniqueName: \"kubernetes.io/projected/36d2e4dd-7fea-48d6-92f9-93f3e02c0e09-kube-api-access-9zcp4\") pod \"telemetry-openstack-openstack-cell1-tnvqv\" (UID: \"36d2e4dd-7fea-48d6-92f9-93f3e02c0e09\") " pod="openstack/telemetry-openstack-openstack-cell1-tnvqv" Jan 29 08:47:45 crc kubenswrapper[5017]: I0129 08:47:45.639318 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-openstack-openstack-cell1-tnvqv" Jan 29 08:47:46 crc kubenswrapper[5017]: I0129 08:47:46.304854 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-openstack-openstack-cell1-tnvqv"] Jan 29 08:47:47 crc kubenswrapper[5017]: I0129 08:47:47.238506 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-openstack-openstack-cell1-tnvqv" event={"ID":"36d2e4dd-7fea-48d6-92f9-93f3e02c0e09","Type":"ContainerStarted","Data":"7e5724aaa4c1f708fdd83faa2e8c017f73f022fb3b9953dd2a4c7f34fe77f4c6"} Jan 29 08:47:47 crc kubenswrapper[5017]: I0129 08:47:47.239473 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-openstack-openstack-cell1-tnvqv" event={"ID":"36d2e4dd-7fea-48d6-92f9-93f3e02c0e09","Type":"ContainerStarted","Data":"536abefc0983e18e3dd4e142e8b96f73e86651ad6db7b8532fd0e7d8c6c153e5"} Jan 29 08:47:47 crc kubenswrapper[5017]: I0129 08:47:47.264638 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-openstack-openstack-cell1-tnvqv" podStartSLOduration=1.8238779360000001 podStartE2EDuration="2.264611891s" podCreationTimestamp="2026-01-29 08:47:45 +0000 UTC" firstStartedPulling="2026-01-29 08:47:46.321417315 +0000 UTC m=+7952.695864925" lastFinishedPulling="2026-01-29 08:47:46.76215127 +0000 UTC m=+7953.136598880" observedRunningTime="2026-01-29 08:47:47.256436662 +0000 UTC m=+7953.630884272" watchObservedRunningTime="2026-01-29 08:47:47.264611891 +0000 UTC m=+7953.639059501" Jan 29 08:47:49 crc kubenswrapper[5017]: I0129 08:47:49.317301 5017 scope.go:117] "RemoveContainer" containerID="9d0a7fd71d9f91170d4639ccbbd832deecb27b1123db941414d97c11d3459063" Jan 29 08:47:49 crc kubenswrapper[5017]: E0129 08:47:49.318004 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:48:03 crc kubenswrapper[5017]: I0129 08:48:03.316160 5017 scope.go:117] "RemoveContainer" containerID="9d0a7fd71d9f91170d4639ccbbd832deecb27b1123db941414d97c11d3459063" Jan 29 08:48:03 crc kubenswrapper[5017]: E0129 08:48:03.317236 5017 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:48:17 crc kubenswrapper[5017]: I0129 08:48:17.316802 5017 scope.go:117] "RemoveContainer" containerID="9d0a7fd71d9f91170d4639ccbbd832deecb27b1123db941414d97c11d3459063" Jan 29 08:48:17 crc kubenswrapper[5017]: E0129 08:48:17.319508 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:48:28 crc kubenswrapper[5017]: I0129 08:48:28.316782 5017 scope.go:117] "RemoveContainer" containerID="9d0a7fd71d9f91170d4639ccbbd832deecb27b1123db941414d97c11d3459063" Jan 29 08:48:28 crc kubenswrapper[5017]: E0129 08:48:28.318284 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:48:43 crc kubenswrapper[5017]: I0129 08:48:43.316845 5017 scope.go:117] "RemoveContainer" containerID="9d0a7fd71d9f91170d4639ccbbd832deecb27b1123db941414d97c11d3459063" Jan 29 08:48:43 crc kubenswrapper[5017]: E0129 08:48:43.318215 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:48:58 crc kubenswrapper[5017]: I0129 08:48:58.317102 5017 scope.go:117] "RemoveContainer" containerID="9d0a7fd71d9f91170d4639ccbbd832deecb27b1123db941414d97c11d3459063" Jan 29 08:48:58 crc kubenswrapper[5017]: E0129 08:48:58.318343 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:49:10 crc kubenswrapper[5017]: I0129 08:49:10.317373 5017 scope.go:117] "RemoveContainer" containerID="9d0a7fd71d9f91170d4639ccbbd832deecb27b1123db941414d97c11d3459063" Jan 29 08:49:10 crc kubenswrapper[5017]: E0129 08:49:10.318541 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:49:22 crc kubenswrapper[5017]: I0129 08:49:22.317221 5017 scope.go:117] "RemoveContainer" containerID="9d0a7fd71d9f91170d4639ccbbd832deecb27b1123db941414d97c11d3459063" Jan 29 08:49:22 crc kubenswrapper[5017]: E0129 08:49:22.320008 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:49:35 crc kubenswrapper[5017]: I0129 08:49:35.317198 5017 scope.go:117] "RemoveContainer" containerID="9d0a7fd71d9f91170d4639ccbbd832deecb27b1123db941414d97c11d3459063" Jan 29 08:49:35 crc kubenswrapper[5017]: E0129 08:49:35.318336 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:49:48 crc kubenswrapper[5017]: I0129 08:49:48.316137 5017 scope.go:117] "RemoveContainer" containerID="9d0a7fd71d9f91170d4639ccbbd832deecb27b1123db941414d97c11d3459063" Jan 29 08:49:48 crc kubenswrapper[5017]: E0129 08:49:48.317436 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:49:57 crc kubenswrapper[5017]: I0129 08:49:57.880579 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-sxrr2"] Jan 29 08:49:57 crc kubenswrapper[5017]: I0129 08:49:57.885884 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-sxrr2" Jan 29 08:49:57 crc kubenswrapper[5017]: I0129 08:49:57.898800 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sxrr2"] Jan 29 08:49:57 crc kubenswrapper[5017]: I0129 08:49:57.949557 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pj44n\" (UniqueName: \"kubernetes.io/projected/9966b845-a5bb-40ea-9eb0-cd479628246b-kube-api-access-pj44n\") pod \"community-operators-sxrr2\" (UID: \"9966b845-a5bb-40ea-9eb0-cd479628246b\") " pod="openshift-marketplace/community-operators-sxrr2" Jan 29 08:49:57 crc kubenswrapper[5017]: I0129 08:49:57.949622 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9966b845-a5bb-40ea-9eb0-cd479628246b-utilities\") pod \"community-operators-sxrr2\" (UID: \"9966b845-a5bb-40ea-9eb0-cd479628246b\") " pod="openshift-marketplace/community-operators-sxrr2" Jan 29 08:49:57 crc kubenswrapper[5017]: I0129 08:49:57.949766 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9966b845-a5bb-40ea-9eb0-cd479628246b-catalog-content\") pod \"community-operators-sxrr2\" (UID: \"9966b845-a5bb-40ea-9eb0-cd479628246b\") " pod="openshift-marketplace/community-operators-sxrr2" Jan 29 08:49:58 crc kubenswrapper[5017]: I0129 08:49:58.052413 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9966b845-a5bb-40ea-9eb0-cd479628246b-catalog-content\") pod \"community-operators-sxrr2\" (UID: \"9966b845-a5bb-40ea-9eb0-cd479628246b\") " pod="openshift-marketplace/community-operators-sxrr2" Jan 29 08:49:58 crc kubenswrapper[5017]: I0129 08:49:58.052584 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pj44n\" (UniqueName: \"kubernetes.io/projected/9966b845-a5bb-40ea-9eb0-cd479628246b-kube-api-access-pj44n\") pod \"community-operators-sxrr2\" (UID: \"9966b845-a5bb-40ea-9eb0-cd479628246b\") " pod="openshift-marketplace/community-operators-sxrr2" Jan 29 08:49:58 crc kubenswrapper[5017]: I0129 08:49:58.052622 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9966b845-a5bb-40ea-9eb0-cd479628246b-utilities\") pod \"community-operators-sxrr2\" (UID: \"9966b845-a5bb-40ea-9eb0-cd479628246b\") " pod="openshift-marketplace/community-operators-sxrr2" Jan 29 08:49:58 crc kubenswrapper[5017]: I0129 08:49:58.053362 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9966b845-a5bb-40ea-9eb0-cd479628246b-utilities\") pod \"community-operators-sxrr2\" (UID: \"9966b845-a5bb-40ea-9eb0-cd479628246b\") " pod="openshift-marketplace/community-operators-sxrr2" Jan 29 08:49:58 crc kubenswrapper[5017]: I0129 08:49:58.053578 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9966b845-a5bb-40ea-9eb0-cd479628246b-catalog-content\") pod \"community-operators-sxrr2\" (UID: \"9966b845-a5bb-40ea-9eb0-cd479628246b\") " pod="openshift-marketplace/community-operators-sxrr2" Jan 29 08:49:58 crc kubenswrapper[5017]: I0129 08:49:58.081582 5017 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-pj44n\" (UniqueName: \"kubernetes.io/projected/9966b845-a5bb-40ea-9eb0-cd479628246b-kube-api-access-pj44n\") pod \"community-operators-sxrr2\" (UID: \"9966b845-a5bb-40ea-9eb0-cd479628246b\") " pod="openshift-marketplace/community-operators-sxrr2" Jan 29 08:49:58 crc kubenswrapper[5017]: I0129 08:49:58.239252 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sxrr2" Jan 29 08:49:58 crc kubenswrapper[5017]: I0129 08:49:58.835933 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sxrr2"] Jan 29 08:49:59 crc kubenswrapper[5017]: I0129 08:49:59.643508 5017 generic.go:334] "Generic (PLEG): container finished" podID="9966b845-a5bb-40ea-9eb0-cd479628246b" containerID="412e145130a4ab16948464bc83253d96283604ae9b58263c19124281a76ceea3" exitCode=0 Jan 29 08:49:59 crc kubenswrapper[5017]: I0129 08:49:59.643600 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sxrr2" event={"ID":"9966b845-a5bb-40ea-9eb0-cd479628246b","Type":"ContainerDied","Data":"412e145130a4ab16948464bc83253d96283604ae9b58263c19124281a76ceea3"} Jan 29 08:49:59 crc kubenswrapper[5017]: I0129 08:49:59.644091 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sxrr2" event={"ID":"9966b845-a5bb-40ea-9eb0-cd479628246b","Type":"ContainerStarted","Data":"6ee3b89af3754385a2ff1a62392363d21be38266d2fae74c4190c346cd9b96a9"} Jan 29 08:50:00 crc kubenswrapper[5017]: I0129 08:50:00.316060 5017 scope.go:117] "RemoveContainer" containerID="9d0a7fd71d9f91170d4639ccbbd832deecb27b1123db941414d97c11d3459063" Jan 29 08:50:00 crc kubenswrapper[5017]: I0129 08:50:00.658046 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sxrr2" event={"ID":"9966b845-a5bb-40ea-9eb0-cd479628246b","Type":"ContainerStarted","Data":"8839625e306c25556156a5f9d60a83eaf89fc51b3bb38da61ea392105adef173"} Jan 29 08:50:00 crc kubenswrapper[5017]: I0129 08:50:00.662638 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-895pl" event={"ID":"2672ef63-7861-4c3d-a1b4-03cc9d18f8e2","Type":"ContainerStarted","Data":"e3a246f65d4f8a1699917ba84874081742480fb79e7847e1046e7281f011e7ed"} Jan 29 08:50:04 crc kubenswrapper[5017]: I0129 08:50:04.712624 5017 generic.go:334] "Generic (PLEG): container finished" podID="9966b845-a5bb-40ea-9eb0-cd479628246b" containerID="8839625e306c25556156a5f9d60a83eaf89fc51b3bb38da61ea392105adef173" exitCode=0 Jan 29 08:50:04 crc kubenswrapper[5017]: I0129 08:50:04.712708 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sxrr2" event={"ID":"9966b845-a5bb-40ea-9eb0-cd479628246b","Type":"ContainerDied","Data":"8839625e306c25556156a5f9d60a83eaf89fc51b3bb38da61ea392105adef173"} Jan 29 08:50:04 crc kubenswrapper[5017]: I0129 08:50:04.717595 5017 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 08:50:05 crc kubenswrapper[5017]: I0129 08:50:05.728643 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sxrr2" event={"ID":"9966b845-a5bb-40ea-9eb0-cd479628246b","Type":"ContainerStarted","Data":"1bbbcd71ccb8adddb7279b02196f2f5917235a086eec3b2c91383fc7c276239a"} Jan 29 08:50:05 crc kubenswrapper[5017]: I0129 08:50:05.765063 5017 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-sxrr2" podStartSLOduration=3.27662562 podStartE2EDuration="8.765019407s" podCreationTimestamp="2026-01-29 08:49:57 +0000 UTC" firstStartedPulling="2026-01-29 08:49:59.646391011 +0000 UTC m=+8086.020838621" lastFinishedPulling="2026-01-29 08:50:05.134784798 +0000 UTC m=+8091.509232408" observedRunningTime="2026-01-29 08:50:05.749541209 +0000 UTC m=+8092.123988839" watchObservedRunningTime="2026-01-29 08:50:05.765019407 +0000 UTC m=+8092.139467027" Jan 29 08:50:08 crc kubenswrapper[5017]: I0129 08:50:08.241763 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-sxrr2" Jan 29 08:50:08 crc kubenswrapper[5017]: I0129 08:50:08.242244 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-sxrr2" Jan 29 08:50:08 crc kubenswrapper[5017]: I0129 08:50:08.295320 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-sxrr2" Jan 29 08:50:18 crc kubenswrapper[5017]: I0129 08:50:18.300577 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-sxrr2" Jan 29 08:50:18 crc kubenswrapper[5017]: I0129 08:50:18.360909 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sxrr2"] Jan 29 08:50:18 crc kubenswrapper[5017]: I0129 08:50:18.868275 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-sxrr2" podUID="9966b845-a5bb-40ea-9eb0-cd479628246b" containerName="registry-server" containerID="cri-o://1bbbcd71ccb8adddb7279b02196f2f5917235a086eec3b2c91383fc7c276239a" gracePeriod=2 Jan 29 08:50:19 crc kubenswrapper[5017]: I0129 08:50:19.412182 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sxrr2" Jan 29 08:50:19 crc kubenswrapper[5017]: I0129 08:50:19.493216 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9966b845-a5bb-40ea-9eb0-cd479628246b-catalog-content\") pod \"9966b845-a5bb-40ea-9eb0-cd479628246b\" (UID: \"9966b845-a5bb-40ea-9eb0-cd479628246b\") " Jan 29 08:50:19 crc kubenswrapper[5017]: I0129 08:50:19.493378 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9966b845-a5bb-40ea-9eb0-cd479628246b-utilities\") pod \"9966b845-a5bb-40ea-9eb0-cd479628246b\" (UID: \"9966b845-a5bb-40ea-9eb0-cd479628246b\") " Jan 29 08:50:19 crc kubenswrapper[5017]: I0129 08:50:19.493521 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj44n\" (UniqueName: \"kubernetes.io/projected/9966b845-a5bb-40ea-9eb0-cd479628246b-kube-api-access-pj44n\") pod \"9966b845-a5bb-40ea-9eb0-cd479628246b\" (UID: \"9966b845-a5bb-40ea-9eb0-cd479628246b\") " Jan 29 08:50:19 crc kubenswrapper[5017]: I0129 08:50:19.494577 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9966b845-a5bb-40ea-9eb0-cd479628246b-utilities" (OuterVolumeSpecName: "utilities") pod "9966b845-a5bb-40ea-9eb0-cd479628246b" (UID: "9966b845-a5bb-40ea-9eb0-cd479628246b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:50:19 crc kubenswrapper[5017]: I0129 08:50:19.496778 5017 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9966b845-a5bb-40ea-9eb0-cd479628246b-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 08:50:19 crc kubenswrapper[5017]: I0129 08:50:19.505328 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9966b845-a5bb-40ea-9eb0-cd479628246b-kube-api-access-pj44n" (OuterVolumeSpecName: "kube-api-access-pj44n") pod "9966b845-a5bb-40ea-9eb0-cd479628246b" (UID: "9966b845-a5bb-40ea-9eb0-cd479628246b"). InnerVolumeSpecName "kube-api-access-pj44n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:50:19 crc kubenswrapper[5017]: I0129 08:50:19.552496 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9966b845-a5bb-40ea-9eb0-cd479628246b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9966b845-a5bb-40ea-9eb0-cd479628246b" (UID: "9966b845-a5bb-40ea-9eb0-cd479628246b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:50:19 crc kubenswrapper[5017]: I0129 08:50:19.600642 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj44n\" (UniqueName: \"kubernetes.io/projected/9966b845-a5bb-40ea-9eb0-cd479628246b-kube-api-access-pj44n\") on node \"crc\" DevicePath \"\"" Jan 29 08:50:19 crc kubenswrapper[5017]: I0129 08:50:19.601055 5017 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9966b845-a5bb-40ea-9eb0-cd479628246b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 08:50:19 crc kubenswrapper[5017]: I0129 08:50:19.884788 5017 generic.go:334] "Generic (PLEG): container finished" podID="9966b845-a5bb-40ea-9eb0-cd479628246b" containerID="1bbbcd71ccb8adddb7279b02196f2f5917235a086eec3b2c91383fc7c276239a" exitCode=0 Jan 29 08:50:19 crc kubenswrapper[5017]: I0129 08:50:19.884851 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sxrr2" event={"ID":"9966b845-a5bb-40ea-9eb0-cd479628246b","Type":"ContainerDied","Data":"1bbbcd71ccb8adddb7279b02196f2f5917235a086eec3b2c91383fc7c276239a"} Jan 29 08:50:19 crc kubenswrapper[5017]: I0129 08:50:19.884902 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sxrr2" event={"ID":"9966b845-a5bb-40ea-9eb0-cd479628246b","Type":"ContainerDied","Data":"6ee3b89af3754385a2ff1a62392363d21be38266d2fae74c4190c346cd9b96a9"} Jan 29 08:50:19 crc kubenswrapper[5017]: I0129 08:50:19.884930 5017 scope.go:117] "RemoveContainer" containerID="1bbbcd71ccb8adddb7279b02196f2f5917235a086eec3b2c91383fc7c276239a" Jan 29 08:50:19 crc kubenswrapper[5017]: I0129 08:50:19.884942 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-sxrr2" Jan 29 08:50:19 crc kubenswrapper[5017]: I0129 08:50:19.929721 5017 scope.go:117] "RemoveContainer" containerID="8839625e306c25556156a5f9d60a83eaf89fc51b3bb38da61ea392105adef173" Jan 29 08:50:19 crc kubenswrapper[5017]: I0129 08:50:19.939103 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sxrr2"] Jan 29 08:50:19 crc kubenswrapper[5017]: I0129 08:50:19.955668 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-sxrr2"] Jan 29 08:50:19 crc kubenswrapper[5017]: I0129 08:50:19.963870 5017 scope.go:117] "RemoveContainer" containerID="412e145130a4ab16948464bc83253d96283604ae9b58263c19124281a76ceea3" Jan 29 08:50:20 crc kubenswrapper[5017]: I0129 08:50:20.016015 5017 scope.go:117] "RemoveContainer" containerID="1bbbcd71ccb8adddb7279b02196f2f5917235a086eec3b2c91383fc7c276239a" Jan 29 08:50:20 crc kubenswrapper[5017]: E0129 08:50:20.016638 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1bbbcd71ccb8adddb7279b02196f2f5917235a086eec3b2c91383fc7c276239a\": container with ID starting with 1bbbcd71ccb8adddb7279b02196f2f5917235a086eec3b2c91383fc7c276239a not found: ID does not exist" containerID="1bbbcd71ccb8adddb7279b02196f2f5917235a086eec3b2c91383fc7c276239a" Jan 29 08:50:20 crc kubenswrapper[5017]: I0129 08:50:20.016680 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1bbbcd71ccb8adddb7279b02196f2f5917235a086eec3b2c91383fc7c276239a"} err="failed to get container status \"1bbbcd71ccb8adddb7279b02196f2f5917235a086eec3b2c91383fc7c276239a\": rpc error: code = NotFound desc = could not find container \"1bbbcd71ccb8adddb7279b02196f2f5917235a086eec3b2c91383fc7c276239a\": container with ID starting with 1bbbcd71ccb8adddb7279b02196f2f5917235a086eec3b2c91383fc7c276239a not found: ID does not exist" Jan 29 08:50:20 crc kubenswrapper[5017]: I0129 08:50:20.016710 5017 scope.go:117] "RemoveContainer" containerID="8839625e306c25556156a5f9d60a83eaf89fc51b3bb38da61ea392105adef173" Jan 29 08:50:20 crc kubenswrapper[5017]: E0129 08:50:20.019335 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8839625e306c25556156a5f9d60a83eaf89fc51b3bb38da61ea392105adef173\": container with ID starting with 8839625e306c25556156a5f9d60a83eaf89fc51b3bb38da61ea392105adef173 not found: ID does not exist" containerID="8839625e306c25556156a5f9d60a83eaf89fc51b3bb38da61ea392105adef173" Jan 29 08:50:20 crc kubenswrapper[5017]: I0129 08:50:20.019367 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8839625e306c25556156a5f9d60a83eaf89fc51b3bb38da61ea392105adef173"} err="failed to get container status \"8839625e306c25556156a5f9d60a83eaf89fc51b3bb38da61ea392105adef173\": rpc error: code = NotFound desc = could not find container \"8839625e306c25556156a5f9d60a83eaf89fc51b3bb38da61ea392105adef173\": container with ID starting with 8839625e306c25556156a5f9d60a83eaf89fc51b3bb38da61ea392105adef173 not found: ID does not exist" Jan 29 08:50:20 crc kubenswrapper[5017]: I0129 08:50:20.019385 5017 scope.go:117] "RemoveContainer" containerID="412e145130a4ab16948464bc83253d96283604ae9b58263c19124281a76ceea3" Jan 29 08:50:20 crc kubenswrapper[5017]: E0129 08:50:20.020554 5017 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"412e145130a4ab16948464bc83253d96283604ae9b58263c19124281a76ceea3\": container with ID starting with 412e145130a4ab16948464bc83253d96283604ae9b58263c19124281a76ceea3 not found: ID does not exist" containerID="412e145130a4ab16948464bc83253d96283604ae9b58263c19124281a76ceea3" Jan 29 08:50:20 crc kubenswrapper[5017]: I0129 08:50:20.020619 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"412e145130a4ab16948464bc83253d96283604ae9b58263c19124281a76ceea3"} err="failed to get container status \"412e145130a4ab16948464bc83253d96283604ae9b58263c19124281a76ceea3\": rpc error: code = NotFound desc = could not find container \"412e145130a4ab16948464bc83253d96283604ae9b58263c19124281a76ceea3\": container with ID starting with 412e145130a4ab16948464bc83253d96283604ae9b58263c19124281a76ceea3 not found: ID does not exist" Jan 29 08:50:20 crc kubenswrapper[5017]: I0129 08:50:20.329101 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9966b845-a5bb-40ea-9eb0-cd479628246b" path="/var/lib/kubelet/pods/9966b845-a5bb-40ea-9eb0-cd479628246b/volumes" Jan 29 08:51:05 crc kubenswrapper[5017]: I0129 08:51:05.592219 5017 generic.go:334] "Generic (PLEG): container finished" podID="36d2e4dd-7fea-48d6-92f9-93f3e02c0e09" containerID="7e5724aaa4c1f708fdd83faa2e8c017f73f022fb3b9953dd2a4c7f34fe77f4c6" exitCode=0 Jan 29 08:51:05 crc kubenswrapper[5017]: I0129 08:51:05.592300 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-openstack-openstack-cell1-tnvqv" event={"ID":"36d2e4dd-7fea-48d6-92f9-93f3e02c0e09","Type":"ContainerDied","Data":"7e5724aaa4c1f708fdd83faa2e8c017f73f022fb3b9953dd2a4c7f34fe77f4c6"} Jan 29 08:51:07 crc kubenswrapper[5017]: I0129 08:51:07.159537 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-openstack-openstack-cell1-tnvqv" Jan 29 08:51:07 crc kubenswrapper[5017]: I0129 08:51:07.232142 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/36d2e4dd-7fea-48d6-92f9-93f3e02c0e09-ceilometer-compute-config-data-0\") pod \"36d2e4dd-7fea-48d6-92f9-93f3e02c0e09\" (UID: \"36d2e4dd-7fea-48d6-92f9-93f3e02c0e09\") " Jan 29 08:51:07 crc kubenswrapper[5017]: I0129 08:51:07.232352 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/36d2e4dd-7fea-48d6-92f9-93f3e02c0e09-ceph\") pod \"36d2e4dd-7fea-48d6-92f9-93f3e02c0e09\" (UID: \"36d2e4dd-7fea-48d6-92f9-93f3e02c0e09\") " Jan 29 08:51:07 crc kubenswrapper[5017]: I0129 08:51:07.232478 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9zcp4\" (UniqueName: \"kubernetes.io/projected/36d2e4dd-7fea-48d6-92f9-93f3e02c0e09-kube-api-access-9zcp4\") pod \"36d2e4dd-7fea-48d6-92f9-93f3e02c0e09\" (UID: \"36d2e4dd-7fea-48d6-92f9-93f3e02c0e09\") " Jan 29 08:51:07 crc kubenswrapper[5017]: I0129 08:51:07.232563 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/36d2e4dd-7fea-48d6-92f9-93f3e02c0e09-ceilometer-compute-config-data-1\") pod \"36d2e4dd-7fea-48d6-92f9-93f3e02c0e09\" (UID: \"36d2e4dd-7fea-48d6-92f9-93f3e02c0e09\") " Jan 29 08:51:07 crc kubenswrapper[5017]: I0129 08:51:07.232668 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/36d2e4dd-7fea-48d6-92f9-93f3e02c0e09-ssh-key-openstack-cell1\") pod \"36d2e4dd-7fea-48d6-92f9-93f3e02c0e09\" (UID: \"36d2e4dd-7fea-48d6-92f9-93f3e02c0e09\") " Jan 29 08:51:07 crc kubenswrapper[5017]: I0129 08:51:07.232747 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/36d2e4dd-7fea-48d6-92f9-93f3e02c0e09-inventory\") pod \"36d2e4dd-7fea-48d6-92f9-93f3e02c0e09\" (UID: \"36d2e4dd-7fea-48d6-92f9-93f3e02c0e09\") " Jan 29 08:51:07 crc kubenswrapper[5017]: I0129 08:51:07.232881 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36d2e4dd-7fea-48d6-92f9-93f3e02c0e09-telemetry-combined-ca-bundle\") pod \"36d2e4dd-7fea-48d6-92f9-93f3e02c0e09\" (UID: \"36d2e4dd-7fea-48d6-92f9-93f3e02c0e09\") " Jan 29 08:51:07 crc kubenswrapper[5017]: I0129 08:51:07.232925 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/36d2e4dd-7fea-48d6-92f9-93f3e02c0e09-ceilometer-compute-config-data-2\") pod \"36d2e4dd-7fea-48d6-92f9-93f3e02c0e09\" (UID: \"36d2e4dd-7fea-48d6-92f9-93f3e02c0e09\") " Jan 29 08:51:07 crc kubenswrapper[5017]: I0129 08:51:07.239316 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36d2e4dd-7fea-48d6-92f9-93f3e02c0e09-kube-api-access-9zcp4" (OuterVolumeSpecName: "kube-api-access-9zcp4") pod "36d2e4dd-7fea-48d6-92f9-93f3e02c0e09" (UID: "36d2e4dd-7fea-48d6-92f9-93f3e02c0e09"). InnerVolumeSpecName "kube-api-access-9zcp4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:51:07 crc kubenswrapper[5017]: I0129 08:51:07.240214 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36d2e4dd-7fea-48d6-92f9-93f3e02c0e09-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "36d2e4dd-7fea-48d6-92f9-93f3e02c0e09" (UID: "36d2e4dd-7fea-48d6-92f9-93f3e02c0e09"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:51:07 crc kubenswrapper[5017]: I0129 08:51:07.255261 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36d2e4dd-7fea-48d6-92f9-93f3e02c0e09-ceph" (OuterVolumeSpecName: "ceph") pod "36d2e4dd-7fea-48d6-92f9-93f3e02c0e09" (UID: "36d2e4dd-7fea-48d6-92f9-93f3e02c0e09"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:51:07 crc kubenswrapper[5017]: I0129 08:51:07.268350 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36d2e4dd-7fea-48d6-92f9-93f3e02c0e09-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "36d2e4dd-7fea-48d6-92f9-93f3e02c0e09" (UID: "36d2e4dd-7fea-48d6-92f9-93f3e02c0e09"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:51:07 crc kubenswrapper[5017]: I0129 08:51:07.274465 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36d2e4dd-7fea-48d6-92f9-93f3e02c0e09-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "36d2e4dd-7fea-48d6-92f9-93f3e02c0e09" (UID: "36d2e4dd-7fea-48d6-92f9-93f3e02c0e09"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:51:07 crc kubenswrapper[5017]: I0129 08:51:07.276290 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36d2e4dd-7fea-48d6-92f9-93f3e02c0e09-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "36d2e4dd-7fea-48d6-92f9-93f3e02c0e09" (UID: "36d2e4dd-7fea-48d6-92f9-93f3e02c0e09"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:51:07 crc kubenswrapper[5017]: I0129 08:51:07.278439 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36d2e4dd-7fea-48d6-92f9-93f3e02c0e09-inventory" (OuterVolumeSpecName: "inventory") pod "36d2e4dd-7fea-48d6-92f9-93f3e02c0e09" (UID: "36d2e4dd-7fea-48d6-92f9-93f3e02c0e09"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:51:07 crc kubenswrapper[5017]: I0129 08:51:07.283945 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36d2e4dd-7fea-48d6-92f9-93f3e02c0e09-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "36d2e4dd-7fea-48d6-92f9-93f3e02c0e09" (UID: "36d2e4dd-7fea-48d6-92f9-93f3e02c0e09"). InnerVolumeSpecName "ceilometer-compute-config-data-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:51:07 crc kubenswrapper[5017]: I0129 08:51:07.336153 5017 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36d2e4dd-7fea-48d6-92f9-93f3e02c0e09-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:51:07 crc kubenswrapper[5017]: I0129 08:51:07.336194 5017 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/36d2e4dd-7fea-48d6-92f9-93f3e02c0e09-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Jan 29 08:51:07 crc kubenswrapper[5017]: I0129 08:51:07.336207 5017 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/36d2e4dd-7fea-48d6-92f9-93f3e02c0e09-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Jan 29 08:51:07 crc kubenswrapper[5017]: I0129 08:51:07.336217 5017 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/36d2e4dd-7fea-48d6-92f9-93f3e02c0e09-ceph\") on node \"crc\" DevicePath \"\"" Jan 29 08:51:07 crc kubenswrapper[5017]: I0129 08:51:07.336228 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9zcp4\" (UniqueName: \"kubernetes.io/projected/36d2e4dd-7fea-48d6-92f9-93f3e02c0e09-kube-api-access-9zcp4\") on node \"crc\" DevicePath \"\"" Jan 29 08:51:07 crc kubenswrapper[5017]: I0129 08:51:07.336239 5017 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/36d2e4dd-7fea-48d6-92f9-93f3e02c0e09-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Jan 29 08:51:07 crc kubenswrapper[5017]: I0129 08:51:07.336250 5017 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/36d2e4dd-7fea-48d6-92f9-93f3e02c0e09-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 29 08:51:07 crc kubenswrapper[5017]: I0129 08:51:07.336259 5017 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/36d2e4dd-7fea-48d6-92f9-93f3e02c0e09-inventory\") on node \"crc\" DevicePath \"\"" Jan 29 08:51:07 crc kubenswrapper[5017]: I0129 08:51:07.617772 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-openstack-openstack-cell1-tnvqv" event={"ID":"36d2e4dd-7fea-48d6-92f9-93f3e02c0e09","Type":"ContainerDied","Data":"536abefc0983e18e3dd4e142e8b96f73e86651ad6db7b8532fd0e7d8c6c153e5"} Jan 29 08:51:07 crc kubenswrapper[5017]: I0129 08:51:07.617825 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="536abefc0983e18e3dd4e142e8b96f73e86651ad6db7b8532fd0e7d8c6c153e5" Jan 29 08:51:07 crc kubenswrapper[5017]: I0129 08:51:07.617859 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-openstack-openstack-cell1-tnvqv" Jan 29 08:51:07 crc kubenswrapper[5017]: I0129 08:51:07.746700 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-sriov-openstack-openstack-cell1-bmqnc"] Jan 29 08:51:07 crc kubenswrapper[5017]: E0129 08:51:07.747621 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36d2e4dd-7fea-48d6-92f9-93f3e02c0e09" containerName="telemetry-openstack-openstack-cell1" Jan 29 08:51:07 crc kubenswrapper[5017]: I0129 08:51:07.747640 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="36d2e4dd-7fea-48d6-92f9-93f3e02c0e09" containerName="telemetry-openstack-openstack-cell1" Jan 29 08:51:07 crc kubenswrapper[5017]: E0129 08:51:07.747665 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9966b845-a5bb-40ea-9eb0-cd479628246b" containerName="extract-content" Jan 29 08:51:07 crc kubenswrapper[5017]: I0129 08:51:07.747673 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="9966b845-a5bb-40ea-9eb0-cd479628246b" containerName="extract-content" Jan 29 08:51:07 crc kubenswrapper[5017]: E0129 08:51:07.747681 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9966b845-a5bb-40ea-9eb0-cd479628246b" containerName="registry-server" Jan 29 08:51:07 crc kubenswrapper[5017]: I0129 08:51:07.747688 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="9966b845-a5bb-40ea-9eb0-cd479628246b" containerName="registry-server" Jan 29 08:51:07 crc kubenswrapper[5017]: E0129 08:51:07.747710 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9966b845-a5bb-40ea-9eb0-cd479628246b" containerName="extract-utilities" Jan 29 08:51:07 crc kubenswrapper[5017]: I0129 08:51:07.747716 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="9966b845-a5bb-40ea-9eb0-cd479628246b" containerName="extract-utilities" Jan 29 08:51:07 crc kubenswrapper[5017]: I0129 08:51:07.747931 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="9966b845-a5bb-40ea-9eb0-cd479628246b" containerName="registry-server" Jan 29 08:51:07 crc kubenswrapper[5017]: I0129 08:51:07.747945 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="36d2e4dd-7fea-48d6-92f9-93f3e02c0e09" containerName="telemetry-openstack-openstack-cell1" Jan 29 08:51:07 crc kubenswrapper[5017]: I0129 08:51:07.748828 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-sriov-openstack-openstack-cell1-bmqnc" Jan 29 08:51:07 crc kubenswrapper[5017]: I0129 08:51:07.752758 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Jan 29 08:51:07 crc kubenswrapper[5017]: I0129 08:51:07.753054 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Jan 29 08:51:07 crc kubenswrapper[5017]: I0129 08:51:07.753593 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 08:51:07 crc kubenswrapper[5017]: I0129 08:51:07.753730 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-8fskm" Jan 29 08:51:07 crc kubenswrapper[5017]: I0129 08:51:07.755453 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-sriov-agent-neutron-config" Jan 29 08:51:07 crc kubenswrapper[5017]: I0129 08:51:07.760071 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-sriov-openstack-openstack-cell1-bmqnc"] Jan 29 08:51:07 crc kubenswrapper[5017]: I0129 08:51:07.847476 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkvlq\" (UniqueName: \"kubernetes.io/projected/75a2a730-ea79-4e39-a0ca-eb1c8fac88df-kube-api-access-lkvlq\") pod \"neutron-sriov-openstack-openstack-cell1-bmqnc\" (UID: \"75a2a730-ea79-4e39-a0ca-eb1c8fac88df\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-bmqnc" Jan 29 08:51:07 crc kubenswrapper[5017]: I0129 08:51:07.847689 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/75a2a730-ea79-4e39-a0ca-eb1c8fac88df-inventory\") pod \"neutron-sriov-openstack-openstack-cell1-bmqnc\" (UID: \"75a2a730-ea79-4e39-a0ca-eb1c8fac88df\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-bmqnc" Jan 29 08:51:07 crc kubenswrapper[5017]: I0129 08:51:07.847745 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-sriov-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/75a2a730-ea79-4e39-a0ca-eb1c8fac88df-neutron-sriov-agent-neutron-config-0\") pod \"neutron-sriov-openstack-openstack-cell1-bmqnc\" (UID: \"75a2a730-ea79-4e39-a0ca-eb1c8fac88df\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-bmqnc" Jan 29 08:51:07 crc kubenswrapper[5017]: I0129 08:51:07.847852 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75a2a730-ea79-4e39-a0ca-eb1c8fac88df-neutron-sriov-combined-ca-bundle\") pod \"neutron-sriov-openstack-openstack-cell1-bmqnc\" (UID: \"75a2a730-ea79-4e39-a0ca-eb1c8fac88df\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-bmqnc" Jan 29 08:51:07 crc kubenswrapper[5017]: I0129 08:51:07.847899 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/75a2a730-ea79-4e39-a0ca-eb1c8fac88df-ssh-key-openstack-cell1\") pod \"neutron-sriov-openstack-openstack-cell1-bmqnc\" (UID: \"75a2a730-ea79-4e39-a0ca-eb1c8fac88df\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-bmqnc" Jan 29 08:51:07 crc kubenswrapper[5017]: I0129 08:51:07.847931 5017 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/75a2a730-ea79-4e39-a0ca-eb1c8fac88df-ceph\") pod \"neutron-sriov-openstack-openstack-cell1-bmqnc\" (UID: \"75a2a730-ea79-4e39-a0ca-eb1c8fac88df\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-bmqnc" Jan 29 08:51:07 crc kubenswrapper[5017]: I0129 08:51:07.950592 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lkvlq\" (UniqueName: \"kubernetes.io/projected/75a2a730-ea79-4e39-a0ca-eb1c8fac88df-kube-api-access-lkvlq\") pod \"neutron-sriov-openstack-openstack-cell1-bmqnc\" (UID: \"75a2a730-ea79-4e39-a0ca-eb1c8fac88df\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-bmqnc" Jan 29 08:51:07 crc kubenswrapper[5017]: I0129 08:51:07.950660 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/75a2a730-ea79-4e39-a0ca-eb1c8fac88df-inventory\") pod \"neutron-sriov-openstack-openstack-cell1-bmqnc\" (UID: \"75a2a730-ea79-4e39-a0ca-eb1c8fac88df\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-bmqnc" Jan 29 08:51:07 crc kubenswrapper[5017]: I0129 08:51:07.950729 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-sriov-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/75a2a730-ea79-4e39-a0ca-eb1c8fac88df-neutron-sriov-agent-neutron-config-0\") pod \"neutron-sriov-openstack-openstack-cell1-bmqnc\" (UID: \"75a2a730-ea79-4e39-a0ca-eb1c8fac88df\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-bmqnc" Jan 29 08:51:07 crc kubenswrapper[5017]: I0129 08:51:07.950823 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75a2a730-ea79-4e39-a0ca-eb1c8fac88df-neutron-sriov-combined-ca-bundle\") pod \"neutron-sriov-openstack-openstack-cell1-bmqnc\" (UID: \"75a2a730-ea79-4e39-a0ca-eb1c8fac88df\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-bmqnc" Jan 29 08:51:07 crc kubenswrapper[5017]: I0129 08:51:07.950893 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/75a2a730-ea79-4e39-a0ca-eb1c8fac88df-ssh-key-openstack-cell1\") pod \"neutron-sriov-openstack-openstack-cell1-bmqnc\" (UID: \"75a2a730-ea79-4e39-a0ca-eb1c8fac88df\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-bmqnc" Jan 29 08:51:07 crc kubenswrapper[5017]: I0129 08:51:07.950933 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/75a2a730-ea79-4e39-a0ca-eb1c8fac88df-ceph\") pod \"neutron-sriov-openstack-openstack-cell1-bmqnc\" (UID: \"75a2a730-ea79-4e39-a0ca-eb1c8fac88df\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-bmqnc" Jan 29 08:51:07 crc kubenswrapper[5017]: I0129 08:51:07.957277 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-sriov-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/75a2a730-ea79-4e39-a0ca-eb1c8fac88df-neutron-sriov-agent-neutron-config-0\") pod \"neutron-sriov-openstack-openstack-cell1-bmqnc\" (UID: \"75a2a730-ea79-4e39-a0ca-eb1c8fac88df\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-bmqnc" Jan 29 08:51:07 crc kubenswrapper[5017]: I0129 08:51:07.958158 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: 
\"kubernetes.io/secret/75a2a730-ea79-4e39-a0ca-eb1c8fac88df-ceph\") pod \"neutron-sriov-openstack-openstack-cell1-bmqnc\" (UID: \"75a2a730-ea79-4e39-a0ca-eb1c8fac88df\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-bmqnc" Jan 29 08:51:07 crc kubenswrapper[5017]: I0129 08:51:07.958361 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75a2a730-ea79-4e39-a0ca-eb1c8fac88df-neutron-sriov-combined-ca-bundle\") pod \"neutron-sriov-openstack-openstack-cell1-bmqnc\" (UID: \"75a2a730-ea79-4e39-a0ca-eb1c8fac88df\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-bmqnc" Jan 29 08:51:07 crc kubenswrapper[5017]: I0129 08:51:07.958602 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/75a2a730-ea79-4e39-a0ca-eb1c8fac88df-inventory\") pod \"neutron-sriov-openstack-openstack-cell1-bmqnc\" (UID: \"75a2a730-ea79-4e39-a0ca-eb1c8fac88df\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-bmqnc" Jan 29 08:51:07 crc kubenswrapper[5017]: I0129 08:51:07.959520 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/75a2a730-ea79-4e39-a0ca-eb1c8fac88df-ssh-key-openstack-cell1\") pod \"neutron-sriov-openstack-openstack-cell1-bmqnc\" (UID: \"75a2a730-ea79-4e39-a0ca-eb1c8fac88df\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-bmqnc" Jan 29 08:51:07 crc kubenswrapper[5017]: I0129 08:51:07.974484 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lkvlq\" (UniqueName: \"kubernetes.io/projected/75a2a730-ea79-4e39-a0ca-eb1c8fac88df-kube-api-access-lkvlq\") pod \"neutron-sriov-openstack-openstack-cell1-bmqnc\" (UID: \"75a2a730-ea79-4e39-a0ca-eb1c8fac88df\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-bmqnc" Jan 29 08:51:08 crc kubenswrapper[5017]: I0129 08:51:08.068140 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-sriov-openstack-openstack-cell1-bmqnc" Jan 29 08:51:09 crc kubenswrapper[5017]: I0129 08:51:08.674804 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-sriov-openstack-openstack-cell1-bmqnc"] Jan 29 08:51:09 crc kubenswrapper[5017]: I0129 08:51:09.648269 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-sriov-openstack-openstack-cell1-bmqnc" event={"ID":"75a2a730-ea79-4e39-a0ca-eb1c8fac88df","Type":"ContainerStarted","Data":"fa1dd87e5e03de53740bda1c90e590abb07d2b23c3547afb9248762615b5e0a8"} Jan 29 08:51:09 crc kubenswrapper[5017]: I0129 08:51:09.649453 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-sriov-openstack-openstack-cell1-bmqnc" event={"ID":"75a2a730-ea79-4e39-a0ca-eb1c8fac88df","Type":"ContainerStarted","Data":"d6d88ea60895b342a0db9cc1c832a57fc86e18fb3375d61f807c2c67b9183246"} Jan 29 08:51:09 crc kubenswrapper[5017]: I0129 08:51:09.670692 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-sriov-openstack-openstack-cell1-bmqnc" podStartSLOduration=2.047581456 podStartE2EDuration="2.67065712s" podCreationTimestamp="2026-01-29 08:51:07 +0000 UTC" firstStartedPulling="2026-01-29 08:51:08.692861489 +0000 UTC m=+8155.067309089" lastFinishedPulling="2026-01-29 08:51:09.315937143 +0000 UTC m=+8155.690384753" observedRunningTime="2026-01-29 08:51:09.665089334 +0000 UTC m=+8156.039536974" watchObservedRunningTime="2026-01-29 08:51:09.67065712 +0000 UTC m=+8156.045104730" Jan 29 08:51:20 crc kubenswrapper[5017]: I0129 08:51:20.932909 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-84sz9"] Jan 29 08:51:20 crc kubenswrapper[5017]: I0129 08:51:20.940886 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-84sz9" Jan 29 08:51:20 crc kubenswrapper[5017]: I0129 08:51:20.955503 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-84sz9"] Jan 29 08:51:21 crc kubenswrapper[5017]: I0129 08:51:21.085705 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-956f2\" (UniqueName: \"kubernetes.io/projected/8a814ae7-7b6c-484b-b220-d5f2f3596d53-kube-api-access-956f2\") pod \"redhat-operators-84sz9\" (UID: \"8a814ae7-7b6c-484b-b220-d5f2f3596d53\") " pod="openshift-marketplace/redhat-operators-84sz9" Jan 29 08:51:21 crc kubenswrapper[5017]: I0129 08:51:21.085864 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a814ae7-7b6c-484b-b220-d5f2f3596d53-catalog-content\") pod \"redhat-operators-84sz9\" (UID: \"8a814ae7-7b6c-484b-b220-d5f2f3596d53\") " pod="openshift-marketplace/redhat-operators-84sz9" Jan 29 08:51:21 crc kubenswrapper[5017]: I0129 08:51:21.085935 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a814ae7-7b6c-484b-b220-d5f2f3596d53-utilities\") pod \"redhat-operators-84sz9\" (UID: \"8a814ae7-7b6c-484b-b220-d5f2f3596d53\") " pod="openshift-marketplace/redhat-operators-84sz9" Jan 29 08:51:21 crc kubenswrapper[5017]: I0129 08:51:21.188484 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a814ae7-7b6c-484b-b220-d5f2f3596d53-catalog-content\") pod \"redhat-operators-84sz9\" (UID: \"8a814ae7-7b6c-484b-b220-d5f2f3596d53\") " pod="openshift-marketplace/redhat-operators-84sz9" Jan 29 08:51:21 crc kubenswrapper[5017]: I0129 08:51:21.188910 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a814ae7-7b6c-484b-b220-d5f2f3596d53-utilities\") pod \"redhat-operators-84sz9\" (UID: \"8a814ae7-7b6c-484b-b220-d5f2f3596d53\") " pod="openshift-marketplace/redhat-operators-84sz9" Jan 29 08:51:21 crc kubenswrapper[5017]: I0129 08:51:21.189154 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-956f2\" (UniqueName: \"kubernetes.io/projected/8a814ae7-7b6c-484b-b220-d5f2f3596d53-kube-api-access-956f2\") pod \"redhat-operators-84sz9\" (UID: \"8a814ae7-7b6c-484b-b220-d5f2f3596d53\") " pod="openshift-marketplace/redhat-operators-84sz9" Jan 29 08:51:21 crc kubenswrapper[5017]: I0129 08:51:21.189295 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a814ae7-7b6c-484b-b220-d5f2f3596d53-catalog-content\") pod \"redhat-operators-84sz9\" (UID: \"8a814ae7-7b6c-484b-b220-d5f2f3596d53\") " pod="openshift-marketplace/redhat-operators-84sz9" Jan 29 08:51:21 crc kubenswrapper[5017]: I0129 08:51:21.189325 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a814ae7-7b6c-484b-b220-d5f2f3596d53-utilities\") pod \"redhat-operators-84sz9\" (UID: \"8a814ae7-7b6c-484b-b220-d5f2f3596d53\") " pod="openshift-marketplace/redhat-operators-84sz9" Jan 29 08:51:21 crc kubenswrapper[5017]: I0129 08:51:21.211988 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-956f2\" (UniqueName: \"kubernetes.io/projected/8a814ae7-7b6c-484b-b220-d5f2f3596d53-kube-api-access-956f2\") pod \"redhat-operators-84sz9\" (UID: \"8a814ae7-7b6c-484b-b220-d5f2f3596d53\") " pod="openshift-marketplace/redhat-operators-84sz9" Jan 29 08:51:21 crc kubenswrapper[5017]: I0129 08:51:21.274352 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-84sz9" Jan 29 08:51:21 crc kubenswrapper[5017]: I0129 08:51:21.492332 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4krnt"] Jan 29 08:51:21 crc kubenswrapper[5017]: I0129 08:51:21.496488 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4krnt" Jan 29 08:51:21 crc kubenswrapper[5017]: I0129 08:51:21.542814 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4krnt"] Jan 29 08:51:21 crc kubenswrapper[5017]: I0129 08:51:21.599478 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/705ffb5c-1bcb-4d3b-b066-0a9086c10e42-catalog-content\") pod \"certified-operators-4krnt\" (UID: \"705ffb5c-1bcb-4d3b-b066-0a9086c10e42\") " pod="openshift-marketplace/certified-operators-4krnt" Jan 29 08:51:21 crc kubenswrapper[5017]: I0129 08:51:21.599644 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/705ffb5c-1bcb-4d3b-b066-0a9086c10e42-utilities\") pod \"certified-operators-4krnt\" (UID: \"705ffb5c-1bcb-4d3b-b066-0a9086c10e42\") " pod="openshift-marketplace/certified-operators-4krnt" Jan 29 08:51:21 crc kubenswrapper[5017]: I0129 08:51:21.599702 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsbcf\" (UniqueName: \"kubernetes.io/projected/705ffb5c-1bcb-4d3b-b066-0a9086c10e42-kube-api-access-fsbcf\") pod \"certified-operators-4krnt\" (UID: \"705ffb5c-1bcb-4d3b-b066-0a9086c10e42\") " pod="openshift-marketplace/certified-operators-4krnt" Jan 29 08:51:21 crc kubenswrapper[5017]: I0129 08:51:21.702082 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsbcf\" (UniqueName: \"kubernetes.io/projected/705ffb5c-1bcb-4d3b-b066-0a9086c10e42-kube-api-access-fsbcf\") pod \"certified-operators-4krnt\" (UID: \"705ffb5c-1bcb-4d3b-b066-0a9086c10e42\") " pod="openshift-marketplace/certified-operators-4krnt" Jan 29 08:51:21 crc kubenswrapper[5017]: I0129 08:51:21.702189 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/705ffb5c-1bcb-4d3b-b066-0a9086c10e42-catalog-content\") pod \"certified-operators-4krnt\" (UID: \"705ffb5c-1bcb-4d3b-b066-0a9086c10e42\") " pod="openshift-marketplace/certified-operators-4krnt" Jan 29 08:51:21 crc kubenswrapper[5017]: I0129 08:51:21.702308 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/705ffb5c-1bcb-4d3b-b066-0a9086c10e42-utilities\") pod \"certified-operators-4krnt\" (UID: \"705ffb5c-1bcb-4d3b-b066-0a9086c10e42\") " pod="openshift-marketplace/certified-operators-4krnt" Jan 29 08:51:21 crc kubenswrapper[5017]: I0129 08:51:21.703019 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/705ffb5c-1bcb-4d3b-b066-0a9086c10e42-utilities\") pod \"certified-operators-4krnt\" (UID: \"705ffb5c-1bcb-4d3b-b066-0a9086c10e42\") " pod="openshift-marketplace/certified-operators-4krnt" Jan 29 08:51:21 crc kubenswrapper[5017]: I0129 08:51:21.703427 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/705ffb5c-1bcb-4d3b-b066-0a9086c10e42-catalog-content\") pod \"certified-operators-4krnt\" (UID: \"705ffb5c-1bcb-4d3b-b066-0a9086c10e42\") " pod="openshift-marketplace/certified-operators-4krnt" Jan 29 08:51:21 crc kubenswrapper[5017]: I0129 08:51:21.725082 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsbcf\" (UniqueName: \"kubernetes.io/projected/705ffb5c-1bcb-4d3b-b066-0a9086c10e42-kube-api-access-fsbcf\") pod \"certified-operators-4krnt\" (UID: \"705ffb5c-1bcb-4d3b-b066-0a9086c10e42\") " pod="openshift-marketplace/certified-operators-4krnt" Jan 29 08:51:21 crc kubenswrapper[5017]: I0129 08:51:21.833813 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4krnt" Jan 29 08:51:21 crc kubenswrapper[5017]: I0129 08:51:21.839547 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-84sz9"] Jan 29 08:51:22 crc kubenswrapper[5017]: I0129 08:51:22.416255 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4krnt"] Jan 29 08:51:22 crc kubenswrapper[5017]: W0129 08:51:22.420176 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod705ffb5c_1bcb_4d3b_b066_0a9086c10e42.slice/crio-bb90be0150c3d95406aeab18368273d411021f4fc65820a196d6852a569f4d12 WatchSource:0}: Error finding container bb90be0150c3d95406aeab18368273d411021f4fc65820a196d6852a569f4d12: Status 404 returned error can't find the container with id bb90be0150c3d95406aeab18368273d411021f4fc65820a196d6852a569f4d12 Jan 29 08:51:22 crc kubenswrapper[5017]: I0129 08:51:22.826740 5017 generic.go:334] "Generic (PLEG): container finished" podID="705ffb5c-1bcb-4d3b-b066-0a9086c10e42" containerID="7d90c603a396a15cc0dc4b61e20f61be15abee50a1244d58b7034e802b4167b7" exitCode=0 Jan 29 08:51:22 crc kubenswrapper[5017]: I0129 08:51:22.826833 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4krnt" event={"ID":"705ffb5c-1bcb-4d3b-b066-0a9086c10e42","Type":"ContainerDied","Data":"7d90c603a396a15cc0dc4b61e20f61be15abee50a1244d58b7034e802b4167b7"} Jan 29 08:51:22 crc kubenswrapper[5017]: I0129 08:51:22.826869 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4krnt" event={"ID":"705ffb5c-1bcb-4d3b-b066-0a9086c10e42","Type":"ContainerStarted","Data":"bb90be0150c3d95406aeab18368273d411021f4fc65820a196d6852a569f4d12"} Jan 29 08:51:22 crc kubenswrapper[5017]: I0129 08:51:22.844283 5017 generic.go:334] "Generic (PLEG): container finished" podID="8a814ae7-7b6c-484b-b220-d5f2f3596d53" containerID="82bb13417203f470f35239c637f6e38cfde0a69feec3c0d107a989e0b293082d" exitCode=0 Jan 29 08:51:22 crc kubenswrapper[5017]: I0129 08:51:22.844425 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-84sz9" 
event={"ID":"8a814ae7-7b6c-484b-b220-d5f2f3596d53","Type":"ContainerDied","Data":"82bb13417203f470f35239c637f6e38cfde0a69feec3c0d107a989e0b293082d"} Jan 29 08:51:22 crc kubenswrapper[5017]: I0129 08:51:22.844878 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-84sz9" event={"ID":"8a814ae7-7b6c-484b-b220-d5f2f3596d53","Type":"ContainerStarted","Data":"75cdecc5345793a6765e58c3baea928e1818f5e2f88925a83c5a4a63c596995f"} Jan 29 08:51:23 crc kubenswrapper[5017]: I0129 08:51:23.890285 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jpqfb"] Jan 29 08:51:23 crc kubenswrapper[5017]: I0129 08:51:23.900599 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jpqfb" Jan 29 08:51:23 crc kubenswrapper[5017]: I0129 08:51:23.900901 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-84sz9" event={"ID":"8a814ae7-7b6c-484b-b220-d5f2f3596d53","Type":"ContainerStarted","Data":"a35121e278efc624e1f9eab76d97ac9dd4856e7dfc9c83eca5f82a56ac8fa3a1"} Jan 29 08:51:23 crc kubenswrapper[5017]: I0129 08:51:23.905442 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4krnt" event={"ID":"705ffb5c-1bcb-4d3b-b066-0a9086c10e42","Type":"ContainerStarted","Data":"babbe45bfa41469ff92be32abf02963bed97b7742a89175a2c985f25b89af8e7"} Jan 29 08:51:23 crc kubenswrapper[5017]: I0129 08:51:23.948163 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jpqfb"] Jan 29 08:51:23 crc kubenswrapper[5017]: I0129 08:51:23.970150 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cv4qn\" (UniqueName: \"kubernetes.io/projected/cd17bbb9-84c5-445c-978d-a4ea3b720c9c-kube-api-access-cv4qn\") pod \"redhat-marketplace-jpqfb\" (UID: \"cd17bbb9-84c5-445c-978d-a4ea3b720c9c\") " pod="openshift-marketplace/redhat-marketplace-jpqfb" Jan 29 08:51:23 crc kubenswrapper[5017]: I0129 08:51:23.970261 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd17bbb9-84c5-445c-978d-a4ea3b720c9c-catalog-content\") pod \"redhat-marketplace-jpqfb\" (UID: \"cd17bbb9-84c5-445c-978d-a4ea3b720c9c\") " pod="openshift-marketplace/redhat-marketplace-jpqfb" Jan 29 08:51:23 crc kubenswrapper[5017]: I0129 08:51:23.970906 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd17bbb9-84c5-445c-978d-a4ea3b720c9c-utilities\") pod \"redhat-marketplace-jpqfb\" (UID: \"cd17bbb9-84c5-445c-978d-a4ea3b720c9c\") " pod="openshift-marketplace/redhat-marketplace-jpqfb" Jan 29 08:51:24 crc kubenswrapper[5017]: I0129 08:51:24.085333 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cv4qn\" (UniqueName: \"kubernetes.io/projected/cd17bbb9-84c5-445c-978d-a4ea3b720c9c-kube-api-access-cv4qn\") pod \"redhat-marketplace-jpqfb\" (UID: \"cd17bbb9-84c5-445c-978d-a4ea3b720c9c\") " pod="openshift-marketplace/redhat-marketplace-jpqfb" Jan 29 08:51:24 crc kubenswrapper[5017]: I0129 08:51:24.085416 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/cd17bbb9-84c5-445c-978d-a4ea3b720c9c-catalog-content\") pod \"redhat-marketplace-jpqfb\" (UID: \"cd17bbb9-84c5-445c-978d-a4ea3b720c9c\") " pod="openshift-marketplace/redhat-marketplace-jpqfb" Jan 29 08:51:24 crc kubenswrapper[5017]: I0129 08:51:24.085521 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd17bbb9-84c5-445c-978d-a4ea3b720c9c-utilities\") pod \"redhat-marketplace-jpqfb\" (UID: \"cd17bbb9-84c5-445c-978d-a4ea3b720c9c\") " pod="openshift-marketplace/redhat-marketplace-jpqfb" Jan 29 08:51:24 crc kubenswrapper[5017]: I0129 08:51:24.086305 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd17bbb9-84c5-445c-978d-a4ea3b720c9c-catalog-content\") pod \"redhat-marketplace-jpqfb\" (UID: \"cd17bbb9-84c5-445c-978d-a4ea3b720c9c\") " pod="openshift-marketplace/redhat-marketplace-jpqfb" Jan 29 08:51:24 crc kubenswrapper[5017]: I0129 08:51:24.086391 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd17bbb9-84c5-445c-978d-a4ea3b720c9c-utilities\") pod \"redhat-marketplace-jpqfb\" (UID: \"cd17bbb9-84c5-445c-978d-a4ea3b720c9c\") " pod="openshift-marketplace/redhat-marketplace-jpqfb" Jan 29 08:51:24 crc kubenswrapper[5017]: I0129 08:51:24.109665 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cv4qn\" (UniqueName: \"kubernetes.io/projected/cd17bbb9-84c5-445c-978d-a4ea3b720c9c-kube-api-access-cv4qn\") pod \"redhat-marketplace-jpqfb\" (UID: \"cd17bbb9-84c5-445c-978d-a4ea3b720c9c\") " pod="openshift-marketplace/redhat-marketplace-jpqfb" Jan 29 08:51:24 crc kubenswrapper[5017]: I0129 08:51:24.237645 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jpqfb" Jan 29 08:51:24 crc kubenswrapper[5017]: I0129 08:51:24.810496 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jpqfb"] Jan 29 08:51:24 crc kubenswrapper[5017]: W0129 08:51:24.821495 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcd17bbb9_84c5_445c_978d_a4ea3b720c9c.slice/crio-7c546040ea92c5348df3f1c2606d7ae37909e380c92273ab23108c030e432a13 WatchSource:0}: Error finding container 7c546040ea92c5348df3f1c2606d7ae37909e380c92273ab23108c030e432a13: Status 404 returned error can't find the container with id 7c546040ea92c5348df3f1c2606d7ae37909e380c92273ab23108c030e432a13 Jan 29 08:51:24 crc kubenswrapper[5017]: I0129 08:51:24.921913 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jpqfb" event={"ID":"cd17bbb9-84c5-445c-978d-a4ea3b720c9c","Type":"ContainerStarted","Data":"7c546040ea92c5348df3f1c2606d7ae37909e380c92273ab23108c030e432a13"} Jan 29 08:51:25 crc kubenswrapper[5017]: I0129 08:51:25.934271 5017 generic.go:334] "Generic (PLEG): container finished" podID="705ffb5c-1bcb-4d3b-b066-0a9086c10e42" containerID="babbe45bfa41469ff92be32abf02963bed97b7742a89175a2c985f25b89af8e7" exitCode=0 Jan 29 08:51:25 crc kubenswrapper[5017]: I0129 08:51:25.935235 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4krnt" event={"ID":"705ffb5c-1bcb-4d3b-b066-0a9086c10e42","Type":"ContainerDied","Data":"babbe45bfa41469ff92be32abf02963bed97b7742a89175a2c985f25b89af8e7"} Jan 29 08:51:25 crc kubenswrapper[5017]: I0129 08:51:25.942019 5017 generic.go:334] "Generic (PLEG): container finished" podID="cd17bbb9-84c5-445c-978d-a4ea3b720c9c" containerID="6e603880a190e53da6ad9c7f05d25e2ecf0524355bb5b7ab1be71ce59fcd7d40" exitCode=0 Jan 29 08:51:25 crc kubenswrapper[5017]: I0129 08:51:25.942082 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jpqfb" event={"ID":"cd17bbb9-84c5-445c-978d-a4ea3b720c9c","Type":"ContainerDied","Data":"6e603880a190e53da6ad9c7f05d25e2ecf0524355bb5b7ab1be71ce59fcd7d40"} Jan 29 08:51:27 crc kubenswrapper[5017]: I0129 08:51:27.971167 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4krnt" event={"ID":"705ffb5c-1bcb-4d3b-b066-0a9086c10e42","Type":"ContainerStarted","Data":"9ba976f443ae224f1d72bf9e388041eb30cd9dd731b5c835d624885e0e016e3f"} Jan 29 08:51:27 crc kubenswrapper[5017]: I0129 08:51:27.978139 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jpqfb" event={"ID":"cd17bbb9-84c5-445c-978d-a4ea3b720c9c","Type":"ContainerStarted","Data":"4a44b8aa7fb4a1ebf289c36f72234b2cfd2faaca0b1599f83e236e8d3efed09f"} Jan 29 08:51:28 crc kubenswrapper[5017]: I0129 08:51:28.002132 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4krnt" podStartSLOduration=3.216906746 podStartE2EDuration="7.002102811s" podCreationTimestamp="2026-01-29 08:51:21 +0000 UTC" firstStartedPulling="2026-01-29 08:51:22.830321597 +0000 UTC m=+8169.204769207" lastFinishedPulling="2026-01-29 08:51:26.615517662 +0000 UTC m=+8172.989965272" observedRunningTime="2026-01-29 08:51:27.995683324 +0000 UTC m=+8174.370130934" watchObservedRunningTime="2026-01-29 08:51:28.002102811 +0000 UTC m=+8174.376550421" Jan 29 
08:51:31 crc kubenswrapper[5017]: I0129 08:51:31.012129 5017 generic.go:334] "Generic (PLEG): container finished" podID="cd17bbb9-84c5-445c-978d-a4ea3b720c9c" containerID="4a44b8aa7fb4a1ebf289c36f72234b2cfd2faaca0b1599f83e236e8d3efed09f" exitCode=0
Jan 29 08:51:31 crc kubenswrapper[5017]: I0129 08:51:31.012236 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jpqfb" event={"ID":"cd17bbb9-84c5-445c-978d-a4ea3b720c9c","Type":"ContainerDied","Data":"4a44b8aa7fb4a1ebf289c36f72234b2cfd2faaca0b1599f83e236e8d3efed09f"}
Jan 29 08:51:31 crc kubenswrapper[5017]: I0129 08:51:31.835360 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4krnt"
Jan 29 08:51:31 crc kubenswrapper[5017]: I0129 08:51:31.837066 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4krnt"
Jan 29 08:51:32 crc kubenswrapper[5017]: I0129 08:51:32.028293 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jpqfb" event={"ID":"cd17bbb9-84c5-445c-978d-a4ea3b720c9c","Type":"ContainerStarted","Data":"4b18b6a47034c272b75687ed91128333fe0009370bbbcb4ba6fd744d8943b507"}
Jan 29 08:51:32 crc kubenswrapper[5017]: I0129 08:51:32.058593 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jpqfb" podStartSLOduration=3.368034566 podStartE2EDuration="9.058567835s" podCreationTimestamp="2026-01-29 08:51:23 +0000 UTC" firstStartedPulling="2026-01-29 08:51:25.944110947 +0000 UTC m=+8172.318558557" lastFinishedPulling="2026-01-29 08:51:31.634644216 +0000 UTC m=+8178.009091826" observedRunningTime="2026-01-29 08:51:32.05348648 +0000 UTC m=+8178.427934120" watchObservedRunningTime="2026-01-29 08:51:32.058567835 +0000 UTC m=+8178.433015445"
Jan 29 08:51:32 crc kubenswrapper[5017]: I0129 08:51:32.898532 5017 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-4krnt" podUID="705ffb5c-1bcb-4d3b-b066-0a9086c10e42" containerName="registry-server" probeResult="failure" output=<
Jan 29 08:51:32 crc kubenswrapper[5017]: timeout: failed to connect service ":50051" within 1s
Jan 29 08:51:32 crc kubenswrapper[5017]: >
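The probe output above, timeout: failed to connect service ":50051" within 1s, is the kind of message a grpc-health-probe-style startup check prints while the registry-server is still unpacking catalog content: the check gives the container one second to accept a connection on its gRPC port before reporting failure. A rough Go stand-in for the connect-with-deadline part only (the address and the exact probe binary are assumptions; the real check runs inside the container):

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// One second to establish a TCP connection to the gRPC port, as the
	// "within 1s" in the log output suggests.
	conn, err := net.DialTimeout("tcp", "localhost:50051", time.Second)
	if err != nil {
		fmt.Printf("timeout: failed to connect service %q within 1s\n", ":50051")
		os.Exit(1) // non-zero exit => probe failure
	}
	conn.Close()
	fmt.Println("ok") // exit 0 => probe success
}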
firstStartedPulling="2026-01-29 08:51:22.847038935 +0000 UTC m=+8169.221486535" lastFinishedPulling="2026-01-29 08:51:33.51293704 +0000 UTC m=+8179.887384650" observedRunningTime="2026-01-29 08:51:34.080144819 +0000 UTC m=+8180.454592449" watchObservedRunningTime="2026-01-29 08:51:34.084222258 +0000 UTC m=+8180.458669868" Jan 29 08:51:34 crc kubenswrapper[5017]: I0129 08:51:34.238139 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jpqfb" Jan 29 08:51:34 crc kubenswrapper[5017]: I0129 08:51:34.238200 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jpqfb" Jan 29 08:51:35 crc kubenswrapper[5017]: I0129 08:51:35.298779 5017 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-jpqfb" podUID="cd17bbb9-84c5-445c-978d-a4ea3b720c9c" containerName="registry-server" probeResult="failure" output=< Jan 29 08:51:35 crc kubenswrapper[5017]: timeout: failed to connect service ":50051" within 1s Jan 29 08:51:35 crc kubenswrapper[5017]: > Jan 29 08:51:41 crc kubenswrapper[5017]: I0129 08:51:41.275268 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-84sz9" Jan 29 08:51:41 crc kubenswrapper[5017]: I0129 08:51:41.276214 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-84sz9" Jan 29 08:51:42 crc kubenswrapper[5017]: I0129 08:51:42.341406 5017 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-84sz9" podUID="8a814ae7-7b6c-484b-b220-d5f2f3596d53" containerName="registry-server" probeResult="failure" output=< Jan 29 08:51:42 crc kubenswrapper[5017]: timeout: failed to connect service ":50051" within 1s Jan 29 08:51:42 crc kubenswrapper[5017]: > Jan 29 08:51:42 crc kubenswrapper[5017]: I0129 08:51:42.893175 5017 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-4krnt" podUID="705ffb5c-1bcb-4d3b-b066-0a9086c10e42" containerName="registry-server" probeResult="failure" output=< Jan 29 08:51:42 crc kubenswrapper[5017]: timeout: failed to connect service ":50051" within 1s Jan 29 08:51:42 crc kubenswrapper[5017]: > Jan 29 08:51:44 crc kubenswrapper[5017]: I0129 08:51:44.294705 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jpqfb" Jan 29 08:51:44 crc kubenswrapper[5017]: I0129 08:51:44.357471 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jpqfb" Jan 29 08:51:44 crc kubenswrapper[5017]: I0129 08:51:44.538035 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jpqfb"] Jan 29 08:51:46 crc kubenswrapper[5017]: I0129 08:51:46.181670 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-jpqfb" podUID="cd17bbb9-84c5-445c-978d-a4ea3b720c9c" containerName="registry-server" containerID="cri-o://4b18b6a47034c272b75687ed91128333fe0009370bbbcb4ba6fd744d8943b507" gracePeriod=2 Jan 29 08:51:46 crc kubenswrapper[5017]: I0129 08:51:46.803687 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jpqfb" Jan 29 08:51:46 crc kubenswrapper[5017]: I0129 08:51:46.930544 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd17bbb9-84c5-445c-978d-a4ea3b720c9c-catalog-content\") pod \"cd17bbb9-84c5-445c-978d-a4ea3b720c9c\" (UID: \"cd17bbb9-84c5-445c-978d-a4ea3b720c9c\") " Jan 29 08:51:46 crc kubenswrapper[5017]: I0129 08:51:46.930781 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cv4qn\" (UniqueName: \"kubernetes.io/projected/cd17bbb9-84c5-445c-978d-a4ea3b720c9c-kube-api-access-cv4qn\") pod \"cd17bbb9-84c5-445c-978d-a4ea3b720c9c\" (UID: \"cd17bbb9-84c5-445c-978d-a4ea3b720c9c\") " Jan 29 08:51:46 crc kubenswrapper[5017]: I0129 08:51:46.930877 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd17bbb9-84c5-445c-978d-a4ea3b720c9c-utilities\") pod \"cd17bbb9-84c5-445c-978d-a4ea3b720c9c\" (UID: \"cd17bbb9-84c5-445c-978d-a4ea3b720c9c\") " Jan 29 08:51:46 crc kubenswrapper[5017]: I0129 08:51:46.933000 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd17bbb9-84c5-445c-978d-a4ea3b720c9c-utilities" (OuterVolumeSpecName: "utilities") pod "cd17bbb9-84c5-445c-978d-a4ea3b720c9c" (UID: "cd17bbb9-84c5-445c-978d-a4ea3b720c9c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:51:46 crc kubenswrapper[5017]: I0129 08:51:46.954516 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd17bbb9-84c5-445c-978d-a4ea3b720c9c-kube-api-access-cv4qn" (OuterVolumeSpecName: "kube-api-access-cv4qn") pod "cd17bbb9-84c5-445c-978d-a4ea3b720c9c" (UID: "cd17bbb9-84c5-445c-978d-a4ea3b720c9c"). InnerVolumeSpecName "kube-api-access-cv4qn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:51:46 crc kubenswrapper[5017]: I0129 08:51:46.961651 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd17bbb9-84c5-445c-978d-a4ea3b720c9c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cd17bbb9-84c5-445c-978d-a4ea3b720c9c" (UID: "cd17bbb9-84c5-445c-978d-a4ea3b720c9c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:51:47 crc kubenswrapper[5017]: I0129 08:51:47.033469 5017 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd17bbb9-84c5-445c-978d-a4ea3b720c9c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 08:51:47 crc kubenswrapper[5017]: I0129 08:51:47.033528 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cv4qn\" (UniqueName: \"kubernetes.io/projected/cd17bbb9-84c5-445c-978d-a4ea3b720c9c-kube-api-access-cv4qn\") on node \"crc\" DevicePath \"\"" Jan 29 08:51:47 crc kubenswrapper[5017]: I0129 08:51:47.033541 5017 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd17bbb9-84c5-445c-978d-a4ea3b720c9c-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 08:51:47 crc kubenswrapper[5017]: I0129 08:51:47.195527 5017 generic.go:334] "Generic (PLEG): container finished" podID="cd17bbb9-84c5-445c-978d-a4ea3b720c9c" containerID="4b18b6a47034c272b75687ed91128333fe0009370bbbcb4ba6fd744d8943b507" exitCode=0 Jan 29 08:51:47 crc kubenswrapper[5017]: I0129 08:51:47.195592 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jpqfb" event={"ID":"cd17bbb9-84c5-445c-978d-a4ea3b720c9c","Type":"ContainerDied","Data":"4b18b6a47034c272b75687ed91128333fe0009370bbbcb4ba6fd744d8943b507"} Jan 29 08:51:47 crc kubenswrapper[5017]: I0129 08:51:47.195609 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jpqfb" Jan 29 08:51:47 crc kubenswrapper[5017]: I0129 08:51:47.195627 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jpqfb" event={"ID":"cd17bbb9-84c5-445c-978d-a4ea3b720c9c","Type":"ContainerDied","Data":"7c546040ea92c5348df3f1c2606d7ae37909e380c92273ab23108c030e432a13"} Jan 29 08:51:47 crc kubenswrapper[5017]: I0129 08:51:47.195684 5017 scope.go:117] "RemoveContainer" containerID="4b18b6a47034c272b75687ed91128333fe0009370bbbcb4ba6fd744d8943b507" Jan 29 08:51:47 crc kubenswrapper[5017]: I0129 08:51:47.255286 5017 scope.go:117] "RemoveContainer" containerID="4a44b8aa7fb4a1ebf289c36f72234b2cfd2faaca0b1599f83e236e8d3efed09f" Jan 29 08:51:47 crc kubenswrapper[5017]: I0129 08:51:47.262982 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jpqfb"] Jan 29 08:51:47 crc kubenswrapper[5017]: I0129 08:51:47.279671 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-jpqfb"] Jan 29 08:51:47 crc kubenswrapper[5017]: I0129 08:51:47.287195 5017 scope.go:117] "RemoveContainer" containerID="6e603880a190e53da6ad9c7f05d25e2ecf0524355bb5b7ab1be71ce59fcd7d40" Jan 29 08:51:47 crc kubenswrapper[5017]: I0129 08:51:47.335013 5017 scope.go:117] "RemoveContainer" containerID="4b18b6a47034c272b75687ed91128333fe0009370bbbcb4ba6fd744d8943b507" Jan 29 08:51:47 crc kubenswrapper[5017]: E0129 08:51:47.339677 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b18b6a47034c272b75687ed91128333fe0009370bbbcb4ba6fd744d8943b507\": container with ID starting with 4b18b6a47034c272b75687ed91128333fe0009370bbbcb4ba6fd744d8943b507 not found: ID does not exist" containerID="4b18b6a47034c272b75687ed91128333fe0009370bbbcb4ba6fd744d8943b507" Jan 29 08:51:47 crc kubenswrapper[5017]: I0129 08:51:47.339748 5017 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b18b6a47034c272b75687ed91128333fe0009370bbbcb4ba6fd744d8943b507"} err="failed to get container status \"4b18b6a47034c272b75687ed91128333fe0009370bbbcb4ba6fd744d8943b507\": rpc error: code = NotFound desc = could not find container \"4b18b6a47034c272b75687ed91128333fe0009370bbbcb4ba6fd744d8943b507\": container with ID starting with 4b18b6a47034c272b75687ed91128333fe0009370bbbcb4ba6fd744d8943b507 not found: ID does not exist"
Jan 29 08:51:47 crc kubenswrapper[5017]: I0129 08:51:47.339783 5017 scope.go:117] "RemoveContainer" containerID="4a44b8aa7fb4a1ebf289c36f72234b2cfd2faaca0b1599f83e236e8d3efed09f"
Jan 29 08:51:47 crc kubenswrapper[5017]: E0129 08:51:47.340392 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a44b8aa7fb4a1ebf289c36f72234b2cfd2faaca0b1599f83e236e8d3efed09f\": container with ID starting with 4a44b8aa7fb4a1ebf289c36f72234b2cfd2faaca0b1599f83e236e8d3efed09f not found: ID does not exist" containerID="4a44b8aa7fb4a1ebf289c36f72234b2cfd2faaca0b1599f83e236e8d3efed09f"
Jan 29 08:51:47 crc kubenswrapper[5017]: I0129 08:51:47.340468 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a44b8aa7fb4a1ebf289c36f72234b2cfd2faaca0b1599f83e236e8d3efed09f"} err="failed to get container status \"4a44b8aa7fb4a1ebf289c36f72234b2cfd2faaca0b1599f83e236e8d3efed09f\": rpc error: code = NotFound desc = could not find container \"4a44b8aa7fb4a1ebf289c36f72234b2cfd2faaca0b1599f83e236e8d3efed09f\": container with ID starting with 4a44b8aa7fb4a1ebf289c36f72234b2cfd2faaca0b1599f83e236e8d3efed09f not found: ID does not exist"
Jan 29 08:51:47 crc kubenswrapper[5017]: I0129 08:51:47.340509 5017 scope.go:117] "RemoveContainer" containerID="6e603880a190e53da6ad9c7f05d25e2ecf0524355bb5b7ab1be71ce59fcd7d40"
Jan 29 08:51:47 crc kubenswrapper[5017]: E0129 08:51:47.342662 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e603880a190e53da6ad9c7f05d25e2ecf0524355bb5b7ab1be71ce59fcd7d40\": container with ID starting with 6e603880a190e53da6ad9c7f05d25e2ecf0524355bb5b7ab1be71ce59fcd7d40 not found: ID does not exist" containerID="6e603880a190e53da6ad9c7f05d25e2ecf0524355bb5b7ab1be71ce59fcd7d40"
Jan 29 08:51:47 crc kubenswrapper[5017]: I0129 08:51:47.342699 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e603880a190e53da6ad9c7f05d25e2ecf0524355bb5b7ab1be71ce59fcd7d40"} err="failed to get container status \"6e603880a190e53da6ad9c7f05d25e2ecf0524355bb5b7ab1be71ce59fcd7d40\": rpc error: code = NotFound desc = could not find container \"6e603880a190e53da6ad9c7f05d25e2ecf0524355bb5b7ab1be71ce59fcd7d40\": container with ID starting with 6e603880a190e53da6ad9c7f05d25e2ecf0524355bb5b7ab1be71ce59fcd7d40 not found: ID does not exist"
Jan 29 08:51:48 crc kubenswrapper[5017]: I0129 08:51:48.331658 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd17bbb9-84c5-445c-978d-a4ea3b720c9c" path="/var/lib/kubelet/pods/cd17bbb9-84c5-445c-978d-a4ea3b720c9c/volumes"
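The RemoveContainer / "ContainerStatus from runtime service failed" / "DeleteContainer returned error" triples above are a benign race: by the time the follow-up status lookup runs, CRI-O has already removed the container, so the runtime answers with gRPC NotFound and the kubelet only logs it. A Go sketch of how a CRI caller can classify that error, assuming the google.golang.org/grpc module is available (the CRI connection itself is not shown):

package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// isNotFound reports whether err is a gRPC status with code NotFound,
// the code seen in the log entries above.
func isNotFound(err error) bool {
	s, ok := status.FromError(err)
	return ok && s.Code() == codes.NotFound
}

func main() {
	// Simulate the error the runtime returned for the already-deleted container.
	err := status.Error(codes.NotFound, `could not find container "4b18b6a4..."`)
	if isNotFound(err) {
		fmt.Println("container already removed; nothing left to do")
	} else if err != nil {
		fmt.Println("real failure:", err)
	}
}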
probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4krnt" Jan 29 08:51:52 crc kubenswrapper[5017]: I0129 08:51:52.325540 5017 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-84sz9" podUID="8a814ae7-7b6c-484b-b220-d5f2f3596d53" containerName="registry-server" probeResult="failure" output=< Jan 29 08:51:52 crc kubenswrapper[5017]: timeout: failed to connect service ":50051" within 1s Jan 29 08:51:52 crc kubenswrapper[5017]: > Jan 29 08:51:53 crc kubenswrapper[5017]: I0129 08:51:53.086250 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4krnt"] Jan 29 08:51:53 crc kubenswrapper[5017]: I0129 08:51:53.273066 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4krnt" podUID="705ffb5c-1bcb-4d3b-b066-0a9086c10e42" containerName="registry-server" containerID="cri-o://9ba976f443ae224f1d72bf9e388041eb30cd9dd731b5c835d624885e0e016e3f" gracePeriod=2 Jan 29 08:51:53 crc kubenswrapper[5017]: I0129 08:51:53.839891 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4krnt" Jan 29 08:51:53 crc kubenswrapper[5017]: I0129 08:51:53.913669 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fsbcf\" (UniqueName: \"kubernetes.io/projected/705ffb5c-1bcb-4d3b-b066-0a9086c10e42-kube-api-access-fsbcf\") pod \"705ffb5c-1bcb-4d3b-b066-0a9086c10e42\" (UID: \"705ffb5c-1bcb-4d3b-b066-0a9086c10e42\") " Jan 29 08:51:53 crc kubenswrapper[5017]: I0129 08:51:53.913820 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/705ffb5c-1bcb-4d3b-b066-0a9086c10e42-utilities\") pod \"705ffb5c-1bcb-4d3b-b066-0a9086c10e42\" (UID: \"705ffb5c-1bcb-4d3b-b066-0a9086c10e42\") " Jan 29 08:51:53 crc kubenswrapper[5017]: I0129 08:51:53.913878 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/705ffb5c-1bcb-4d3b-b066-0a9086c10e42-catalog-content\") pod \"705ffb5c-1bcb-4d3b-b066-0a9086c10e42\" (UID: \"705ffb5c-1bcb-4d3b-b066-0a9086c10e42\") " Jan 29 08:51:53 crc kubenswrapper[5017]: I0129 08:51:53.914881 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/705ffb5c-1bcb-4d3b-b066-0a9086c10e42-utilities" (OuterVolumeSpecName: "utilities") pod "705ffb5c-1bcb-4d3b-b066-0a9086c10e42" (UID: "705ffb5c-1bcb-4d3b-b066-0a9086c10e42"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:51:53 crc kubenswrapper[5017]: I0129 08:51:53.917541 5017 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/705ffb5c-1bcb-4d3b-b066-0a9086c10e42-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 08:51:53 crc kubenswrapper[5017]: I0129 08:51:53.924392 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/705ffb5c-1bcb-4d3b-b066-0a9086c10e42-kube-api-access-fsbcf" (OuterVolumeSpecName: "kube-api-access-fsbcf") pod "705ffb5c-1bcb-4d3b-b066-0a9086c10e42" (UID: "705ffb5c-1bcb-4d3b-b066-0a9086c10e42"). InnerVolumeSpecName "kube-api-access-fsbcf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:51:53 crc kubenswrapper[5017]: I0129 08:51:53.968505 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/705ffb5c-1bcb-4d3b-b066-0a9086c10e42-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "705ffb5c-1bcb-4d3b-b066-0a9086c10e42" (UID: "705ffb5c-1bcb-4d3b-b066-0a9086c10e42"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:51:54 crc kubenswrapper[5017]: I0129 08:51:54.020041 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fsbcf\" (UniqueName: \"kubernetes.io/projected/705ffb5c-1bcb-4d3b-b066-0a9086c10e42-kube-api-access-fsbcf\") on node \"crc\" DevicePath \"\"" Jan 29 08:51:54 crc kubenswrapper[5017]: I0129 08:51:54.020081 5017 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/705ffb5c-1bcb-4d3b-b066-0a9086c10e42-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 08:51:54 crc kubenswrapper[5017]: I0129 08:51:54.286787 5017 generic.go:334] "Generic (PLEG): container finished" podID="705ffb5c-1bcb-4d3b-b066-0a9086c10e42" containerID="9ba976f443ae224f1d72bf9e388041eb30cd9dd731b5c835d624885e0e016e3f" exitCode=0 Jan 29 08:51:54 crc kubenswrapper[5017]: I0129 08:51:54.286848 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4krnt" event={"ID":"705ffb5c-1bcb-4d3b-b066-0a9086c10e42","Type":"ContainerDied","Data":"9ba976f443ae224f1d72bf9e388041eb30cd9dd731b5c835d624885e0e016e3f"} Jan 29 08:51:54 crc kubenswrapper[5017]: I0129 08:51:54.286889 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4krnt" event={"ID":"705ffb5c-1bcb-4d3b-b066-0a9086c10e42","Type":"ContainerDied","Data":"bb90be0150c3d95406aeab18368273d411021f4fc65820a196d6852a569f4d12"} Jan 29 08:51:54 crc kubenswrapper[5017]: I0129 08:51:54.286909 5017 scope.go:117] "RemoveContainer" containerID="9ba976f443ae224f1d72bf9e388041eb30cd9dd731b5c835d624885e0e016e3f" Jan 29 08:51:54 crc kubenswrapper[5017]: I0129 08:51:54.286910 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4krnt" Jan 29 08:51:54 crc kubenswrapper[5017]: I0129 08:51:54.325751 5017 scope.go:117] "RemoveContainer" containerID="babbe45bfa41469ff92be32abf02963bed97b7742a89175a2c985f25b89af8e7" Jan 29 08:51:54 crc kubenswrapper[5017]: I0129 08:51:54.337495 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4krnt"] Jan 29 08:51:54 crc kubenswrapper[5017]: I0129 08:51:54.342480 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4krnt"] Jan 29 08:51:54 crc kubenswrapper[5017]: I0129 08:51:54.353790 5017 scope.go:117] "RemoveContainer" containerID="7d90c603a396a15cc0dc4b61e20f61be15abee50a1244d58b7034e802b4167b7" Jan 29 08:51:54 crc kubenswrapper[5017]: I0129 08:51:54.403401 5017 scope.go:117] "RemoveContainer" containerID="9ba976f443ae224f1d72bf9e388041eb30cd9dd731b5c835d624885e0e016e3f" Jan 29 08:51:54 crc kubenswrapper[5017]: E0129 08:51:54.404130 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ba976f443ae224f1d72bf9e388041eb30cd9dd731b5c835d624885e0e016e3f\": container with ID starting with 9ba976f443ae224f1d72bf9e388041eb30cd9dd731b5c835d624885e0e016e3f not found: ID does not exist" containerID="9ba976f443ae224f1d72bf9e388041eb30cd9dd731b5c835d624885e0e016e3f" Jan 29 08:51:54 crc kubenswrapper[5017]: I0129 08:51:54.404200 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ba976f443ae224f1d72bf9e388041eb30cd9dd731b5c835d624885e0e016e3f"} err="failed to get container status \"9ba976f443ae224f1d72bf9e388041eb30cd9dd731b5c835d624885e0e016e3f\": rpc error: code = NotFound desc = could not find container \"9ba976f443ae224f1d72bf9e388041eb30cd9dd731b5c835d624885e0e016e3f\": container with ID starting with 9ba976f443ae224f1d72bf9e388041eb30cd9dd731b5c835d624885e0e016e3f not found: ID does not exist" Jan 29 08:51:54 crc kubenswrapper[5017]: I0129 08:51:54.404253 5017 scope.go:117] "RemoveContainer" containerID="babbe45bfa41469ff92be32abf02963bed97b7742a89175a2c985f25b89af8e7" Jan 29 08:51:54 crc kubenswrapper[5017]: E0129 08:51:54.404804 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"babbe45bfa41469ff92be32abf02963bed97b7742a89175a2c985f25b89af8e7\": container with ID starting with babbe45bfa41469ff92be32abf02963bed97b7742a89175a2c985f25b89af8e7 not found: ID does not exist" containerID="babbe45bfa41469ff92be32abf02963bed97b7742a89175a2c985f25b89af8e7" Jan 29 08:51:54 crc kubenswrapper[5017]: I0129 08:51:54.404864 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"babbe45bfa41469ff92be32abf02963bed97b7742a89175a2c985f25b89af8e7"} err="failed to get container status \"babbe45bfa41469ff92be32abf02963bed97b7742a89175a2c985f25b89af8e7\": rpc error: code = NotFound desc = could not find container \"babbe45bfa41469ff92be32abf02963bed97b7742a89175a2c985f25b89af8e7\": container with ID starting with babbe45bfa41469ff92be32abf02963bed97b7742a89175a2c985f25b89af8e7 not found: ID does not exist" Jan 29 08:51:54 crc kubenswrapper[5017]: I0129 08:51:54.404885 5017 scope.go:117] "RemoveContainer" containerID="7d90c603a396a15cc0dc4b61e20f61be15abee50a1244d58b7034e802b4167b7" Jan 29 08:51:54 crc kubenswrapper[5017]: E0129 08:51:54.405248 5017 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"7d90c603a396a15cc0dc4b61e20f61be15abee50a1244d58b7034e802b4167b7\": container with ID starting with 7d90c603a396a15cc0dc4b61e20f61be15abee50a1244d58b7034e802b4167b7 not found: ID does not exist" containerID="7d90c603a396a15cc0dc4b61e20f61be15abee50a1244d58b7034e802b4167b7" Jan 29 08:51:54 crc kubenswrapper[5017]: I0129 08:51:54.405301 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d90c603a396a15cc0dc4b61e20f61be15abee50a1244d58b7034e802b4167b7"} err="failed to get container status \"7d90c603a396a15cc0dc4b61e20f61be15abee50a1244d58b7034e802b4167b7\": rpc error: code = NotFound desc = could not find container \"7d90c603a396a15cc0dc4b61e20f61be15abee50a1244d58b7034e802b4167b7\": container with ID starting with 7d90c603a396a15cc0dc4b61e20f61be15abee50a1244d58b7034e802b4167b7 not found: ID does not exist" Jan 29 08:51:56 crc kubenswrapper[5017]: I0129 08:51:56.329400 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="705ffb5c-1bcb-4d3b-b066-0a9086c10e42" path="/var/lib/kubelet/pods/705ffb5c-1bcb-4d3b-b066-0a9086c10e42/volumes" Jan 29 08:52:02 crc kubenswrapper[5017]: I0129 08:52:02.323816 5017 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-84sz9" podUID="8a814ae7-7b6c-484b-b220-d5f2f3596d53" containerName="registry-server" probeResult="failure" output=< Jan 29 08:52:02 crc kubenswrapper[5017]: timeout: failed to connect service ":50051" within 1s Jan 29 08:52:02 crc kubenswrapper[5017]: > Jan 29 08:52:11 crc kubenswrapper[5017]: I0129 08:52:11.331500 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-84sz9" Jan 29 08:52:11 crc kubenswrapper[5017]: I0129 08:52:11.382340 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-84sz9" Jan 29 08:52:11 crc kubenswrapper[5017]: I0129 08:52:11.571803 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-84sz9"] Jan 29 08:52:12 crc kubenswrapper[5017]: I0129 08:52:12.493533 5017 generic.go:334] "Generic (PLEG): container finished" podID="75a2a730-ea79-4e39-a0ca-eb1c8fac88df" containerID="fa1dd87e5e03de53740bda1c90e590abb07d2b23c3547afb9248762615b5e0a8" exitCode=0 Jan 29 08:52:12 crc kubenswrapper[5017]: I0129 08:52:12.493633 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-sriov-openstack-openstack-cell1-bmqnc" event={"ID":"75a2a730-ea79-4e39-a0ca-eb1c8fac88df","Type":"ContainerDied","Data":"fa1dd87e5e03de53740bda1c90e590abb07d2b23c3547afb9248762615b5e0a8"} Jan 29 08:52:12 crc kubenswrapper[5017]: I0129 08:52:12.494543 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-84sz9" podUID="8a814ae7-7b6c-484b-b220-d5f2f3596d53" containerName="registry-server" containerID="cri-o://7209166eb0e20d57f28f54dd572817b1f14f58f633224384ad57ec40f1d320f6" gracePeriod=2 Jan 29 08:52:12 crc kubenswrapper[5017]: E0129 08:52:12.714225 5017 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8a814ae7_7b6c_484b_b220_d5f2f3596d53.slice/crio-conmon-7209166eb0e20d57f28f54dd572817b1f14f58f633224384ad57ec40f1d320f6.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8a814ae7_7b6c_484b_b220_d5f2f3596d53.slice/crio-7209166eb0e20d57f28f54dd572817b1f14f58f633224384ad57ec40f1d320f6.scope\": RecentStats: unable to find data in memory cache]" Jan 29 08:52:13 crc kubenswrapper[5017]: I0129 08:52:13.188393 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-84sz9" Jan 29 08:52:13 crc kubenswrapper[5017]: I0129 08:52:13.293869 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a814ae7-7b6c-484b-b220-d5f2f3596d53-catalog-content\") pod \"8a814ae7-7b6c-484b-b220-d5f2f3596d53\" (UID: \"8a814ae7-7b6c-484b-b220-d5f2f3596d53\") " Jan 29 08:52:13 crc kubenswrapper[5017]: I0129 08:52:13.293970 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-956f2\" (UniqueName: \"kubernetes.io/projected/8a814ae7-7b6c-484b-b220-d5f2f3596d53-kube-api-access-956f2\") pod \"8a814ae7-7b6c-484b-b220-d5f2f3596d53\" (UID: \"8a814ae7-7b6c-484b-b220-d5f2f3596d53\") " Jan 29 08:52:13 crc kubenswrapper[5017]: I0129 08:52:13.294149 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a814ae7-7b6c-484b-b220-d5f2f3596d53-utilities\") pod \"8a814ae7-7b6c-484b-b220-d5f2f3596d53\" (UID: \"8a814ae7-7b6c-484b-b220-d5f2f3596d53\") " Jan 29 08:52:13 crc kubenswrapper[5017]: I0129 08:52:13.295526 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a814ae7-7b6c-484b-b220-d5f2f3596d53-utilities" (OuterVolumeSpecName: "utilities") pod "8a814ae7-7b6c-484b-b220-d5f2f3596d53" (UID: "8a814ae7-7b6c-484b-b220-d5f2f3596d53"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:52:13 crc kubenswrapper[5017]: I0129 08:52:13.302820 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a814ae7-7b6c-484b-b220-d5f2f3596d53-kube-api-access-956f2" (OuterVolumeSpecName: "kube-api-access-956f2") pod "8a814ae7-7b6c-484b-b220-d5f2f3596d53" (UID: "8a814ae7-7b6c-484b-b220-d5f2f3596d53"). InnerVolumeSpecName "kube-api-access-956f2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:52:13 crc kubenswrapper[5017]: I0129 08:52:13.397410 5017 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a814ae7-7b6c-484b-b220-d5f2f3596d53-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 08:52:13 crc kubenswrapper[5017]: I0129 08:52:13.397452 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-956f2\" (UniqueName: \"kubernetes.io/projected/8a814ae7-7b6c-484b-b220-d5f2f3596d53-kube-api-access-956f2\") on node \"crc\" DevicePath \"\"" Jan 29 08:52:13 crc kubenswrapper[5017]: I0129 08:52:13.444354 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a814ae7-7b6c-484b-b220-d5f2f3596d53-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8a814ae7-7b6c-484b-b220-d5f2f3596d53" (UID: "8a814ae7-7b6c-484b-b220-d5f2f3596d53"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:52:13 crc kubenswrapper[5017]: I0129 08:52:13.500017 5017 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a814ae7-7b6c-484b-b220-d5f2f3596d53-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 08:52:13 crc kubenswrapper[5017]: I0129 08:52:13.510592 5017 generic.go:334] "Generic (PLEG): container finished" podID="8a814ae7-7b6c-484b-b220-d5f2f3596d53" containerID="7209166eb0e20d57f28f54dd572817b1f14f58f633224384ad57ec40f1d320f6" exitCode=0 Jan 29 08:52:13 crc kubenswrapper[5017]: I0129 08:52:13.510709 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-84sz9" Jan 29 08:52:13 crc kubenswrapper[5017]: I0129 08:52:13.510742 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-84sz9" event={"ID":"8a814ae7-7b6c-484b-b220-d5f2f3596d53","Type":"ContainerDied","Data":"7209166eb0e20d57f28f54dd572817b1f14f58f633224384ad57ec40f1d320f6"} Jan 29 08:52:13 crc kubenswrapper[5017]: I0129 08:52:13.510809 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-84sz9" event={"ID":"8a814ae7-7b6c-484b-b220-d5f2f3596d53","Type":"ContainerDied","Data":"75cdecc5345793a6765e58c3baea928e1818f5e2f88925a83c5a4a63c596995f"} Jan 29 08:52:13 crc kubenswrapper[5017]: I0129 08:52:13.510831 5017 scope.go:117] "RemoveContainer" containerID="7209166eb0e20d57f28f54dd572817b1f14f58f633224384ad57ec40f1d320f6" Jan 29 08:52:13 crc kubenswrapper[5017]: I0129 08:52:13.565565 5017 scope.go:117] "RemoveContainer" containerID="a35121e278efc624e1f9eab76d97ac9dd4856e7dfc9c83eca5f82a56ac8fa3a1" Jan 29 08:52:13 crc kubenswrapper[5017]: I0129 08:52:13.574773 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-84sz9"] Jan 29 08:52:13 crc kubenswrapper[5017]: I0129 08:52:13.586558 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-84sz9"] Jan 29 08:52:13 crc kubenswrapper[5017]: I0129 08:52:13.614386 5017 scope.go:117] "RemoveContainer" containerID="82bb13417203f470f35239c637f6e38cfde0a69feec3c0d107a989e0b293082d" Jan 29 08:52:13 crc kubenswrapper[5017]: I0129 08:52:13.682863 5017 scope.go:117] "RemoveContainer" containerID="7209166eb0e20d57f28f54dd572817b1f14f58f633224384ad57ec40f1d320f6" Jan 29 08:52:13 crc kubenswrapper[5017]: E0129 08:52:13.683660 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7209166eb0e20d57f28f54dd572817b1f14f58f633224384ad57ec40f1d320f6\": container with ID starting with 7209166eb0e20d57f28f54dd572817b1f14f58f633224384ad57ec40f1d320f6 not found: ID does not exist" containerID="7209166eb0e20d57f28f54dd572817b1f14f58f633224384ad57ec40f1d320f6" Jan 29 08:52:13 crc kubenswrapper[5017]: I0129 08:52:13.683720 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7209166eb0e20d57f28f54dd572817b1f14f58f633224384ad57ec40f1d320f6"} err="failed to get container status \"7209166eb0e20d57f28f54dd572817b1f14f58f633224384ad57ec40f1d320f6\": rpc error: code = NotFound desc = could not find container \"7209166eb0e20d57f28f54dd572817b1f14f58f633224384ad57ec40f1d320f6\": container with ID starting with 7209166eb0e20d57f28f54dd572817b1f14f58f633224384ad57ec40f1d320f6 not found: ID does not exist" Jan 29 08:52:13 crc 
kubenswrapper[5017]: I0129 08:52:13.683752 5017 scope.go:117] "RemoveContainer" containerID="a35121e278efc624e1f9eab76d97ac9dd4856e7dfc9c83eca5f82a56ac8fa3a1" Jan 29 08:52:13 crc kubenswrapper[5017]: E0129 08:52:13.684203 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a35121e278efc624e1f9eab76d97ac9dd4856e7dfc9c83eca5f82a56ac8fa3a1\": container with ID starting with a35121e278efc624e1f9eab76d97ac9dd4856e7dfc9c83eca5f82a56ac8fa3a1 not found: ID does not exist" containerID="a35121e278efc624e1f9eab76d97ac9dd4856e7dfc9c83eca5f82a56ac8fa3a1" Jan 29 08:52:13 crc kubenswrapper[5017]: I0129 08:52:13.686147 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a35121e278efc624e1f9eab76d97ac9dd4856e7dfc9c83eca5f82a56ac8fa3a1"} err="failed to get container status \"a35121e278efc624e1f9eab76d97ac9dd4856e7dfc9c83eca5f82a56ac8fa3a1\": rpc error: code = NotFound desc = could not find container \"a35121e278efc624e1f9eab76d97ac9dd4856e7dfc9c83eca5f82a56ac8fa3a1\": container with ID starting with a35121e278efc624e1f9eab76d97ac9dd4856e7dfc9c83eca5f82a56ac8fa3a1 not found: ID does not exist" Jan 29 08:52:13 crc kubenswrapper[5017]: I0129 08:52:13.686259 5017 scope.go:117] "RemoveContainer" containerID="82bb13417203f470f35239c637f6e38cfde0a69feec3c0d107a989e0b293082d" Jan 29 08:52:13 crc kubenswrapper[5017]: E0129 08:52:13.687049 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82bb13417203f470f35239c637f6e38cfde0a69feec3c0d107a989e0b293082d\": container with ID starting with 82bb13417203f470f35239c637f6e38cfde0a69feec3c0d107a989e0b293082d not found: ID does not exist" containerID="82bb13417203f470f35239c637f6e38cfde0a69feec3c0d107a989e0b293082d" Jan 29 08:52:13 crc kubenswrapper[5017]: I0129 08:52:13.687089 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82bb13417203f470f35239c637f6e38cfde0a69feec3c0d107a989e0b293082d"} err="failed to get container status \"82bb13417203f470f35239c637f6e38cfde0a69feec3c0d107a989e0b293082d\": rpc error: code = NotFound desc = could not find container \"82bb13417203f470f35239c637f6e38cfde0a69feec3c0d107a989e0b293082d\": container with ID starting with 82bb13417203f470f35239c637f6e38cfde0a69feec3c0d107a989e0b293082d not found: ID does not exist" Jan 29 08:52:14 crc kubenswrapper[5017]: I0129 08:52:14.082825 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-sriov-openstack-openstack-cell1-bmqnc" Jan 29 08:52:14 crc kubenswrapper[5017]: I0129 08:52:14.113423 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/75a2a730-ea79-4e39-a0ca-eb1c8fac88df-inventory\") pod \"75a2a730-ea79-4e39-a0ca-eb1c8fac88df\" (UID: \"75a2a730-ea79-4e39-a0ca-eb1c8fac88df\") " Jan 29 08:52:14 crc kubenswrapper[5017]: I0129 08:52:14.113569 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-sriov-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/75a2a730-ea79-4e39-a0ca-eb1c8fac88df-neutron-sriov-agent-neutron-config-0\") pod \"75a2a730-ea79-4e39-a0ca-eb1c8fac88df\" (UID: \"75a2a730-ea79-4e39-a0ca-eb1c8fac88df\") " Jan 29 08:52:14 crc kubenswrapper[5017]: I0129 08:52:14.114659 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lkvlq\" (UniqueName: \"kubernetes.io/projected/75a2a730-ea79-4e39-a0ca-eb1c8fac88df-kube-api-access-lkvlq\") pod \"75a2a730-ea79-4e39-a0ca-eb1c8fac88df\" (UID: \"75a2a730-ea79-4e39-a0ca-eb1c8fac88df\") " Jan 29 08:52:14 crc kubenswrapper[5017]: I0129 08:52:14.114816 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75a2a730-ea79-4e39-a0ca-eb1c8fac88df-neutron-sriov-combined-ca-bundle\") pod \"75a2a730-ea79-4e39-a0ca-eb1c8fac88df\" (UID: \"75a2a730-ea79-4e39-a0ca-eb1c8fac88df\") " Jan 29 08:52:14 crc kubenswrapper[5017]: I0129 08:52:14.114876 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/75a2a730-ea79-4e39-a0ca-eb1c8fac88df-ssh-key-openstack-cell1\") pod \"75a2a730-ea79-4e39-a0ca-eb1c8fac88df\" (UID: \"75a2a730-ea79-4e39-a0ca-eb1c8fac88df\") " Jan 29 08:52:14 crc kubenswrapper[5017]: I0129 08:52:14.114928 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/75a2a730-ea79-4e39-a0ca-eb1c8fac88df-ceph\") pod \"75a2a730-ea79-4e39-a0ca-eb1c8fac88df\" (UID: \"75a2a730-ea79-4e39-a0ca-eb1c8fac88df\") " Jan 29 08:52:14 crc kubenswrapper[5017]: I0129 08:52:14.121013 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75a2a730-ea79-4e39-a0ca-eb1c8fac88df-ceph" (OuterVolumeSpecName: "ceph") pod "75a2a730-ea79-4e39-a0ca-eb1c8fac88df" (UID: "75a2a730-ea79-4e39-a0ca-eb1c8fac88df"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:52:14 crc kubenswrapper[5017]: I0129 08:52:14.121719 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75a2a730-ea79-4e39-a0ca-eb1c8fac88df-kube-api-access-lkvlq" (OuterVolumeSpecName: "kube-api-access-lkvlq") pod "75a2a730-ea79-4e39-a0ca-eb1c8fac88df" (UID: "75a2a730-ea79-4e39-a0ca-eb1c8fac88df"). InnerVolumeSpecName "kube-api-access-lkvlq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:52:14 crc kubenswrapper[5017]: I0129 08:52:14.127409 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75a2a730-ea79-4e39-a0ca-eb1c8fac88df-neutron-sriov-combined-ca-bundle" (OuterVolumeSpecName: "neutron-sriov-combined-ca-bundle") pod "75a2a730-ea79-4e39-a0ca-eb1c8fac88df" (UID: "75a2a730-ea79-4e39-a0ca-eb1c8fac88df"). 
InnerVolumeSpecName "neutron-sriov-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:52:14 crc kubenswrapper[5017]: I0129 08:52:14.145566 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75a2a730-ea79-4e39-a0ca-eb1c8fac88df-inventory" (OuterVolumeSpecName: "inventory") pod "75a2a730-ea79-4e39-a0ca-eb1c8fac88df" (UID: "75a2a730-ea79-4e39-a0ca-eb1c8fac88df"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:52:14 crc kubenswrapper[5017]: I0129 08:52:14.148145 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75a2a730-ea79-4e39-a0ca-eb1c8fac88df-neutron-sriov-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-sriov-agent-neutron-config-0") pod "75a2a730-ea79-4e39-a0ca-eb1c8fac88df" (UID: "75a2a730-ea79-4e39-a0ca-eb1c8fac88df"). InnerVolumeSpecName "neutron-sriov-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:52:14 crc kubenswrapper[5017]: I0129 08:52:14.164549 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75a2a730-ea79-4e39-a0ca-eb1c8fac88df-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "75a2a730-ea79-4e39-a0ca-eb1c8fac88df" (UID: "75a2a730-ea79-4e39-a0ca-eb1c8fac88df"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:52:14 crc kubenswrapper[5017]: I0129 08:52:14.220677 5017 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/75a2a730-ea79-4e39-a0ca-eb1c8fac88df-ceph\") on node \"crc\" DevicePath \"\"" Jan 29 08:52:14 crc kubenswrapper[5017]: I0129 08:52:14.220981 5017 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/75a2a730-ea79-4e39-a0ca-eb1c8fac88df-inventory\") on node \"crc\" DevicePath \"\"" Jan 29 08:52:14 crc kubenswrapper[5017]: I0129 08:52:14.221044 5017 reconciler_common.go:293] "Volume detached for volume \"neutron-sriov-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/75a2a730-ea79-4e39-a0ca-eb1c8fac88df-neutron-sriov-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 29 08:52:14 crc kubenswrapper[5017]: I0129 08:52:14.221100 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lkvlq\" (UniqueName: \"kubernetes.io/projected/75a2a730-ea79-4e39-a0ca-eb1c8fac88df-kube-api-access-lkvlq\") on node \"crc\" DevicePath \"\"" Jan 29 08:52:14 crc kubenswrapper[5017]: I0129 08:52:14.221214 5017 reconciler_common.go:293] "Volume detached for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75a2a730-ea79-4e39-a0ca-eb1c8fac88df-neutron-sriov-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:52:14 crc kubenswrapper[5017]: I0129 08:52:14.221287 5017 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/75a2a730-ea79-4e39-a0ca-eb1c8fac88df-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 29 08:52:14 crc kubenswrapper[5017]: I0129 08:52:14.342506 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a814ae7-7b6c-484b-b220-d5f2f3596d53" path="/var/lib/kubelet/pods/8a814ae7-7b6c-484b-b220-d5f2f3596d53/volumes" Jan 29 08:52:14 crc kubenswrapper[5017]: I0129 08:52:14.524946 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-sriov-openstack-openstack-cell1-bmqnc" event={"ID":"75a2a730-ea79-4e39-a0ca-eb1c8fac88df","Type":"ContainerDied","Data":"d6d88ea60895b342a0db9cc1c832a57fc86e18fb3375d61f807c2c67b9183246"} Jan 29 08:52:14 crc kubenswrapper[5017]: I0129 08:52:14.525018 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d6d88ea60895b342a0db9cc1c832a57fc86e18fb3375d61f807c2c67b9183246" Jan 29 08:52:14 crc kubenswrapper[5017]: I0129 08:52:14.525097 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-sriov-openstack-openstack-cell1-bmqnc" Jan 29 08:52:14 crc kubenswrapper[5017]: I0129 08:52:14.621152 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-dhcp-openstack-openstack-cell1-5bld2"] Jan 29 08:52:14 crc kubenswrapper[5017]: E0129 08:52:14.622240 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd17bbb9-84c5-445c-978d-a4ea3b720c9c" containerName="extract-utilities" Jan 29 08:52:14 crc kubenswrapper[5017]: I0129 08:52:14.622264 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd17bbb9-84c5-445c-978d-a4ea3b720c9c" containerName="extract-utilities" Jan 29 08:52:14 crc kubenswrapper[5017]: E0129 08:52:14.622273 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd17bbb9-84c5-445c-978d-a4ea3b720c9c" containerName="registry-server" Jan 29 08:52:14 crc kubenswrapper[5017]: I0129 08:52:14.622279 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd17bbb9-84c5-445c-978d-a4ea3b720c9c" containerName="registry-server" Jan 29 08:52:14 crc kubenswrapper[5017]: E0129 08:52:14.622296 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="705ffb5c-1bcb-4d3b-b066-0a9086c10e42" containerName="extract-utilities" Jan 29 08:52:14 crc kubenswrapper[5017]: I0129 08:52:14.622302 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="705ffb5c-1bcb-4d3b-b066-0a9086c10e42" containerName="extract-utilities" Jan 29 08:52:14 crc kubenswrapper[5017]: E0129 08:52:14.622320 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="705ffb5c-1bcb-4d3b-b066-0a9086c10e42" containerName="registry-server" Jan 29 08:52:14 crc kubenswrapper[5017]: I0129 08:52:14.622327 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="705ffb5c-1bcb-4d3b-b066-0a9086c10e42" containerName="registry-server" Jan 29 08:52:14 crc kubenswrapper[5017]: E0129 08:52:14.622350 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="705ffb5c-1bcb-4d3b-b066-0a9086c10e42" containerName="extract-content" Jan 29 08:52:14 crc kubenswrapper[5017]: I0129 08:52:14.622358 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="705ffb5c-1bcb-4d3b-b066-0a9086c10e42" containerName="extract-content" Jan 29 08:52:14 crc kubenswrapper[5017]: E0129 08:52:14.622367 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a814ae7-7b6c-484b-b220-d5f2f3596d53" containerName="extract-utilities" Jan 29 08:52:14 crc kubenswrapper[5017]: I0129 08:52:14.622375 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a814ae7-7b6c-484b-b220-d5f2f3596d53" containerName="extract-utilities" Jan 29 08:52:14 crc kubenswrapper[5017]: E0129 08:52:14.622390 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd17bbb9-84c5-445c-978d-a4ea3b720c9c" containerName="extract-content" Jan 29 08:52:14 crc kubenswrapper[5017]: I0129 08:52:14.622397 5017 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="cd17bbb9-84c5-445c-978d-a4ea3b720c9c" containerName="extract-content" Jan 29 08:52:14 crc kubenswrapper[5017]: E0129 08:52:14.622407 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a814ae7-7b6c-484b-b220-d5f2f3596d53" containerName="registry-server" Jan 29 08:52:14 crc kubenswrapper[5017]: I0129 08:52:14.622415 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a814ae7-7b6c-484b-b220-d5f2f3596d53" containerName="registry-server" Jan 29 08:52:14 crc kubenswrapper[5017]: E0129 08:52:14.622430 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75a2a730-ea79-4e39-a0ca-eb1c8fac88df" containerName="neutron-sriov-openstack-openstack-cell1" Jan 29 08:52:14 crc kubenswrapper[5017]: I0129 08:52:14.622439 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="75a2a730-ea79-4e39-a0ca-eb1c8fac88df" containerName="neutron-sriov-openstack-openstack-cell1" Jan 29 08:52:14 crc kubenswrapper[5017]: E0129 08:52:14.622458 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a814ae7-7b6c-484b-b220-d5f2f3596d53" containerName="extract-content" Jan 29 08:52:14 crc kubenswrapper[5017]: I0129 08:52:14.622464 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a814ae7-7b6c-484b-b220-d5f2f3596d53" containerName="extract-content" Jan 29 08:52:14 crc kubenswrapper[5017]: I0129 08:52:14.622745 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="705ffb5c-1bcb-4d3b-b066-0a9086c10e42" containerName="registry-server" Jan 29 08:52:14 crc kubenswrapper[5017]: I0129 08:52:14.622767 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a814ae7-7b6c-484b-b220-d5f2f3596d53" containerName="registry-server" Jan 29 08:52:14 crc kubenswrapper[5017]: I0129 08:52:14.622782 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="75a2a730-ea79-4e39-a0ca-eb1c8fac88df" containerName="neutron-sriov-openstack-openstack-cell1" Jan 29 08:52:14 crc kubenswrapper[5017]: I0129 08:52:14.622792 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd17bbb9-84c5-445c-978d-a4ea3b720c9c" containerName="registry-server" Jan 29 08:52:14 crc kubenswrapper[5017]: I0129 08:52:14.623869 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-dhcp-openstack-openstack-cell1-5bld2" Jan 29 08:52:14 crc kubenswrapper[5017]: I0129 08:52:14.627991 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-dhcp-agent-neutron-config" Jan 29 08:52:14 crc kubenswrapper[5017]: I0129 08:52:14.628155 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-8fskm" Jan 29 08:52:14 crc kubenswrapper[5017]: I0129 08:52:14.628017 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Jan 29 08:52:14 crc kubenswrapper[5017]: I0129 08:52:14.628328 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Jan 29 08:52:14 crc kubenswrapper[5017]: I0129 08:52:14.628396 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 08:52:14 crc kubenswrapper[5017]: I0129 08:52:14.640230 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-dhcp-openstack-openstack-cell1-5bld2"] Jan 29 08:52:14 crc kubenswrapper[5017]: I0129 08:52:14.732491 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/dc8124b4-0101-4623-ab9c-9f73a0ebc7d2-ceph\") pod \"neutron-dhcp-openstack-openstack-cell1-5bld2\" (UID: \"dc8124b4-0101-4623-ab9c-9f73a0ebc7d2\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-5bld2" Jan 29 08:52:14 crc kubenswrapper[5017]: I0129 08:52:14.732573 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-dhcp-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/dc8124b4-0101-4623-ab9c-9f73a0ebc7d2-neutron-dhcp-agent-neutron-config-0\") pod \"neutron-dhcp-openstack-openstack-cell1-5bld2\" (UID: \"dc8124b4-0101-4623-ab9c-9f73a0ebc7d2\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-5bld2" Jan 29 08:52:14 crc kubenswrapper[5017]: I0129 08:52:14.732637 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dc8124b4-0101-4623-ab9c-9f73a0ebc7d2-inventory\") pod \"neutron-dhcp-openstack-openstack-cell1-5bld2\" (UID: \"dc8124b4-0101-4623-ab9c-9f73a0ebc7d2\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-5bld2" Jan 29 08:52:14 crc kubenswrapper[5017]: I0129 08:52:14.732668 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjbbn\" (UniqueName: \"kubernetes.io/projected/dc8124b4-0101-4623-ab9c-9f73a0ebc7d2-kube-api-access-pjbbn\") pod \"neutron-dhcp-openstack-openstack-cell1-5bld2\" (UID: \"dc8124b4-0101-4623-ab9c-9f73a0ebc7d2\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-5bld2" Jan 29 08:52:14 crc kubenswrapper[5017]: I0129 08:52:14.732746 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc8124b4-0101-4623-ab9c-9f73a0ebc7d2-neutron-dhcp-combined-ca-bundle\") pod \"neutron-dhcp-openstack-openstack-cell1-5bld2\" (UID: \"dc8124b4-0101-4623-ab9c-9f73a0ebc7d2\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-5bld2" Jan 29 08:52:14 crc kubenswrapper[5017]: I0129 08:52:14.732878 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/dc8124b4-0101-4623-ab9c-9f73a0ebc7d2-ssh-key-openstack-cell1\") pod \"neutron-dhcp-openstack-openstack-cell1-5bld2\" (UID: \"dc8124b4-0101-4623-ab9c-9f73a0ebc7d2\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-5bld2" Jan 29 08:52:14 crc kubenswrapper[5017]: I0129 08:52:14.835506 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/dc8124b4-0101-4623-ab9c-9f73a0ebc7d2-ssh-key-openstack-cell1\") pod \"neutron-dhcp-openstack-openstack-cell1-5bld2\" (UID: \"dc8124b4-0101-4623-ab9c-9f73a0ebc7d2\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-5bld2" Jan 29 08:52:14 crc kubenswrapper[5017]: I0129 08:52:14.835607 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/dc8124b4-0101-4623-ab9c-9f73a0ebc7d2-ceph\") pod \"neutron-dhcp-openstack-openstack-cell1-5bld2\" (UID: \"dc8124b4-0101-4623-ab9c-9f73a0ebc7d2\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-5bld2" Jan 29 08:52:14 crc kubenswrapper[5017]: I0129 08:52:14.835635 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-dhcp-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/dc8124b4-0101-4623-ab9c-9f73a0ebc7d2-neutron-dhcp-agent-neutron-config-0\") pod \"neutron-dhcp-openstack-openstack-cell1-5bld2\" (UID: \"dc8124b4-0101-4623-ab9c-9f73a0ebc7d2\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-5bld2" Jan 29 08:52:14 crc kubenswrapper[5017]: I0129 08:52:14.835673 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dc8124b4-0101-4623-ab9c-9f73a0ebc7d2-inventory\") pod \"neutron-dhcp-openstack-openstack-cell1-5bld2\" (UID: \"dc8124b4-0101-4623-ab9c-9f73a0ebc7d2\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-5bld2" Jan 29 08:52:14 crc kubenswrapper[5017]: I0129 08:52:14.835692 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjbbn\" (UniqueName: \"kubernetes.io/projected/dc8124b4-0101-4623-ab9c-9f73a0ebc7d2-kube-api-access-pjbbn\") pod \"neutron-dhcp-openstack-openstack-cell1-5bld2\" (UID: \"dc8124b4-0101-4623-ab9c-9f73a0ebc7d2\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-5bld2" Jan 29 08:52:14 crc kubenswrapper[5017]: I0129 08:52:14.835778 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc8124b4-0101-4623-ab9c-9f73a0ebc7d2-neutron-dhcp-combined-ca-bundle\") pod \"neutron-dhcp-openstack-openstack-cell1-5bld2\" (UID: \"dc8124b4-0101-4623-ab9c-9f73a0ebc7d2\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-5bld2" Jan 29 08:52:14 crc kubenswrapper[5017]: I0129 08:52:14.841660 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dc8124b4-0101-4623-ab9c-9f73a0ebc7d2-inventory\") pod \"neutron-dhcp-openstack-openstack-cell1-5bld2\" (UID: \"dc8124b4-0101-4623-ab9c-9f73a0ebc7d2\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-5bld2" Jan 29 08:52:14 crc kubenswrapper[5017]: I0129 08:52:14.842269 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/dc8124b4-0101-4623-ab9c-9f73a0ebc7d2-ssh-key-openstack-cell1\") pod 
\"neutron-dhcp-openstack-openstack-cell1-5bld2\" (UID: \"dc8124b4-0101-4623-ab9c-9f73a0ebc7d2\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-5bld2" Jan 29 08:52:14 crc kubenswrapper[5017]: I0129 08:52:14.842788 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/dc8124b4-0101-4623-ab9c-9f73a0ebc7d2-ceph\") pod \"neutron-dhcp-openstack-openstack-cell1-5bld2\" (UID: \"dc8124b4-0101-4623-ab9c-9f73a0ebc7d2\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-5bld2" Jan 29 08:52:14 crc kubenswrapper[5017]: I0129 08:52:14.849705 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-dhcp-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/dc8124b4-0101-4623-ab9c-9f73a0ebc7d2-neutron-dhcp-agent-neutron-config-0\") pod \"neutron-dhcp-openstack-openstack-cell1-5bld2\" (UID: \"dc8124b4-0101-4623-ab9c-9f73a0ebc7d2\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-5bld2" Jan 29 08:52:14 crc kubenswrapper[5017]: I0129 08:52:14.850120 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc8124b4-0101-4623-ab9c-9f73a0ebc7d2-neutron-dhcp-combined-ca-bundle\") pod \"neutron-dhcp-openstack-openstack-cell1-5bld2\" (UID: \"dc8124b4-0101-4623-ab9c-9f73a0ebc7d2\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-5bld2" Jan 29 08:52:14 crc kubenswrapper[5017]: I0129 08:52:14.856232 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjbbn\" (UniqueName: \"kubernetes.io/projected/dc8124b4-0101-4623-ab9c-9f73a0ebc7d2-kube-api-access-pjbbn\") pod \"neutron-dhcp-openstack-openstack-cell1-5bld2\" (UID: \"dc8124b4-0101-4623-ab9c-9f73a0ebc7d2\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-5bld2" Jan 29 08:52:14 crc kubenswrapper[5017]: I0129 08:52:14.953902 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-dhcp-openstack-openstack-cell1-5bld2" Jan 29 08:52:15 crc kubenswrapper[5017]: I0129 08:52:15.525855 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-dhcp-openstack-openstack-cell1-5bld2"] Jan 29 08:52:15 crc kubenswrapper[5017]: I0129 08:52:15.546386 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dhcp-openstack-openstack-cell1-5bld2" event={"ID":"dc8124b4-0101-4623-ab9c-9f73a0ebc7d2","Type":"ContainerStarted","Data":"f3fd77b32e4512831be43e1597979a513085fde2c0e13ace9b6a7c59053b46ba"} Jan 29 08:52:16 crc kubenswrapper[5017]: I0129 08:52:16.559676 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dhcp-openstack-openstack-cell1-5bld2" event={"ID":"dc8124b4-0101-4623-ab9c-9f73a0ebc7d2","Type":"ContainerStarted","Data":"f3a430be650aea3a88658f6b99327376aa538749440d89b52fbeb032d4ecd47e"} Jan 29 08:52:16 crc kubenswrapper[5017]: I0129 08:52:16.584481 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-dhcp-openstack-openstack-cell1-5bld2" podStartSLOduration=2.013820393 podStartE2EDuration="2.584457026s" podCreationTimestamp="2026-01-29 08:52:14 +0000 UTC" firstStartedPulling="2026-01-29 08:52:15.534557753 +0000 UTC m=+8221.909005363" lastFinishedPulling="2026-01-29 08:52:16.105194386 +0000 UTC m=+8222.479641996" observedRunningTime="2026-01-29 08:52:16.576770138 +0000 UTC m=+8222.951217778" watchObservedRunningTime="2026-01-29 08:52:16.584457026 +0000 UTC m=+8222.958904626" Jan 29 08:52:26 crc kubenswrapper[5017]: I0129 08:52:26.539297 5017 patch_prober.go:28] interesting pod/machine-config-daemon-895pl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 08:52:26 crc kubenswrapper[5017]: I0129 08:52:26.540162 5017 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 08:52:56 crc kubenswrapper[5017]: I0129 08:52:56.539398 5017 patch_prober.go:28] interesting pod/machine-config-daemon-895pl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 08:52:56 crc kubenswrapper[5017]: I0129 08:52:56.540154 5017 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 08:53:26 crc kubenswrapper[5017]: I0129 08:53:26.064855 5017 generic.go:334] "Generic (PLEG): container finished" podID="dc8124b4-0101-4623-ab9c-9f73a0ebc7d2" containerID="f3a430be650aea3a88658f6b99327376aa538749440d89b52fbeb032d4ecd47e" exitCode=0 Jan 29 08:53:26 crc kubenswrapper[5017]: I0129 08:53:26.065031 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dhcp-openstack-openstack-cell1-5bld2" 
event={"ID":"dc8124b4-0101-4623-ab9c-9f73a0ebc7d2","Type":"ContainerDied","Data":"f3a430be650aea3a88658f6b99327376aa538749440d89b52fbeb032d4ecd47e"} Jan 29 08:53:26 crc kubenswrapper[5017]: I0129 08:53:26.539694 5017 patch_prober.go:28] interesting pod/machine-config-daemon-895pl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 08:53:26 crc kubenswrapper[5017]: I0129 08:53:26.539775 5017 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 08:53:26 crc kubenswrapper[5017]: I0129 08:53:26.539827 5017 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-895pl" Jan 29 08:53:26 crc kubenswrapper[5017]: I0129 08:53:26.540829 5017 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e3a246f65d4f8a1699917ba84874081742480fb79e7847e1046e7281f011e7ed"} pod="openshift-machine-config-operator/machine-config-daemon-895pl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 08:53:26 crc kubenswrapper[5017]: I0129 08:53:26.540898 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" containerID="cri-o://e3a246f65d4f8a1699917ba84874081742480fb79e7847e1046e7281f011e7ed" gracePeriod=600 Jan 29 08:53:27 crc kubenswrapper[5017]: I0129 08:53:27.080793 5017 generic.go:334] "Generic (PLEG): container finished" podID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerID="e3a246f65d4f8a1699917ba84874081742480fb79e7847e1046e7281f011e7ed" exitCode=0 Jan 29 08:53:27 crc kubenswrapper[5017]: I0129 08:53:27.082655 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-895pl" event={"ID":"2672ef63-7861-4c3d-a1b4-03cc9d18f8e2","Type":"ContainerDied","Data":"e3a246f65d4f8a1699917ba84874081742480fb79e7847e1046e7281f011e7ed"} Jan 29 08:53:27 crc kubenswrapper[5017]: I0129 08:53:27.082756 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-895pl" event={"ID":"2672ef63-7861-4c3d-a1b4-03cc9d18f8e2","Type":"ContainerStarted","Data":"13928f2812286efc277648be35e4bcee00e37c7735d5bc258ca6261e12d1b284"} Jan 29 08:53:27 crc kubenswrapper[5017]: I0129 08:53:27.082825 5017 scope.go:117] "RemoveContainer" containerID="9d0a7fd71d9f91170d4639ccbbd832deecb27b1123db941414d97c11d3459063" Jan 29 08:53:27 crc kubenswrapper[5017]: I0129 08:53:27.642584 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-dhcp-openstack-openstack-cell1-5bld2" Jan 29 08:53:27 crc kubenswrapper[5017]: I0129 08:53:27.761213 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-dhcp-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/dc8124b4-0101-4623-ab9c-9f73a0ebc7d2-neutron-dhcp-agent-neutron-config-0\") pod \"dc8124b4-0101-4623-ab9c-9f73a0ebc7d2\" (UID: \"dc8124b4-0101-4623-ab9c-9f73a0ebc7d2\") " Jan 29 08:53:27 crc kubenswrapper[5017]: I0129 08:53:27.761523 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc8124b4-0101-4623-ab9c-9f73a0ebc7d2-neutron-dhcp-combined-ca-bundle\") pod \"dc8124b4-0101-4623-ab9c-9f73a0ebc7d2\" (UID: \"dc8124b4-0101-4623-ab9c-9f73a0ebc7d2\") " Jan 29 08:53:27 crc kubenswrapper[5017]: I0129 08:53:27.761737 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/dc8124b4-0101-4623-ab9c-9f73a0ebc7d2-ceph\") pod \"dc8124b4-0101-4623-ab9c-9f73a0ebc7d2\" (UID: \"dc8124b4-0101-4623-ab9c-9f73a0ebc7d2\") " Jan 29 08:53:27 crc kubenswrapper[5017]: I0129 08:53:27.761824 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/dc8124b4-0101-4623-ab9c-9f73a0ebc7d2-ssh-key-openstack-cell1\") pod \"dc8124b4-0101-4623-ab9c-9f73a0ebc7d2\" (UID: \"dc8124b4-0101-4623-ab9c-9f73a0ebc7d2\") " Jan 29 08:53:27 crc kubenswrapper[5017]: I0129 08:53:27.761981 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dc8124b4-0101-4623-ab9c-9f73a0ebc7d2-inventory\") pod \"dc8124b4-0101-4623-ab9c-9f73a0ebc7d2\" (UID: \"dc8124b4-0101-4623-ab9c-9f73a0ebc7d2\") " Jan 29 08:53:27 crc kubenswrapper[5017]: I0129 08:53:27.762144 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjbbn\" (UniqueName: \"kubernetes.io/projected/dc8124b4-0101-4623-ab9c-9f73a0ebc7d2-kube-api-access-pjbbn\") pod \"dc8124b4-0101-4623-ab9c-9f73a0ebc7d2\" (UID: \"dc8124b4-0101-4623-ab9c-9f73a0ebc7d2\") " Jan 29 08:53:27 crc kubenswrapper[5017]: I0129 08:53:27.769420 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc8124b4-0101-4623-ab9c-9f73a0ebc7d2-kube-api-access-pjbbn" (OuterVolumeSpecName: "kube-api-access-pjbbn") pod "dc8124b4-0101-4623-ab9c-9f73a0ebc7d2" (UID: "dc8124b4-0101-4623-ab9c-9f73a0ebc7d2"). InnerVolumeSpecName "kube-api-access-pjbbn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:53:27 crc kubenswrapper[5017]: I0129 08:53:27.770268 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc8124b4-0101-4623-ab9c-9f73a0ebc7d2-ceph" (OuterVolumeSpecName: "ceph") pod "dc8124b4-0101-4623-ab9c-9f73a0ebc7d2" (UID: "dc8124b4-0101-4623-ab9c-9f73a0ebc7d2"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:53:27 crc kubenswrapper[5017]: I0129 08:53:27.771672 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc8124b4-0101-4623-ab9c-9f73a0ebc7d2-neutron-dhcp-combined-ca-bundle" (OuterVolumeSpecName: "neutron-dhcp-combined-ca-bundle") pod "dc8124b4-0101-4623-ab9c-9f73a0ebc7d2" (UID: "dc8124b4-0101-4623-ab9c-9f73a0ebc7d2"). 
InnerVolumeSpecName "neutron-dhcp-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:53:27 crc kubenswrapper[5017]: I0129 08:53:27.799786 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc8124b4-0101-4623-ab9c-9f73a0ebc7d2-neutron-dhcp-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-dhcp-agent-neutron-config-0") pod "dc8124b4-0101-4623-ab9c-9f73a0ebc7d2" (UID: "dc8124b4-0101-4623-ab9c-9f73a0ebc7d2"). InnerVolumeSpecName "neutron-dhcp-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:53:27 crc kubenswrapper[5017]: I0129 08:53:27.802261 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc8124b4-0101-4623-ab9c-9f73a0ebc7d2-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "dc8124b4-0101-4623-ab9c-9f73a0ebc7d2" (UID: "dc8124b4-0101-4623-ab9c-9f73a0ebc7d2"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:53:27 crc kubenswrapper[5017]: I0129 08:53:27.804669 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc8124b4-0101-4623-ab9c-9f73a0ebc7d2-inventory" (OuterVolumeSpecName: "inventory") pod "dc8124b4-0101-4623-ab9c-9f73a0ebc7d2" (UID: "dc8124b4-0101-4623-ab9c-9f73a0ebc7d2"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:53:27 crc kubenswrapper[5017]: I0129 08:53:27.866726 5017 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dc8124b4-0101-4623-ab9c-9f73a0ebc7d2-inventory\") on node \"crc\" DevicePath \"\"" Jan 29 08:53:27 crc kubenswrapper[5017]: I0129 08:53:27.866770 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjbbn\" (UniqueName: \"kubernetes.io/projected/dc8124b4-0101-4623-ab9c-9f73a0ebc7d2-kube-api-access-pjbbn\") on node \"crc\" DevicePath \"\"" Jan 29 08:53:27 crc kubenswrapper[5017]: I0129 08:53:27.866782 5017 reconciler_common.go:293] "Volume detached for volume \"neutron-dhcp-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/dc8124b4-0101-4623-ab9c-9f73a0ebc7d2-neutron-dhcp-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 29 08:53:27 crc kubenswrapper[5017]: I0129 08:53:27.866791 5017 reconciler_common.go:293] "Volume detached for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc8124b4-0101-4623-ab9c-9f73a0ebc7d2-neutron-dhcp-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:53:27 crc kubenswrapper[5017]: I0129 08:53:27.866801 5017 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/dc8124b4-0101-4623-ab9c-9f73a0ebc7d2-ceph\") on node \"crc\" DevicePath \"\"" Jan 29 08:53:27 crc kubenswrapper[5017]: I0129 08:53:27.866810 5017 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/dc8124b4-0101-4623-ab9c-9f73a0ebc7d2-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 29 08:53:28 crc kubenswrapper[5017]: I0129 08:53:28.100404 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dhcp-openstack-openstack-cell1-5bld2" event={"ID":"dc8124b4-0101-4623-ab9c-9f73a0ebc7d2","Type":"ContainerDied","Data":"f3fd77b32e4512831be43e1597979a513085fde2c0e13ace9b6a7c59053b46ba"} Jan 29 08:53:28 crc kubenswrapper[5017]: I0129 08:53:28.100890 
5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f3fd77b32e4512831be43e1597979a513085fde2c0e13ace9b6a7c59053b46ba" Jan 29 08:53:28 crc kubenswrapper[5017]: I0129 08:53:28.100463 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-dhcp-openstack-openstack-cell1-5bld2" Jan 29 08:53:52 crc kubenswrapper[5017]: I0129 08:53:52.826919 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 29 08:53:52 crc kubenswrapper[5017]: I0129 08:53:52.828214 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="be5ab308-5352-4f70-8c87-7dece924618f" containerName="nova-cell0-conductor-conductor" containerID="cri-o://8b588cbb0334ba81e6fe0e0e4c78263bb8221274b52433a5ba0535e999597f28" gracePeriod=30 Jan 29 08:53:52 crc kubenswrapper[5017]: I0129 08:53:52.838942 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 29 08:53:52 crc kubenswrapper[5017]: I0129 08:53:52.839355 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-conductor-0" podUID="22e1fbe8-6cda-4d37-a0d4-2a2aa6ce84ce" containerName="nova-cell1-conductor-conductor" containerID="cri-o://4b2ed3de2925bf17d9113e2ffdc2c5d3e331b1db042eeee482d4934d578bd01b" gracePeriod=30 Jan 29 08:53:53 crc kubenswrapper[5017]: E0129 08:53:53.042501 5017 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8b588cbb0334ba81e6fe0e0e4c78263bb8221274b52433a5ba0535e999597f28" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 29 08:53:53 crc kubenswrapper[5017]: E0129 08:53:53.044024 5017 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8b588cbb0334ba81e6fe0e0e4c78263bb8221274b52433a5ba0535e999597f28" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 29 08:53:53 crc kubenswrapper[5017]: E0129 08:53:53.045884 5017 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8b588cbb0334ba81e6fe0e0e4c78263bb8221274b52433a5ba0535e999597f28" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 29 08:53:53 crc kubenswrapper[5017]: E0129 08:53:53.045989 5017 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="be5ab308-5352-4f70-8c87-7dece924618f" containerName="nova-cell0-conductor-conductor" Jan 29 08:53:53 crc kubenswrapper[5017]: I0129 08:53:53.541682 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 08:53:53 crc kubenswrapper[5017]: I0129 08:53:53.542550 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="98b35242-2511-4cfc-9e84-1ad56cae8e44" containerName="nova-scheduler-scheduler" containerID="cri-o://77130bd3da419a8add4c59ea5cf4fed12c1bfca8c5dc5f55bc5171c907b91abc" gracePeriod=30 Jan 29 08:53:53 crc kubenswrapper[5017]: I0129 
08:53:53.607235 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 29 08:53:53 crc kubenswrapper[5017]: I0129 08:53:53.607583 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="a8fb7390-d44e-4c07-9eec-ee2d0856adc3" containerName="nova-api-log" containerID="cri-o://74028403b788423eeb82aaef2e40e3f5c13304354441c1d7911f3c8239495d6d" gracePeriod=30 Jan 29 08:53:53 crc kubenswrapper[5017]: I0129 08:53:53.608177 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="a8fb7390-d44e-4c07-9eec-ee2d0856adc3" containerName="nova-api-api" containerID="cri-o://f9cb25d9f6edfcea1a4eff8a548393e1c011616a63c78f33d0ab0e8a39705a5d" gracePeriod=30 Jan 29 08:53:53 crc kubenswrapper[5017]: I0129 08:53:53.629422 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 08:53:53 crc kubenswrapper[5017]: I0129 08:53:53.629726 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="44ee44e9-325e-4aaa-9523-163177d2f47c" containerName="nova-metadata-log" containerID="cri-o://c59b78bdafc8f3cae07251dc61dd074534d7d3e9570a1035f1eb1181e2a04336" gracePeriod=30 Jan 29 08:53:53 crc kubenswrapper[5017]: I0129 08:53:53.630104 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="44ee44e9-325e-4aaa-9523-163177d2f47c" containerName="nova-metadata-metadata" containerID="cri-o://f1ce74e3bc5840259a8cf6016b9ad7fdd995830cf20979001b9c4a01b2ae5abb" gracePeriod=30 Jan 29 08:53:53 crc kubenswrapper[5017]: E0129 08:53:53.733110 5017 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4b2ed3de2925bf17d9113e2ffdc2c5d3e331b1db042eeee482d4934d578bd01b is running failed: container process not found" containerID="4b2ed3de2925bf17d9113e2ffdc2c5d3e331b1db042eeee482d4934d578bd01b" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 29 08:53:53 crc kubenswrapper[5017]: E0129 08:53:53.737072 5017 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4b2ed3de2925bf17d9113e2ffdc2c5d3e331b1db042eeee482d4934d578bd01b is running failed: container process not found" containerID="4b2ed3de2925bf17d9113e2ffdc2c5d3e331b1db042eeee482d4934d578bd01b" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 29 08:53:53 crc kubenswrapper[5017]: E0129 08:53:53.747586 5017 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4b2ed3de2925bf17d9113e2ffdc2c5d3e331b1db042eeee482d4934d578bd01b is running failed: container process not found" containerID="4b2ed3de2925bf17d9113e2ffdc2c5d3e331b1db042eeee482d4934d578bd01b" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 29 08:53:53 crc kubenswrapper[5017]: E0129 08:53:53.747691 5017 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4b2ed3de2925bf17d9113e2ffdc2c5d3e331b1db042eeee482d4934d578bd01b is running failed: container process not found" probeType="Readiness" pod="openstack/nova-cell1-conductor-0" podUID="22e1fbe8-6cda-4d37-a0d4-2a2aa6ce84ce" containerName="nova-cell1-conductor-conductor" Jan 29 08:53:53 crc kubenswrapper[5017]: I0129 08:53:53.814059 5017 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellnm5sr"] Jan 29 08:53:53 crc kubenswrapper[5017]: E0129 08:53:53.814683 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc8124b4-0101-4623-ab9c-9f73a0ebc7d2" containerName="neutron-dhcp-openstack-openstack-cell1" Jan 29 08:53:53 crc kubenswrapper[5017]: I0129 08:53:53.814701 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc8124b4-0101-4623-ab9c-9f73a0ebc7d2" containerName="neutron-dhcp-openstack-openstack-cell1" Jan 29 08:53:53 crc kubenswrapper[5017]: I0129 08:53:53.814985 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc8124b4-0101-4623-ab9c-9f73a0ebc7d2" containerName="neutron-dhcp-openstack-openstack-cell1" Jan 29 08:53:53 crc kubenswrapper[5017]: I0129 08:53:53.816021 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellnm5sr" Jan 29 08:53:53 crc kubenswrapper[5017]: I0129 08:53:53.831353 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-cells-global-config" Jan 29 08:53:53 crc kubenswrapper[5017]: I0129 08:53:53.831808 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Jan 29 08:53:53 crc kubenswrapper[5017]: I0129 08:53:53.832028 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Jan 29 08:53:53 crc kubenswrapper[5017]: I0129 08:53:53.832262 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 08:53:53 crc kubenswrapper[5017]: I0129 08:53:53.832460 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-8fskm" Jan 29 08:53:53 crc kubenswrapper[5017]: I0129 08:53:53.833482 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Jan 29 08:53:53 crc kubenswrapper[5017]: I0129 08:53:53.836253 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellnm5sr"] Jan 29 08:53:53 crc kubenswrapper[5017]: I0129 08:53:53.842546 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Jan 29 08:53:53 crc kubenswrapper[5017]: I0129 08:53:53.988640 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tl4d8\" (UniqueName: \"kubernetes.io/projected/188aa09e-22df-4d5c-a969-8eebbf23c644-kube-api-access-tl4d8\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellnm5sr\" (UID: \"188aa09e-22df-4d5c-a969-8eebbf23c644\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellnm5sr" Jan 29 08:53:53 crc kubenswrapper[5017]: I0129 08:53:53.989255 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/188aa09e-22df-4d5c-a969-8eebbf23c644-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellnm5sr\" (UID: \"188aa09e-22df-4d5c-a969-8eebbf23c644\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellnm5sr" Jan 29 08:53:53 crc kubenswrapper[5017]: I0129 08:53:53.989332 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/188aa09e-22df-4d5c-a969-8eebbf23c644-nova-cells-global-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellnm5sr\" (UID: \"188aa09e-22df-4d5c-a969-8eebbf23c644\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellnm5sr" Jan 29 08:53:53 crc kubenswrapper[5017]: I0129 08:53:53.989364 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/188aa09e-22df-4d5c-a969-8eebbf23c644-ssh-key-openstack-cell1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellnm5sr\" (UID: \"188aa09e-22df-4d5c-a969-8eebbf23c644\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellnm5sr" Jan 29 08:53:53 crc kubenswrapper[5017]: I0129 08:53:53.989410 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/188aa09e-22df-4d5c-a969-8eebbf23c644-inventory\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellnm5sr\" (UID: \"188aa09e-22df-4d5c-a969-8eebbf23c644\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellnm5sr" Jan 29 08:53:53 crc kubenswrapper[5017]: I0129 08:53:53.989465 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/188aa09e-22df-4d5c-a969-8eebbf23c644-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellnm5sr\" (UID: \"188aa09e-22df-4d5c-a969-8eebbf23c644\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellnm5sr" Jan 29 08:53:53 crc kubenswrapper[5017]: I0129 08:53:53.989513 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cells-global-config-1\" (UniqueName: \"kubernetes.io/configmap/188aa09e-22df-4d5c-a969-8eebbf23c644-nova-cells-global-config-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellnm5sr\" (UID: \"188aa09e-22df-4d5c-a969-8eebbf23c644\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellnm5sr" Jan 29 08:53:53 crc kubenswrapper[5017]: I0129 08:53:53.989562 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/188aa09e-22df-4d5c-a969-8eebbf23c644-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellnm5sr\" (UID: \"188aa09e-22df-4d5c-a969-8eebbf23c644\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellnm5sr" Jan 29 08:53:53 crc kubenswrapper[5017]: I0129 08:53:53.989615 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/188aa09e-22df-4d5c-a969-8eebbf23c644-ceph\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellnm5sr\" (UID: \"188aa09e-22df-4d5c-a969-8eebbf23c644\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellnm5sr" Jan 29 08:53:53 crc kubenswrapper[5017]: I0129 08:53:53.989730 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: 
\"kubernetes.io/secret/188aa09e-22df-4d5c-a969-8eebbf23c644-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellnm5sr\" (UID: \"188aa09e-22df-4d5c-a969-8eebbf23c644\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellnm5sr" Jan 29 08:53:53 crc kubenswrapper[5017]: I0129 08:53:53.989916 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/188aa09e-22df-4d5c-a969-8eebbf23c644-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellnm5sr\" (UID: \"188aa09e-22df-4d5c-a969-8eebbf23c644\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellnm5sr" Jan 29 08:53:54 crc kubenswrapper[5017]: I0129 08:53:54.092068 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tl4d8\" (UniqueName: \"kubernetes.io/projected/188aa09e-22df-4d5c-a969-8eebbf23c644-kube-api-access-tl4d8\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellnm5sr\" (UID: \"188aa09e-22df-4d5c-a969-8eebbf23c644\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellnm5sr" Jan 29 08:53:54 crc kubenswrapper[5017]: I0129 08:53:54.092142 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/188aa09e-22df-4d5c-a969-8eebbf23c644-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellnm5sr\" (UID: \"188aa09e-22df-4d5c-a969-8eebbf23c644\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellnm5sr" Jan 29 08:53:54 crc kubenswrapper[5017]: I0129 08:53:54.092225 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/188aa09e-22df-4d5c-a969-8eebbf23c644-nova-cells-global-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellnm5sr\" (UID: \"188aa09e-22df-4d5c-a969-8eebbf23c644\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellnm5sr" Jan 29 08:53:54 crc kubenswrapper[5017]: I0129 08:53:54.092254 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/188aa09e-22df-4d5c-a969-8eebbf23c644-ssh-key-openstack-cell1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellnm5sr\" (UID: \"188aa09e-22df-4d5c-a969-8eebbf23c644\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellnm5sr" Jan 29 08:53:54 crc kubenswrapper[5017]: I0129 08:53:54.092287 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/188aa09e-22df-4d5c-a969-8eebbf23c644-inventory\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellnm5sr\" (UID: \"188aa09e-22df-4d5c-a969-8eebbf23c644\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellnm5sr" Jan 29 08:53:54 crc kubenswrapper[5017]: I0129 08:53:54.092319 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/188aa09e-22df-4d5c-a969-8eebbf23c644-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellnm5sr\" (UID: \"188aa09e-22df-4d5c-a969-8eebbf23c644\") " 
pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellnm5sr" Jan 29 08:53:54 crc kubenswrapper[5017]: I0129 08:53:54.092346 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cells-global-config-1\" (UniqueName: \"kubernetes.io/configmap/188aa09e-22df-4d5c-a969-8eebbf23c644-nova-cells-global-config-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellnm5sr\" (UID: \"188aa09e-22df-4d5c-a969-8eebbf23c644\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellnm5sr" Jan 29 08:53:54 crc kubenswrapper[5017]: I0129 08:53:54.092374 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/188aa09e-22df-4d5c-a969-8eebbf23c644-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellnm5sr\" (UID: \"188aa09e-22df-4d5c-a969-8eebbf23c644\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellnm5sr" Jan 29 08:53:54 crc kubenswrapper[5017]: I0129 08:53:54.092404 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/188aa09e-22df-4d5c-a969-8eebbf23c644-ceph\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellnm5sr\" (UID: \"188aa09e-22df-4d5c-a969-8eebbf23c644\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellnm5sr" Jan 29 08:53:54 crc kubenswrapper[5017]: I0129 08:53:54.092463 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/188aa09e-22df-4d5c-a969-8eebbf23c644-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellnm5sr\" (UID: \"188aa09e-22df-4d5c-a969-8eebbf23c644\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellnm5sr" Jan 29 08:53:54 crc kubenswrapper[5017]: I0129 08:53:54.092556 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/188aa09e-22df-4d5c-a969-8eebbf23c644-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellnm5sr\" (UID: \"188aa09e-22df-4d5c-a969-8eebbf23c644\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellnm5sr" Jan 29 08:53:54 crc kubenswrapper[5017]: I0129 08:53:54.096411 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/188aa09e-22df-4d5c-a969-8eebbf23c644-nova-cells-global-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellnm5sr\" (UID: \"188aa09e-22df-4d5c-a969-8eebbf23c644\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellnm5sr" Jan 29 08:53:54 crc kubenswrapper[5017]: I0129 08:53:54.103139 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cells-global-config-1\" (UniqueName: \"kubernetes.io/configmap/188aa09e-22df-4d5c-a969-8eebbf23c644-nova-cells-global-config-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellnm5sr\" (UID: \"188aa09e-22df-4d5c-a969-8eebbf23c644\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellnm5sr" Jan 29 08:53:54 crc kubenswrapper[5017]: I0129 08:53:54.113241 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/188aa09e-22df-4d5c-a969-8eebbf23c644-ssh-key-openstack-cell1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellnm5sr\" (UID: \"188aa09e-22df-4d5c-a969-8eebbf23c644\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellnm5sr" Jan 29 08:53:54 crc kubenswrapper[5017]: I0129 08:53:54.118363 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/188aa09e-22df-4d5c-a969-8eebbf23c644-inventory\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellnm5sr\" (UID: \"188aa09e-22df-4d5c-a969-8eebbf23c644\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellnm5sr" Jan 29 08:53:54 crc kubenswrapper[5017]: I0129 08:53:54.118827 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/188aa09e-22df-4d5c-a969-8eebbf23c644-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellnm5sr\" (UID: \"188aa09e-22df-4d5c-a969-8eebbf23c644\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellnm5sr" Jan 29 08:53:54 crc kubenswrapper[5017]: I0129 08:53:54.120743 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/188aa09e-22df-4d5c-a969-8eebbf23c644-ceph\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellnm5sr\" (UID: \"188aa09e-22df-4d5c-a969-8eebbf23c644\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellnm5sr" Jan 29 08:53:54 crc kubenswrapper[5017]: I0129 08:53:54.124185 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/188aa09e-22df-4d5c-a969-8eebbf23c644-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellnm5sr\" (UID: \"188aa09e-22df-4d5c-a969-8eebbf23c644\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellnm5sr" Jan 29 08:53:54 crc kubenswrapper[5017]: I0129 08:53:54.124512 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tl4d8\" (UniqueName: \"kubernetes.io/projected/188aa09e-22df-4d5c-a969-8eebbf23c644-kube-api-access-tl4d8\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellnm5sr\" (UID: \"188aa09e-22df-4d5c-a969-8eebbf23c644\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellnm5sr" Jan 29 08:53:54 crc kubenswrapper[5017]: I0129 08:53:54.124612 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/188aa09e-22df-4d5c-a969-8eebbf23c644-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellnm5sr\" (UID: \"188aa09e-22df-4d5c-a969-8eebbf23c644\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellnm5sr" Jan 29 08:53:54 crc kubenswrapper[5017]: I0129 08:53:54.125798 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/188aa09e-22df-4d5c-a969-8eebbf23c644-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellnm5sr\" (UID: \"188aa09e-22df-4d5c-a969-8eebbf23c644\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellnm5sr" Jan 29 
08:53:54 crc kubenswrapper[5017]: I0129 08:53:54.138893 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/188aa09e-22df-4d5c-a969-8eebbf23c644-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellnm5sr\" (UID: \"188aa09e-22df-4d5c-a969-8eebbf23c644\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellnm5sr" Jan 29 08:53:54 crc kubenswrapper[5017]: I0129 08:53:54.188607 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellnm5sr" Jan 29 08:53:54 crc kubenswrapper[5017]: I0129 08:53:54.275697 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 29 08:53:54 crc kubenswrapper[5017]: I0129 08:53:54.301932 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22e1fbe8-6cda-4d37-a0d4-2a2aa6ce84ce-config-data\") pod \"22e1fbe8-6cda-4d37-a0d4-2a2aa6ce84ce\" (UID: \"22e1fbe8-6cda-4d37-a0d4-2a2aa6ce84ce\") " Jan 29 08:53:54 crc kubenswrapper[5017]: I0129 08:53:54.302085 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22e1fbe8-6cda-4d37-a0d4-2a2aa6ce84ce-combined-ca-bundle\") pod \"22e1fbe8-6cda-4d37-a0d4-2a2aa6ce84ce\" (UID: \"22e1fbe8-6cda-4d37-a0d4-2a2aa6ce84ce\") " Jan 29 08:53:54 crc kubenswrapper[5017]: I0129 08:53:54.302218 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxxrd\" (UniqueName: \"kubernetes.io/projected/22e1fbe8-6cda-4d37-a0d4-2a2aa6ce84ce-kube-api-access-wxxrd\") pod \"22e1fbe8-6cda-4d37-a0d4-2a2aa6ce84ce\" (UID: \"22e1fbe8-6cda-4d37-a0d4-2a2aa6ce84ce\") " Jan 29 08:53:54 crc kubenswrapper[5017]: I0129 08:53:54.310931 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22e1fbe8-6cda-4d37-a0d4-2a2aa6ce84ce-kube-api-access-wxxrd" (OuterVolumeSpecName: "kube-api-access-wxxrd") pod "22e1fbe8-6cda-4d37-a0d4-2a2aa6ce84ce" (UID: "22e1fbe8-6cda-4d37-a0d4-2a2aa6ce84ce"). InnerVolumeSpecName "kube-api-access-wxxrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:53:54 crc kubenswrapper[5017]: I0129 08:53:54.350045 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22e1fbe8-6cda-4d37-a0d4-2a2aa6ce84ce-config-data" (OuterVolumeSpecName: "config-data") pod "22e1fbe8-6cda-4d37-a0d4-2a2aa6ce84ce" (UID: "22e1fbe8-6cda-4d37-a0d4-2a2aa6ce84ce"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:53:54 crc kubenswrapper[5017]: I0129 08:53:54.390356 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22e1fbe8-6cda-4d37-a0d4-2a2aa6ce84ce-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "22e1fbe8-6cda-4d37-a0d4-2a2aa6ce84ce" (UID: "22e1fbe8-6cda-4d37-a0d4-2a2aa6ce84ce"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:53:54 crc kubenswrapper[5017]: I0129 08:53:54.404923 5017 generic.go:334] "Generic (PLEG): container finished" podID="a8fb7390-d44e-4c07-9eec-ee2d0856adc3" containerID="74028403b788423eeb82aaef2e40e3f5c13304354441c1d7911f3c8239495d6d" exitCode=143 Jan 29 08:53:54 crc kubenswrapper[5017]: I0129 08:53:54.405101 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a8fb7390-d44e-4c07-9eec-ee2d0856adc3","Type":"ContainerDied","Data":"74028403b788423eeb82aaef2e40e3f5c13304354441c1d7911f3c8239495d6d"} Jan 29 08:53:54 crc kubenswrapper[5017]: I0129 08:53:54.410086 5017 generic.go:334] "Generic (PLEG): container finished" podID="44ee44e9-325e-4aaa-9523-163177d2f47c" containerID="c59b78bdafc8f3cae07251dc61dd074534d7d3e9570a1035f1eb1181e2a04336" exitCode=143 Jan 29 08:53:54 crc kubenswrapper[5017]: I0129 08:53:54.410166 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"44ee44e9-325e-4aaa-9523-163177d2f47c","Type":"ContainerDied","Data":"c59b78bdafc8f3cae07251dc61dd074534d7d3e9570a1035f1eb1181e2a04336"} Jan 29 08:53:54 crc kubenswrapper[5017]: I0129 08:53:54.412516 5017 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22e1fbe8-6cda-4d37-a0d4-2a2aa6ce84ce-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:53:54 crc kubenswrapper[5017]: I0129 08:53:54.412584 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxxrd\" (UniqueName: \"kubernetes.io/projected/22e1fbe8-6cda-4d37-a0d4-2a2aa6ce84ce-kube-api-access-wxxrd\") on node \"crc\" DevicePath \"\"" Jan 29 08:53:54 crc kubenswrapper[5017]: I0129 08:53:54.412601 5017 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22e1fbe8-6cda-4d37-a0d4-2a2aa6ce84ce-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 08:53:54 crc kubenswrapper[5017]: I0129 08:53:54.414990 5017 generic.go:334] "Generic (PLEG): container finished" podID="22e1fbe8-6cda-4d37-a0d4-2a2aa6ce84ce" containerID="4b2ed3de2925bf17d9113e2ffdc2c5d3e331b1db042eeee482d4934d578bd01b" exitCode=0 Jan 29 08:53:54 crc kubenswrapper[5017]: I0129 08:53:54.415049 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"22e1fbe8-6cda-4d37-a0d4-2a2aa6ce84ce","Type":"ContainerDied","Data":"4b2ed3de2925bf17d9113e2ffdc2c5d3e331b1db042eeee482d4934d578bd01b"} Jan 29 08:53:54 crc kubenswrapper[5017]: I0129 08:53:54.415088 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"22e1fbe8-6cda-4d37-a0d4-2a2aa6ce84ce","Type":"ContainerDied","Data":"8d93a78d4f5fba93eb3e2ad0c646ace6692c8cc47157aca4ba78a0219e989d97"} Jan 29 08:53:54 crc kubenswrapper[5017]: I0129 08:53:54.415139 5017 scope.go:117] "RemoveContainer" containerID="4b2ed3de2925bf17d9113e2ffdc2c5d3e331b1db042eeee482d4934d578bd01b" Jan 29 08:53:54 crc kubenswrapper[5017]: I0129 08:53:54.415342 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 29 08:53:54 crc kubenswrapper[5017]: I0129 08:53:54.453376 5017 scope.go:117] "RemoveContainer" containerID="4b2ed3de2925bf17d9113e2ffdc2c5d3e331b1db042eeee482d4934d578bd01b" Jan 29 08:53:54 crc kubenswrapper[5017]: E0129 08:53:54.454065 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b2ed3de2925bf17d9113e2ffdc2c5d3e331b1db042eeee482d4934d578bd01b\": container with ID starting with 4b2ed3de2925bf17d9113e2ffdc2c5d3e331b1db042eeee482d4934d578bd01b not found: ID does not exist" containerID="4b2ed3de2925bf17d9113e2ffdc2c5d3e331b1db042eeee482d4934d578bd01b" Jan 29 08:53:54 crc kubenswrapper[5017]: I0129 08:53:54.454117 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b2ed3de2925bf17d9113e2ffdc2c5d3e331b1db042eeee482d4934d578bd01b"} err="failed to get container status \"4b2ed3de2925bf17d9113e2ffdc2c5d3e331b1db042eeee482d4934d578bd01b\": rpc error: code = NotFound desc = could not find container \"4b2ed3de2925bf17d9113e2ffdc2c5d3e331b1db042eeee482d4934d578bd01b\": container with ID starting with 4b2ed3de2925bf17d9113e2ffdc2c5d3e331b1db042eeee482d4934d578bd01b not found: ID does not exist" Jan 29 08:53:54 crc kubenswrapper[5017]: I0129 08:53:54.489532 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 29 08:53:54 crc kubenswrapper[5017]: I0129 08:53:54.502909 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 29 08:53:54 crc kubenswrapper[5017]: E0129 08:53:54.509504 5017 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="77130bd3da419a8add4c59ea5cf4fed12c1bfca8c5dc5f55bc5171c907b91abc" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 29 08:53:54 crc kubenswrapper[5017]: E0129 08:53:54.511572 5017 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="77130bd3da419a8add4c59ea5cf4fed12c1bfca8c5dc5f55bc5171c907b91abc" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 29 08:53:54 crc kubenswrapper[5017]: E0129 08:53:54.513198 5017 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="77130bd3da419a8add4c59ea5cf4fed12c1bfca8c5dc5f55bc5171c907b91abc" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 29 08:53:54 crc kubenswrapper[5017]: E0129 08:53:54.513273 5017 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="98b35242-2511-4cfc-9e84-1ad56cae8e44" containerName="nova-scheduler-scheduler" Jan 29 08:53:54 crc kubenswrapper[5017]: I0129 08:53:54.518942 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 29 08:53:54 crc kubenswrapper[5017]: E0129 08:53:54.519467 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22e1fbe8-6cda-4d37-a0d4-2a2aa6ce84ce" containerName="nova-cell1-conductor-conductor" Jan 29 
08:53:54 crc kubenswrapper[5017]: I0129 08:53:54.519481 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="22e1fbe8-6cda-4d37-a0d4-2a2aa6ce84ce" containerName="nova-cell1-conductor-conductor" Jan 29 08:53:54 crc kubenswrapper[5017]: I0129 08:53:54.519696 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="22e1fbe8-6cda-4d37-a0d4-2a2aa6ce84ce" containerName="nova-cell1-conductor-conductor" Jan 29 08:53:54 crc kubenswrapper[5017]: I0129 08:53:54.520519 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 29 08:53:54 crc kubenswrapper[5017]: I0129 08:53:54.524266 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 29 08:53:54 crc kubenswrapper[5017]: I0129 08:53:54.538924 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 29 08:53:54 crc kubenswrapper[5017]: I0129 08:53:54.616413 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scc9s\" (UniqueName: \"kubernetes.io/projected/4463a745-19c8-413a-9788-a50b598ed3f5-kube-api-access-scc9s\") pod \"nova-cell1-conductor-0\" (UID: \"4463a745-19c8-413a-9788-a50b598ed3f5\") " pod="openstack/nova-cell1-conductor-0" Jan 29 08:53:54 crc kubenswrapper[5017]: I0129 08:53:54.616475 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4463a745-19c8-413a-9788-a50b598ed3f5-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"4463a745-19c8-413a-9788-a50b598ed3f5\") " pod="openstack/nova-cell1-conductor-0" Jan 29 08:53:54 crc kubenswrapper[5017]: I0129 08:53:54.616605 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4463a745-19c8-413a-9788-a50b598ed3f5-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"4463a745-19c8-413a-9788-a50b598ed3f5\") " pod="openstack/nova-cell1-conductor-0" Jan 29 08:53:54 crc kubenswrapper[5017]: I0129 08:53:54.718793 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-scc9s\" (UniqueName: \"kubernetes.io/projected/4463a745-19c8-413a-9788-a50b598ed3f5-kube-api-access-scc9s\") pod \"nova-cell1-conductor-0\" (UID: \"4463a745-19c8-413a-9788-a50b598ed3f5\") " pod="openstack/nova-cell1-conductor-0" Jan 29 08:53:54 crc kubenswrapper[5017]: I0129 08:53:54.719453 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4463a745-19c8-413a-9788-a50b598ed3f5-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"4463a745-19c8-413a-9788-a50b598ed3f5\") " pod="openstack/nova-cell1-conductor-0" Jan 29 08:53:54 crc kubenswrapper[5017]: I0129 08:53:54.719619 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4463a745-19c8-413a-9788-a50b598ed3f5-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"4463a745-19c8-413a-9788-a50b598ed3f5\") " pod="openstack/nova-cell1-conductor-0" Jan 29 08:53:54 crc kubenswrapper[5017]: I0129 08:53:54.726854 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4463a745-19c8-413a-9788-a50b598ed3f5-config-data\") pod \"nova-cell1-conductor-0\" (UID: 
\"4463a745-19c8-413a-9788-a50b598ed3f5\") " pod="openstack/nova-cell1-conductor-0" Jan 29 08:53:54 crc kubenswrapper[5017]: I0129 08:53:54.735765 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4463a745-19c8-413a-9788-a50b598ed3f5-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"4463a745-19c8-413a-9788-a50b598ed3f5\") " pod="openstack/nova-cell1-conductor-0" Jan 29 08:53:54 crc kubenswrapper[5017]: I0129 08:53:54.736346 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-scc9s\" (UniqueName: \"kubernetes.io/projected/4463a745-19c8-413a-9788-a50b598ed3f5-kube-api-access-scc9s\") pod \"nova-cell1-conductor-0\" (UID: \"4463a745-19c8-413a-9788-a50b598ed3f5\") " pod="openstack/nova-cell1-conductor-0" Jan 29 08:53:54 crc kubenswrapper[5017]: I0129 08:53:54.847569 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 29 08:53:54 crc kubenswrapper[5017]: I0129 08:53:54.862927 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellnm5sr"] Jan 29 08:53:55 crc kubenswrapper[5017]: I0129 08:53:55.351339 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 29 08:53:55 crc kubenswrapper[5017]: W0129 08:53:55.358520 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4463a745_19c8_413a_9788_a50b598ed3f5.slice/crio-a5f9a0e4a2a3daf214e2dc43508d6d618ca278b967e8bae736cc18a8282a1953 WatchSource:0}: Error finding container a5f9a0e4a2a3daf214e2dc43508d6d618ca278b967e8bae736cc18a8282a1953: Status 404 returned error can't find the container with id a5f9a0e4a2a3daf214e2dc43508d6d618ca278b967e8bae736cc18a8282a1953 Jan 29 08:53:55 crc kubenswrapper[5017]: I0129 08:53:55.434029 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellnm5sr" event={"ID":"188aa09e-22df-4d5c-a969-8eebbf23c644","Type":"ContainerStarted","Data":"4eba1cedce926de44b960b88a9a5c0c02d31708013cd82512f517a206f81c316"} Jan 29 08:53:55 crc kubenswrapper[5017]: I0129 08:53:55.440458 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"4463a745-19c8-413a-9788-a50b598ed3f5","Type":"ContainerStarted","Data":"a5f9a0e4a2a3daf214e2dc43508d6d618ca278b967e8bae736cc18a8282a1953"} Jan 29 08:53:56 crc kubenswrapper[5017]: I0129 08:53:56.333140 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22e1fbe8-6cda-4d37-a0d4-2a2aa6ce84ce" path="/var/lib/kubelet/pods/22e1fbe8-6cda-4d37-a0d4-2a2aa6ce84ce/volumes" Jan 29 08:53:56 crc kubenswrapper[5017]: I0129 08:53:56.453897 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"4463a745-19c8-413a-9788-a50b598ed3f5","Type":"ContainerStarted","Data":"093ed17ebb1f4838310d612e909d507da22b0cc62a871f2e10c12d38edfadbd5"} Jan 29 08:53:56 crc kubenswrapper[5017]: I0129 08:53:56.454001 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 29 08:53:56 crc kubenswrapper[5017]: I0129 08:53:56.456789 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellnm5sr" 
event={"ID":"188aa09e-22df-4d5c-a969-8eebbf23c644","Type":"ContainerStarted","Data":"05d08edea45537ef0124622ba0565d15e32581e8c456c8eb39c537fc15317ffe"} Jan 29 08:53:56 crc kubenswrapper[5017]: I0129 08:53:56.487948 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.487923454 podStartE2EDuration="2.487923454s" podCreationTimestamp="2026-01-29 08:53:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:53:56.472416065 +0000 UTC m=+8322.846863675" watchObservedRunningTime="2026-01-29 08:53:56.487923454 +0000 UTC m=+8322.862371064" Jan 29 08:53:56 crc kubenswrapper[5017]: I0129 08:53:56.511777 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellnm5sr" podStartSLOduration=2.8510073719999998 podStartE2EDuration="3.511753606s" podCreationTimestamp="2026-01-29 08:53:53 +0000 UTC" firstStartedPulling="2026-01-29 08:53:54.874085443 +0000 UTC m=+8321.248533053" lastFinishedPulling="2026-01-29 08:53:55.534831677 +0000 UTC m=+8321.909279287" observedRunningTime="2026-01-29 08:53:56.497644891 +0000 UTC m=+8322.872092511" watchObservedRunningTime="2026-01-29 08:53:56.511753606 +0000 UTC m=+8322.886201216" Jan 29 08:53:57 crc kubenswrapper[5017]: I0129 08:53:57.234468 5017 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="44ee44e9-325e-4aaa-9523-163177d2f47c" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"http://10.217.1.82:8775/\": read tcp 10.217.0.2:60176->10.217.1.82:8775: read: connection reset by peer" Jan 29 08:53:57 crc kubenswrapper[5017]: I0129 08:53:57.234601 5017 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="44ee44e9-325e-4aaa-9523-163177d2f47c" containerName="nova-metadata-log" probeResult="failure" output="Get \"http://10.217.1.82:8775/\": read tcp 10.217.0.2:60174->10.217.1.82:8775: read: connection reset by peer" Jan 29 08:53:57 crc kubenswrapper[5017]: I0129 08:53:57.501148 5017 generic.go:334] "Generic (PLEG): container finished" podID="a8fb7390-d44e-4c07-9eec-ee2d0856adc3" containerID="f9cb25d9f6edfcea1a4eff8a548393e1c011616a63c78f33d0ab0e8a39705a5d" exitCode=0 Jan 29 08:53:57 crc kubenswrapper[5017]: I0129 08:53:57.505713 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a8fb7390-d44e-4c07-9eec-ee2d0856adc3","Type":"ContainerDied","Data":"f9cb25d9f6edfcea1a4eff8a548393e1c011616a63c78f33d0ab0e8a39705a5d"} Jan 29 08:53:57 crc kubenswrapper[5017]: I0129 08:53:57.519180 5017 generic.go:334] "Generic (PLEG): container finished" podID="44ee44e9-325e-4aaa-9523-163177d2f47c" containerID="f1ce74e3bc5840259a8cf6016b9ad7fdd995830cf20979001b9c4a01b2ae5abb" exitCode=0 Jan 29 08:53:57 crc kubenswrapper[5017]: I0129 08:53:57.520426 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"44ee44e9-325e-4aaa-9523-163177d2f47c","Type":"ContainerDied","Data":"f1ce74e3bc5840259a8cf6016b9ad7fdd995830cf20979001b9c4a01b2ae5abb"} Jan 29 08:53:57 crc kubenswrapper[5017]: I0129 08:53:57.725071 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 29 08:53:57 crc kubenswrapper[5017]: I0129 08:53:57.807655 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a8fb7390-d44e-4c07-9eec-ee2d0856adc3-logs\") pod \"a8fb7390-d44e-4c07-9eec-ee2d0856adc3\" (UID: \"a8fb7390-d44e-4c07-9eec-ee2d0856adc3\") " Jan 29 08:53:57 crc kubenswrapper[5017]: I0129 08:53:57.807746 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8fb7390-d44e-4c07-9eec-ee2d0856adc3-config-data\") pod \"a8fb7390-d44e-4c07-9eec-ee2d0856adc3\" (UID: \"a8fb7390-d44e-4c07-9eec-ee2d0856adc3\") " Jan 29 08:53:57 crc kubenswrapper[5017]: I0129 08:53:57.807830 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l2kb5\" (UniqueName: \"kubernetes.io/projected/a8fb7390-d44e-4c07-9eec-ee2d0856adc3-kube-api-access-l2kb5\") pod \"a8fb7390-d44e-4c07-9eec-ee2d0856adc3\" (UID: \"a8fb7390-d44e-4c07-9eec-ee2d0856adc3\") " Jan 29 08:53:57 crc kubenswrapper[5017]: I0129 08:53:57.807885 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8fb7390-d44e-4c07-9eec-ee2d0856adc3-combined-ca-bundle\") pod \"a8fb7390-d44e-4c07-9eec-ee2d0856adc3\" (UID: \"a8fb7390-d44e-4c07-9eec-ee2d0856adc3\") " Jan 29 08:53:57 crc kubenswrapper[5017]: I0129 08:53:57.808607 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8fb7390-d44e-4c07-9eec-ee2d0856adc3-logs" (OuterVolumeSpecName: "logs") pod "a8fb7390-d44e-4c07-9eec-ee2d0856adc3" (UID: "a8fb7390-d44e-4c07-9eec-ee2d0856adc3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:53:57 crc kubenswrapper[5017]: I0129 08:53:57.821213 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8fb7390-d44e-4c07-9eec-ee2d0856adc3-kube-api-access-l2kb5" (OuterVolumeSpecName: "kube-api-access-l2kb5") pod "a8fb7390-d44e-4c07-9eec-ee2d0856adc3" (UID: "a8fb7390-d44e-4c07-9eec-ee2d0856adc3"). InnerVolumeSpecName "kube-api-access-l2kb5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:53:57 crc kubenswrapper[5017]: I0129 08:53:57.846016 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8fb7390-d44e-4c07-9eec-ee2d0856adc3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a8fb7390-d44e-4c07-9eec-ee2d0856adc3" (UID: "a8fb7390-d44e-4c07-9eec-ee2d0856adc3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:53:57 crc kubenswrapper[5017]: I0129 08:53:57.864906 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8fb7390-d44e-4c07-9eec-ee2d0856adc3-config-data" (OuterVolumeSpecName: "config-data") pod "a8fb7390-d44e-4c07-9eec-ee2d0856adc3" (UID: "a8fb7390-d44e-4c07-9eec-ee2d0856adc3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:53:57 crc kubenswrapper[5017]: I0129 08:53:57.898296 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 08:53:57 crc kubenswrapper[5017]: I0129 08:53:57.911315 5017 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a8fb7390-d44e-4c07-9eec-ee2d0856adc3-logs\") on node \"crc\" DevicePath \"\"" Jan 29 08:53:57 crc kubenswrapper[5017]: I0129 08:53:57.911386 5017 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8fb7390-d44e-4c07-9eec-ee2d0856adc3-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 08:53:57 crc kubenswrapper[5017]: I0129 08:53:57.911399 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l2kb5\" (UniqueName: \"kubernetes.io/projected/a8fb7390-d44e-4c07-9eec-ee2d0856adc3-kube-api-access-l2kb5\") on node \"crc\" DevicePath \"\"" Jan 29 08:53:57 crc kubenswrapper[5017]: I0129 08:53:57.911410 5017 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8fb7390-d44e-4c07-9eec-ee2d0856adc3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:53:58 crc kubenswrapper[5017]: I0129 08:53:58.012903 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-86c87\" (UniqueName: \"kubernetes.io/projected/44ee44e9-325e-4aaa-9523-163177d2f47c-kube-api-access-86c87\") pod \"44ee44e9-325e-4aaa-9523-163177d2f47c\" (UID: \"44ee44e9-325e-4aaa-9523-163177d2f47c\") " Jan 29 08:53:58 crc kubenswrapper[5017]: I0129 08:53:58.013147 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44ee44e9-325e-4aaa-9523-163177d2f47c-combined-ca-bundle\") pod \"44ee44e9-325e-4aaa-9523-163177d2f47c\" (UID: \"44ee44e9-325e-4aaa-9523-163177d2f47c\") " Jan 29 08:53:58 crc kubenswrapper[5017]: I0129 08:53:58.013221 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/44ee44e9-325e-4aaa-9523-163177d2f47c-logs\") pod \"44ee44e9-325e-4aaa-9523-163177d2f47c\" (UID: \"44ee44e9-325e-4aaa-9523-163177d2f47c\") " Jan 29 08:53:58 crc kubenswrapper[5017]: I0129 08:53:58.013380 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44ee44e9-325e-4aaa-9523-163177d2f47c-config-data\") pod \"44ee44e9-325e-4aaa-9523-163177d2f47c\" (UID: \"44ee44e9-325e-4aaa-9523-163177d2f47c\") " Jan 29 08:53:58 crc kubenswrapper[5017]: I0129 08:53:58.013818 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/44ee44e9-325e-4aaa-9523-163177d2f47c-logs" (OuterVolumeSpecName: "logs") pod "44ee44e9-325e-4aaa-9523-163177d2f47c" (UID: "44ee44e9-325e-4aaa-9523-163177d2f47c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:53:58 crc kubenswrapper[5017]: I0129 08:53:58.014401 5017 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/44ee44e9-325e-4aaa-9523-163177d2f47c-logs\") on node \"crc\" DevicePath \"\"" Jan 29 08:53:58 crc kubenswrapper[5017]: I0129 08:53:58.018518 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44ee44e9-325e-4aaa-9523-163177d2f47c-kube-api-access-86c87" (OuterVolumeSpecName: "kube-api-access-86c87") pod "44ee44e9-325e-4aaa-9523-163177d2f47c" (UID: "44ee44e9-325e-4aaa-9523-163177d2f47c"). 
InnerVolumeSpecName "kube-api-access-86c87". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:53:58 crc kubenswrapper[5017]: E0129 08:53:58.042324 5017 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8b588cbb0334ba81e6fe0e0e4c78263bb8221274b52433a5ba0535e999597f28" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 29 08:53:58 crc kubenswrapper[5017]: E0129 08:53:58.044748 5017 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8b588cbb0334ba81e6fe0e0e4c78263bb8221274b52433a5ba0535e999597f28" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 29 08:53:58 crc kubenswrapper[5017]: E0129 08:53:58.047179 5017 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8b588cbb0334ba81e6fe0e0e4c78263bb8221274b52433a5ba0535e999597f28" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 29 08:53:58 crc kubenswrapper[5017]: E0129 08:53:58.047252 5017 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="be5ab308-5352-4f70-8c87-7dece924618f" containerName="nova-cell0-conductor-conductor" Jan 29 08:53:58 crc kubenswrapper[5017]: I0129 08:53:58.052650 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44ee44e9-325e-4aaa-9523-163177d2f47c-config-data" (OuterVolumeSpecName: "config-data") pod "44ee44e9-325e-4aaa-9523-163177d2f47c" (UID: "44ee44e9-325e-4aaa-9523-163177d2f47c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:53:58 crc kubenswrapper[5017]: I0129 08:53:58.052738 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44ee44e9-325e-4aaa-9523-163177d2f47c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "44ee44e9-325e-4aaa-9523-163177d2f47c" (UID: "44ee44e9-325e-4aaa-9523-163177d2f47c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:53:58 crc kubenswrapper[5017]: I0129 08:53:58.117219 5017 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44ee44e9-325e-4aaa-9523-163177d2f47c-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 08:53:58 crc kubenswrapper[5017]: I0129 08:53:58.117268 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-86c87\" (UniqueName: \"kubernetes.io/projected/44ee44e9-325e-4aaa-9523-163177d2f47c-kube-api-access-86c87\") on node \"crc\" DevicePath \"\"" Jan 29 08:53:58 crc kubenswrapper[5017]: I0129 08:53:58.117286 5017 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44ee44e9-325e-4aaa-9523-163177d2f47c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:53:58 crc kubenswrapper[5017]: I0129 08:53:58.531683 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a8fb7390-d44e-4c07-9eec-ee2d0856adc3","Type":"ContainerDied","Data":"38719233cfca48eee199abf5d8cf436793d0c062b97db05cf9391fa787fdbdae"} Jan 29 08:53:58 crc kubenswrapper[5017]: I0129 08:53:58.531773 5017 scope.go:117] "RemoveContainer" containerID="f9cb25d9f6edfcea1a4eff8a548393e1c011616a63c78f33d0ab0e8a39705a5d" Jan 29 08:53:58 crc kubenswrapper[5017]: I0129 08:53:58.532022 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 08:53:58 crc kubenswrapper[5017]: I0129 08:53:58.535991 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"44ee44e9-325e-4aaa-9523-163177d2f47c","Type":"ContainerDied","Data":"ba76f05c9b4d6762c30674fb761c6b606a20074b8d78fd41635e19966048bae2"} Jan 29 08:53:58 crc kubenswrapper[5017]: I0129 08:53:58.536059 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 08:53:58 crc kubenswrapper[5017]: I0129 08:53:58.576628 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 29 08:53:58 crc kubenswrapper[5017]: I0129 08:53:58.587859 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 29 08:53:58 crc kubenswrapper[5017]: I0129 08:53:58.600604 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 29 08:53:58 crc kubenswrapper[5017]: E0129 08:53:58.601739 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8fb7390-d44e-4c07-9eec-ee2d0856adc3" containerName="nova-api-api" Jan 29 08:53:58 crc kubenswrapper[5017]: I0129 08:53:58.601767 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8fb7390-d44e-4c07-9eec-ee2d0856adc3" containerName="nova-api-api" Jan 29 08:53:58 crc kubenswrapper[5017]: E0129 08:53:58.601787 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44ee44e9-325e-4aaa-9523-163177d2f47c" containerName="nova-metadata-metadata" Jan 29 08:53:58 crc kubenswrapper[5017]: I0129 08:53:58.601795 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="44ee44e9-325e-4aaa-9523-163177d2f47c" containerName="nova-metadata-metadata" Jan 29 08:53:58 crc kubenswrapper[5017]: E0129 08:53:58.602131 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44ee44e9-325e-4aaa-9523-163177d2f47c" containerName="nova-metadata-log" Jan 29 08:53:58 crc kubenswrapper[5017]: I0129 08:53:58.602152 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="44ee44e9-325e-4aaa-9523-163177d2f47c" containerName="nova-metadata-log" Jan 29 08:53:58 crc kubenswrapper[5017]: E0129 08:53:58.602172 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8fb7390-d44e-4c07-9eec-ee2d0856adc3" containerName="nova-api-log" Jan 29 08:53:58 crc kubenswrapper[5017]: I0129 08:53:58.602182 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8fb7390-d44e-4c07-9eec-ee2d0856adc3" containerName="nova-api-log" Jan 29 08:53:58 crc kubenswrapper[5017]: I0129 08:53:58.602500 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="44ee44e9-325e-4aaa-9523-163177d2f47c" containerName="nova-metadata-metadata" Jan 29 08:53:58 crc kubenswrapper[5017]: I0129 08:53:58.602530 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8fb7390-d44e-4c07-9eec-ee2d0856adc3" containerName="nova-api-log" Jan 29 08:53:58 crc kubenswrapper[5017]: I0129 08:53:58.602545 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8fb7390-d44e-4c07-9eec-ee2d0856adc3" containerName="nova-api-api" Jan 29 08:53:58 crc kubenswrapper[5017]: I0129 08:53:58.602559 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="44ee44e9-325e-4aaa-9523-163177d2f47c" containerName="nova-metadata-log" Jan 29 08:53:58 crc kubenswrapper[5017]: I0129 08:53:58.603914 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 29 08:53:58 crc kubenswrapper[5017]: I0129 08:53:58.610676 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 29 08:53:58 crc kubenswrapper[5017]: I0129 08:53:58.611594 5017 scope.go:117] "RemoveContainer" containerID="74028403b788423eeb82aaef2e40e3f5c13304354441c1d7911f3c8239495d6d" Jan 29 08:53:58 crc kubenswrapper[5017]: I0129 08:53:58.611784 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 08:53:58 crc kubenswrapper[5017]: I0129 08:53:58.623537 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 08:53:58 crc kubenswrapper[5017]: I0129 08:53:58.632401 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 29 08:53:58 crc kubenswrapper[5017]: I0129 08:53:58.648202 5017 scope.go:117] "RemoveContainer" containerID="f1ce74e3bc5840259a8cf6016b9ad7fdd995830cf20979001b9c4a01b2ae5abb" Jan 29 08:53:58 crc kubenswrapper[5017]: I0129 08:53:58.663581 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 29 08:53:58 crc kubenswrapper[5017]: I0129 08:53:58.665769 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 08:53:58 crc kubenswrapper[5017]: I0129 08:53:58.675921 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 29 08:53:58 crc kubenswrapper[5017]: I0129 08:53:58.682260 5017 scope.go:117] "RemoveContainer" containerID="c59b78bdafc8f3cae07251dc61dd074534d7d3e9570a1035f1eb1181e2a04336" Jan 29 08:53:58 crc kubenswrapper[5017]: I0129 08:53:58.697496 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 08:53:58 crc kubenswrapper[5017]: I0129 08:53:58.734741 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e8346093-03be-4dc7-a1b9-b188c05e14fc-logs\") pod \"nova-api-0\" (UID: \"e8346093-03be-4dc7-a1b9-b188c05e14fc\") " pod="openstack/nova-api-0" Jan 29 08:53:58 crc kubenswrapper[5017]: I0129 08:53:58.734818 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfpmh\" (UniqueName: \"kubernetes.io/projected/e8346093-03be-4dc7-a1b9-b188c05e14fc-kube-api-access-kfpmh\") pod \"nova-api-0\" (UID: \"e8346093-03be-4dc7-a1b9-b188c05e14fc\") " pod="openstack/nova-api-0" Jan 29 08:53:58 crc kubenswrapper[5017]: I0129 08:53:58.734879 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzmgp\" (UniqueName: \"kubernetes.io/projected/642b071c-b157-45af-a981-7adb4df3699d-kube-api-access-rzmgp\") pod \"nova-metadata-0\" (UID: \"642b071c-b157-45af-a981-7adb4df3699d\") " pod="openstack/nova-metadata-0" Jan 29 08:53:58 crc kubenswrapper[5017]: I0129 08:53:58.735057 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8346093-03be-4dc7-a1b9-b188c05e14fc-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e8346093-03be-4dc7-a1b9-b188c05e14fc\") " pod="openstack/nova-api-0" Jan 29 08:53:58 crc kubenswrapper[5017]: I0129 08:53:58.735144 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/642b071c-b157-45af-a981-7adb4df3699d-logs\") pod \"nova-metadata-0\" (UID: \"642b071c-b157-45af-a981-7adb4df3699d\") " pod="openstack/nova-metadata-0" Jan 29 08:53:58 crc kubenswrapper[5017]: I0129 08:53:58.735540 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8346093-03be-4dc7-a1b9-b188c05e14fc-config-data\") pod \"nova-api-0\" (UID: \"e8346093-03be-4dc7-a1b9-b188c05e14fc\") " pod="openstack/nova-api-0" Jan 29 08:53:58 crc kubenswrapper[5017]: I0129 08:53:58.735722 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/642b071c-b157-45af-a981-7adb4df3699d-config-data\") pod \"nova-metadata-0\" (UID: \"642b071c-b157-45af-a981-7adb4df3699d\") " pod="openstack/nova-metadata-0" Jan 29 08:53:58 crc kubenswrapper[5017]: I0129 08:53:58.735895 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/642b071c-b157-45af-a981-7adb4df3699d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"642b071c-b157-45af-a981-7adb4df3699d\") " pod="openstack/nova-metadata-0" Jan 29 08:53:58 crc kubenswrapper[5017]: I0129 08:53:58.837655 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8346093-03be-4dc7-a1b9-b188c05e14fc-config-data\") pod \"nova-api-0\" (UID: \"e8346093-03be-4dc7-a1b9-b188c05e14fc\") " pod="openstack/nova-api-0" Jan 29 08:53:58 crc kubenswrapper[5017]: I0129 08:53:58.837738 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/642b071c-b157-45af-a981-7adb4df3699d-config-data\") pod \"nova-metadata-0\" (UID: \"642b071c-b157-45af-a981-7adb4df3699d\") " pod="openstack/nova-metadata-0" Jan 29 08:53:58 crc kubenswrapper[5017]: I0129 08:53:58.837781 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/642b071c-b157-45af-a981-7adb4df3699d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"642b071c-b157-45af-a981-7adb4df3699d\") " pod="openstack/nova-metadata-0" Jan 29 08:53:58 crc kubenswrapper[5017]: I0129 08:53:58.837833 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e8346093-03be-4dc7-a1b9-b188c05e14fc-logs\") pod \"nova-api-0\" (UID: \"e8346093-03be-4dc7-a1b9-b188c05e14fc\") " pod="openstack/nova-api-0" Jan 29 08:53:58 crc kubenswrapper[5017]: I0129 08:53:58.837869 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfpmh\" (UniqueName: \"kubernetes.io/projected/e8346093-03be-4dc7-a1b9-b188c05e14fc-kube-api-access-kfpmh\") pod \"nova-api-0\" (UID: \"e8346093-03be-4dc7-a1b9-b188c05e14fc\") " pod="openstack/nova-api-0" Jan 29 08:53:58 crc kubenswrapper[5017]: I0129 08:53:58.837919 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzmgp\" (UniqueName: \"kubernetes.io/projected/642b071c-b157-45af-a981-7adb4df3699d-kube-api-access-rzmgp\") pod \"nova-metadata-0\" (UID: \"642b071c-b157-45af-a981-7adb4df3699d\") " pod="openstack/nova-metadata-0" Jan 29 08:53:58 crc kubenswrapper[5017]: I0129 08:53:58.837989 5017 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8346093-03be-4dc7-a1b9-b188c05e14fc-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e8346093-03be-4dc7-a1b9-b188c05e14fc\") " pod="openstack/nova-api-0" Jan 29 08:53:58 crc kubenswrapper[5017]: I0129 08:53:58.838014 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/642b071c-b157-45af-a981-7adb4df3699d-logs\") pod \"nova-metadata-0\" (UID: \"642b071c-b157-45af-a981-7adb4df3699d\") " pod="openstack/nova-metadata-0" Jan 29 08:53:58 crc kubenswrapper[5017]: I0129 08:53:58.838537 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/642b071c-b157-45af-a981-7adb4df3699d-logs\") pod \"nova-metadata-0\" (UID: \"642b071c-b157-45af-a981-7adb4df3699d\") " pod="openstack/nova-metadata-0" Jan 29 08:53:58 crc kubenswrapper[5017]: I0129 08:53:58.838631 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e8346093-03be-4dc7-a1b9-b188c05e14fc-logs\") pod \"nova-api-0\" (UID: \"e8346093-03be-4dc7-a1b9-b188c05e14fc\") " pod="openstack/nova-api-0" Jan 29 08:53:58 crc kubenswrapper[5017]: I0129 08:53:58.842604 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8346093-03be-4dc7-a1b9-b188c05e14fc-config-data\") pod \"nova-api-0\" (UID: \"e8346093-03be-4dc7-a1b9-b188c05e14fc\") " pod="openstack/nova-api-0" Jan 29 08:53:58 crc kubenswrapper[5017]: I0129 08:53:58.846934 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8346093-03be-4dc7-a1b9-b188c05e14fc-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e8346093-03be-4dc7-a1b9-b188c05e14fc\") " pod="openstack/nova-api-0" Jan 29 08:53:58 crc kubenswrapper[5017]: I0129 08:53:58.846993 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/642b071c-b157-45af-a981-7adb4df3699d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"642b071c-b157-45af-a981-7adb4df3699d\") " pod="openstack/nova-metadata-0" Jan 29 08:53:58 crc kubenswrapper[5017]: I0129 08:53:58.847036 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/642b071c-b157-45af-a981-7adb4df3699d-config-data\") pod \"nova-metadata-0\" (UID: \"642b071c-b157-45af-a981-7adb4df3699d\") " pod="openstack/nova-metadata-0" Jan 29 08:53:58 crc kubenswrapper[5017]: I0129 08:53:58.859224 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzmgp\" (UniqueName: \"kubernetes.io/projected/642b071c-b157-45af-a981-7adb4df3699d-kube-api-access-rzmgp\") pod \"nova-metadata-0\" (UID: \"642b071c-b157-45af-a981-7adb4df3699d\") " pod="openstack/nova-metadata-0" Jan 29 08:53:58 crc kubenswrapper[5017]: I0129 08:53:58.861510 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfpmh\" (UniqueName: \"kubernetes.io/projected/e8346093-03be-4dc7-a1b9-b188c05e14fc-kube-api-access-kfpmh\") pod \"nova-api-0\" (UID: \"e8346093-03be-4dc7-a1b9-b188c05e14fc\") " pod="openstack/nova-api-0" Jan 29 08:53:58 crc kubenswrapper[5017]: I0129 08:53:58.941483 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 29 08:53:58 crc kubenswrapper[5017]: I0129 08:53:58.997776 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 08:53:59 crc kubenswrapper[5017]: I0129 08:53:59.498735 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 29 08:53:59 crc kubenswrapper[5017]: E0129 08:53:59.507842 5017 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="77130bd3da419a8add4c59ea5cf4fed12c1bfca8c5dc5f55bc5171c907b91abc" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 29 08:53:59 crc kubenswrapper[5017]: E0129 08:53:59.509650 5017 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="77130bd3da419a8add4c59ea5cf4fed12c1bfca8c5dc5f55bc5171c907b91abc" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 29 08:53:59 crc kubenswrapper[5017]: E0129 08:53:59.511556 5017 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="77130bd3da419a8add4c59ea5cf4fed12c1bfca8c5dc5f55bc5171c907b91abc" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 29 08:53:59 crc kubenswrapper[5017]: E0129 08:53:59.511654 5017 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="98b35242-2511-4cfc-9e84-1ad56cae8e44" containerName="nova-scheduler-scheduler" Jan 29 08:53:59 crc kubenswrapper[5017]: I0129 08:53:59.553294 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e8346093-03be-4dc7-a1b9-b188c05e14fc","Type":"ContainerStarted","Data":"e9b1533656203e285a444e3ab1003053c0395e8ee6ae80795064e54d1419f5d1"} Jan 29 08:53:59 crc kubenswrapper[5017]: I0129 08:53:59.597612 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 08:54:00 crc kubenswrapper[5017]: I0129 08:54:00.144308 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 08:54:00 crc kubenswrapper[5017]: I0129 08:54:00.282766 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98b35242-2511-4cfc-9e84-1ad56cae8e44-config-data\") pod \"98b35242-2511-4cfc-9e84-1ad56cae8e44\" (UID: \"98b35242-2511-4cfc-9e84-1ad56cae8e44\") " Jan 29 08:54:00 crc kubenswrapper[5017]: I0129 08:54:00.283397 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cmd95\" (UniqueName: \"kubernetes.io/projected/98b35242-2511-4cfc-9e84-1ad56cae8e44-kube-api-access-cmd95\") pod \"98b35242-2511-4cfc-9e84-1ad56cae8e44\" (UID: \"98b35242-2511-4cfc-9e84-1ad56cae8e44\") " Jan 29 08:54:00 crc kubenswrapper[5017]: I0129 08:54:00.283491 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98b35242-2511-4cfc-9e84-1ad56cae8e44-combined-ca-bundle\") pod \"98b35242-2511-4cfc-9e84-1ad56cae8e44\" (UID: \"98b35242-2511-4cfc-9e84-1ad56cae8e44\") " Jan 29 08:54:00 crc kubenswrapper[5017]: I0129 08:54:00.289936 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98b35242-2511-4cfc-9e84-1ad56cae8e44-kube-api-access-cmd95" (OuterVolumeSpecName: "kube-api-access-cmd95") pod "98b35242-2511-4cfc-9e84-1ad56cae8e44" (UID: "98b35242-2511-4cfc-9e84-1ad56cae8e44"). InnerVolumeSpecName "kube-api-access-cmd95". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:54:00 crc kubenswrapper[5017]: I0129 08:54:00.316425 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98b35242-2511-4cfc-9e84-1ad56cae8e44-config-data" (OuterVolumeSpecName: "config-data") pod "98b35242-2511-4cfc-9e84-1ad56cae8e44" (UID: "98b35242-2511-4cfc-9e84-1ad56cae8e44"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:54:00 crc kubenswrapper[5017]: I0129 08:54:00.320173 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98b35242-2511-4cfc-9e84-1ad56cae8e44-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "98b35242-2511-4cfc-9e84-1ad56cae8e44" (UID: "98b35242-2511-4cfc-9e84-1ad56cae8e44"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:54:00 crc kubenswrapper[5017]: I0129 08:54:00.330306 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44ee44e9-325e-4aaa-9523-163177d2f47c" path="/var/lib/kubelet/pods/44ee44e9-325e-4aaa-9523-163177d2f47c/volumes" Jan 29 08:54:00 crc kubenswrapper[5017]: I0129 08:54:00.332280 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8fb7390-d44e-4c07-9eec-ee2d0856adc3" path="/var/lib/kubelet/pods/a8fb7390-d44e-4c07-9eec-ee2d0856adc3/volumes" Jan 29 08:54:00 crc kubenswrapper[5017]: I0129 08:54:00.386261 5017 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98b35242-2511-4cfc-9e84-1ad56cae8e44-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 08:54:00 crc kubenswrapper[5017]: I0129 08:54:00.386326 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cmd95\" (UniqueName: \"kubernetes.io/projected/98b35242-2511-4cfc-9e84-1ad56cae8e44-kube-api-access-cmd95\") on node \"crc\" DevicePath \"\"" Jan 29 08:54:00 crc kubenswrapper[5017]: I0129 08:54:00.386343 5017 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98b35242-2511-4cfc-9e84-1ad56cae8e44-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:54:00 crc kubenswrapper[5017]: I0129 08:54:00.566667 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e8346093-03be-4dc7-a1b9-b188c05e14fc","Type":"ContainerStarted","Data":"377ddb8a551178cc3b33ca48d49a953f0d2e13d2e5a0a550cd1ad2b99cce4cee"} Jan 29 08:54:00 crc kubenswrapper[5017]: I0129 08:54:00.566720 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e8346093-03be-4dc7-a1b9-b188c05e14fc","Type":"ContainerStarted","Data":"21a83e482bb99d726295b96ce4c9fb464b963dd5b35afb2692d5c1742b3dc185"} Jan 29 08:54:00 crc kubenswrapper[5017]: I0129 08:54:00.570982 5017 generic.go:334] "Generic (PLEG): container finished" podID="98b35242-2511-4cfc-9e84-1ad56cae8e44" containerID="77130bd3da419a8add4c59ea5cf4fed12c1bfca8c5dc5f55bc5171c907b91abc" exitCode=0 Jan 29 08:54:00 crc kubenswrapper[5017]: I0129 08:54:00.571050 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"98b35242-2511-4cfc-9e84-1ad56cae8e44","Type":"ContainerDied","Data":"77130bd3da419a8add4c59ea5cf4fed12c1bfca8c5dc5f55bc5171c907b91abc"} Jan 29 08:54:00 crc kubenswrapper[5017]: I0129 08:54:00.571088 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"98b35242-2511-4cfc-9e84-1ad56cae8e44","Type":"ContainerDied","Data":"9bb97df468372d172c9bbb51028c97e82b6a1deedd64e731c0126b6a90ee1e89"} Jan 29 08:54:00 crc kubenswrapper[5017]: I0129 08:54:00.571091 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 08:54:00 crc kubenswrapper[5017]: I0129 08:54:00.571108 5017 scope.go:117] "RemoveContainer" containerID="77130bd3da419a8add4c59ea5cf4fed12c1bfca8c5dc5f55bc5171c907b91abc" Jan 29 08:54:00 crc kubenswrapper[5017]: I0129 08:54:00.578116 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"642b071c-b157-45af-a981-7adb4df3699d","Type":"ContainerStarted","Data":"43bdf42b240e52c38f99fcea553c7aaf143715f263c717a9063c2bafffc1ff64"} Jan 29 08:54:00 crc kubenswrapper[5017]: I0129 08:54:00.578179 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"642b071c-b157-45af-a981-7adb4df3699d","Type":"ContainerStarted","Data":"74d47f31643751d14c1ce806e36c29c84fdac818b2645e5e43fd48a57cebdf3d"} Jan 29 08:54:00 crc kubenswrapper[5017]: I0129 08:54:00.578193 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"642b071c-b157-45af-a981-7adb4df3699d","Type":"ContainerStarted","Data":"82ce3cbdecef6a0442f0254d4ed010f3ece82728b03d1242602420fe75e6f67a"} Jan 29 08:54:00 crc kubenswrapper[5017]: I0129 08:54:00.595890 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.595863495 podStartE2EDuration="2.595863495s" podCreationTimestamp="2026-01-29 08:53:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:54:00.593095897 +0000 UTC m=+8326.967543517" watchObservedRunningTime="2026-01-29 08:54:00.595863495 +0000 UTC m=+8326.970311105" Jan 29 08:54:00 crc kubenswrapper[5017]: I0129 08:54:00.634562 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.63453328 podStartE2EDuration="2.63453328s" podCreationTimestamp="2026-01-29 08:53:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:54:00.624161047 +0000 UTC m=+8326.998608657" watchObservedRunningTime="2026-01-29 08:54:00.63453328 +0000 UTC m=+8327.008980890" Jan 29 08:54:00 crc kubenswrapper[5017]: I0129 08:54:00.654507 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 08:54:00 crc kubenswrapper[5017]: I0129 08:54:00.657836 5017 scope.go:117] "RemoveContainer" containerID="77130bd3da419a8add4c59ea5cf4fed12c1bfca8c5dc5f55bc5171c907b91abc" Jan 29 08:54:00 crc kubenswrapper[5017]: E0129 08:54:00.658369 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77130bd3da419a8add4c59ea5cf4fed12c1bfca8c5dc5f55bc5171c907b91abc\": container with ID starting with 77130bd3da419a8add4c59ea5cf4fed12c1bfca8c5dc5f55bc5171c907b91abc not found: ID does not exist" containerID="77130bd3da419a8add4c59ea5cf4fed12c1bfca8c5dc5f55bc5171c907b91abc" Jan 29 08:54:00 crc kubenswrapper[5017]: I0129 08:54:00.658450 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77130bd3da419a8add4c59ea5cf4fed12c1bfca8c5dc5f55bc5171c907b91abc"} err="failed to get container status \"77130bd3da419a8add4c59ea5cf4fed12c1bfca8c5dc5f55bc5171c907b91abc\": rpc error: code = NotFound desc = could not find container \"77130bd3da419a8add4c59ea5cf4fed12c1bfca8c5dc5f55bc5171c907b91abc\": container with ID starting with 
77130bd3da419a8add4c59ea5cf4fed12c1bfca8c5dc5f55bc5171c907b91abc not found: ID does not exist" Jan 29 08:54:00 crc kubenswrapper[5017]: I0129 08:54:00.666821 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 08:54:00 crc kubenswrapper[5017]: I0129 08:54:00.676862 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 08:54:00 crc kubenswrapper[5017]: E0129 08:54:00.677455 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98b35242-2511-4cfc-9e84-1ad56cae8e44" containerName="nova-scheduler-scheduler" Jan 29 08:54:00 crc kubenswrapper[5017]: I0129 08:54:00.677475 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="98b35242-2511-4cfc-9e84-1ad56cae8e44" containerName="nova-scheduler-scheduler" Jan 29 08:54:00 crc kubenswrapper[5017]: I0129 08:54:00.677683 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="98b35242-2511-4cfc-9e84-1ad56cae8e44" containerName="nova-scheduler-scheduler" Jan 29 08:54:00 crc kubenswrapper[5017]: I0129 08:54:00.678571 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 08:54:00 crc kubenswrapper[5017]: I0129 08:54:00.685220 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 29 08:54:00 crc kubenswrapper[5017]: I0129 08:54:00.687412 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 08:54:00 crc kubenswrapper[5017]: I0129 08:54:00.802453 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2566d072-f480-4b77-a28f-1f91ec555597-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"2566d072-f480-4b77-a28f-1f91ec555597\") " pod="openstack/nova-scheduler-0" Jan 29 08:54:00 crc kubenswrapper[5017]: I0129 08:54:00.802907 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqt7p\" (UniqueName: \"kubernetes.io/projected/2566d072-f480-4b77-a28f-1f91ec555597-kube-api-access-qqt7p\") pod \"nova-scheduler-0\" (UID: \"2566d072-f480-4b77-a28f-1f91ec555597\") " pod="openstack/nova-scheduler-0" Jan 29 08:54:00 crc kubenswrapper[5017]: I0129 08:54:00.803201 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2566d072-f480-4b77-a28f-1f91ec555597-config-data\") pod \"nova-scheduler-0\" (UID: \"2566d072-f480-4b77-a28f-1f91ec555597\") " pod="openstack/nova-scheduler-0" Jan 29 08:54:00 crc kubenswrapper[5017]: I0129 08:54:00.905781 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2566d072-f480-4b77-a28f-1f91ec555597-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"2566d072-f480-4b77-a28f-1f91ec555597\") " pod="openstack/nova-scheduler-0" Jan 29 08:54:00 crc kubenswrapper[5017]: I0129 08:54:00.906559 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqt7p\" (UniqueName: \"kubernetes.io/projected/2566d072-f480-4b77-a28f-1f91ec555597-kube-api-access-qqt7p\") pod \"nova-scheduler-0\" (UID: \"2566d072-f480-4b77-a28f-1f91ec555597\") " pod="openstack/nova-scheduler-0" Jan 29 08:54:00 crc kubenswrapper[5017]: I0129 08:54:00.906750 5017 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2566d072-f480-4b77-a28f-1f91ec555597-config-data\") pod \"nova-scheduler-0\" (UID: \"2566d072-f480-4b77-a28f-1f91ec555597\") " pod="openstack/nova-scheduler-0" Jan 29 08:54:00 crc kubenswrapper[5017]: I0129 08:54:00.911917 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2566d072-f480-4b77-a28f-1f91ec555597-config-data\") pod \"nova-scheduler-0\" (UID: \"2566d072-f480-4b77-a28f-1f91ec555597\") " pod="openstack/nova-scheduler-0" Jan 29 08:54:00 crc kubenswrapper[5017]: I0129 08:54:00.911974 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2566d072-f480-4b77-a28f-1f91ec555597-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"2566d072-f480-4b77-a28f-1f91ec555597\") " pod="openstack/nova-scheduler-0" Jan 29 08:54:00 crc kubenswrapper[5017]: I0129 08:54:00.929867 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqt7p\" (UniqueName: \"kubernetes.io/projected/2566d072-f480-4b77-a28f-1f91ec555597-kube-api-access-qqt7p\") pod \"nova-scheduler-0\" (UID: \"2566d072-f480-4b77-a28f-1f91ec555597\") " pod="openstack/nova-scheduler-0" Jan 29 08:54:01 crc kubenswrapper[5017]: I0129 08:54:01.003322 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 08:54:01 crc kubenswrapper[5017]: I0129 08:54:01.474924 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 08:54:01 crc kubenswrapper[5017]: W0129 08:54:01.476502 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2566d072_f480_4b77_a28f_1f91ec555597.slice/crio-85baeec51c812e044d9c2df10fd6b626fb36832651cc64552b0e94e244cd5ea4 WatchSource:0}: Error finding container 85baeec51c812e044d9c2df10fd6b626fb36832651cc64552b0e94e244cd5ea4: Status 404 returned error can't find the container with id 85baeec51c812e044d9c2df10fd6b626fb36832651cc64552b0e94e244cd5ea4 Jan 29 08:54:01 crc kubenswrapper[5017]: I0129 08:54:01.590036 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2566d072-f480-4b77-a28f-1f91ec555597","Type":"ContainerStarted","Data":"85baeec51c812e044d9c2df10fd6b626fb36832651cc64552b0e94e244cd5ea4"} Jan 29 08:54:02 crc kubenswrapper[5017]: I0129 08:54:02.335463 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98b35242-2511-4cfc-9e84-1ad56cae8e44" path="/var/lib/kubelet/pods/98b35242-2511-4cfc-9e84-1ad56cae8e44/volumes" Jan 29 08:54:02 crc kubenswrapper[5017]: I0129 08:54:02.608423 5017 generic.go:334] "Generic (PLEG): container finished" podID="be5ab308-5352-4f70-8c87-7dece924618f" containerID="8b588cbb0334ba81e6fe0e0e4c78263bb8221274b52433a5ba0535e999597f28" exitCode=0 Jan 29 08:54:02 crc kubenswrapper[5017]: I0129 08:54:02.608520 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"be5ab308-5352-4f70-8c87-7dece924618f","Type":"ContainerDied","Data":"8b588cbb0334ba81e6fe0e0e4c78263bb8221274b52433a5ba0535e999597f28"} Jan 29 08:54:02 crc kubenswrapper[5017]: I0129 08:54:02.608595 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" 
event={"ID":"be5ab308-5352-4f70-8c87-7dece924618f","Type":"ContainerDied","Data":"22c879e8e0eb4dd72afe1dc07bf62ccecaf8f6f3879e0aa765587004dd68e337"} Jan 29 08:54:02 crc kubenswrapper[5017]: I0129 08:54:02.608614 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="22c879e8e0eb4dd72afe1dc07bf62ccecaf8f6f3879e0aa765587004dd68e337" Jan 29 08:54:02 crc kubenswrapper[5017]: I0129 08:54:02.612970 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2566d072-f480-4b77-a28f-1f91ec555597","Type":"ContainerStarted","Data":"23bbd92d55ed6c53e339edc69f1f84b558562b309fea352c50c44149ec5646ae"} Jan 29 08:54:02 crc kubenswrapper[5017]: I0129 08:54:02.651419 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.651397439 podStartE2EDuration="2.651397439s" podCreationTimestamp="2026-01-29 08:54:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:54:02.642945172 +0000 UTC m=+8329.017392812" watchObservedRunningTime="2026-01-29 08:54:02.651397439 +0000 UTC m=+8329.025845049" Jan 29 08:54:02 crc kubenswrapper[5017]: I0129 08:54:02.698177 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 29 08:54:02 crc kubenswrapper[5017]: I0129 08:54:02.863912 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be5ab308-5352-4f70-8c87-7dece924618f-combined-ca-bundle\") pod \"be5ab308-5352-4f70-8c87-7dece924618f\" (UID: \"be5ab308-5352-4f70-8c87-7dece924618f\") " Jan 29 08:54:02 crc kubenswrapper[5017]: I0129 08:54:02.864625 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8vv9h\" (UniqueName: \"kubernetes.io/projected/be5ab308-5352-4f70-8c87-7dece924618f-kube-api-access-8vv9h\") pod \"be5ab308-5352-4f70-8c87-7dece924618f\" (UID: \"be5ab308-5352-4f70-8c87-7dece924618f\") " Jan 29 08:54:02 crc kubenswrapper[5017]: I0129 08:54:02.864784 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be5ab308-5352-4f70-8c87-7dece924618f-config-data\") pod \"be5ab308-5352-4f70-8c87-7dece924618f\" (UID: \"be5ab308-5352-4f70-8c87-7dece924618f\") " Jan 29 08:54:02 crc kubenswrapper[5017]: I0129 08:54:02.873010 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be5ab308-5352-4f70-8c87-7dece924618f-kube-api-access-8vv9h" (OuterVolumeSpecName: "kube-api-access-8vv9h") pod "be5ab308-5352-4f70-8c87-7dece924618f" (UID: "be5ab308-5352-4f70-8c87-7dece924618f"). InnerVolumeSpecName "kube-api-access-8vv9h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:54:02 crc kubenswrapper[5017]: I0129 08:54:02.903310 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be5ab308-5352-4f70-8c87-7dece924618f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "be5ab308-5352-4f70-8c87-7dece924618f" (UID: "be5ab308-5352-4f70-8c87-7dece924618f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:54:02 crc kubenswrapper[5017]: I0129 08:54:02.912657 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be5ab308-5352-4f70-8c87-7dece924618f-config-data" (OuterVolumeSpecName: "config-data") pod "be5ab308-5352-4f70-8c87-7dece924618f" (UID: "be5ab308-5352-4f70-8c87-7dece924618f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:54:02 crc kubenswrapper[5017]: I0129 08:54:02.968362 5017 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be5ab308-5352-4f70-8c87-7dece924618f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:54:02 crc kubenswrapper[5017]: I0129 08:54:02.968409 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8vv9h\" (UniqueName: \"kubernetes.io/projected/be5ab308-5352-4f70-8c87-7dece924618f-kube-api-access-8vv9h\") on node \"crc\" DevicePath \"\"" Jan 29 08:54:02 crc kubenswrapper[5017]: I0129 08:54:02.968422 5017 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be5ab308-5352-4f70-8c87-7dece924618f-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 08:54:03 crc kubenswrapper[5017]: I0129 08:54:03.635650 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 29 08:54:03 crc kubenswrapper[5017]: I0129 08:54:03.671798 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 29 08:54:03 crc kubenswrapper[5017]: I0129 08:54:03.687302 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 29 08:54:03 crc kubenswrapper[5017]: I0129 08:54:03.697630 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 29 08:54:03 crc kubenswrapper[5017]: E0129 08:54:03.698371 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be5ab308-5352-4f70-8c87-7dece924618f" containerName="nova-cell0-conductor-conductor" Jan 29 08:54:03 crc kubenswrapper[5017]: I0129 08:54:03.698406 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="be5ab308-5352-4f70-8c87-7dece924618f" containerName="nova-cell0-conductor-conductor" Jan 29 08:54:03 crc kubenswrapper[5017]: I0129 08:54:03.698716 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="be5ab308-5352-4f70-8c87-7dece924618f" containerName="nova-cell0-conductor-conductor" Jan 29 08:54:03 crc kubenswrapper[5017]: I0129 08:54:03.699625 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 29 08:54:03 crc kubenswrapper[5017]: I0129 08:54:03.703745 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 29 08:54:03 crc kubenswrapper[5017]: I0129 08:54:03.712094 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 29 08:54:03 crc kubenswrapper[5017]: I0129 08:54:03.787865 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65e1ea44-6179-4454-8176-911d94bfdb6a-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"65e1ea44-6179-4454-8176-911d94bfdb6a\") " pod="openstack/nova-cell0-conductor-0" Jan 29 08:54:03 crc kubenswrapper[5017]: I0129 08:54:03.788023 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dt7bk\" (UniqueName: \"kubernetes.io/projected/65e1ea44-6179-4454-8176-911d94bfdb6a-kube-api-access-dt7bk\") pod \"nova-cell0-conductor-0\" (UID: \"65e1ea44-6179-4454-8176-911d94bfdb6a\") " pod="openstack/nova-cell0-conductor-0" Jan 29 08:54:03 crc kubenswrapper[5017]: I0129 08:54:03.788065 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65e1ea44-6179-4454-8176-911d94bfdb6a-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"65e1ea44-6179-4454-8176-911d94bfdb6a\") " pod="openstack/nova-cell0-conductor-0" Jan 29 08:54:03 crc kubenswrapper[5017]: I0129 08:54:03.894302 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65e1ea44-6179-4454-8176-911d94bfdb6a-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"65e1ea44-6179-4454-8176-911d94bfdb6a\") " pod="openstack/nova-cell0-conductor-0" Jan 29 08:54:03 crc kubenswrapper[5017]: I0129 08:54:03.895026 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65e1ea44-6179-4454-8176-911d94bfdb6a-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"65e1ea44-6179-4454-8176-911d94bfdb6a\") " pod="openstack/nova-cell0-conductor-0" Jan 29 08:54:03 crc kubenswrapper[5017]: I0129 08:54:03.895182 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dt7bk\" (UniqueName: \"kubernetes.io/projected/65e1ea44-6179-4454-8176-911d94bfdb6a-kube-api-access-dt7bk\") pod \"nova-cell0-conductor-0\" (UID: \"65e1ea44-6179-4454-8176-911d94bfdb6a\") " pod="openstack/nova-cell0-conductor-0" Jan 29 08:54:03 crc kubenswrapper[5017]: I0129 08:54:03.903947 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65e1ea44-6179-4454-8176-911d94bfdb6a-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"65e1ea44-6179-4454-8176-911d94bfdb6a\") " pod="openstack/nova-cell0-conductor-0" Jan 29 08:54:03 crc kubenswrapper[5017]: I0129 08:54:03.903973 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65e1ea44-6179-4454-8176-911d94bfdb6a-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"65e1ea44-6179-4454-8176-911d94bfdb6a\") " pod="openstack/nova-cell0-conductor-0" Jan 29 08:54:03 crc kubenswrapper[5017]: I0129 08:54:03.915768 5017 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dt7bk\" (UniqueName: \"kubernetes.io/projected/65e1ea44-6179-4454-8176-911d94bfdb6a-kube-api-access-dt7bk\") pod \"nova-cell0-conductor-0\" (UID: \"65e1ea44-6179-4454-8176-911d94bfdb6a\") " pod="openstack/nova-cell0-conductor-0" Jan 29 08:54:03 crc kubenswrapper[5017]: I0129 08:54:03.998827 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 29 08:54:03 crc kubenswrapper[5017]: I0129 08:54:03.999000 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 29 08:54:04 crc kubenswrapper[5017]: I0129 08:54:04.025811 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 29 08:54:04 crc kubenswrapper[5017]: I0129 08:54:04.329037 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be5ab308-5352-4f70-8c87-7dece924618f" path="/var/lib/kubelet/pods/be5ab308-5352-4f70-8c87-7dece924618f/volumes" Jan 29 08:54:04 crc kubenswrapper[5017]: I0129 08:54:04.515943 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 29 08:54:04 crc kubenswrapper[5017]: W0129 08:54:04.522121 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod65e1ea44_6179_4454_8176_911d94bfdb6a.slice/crio-9879b3ef473dce4b14ebd4f87953882391a92614ea6e0d75952b1dd668a0be4f WatchSource:0}: Error finding container 9879b3ef473dce4b14ebd4f87953882391a92614ea6e0d75952b1dd668a0be4f: Status 404 returned error can't find the container with id 9879b3ef473dce4b14ebd4f87953882391a92614ea6e0d75952b1dd668a0be4f Jan 29 08:54:04 crc kubenswrapper[5017]: I0129 08:54:04.665365 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"65e1ea44-6179-4454-8176-911d94bfdb6a","Type":"ContainerStarted","Data":"9879b3ef473dce4b14ebd4f87953882391a92614ea6e0d75952b1dd668a0be4f"} Jan 29 08:54:04 crc kubenswrapper[5017]: I0129 08:54:04.877707 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 29 08:54:05 crc kubenswrapper[5017]: I0129 08:54:05.679996 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"65e1ea44-6179-4454-8176-911d94bfdb6a","Type":"ContainerStarted","Data":"6a4c5af3490494615f7d5e6dc092b8f3057f73654ce87392981189489c3764a2"} Jan 29 08:54:05 crc kubenswrapper[5017]: I0129 08:54:05.681291 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 29 08:54:05 crc kubenswrapper[5017]: I0129 08:54:05.701788 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.7017672900000003 podStartE2EDuration="2.70176729s" podCreationTimestamp="2026-01-29 08:54:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:54:05.700595882 +0000 UTC m=+8332.075043492" watchObservedRunningTime="2026-01-29 08:54:05.70176729 +0000 UTC m=+8332.076214900" Jan 29 08:54:06 crc kubenswrapper[5017]: I0129 08:54:06.004271 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 29 08:54:08 crc kubenswrapper[5017]: I0129 08:54:08.941753 5017 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 29 08:54:08 crc kubenswrapper[5017]: I0129 08:54:08.942526 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 29 08:54:08 crc kubenswrapper[5017]: I0129 08:54:08.998060 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 29 08:54:08 crc kubenswrapper[5017]: I0129 08:54:08.998108 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 29 08:54:09 crc kubenswrapper[5017]: I0129 08:54:09.057410 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 29 08:54:10 crc kubenswrapper[5017]: I0129 08:54:10.024334 5017 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e8346093-03be-4dc7-a1b9-b188c05e14fc" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.185:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 08:54:10 crc kubenswrapper[5017]: I0129 08:54:10.024421 5017 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e8346093-03be-4dc7-a1b9-b188c05e14fc" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.185:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 08:54:10 crc kubenswrapper[5017]: I0129 08:54:10.106313 5017 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="642b071c-b157-45af-a981-7adb4df3699d" containerName="nova-metadata-log" probeResult="failure" output="Get \"http://10.217.1.186:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 08:54:10 crc kubenswrapper[5017]: I0129 08:54:10.106765 5017 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="642b071c-b157-45af-a981-7adb4df3699d" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"http://10.217.1.186:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 08:54:11 crc kubenswrapper[5017]: I0129 08:54:11.005033 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 29 08:54:11 crc kubenswrapper[5017]: I0129 08:54:11.047050 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 29 08:54:11 crc kubenswrapper[5017]: I0129 08:54:11.773332 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 29 08:54:18 crc kubenswrapper[5017]: I0129 08:54:18.946084 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 29 08:54:18 crc kubenswrapper[5017]: I0129 08:54:18.947477 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 29 08:54:18 crc kubenswrapper[5017]: I0129 08:54:18.950466 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 29 08:54:18 crc kubenswrapper[5017]: I0129 08:54:18.953575 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 29 08:54:19 crc kubenswrapper[5017]: I0129 08:54:19.006251 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/nova-metadata-0" Jan 29 08:54:19 crc kubenswrapper[5017]: I0129 08:54:19.009226 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 29 08:54:19 crc kubenswrapper[5017]: I0129 08:54:19.011015 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 29 08:54:19 crc kubenswrapper[5017]: I0129 08:54:19.819726 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 29 08:54:19 crc kubenswrapper[5017]: I0129 08:54:19.823775 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 29 08:54:19 crc kubenswrapper[5017]: I0129 08:54:19.823869 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 29 08:54:37 crc kubenswrapper[5017]: I0129 08:54:37.931649 5017 scope.go:117] "RemoveContainer" containerID="8b588cbb0334ba81e6fe0e0e4c78263bb8221274b52433a5ba0535e999597f28" Jan 29 08:55:26 crc kubenswrapper[5017]: I0129 08:55:26.538878 5017 patch_prober.go:28] interesting pod/machine-config-daemon-895pl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 08:55:26 crc kubenswrapper[5017]: I0129 08:55:26.539891 5017 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 08:55:56 crc kubenswrapper[5017]: I0129 08:55:56.539072 5017 patch_prober.go:28] interesting pod/machine-config-daemon-895pl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 08:55:56 crc kubenswrapper[5017]: I0129 08:55:56.540105 5017 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 08:56:26 crc kubenswrapper[5017]: I0129 08:56:26.540129 5017 patch_prober.go:28] interesting pod/machine-config-daemon-895pl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 08:56:26 crc kubenswrapper[5017]: I0129 08:56:26.541084 5017 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 08:56:26 crc kubenswrapper[5017]: I0129 08:56:26.541151 5017 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-895pl" Jan 29 08:56:26 crc kubenswrapper[5017]: I0129 08:56:26.542313 
5017 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"13928f2812286efc277648be35e4bcee00e37c7735d5bc258ca6261e12d1b284"} pod="openshift-machine-config-operator/machine-config-daemon-895pl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 08:56:26 crc kubenswrapper[5017]: I0129 08:56:26.542388 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" containerID="cri-o://13928f2812286efc277648be35e4bcee00e37c7735d5bc258ca6261e12d1b284" gracePeriod=600 Jan 29 08:56:26 crc kubenswrapper[5017]: E0129 08:56:26.691442 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:56:27 crc kubenswrapper[5017]: I0129 08:56:27.232924 5017 generic.go:334] "Generic (PLEG): container finished" podID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerID="13928f2812286efc277648be35e4bcee00e37c7735d5bc258ca6261e12d1b284" exitCode=0 Jan 29 08:56:27 crc kubenswrapper[5017]: I0129 08:56:27.232993 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-895pl" event={"ID":"2672ef63-7861-4c3d-a1b4-03cc9d18f8e2","Type":"ContainerDied","Data":"13928f2812286efc277648be35e4bcee00e37c7735d5bc258ca6261e12d1b284"} Jan 29 08:56:27 crc kubenswrapper[5017]: I0129 08:56:27.233036 5017 scope.go:117] "RemoveContainer" containerID="e3a246f65d4f8a1699917ba84874081742480fb79e7847e1046e7281f011e7ed" Jan 29 08:56:27 crc kubenswrapper[5017]: I0129 08:56:27.233844 5017 scope.go:117] "RemoveContainer" containerID="13928f2812286efc277648be35e4bcee00e37c7735d5bc258ca6261e12d1b284" Jan 29 08:56:27 crc kubenswrapper[5017]: E0129 08:56:27.234145 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:56:32 crc kubenswrapper[5017]: I0129 08:56:32.294989 5017 generic.go:334] "Generic (PLEG): container finished" podID="188aa09e-22df-4d5c-a969-8eebbf23c644" containerID="05d08edea45537ef0124622ba0565d15e32581e8c456c8eb39c537fc15317ffe" exitCode=0 Jan 29 08:56:32 crc kubenswrapper[5017]: I0129 08:56:32.295096 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellnm5sr" event={"ID":"188aa09e-22df-4d5c-a969-8eebbf23c644","Type":"ContainerDied","Data":"05d08edea45537ef0124622ba0565d15e32581e8c456c8eb39c537fc15317ffe"} Jan 29 08:56:33 crc kubenswrapper[5017]: I0129 08:56:33.832300 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellnm5sr" Jan 29 08:56:33 crc kubenswrapper[5017]: I0129 08:56:33.874836 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/188aa09e-22df-4d5c-a969-8eebbf23c644-ceph\") pod \"188aa09e-22df-4d5c-a969-8eebbf23c644\" (UID: \"188aa09e-22df-4d5c-a969-8eebbf23c644\") " Jan 29 08:56:33 crc kubenswrapper[5017]: I0129 08:56:33.874982 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tl4d8\" (UniqueName: \"kubernetes.io/projected/188aa09e-22df-4d5c-a969-8eebbf23c644-kube-api-access-tl4d8\") pod \"188aa09e-22df-4d5c-a969-8eebbf23c644\" (UID: \"188aa09e-22df-4d5c-a969-8eebbf23c644\") " Jan 29 08:56:33 crc kubenswrapper[5017]: I0129 08:56:33.875020 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cells-global-config-1\" (UniqueName: \"kubernetes.io/configmap/188aa09e-22df-4d5c-a969-8eebbf23c644-nova-cells-global-config-1\") pod \"188aa09e-22df-4d5c-a969-8eebbf23c644\" (UID: \"188aa09e-22df-4d5c-a969-8eebbf23c644\") " Jan 29 08:56:33 crc kubenswrapper[5017]: I0129 08:56:33.875080 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/188aa09e-22df-4d5c-a969-8eebbf23c644-nova-cell1-combined-ca-bundle\") pod \"188aa09e-22df-4d5c-a969-8eebbf23c644\" (UID: \"188aa09e-22df-4d5c-a969-8eebbf23c644\") " Jan 29 08:56:33 crc kubenswrapper[5017]: I0129 08:56:33.875199 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/188aa09e-22df-4d5c-a969-8eebbf23c644-nova-cell1-compute-config-1\") pod \"188aa09e-22df-4d5c-a969-8eebbf23c644\" (UID: \"188aa09e-22df-4d5c-a969-8eebbf23c644\") " Jan 29 08:56:33 crc kubenswrapper[5017]: I0129 08:56:33.875256 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/188aa09e-22df-4d5c-a969-8eebbf23c644-inventory\") pod \"188aa09e-22df-4d5c-a969-8eebbf23c644\" (UID: \"188aa09e-22df-4d5c-a969-8eebbf23c644\") " Jan 29 08:56:33 crc kubenswrapper[5017]: I0129 08:56:33.875403 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/188aa09e-22df-4d5c-a969-8eebbf23c644-nova-migration-ssh-key-1\") pod \"188aa09e-22df-4d5c-a969-8eebbf23c644\" (UID: \"188aa09e-22df-4d5c-a969-8eebbf23c644\") " Jan 29 08:56:33 crc kubenswrapper[5017]: I0129 08:56:33.875428 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/188aa09e-22df-4d5c-a969-8eebbf23c644-nova-migration-ssh-key-0\") pod \"188aa09e-22df-4d5c-a969-8eebbf23c644\" (UID: \"188aa09e-22df-4d5c-a969-8eebbf23c644\") " Jan 29 08:56:33 crc kubenswrapper[5017]: I0129 08:56:33.875493 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/188aa09e-22df-4d5c-a969-8eebbf23c644-nova-cell1-compute-config-0\") pod \"188aa09e-22df-4d5c-a969-8eebbf23c644\" (UID: \"188aa09e-22df-4d5c-a969-8eebbf23c644\") " Jan 29 08:56:33 crc kubenswrapper[5017]: I0129 08:56:33.875544 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/188aa09e-22df-4d5c-a969-8eebbf23c644-nova-cells-global-config-0\") pod \"188aa09e-22df-4d5c-a969-8eebbf23c644\" (UID: \"188aa09e-22df-4d5c-a969-8eebbf23c644\") " Jan 29 08:56:33 crc kubenswrapper[5017]: I0129 08:56:33.875619 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/188aa09e-22df-4d5c-a969-8eebbf23c644-ssh-key-openstack-cell1\") pod \"188aa09e-22df-4d5c-a969-8eebbf23c644\" (UID: \"188aa09e-22df-4d5c-a969-8eebbf23c644\") " Jan 29 08:56:33 crc kubenswrapper[5017]: I0129 08:56:33.889245 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/188aa09e-22df-4d5c-a969-8eebbf23c644-nova-cell1-combined-ca-bundle" (OuterVolumeSpecName: "nova-cell1-combined-ca-bundle") pod "188aa09e-22df-4d5c-a969-8eebbf23c644" (UID: "188aa09e-22df-4d5c-a969-8eebbf23c644"). InnerVolumeSpecName "nova-cell1-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:56:33 crc kubenswrapper[5017]: I0129 08:56:33.899276 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/188aa09e-22df-4d5c-a969-8eebbf23c644-kube-api-access-tl4d8" (OuterVolumeSpecName: "kube-api-access-tl4d8") pod "188aa09e-22df-4d5c-a969-8eebbf23c644" (UID: "188aa09e-22df-4d5c-a969-8eebbf23c644"). InnerVolumeSpecName "kube-api-access-tl4d8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:56:33 crc kubenswrapper[5017]: I0129 08:56:33.904858 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/188aa09e-22df-4d5c-a969-8eebbf23c644-ceph" (OuterVolumeSpecName: "ceph") pod "188aa09e-22df-4d5c-a969-8eebbf23c644" (UID: "188aa09e-22df-4d5c-a969-8eebbf23c644"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:56:33 crc kubenswrapper[5017]: I0129 08:56:33.926688 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/188aa09e-22df-4d5c-a969-8eebbf23c644-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "188aa09e-22df-4d5c-a969-8eebbf23c644" (UID: "188aa09e-22df-4d5c-a969-8eebbf23c644"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:56:33 crc kubenswrapper[5017]: I0129 08:56:33.932372 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/188aa09e-22df-4d5c-a969-8eebbf23c644-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "188aa09e-22df-4d5c-a969-8eebbf23c644" (UID: "188aa09e-22df-4d5c-a969-8eebbf23c644"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:56:33 crc kubenswrapper[5017]: I0129 08:56:33.933254 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/188aa09e-22df-4d5c-a969-8eebbf23c644-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "188aa09e-22df-4d5c-a969-8eebbf23c644" (UID: "188aa09e-22df-4d5c-a969-8eebbf23c644"). InnerVolumeSpecName "ssh-key-openstack-cell1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:56:33 crc kubenswrapper[5017]: I0129 08:56:33.943022 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/188aa09e-22df-4d5c-a969-8eebbf23c644-nova-cells-global-config-0" (OuterVolumeSpecName: "nova-cells-global-config-0") pod "188aa09e-22df-4d5c-a969-8eebbf23c644" (UID: "188aa09e-22df-4d5c-a969-8eebbf23c644"). InnerVolumeSpecName "nova-cells-global-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:56:33 crc kubenswrapper[5017]: I0129 08:56:33.943450 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/188aa09e-22df-4d5c-a969-8eebbf23c644-inventory" (OuterVolumeSpecName: "inventory") pod "188aa09e-22df-4d5c-a969-8eebbf23c644" (UID: "188aa09e-22df-4d5c-a969-8eebbf23c644"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:56:33 crc kubenswrapper[5017]: I0129 08:56:33.943936 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/188aa09e-22df-4d5c-a969-8eebbf23c644-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "188aa09e-22df-4d5c-a969-8eebbf23c644" (UID: "188aa09e-22df-4d5c-a969-8eebbf23c644"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:56:33 crc kubenswrapper[5017]: I0129 08:56:33.948698 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/188aa09e-22df-4d5c-a969-8eebbf23c644-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "188aa09e-22df-4d5c-a969-8eebbf23c644" (UID: "188aa09e-22df-4d5c-a969-8eebbf23c644"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:56:33 crc kubenswrapper[5017]: I0129 08:56:33.950368 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/188aa09e-22df-4d5c-a969-8eebbf23c644-nova-cells-global-config-1" (OuterVolumeSpecName: "nova-cells-global-config-1") pod "188aa09e-22df-4d5c-a969-8eebbf23c644" (UID: "188aa09e-22df-4d5c-a969-8eebbf23c644"). InnerVolumeSpecName "nova-cells-global-config-1". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:56:33 crc kubenswrapper[5017]: I0129 08:56:33.978465 5017 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/188aa09e-22df-4d5c-a969-8eebbf23c644-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Jan 29 08:56:33 crc kubenswrapper[5017]: I0129 08:56:33.978519 5017 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/188aa09e-22df-4d5c-a969-8eebbf23c644-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Jan 29 08:56:33 crc kubenswrapper[5017]: I0129 08:56:33.978530 5017 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/188aa09e-22df-4d5c-a969-8eebbf23c644-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Jan 29 08:56:33 crc kubenswrapper[5017]: I0129 08:56:33.978545 5017 reconciler_common.go:293] "Volume detached for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/188aa09e-22df-4d5c-a969-8eebbf23c644-nova-cells-global-config-0\") on node \"crc\" DevicePath \"\"" Jan 29 08:56:33 crc kubenswrapper[5017]: I0129 08:56:33.978559 5017 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/188aa09e-22df-4d5c-a969-8eebbf23c644-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 29 08:56:33 crc kubenswrapper[5017]: I0129 08:56:33.978573 5017 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/188aa09e-22df-4d5c-a969-8eebbf23c644-ceph\") on node \"crc\" DevicePath \"\"" Jan 29 08:56:33 crc kubenswrapper[5017]: I0129 08:56:33.978586 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tl4d8\" (UniqueName: \"kubernetes.io/projected/188aa09e-22df-4d5c-a969-8eebbf23c644-kube-api-access-tl4d8\") on node \"crc\" DevicePath \"\"" Jan 29 08:56:33 crc kubenswrapper[5017]: I0129 08:56:33.978598 5017 reconciler_common.go:293] "Volume detached for volume \"nova-cells-global-config-1\" (UniqueName: \"kubernetes.io/configmap/188aa09e-22df-4d5c-a969-8eebbf23c644-nova-cells-global-config-1\") on node \"crc\" DevicePath \"\"" Jan 29 08:56:33 crc kubenswrapper[5017]: I0129 08:56:33.978609 5017 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/188aa09e-22df-4d5c-a969-8eebbf23c644-nova-cell1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:56:33 crc kubenswrapper[5017]: I0129 08:56:33.978619 5017 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/188aa09e-22df-4d5c-a969-8eebbf23c644-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Jan 29 08:56:33 crc kubenswrapper[5017]: I0129 08:56:33.978628 5017 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/188aa09e-22df-4d5c-a969-8eebbf23c644-inventory\") on node \"crc\" DevicePath \"\"" Jan 29 08:56:34 crc kubenswrapper[5017]: I0129 08:56:34.322947 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellnm5sr" Jan 29 08:56:34 crc kubenswrapper[5017]: I0129 08:56:34.327774 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellnm5sr" event={"ID":"188aa09e-22df-4d5c-a969-8eebbf23c644","Type":"ContainerDied","Data":"4eba1cedce926de44b960b88a9a5c0c02d31708013cd82512f517a206f81c316"} Jan 29 08:56:34 crc kubenswrapper[5017]: I0129 08:56:34.327929 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4eba1cedce926de44b960b88a9a5c0c02d31708013cd82512f517a206f81c316" Jan 29 08:56:39 crc kubenswrapper[5017]: I0129 08:56:39.317026 5017 scope.go:117] "RemoveContainer" containerID="13928f2812286efc277648be35e4bcee00e37c7735d5bc258ca6261e12d1b284" Jan 29 08:56:39 crc kubenswrapper[5017]: E0129 08:56:39.317934 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:56:52 crc kubenswrapper[5017]: I0129 08:56:52.320780 5017 scope.go:117] "RemoveContainer" containerID="13928f2812286efc277648be35e4bcee00e37c7735d5bc258ca6261e12d1b284" Jan 29 08:56:52 crc kubenswrapper[5017]: E0129 08:56:52.321847 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:57:05 crc kubenswrapper[5017]: I0129 08:57:05.317225 5017 scope.go:117] "RemoveContainer" containerID="13928f2812286efc277648be35e4bcee00e37c7735d5bc258ca6261e12d1b284" Jan 29 08:57:05 crc kubenswrapper[5017]: E0129 08:57:05.318359 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:57:05 crc kubenswrapper[5017]: E0129 08:57:05.871033 5017 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.154:49484->38.102.83.154:45933: write tcp 38.102.83.154:49484->38.102.83.154:45933: write: broken pipe Jan 29 08:57:18 crc kubenswrapper[5017]: I0129 08:57:18.316927 5017 scope.go:117] "RemoveContainer" containerID="13928f2812286efc277648be35e4bcee00e37c7735d5bc258ca6261e12d1b284" Jan 29 08:57:18 crc kubenswrapper[5017]: E0129 08:57:18.318086 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:57:33 crc kubenswrapper[5017]: I0129 08:57:33.316773 5017 scope.go:117] "RemoveContainer" containerID="13928f2812286efc277648be35e4bcee00e37c7735d5bc258ca6261e12d1b284" Jan 29 08:57:33 crc kubenswrapper[5017]: E0129 08:57:33.317896 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:57:44 crc kubenswrapper[5017]: I0129 08:57:44.323785 5017 scope.go:117] "RemoveContainer" containerID="13928f2812286efc277648be35e4bcee00e37c7735d5bc258ca6261e12d1b284" Jan 29 08:57:44 crc kubenswrapper[5017]: E0129 08:57:44.325029 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:57:59 crc kubenswrapper[5017]: I0129 08:57:59.316119 5017 scope.go:117] "RemoveContainer" containerID="13928f2812286efc277648be35e4bcee00e37c7735d5bc258ca6261e12d1b284" Jan 29 08:57:59 crc kubenswrapper[5017]: E0129 08:57:59.317275 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:58:14 crc kubenswrapper[5017]: I0129 08:58:14.325568 5017 scope.go:117] "RemoveContainer" containerID="13928f2812286efc277648be35e4bcee00e37c7735d5bc258ca6261e12d1b284" Jan 29 08:58:14 crc kubenswrapper[5017]: E0129 08:58:14.332657 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:58:27 crc kubenswrapper[5017]: I0129 08:58:27.317434 5017 scope.go:117] "RemoveContainer" containerID="13928f2812286efc277648be35e4bcee00e37c7735d5bc258ca6261e12d1b284" Jan 29 08:58:27 crc kubenswrapper[5017]: E0129 08:58:27.319709 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:58:41 crc kubenswrapper[5017]: I0129 08:58:41.317275 5017 
scope.go:117] "RemoveContainer" containerID="13928f2812286efc277648be35e4bcee00e37c7735d5bc258ca6261e12d1b284" Jan 29 08:58:41 crc kubenswrapper[5017]: E0129 08:58:41.318349 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:58:55 crc kubenswrapper[5017]: I0129 08:58:55.317237 5017 scope.go:117] "RemoveContainer" containerID="13928f2812286efc277648be35e4bcee00e37c7735d5bc258ca6261e12d1b284" Jan 29 08:58:55 crc kubenswrapper[5017]: E0129 08:58:55.318482 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:58:56 crc kubenswrapper[5017]: I0129 08:58:56.631249 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-wvpt7/must-gather-h46lk"] Jan 29 08:58:56 crc kubenswrapper[5017]: E0129 08:58:56.632169 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="188aa09e-22df-4d5c-a969-8eebbf23c644" containerName="nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell1" Jan 29 08:58:56 crc kubenswrapper[5017]: I0129 08:58:56.632188 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="188aa09e-22df-4d5c-a969-8eebbf23c644" containerName="nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell1" Jan 29 08:58:56 crc kubenswrapper[5017]: I0129 08:58:56.632437 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="188aa09e-22df-4d5c-a969-8eebbf23c644" containerName="nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell1" Jan 29 08:58:56 crc kubenswrapper[5017]: I0129 08:58:56.633957 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-wvpt7/must-gather-h46lk" Jan 29 08:58:56 crc kubenswrapper[5017]: I0129 08:58:56.639085 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-wvpt7"/"default-dockercfg-qj5qw" Jan 29 08:58:56 crc kubenswrapper[5017]: I0129 08:58:56.639205 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-wvpt7"/"kube-root-ca.crt" Jan 29 08:58:56 crc kubenswrapper[5017]: I0129 08:58:56.639304 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-wvpt7"/"openshift-service-ca.crt" Jan 29 08:58:56 crc kubenswrapper[5017]: I0129 08:58:56.648487 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-wvpt7/must-gather-h46lk"] Jan 29 08:58:56 crc kubenswrapper[5017]: I0129 08:58:56.815981 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tj9x8\" (UniqueName: \"kubernetes.io/projected/142105c7-f2f9-40d5-96ee-7b813dc6ec31-kube-api-access-tj9x8\") pod \"must-gather-h46lk\" (UID: \"142105c7-f2f9-40d5-96ee-7b813dc6ec31\") " pod="openshift-must-gather-wvpt7/must-gather-h46lk" Jan 29 08:58:56 crc kubenswrapper[5017]: I0129 08:58:56.816278 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/142105c7-f2f9-40d5-96ee-7b813dc6ec31-must-gather-output\") pod \"must-gather-h46lk\" (UID: \"142105c7-f2f9-40d5-96ee-7b813dc6ec31\") " pod="openshift-must-gather-wvpt7/must-gather-h46lk" Jan 29 08:58:56 crc kubenswrapper[5017]: I0129 08:58:56.918686 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tj9x8\" (UniqueName: \"kubernetes.io/projected/142105c7-f2f9-40d5-96ee-7b813dc6ec31-kube-api-access-tj9x8\") pod \"must-gather-h46lk\" (UID: \"142105c7-f2f9-40d5-96ee-7b813dc6ec31\") " pod="openshift-must-gather-wvpt7/must-gather-h46lk" Jan 29 08:58:56 crc kubenswrapper[5017]: I0129 08:58:56.918834 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/142105c7-f2f9-40d5-96ee-7b813dc6ec31-must-gather-output\") pod \"must-gather-h46lk\" (UID: \"142105c7-f2f9-40d5-96ee-7b813dc6ec31\") " pod="openshift-must-gather-wvpt7/must-gather-h46lk" Jan 29 08:58:56 crc kubenswrapper[5017]: I0129 08:58:56.919533 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/142105c7-f2f9-40d5-96ee-7b813dc6ec31-must-gather-output\") pod \"must-gather-h46lk\" (UID: \"142105c7-f2f9-40d5-96ee-7b813dc6ec31\") " pod="openshift-must-gather-wvpt7/must-gather-h46lk" Jan 29 08:58:56 crc kubenswrapper[5017]: I0129 08:58:56.945348 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tj9x8\" (UniqueName: \"kubernetes.io/projected/142105c7-f2f9-40d5-96ee-7b813dc6ec31-kube-api-access-tj9x8\") pod \"must-gather-h46lk\" (UID: \"142105c7-f2f9-40d5-96ee-7b813dc6ec31\") " pod="openshift-must-gather-wvpt7/must-gather-h46lk" Jan 29 08:58:56 crc kubenswrapper[5017]: I0129 08:58:56.959943 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-wvpt7/must-gather-h46lk" Jan 29 08:58:57 crc kubenswrapper[5017]: I0129 08:58:57.442734 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-wvpt7/must-gather-h46lk"] Jan 29 08:58:57 crc kubenswrapper[5017]: I0129 08:58:57.449919 5017 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 08:58:57 crc kubenswrapper[5017]: I0129 08:58:57.839714 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wvpt7/must-gather-h46lk" event={"ID":"142105c7-f2f9-40d5-96ee-7b813dc6ec31","Type":"ContainerStarted","Data":"65325bc49d10f77fd9e88f5f0a418731017383412d641b7a86acd595eeafc693"} Jan 29 08:59:04 crc kubenswrapper[5017]: I0129 08:59:04.937255 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wvpt7/must-gather-h46lk" event={"ID":"142105c7-f2f9-40d5-96ee-7b813dc6ec31","Type":"ContainerStarted","Data":"1767e2d4d326c4170880d82bce18aef3143bcc0717a176f3d4611bdeaa51bb2e"} Jan 29 08:59:04 crc kubenswrapper[5017]: I0129 08:59:04.938127 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wvpt7/must-gather-h46lk" event={"ID":"142105c7-f2f9-40d5-96ee-7b813dc6ec31","Type":"ContainerStarted","Data":"922e1ee4f13256414622f68534c86c35c22849516f165690b47fc12e0c7ecfb7"} Jan 29 08:59:04 crc kubenswrapper[5017]: I0129 08:59:04.968551 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-wvpt7/must-gather-h46lk" podStartSLOduration=2.037643027 podStartE2EDuration="8.968523823s" podCreationTimestamp="2026-01-29 08:58:56 +0000 UTC" firstStartedPulling="2026-01-29 08:58:57.44965616 +0000 UTC m=+8623.824103770" lastFinishedPulling="2026-01-29 08:59:04.380536956 +0000 UTC m=+8630.754984566" observedRunningTime="2026-01-29 08:59:04.962894886 +0000 UTC m=+8631.337342496" watchObservedRunningTime="2026-01-29 08:59:04.968523823 +0000 UTC m=+8631.342971433" Jan 29 08:59:09 crc kubenswrapper[5017]: I0129 08:59:09.697215 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-wvpt7/crc-debug-gd82h"] Jan 29 08:59:09 crc kubenswrapper[5017]: I0129 08:59:09.699504 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-wvpt7/crc-debug-gd82h" Jan 29 08:59:09 crc kubenswrapper[5017]: I0129 08:59:09.848949 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/af5ec8e8-9aea-44fb-81bd-42f695a3c83d-host\") pod \"crc-debug-gd82h\" (UID: \"af5ec8e8-9aea-44fb-81bd-42f695a3c83d\") " pod="openshift-must-gather-wvpt7/crc-debug-gd82h" Jan 29 08:59:09 crc kubenswrapper[5017]: I0129 08:59:09.849566 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lsw6\" (UniqueName: \"kubernetes.io/projected/af5ec8e8-9aea-44fb-81bd-42f695a3c83d-kube-api-access-4lsw6\") pod \"crc-debug-gd82h\" (UID: \"af5ec8e8-9aea-44fb-81bd-42f695a3c83d\") " pod="openshift-must-gather-wvpt7/crc-debug-gd82h" Jan 29 08:59:09 crc kubenswrapper[5017]: I0129 08:59:09.952108 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4lsw6\" (UniqueName: \"kubernetes.io/projected/af5ec8e8-9aea-44fb-81bd-42f695a3c83d-kube-api-access-4lsw6\") pod \"crc-debug-gd82h\" (UID: \"af5ec8e8-9aea-44fb-81bd-42f695a3c83d\") " pod="openshift-must-gather-wvpt7/crc-debug-gd82h" Jan 29 08:59:09 crc kubenswrapper[5017]: I0129 08:59:09.952403 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/af5ec8e8-9aea-44fb-81bd-42f695a3c83d-host\") pod \"crc-debug-gd82h\" (UID: \"af5ec8e8-9aea-44fb-81bd-42f695a3c83d\") " pod="openshift-must-gather-wvpt7/crc-debug-gd82h" Jan 29 08:59:09 crc kubenswrapper[5017]: I0129 08:59:09.952590 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/af5ec8e8-9aea-44fb-81bd-42f695a3c83d-host\") pod \"crc-debug-gd82h\" (UID: \"af5ec8e8-9aea-44fb-81bd-42f695a3c83d\") " pod="openshift-must-gather-wvpt7/crc-debug-gd82h" Jan 29 08:59:09 crc kubenswrapper[5017]: I0129 08:59:09.981294 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lsw6\" (UniqueName: \"kubernetes.io/projected/af5ec8e8-9aea-44fb-81bd-42f695a3c83d-kube-api-access-4lsw6\") pod \"crc-debug-gd82h\" (UID: \"af5ec8e8-9aea-44fb-81bd-42f695a3c83d\") " pod="openshift-must-gather-wvpt7/crc-debug-gd82h" Jan 29 08:59:10 crc kubenswrapper[5017]: I0129 08:59:10.023994 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-wvpt7/crc-debug-gd82h" Jan 29 08:59:10 crc kubenswrapper[5017]: I0129 08:59:10.316934 5017 scope.go:117] "RemoveContainer" containerID="13928f2812286efc277648be35e4bcee00e37c7735d5bc258ca6261e12d1b284" Jan 29 08:59:10 crc kubenswrapper[5017]: E0129 08:59:10.317784 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:59:11 crc kubenswrapper[5017]: I0129 08:59:11.034504 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wvpt7/crc-debug-gd82h" event={"ID":"af5ec8e8-9aea-44fb-81bd-42f695a3c83d","Type":"ContainerStarted","Data":"29f64d4c24bcada1aff9acd2949f7da421c510f245cde2008bf94538743481ea"} Jan 29 08:59:23 crc kubenswrapper[5017]: I0129 08:59:23.175825 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wvpt7/crc-debug-gd82h" event={"ID":"af5ec8e8-9aea-44fb-81bd-42f695a3c83d","Type":"ContainerStarted","Data":"f8503a32c6396260ed56bec6c7e85266d8d29aac7b2f2c500a87e0363c167c09"} Jan 29 08:59:23 crc kubenswrapper[5017]: I0129 08:59:23.205349 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-wvpt7/crc-debug-gd82h" podStartSLOduration=1.94459654 podStartE2EDuration="14.205323401s" podCreationTimestamp="2026-01-29 08:59:09 +0000 UTC" firstStartedPulling="2026-01-29 08:59:10.090352687 +0000 UTC m=+8636.464800297" lastFinishedPulling="2026-01-29 08:59:22.351079548 +0000 UTC m=+8648.725527158" observedRunningTime="2026-01-29 08:59:23.198122395 +0000 UTC m=+8649.572570015" watchObservedRunningTime="2026-01-29 08:59:23.205323401 +0000 UTC m=+8649.579771001" Jan 29 08:59:24 crc kubenswrapper[5017]: I0129 08:59:24.323808 5017 scope.go:117] "RemoveContainer" containerID="13928f2812286efc277648be35e4bcee00e37c7735d5bc258ca6261e12d1b284" Jan 29 08:59:24 crc kubenswrapper[5017]: E0129 08:59:24.324123 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:59:37 crc kubenswrapper[5017]: I0129 08:59:37.317807 5017 scope.go:117] "RemoveContainer" containerID="13928f2812286efc277648be35e4bcee00e37c7735d5bc258ca6261e12d1b284" Jan 29 08:59:37 crc kubenswrapper[5017]: E0129 08:59:37.319358 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:59:48 crc kubenswrapper[5017]: I0129 08:59:48.317266 5017 scope.go:117] "RemoveContainer" containerID="13928f2812286efc277648be35e4bcee00e37c7735d5bc258ca6261e12d1b284" Jan 29 
08:59:48 crc kubenswrapper[5017]: E0129 08:59:48.318446 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 08:59:48 crc kubenswrapper[5017]: I0129 08:59:48.472575 5017 generic.go:334] "Generic (PLEG): container finished" podID="af5ec8e8-9aea-44fb-81bd-42f695a3c83d" containerID="f8503a32c6396260ed56bec6c7e85266d8d29aac7b2f2c500a87e0363c167c09" exitCode=0 Jan 29 08:59:48 crc kubenswrapper[5017]: I0129 08:59:48.472654 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wvpt7/crc-debug-gd82h" event={"ID":"af5ec8e8-9aea-44fb-81bd-42f695a3c83d","Type":"ContainerDied","Data":"f8503a32c6396260ed56bec6c7e85266d8d29aac7b2f2c500a87e0363c167c09"} Jan 29 08:59:49 crc kubenswrapper[5017]: I0129 08:59:49.630688 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-wvpt7/crc-debug-gd82h" Jan 29 08:59:49 crc kubenswrapper[5017]: I0129 08:59:49.675589 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-wvpt7/crc-debug-gd82h"] Jan 29 08:59:49 crc kubenswrapper[5017]: I0129 08:59:49.686966 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-wvpt7/crc-debug-gd82h"] Jan 29 08:59:49 crc kubenswrapper[5017]: I0129 08:59:49.734369 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4lsw6\" (UniqueName: \"kubernetes.io/projected/af5ec8e8-9aea-44fb-81bd-42f695a3c83d-kube-api-access-4lsw6\") pod \"af5ec8e8-9aea-44fb-81bd-42f695a3c83d\" (UID: \"af5ec8e8-9aea-44fb-81bd-42f695a3c83d\") " Jan 29 08:59:49 crc kubenswrapper[5017]: I0129 08:59:49.734661 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/af5ec8e8-9aea-44fb-81bd-42f695a3c83d-host\") pod \"af5ec8e8-9aea-44fb-81bd-42f695a3c83d\" (UID: \"af5ec8e8-9aea-44fb-81bd-42f695a3c83d\") " Jan 29 08:59:49 crc kubenswrapper[5017]: I0129 08:59:49.734839 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/af5ec8e8-9aea-44fb-81bd-42f695a3c83d-host" (OuterVolumeSpecName: "host") pod "af5ec8e8-9aea-44fb-81bd-42f695a3c83d" (UID: "af5ec8e8-9aea-44fb-81bd-42f695a3c83d"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 08:59:49 crc kubenswrapper[5017]: I0129 08:59:49.735417 5017 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/af5ec8e8-9aea-44fb-81bd-42f695a3c83d-host\") on node \"crc\" DevicePath \"\"" Jan 29 08:59:49 crc kubenswrapper[5017]: I0129 08:59:49.750115 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af5ec8e8-9aea-44fb-81bd-42f695a3c83d-kube-api-access-4lsw6" (OuterVolumeSpecName: "kube-api-access-4lsw6") pod "af5ec8e8-9aea-44fb-81bd-42f695a3c83d" (UID: "af5ec8e8-9aea-44fb-81bd-42f695a3c83d"). InnerVolumeSpecName "kube-api-access-4lsw6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:59:49 crc kubenswrapper[5017]: I0129 08:59:49.838059 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4lsw6\" (UniqueName: \"kubernetes.io/projected/af5ec8e8-9aea-44fb-81bd-42f695a3c83d-kube-api-access-4lsw6\") on node \"crc\" DevicePath \"\"" Jan 29 08:59:50 crc kubenswrapper[5017]: I0129 08:59:50.330020 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af5ec8e8-9aea-44fb-81bd-42f695a3c83d" path="/var/lib/kubelet/pods/af5ec8e8-9aea-44fb-81bd-42f695a3c83d/volumes" Jan 29 08:59:50 crc kubenswrapper[5017]: I0129 08:59:50.492827 5017 scope.go:117] "RemoveContainer" containerID="f8503a32c6396260ed56bec6c7e85266d8d29aac7b2f2c500a87e0363c167c09" Jan 29 08:59:50 crc kubenswrapper[5017]: I0129 08:59:50.492874 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-wvpt7/crc-debug-gd82h" Jan 29 08:59:50 crc kubenswrapper[5017]: I0129 08:59:50.942822 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-wvpt7/crc-debug-hdblh"] Jan 29 08:59:50 crc kubenswrapper[5017]: E0129 08:59:50.943403 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af5ec8e8-9aea-44fb-81bd-42f695a3c83d" containerName="container-00" Jan 29 08:59:50 crc kubenswrapper[5017]: I0129 08:59:50.943423 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="af5ec8e8-9aea-44fb-81bd-42f695a3c83d" containerName="container-00" Jan 29 08:59:50 crc kubenswrapper[5017]: I0129 08:59:50.943669 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="af5ec8e8-9aea-44fb-81bd-42f695a3c83d" containerName="container-00" Jan 29 08:59:50 crc kubenswrapper[5017]: I0129 08:59:50.944555 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-wvpt7/crc-debug-hdblh" Jan 29 08:59:51 crc kubenswrapper[5017]: I0129 08:59:51.064061 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7j8q\" (UniqueName: \"kubernetes.io/projected/7764826c-bc97-474a-a411-c9c3b7d7c584-kube-api-access-n7j8q\") pod \"crc-debug-hdblh\" (UID: \"7764826c-bc97-474a-a411-c9c3b7d7c584\") " pod="openshift-must-gather-wvpt7/crc-debug-hdblh" Jan 29 08:59:51 crc kubenswrapper[5017]: I0129 08:59:51.064185 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7764826c-bc97-474a-a411-c9c3b7d7c584-host\") pod \"crc-debug-hdblh\" (UID: \"7764826c-bc97-474a-a411-c9c3b7d7c584\") " pod="openshift-must-gather-wvpt7/crc-debug-hdblh" Jan 29 08:59:51 crc kubenswrapper[5017]: I0129 08:59:51.167048 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7j8q\" (UniqueName: \"kubernetes.io/projected/7764826c-bc97-474a-a411-c9c3b7d7c584-kube-api-access-n7j8q\") pod \"crc-debug-hdblh\" (UID: \"7764826c-bc97-474a-a411-c9c3b7d7c584\") " pod="openshift-must-gather-wvpt7/crc-debug-hdblh" Jan 29 08:59:51 crc kubenswrapper[5017]: I0129 08:59:51.167165 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7764826c-bc97-474a-a411-c9c3b7d7c584-host\") pod \"crc-debug-hdblh\" (UID: \"7764826c-bc97-474a-a411-c9c3b7d7c584\") " pod="openshift-must-gather-wvpt7/crc-debug-hdblh" Jan 29 08:59:51 crc kubenswrapper[5017]: I0129 08:59:51.167417 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7764826c-bc97-474a-a411-c9c3b7d7c584-host\") pod \"crc-debug-hdblh\" (UID: \"7764826c-bc97-474a-a411-c9c3b7d7c584\") " pod="openshift-must-gather-wvpt7/crc-debug-hdblh" Jan 29 08:59:51 crc kubenswrapper[5017]: I0129 08:59:51.191013 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7j8q\" (UniqueName: \"kubernetes.io/projected/7764826c-bc97-474a-a411-c9c3b7d7c584-kube-api-access-n7j8q\") pod \"crc-debug-hdblh\" (UID: \"7764826c-bc97-474a-a411-c9c3b7d7c584\") " pod="openshift-must-gather-wvpt7/crc-debug-hdblh" Jan 29 08:59:51 crc kubenswrapper[5017]: I0129 08:59:51.266834 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-wvpt7/crc-debug-hdblh" Jan 29 08:59:51 crc kubenswrapper[5017]: I0129 08:59:51.505356 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wvpt7/crc-debug-hdblh" event={"ID":"7764826c-bc97-474a-a411-c9c3b7d7c584","Type":"ContainerStarted","Data":"5a0be5381a72d6944c1a86a54837de2a30d50d6cb165864298bc27a9671cc2cf"} Jan 29 08:59:52 crc kubenswrapper[5017]: I0129 08:59:52.519752 5017 generic.go:334] "Generic (PLEG): container finished" podID="7764826c-bc97-474a-a411-c9c3b7d7c584" containerID="dedae6b0b614891e8d2cffc7c4aa2e0678176e358d29be384dde142d14761273" exitCode=1 Jan 29 08:59:52 crc kubenswrapper[5017]: I0129 08:59:52.519870 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wvpt7/crc-debug-hdblh" event={"ID":"7764826c-bc97-474a-a411-c9c3b7d7c584","Type":"ContainerDied","Data":"dedae6b0b614891e8d2cffc7c4aa2e0678176e358d29be384dde142d14761273"} Jan 29 08:59:52 crc kubenswrapper[5017]: I0129 08:59:52.566646 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-wvpt7/crc-debug-hdblh"] Jan 29 08:59:52 crc kubenswrapper[5017]: I0129 08:59:52.578248 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-wvpt7/crc-debug-hdblh"] Jan 29 08:59:53 crc kubenswrapper[5017]: I0129 08:59:53.672831 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-wvpt7/crc-debug-hdblh" Jan 29 08:59:53 crc kubenswrapper[5017]: I0129 08:59:53.726912 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7764826c-bc97-474a-a411-c9c3b7d7c584-host\") pod \"7764826c-bc97-474a-a411-c9c3b7d7c584\" (UID: \"7764826c-bc97-474a-a411-c9c3b7d7c584\") " Jan 29 08:59:53 crc kubenswrapper[5017]: I0129 08:59:53.727004 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n7j8q\" (UniqueName: \"kubernetes.io/projected/7764826c-bc97-474a-a411-c9c3b7d7c584-kube-api-access-n7j8q\") pod \"7764826c-bc97-474a-a411-c9c3b7d7c584\" (UID: \"7764826c-bc97-474a-a411-c9c3b7d7c584\") " Jan 29 08:59:53 crc kubenswrapper[5017]: I0129 08:59:53.727169 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7764826c-bc97-474a-a411-c9c3b7d7c584-host" (OuterVolumeSpecName: "host") pod "7764826c-bc97-474a-a411-c9c3b7d7c584" (UID: "7764826c-bc97-474a-a411-c9c3b7d7c584"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 08:59:53 crc kubenswrapper[5017]: I0129 08:59:53.727704 5017 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7764826c-bc97-474a-a411-c9c3b7d7c584-host\") on node \"crc\" DevicePath \"\"" Jan 29 08:59:53 crc kubenswrapper[5017]: I0129 08:59:53.744367 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7764826c-bc97-474a-a411-c9c3b7d7c584-kube-api-access-n7j8q" (OuterVolumeSpecName: "kube-api-access-n7j8q") pod "7764826c-bc97-474a-a411-c9c3b7d7c584" (UID: "7764826c-bc97-474a-a411-c9c3b7d7c584"). InnerVolumeSpecName "kube-api-access-n7j8q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:59:53 crc kubenswrapper[5017]: I0129 08:59:53.830695 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n7j8q\" (UniqueName: \"kubernetes.io/projected/7764826c-bc97-474a-a411-c9c3b7d7c584-kube-api-access-n7j8q\") on node \"crc\" DevicePath \"\"" Jan 29 08:59:54 crc kubenswrapper[5017]: I0129 08:59:54.331380 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7764826c-bc97-474a-a411-c9c3b7d7c584" path="/var/lib/kubelet/pods/7764826c-bc97-474a-a411-c9c3b7d7c584/volumes" Jan 29 08:59:54 crc kubenswrapper[5017]: I0129 08:59:54.558820 5017 scope.go:117] "RemoveContainer" containerID="dedae6b0b614891e8d2cffc7c4aa2e0678176e358d29be384dde142d14761273" Jan 29 08:59:54 crc kubenswrapper[5017]: I0129 08:59:54.559036 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-wvpt7/crc-debug-hdblh" Jan 29 09:00:00 crc kubenswrapper[5017]: I0129 09:00:00.166615 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494620-p2x2x"] Jan 29 09:00:00 crc kubenswrapper[5017]: E0129 09:00:00.168101 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7764826c-bc97-474a-a411-c9c3b7d7c584" containerName="container-00" Jan 29 09:00:00 crc kubenswrapper[5017]: I0129 09:00:00.168123 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="7764826c-bc97-474a-a411-c9c3b7d7c584" containerName="container-00" Jan 29 09:00:00 crc kubenswrapper[5017]: I0129 09:00:00.168398 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="7764826c-bc97-474a-a411-c9c3b7d7c584" containerName="container-00" Jan 29 09:00:00 crc kubenswrapper[5017]: I0129 09:00:00.169471 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494620-p2x2x" Jan 29 09:00:00 crc kubenswrapper[5017]: I0129 09:00:00.175585 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 29 09:00:00 crc kubenswrapper[5017]: I0129 09:00:00.175911 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 29 09:00:00 crc kubenswrapper[5017]: I0129 09:00:00.198707 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494620-p2x2x"] Jan 29 09:00:00 crc kubenswrapper[5017]: I0129 09:00:00.280869 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jn55\" (UniqueName: \"kubernetes.io/projected/ab617844-91b5-47ce-82a8-8d9c65a891bc-kube-api-access-5jn55\") pod \"collect-profiles-29494620-p2x2x\" (UID: \"ab617844-91b5-47ce-82a8-8d9c65a891bc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494620-p2x2x" Jan 29 09:00:00 crc kubenswrapper[5017]: I0129 09:00:00.281046 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ab617844-91b5-47ce-82a8-8d9c65a891bc-config-volume\") pod \"collect-profiles-29494620-p2x2x\" (UID: \"ab617844-91b5-47ce-82a8-8d9c65a891bc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494620-p2x2x" Jan 29 09:00:00 crc kubenswrapper[5017]: I0129 09:00:00.281269 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ab617844-91b5-47ce-82a8-8d9c65a891bc-secret-volume\") pod \"collect-profiles-29494620-p2x2x\" (UID: \"ab617844-91b5-47ce-82a8-8d9c65a891bc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494620-p2x2x" Jan 29 09:00:00 crc kubenswrapper[5017]: I0129 09:00:00.383980 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jn55\" (UniqueName: \"kubernetes.io/projected/ab617844-91b5-47ce-82a8-8d9c65a891bc-kube-api-access-5jn55\") pod \"collect-profiles-29494620-p2x2x\" (UID: \"ab617844-91b5-47ce-82a8-8d9c65a891bc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494620-p2x2x" Jan 29 09:00:00 crc kubenswrapper[5017]: I0129 09:00:00.384065 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ab617844-91b5-47ce-82a8-8d9c65a891bc-config-volume\") pod \"collect-profiles-29494620-p2x2x\" (UID: \"ab617844-91b5-47ce-82a8-8d9c65a891bc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494620-p2x2x" Jan 29 09:00:00 crc kubenswrapper[5017]: I0129 09:00:00.384132 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ab617844-91b5-47ce-82a8-8d9c65a891bc-secret-volume\") pod \"collect-profiles-29494620-p2x2x\" (UID: \"ab617844-91b5-47ce-82a8-8d9c65a891bc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494620-p2x2x" Jan 29 09:00:00 crc kubenswrapper[5017]: I0129 09:00:00.385399 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ab617844-91b5-47ce-82a8-8d9c65a891bc-config-volume\") pod 
\"collect-profiles-29494620-p2x2x\" (UID: \"ab617844-91b5-47ce-82a8-8d9c65a891bc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494620-p2x2x" Jan 29 09:00:00 crc kubenswrapper[5017]: I0129 09:00:00.393866 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ab617844-91b5-47ce-82a8-8d9c65a891bc-secret-volume\") pod \"collect-profiles-29494620-p2x2x\" (UID: \"ab617844-91b5-47ce-82a8-8d9c65a891bc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494620-p2x2x" Jan 29 09:00:00 crc kubenswrapper[5017]: I0129 09:00:00.407043 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jn55\" (UniqueName: \"kubernetes.io/projected/ab617844-91b5-47ce-82a8-8d9c65a891bc-kube-api-access-5jn55\") pod \"collect-profiles-29494620-p2x2x\" (UID: \"ab617844-91b5-47ce-82a8-8d9c65a891bc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494620-p2x2x" Jan 29 09:00:00 crc kubenswrapper[5017]: I0129 09:00:00.499400 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494620-p2x2x" Jan 29 09:00:01 crc kubenswrapper[5017]: I0129 09:00:01.376188 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494620-p2x2x"] Jan 29 09:00:01 crc kubenswrapper[5017]: I0129 09:00:01.662832 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494620-p2x2x" event={"ID":"ab617844-91b5-47ce-82a8-8d9c65a891bc","Type":"ContainerStarted","Data":"f6c43f0243e6273f0ecc40fd0e5a56b15d7cc505cd1b3979c8ef077172ef14d2"} Jan 29 09:00:01 crc kubenswrapper[5017]: I0129 09:00:01.663304 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494620-p2x2x" event={"ID":"ab617844-91b5-47ce-82a8-8d9c65a891bc","Type":"ContainerStarted","Data":"92a1e93dddbde596b52a2c9e5ede7b978b7a4454e094766b3fa900ece41b5ce0"} Jan 29 09:00:01 crc kubenswrapper[5017]: I0129 09:00:01.691508 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29494620-p2x2x" podStartSLOduration=1.691473131 podStartE2EDuration="1.691473131s" podCreationTimestamp="2026-01-29 09:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:00:01.679335404 +0000 UTC m=+8688.053783034" watchObservedRunningTime="2026-01-29 09:00:01.691473131 +0000 UTC m=+8688.065920741" Jan 29 09:00:02 crc kubenswrapper[5017]: I0129 09:00:02.677321 5017 generic.go:334] "Generic (PLEG): container finished" podID="ab617844-91b5-47ce-82a8-8d9c65a891bc" containerID="f6c43f0243e6273f0ecc40fd0e5a56b15d7cc505cd1b3979c8ef077172ef14d2" exitCode=0 Jan 29 09:00:02 crc kubenswrapper[5017]: I0129 09:00:02.677405 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494620-p2x2x" event={"ID":"ab617844-91b5-47ce-82a8-8d9c65a891bc","Type":"ContainerDied","Data":"f6c43f0243e6273f0ecc40fd0e5a56b15d7cc505cd1b3979c8ef077172ef14d2"} Jan 29 09:00:03 crc kubenswrapper[5017]: I0129 09:00:03.316529 5017 scope.go:117] "RemoveContainer" containerID="13928f2812286efc277648be35e4bcee00e37c7735d5bc258ca6261e12d1b284" Jan 29 09:00:03 crc kubenswrapper[5017]: E0129 09:00:03.317349 5017 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 09:00:04 crc kubenswrapper[5017]: I0129 09:00:04.136371 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494620-p2x2x" Jan 29 09:00:04 crc kubenswrapper[5017]: I0129 09:00:04.181361 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ab617844-91b5-47ce-82a8-8d9c65a891bc-config-volume\") pod \"ab617844-91b5-47ce-82a8-8d9c65a891bc\" (UID: \"ab617844-91b5-47ce-82a8-8d9c65a891bc\") " Jan 29 09:00:04 crc kubenswrapper[5017]: I0129 09:00:04.182414 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab617844-91b5-47ce-82a8-8d9c65a891bc-config-volume" (OuterVolumeSpecName: "config-volume") pod "ab617844-91b5-47ce-82a8-8d9c65a891bc" (UID: "ab617844-91b5-47ce-82a8-8d9c65a891bc"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:00:04 crc kubenswrapper[5017]: I0129 09:00:04.183335 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ab617844-91b5-47ce-82a8-8d9c65a891bc-secret-volume\") pod \"ab617844-91b5-47ce-82a8-8d9c65a891bc\" (UID: \"ab617844-91b5-47ce-82a8-8d9c65a891bc\") " Jan 29 09:00:04 crc kubenswrapper[5017]: I0129 09:00:04.183662 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5jn55\" (UniqueName: \"kubernetes.io/projected/ab617844-91b5-47ce-82a8-8d9c65a891bc-kube-api-access-5jn55\") pod \"ab617844-91b5-47ce-82a8-8d9c65a891bc\" (UID: \"ab617844-91b5-47ce-82a8-8d9c65a891bc\") " Jan 29 09:00:04 crc kubenswrapper[5017]: I0129 09:00:04.185376 5017 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ab617844-91b5-47ce-82a8-8d9c65a891bc-config-volume\") on node \"crc\" DevicePath \"\"" Jan 29 09:00:04 crc kubenswrapper[5017]: I0129 09:00:04.193498 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab617844-91b5-47ce-82a8-8d9c65a891bc-kube-api-access-5jn55" (OuterVolumeSpecName: "kube-api-access-5jn55") pod "ab617844-91b5-47ce-82a8-8d9c65a891bc" (UID: "ab617844-91b5-47ce-82a8-8d9c65a891bc"). InnerVolumeSpecName "kube-api-access-5jn55". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:00:04 crc kubenswrapper[5017]: I0129 09:00:04.194507 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab617844-91b5-47ce-82a8-8d9c65a891bc-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ab617844-91b5-47ce-82a8-8d9c65a891bc" (UID: "ab617844-91b5-47ce-82a8-8d9c65a891bc"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:00:04 crc kubenswrapper[5017]: I0129 09:00:04.287394 5017 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ab617844-91b5-47ce-82a8-8d9c65a891bc-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 29 09:00:04 crc kubenswrapper[5017]: I0129 09:00:04.287438 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5jn55\" (UniqueName: \"kubernetes.io/projected/ab617844-91b5-47ce-82a8-8d9c65a891bc-kube-api-access-5jn55\") on node \"crc\" DevicePath \"\"" Jan 29 09:00:04 crc kubenswrapper[5017]: I0129 09:00:04.450916 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494575-d7sv6"] Jan 29 09:00:04 crc kubenswrapper[5017]: I0129 09:00:04.463404 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494575-d7sv6"] Jan 29 09:00:04 crc kubenswrapper[5017]: I0129 09:00:04.700090 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494620-p2x2x" event={"ID":"ab617844-91b5-47ce-82a8-8d9c65a891bc","Type":"ContainerDied","Data":"92a1e93dddbde596b52a2c9e5ede7b978b7a4454e094766b3fa900ece41b5ce0"} Jan 29 09:00:04 crc kubenswrapper[5017]: I0129 09:00:04.700592 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="92a1e93dddbde596b52a2c9e5ede7b978b7a4454e094766b3fa900ece41b5ce0" Jan 29 09:00:04 crc kubenswrapper[5017]: I0129 09:00:04.700513 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494620-p2x2x" Jan 29 09:00:06 crc kubenswrapper[5017]: I0129 09:00:06.332133 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70b4245b-8fec-40d1-bfca-d395a35a56e0" path="/var/lib/kubelet/pods/70b4245b-8fec-40d1-bfca-d395a35a56e0/volumes" Jan 29 09:00:14 crc kubenswrapper[5017]: I0129 09:00:14.337709 5017 scope.go:117] "RemoveContainer" containerID="13928f2812286efc277648be35e4bcee00e37c7735d5bc258ca6261e12d1b284" Jan 29 09:00:14 crc kubenswrapper[5017]: E0129 09:00:14.338930 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 09:00:29 crc kubenswrapper[5017]: I0129 09:00:29.316473 5017 scope.go:117] "RemoveContainer" containerID="13928f2812286efc277648be35e4bcee00e37c7735d5bc258ca6261e12d1b284" Jan 29 09:00:29 crc kubenswrapper[5017]: E0129 09:00:29.317652 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 09:00:38 crc kubenswrapper[5017]: I0129 09:00:38.188381 5017 scope.go:117] "RemoveContainer" containerID="df3f82fb78012465c08d9a51ec6beaa6acb99d576c87ae5f3bb625f8d4eae19e" Jan 29 09:00:42 
crc kubenswrapper[5017]: I0129 09:00:42.316095 5017 scope.go:117] "RemoveContainer" containerID="13928f2812286efc277648be35e4bcee00e37c7735d5bc258ca6261e12d1b284" Jan 29 09:00:42 crc kubenswrapper[5017]: E0129 09:00:42.317242 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 09:00:54 crc kubenswrapper[5017]: I0129 09:00:54.326527 5017 scope.go:117] "RemoveContainer" containerID="13928f2812286efc277648be35e4bcee00e37c7735d5bc258ca6261e12d1b284" Jan 29 09:00:54 crc kubenswrapper[5017]: E0129 09:00:54.327735 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 09:01:00 crc kubenswrapper[5017]: I0129 09:01:00.154346 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29494621-2lmn7"] Jan 29 09:01:00 crc kubenswrapper[5017]: E0129 09:01:00.155838 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab617844-91b5-47ce-82a8-8d9c65a891bc" containerName="collect-profiles" Jan 29 09:01:00 crc kubenswrapper[5017]: I0129 09:01:00.155854 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab617844-91b5-47ce-82a8-8d9c65a891bc" containerName="collect-profiles" Jan 29 09:01:00 crc kubenswrapper[5017]: I0129 09:01:00.156104 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab617844-91b5-47ce-82a8-8d9c65a891bc" containerName="collect-profiles" Jan 29 09:01:00 crc kubenswrapper[5017]: I0129 09:01:00.156907 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29494621-2lmn7" Jan 29 09:01:00 crc kubenswrapper[5017]: I0129 09:01:00.178014 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tbt2\" (UniqueName: \"kubernetes.io/projected/3c72a33e-5422-4a12-a6ab-7774564229a1-kube-api-access-9tbt2\") pod \"keystone-cron-29494621-2lmn7\" (UID: \"3c72a33e-5422-4a12-a6ab-7774564229a1\") " pod="openstack/keystone-cron-29494621-2lmn7" Jan 29 09:01:00 crc kubenswrapper[5017]: I0129 09:01:00.178180 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c72a33e-5422-4a12-a6ab-7774564229a1-combined-ca-bundle\") pod \"keystone-cron-29494621-2lmn7\" (UID: \"3c72a33e-5422-4a12-a6ab-7774564229a1\") " pod="openstack/keystone-cron-29494621-2lmn7" Jan 29 09:01:00 crc kubenswrapper[5017]: I0129 09:01:00.178239 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3c72a33e-5422-4a12-a6ab-7774564229a1-fernet-keys\") pod \"keystone-cron-29494621-2lmn7\" (UID: \"3c72a33e-5422-4a12-a6ab-7774564229a1\") " pod="openstack/keystone-cron-29494621-2lmn7" Jan 29 09:01:00 crc kubenswrapper[5017]: I0129 09:01:00.178445 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c72a33e-5422-4a12-a6ab-7774564229a1-config-data\") pod \"keystone-cron-29494621-2lmn7\" (UID: \"3c72a33e-5422-4a12-a6ab-7774564229a1\") " pod="openstack/keystone-cron-29494621-2lmn7" Jan 29 09:01:00 crc kubenswrapper[5017]: I0129 09:01:00.280471 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9tbt2\" (UniqueName: \"kubernetes.io/projected/3c72a33e-5422-4a12-a6ab-7774564229a1-kube-api-access-9tbt2\") pod \"keystone-cron-29494621-2lmn7\" (UID: \"3c72a33e-5422-4a12-a6ab-7774564229a1\") " pod="openstack/keystone-cron-29494621-2lmn7" Jan 29 09:01:00 crc kubenswrapper[5017]: I0129 09:01:00.280638 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c72a33e-5422-4a12-a6ab-7774564229a1-combined-ca-bundle\") pod \"keystone-cron-29494621-2lmn7\" (UID: \"3c72a33e-5422-4a12-a6ab-7774564229a1\") " pod="openstack/keystone-cron-29494621-2lmn7" Jan 29 09:01:00 crc kubenswrapper[5017]: I0129 09:01:00.280661 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3c72a33e-5422-4a12-a6ab-7774564229a1-fernet-keys\") pod \"keystone-cron-29494621-2lmn7\" (UID: \"3c72a33e-5422-4a12-a6ab-7774564229a1\") " pod="openstack/keystone-cron-29494621-2lmn7" Jan 29 09:01:00 crc kubenswrapper[5017]: I0129 09:01:00.280775 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c72a33e-5422-4a12-a6ab-7774564229a1-config-data\") pod \"keystone-cron-29494621-2lmn7\" (UID: \"3c72a33e-5422-4a12-a6ab-7774564229a1\") " pod="openstack/keystone-cron-29494621-2lmn7" Jan 29 09:01:00 crc kubenswrapper[5017]: I0129 09:01:00.286000 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29494621-2lmn7"] Jan 29 09:01:00 crc kubenswrapper[5017]: I0129 09:01:00.291108 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c72a33e-5422-4a12-a6ab-7774564229a1-combined-ca-bundle\") pod \"keystone-cron-29494621-2lmn7\" (UID: \"3c72a33e-5422-4a12-a6ab-7774564229a1\") " pod="openstack/keystone-cron-29494621-2lmn7" Jan 29 09:01:00 crc kubenswrapper[5017]: I0129 09:01:00.292064 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3c72a33e-5422-4a12-a6ab-7774564229a1-fernet-keys\") pod \"keystone-cron-29494621-2lmn7\" (UID: \"3c72a33e-5422-4a12-a6ab-7774564229a1\") " pod="openstack/keystone-cron-29494621-2lmn7" Jan 29 09:01:00 crc kubenswrapper[5017]: I0129 09:01:00.294036 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c72a33e-5422-4a12-a6ab-7774564229a1-config-data\") pod \"keystone-cron-29494621-2lmn7\" (UID: \"3c72a33e-5422-4a12-a6ab-7774564229a1\") " pod="openstack/keystone-cron-29494621-2lmn7" Jan 29 09:01:00 crc kubenswrapper[5017]: I0129 09:01:00.310914 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9tbt2\" (UniqueName: \"kubernetes.io/projected/3c72a33e-5422-4a12-a6ab-7774564229a1-kube-api-access-9tbt2\") pod \"keystone-cron-29494621-2lmn7\" (UID: \"3c72a33e-5422-4a12-a6ab-7774564229a1\") " pod="openstack/keystone-cron-29494621-2lmn7" Jan 29 09:01:00 crc kubenswrapper[5017]: I0129 09:01:00.483576 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29494621-2lmn7" Jan 29 09:01:00 crc kubenswrapper[5017]: I0129 09:01:00.953266 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29494621-2lmn7"] Jan 29 09:01:01 crc kubenswrapper[5017]: I0129 09:01:01.279505 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-b2wfs"] Jan 29 09:01:01 crc kubenswrapper[5017]: I0129 09:01:01.283369 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-b2wfs" Jan 29 09:01:01 crc kubenswrapper[5017]: I0129 09:01:01.292721 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-b2wfs"] Jan 29 09:01:01 crc kubenswrapper[5017]: I0129 09:01:01.351023 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29494621-2lmn7" event={"ID":"3c72a33e-5422-4a12-a6ab-7774564229a1","Type":"ContainerStarted","Data":"05dcfc934180ac4f389fbb1e6f62a92d240ac2d13f6f9232e3f3a56413d8a40c"} Jan 29 09:01:01 crc kubenswrapper[5017]: I0129 09:01:01.351130 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29494621-2lmn7" event={"ID":"3c72a33e-5422-4a12-a6ab-7774564229a1","Type":"ContainerStarted","Data":"c0c79a2d14d0e59673757b5b6d5b263ac5fed385d937158c958cb94460d1d4f0"} Jan 29 09:01:01 crc kubenswrapper[5017]: I0129 09:01:01.375661 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29494621-2lmn7" podStartSLOduration=1.375634102 podStartE2EDuration="1.375634102s" podCreationTimestamp="2026-01-29 09:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:01:01.374819613 +0000 UTC m=+8747.749267233" watchObservedRunningTime="2026-01-29 09:01:01.375634102 +0000 UTC m=+8747.750081712" Jan 29 09:01:01 crc kubenswrapper[5017]: I0129 09:01:01.413447 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdlf9\" (UniqueName: \"kubernetes.io/projected/2d396b73-5e05-4e10-be85-8fe4cce33c16-kube-api-access-pdlf9\") pod \"community-operators-b2wfs\" (UID: \"2d396b73-5e05-4e10-be85-8fe4cce33c16\") " pod="openshift-marketplace/community-operators-b2wfs" Jan 29 09:01:01 crc kubenswrapper[5017]: I0129 09:01:01.413788 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d396b73-5e05-4e10-be85-8fe4cce33c16-utilities\") pod \"community-operators-b2wfs\" (UID: \"2d396b73-5e05-4e10-be85-8fe4cce33c16\") " pod="openshift-marketplace/community-operators-b2wfs" Jan 29 09:01:01 crc kubenswrapper[5017]: I0129 09:01:01.413846 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d396b73-5e05-4e10-be85-8fe4cce33c16-catalog-content\") pod \"community-operators-b2wfs\" (UID: \"2d396b73-5e05-4e10-be85-8fe4cce33c16\") " pod="openshift-marketplace/community-operators-b2wfs" Jan 29 09:01:01 crc kubenswrapper[5017]: I0129 09:01:01.516894 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d396b73-5e05-4e10-be85-8fe4cce33c16-utilities\") pod \"community-operators-b2wfs\" (UID: \"2d396b73-5e05-4e10-be85-8fe4cce33c16\") " pod="openshift-marketplace/community-operators-b2wfs" Jan 29 09:01:01 crc kubenswrapper[5017]: I0129 09:01:01.516984 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d396b73-5e05-4e10-be85-8fe4cce33c16-catalog-content\") pod \"community-operators-b2wfs\" (UID: \"2d396b73-5e05-4e10-be85-8fe4cce33c16\") " pod="openshift-marketplace/community-operators-b2wfs" Jan 29 09:01:01 crc kubenswrapper[5017]: I0129 09:01:01.517582 5017 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d396b73-5e05-4e10-be85-8fe4cce33c16-utilities\") pod \"community-operators-b2wfs\" (UID: \"2d396b73-5e05-4e10-be85-8fe4cce33c16\") " pod="openshift-marketplace/community-operators-b2wfs" Jan 29 09:01:01 crc kubenswrapper[5017]: I0129 09:01:01.517585 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d396b73-5e05-4e10-be85-8fe4cce33c16-catalog-content\") pod \"community-operators-b2wfs\" (UID: \"2d396b73-5e05-4e10-be85-8fe4cce33c16\") " pod="openshift-marketplace/community-operators-b2wfs" Jan 29 09:01:01 crc kubenswrapper[5017]: I0129 09:01:01.518204 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdlf9\" (UniqueName: \"kubernetes.io/projected/2d396b73-5e05-4e10-be85-8fe4cce33c16-kube-api-access-pdlf9\") pod \"community-operators-b2wfs\" (UID: \"2d396b73-5e05-4e10-be85-8fe4cce33c16\") " pod="openshift-marketplace/community-operators-b2wfs" Jan 29 09:01:01 crc kubenswrapper[5017]: I0129 09:01:01.543085 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdlf9\" (UniqueName: \"kubernetes.io/projected/2d396b73-5e05-4e10-be85-8fe4cce33c16-kube-api-access-pdlf9\") pod \"community-operators-b2wfs\" (UID: \"2d396b73-5e05-4e10-be85-8fe4cce33c16\") " pod="openshift-marketplace/community-operators-b2wfs" Jan 29 09:01:01 crc kubenswrapper[5017]: I0129 09:01:01.666688 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-b2wfs" Jan 29 09:01:02 crc kubenswrapper[5017]: I0129 09:01:02.337453 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-b2wfs"] Jan 29 09:01:02 crc kubenswrapper[5017]: W0129 09:01:02.343200 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2d396b73_5e05_4e10_be85_8fe4cce33c16.slice/crio-530767b786e08f4604dbb8814db318cad3993013ece69ff8e8512f7706865d8d WatchSource:0}: Error finding container 530767b786e08f4604dbb8814db318cad3993013ece69ff8e8512f7706865d8d: Status 404 returned error can't find the container with id 530767b786e08f4604dbb8814db318cad3993013ece69ff8e8512f7706865d8d Jan 29 09:01:02 crc kubenswrapper[5017]: I0129 09:01:02.364261 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b2wfs" event={"ID":"2d396b73-5e05-4e10-be85-8fe4cce33c16","Type":"ContainerStarted","Data":"530767b786e08f4604dbb8814db318cad3993013ece69ff8e8512f7706865d8d"} Jan 29 09:01:03 crc kubenswrapper[5017]: I0129 09:01:03.376522 5017 generic.go:334] "Generic (PLEG): container finished" podID="2d396b73-5e05-4e10-be85-8fe4cce33c16" containerID="f5e55d3740f4e78ee0dac799fefae22bd30559b66ec2cbc333d85b275e089fb9" exitCode=0 Jan 29 09:01:03 crc kubenswrapper[5017]: I0129 09:01:03.376617 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b2wfs" event={"ID":"2d396b73-5e05-4e10-be85-8fe4cce33c16","Type":"ContainerDied","Data":"f5e55d3740f4e78ee0dac799fefae22bd30559b66ec2cbc333d85b275e089fb9"} Jan 29 09:01:05 crc kubenswrapper[5017]: I0129 09:01:05.400254 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b2wfs" 
event={"ID":"2d396b73-5e05-4e10-be85-8fe4cce33c16","Type":"ContainerStarted","Data":"0f7a94e1af730ddfbd25b6aff9d3239cfe574767f8b3a41bff6f3793deb44f36"} Jan 29 09:01:07 crc kubenswrapper[5017]: I0129 09:01:07.317510 5017 scope.go:117] "RemoveContainer" containerID="13928f2812286efc277648be35e4bcee00e37c7735d5bc258ca6261e12d1b284" Jan 29 09:01:07 crc kubenswrapper[5017]: E0129 09:01:07.320739 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 09:01:09 crc kubenswrapper[5017]: I0129 09:01:09.453448 5017 generic.go:334] "Generic (PLEG): container finished" podID="3c72a33e-5422-4a12-a6ab-7774564229a1" containerID="05dcfc934180ac4f389fbb1e6f62a92d240ac2d13f6f9232e3f3a56413d8a40c" exitCode=0 Jan 29 09:01:09 crc kubenswrapper[5017]: I0129 09:01:09.453496 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29494621-2lmn7" event={"ID":"3c72a33e-5422-4a12-a6ab-7774564229a1","Type":"ContainerDied","Data":"05dcfc934180ac4f389fbb1e6f62a92d240ac2d13f6f9232e3f3a56413d8a40c"} Jan 29 09:01:10 crc kubenswrapper[5017]: I0129 09:01:10.466358 5017 generic.go:334] "Generic (PLEG): container finished" podID="2d396b73-5e05-4e10-be85-8fe4cce33c16" containerID="0f7a94e1af730ddfbd25b6aff9d3239cfe574767f8b3a41bff6f3793deb44f36" exitCode=0 Jan 29 09:01:10 crc kubenswrapper[5017]: I0129 09:01:10.467186 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b2wfs" event={"ID":"2d396b73-5e05-4e10-be85-8fe4cce33c16","Type":"ContainerDied","Data":"0f7a94e1af730ddfbd25b6aff9d3239cfe574767f8b3a41bff6f3793deb44f36"} Jan 29 09:01:10 crc kubenswrapper[5017]: I0129 09:01:10.915191 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29494621-2lmn7" Jan 29 09:01:10 crc kubenswrapper[5017]: I0129 09:01:10.975500 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c72a33e-5422-4a12-a6ab-7774564229a1-config-data\") pod \"3c72a33e-5422-4a12-a6ab-7774564229a1\" (UID: \"3c72a33e-5422-4a12-a6ab-7774564229a1\") " Jan 29 09:01:10 crc kubenswrapper[5017]: I0129 09:01:10.975694 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9tbt2\" (UniqueName: \"kubernetes.io/projected/3c72a33e-5422-4a12-a6ab-7774564229a1-kube-api-access-9tbt2\") pod \"3c72a33e-5422-4a12-a6ab-7774564229a1\" (UID: \"3c72a33e-5422-4a12-a6ab-7774564229a1\") " Jan 29 09:01:10 crc kubenswrapper[5017]: I0129 09:01:10.975757 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c72a33e-5422-4a12-a6ab-7774564229a1-combined-ca-bundle\") pod \"3c72a33e-5422-4a12-a6ab-7774564229a1\" (UID: \"3c72a33e-5422-4a12-a6ab-7774564229a1\") " Jan 29 09:01:10 crc kubenswrapper[5017]: I0129 09:01:10.976938 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3c72a33e-5422-4a12-a6ab-7774564229a1-fernet-keys\") pod \"3c72a33e-5422-4a12-a6ab-7774564229a1\" (UID: \"3c72a33e-5422-4a12-a6ab-7774564229a1\") " Jan 29 09:01:10 crc kubenswrapper[5017]: I0129 09:01:10.986555 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c72a33e-5422-4a12-a6ab-7774564229a1-kube-api-access-9tbt2" (OuterVolumeSpecName: "kube-api-access-9tbt2") pod "3c72a33e-5422-4a12-a6ab-7774564229a1" (UID: "3c72a33e-5422-4a12-a6ab-7774564229a1"). InnerVolumeSpecName "kube-api-access-9tbt2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:01:10 crc kubenswrapper[5017]: I0129 09:01:10.992519 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c72a33e-5422-4a12-a6ab-7774564229a1-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "3c72a33e-5422-4a12-a6ab-7774564229a1" (UID: "3c72a33e-5422-4a12-a6ab-7774564229a1"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:01:11 crc kubenswrapper[5017]: I0129 09:01:11.019529 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c72a33e-5422-4a12-a6ab-7774564229a1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3c72a33e-5422-4a12-a6ab-7774564229a1" (UID: "3c72a33e-5422-4a12-a6ab-7774564229a1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:01:11 crc kubenswrapper[5017]: I0129 09:01:11.054717 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c72a33e-5422-4a12-a6ab-7774564229a1-config-data" (OuterVolumeSpecName: "config-data") pod "3c72a33e-5422-4a12-a6ab-7774564229a1" (UID: "3c72a33e-5422-4a12-a6ab-7774564229a1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:01:11 crc kubenswrapper[5017]: I0129 09:01:11.079719 5017 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c72a33e-5422-4a12-a6ab-7774564229a1-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:11 crc kubenswrapper[5017]: I0129 09:01:11.079768 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9tbt2\" (UniqueName: \"kubernetes.io/projected/3c72a33e-5422-4a12-a6ab-7774564229a1-kube-api-access-9tbt2\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:11 crc kubenswrapper[5017]: I0129 09:01:11.079786 5017 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c72a33e-5422-4a12-a6ab-7774564229a1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:11 crc kubenswrapper[5017]: I0129 09:01:11.079798 5017 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3c72a33e-5422-4a12-a6ab-7774564229a1-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:11 crc kubenswrapper[5017]: I0129 09:01:11.480150 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29494621-2lmn7" event={"ID":"3c72a33e-5422-4a12-a6ab-7774564229a1","Type":"ContainerDied","Data":"c0c79a2d14d0e59673757b5b6d5b263ac5fed385d937158c958cb94460d1d4f0"} Jan 29 09:01:11 crc kubenswrapper[5017]: I0129 09:01:11.480761 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c0c79a2d14d0e59673757b5b6d5b263ac5fed385d937158c958cb94460d1d4f0" Jan 29 09:01:11 crc kubenswrapper[5017]: I0129 09:01:11.480482 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29494621-2lmn7" Jan 29 09:01:12 crc kubenswrapper[5017]: I0129 09:01:12.531043 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b2wfs" event={"ID":"2d396b73-5e05-4e10-be85-8fe4cce33c16","Type":"ContainerStarted","Data":"b978564dff858f2c63340bd7865cdcf8fa621bb38bc6d5b74c9797befef548f8"} Jan 29 09:01:12 crc kubenswrapper[5017]: I0129 09:01:12.563148 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-b2wfs" podStartSLOduration=3.3533969089999998 podStartE2EDuration="11.563118031s" podCreationTimestamp="2026-01-29 09:01:01 +0000 UTC" firstStartedPulling="2026-01-29 09:01:03.3789383 +0000 UTC m=+8749.753385910" lastFinishedPulling="2026-01-29 09:01:11.588659422 +0000 UTC m=+8757.963107032" observedRunningTime="2026-01-29 09:01:12.550143604 +0000 UTC m=+8758.924591214" watchObservedRunningTime="2026-01-29 09:01:12.563118031 +0000 UTC m=+8758.937565641" Jan 29 09:01:18 crc kubenswrapper[5017]: I0129 09:01:18.317605 5017 scope.go:117] "RemoveContainer" containerID="13928f2812286efc277648be35e4bcee00e37c7735d5bc258ca6261e12d1b284" Jan 29 09:01:18 crc kubenswrapper[5017]: E0129 09:01:18.318780 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 09:01:21 crc kubenswrapper[5017]: I0129 09:01:21.675227 5017 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-b2wfs" Jan 29 09:01:21 crc kubenswrapper[5017]: I0129 09:01:21.676117 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-b2wfs" Jan 29 09:01:21 crc kubenswrapper[5017]: I0129 09:01:21.872097 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-b2wfs" Jan 29 09:01:22 crc kubenswrapper[5017]: I0129 09:01:22.704655 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-b2wfs" Jan 29 09:01:22 crc kubenswrapper[5017]: I0129 09:01:22.758820 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-b2wfs"] Jan 29 09:01:24 crc kubenswrapper[5017]: I0129 09:01:24.669078 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-b2wfs" podUID="2d396b73-5e05-4e10-be85-8fe4cce33c16" containerName="registry-server" containerID="cri-o://b978564dff858f2c63340bd7865cdcf8fa621bb38bc6d5b74c9797befef548f8" gracePeriod=2 Jan 29 09:01:25 crc kubenswrapper[5017]: I0129 09:01:25.222212 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-b2wfs" Jan 29 09:01:25 crc kubenswrapper[5017]: I0129 09:01:25.375826 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pdlf9\" (UniqueName: \"kubernetes.io/projected/2d396b73-5e05-4e10-be85-8fe4cce33c16-kube-api-access-pdlf9\") pod \"2d396b73-5e05-4e10-be85-8fe4cce33c16\" (UID: \"2d396b73-5e05-4e10-be85-8fe4cce33c16\") " Jan 29 09:01:25 crc kubenswrapper[5017]: I0129 09:01:25.376425 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d396b73-5e05-4e10-be85-8fe4cce33c16-utilities\") pod \"2d396b73-5e05-4e10-be85-8fe4cce33c16\" (UID: \"2d396b73-5e05-4e10-be85-8fe4cce33c16\") " Jan 29 09:01:25 crc kubenswrapper[5017]: I0129 09:01:25.376615 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d396b73-5e05-4e10-be85-8fe4cce33c16-catalog-content\") pod \"2d396b73-5e05-4e10-be85-8fe4cce33c16\" (UID: \"2d396b73-5e05-4e10-be85-8fe4cce33c16\") " Jan 29 09:01:25 crc kubenswrapper[5017]: I0129 09:01:25.377359 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d396b73-5e05-4e10-be85-8fe4cce33c16-utilities" (OuterVolumeSpecName: "utilities") pod "2d396b73-5e05-4e10-be85-8fe4cce33c16" (UID: "2d396b73-5e05-4e10-be85-8fe4cce33c16"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:01:25 crc kubenswrapper[5017]: I0129 09:01:25.384368 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d396b73-5e05-4e10-be85-8fe4cce33c16-kube-api-access-pdlf9" (OuterVolumeSpecName: "kube-api-access-pdlf9") pod "2d396b73-5e05-4e10-be85-8fe4cce33c16" (UID: "2d396b73-5e05-4e10-be85-8fe4cce33c16"). InnerVolumeSpecName "kube-api-access-pdlf9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:01:25 crc kubenswrapper[5017]: I0129 09:01:25.442432 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d396b73-5e05-4e10-be85-8fe4cce33c16-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2d396b73-5e05-4e10-be85-8fe4cce33c16" (UID: "2d396b73-5e05-4e10-be85-8fe4cce33c16"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:01:25 crc kubenswrapper[5017]: I0129 09:01:25.479936 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pdlf9\" (UniqueName: \"kubernetes.io/projected/2d396b73-5e05-4e10-be85-8fe4cce33c16-kube-api-access-pdlf9\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:25 crc kubenswrapper[5017]: I0129 09:01:25.480017 5017 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d396b73-5e05-4e10-be85-8fe4cce33c16-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:25 crc kubenswrapper[5017]: I0129 09:01:25.480030 5017 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d396b73-5e05-4e10-be85-8fe4cce33c16-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:25 crc kubenswrapper[5017]: I0129 09:01:25.683036 5017 generic.go:334] "Generic (PLEG): container finished" podID="2d396b73-5e05-4e10-be85-8fe4cce33c16" containerID="b978564dff858f2c63340bd7865cdcf8fa621bb38bc6d5b74c9797befef548f8" exitCode=0 Jan 29 09:01:25 crc kubenswrapper[5017]: I0129 09:01:25.683093 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b2wfs" event={"ID":"2d396b73-5e05-4e10-be85-8fe4cce33c16","Type":"ContainerDied","Data":"b978564dff858f2c63340bd7865cdcf8fa621bb38bc6d5b74c9797befef548f8"} Jan 29 09:01:25 crc kubenswrapper[5017]: I0129 09:01:25.683130 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b2wfs" event={"ID":"2d396b73-5e05-4e10-be85-8fe4cce33c16","Type":"ContainerDied","Data":"530767b786e08f4604dbb8814db318cad3993013ece69ff8e8512f7706865d8d"} Jan 29 09:01:25 crc kubenswrapper[5017]: I0129 09:01:25.683161 5017 scope.go:117] "RemoveContainer" containerID="b978564dff858f2c63340bd7865cdcf8fa621bb38bc6d5b74c9797befef548f8" Jan 29 09:01:25 crc kubenswrapper[5017]: I0129 09:01:25.683162 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-b2wfs" Jan 29 09:01:25 crc kubenswrapper[5017]: I0129 09:01:25.706377 5017 scope.go:117] "RemoveContainer" containerID="0f7a94e1af730ddfbd25b6aff9d3239cfe574767f8b3a41bff6f3793deb44f36" Jan 29 09:01:25 crc kubenswrapper[5017]: I0129 09:01:25.730846 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-b2wfs"] Jan 29 09:01:25 crc kubenswrapper[5017]: I0129 09:01:25.746267 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-b2wfs"] Jan 29 09:01:25 crc kubenswrapper[5017]: I0129 09:01:25.760094 5017 scope.go:117] "RemoveContainer" containerID="f5e55d3740f4e78ee0dac799fefae22bd30559b66ec2cbc333d85b275e089fb9" Jan 29 09:01:25 crc kubenswrapper[5017]: I0129 09:01:25.785352 5017 scope.go:117] "RemoveContainer" containerID="b978564dff858f2c63340bd7865cdcf8fa621bb38bc6d5b74c9797befef548f8" Jan 29 09:01:25 crc kubenswrapper[5017]: E0129 09:01:25.789819 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b978564dff858f2c63340bd7865cdcf8fa621bb38bc6d5b74c9797befef548f8\": container with ID starting with b978564dff858f2c63340bd7865cdcf8fa621bb38bc6d5b74c9797befef548f8 not found: ID does not exist" containerID="b978564dff858f2c63340bd7865cdcf8fa621bb38bc6d5b74c9797befef548f8" Jan 29 09:01:25 crc kubenswrapper[5017]: I0129 09:01:25.789890 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b978564dff858f2c63340bd7865cdcf8fa621bb38bc6d5b74c9797befef548f8"} err="failed to get container status \"b978564dff858f2c63340bd7865cdcf8fa621bb38bc6d5b74c9797befef548f8\": rpc error: code = NotFound desc = could not find container \"b978564dff858f2c63340bd7865cdcf8fa621bb38bc6d5b74c9797befef548f8\": container with ID starting with b978564dff858f2c63340bd7865cdcf8fa621bb38bc6d5b74c9797befef548f8 not found: ID does not exist" Jan 29 09:01:25 crc kubenswrapper[5017]: I0129 09:01:25.789925 5017 scope.go:117] "RemoveContainer" containerID="0f7a94e1af730ddfbd25b6aff9d3239cfe574767f8b3a41bff6f3793deb44f36" Jan 29 09:01:25 crc kubenswrapper[5017]: E0129 09:01:25.793224 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f7a94e1af730ddfbd25b6aff9d3239cfe574767f8b3a41bff6f3793deb44f36\": container with ID starting with 0f7a94e1af730ddfbd25b6aff9d3239cfe574767f8b3a41bff6f3793deb44f36 not found: ID does not exist" containerID="0f7a94e1af730ddfbd25b6aff9d3239cfe574767f8b3a41bff6f3793deb44f36" Jan 29 09:01:25 crc kubenswrapper[5017]: I0129 09:01:25.793297 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f7a94e1af730ddfbd25b6aff9d3239cfe574767f8b3a41bff6f3793deb44f36"} err="failed to get container status \"0f7a94e1af730ddfbd25b6aff9d3239cfe574767f8b3a41bff6f3793deb44f36\": rpc error: code = NotFound desc = could not find container \"0f7a94e1af730ddfbd25b6aff9d3239cfe574767f8b3a41bff6f3793deb44f36\": container with ID starting with 0f7a94e1af730ddfbd25b6aff9d3239cfe574767f8b3a41bff6f3793deb44f36 not found: ID does not exist" Jan 29 09:01:25 crc kubenswrapper[5017]: I0129 09:01:25.793339 5017 scope.go:117] "RemoveContainer" containerID="f5e55d3740f4e78ee0dac799fefae22bd30559b66ec2cbc333d85b275e089fb9" Jan 29 09:01:25 crc kubenswrapper[5017]: E0129 09:01:25.794044 5017 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"f5e55d3740f4e78ee0dac799fefae22bd30559b66ec2cbc333d85b275e089fb9\": container with ID starting with f5e55d3740f4e78ee0dac799fefae22bd30559b66ec2cbc333d85b275e089fb9 not found: ID does not exist" containerID="f5e55d3740f4e78ee0dac799fefae22bd30559b66ec2cbc333d85b275e089fb9" Jan 29 09:01:25 crc kubenswrapper[5017]: I0129 09:01:25.794195 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5e55d3740f4e78ee0dac799fefae22bd30559b66ec2cbc333d85b275e089fb9"} err="failed to get container status \"f5e55d3740f4e78ee0dac799fefae22bd30559b66ec2cbc333d85b275e089fb9\": rpc error: code = NotFound desc = could not find container \"f5e55d3740f4e78ee0dac799fefae22bd30559b66ec2cbc333d85b275e089fb9\": container with ID starting with f5e55d3740f4e78ee0dac799fefae22bd30559b66ec2cbc333d85b275e089fb9 not found: ID does not exist" Jan 29 09:01:26 crc kubenswrapper[5017]: I0129 09:01:26.341313 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d396b73-5e05-4e10-be85-8fe4cce33c16" path="/var/lib/kubelet/pods/2d396b73-5e05-4e10-be85-8fe4cce33c16/volumes" Jan 29 09:01:33 crc kubenswrapper[5017]: I0129 09:01:33.317089 5017 scope.go:117] "RemoveContainer" containerID="13928f2812286efc277648be35e4bcee00e37c7735d5bc258ca6261e12d1b284" Jan 29 09:01:33 crc kubenswrapper[5017]: I0129 09:01:33.762791 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-895pl" event={"ID":"2672ef63-7861-4c3d-a1b4-03cc9d18f8e2","Type":"ContainerStarted","Data":"f271fa69073cc0149b78a2dbb3887d13cb9a0054b8cdd1365d2b12d3284bad46"} Jan 29 09:02:41 crc kubenswrapper[5017]: I0129 09:02:41.880020 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-96v8q"] Jan 29 09:02:41 crc kubenswrapper[5017]: E0129 09:02:41.881753 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d396b73-5e05-4e10-be85-8fe4cce33c16" containerName="extract-content" Jan 29 09:02:41 crc kubenswrapper[5017]: I0129 09:02:41.881780 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d396b73-5e05-4e10-be85-8fe4cce33c16" containerName="extract-content" Jan 29 09:02:41 crc kubenswrapper[5017]: E0129 09:02:41.881876 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d396b73-5e05-4e10-be85-8fe4cce33c16" containerName="extract-utilities" Jan 29 09:02:41 crc kubenswrapper[5017]: I0129 09:02:41.881888 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d396b73-5e05-4e10-be85-8fe4cce33c16" containerName="extract-utilities" Jan 29 09:02:41 crc kubenswrapper[5017]: E0129 09:02:41.881908 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c72a33e-5422-4a12-a6ab-7774564229a1" containerName="keystone-cron" Jan 29 09:02:41 crc kubenswrapper[5017]: I0129 09:02:41.881920 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c72a33e-5422-4a12-a6ab-7774564229a1" containerName="keystone-cron" Jan 29 09:02:41 crc kubenswrapper[5017]: E0129 09:02:41.881937 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d396b73-5e05-4e10-be85-8fe4cce33c16" containerName="registry-server" Jan 29 09:02:41 crc kubenswrapper[5017]: I0129 09:02:41.881944 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d396b73-5e05-4e10-be85-8fe4cce33c16" containerName="registry-server" Jan 29 09:02:41 crc kubenswrapper[5017]: I0129 09:02:41.883189 5017 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="3c72a33e-5422-4a12-a6ab-7774564229a1" containerName="keystone-cron" Jan 29 09:02:41 crc kubenswrapper[5017]: I0129 09:02:41.883232 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d396b73-5e05-4e10-be85-8fe4cce33c16" containerName="registry-server" Jan 29 09:02:41 crc kubenswrapper[5017]: I0129 09:02:41.885351 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-96v8q" Jan 29 09:02:41 crc kubenswrapper[5017]: I0129 09:02:41.895539 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-96v8q"] Jan 29 09:02:41 crc kubenswrapper[5017]: I0129 09:02:41.910482 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e814c0da-f5da-4ded-8232-3582377aff11-catalog-content\") pod \"redhat-operators-96v8q\" (UID: \"e814c0da-f5da-4ded-8232-3582377aff11\") " pod="openshift-marketplace/redhat-operators-96v8q" Jan 29 09:02:41 crc kubenswrapper[5017]: I0129 09:02:41.910967 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8cgq\" (UniqueName: \"kubernetes.io/projected/e814c0da-f5da-4ded-8232-3582377aff11-kube-api-access-t8cgq\") pod \"redhat-operators-96v8q\" (UID: \"e814c0da-f5da-4ded-8232-3582377aff11\") " pod="openshift-marketplace/redhat-operators-96v8q" Jan 29 09:02:41 crc kubenswrapper[5017]: I0129 09:02:41.911088 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e814c0da-f5da-4ded-8232-3582377aff11-utilities\") pod \"redhat-operators-96v8q\" (UID: \"e814c0da-f5da-4ded-8232-3582377aff11\") " pod="openshift-marketplace/redhat-operators-96v8q" Jan 29 09:02:42 crc kubenswrapper[5017]: I0129 09:02:42.013123 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e814c0da-f5da-4ded-8232-3582377aff11-utilities\") pod \"redhat-operators-96v8q\" (UID: \"e814c0da-f5da-4ded-8232-3582377aff11\") " pod="openshift-marketplace/redhat-operators-96v8q" Jan 29 09:02:42 crc kubenswrapper[5017]: I0129 09:02:42.013581 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e814c0da-f5da-4ded-8232-3582377aff11-catalog-content\") pod \"redhat-operators-96v8q\" (UID: \"e814c0da-f5da-4ded-8232-3582377aff11\") " pod="openshift-marketplace/redhat-operators-96v8q" Jan 29 09:02:42 crc kubenswrapper[5017]: I0129 09:02:42.013859 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e814c0da-f5da-4ded-8232-3582377aff11-utilities\") pod \"redhat-operators-96v8q\" (UID: \"e814c0da-f5da-4ded-8232-3582377aff11\") " pod="openshift-marketplace/redhat-operators-96v8q" Jan 29 09:02:42 crc kubenswrapper[5017]: I0129 09:02:42.013883 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8cgq\" (UniqueName: \"kubernetes.io/projected/e814c0da-f5da-4ded-8232-3582377aff11-kube-api-access-t8cgq\") pod \"redhat-operators-96v8q\" (UID: \"e814c0da-f5da-4ded-8232-3582377aff11\") " pod="openshift-marketplace/redhat-operators-96v8q" Jan 29 09:02:42 crc kubenswrapper[5017]: I0129 09:02:42.014015 5017 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e814c0da-f5da-4ded-8232-3582377aff11-catalog-content\") pod \"redhat-operators-96v8q\" (UID: \"e814c0da-f5da-4ded-8232-3582377aff11\") " pod="openshift-marketplace/redhat-operators-96v8q" Jan 29 09:02:42 crc kubenswrapper[5017]: I0129 09:02:42.037619 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8cgq\" (UniqueName: \"kubernetes.io/projected/e814c0da-f5da-4ded-8232-3582377aff11-kube-api-access-t8cgq\") pod \"redhat-operators-96v8q\" (UID: \"e814c0da-f5da-4ded-8232-3582377aff11\") " pod="openshift-marketplace/redhat-operators-96v8q" Jan 29 09:02:42 crc kubenswrapper[5017]: I0129 09:02:42.239929 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-96v8q" Jan 29 09:02:42 crc kubenswrapper[5017]: I0129 09:02:42.802484 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-96v8q"] Jan 29 09:02:43 crc kubenswrapper[5017]: I0129 09:02:43.563579 5017 generic.go:334] "Generic (PLEG): container finished" podID="e814c0da-f5da-4ded-8232-3582377aff11" containerID="24543d776a651bc87108ca4adf0d2d02f1d5be28f7d5dae8aa9e965db0109db0" exitCode=0 Jan 29 09:02:43 crc kubenswrapper[5017]: I0129 09:02:43.563700 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-96v8q" event={"ID":"e814c0da-f5da-4ded-8232-3582377aff11","Type":"ContainerDied","Data":"24543d776a651bc87108ca4adf0d2d02f1d5be28f7d5dae8aa9e965db0109db0"} Jan 29 09:02:43 crc kubenswrapper[5017]: I0129 09:02:43.564076 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-96v8q" event={"ID":"e814c0da-f5da-4ded-8232-3582377aff11","Type":"ContainerStarted","Data":"f76ce188d8b6968367836a0929e34b20e99e4140a0d4d837e04726e761f05770"} Jan 29 09:02:44 crc kubenswrapper[5017]: I0129 09:02:44.576441 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-96v8q" event={"ID":"e814c0da-f5da-4ded-8232-3582377aff11","Type":"ContainerStarted","Data":"786107f714b8a41b83a481fb61aedb841b75810cab20908262ed0d149443f136"} Jan 29 09:02:47 crc kubenswrapper[5017]: I0129 09:02:47.259146 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hgltb"] Jan 29 09:02:47 crc kubenswrapper[5017]: I0129 09:02:47.263145 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hgltb" Jan 29 09:02:47 crc kubenswrapper[5017]: I0129 09:02:47.330600 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hgltb"] Jan 29 09:02:47 crc kubenswrapper[5017]: I0129 09:02:47.367496 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd6008f7-6aaa-4c95-8a8e-3508e94fd3cd-catalog-content\") pod \"redhat-marketplace-hgltb\" (UID: \"dd6008f7-6aaa-4c95-8a8e-3508e94fd3cd\") " pod="openshift-marketplace/redhat-marketplace-hgltb" Jan 29 09:02:47 crc kubenswrapper[5017]: I0129 09:02:47.367589 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lh5q6\" (UniqueName: \"kubernetes.io/projected/dd6008f7-6aaa-4c95-8a8e-3508e94fd3cd-kube-api-access-lh5q6\") pod \"redhat-marketplace-hgltb\" (UID: \"dd6008f7-6aaa-4c95-8a8e-3508e94fd3cd\") " pod="openshift-marketplace/redhat-marketplace-hgltb" Jan 29 09:02:47 crc kubenswrapper[5017]: I0129 09:02:47.367632 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd6008f7-6aaa-4c95-8a8e-3508e94fd3cd-utilities\") pod \"redhat-marketplace-hgltb\" (UID: \"dd6008f7-6aaa-4c95-8a8e-3508e94fd3cd\") " pod="openshift-marketplace/redhat-marketplace-hgltb" Jan 29 09:02:47 crc kubenswrapper[5017]: I0129 09:02:47.469442 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd6008f7-6aaa-4c95-8a8e-3508e94fd3cd-catalog-content\") pod \"redhat-marketplace-hgltb\" (UID: \"dd6008f7-6aaa-4c95-8a8e-3508e94fd3cd\") " pod="openshift-marketplace/redhat-marketplace-hgltb" Jan 29 09:02:47 crc kubenswrapper[5017]: I0129 09:02:47.469516 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lh5q6\" (UniqueName: \"kubernetes.io/projected/dd6008f7-6aaa-4c95-8a8e-3508e94fd3cd-kube-api-access-lh5q6\") pod \"redhat-marketplace-hgltb\" (UID: \"dd6008f7-6aaa-4c95-8a8e-3508e94fd3cd\") " pod="openshift-marketplace/redhat-marketplace-hgltb" Jan 29 09:02:47 crc kubenswrapper[5017]: I0129 09:02:47.469547 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd6008f7-6aaa-4c95-8a8e-3508e94fd3cd-utilities\") pod \"redhat-marketplace-hgltb\" (UID: \"dd6008f7-6aaa-4c95-8a8e-3508e94fd3cd\") " pod="openshift-marketplace/redhat-marketplace-hgltb" Jan 29 09:02:47 crc kubenswrapper[5017]: I0129 09:02:47.470237 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd6008f7-6aaa-4c95-8a8e-3508e94fd3cd-catalog-content\") pod \"redhat-marketplace-hgltb\" (UID: \"dd6008f7-6aaa-4c95-8a8e-3508e94fd3cd\") " pod="openshift-marketplace/redhat-marketplace-hgltb" Jan 29 09:02:47 crc kubenswrapper[5017]: I0129 09:02:47.470318 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd6008f7-6aaa-4c95-8a8e-3508e94fd3cd-utilities\") pod \"redhat-marketplace-hgltb\" (UID: \"dd6008f7-6aaa-4c95-8a8e-3508e94fd3cd\") " pod="openshift-marketplace/redhat-marketplace-hgltb" Jan 29 09:02:47 crc kubenswrapper[5017]: I0129 09:02:47.501036 5017 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-lh5q6\" (UniqueName: \"kubernetes.io/projected/dd6008f7-6aaa-4c95-8a8e-3508e94fd3cd-kube-api-access-lh5q6\") pod \"redhat-marketplace-hgltb\" (UID: \"dd6008f7-6aaa-4c95-8a8e-3508e94fd3cd\") " pod="openshift-marketplace/redhat-marketplace-hgltb" Jan 29 09:02:47 crc kubenswrapper[5017]: I0129 09:02:47.594538 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hgltb" Jan 29 09:02:48 crc kubenswrapper[5017]: I0129 09:02:48.138610 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hgltb"] Jan 29 09:02:48 crc kubenswrapper[5017]: W0129 09:02:48.143854 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddd6008f7_6aaa_4c95_8a8e_3508e94fd3cd.slice/crio-044102e530f3378481571fb15fe1824f8d1a518c9469aa65fa93e42857330d71 WatchSource:0}: Error finding container 044102e530f3378481571fb15fe1824f8d1a518c9469aa65fa93e42857330d71: Status 404 returned error can't find the container with id 044102e530f3378481571fb15fe1824f8d1a518c9469aa65fa93e42857330d71 Jan 29 09:02:48 crc kubenswrapper[5017]: I0129 09:02:48.623443 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hgltb" event={"ID":"dd6008f7-6aaa-4c95-8a8e-3508e94fd3cd","Type":"ContainerStarted","Data":"044102e530f3378481571fb15fe1824f8d1a518c9469aa65fa93e42857330d71"} Jan 29 09:02:49 crc kubenswrapper[5017]: I0129 09:02:49.638314 5017 generic.go:334] "Generic (PLEG): container finished" podID="dd6008f7-6aaa-4c95-8a8e-3508e94fd3cd" containerID="4a458588a074c22538e71a18122fea4fd1f9a4305abcb337fbae601985d20838" exitCode=0 Jan 29 09:02:49 crc kubenswrapper[5017]: I0129 09:02:49.638787 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hgltb" event={"ID":"dd6008f7-6aaa-4c95-8a8e-3508e94fd3cd","Type":"ContainerDied","Data":"4a458588a074c22538e71a18122fea4fd1f9a4305abcb337fbae601985d20838"} Jan 29 09:02:51 crc kubenswrapper[5017]: I0129 09:02:51.673512 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hgltb" event={"ID":"dd6008f7-6aaa-4c95-8a8e-3508e94fd3cd","Type":"ContainerStarted","Data":"3ec44778ee3d8faa4849f7416e2fbeeee5dc6e11dee7b39c01e0a43cab544654"} Jan 29 09:02:54 crc kubenswrapper[5017]: I0129 09:02:54.727111 5017 generic.go:334] "Generic (PLEG): container finished" podID="dd6008f7-6aaa-4c95-8a8e-3508e94fd3cd" containerID="3ec44778ee3d8faa4849f7416e2fbeeee5dc6e11dee7b39c01e0a43cab544654" exitCode=0 Jan 29 09:02:54 crc kubenswrapper[5017]: I0129 09:02:54.727190 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hgltb" event={"ID":"dd6008f7-6aaa-4c95-8a8e-3508e94fd3cd","Type":"ContainerDied","Data":"3ec44778ee3d8faa4849f7416e2fbeeee5dc6e11dee7b39c01e0a43cab544654"} Jan 29 09:02:56 crc kubenswrapper[5017]: I0129 09:02:56.761659 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hgltb" event={"ID":"dd6008f7-6aaa-4c95-8a8e-3508e94fd3cd","Type":"ContainerStarted","Data":"7bc35c57b139c7204a63566fa45ea2d2038f5dfb7f9b29d83455dbd20e0dc819"} Jan 29 09:02:56 crc kubenswrapper[5017]: I0129 09:02:56.800684 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hgltb" podStartSLOduration=4.238735039 
podStartE2EDuration="9.800054621s" podCreationTimestamp="2026-01-29 09:02:47 +0000 UTC" firstStartedPulling="2026-01-29 09:02:49.640868948 +0000 UTC m=+8856.015316558" lastFinishedPulling="2026-01-29 09:02:55.20218853 +0000 UTC m=+8861.576636140" observedRunningTime="2026-01-29 09:02:56.782443641 +0000 UTC m=+8863.156891281" watchObservedRunningTime="2026-01-29 09:02:56.800054621 +0000 UTC m=+8863.174502231" Jan 29 09:02:57 crc kubenswrapper[5017]: I0129 09:02:57.596287 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hgltb" Jan 29 09:02:57 crc kubenswrapper[5017]: I0129 09:02:57.596349 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hgltb" Jan 29 09:02:57 crc kubenswrapper[5017]: I0129 09:02:57.778128 5017 generic.go:334] "Generic (PLEG): container finished" podID="e814c0da-f5da-4ded-8232-3582377aff11" containerID="786107f714b8a41b83a481fb61aedb841b75810cab20908262ed0d149443f136" exitCode=0 Jan 29 09:02:57 crc kubenswrapper[5017]: I0129 09:02:57.779558 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-96v8q" event={"ID":"e814c0da-f5da-4ded-8232-3582377aff11","Type":"ContainerDied","Data":"786107f714b8a41b83a481fb61aedb841b75810cab20908262ed0d149443f136"} Jan 29 09:02:58 crc kubenswrapper[5017]: I0129 09:02:58.651748 5017 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-hgltb" podUID="dd6008f7-6aaa-4c95-8a8e-3508e94fd3cd" containerName="registry-server" probeResult="failure" output=< Jan 29 09:02:58 crc kubenswrapper[5017]: timeout: failed to connect service ":50051" within 1s Jan 29 09:02:58 crc kubenswrapper[5017]: > Jan 29 09:02:58 crc kubenswrapper[5017]: I0129 09:02:58.793002 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-96v8q" event={"ID":"e814c0da-f5da-4ded-8232-3582377aff11","Type":"ContainerStarted","Data":"4e496be77ecffc4f20cee5ba1264db8f9d9768bfd8c10bc2e1f2d8653940aa52"} Jan 29 09:02:58 crc kubenswrapper[5017]: I0129 09:02:58.822800 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-96v8q" podStartSLOduration=3.128811768 podStartE2EDuration="17.822768472s" podCreationTimestamp="2026-01-29 09:02:41 +0000 UTC" firstStartedPulling="2026-01-29 09:02:43.565773692 +0000 UTC m=+8849.940221302" lastFinishedPulling="2026-01-29 09:02:58.259730396 +0000 UTC m=+8864.634178006" observedRunningTime="2026-01-29 09:02:58.811232671 +0000 UTC m=+8865.185680311" watchObservedRunningTime="2026-01-29 09:02:58.822768472 +0000 UTC m=+8865.197216082" Jan 29 09:03:02 crc kubenswrapper[5017]: I0129 09:03:02.240299 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-96v8q" Jan 29 09:03:02 crc kubenswrapper[5017]: I0129 09:03:02.241277 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-96v8q" Jan 29 09:03:03 crc kubenswrapper[5017]: I0129 09:03:03.298339 5017 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-96v8q" podUID="e814c0da-f5da-4ded-8232-3582377aff11" containerName="registry-server" probeResult="failure" output=< Jan 29 09:03:03 crc kubenswrapper[5017]: timeout: failed to connect service ":50051" within 1s Jan 29 09:03:03 crc kubenswrapper[5017]: > Jan 29 09:03:07 crc 
kubenswrapper[5017]: I0129 09:03:07.649910 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hgltb" Jan 29 09:03:07 crc kubenswrapper[5017]: I0129 09:03:07.708286 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hgltb" Jan 29 09:03:07 crc kubenswrapper[5017]: I0129 09:03:07.896350 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hgltb"] Jan 29 09:03:08 crc kubenswrapper[5017]: I0129 09:03:08.908136 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-hgltb" podUID="dd6008f7-6aaa-4c95-8a8e-3508e94fd3cd" containerName="registry-server" containerID="cri-o://7bc35c57b139c7204a63566fa45ea2d2038f5dfb7f9b29d83455dbd20e0dc819" gracePeriod=2 Jan 29 09:03:09 crc kubenswrapper[5017]: I0129 09:03:09.926498 5017 generic.go:334] "Generic (PLEG): container finished" podID="dd6008f7-6aaa-4c95-8a8e-3508e94fd3cd" containerID="7bc35c57b139c7204a63566fa45ea2d2038f5dfb7f9b29d83455dbd20e0dc819" exitCode=0 Jan 29 09:03:09 crc kubenswrapper[5017]: I0129 09:03:09.927664 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hgltb" event={"ID":"dd6008f7-6aaa-4c95-8a8e-3508e94fd3cd","Type":"ContainerDied","Data":"7bc35c57b139c7204a63566fa45ea2d2038f5dfb7f9b29d83455dbd20e0dc819"} Jan 29 09:03:10 crc kubenswrapper[5017]: I0129 09:03:10.567922 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hgltb" Jan 29 09:03:10 crc kubenswrapper[5017]: I0129 09:03:10.649542 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd6008f7-6aaa-4c95-8a8e-3508e94fd3cd-catalog-content\") pod \"dd6008f7-6aaa-4c95-8a8e-3508e94fd3cd\" (UID: \"dd6008f7-6aaa-4c95-8a8e-3508e94fd3cd\") " Jan 29 09:03:10 crc kubenswrapper[5017]: I0129 09:03:10.650313 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lh5q6\" (UniqueName: \"kubernetes.io/projected/dd6008f7-6aaa-4c95-8a8e-3508e94fd3cd-kube-api-access-lh5q6\") pod \"dd6008f7-6aaa-4c95-8a8e-3508e94fd3cd\" (UID: \"dd6008f7-6aaa-4c95-8a8e-3508e94fd3cd\") " Jan 29 09:03:10 crc kubenswrapper[5017]: I0129 09:03:10.658908 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd6008f7-6aaa-4c95-8a8e-3508e94fd3cd-kube-api-access-lh5q6" (OuterVolumeSpecName: "kube-api-access-lh5q6") pod "dd6008f7-6aaa-4c95-8a8e-3508e94fd3cd" (UID: "dd6008f7-6aaa-4c95-8a8e-3508e94fd3cd"). InnerVolumeSpecName "kube-api-access-lh5q6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:03:10 crc kubenswrapper[5017]: I0129 09:03:10.678386 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd6008f7-6aaa-4c95-8a8e-3508e94fd3cd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dd6008f7-6aaa-4c95-8a8e-3508e94fd3cd" (UID: "dd6008f7-6aaa-4c95-8a8e-3508e94fd3cd"). InnerVolumeSpecName "catalog-content". 
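
"Killing container with a grace period" with gracePeriod=2 above means the runtime delivers SIGTERM and escalates to SIGKILL if the container is still running two seconds later. A self-contained toy of the same pattern against a local process (plain os/exec, not CRI-O code):

    package main

    import (
        "fmt"
        "os/exec"
        "syscall"
        "time"
    )

    // Toy graceful kill: SIGTERM first, SIGKILL if the process is still
    // alive after the grace period. Illustrates the pattern only.
    func killWithGrace(cmd *exec.Cmd, grace time.Duration) {
        _ = cmd.Process.Signal(syscall.SIGTERM)
        done := make(chan error, 1)
        go func() { done <- cmd.Wait() }()
        select {
        case <-done:
            fmt.Println("exited within grace period")
        case <-time.After(grace):
            _ = cmd.Process.Kill() // escalate to SIGKILL
            <-done
            fmt.Println("killed after grace period expired")
        }
    }

    func main() {
        cmd := exec.Command("sleep", "60") // exits on SIGTERM, so the first branch fires
        if err := cmd.Start(); err != nil {
            panic(err)
        }
        killWithGrace(cmd, 2*time.Second) // gracePeriod=2, as in the log
    }

Here the registry-server exits within the window, which is why the ContainerDied event above arrives with exitCode=0 rather than 137.
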
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:03:10 crc kubenswrapper[5017]: I0129 09:03:10.752376 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd6008f7-6aaa-4c95-8a8e-3508e94fd3cd-utilities\") pod \"dd6008f7-6aaa-4c95-8a8e-3508e94fd3cd\" (UID: \"dd6008f7-6aaa-4c95-8a8e-3508e94fd3cd\") " Jan 29 09:03:10 crc kubenswrapper[5017]: I0129 09:03:10.753558 5017 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd6008f7-6aaa-4c95-8a8e-3508e94fd3cd-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 09:03:10 crc kubenswrapper[5017]: I0129 09:03:10.753582 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lh5q6\" (UniqueName: \"kubernetes.io/projected/dd6008f7-6aaa-4c95-8a8e-3508e94fd3cd-kube-api-access-lh5q6\") on node \"crc\" DevicePath \"\"" Jan 29 09:03:10 crc kubenswrapper[5017]: I0129 09:03:10.754238 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd6008f7-6aaa-4c95-8a8e-3508e94fd3cd-utilities" (OuterVolumeSpecName: "utilities") pod "dd6008f7-6aaa-4c95-8a8e-3508e94fd3cd" (UID: "dd6008f7-6aaa-4c95-8a8e-3508e94fd3cd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:03:10 crc kubenswrapper[5017]: I0129 09:03:10.855637 5017 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd6008f7-6aaa-4c95-8a8e-3508e94fd3cd-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 09:03:10 crc kubenswrapper[5017]: I0129 09:03:10.943828 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hgltb" event={"ID":"dd6008f7-6aaa-4c95-8a8e-3508e94fd3cd","Type":"ContainerDied","Data":"044102e530f3378481571fb15fe1824f8d1a518c9469aa65fa93e42857330d71"} Jan 29 09:03:10 crc kubenswrapper[5017]: I0129 09:03:10.943922 5017 scope.go:117] "RemoveContainer" containerID="7bc35c57b139c7204a63566fa45ea2d2038f5dfb7f9b29d83455dbd20e0dc819" Jan 29 09:03:10 crc kubenswrapper[5017]: I0129 09:03:10.943918 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hgltb" Jan 29 09:03:10 crc kubenswrapper[5017]: I0129 09:03:10.978466 5017 scope.go:117] "RemoveContainer" containerID="3ec44778ee3d8faa4849f7416e2fbeeee5dc6e11dee7b39c01e0a43cab544654" Jan 29 09:03:10 crc kubenswrapper[5017]: I0129 09:03:10.997170 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hgltb"] Jan 29 09:03:11 crc kubenswrapper[5017]: I0129 09:03:11.010900 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-hgltb"] Jan 29 09:03:11 crc kubenswrapper[5017]: I0129 09:03:11.028462 5017 scope.go:117] "RemoveContainer" containerID="4a458588a074c22538e71a18122fea4fd1f9a4305abcb337fbae601985d20838" Jan 29 09:03:12 crc kubenswrapper[5017]: I0129 09:03:12.334290 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd6008f7-6aaa-4c95-8a8e-3508e94fd3cd" path="/var/lib/kubelet/pods/dd6008f7-6aaa-4c95-8a8e-3508e94fd3cd/volumes" Jan 29 09:03:13 crc kubenswrapper[5017]: I0129 09:03:13.295676 5017 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-96v8q" podUID="e814c0da-f5da-4ded-8232-3582377aff11" containerName="registry-server" probeResult="failure" output=< Jan 29 09:03:13 crc kubenswrapper[5017]: timeout: failed to connect service ":50051" within 1s Jan 29 09:03:13 crc kubenswrapper[5017]: > Jan 29 09:03:19 crc kubenswrapper[5017]: I0129 09:03:19.620061 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-ncmdz"] Jan 29 09:03:19 crc kubenswrapper[5017]: E0129 09:03:19.625637 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd6008f7-6aaa-4c95-8a8e-3508e94fd3cd" containerName="extract-content" Jan 29 09:03:19 crc kubenswrapper[5017]: I0129 09:03:19.625672 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd6008f7-6aaa-4c95-8a8e-3508e94fd3cd" containerName="extract-content" Jan 29 09:03:19 crc kubenswrapper[5017]: E0129 09:03:19.625686 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd6008f7-6aaa-4c95-8a8e-3508e94fd3cd" containerName="extract-utilities" Jan 29 09:03:19 crc kubenswrapper[5017]: I0129 09:03:19.625693 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd6008f7-6aaa-4c95-8a8e-3508e94fd3cd" containerName="extract-utilities" Jan 29 09:03:19 crc kubenswrapper[5017]: E0129 09:03:19.625718 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd6008f7-6aaa-4c95-8a8e-3508e94fd3cd" containerName="registry-server" Jan 29 09:03:19 crc kubenswrapper[5017]: I0129 09:03:19.625724 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd6008f7-6aaa-4c95-8a8e-3508e94fd3cd" containerName="registry-server" Jan 29 09:03:19 crc kubenswrapper[5017]: I0129 09:03:19.626022 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd6008f7-6aaa-4c95-8a8e-3508e94fd3cd" containerName="registry-server" Jan 29 09:03:19 crc kubenswrapper[5017]: I0129 09:03:19.628107 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ncmdz" Jan 29 09:03:19 crc kubenswrapper[5017]: I0129 09:03:19.636058 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ncmdz"] Jan 29 09:03:19 crc kubenswrapper[5017]: I0129 09:03:19.672904 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a18c984-6dea-4528-b27c-1b1967d071f3-catalog-content\") pod \"certified-operators-ncmdz\" (UID: \"4a18c984-6dea-4528-b27c-1b1967d071f3\") " pod="openshift-marketplace/certified-operators-ncmdz" Jan 29 09:03:19 crc kubenswrapper[5017]: I0129 09:03:19.673117 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a18c984-6dea-4528-b27c-1b1967d071f3-utilities\") pod \"certified-operators-ncmdz\" (UID: \"4a18c984-6dea-4528-b27c-1b1967d071f3\") " pod="openshift-marketplace/certified-operators-ncmdz" Jan 29 09:03:19 crc kubenswrapper[5017]: I0129 09:03:19.673153 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zx5dc\" (UniqueName: \"kubernetes.io/projected/4a18c984-6dea-4528-b27c-1b1967d071f3-kube-api-access-zx5dc\") pod \"certified-operators-ncmdz\" (UID: \"4a18c984-6dea-4528-b27c-1b1967d071f3\") " pod="openshift-marketplace/certified-operators-ncmdz" Jan 29 09:03:19 crc kubenswrapper[5017]: I0129 09:03:19.775199 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a18c984-6dea-4528-b27c-1b1967d071f3-utilities\") pod \"certified-operators-ncmdz\" (UID: \"4a18c984-6dea-4528-b27c-1b1967d071f3\") " pod="openshift-marketplace/certified-operators-ncmdz" Jan 29 09:03:19 crc kubenswrapper[5017]: I0129 09:03:19.775269 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zx5dc\" (UniqueName: \"kubernetes.io/projected/4a18c984-6dea-4528-b27c-1b1967d071f3-kube-api-access-zx5dc\") pod \"certified-operators-ncmdz\" (UID: \"4a18c984-6dea-4528-b27c-1b1967d071f3\") " pod="openshift-marketplace/certified-operators-ncmdz" Jan 29 09:03:19 crc kubenswrapper[5017]: I0129 09:03:19.775373 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a18c984-6dea-4528-b27c-1b1967d071f3-catalog-content\") pod \"certified-operators-ncmdz\" (UID: \"4a18c984-6dea-4528-b27c-1b1967d071f3\") " pod="openshift-marketplace/certified-operators-ncmdz" Jan 29 09:03:19 crc kubenswrapper[5017]: I0129 09:03:19.776301 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a18c984-6dea-4528-b27c-1b1967d071f3-catalog-content\") pod \"certified-operators-ncmdz\" (UID: \"4a18c984-6dea-4528-b27c-1b1967d071f3\") " pod="openshift-marketplace/certified-operators-ncmdz" Jan 29 09:03:19 crc kubenswrapper[5017]: I0129 09:03:19.776579 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a18c984-6dea-4528-b27c-1b1967d071f3-utilities\") pod \"certified-operators-ncmdz\" (UID: \"4a18c984-6dea-4528-b27c-1b1967d071f3\") " pod="openshift-marketplace/certified-operators-ncmdz" Jan 29 09:03:19 crc kubenswrapper[5017]: I0129 09:03:19.803616 5017 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-zx5dc\" (UniqueName: \"kubernetes.io/projected/4a18c984-6dea-4528-b27c-1b1967d071f3-kube-api-access-zx5dc\") pod \"certified-operators-ncmdz\" (UID: \"4a18c984-6dea-4528-b27c-1b1967d071f3\") " pod="openshift-marketplace/certified-operators-ncmdz" Jan 29 09:03:19 crc kubenswrapper[5017]: I0129 09:03:19.965005 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ncmdz" Jan 29 09:03:20 crc kubenswrapper[5017]: I0129 09:03:20.561231 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ncmdz"] Jan 29 09:03:21 crc kubenswrapper[5017]: I0129 09:03:21.071231 5017 generic.go:334] "Generic (PLEG): container finished" podID="4a18c984-6dea-4528-b27c-1b1967d071f3" containerID="5018d84601a7753213524b0d2c2a861361da77e5211ac9d8d6cbcefdeaa1b093" exitCode=0 Jan 29 09:03:21 crc kubenswrapper[5017]: I0129 09:03:21.071344 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ncmdz" event={"ID":"4a18c984-6dea-4528-b27c-1b1967d071f3","Type":"ContainerDied","Data":"5018d84601a7753213524b0d2c2a861361da77e5211ac9d8d6cbcefdeaa1b093"} Jan 29 09:03:21 crc kubenswrapper[5017]: I0129 09:03:21.071679 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ncmdz" event={"ID":"4a18c984-6dea-4528-b27c-1b1967d071f3","Type":"ContainerStarted","Data":"4952a12d2bddb05499afb000dcc02b431cf6596eec88500a93bb2a4e4276a07a"} Jan 29 09:03:23 crc kubenswrapper[5017]: I0129 09:03:23.097901 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ncmdz" event={"ID":"4a18c984-6dea-4528-b27c-1b1967d071f3","Type":"ContainerStarted","Data":"d9144f95dfd5ed716240f3e400e4d7336510c1d17a5244f0e200adbbf876d193"} Jan 29 09:03:23 crc kubenswrapper[5017]: I0129 09:03:23.435102 5017 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-96v8q" podUID="e814c0da-f5da-4ded-8232-3582377aff11" containerName="registry-server" probeResult="failure" output=< Jan 29 09:03:23 crc kubenswrapper[5017]: timeout: failed to connect service ":50051" within 1s Jan 29 09:03:23 crc kubenswrapper[5017]: > Jan 29 09:03:30 crc kubenswrapper[5017]: I0129 09:03:30.183436 5017 generic.go:334] "Generic (PLEG): container finished" podID="4a18c984-6dea-4528-b27c-1b1967d071f3" containerID="d9144f95dfd5ed716240f3e400e4d7336510c1d17a5244f0e200adbbf876d193" exitCode=0 Jan 29 09:03:30 crc kubenswrapper[5017]: I0129 09:03:30.183551 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ncmdz" event={"ID":"4a18c984-6dea-4528-b27c-1b1967d071f3","Type":"ContainerDied","Data":"d9144f95dfd5ed716240f3e400e4d7336510c1d17a5244f0e200adbbf876d193"} Jan 29 09:03:32 crc kubenswrapper[5017]: I0129 09:03:32.204331 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ncmdz" event={"ID":"4a18c984-6dea-4528-b27c-1b1967d071f3","Type":"ContainerStarted","Data":"077a7a03355b5a187eab6e88b501ad6bb3a9422cd6d375bf74f0f68d81bb01c0"} Jan 29 09:03:32 crc kubenswrapper[5017]: I0129 09:03:32.229897 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-ncmdz" podStartSLOduration=3.390146095 podStartE2EDuration="13.229867444s" podCreationTimestamp="2026-01-29 09:03:19 +0000 UTC" 
firstStartedPulling="2026-01-29 09:03:21.075092634 +0000 UTC m=+8887.449540244" lastFinishedPulling="2026-01-29 09:03:30.914813983 +0000 UTC m=+8897.289261593" observedRunningTime="2026-01-29 09:03:32.222588466 +0000 UTC m=+8898.597036106" watchObservedRunningTime="2026-01-29 09:03:32.229867444 +0000 UTC m=+8898.604315054" Jan 29 09:03:33 crc kubenswrapper[5017]: I0129 09:03:33.297240 5017 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-96v8q" podUID="e814c0da-f5da-4ded-8232-3582377aff11" containerName="registry-server" probeResult="failure" output=< Jan 29 09:03:33 crc kubenswrapper[5017]: timeout: failed to connect service ":50051" within 1s Jan 29 09:03:33 crc kubenswrapper[5017]: > Jan 29 09:03:39 crc kubenswrapper[5017]: I0129 09:03:39.965709 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-ncmdz" Jan 29 09:03:39 crc kubenswrapper[5017]: I0129 09:03:39.966783 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-ncmdz" Jan 29 09:03:40 crc kubenswrapper[5017]: I0129 09:03:40.018667 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-ncmdz" Jan 29 09:03:40 crc kubenswrapper[5017]: I0129 09:03:40.350564 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-ncmdz" Jan 29 09:03:40 crc kubenswrapper[5017]: I0129 09:03:40.407561 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ncmdz"] Jan 29 09:03:42 crc kubenswrapper[5017]: I0129 09:03:42.296660 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-96v8q" Jan 29 09:03:42 crc kubenswrapper[5017]: I0129 09:03:42.306995 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-ncmdz" podUID="4a18c984-6dea-4528-b27c-1b1967d071f3" containerName="registry-server" containerID="cri-o://077a7a03355b5a187eab6e88b501ad6bb3a9422cd6d375bf74f0f68d81bb01c0" gracePeriod=2 Jan 29 09:03:42 crc kubenswrapper[5017]: I0129 09:03:42.363472 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-96v8q" Jan 29 09:03:42 crc kubenswrapper[5017]: I0129 09:03:42.672891 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-96v8q"] Jan 29 09:03:42 crc kubenswrapper[5017]: I0129 09:03:42.945144 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ncmdz" Jan 29 09:03:42 crc kubenswrapper[5017]: I0129 09:03:42.974671 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a18c984-6dea-4528-b27c-1b1967d071f3-catalog-content\") pod \"4a18c984-6dea-4528-b27c-1b1967d071f3\" (UID: \"4a18c984-6dea-4528-b27c-1b1967d071f3\") " Jan 29 09:03:42 crc kubenswrapper[5017]: I0129 09:03:42.974784 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zx5dc\" (UniqueName: \"kubernetes.io/projected/4a18c984-6dea-4528-b27c-1b1967d071f3-kube-api-access-zx5dc\") pod \"4a18c984-6dea-4528-b27c-1b1967d071f3\" (UID: \"4a18c984-6dea-4528-b27c-1b1967d071f3\") " Jan 29 09:03:42 crc kubenswrapper[5017]: I0129 09:03:42.974899 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a18c984-6dea-4528-b27c-1b1967d071f3-utilities\") pod \"4a18c984-6dea-4528-b27c-1b1967d071f3\" (UID: \"4a18c984-6dea-4528-b27c-1b1967d071f3\") " Jan 29 09:03:42 crc kubenswrapper[5017]: I0129 09:03:42.979394 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4a18c984-6dea-4528-b27c-1b1967d071f3-utilities" (OuterVolumeSpecName: "utilities") pod "4a18c984-6dea-4528-b27c-1b1967d071f3" (UID: "4a18c984-6dea-4528-b27c-1b1967d071f3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:03:42 crc kubenswrapper[5017]: I0129 09:03:42.989488 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a18c984-6dea-4528-b27c-1b1967d071f3-kube-api-access-zx5dc" (OuterVolumeSpecName: "kube-api-access-zx5dc") pod "4a18c984-6dea-4528-b27c-1b1967d071f3" (UID: "4a18c984-6dea-4528-b27c-1b1967d071f3"). InnerVolumeSpecName "kube-api-access-zx5dc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:03:43 crc kubenswrapper[5017]: I0129 09:03:43.034738 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4a18c984-6dea-4528-b27c-1b1967d071f3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4a18c984-6dea-4528-b27c-1b1967d071f3" (UID: "4a18c984-6dea-4528-b27c-1b1967d071f3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:03:43 crc kubenswrapper[5017]: I0129 09:03:43.079596 5017 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a18c984-6dea-4528-b27c-1b1967d071f3-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 09:03:43 crc kubenswrapper[5017]: I0129 09:03:43.079647 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zx5dc\" (UniqueName: \"kubernetes.io/projected/4a18c984-6dea-4528-b27c-1b1967d071f3-kube-api-access-zx5dc\") on node \"crc\" DevicePath \"\"" Jan 29 09:03:43 crc kubenswrapper[5017]: I0129 09:03:43.079665 5017 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a18c984-6dea-4528-b27c-1b1967d071f3-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 09:03:43 crc kubenswrapper[5017]: I0129 09:03:43.321323 5017 generic.go:334] "Generic (PLEG): container finished" podID="4a18c984-6dea-4528-b27c-1b1967d071f3" containerID="077a7a03355b5a187eab6e88b501ad6bb3a9422cd6d375bf74f0f68d81bb01c0" exitCode=0 Jan 29 09:03:43 crc kubenswrapper[5017]: I0129 09:03:43.321388 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ncmdz" Jan 29 09:03:43 crc kubenswrapper[5017]: I0129 09:03:43.321409 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ncmdz" event={"ID":"4a18c984-6dea-4528-b27c-1b1967d071f3","Type":"ContainerDied","Data":"077a7a03355b5a187eab6e88b501ad6bb3a9422cd6d375bf74f0f68d81bb01c0"} Jan 29 09:03:43 crc kubenswrapper[5017]: I0129 09:03:43.322867 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ncmdz" event={"ID":"4a18c984-6dea-4528-b27c-1b1967d071f3","Type":"ContainerDied","Data":"4952a12d2bddb05499afb000dcc02b431cf6596eec88500a93bb2a4e4276a07a"} Jan 29 09:03:43 crc kubenswrapper[5017]: I0129 09:03:43.322982 5017 scope.go:117] "RemoveContainer" containerID="077a7a03355b5a187eab6e88b501ad6bb3a9422cd6d375bf74f0f68d81bb01c0" Jan 29 09:03:43 crc kubenswrapper[5017]: I0129 09:03:43.355771 5017 scope.go:117] "RemoveContainer" containerID="d9144f95dfd5ed716240f3e400e4d7336510c1d17a5244f0e200adbbf876d193" Jan 29 09:03:43 crc kubenswrapper[5017]: I0129 09:03:43.360697 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ncmdz"] Jan 29 09:03:43 crc kubenswrapper[5017]: I0129 09:03:43.372054 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-ncmdz"] Jan 29 09:03:43 crc kubenswrapper[5017]: I0129 09:03:43.380147 5017 scope.go:117] "RemoveContainer" containerID="5018d84601a7753213524b0d2c2a861361da77e5211ac9d8d6cbcefdeaa1b093" Jan 29 09:03:43 crc kubenswrapper[5017]: I0129 09:03:43.435145 5017 scope.go:117] "RemoveContainer" containerID="077a7a03355b5a187eab6e88b501ad6bb3a9422cd6d375bf74f0f68d81bb01c0" Jan 29 09:03:43 crc kubenswrapper[5017]: E0129 09:03:43.435679 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"077a7a03355b5a187eab6e88b501ad6bb3a9422cd6d375bf74f0f68d81bb01c0\": container with ID starting with 077a7a03355b5a187eab6e88b501ad6bb3a9422cd6d375bf74f0f68d81bb01c0 not found: ID does not exist" containerID="077a7a03355b5a187eab6e88b501ad6bb3a9422cd6d375bf74f0f68d81bb01c0" Jan 29 09:03:43 crc kubenswrapper[5017]: I0129 09:03:43.435731 
5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"077a7a03355b5a187eab6e88b501ad6bb3a9422cd6d375bf74f0f68d81bb01c0"} err="failed to get container status \"077a7a03355b5a187eab6e88b501ad6bb3a9422cd6d375bf74f0f68d81bb01c0\": rpc error: code = NotFound desc = could not find container \"077a7a03355b5a187eab6e88b501ad6bb3a9422cd6d375bf74f0f68d81bb01c0\": container with ID starting with 077a7a03355b5a187eab6e88b501ad6bb3a9422cd6d375bf74f0f68d81bb01c0 not found: ID does not exist" Jan 29 09:03:43 crc kubenswrapper[5017]: I0129 09:03:43.435761 5017 scope.go:117] "RemoveContainer" containerID="d9144f95dfd5ed716240f3e400e4d7336510c1d17a5244f0e200adbbf876d193" Jan 29 09:03:43 crc kubenswrapper[5017]: E0129 09:03:43.436170 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d9144f95dfd5ed716240f3e400e4d7336510c1d17a5244f0e200adbbf876d193\": container with ID starting with d9144f95dfd5ed716240f3e400e4d7336510c1d17a5244f0e200adbbf876d193 not found: ID does not exist" containerID="d9144f95dfd5ed716240f3e400e4d7336510c1d17a5244f0e200adbbf876d193" Jan 29 09:03:43 crc kubenswrapper[5017]: I0129 09:03:43.436296 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9144f95dfd5ed716240f3e400e4d7336510c1d17a5244f0e200adbbf876d193"} err="failed to get container status \"d9144f95dfd5ed716240f3e400e4d7336510c1d17a5244f0e200adbbf876d193\": rpc error: code = NotFound desc = could not find container \"d9144f95dfd5ed716240f3e400e4d7336510c1d17a5244f0e200adbbf876d193\": container with ID starting with d9144f95dfd5ed716240f3e400e4d7336510c1d17a5244f0e200adbbf876d193 not found: ID does not exist" Jan 29 09:03:43 crc kubenswrapper[5017]: I0129 09:03:43.436380 5017 scope.go:117] "RemoveContainer" containerID="5018d84601a7753213524b0d2c2a861361da77e5211ac9d8d6cbcefdeaa1b093" Jan 29 09:03:43 crc kubenswrapper[5017]: E0129 09:03:43.436703 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5018d84601a7753213524b0d2c2a861361da77e5211ac9d8d6cbcefdeaa1b093\": container with ID starting with 5018d84601a7753213524b0d2c2a861361da77e5211ac9d8d6cbcefdeaa1b093 not found: ID does not exist" containerID="5018d84601a7753213524b0d2c2a861361da77e5211ac9d8d6cbcefdeaa1b093" Jan 29 09:03:43 crc kubenswrapper[5017]: I0129 09:03:43.436727 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5018d84601a7753213524b0d2c2a861361da77e5211ac9d8d6cbcefdeaa1b093"} err="failed to get container status \"5018d84601a7753213524b0d2c2a861361da77e5211ac9d8d6cbcefdeaa1b093\": rpc error: code = NotFound desc = could not find container \"5018d84601a7753213524b0d2c2a861361da77e5211ac9d8d6cbcefdeaa1b093\": container with ID starting with 5018d84601a7753213524b0d2c2a861361da77e5211ac9d8d6cbcefdeaa1b093 not found: ID does not exist" Jan 29 09:03:44 crc kubenswrapper[5017]: I0129 09:03:44.331646 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a18c984-6dea-4528-b27c-1b1967d071f3" path="/var/lib/kubelet/pods/4a18c984-6dea-4528-b27c-1b1967d071f3/volumes" Jan 29 09:03:44 crc kubenswrapper[5017]: I0129 09:03:44.333560 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-96v8q" podUID="e814c0da-f5da-4ded-8232-3582377aff11" containerName="registry-server" 
containerID="cri-o://4e496be77ecffc4f20cee5ba1264db8f9d9768bfd8c10bc2e1f2d8653940aa52" gracePeriod=2 Jan 29 09:03:44 crc kubenswrapper[5017]: I0129 09:03:44.884562 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-96v8q" Jan 29 09:03:44 crc kubenswrapper[5017]: I0129 09:03:44.923995 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t8cgq\" (UniqueName: \"kubernetes.io/projected/e814c0da-f5da-4ded-8232-3582377aff11-kube-api-access-t8cgq\") pod \"e814c0da-f5da-4ded-8232-3582377aff11\" (UID: \"e814c0da-f5da-4ded-8232-3582377aff11\") " Jan 29 09:03:44 crc kubenswrapper[5017]: I0129 09:03:44.924250 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e814c0da-f5da-4ded-8232-3582377aff11-catalog-content\") pod \"e814c0da-f5da-4ded-8232-3582377aff11\" (UID: \"e814c0da-f5da-4ded-8232-3582377aff11\") " Jan 29 09:03:44 crc kubenswrapper[5017]: I0129 09:03:44.924335 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e814c0da-f5da-4ded-8232-3582377aff11-utilities\") pod \"e814c0da-f5da-4ded-8232-3582377aff11\" (UID: \"e814c0da-f5da-4ded-8232-3582377aff11\") " Jan 29 09:03:44 crc kubenswrapper[5017]: I0129 09:03:44.925591 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e814c0da-f5da-4ded-8232-3582377aff11-utilities" (OuterVolumeSpecName: "utilities") pod "e814c0da-f5da-4ded-8232-3582377aff11" (UID: "e814c0da-f5da-4ded-8232-3582377aff11"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:03:44 crc kubenswrapper[5017]: I0129 09:03:44.958338 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e814c0da-f5da-4ded-8232-3582377aff11-kube-api-access-t8cgq" (OuterVolumeSpecName: "kube-api-access-t8cgq") pod "e814c0da-f5da-4ded-8232-3582377aff11" (UID: "e814c0da-f5da-4ded-8232-3582377aff11"). InnerVolumeSpecName "kube-api-access-t8cgq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:03:45 crc kubenswrapper[5017]: I0129 09:03:45.027060 5017 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e814c0da-f5da-4ded-8232-3582377aff11-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 09:03:45 crc kubenswrapper[5017]: I0129 09:03:45.027099 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t8cgq\" (UniqueName: \"kubernetes.io/projected/e814c0da-f5da-4ded-8232-3582377aff11-kube-api-access-t8cgq\") on node \"crc\" DevicePath \"\"" Jan 29 09:03:45 crc kubenswrapper[5017]: I0129 09:03:45.106610 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e814c0da-f5da-4ded-8232-3582377aff11-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e814c0da-f5da-4ded-8232-3582377aff11" (UID: "e814c0da-f5da-4ded-8232-3582377aff11"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:03:45 crc kubenswrapper[5017]: I0129 09:03:45.129778 5017 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e814c0da-f5da-4ded-8232-3582377aff11-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 09:03:45 crc kubenswrapper[5017]: I0129 09:03:45.346364 5017 generic.go:334] "Generic (PLEG): container finished" podID="e814c0da-f5da-4ded-8232-3582377aff11" containerID="4e496be77ecffc4f20cee5ba1264db8f9d9768bfd8c10bc2e1f2d8653940aa52" exitCode=0 Jan 29 09:03:45 crc kubenswrapper[5017]: I0129 09:03:45.346454 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-96v8q" event={"ID":"e814c0da-f5da-4ded-8232-3582377aff11","Type":"ContainerDied","Data":"4e496be77ecffc4f20cee5ba1264db8f9d9768bfd8c10bc2e1f2d8653940aa52"} Jan 29 09:03:45 crc kubenswrapper[5017]: I0129 09:03:45.346769 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-96v8q" event={"ID":"e814c0da-f5da-4ded-8232-3582377aff11","Type":"ContainerDied","Data":"f76ce188d8b6968367836a0929e34b20e99e4140a0d4d837e04726e761f05770"} Jan 29 09:03:45 crc kubenswrapper[5017]: I0129 09:03:45.346801 5017 scope.go:117] "RemoveContainer" containerID="4e496be77ecffc4f20cee5ba1264db8f9d9768bfd8c10bc2e1f2d8653940aa52" Jan 29 09:03:45 crc kubenswrapper[5017]: I0129 09:03:45.346473 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-96v8q" Jan 29 09:03:45 crc kubenswrapper[5017]: I0129 09:03:45.368267 5017 scope.go:117] "RemoveContainer" containerID="786107f714b8a41b83a481fb61aedb841b75810cab20908262ed0d149443f136" Jan 29 09:03:45 crc kubenswrapper[5017]: I0129 09:03:45.390765 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-96v8q"] Jan 29 09:03:45 crc kubenswrapper[5017]: I0129 09:03:45.400559 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-96v8q"] Jan 29 09:03:45 crc kubenswrapper[5017]: I0129 09:03:45.408227 5017 scope.go:117] "RemoveContainer" containerID="24543d776a651bc87108ca4adf0d2d02f1d5be28f7d5dae8aa9e965db0109db0" Jan 29 09:03:45 crc kubenswrapper[5017]: I0129 09:03:45.472253 5017 scope.go:117] "RemoveContainer" containerID="4e496be77ecffc4f20cee5ba1264db8f9d9768bfd8c10bc2e1f2d8653940aa52" Jan 29 09:03:45 crc kubenswrapper[5017]: E0129 09:03:45.472664 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e496be77ecffc4f20cee5ba1264db8f9d9768bfd8c10bc2e1f2d8653940aa52\": container with ID starting with 4e496be77ecffc4f20cee5ba1264db8f9d9768bfd8c10bc2e1f2d8653940aa52 not found: ID does not exist" containerID="4e496be77ecffc4f20cee5ba1264db8f9d9768bfd8c10bc2e1f2d8653940aa52" Jan 29 09:03:45 crc kubenswrapper[5017]: I0129 09:03:45.472705 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e496be77ecffc4f20cee5ba1264db8f9d9768bfd8c10bc2e1f2d8653940aa52"} err="failed to get container status \"4e496be77ecffc4f20cee5ba1264db8f9d9768bfd8c10bc2e1f2d8653940aa52\": rpc error: code = NotFound desc = could not find container \"4e496be77ecffc4f20cee5ba1264db8f9d9768bfd8c10bc2e1f2d8653940aa52\": container with ID starting with 4e496be77ecffc4f20cee5ba1264db8f9d9768bfd8c10bc2e1f2d8653940aa52 not found: ID does not exist" Jan 29 09:03:45 crc 
kubenswrapper[5017]: I0129 09:03:45.472737 5017 scope.go:117] "RemoveContainer" containerID="786107f714b8a41b83a481fb61aedb841b75810cab20908262ed0d149443f136" Jan 29 09:03:45 crc kubenswrapper[5017]: E0129 09:03:45.473442 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"786107f714b8a41b83a481fb61aedb841b75810cab20908262ed0d149443f136\": container with ID starting with 786107f714b8a41b83a481fb61aedb841b75810cab20908262ed0d149443f136 not found: ID does not exist" containerID="786107f714b8a41b83a481fb61aedb841b75810cab20908262ed0d149443f136" Jan 29 09:03:45 crc kubenswrapper[5017]: I0129 09:03:45.473492 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"786107f714b8a41b83a481fb61aedb841b75810cab20908262ed0d149443f136"} err="failed to get container status \"786107f714b8a41b83a481fb61aedb841b75810cab20908262ed0d149443f136\": rpc error: code = NotFound desc = could not find container \"786107f714b8a41b83a481fb61aedb841b75810cab20908262ed0d149443f136\": container with ID starting with 786107f714b8a41b83a481fb61aedb841b75810cab20908262ed0d149443f136 not found: ID does not exist" Jan 29 09:03:45 crc kubenswrapper[5017]: I0129 09:03:45.473526 5017 scope.go:117] "RemoveContainer" containerID="24543d776a651bc87108ca4adf0d2d02f1d5be28f7d5dae8aa9e965db0109db0" Jan 29 09:03:45 crc kubenswrapper[5017]: E0129 09:03:45.474036 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24543d776a651bc87108ca4adf0d2d02f1d5be28f7d5dae8aa9e965db0109db0\": container with ID starting with 24543d776a651bc87108ca4adf0d2d02f1d5be28f7d5dae8aa9e965db0109db0 not found: ID does not exist" containerID="24543d776a651bc87108ca4adf0d2d02f1d5be28f7d5dae8aa9e965db0109db0" Jan 29 09:03:45 crc kubenswrapper[5017]: I0129 09:03:45.474063 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24543d776a651bc87108ca4adf0d2d02f1d5be28f7d5dae8aa9e965db0109db0"} err="failed to get container status \"24543d776a651bc87108ca4adf0d2d02f1d5be28f7d5dae8aa9e965db0109db0\": rpc error: code = NotFound desc = could not find container \"24543d776a651bc87108ca4adf0d2d02f1d5be28f7d5dae8aa9e965db0109db0\": container with ID starting with 24543d776a651bc87108ca4adf0d2d02f1d5be28f7d5dae8aa9e965db0109db0 not found: ID does not exist" Jan 29 09:03:46 crc kubenswrapper[5017]: I0129 09:03:46.330311 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e814c0da-f5da-4ded-8232-3582377aff11" path="/var/lib/kubelet/pods/e814c0da-f5da-4ded-8232-3582377aff11/volumes" Jan 29 09:03:56 crc kubenswrapper[5017]: I0129 09:03:56.539125 5017 patch_prober.go:28] interesting pod/machine-config-daemon-895pl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 09:03:56 crc kubenswrapper[5017]: I0129 09:03:56.540026 5017 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 09:04:26 crc kubenswrapper[5017]: I0129 09:04:26.539253 5017 patch_prober.go:28] interesting 
pod/machine-config-daemon-895pl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 09:04:26 crc kubenswrapper[5017]: I0129 09:04:26.540178 5017 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 09:04:56 crc kubenswrapper[5017]: I0129 09:04:56.540004 5017 patch_prober.go:28] interesting pod/machine-config-daemon-895pl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 09:04:56 crc kubenswrapper[5017]: I0129 09:04:56.541883 5017 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 09:04:56 crc kubenswrapper[5017]: I0129 09:04:56.542131 5017 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-895pl" Jan 29 09:04:56 crc kubenswrapper[5017]: I0129 09:04:56.543067 5017 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f271fa69073cc0149b78a2dbb3887d13cb9a0054b8cdd1365d2b12d3284bad46"} pod="openshift-machine-config-operator/machine-config-daemon-895pl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 09:04:56 crc kubenswrapper[5017]: I0129 09:04:56.543206 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" containerID="cri-o://f271fa69073cc0149b78a2dbb3887d13cb9a0054b8cdd1365d2b12d3284bad46" gracePeriod=600 Jan 29 09:04:57 crc kubenswrapper[5017]: I0129 09:04:57.097444 5017 generic.go:334] "Generic (PLEG): container finished" podID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerID="f271fa69073cc0149b78a2dbb3887d13cb9a0054b8cdd1365d2b12d3284bad46" exitCode=0 Jan 29 09:04:57 crc kubenswrapper[5017]: I0129 09:04:57.097547 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-895pl" event={"ID":"2672ef63-7861-4c3d-a1b4-03cc9d18f8e2","Type":"ContainerDied","Data":"f271fa69073cc0149b78a2dbb3887d13cb9a0054b8cdd1365d2b12d3284bad46"} Jan 29 09:04:57 crc kubenswrapper[5017]: I0129 09:04:57.097995 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-895pl" event={"ID":"2672ef63-7861-4c3d-a1b4-03cc9d18f8e2","Type":"ContainerStarted","Data":"33375350f3d5c8c1202f1a8b9b184cb61601f45da7fb1771b787e221d6862d49"} Jan 29 09:04:57 crc kubenswrapper[5017]: I0129 09:04:57.098061 5017 scope.go:117] "RemoveContainer" containerID="13928f2812286efc277648be35e4bcee00e37c7735d5bc258ca6261e12d1b284" Jan 29 09:06:56 
crc kubenswrapper[5017]: I0129 09:06:56.540029 5017 patch_prober.go:28] interesting pod/machine-config-daemon-895pl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 09:06:56 crc kubenswrapper[5017]: I0129 09:06:56.540795 5017 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 09:07:26 crc kubenswrapper[5017]: I0129 09:07:26.539487 5017 patch_prober.go:28] interesting pod/machine-config-daemon-895pl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 09:07:26 crc kubenswrapper[5017]: I0129 09:07:26.540397 5017 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 09:07:56 crc kubenswrapper[5017]: I0129 09:07:56.539274 5017 patch_prober.go:28] interesting pod/machine-config-daemon-895pl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 09:07:56 crc kubenswrapper[5017]: I0129 09:07:56.540182 5017 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 09:07:56 crc kubenswrapper[5017]: I0129 09:07:56.540238 5017 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-895pl" Jan 29 09:07:56 crc kubenswrapper[5017]: I0129 09:07:56.541246 5017 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"33375350f3d5c8c1202f1a8b9b184cb61601f45da7fb1771b787e221d6862d49"} pod="openshift-machine-config-operator/machine-config-daemon-895pl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 09:07:56 crc kubenswrapper[5017]: I0129 09:07:56.541297 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" containerID="cri-o://33375350f3d5c8c1202f1a8b9b184cb61601f45da7fb1771b787e221d6862d49" gracePeriod=600 Jan 29 09:07:56 crc kubenswrapper[5017]: E0129 09:07:56.717697 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 09:07:57 crc kubenswrapper[5017]: I0129 09:07:57.052285 5017 generic.go:334] "Generic (PLEG): container finished" podID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerID="33375350f3d5c8c1202f1a8b9b184cb61601f45da7fb1771b787e221d6862d49" exitCode=0 Jan 29 09:07:57 crc kubenswrapper[5017]: I0129 09:07:57.052379 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-895pl" event={"ID":"2672ef63-7861-4c3d-a1b4-03cc9d18f8e2","Type":"ContainerDied","Data":"33375350f3d5c8c1202f1a8b9b184cb61601f45da7fb1771b787e221d6862d49"} Jan 29 09:07:57 crc kubenswrapper[5017]: I0129 09:07:57.052440 5017 scope.go:117] "RemoveContainer" containerID="f271fa69073cc0149b78a2dbb3887d13cb9a0054b8cdd1365d2b12d3284bad46" Jan 29 09:07:57 crc kubenswrapper[5017]: I0129 09:07:57.053934 5017 scope.go:117] "RemoveContainer" containerID="33375350f3d5c8c1202f1a8b9b184cb61601f45da7fb1771b787e221d6862d49" Jan 29 09:07:57 crc kubenswrapper[5017]: E0129 09:07:57.054555 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 09:08:12 crc kubenswrapper[5017]: I0129 09:08:12.317772 5017 scope.go:117] "RemoveContainer" containerID="33375350f3d5c8c1202f1a8b9b184cb61601f45da7fb1771b787e221d6862d49" Jan 29 09:08:12 crc kubenswrapper[5017]: E0129 09:08:12.319195 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 09:08:26 crc kubenswrapper[5017]: I0129 09:08:26.317379 5017 scope.go:117] "RemoveContainer" containerID="33375350f3d5c8c1202f1a8b9b184cb61601f45da7fb1771b787e221d6862d49" Jan 29 09:08:26 crc kubenswrapper[5017]: E0129 09:08:26.318565 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 09:08:38 crc kubenswrapper[5017]: I0129 09:08:38.317174 5017 scope.go:117] "RemoveContainer" containerID="33375350f3d5c8c1202f1a8b9b184cb61601f45da7fb1771b787e221d6862d49" Jan 29 09:08:38 crc kubenswrapper[5017]: E0129 09:08:38.318400 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 09:08:49 crc kubenswrapper[5017]: I0129 09:08:49.316973 5017 scope.go:117] "RemoveContainer" containerID="33375350f3d5c8c1202f1a8b9b184cb61601f45da7fb1771b787e221d6862d49" Jan 29 09:08:49 crc kubenswrapper[5017]: E0129 09:08:49.318211 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 09:09:03 crc kubenswrapper[5017]: I0129 09:09:03.316502 5017 scope.go:117] "RemoveContainer" containerID="33375350f3d5c8c1202f1a8b9b184cb61601f45da7fb1771b787e221d6862d49" Jan 29 09:09:03 crc kubenswrapper[5017]: E0129 09:09:03.317761 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 09:09:16 crc kubenswrapper[5017]: I0129 09:09:16.317363 5017 scope.go:117] "RemoveContainer" containerID="33375350f3d5c8c1202f1a8b9b184cb61601f45da7fb1771b787e221d6862d49" Jan 29 09:09:16 crc kubenswrapper[5017]: E0129 09:09:16.318799 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 09:09:27 crc kubenswrapper[5017]: I0129 09:09:27.316064 5017 scope.go:117] "RemoveContainer" containerID="33375350f3d5c8c1202f1a8b9b184cb61601f45da7fb1771b787e221d6862d49" Jan 29 09:09:27 crc kubenswrapper[5017]: E0129 09:09:27.317428 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 09:09:40 crc kubenswrapper[5017]: I0129 09:09:40.316761 5017 scope.go:117] "RemoveContainer" containerID="33375350f3d5c8c1202f1a8b9b184cb61601f45da7fb1771b787e221d6862d49" Jan 29 09:09:40 crc kubenswrapper[5017]: E0129 09:09:40.318047 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" 
podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 09:09:52 crc kubenswrapper[5017]: I0129 09:09:52.319020 5017 scope.go:117] "RemoveContainer" containerID="33375350f3d5c8c1202f1a8b9b184cb61601f45da7fb1771b787e221d6862d49" Jan 29 09:09:52 crc kubenswrapper[5017]: E0129 09:09:52.320775 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 09:10:03 crc kubenswrapper[5017]: I0129 09:10:03.317809 5017 scope.go:117] "RemoveContainer" containerID="33375350f3d5c8c1202f1a8b9b184cb61601f45da7fb1771b787e221d6862d49" Jan 29 09:10:03 crc kubenswrapper[5017]: E0129 09:10:03.319096 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 09:10:17 crc kubenswrapper[5017]: I0129 09:10:17.317371 5017 scope.go:117] "RemoveContainer" containerID="33375350f3d5c8c1202f1a8b9b184cb61601f45da7fb1771b787e221d6862d49" Jan 29 09:10:17 crc kubenswrapper[5017]: E0129 09:10:17.318554 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 09:10:29 crc kubenswrapper[5017]: I0129 09:10:29.316603 5017 scope.go:117] "RemoveContainer" containerID="33375350f3d5c8c1202f1a8b9b184cb61601f45da7fb1771b787e221d6862d49" Jan 29 09:10:29 crc kubenswrapper[5017]: E0129 09:10:29.317815 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 09:10:42 crc kubenswrapper[5017]: I0129 09:10:42.316867 5017 scope.go:117] "RemoveContainer" containerID="33375350f3d5c8c1202f1a8b9b184cb61601f45da7fb1771b787e221d6862d49" Jan 29 09:10:42 crc kubenswrapper[5017]: E0129 09:10:42.317906 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 09:10:55 crc kubenswrapper[5017]: I0129 09:10:55.316404 5017 scope.go:117] "RemoveContainer" 
containerID="33375350f3d5c8c1202f1a8b9b184cb61601f45da7fb1771b787e221d6862d49" Jan 29 09:10:55 crc kubenswrapper[5017]: E0129 09:10:55.317481 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 09:11:09 crc kubenswrapper[5017]: I0129 09:11:09.316711 5017 scope.go:117] "RemoveContainer" containerID="33375350f3d5c8c1202f1a8b9b184cb61601f45da7fb1771b787e221d6862d49" Jan 29 09:11:09 crc kubenswrapper[5017]: E0129 09:11:09.317802 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 09:11:20 crc kubenswrapper[5017]: I0129 09:11:20.316461 5017 scope.go:117] "RemoveContainer" containerID="33375350f3d5c8c1202f1a8b9b184cb61601f45da7fb1771b787e221d6862d49" Jan 29 09:11:20 crc kubenswrapper[5017]: E0129 09:11:20.317743 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 09:11:32 crc kubenswrapper[5017]: I0129 09:11:32.317074 5017 scope.go:117] "RemoveContainer" containerID="33375350f3d5c8c1202f1a8b9b184cb61601f45da7fb1771b787e221d6862d49" Jan 29 09:11:32 crc kubenswrapper[5017]: E0129 09:11:32.320046 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 09:11:47 crc kubenswrapper[5017]: I0129 09:11:47.317010 5017 scope.go:117] "RemoveContainer" containerID="33375350f3d5c8c1202f1a8b9b184cb61601f45da7fb1771b787e221d6862d49" Jan 29 09:11:47 crc kubenswrapper[5017]: E0129 09:11:47.317906 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 09:11:55 crc kubenswrapper[5017]: I0129 09:11:55.120508 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qbfbg"] Jan 29 09:11:55 crc kubenswrapper[5017]: E0129 09:11:55.126171 5017 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="e814c0da-f5da-4ded-8232-3582377aff11" containerName="extract-utilities" Jan 29 09:11:55 crc kubenswrapper[5017]: I0129 09:11:55.126231 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="e814c0da-f5da-4ded-8232-3582377aff11" containerName="extract-utilities" Jan 29 09:11:55 crc kubenswrapper[5017]: E0129 09:11:55.126256 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e814c0da-f5da-4ded-8232-3582377aff11" containerName="registry-server" Jan 29 09:11:55 crc kubenswrapper[5017]: I0129 09:11:55.126266 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="e814c0da-f5da-4ded-8232-3582377aff11" containerName="registry-server" Jan 29 09:11:55 crc kubenswrapper[5017]: E0129 09:11:55.126286 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a18c984-6dea-4528-b27c-1b1967d071f3" containerName="extract-content" Jan 29 09:11:55 crc kubenswrapper[5017]: I0129 09:11:55.126293 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a18c984-6dea-4528-b27c-1b1967d071f3" containerName="extract-content" Jan 29 09:11:55 crc kubenswrapper[5017]: E0129 09:11:55.126318 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a18c984-6dea-4528-b27c-1b1967d071f3" containerName="registry-server" Jan 29 09:11:55 crc kubenswrapper[5017]: I0129 09:11:55.126327 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a18c984-6dea-4528-b27c-1b1967d071f3" containerName="registry-server" Jan 29 09:11:55 crc kubenswrapper[5017]: E0129 09:11:55.126340 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a18c984-6dea-4528-b27c-1b1967d071f3" containerName="extract-utilities" Jan 29 09:11:55 crc kubenswrapper[5017]: I0129 09:11:55.126349 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a18c984-6dea-4528-b27c-1b1967d071f3" containerName="extract-utilities" Jan 29 09:11:55 crc kubenswrapper[5017]: E0129 09:11:55.126380 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e814c0da-f5da-4ded-8232-3582377aff11" containerName="extract-content" Jan 29 09:11:55 crc kubenswrapper[5017]: I0129 09:11:55.126387 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="e814c0da-f5da-4ded-8232-3582377aff11" containerName="extract-content" Jan 29 09:11:55 crc kubenswrapper[5017]: I0129 09:11:55.127755 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="e814c0da-f5da-4ded-8232-3582377aff11" containerName="registry-server" Jan 29 09:11:55 crc kubenswrapper[5017]: I0129 09:11:55.127880 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a18c984-6dea-4528-b27c-1b1967d071f3" containerName="registry-server" Jan 29 09:11:55 crc kubenswrapper[5017]: I0129 09:11:55.130313 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qbfbg" Jan 29 09:11:55 crc kubenswrapper[5017]: I0129 09:11:55.137130 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qbfbg"] Jan 29 09:11:55 crc kubenswrapper[5017]: I0129 09:11:55.268750 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxrh5\" (UniqueName: \"kubernetes.io/projected/75c7cf6d-0a49-4919-bae2-0f077d71a479-kube-api-access-mxrh5\") pod \"community-operators-qbfbg\" (UID: \"75c7cf6d-0a49-4919-bae2-0f077d71a479\") " pod="openshift-marketplace/community-operators-qbfbg" Jan 29 09:11:55 crc kubenswrapper[5017]: I0129 09:11:55.268894 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75c7cf6d-0a49-4919-bae2-0f077d71a479-catalog-content\") pod \"community-operators-qbfbg\" (UID: \"75c7cf6d-0a49-4919-bae2-0f077d71a479\") " pod="openshift-marketplace/community-operators-qbfbg" Jan 29 09:11:55 crc kubenswrapper[5017]: I0129 09:11:55.269418 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75c7cf6d-0a49-4919-bae2-0f077d71a479-utilities\") pod \"community-operators-qbfbg\" (UID: \"75c7cf6d-0a49-4919-bae2-0f077d71a479\") " pod="openshift-marketplace/community-operators-qbfbg" Jan 29 09:11:55 crc kubenswrapper[5017]: I0129 09:11:55.371902 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75c7cf6d-0a49-4919-bae2-0f077d71a479-utilities\") pod \"community-operators-qbfbg\" (UID: \"75c7cf6d-0a49-4919-bae2-0f077d71a479\") " pod="openshift-marketplace/community-operators-qbfbg" Jan 29 09:11:55 crc kubenswrapper[5017]: I0129 09:11:55.372098 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxrh5\" (UniqueName: \"kubernetes.io/projected/75c7cf6d-0a49-4919-bae2-0f077d71a479-kube-api-access-mxrh5\") pod \"community-operators-qbfbg\" (UID: \"75c7cf6d-0a49-4919-bae2-0f077d71a479\") " pod="openshift-marketplace/community-operators-qbfbg" Jan 29 09:11:55 crc kubenswrapper[5017]: I0129 09:11:55.372199 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75c7cf6d-0a49-4919-bae2-0f077d71a479-catalog-content\") pod \"community-operators-qbfbg\" (UID: \"75c7cf6d-0a49-4919-bae2-0f077d71a479\") " pod="openshift-marketplace/community-operators-qbfbg" Jan 29 09:11:55 crc kubenswrapper[5017]: I0129 09:11:55.372686 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75c7cf6d-0a49-4919-bae2-0f077d71a479-utilities\") pod \"community-operators-qbfbg\" (UID: \"75c7cf6d-0a49-4919-bae2-0f077d71a479\") " pod="openshift-marketplace/community-operators-qbfbg" Jan 29 09:11:55 crc kubenswrapper[5017]: I0129 09:11:55.372784 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75c7cf6d-0a49-4919-bae2-0f077d71a479-catalog-content\") pod \"community-operators-qbfbg\" (UID: \"75c7cf6d-0a49-4919-bae2-0f077d71a479\") " pod="openshift-marketplace/community-operators-qbfbg" Jan 29 09:11:55 crc kubenswrapper[5017]: I0129 09:11:55.401804 5017 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-mxrh5\" (UniqueName: \"kubernetes.io/projected/75c7cf6d-0a49-4919-bae2-0f077d71a479-kube-api-access-mxrh5\") pod \"community-operators-qbfbg\" (UID: \"75c7cf6d-0a49-4919-bae2-0f077d71a479\") " pod="openshift-marketplace/community-operators-qbfbg" Jan 29 09:11:55 crc kubenswrapper[5017]: I0129 09:11:55.473518 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qbfbg" Jan 29 09:11:56 crc kubenswrapper[5017]: I0129 09:11:56.066732 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qbfbg"] Jan 29 09:11:57 crc kubenswrapper[5017]: I0129 09:11:57.044710 5017 generic.go:334] "Generic (PLEG): container finished" podID="75c7cf6d-0a49-4919-bae2-0f077d71a479" containerID="bcd1c049afc6221235169dc3ee79de27a0135e3a2ae81d8a8b06118d07fedd87" exitCode=0 Jan 29 09:11:57 crc kubenswrapper[5017]: I0129 09:11:57.044783 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qbfbg" event={"ID":"75c7cf6d-0a49-4919-bae2-0f077d71a479","Type":"ContainerDied","Data":"bcd1c049afc6221235169dc3ee79de27a0135e3a2ae81d8a8b06118d07fedd87"} Jan 29 09:11:57 crc kubenswrapper[5017]: I0129 09:11:57.045279 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qbfbg" event={"ID":"75c7cf6d-0a49-4919-bae2-0f077d71a479","Type":"ContainerStarted","Data":"472f8889201f8b9a7cb8eb5bc5bae5a4259299c1f5cc165f5d2c9e2a6366fc9f"} Jan 29 09:11:57 crc kubenswrapper[5017]: I0129 09:11:57.048224 5017 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 09:11:58 crc kubenswrapper[5017]: I0129 09:11:58.058511 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qbfbg" event={"ID":"75c7cf6d-0a49-4919-bae2-0f077d71a479","Type":"ContainerStarted","Data":"3f55ab921480c36185438512bee08ebbc6d7d4ff09df48807761b42c5f1575e6"} Jan 29 09:12:00 crc kubenswrapper[5017]: I0129 09:12:00.079474 5017 generic.go:334] "Generic (PLEG): container finished" podID="75c7cf6d-0a49-4919-bae2-0f077d71a479" containerID="3f55ab921480c36185438512bee08ebbc6d7d4ff09df48807761b42c5f1575e6" exitCode=0 Jan 29 09:12:00 crc kubenswrapper[5017]: I0129 09:12:00.079572 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qbfbg" event={"ID":"75c7cf6d-0a49-4919-bae2-0f077d71a479","Type":"ContainerDied","Data":"3f55ab921480c36185438512bee08ebbc6d7d4ff09df48807761b42c5f1575e6"} Jan 29 09:12:00 crc kubenswrapper[5017]: I0129 09:12:00.629085 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_094002d7-d2d3-486f-af00-22a69e977e40/init-config-reloader/0.log" Jan 29 09:12:00 crc kubenswrapper[5017]: I0129 09:12:00.863756 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_094002d7-d2d3-486f-af00-22a69e977e40/init-config-reloader/0.log" Jan 29 09:12:00 crc kubenswrapper[5017]: I0129 09:12:00.871564 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_094002d7-d2d3-486f-af00-22a69e977e40/alertmanager/0.log" Jan 29 09:12:01 crc kubenswrapper[5017]: I0129 09:12:01.316162 5017 scope.go:117] "RemoveContainer" containerID="33375350f3d5c8c1202f1a8b9b184cb61601f45da7fb1771b787e221d6862d49" Jan 29 09:12:01 crc kubenswrapper[5017]: E0129 
09:12:01.316921 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 09:12:01 crc kubenswrapper[5017]: I0129 09:12:01.418883 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_d689c40e-dc8c-4868-846f-5327f7e755a7/aodh-api/0.log" Jan 29 09:12:01 crc kubenswrapper[5017]: I0129 09:12:01.437845 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_094002d7-d2d3-486f-af00-22a69e977e40/config-reloader/0.log" Jan 29 09:12:01 crc kubenswrapper[5017]: I0129 09:12:01.542121 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_d689c40e-dc8c-4868-846f-5327f7e755a7/aodh-evaluator/0.log" Jan 29 09:12:01 crc kubenswrapper[5017]: I0129 09:12:01.620757 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_d689c40e-dc8c-4868-846f-5327f7e755a7/aodh-listener/0.log" Jan 29 09:12:01 crc kubenswrapper[5017]: I0129 09:12:01.654634 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_d689c40e-dc8c-4868-846f-5327f7e755a7/aodh-notifier/0.log" Jan 29 09:12:01 crc kubenswrapper[5017]: I0129 09:12:01.903686 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-67f5fdbfcb-p7f8f_5bba72f6-364a-41ba-903b-2378cbacaef5/barbican-api/0.log" Jan 29 09:12:01 crc kubenswrapper[5017]: I0129 09:12:01.943706 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-67f5fdbfcb-p7f8f_5bba72f6-364a-41ba-903b-2378cbacaef5/barbican-api-log/0.log" Jan 29 09:12:02 crc kubenswrapper[5017]: I0129 09:12:02.102352 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qbfbg" event={"ID":"75c7cf6d-0a49-4919-bae2-0f077d71a479","Type":"ContainerStarted","Data":"8fc43c6a9113d973b0f6932a2f55ffb10a339dce8f94de00131c5e8d36e171af"} Jan 29 09:12:02 crc kubenswrapper[5017]: I0129 09:12:02.133442 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qbfbg" podStartSLOduration=3.609917367 podStartE2EDuration="7.133417314s" podCreationTimestamp="2026-01-29 09:11:55 +0000 UTC" firstStartedPulling="2026-01-29 09:11:57.04788933 +0000 UTC m=+9403.422336940" lastFinishedPulling="2026-01-29 09:12:00.571389277 +0000 UTC m=+9406.945836887" observedRunningTime="2026-01-29 09:12:02.124996748 +0000 UTC m=+9408.499444358" watchObservedRunningTime="2026-01-29 09:12:02.133417314 +0000 UTC m=+9408.507864924" Jan 29 09:12:02 crc kubenswrapper[5017]: I0129 09:12:02.146252 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-b6df7d4b-jg8rj_50399e4a-ae5c-44e8-a7b5-32201b2be9c7/barbican-keystone-listener/0.log" Jan 29 09:12:02 crc kubenswrapper[5017]: I0129 09:12:02.178569 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-b6df7d4b-jg8rj_50399e4a-ae5c-44e8-a7b5-32201b2be9c7/barbican-keystone-listener-log/0.log" Jan 29 09:12:02 crc kubenswrapper[5017]: I0129 09:12:02.239045 5017 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_barbican-worker-754cd5b757-bzlkt_3b563542-6a58-4b54-8345-0ddd0ce400ab/barbican-worker/0.log" Jan 29 09:12:02 crc kubenswrapper[5017]: I0129 09:12:02.392387 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-754cd5b757-bzlkt_3b563542-6a58-4b54-8345-0ddd0ce400ab/barbican-worker-log/0.log" Jan 29 09:12:02 crc kubenswrapper[5017]: I0129 09:12:02.564373 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-openstack-openstack-cell1-wth7v_a0b818ab-1e3f-47cd-b7b2-0953e0effa22/bootstrap-openstack-openstack-cell1/0.log" Jan 29 09:12:02 crc kubenswrapper[5017]: I0129 09:12:02.681389 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_589870c6-6dab-437b-8fa5-9bbc3106a94d/ceilometer-central-agent/0.log" Jan 29 09:12:02 crc kubenswrapper[5017]: I0129 09:12:02.811845 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_589870c6-6dab-437b-8fa5-9bbc3106a94d/ceilometer-notification-agent/0.log" Jan 29 09:12:02 crc kubenswrapper[5017]: I0129 09:12:02.884237 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_589870c6-6dab-437b-8fa5-9bbc3106a94d/proxy-httpd/0.log" Jan 29 09:12:02 crc kubenswrapper[5017]: I0129 09:12:02.967799 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_589870c6-6dab-437b-8fa5-9bbc3106a94d/sg-core/0.log" Jan 29 09:12:03 crc kubenswrapper[5017]: I0129 09:12:03.112688 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-client-openstack-openstack-cell1-g4vhf_f6f1a78c-d351-4293-bb4c-2e89392cce92/ceph-client-openstack-openstack-cell1/0.log" Jan 29 09:12:03 crc kubenswrapper[5017]: I0129 09:12:03.264636 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_bd59e70a-ff9a-4fc6-a1c5-f939a17d1afb/cinder-api/0.log" Jan 29 09:12:03 crc kubenswrapper[5017]: I0129 09:12:03.846940 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_bd59e70a-ff9a-4fc6-a1c5-f939a17d1afb/cinder-api-log/0.log" Jan 29 09:12:03 crc kubenswrapper[5017]: I0129 09:12:03.877613 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_05271cbb-1748-4309-9e77-023689c72e35/probe/0.log" Jan 29 09:12:04 crc kubenswrapper[5017]: I0129 09:12:04.005271 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_05271cbb-1748-4309-9e77-023689c72e35/cinder-backup/0.log" Jan 29 09:12:04 crc kubenswrapper[5017]: I0129 09:12:04.147468 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_acef4159-bdb8-462c-92ea-e663e9ab5c0d/cinder-scheduler/0.log" Jan 29 09:12:04 crc kubenswrapper[5017]: I0129 09:12:04.202172 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_acef4159-bdb8-462c-92ea-e663e9ab5c0d/probe/0.log" Jan 29 09:12:04 crc kubenswrapper[5017]: I0129 09:12:04.380443 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_17e2c7f2-bf28-4d9c-a65a-6da99c84034b/probe/0.log" Jan 29 09:12:04 crc kubenswrapper[5017]: I0129 09:12:04.398691 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_17e2c7f2-bf28-4d9c-a65a-6da99c84034b/cinder-volume/0.log" Jan 29 09:12:04 crc kubenswrapper[5017]: I0129 09:12:04.556268 5017 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_configure-network-openstack-openstack-cell1-kp2gq_fa9d8b30-7463-4b9e-8d40-87b3091a5869/configure-network-openstack-openstack-cell1/0.log" Jan 29 09:12:04 crc kubenswrapper[5017]: I0129 09:12:04.691305 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-openstack-openstack-cell1-wws2f_abf45f94-a2ef-418f-a292-edee247b11c2/configure-os-openstack-openstack-cell1/0.log" Jan 29 09:12:04 crc kubenswrapper[5017]: I0129 09:12:04.784306 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-8dfb7fbdc-cwkhx_c82d19ec-3af4-4ed8-b801-99b119fcfa53/init/0.log" Jan 29 09:12:05 crc kubenswrapper[5017]: I0129 09:12:05.028479 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-8dfb7fbdc-cwkhx_c82d19ec-3af4-4ed8-b801-99b119fcfa53/init/0.log" Jan 29 09:12:05 crc kubenswrapper[5017]: I0129 09:12:05.063680 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-openstack-openstack-cell1-t2gng_af57877d-2918-40e4-b104-b1fb93121850/download-cache-openstack-openstack-cell1/0.log" Jan 29 09:12:05 crc kubenswrapper[5017]: I0129 09:12:05.071150 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-8dfb7fbdc-cwkhx_c82d19ec-3af4-4ed8-b801-99b119fcfa53/dnsmasq-dns/0.log" Jan 29 09:12:05 crc kubenswrapper[5017]: I0129 09:12:05.248642 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_6707e9d7-0585-4491-8e72-6203f49f9e14/glance-httpd/0.log" Jan 29 09:12:05 crc kubenswrapper[5017]: I0129 09:12:05.256549 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_6707e9d7-0585-4491-8e72-6203f49f9e14/glance-log/0.log" Jan 29 09:12:05 crc kubenswrapper[5017]: I0129 09:12:05.381881 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_1b16f455-e4ba-484f-96fc-78de5180d8c5/glance-log/0.log" Jan 29 09:12:05 crc kubenswrapper[5017]: I0129 09:12:05.385642 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_1b16f455-e4ba-484f-96fc-78de5180d8c5/glance-httpd/0.log" Jan 29 09:12:05 crc kubenswrapper[5017]: I0129 09:12:05.475195 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qbfbg" Jan 29 09:12:05 crc kubenswrapper[5017]: I0129 09:12:05.475502 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-qbfbg" Jan 29 09:12:05 crc kubenswrapper[5017]: I0129 09:12:05.532503 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-qbfbg" Jan 29 09:12:05 crc kubenswrapper[5017]: I0129 09:12:05.630124 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-api-69769694fd-j796t_ad3aa567-ba75-42f4-967b-95147fc35f5a/heat-api/0.log" Jan 29 09:12:05 crc kubenswrapper[5017]: I0129 09:12:05.778509 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-cfnapi-94c6ddf5-x7c8s_79df1dcf-9dd7-41cb-8543-3bee7ab44bf2/heat-cfnapi/0.log" Jan 29 09:12:05 crc kubenswrapper[5017]: I0129 09:12:05.868262 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-engine-69874cd655-hjsnn_e851a31c-da52-4ac3-877c-c7c62d9f09f9/heat-engine/0.log" Jan 29 09:12:06 crc kubenswrapper[5017]: I0129 09:12:06.012319 5017 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_horizon-5fb5dbb6f-r76xr_1e5184f9-0919-464f-927e-2fd42d651b76/horizon/0.log" Jan 29 09:12:06 crc kubenswrapper[5017]: I0129 09:12:06.071888 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-5fb5dbb6f-r76xr_1e5184f9-0919-464f-927e-2fd42d651b76/horizon-log/0.log" Jan 29 09:12:06 crc kubenswrapper[5017]: I0129 09:12:06.139993 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-openstack-openstack-cell1-d4dqv_59dc6dd8-36e1-4d52-84fd-be50d1e1b398/install-certs-openstack-openstack-cell1/0.log" Jan 29 09:12:06 crc kubenswrapper[5017]: I0129 09:12:06.201156 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qbfbg" Jan 29 09:12:06 crc kubenswrapper[5017]: I0129 09:12:06.259388 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qbfbg"] Jan 29 09:12:06 crc kubenswrapper[5017]: I0129 09:12:06.386989 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-openstack-openstack-cell1-jnlzx_3d0b0e7e-b6ae-444d-bfae-8f7cf997e2c7/install-os-openstack-openstack-cell1/0.log" Jan 29 09:12:07 crc kubenswrapper[5017]: I0129 09:12:07.008277 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29494621-2lmn7_3c72a33e-5422-4a12-a6ab-7774564229a1/keystone-cron/0.log" Jan 29 09:12:07 crc kubenswrapper[5017]: I0129 09:12:07.009341 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-7c6bdcf98c-m44f5_a15bec4c-245f-4fa6-ba0d-5efcaea6aab9/keystone-api/0.log" Jan 29 09:12:07 crc kubenswrapper[5017]: I0129 09:12:07.109267 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29494561-v8j5w_46955347-1e4d-4ae1-97d7-611434a6def3/keystone-cron/0.log" Jan 29 09:12:07 crc kubenswrapper[5017]: I0129 09:12:07.257646 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_f9ca9b2d-948b-412b-acf5-c98bd249d35c/kube-state-metrics/0.log" Jan 29 09:12:07 crc kubenswrapper[5017]: I0129 09:12:07.391706 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-openstack-openstack-cell1-2t5ls_1bcb6248-59f7-4d82-a9f4-ce837e9a9ef4/libvirt-openstack-openstack-cell1/0.log" Jan 29 09:12:07 crc kubenswrapper[5017]: I0129 09:12:07.628472 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_3a429f99-e77e-4a19-9293-1fcb5f49aa80/manila-api-log/0.log" Jan 29 09:12:07 crc kubenswrapper[5017]: I0129 09:12:07.695287 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_3a429f99-e77e-4a19-9293-1fcb5f49aa80/manila-api/0.log" Jan 29 09:12:07 crc kubenswrapper[5017]: I0129 09:12:07.872543 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_c25c0058-8e1c-428b-8955-21f70c22b5e5/manila-scheduler/0.log" Jan 29 09:12:07 crc kubenswrapper[5017]: I0129 09:12:07.919983 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_c25c0058-8e1c-428b-8955-21f70c22b5e5/probe/0.log" Jan 29 09:12:08 crc kubenswrapper[5017]: I0129 09:12:08.004392 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_d1d4579f-eefb-4043-b0a1-e3326d19bd24/manila-share/0.log" Jan 29 09:12:08 crc kubenswrapper[5017]: I0129 09:12:08.098052 5017 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_manila-share-share1-0_d1d4579f-eefb-4043-b0a1-e3326d19bd24/probe/0.log" Jan 29 09:12:08 crc kubenswrapper[5017]: I0129 09:12:08.126920 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-copy-data_42541727-68fc-4f89-a968-7305509acd78/adoption/0.log" Jan 29 09:12:08 crc kubenswrapper[5017]: I0129 09:12:08.159120 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-qbfbg" podUID="75c7cf6d-0a49-4919-bae2-0f077d71a479" containerName="registry-server" containerID="cri-o://8fc43c6a9113d973b0f6932a2f55ffb10a339dce8f94de00131c5e8d36e171af" gracePeriod=2 Jan 29 09:12:08 crc kubenswrapper[5017]: I0129 09:12:08.456104 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7f4578d465-ntwp5_932d89c9-7469-4386-a5eb-f35774719f27/neutron-api/0.log" Jan 29 09:12:08 crc kubenswrapper[5017]: I0129 09:12:08.576269 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7f4578d465-ntwp5_932d89c9-7469-4386-a5eb-f35774719f27/neutron-httpd/0.log" Jan 29 09:12:08 crc kubenswrapper[5017]: I0129 09:12:08.810238 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qbfbg" Jan 29 09:12:08 crc kubenswrapper[5017]: I0129 09:12:08.821917 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-dhcp-openstack-openstack-cell1-5bld2_dc8124b4-0101-4623-ab9c-9f73a0ebc7d2/neutron-dhcp-openstack-openstack-cell1/0.log" Jan 29 09:12:08 crc kubenswrapper[5017]: I0129 09:12:08.932070 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75c7cf6d-0a49-4919-bae2-0f077d71a479-catalog-content\") pod \"75c7cf6d-0a49-4919-bae2-0f077d71a479\" (UID: \"75c7cf6d-0a49-4919-bae2-0f077d71a479\") " Jan 29 09:12:08 crc kubenswrapper[5017]: I0129 09:12:08.932293 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75c7cf6d-0a49-4919-bae2-0f077d71a479-utilities\") pod \"75c7cf6d-0a49-4919-bae2-0f077d71a479\" (UID: \"75c7cf6d-0a49-4919-bae2-0f077d71a479\") " Jan 29 09:12:08 crc kubenswrapper[5017]: I0129 09:12:08.932366 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mxrh5\" (UniqueName: \"kubernetes.io/projected/75c7cf6d-0a49-4919-bae2-0f077d71a479-kube-api-access-mxrh5\") pod \"75c7cf6d-0a49-4919-bae2-0f077d71a479\" (UID: \"75c7cf6d-0a49-4919-bae2-0f077d71a479\") " Jan 29 09:12:08 crc kubenswrapper[5017]: I0129 09:12:08.934606 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-openstack-openstack-cell1-bxsq2_653c0354-71ae-4f98-87ff-df55efbd5297/neutron-metadata-openstack-openstack-cell1/0.log" Jan 29 09:12:08 crc kubenswrapper[5017]: I0129 09:12:08.936520 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75c7cf6d-0a49-4919-bae2-0f077d71a479-utilities" (OuterVolumeSpecName: "utilities") pod "75c7cf6d-0a49-4919-bae2-0f077d71a479" (UID: "75c7cf6d-0a49-4919-bae2-0f077d71a479"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:12:08 crc kubenswrapper[5017]: I0129 09:12:08.953821 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75c7cf6d-0a49-4919-bae2-0f077d71a479-kube-api-access-mxrh5" (OuterVolumeSpecName: "kube-api-access-mxrh5") pod "75c7cf6d-0a49-4919-bae2-0f077d71a479" (UID: "75c7cf6d-0a49-4919-bae2-0f077d71a479"). InnerVolumeSpecName "kube-api-access-mxrh5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:12:09 crc kubenswrapper[5017]: I0129 09:12:09.005581 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75c7cf6d-0a49-4919-bae2-0f077d71a479-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "75c7cf6d-0a49-4919-bae2-0f077d71a479" (UID: "75c7cf6d-0a49-4919-bae2-0f077d71a479"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:12:09 crc kubenswrapper[5017]: I0129 09:12:09.034876 5017 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75c7cf6d-0a49-4919-bae2-0f077d71a479-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 09:12:09 crc kubenswrapper[5017]: I0129 09:12:09.034918 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mxrh5\" (UniqueName: \"kubernetes.io/projected/75c7cf6d-0a49-4919-bae2-0f077d71a479-kube-api-access-mxrh5\") on node \"crc\" DevicePath \"\"" Jan 29 09:12:09 crc kubenswrapper[5017]: I0129 09:12:09.034927 5017 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75c7cf6d-0a49-4919-bae2-0f077d71a479-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 09:12:09 crc kubenswrapper[5017]: I0129 09:12:09.100192 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-sriov-openstack-openstack-cell1-bmqnc_75a2a730-ea79-4e39-a0ca-eb1c8fac88df/neutron-sriov-openstack-openstack-cell1/0.log" Jan 29 09:12:09 crc kubenswrapper[5017]: I0129 09:12:09.174142 5017 generic.go:334] "Generic (PLEG): container finished" podID="75c7cf6d-0a49-4919-bae2-0f077d71a479" containerID="8fc43c6a9113d973b0f6932a2f55ffb10a339dce8f94de00131c5e8d36e171af" exitCode=0 Jan 29 09:12:09 crc kubenswrapper[5017]: I0129 09:12:09.174193 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qbfbg" event={"ID":"75c7cf6d-0a49-4919-bae2-0f077d71a479","Type":"ContainerDied","Data":"8fc43c6a9113d973b0f6932a2f55ffb10a339dce8f94de00131c5e8d36e171af"} Jan 29 09:12:09 crc kubenswrapper[5017]: I0129 09:12:09.174226 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qbfbg" event={"ID":"75c7cf6d-0a49-4919-bae2-0f077d71a479","Type":"ContainerDied","Data":"472f8889201f8b9a7cb8eb5bc5bae5a4259299c1f5cc165f5d2c9e2a6366fc9f"} Jan 29 09:12:09 crc kubenswrapper[5017]: I0129 09:12:09.174248 5017 scope.go:117] "RemoveContainer" containerID="8fc43c6a9113d973b0f6932a2f55ffb10a339dce8f94de00131c5e8d36e171af" Jan 29 09:12:09 crc kubenswrapper[5017]: I0129 09:12:09.174293 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qbfbg" Jan 29 09:12:09 crc kubenswrapper[5017]: I0129 09:12:09.202976 5017 scope.go:117] "RemoveContainer" containerID="3f55ab921480c36185438512bee08ebbc6d7d4ff09df48807761b42c5f1575e6" Jan 29 09:12:09 crc kubenswrapper[5017]: I0129 09:12:09.232904 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qbfbg"] Jan 29 09:12:09 crc kubenswrapper[5017]: I0129 09:12:09.247350 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-qbfbg"] Jan 29 09:12:09 crc kubenswrapper[5017]: I0129 09:12:09.257670 5017 scope.go:117] "RemoveContainer" containerID="bcd1c049afc6221235169dc3ee79de27a0135e3a2ae81d8a8b06118d07fedd87" Jan 29 09:12:09 crc kubenswrapper[5017]: I0129 09:12:09.292244 5017 scope.go:117] "RemoveContainer" containerID="8fc43c6a9113d973b0f6932a2f55ffb10a339dce8f94de00131c5e8d36e171af" Jan 29 09:12:09 crc kubenswrapper[5017]: E0129 09:12:09.296132 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8fc43c6a9113d973b0f6932a2f55ffb10a339dce8f94de00131c5e8d36e171af\": container with ID starting with 8fc43c6a9113d973b0f6932a2f55ffb10a339dce8f94de00131c5e8d36e171af not found: ID does not exist" containerID="8fc43c6a9113d973b0f6932a2f55ffb10a339dce8f94de00131c5e8d36e171af" Jan 29 09:12:09 crc kubenswrapper[5017]: I0129 09:12:09.296197 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8fc43c6a9113d973b0f6932a2f55ffb10a339dce8f94de00131c5e8d36e171af"} err="failed to get container status \"8fc43c6a9113d973b0f6932a2f55ffb10a339dce8f94de00131c5e8d36e171af\": rpc error: code = NotFound desc = could not find container \"8fc43c6a9113d973b0f6932a2f55ffb10a339dce8f94de00131c5e8d36e171af\": container with ID starting with 8fc43c6a9113d973b0f6932a2f55ffb10a339dce8f94de00131c5e8d36e171af not found: ID does not exist" Jan 29 09:12:09 crc kubenswrapper[5017]: I0129 09:12:09.296236 5017 scope.go:117] "RemoveContainer" containerID="3f55ab921480c36185438512bee08ebbc6d7d4ff09df48807761b42c5f1575e6" Jan 29 09:12:09 crc kubenswrapper[5017]: E0129 09:12:09.296789 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f55ab921480c36185438512bee08ebbc6d7d4ff09df48807761b42c5f1575e6\": container with ID starting with 3f55ab921480c36185438512bee08ebbc6d7d4ff09df48807761b42c5f1575e6 not found: ID does not exist" containerID="3f55ab921480c36185438512bee08ebbc6d7d4ff09df48807761b42c5f1575e6" Jan 29 09:12:09 crc kubenswrapper[5017]: I0129 09:12:09.296846 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f55ab921480c36185438512bee08ebbc6d7d4ff09df48807761b42c5f1575e6"} err="failed to get container status \"3f55ab921480c36185438512bee08ebbc6d7d4ff09df48807761b42c5f1575e6\": rpc error: code = NotFound desc = could not find container \"3f55ab921480c36185438512bee08ebbc6d7d4ff09df48807761b42c5f1575e6\": container with ID starting with 3f55ab921480c36185438512bee08ebbc6d7d4ff09df48807761b42c5f1575e6 not found: ID does not exist" Jan 29 09:12:09 crc kubenswrapper[5017]: I0129 09:12:09.296882 5017 scope.go:117] "RemoveContainer" containerID="bcd1c049afc6221235169dc3ee79de27a0135e3a2ae81d8a8b06118d07fedd87" Jan 29 09:12:09 crc kubenswrapper[5017]: E0129 09:12:09.297388 5017 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"bcd1c049afc6221235169dc3ee79de27a0135e3a2ae81d8a8b06118d07fedd87\": container with ID starting with bcd1c049afc6221235169dc3ee79de27a0135e3a2ae81d8a8b06118d07fedd87 not found: ID does not exist" containerID="bcd1c049afc6221235169dc3ee79de27a0135e3a2ae81d8a8b06118d07fedd87" Jan 29 09:12:09 crc kubenswrapper[5017]: I0129 09:12:09.297421 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bcd1c049afc6221235169dc3ee79de27a0135e3a2ae81d8a8b06118d07fedd87"} err="failed to get container status \"bcd1c049afc6221235169dc3ee79de27a0135e3a2ae81d8a8b06118d07fedd87\": rpc error: code = NotFound desc = could not find container \"bcd1c049afc6221235169dc3ee79de27a0135e3a2ae81d8a8b06118d07fedd87\": container with ID starting with bcd1c049afc6221235169dc3ee79de27a0135e3a2ae81d8a8b06118d07fedd87 not found: ID does not exist" Jan 29 09:12:09 crc kubenswrapper[5017]: I0129 09:12:09.344471 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_e8346093-03be-4dc7-a1b9-b188c05e14fc/nova-api-api/0.log" Jan 29 09:12:09 crc kubenswrapper[5017]: I0129 09:12:09.522107 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_e8346093-03be-4dc7-a1b9-b188c05e14fc/nova-api-log/0.log" Jan 29 09:12:09 crc kubenswrapper[5017]: I0129 09:12:09.676185 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_65e1ea44-6179-4454-8176-911d94bfdb6a/nova-cell0-conductor-conductor/0.log" Jan 29 09:12:09 crc kubenswrapper[5017]: I0129 09:12:09.990719 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_4463a745-19c8-413a-9788-a50b598ed3f5/nova-cell1-conductor-conductor/0.log" Jan 29 09:12:10 crc kubenswrapper[5017]: I0129 09:12:10.062150 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_451d181b-38b4-40f2-a648-d9b0df76fdc5/nova-cell1-novncproxy-novncproxy/0.log" Jan 29 09:12:10 crc kubenswrapper[5017]: I0129 09:12:10.187531 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellnm5sr_188aa09e-22df-4d5c-a969-8eebbf23c644/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell1/0.log" Jan 29 09:12:10 crc kubenswrapper[5017]: I0129 09:12:10.301382 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-openstack-openstack-cell1-lcmvs_b1265fe0-ed65-4320-b3c2-016f53ae3a71/nova-cell1-openstack-openstack-cell1/0.log" Jan 29 09:12:10 crc kubenswrapper[5017]: I0129 09:12:10.335152 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75c7cf6d-0a49-4919-bae2-0f077d71a479" path="/var/lib/kubelet/pods/75c7cf6d-0a49-4919-bae2-0f077d71a479/volumes" Jan 29 09:12:10 crc kubenswrapper[5017]: I0129 09:12:10.716121 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_642b071c-b157-45af-a981-7adb4df3699d/nova-metadata-log/0.log" Jan 29 09:12:10 crc kubenswrapper[5017]: I0129 09:12:10.955647 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_642b071c-b157-45af-a981-7adb4df3699d/nova-metadata-metadata/0.log" Jan 29 09:12:11 crc kubenswrapper[5017]: I0129 09:12:11.112403 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_2566d072-f480-4b77-a28f-1f91ec555597/nova-scheduler-scheduler/0.log" Jan 29 09:12:11 crc 
kubenswrapper[5017]: I0129 09:12:11.188919 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-api-86dc7d4d88-5x6wk_056489f1-b498-496c-87dc-478bc8df163d/init/0.log" Jan 29 09:12:11 crc kubenswrapper[5017]: I0129 09:12:11.407978 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-api-86dc7d4d88-5x6wk_056489f1-b498-496c-87dc-478bc8df163d/init/0.log" Jan 29 09:12:11 crc kubenswrapper[5017]: I0129 09:12:11.498430 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-api-86dc7d4d88-5x6wk_056489f1-b498-496c-87dc-478bc8df163d/octavia-api-provider-agent/0.log" Jan 29 09:12:11 crc kubenswrapper[5017]: I0129 09:12:11.657029 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-healthmanager-kn7gl_e3c0438e-2e9a-44d8-ac24-805d286c6256/init/0.log" Jan 29 09:12:11 crc kubenswrapper[5017]: I0129 09:12:11.744847 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-api-86dc7d4d88-5x6wk_056489f1-b498-496c-87dc-478bc8df163d/octavia-api/0.log" Jan 29 09:12:11 crc kubenswrapper[5017]: I0129 09:12:11.985797 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-healthmanager-kn7gl_e3c0438e-2e9a-44d8-ac24-805d286c6256/init/0.log" Jan 29 09:12:11 crc kubenswrapper[5017]: I0129 09:12:11.990389 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-healthmanager-kn7gl_e3c0438e-2e9a-44d8-ac24-805d286c6256/octavia-healthmanager/0.log" Jan 29 09:12:12 crc kubenswrapper[5017]: I0129 09:12:12.070895 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-housekeeping-9jn9h_0389f7bb-3e21-4689-8189-71761db6d516/init/0.log" Jan 29 09:12:12 crc kubenswrapper[5017]: I0129 09:12:12.249302 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-housekeeping-9jn9h_0389f7bb-3e21-4689-8189-71761db6d516/init/0.log" Jan 29 09:12:12 crc kubenswrapper[5017]: I0129 09:12:12.257887 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-housekeeping-9jn9h_0389f7bb-3e21-4689-8189-71761db6d516/octavia-housekeeping/0.log" Jan 29 09:12:12 crc kubenswrapper[5017]: I0129 09:12:12.317035 5017 scope.go:117] "RemoveContainer" containerID="33375350f3d5c8c1202f1a8b9b184cb61601f45da7fb1771b787e221d6862d49" Jan 29 09:12:12 crc kubenswrapper[5017]: E0129 09:12:12.317891 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 09:12:12 crc kubenswrapper[5017]: I0129 09:12:12.361453 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-rsyslog-rnfsn_11936bb3-0e5d-4dd4-af14-04753f575b6e/init/0.log" Jan 29 09:12:12 crc kubenswrapper[5017]: I0129 09:12:12.536753 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-rsyslog-rnfsn_11936bb3-0e5d-4dd4-af14-04753f575b6e/init/0.log" Jan 29 09:12:12 crc kubenswrapper[5017]: I0129 09:12:12.637240 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-rsyslog-rnfsn_11936bb3-0e5d-4dd4-af14-04753f575b6e/octavia-rsyslog/0.log" Jan 29 09:12:12 crc kubenswrapper[5017]: I0129 09:12:12.670419 
5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-worker-k6hjh_36eed554-1fc4-4d35-a541-9a46e00e727d/init/0.log" Jan 29 09:12:12 crc kubenswrapper[5017]: I0129 09:12:12.949419 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_366e61eb-22f5-44a6-905e-b6d5e6b926b0/mysql-bootstrap/0.log" Jan 29 09:12:13 crc kubenswrapper[5017]: I0129 09:12:13.019548 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-worker-k6hjh_36eed554-1fc4-4d35-a541-9a46e00e727d/init/0.log" Jan 29 09:12:13 crc kubenswrapper[5017]: I0129 09:12:13.200052 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-worker-k6hjh_36eed554-1fc4-4d35-a541-9a46e00e727d/octavia-worker/0.log" Jan 29 09:12:13 crc kubenswrapper[5017]: I0129 09:12:13.863908 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_366e61eb-22f5-44a6-905e-b6d5e6b926b0/mysql-bootstrap/0.log" Jan 29 09:12:13 crc kubenswrapper[5017]: I0129 09:12:13.974851 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_366e61eb-22f5-44a6-905e-b6d5e6b926b0/galera/0.log" Jan 29 09:12:14 crc kubenswrapper[5017]: I0129 09:12:14.050625 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_01c81767-cb91-41ba-b305-3aaff087606c/mysql-bootstrap/0.log" Jan 29 09:12:14 crc kubenswrapper[5017]: I0129 09:12:14.193821 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_01c81767-cb91-41ba-b305-3aaff087606c/mysql-bootstrap/0.log" Jan 29 09:12:14 crc kubenswrapper[5017]: I0129 09:12:14.294139 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_01c81767-cb91-41ba-b305-3aaff087606c/galera/0.log" Jan 29 09:12:14 crc kubenswrapper[5017]: I0129 09:12:14.340290 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_caee0a70-c87e-4b2d-b9ca-8f949b81540e/openstackclient/0.log" Jan 29 09:12:14 crc kubenswrapper[5017]: I0129 09:12:14.567046 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-6xjl6_fec6292f-122b-4f11-a3c0-a4d0bdb0303f/openstack-network-exporter/0.log" Jan 29 09:12:14 crc kubenswrapper[5017]: I0129 09:12:14.616445 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-5mc4s_2b894593-0963-4476-8329-daea9c22707a/ovsdb-server-init/0.log" Jan 29 09:12:14 crc kubenswrapper[5017]: I0129 09:12:14.873664 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-5mc4s_2b894593-0963-4476-8329-daea9c22707a/ovsdb-server-init/0.log" Jan 29 09:12:14 crc kubenswrapper[5017]: I0129 09:12:14.897956 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-5mc4s_2b894593-0963-4476-8329-daea9c22707a/ovsdb-server/0.log" Jan 29 09:12:14 crc kubenswrapper[5017]: I0129 09:12:14.910677 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-5mc4s_2b894593-0963-4476-8329-daea9c22707a/ovs-vswitchd/0.log" Jan 29 09:12:15 crc kubenswrapper[5017]: I0129 09:12:15.119296 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-w64dv_1000feb0-a866-42c2-974e-cd95329589e2/ovn-controller/0.log" Jan 29 09:12:15 crc kubenswrapper[5017]: I0129 09:12:15.172446 5017 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-copy-data_e31ff6be-5757-4479-85a9-1fe9a40834a3/adoption/0.log" Jan 29 09:12:15 crc kubenswrapper[5017]: I0129 09:12:15.621221 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_ed6df609-936e-4744-b4e8-d1ad883e850d/openstack-network-exporter/0.log" Jan 29 09:12:16 crc kubenswrapper[5017]: I0129 09:12:16.049744 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_ed6df609-936e-4744-b4e8-d1ad883e850d/ovn-northd/0.log" Jan 29 09:12:16 crc kubenswrapper[5017]: I0129 09:12:16.135753 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-openstack-openstack-cell1-k57sb_d5e0be93-aeff-4f04-b902-e137a19e5585/ovn-openstack-openstack-cell1/0.log" Jan 29 09:12:16 crc kubenswrapper[5017]: I0129 09:12:16.248164 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_eec52b57-cfbe-49e2-aa22-112f785bff7c/openstack-network-exporter/0.log" Jan 29 09:12:16 crc kubenswrapper[5017]: I0129 09:12:16.381639 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-1_02b0ddcd-bef7-4916-957c-f6b1aaa4fc4f/openstack-network-exporter/0.log" Jan 29 09:12:16 crc kubenswrapper[5017]: I0129 09:12:16.451869 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-1_02b0ddcd-bef7-4916-957c-f6b1aaa4fc4f/ovsdbserver-nb/0.log" Jan 29 09:12:16 crc kubenswrapper[5017]: I0129 09:12:16.457141 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_eec52b57-cfbe-49e2-aa22-112f785bff7c/ovsdbserver-nb/0.log" Jan 29 09:12:16 crc kubenswrapper[5017]: I0129 09:12:16.730620 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-2_eac0b731-771a-4164-a5a1-f17bad61fb30/ovsdbserver-nb/0.log" Jan 29 09:12:16 crc kubenswrapper[5017]: I0129 09:12:16.742385 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-2_eac0b731-771a-4164-a5a1-f17bad61fb30/openstack-network-exporter/0.log" Jan 29 09:12:16 crc kubenswrapper[5017]: I0129 09:12:16.929610 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_fc9b2718-9b96-4d4b-ade8-5394392229f9/openstack-network-exporter/0.log" Jan 29 09:12:17 crc kubenswrapper[5017]: I0129 09:12:17.026658 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_fc9b2718-9b96-4d4b-ade8-5394392229f9/ovsdbserver-sb/0.log" Jan 29 09:12:17 crc kubenswrapper[5017]: I0129 09:12:17.065374 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-1_0a23012e-ce8c-4a9a-b812-f5fa91f22623/openstack-network-exporter/0.log" Jan 29 09:12:17 crc kubenswrapper[5017]: I0129 09:12:17.209022 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-1_0a23012e-ce8c-4a9a-b812-f5fa91f22623/ovsdbserver-sb/0.log" Jan 29 09:12:17 crc kubenswrapper[5017]: I0129 09:12:17.303124 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-2_a10eac92-4703-47fd-b022-0dcca527b076/openstack-network-exporter/0.log" Jan 29 09:12:17 crc kubenswrapper[5017]: I0129 09:12:17.309200 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-2_a10eac92-4703-47fd-b022-0dcca527b076/ovsdbserver-sb/0.log" Jan 29 09:12:17 crc kubenswrapper[5017]: I0129 09:12:17.620391 5017 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_placement-c755b4dd6-c5f4t_ac7c779a-6c7f-4f09-abe0-a42882712730/placement-log/0.log" Jan 29 09:12:17 crc kubenswrapper[5017]: I0129 09:12:17.734218 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-c755b4dd6-c5f4t_ac7c779a-6c7f-4f09-abe0-a42882712730/placement-api/0.log" Jan 29 09:12:17 crc kubenswrapper[5017]: I0129 09:12:17.794061 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_pre-adoption-validation-openstack-pre-adoption-openstack-cdb8qw_0bba6639-7539-4a6f-b045-7cbf1679c047/pre-adoption-validation-openstack-pre-adoption-openstack-cell1/0.log" Jan 29 09:12:17 crc kubenswrapper[5017]: I0129 09:12:17.950036 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_c02aa22b-1d85-4478-85b3-1b929165d41c/init-config-reloader/0.log" Jan 29 09:12:18 crc kubenswrapper[5017]: I0129 09:12:18.151523 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_c02aa22b-1d85-4478-85b3-1b929165d41c/prometheus/0.log" Jan 29 09:12:18 crc kubenswrapper[5017]: I0129 09:12:18.187680 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_c02aa22b-1d85-4478-85b3-1b929165d41c/init-config-reloader/0.log" Jan 29 09:12:18 crc kubenswrapper[5017]: I0129 09:12:18.189415 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_c02aa22b-1d85-4478-85b3-1b929165d41c/config-reloader/0.log" Jan 29 09:12:18 crc kubenswrapper[5017]: I0129 09:12:18.226347 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_c02aa22b-1d85-4478-85b3-1b929165d41c/thanos-sidecar/0.log" Jan 29 09:12:18 crc kubenswrapper[5017]: I0129 09:12:18.418731 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_2105bda0-b02e-49b2-9024-5ae1c94a9753/setup-container/0.log" Jan 29 09:12:18 crc kubenswrapper[5017]: I0129 09:12:18.695119 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_13a33bc7-e8c6-4b03-820c-33912797c525/memcached/0.log" Jan 29 09:12:18 crc kubenswrapper[5017]: I0129 09:12:18.764419 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_aba4a07e-d542-4c62-b2bf-414140c4715f/setup-container/0.log" Jan 29 09:12:18 crc kubenswrapper[5017]: I0129 09:12:18.779651 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_2105bda0-b02e-49b2-9024-5ae1c94a9753/setup-container/0.log" Jan 29 09:12:18 crc kubenswrapper[5017]: I0129 09:12:18.815947 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_2105bda0-b02e-49b2-9024-5ae1c94a9753/rabbitmq/0.log" Jan 29 09:12:19 crc kubenswrapper[5017]: I0129 09:12:19.065035 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_aba4a07e-d542-4c62-b2bf-414140c4715f/setup-container/0.log" Jan 29 09:12:19 crc kubenswrapper[5017]: I0129 09:12:19.126740 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-openstack-openstack-cell1-r7qwl_2a087e53-8f2b-4f84-a483-80dab07ccfb9/reboot-os-openstack-openstack-cell1/0.log" Jan 29 09:12:19 crc kubenswrapper[5017]: I0129 09:12:19.314590 5017 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_run-os-openstack-openstack-cell1-99nxk_a6637a8b-efe8-4fa4-995c-1d0023c627ce/run-os-openstack-openstack-cell1/0.log" Jan 29 09:12:19 crc kubenswrapper[5017]: I0129 09:12:19.433020 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-openstack-9fpln_eef69ab1-52e5-4f13-84fe-f5cc49697fcb/ssh-known-hosts-openstack/0.log" Jan 29 09:12:19 crc kubenswrapper[5017]: I0129 09:12:19.998223 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-openstack-openstack-cell1-tnvqv_36d2e4dd-7fea-48d6-92f9-93f3e02c0e09/telemetry-openstack-openstack-cell1/0.log" Jan 29 09:12:20 crc kubenswrapper[5017]: I0129 09:12:20.142168 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tripleo-cleanup-tripleo-cleanup-openstack-cell1-l7pkb_1c67db27-194c-43dd-ab29-0461e44ba417/tripleo-cleanup-tripleo-cleanup-openstack-cell1/0.log" Jan 29 09:12:20 crc kubenswrapper[5017]: I0129 09:12:20.282837 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-openstack-openstack-cell1-xt4kz_81bdc4f3-baae-455e-83e0-3dc111b608d2/validate-network-openstack-openstack-cell1/0.log" Jan 29 09:12:21 crc kubenswrapper[5017]: I0129 09:12:21.721849 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_aba4a07e-d542-4c62-b2bf-414140c4715f/rabbitmq/0.log" Jan 29 09:12:27 crc kubenswrapper[5017]: I0129 09:12:27.317473 5017 scope.go:117] "RemoveContainer" containerID="33375350f3d5c8c1202f1a8b9b184cb61601f45da7fb1771b787e221d6862d49" Jan 29 09:12:27 crc kubenswrapper[5017]: E0129 09:12:27.318838 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 09:12:39 crc kubenswrapper[5017]: I0129 09:12:39.315967 5017 scope.go:117] "RemoveContainer" containerID="33375350f3d5c8c1202f1a8b9b184cb61601f45da7fb1771b787e221d6862d49" Jan 29 09:12:39 crc kubenswrapper[5017]: E0129 09:12:39.317025 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 09:12:48 crc kubenswrapper[5017]: I0129 09:12:48.699687 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7b6c4d8c5f-m9htc_0cf68843-4944-46e5-940e-03273a49fd0a/manager/0.log" Jan 29 09:12:48 crc kubenswrapper[5017]: I0129 09:12:48.968778 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-8d874c8fc-h4xwv_326882c7-bd9e-4141-95c3-e21dadfd560d/manager/0.log" Jan 29 09:12:48 crc kubenswrapper[5017]: I0129 09:12:48.980152 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_da00ab39b03ae6839a9374e1f0e5cf4d34a49b875de165714f9039924ds7cc6_3c0fb80a-a7e7-4978-8840-4307bc2529e3/util/0.log" Jan 29 09:12:49 crc 
kubenswrapper[5017]: I0129 09:12:49.302968 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_da00ab39b03ae6839a9374e1f0e5cf4d34a49b875de165714f9039924ds7cc6_3c0fb80a-a7e7-4978-8840-4307bc2529e3/pull/0.log" Jan 29 09:12:49 crc kubenswrapper[5017]: I0129 09:12:49.313020 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_da00ab39b03ae6839a9374e1f0e5cf4d34a49b875de165714f9039924ds7cc6_3c0fb80a-a7e7-4978-8840-4307bc2529e3/util/0.log" Jan 29 09:12:49 crc kubenswrapper[5017]: I0129 09:12:49.326628 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_da00ab39b03ae6839a9374e1f0e5cf4d34a49b875de165714f9039924ds7cc6_3c0fb80a-a7e7-4978-8840-4307bc2529e3/pull/0.log" Jan 29 09:12:49 crc kubenswrapper[5017]: I0129 09:12:49.549639 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_da00ab39b03ae6839a9374e1f0e5cf4d34a49b875de165714f9039924ds7cc6_3c0fb80a-a7e7-4978-8840-4307bc2529e3/pull/0.log" Jan 29 09:12:49 crc kubenswrapper[5017]: I0129 09:12:49.560353 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_da00ab39b03ae6839a9374e1f0e5cf4d34a49b875de165714f9039924ds7cc6_3c0fb80a-a7e7-4978-8840-4307bc2529e3/util/0.log" Jan 29 09:12:49 crc kubenswrapper[5017]: I0129 09:12:49.572399 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_da00ab39b03ae6839a9374e1f0e5cf4d34a49b875de165714f9039924ds7cc6_3c0fb80a-a7e7-4978-8840-4307bc2529e3/extract/0.log" Jan 29 09:12:49 crc kubenswrapper[5017]: I0129 09:12:49.861678 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d9697b7f4-4vthv_bda8f50d-d263-450b-922d-9e9da95811b3/manager/0.log" Jan 29 09:12:50 crc kubenswrapper[5017]: I0129 09:12:50.139189 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-8886f4c47-7whnz_2aa64d1e-6f8d-4c60-a26b-12ae9595051b/manager/0.log" Jan 29 09:12:50 crc kubenswrapper[5017]: I0129 09:12:50.214545 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69d6db494d-7h92b_6b1e3dc5-6234-4b08-a023-459b6ef45d8a/manager/0.log" Jan 29 09:12:50 crc kubenswrapper[5017]: I0129 09:12:50.295680 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5fb775575f-7zkj8_a348ad8b-f3a0-4639-9839-2bb062e77e29/manager/0.log" Jan 29 09:12:50 crc kubenswrapper[5017]: I0129 09:12:50.669423 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5f4b8bd54d-whkgr_f4577d7f-77c1-41dc-a6dc-37a8f967edd5/manager/0.log" Jan 29 09:12:51 crc kubenswrapper[5017]: I0129 09:12:51.129602 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-84f48565d4-bxzj9_4d8182ea-62eb-455e-b34c-e5028514c4e1/manager/0.log" Jan 29 09:12:51 crc kubenswrapper[5017]: I0129 09:12:51.226162 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79955696d6-67c59_0c9c357e-634d-49c9-84bc-642deb32fa88/manager/0.log" Jan 29 09:12:51 crc kubenswrapper[5017]: I0129 09:12:51.317484 5017 scope.go:117] "RemoveContainer" containerID="33375350f3d5c8c1202f1a8b9b184cb61601f45da7fb1771b787e221d6862d49" Jan 29 09:12:51 crc kubenswrapper[5017]: E0129 09:12:51.317831 5017 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 09:12:51 crc kubenswrapper[5017]: I0129 09:12:51.607908 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7dd968899f-5ckck_f8aa8837-37c8-4461-bd3c-e2aae6e5dfab/manager/0.log" Jan 29 09:12:51 crc kubenswrapper[5017]: I0129 09:12:51.824607 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-67bf948998-hlh7p_5aa1136e-d199-49c3-9bc3-5cbdaa19d552/manager/0.log" Jan 29 09:12:52 crc kubenswrapper[5017]: I0129 09:12:52.005154 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-585dbc889-sd7m9_b51d682b-635c-44de-8d9e-945127aaeb63/manager/0.log" Jan 29 09:12:52 crc kubenswrapper[5017]: I0129 09:12:52.312828 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-55bff696bd-l6gbj_a77928ad-eb54-45fc-a53e-b3f22cb62d53/manager/0.log" Jan 29 09:12:52 crc kubenswrapper[5017]: I0129 09:12:52.967588 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-d5d667db8-qqk7h_7dd82efb-017d-4e70-86b1-f25e7026646a/manager/0.log" Jan 29 09:12:52 crc kubenswrapper[5017]: I0129 09:12:52.977609 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-6687f8d877-bpdqt_863f4dee-1272-4cb9-8ced-84a5114d64af/manager/0.log" Jan 29 09:12:53 crc kubenswrapper[5017]: I0129 09:12:53.409765 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-5c4cd4c8c8-qggc5_42582d12-6d4b-43cc-b843-7c425d6dbdf3/operator/0.log" Jan 29 09:12:53 crc kubenswrapper[5017]: I0129 09:12:53.835630 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-788c46999f-kjrxb_5558b938-90cc-4177-ae13-4c8d6f65ea6d/manager/0.log" Jan 29 09:12:53 crc kubenswrapper[5017]: I0129 09:12:53.839658 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-wwps8_dd0dee81-c421-43b8-8137-b56ad147be6a/registry-server/0.log" Jan 29 09:12:53 crc kubenswrapper[5017]: I0129 09:12:53.952449 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5b964cf4cd-szcqv_283799a2-6b66-4255-8864-3a561dd04e89/manager/0.log" Jan 29 09:12:54 crc kubenswrapper[5017]: I0129 09:12:54.193203 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-r8p8w_e8379d4d-67d5-42f0-8c28-f0d617723886/operator/0.log" Jan 29 09:12:54 crc kubenswrapper[5017]: I0129 09:12:54.235471 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68fc8c869-l6mcx_b9a03454-a7c9-47c6-9eda-6cf83e3140d7/manager/0.log" Jan 29 09:12:54 crc kubenswrapper[5017]: I0129 09:12:54.566226 5017 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_test-operator-controller-manager-56f8bfcd9f-f7d98_d7bc466f-b955-4c7a-a5dc-806e4a89b432/manager/0.log" Jan 29 09:12:54 crc kubenswrapper[5017]: I0129 09:12:54.745387 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-64b5b76f97-lrhwt_ce3c279f-3bc1-4e6a-a0f5-cb46e55ede8c/manager/0.log" Jan 29 09:12:55 crc kubenswrapper[5017]: I0129 09:12:55.478700 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-564965969-wtmjj_594ce113-eeb0-4eb4-9254-4f1695ced6c7/manager/0.log" Jan 29 09:12:55 crc kubenswrapper[5017]: I0129 09:12:55.944985 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-7b54f464f6-h4kbw_facf0821-eb7d-4510-bcb7-69387e467df9/manager/0.log" Jan 29 09:13:05 crc kubenswrapper[5017]: I0129 09:13:05.316349 5017 scope.go:117] "RemoveContainer" containerID="33375350f3d5c8c1202f1a8b9b184cb61601f45da7fb1771b787e221d6862d49" Jan 29 09:13:06 crc kubenswrapper[5017]: I0129 09:13:06.901540 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-895pl" event={"ID":"2672ef63-7861-4c3d-a1b4-03cc9d18f8e2","Type":"ContainerStarted","Data":"4618f8398420d06edec9c6f42cc7aa8b48fdc4be826626fe26241d4ca4840055"} Jan 29 09:13:07 crc kubenswrapper[5017]: I0129 09:13:07.854635 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jg8ll"] Jan 29 09:13:07 crc kubenswrapper[5017]: E0129 09:13:07.856277 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75c7cf6d-0a49-4919-bae2-0f077d71a479" containerName="extract-content" Jan 29 09:13:07 crc kubenswrapper[5017]: I0129 09:13:07.856309 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="75c7cf6d-0a49-4919-bae2-0f077d71a479" containerName="extract-content" Jan 29 09:13:07 crc kubenswrapper[5017]: E0129 09:13:07.856325 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75c7cf6d-0a49-4919-bae2-0f077d71a479" containerName="extract-utilities" Jan 29 09:13:07 crc kubenswrapper[5017]: I0129 09:13:07.856334 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="75c7cf6d-0a49-4919-bae2-0f077d71a479" containerName="extract-utilities" Jan 29 09:13:07 crc kubenswrapper[5017]: E0129 09:13:07.856374 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75c7cf6d-0a49-4919-bae2-0f077d71a479" containerName="registry-server" Jan 29 09:13:07 crc kubenswrapper[5017]: I0129 09:13:07.856383 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="75c7cf6d-0a49-4919-bae2-0f077d71a479" containerName="registry-server" Jan 29 09:13:07 crc kubenswrapper[5017]: I0129 09:13:07.856729 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="75c7cf6d-0a49-4919-bae2-0f077d71a479" containerName="registry-server" Jan 29 09:13:07 crc kubenswrapper[5017]: I0129 09:13:07.859257 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jg8ll" Jan 29 09:13:07 crc kubenswrapper[5017]: I0129 09:13:07.898894 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jg8ll"] Jan 29 09:13:07 crc kubenswrapper[5017]: I0129 09:13:07.934403 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffa42744-219d-4e46-bb48-ca14dd9c2c4f-utilities\") pod \"redhat-operators-jg8ll\" (UID: \"ffa42744-219d-4e46-bb48-ca14dd9c2c4f\") " pod="openshift-marketplace/redhat-operators-jg8ll" Jan 29 09:13:07 crc kubenswrapper[5017]: I0129 09:13:07.934587 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jscsw\" (UniqueName: \"kubernetes.io/projected/ffa42744-219d-4e46-bb48-ca14dd9c2c4f-kube-api-access-jscsw\") pod \"redhat-operators-jg8ll\" (UID: \"ffa42744-219d-4e46-bb48-ca14dd9c2c4f\") " pod="openshift-marketplace/redhat-operators-jg8ll" Jan 29 09:13:07 crc kubenswrapper[5017]: I0129 09:13:07.934748 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffa42744-219d-4e46-bb48-ca14dd9c2c4f-catalog-content\") pod \"redhat-operators-jg8ll\" (UID: \"ffa42744-219d-4e46-bb48-ca14dd9c2c4f\") " pod="openshift-marketplace/redhat-operators-jg8ll" Jan 29 09:13:08 crc kubenswrapper[5017]: I0129 09:13:08.037513 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffa42744-219d-4e46-bb48-ca14dd9c2c4f-utilities\") pod \"redhat-operators-jg8ll\" (UID: \"ffa42744-219d-4e46-bb48-ca14dd9c2c4f\") " pod="openshift-marketplace/redhat-operators-jg8ll" Jan 29 09:13:08 crc kubenswrapper[5017]: I0129 09:13:08.037634 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jscsw\" (UniqueName: \"kubernetes.io/projected/ffa42744-219d-4e46-bb48-ca14dd9c2c4f-kube-api-access-jscsw\") pod \"redhat-operators-jg8ll\" (UID: \"ffa42744-219d-4e46-bb48-ca14dd9c2c4f\") " pod="openshift-marketplace/redhat-operators-jg8ll" Jan 29 09:13:08 crc kubenswrapper[5017]: I0129 09:13:08.037712 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffa42744-219d-4e46-bb48-ca14dd9c2c4f-catalog-content\") pod \"redhat-operators-jg8ll\" (UID: \"ffa42744-219d-4e46-bb48-ca14dd9c2c4f\") " pod="openshift-marketplace/redhat-operators-jg8ll" Jan 29 09:13:08 crc kubenswrapper[5017]: I0129 09:13:08.038276 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffa42744-219d-4e46-bb48-ca14dd9c2c4f-utilities\") pod \"redhat-operators-jg8ll\" (UID: \"ffa42744-219d-4e46-bb48-ca14dd9c2c4f\") " pod="openshift-marketplace/redhat-operators-jg8ll" Jan 29 09:13:08 crc kubenswrapper[5017]: I0129 09:13:08.038460 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffa42744-219d-4e46-bb48-ca14dd9c2c4f-catalog-content\") pod \"redhat-operators-jg8ll\" (UID: \"ffa42744-219d-4e46-bb48-ca14dd9c2c4f\") " pod="openshift-marketplace/redhat-operators-jg8ll" Jan 29 09:13:08 crc kubenswrapper[5017]: I0129 09:13:08.301289 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-jscsw\" (UniqueName: \"kubernetes.io/projected/ffa42744-219d-4e46-bb48-ca14dd9c2c4f-kube-api-access-jscsw\") pod \"redhat-operators-jg8ll\" (UID: \"ffa42744-219d-4e46-bb48-ca14dd9c2c4f\") " pod="openshift-marketplace/redhat-operators-jg8ll" Jan 29 09:13:08 crc kubenswrapper[5017]: I0129 09:13:08.500581 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jg8ll" Jan 29 09:13:09 crc kubenswrapper[5017]: I0129 09:13:09.099015 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jg8ll"] Jan 29 09:13:09 crc kubenswrapper[5017]: E0129 09:13:09.572792 5017 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podffa42744_219d_4e46_bb48_ca14dd9c2c4f.slice/crio-conmon-2a39631ec285b05b60848b86e4d9ac52e50b2c98244749199d4524aa9465c125.scope\": RecentStats: unable to find data in memory cache]" Jan 29 09:13:09 crc kubenswrapper[5017]: I0129 09:13:09.964645 5017 generic.go:334] "Generic (PLEG): container finished" podID="ffa42744-219d-4e46-bb48-ca14dd9c2c4f" containerID="2a39631ec285b05b60848b86e4d9ac52e50b2c98244749199d4524aa9465c125" exitCode=0 Jan 29 09:13:09 crc kubenswrapper[5017]: I0129 09:13:09.964729 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jg8ll" event={"ID":"ffa42744-219d-4e46-bb48-ca14dd9c2c4f","Type":"ContainerDied","Data":"2a39631ec285b05b60848b86e4d9ac52e50b2c98244749199d4524aa9465c125"} Jan 29 09:13:09 crc kubenswrapper[5017]: I0129 09:13:09.965248 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jg8ll" event={"ID":"ffa42744-219d-4e46-bb48-ca14dd9c2c4f","Type":"ContainerStarted","Data":"7a299098233001e7a7d4102d0ead2a8a29228ea3dfab4db9b973df0a13519c7a"} Jan 29 09:13:11 crc kubenswrapper[5017]: I0129 09:13:11.989725 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jg8ll" event={"ID":"ffa42744-219d-4e46-bb48-ca14dd9c2c4f","Type":"ContainerStarted","Data":"62567757d1bda340b50cc34a9a1aa36815f3d401904e55c8b27f0779c526623f"} Jan 29 09:13:18 crc kubenswrapper[5017]: I0129 09:13:18.057824 5017 generic.go:334] "Generic (PLEG): container finished" podID="ffa42744-219d-4e46-bb48-ca14dd9c2c4f" containerID="62567757d1bda340b50cc34a9a1aa36815f3d401904e55c8b27f0779c526623f" exitCode=0 Jan 29 09:13:18 crc kubenswrapper[5017]: I0129 09:13:18.057890 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jg8ll" event={"ID":"ffa42744-219d-4e46-bb48-ca14dd9c2c4f","Type":"ContainerDied","Data":"62567757d1bda340b50cc34a9a1aa36815f3d401904e55c8b27f0779c526623f"} Jan 29 09:13:20 crc kubenswrapper[5017]: I0129 09:13:20.097655 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jg8ll" event={"ID":"ffa42744-219d-4e46-bb48-ca14dd9c2c4f","Type":"ContainerStarted","Data":"71e3b0fa5014802a094d0763a3885631ef4a85cd6f01b4d6df51d63194aaeb42"} Jan 29 09:13:20 crc kubenswrapper[5017]: I0129 09:13:20.129109 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jg8ll" podStartSLOduration=4.628050519 podStartE2EDuration="13.128946769s" podCreationTimestamp="2026-01-29 09:13:07 +0000 UTC" firstStartedPulling="2026-01-29 09:13:09.967572658 +0000 UTC m=+9476.342020268" 
lastFinishedPulling="2026-01-29 09:13:18.468468908 +0000 UTC m=+9484.842916518" observedRunningTime="2026-01-29 09:13:20.123438534 +0000 UTC m=+9486.497886144" watchObservedRunningTime="2026-01-29 09:13:20.128946769 +0000 UTC m=+9486.503394379" Jan 29 09:13:20 crc kubenswrapper[5017]: I0129 09:13:20.358812 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-g4z2q"] Jan 29 09:13:20 crc kubenswrapper[5017]: I0129 09:13:20.361858 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g4z2q" Jan 29 09:13:20 crc kubenswrapper[5017]: I0129 09:13:20.381408 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-g4z2q"] Jan 29 09:13:20 crc kubenswrapper[5017]: I0129 09:13:20.491316 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9pc4\" (UniqueName: \"kubernetes.io/projected/fe9d57b9-a8ba-40a7-81a2-6688050c176d-kube-api-access-x9pc4\") pod \"redhat-marketplace-g4z2q\" (UID: \"fe9d57b9-a8ba-40a7-81a2-6688050c176d\") " pod="openshift-marketplace/redhat-marketplace-g4z2q" Jan 29 09:13:20 crc kubenswrapper[5017]: I0129 09:13:20.491551 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe9d57b9-a8ba-40a7-81a2-6688050c176d-utilities\") pod \"redhat-marketplace-g4z2q\" (UID: \"fe9d57b9-a8ba-40a7-81a2-6688050c176d\") " pod="openshift-marketplace/redhat-marketplace-g4z2q" Jan 29 09:13:20 crc kubenswrapper[5017]: I0129 09:13:20.492217 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe9d57b9-a8ba-40a7-81a2-6688050c176d-catalog-content\") pod \"redhat-marketplace-g4z2q\" (UID: \"fe9d57b9-a8ba-40a7-81a2-6688050c176d\") " pod="openshift-marketplace/redhat-marketplace-g4z2q" Jan 29 09:13:20 crc kubenswrapper[5017]: I0129 09:13:20.595204 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9pc4\" (UniqueName: \"kubernetes.io/projected/fe9d57b9-a8ba-40a7-81a2-6688050c176d-kube-api-access-x9pc4\") pod \"redhat-marketplace-g4z2q\" (UID: \"fe9d57b9-a8ba-40a7-81a2-6688050c176d\") " pod="openshift-marketplace/redhat-marketplace-g4z2q" Jan 29 09:13:20 crc kubenswrapper[5017]: I0129 09:13:20.595414 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe9d57b9-a8ba-40a7-81a2-6688050c176d-utilities\") pod \"redhat-marketplace-g4z2q\" (UID: \"fe9d57b9-a8ba-40a7-81a2-6688050c176d\") " pod="openshift-marketplace/redhat-marketplace-g4z2q" Jan 29 09:13:20 crc kubenswrapper[5017]: I0129 09:13:20.595457 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe9d57b9-a8ba-40a7-81a2-6688050c176d-catalog-content\") pod \"redhat-marketplace-g4z2q\" (UID: \"fe9d57b9-a8ba-40a7-81a2-6688050c176d\") " pod="openshift-marketplace/redhat-marketplace-g4z2q" Jan 29 09:13:20 crc kubenswrapper[5017]: I0129 09:13:20.596222 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe9d57b9-a8ba-40a7-81a2-6688050c176d-catalog-content\") pod \"redhat-marketplace-g4z2q\" (UID: \"fe9d57b9-a8ba-40a7-81a2-6688050c176d\") " 
pod="openshift-marketplace/redhat-marketplace-g4z2q" Jan 29 09:13:20 crc kubenswrapper[5017]: I0129 09:13:20.596394 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe9d57b9-a8ba-40a7-81a2-6688050c176d-utilities\") pod \"redhat-marketplace-g4z2q\" (UID: \"fe9d57b9-a8ba-40a7-81a2-6688050c176d\") " pod="openshift-marketplace/redhat-marketplace-g4z2q" Jan 29 09:13:20 crc kubenswrapper[5017]: I0129 09:13:20.638542 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9pc4\" (UniqueName: \"kubernetes.io/projected/fe9d57b9-a8ba-40a7-81a2-6688050c176d-kube-api-access-x9pc4\") pod \"redhat-marketplace-g4z2q\" (UID: \"fe9d57b9-a8ba-40a7-81a2-6688050c176d\") " pod="openshift-marketplace/redhat-marketplace-g4z2q" Jan 29 09:13:20 crc kubenswrapper[5017]: I0129 09:13:20.701364 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g4z2q" Jan 29 09:13:21 crc kubenswrapper[5017]: I0129 09:13:21.346366 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-g4z2q"] Jan 29 09:13:21 crc kubenswrapper[5017]: W0129 09:13:21.722306 5017 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfe9d57b9_a8ba_40a7_81a2_6688050c176d.slice/crio-748014e484b1efda00cc44403161212918a46f01a0c2609ab61b5be23e20f4e3 WatchSource:0}: Error finding container 748014e484b1efda00cc44403161212918a46f01a0c2609ab61b5be23e20f4e3: Status 404 returned error can't find the container with id 748014e484b1efda00cc44403161212918a46f01a0c2609ab61b5be23e20f4e3 Jan 29 09:13:22 crc kubenswrapper[5017]: I0129 09:13:22.125893 5017 generic.go:334] "Generic (PLEG): container finished" podID="fe9d57b9-a8ba-40a7-81a2-6688050c176d" containerID="736fdfcd381ed74f04c6111d6921f33af7b0e444dfef5e4a0dc40d5f52d81584" exitCode=0 Jan 29 09:13:22 crc kubenswrapper[5017]: I0129 09:13:22.126055 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g4z2q" event={"ID":"fe9d57b9-a8ba-40a7-81a2-6688050c176d","Type":"ContainerDied","Data":"736fdfcd381ed74f04c6111d6921f33af7b0e444dfef5e4a0dc40d5f52d81584"} Jan 29 09:13:22 crc kubenswrapper[5017]: I0129 09:13:22.126385 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g4z2q" event={"ID":"fe9d57b9-a8ba-40a7-81a2-6688050c176d","Type":"ContainerStarted","Data":"748014e484b1efda00cc44403161212918a46f01a0c2609ab61b5be23e20f4e3"} Jan 29 09:13:23 crc kubenswrapper[5017]: I0129 09:13:23.597445 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-kvgh9_5ab0da1e-4133-488b-9472-83bde1f3bd25/control-plane-machine-set-operator/0.log" Jan 29 09:13:23 crc kubenswrapper[5017]: I0129 09:13:23.811219 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-cbp4z_250918e3-cdc9-40cb-b390-6dbb5afe9d1f/machine-api-operator/0.log" Jan 29 09:13:23 crc kubenswrapper[5017]: I0129 09:13:23.882144 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-cbp4z_250918e3-cdc9-40cb-b390-6dbb5afe9d1f/kube-rbac-proxy/0.log" Jan 29 09:13:24 crc kubenswrapper[5017]: I0129 09:13:24.152313 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-g4z2q" event={"ID":"fe9d57b9-a8ba-40a7-81a2-6688050c176d","Type":"ContainerStarted","Data":"4c885d649301f697544d4cfd2b5f2a072b49f1fe40df3a21c17deac3cf62fff3"} Jan 29 09:13:25 crc kubenswrapper[5017]: I0129 09:13:25.165877 5017 generic.go:334] "Generic (PLEG): container finished" podID="fe9d57b9-a8ba-40a7-81a2-6688050c176d" containerID="4c885d649301f697544d4cfd2b5f2a072b49f1fe40df3a21c17deac3cf62fff3" exitCode=0 Jan 29 09:13:25 crc kubenswrapper[5017]: I0129 09:13:25.165971 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g4z2q" event={"ID":"fe9d57b9-a8ba-40a7-81a2-6688050c176d","Type":"ContainerDied","Data":"4c885d649301f697544d4cfd2b5f2a072b49f1fe40df3a21c17deac3cf62fff3"} Jan 29 09:13:27 crc kubenswrapper[5017]: I0129 09:13:27.193454 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g4z2q" event={"ID":"fe9d57b9-a8ba-40a7-81a2-6688050c176d","Type":"ContainerStarted","Data":"f7006588173936b5135b6dbc47779f7a9e0dff433ef8a72237b9bf5dd53a9e22"} Jan 29 09:13:27 crc kubenswrapper[5017]: I0129 09:13:27.233017 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-g4z2q" podStartSLOduration=3.013789591 podStartE2EDuration="7.23298908s" podCreationTimestamp="2026-01-29 09:13:20 +0000 UTC" firstStartedPulling="2026-01-29 09:13:22.128515451 +0000 UTC m=+9488.502963071" lastFinishedPulling="2026-01-29 09:13:26.34771495 +0000 UTC m=+9492.722162560" observedRunningTime="2026-01-29 09:13:27.220791581 +0000 UTC m=+9493.595239211" watchObservedRunningTime="2026-01-29 09:13:27.23298908 +0000 UTC m=+9493.607436690" Jan 29 09:13:28 crc kubenswrapper[5017]: I0129 09:13:28.502750 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jg8ll" Jan 29 09:13:28 crc kubenswrapper[5017]: I0129 09:13:28.503239 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jg8ll" Jan 29 09:13:28 crc kubenswrapper[5017]: I0129 09:13:28.561575 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jg8ll" Jan 29 09:13:29 crc kubenswrapper[5017]: I0129 09:13:29.275109 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jg8ll" Jan 29 09:13:30 crc kubenswrapper[5017]: I0129 09:13:30.702578 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-g4z2q" Jan 29 09:13:30 crc kubenswrapper[5017]: I0129 09:13:30.704218 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-g4z2q" Jan 29 09:13:31 crc kubenswrapper[5017]: I0129 09:13:31.152154 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-njc9w"] Jan 29 09:13:31 crc kubenswrapper[5017]: I0129 09:13:31.155865 5017 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-njc9w"
Jan 29 09:13:31 crc kubenswrapper[5017]: I0129 09:13:31.166796 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-njc9w"]
Jan 29 09:13:31 crc kubenswrapper[5017]: I0129 09:13:31.287572 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsw6l\" (UniqueName: \"kubernetes.io/projected/838717a7-053f-4b5d-aa9b-47d5351afb51-kube-api-access-bsw6l\") pod \"certified-operators-njc9w\" (UID: \"838717a7-053f-4b5d-aa9b-47d5351afb51\") " pod="openshift-marketplace/certified-operators-njc9w"
Jan 29 09:13:31 crc kubenswrapper[5017]: I0129 09:13:31.287649 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/838717a7-053f-4b5d-aa9b-47d5351afb51-utilities\") pod \"certified-operators-njc9w\" (UID: \"838717a7-053f-4b5d-aa9b-47d5351afb51\") " pod="openshift-marketplace/certified-operators-njc9w"
Jan 29 09:13:31 crc kubenswrapper[5017]: I0129 09:13:31.287683 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/838717a7-053f-4b5d-aa9b-47d5351afb51-catalog-content\") pod \"certified-operators-njc9w\" (UID: \"838717a7-053f-4b5d-aa9b-47d5351afb51\") " pod="openshift-marketplace/certified-operators-njc9w"
Jan 29 09:13:31 crc kubenswrapper[5017]: I0129 09:13:31.390191 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/838717a7-053f-4b5d-aa9b-47d5351afb51-utilities\") pod \"certified-operators-njc9w\" (UID: \"838717a7-053f-4b5d-aa9b-47d5351afb51\") " pod="openshift-marketplace/certified-operators-njc9w"
Jan 29 09:13:31 crc kubenswrapper[5017]: I0129 09:13:31.390254 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/838717a7-053f-4b5d-aa9b-47d5351afb51-catalog-content\") pod \"certified-operators-njc9w\" (UID: \"838717a7-053f-4b5d-aa9b-47d5351afb51\") " pod="openshift-marketplace/certified-operators-njc9w"
Jan 29 09:13:31 crc kubenswrapper[5017]: I0129 09:13:31.390589 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bsw6l\" (UniqueName: \"kubernetes.io/projected/838717a7-053f-4b5d-aa9b-47d5351afb51-kube-api-access-bsw6l\") pod \"certified-operators-njc9w\" (UID: \"838717a7-053f-4b5d-aa9b-47d5351afb51\") " pod="openshift-marketplace/certified-operators-njc9w"
Jan 29 09:13:31 crc kubenswrapper[5017]: I0129 09:13:31.391041 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/838717a7-053f-4b5d-aa9b-47d5351afb51-utilities\") pod \"certified-operators-njc9w\" (UID: \"838717a7-053f-4b5d-aa9b-47d5351afb51\") " pod="openshift-marketplace/certified-operators-njc9w"
Jan 29 09:13:31 crc kubenswrapper[5017]: I0129 09:13:31.391185 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/838717a7-053f-4b5d-aa9b-47d5351afb51-catalog-content\") pod \"certified-operators-njc9w\" (UID: \"838717a7-053f-4b5d-aa9b-47d5351afb51\") " pod="openshift-marketplace/certified-operators-njc9w"
Jan 29 09:13:31 crc kubenswrapper[5017]: I0129 09:13:31.759510 5017 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-g4z2q" podUID="fe9d57b9-a8ba-40a7-81a2-6688050c176d" containerName="registry-server" probeResult="failure" output=<
Jan 29 09:13:31 crc kubenswrapper[5017]: timeout: failed to connect service ":50051" within 1s
Jan 29 09:13:31 crc kubenswrapper[5017]: >
Jan 29 09:13:31 crc kubenswrapper[5017]: I0129 09:13:31.905296 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bsw6l\" (UniqueName: \"kubernetes.io/projected/838717a7-053f-4b5d-aa9b-47d5351afb51-kube-api-access-bsw6l\") pod \"certified-operators-njc9w\" (UID: \"838717a7-053f-4b5d-aa9b-47d5351afb51\") " pod="openshift-marketplace/certified-operators-njc9w"
Jan 29 09:13:32 crc kubenswrapper[5017]: I0129 09:13:32.079433 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-njc9w"
Jan 29 09:13:32 crc kubenswrapper[5017]: I0129 09:13:32.588428 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-njc9w"]
Jan 29 09:13:33 crc kubenswrapper[5017]: I0129 09:13:33.294166 5017 generic.go:334] "Generic (PLEG): container finished" podID="838717a7-053f-4b5d-aa9b-47d5351afb51" containerID="a746d6fcadc61b455dfd8097e9347c5a8c503e6adbd5e87b99a5deed02c1c3e0" exitCode=0
Jan 29 09:13:33 crc kubenswrapper[5017]: I0129 09:13:33.294406 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-njc9w" event={"ID":"838717a7-053f-4b5d-aa9b-47d5351afb51","Type":"ContainerDied","Data":"a746d6fcadc61b455dfd8097e9347c5a8c503e6adbd5e87b99a5deed02c1c3e0"}
Jan 29 09:13:33 crc kubenswrapper[5017]: I0129 09:13:33.294623 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-njc9w" event={"ID":"838717a7-053f-4b5d-aa9b-47d5351afb51","Type":"ContainerStarted","Data":"551abc3bb643aef511894cbd2719f7e02475e2b1cfecc8f684e5e9559133c7fe"}
Jan 29 09:13:35 crc kubenswrapper[5017]: I0129 09:13:35.326877 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-njc9w" event={"ID":"838717a7-053f-4b5d-aa9b-47d5351afb51","Type":"ContainerStarted","Data":"beddd183d477cca4dd6f79868b87267087ce37d0fa5a8483d7ca1e34de504e14"}
Jan 29 09:13:35 crc kubenswrapper[5017]: I0129 09:13:35.994448 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jg8ll"]
Jan 29 09:13:35 crc kubenswrapper[5017]: I0129 09:13:35.995286 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jg8ll" podUID="ffa42744-219d-4e46-bb48-ca14dd9c2c4f" containerName="registry-server" containerID="cri-o://71e3b0fa5014802a094d0763a3885631ef4a85cd6f01b4d6df51d63194aaeb42" gracePeriod=2
Jan 29 09:13:36 crc kubenswrapper[5017]: I0129 09:13:36.346430 5017 generic.go:334] "Generic (PLEG): container finished" podID="ffa42744-219d-4e46-bb48-ca14dd9c2c4f" containerID="71e3b0fa5014802a094d0763a3885631ef4a85cd6f01b4d6df51d63194aaeb42" exitCode=0
Jan 29 09:13:36 crc kubenswrapper[5017]: I0129 09:13:36.346483 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jg8ll" event={"ID":"ffa42744-219d-4e46-bb48-ca14dd9c2c4f","Type":"ContainerDied","Data":"71e3b0fa5014802a094d0763a3885631ef4a85cd6f01b4d6df51d63194aaeb42"}
Jan 29 09:13:37 crc kubenswrapper[5017]: I0129 09:13:37.116307 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jg8ll"
Jan 29 09:13:37 crc kubenswrapper[5017]: I0129 09:13:37.147369 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jscsw\" (UniqueName: \"kubernetes.io/projected/ffa42744-219d-4e46-bb48-ca14dd9c2c4f-kube-api-access-jscsw\") pod \"ffa42744-219d-4e46-bb48-ca14dd9c2c4f\" (UID: \"ffa42744-219d-4e46-bb48-ca14dd9c2c4f\") "
Jan 29 09:13:37 crc kubenswrapper[5017]: I0129 09:13:37.147519 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffa42744-219d-4e46-bb48-ca14dd9c2c4f-catalog-content\") pod \"ffa42744-219d-4e46-bb48-ca14dd9c2c4f\" (UID: \"ffa42744-219d-4e46-bb48-ca14dd9c2c4f\") "
Jan 29 09:13:37 crc kubenswrapper[5017]: I0129 09:13:37.147693 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffa42744-219d-4e46-bb48-ca14dd9c2c4f-utilities\") pod \"ffa42744-219d-4e46-bb48-ca14dd9c2c4f\" (UID: \"ffa42744-219d-4e46-bb48-ca14dd9c2c4f\") "
Jan 29 09:13:37 crc kubenswrapper[5017]: I0129 09:13:37.148897 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ffa42744-219d-4e46-bb48-ca14dd9c2c4f-utilities" (OuterVolumeSpecName: "utilities") pod "ffa42744-219d-4e46-bb48-ca14dd9c2c4f" (UID: "ffa42744-219d-4e46-bb48-ca14dd9c2c4f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 09:13:37 crc kubenswrapper[5017]: I0129 09:13:37.156977 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffa42744-219d-4e46-bb48-ca14dd9c2c4f-kube-api-access-jscsw" (OuterVolumeSpecName: "kube-api-access-jscsw") pod "ffa42744-219d-4e46-bb48-ca14dd9c2c4f" (UID: "ffa42744-219d-4e46-bb48-ca14dd9c2c4f"). InnerVolumeSpecName "kube-api-access-jscsw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 09:13:37 crc kubenswrapper[5017]: I0129 09:13:37.252014 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jscsw\" (UniqueName: \"kubernetes.io/projected/ffa42744-219d-4e46-bb48-ca14dd9c2c4f-kube-api-access-jscsw\") on node \"crc\" DevicePath \"\""
Jan 29 09:13:37 crc kubenswrapper[5017]: I0129 09:13:37.252121 5017 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffa42744-219d-4e46-bb48-ca14dd9c2c4f-utilities\") on node \"crc\" DevicePath \"\""
Jan 29 09:13:37 crc kubenswrapper[5017]: I0129 09:13:37.307594 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ffa42744-219d-4e46-bb48-ca14dd9c2c4f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ffa42744-219d-4e46-bb48-ca14dd9c2c4f" (UID: "ffa42744-219d-4e46-bb48-ca14dd9c2c4f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 09:13:37 crc kubenswrapper[5017]: I0129 09:13:37.354386 5017 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffa42744-219d-4e46-bb48-ca14dd9c2c4f-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 29 09:13:37 crc kubenswrapper[5017]: I0129 09:13:37.363630 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jg8ll" event={"ID":"ffa42744-219d-4e46-bb48-ca14dd9c2c4f","Type":"ContainerDied","Data":"7a299098233001e7a7d4102d0ead2a8a29228ea3dfab4db9b973df0a13519c7a"}
Jan 29 09:13:37 crc kubenswrapper[5017]: I0129 09:13:37.363703 5017 scope.go:117] "RemoveContainer" containerID="71e3b0fa5014802a094d0763a3885631ef4a85cd6f01b4d6df51d63194aaeb42"
Jan 29 09:13:37 crc kubenswrapper[5017]: I0129 09:13:37.363733 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jg8ll"
Jan 29 09:13:37 crc kubenswrapper[5017]: I0129 09:13:37.405664 5017 scope.go:117] "RemoveContainer" containerID="62567757d1bda340b50cc34a9a1aa36815f3d401904e55c8b27f0779c526623f"
Jan 29 09:13:37 crc kubenswrapper[5017]: I0129 09:13:37.422101 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jg8ll"]
Jan 29 09:13:37 crc kubenswrapper[5017]: I0129 09:13:37.440266 5017 scope.go:117] "RemoveContainer" containerID="2a39631ec285b05b60848b86e4d9ac52e50b2c98244749199d4524aa9465c125"
Jan 29 09:13:37 crc kubenswrapper[5017]: I0129 09:13:37.440409 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jg8ll"]
Jan 29 09:13:38 crc kubenswrapper[5017]: I0129 09:13:38.336042 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ffa42744-219d-4e46-bb48-ca14dd9c2c4f" path="/var/lib/kubelet/pods/ffa42744-219d-4e46-bb48-ca14dd9c2c4f/volumes"
Jan 29 09:13:38 crc kubenswrapper[5017]: I0129 09:13:38.380233 5017 generic.go:334] "Generic (PLEG): container finished" podID="838717a7-053f-4b5d-aa9b-47d5351afb51" containerID="beddd183d477cca4dd6f79868b87267087ce37d0fa5a8483d7ca1e34de504e14" exitCode=0
Jan 29 09:13:38 crc kubenswrapper[5017]: I0129 09:13:38.381134 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-njc9w" event={"ID":"838717a7-053f-4b5d-aa9b-47d5351afb51","Type":"ContainerDied","Data":"beddd183d477cca4dd6f79868b87267087ce37d0fa5a8483d7ca1e34de504e14"}
Jan 29 09:13:39 crc kubenswrapper[5017]: I0129 09:13:39.399086 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-njc9w" event={"ID":"838717a7-053f-4b5d-aa9b-47d5351afb51","Type":"ContainerStarted","Data":"de996ae22508e8f0e117ebcf7a20c41f29ff799e146552b7ef04c609ed2d69dd"}
Jan 29 09:13:39 crc kubenswrapper[5017]: I0129 09:13:39.430399 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-njc9w" podStartSLOduration=2.811634083 podStartE2EDuration="8.430373367s" podCreationTimestamp="2026-01-29 09:13:31 +0000 UTC" firstStartedPulling="2026-01-29 09:13:33.2986762 +0000 UTC m=+9499.673123810" lastFinishedPulling="2026-01-29 09:13:38.917415484 +0000 UTC m=+9505.291863094" observedRunningTime="2026-01-29 09:13:39.426344348 +0000 UTC m=+9505.800791958" watchObservedRunningTime="2026-01-29 09:13:39.430373367 +0000 UTC m=+9505.804820977"
Jan 29 09:13:40 crc kubenswrapper[5017]: I0129 09:13:40.769087 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-g4z2q"
Jan 29 09:13:40 crc kubenswrapper[5017]: I0129 09:13:40.821376 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-545d4d4674-kbdd8_d340ec3a-8018-4c17-864a-4121ef63d989/cert-manager-controller/0.log"
Jan 29 09:13:40 crc kubenswrapper[5017]: I0129 09:13:40.838401 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-g4z2q"
Jan 29 09:13:41 crc kubenswrapper[5017]: I0129 09:13:41.090322 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-5545bd876-kt86q_a5489a52-a692-4be5-ad55-f4e3607180e9/cert-manager-cainjector/0.log"
Jan 29 09:13:41 crc kubenswrapper[5017]: I0129 09:13:41.210183 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-6888856db4-ttkcb_a50c399e-cf7e-4906-82b7-44e8925508c1/cert-manager-webhook/0.log"
Jan 29 09:13:42 crc kubenswrapper[5017]: I0129 09:13:42.080650 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-njc9w"
Jan 29 09:13:42 crc kubenswrapper[5017]: I0129 09:13:42.081144 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-njc9w"
Jan 29 09:13:42 crc kubenswrapper[5017]: I0129 09:13:42.136759 5017 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-njc9w"
Jan 29 09:13:48 crc kubenswrapper[5017]: I0129 09:13:48.743214 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-g4z2q"]
Jan 29 09:13:48 crc kubenswrapper[5017]: I0129 09:13:48.744455 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-g4z2q" podUID="fe9d57b9-a8ba-40a7-81a2-6688050c176d" containerName="registry-server" containerID="cri-o://f7006588173936b5135b6dbc47779f7a9e0dff433ef8a72237b9bf5dd53a9e22" gracePeriod=2
Jan 29 09:13:49 crc kubenswrapper[5017]: I0129 09:13:49.319928 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g4z2q"
Jan 29 09:13:49 crc kubenswrapper[5017]: I0129 09:13:49.385309 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x9pc4\" (UniqueName: \"kubernetes.io/projected/fe9d57b9-a8ba-40a7-81a2-6688050c176d-kube-api-access-x9pc4\") pod \"fe9d57b9-a8ba-40a7-81a2-6688050c176d\" (UID: \"fe9d57b9-a8ba-40a7-81a2-6688050c176d\") "
Jan 29 09:13:49 crc kubenswrapper[5017]: I0129 09:13:49.385455 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe9d57b9-a8ba-40a7-81a2-6688050c176d-utilities\") pod \"fe9d57b9-a8ba-40a7-81a2-6688050c176d\" (UID: \"fe9d57b9-a8ba-40a7-81a2-6688050c176d\") "
Jan 29 09:13:49 crc kubenswrapper[5017]: I0129 09:13:49.385535 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe9d57b9-a8ba-40a7-81a2-6688050c176d-catalog-content\") pod \"fe9d57b9-a8ba-40a7-81a2-6688050c176d\" (UID: \"fe9d57b9-a8ba-40a7-81a2-6688050c176d\") "
Jan 29 09:13:49 crc kubenswrapper[5017]: I0129 09:13:49.386938 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe9d57b9-a8ba-40a7-81a2-6688050c176d-utilities" (OuterVolumeSpecName: "utilities") pod "fe9d57b9-a8ba-40a7-81a2-6688050c176d" (UID: "fe9d57b9-a8ba-40a7-81a2-6688050c176d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 09:13:49 crc kubenswrapper[5017]: I0129 09:13:49.394410 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe9d57b9-a8ba-40a7-81a2-6688050c176d-kube-api-access-x9pc4" (OuterVolumeSpecName: "kube-api-access-x9pc4") pod "fe9d57b9-a8ba-40a7-81a2-6688050c176d" (UID: "fe9d57b9-a8ba-40a7-81a2-6688050c176d"). InnerVolumeSpecName "kube-api-access-x9pc4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 09:13:49 crc kubenswrapper[5017]: I0129 09:13:49.413837 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe9d57b9-a8ba-40a7-81a2-6688050c176d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fe9d57b9-a8ba-40a7-81a2-6688050c176d" (UID: "fe9d57b9-a8ba-40a7-81a2-6688050c176d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 09:13:49 crc kubenswrapper[5017]: I0129 09:13:49.489439 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x9pc4\" (UniqueName: \"kubernetes.io/projected/fe9d57b9-a8ba-40a7-81a2-6688050c176d-kube-api-access-x9pc4\") on node \"crc\" DevicePath \"\""
Jan 29 09:13:49 crc kubenswrapper[5017]: I0129 09:13:49.489896 5017 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe9d57b9-a8ba-40a7-81a2-6688050c176d-utilities\") on node \"crc\" DevicePath \"\""
Jan 29 09:13:49 crc kubenswrapper[5017]: I0129 09:13:49.490006 5017 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe9d57b9-a8ba-40a7-81a2-6688050c176d-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 29 09:13:49 crc kubenswrapper[5017]: I0129 09:13:49.513733 5017 generic.go:334] "Generic (PLEG): container finished" podID="fe9d57b9-a8ba-40a7-81a2-6688050c176d" containerID="f7006588173936b5135b6dbc47779f7a9e0dff433ef8a72237b9bf5dd53a9e22" exitCode=0
Jan 29 09:13:49 crc kubenswrapper[5017]: I0129 09:13:49.513793 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g4z2q" event={"ID":"fe9d57b9-a8ba-40a7-81a2-6688050c176d","Type":"ContainerDied","Data":"f7006588173936b5135b6dbc47779f7a9e0dff433ef8a72237b9bf5dd53a9e22"}
Jan 29 09:13:49 crc kubenswrapper[5017]: I0129 09:13:49.513832 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g4z2q" event={"ID":"fe9d57b9-a8ba-40a7-81a2-6688050c176d","Type":"ContainerDied","Data":"748014e484b1efda00cc44403161212918a46f01a0c2609ab61b5be23e20f4e3"}
Jan 29 09:13:49 crc kubenswrapper[5017]: I0129 09:13:49.513830 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g4z2q"
Jan 29 09:13:49 crc kubenswrapper[5017]: I0129 09:13:49.513918 5017 scope.go:117] "RemoveContainer" containerID="f7006588173936b5135b6dbc47779f7a9e0dff433ef8a72237b9bf5dd53a9e22"
Jan 29 09:13:49 crc kubenswrapper[5017]: I0129 09:13:49.552099 5017 scope.go:117] "RemoveContainer" containerID="4c885d649301f697544d4cfd2b5f2a072b49f1fe40df3a21c17deac3cf62fff3"
Jan 29 09:13:49 crc kubenswrapper[5017]: I0129 09:13:49.583337 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-g4z2q"]
Jan 29 09:13:49 crc kubenswrapper[5017]: I0129 09:13:49.595327 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-g4z2q"]
Jan 29 09:13:49 crc kubenswrapper[5017]: I0129 09:13:49.596089 5017 scope.go:117] "RemoveContainer" containerID="736fdfcd381ed74f04c6111d6921f33af7b0e444dfef5e4a0dc40d5f52d81584"
Jan 29 09:13:49 crc kubenswrapper[5017]: I0129 09:13:49.649897 5017 scope.go:117] "RemoveContainer" containerID="f7006588173936b5135b6dbc47779f7a9e0dff433ef8a72237b9bf5dd53a9e22"
Jan 29 09:13:49 crc kubenswrapper[5017]: E0129 09:13:49.650715 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7006588173936b5135b6dbc47779f7a9e0dff433ef8a72237b9bf5dd53a9e22\": container with ID starting with f7006588173936b5135b6dbc47779f7a9e0dff433ef8a72237b9bf5dd53a9e22 not found: ID does not exist" containerID="f7006588173936b5135b6dbc47779f7a9e0dff433ef8a72237b9bf5dd53a9e22"
Jan 29 09:13:49 crc kubenswrapper[5017]: I0129 09:13:49.650794 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7006588173936b5135b6dbc47779f7a9e0dff433ef8a72237b9bf5dd53a9e22"} err="failed to get container status \"f7006588173936b5135b6dbc47779f7a9e0dff433ef8a72237b9bf5dd53a9e22\": rpc error: code = NotFound desc = could not find container \"f7006588173936b5135b6dbc47779f7a9e0dff433ef8a72237b9bf5dd53a9e22\": container with ID starting with f7006588173936b5135b6dbc47779f7a9e0dff433ef8a72237b9bf5dd53a9e22 not found: ID does not exist"
Jan 29 09:13:49 crc kubenswrapper[5017]: I0129 09:13:49.650827 5017 scope.go:117] "RemoveContainer" containerID="4c885d649301f697544d4cfd2b5f2a072b49f1fe40df3a21c17deac3cf62fff3"
Jan 29 09:13:49 crc kubenswrapper[5017]: E0129 09:13:49.651303 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c885d649301f697544d4cfd2b5f2a072b49f1fe40df3a21c17deac3cf62fff3\": container with ID starting with 4c885d649301f697544d4cfd2b5f2a072b49f1fe40df3a21c17deac3cf62fff3 not found: ID does not exist" containerID="4c885d649301f697544d4cfd2b5f2a072b49f1fe40df3a21c17deac3cf62fff3"
Jan 29 09:13:49 crc kubenswrapper[5017]: I0129 09:13:49.651405 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c885d649301f697544d4cfd2b5f2a072b49f1fe40df3a21c17deac3cf62fff3"} err="failed to get container status \"4c885d649301f697544d4cfd2b5f2a072b49f1fe40df3a21c17deac3cf62fff3\": rpc error: code = NotFound desc = could not find container \"4c885d649301f697544d4cfd2b5f2a072b49f1fe40df3a21c17deac3cf62fff3\": container with ID starting with 4c885d649301f697544d4cfd2b5f2a072b49f1fe40df3a21c17deac3cf62fff3 not found: ID does not exist"
Jan 29 09:13:49 crc kubenswrapper[5017]: I0129 09:13:49.651456 5017 scope.go:117] "RemoveContainer" containerID="736fdfcd381ed74f04c6111d6921f33af7b0e444dfef5e4a0dc40d5f52d81584"
Jan 29 09:13:49 crc kubenswrapper[5017]: E0129 09:13:49.651922 5017 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"736fdfcd381ed74f04c6111d6921f33af7b0e444dfef5e4a0dc40d5f52d81584\": container with ID starting with 736fdfcd381ed74f04c6111d6921f33af7b0e444dfef5e4a0dc40d5f52d81584 not found: ID does not exist" containerID="736fdfcd381ed74f04c6111d6921f33af7b0e444dfef5e4a0dc40d5f52d81584"
Jan 29 09:13:49 crc kubenswrapper[5017]: I0129 09:13:49.651968 5017 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"736fdfcd381ed74f04c6111d6921f33af7b0e444dfef5e4a0dc40d5f52d81584"} err="failed to get container status \"736fdfcd381ed74f04c6111d6921f33af7b0e444dfef5e4a0dc40d5f52d81584\": rpc error: code = NotFound desc = could not find container \"736fdfcd381ed74f04c6111d6921f33af7b0e444dfef5e4a0dc40d5f52d81584\": container with ID starting with 736fdfcd381ed74f04c6111d6921f33af7b0e444dfef5e4a0dc40d5f52d81584 not found: ID does not exist"
Jan 29 09:13:50 crc kubenswrapper[5017]: I0129 09:13:50.330169 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe9d57b9-a8ba-40a7-81a2-6688050c176d" path="/var/lib/kubelet/pods/fe9d57b9-a8ba-40a7-81a2-6688050c176d/volumes"
Jan 29 09:13:52 crc kubenswrapper[5017]: I0129 09:13:52.133622 5017 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-njc9w"
Jan 29 09:13:53 crc kubenswrapper[5017]: I0129 09:13:53.344071 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-njc9w"]
Jan 29 09:13:53 crc kubenswrapper[5017]: I0129 09:13:53.344778 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-njc9w" podUID="838717a7-053f-4b5d-aa9b-47d5351afb51" containerName="registry-server" containerID="cri-o://de996ae22508e8f0e117ebcf7a20c41f29ff799e146552b7ef04c609ed2d69dd" gracePeriod=2
Jan 29 09:13:53 crc kubenswrapper[5017]: I0129 09:13:53.584977 5017 generic.go:334] "Generic (PLEG): container finished" podID="838717a7-053f-4b5d-aa9b-47d5351afb51" containerID="de996ae22508e8f0e117ebcf7a20c41f29ff799e146552b7ef04c609ed2d69dd" exitCode=0
Jan 29 09:13:53 crc kubenswrapper[5017]: I0129 09:13:53.585255 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-njc9w" event={"ID":"838717a7-053f-4b5d-aa9b-47d5351afb51","Type":"ContainerDied","Data":"de996ae22508e8f0e117ebcf7a20c41f29ff799e146552b7ef04c609ed2d69dd"}
Jan 29 09:13:53 crc kubenswrapper[5017]: I0129 09:13:53.878877 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-njc9w"
Jan 29 09:13:54 crc kubenswrapper[5017]: I0129 09:13:54.004464 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/838717a7-053f-4b5d-aa9b-47d5351afb51-utilities\") pod \"838717a7-053f-4b5d-aa9b-47d5351afb51\" (UID: \"838717a7-053f-4b5d-aa9b-47d5351afb51\") "
Jan 29 09:13:54 crc kubenswrapper[5017]: I0129 09:13:54.004546 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/838717a7-053f-4b5d-aa9b-47d5351afb51-catalog-content\") pod \"838717a7-053f-4b5d-aa9b-47d5351afb51\" (UID: \"838717a7-053f-4b5d-aa9b-47d5351afb51\") "
Jan 29 09:13:54 crc kubenswrapper[5017]: I0129 09:13:54.004682 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bsw6l\" (UniqueName: \"kubernetes.io/projected/838717a7-053f-4b5d-aa9b-47d5351afb51-kube-api-access-bsw6l\") pod \"838717a7-053f-4b5d-aa9b-47d5351afb51\" (UID: \"838717a7-053f-4b5d-aa9b-47d5351afb51\") "
Jan 29 09:13:54 crc kubenswrapper[5017]: I0129 09:13:54.007297 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/838717a7-053f-4b5d-aa9b-47d5351afb51-utilities" (OuterVolumeSpecName: "utilities") pod "838717a7-053f-4b5d-aa9b-47d5351afb51" (UID: "838717a7-053f-4b5d-aa9b-47d5351afb51"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 09:13:54 crc kubenswrapper[5017]: I0129 09:13:54.013381 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/838717a7-053f-4b5d-aa9b-47d5351afb51-kube-api-access-bsw6l" (OuterVolumeSpecName: "kube-api-access-bsw6l") pod "838717a7-053f-4b5d-aa9b-47d5351afb51" (UID: "838717a7-053f-4b5d-aa9b-47d5351afb51"). InnerVolumeSpecName "kube-api-access-bsw6l". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 09:13:54 crc kubenswrapper[5017]: I0129 09:13:54.061564 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/838717a7-053f-4b5d-aa9b-47d5351afb51-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "838717a7-053f-4b5d-aa9b-47d5351afb51" (UID: "838717a7-053f-4b5d-aa9b-47d5351afb51"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 09:13:54 crc kubenswrapper[5017]: I0129 09:13:54.107344 5017 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/838717a7-053f-4b5d-aa9b-47d5351afb51-utilities\") on node \"crc\" DevicePath \"\""
Jan 29 09:13:54 crc kubenswrapper[5017]: I0129 09:13:54.107811 5017 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/838717a7-053f-4b5d-aa9b-47d5351afb51-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 29 09:13:54 crc kubenswrapper[5017]: I0129 09:13:54.107829 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bsw6l\" (UniqueName: \"kubernetes.io/projected/838717a7-053f-4b5d-aa9b-47d5351afb51-kube-api-access-bsw6l\") on node \"crc\" DevicePath \"\""
Jan 29 09:13:54 crc kubenswrapper[5017]: I0129 09:13:54.605604 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-njc9w" event={"ID":"838717a7-053f-4b5d-aa9b-47d5351afb51","Type":"ContainerDied","Data":"551abc3bb643aef511894cbd2719f7e02475e2b1cfecc8f684e5e9559133c7fe"}
Jan 29 09:13:54 crc kubenswrapper[5017]: I0129 09:13:54.605651 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-njc9w"
Jan 29 09:13:54 crc kubenswrapper[5017]: I0129 09:13:54.605684 5017 scope.go:117] "RemoveContainer" containerID="de996ae22508e8f0e117ebcf7a20c41f29ff799e146552b7ef04c609ed2d69dd"
Jan 29 09:13:54 crc kubenswrapper[5017]: I0129 09:13:54.641804 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-njc9w"]
Jan 29 09:13:54 crc kubenswrapper[5017]: I0129 09:13:54.645392 5017 scope.go:117] "RemoveContainer" containerID="beddd183d477cca4dd6f79868b87267087ce37d0fa5a8483d7ca1e34de504e14"
Jan 29 09:13:54 crc kubenswrapper[5017]: I0129 09:13:54.659065 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-njc9w"]
Jan 29 09:13:54 crc kubenswrapper[5017]: I0129 09:13:54.670637 5017 scope.go:117] "RemoveContainer" containerID="a746d6fcadc61b455dfd8097e9347c5a8c503e6adbd5e87b99a5deed02c1c3e0"
Jan 29 09:13:55 crc kubenswrapper[5017]: I0129 09:13:55.366377 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-c979l_77f8b370-cba3-4b23-956d-85e2eac24634/nmstate-console-plugin/0.log"
Jan 29 09:13:55 crc kubenswrapper[5017]: I0129 09:13:55.605037 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-gpxgz_59ffce2a-c49d-42c3-b665-c2cab504e523/kube-rbac-proxy/0.log"
Jan 29 09:13:55 crc kubenswrapper[5017]: I0129 09:13:55.615721 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-kwtrf_bfe492e6-837a-4318-a908-125c9cc736d0/nmstate-handler/0.log"
Jan 29 09:13:55 crc kubenswrapper[5017]: I0129 09:13:55.819438 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-gpxgz_59ffce2a-c49d-42c3-b665-c2cab504e523/nmstate-metrics/0.log"
Jan 29 09:13:55 crc kubenswrapper[5017]: I0129 09:13:55.828151 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-fnwvc_f4986274-c67e-4d15-a613-ed6e440526e5/nmstate-operator/0.log"
Jan 29 09:13:56 crc kubenswrapper[5017]: I0129 09:13:56.006776 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-ljkjc_8da9c51f-d7bf-499a-a29b-348743eb72ad/nmstate-webhook/0.log"
Jan 29 09:13:56 crc kubenswrapper[5017]: I0129 09:13:56.329319 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="838717a7-053f-4b5d-aa9b-47d5351afb51" path="/var/lib/kubelet/pods/838717a7-053f-4b5d-aa9b-47d5351afb51/volumes"
Jan 29 09:14:11 crc kubenswrapper[5017]: I0129 09:14:11.922110 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-84bts_dced126a-1d49-4fe1-a610-32145372c814/prometheus-operator/0.log"
Jan 29 09:14:12 crc kubenswrapper[5017]: I0129 09:14:12.185861 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7f7867b468-8qncr_b5cd374b-6395-40e3-80fb-2ce7f3f9c001/prometheus-operator-admission-webhook/0.log"
Jan 29 09:14:12 crc kubenswrapper[5017]: I0129 09:14:12.217568 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7f7867b468-jcftm_3dc25106-c3d9-46c1-9d93-3407ca7dedbd/prometheus-operator-admission-webhook/0.log"
Jan 29 09:14:13 crc kubenswrapper[5017]: I0129 09:14:13.138911 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-rvsnq_cecf03f2-56cf-41cd-a5e5-0a99d4c0784f/perses-operator/0.log"
Jan 29 09:14:13 crc kubenswrapper[5017]: I0129 09:14:13.218553 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-drf5r_c4ed50de-cf5f-4bc5-9e0c-8d696c49fe16/operator/0.log"
Jan 29 09:14:27 crc kubenswrapper[5017]: I0129 09:14:27.284500 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-8vxt7_632ce719-ce24-4b7c-855b-1b348732dc19/kube-rbac-proxy/0.log"
Jan 29 09:14:27 crc kubenswrapper[5017]: I0129 09:14:27.571311 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cvz64_2f35677d-147b-4d27-ac32-ab82b1ec29db/cp-frr-files/0.log"
Jan 29 09:14:27 crc kubenswrapper[5017]: I0129 09:14:27.829191 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-8vxt7_632ce719-ce24-4b7c-855b-1b348732dc19/controller/0.log"
Jan 29 09:14:27 crc kubenswrapper[5017]: I0129 09:14:27.842483 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cvz64_2f35677d-147b-4d27-ac32-ab82b1ec29db/cp-frr-files/0.log"
Jan 29 09:14:27 crc kubenswrapper[5017]: I0129 09:14:27.895852 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cvz64_2f35677d-147b-4d27-ac32-ab82b1ec29db/cp-reloader/0.log"
Jan 29 09:14:27 crc kubenswrapper[5017]: I0129 09:14:27.903851 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cvz64_2f35677d-147b-4d27-ac32-ab82b1ec29db/cp-metrics/0.log"
Jan 29 09:14:28 crc kubenswrapper[5017]: I0129 09:14:28.029411 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cvz64_2f35677d-147b-4d27-ac32-ab82b1ec29db/cp-reloader/0.log"
Jan 29 09:14:28 crc kubenswrapper[5017]: I0129 09:14:28.235236 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cvz64_2f35677d-147b-4d27-ac32-ab82b1ec29db/cp-frr-files/0.log"
Jan 29 09:14:28 crc kubenswrapper[5017]: I0129 09:14:28.284858 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cvz64_2f35677d-147b-4d27-ac32-ab82b1ec29db/cp-metrics/0.log"
Jan 29 09:14:28 crc kubenswrapper[5017]: I0129 09:14:28.287371 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cvz64_2f35677d-147b-4d27-ac32-ab82b1ec29db/cp-metrics/0.log"
Jan 29 09:14:28 crc kubenswrapper[5017]: I0129 09:14:28.289272 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cvz64_2f35677d-147b-4d27-ac32-ab82b1ec29db/cp-reloader/0.log"
Jan 29 09:14:28 crc kubenswrapper[5017]: I0129 09:14:28.460561 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cvz64_2f35677d-147b-4d27-ac32-ab82b1ec29db/cp-metrics/0.log"
Jan 29 09:14:28 crc kubenswrapper[5017]: I0129 09:14:28.470473 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cvz64_2f35677d-147b-4d27-ac32-ab82b1ec29db/cp-reloader/0.log"
Jan 29 09:14:28 crc kubenswrapper[5017]: I0129 09:14:28.501649 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cvz64_2f35677d-147b-4d27-ac32-ab82b1ec29db/cp-frr-files/0.log"
Jan 29 09:14:28 crc kubenswrapper[5017]: I0129 09:14:28.510610 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cvz64_2f35677d-147b-4d27-ac32-ab82b1ec29db/controller/0.log"
Jan 29 09:14:28 crc kubenswrapper[5017]: I0129 09:14:28.677093 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cvz64_2f35677d-147b-4d27-ac32-ab82b1ec29db/frr-metrics/0.log"
Jan 29 09:14:28 crc kubenswrapper[5017]: I0129 09:14:28.753930 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cvz64_2f35677d-147b-4d27-ac32-ab82b1ec29db/kube-rbac-proxy/0.log"
Jan 29 09:14:28 crc kubenswrapper[5017]: I0129 09:14:28.818417 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cvz64_2f35677d-147b-4d27-ac32-ab82b1ec29db/kube-rbac-proxy-frr/0.log"
Jan 29 09:14:28 crc kubenswrapper[5017]: I0129 09:14:28.980254 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cvz64_2f35677d-147b-4d27-ac32-ab82b1ec29db/reloader/0.log"
Jan 29 09:14:29 crc kubenswrapper[5017]: I0129 09:14:29.085624 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-cpp58_4b622c7b-c02d-4238-825f-daa2fd5879ca/frr-k8s-webhook-server/0.log"
Jan 29 09:14:29 crc kubenswrapper[5017]: I0129 09:14:29.197522 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-67799b9d-txr78_a0243db3-515f-470c-93fe-a2d3e043962e/manager/0.log"
Jan 29 09:14:29 crc kubenswrapper[5017]: I0129 09:14:29.531162 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-57b844687f-2cx2w_0714ab86-4883-4185-8f73-167cc7aa1bf0/webhook-server/0.log"
Jan 29 09:14:29 crc kubenswrapper[5017]: I0129 09:14:29.585308 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-48rj7_1a51c751-7496-418f-ad06-10d8db26b0f6/kube-rbac-proxy/0.log"
Jan 29 09:14:31 crc kubenswrapper[5017]: I0129 09:14:31.331521 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-48rj7_1a51c751-7496-418f-ad06-10d8db26b0f6/speaker/0.log"
Jan 29 09:14:32 crc kubenswrapper[5017]: I0129 09:14:32.596326 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cvz64_2f35677d-147b-4d27-ac32-ab82b1ec29db/frr/0.log"
Jan 29 09:14:45 crc kubenswrapper[5017]: I0129 09:14:45.594442 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcx66d6_8ab2e317-a111-486e-aff4-2bf131383d02/util/0.log"
Jan 29 09:14:45 crc kubenswrapper[5017]: I0129 09:14:45.781654 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcx66d6_8ab2e317-a111-486e-aff4-2bf131383d02/util/0.log"
Jan 29 09:14:45 crc kubenswrapper[5017]: I0129 09:14:45.808819 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcx66d6_8ab2e317-a111-486e-aff4-2bf131383d02/pull/0.log"
Jan 29 09:14:45 crc kubenswrapper[5017]: I0129 09:14:45.858115 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcx66d6_8ab2e317-a111-486e-aff4-2bf131383d02/pull/0.log"
Jan 29 09:14:46 crc kubenswrapper[5017]: I0129 09:14:46.054510 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcx66d6_8ab2e317-a111-486e-aff4-2bf131383d02/pull/0.log"
Jan 29 09:14:46 crc kubenswrapper[5017]: I0129 09:14:46.069373 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcx66d6_8ab2e317-a111-486e-aff4-2bf131383d02/util/0.log"
Jan 29 09:14:46 crc kubenswrapper[5017]: I0129 09:14:46.095929 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcx66d6_8ab2e317-a111-486e-aff4-2bf131383d02/extract/0.log"
Jan 29 09:14:46 crc kubenswrapper[5017]: I0129 09:14:46.261462 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l47fd_bed5dfbe-e294-4d8a-b3c6-953287cb9057/util/0.log"
Jan 29 09:14:46 crc kubenswrapper[5017]: I0129 09:14:46.451452 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l47fd_bed5dfbe-e294-4d8a-b3c6-953287cb9057/pull/0.log"
Jan 29 09:14:46 crc kubenswrapper[5017]: I0129 09:14:46.474365 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l47fd_bed5dfbe-e294-4d8a-b3c6-953287cb9057/util/0.log"
Jan 29 09:14:46 crc kubenswrapper[5017]: I0129 09:14:46.532342 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l47fd_bed5dfbe-e294-4d8a-b3c6-953287cb9057/pull/0.log"
Jan 29 09:14:46 crc kubenswrapper[5017]: I0129 09:14:46.673888 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l47fd_bed5dfbe-e294-4d8a-b3c6-953287cb9057/util/0.log"
Jan 29 09:14:46 crc kubenswrapper[5017]: I0129 09:14:46.674620 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l47fd_bed5dfbe-e294-4d8a-b3c6-953287cb9057/extract/0.log"
Jan 29 09:14:46 crc kubenswrapper[5017]: I0129 09:14:46.675096 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l47fd_bed5dfbe-e294-4d8a-b3c6-953287cb9057/pull/0.log"
Jan 29 09:14:46 crc kubenswrapper[5017]: I0129 09:14:46.881973 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e59t988_ecf4fcf6-9f93-4a65-a6e0-442cc5cd14b4/util/0.log"
Jan 29 09:14:47 crc kubenswrapper[5017]: I0129 09:14:47.096825 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e59t988_ecf4fcf6-9f93-4a65-a6e0-442cc5cd14b4/util/0.log"
Jan 29 09:14:47 crc kubenswrapper[5017]: I0129 09:14:47.175563 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e59t988_ecf4fcf6-9f93-4a65-a6e0-442cc5cd14b4/pull/0.log"
Jan 29 09:14:47 crc kubenswrapper[5017]: I0129 09:14:47.213162 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e59t988_ecf4fcf6-9f93-4a65-a6e0-442cc5cd14b4/pull/0.log"
Jan 29 09:14:47 crc kubenswrapper[5017]: I0129 09:14:47.369934 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e59t988_ecf4fcf6-9f93-4a65-a6e0-442cc5cd14b4/util/0.log"
Jan 29 09:14:47 crc kubenswrapper[5017]: I0129 09:14:47.416126 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e59t988_ecf4fcf6-9f93-4a65-a6e0-442cc5cd14b4/pull/0.log"
Jan 29 09:14:47 crc kubenswrapper[5017]: I0129 09:14:47.463550 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e59t988_ecf4fcf6-9f93-4a65-a6e0-442cc5cd14b4/extract/0.log"
Jan 29 09:14:47 crc kubenswrapper[5017]: I0129 09:14:47.596929 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wwq4p_43e5f03c-e3bd-4aa9-a2eb-2d2549511c9a/util/0.log"
Jan 29 09:14:47 crc kubenswrapper[5017]: I0129 09:14:47.854865 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wwq4p_43e5f03c-e3bd-4aa9-a2eb-2d2549511c9a/pull/0.log"
Jan 29 09:14:47 crc kubenswrapper[5017]: I0129 09:14:47.878684 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wwq4p_43e5f03c-e3bd-4aa9-a2eb-2d2549511c9a/util/0.log"
Jan 29 09:14:47 crc kubenswrapper[5017]: I0129 09:14:47.905242 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wwq4p_43e5f03c-e3bd-4aa9-a2eb-2d2549511c9a/pull/0.log"
Jan 29 09:14:48 crc kubenswrapper[5017]: I0129 09:14:48.093360 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wwq4p_43e5f03c-e3bd-4aa9-a2eb-2d2549511c9a/extract/0.log"
Jan 29 09:14:48 crc kubenswrapper[5017]: I0129 09:14:48.094373 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wwq4p_43e5f03c-e3bd-4aa9-a2eb-2d2549511c9a/pull/0.log"
Jan 29 09:14:48 crc kubenswrapper[5017]: I0129 09:14:48.107287 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wwq4p_43e5f03c-e3bd-4aa9-a2eb-2d2549511c9a/util/0.log"
Jan 29 09:14:48 crc kubenswrapper[5017]: I0129 09:14:48.292095 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wm4xz_604dcc3c-6617-4c60-9cf7-c6d75ed77584/extract-utilities/0.log"
Jan 29 09:14:48 crc kubenswrapper[5017]: I0129 09:14:48.467801 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wm4xz_604dcc3c-6617-4c60-9cf7-c6d75ed77584/extract-utilities/0.log"
Jan 29 09:14:48 crc kubenswrapper[5017]: I0129 09:14:48.497284 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wm4xz_604dcc3c-6617-4c60-9cf7-c6d75ed77584/extract-content/0.log"
Jan 29 09:14:48 crc kubenswrapper[5017]: I0129 09:14:48.501210 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wm4xz_604dcc3c-6617-4c60-9cf7-c6d75ed77584/extract-content/0.log"
Jan 29 09:14:48 crc kubenswrapper[5017]: I0129 09:14:48.709305 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wm4xz_604dcc3c-6617-4c60-9cf7-c6d75ed77584/extract-utilities/0.log"
Jan 29 09:14:48 crc kubenswrapper[5017]: I0129 09:14:48.716421 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wm4xz_604dcc3c-6617-4c60-9cf7-c6d75ed77584/extract-content/0.log"
Jan 29 09:14:49 crc kubenswrapper[5017]: I0129 09:14:49.521366 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-k52x8_6edd9c7c-c85c-4d56-9d3b-cff10dd5bb6d/extract-utilities/0.log"
Jan 29 09:14:49 crc kubenswrapper[5017]: I0129 09:14:49.762642 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-k52x8_6edd9c7c-c85c-4d56-9d3b-cff10dd5bb6d/extract-content/0.log"
Jan 29 09:14:49 crc kubenswrapper[5017]: I0129 09:14:49.851179 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-k52x8_6edd9c7c-c85c-4d56-9d3b-cff10dd5bb6d/extract-utilities/0.log"
Jan 29 09:14:49 crc kubenswrapper[5017]: I0129 09:14:49.873710 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-k52x8_6edd9c7c-c85c-4d56-9d3b-cff10dd5bb6d/extract-content/0.log"
Jan 29 09:14:50 crc kubenswrapper[5017]: I0129 09:14:50.075398 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wm4xz_604dcc3c-6617-4c60-9cf7-c6d75ed77584/registry-server/0.log"
Jan 29 09:14:50 crc kubenswrapper[5017]: I0129 09:14:50.106733 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-k52x8_6edd9c7c-c85c-4d56-9d3b-cff10dd5bb6d/extract-utilities/0.log"
Jan 29 09:14:50 crc kubenswrapper[5017]: I0129 09:14:50.109939 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-k52x8_6edd9c7c-c85c-4d56-9d3b-cff10dd5bb6d/extract-content/0.log"
Jan 29 09:14:50 crc kubenswrapper[5017]: I0129 09:14:50.332865 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-6hp88_3fc9269a-d09b-426d-988d-05995e1d4014/marketplace-operator/0.log"
Jan 29 09:14:50 crc kubenswrapper[5017]: I0129 09:14:50.404433 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-9vqc2_a60ce73c-bc91-4900-8bd3-4abf463391bc/extract-utilities/0.log"
Jan 29 09:14:50 crc kubenswrapper[5017]: I0129 09:14:50.618413 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-9vqc2_a60ce73c-bc91-4900-8bd3-4abf463391bc/extract-content/0.log"
Jan 29 09:14:50 crc kubenswrapper[5017]: I0129 09:14:50.635808 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-9vqc2_a60ce73c-bc91-4900-8bd3-4abf463391bc/extract-utilities/0.log"
Jan 29 09:14:50 crc kubenswrapper[5017]: I0129 09:14:50.708240 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-9vqc2_a60ce73c-bc91-4900-8bd3-4abf463391bc/extract-content/0.log"
Jan 29 09:14:50 crc kubenswrapper[5017]: I0129 09:14:50.925113 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-9vqc2_a60ce73c-bc91-4900-8bd3-4abf463391bc/extract-utilities/0.log"
Jan 29 09:14:50 crc kubenswrapper[5017]: I0129 09:14:50.990450 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-9vqc2_a60ce73c-bc91-4900-8bd3-4abf463391bc/extract-content/0.log"
Jan 29 09:14:51 crc kubenswrapper[5017]: I0129 09:14:51.517235 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-9vqc2_a60ce73c-bc91-4900-8bd3-4abf463391bc/registry-server/0.log"
Jan 29 09:14:51 crc kubenswrapper[5017]: I0129 09:14:51.772367 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-k52x8_6edd9c7c-c85c-4d56-9d3b-cff10dd5bb6d/registry-server/0.log"
Jan 29 09:14:51 crc kubenswrapper[5017]: I0129 09:14:51.806976 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hk4hv_005e775d-7652-4282-af00-35a890d012a2/extract-utilities/0.log"
Jan 29 09:14:51 crc kubenswrapper[5017]: I0129 09:14:51.979759 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hk4hv_005e775d-7652-4282-af00-35a890d012a2/extract-utilities/0.log"
Jan 29 09:14:52 crc kubenswrapper[5017]: I0129 09:14:52.021532 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hk4hv_005e775d-7652-4282-af00-35a890d012a2/extract-content/0.log"
Jan 29 09:14:52 crc kubenswrapper[5017]: I0129 09:14:52.022866 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hk4hv_005e775d-7652-4282-af00-35a890d012a2/extract-content/0.log"
Jan 29 09:14:52 crc kubenswrapper[5017]: I0129 09:14:52.215343 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hk4hv_005e775d-7652-4282-af00-35a890d012a2/extract-utilities/0.log"
Jan 29 09:14:52 crc kubenswrapper[5017]: I0129 09:14:52.215431 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hk4hv_005e775d-7652-4282-af00-35a890d012a2/extract-content/0.log"
Jan 29 09:14:53 crc kubenswrapper[5017]: I0129 09:14:53.614935 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hk4hv_005e775d-7652-4282-af00-35a890d012a2/registry-server/0.log"
Jan 29 09:15:00 crc kubenswrapper[5017]: I0129 09:15:00.153207 5017 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494635-tns6w"]
Jan 29 09:15:00 crc kubenswrapper[5017]: E0129 09:15:00.156778 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe9d57b9-a8ba-40a7-81a2-6688050c176d" containerName="registry-server"
Jan 29 09:15:00 crc kubenswrapper[5017]: I0129 09:15:00.156916 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe9d57b9-a8ba-40a7-81a2-6688050c176d" containerName="registry-server"
Jan 29 09:15:00 crc kubenswrapper[5017]: E0129 09:15:00.157051 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe9d57b9-a8ba-40a7-81a2-6688050c176d" containerName="extract-utilities"
Jan 29 09:15:00 crc kubenswrapper[5017]: I0129 09:15:00.157148 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe9d57b9-a8ba-40a7-81a2-6688050c176d" containerName="extract-utilities"
Jan 29 09:15:00 crc kubenswrapper[5017]: E0129 09:15:00.157243 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffa42744-219d-4e46-bb48-ca14dd9c2c4f" containerName="extract-utilities"
Jan 29 09:15:00 crc kubenswrapper[5017]: I0129 09:15:00.157315 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffa42744-219d-4e46-bb48-ca14dd9c2c4f" containerName="extract-utilities"
Jan 29 09:15:00 crc kubenswrapper[5017]: E0129 09:15:00.157411 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffa42744-219d-4e46-bb48-ca14dd9c2c4f" containerName="registry-server"
Jan 29 09:15:00 crc kubenswrapper[5017]: I0129 09:15:00.157488 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffa42744-219d-4e46-bb48-ca14dd9c2c4f" containerName="registry-server"
Jan 29 09:15:00 crc kubenswrapper[5017]: E0129 09:15:00.157575 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffa42744-219d-4e46-bb48-ca14dd9c2c4f" containerName="extract-content"
Jan 29 09:15:00 crc kubenswrapper[5017]: I0129 09:15:00.157662 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffa42744-219d-4e46-bb48-ca14dd9c2c4f" containerName="extract-content"
Jan 29 09:15:00 crc kubenswrapper[5017]: E0129 09:15:00.157754 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="838717a7-053f-4b5d-aa9b-47d5351afb51" containerName="registry-server"
Jan 29 09:15:00 crc kubenswrapper[5017]: I0129 09:15:00.157834 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="838717a7-053f-4b5d-aa9b-47d5351afb51" containerName="registry-server"
Jan 29 09:15:00 crc kubenswrapper[5017]: E0129 09:15:00.157908 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="838717a7-053f-4b5d-aa9b-47d5351afb51" containerName="extract-utilities"
Jan 29 09:15:00 crc kubenswrapper[5017]: I0129 09:15:00.157998 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="838717a7-053f-4b5d-aa9b-47d5351afb51" containerName="extract-utilities"
Jan 29 09:15:00 crc kubenswrapper[5017]: E0129 09:15:00.158089 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="838717a7-053f-4b5d-aa9b-47d5351afb51" containerName="extract-content"
Jan 29 09:15:00 crc kubenswrapper[5017]: I0129 09:15:00.158174 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="838717a7-053f-4b5d-aa9b-47d5351afb51" containerName="extract-content"
Jan 29 09:15:00 crc kubenswrapper[5017]: E0129 09:15:00.158299 5017 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe9d57b9-a8ba-40a7-81a2-6688050c176d" containerName="extract-content"
Jan 29 09:15:00 crc kubenswrapper[5017]: I0129 09:15:00.158386 5017 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe9d57b9-a8ba-40a7-81a2-6688050c176d" containerName="extract-content"
Jan 29 09:15:00 crc kubenswrapper[5017]: I0129 09:15:00.158871 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="838717a7-053f-4b5d-aa9b-47d5351afb51" containerName="registry-server"
Jan 29 09:15:00 crc kubenswrapper[5017]: I0129 09:15:00.159004 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe9d57b9-a8ba-40a7-81a2-6688050c176d" containerName="registry-server"
Jan 29 09:15:00 crc kubenswrapper[5017]: I0129 09:15:00.159141 5017 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffa42744-219d-4e46-bb48-ca14dd9c2c4f" containerName="registry-server"
Jan 29 09:15:00 crc kubenswrapper[5017]: I0129 09:15:00.160389 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494635-tns6w"
Jan 29 09:15:00 crc kubenswrapper[5017]: I0129 09:15:00.162813 5017 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 29 09:15:00 crc kubenswrapper[5017]: I0129 09:15:00.163566 5017 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 29 09:15:00 crc kubenswrapper[5017]: I0129 09:15:00.165054 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494635-tns6w"]
Jan 29 09:15:00 crc kubenswrapper[5017]: I0129 09:15:00.275749 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9278abb4-0058-4da4-b4a5-619b549021cb-secret-volume\") pod \"collect-profiles-29494635-tns6w\" (UID: \"9278abb4-0058-4da4-b4a5-619b549021cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494635-tns6w"
Jan 29 09:15:00 crc kubenswrapper[5017]: I0129 09:15:00.275862 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-md7tq\" (UniqueName: \"kubernetes.io/projected/9278abb4-0058-4da4-b4a5-619b549021cb-kube-api-access-md7tq\") pod \"collect-profiles-29494635-tns6w\" (UID: \"9278abb4-0058-4da4-b4a5-619b549021cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494635-tns6w"
Jan 29 09:15:00 crc kubenswrapper[5017]: I0129 09:15:00.275902 5017 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9278abb4-0058-4da4-b4a5-619b549021cb-config-volume\") pod \"collect-profiles-29494635-tns6w\" (UID: \"9278abb4-0058-4da4-b4a5-619b549021cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494635-tns6w"
Jan 29 09:15:00 crc kubenswrapper[5017]: I0129 09:15:00.378251 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9278abb4-0058-4da4-b4a5-619b549021cb-config-volume\") pod \"collect-profiles-29494635-tns6w\" (UID: \"9278abb4-0058-4da4-b4a5-619b549021cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494635-tns6w"
Jan 29 09:15:00 crc kubenswrapper[5017]: I0129 09:15:00.378697 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9278abb4-0058-4da4-b4a5-619b549021cb-secret-volume\") pod \"collect-profiles-29494635-tns6w\" (UID: \"9278abb4-0058-4da4-b4a5-619b549021cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494635-tns6w"
Jan 29 09:15:00 crc kubenswrapper[5017]: I0129 09:15:00.379455 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9278abb4-0058-4da4-b4a5-619b549021cb-config-volume\") pod \"collect-profiles-29494635-tns6w\" (UID: \"9278abb4-0058-4da4-b4a5-619b549021cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494635-tns6w"
Jan 29 09:15:00 crc kubenswrapper[5017]: I0129 09:15:00.382841 5017 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-md7tq\" (UniqueName: \"kubernetes.io/projected/9278abb4-0058-4da4-b4a5-619b549021cb-kube-api-access-md7tq\") pod \"collect-profiles-29494635-tns6w\" (UID: \"9278abb4-0058-4da4-b4a5-619b549021cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494635-tns6w"
Jan 29 09:15:00 crc kubenswrapper[5017]: I0129 09:15:00.386800 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9278abb4-0058-4da4-b4a5-619b549021cb-secret-volume\") pod \"collect-profiles-29494635-tns6w\" (UID: \"9278abb4-0058-4da4-b4a5-619b549021cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494635-tns6w"
Jan 29 09:15:00 crc kubenswrapper[5017]: I0129 09:15:00.403628 5017 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-md7tq\" (UniqueName: \"kubernetes.io/projected/9278abb4-0058-4da4-b4a5-619b549021cb-kube-api-access-md7tq\") pod \"collect-profiles-29494635-tns6w\" (UID: \"9278abb4-0058-4da4-b4a5-619b549021cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494635-tns6w"
Jan 29 09:15:00 crc kubenswrapper[5017]: I0129 09:15:00.497160 5017 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494635-tns6w"
Jan 29 09:15:01 crc kubenswrapper[5017]: I0129 09:15:01.034482 5017 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494635-tns6w"]
Jan 29 09:15:01 crc kubenswrapper[5017]: I0129 09:15:01.376740 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494635-tns6w" event={"ID":"9278abb4-0058-4da4-b4a5-619b549021cb","Type":"ContainerStarted","Data":"3348f6dc5cef1b1599f333ba9ff38d5ad7989275aa7e1c8386b8844ef8005b7c"}
Jan 29 09:15:01 crc kubenswrapper[5017]: I0129 09:15:01.377342 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494635-tns6w" event={"ID":"9278abb4-0058-4da4-b4a5-619b549021cb","Type":"ContainerStarted","Data":"64381458b4542659a51566eb9237609012e58d62053f4059dc3dadfe40ffcbd1"}
Jan 29 09:15:01 crc kubenswrapper[5017]: I0129 09:15:01.407978 5017 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29494635-tns6w" podStartSLOduration=1.40793752 podStartE2EDuration="1.40793752s" podCreationTimestamp="2026-01-29 09:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:15:01.39980902 +0000 UTC m=+9587.774256650" watchObservedRunningTime="2026-01-29 09:15:01.40793752 +0000 UTC m=+9587.782385140"
Jan 29 09:15:02 crc kubenswrapper[5017]: I0129 09:15:02.389891 5017 generic.go:334] "Generic (PLEG): container finished" podID="9278abb4-0058-4da4-b4a5-619b549021cb" containerID="3348f6dc5cef1b1599f333ba9ff38d5ad7989275aa7e1c8386b8844ef8005b7c" exitCode=0
Jan 29 09:15:02 crc kubenswrapper[5017]: I0129 09:15:02.390023 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494635-tns6w" event={"ID":"9278abb4-0058-4da4-b4a5-619b549021cb","Type":"ContainerDied","Data":"3348f6dc5cef1b1599f333ba9ff38d5ad7989275aa7e1c8386b8844ef8005b7c"}
Jan 29 09:15:03 crc kubenswrapper[5017]: I0129 09:15:03.827298 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494635-tns6w"
Jan 29 09:15:03 crc kubenswrapper[5017]: I0129 09:15:03.981695 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9278abb4-0058-4da4-b4a5-619b549021cb-secret-volume\") pod \"9278abb4-0058-4da4-b4a5-619b549021cb\" (UID: \"9278abb4-0058-4da4-b4a5-619b549021cb\") "
Jan 29 09:15:03 crc kubenswrapper[5017]: I0129 09:15:03.981774 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9278abb4-0058-4da4-b4a5-619b549021cb-config-volume\") pod \"9278abb4-0058-4da4-b4a5-619b549021cb\" (UID: \"9278abb4-0058-4da4-b4a5-619b549021cb\") "
Jan 29 09:15:03 crc kubenswrapper[5017]: I0129 09:15:03.981978 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-md7tq\" (UniqueName: \"kubernetes.io/projected/9278abb4-0058-4da4-b4a5-619b549021cb-kube-api-access-md7tq\") pod \"9278abb4-0058-4da4-b4a5-619b549021cb\" (UID: \"9278abb4-0058-4da4-b4a5-619b549021cb\") "
Jan 29 09:15:03 crc kubenswrapper[5017]: I0129 09:15:03.983293 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9278abb4-0058-4da4-b4a5-619b549021cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "9278abb4-0058-4da4-b4a5-619b549021cb" (UID: "9278abb4-0058-4da4-b4a5-619b549021cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 09:15:04 crc kubenswrapper[5017]: I0129 09:15:04.085154 5017 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9278abb4-0058-4da4-b4a5-619b549021cb-config-volume\") on node \"crc\" DevicePath \"\""
Jan 29 09:15:04 crc kubenswrapper[5017]: I0129 09:15:04.445021 5017 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494635-tns6w"
Jan 29 09:15:04 crc kubenswrapper[5017]: I0129 09:15:04.444981 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494635-tns6w" event={"ID":"9278abb4-0058-4da4-b4a5-619b549021cb","Type":"ContainerDied","Data":"64381458b4542659a51566eb9237609012e58d62053f4059dc3dadfe40ffcbd1"}
Jan 29 09:15:04 crc kubenswrapper[5017]: I0129 09:15:04.446019 5017 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64381458b4542659a51566eb9237609012e58d62053f4059dc3dadfe40ffcbd1"
Jan 29 09:15:04 crc kubenswrapper[5017]: I0129 09:15:04.513194 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494590-h5srk"]
Jan 29 09:15:04 crc kubenswrapper[5017]: I0129 09:15:04.536786 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494590-h5srk"]
Jan 29 09:15:04 crc kubenswrapper[5017]: I0129 09:15:04.989528 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9278abb4-0058-4da4-b4a5-619b549021cb-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "9278abb4-0058-4da4-b4a5-619b549021cb" (UID: "9278abb4-0058-4da4-b4a5-619b549021cb"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 09:15:04 crc kubenswrapper[5017]: I0129 09:15:04.996601 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9278abb4-0058-4da4-b4a5-619b549021cb-kube-api-access-md7tq" (OuterVolumeSpecName: "kube-api-access-md7tq") pod "9278abb4-0058-4da4-b4a5-619b549021cb" (UID: "9278abb4-0058-4da4-b4a5-619b549021cb"). InnerVolumeSpecName "kube-api-access-md7tq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 09:15:05 crc kubenswrapper[5017]: I0129 09:15:05.016195 5017 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9278abb4-0058-4da4-b4a5-619b549021cb-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 29 09:15:05 crc kubenswrapper[5017]: I0129 09:15:05.016238 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-md7tq\" (UniqueName: \"kubernetes.io/projected/9278abb4-0058-4da4-b4a5-619b549021cb-kube-api-access-md7tq\") on node \"crc\" DevicePath \"\""
Jan 29 09:15:06 crc kubenswrapper[5017]: I0129 09:15:06.328883 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="181d8390-97e9-4232-ac5c-03b3e8b2a764" path="/var/lib/kubelet/pods/181d8390-97e9-4232-ac5c-03b3e8b2a764/volumes"
Jan 29 09:15:06 crc kubenswrapper[5017]: I0129 09:15:06.835569 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-84bts_dced126a-1d49-4fe1-a610-32145372c814/prometheus-operator/0.log"
Jan 29 09:15:06 crc kubenswrapper[5017]: I0129 09:15:06.947813 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7f7867b468-8qncr_b5cd374b-6395-40e3-80fb-2ce7f3f9c001/prometheus-operator-admission-webhook/0.log"
Jan 29 09:15:06 crc kubenswrapper[5017]: I0129 09:15:06.975110 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7f7867b468-jcftm_3dc25106-c3d9-46c1-9d93-3407ca7dedbd/prometheus-operator-admission-webhook/0.log"
Jan 29 09:15:07 crc kubenswrapper[5017]: I0129 09:15:07.152787 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-drf5r_c4ed50de-cf5f-4bc5-9e0c-8d696c49fe16/operator/0.log"
Jan 29 09:15:07 crc kubenswrapper[5017]: I0129 09:15:07.295717 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-rvsnq_cecf03f2-56cf-41cd-a5e5-0a99d4c0784f/perses-operator/0.log"
Jan 29 09:15:26 crc kubenswrapper[5017]: I0129 09:15:26.539154 5017 patch_prober.go:28] interesting pod/machine-config-daemon-895pl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 29 09:15:26 crc kubenswrapper[5017]: I0129 09:15:26.540140 5017 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 29 09:15:38 crc kubenswrapper[5017]: I0129 09:15:38.778543 5017 scope.go:117] "RemoveContainer" containerID="786814bc25be5c905f20935240824afd2409c1479e76ac5403a6dd1f0b789a61"
Jan 29
09:15:56 crc kubenswrapper[5017]: I0129 09:15:56.539548 5017 patch_prober.go:28] interesting pod/machine-config-daemon-895pl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 09:15:56 crc kubenswrapper[5017]: I0129 09:15:56.540418 5017 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 09:16:26 crc kubenswrapper[5017]: I0129 09:16:26.539127 5017 patch_prober.go:28] interesting pod/machine-config-daemon-895pl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 09:16:26 crc kubenswrapper[5017]: I0129 09:16:26.540108 5017 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 09:16:26 crc kubenswrapper[5017]: I0129 09:16:26.540180 5017 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-895pl" Jan 29 09:16:26 crc kubenswrapper[5017]: I0129 09:16:26.541394 5017 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4618f8398420d06edec9c6f42cc7aa8b48fdc4be826626fe26241d4ca4840055"} pod="openshift-machine-config-operator/machine-config-daemon-895pl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 09:16:26 crc kubenswrapper[5017]: I0129 09:16:26.541450 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" containerID="cri-o://4618f8398420d06edec9c6f42cc7aa8b48fdc4be826626fe26241d4ca4840055" gracePeriod=600 Jan 29 09:16:27 crc kubenswrapper[5017]: I0129 09:16:27.269678 5017 generic.go:334] "Generic (PLEG): container finished" podID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerID="4618f8398420d06edec9c6f42cc7aa8b48fdc4be826626fe26241d4ca4840055" exitCode=0 Jan 29 09:16:27 crc kubenswrapper[5017]: I0129 09:16:27.269715 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-895pl" event={"ID":"2672ef63-7861-4c3d-a1b4-03cc9d18f8e2","Type":"ContainerDied","Data":"4618f8398420d06edec9c6f42cc7aa8b48fdc4be826626fe26241d4ca4840055"} Jan 29 09:16:27 crc kubenswrapper[5017]: I0129 09:16:27.270206 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-895pl" event={"ID":"2672ef63-7861-4c3d-a1b4-03cc9d18f8e2","Type":"ContainerStarted","Data":"af41d7fbda40f55e0c0a9284f7e81587c9db2a6183447bfd309fa7bba0c0eb85"} Jan 29 09:16:27 crc kubenswrapper[5017]: I0129 09:16:27.270227 5017 scope.go:117] "RemoveContainer" 
containerID="33375350f3d5c8c1202f1a8b9b184cb61601f45da7fb1771b787e221d6862d49" Jan 29 09:17:42 crc kubenswrapper[5017]: I0129 09:17:42.168749 5017 generic.go:334] "Generic (PLEG): container finished" podID="142105c7-f2f9-40d5-96ee-7b813dc6ec31" containerID="922e1ee4f13256414622f68534c86c35c22849516f165690b47fc12e0c7ecfb7" exitCode=0 Jan 29 09:17:42 crc kubenswrapper[5017]: I0129 09:17:42.168924 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wvpt7/must-gather-h46lk" event={"ID":"142105c7-f2f9-40d5-96ee-7b813dc6ec31","Type":"ContainerDied","Data":"922e1ee4f13256414622f68534c86c35c22849516f165690b47fc12e0c7ecfb7"} Jan 29 09:17:42 crc kubenswrapper[5017]: I0129 09:17:42.170790 5017 scope.go:117] "RemoveContainer" containerID="922e1ee4f13256414622f68534c86c35c22849516f165690b47fc12e0c7ecfb7" Jan 29 09:17:42 crc kubenswrapper[5017]: I0129 09:17:42.871757 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-wvpt7_must-gather-h46lk_142105c7-f2f9-40d5-96ee-7b813dc6ec31/gather/0.log" Jan 29 09:17:51 crc kubenswrapper[5017]: I0129 09:17:51.890612 5017 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-wvpt7/must-gather-h46lk"] Jan 29 09:17:51 crc kubenswrapper[5017]: I0129 09:17:51.891841 5017 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-wvpt7/must-gather-h46lk" podUID="142105c7-f2f9-40d5-96ee-7b813dc6ec31" containerName="copy" containerID="cri-o://1767e2d4d326c4170880d82bce18aef3143bcc0717a176f3d4611bdeaa51bb2e" gracePeriod=2 Jan 29 09:17:51 crc kubenswrapper[5017]: I0129 09:17:51.923395 5017 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-wvpt7/must-gather-h46lk"] Jan 29 09:17:52 crc kubenswrapper[5017]: I0129 09:17:52.281442 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-wvpt7_must-gather-h46lk_142105c7-f2f9-40d5-96ee-7b813dc6ec31/copy/0.log" Jan 29 09:17:52 crc kubenswrapper[5017]: I0129 09:17:52.282310 5017 generic.go:334] "Generic (PLEG): container finished" podID="142105c7-f2f9-40d5-96ee-7b813dc6ec31" containerID="1767e2d4d326c4170880d82bce18aef3143bcc0717a176f3d4611bdeaa51bb2e" exitCode=143 Jan 29 09:17:52 crc kubenswrapper[5017]: I0129 09:17:52.532974 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-wvpt7_must-gather-h46lk_142105c7-f2f9-40d5-96ee-7b813dc6ec31/copy/0.log" Jan 29 09:17:52 crc kubenswrapper[5017]: I0129 09:17:52.534786 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-wvpt7/must-gather-h46lk" Jan 29 09:17:52 crc kubenswrapper[5017]: I0129 09:17:52.648728 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tj9x8\" (UniqueName: \"kubernetes.io/projected/142105c7-f2f9-40d5-96ee-7b813dc6ec31-kube-api-access-tj9x8\") pod \"142105c7-f2f9-40d5-96ee-7b813dc6ec31\" (UID: \"142105c7-f2f9-40d5-96ee-7b813dc6ec31\") " Jan 29 09:17:52 crc kubenswrapper[5017]: I0129 09:17:52.648915 5017 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/142105c7-f2f9-40d5-96ee-7b813dc6ec31-must-gather-output\") pod \"142105c7-f2f9-40d5-96ee-7b813dc6ec31\" (UID: \"142105c7-f2f9-40d5-96ee-7b813dc6ec31\") " Jan 29 09:17:52 crc kubenswrapper[5017]: I0129 09:17:52.656419 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/142105c7-f2f9-40d5-96ee-7b813dc6ec31-kube-api-access-tj9x8" (OuterVolumeSpecName: "kube-api-access-tj9x8") pod "142105c7-f2f9-40d5-96ee-7b813dc6ec31" (UID: "142105c7-f2f9-40d5-96ee-7b813dc6ec31"). InnerVolumeSpecName "kube-api-access-tj9x8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:17:52 crc kubenswrapper[5017]: I0129 09:17:52.751585 5017 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tj9x8\" (UniqueName: \"kubernetes.io/projected/142105c7-f2f9-40d5-96ee-7b813dc6ec31-kube-api-access-tj9x8\") on node \"crc\" DevicePath \"\"" Jan 29 09:17:52 crc kubenswrapper[5017]: I0129 09:17:52.869079 5017 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/142105c7-f2f9-40d5-96ee-7b813dc6ec31-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "142105c7-f2f9-40d5-96ee-7b813dc6ec31" (UID: "142105c7-f2f9-40d5-96ee-7b813dc6ec31"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:17:52 crc kubenswrapper[5017]: I0129 09:17:52.957141 5017 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/142105c7-f2f9-40d5-96ee-7b813dc6ec31-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 29 09:17:53 crc kubenswrapper[5017]: I0129 09:17:53.315799 5017 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-wvpt7_must-gather-h46lk_142105c7-f2f9-40d5-96ee-7b813dc6ec31/copy/0.log" Jan 29 09:17:53 crc kubenswrapper[5017]: I0129 09:17:53.316746 5017 scope.go:117] "RemoveContainer" containerID="1767e2d4d326c4170880d82bce18aef3143bcc0717a176f3d4611bdeaa51bb2e" Jan 29 09:17:53 crc kubenswrapper[5017]: I0129 09:17:53.316831 5017 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-wvpt7/must-gather-h46lk" Jan 29 09:17:53 crc kubenswrapper[5017]: I0129 09:17:53.347550 5017 scope.go:117] "RemoveContainer" containerID="922e1ee4f13256414622f68534c86c35c22849516f165690b47fc12e0c7ecfb7" Jan 29 09:17:54 crc kubenswrapper[5017]: I0129 09:17:54.327478 5017 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="142105c7-f2f9-40d5-96ee-7b813dc6ec31" path="/var/lib/kubelet/pods/142105c7-f2f9-40d5-96ee-7b813dc6ec31/volumes" Jan 29 09:18:26 crc kubenswrapper[5017]: I0129 09:18:26.539331 5017 patch_prober.go:28] interesting pod/machine-config-daemon-895pl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 09:18:26 crc kubenswrapper[5017]: I0129 09:18:26.540337 5017 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 09:18:56 crc kubenswrapper[5017]: I0129 09:18:56.539405 5017 patch_prober.go:28] interesting pod/machine-config-daemon-895pl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 09:18:56 crc kubenswrapper[5017]: I0129 09:18:56.540397 5017 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 09:19:26 crc kubenswrapper[5017]: I0129 09:19:26.539542 5017 patch_prober.go:28] interesting pod/machine-config-daemon-895pl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 09:19:26 crc kubenswrapper[5017]: I0129 09:19:26.540483 5017 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 09:19:26 crc kubenswrapper[5017]: I0129 09:19:26.540552 5017 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-895pl" Jan 29 09:19:26 crc kubenswrapper[5017]: I0129 09:19:26.541633 5017 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"af41d7fbda40f55e0c0a9284f7e81587c9db2a6183447bfd309fa7bba0c0eb85"} pod="openshift-machine-config-operator/machine-config-daemon-895pl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 09:19:26 crc kubenswrapper[5017]: I0129 09:19:26.541690 5017 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerName="machine-config-daemon" containerID="cri-o://af41d7fbda40f55e0c0a9284f7e81587c9db2a6183447bfd309fa7bba0c0eb85" gracePeriod=600 Jan 29 09:19:26 crc kubenswrapper[5017]: E0129 09:19:26.669540 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 09:19:27 crc kubenswrapper[5017]: I0129 09:19:27.338430 5017 generic.go:334] "Generic (PLEG): container finished" podID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" containerID="af41d7fbda40f55e0c0a9284f7e81587c9db2a6183447bfd309fa7bba0c0eb85" exitCode=0 Jan 29 09:19:27 crc kubenswrapper[5017]: I0129 09:19:27.339384 5017 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-895pl" event={"ID":"2672ef63-7861-4c3d-a1b4-03cc9d18f8e2","Type":"ContainerDied","Data":"af41d7fbda40f55e0c0a9284f7e81587c9db2a6183447bfd309fa7bba0c0eb85"} Jan 29 09:19:27 crc kubenswrapper[5017]: I0129 09:19:27.339504 5017 scope.go:117] "RemoveContainer" containerID="4618f8398420d06edec9c6f42cc7aa8b48fdc4be826626fe26241d4ca4840055" Jan 29 09:19:27 crc kubenswrapper[5017]: I0129 09:19:27.340533 5017 scope.go:117] "RemoveContainer" containerID="af41d7fbda40f55e0c0a9284f7e81587c9db2a6183447bfd309fa7bba0c0eb85" Jan 29 09:19:27 crc kubenswrapper[5017]: E0129 09:19:27.340893 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 09:19:42 crc kubenswrapper[5017]: I0129 09:19:42.317764 5017 scope.go:117] "RemoveContainer" containerID="af41d7fbda40f55e0c0a9284f7e81587c9db2a6183447bfd309fa7bba0c0eb85" Jan 29 09:19:42 crc kubenswrapper[5017]: E0129 09:19:42.319002 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 09:19:54 crc kubenswrapper[5017]: I0129 09:19:54.326497 5017 scope.go:117] "RemoveContainer" containerID="af41d7fbda40f55e0c0a9284f7e81587c9db2a6183447bfd309fa7bba0c0eb85" Jan 29 09:19:54 crc kubenswrapper[5017]: E0129 09:19:54.328027 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 09:20:06 crc 
kubenswrapper[5017]: I0129 09:20:06.317351 5017 scope.go:117] "RemoveContainer" containerID="af41d7fbda40f55e0c0a9284f7e81587c9db2a6183447bfd309fa7bba0c0eb85" Jan 29 09:20:06 crc kubenswrapper[5017]: E0129 09:20:06.318673 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 09:20:21 crc kubenswrapper[5017]: I0129 09:20:21.316832 5017 scope.go:117] "RemoveContainer" containerID="af41d7fbda40f55e0c0a9284f7e81587c9db2a6183447bfd309fa7bba0c0eb85" Jan 29 09:20:21 crc kubenswrapper[5017]: E0129 09:20:21.318141 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 09:20:34 crc kubenswrapper[5017]: I0129 09:20:34.330341 5017 scope.go:117] "RemoveContainer" containerID="af41d7fbda40f55e0c0a9284f7e81587c9db2a6183447bfd309fa7bba0c0eb85" Jan 29 09:20:34 crc kubenswrapper[5017]: E0129 09:20:34.331483 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 09:20:45 crc kubenswrapper[5017]: I0129 09:20:45.316615 5017 scope.go:117] "RemoveContainer" containerID="af41d7fbda40f55e0c0a9284f7e81587c9db2a6183447bfd309fa7bba0c0eb85" Jan 29 09:20:45 crc kubenswrapper[5017]: E0129 09:20:45.318036 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 09:20:56 crc kubenswrapper[5017]: I0129 09:20:56.317004 5017 scope.go:117] "RemoveContainer" containerID="af41d7fbda40f55e0c0a9284f7e81587c9db2a6183447bfd309fa7bba0c0eb85" Jan 29 09:20:56 crc kubenswrapper[5017]: E0129 09:20:56.318132 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 09:21:11 crc kubenswrapper[5017]: I0129 09:21:11.316719 5017 scope.go:117] "RemoveContainer" containerID="af41d7fbda40f55e0c0a9284f7e81587c9db2a6183447bfd309fa7bba0c0eb85" Jan 29 09:21:11 crc 
kubenswrapper[5017]: E0129 09:21:11.317697 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 09:21:24 crc kubenswrapper[5017]: I0129 09:21:24.328512 5017 scope.go:117] "RemoveContainer" containerID="af41d7fbda40f55e0c0a9284f7e81587c9db2a6183447bfd309fa7bba0c0eb85" Jan 29 09:21:24 crc kubenswrapper[5017]: E0129 09:21:24.329862 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 09:21:36 crc kubenswrapper[5017]: I0129 09:21:36.317081 5017 scope.go:117] "RemoveContainer" containerID="af41d7fbda40f55e0c0a9284f7e81587c9db2a6183447bfd309fa7bba0c0eb85" Jan 29 09:21:36 crc kubenswrapper[5017]: E0129 09:21:36.318231 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2" Jan 29 09:21:50 crc kubenswrapper[5017]: I0129 09:21:50.317339 5017 scope.go:117] "RemoveContainer" containerID="af41d7fbda40f55e0c0a9284f7e81587c9db2a6183447bfd309fa7bba0c0eb85" Jan 29 09:21:50 crc kubenswrapper[5017]: E0129 09:21:50.323284 5017 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-895pl_openshift-machine-config-operator(2672ef63-7861-4c3d-a1b4-03cc9d18f8e2)\"" pod="openshift-machine-config-operator/machine-config-daemon-895pl" podUID="2672ef63-7861-4c3d-a1b4-03cc9d18f8e2"